Designing the Conceptual Landscape for a XAIR Validation Infrastructure

The rapid advancement of artificial intelligence (AI) technologies necessitates robust validation infrastructures and data documentation schemas to ensure that AI systems are transparent, reliable, and trustworthy. The DCLXVI 2024 International Workshop will survey the conceptual landscape of explainable-AI-ready (XAIR) models and data, exploring the definition of core concepts relevant to data integrity, algorithmic transparency, and user interpretability.

Explainable AI (XAI) addresses the challenge of transparency by making the inner workings of AI systems understandable. Ontologies permit documenting such an understanding, so that AI systems can supply explanations that are accurate, relevant, and understandable, both by machines and by humans. Here, semantic technology becomes necessary. One use case among many is retrieval-augmented generation applied to large language models, where semantic technology can help ensure that information is processed correctly in context. Such combinations of learning by induction and by deduction will become essential to XAI systems in applications requiring precise, context-aware information, such as safety-critical industrial environments.

However, any successful metadata standardization effort for explainable-AI-readiness presupposes an exploration and critical discussion of the core concepts for documenting models and data such that they become XAIR. At DCLXVI 2024, we will explore the potential of metadata standardization, as well as conceptual analysis and engineering, for AI validation at the state of the art of applied ontology and epistemology. Building XAIR digital infrastructures also involves ensuring data privacy, addressing ethical concerns, and adhering to regulatory requirements.
Standardization is highly relevant to international regulatory efforts, e.g., the introduction of digital product passports. DCLXVI 2024 will investigate the conceptual prerequisites for developing and documenting AI systems that are not only technically robust but also transparent, interpretable, and trustworthy.

AI4Work is funded by the EC's Horizon Europe research and innovation programme under GA no. 101135990.