Designing the Conceptual Landscape for an XAIR Validation Infrastructure
Manuscript categories and scope of the workshop
This workshop is closely related to the work plan of the Knowledge Graph Alliance's working group on explainable-AI-ready data and metadata principles (XAIR principles). Specifically, its purpose is to gather community input for the working group deliverable Synopsis of XAIR Core Concepts, which is intended to "identify the core concepts, analyse and summarize the literature characterizing these concepts."
The scope of the workshop is also described in the DCLXVI 2024 vision statement. Based on this, we welcome the following types of manuscripts:
- A discussion of a single core concept for explainable-AI-readiness, including a critical analysis of multiple definitions from the literature (a comprehensive review of the entire literature is not required).
- A survey of the landscape of two or three core concepts (not more), including an analysis of how, or whether, different definitions of these concepts can be combined with each other.
- Applied ontology techniques, methodology, software, or digital artefacts that can be used for conceptual landscape discovery and visualization, or for conceptual landscape design, including a demonstration of how these can be applied to core concepts for explainable-AI-readiness.
- Papers on going beyond FAIR, including a discussion of requirements that the FAIR principles address insufficiently, such that the principles need to be supplemented, updated, or revised.
What are these "core concepts"?
The core concepts for explainable-AI-readiness (i.e., the concepts to be discussed at the workshop) include, but are not limited to, the following:
- Explainability and explanation
- Reproducibility, reliability, and reliance
- Opacity and transparency, interpretability and interpretation
- DIKW: Data, information, knowledge, and wisdom
- Responsibility, trust, trustworthiness, and reasons/motivations for trusting
- Model design, parameterization, and optimization
- Holistic validation and unit testing (of models and simulation codes)
- Theoretical virtues (of models)
- Epistemic agents, vices, and virtues
- The four elements of "FAIR," possibly requiring a revision or update
- Simulation; applying and evaluating models
- Context awareness, subject matter, and logical subtraction
Manuscript format
Prepare your manuscript according to the specifications for publication in the Springer series Lecture Notes in Networks and Systems (LNNS). Please use LaTeX (not MS Word).
Manuscripts must be at least eight pages long, excluding the bibliography and any appendices, and at most 20 pages, including the bibliography and any appendices.
AI4Work is funded from the EC's Horizon Europe research and innovation programme under GA no. 101135990.
BatCAT is funded from the EC's Horizon Europe research and innovation programme under GA no. 101137725.