Explainability for machine learning (ML) models is crucial for understanding why a model makes certain decisions. Without explanations, such models are hard to trust and their usability is limited. Moreover, many traditional ML models are black boxes whose predictions are hardly comprehensible to humans. White-box models, in contrast, make their predictions in a transparent way. Such white-box models are particularly promising in combination with knowledge graphs (KGs) and description logics (DLs), which represent knowledge in a human-readable form. A description logic can define the semantics of the entities and relationships in a knowledge graph, enhancing its expressiveness and enabling more sophisticated reasoning. For example, consider the triples (Calculus, hasPrerequisite, Algebra), (Calculus, hasPrerequisite, Trigonometry), and (CS, hasPrerequisite, Calculus). The DL axioms MathCourse SubClassOf hasPrerequisite some (Algebra or Trigonometry) and ScienceCourse SubClassOf hasPrerequisite some Calculus then describe the relationship between the course categories "MathCourse" and "ScienceCourse" and their prerequisites.
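As a concrete illustration (a minimal sketch, not part of the seminar material), the Python snippet below encodes the example triples and the two DL axioms in OWL using the Owlready2 library. The ontology IRI http://example.org/courses.owl and all class, property, and individual names are illustrative assumptions; like the text above, the sketch simplifies by using Calculus, Algebra, and Trigonometry both as classes (in the axioms) and, via individuals, in the triples.

    # Minimal sketch of the course example with Owlready2 (pip install owlready2).
    # All names and the ontology IRI are illustrative.
    from owlready2 import Thing, ObjectProperty, get_ontology

    onto = get_ontology("http://example.org/courses.owl")

    with onto:
        class Course(Thing):
            pass

        class hasPrerequisite(ObjectProperty):  # the relation from the triples
            domain = [Course]
            range = [Course]

        class Algebra(Course): pass
        class Trigonometry(Course): pass
        class Calculus(Course): pass
        class MathCourse(Course): pass
        class ScienceCourse(Course): pass

        # DL axiom: MathCourse SubClassOf hasPrerequisite some (Algebra or Trigonometry)
        MathCourse.is_a.append(hasPrerequisite.some(Algebra | Trigonometry))

        # DL axiom: ScienceCourse SubClassOf hasPrerequisite some Calculus
        ScienceCourse.is_a.append(hasPrerequisite.some(Calculus))

        # The triples become assertions about (illustrative) individuals:
        calculus = Calculus("calculus")
        algebra = Algebra("algebra")
        trigonometry = Trigonometry("trigonometry")
        cs = ScienceCourse("cs")

        calculus.hasPrerequisite = [algebra, trigonometry]  # (Calculus, hasPrerequisite, Algebra), (Calculus, hasPrerequisite, Trigonometry)
        cs.hasPrerequisite = [calculus]                     # (CS, hasPrerequisite, Calculus)

    onto.save(file="courses.owl")

An OWL reasoner such as HermiT (invoked in Owlready2 via sync_reasoner()) can then check the consistency of such axioms and infer further class memberships.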
To better understand the explainability of ML models, this seminar will explore existing approaches in the context of machine learning models applied to knowledge graphs and description logics. In particular, we will focus on (1) class expression learning, (2) argumentation and dialogue-based approaches, and (3) abduction.
Kakas, A. C., & Michael, L. (2020). Abduction and Argumentation for Explainable Machine Learning: A Position Survey. CoRR abs/2010.12896.
Kouagou, N. J., Heindorf, S., Demir, C., & Ngonga Ngomo, A.-C. (2023). Neural Class Expression Synthesis in ALCHIQ(D). ECML PKDD 2023.
The seminar will be available in PAUL (TBA).