
RAKI - Explaining ML actions with Knowledge Graphs and OWL ontologies

4 years ago by Dr. Diego Moussallem

With the ever-growing adoption of Artificial Intelligence (AI) models comes an increasing demand for making their actions and outputs understandable to non-Machine Learning (ML) experts. An explainable AI system requires the capability to automatically generate coherent natural-language explanations from non-linguistic data. In our project, RAKI, the non-linguistic data is represented as Knowledge Graphs (KGs) and Web Ontology Language (OWL) ontologies. The process of automatically generating texts is called Natural Language Generation (NLG) (Reiter, 2000). NLG approaches include (i) rule-based approaches, (ii) modular statistical approaches, which divide the process into three phases (planning, selection, and surface realization) and use data-driven methods for one or more of these phases, (iii) hybrid approaches that combine handcrafted rules with corpus statistics, and (iv) the more recent neural network-based models.
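To give a flavour of what the rule-based end of this spectrum looks like, here is a minimal sketch of a template-based verbalizer for a single KG triple. It is not RAKI or LD2NL code; the example namespace, data, and template table are hypothetical, and a real system would, for instance, look up rdfs:label instead of splitting URIs.

```python
from rdflib import Graph, Namespace, URIRef

# Hypothetical example namespace and data, for illustration only.
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Albert_Einstein, EX.worksAt, EX.ETH_Zurich))

def label(term: URIRef) -> str:
    """Derive a readable label from the local part of a URI
    (a real system would look up rdfs:label instead)."""
    local = term.split("/")[-1]
    return local.replace("_", " ")

def verbalize(subject, predicate, obj) -> str:
    """Template-based surface realization of a single triple."""
    templates = {
        EX.worksAt: "{s} works at {o}.",
    }
    template = templates.get(predicate, "{s} {p} {o}.")
    return template.format(s=label(subject), p=label(predicate), o=label(obj))

for s, p, o in g:
    print(verbalize(s, p, o))  # -> "Albert Einstein works at ETH Zurich."
```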

Recently, there has been increased interest in the development of NLG systems that focus on verbalizing triples present in KGs (Gardent et al., 2017). KGs store factual knowledge as structured data with relationships between resources. However, KGs are usually represented using Semantic Web standards such as RDF, SPARQL, and OWL, and only a few works have investigated the use of OWL to support the explainability of ML models. Although OWL class expressions are rather difficult for non-expert users to understand, OWL can still be used to represent the data and thereby support decisions taken by ML models. For example, the OWL class expression "Class: Professor SubClassOf: worksAt SOME University" is obvious to every Semantic Web expert, but this expression (meaning "Every professor works at a university") is rather difficult to fathom for laypersons.
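The following sketch illustrates how one such OWL pattern, an existential restriction in Manchester syntax, can be mapped to a sentence a layperson can read. It covers only this single construct with hypothetical helper names; full verbalizers such as LD2NL handle many more OWL constructs and far richer linguistic realization.

```python
# Simplified verbalizer for one OWL pattern:
#   "Class: C SubClassOf: p SOME D"  ->  "Every c p a d."
# Illustrative only; not the approach's actual implementation.

def humanize(term: str) -> str:
    """Split a camelCase identifier into lowercase words, e.g. 'worksAt' -> 'works at'."""
    words, current = [], ""
    for ch in term:
        if ch.isupper() and current:
            words.append(current)
            current = ch.lower()
        else:
            current += ch.lower()
    words.append(current)
    return " ".join(words)

def verbalize_some_restriction(sub_class: str, prop: str, filler: str) -> str:
    """Verbalize the axiom 'sub_class SubClassOf: prop SOME filler'."""
    return f"Every {humanize(sub_class)} {humanize(prop)} a {humanize(filler)}."

print(verbalize_some_restriction("Professor", "worksAt", "University"))
# -> "Every professor works at a university."
```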

The DICE group has been working in this area and has published several papers on NLG for Semantic Web technologies. DICE is therefore responsible for generating explanations of ML actions in the RAKI project. The results of a recent work published by DICE, named LD2NL, suggest that although generating explanations based on KGs and OWL is still in its infancy, LD2NL can generate verbalizations that are close to natural language and easily understood by non-experts. In addition, LD2NL enables non-domain experts to interpret AI actions with more than 91% of the accuracy of domain experts.

Further steps in RAKI comprise the creation of new NLG approaches to handle data from distinct industry domains, as well as handling complex OWL class expressions generated on the fly from ML actions.

Stay tuned!