The goal of the project is to develop machine learning explainability methods for a question of high practical relevance:
Which explanations can be offered to users to make episodic interactive learning efficient and valid, especially in applications where manual data annotation is costly?
In addition to classical feature representations of data, the project will also consider practically relevant latent representations in embedding spaces,
as are common in natural language processing and knowledge graph processing.
By Alexander Bondarenko, Magdalena Wolska, Stefan Heindorf, Lukas Blübaum, Axel-Cyrille Ngonga Ngomo, Benno Stein, Pavel Braslavski, Matthias Hagen, Martin Potthast