Argumentation for Explainable AI

Seminar (Master)

Explainability in AI is crucial for trust, especially in automated decision-making. Abstract argumentation models reasoning as a directed graph in which arguments interact through attack (and, in some variants, support) relations. An argumentation framework (AF) then determines which sets of arguments can jointly be accepted according to formal acceptability semantics, such as the grounded or preferred semantics.
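
To make this concrete, below is a minimal Python sketch of a Dung-style AF restricted to attack relations. The function name grounded_extension and the set-based encoding are illustrative assumptions, not a standard library API. It computes the grounded extension by iterating the characteristic function: it starts from the unattacked arguments and repeatedly adds every argument all of whose attackers are counter-attacked.

  # Minimal sketch of a Dung-style abstract argumentation framework (AF).
  # An AF is a pair (arguments, attacks), where attacks is a set of pairs
  # (b, a) meaning "b attacks a". Names are illustrative, not a library API.

  def grounded_extension(arguments, attacks):
      """Least fixed point of F(S) = {a : every attacker of a is attacked by S}."""
      extension = set()
      while True:
          acceptable = set()
          for a in arguments:
              attackers = {b for (b, c) in attacks if c == a}
              # a is defended if each of its attackers is counter-attacked
              # by some argument already in `extension`.
              if all(any((d, b) in attacks for d in extension) for b in attackers):
                  acceptable.add(a)
          if acceptable == extension:
              return extension
          extension = acceptable

Since the characteristic function is monotone and the set of arguments is finite, this iteration always terminates in the least fixed point.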

For example, in legal AI, an argumentation system evaluating a contract dispute might include:

  • A1: "The contract is invalid due to missing signatures."
  • A2: "The contract is valid because both parties agreed verbally."

Here, A1 attacks A2. If no argument in turn attacks A1 (and thereby defends A2), the framework accepts A1 and rejects A2, so the system concludes that the contract is invalid; the small example after this paragraph evaluates exactly this situation. This structured approach helps AI systems justify their decisions, making their reasoning transparent and explainable.
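
Using the hypothetical grounded_extension sketch above, the dispute can be encoded and evaluated as follows; since nothing counter-attacks A1, only A1 is accepted:

  # Encode the contract dispute: A1 attacks A2, and nothing attacks A1.
  arguments = {"A1", "A2"}
  attacks = {("A1", "A2")}

  print(grounded_extension(arguments, attacks))  # {'A1'}: contract deemed invalid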

This seminar aims to explore formal models, reasoning structures, and practical applications of argumentation in AI explainability. In particular, we will examine how argumentation frameworks enhance AI transparency, justify decisions, and improve user trust.

Literature

Bench-Capon, T. J. M., and Dunne, P. E. (2005). Argumentation in AI and Law: Editors' Introduction. Artificial Intelligence and Law 13(1): 1-8.

Walton, D. (2009). Argumentation Theory: A Very Short Introduction. In Argumentation in Artificial Intelligence, Springer: 1-22.

Course in PAUL

The seminar will be available in PAUL (TBA).

Contact

Yasir Mahmood