RT Dissertation/Thesis
T1 Argumentative Conversational Agents for Explainable Artificial Intelligence
A1 Stepin, Ilia
K1 explainable artificial intelligence
K1 counterfactuals
K1 dialogue game
K1 interpretable fuzzy modelling
K1 human evaluation
AB Recent years have witnessed a striking rise of artificial intelligence algorithms that are able to show outstanding performance. However, such good performance is oftentimes achieved at the expense of explainability. Not only can the lack of algorithmic explainability undermine the user's trust in the algorithmic output, but it can also cause adverse consequences. In this thesis, we advocate the use of interpretable rule-based models that can serve both as stand-alone applications and proxies for black-box models. More specifically, we design an explanation generation framework that outputs contrastive, selected, and social explanations for interpretable (decision trees and rule-based) classifiers. We show that the resulting explanations enhance the effectiveness of AI algorithms while preserving their transparent structure.
YR 2023
FD 2023
LK http://hdl.handle.net/10347/31084
UL http://hdl.handle.net/10347/31084
LA eng
DS Minerva
RD 28 Apr 2026