Argumentative Conversational Agents for Explainable Artificial Intelligence
Abstract
Recent years have witnessed a striking rise of artificial
intelligence algorithms capable of outstanding performance.
However, this performance often comes at the expense of
explainability. A lack of algorithmic explainability can not
only undermine the user's trust in the algorithmic output but
also cause adverse consequences. In this thesis, we advocate
the use of interpretable rule-based models that can serve both
as stand-alone applications and as proxies for black-box
models. More specifically, we design an explanation generation
framework that outputs contrastive, selected, and social
explanations for interpretable classifiers (decision trees and
rule-based models). We show that the resulting explanations
enhance the effectiveness of AI algorithms while preserving
their transparent structure.
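To make the abstract's central idea concrete, the following is a minimal sketch of a rule-based classifier producing a contrastive explanation ("Why this class rather than that one?"). It is illustrative only: the rule format, function names, and the toy loan-approval data are assumptions, not the thesis's actual framework.

```python
# Toy sketch (not the thesis's framework): a rule-based classifier whose
# prediction can be explained contrastively, i.e. by listing the conditions
# of the foil class's rules that the instance fails to satisfy.

def classify(rules, instance):
    """Return (label, conditions) of the first rule whose conditions all hold."""
    for conditions, label in rules:
        if all(instance.get(f) == v for f, v in conditions.items()):
            return label, conditions
    return None, {}

def contrastive_explanation(rules, instance, foil):
    """Explain why the predicted class was chosen over `foil`:
    the fact (fired rule) plus the foil-rule conditions the instance misses."""
    fact, fired = classify(rules, instance)
    unmet = []
    for conditions, label in rules:
        if label == foil:
            unmet.extend((f, v) for f, v in conditions.items()
                         if instance.get(f) != v)
    return fact, fired, unmet

# Hypothetical example rule base and instance.
rules = [
    ({"income": "high", "debt": "low"}, "approve"),
    ({"income": "low"}, "reject"),
]
applicant = {"income": "low", "debt": "low"}
fact, fired, unmet = contrastive_explanation(rules, applicant, "approve")
# fact is "reject"; unmet lists ("income", "high") as the condition
# separating the applicant from the "approve" outcome.
```

Because the classifier itself is a transparent rule list, the explanation is read directly off the model rather than approximated, which is the property the abstract attributes to interpretable models.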
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International