An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information

dc.contributor.affiliation: Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información
dc.contributor.affiliation: Universidade de Santiago de Compostela. Departamento de Electrónica e Computación
dc.contributor.affiliation: Universidade de Santiago de Compostela. Departamento de Filosofía e Antropoloxía
dc.contributor.area: Área de Enxeñaría e Arquitectura
dc.contributor.author: Stepin, Ilia
dc.contributor.author: Pereira Fariña, Martín
dc.contributor.author: Alonso Moral, José María
dc.contributor.author: Catalá Bolós, Alejandro
dc.date.accessioned: 2022-11-29T11:58:37Z
dc.date.available: 2022-11-29T11:58:37Z
dc.date.issued: 2022
dc.description.abstract: The explanatory capacity of interpretable fuzzy rule-based classifiers is usually limited to offering explanations for the predicted class only. The lack of potentially useful explanations for non-predicted alternatives can be overcome by designing methods for so-called counterfactual reasoning. Nevertheless, state-of-the-art methods for counterfactual explanation generation require special attention to human evaluation aspects, as the final decision upon the classification under consideration is left to the end user. In this paper, we first introduce novel methods for qualitative and quantitative counterfactual explanation generation. We then carry out a comparative analysis of qualitative explanation generation methods operating on (combinations of) linguistic terms, as well as of a quantitative method suggesting precise changes in feature values. Next, we propose a new metric for assessing the perceived complexity of the generated explanations. Further, we design and carry out two human evaluation experiments to assess the explanatory power of the aforementioned methods. As a major result, we show that the estimated explanation complexity correlates well with the informativeness, relevance, and readability of explanations as perceived by the targeted study participants. This fact opens the door to using the new automatic complexity metric for guiding multi-objective evolutionary explainable fuzzy modeling in the near future.
dc.description.peerreviewed: SI
dc.description.sponsorship: Ilia Stepin is an FPI researcher (grant PRE2019-090153). Jose M. Alonso-Moral is a Ramón y Cajal researcher (grant RYC-2016-19802). This work was supported by the Spanish Ministry of Science and Innovation (grants RTI2018-099646-B-I00, PID2021-123152OB-C21, and TED2021-130295B-C33) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431F2018/02, ED431G2019/04, and ED431C2022/19). All the grants were co-funded by the European Regional Development Fund (ERDF/FEDER program).
dc.identifier.citation: Information Sciences 618 (2022). https://doi.org/10.1016/j.ins.2022.10.098
dc.identifier.doi: 10.1016/j.ins.2022.10.098
dc.identifier.essn: 0020-0255
dc.identifier.uri: http://hdl.handle.net/10347/29483
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-099646-B-I00/ES/MODELOS, TECNICAS Y METODOLOGIAS BASADAS EN LA INTELIGENCIA ARTIFICIAL PARA LA MEJORA DE LA ADHERENCIA TERAPEUTICA
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2021-123152OB-C21/ES
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/TED2021-130295B-C33/ES
dc.relation.publisherversion: https://doi.org/10.1016/j.ins.2022.10.098
dc.rights: ©2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Explainable artificial intelligence
dc.subject: Interpretable fuzzy modeling
dc.subject: Fuzzy rule-based classification
dc.subject: Counterfactual explanation
dc.subject: Human evaluation
dc.title: An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information
dc.type: journal article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
relation.isAuthorOfPublication: 0150b339-bec0-4820-a75b-ebb1da27d8dc
relation.isAuthorOfPublication: 47f74ee4-a6d5-49cd-8a38-bf9fdeef8f69
relation.isAuthorOfPublication: 2d82830a-9264-499e-905a-dba76d3676fc
relation.isAuthorOfPublication.latestForDiscovery: 0150b339-bec0-4820-a75b-ebb1da27d8dc

Files

Original bundle

Name: 2022_infsci_stepin_empirical.pdf
Size: 1.19 MB
Format: Adobe Portable Document Format
Description: Research article