An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information
| dc.contributor.affiliation | Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información | gl |
| dc.contributor.affiliation | Universidade de Santiago de Compostela. Departamento de Electrónica e Computación | gl |
| dc.contributor.affiliation | Universidade de Santiago de Compostela. Departamento de Filosofía e Antropoloxía | gl |
| dc.contributor.area | Área de Enxeñaría e Arquitectura | |
| dc.contributor.author | Stepin, Ilia | |
| dc.contributor.author | Pereira Fariña, Martín | |
| dc.contributor.author | Alonso Moral, José María | |
| dc.contributor.author | Catalá Bolós, Alejandro | |
| dc.date.accessioned | 2022-11-29T11:58:37Z | |
| dc.date.available | 2022-11-29T11:58:37Z | |
| dc.date.issued | 2022 | |
| dc.description.abstract | The explanatory capacity of interpretable fuzzy rule-based classifiers is usually limited to offering explanations for the predicted class only. A lack of potentially useful explanations for non-predicted alternatives can be overcome by designing methods for so-called counterfactual reasoning. Nevertheless, state-of-the-art methods for counterfactual explanation generation require special attention to human evaluation aspects, as the final decision upon the classification under consideration is left to the end user. In this paper, we first introduce novel methods for qualitative and quantitative counterfactual explanation generation. We then carry out a comparative analysis of qualitative explanation generation methods operating on (combinations of) linguistic terms, as well as of a quantitative method suggesting precise changes in feature values. Next, we propose a new metric for assessing the perceived complexity of the generated explanations. Further, we design and carry out two human evaluation experiments to assess the explanatory power of the aforementioned methods. As a major result, we show that the estimated explanation complexity correlates well with the informativeness, relevance, and readability of explanations as perceived by the targeted study participants. This fact opens the door to using the new automatic complexity metric for guiding multi-objective evolutionary explainable fuzzy modeling in the near future. | gl |
| dc.description.peerreviewed | SI | gl |
| dc.description.sponsorship | Ilia Stepin is an FPI researcher (grant PRE2019-090153). Jose M. Alonso-Moral is a Ramon y Cajal researcher (grant RYC-2016–19802). This work was supported by the Spanish Ministry of Science and Innovation (grants RTI2018-099646-B-I00, PID2021-123152OB-C21, and TED2021-130295B-C33) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431F2018/02, ED431G2019/04, and ED431C2022/19). All the grants were co-funded by the European Regional Development Fund (ERDF/FEDER program). | gl |
| dc.identifier.citation | Information Sciences 618 (2022). https://doi.org/10.1016/j.ins.2022.10.098 | gl |
| dc.identifier.doi | 10.1016/j.ins.2022.10.098 | |
| dc.identifier.essn | 0020-0255 | |
| dc.identifier.uri | http://hdl.handle.net/10347/29483 | |
| dc.language.iso | eng | gl |
| dc.publisher | Elsevier | gl |
| dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/RTI2018-099646-B-I00/ES/MODELOS, TECNICAS Y METODOLOGIAS BASADAS EN LA INTELIGENCIA ARTIFICIAL PARA LA MEJORA DE LA ADHERENCIA TERAPEUTICA | gl |
| dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2021-123152OB-C21/ES | gl |
| dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/TED2021-130295B-C33/ES | gl |
| dc.relation.publisherversion | https://doi.org/10.1016/j.ins.2022.10.098 | gl |
| dc.rights | ©2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) | gl |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
| dc.rights.accessRights | open access | gl |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.subject | Explainable artificial intelligence | gl |
| dc.subject | Interpretable fuzzy modeling | gl |
| dc.subject | Fuzzy rule-based classification | gl |
| dc.subject | Counterfactual explanation | gl |
| dc.subject | Human evaluation | gl |
| dc.title | An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information | gl |
| dc.type | journal article | gl |
| dc.type.hasVersion | VoR | gl |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | 0150b339-bec0-4820-a75b-ebb1da27d8dc | |
| relation.isAuthorOfPublication | 47f74ee4-a6d5-49cd-8a38-bf9fdeef8f69 | |
| relation.isAuthorOfPublication | 2d82830a-9264-499e-905a-dba76d3676fc | |
| relation.isAuthorOfPublication.latestForDiscovery | 0150b339-bec0-4820-a75b-ebb1da27d8dc |
Files
Original bundle
- Name: 2022_infsci_stepin_empirical.pdf
- Size: 1.19 MB
- Format: Adobe Portable Document Format
- Description: Research article