An operational framework for guiding human evaluation in Explainable and Trustworthy AI
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.affiliation | Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información | es_ES |
| dc.contributor.area | Área de Enxeñaría e Arquitectura | |
| dc.contributor.author | Confalonieri, Roberto | |
| dc.contributor.author | Alonso Moral, José María | |
| dc.date.accessioned | 2024-02-05T09:30:20Z | |
| dc.date.available | 2024-02-05T09:30:20Z | |
| dc.date.issued | 2023-11 | |
| dc.description.abstract | The assessment of explanations by humans presents a significant challenge within the context of Explainable and Trustworthy AI. This is attributed not only to the absence of universal metrics and standardized evaluation methods, but also to complexities tied to devising user studies that assess the perceived human comprehensibility of these explanations. To address this gap, we introduce a survey-based methodology for guiding the human evaluation of explanations. This approach amalgamates leading practices from existing literature and is implemented as an operational framework. This framework assists researchers throughout the evaluation process, encompassing hypothesis formulation, online user study implementation and deployment, and analysis and interpretation of collected data. The application of this framework is exemplified through two practical user studies. | es_ES |
| dc.description.peerreviewed | SI | es_ES |
| dc.description.sponsorship | The authors would like to thank Marzo Zenere for the implementation of the Python wizard during his MSc thesis. This work is supported by MCIN/AEI/10.13039/501100011033 (grants PID2021-123152OB-C21, TED2021-130295BC33 and RED2022-134315-T) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431G2019/04 and ED431C2022/19 which are co-funded by the ERDF/FEDER program). | es_ES |
| dc.identifier.citation | R. Confalonieri, J. M. Alonso-Moral, "An operational framework for guiding human evaluation in Explainable and Trustworthy AI", IEEE Intelligent Systems, 2023, https://doi.org/10.1109/MIS.2023.3334639 | es_ES |
| dc.identifier.doi | 10.1109/MIS.2023.3334639 | |
| dc.identifier.essn | 1941-1294 | |
| dc.identifier.issn | 1541-1672 | |
| dc.identifier.uri | http://hdl.handle.net/10347/32315 | |
| dc.language.iso | eng | es_ES |
| dc.publisher | IEEE | es_ES |
| dc.relation.publisherversion | https://doi.org/10.1109/MIS.2023.3334639 | es_ES |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. | es_ES |
| dc.rights.accessRights | open access | es_ES |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.subject | Artificial Intelligence | es_ES |
| dc.subject | Taxonomy | es_ES |
| dc.subject | Planning | es_ES |
| dc.subject | Intelligent Systems | es_ES |
| dc.subject | Cognitive Science | es_ES |
| dc.subject | Ethics | es_ES |
| dc.subject | Task Analysis | es_ES |
| dc.title | An operational framework for guiding human evaluation in Explainable and Trustworthy AI | es_ES |
| dc.type | journal article | es_ES |
| dc.type.hasVersion | AM | es_ES |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | 47f74ee4-a6d5-49cd-8a38-bf9fdeef8f69 | |
| relation.isAuthorOfPublication.latestForDiscovery | 47f74ee4-a6d5-49cd-8a38-bf9fdeef8f69 |
Files
Original bundle
- Name: online-10322863.pdf
- Size: 638.26 KB
- Format: Adobe Portable Document Format