An operational framework for guiding human evaluation in Explainable and Trustworthy AI

dc.contributor.affiliation: Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información
dc.contributor.area: Área de Enxeñaría e Arquitectura
dc.contributor.author: Confalonieri, Roberto
dc.contributor.author: Alonso Moral, José María
dc.date.accessioned: 2024-02-05T09:30:20Z
dc.date.available: 2024-02-05T09:30:20Z
dc.date.issued: 2023-11
dc.description.abstract: The assessment of explanations by humans presents a significant challenge within the context of Explainable and Trustworthy AI. This is attributed not only to the absence of universal metrics and standardized evaluation methods, but also to complexities tied to devising user studies that assess the perceived human comprehensibility of these explanations. To address this gap, we introduce a survey-based methodology for guiding the human evaluation of explanations. This approach amalgamates leading practices from existing literature and is implemented as an operational framework. This framework assists researchers throughout the evaluation process, encompassing hypothesis formulation, online user study implementation and deployment, and analysis and interpretation of collected data. The application of this framework is exemplified through two practical user studies.
dc.description.peerreviewed: SI
dc.description.sponsorship: The authors would like to thank Marzo Zenere for the implementation of the Python wizard during his MSc thesis. This work is supported by MCIN/AEI/10.13039/501100011033 (grants PID2021-123152OB-C21, TED2021-130295BC33 and RED2022-134315-T) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431G2019/04 and ED431C2022/19, which are co-funded by the ERDF/FEDER program).
dc.identifier.citation: R. Confalonieri, J. M. Alonso-Moral, "An operational framework for guiding human evaluation in Explainable and Trustworthy AI", IEEE Intelligent Systems, 2023, https://doi.org/10.1109/MIS.2023.3334639
dc.identifier.doi: 10.1109/MIS.2023.3334639
dc.identifier.essn: 1941-1294
dc.identifier.issn: 1541-1672
dc.identifier.uri: http://hdl.handle.net/10347/32315
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.publisherversion: https://doi.org/10.1109/MIS.2023.3334639
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Artificial Intelligence
dc.subject: Taxonomy
dc.subject: Planning
dc.subject: Intelligent Systems
dc.subject: Cognitive Science
dc.subject: Ethics
dc.subject: Task Analysis
dc.title: An operational framework for guiding human evaluation in Explainable and Trustworthy AI
dc.type: journal article
dc.type.hasVersion: AM
dspace.entity.type: Publication
relation.isAuthorOfPublication: 47f74ee4-a6d5-49cd-8a38-bf9fdeef8f69
relation.isAuthorOfPublication.latestForDiscovery: 47f74ee4-a6d5-49cd-8a38-bf9fdeef8f69

Files

Original bundle

Name: online-10322863.pdf
Size: 638.26 KB
Format: Adobe Portable Document Format