RT Journal Article
T1 An operational framework for guiding human evaluation in Explainable and Trustworthy AI
A1 Confalonieri, Roberto
A1 Alonso Moral, José María
K1 Artificial Intelligence
K1 Taxonomy
K1 Planning
K1 Intelligent Systems
K1 Cognitive Science
K1 Ethics
K1 Task Analysis
AB The assessment of explanations by humans presents a significant challenge within the context of Explainable and Trustworthy AI. This is attributed not only to the absence of universal metrics and standardized evaluation methods, but also to complexities tied to devising user studies that assess the perceived human comprehensibility of these explanations. To address this gap, we introduce a survey-based methodology for guiding the human evaluation of explanations. This approach amalgamates leading practices from existing literature and is implemented as an operational framework. This framework assists researchers throughout the evaluation process, encompassing hypothesis formulation, online user study implementation and deployment, and analysis and interpretation of collected data. The application of this framework is exemplified through two practical user studies.
PB IEEE
SN 1541-1672
YR 2023
FD 2023-11
LK http://hdl.handle.net/10347/32315
UL http://hdl.handle.net/10347/32315
LA eng
NO R. Confalonieri, J. M. Alonso-Moral, "An operational framework for guiding human evaluation in Explainable and Trustworthy AI", IEEE Intelligent Systems, 2023, https://doi.org/10.1109/MIS.2023.3334639
NO The authors would like to thank Marzo Zenere for the implementation of the Python wizard during his MSc thesis. This work is supported by MCIN/AEI/10.13039/501100011033 (grants PID2021-123152OB-C21, TED2021-130295B-C33 and RED2022-134315-T) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431G2019/04 and ED431C2022/19, which are co-funded by the ERDF/FEDER program).
DS Minerva
RD 1 May 2026