RT Journal Article
T1 Operationalizing Explainable AI in the EU Regulatory Ecosystem
A1 Nannini, Luca
A1 Alonso Moral, José María
A1 Catalá Bolós, Alejandro
A1 Lama Penín, Manuel
A1 Barro Ameneiro, Senén
K1 Artificial intelligence
K1 Law
K1 Regulation
K1 Intelligent systems
K1 Ecosystems
K1 Stakeholders
K1 Explainable AI
K1 Sociotechnical systems
K1 Europe
AB The European Union’s (EU’s) regulatory ecosystem presents challenges with balancing legal and sociotechnical drivers for explainable artificial intelligence (XAI) systems. Core tensions emerge on dimensions of oversight, user needs, and litigation. This article maps provisions on algorithmic transparency and explainability across major EU data, AI, and platform policies using qualitative analysis. We characterize the involved stakeholders and organizational implementation targets. Constraints become visible between useful transparency for accountability and confidentiality protections. Through an AI hiring system example, we explore the complications with operationalizing explainability. Customization is required to satisfy explainability desires within confidentiality and proportionality bounds. The findings advise technologists on prudent XAI technique selection given multidimensional tensions. The outcomes recommend that policy makers balance worthy transparency goals with cohesive legislation, enabling equitable dispute resolution.
PB IEEE
YR 2024
FD 2024
LK http://hdl.handle.net/10347/34596
UL http://hdl.handle.net/10347/34596
LA eng
NO L. Nannini, J. M. Alonso-Moral, A. Catalá, M. Lama and S. Barro, "Operationalizing Explainable Artificial Intelligence in the European Union Regulatory Ecosystem," in IEEE Intelligent Systems, vol. 39, no. 4, pp. 37-48, July-Aug. 2024, doi: 10.1109/MIS.2024.3383155
DS Minerva
RD 23 Apr 2026