Authors: Nannini, Luca; Alonso Moral, José María; Catalá Bolós, Alejandro; Lama Penín, Manuel; Barro Ameneiro, Senén
Date available: 2024-08-05
Date issued: 2024
Citation: L. Nannini, J. M. Alonso-Moral, A. Catalá, M. Lama and S. Barro, "Operationalizing Explainable Artificial Intelligence in the European Union Regulatory Ecosystem," in IEEE Intelligent Systems, vol. 39, no. 4, pp. 37-48, July-Aug. 2024, doi: 10.1109/MIS.2024.3383155
Handle: http://hdl.handle.net/10347/34596
Abstract: The European Union's (EU's) regulatory ecosystem presents challenges in balancing legal and sociotechnical drivers for explainable artificial intelligence (XAI) systems. Core tensions emerge along the dimensions of oversight, user needs, and litigation. This article maps provisions on algorithmic transparency and explainability across major EU data, AI, and platform policies using qualitative analysis. We characterize the stakeholders involved and the organizational targets of implementation. Constraints become visible between transparency that is useful for accountability and confidentiality protections. Through the example of an AI hiring system, we explore the complications of operationalizing explainability. Customization is required to satisfy explainability demands within the bounds of confidentiality and proportionality. The findings advise technologists on prudent XAI technique selection given these multidimensional tensions. The outcomes recommend that policy makers balance worthy transparency goals with cohesive legislation, enabling equitable dispute resolution.
Language: eng
License: Attribution 4.0 International. © 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License, http://creativecommons.org/licenses/by/4.0/
Keywords: Artificial intelligence; Law; Regulation; Intelligent systems; Ecosystems; Stakeholders; Explainable AI; Sociotechnical systems; Europe
Title: Operationalizing Explainable AI in the EU Regulatory Ecosystem
Type: journal article
DOI: 10.1109/MIS.2024.3383155
ISSN: 1941-1294
Access: open access