Operationalizing Explainable AI in the EU Regulatory Ecosystem

Abstract

The European Union’s (EU’s) regulatory ecosystem poses the challenge of balancing legal and sociotechnical drivers for explainable artificial intelligence (XAI) systems. Core tensions emerge along the dimensions of oversight, user needs, and litigation. Using qualitative analysis, this article maps provisions on algorithmic transparency and explainability across major EU data, AI, and platform policies, and characterizes the stakeholders involved and the organizational targets of implementation. The mapping exposes constraints between transparency that is useful for accountability and protections for confidentiality. Through the example of an AI hiring system, we explore the complications of operationalizing explainability: explanations must be tailored to satisfy explainability requirements within the bounds of confidentiality and proportionality. The findings advise technologists on prudent selection of XAI techniques given these multidimensional tensions, and recommend that policymakers pursue transparency goals through cohesive legislation that enables equitable dispute resolution.
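
To make "operationalizing explainability" concrete, the following is a minimal sketch, not drawn from the article itself, of one common post-hoc XAI technique a deployer might apply to a hiring classifier: permutation feature importance via scikit-learn. The model, the synthetic data, and all feature names (years_experience, test_score, referral, gap_months) are hypothetical assumptions for illustration only.

    # Hypothetical sketch (not from the article): post-hoc feature
    # attributions for an illustrative hiring classifier, using
    # permutation importance as one example XAI technique.
    import numpy as np
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic applicant data; the feature names are invented
    # stand-ins for attributes a hiring model might consume.
    feature_names = ["years_experience", "test_score", "referral", "gap_months"]
    X = rng.normal(size=(500, 4))
    # Ground-truth signal depends mostly on experience and test score.
    y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # Permutation importance: how much held-out accuracy drops when each
    # feature is shuffled. The aggregate scores could be logged for
    # oversight without disclosing the model's internal parameters.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=20, random_state=0)
    for name, mean, std in zip(feature_names,
                               result.importances_mean,
                               result.importances_std):
        print(f"{name:>18}: {mean:.3f} +/- {std:.3f}")

Because permutation importance is model agnostic, the same audit interface could wrap a proprietary model whose parameters remain confidential, which speaks to the transparency-versus-confidentiality tension the abstract describes.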

Bibliographic citation

L. Nannini, J. M. Alonso-Moral, A. Catalá, M. Lama and S. Barro, "Operationalizing Explainable Artificial Intelligence in the European Union Regulatory Ecosystem," in IEEE Intelligent Systems, vol. 39, no. 4, pp. 37-48, July-Aug. 2024, doi: 10.1109/MIS.2024.3383155.

Rights

Attribution 4.0 International
© 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License.