eLaw Co-authors Award-Winning Explainable AI 2.0 Article

Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, is co-author of the article "Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions", which has been awarded the 2025 Best Paper Award by Information Fusion.

The paper takes stock of where explainable AI currently stands and argues that the field is entering a new phase. Rather than treating explanations as a purely technical add-on, the article frames explainability as a socio-technical problem: explanations have to be meaningful for real audiences, suitable for specific contexts, and robust enough to support governance, accountability and contestation.

On this basis, the authors outline a research agenda for "XAI 2.0". A central theme is the need to clarify what counts as a good explanation, depending on who the explanation is for and what it is meant to achieve, whether that is supporting developers in debugging, enabling impacted individuals to understand and challenge outcomes, or allowing regulators and auditors to assess compliance. The article also stresses that progress requires more robust and shared approaches to evaluation, moving beyond isolated metrics towards methods that are comparable across settings while still sensitive to context and purpose.

The manifesto further highlights the practical limits and trade-offs that explainability faces in real systems. These include tensions between interpretability and predictive performance, and between transparency and security, as well as the risk that explanations can be superficial or misleading even when they appear technically plausible. Relatedly, the paper emphasises human factors and interaction: explanations are interpreted, trusted, ignored, or strategically used in ways that can shape decision making and responsibility, which makes user-centred design and empirical testing an important part of the research agenda.

From an eLaw perspective, the article is particularly relevant because it connects technical debates on explainability with governance and legal questions. It situates explanations within accountability practices such as risk management, auditing, documentation, and oversight, and highlights the role explanations can play in supporting meaningful transparency and the contestability of automated or AI-supported decisions.

The article was coordinated by Dr. Luca Longo and is authored by an interdisciplinary team spanning computer science, explainable AI, and law.
