by Ljupcho Grozdanovski
Repository Citation
Ljupcho Grozdanovski, The Explanations One Needs for the Explanations One Gives—The Necessity of Explainable AI (XAI) for Causal Explanations of AI-Related Harm: Deconstructing the ‘Refuge of Ignorance’ in the EU’s AI Liability Regulation, Spring 2024 Int’l J. L. Ethics Tech. 2 (2024).
Available at: https://www.doi.org/10.55574/TQCG5204
Author Information: National Foundation for Scientific Research (FNRS); Faculty of Law, Political Science and Criminology, University of Liège, Belgium.
Abstract:
This paper examines how explanations of the adverse outcomes of Artificial Intelligence (AI) contribute to the development of causal evidentiary explanations in disputes over AI liability. The study employs a dual approach: first, it analyzes the emerging global caselaw in the field of AI liability, seeking to discern prevailing trends regarding the evidence and explanations considered essential for the fair resolution of disputes. Against the backdrop of those trends, it then evaluates the upcoming European Union (EU) legislation on AI liability, namely the AI Liability Directive (AILD) and the Revised Product Liability Directive (R-PLD). The objective is to ascertain whether the systems of evidence and procedural rights outlined in this legislation, particularly the right to request the disclosure of evidence, enable litigants to adequately understand the causality underlying AI-related harms. Moreover, the paper seeks to determine whether litigants can, based on that understanding, effectively express their views before dispute-resolution authorities. An examination of the AILD and R-PLD reveals that their evidence systems primarily support ad hoc explanations, allowing litigants and courts to assess the extent of the defendants’ compliance with the standards enshrined in regulatory instruments such as the AI Act. However, the paper contends that, beyond ad hoc explanations, achieving fair resolution in AI liability disputes necessitates post hoc explanations, directed at unveiling the functionalities of AI systems and the rationale behind harmful automated decisions. The paper thus suggests that ‘full’ explainable AI (XAI), that is, both ad hoc and post hoc, is necessary so that the constitutional requirements associated with the right to a fair trial (access to courts, equality of arms, contradictory debate) can be effectively met.
Keywords: AI, Causation, Explainability, Fair Trial, Procedural Fairness, Equality of Arms, Effective Participation, AI Liability, Product Liability, AI Act, AI Liability Directive, Product Liability Directive
Attribution 4.0 International (CC BY 4.0)
Persistent link: https://www.ijlet.org/2024-2-155-262/