“TRUSTWORTHY AI” CANNOT BE TRUSTED: A VIRTUE JURISPRUDENCE-BASED APPROACH TO ANALYSE WHO IS RESPONSIBLE FOR AI ERRORS

by Shilun Zhou

Repository Citation

Shilun Zhou, “Trustworthy AI” Cannot Be Trusted: A Virtue Jurisprudence-Based Approach to Analyse Who Is Responsible for AI Errors, Summer 2024 Int’l J. L. Ethics Tech. 3 (2024).
Available at: https://www.doi.org/10.55574/DWJZ4472

Author Information: University of Edinburgh, UK

Abstract:

Erroneous results generated by artificial intelligence (AI) have raised new questions in legal scholarship about who is responsible for AI errors. I support the prevailing academic view that human subjects should be held responsible for AI errors. However, I argue that the underlying reason does not pertain to the reliability of AI, but rather to the inability of humans to establish a trusting relationship with AI. The term ‘Trustworthy AI’ is merely a metaphor that conveys a sense of trust; AI itself is not trustworthy. The first section outlines the academic debate on responsibility for AI. It contends that the perspective of these debates has shifted from the characteristics of AI, such as autonomy and explainability, to a human-centred perspective, namely how humans should develop AI. The assumption of responsibility depends on the existence of a trust relationship, because when people believe that an individual can fulfil his or her responsibilities, they are willing to hand over power, resources or tasks to that individual. The second section applies a virtue jurisprudence-based approach to explain why humans cannot establish a trust relationship with AI. To establish such a relationship, one subject must indicate to the other that its behaviour is based on specific moral motivation and that it can be held morally responsible. AI, however, lacks both moral motivation and moral responsibility. The third section reconsiders the scope of responsible subjects for AI errors. It posits that accountability should be limited to the individuals who are direct beneficiaries of the AI product. Finally, it argues that the scope of responsibility for AI errors should vary according to the risk level of the AI: for high-risk AI, responsible subjects must fulfil both the obligations under the AI Act and the obligation to provide technical authentication.


Keywords: AI Law, Trustworthy AI, AI Act, Legal Responsibility, Virtue Jurisprudence

Attribution 4.0 International (CC BY 4.0)

Persistent link: https://www.ijlet.org/2024-3-186-216/

DOI: https://www.doi.org/10.55574/DWJZ4472
