
Explainability of Artificial Intelligence Models: Technical Foundations and Legal Principles

Open Access | Apr 2023

Abstract

The now-prevalent use of Artificial Intelligence (AI), and specifically of machine learning-driven models, to automate decision-making raises novel legal issues. One issue of particular importance arises when the rationale for an automated decision is not readily determinable or traceable because of the complexity of the model used: How can such a decision be legally assessed and substantiated? How can any potential legal liability for a "wrong" decision be properly determined? These questions are being explored by organizations and governments around the world.

A key input to any analysis in these cases is the extent to which the model in question is "explainable".

This paper seeks to provide (1) an introductory overview of the technical components of machine learning models, presented in a manner accessible to readers without a computer science or mathematics background; (2) a summary of the Canadian and Vietnamese responses to the explainability challenge so far; (3) an analysis of what an "explanation" is in the scientific and legal domains; and (4) a preliminary legal framework for analyzing the sufficiency of explanation of a particular model and its prediction(s).

DOI: https://doi.org/10.2478/vjls-2022-0006 | Journal eISSN: 2719-3004 | Journal ISSN: 2719-5872
Language: English
Page range: 1 - 38
Published on: Apr 19, 2023
Published by: Hochiminh City University of Law
In partnership with: Paradigm Publishing Services
Publication frequency: 3 issues per year

© 2023 Jake Van Der Laan, published by Hochiminh City University of Law
This work is licensed under the Creative Commons Attribution 4.0 License.