Exploring the Challenges of Bias and Lack of Transparency in AI: Implications for the Legal Industry

This blog post is the first in a series adapted from a detailed article co-authored by Sophie Nappert and Sarah Chojecki entitled “Evidence in International Arbitration through the Looking-Glass of the Digital Economy”, and is intended to provide a summary of a number of the issues raised. — Editor

AI and the Law

Artificial Intelligence (AI) has the potential to automate repetitive tasks, increase efficiency, and reduce costs, making it an attractive way to make lawyers and legal staff more productive and to create new roles. However, AI also challenges our notions of fairness and due process, which are cornerstones of the rule of law and essential aspects of international arbitration. In particular, bias in AI and the lack of transparency and explainability in AI models could affect the validity and reliability of evidence obtained through AI.


Validity and Reliability of AI: The Legal Context

This blog post explores the factors affecting the validity and reliability of AI applied in the legal context, focusing on the challenges posed by bias and by the lack of transparency and explainability in AI models. Bias is a significant challenge to AI's validity and reliability: AI's propensity to replicate or introduce human error or bias could lead to decisions that appear objective but are, in fact, discriminatory. Arvind Narayanan, a computer scientist and professor at Princeton University, put it this way: “Today’s AI/ML [machine learning] is uninterpretable, biased and fragile. When it works, we don’t understand why.” This is particularly relevant in the legal industry, where the use of AI in eDiscovery and arbitrator selection could result in skewed or unfair outcomes. The lack of transparency and explainability in the AI models used in arbitration exacerbates this problem by making it difficult to identify any biases or flaws in the algorithm.

Moreover, the lack of transparency and explainability in AI models used in arbitration could call into question the validity and reliability of evidence obtained through AI: if the decision-making process of AI is opaque, it is difficult to evaluate the accuracy and reliability of the evidence it produces. James Dempsey, of the Berkeley Center for Law and Technology, notes that “AI may replicate human error or bias or introduce new types of error or bias …. AI trained on data that reflects biases that infected past decisions could incorporate those biases into future decision-making, yet give such decisions the appearance of objectivity.”

Examples of areas where bias and lack of transparency in AI could affect the legal disputes industry:

  1. Bias in eDiscovery and AI analytics: AI tools are used in eDiscovery to search and analyze vast amounts of electronic data for relevant evidence in a legal case. If the algorithm is trained on biased inputs, however, it can exclude relevant evidence or include irrelevant evidence, producing a skewed analysis (a toy illustration follows this list).

  2. Bias in arbitrator selection: AI tools can be used to select arbitrators based on certain attributes or previous decisions. If the model carries hidden biases, however, it could produce a pool of arbitrators that is not diverse or inclusive, or that is simply not suited to the dispute at all. A related concern is the hallucination observed in certain large language models (LLMs) such as ChatGPT, which can generate plausible-sounding but fabricated information.
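
To make the eDiscovery concern concrete, here is a minimal, purely illustrative Python sketch. The documents, labels, and scikit-learn classifier are hypothetical and do not come from the underlying article; the point is simply that when the human-coded labels used to train a relevance classifier systematically mark one category of material as irrelevant, the model learns that skew and applies it at scale.

```python
# A minimal, hypothetical sketch of how biased training labels skew an
# eDiscovery-style relevance classifier. All documents and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical review set: reviewers consistently coded "supplier rebate"
# emails as irrelevant, even when they bear on the disputed contract.
docs = [
    "breach of the supply contract and penalty clause",       # coded relevant
    "contract termination notice sent to the counterparty",   # coded relevant
    "supplier rebate schedule tied to the same contract",     # coded irrelevant (biased label)
    "supplier rebate discussion referencing penalty terms",   # coded irrelevant (biased label)
    "lunch menu for the office holiday party",                # coded irrelevant
]
labels = [1, 1, 0, 0, 0]  # 1 = relevant, 0 = irrelevant

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)

# A new rebate email that is plainly material to the dispute will likely
# receive a low relevance score, mirroring the skew in the labels rather
# than the merits of the document.
new_doc = ["supplier rebate email admitting the contract breach"]
print(model.predict_proba(vectorizer.transform(new_doc))[0, 1])
```

The particular library and model do not matter: any review tool trained on skewed relevance calls will tend to reproduce them, and at the volumes involved in eDiscovery the resulting exclusions are hard to spot by hand.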

It is worrisome, to say the least, that, as Maxi Scherer has observed, the use of algorithms in criminal risk assessment in the US has led to racially biased outcomes, demonstrating the potential for AI to perpetuate societal biases and prejudices. To address the lack of transparency regarding how and why AI reaches a particular output (the so-called ‘black box’ feature of AI), an entire branch of AI research is now dedicated to developing Explainable Artificial Intelligence (XAI), that is, AI whose outputs can be explained to humans. Even with XAI, however, challenges remain, such as the tradeoff between algorithmic accuracy and explainability and the potential for AI providers to claim proprietary trade secrets and resist disclosure of their data and algorithms.
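
Even when a model remains a black box, its outputs can still be audited. The following is a minimal sketch, using entirely hypothetical records, of one such check: comparing false-positive rates across two groups scored by a risk-assessment tool, a simplified version of the kind of disparity analysis applied in studies of criminal risk-assessment algorithms.

```python
# Hypothetical audit of a black-box risk tool's outputs: compare how often
# each group is flagged high risk despite not reoffending.
from collections import defaultdict

# (group, model_flagged_high_risk, actually_reoffended) -- invented records
records = [
    ("A", True,  False), ("A", True,  False), ("A", True, True), ("A", False, False),
    ("B", False, False), ("B", False, False), ("B", True, True), ("B", False, False),
]

# group -> [flagged despite not reoffending, total who did not reoffend]
false_pos = defaultdict(lambda: [0, 0])
for group, flagged, reoffended in records:
    if not reoffended:
        false_pos[group][1] += 1
        if flagged:
            false_pos[group][0] += 1

for group, (fp, total) in false_pos.items():
    print(f"Group {group}: false-positive rate = {fp}/{total} = {fp/total:.0%}")
# Group A: 2/3 (67%) vs Group B: 0/3 (0%) -- a disparity worth investigating.
```

Output-level audits of this kind do not explain why the model behaves as it does, which is where XAI comes in, but they can at least flag that something has gone wrong.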


Addressing the Challenges of AI in Law

As the use of AI in the legal industry and other fields continues to expand, the challenges posed by bias and by the lack of transparency and explainability in AI models must be addressed to ensure that AI is used in a fair and unbiased manner, leading to more objective outcomes. This can be achieved, to some extent, by developing and implementing Explainable Artificial Intelligence (XAI) and, in line with the European Union’s proposed Artificial Intelligence Act, by ensuring that AI is subject to robust testing for validity and reliability, transparency and explainability, and accountability. It is also essential to recognize that AI should be viewed as a tool that can aid in the performance of legal work, not as a replacement for human judgment and expertise. By addressing these challenges, we can unlock the full potential of AI while ensuring that it is used in a way that upholds the principles of fairness and due process.
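
As a rough illustration of what “robust testing for validity and reliability” might involve in practice (the dataset, model, and checks below are hypothetical stand-ins, not requirements drawn from the proposed Act), a party relying on an AI tool can at minimum measure its accuracy on data it has never seen and confirm that it returns the same answer to the same input:

```python
# Hypothetical validity and reliability checks for an AI tool.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)  # stand-in data
model = LogisticRegression()

# Validity: measure held-out accuracy rather than relying on vendor claims.
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")

# Reliability: the same input should receive the same output on repeat runs.
model.fit(X, y)
assert (model.predict(X[:5]) == model.predict(X[:5])).all(), "non-deterministic output"
```

Checks of this kind are necessarily partial: they say nothing about explainability or about bias in the underlying training data, but they provide a documented baseline against which a tool’s claims can be tested.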
