AI and Decision-Making in Investor-State Arbitration: Opportunities, Risks, and Ethical Considerations

By Yağmur Şapçı

17 April 2026


Traditional investor-state arbitration frameworks have been criticized for their opacity, high costs, and vulnerability to human bias. This blog post investigates the integration of artificial intelligence (AI) into investor-state arbitration and evaluates its potential to improve efficiency, impartiality, and transparency. The central question is whether AI can overcome these challenges while upholding the ethical and legal foundations of arbitration: in particular, whether it can enhance decision-making in investor-state arbitration, increase transparency, and preserve the delicate balance between investor protection and national sovereignty. AI's potential to reduce human error and bias while upholding procedural justice is critically assessed.


I. Artificial Intelligence Arbitrator


Central to arbitral procedures is the process of decision-making, which prompts a critical discussion: Can AI truly aid arbitrators in their decisions, or perhaps even replace them altogether? Investor-State Dispute Settlement ("ISDS") decisions are made by panels typically comprising three arbitrators: two appointed by the disputing parties and a chairperson mutually agreed upon or appointed by an overseeing institution. Unlike traditional courts, these panels are ad hoc, formed specifically for each case, which introduces variability in decision-making dynamics. Decisions carry significant economic and policy implications, and the process is less transparent than public judicial systems, raising concerns about influence and bias.


An analysis of the textual provisions of institutional arbitration rules, alongside national legislation, is essential to understanding how arbitrators are defined. While the UNCITRAL Model Law on International Commercial Arbitration refers to an arbitrator in neutral terms, without specifically requiring that an arbitrator be a natural person[1], the qualifications of arbitrators set out in the Convention on the Settlement of Investment Disputes between States and Nationals of Other States (the ICSID Convention) are generally interpreted to imply that arbitrators should be human. The lex arbitri of countries such as France, the Netherlands, and Portugal explicitly requires that an arbitrator be a natural person. Legal instruments in countries such as China, Vietnam, and North Korea require arbitrators to have several years of experience as judges or lawyers, thereby implying that only humans can qualify as arbitrators[2]. Furthermore, the arbitration laws of Egypt, Finland, Iceland, Italy, and Sweden mandate that arbitrators possess specific competencies and capabilities which, at present, are distinctly human traits. The use of gendered pronouns in UNCITRAL's texts further underscores the anticipation of human arbitrators. For instance, under Article 12(1) of the UNCITRAL Model Law, when a person is approached for a potential appointment, "he shall disclose any circumstances likely to give rise to justifiable doubts as to his impartiality or independence."[3] This inherently human phrasing continues in the provisions on an arbitrator's nationality, stating that no person shall be precluded from acting "by reason of his nationality"[4], and on termination, which occurs if an arbitrator becomes unable to perform "his functions" or voluntarily withdraws from "his office."[5]


II. Limitations in Artificial Intelligence-Driven Decisions


Despite these potential advancements, several limitations hinder the full integration of AI-driven decisions in ISDS. These limitations encompass concerns regarding independence and impartiality, the duty to give reasons, the variety of input data, and ethical considerations, including emotional intelligence.


1. Independence and Impartiality

The independence and impartiality of arbitrators are cornerstone principles in arbitration. For human arbitrators, these qualities are assessed through disclosures of potential conflicts of interest and challenges based on apparent bias. AI systems, while free from personal relationships or financial incentives, introduce novel challenges. The proprietary nature of many AI algorithms creates a "black box" problem, where the decision-making process is opaque even to its developers. This opacity conflicts with the requirement for arbitrators to disclose circumstances that might affect their impartiality. Moreover, AI systems evolve through updates and retraining, which could alter their decision-making patterns mid-proceedings. Such changes would be analogous to replacing a human arbitrator during a case, potentially undermining perceptions of consistency and fairness.


2. Duty to Give Reasons

Arbitral awards must be reasoned, providing a clear explanation of how the tribunal arrived at its decision. This duty ensures transparency, facilitates review, and enables enforcement under international conventions like the United Nations Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York, 10 June 1958). Human arbitrators fulfil this through narrative explanations that connect facts, law, and reasoning. AI systems, particularly deep learning models, struggle with explainability. Their outputs are often probabilistic or based on complex pattern recognition that defies simple articulation. The "black box" nature of AI decisions poses a fundamental challenge to the reasoned award requirement. Even if AI generates a decision, translating its internal logic into a comprehensible rationale for human review remains technically difficult.


3. Variety of Input Data

Arbitration involves diverse evidence beyond textual submissions: witness credibility assessments, expert testimony nuances, cultural contexts, and policy considerations all inform decisions. AI excels with structured, quantifiable data but struggles with the qualitative, contextual elements central to ISDS. Many arbitration awards remain confidential, limiting the training data available to AI systems and perpetuating inconsistencies rather than resolving them.


4. Ethical Considerations and Emotional Intelligence

Ethical considerations also play a crucial role in the deployment of AI in decision-making. AI-based systems may exhibit algorithmic bias, reflecting the inherent biases in the data they are trained on. In the context of adjudicating claims of moral damages in international investment disputes, the subjective nature of such claims further complicates AI's ability to make fair judgments. Additionally, while international investment law primarily protects foreign investments rather than the morals or reputations of investors, human arbitrators occasionally grant monetary compensation for moral damages[6], showcasing a capacity for empathetic and equitable decision-making that AI currently lacks.


III. Case Studies

To elucidate the aforementioned concerns about AI-driven decision-making in ISDS, three case studies merit analysis. Although none of them focuses specifically on investor-state disputes, this does not substantially diminish their relevance, as arbitral decision-making is underpinned by universally shared core values.


1. European Court of Human Rights

The first study examines AI's capability to predict outcomes at the European Court of Human Rights ("ECtHR"), focusing on decisions pertaining to key articles of the European Convention on Human Rights[7]. Employing natural language processing and machine learning techniques, the study scrutinizes the text of court decisions, deliberately omitting any sections that explicitly announce the judgment. The objective is to predict violations under the designated articles by training a model on a subset of available decisions, achieving an overall predictive accuracy of 79%. Notably, the most predictive elements of the text were the descriptions of factual circumstances and procedural history, indicating the potential effectiveness of text-based models in forecasting court decisions. However, the study also highlights significant challenges to the validity of such models for ex ante predictions. The inclusion of legal reasoning in the data fed into the model poses a particular problem for the claim of future applicability: this information, critical in forming the judgments, is not available before a decision is rendered, undermining the study's premise of predictive utility. Additionally, the selective presentation of facts within the judgments and the reliance on case-specific predictive words complicate true ex ante prediction, as the judgments may not represent a comprehensive or neutral dataset from which AI can learn without inherent biases.
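The study's core method, learning a violation/no-violation classifier from decision text with the judgment sections removed, can be sketched with a minimal bag-of-words model. The toy "factual circumstances" snippets and labels below are invented for illustration only; the original work used far richer features (n-grams and topics) and a more sophisticated classifier over real ECtHR judgments.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.labels = set(labels)
        self.label_counts = Counter(labels)
        self.word_counts = {label: Counter() for label in self.labels}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        total_docs = sum(self.label_counts.values())
        best_label, best_logprob = None, float("-inf")
        for label in self.labels:
            # log prior + sum of smoothed log likelihoods
            logprob = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for token in tokenize(doc):
                logprob += math.log((self.word_counts[label][token] + 1) / denom)
            if logprob > best_logprob:
                best_label, best_logprob = label, logprob
        return best_label

# Invented stand-ins for "factual circumstances" paragraphs (not real ECtHR text).
train_docs = [
    "applicant detained without judicial review for months",
    "applicant held in overcrowded cell without medical care",
    "domestic courts heard the complaint promptly and fairly",
    "authorities released the applicant after lawful proceedings",
]
train_labels = ["violation", "violation", "no_violation", "no_violation"]

clf = NaiveBayesTextClassifier().fit(train_docs, train_labels)
print(clf.predict("applicant detained in overcrowded cell without review"))
# → violation
```

Even this toy version exhibits the study's central caveat: the classifier can only learn from how the court itself chose to narrate the facts, so its "predictions" inherit whatever selectivity and post hoc framing the judgments contain.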


2. US Supreme Court

In the second case study, AI was employed to predict the outcomes of US Supreme Court cases, on the assumption that all essential information required for prediction is available before decisions are announced.[8] Spanning 1816 to 2015, the analysis covers over 240,000 votes by individual justices and more than 28,000 case outcomes. Instead of dissecting the textual content of decisions, the study classifies various case-related features, such as details of the lower court ruling and characteristics of the Supreme Court justices. Features related to the factual background and procedural details proved significantly influential in forecasting outcomes. The model achieves a predictive accuracy of 71.9% for individual justice votes and 70.2% for overall case outcomes, suggesting reliability and consistency across varying judicial compositions over extended periods. The study shows that AI can reliably predict Supreme Court decisions from key input features, adapting across different cases and judicial configurations. However, its predictions are limited to binary outcomes, namely whether the Court reverses or affirms the lower court's decision. Cases in which the Supreme Court exercises original jurisdiction are excluded because their complexity does not fit the study's binary framework. This reliance on a binary framework indicates that, in its current state, AI may struggle to deliver accurate predictions of final awards, particularly in contexts akin to ISDS.
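The feature-based approach described above, predicting a binary reverse/affirm outcome from categorical case attributes rather than from decision text, can be illustrated with a deliberately simplified one-rule ("OneR") classifier. All features and labels below are invented; the actual study used ensembles of randomized decision trees over a much larger set of real case and justice attributes.

```python
from collections import Counter, defaultdict

def train_one_rule(rows, labels):
    """OneR: for each feature, map each value to its majority label;
    keep the single feature with the best training accuracy."""
    n_features = len(rows[0])
    best = None  # (correct_count, feature_index, value->label map)
    for f in range(n_features):
        groups = defaultdict(Counter)
        for row, label in zip(rows, labels):
            groups[row[f]][label] += 1
        mapping, correct = {}, 0
        for value, counts in groups.items():
            top_label, top_count = counts.most_common(1)[0]
            mapping[value] = top_label
            correct += top_count
        if best is None or correct > best[0]:
            best = (correct, f, mapping)
    _, f, mapping = best
    default = Counter(labels).most_common(1)[0][0]  # fallback for unseen values
    return lambda row: mapping.get(row[f], default)

# Invented case features: (lower-court direction, issue area, petitioner type).
rows = [
    ("liberal", "economic", "business"),
    ("liberal", "criminal", "individual"),
    ("conservative", "economic", "government"),
    ("conservative", "criminal", "individual"),
    ("liberal", "economic", "government"),
    ("conservative", "economic", "business"),
]
labels = ["reverse", "reverse", "affirm", "reverse", "reverse", "affirm"]

predict = train_one_rule(rows, labels)
print(predict(("liberal", "criminal", "individual")))
# → reverse
```

The sketch makes the study's limitation concrete: whatever the input features, the output space is a hard binary label, with no room for partial remedies, quantum of damages, or the graduated outcomes typical of ISDS awards.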


3. ChatGPT in Maritime Arbitration

Another significant case study examines the application of AI, specifically the large language model ChatGPT, within the context of maritime arbitration.[9] This research empirically assesses ChatGPT's ability to simulate the role of an arbitrator, focusing on several hypothetical maritime disputes designed to evaluate its capabilities. ChatGPT demonstrated proficiency in adhering to procedural facts and applying relevant maritime legal principles, albeit within the constraints of binary decision-making tasks. For instance, in one dispute involving a contention over demurrage charges due to loading delays, ChatGPT was tasked with determining liability based on the contractual terms of the charter party agreement. The model successfully interpreted the legal language and conditions, delivering a decision consistent with established maritime law practices. However, its binary framework limited its ability to address more complex aspects, such as considerations of partial liability or mitigating circumstances. These findings underscore ChatGPT's potential to assist in preliminary legal analyses for straightforward disputes. However, this case study, like the previous ones, highlights the limitations of AI in managing arbitration complexities, especially in cases where judgments extend beyond binary outcomes.


4. Summary of Findings

Drawing an analogy from these case studies to ISDS, it is evident that while current AI models show promise in structured, binary-classified contexts, their utility in the nuanced environment of ISDS remains provisional. The need for models that can incorporate a broader spectrum of judicial reasoning, maintain impartiality and independence, and adaptively learn from non-binary outcomes is clear. Such advancements could elevate AI's role in investor-state arbitration from a tool for navigating straightforward, binary decisions to one capable of the sophisticated discernment required in arbitral decision-making. Crucially, the exclusion of original jurisdiction cases from these studies signals a notable reservation about the models' applicability where courts render primary rather than appellate judgments. This points to a broader challenge for AI in investor-state arbitration: the nuanced interpretation required in first-instance cases, where decisions are not confined to the review of lower courts but involve a complex amalgamation of legal principles, factual accuracy, and judicial discretion.


IV. Conclusion

Artificial intelligence holds transformative potential for decision-making in investor-state arbitration, particularly in enhancing consistency. However, legal, technical, and ethical limitations preclude AI from fully replacing human arbitrators. The critical need for human judgment becomes obvious when comparing ISDS with private commercial arbitration. While commercial tribunals primarily resolve private contract disputes, ISDS panels hold the power to shape international public policy by judging sovereign state actions in areas like environmental protection and public health. Because these cases impact entire populations, the legitimacy of the ISDS system fundamentally relies on a human arbitrator's ability to weigh sovereign interests against private rights with empathy and nuance. Handing such politically charged, value-driven decisions over to a machine would strip the system of its basic legitimacy. A hybrid approach, with AI serving as sophisticated decision support under human oversight, offers the optimal path forward. Regulatory frameworks like the EU AI Act and institutional guidelines must evolve to govern this integration, ensuring transparency, accountability, and the preservation of arbitration's human core in its decision-making functions.

___________

About the author:

Yağmur Şapçı

Yağmur Şapçı is a trainee advocate at WAGNER Arbitration, completing her legal traineeship (Referendariat) at the Hanseatic Higher Regional Court of Hamburg. She grew up in Istanbul and studied law at the University of Hamburg, specializing in public international and European law. She holds an LL.M. in International Commercial and Economic Law, with a focus on international arbitration, from SOAS University of London, and previously worked in the International Arbitration Group at WilmerHale in London and as a legal coder at Pocketlaw, a pioneering legal tech startup.  





[1]   Gizem Halis Kasap, 'Can Artificial Intelligence (“AI”) Replace Human Arbitrators? Technological Concerns and Legal Implications' (2021) Journal of Dispute Resolution 2021 Art 5, 238.

[2]  ibid 239.

[3]  UNCITRAL Model Law on International Commercial Arbitration (1985, with amendments as adopted in 2006) art 12(1).

[4]  ibid art 11(1).

[5]  ibid art 14(1).

[6]  See, e.g., Desert Line Projects LLC v. Republic of Yemen, ICSID Case No. ARB/05/17 (awarding USD 1 million in moral damages after acknowledging the severe physical distress and psychological harm inflicted on the claimant's executives); Mohamed Abdulmohsen Al-Kharafi and Sons Co. v. State of Libya (awarding moral damages for injury to the investor's commercial reputation and standing); Smurfit Holdings B.V. v. Bolivarian Republic of Venezuela, ICSID Case No. ARB/18/7, Award (Nov. 13, 2024) (granting moral damages due to the "deep anguish, anxiety and mental suffering" caused by unjustified arrests).

[7]  Nikolaos Aletras and others, 'Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective' (2016) 2 PeerJ Computer Science e93, 1.

[8]  Daniel M Katz, Michael J Bommarito II, and Josh Blackman, 'A General Approach for Predicting the Behavior of the Supreme Court of the United States' (2017) 12(4) PLOS ONE 2.

[9]  Kevin Chan, ‘A New Era of Maritime Arbitration: Ex Machina Determinations’ (2023) 40(5) Journal of International Arbitration 521, 522.
