Embracing AI in Practice: The AAA-ICDR's 2025 Future Dispute Resolution Conference

By Lexie Gao (Ph.D. Candidate, Durham University)

7 July 2025

Held on 10 June 2025 at the Peace Palace in The Hague, the AAA-ICDR's Future Dispute Resolution Conference brought together global experts in arbitration, mediation, and legal tech. The event explored how Artificial Intelligence (AI)* and emerging technologies are transforming international dispute resolution. Across five sessions, speakers discussed new tools, ethical frameworks, and evolving practices that are reshaping how disputes are handled.

Session 1: Unleashing AI or Restraining It? A Counterpoint for Practitioners

The first session featured two debates on key issues: whether parties should disclose their use of AI, and how far AI should be used to assist arbitrators. Moderated by Sophie Nappert (3VB, London), the panel brought together seasoned arbitrators and counsel in a format that encouraged real-time audience participation through pre- and post-debate polls, visibly shifting opinions as arguments unfolded.

In the first debate, Liz Meade (Cooke, Young & Keidan LLP, London) argued in favor of requiring parties to disclose their use of (Gen)AI. She contended, for example, that if an expert report is generated using AI, the tribunal might as well produce it itself, rendering the exercise redundant. With witness statements, the involvement of AI would affect credibility. Submissions, she highlighted, should reflect counsel's own judgment; her emphasis was on the fact that lawyers are accountable while AI is not. Finally, she stressed the importance of sanctions, stating that a duty of disclosure is meaningless if failure to disclose carries no consequences.

Reza Mohtashami KC (Three Crowns LLP, London) argued against a duty of disclosure, describing it as unnecessary and impractical. He offered five reasons in support of his position: disclosure would be unworkable given the sheer volume involved; it would interfere with attorney-client privilege and confidentiality; verification would be difficult; it would go beyond current soft law; and the misuse of AI is already covered by existing ethical obligations not to mislead the tribunal.

After the debate, the audience shifted notably. Support for a duty of disclosure had been ahead in the pre-debate poll; in the post-debate vote, opposition drew twice as many votes as support. It was a thought-provoking exchange.

The second debate focused on whether arbitral tribunals should be under an obligation to use (Gen)AI for all tasks other than the core duties of the arbitrator, as part of their duty to conduct proceedings efficiently and cost-effectively. Dorothee Schramm (Independent Arbitrator, Geneva) supported the motion. She began by clarifying what counts as non-essential tasks, such as preliminary review of submissions for procedural requests and drafting the skeleton of procedural orders (duties typically handled by tribunal secretaries). Within this framework, she gave two reasons in support of her argument: first, it benefits the parties by saving time and cost, citing Article 22 of the ICDR Rules, which encourages the use of technology to improve efficiency; second, on a broader level, it serves the community by addressing long-standing criticisms that arbitration is too slow and expensive. She proposed starting with small, less complex cases, allowing AI to learn and build trust gradually.

Anke Sessler (Skadden, Frankfurt) opposed the motion. She raised two main concerns. First, AI has clear limitations: data access is restricted due to confidentiality, there are risks of bias and hallucinations, and effective use of AI depends on the user’s skill, such as the ability to write good prompts. Second, it would be difficult for parties or institutions to force arbitrators to adopt AI. She argued that party agreements are unlikely to override an arbitrator’s discretion in how to organize their internal work. Also, if institutions were to introduce best practice guidelines, such soft law obligations could undermine the foundational flexibility of arbitration and compromise party autonomy in selecting arbitrators they trust. That trust includes the arbitrator’s judgment and personal footprint—both of which could be distorted by AI. She also raised a broader concern about over-regulating the profession.

This debate also shifted the audience’s opinion. While the pre-debate vote showed clear opposition, the post-debate result was nearly evenly split.

Session 2: Engineering Ethical AI: Advancing & Defending Procedural Integrity in International Dispute Resolution

Moderated by Kathleen Paisley (AMBOS, Brussels), the second session focused on how to ensure procedural fairness and integrity when AI is introduced into the dispute resolution process.

Stan Putter (HBN Law & Tax, The Hague) highlighted two key issues. He first raised concerns about confidentiality, specifically questioning whether feeding data into AI systems compromises it—and whether anonymization alone is sufficient to ensure protection. Second, he argued that tribunals should disclose their use of AI, suggesting this as a step toward building trust and avoiding surprises.

Fernanda Flores (ICDR, Los Angeles) emphasized the importance of data protection. She also reminded the audience that errors are not unique to AI—humans make mistakes too, and this should be kept in mind when assigning blame.

Nino Sievi (Nater Dallafior, Zurich) took a pragmatic stance. He pointed out that checking the underlying code of AI systems is not realistic, as most legal professionals lack the technical expertise. He also raised cybersecurity issues and discussed how the use of encrypted email, although highly advisable, can complicate client communication. Like Fernanda, he underscored that responsibility for AI-driven errors ultimately lies with humans, not the technology.

Audience engagement was particularly strong in this session. One participant asked how to reconcile differing ethical requirements across jurisdictions. Kathleen responded that while formal standards may differ, core ethical principles tend to be consistent. Another participant brought up a recent U.S. case where an arbitral award was challenged because the arbitrator had relied on AI. Stan admitted these challenges may be inevitable but emphasized that early precautions can help reduce the risk. In a comment, Sophie Nappert cautioned against equating AI with a tribunal secretary. While the work of junior staff is often critically reviewed, humans find it difficult to challenge AI, precisely because they do not fully understand how it works. She also touched on the idea of human oversight, but noted that none of the guidelines tell us how this is exercised by the human mind—a reminder that the ethical debate is as much about trust in ourselves as in the tools we create.

Session 3: Wiping the Slate Clean: Re-Engineering Dispute Management to Drive Business Outcomes

The third session, moderated by Eric Tuchmann (AAA-ICDR, New York), invited senior corporate counsel to reimagine dispute resolution using AI, data analytics, and online tools. With law departments under increasing pressure to deliver business-aligned outcomes, the panel discussed how they are adapting tools, reshaping internal strategies, and balancing innovation with caution.

Rebecca Anderson (Microsoft, Germany) shared that AI is already being used within the company to generate meeting briefs and summaries. She recounted a recent example where her team used a two-page draft prompt to ask AI to categorize a task, highlighting both the promise and the complexity of AI-assisted legal work. On the issue of accountability, she pointed to Microsoft's internal ethical framework as a guide for responsible deployment.

Marine Assadollahi (Fives, France) took a different angle. She emphasized that a company’s most valuable assets are its intellectual property and data, and that the use of AI must be carefully weighed against the risks to both. As such, she advocated for caution when integrating AI into business workflows.

For Simon Leimbacker (Jet Aviation, Switzerland), effective dispute management rests on two principles: pragmatism and fortitude. In his view, AI still has limitations in performing the tasks expected of it, and internal efforts must continue to align its use with legal frameworks and real-world conditions. He further stressed the need to pair new tools with sound judgment and clear strategy.

Migle Zukauskaite-Totore (Nord Security, Lithuania) noted that her company is actively embracing AI products. She highlighted AI's role in enhancing legal research and making collaboration with external counsel more effective. With AI offering actionable insights early on, legal teams are better positioned to understand their options and define what support they need.

Toward the end of the session, the panel addressed a key ethical question: should AI serve as a decision-maker? Views differed. Marine argued it depends on what we value in human judgment, highlighting the importance of emotional support and non-verbal cues in mediation. Rebecca agreed, emphasizing the irreplaceable nature of human empathy. In contrast, Migle suggested AI may be able to read emotions, challenging the idea that emotional intelligence is uniquely human. Simon warned that AI justice could distort the very definition of justice, which is shaped by cultural and contextual nuances and cannot be reduced to code.

Session 4: Contrast and Collaboration: The Dynamic Relationship Between International Mediation and Arbitration

Moderated by Steve Andersen (AAA-ICDR, Los Angeles), this session explored the evolving interplay between mediation and arbitration. The panel examined how the two mechanisms differ, where they overlap, and what legal and cultural factors shape their effectiveness.

Leonardo D'Urso (ADR Center, Rome) opened by noting how dispute resolution procedures, including both mediation and arbitration, moved online in the wake of the COVID-19 pandemic. He shared that in Rome, mediation has become practically mandatory in many cases, and lawyer fees remain relatively moderate, making mediation an accessible option.

Nathan O’Malley (Musick Peeler, Los Angeles) approached the topic with some skepticism. He reminded the audience that not all processes withstand the test of time, giving the example that “joint session in mediation is dead.” He also discussed mixed models like Med-Arb, referencing Hong Kong’s Arbitration Ordinance section 33, but criticized such frameworks for potentially breaching mediation confidentiality. Nathan further suggested that the skill sets of judges and mediators are largely interchangeable.

Michael Cover (ARBDB Chambers, London) called for the creation of a formal mediation act, noting that in practice, what parties care about most is simply resolution—not the label of the process used. In his view, a clear legal framework could help reinforce the legitimacy of mediation as a standalone mechanism.

Colin Rule (ODR.com, Palo Alto) agreed that mediation is increasingly gaining legitimacy. He stressed the need to stay focused on the needs of the parties, rather than rigid procedural distinctions. He also explained why mandatory mediation is becoming more common: in many judicial systems, it is unrealistic to expect parties to opt into mediation themselves, since most of them do not understand the difference between mediation and arbitration to begin with.

Session 5: How Do We Get Comfortable with AI-Driven Decision-Making?

The final session was moderated by Linda Beyea (AAA-ICDR, Atlanta) and explored the role of human judgment, oversight, and transparency when AI is involved in decision-making.

Colin Rule (ODR.com, Palo Alto) opened with a distinction between soft and hard rules. In his view, soft rules expand possibilities and set aspirations, while hard rules serve as a minimum baseline. He clarified that institutions could create soft, aspirational rules and use them as a competitive differentiator, whereas a hard rule, once imposed, may constrain practice.

Peter Bekker (Dentons, New York) responded to the scenario in which parties consent to AI decision-making. He raised concerns about unequal access to justice, noting that not all parties, law firms, and arbitrators are well-versed in the use of AI. He also flagged environmental concerns, as AI depends on data centers and high energy consumption: does this align with the arbitration community's green goals? He concluded that consent is not enough; human oversight and evaluation remain essential.

Andrea Menaker (White & Case, London) considered the idea of AI as a co-arbitrator. She supported a model of co-intelligence—AI can assist, but ultimate authority must remain with humans. She also discussed the enforcement risks of AI-made decisions and conducted a brief comparative analysis of jurisdictional requirements for arbitrator status, many of which imply or require a human identity. Her view was that AI would not replace arbitrators, but could enhance their work.

On this point, David Evans (Murphy & King, Boston) and Colin both shared practical examples demonstrating how AI could act as a co-arbitrator or assistant. In addition, David concluded with a well-structured presentation on explainable AI, showing how it can help overcome skepticism around transparency and encourage adoption of AI systems that users can understand and defend.

The Future of Dispute Resolution

The active participation and thoughtful exchanges brought by this conference reflect a growing recognition of AI’s role in the future of dispute resolution. As Dr. Hab. Marcin Czepelak (Secretary-General of the Permanent Court of Arbitration) emphasized in his keynote speech, a balanced approach is essential. We must embrace AI while upholding our unwavering commitment to ethical standards, human oversight, and procedural fairness. The path ahead is promising, but it is up to us to continue building it.

About the Author

Lexie Gao (高倩) is a Ph.D. candidate at Durham University, researching the reform of due process in AI-enabled international arbitration under the supervision of Prof. Lei Chen and Dr. Can Eken. Her research is supported by a China Scholarship Council PhD Studentship. She holds an LL.M. in Arbitration and Dispute Resolution from the University of Hong Kong, where her thesis explored the challenges and future prospects of AI-enabled arbitration in China. Lexie has gained practical experience through internships at leading law firms including Huizhong Law Firm, King & Wood Mallesons, and Dentons, working in areas such as international arbitration, company law, and financial regulation. She is a seasoned moot court participant, coach, and arbitrator, having received multiple awards such as Best Oralist and Best Applicant Memorandum in competitions including the Vis Moot and Jessup. She has also completed advanced legal training at the University of Vienna and Ludwig Maximilian University of Munich (LMU). In addition, Lexie is a certified English–Chinese translator, holding CATTI Level 2 certification.


* According to the Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449, 2019), an AI system is “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions...” Such a definition aligns with several domestic and regional legislative instruments, including: Regulation (EU) 2024/1689 of the European Parliament and of the Council, art 3; Innovation, Science and Economic Development Canada, Artificial Intelligence and Data Act (AIDA), Draft Legislation, Part III of Bill C-27, 2022 <https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading> accessed 3 July 2025; 15 USC § 9401(3) (definition of artificial intelligence in the National AI Initiative Act of 2020); Artificial Intelligence (Regulation) Bill [HL] 2023 (UK) <https://researchbriefings.files.parliament.uk/documents/LLN-2024-0016/LLN-2024-0016.pdf> accessed 3 July 2025.
