

AI in Arbitration

Guidelines and Considerations for Arbitrators

By David Allgeyer

Artificial intelligence is in the news these days and is on everyone’s mind. 

AI has come into sharp focus in dispute resolution following the now-famous cases of lawyers who used AI to draft court filings. Unfortunately, some of the cases they cited were the result of AI “hallucinations.” The cases did not exist.[1]

Arbitrators and advocates in arbitration are also focusing on AI. The leading tech ADR organization, the Silicon Valley Arbitration and Mediation Center (“SVAMC”), is leading the way with its draft “Guidelines on the Use of Artificial Intelligence (AI) in International Arbitration,” which has been released for public comment. Considerations for international arbitration are, of course, instructive for domestic arbitration, too.

What is SVAMC? 

As explained on its website, SVAMC is a non-profit foundation based in Palo Alto, California, that serves the global technology sector. SVAMC promotes efficient technology dispute resolution, including advancing the use of arbitration and mediation in technology and technology-related business disputes in Silicon Valley, throughout the U.S., and around the world.

SVAMC does not administer cases. Rather, it collaborates with leading ADR providers, technology companies, law firms, neutrals and universities to address the merits of arbitration and mediation in resolving technology and technology-related disputes. 

SVAMC publishes the annual List of the World’s Leading Technology Neutrals, known as “The Tech List®.” It describes the Tech List as “peer-vetted and limited to exceptionally qualified arbitrators and mediators known globally for their experience and skill in crafting business-practical legal solutions in the technology sector.”  This group is, therefore, well positioned to provide guidance on AI, today’s leading tech issue. 

The AI Draft Guidelines 

Recognizing the increasing role of AI, SVAMC’s Drafting Subcommittee formulated a set of best practices for the use of AI in international arbitration. The guidelines will be finalized after public comments are received and considered.

Here is a brief outline of the draft Guidelines. It provides a useful checklist of things to consider regarding AI in arbitration: 

CHAPTER 1: Guidelines applicable to all participants in international arbitration  
  • Understanding the uses, limitations and risks of AI applications  
  • Safeguarding confidentiality  

CHAPTER 2: Guidelines for parties and party representatives  
  • Duty of competence in the use of AI   
  • Respect for the integrity of the proceedings and evidence  

CHAPTER 3: Guidelines for arbitrators  
  • Non-delegation of decision-making responsibilities  
  • Respect for due process  
  • Protection and disclosure of records  

Understanding AI  

Among other things, the draft Guidelines and commentary provide a baseline for understanding AI, its role in arbitration, and the issues its use can create.

For example, the Guidelines make this observation:  

Generative AI tools produce natural-sounding and contextually relevant text based on speech patterns and semantic abstractions learned during their training. However, these outputs are a product of infinitely complex probabilistic calculations rather than intelligible “reasoning” (the so-called “black box” problem). Despite any appearance otherwise, AI tools lack self-awareness or the ability to explain their own algorithms.  

In response to this problem, participants may, as far as practical, use AI tools and applications that incorporate explainable AI features or otherwise allow them to understand how a particular output was generated based on specific inputs.   
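
To make the “probabilistic calculations” point concrete, here is a minimal sketch in Python. It is purely illustrative and vastly simpler than any real model: it generates fluent-sounding text by repeatedly sampling the next word from a probability table. The table, the words, and the probabilities are all invented for illustration.

    import random

    # Toy "language model": an invented probability table mapping each word
    # to possible next words. Real models learn such distributions over
    # enormous vocabularies; these numbers are made up for illustration.
    NEXT_WORD_PROBS = {
        "the": {"court": 0.4, "arbitrator": 0.35, "party": 0.25},
        "court": {"held": 0.6, "found": 0.4},
        "arbitrator": {"held": 0.5, "found": 0.5},
        "party": {"argued": 1.0},
        "held": {"that": 1.0},
        "found": {"that": 1.0},
        "argued": {"that": 1.0},
        "that": {"the": 1.0},
    }

    def generate(start, length=6):
        """Sample a fluent-looking phrase one word at a time.

        Each step picks the next word at random, weighted by probability.
        Fluency and accuracy are different properties: nothing here checks
        whether the resulting sentence is true.
        """
        words = [start]
        for _ in range(length):
            dist = NEXT_WORD_PROBS.get(words[-1])
            if dist is None:  # no learned continuation: stop
                break
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g., "the court held that the arbitrator held"

Even in this toy, the “black box” problem is visible: the output is a run of weighted dice rolls, with no chain of reasoning to inspect.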

The Guidelines also observe that:

Large language models have a tendency to “hallucinate” or offer incorrect but plausible-sounding responses when they lack information to provide an accurate response to a particular query. Hallucinations occur because these models use mathematical probabilities (derived from linguistic and semantic patterns in their training data) to generate a fluent and coherent response to any question. However, they typically cannot assess the accuracy of the resulting output. 
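
The practical antidote to hallucinated authority is the source-checking the draft Guidelines call for: verify every AI-supplied citation against a trusted source before relying on it. The sketch below is a hypothetical illustration of that workflow. The small lookup table stands in for checking Westlaw, Lexis, or the court’s own docket; Varghese v. China Southern Airlines is one of the nonexistent cases cited in the filing at issue in Mata v. Avianca (see note 1).

    # Hypothetical vetting sketch: the dictionary below stands in for
    # looking each case up in Westlaw, Lexis, or the court's own docket.
    VERIFIED_CASES = {
        "Mata v. Avianca": "No. 22-cv-1461 (S.D.N.Y. 2023)",
    }

    def unverified_citations(ai_citations):
        """Return every AI-supplied citation that could not be verified.

        A filing drafted with AI assistance should go out only when this
        list is empty and a human has read each verified case.
        """
        return [c for c in ai_citations if c not in VERIFIED_CASES]

    draft = ["Mata v. Avianca", "Varghese v. China Southern Airlines"]
    missing = unverified_citations(draft)
    if missing:
        print("Do not file. Could not verify:", missing)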

The Guidelines further observe that:

[E]xisting biases in the data may create, exacerbate or perpetuate any form of discrimination, racial, gender or other profiling in the search and appointment of individuals as arbitrators, experts, counsel, or any other roles in connection with arbitrations. Biases may occur when the underrepresentation of certain groups of individuals is carried over to the training data used by the AI tool to make selections or assessments. 
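
A toy example, with invented numbers, shows how that carry-over works. Suppose an AI tool scores arbitrator candidates by how often similar profiles appear in historical appointment data; candidates from underrepresented groups then rarely surface, however well qualified, because the model faithfully reproduces the skew in its training data.

    # Invented historical appointment counts; the skew, not the numbers,
    # is the point. "Profiles" stand in for whatever grouping the data encodes.
    APPOINTMENT_COUNTS = {"profile_A": 900, "profile_B": 80, "profile_C": 20}

    def shortlist(candidates_by_profile, k=3):
        """Rank candidates by how common their profile is in past data.

        A tool trained on skewed appointment history reproduces that skew:
        profile_C candidates almost never make the list.
        """
        scored = [
            (APPOINTMENT_COUNTS.get(profile, 0), name)
            for profile, names in candidates_by_profile.items()
            for name in names
        ]
        scored.sort(reverse=True)
        return [name for _, name in scored[:k]]

    print(shortlist({"profile_A": ["Ana", "Ben"], "profile_B": ["Cho"],
                     "profile_C": ["Dee"]}))
    # ['Ben', 'Ana', 'Cho'] -- Dee never surfaces, echoing the training skew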

Recognizing these and other issues with AI, the draft Guidelines require that “[a]ll participants using AI tools in connection with an arbitration should make reasonable efforts to understand each AI tool’s relevant limitations, biases, and risks and, to the extent possible, mitigate them.” 

Protecting confidentiality  

The Guidelines also recognize confidentiality issues with the use of AI. Thus, Guideline 2 provides that “[o]nly AI tools that adequately safeguard confidentiality should be approved for uses that involve sharing confidential or legally privileged information with third parties. For this purpose, participants should review the data use and retention policies offered by the relevant AI tools and opt for more secure solutions.”
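
In software terms, Guideline 2 amounts to a gate in front of every AI tool: no confidential or privileged material goes in until the tool’s data-use and retention terms have been reviewed. The sketch below is hypothetical; the policy fields and example entries are invented, and a real review means reading each vendor’s actual published terms.

    from dataclasses import dataclass

    @dataclass
    class ToolPolicy:
        """Hypothetical summary of an AI vendor's data-use terms."""
        trains_on_inputs: bool  # does the vendor train models on your prompts?
        retention_days: int     # how long are prompts stored?

    def safe_for_confidential(policy):
        """A crude gate in the spirit of Guideline 2: confidential or
        privileged material goes only to tools that neither train on
        inputs nor retain them."""
        return not policy.trains_on_inputs and policy.retention_days == 0

    # Invented example entries -- check each vendor's actual terms.
    public_chatbot = ToolPolicy(trains_on_inputs=True, retention_days=30)
    enterprise_tool = ToolPolicy(trains_on_inputs=False, retention_days=0)

    print(safe_for_confidential(public_chatbot))   # False
    print(safe_for_confidential(enterprise_tool))  # True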

Disclosing use of AI  

The draft Guidelines provide alternative versions of provisions governing disclosure of the use of AI tools, along with a request for comments as to which version is preferable. One option “identifies a range of factors that may be relevant in the assessment of whether disclosure is warranted, specifically whether (i) the output of an AI tool is to be relied upon in lieu of primary source material, (ii) the use of the AI tool could have a material impact on the proceeding, and (iii) the AI tool is used in a non-obvious and unexpected manner.”

Another option makes disclosure of the use of AI mandatory “(i) when the output of AI tools is used to prepare or create materially relied-upon documents (including evidence, demonstratives, witness statements and expert reports) and (ii) when the output of that AI tool can have a material impact on the proceedings or their outcome.”

It will be interesting to see which option the public comments favor.

Guiding arbitrators 

Regarding arbitrators’ use of AI, the most important guideline appears to be that “[a]n arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator’s decision-making function.” The Guidelines give further guidance on respecting due process and confidentiality through examples of compliant and non-compliant uses of AI by parties and arbitrators.

Here are a few examples: 


Guideline 4: Duty of competence or diligence in the use of AI (parties)

Compliant:
  • Using AI tools to assist with drafting language for pleadings or written submissions where the final work product is fully source-checked and vetted for accuracy from a factual and legal standpoint.
  • Using specialized AI tools to find or summarize relevant cases, vetting the accuracy of the descriptions before incorporating them into pleadings.
  • Using AI tools to assist in the preparation of cross-examination questions or find inconsistencies in witness statements.

Non-compliant:
  • Using AI tools to draft pleadings or written submissions without checking the accuracy of their output from a factual and legal standpoint.
  • Using AI tools to summarize cases and “copy-paste” them into pleadings without verifying whether the AI’s output may contain any errors.

Guideline 6: Non-delegation of decision-making responsibilities (arbitrators)

Compliant:
  • As an arbitrator, using an AI tool capable of providing accurate summaries and citations to create a first draft of the procedural history of a case, or generate timelines of key facts, and then double-checking with underlying sources and making other appropriate edits.

Non-compliant:
  • As an arbitrator, using an AI tool to provide an assessment of the parties’ submissions of evidence and incorporate such output into a decision without conducting an independent analysis of the facts, the law and the evidence to make sure it reflects the arbitrator’s personal and independent judgement.

Source: SVAMC.org


An ongoing process 

As noted, the draft guidelines are being refined with public input and comment. When completed, parties will be able to incorporate them into their arbitration agreements to govern the use of AI in any arbitration. But they are not yet “ready for prime time.”

Have a look  

Review the draft at SVAMC.org, where the Guidelines are posted along with instructions for submitting comments. The comment period was extended because of intense interest in the Guidelines.

In the Meantime 

Arbitrators’ and arbitration organizations’ evaluation of AI continues to evolve. In the meantime, I plan to suggest to counsel that we include something like the following in my upcoming scheduling orders: 

If generative AI has been used in drafting any request, motion, brief or other filing, that fact shall be disclosed by counsel of record or an unrepresented party. The filing counsel or unrepresented person is deemed to be certifying that the filing has been reviewed by a human who is responsible for ensuring that (1) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law; and (2) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or exchange of information.

Basic Guidance 

For now, the following suggestions should get you through until we have final guidance from organizations like SVAMC and arbitration providers. If you plan to use AI in your arbitration work, you should (1) know whether its use is allowed in your proceeding, (2) understand how the tool works, (3) know whether you can use it and still maintain confidentiality, (4) know whether you need to disclose its use, and (5) check its output carefully for factual and legal accuracy. You should, of course, do that for any filing. But now that you know AI can hallucinate, you will want to be extra careful.
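
For readers who like their checklists executable, here is the same five-point suggestion as a minimal Python sketch. The questions come from the paragraph above; the structure is mine.

    # The five questions above, as an executable checklist.
    AI_CHECKLIST = [
        "Is AI use allowed in this proceeding?",
        "Do I understand how the tool works?",
        "Can I use it and still maintain confidentiality?",
        "Do I need to disclose its use?",
        "Has its output been checked for factual and legal accuracy?",
    ]

    def cleared_to_use(answers):
        """Proceed only if every question is answered 'yes' (True)."""
        return len(answers) == len(AI_CHECKLIST) and all(answers)

    print(cleared_to_use([True] * 5))                       # True
    print(cleared_to_use([True, True, False, True, True]))  # False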


David Allgeyer has served as arbitrator in over 100 commercial and intellectual property disputes. In 2018, he formed Allgeyer ADR, devoted to serving as an arbitrator and mediator. David is a Fellow of the American College of Commercial Arbitrators and is included on the Silicon Valley Arbitration and Mediation Center’s list of Leading Technology Neutrals. A frequent lecturer and panelist on ADR and intellectual property matters, he is the author of the ABA book Arbitrating Patent Cases: A Practical Guide, available at shopaba.org and Amazon.com. His recent chapter, “Mediating Intellectual Property Cases,” is included in the ABA book Mediating Legal Disputes: Effective Techniques to Resolve Cases.


Notes

1. See Mata v. Avianca, No. 22-cv-1461 (S.D.N.Y. May 4, 2023); People v. Crabill, No. 23PDJ067 (Colo. Nov. 22, 2023); United States v. Cohen, No. 18-cr-00602, Doc. 104 (S.D.N.Y. Dec. 29, 2023).

Managing Editor
Elsa Cournoyer

Executive Editor
Joseph Satter