From practice innovations to battling the justice gap, the bar’s new Artificial Intelligence Standing Committee has big plans
By Damien Riehl
Shortly after the November 2022 launch of ChatGPT, the MSBA was among the first state bar associations to consider how large language models (LLMs) could affect legal practice. In early 2023, the MSBA appointed an AI Working Group to explore how artificial intelligence (AI) might implicate the unauthorized practice of law (UPL). The working group quickly expanded its mandate, analyzing how LLMs could affect lawyers’ ethical obligations under the Minnesota Rules of Professional Conduct and expand access to justice.
For over a year, the working group met regularly, comprehensively assessing how LLMs might affect the legal profession. It began by thoroughly examining the UPL framework and then explored how the Minnesota Rules of Professional Conduct might apply to the use of LLM-backed tools in legal practice. Recognizing the rapid advancements in legal technology, the working group also devoted significant attention to understanding the current state of the art in AI-assisted legal tools.
Throughout its efforts, the working group kept returning to access to justice—with a particular focus on how LLMs could potentially bridge the gap for the estimated 90 percent-plus of civil legal needs that remain unmet owing to the cost of legal services.1 So the working group sought to provide a balanced and informed perspective on these tools’ potential role in better delivering legal services, while also upholding the highest standards of professional ethics and public protection.
By establishing an AI Working Group, an AI Standing Committee, and an innovative AI Sandbox, the MSBA is arguably leading the country in responsibly evaluating the risks and benefits of LLM-backed tools. This article outlines the AI Working Group’s key findings and recommendations, the newly formed AI Standing Committee’s mission and composition, and the pioneering concept of the AI Sandbox. It also explores LLMs’ potential impact on the judiciary. By embracing responsible innovation and engaging stakeholders across the legal system, the MSBA’s initiatives could help provide a glimpse into the future of legal practice, setting an example for bar associations worldwide.
Summary of AI Working Group’s report
In June, the working group issued its final report,2 which included these findings and recommendations.
UPL analysis
Historically, the line between permissible “legal information” and regulated “legal advice” has been unclear, and today’s LLMs essentially obliterate that line. The working group declined to define “legal advice,” but it noted that courts have traditionally interpreted “legal advice” as applying facts to law (or vice versa). Until very recently, only humans could perform that task. But today’s LLMs can now functionally apply law to facts.
Access to justice
With respect to lawyers’ ethical duty to ensure equal justice for all, our profession has failed. The Legal Services Corporation (LSC) reports that low-income Americans lack legal help for 92 percent of their substantial civil legal problems.3 Chief Justice Roberts, in his 2023 Federal Judiciary Year-End Report, suggested that LLMs could help.4 LLMs offer powerful tools to bridge the access-to-justice gap, assisting self-represented litigants in many tasks, such as completing forms, navigating the legal system, and translating legal prose into plain language.
AI Sandbox
The MSBA’s AI Working Group proposed the creation of an artificial intelligence “sandbox,” providing a controlled environment for organizations to use LLMs to help improve access to justice without fear of UPL prosecution. The working group suggested that this regulatory sandbox would foster experimentation and evaluation, ensuring safe and beneficial deployment of LLMs. It also aligns with the legal profession’s commitment to justice and fairness.
Report acceptance and adoption
In July, the MSBA Board of Governors unanimously adopted the AI Working Group’s report. Shortly thereafter, MSBA President Samuel Edmunds established the AI Committee, which will further define the AI Sandbox’s scope and its applicable legal areas.
The AI Committee
The newly formed AI Standing Committee will implement the AI Working Group’s recommendations, providing ongoing guidance on LLMs’ use in legal practice. The AI Committee’s mission encompasses several objectives.
The AI Committee will seek to facilitate the legal profession’s responsible LLM adoption, balancing the potential risks, benefits, ethical standards, and public-interest protection. The committee will explore use cases, monitor developments in AI technology, and provide recommendations to the MSBA Board of Governors.
The committee will comprise a diverse group of firm lawyers, in-house counsel, technology experts, legal aid professionals, public defenders, and representatives of the government and judiciary. This broad base of experience will help ensure that the committee can draw upon a wide range of perspectives and expertise, reaching its goal of serving the legal profession and the public.
Recognizing the need for ongoing education and guidance, the AI Committee will organize educational opportunities for MSBA members. These sessions will cover various aspects of LLM integration, including best practices, ethical considerations, and practical applications. By providing these educational experiences, the committee seeks to empower legal professionals to make informed decisions about the use of AI in their practices.
One of the committee’s primary responsibilities is monitoring the initiatives undertaken within the AI Sandbox. The committee will work closely with participating organizations to ensure that LLM-backed tool use aligns with protective guidelines and safeguards. This oversight role will help ensure that the AI tools’ deployment remains consistent with the MSBA’s goals.
The AI Sandbox
The AI Sandbox represents a novel approach to exploring the potential of LLM-backed tools for legal work. By creating a controlled environment, the sandbox will allow organizations to develop and test AI applications without fear of UPL prosecution.
This arrangement recognizes the need for innovation while also acknowledging the importance of regulatory oversight. MSBA supervision allows innovation with guardrails. By operating within the sandbox, participants can explore AI’s capabilities in a way that prioritizes public protection, adherence to ethical standards, and helping those who need it most.
Among the AI Sandbox’s precipitating motivations is the potential for LLM-backed tools to help low-income Americans meet their basic legal needs. By permitting LLM-backed tools in the AI Sandbox, the AI Committee will enable solutions that can bridge this access-to-justice gap, providing much-needed help to underserved populations.
The sandbox’s initial pilot projects will target specific areas of law where the need for assistance is particularly acute. Potential starting points include housing and immigration law, given their significant impact on vulnerable populations. The AI Committee welcomes other potential legal areas that might serve our underserved populations’ needs.
The AI Sandbox will use a risk-based framework, drawing inspiration from the U.S. Executive Order on AI, as well as the European Union’s AI Act—both of which classify AI applications by risk levels. The executive order focuses on security, privacy, fairness, and innovation, while the EU act categorizes AI systems from minimal to unacceptable risk, banning harmful uses like social scoring and emotion manipulation. The AI Committee will also discuss which legal use cases, if any, belong in each of these categories:
- Unacceptable risk (prohibited)
- High risk
- Limited risk
- Minimal risk
Both the AI Working Group and the AI Committee place the AI Sandbox’s emphasis on legal aid and access to justice—such as self-represented litigant (SRL) assistance and procedural support—in the “minimal risk” category. These uses pose low risks while significantly enhancing access to justice.
As the AI Sandbox initiatives progress, the AI Committee will continue assessing the tools’ impact and effectiveness. The committee will establish a framework for evaluating the sandbox’s outcomes, considering factors such as legal efficacy, user satisfaction, efficiency gains, and adherence to ethical rules. This evaluation process will inform future iterations of the sandbox and guide the MSBA in making informed decisions about the broader integration of LLMs in legal practice.
As the first state to approve an AI Sandbox for legal services, Minnesota is setting a precedent that’s being closely observed by other states and countries. The Washington State Bar Association and the Canadian Bar Association, in particular, have both expressed interest in learning from Minnesota’s experience and have engaged Minnesota in collaborative discussions. As the AI Committee’s chair, I am also contributing to committees from other states—including Florida, Indiana, and Louisiana—as those states think through AI’s implications and the ways it can improve legal work. These interstate and international dialogues highlight the AI Sandbox’s potential to guide other jurisdictions in assessing their own approaches to regulating the use of LLMs for legal work. Minnesota can be an access-to-justice North Star.
LLMs’ potential impact on the legal profession
Introducing LLM-backed tools through the AI Sandbox will coincide with the legal profession’s adoption of LLM tools more generally. While some might initially view LLMs as a threat to traditional legal practice, a thoughtful approach can permit legal professionals to mitigate risk while exploring the technology’s many potential benefits.
As lawyers explore the potential of LLM-backed tools, their approach should begin with “trust but verify.” The Rules of Professional Conduct require lawyers to provide competent representation, maintain client confidentiality, and charge reasonable fees. LLMs offer significant benefits, but attorneys must ensure that facts, propositions, and citations are accurate. When using LLM-backed tools, lawyers should carefully review the generated content to ensure both accuracy and appropriateness—thus protecting both their clients’ interests and their own professional reputation. See Rule 11. Moreover, ethics rules require lawyers to stay current with relevant technological developments.5 As such, attorneys should proactively educate themselves about LLM capabilities and best practices for their use in legal work. By combining these tools with human expertise and judgment, lawyers can mitigate risks while reaping the benefits of enhanced efficiency and work-product quality.
One potential outcome of broader use of LLM-backed legal tools is that cases may take longer to resolve. As LLM-backed tools enable self-represented litigants—indeed, all litigants—to present more robust and well-crafted arguments, the sophistication of legal cases will likely increase. This, in turn, could lengthen case timelines, as defense lawyers engage with their self-represented opponents’ more deftly crafted legal arguments. So rather than losing money to AI, defense lawyers could make more: What was previously a weeklong case (resulting in dismissal) might now expand into months (as courts consider more potentially meritorious claims). By increasing self-represented litigants’ access to justice, LLMs might actually provide opposing lawyers with more work.
As AI Sandbox initiatives proliferate and the use of LLM-backed tools becomes more commonplace, the legal profession will adapt to this changing landscape. Lawyers will develop new skills and strategies to effectively engage with AI-assisted litigants and to leverage these tools themselves. To account for the rapidly shifting dynamics, law firms may also need to reevaluate their business models and pricing structures. Larger clients’ law departments might do more work themselves. And other technologically advanced competitor firms might cause lawyers to more rapidly adopt LLM-backed tools to stay competitive.
While the full impact of LLM-backed tools generally (and the AI Sandbox specifically) remains to be seen, the MSBA’s approach to the AI Committee and sandbox demonstrates the bar’s commitment to exploring how technology can help improve Minnesota’s legal practice.
Possible judicial impacts
As more litigants rely on LLM-backed tools, and as the AI Sandbox increases self-represented litigants’ efficacy, the judiciary may experience a shift in filings—both in quality and quantity. This change presents the court system with both opportunities and challenges.
Early judicial observations suggest that self-represented litigants’ LLM-backed briefs and pleadings have already shown improvement. From time immemorial, self-represented litigants’ submissions have often been incomplete or incoherent. In contrast, LLMs’ coherent and well-structured arguments—if based on ground-truth law and facts—could make the judiciary’s task of evaluating cases more efficient. It’s easier to rule on an argument that’s comprehensible. This improvement in submission quality may expedite judicial decision-making.
Of course, if litigants (both self-represented and lawyer-represented) have greater access to LLM-backed tools, the number of court filings may increase. LLM-backed tools can draft a respectable motion, brief, complaint, or answer in minutes, and with only modest human editing, more actions can reach the court more quickly. As more self-represented litigants, plaintiffs’ firms, and defense lawyers leverage LLM-backed tools to prepare their cases, the judiciary may face a caseload surge. This increase could strain court resources and require adaptations in case management.
To manage the potential filing increase and to enhance judicial efficiency, courts may explore the use of LLMs to assist judicial clerks in some of their tasks. For example, LLM-backed tools can greatly expedite legal research and drafting. They can also shrink the drafting of a traditional bench memo from a day’s work to mere minutes. By leveraging AI tools to expedite otherwise time-consuming tasks, judicial clerks can focus on more complex legal analyses, supporting their judges’ caseloads.
LLM-backed tools may also provide much-needed support to judges who lack law clerks. By serving as “clerks to the clerkless,” LLM-backed tools could help ensure that all judges have access to research, analysis, and drafting assistance to better administer justice.
Of course, the ultimate decision-makers can and should always remain human. LLM-backed tools are just that: tools. Humans should always remain in control. Just as the 1990s transition from book-based research to electronic research improved legal practice, today’s LLM-backed tools represent an important step in enhancing judicial efficiency. They can assist judges and their staffs in managing cases more efficiently. But the tools do not and should not replace judges’ critical attributes: human judgment, discretion, and sense of justice.
Conclusion
The MSBA’s proactive approach to using LLM-backed tools is an excellent example of a bar association demonstrating responsible innovation. By carefully evaluating these technologies’ risks and benefits, the MSBA is creating a path that other jurisdictions are watching—and will likely follow. The MSBA’s efforts demonstrate a commitment to embracing AI’s transformative potential, while also prioritizing ethical obligations and public protection.
The success of the MSBA’s AI Working Group—combined with the potential of the AI Standing Committee and the AI Sandbox—is a testament to the power of bar collaboration. By engaging stakeholders across the legal system, the MSBA is enabling an important dialogue about how legal practice can be improved. This collaborative approach ensures that LLM-backed tools will be deployed, overseen, and guided by a broad range of perspectives and expertise, ultimately leading to more robust and equitable solutions.
By proactively addressing the challenges and opportunities presented by today’s technologies, Minnesota’s AI Sandbox and other AI Committee initiatives will likely serve as a case study for the other states and countries grappling with similar questions. So as the MSBA continues this important work, we invite the broader legal community to participate. If you have ideas for potential uses of the AI Sandbox initiative, or if you represent organizations that might benefit from being part of the AI Sandbox, please reach out. This initiative’s success depends largely on our legal community’s contributions.
Through the measures described in this article, the MSBA is encouraging responsible technology adoption that embraces innovation while upholding the legal system’s core values. Our profession is walking toward a future where humans collaborate with tools to improve access to justice, expedite legal outcomes, and strengthen the rule of law. The MSBA’s leadership invites us to imagine, and help build, a legal system that is better than ever before. What an exciting time to be part of the legal profession.
Damien Riehl is a lawyer, vLex employee, and chair of the Minnesota State Bar Association’s AI Committee. That committee’s duties include overseeing the AI Sandbox, which allows organizations to use AI to further access to justice—but any views in this article are his own, not those of his employer, the MSBA, or the AI Committee.
Notes
1 https://justicegap.lsc.gov/
2 https://www.mnbar.org/docs/default-source/default-document-library/msba-ai-working-group-final-report-and-recommendations.pdf
3 Supra note 1.
4 https://www.supremecourt.gov/publicinfo/year-end/2023year-endreport.pdf
5 See Comment 8 to Rule 1.1 (“a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology”) (emphasis added); https://www.lawnext.com/tech-competence