
ChatGPT: The human element

ChatGPT continues to make headlines, and the talk surrounding AI continues to evolve as well. Sam Altman, the CEO of OpenAI, admits that even he is a little afraid of the possibilities.1 On May 16, Altman told a Senate Judiciary subcommittee that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”2 During the hearing, Altman highlighted the double-edged nature of AI: the potential loss of jobs, but likewise the potential creation of new jobs; the risk of voter fraud and misinformation, but also the ways in which AI can be used to counter these very problems. 

Many commentators see the May 16 hearing as, in the words of one, “the beginning of what will likely be a long, but broadly bipartisan, process regulating the use of AI and its amazing promise.… [A] regulatory roadmap is beginning to coalesce.”3 Altman proposed strict adherence to safety requirements and extensive testing in AI development, all within a structure of federal regulation and oversight. Acknowledging the great potential for worldwide harm from misused or unrestrained AI technologies, he emphasized the need for collaboration and transparency between government and industry. 

Last month I wrote that ChatGPT was still banned in Italy owing to numerous privacy concerns (“This article is human-written: ChatGPT and navigating AI,” May/June Bench & Bar). Since then, the ban has been lifted after OpenAI added certain disclosures and controls.4 This episode illustrates the kinds of adjustments to AI tools that will likely continue to be made. In the meantime, however, some previously hypothetical crises have indeed come to fruition. 

In May, a New York City attorney was found to have used ChatGPT to generate case citations for court documents.5 When those citations turned out to be fake, he admitted to using ChatGPT in conducting his research. In a sworn affidavit, he stated that he had “never utilized Chat GPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false.”6 As with any new technology an organization plans to incorporate, it is critical to research the tool and create a plan for how it will best be implemented. A quick Google search reveals that ChatGPT is notorious for giving misleading or even completely false information in conversations. In this case, the consequences of not knowing ChatGPT’s weaknesses have been steep. 

Partly in response to this episode, restrictions are being adopted to manage AI in the courtroom. U.S. District Judge Brantley Starr of the Northern District of Texas, for example, “has ordered attorneys to attest that they will not use ChatGPT or other generative artificial intelligence technology to write legal briefs because the AI tool can invent facts.”7 Though Judge Starr acknowledged that the technology may have appropriate uses in other situations, he barred relying on AI alone for legal briefing, given its unreliability. Regardless of the application, verifying the authenticity and accuracy of what ChatGPT produces is the user’s responsibility, especially within the legal community. 

In addition to the ethical issues on display in this particular case, ChatGPT is even being viewed by some as a harbinger of the end—human extinction. What will happen when jobs are replaced by AI? What if life as we know it is taken over by “minds” more powerful than ours? This alarmist view is tempered by the idea that this is a tool that can be used carefully and efficiently to improve human life, not tear it asunder. 

Within the cybersecurity field, many experts believe that AI holds the key to combating the ever-growing number and variety of cyberattacks perpetrated daily. If AI can be used to develop sophisticated phishing campaigns, maybe AI is also the best resource we have to combat those types of attacks. As far as detection and mitigation go, ever-evolving AI could be a game changer in how organizations scan for and respond to cyberattacks. But some take it even a step further. Could AI possibly be the foolproof cybersecurity solution we’ve been hoping for all along?

Maybe not. In his recently published book, Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks,8 Yale Professor Scott J. Shapiro describes the dangers of solutionism, especially within the realm of cybersecurity. He explains that cybersecurity tools are often touted as the best of the best, with AI frequently cited as the deciding factor that makes one product better than another. But Shapiro goes on to point out that technological fixes are not always what is needed to correct cybersecurity problems. “Cybersecurity is not a primarily technological problem that requires a primarily engineering solution,” he writes. “It is a human problem that requires an understanding of human behavior.” Similarly, though ChatGPT “passed” the bar,9 it is not bound to the same standards required of an actual attorney, who must be qualified to deal with “human problems.” Judge Starr highlights this disqualifying feature of AI in his order: “Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.”10

Though I frequently discuss the “human element” of cybersecurity, I think the prevalence of AI and the fears surrounding its ascent are making us all question the “human element” in other industries. For one, AI poses a data security risk: consider an employee who inputs confidential data into a conversation, or a breach that compromises chat history. But AI may also pose what many see as a greater “security” risk: the risk to human beings’ way of life. Within the legal community, it’s been challenging to weigh the risks and benefits, as both seem abundant. Ethical guidelines and governance rules will undoubtedly continue to be created to balance the strengths of AI against its pitfalls. In the meantime, it is important to keep an eye on how AI is being used today. Establishing firm requirements for its use and setting clear expectations can help mitigate risk. 


Notes

1 https://www.cnbc.com/2023/03/20/openai-ceo-sam-altman-says-hes-a-little-bit-scared-of-ai.html

2 https://www.cnn.com/2023/05/16/tech/sam-altman-openai-congress/index.html

3 https://www.forbes.com/sites/michaelperegrine/2023/05/17/sam-altman-sends-a-message-to-corporate-leaders-on-ai-risk-management/?sh=42ab1e96dbef

4 https://www.bbc.com/news/technology-65431914#

5 https://www.forbes.com/sites/mattnovak/2023/05/27/lawyer-uses-chatgpt-in-federal-court-and-it-goes-horribly-wrong/?sh=4a4c089d3494

6 https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.32.1_1.pdf

7 https://www.cbsnews.com/news/texas-judge-bans-chatgpt-court-filing/

8 Shapiro, Scott J., Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks, Farrar, Straus and Giroux, 2023.

9 https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile

10 https://www.txnd.uscourts.gov/judge/judge-brantley-starr



Mark Lanterman is CTO of Computer Forensic Services. A former member of the U.S. Secret Service Electronic Crimes Task Force, Mark has 28 years of security and forensic experience and has testified in over 2,000 matters. He is a member of the Minnesota Lawyers Professional Responsibility Board.