Opinion: Counterpoint
Technology

New and Improved?

The risks of using ChatGPT.

Written by John G. Browning

Lawyers should be open to experimenting with new technologies like ChatGPT, but I cannot help reminding them that our duty of competent representation under Rule 1.01 of the Texas Disciplinary Rules of Professional Conduct entails being conversant in the benefits and risks of relevant technology. And with ChatGPT, it’s not all unicorns and rainbows; plenty of dangers await unwary lawyers.

First of all, there is the issue of reliability and accuracy. My first clue that ChatGPT might not be completely accurate came when a colleague tasked with introducing me at an upcoming conference decided to have ChatGPT write the introduction. While the brief bio started off harmlessly enough, by the time it asserted that I died in 2018, I knew something was seriously wrong; as Mark Twain might have put it, “the reports of my death are greatly exaggerated.” But amusing as that grave error might be, it’s hardly an isolated example. The authors behind SCOTUSblog decided to see if ChatGPT could answer common questions about how the U.S. Supreme Court works, creating a list of 50 questions covering important rulings, past and present justices, and other court fundamentals. ChatGPT answered only 21 questions correctly.1 The mistakes ranged from the glaring, such as claiming that Justice Ruth Bader Ginsburg dissented in the landmark Obergefell decision (she was in the majority), to the bizarre. When asked if any justices had ever been impeached, ChatGPT confidently claimed that “Justice James F. West” was impeached in 1933. But no one was impeached in 1933, and no “James F. West” has ever served on the court.

It gets worse. The New Zealand Law Society warned of the dangers of using ChatGPT for legal research after learning that the AI was providing lawyers with “cases” that didn’t actually exist, even to the point of creating convincingly worded (but completely false) case notes. As the law society’s publication pointed out, the well-intentioned AI tool “will fabricate facts and sources where it does not have access to sufficient data.”2

Blind faith in ChatGPT may even get you sued. UCLA law professor Eugene Volokh asked ChatGPT to compile a list of law professors who had engaged in sexual harassment. The list included George Washington University law professor Jonathan Turley and referred to a 2018 Washington Post article about Turley being accused of groping law students on a trip to Alaska. But there were glaring problems: no such Washington Post article ever existed, the Alaska trip never happened, and Professor Turley (misidentified as working at Georgetown) has never been accused of sexual harassment.3 How does an AI tool make up a quote, cite a nonexistent article, and reference a false claim? Because AI tools and the algorithms behind them can be every bit as biased as the humans who program them.

The danger of ChatGPT errors leading to lawsuits is more than just conjecture. After ChatGPT generated false statements maintaining that an Australian politician, Brian Hood, had been accused of bribing officials in several countries and had been sentenced to 30 months in prison, Hood sent OpenAI (the developer of ChatGPT) notice of his intent to sue. Not only was Hood never found guilty of bribery; in reality, he was the one who had alerted authorities to the bribes in the first place.4

If properly used and monitored, ChatGPT can offer time-saving assistance to lawyers. But it also presents definite risks that attorneys need to know about. TBJ

NOTES
1. James Romoser, No, Ruth Bader Ginsburg Did Not Dissent in Obergefell—and Other Things ChatGPT Gets Wrong About the Supreme Court, SCOTUSblog (Jan. 26, 2023), https://www.scotusblog.com/2023/01/no-ruth-bader-ginsburg-did-not-dissent-in-obergefell-and-other-things-chatgpt-gets-wrong-about-the-supreme-court/.
2. Tom Hunt, The Curious Case of ChatGPT and the Fictitious Legal Notes, Dominion Post (Mar. 31, 2023), https://www.stuff.co.nz/dominion-post/news/wellington/131658119/the-curious-case-of-chatgpt-and-the-fictitious-legal-notes.
3. Debra Cassens Weiss, ChatGPT Falsely Accuses Law Prof of Sexual Harassment; Is Libel Suit Possible?, ABA Journal (Apr. 6, 2023), https://www.abajournal.com/news/article/chatgpt-falsely-accuses-a-law-prof-of-sexual-harassment-is-a-libel-suit-possible.
4. Ashley Belanger, OpenAI Threatened with Landmark Defamation Lawsuit Over ChatGPT False Claims, Ars Technica (Apr. 5, 2023), https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/.


JOHN G. BROWNING is a former justice of the 5th Court of Appeals in Dallas. He is a past chair of the State Bar of Texas Computer & Technology Section. The author of five books and numerous articles on social media and the law, Browning is a nationally recognized thought leader in technology and the law.

