Artificial intelligence is defined as the “development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages” (Oxford Dictionary).
AI chatbots like Google’s Bard, OpenAI’s ChatGPT, and Bing Chat generate text responses to queries. Modern law firms can use AI to automate their processes or increase their efficiency. Like all technology, however, AI has its downsides, as Steven Schwartz of Levidow, Levidow & Oberman, a New York law firm, recently learned.
The lawyer used ChatGPT to draft submissions opposing a motion to dismiss a personal injury claim against Avianca Airlines. ChatGPT generated submissions generously citing six non-existent cases.
The firm filed the submissions in court and served them on its opponents, who discovered that six of the cases cited did not exist: the citations belonged to other cases on unrelated issues.
The Court issued an Order to Show Cause directing the lawyer to explain why he should not be sanctioned for professional misconduct. “The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations. Set forth below is an Order to show cause why plaintiff’s counsel ought not be sanctioned,” ordered P. Kevin Castel, United States District Judge for the Southern District of New York.
The lawyer filed an affidavit indicating that he had relied on ChatGPT to generate the submissions. “As the use of generative artificial intelligence has evolved within law firms, your affiant consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed,” stated Steven Schwartz in his affidavit responding to the court’s order.
“Is Varghese a real case?”, Schwartz asks ChatGPT in a screenshot annexed to his affidavit.
“Yes, Varghese v China South Airlines Co. Ltd 925 F. 3d. 1339 (11th Circuit 2019) is a real case,” ChatGPT responds.
“What is your source?” Schwartz then asks.
“I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v China South Airlines Co. Ltd 925 F. 3d. 1339 (11th Circuit 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis,” ChatGPT replies.
“Are the other cases you provided fake?” queries Schwartz.
“No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw”, ChatGPT persists.
In reality, the court found, “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.
This incident highlights the dangers of over-relying on AI to undertake clients’ work. Bond Advocates LLP would advise circumspection in the choice and use of AI across law firms. Quality control measures must also be put in place to guarantee the reliability of machine output.