
New York Lawyer Caught Using ChatGPT After Citing Cases That Don’t Exist

A lawyer in New York has found himself in trouble with a judge after submitting legal research that had been created by the artificial intelligence (AI) chatbot ChatGPT.

In a case in which an airline was being sued over an alleged personal injury, lawyers for the plaintiff filed a brief citing several cases as legal precedent. Unfortunately, as later admitted in an affidavit, the following cases were “found to be nonexistent” by the court:


Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)
Shaboon v. Egyptair, 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013)
Petersen v. Iran Air, 905 F. Supp. 2d 121 (D.D.C. 2012)
Martinez v. Delta Airlines, Inc., 2019 WL 4639462 (Tex. App. Sept. 25, 2019)
Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017)
Miller v. United Airlines, Inc., 174 F.3d 366 (2d Cir. 1999)

The “research” was compiled by Steven A. Schwartz, an attorney with more than 30 years of experience, according to the BBC. Schwartz said in the affidavit that he had not used ChatGPT for legal research before and was “unaware of the possibility that its content could be false”.

Screenshots in the affidavit show the lawyer asking the chatbot “is varghese a real case”, to which the chatbot responded “yes”. When asked for sources, it told the lawyer that the case could be found “on legal research databases such as Westlaw and LexisNexis”. When asked “are the other cases you provided fake”, it responded “No”, adding that they could be found on the same databases.

As fun as chatbots may be, or as advanced as they may seem, they are still prone to “hallucinations” – perfectly coherent-sounding answers that don’t in any way relate to the real world.

Without heavy fact-checking, ChatGPT is not really a tool you should use to research a legal case that relies on real-world precedent rather than the hallucinations of a spicy autocomplete.

The lawyer wrote that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein” and vowed to “never do so in the future without absolute verification of its authenticity”.

Both Schwartz and fellow lawyer Peter LoDuca, who was unaware that ChatGPT had been used in researching the case, face a hearing on June 8 over the incident.

