An attorney who heavily relied on ChatGPT to prepare a court filing on behalf of a man suing an airline has come face to face with the limitations of this artificial intelligence tool, including its tendency to fabricate facts.
Roberto Mata filed a lawsuit against Avianca, a Colombian airline, last year, alleging that a metal food and beverage cart injured his knee during a flight to Kennedy International Airport in New York. When Avianca moved to dismiss the lawsuit as barred by the statute of limitations, Mata’s lawyer, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, submitted a brief incorporating research conducted with ChatGPT, as he later explained in an affidavit.
While ChatGPT has proven to be a valuable tool for professionals across various industries, including the legal field, this case laid bare its limitations and unreliability: the AI-generated content cited court cases that had never occurred, presenting them as genuine precedents.
The fabrication came to light when Avianca’s legal representatives informed Judge Kevin Castel of the Southern District of New York that they were unable to locate the cases cited in Mata’s brief in any legal database.
Among the invented decisions were cases with titles such as Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines.
“It became apparent that something was awry when we couldn’t identify any of the cases referenced in their opposition brief,” stated Bart Banino, Avianca’s attorney from the firm Condon & Forsyth, during an interview with CBS MoneyWatch. “We suspected it was some kind of chatbot or similar technology.”
In response, Schwartz submitted an affidavit last week acknowledging that he had “consulted” ChatGPT to “augment” his legal research, only to discover that the AI tool was “an unreliable source.” He further admitted that this was his first time using ChatGPT for work, and that he had therefore been unaware of its potential to produce erroneous information.
Schwartz even sought confirmation from the AI regarding the authenticity of the cited cases. ChatGPT assured him that they were real. Schwartz then inquired about the source of this information.
To this, ChatGPT responded, “I apologize for the earlier confusion,” before stating that the Varghese case could be found in the Westlaw and LexisNexis databases.
Judge Castel has scheduled a hearing for June 8 to address the blunder and has ordered Schwartz and the law firm Levidow, Levidow & Oberman to show cause why they should not face penalties.
As of now, Levidow, Levidow & Oberman has not commented on the matter.