ChatGPT found guilty of fabricating cases and citations for a Manhattan lawyer's Federal...

Jimmy2x

Posts: 192   +17
Staff
Cutting corners: Legal fees certainly aren't cheap, so when we retain legal representation, we assume we're paying for that legal professional's time and expertise. Rather than provide the services he was retained for, one Manhattan lawyer tried to shorten the research process by letting ChatGPT generate his case references for a Federal Court filing. And as he found out the hard way, fact-checking is pretty important, especially when your AI has a penchant for making up facts.

Attorney Steven A. Schwartz was retained by a client to represent them in a personal injury case against Avianca Airlines. According to the claim, Schwartz's client was allegedly struck in the knee with a serving cart during a 2019 flight into Kennedy International Airport.

As one would expect in this type of legal situation, the airline asked a Manhattan Federal judge to toss the case, which Schwartz immediately opposed. So far, it sounds like a pretty typical courtroom exchange. That is, until Schwartz, who admittedly had never used ChatGPT before, decided that it was time to let technology do the talking.

In his opposition to Avianca's request, Schwartz submitted a 10-page brief citing several relevant court decisions. The citations referenced similar cases, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines. According to the New York Times' article, the last citation even provided a lengthy discussion of federal law and "the tolling effect of the automatic stay on a statute of limitations."

While it sounds like Schwartz may have come armed and ready to defend the case, there was one underlying problem: none of those cases are real. Martinez, Zicherman, and Varghese don't exist. ChatGPT fabricated them all with the sole purpose of supporting Schwartz's submission.

When confronted with the error by Judge P. Kevin Castel, Schwartz conceded that he had no intent to deceive the court or the airline. He also expressed regret for relying on the AI service, admitting that he had never used ChatGPT, and was "...unaware of the possibility that its content could be false." According to Schwartz's statements, he at one point attempted to verify the authenticity of the citations by asking the AI if the cases were in fact real. It simply responded with "yes."

Judge Castel has ordered a follow-up hearing on June 8 to discuss potential sanctions related to Schwartz's actions. Castel's order aptly described the strange new situation as "an unprecedented circumstance," littered with "bogus judicial decisions, with bogus quotes and bogus internal citations." And in a cruel twist of fate, Schwartz's case could very well end up as one of the citations used in future AI-related court cases.


godrilla

Posts: 771   +425
Wow, up to a 25% anomaly rate! While I do believe this will plateau closer to zero over time, people becoming too dependent on this technology may result in disastrous outcomes. Some beta testers were criticizing the 6-month halt on this technology because they want to contribute more to the beta testing. 🙃
 

kiwigraeme

Posts: 1,575   +1,134
A chatbot is only a tool - like most other things - a tool that will get better.
No problem with the lawyer using it to review documents for inconsistencies and mistakes.
No problem trying to use it to build a case or suggest strategies (though the chatbot says it doesn't do legal cases, there are easy workarounds).

He used it poorly and wrongly - as stated, this is shockingly bad - any lawyer would have pulled up the quoted case in question, had it printed out, and referenced it in the submission to the judge.

The story is really about a careless lawyer.
Law firms already have search engines - they are already building purpose-built AI.
Learn and understand your tools and their limitations.
That's what good schools are doing - not banning it.
 

Xelions

Posts: 22   +19
I always understood AI to be independent, but I've read that behind the scenes there has been much human contribution in the form of corrections, modifications, and answers shaping however this system works for ChatGPT - ChatGPT is powered by a hidden army of contractors making $15 per hour.

At what point is AI actually true AI? Self-learning, perhaps, growing out of the confines of the program? Can we teach it for 5 years and then it begins to adapt and learn on its own? I don't see it. Have I got the definition of AI completely wrong? People keep spouting "AI" here and there, but all I see is a coded program or piece of software with confined limitations that it'll never grow out of.
 