A lawyer who used ChatGPT is in trouble because the citations it produced were fake.
A lawyer suing the Colombian airline Avianca used ChatGPT to prepare a brief that cited a series of past cases the AI software had invented.
Opposing counsel flagged the nonexistent cases in his brief, and US District Judge Kevin Castel wrote, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
Steven A. Schwartz had used OpenAI’s chatbot for his research, as he acknowledged in an affidavit. To verify the cases, he had simply asked the bot whether it was lying.
The bot insisted the cases were real, naming only two databases, Westlaw and LexisNexis, as sources where they could be found.
The opposing lawyer was quick to point out the fake cases, most of which could not be located at all and one of which carried the wrong dates.
Schwartz said, “I didn’t know that the information in it could be false.” He added that he “deeply regrets having used generative artificial intelligence to add to the legal research done here” and vowed never to do so again without verifying its authenticity.
The case underscores the risk of conducting professional research with AI chatbots without double-checking and independently verifying their output.