Bad to Worse: Lawyer Who Used ChatGPT to Write Court Brief May Be Sanctioned
Our interest at Seeflection.com is helping our readers navigate the new world of chat AI. We hope today’s story serves as a cautionary tale for professionals tempted to have chatbots do their work for them. The lawyer in this story did exactly that and found himself in hot water with a judge.
From futurism.com comes a story first published on May 30, 2023.
As described in an early May affidavit, an attorney representing a man suing an airline over an alleged injury admitted he used the AI chatbot to research his client’s case. That is why his legal brief cited a number of court cases with official-sounding names like “Martinez v. Delta Air Lines” and “Varghese v. China Southern Airlines,” none of which actually exist.
The attorney, Steven Schwartz of Manhattan’s Levidow, Levidow & Oberman law firm, told the court that this was the first time in his more-than-three-decade career that he had used ChatGPT and that, per the New York Times, he “was unaware of the possibility that its content could be false.”
Schwartz told the court that he “greatly regrets” using ChatGPT to do his research for the case “and will never do so in the future without absolute verification of its authenticity.”
Judge P. Kevin Castel, however, does not seem swayed, and in his May 4 order he described the gravity of the situation in no uncertain terms.
“The Court is presented with an unprecedented circumstance,” reads the judge’s order for a future hearing. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
Judge Castel then ordered a June hearing to decide whether Schwartz should be sanctioned. In the meantime, this bizarre case should serve as a cautionary tale for lawyers, and everyone else, looking to experiment with ChatGPT professionally.
Fast forward to June 8, when Schwartz appeared before a Manhattan judge alongside his associate Peter LoDuca, the New York Times reports, and it did not go well.
Some of the material ChatGPT spat out for Schwartz to present to the court was so nonsensical that it prompted the judge to ask:
“What did you think when you read this?” he asked LoDuca, as quoted by Russell. “It’s gibberish. Does that make any sense to you?”
Both lawyers pleaded that they had been duped by the chatbot and promised not to let it happen again. We are still awaiting the court’s decision on Schwartz and whether he will be allowed to continue practicing law in New York City.
It is still unclear if ChatGPT has developed a sense of humor or just made an honest mistake.
Read more at futurism.com.