
Lawyer Gets Caught Using AI in Court, Responds in the Worst Possible Way


What is it with lawyers and AI? We don’t know, but it feels like an inordinate number of them keep screwing up with AI tools, apparently never learning from their colleagues who get publicly crucified for making the same mistake.

But this latest blunder from a New York attorney, in a lawsuit centered on a disputed loan, takes the cake. As 404 Media reports, after getting caught using AI by leaving hallucinated quotes and citations in his court filings, defense lawyer Michael Fourte then submitted a brief explaining his AI usage — which was also written with a large language model.

Needless to say, the judge was not amused.

“In other words, counsel relied upon unvetted AI — in his telling, via inadequately supervised colleagues — to defend his use of unvetted AI,” wrote New York Supreme Court judge Joel Cohen in a decision filed earlier this month.

“This case adds yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession,” the judge further lamented.

Perhaps one of the reasons we keep hearing about these completely avoidable catastrophes is that catching your opponent making even a single mistake with an AI tool is an easy way to gain the upper hand in court, so everyone's on the lookout for them.

That’s what happened here: it was the plaintiff’s legal team that first caught the mistakes, which included inaccurate or completely made-up citations and quotations. The plaintiffs then filed a request for the judge to sanction Fourte, which is when he committed the legal equivalent of shoving a stick between the spokes of your bike wheel: he used AI again.

In his opposition to the sanctions motion, Fourte submitted a document containing more than twice as many made-up or erroneous citations as the first time, an astonished-sounding Cohen wrote.

His explanation was also pretty unsatisfactory. Fourte neither admitted nor denied the use of AI, Judge Cohen wrote, but instead tried to pass off the botched citations as merely “innocuous paraphrases of accurate legal principles.”

Somehow, it gets worse. After the plaintiffs flagged the new wave of errors in Fourte’s opposition to the sanctions motion, the defense lawyer — who by now was presumably sweating more than a character in a Spaghetti Western — strongly implied that AI wasn’t used at all, complaining that the plaintiffs provided “no affidavit, forensic analysis, or admission” confirming the use of the tech. When he had an opportunity to set the record straight during oral arguments in court, Fourte further insisted that the “cases are not fabricated at all,” the judge noted.
