
Judge Horrified as Lawyers Submit Evidence in Court That Was Faked With AI


Lawyers across the country have been landing themselves in hot water for submitting botched court documents written with the help of AI, in blunders that were clear signs of the tech’s rapid inroads into the courtroom.

But it was only a matter of time before AI wasn’t just producing clerical errors, but actual submitted “evidence.”

That’s what recently played out in a California court over a housing dispute — and it didn’t end well for the AI-fielding party.

As NBC News reports, the plaintiffs in the case, Mendones v. Cushman & Wakefield, Inc., submitted a strange video that was supposed to be witness testimony. In it, the witness's face is fuzzy and barely animated. Aside from the rare blink, the only noticeable movement comes from her flapping lips, while the rest of her expression remains unchanged. There's also a jarring cut, after which the movements repeat themselves.

In other words, it was obviously an AI deepfake. And according to the reporting, it might be one of the first documented instances of a deepfake being submitted as purportedly authentic evidence in court — or at least one that was caught.

Judges have had little patience for AI making a mockery of their profession, and the one on this case, Judge Victoria Kolakowski, wasn't having any of it, either. Kolakowski dismissed the case on September 9, citing the AI-generated witness testimony. The plaintiffs filed a motion for reconsideration, arguing that Kolakowski had failed to prove that their incredibly janky deepfake was the creation of AI. Their request was denied on November 6.

Kolakowski says her profession is only just beginning to grapple with AI. The release of OpenAI's video-generating app, Sora 2, was a wake-up call for just how easily convincing video evidence can be faked, as users quickly found that they could create realistic videos of people committing crimes like shoplifting. Creating deepfakes may have once required some degree of technical know-how, but now anyone with a smartphone and a prompt can spit them out.

“The judiciary in general is aware that big changes are happening and want to understand AI, but I don’t think anybody has figured out the full implications,” Kolakowski told NBC. “We’re still dealing with a technology in its infancy.”

Among the judges and other legal experts interviewed by NBC, there seem to be two prevailing schools of thought on how to deal with AI. One argues that we should get ahead of the AI threat by updating judicial rules, such as instituting guidelines that dictate how lawyers verify their evidence, or making it the judge's rather than the jury's duty to identify AI fakery. But the other camp maintains that we should leave it up to judges to figure it out among themselves, and see if an apocalypse of AI-forged evidence really comes to pass.

Right now, the latter sentiment is informing official policy. In May, NBC noted, the US Judicial Conference’s Advisory Committee on Evidence Rules rejected proposals to update the guidance on AI, arguing that “existing standards of authenticity are up to the task of regulating AI evidence.”
