AI tools have become a hit with lawyers, but judges have shown they have little patience when experiments with the tech go wrong.

When combing over a document submitted by two defense lawyers from the firm Cozen O'Connor, district judge David Hardy found at least 14 citations of case law that appeared to be fictitious, Reuters reported. Others were misquoted or misrepresented. When confronted, the two lawyers quickly came clean: one of them had used ChatGPT to draft and edit the document.

Where other judges have sanctioned lawyers for committing similar sins, Hardy offered a humiliating ultimatum last week that's borderline cruel and unusual. The two stooges could pay $2,500 each in monetary sanctions, face removal from the case, and be referred to the state bar. Or they could swallow their pride, write to their former law school deans and bar officials explaining how they screwed up, and volunteer to speak on topics like AI and professional conduct. In their shoes, we'd opt for option c): disappear off the face of the Earth.

The Cozen pair were representing Uprise, an internet service provider. The law firm apologized to the judge and explained that an associate, Daniel Mann, had accidentally filed an early, uncorrected draft made with the help of ChatGPT, according to the reporting. Mann was fired, but the other lawyer, Jan Tomasik, appears to have stayed on. Cozen told Reuters that it has a "strict and unambiguous" AI policy that bans publicly available AI tools for client work, though apparently not specialized ones. "We take this responsibility very seriously and violations of our policy have consequences," the firm said, per Reuters.

Judge Hardy's punishment may have been unorthodox, but he's far from the only one to take punitive action against lazy lawyers who don't double-check their AI homework.
After lawyers from the large law firm Morgan & Morgan apologized for submitting AI-hallucinated case law in a suit against Walmart, the judge imposed thousands of dollars in fines, having decided not to pursue more severe punishment. (The firm later sent out a panicked company-wide email warning about the shortcomings of AI while still, questionably, praising its usefulness.)

There are countless similar stories. The plot is usually the same: a lawyer uses a large language model to help cite relevant case law. But AI being AI, it invents cases out of thin air, misrepresents them, or mashes real cases together. And being the kind of person lazy or careless enough to let an AI do their job in the first place, the lawyer neglects to verify the work. The guilty parties often use AI chatbots like ChatGPT, but even AI tools tailor-made for law have been involved in screw-ups. Not even big firms like Morgan & Morgan have come out unscathed.

Understandably, some judges are taking a hard line to stamp out the practice before it spreads even further, as in the case of three Butler Snow lawyers who were kicked off their case after the judge caught their bogus AI citations.

"There's this increased frustration by many judges that these continue to occur and proliferate," Gary Marchant of Arizona State University's law school told Reuters. Judges may be "looking for creative ways to really, not only punish the lawyers involved, but to send a strong message that would hopefully deter others from being so sloppy."

To be fair to the doofus lawyers, AI is becoming an issue in every corner of the legal world. Earlier this year, the California state bar admitted that some questions on a recent bar exam were generated with the help of a large language model. And a Mississippi judge was accused of using AI to issue a garbled ruling, stunning attorneys.
More on AI: Leaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"