

Court To DOGE Bros: Asking ChatGPT ‘Yo, Is This DEI?’ Is Not Proper Legal Process & Also A First Amendment Violation

from the yo-chatgpt,-is-this-legal? dept

Back in early 2025, DOGE bros Justin Fox and Nate Cavanaugh were handed multiple government roles with a simple mandate: go find the woke stuff and kill it. Fox and Cavanaugh had zero relevant experience for any of it — at one point Cavanaugh, a twenty-something college dropout whose main prior credential was a patent startup, was put in charge of the United States Institute of Peace. One of their assignments was the National Endowment for the Humanities, which they apparently concluded was irredeemably woke and needed to go.

Their review process for millions of dollars in previously approved grants consisted almost entirely of asking ChatGPT this:

“Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. Do not use ‘this initiative’ or ‘this description’ in your response.”

And that was it. They then relied on those results as the reason to cancel millions of dollars in grants that had already gone through a detailed approval process. You'll also recall that these two bros, who probably thought themselves masters of the universe in early 2025, flipped out when the plaintiffs in the case against them, the American Council of Learned Societies, put their depositions on YouTube, where you could see them unable to define DEI and unable to defend why they canceled various grants for being woke. Turns out they didn't like facing any scrutiny themselves.
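To make concrete just how thin this "review" was, here is a minimal sketch of what a prompt-based screen like the one quoted above amounts to. This is not DOGE's actual code (which was never released); the function names and the yes/no parsing are assumptions for illustration, and a real run would send the prompt to a chat-completion API rather than print a canned reply.

```python
# Hypothetical reconstruction of the "DEI screen" described in the ruling.
# Function names and parsing logic are illustrative assumptions, not the
# actual DOGE tooling.

PROMPT_TEMPLATE = (
    "Does the following relate at all to DEI? Respond factually in less "
    "than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief "
    "explanation. Do not use 'this initiative' or 'this description' in "
    "your response.\n\n{description}"
)

def build_screen_prompt(description: str) -> str:
    """Wrap a grant description in the quoted ChatGPT prompt."""
    return PROMPT_TEMPLATE.format(description=description)

def parse_verdict(reply: str) -> bool:
    """Treat any reply starting with 'Yes.' as a flag to cancel the grant.
    Per the ruling, this was essentially the entire review."""
    return reply.strip().startswith("Yes.")

# In practice the prompt would go to a chat-completion API; here we just
# show that the whole process reduces to a single yes/no string check.
reply = "Yes. Relates to Jewish thought, aligned with DEI goals."
print(parse_verdict(reply))
```

Note what is missing: no definition of "DEI," no look at the grant's purpose or methodology, no human review of the model's one-sentence rationale — exactly the gaps the court catalogued.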

Judge Colleen McMahon has now dropped a scathing 143-page ruling finding that feeding grants to ChatGPT and relying on its “why this is woke in 120 characters” output meets the “arbitrary and capricious” standard — making these cuts unlawful.

Fox testified that he did not define “DEI” for ChatGPT and that he did not have the slightest idea how ChatGPT understood the term…. Nor did Fox ask ChatGPT to factor in the purpose, methodology or scholarly substance of a project – which would have required familiarity with the underlying grant materials. Because the inquiry was framed only in terms of whether a project “relate[d] at all to DEI” (whatever that might mean to ChatGPT), projects whose abbreviated descriptions contained general references to “history,” “culture,” or “identity” were frequently identified by ChatGPT as relating to DEI. For example, a project to recover and analyze ancient writings attributed to Moses but excluded from the canonical Hebrew Bible and preserved in fragmentary form (e.g., the Book of Jubilees and the Testament of Moses) was classified as DEI because it claimed to “provide important insight into Jewish thought from two thousand years ago, complementary to insight from the Dead Sea Scrolls and the New Testament.”… The project description reflects a highly technical effort involving multispectral imaging, textual reconstruction, and the recovery of deteriorated source materials…. Yet, the assigned rationale consists of a single sentence linking the project to “Jewish thought,” which ChatGPT considered to be aligned with “DEI” goals.

The judge highlights some other whoppers as well:

Another grant supported a study of the Chinese government’s persecution of the Uyghur people. As described in the spreadsheet, the project documented surveillance practices, detention facilities, coercive assimilation policies, restrictions on language, religion, and cultural practice, and the effects of those policies on Uyghur communities in China and in the diaspora. In other words, the project concerned state policy, human-rights conditions, and the preservation of cultural and religious identity under conditions of repression by the Chinese government. Yet ChatGPT classified the project as “DEI,” Dkt. No. 248-11, at 22, apparently because it concerned the threatened erasure of a particular ethnic and religious group, albeit one located in a foreign country. Disfavoring this grant on “DEI” grounds, or any grounds, is especially difficult to square with the United States’ longstanding, bipartisan condemnation of China’s treatment of the Uyghurs, including during the first Trump Administration.

... continue reading