Tech News

Judge rules DOGE used ChatGPT in a way that was both dumb and illegal

Why This Matters

This case highlights the risks of delegating critical government decisions to AI tools like ChatGPT, especially without clear oversight or even a shared definition of the terms being evaluated. It underscores the need for transparency and accountability when deploying AI in the public sector, where misuse can carry legal as well as ethical consequences. For consumers and the tech industry, it is a cautionary tale about the limits of AI-driven automation in sensitive areas.


The Department of Government Efficiency’s cancellation of over $100 million in grants was unconstitutional, according to a ruling on Thursday. In the 143-page decision, US District Judge Colleen McMahon cites DOGE’s process for eliminating grants, which involved using ChatGPT to determine whether a grant related to diversity, equity, and inclusion (DEI).

The ruling, which stems from a 2025 lawsuit filed by humanities groups, says “it could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding” from the National Endowment for the Humanities (NEH). Judge McMahon cites several instances in which DOGE appeared to use ChatGPT to scan and eliminate grants based on their relation to characteristics like race, national origin, religion, and sexuality.

The filing mentions testimony from Justin Fox, a DOGE staffer who worked with his colleague Nate Cavanaugh to eliminate 97 percent of grants under the NEH, in part by relying on ChatGPT’s understanding of DEI:

Fox testified that he used ChatGPT “[t]o highlight why [a] grant may relate to DEI” and “to pull out anything related to DEI.” To do so, he submitted each cursory grant description from the NEH spreadsheet to ChatGPT using a standardized prompt: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation.” Fox testified that he did not define “DEI” for ChatGPT and that he did not have the slightest idea how ChatGPT understood the term.
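The process described in the testimony amounts to a simple yes/no classification loop. The sketch below is a hypothetical reconstruction, not code from the filing: the function names are invented, and a stub stands in for the actual ChatGPT call. The prompt text is quoted from the ruling; note that, as Fox testified, it gives the model no definition of "DEI" at all.

```python
# Hypothetical reconstruction of the prompt loop described in the testimony.
# The helper names and the stub response are illustrative assumptions.

# Standardized prompt quoted in the ruling; "DEI" is never defined for the model.
PROMPT = (
    "Does the following relate at all to DEI? Respond factually in less than "
    "120 characters. Begin with 'Yes.' or 'No.' followed by a brief explanation."
)

def classify_grant(description: str, ask) -> bool:
    """Send one grant description to the chatbot; treat a 'Yes.' reply as a DEI flag."""
    reply = ask(f"{PROMPT}\n\n{description}")
    return reply.strip().startswith("Yes.")

def flag_grants(descriptions, ask):
    """Run every description through the classifier, keeping only flagged grants."""
    return [d for d in descriptions if classify_grant(d, ask)]

# Stub standing in for a real ChatGPT call, for demonstration only.
def fake_ask(prompt: str) -> str:
    return "Yes. Mentions diversity." if "diversity" in prompt else "No. Unrelated."

flagged = flag_grants(
    ["A study of 18th-century shipbuilding", "An oral history of diversity in jazz"],
    fake_ask,
)
```

The design point the judge seizes on is visible in the sketch: the entire funding decision hinges on whether an undefined model's reply happens to start with "Yes."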

In addition to asking ChatGPT for signs that something is related to DEI, Fox also asked the AI chatbot to scan NEH grants for what he called “Detection Codes” related to “protected characteristics,” according to the filing:

After being deployed from DOGE to NEH, Justin Fox used search terms, which he labeled as “Detection Codes,” to identify grants that he dubbed the “Craziest Grants” and “Other Bad Grants.” The search terms included, among other terms, “BIPOC (Black, Indigenous, People of Color),” “Minorities,” “Native,” “Tribal,” “Indigenous,” “Immigrant,” “LGBTQ,” “Homosexual,” and “Gay.” When Fox was asked whether he “r[a]n this list of words through every grant description” he received from NEH, he confirmed, “yes.” In this way, Fox constructed and applied explicit classifications based on protected characteristics and used them as the operative criteria for revoking federal grants.
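The "Detection Codes" step the filing describes is, in effect, a keyword match over grant descriptions. A minimal sketch follows; the term list is taken from the ruling, but the matching logic and function names are assumptions for illustration only.

```python
# Hypothetical sketch of the "Detection Codes" keyword scan described in the filing.
# The term list comes from the ruling; the matching behavior is an assumption.

DETECTION_CODES = [
    "BIPOC", "Minorities", "Native", "Tribal", "Indigenous",
    "Immigrant", "LGBTQ", "Homosexual", "Gay",
]

def matched_codes(description: str) -> list[str]:
    """Return every detection code appearing in a description (case-insensitive)."""
    text = description.lower()
    return [term for term in DETECTION_CODES if term.lower() in text]

def scan_grants(descriptions):
    """Map each flagged description to the terms that matched it."""
    return {d: hits for d in descriptions if (hits := matched_codes(d))}
```

As the judge notes, a scan like this flags a grant for the mere presence of a protected characteristic, with no regard for what the project is actually about.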

Judge McMahon writes that DOGE deemed hundreds of grants “wasteful because they related to Blacks, women, Jews, Asian Americans, and Indigenous people,” adding that “the very subjects DOGE treated as markers of waste, lack of merit, or ideological contamination are the subjects that Congress made expressly germane to NEH’s mission.” Some of the grants lumped into the “wasteful” category related to projects about the Holocaust, civil rights, and an educational experience that would allow participants to “explor[e] indigenous knowledge, culture, and climate.”

McMahon also pushes back on the government’s argument that “there is no real constitutional problem here because any viewpoint-based classification was ChatGPT’s doing, rather than the Government’s.”