News · May 9, 2026 · 2 min read

Court blocks DOGE after it used ChatGPT to cancel $100M grants

Federal judge rules using AI to scan for DEI keywords to eliminate humanities grants violated First and Fifth Amendment protections.

By Agentic Daily · Verified Source: The Verge

Our Take

DOGE staffers fed grant descriptions to ChatGPT with no definition of DEI and no human review of the AI's classifications, a process that cut 97% of the agency's humanities grants and that a federal judge has now ruled a constitutional violation.

Why it matters

Government agencies now have a clear precedent that AI delegation without human oversight exposes them to constitutional violations. Enterprise AI teams face similar risks when automating decisions about protected characteristics.

Do this week

Legal teams: audit any AI systems processing protected class data before month-end so you can identify constitutional exposure before regulators do.

ChatGPT eliminated 97% of humanities grants without human review

US District Judge Colleen McMahon ruled that the Department of Government Efficiency's cancellation of over $100 million in National Endowment for the Humanities grants was unconstitutional. The 143-page decision details how DOGE staffers Justin Fox and Nate Cavanaugh used ChatGPT to eliminate 1,400 grants based on their perceived connection to diversity, equity, and inclusion.

Fox testified that he used a standardized ChatGPT prompt: "Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief explanation." Fox admitted he never defined DEI for ChatGPT and had no understanding of how the AI interpreted the term (per court filings).
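The prompt itself can be reconstructed from the testimony. The sketch below wraps it in a template; the wording of `PROMPT_TEMPLATE` follows the court filings, while `build_prompt` and the sample grant description are illustrative assumptions, not part of the record:

```python
# The prompt format Fox described in court filings, reconstructed as a
# reusable template. Only the prompt wording comes from the testimony;
# build_prompt and the sample description below are hypothetical.

PROMPT_TEMPLATE = (
    "Does the following relate at all to DEI? "
    "Respond factually in less than 120 characters. "
    "Begin with 'Yes.' or 'No.' followed by a brief explanation.\n\n"
    "{description}"
)

def build_prompt(description: str) -> str:
    """Return the classification prompt for one grant description."""
    return PROMPT_TEMPLATE.format(description=description)

# Note what the template never supplies: a definition of "DEI", any
# examples, or room for nuance beyond a binary answer under 120 characters.
prompt = build_prompt("An oral-history archive of Holocaust survivors.")
print(prompt.splitlines()[0])
```

Seen as code, the gap the judge identified is visible immediately: the model is asked to apply a category the prompt never defines.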

The process also involved scanning grants for "Detection Codes" including "BIPOC," "Minorities," "Native," "Tribal," "Indigenous," "Immigrant," "LGBTQ," "Homosexual," and "Gay." Grants flagged by these terms were categorized as "Craziest Grants" and "Other Bad Grants." Projects about the Holocaust, civil rights, and indigenous climate knowledge were deemed wasteful.
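The keyword pass can be sketched the same way. The Detection Codes in the set below are the ones quoted in the ruling; `flag_grant`, its tokenization, and the sample description are illustrative assumptions about how such a scan typically works:

```python
# Sketch of a "Detection Codes" keyword scan. The code words are quoted
# from the court record; flag_grant and the example grant are hypothetical.

DETECTION_CODES = {
    "BIPOC", "Minorities", "Native", "Tribal", "Indigenous",
    "Immigrant", "LGBTQ", "Homosexual", "Gay",
}

def flag_grant(description: str) -> set[str]:
    """Return every Detection Code that appears as a word in the description."""
    words = (w.strip(".,;:()").lower() for w in description.split())
    lowered = set(words)
    return {code for code in DETECTION_CODES if code.lower() in lowered}

# A keyword match carries no context, so a project preserving indigenous
# climate knowledge is flagged exactly like anything else on the list.
print(flag_grant("Documenting Indigenous climate knowledge in Alaska."))
```

This is the core failure mode the court described: the scan treats the mere presence of a protected-class term as evidence of waste, with no way to distinguish a grant's subject from its viewpoint.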

Government can't outsource constitutional decisions to AI

Judge McMahon rejected the government's argument that ChatGPT's classifications absolved DOGE of constitutional responsibility. "ChatGPT was the Government's chosen instrument for purposes of this project," she wrote, adding there was "not a scintilla of evidence" that staffers reviewed whether ChatGPT's rationales made sense.

The ruling establishes that AI systems cannot shield government agencies from First Amendment viewpoint discrimination or Fifth Amendment equal protection violations. The court found DOGE violated both amendments by treating protected characteristics as markers of waste and ideological contamination.

McMahon noted the irony that subjects DOGE flagged as wasteful were "expressly germane to NEH's mission" as defined by Congress. The decision reverses all 1,400+ grant cancellations.

Human oversight is legally required for protected class decisions

This ruling creates binding precedent that AI automation of decisions involving protected characteristics requires meaningful human review. Enterprise teams using AI for hiring, lending, or content moderation face similar constitutional and civil rights exposure.
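What "meaningful human review" might look like in a pipeline can be sketched minimally. This pattern is an illustrative assumption, not anything prescribed by the ruling: an AI label is recorded but is never actionable until a named reviewer logs a rationale.

```python
# Minimal human-in-the-loop gate (illustrative pattern, not from the
# ruling): an AI classification cannot take effect until a named
# reviewer approves it with a recorded rationale.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    grant_id: str
    ai_label: str                    # e.g. "DEI-related"
    reviewer: Optional[str] = None   # who validated the label
    rationale: Optional[str] = None  # why they agreed or overrode it

    @property
    def actionable(self) -> bool:
        """An AI label alone is never actionable; review must be logged."""
        return self.reviewer is not None and bool(self.rationale)

d = Decision(grant_id="NEH-1234", ai_label="DEI-related")
assert not d.actionable  # the model's output by itself cannot cancel anything
d.reviewer, d.rationale = "j.doe", "Read the description; label is incorrect."
assert d.actionable
```

The design choice matters legally as well as technically: the audit trail (reviewer plus rationale) is exactly the "scintilla of evidence" the court found missing.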

The "I didn't define the terms" defense failed completely. Courts expect organizations to understand and validate their AI systems' decision-making logic, especially when constitutional rights are at stake.

The case also demonstrates that prompt engineering without domain expertise creates legal liability. Fox's 120-character limit and binary yes/no format eliminated nuance that constitutional law requires.

#LLM · #GPT · #AI Ethics · #Legal AI