'Nobody Really Knows': Trump's Anthropic Ban Throws Federal Agencies Into Chaos
Three Weeks of Confusion
Nearly three weeks after Trump ordered federal agencies to "immediately cease" using Anthropic's technology, most agencies have still not received formal guidance — just a post on the president's social media account.
That's according to The Hill, reporting this week based on conversations with multiple federal technology leaders. The response has been chaotic and fragmented across the government.
How It Escalated
The conflict began March 3 when the Pentagon labeled Anthropic a "supply chain risk" after the company refused to remove safety restrictions preventing Claude from being used for autonomous weapons and domestic surveillance.
Trump followed up with a social media directive ordering all agencies to stop using Claude. But no formal executive order has been issued, creating a legal and operational vacuum.
A Split Government Response
- GSA and HHS: Removed Claude within hours of Trump's directive
- Other agencies: Continue using the system while "reviewing" the situation
- Legal front: Anthropic has sued the Pentagon. Former federal judges have sided with the company, raising concerns about using the "supply chain risk" label as a political instrument
Anthropic vs. The Pentagon
Wired reports that Anthropic argues it would be "legally unsound" for the Pentagon to blacklist its technology. The company maintains that its safety guardrails are precisely what make Claude safe to use — and that removing them represents a risk, not an advantage.
The lawsuit is a rare open confrontation between an AI company and the US Defense Department over who controls AI safety boundaries.
Implications for CIOs
For enterprises evaluating AI in regulated sectors, this is a critical reminder: vendor risk now includes geopolitical and government factors. What was "cleared for classified use" in January can be banned in March.
📬 Liked this?
AI news for leaders. Curated by a CIO who is building it himself. Daily in your inbox.