Pentagon Bans Claude — Hegseth Labels Anthropic a Security Risk
Hegseth Designates Anthropic as Supply-Chain Risk
Defense Secretary Pete Hegseth has designated Anthropic as a "supply-chain risk" for the US military and banned use of its Claude model within the Pentagon — after contract negotiations collapsed over the company's AI ethics principles.
This is a dramatic reversal in a partnership that just months ago was described as the closest between any AI company and the US government.
Background: Claude at Classified Level — Then Collapse
For over a year, Claude was the preferred AI model in US defense — the first frontier system cleared for classified use. According to Time Magazine, the model played a role in the capture of Venezuelan President Nicolás Maduro in January 2026.
The relationship broke down when the Pentagon sought to use Claude without the usage restrictions Anthropic attaches to its models. CEO Dario Amodei refused to lift them, and Hegseth designated the company a risk on March 3.
Military Personnel Want to Keep Claude
Reuters reports that military employees using Claude daily for classified work are frustrated by the ban. According to these sources, the tool is difficult to replace in the short term — making Hegseth's ban more symbolic than operational for now.
Tech Industry Rallies Behind Anthropic
Silicon Valley has largely supported Anthropic. The LA Times reports that the Pentagon's attempt to strong-arm the company has triggered reflection about what ethical standards AI companies can and should maintain — even against governments.
What This Means for CIOs
This is an important case for anyone evaluating enterprise AI contracts: a vendor's ethical guidelines are not negotiable fine print. Anthropic is prepared to lose one of the world's largest contracts rather than abandon its AI safety principles.
For CIOs: know where your vendor's ethical lines are drawn. If they are crossed, the consequences can reach your contracts too.
📬 Did you like this one?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.