Pentagon Designates Anthropic a Supply Chain Risk
The U.S. Department of Defense has formally designated Anthropic as a "supply chain risk." The reason: Anthropic refuses to remove contractual clauses prohibiting the use of Claude models for mass domestic surveillance and fully autonomous weapons systems.
The consequences are significant. Federal agencies have begun phasing out Claude, and U.S. military contractors, suppliers, and partners are now barred from engaging with the firm. It is a dramatic escalation of tension between AI safety ideology and state defense needs.
Anthropic has held firm. The company argues that its safety guardrails are precisely what make Claude a responsible tool, and that removing them would undermine its entire mission. CEO Dario Amodei has repeatedly maintained that AI systems should not operate without meaningful human oversight in life-critical situations.
The case exposes a deeper conflict in the American AI landscape: the state wants powerful, compliant tools, while leading AI laboratories want to set limits on what the technology can be used for. The outcome of this battle will shape AI policy for years to come.
Anthropic has recently made other major moves: $100 million invested in its partner network, a new office in Sydney, and a partnership with Mozilla. But the Pentagon case overshadows much of this and marks a new phase in the relationship between the AI industry and the U.S. government.
📬 Did you like this?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.