AI news, filtered and ready for leaders
Pentagon Bans Claude — Hegseth Labels Anthropic a Security Risk
Breaking
CIO · Anthropic · Pentagon · AI Safety · Compliance

Joachim Høgby
March 20, 2026 · 4 min read

Hegseth Designates Anthropic as Supply-Chain Risk

Defense Secretary Pete Hegseth has designated Anthropic as a "supply-chain risk" for the US military and banned use of its Claude model within the Pentagon — after contract negotiations collapsed over the company's AI ethics principles.

This is a dramatic reversal in a partnership that just months ago was described as the closest between any AI company and the US government.

Background: Claude at Classified Level — Then Collapse

For over a year, Claude was the preferred AI model in US defense — the first frontier system cleared for classified use. According to Time Magazine, the model played a role in the capture of Venezuelan President Nicolás Maduro in January 2026.

But things broke down when the Pentagon wanted to use Claude without the usage restrictions Anthropic maintains. CEO Dario Amodei refused to remove them. Hegseth designated the company a risk on March 3.

Military Personnel Want to Keep Claude

Reuters reports that military employees using Claude daily for classified work are frustrated by the ban. According to these sources, the tool is difficult to replace in the short term — making Hegseth's ban more symbolic than operational for now.

Tech Industry Rallies Behind Anthropic

Silicon Valley has largely supported Anthropic. The LA Times reports that the Pentagon's attempt to strong-arm the company has triggered reflection about what ethical standards AI companies can and should maintain — even against governments.

What This Means for CIOs

This is an important case for anyone evaluating enterprise AI contracts: Vendors' ethical guidelines are not negotiable fine print. Anthropic is willing to lose one of the world's largest contracts rather than remove its AI safety principles.

For CIOs: Know your vendor's ethical boundaries. They can be triggered — with consequences for your contracts too.

📬 Did you like this?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.
