AI news, pre-filtered for executives
LATEST:

US may override AI risk flag in Anthropic dispute • EU failed to agree on softer AI Act rules • OpenAI moves GPT-5.5, Codex and agents into Amazon Bedrock

Breaking
CIO · CEO · Board · AI Strategy · Security

US may override AI risk flag in Anthropic dispute

Joachim Høgby
29 April 2026 · 3 min read · Source: Reuters

Reuters reports that the White House is considering a way around a supply-chain risk flag placed on Anthropic.

The facts first: Reuters cites an Axios report saying US federal agencies could receive guidance allowing them to bypass a supply-chain-risk designation on Anthropic and onboard new AI models, including Mythos. Reuters also states that it could not immediately verify the Axios report independently. Anthropic declined to comment, and the White House did not respond to Reuters before publication.

This is not a routine procurement dispute. According to Reuters, the conflict followed Anthropic's earlier refusal to remove guardrails against using its AI for autonomous weapons or domestic surveillance. The Pentagon then designated the Claude maker as a supply-chain risk. President Donald Trump said last week that Anthropic was “shaping up” after meetings between Anthropic CEO Dario Amodei and White House officials.

The story matters outside the US. Norwegian and European organisations are increasingly building AI portfolios across several model providers, often through hyperscalers and public procurement frameworks. The case shows that model access, safety classification and vendor usage limits can become political bargaining points, not just technical product attributes.

The assessment: CIOs should not treat a model as safe simply because it appears in an approved catalogue. A risk flag that can be overridden is not, by itself, a control. Each organisation needs its own criteria for where the model may be used, what data it may see, who can approve exceptions, and what logging is required when agentic systems are used in security-sensitive workflows.
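One way to make such criteria operational is a machine-readable internal policy that deployment tooling checks before a model touches a workflow, independently of any vendor or government approval. A minimal sketch of the idea; every model name, use case and data class below is a hypothetical example, not taken from any real catalogue:

```python
# Sketch of an internal use-case policy check, decoupled from vendor approval.
# All model names, use cases and data classes are hypothetical illustrations.

POLICY = {
    "claude": {
        "allowed_use_cases": {"code_review", "document_summarisation"},
        "max_data_class": 2,            # e.g. 0=public .. 3=restricted
        "exception_approver": "ciso",   # who may approve out-of-policy use
        "require_agent_logging": True,  # agentic use must be logged
    },
}

def may_deploy(model: str, use_case: str, data_class: int,
               agentic: bool, logging_enabled: bool) -> tuple[bool, str]:
    """Return (allowed, reason). Appearing in an approved catalogue is never enough."""
    rules = POLICY.get(model)
    if rules is None:
        return False, f"no internal policy defined for model '{model}'"
    if use_case not in rules["allowed_use_cases"]:
        return False, f"use case '{use_case}' needs an exception from the {rules['exception_approver']}"
    if data_class > rules["max_data_class"]:
        return False, f"data class {data_class} exceeds limit {rules['max_data_class']}"
    if agentic and rules["require_agent_logging"] and not logging_enabled:
        return False, "agentic use requires logging to be enabled"
    return True, "allowed"
```

The point of the sketch is the default-deny shape: a model with no internal policy entry, or a use case outside the allow-list, is blocked until someone named in the policy explicitly approves an exception.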

For executives, the practical implication is to separate vendor approval from use-case risk. Put contractual limits around weapons, surveillance, vulnerability research and automated action. Make sure procurement, security and legal teams can stop a model deployment even if the vendor, the cloud platform or a public authority opens the door.

This is not an argument against Anthropic or Mythos. It is an argument for stronger internal AI governance. As AI models become strategic infrastructure, the exception process must be as explicit as the procurement process itself.

📬 Did you like this?

AI news for executives. Curated by a CIO who builds it himself. Daily in your inbox.