
White House blocks broader access to Anthropic Mythos
Breaking
CIO • CEO • Board • AI Strategy • Security


Joachim Høgby
April 30, 2026 • 3 min read • Source: The Wall Street Journal

The White House is opposing Anthropic's plan to broaden access to its Mythos AI model.

According to The Wall Street Journal, Anthropic wanted to let roughly 70 additional companies and organizations use Mythos, which would have brought total access to about 120 entities. Administration officials reportedly opposed the move on security grounds, including the concern that broader access could consume compute capacity the government wants reserved for its own use. Bloomberg reported the same opposition, citing an administration official.

The new element is not that Mythos is sensitive. That has already been part of the dispute between Anthropic, the Pentagon and the White House. The new element is direct government intervention in distribution: who can use the model, how broadly it can be rolled out, and whether the supplier has enough compute to serve both commercial and government demand.

To be clear, this story is based on reporting from WSJ and Bloomberg, not on a public Anthropic announcement, so the details should be treated as reported information. The direction is still clear: frontier models with cyber capabilities are becoming more than a product choice. They are becoming a national-security, capacity-allocation and regulatory-control issue.

For CIOs and boards, the practical consequence is straightforward. A model that is available today may be restricted tomorrow. A supplier can be pulled between commercial customers, defense customers and political requirements. Security decisions can happen outside the normal enterprise procurement process.

This should be reflected in AI governance now:

  • Maintain a formal register of which models are used for code, security, data analysis and operational decisions.
  • Classify high-risk models explicitly, especially models that can automate cyber work or touch production systems.
  • Build exit plans for critical workflows if a model, region or API becomes restricted.
  • Require suppliers to document logging, data handling, region support, capacity commitments and notification duties when regulatory conditions change.
  • Separate productivity tools from models that should be treated as security-critical infrastructure.
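The register and classification items above can be sketched as a simple data structure. This is a minimal illustration, not a prescribed schema; every name and value below is an assumption for the example:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PRODUCTIVITY = "productivity"            # general office/developer assistance
    SECURITY_CRITICAL = "security-critical"  # can automate cyber work or touch production

@dataclass
class ModelRegisterEntry:
    model: str                       # model/API identifier as used internally
    supplier: str
    use_cases: list[str]             # code, security, data analysis, operations, ...
    risk_class: RiskClass
    exit_plan: str                   # fallback workflow if access becomes restricted
    supplier_commitments: list[str]  # logging, data handling, regions, capacity, notice

    def needs_leadership_signoff(self) -> bool:
        """Security-critical models get risk ownership at leadership level."""
        return self.risk_class is RiskClass.SECURITY_CRITICAL

# Illustrative entry; all values are placeholders, not real suppliers or terms.
entry = ModelRegisterEntry(
    model="frontier-model-x",
    supplier="ExampleAI",
    use_cases=["code", "security analysis"],
    risk_class=RiskClass.SECURITY_CRITICAL,
    exit_plan="Fall back to internal static-analysis pipeline",
    supplier_commitments=["audit logging", "EU region support", "30-day change notice"],
)
print(entry.needs_leadership_signoff())  # True
```

Even a register this simple forces the two decisions the list above asks for: an explicit risk class per model, and a named exit plan before the model becomes load-bearing.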

Assessment: this is an early warning that AI access may become as politicized as cloud regions, cryptography and export controls. Companies do not need to stop using advanced models. But they should stop treating model choice as a pure developer or procurement decision. For the most capable models, risk ownership belongs with leadership, not only with the team that found a good API.
