Anthropic Hires Chemical Weapons Expert After Pentagon Fallout
Anthropic · AI Safety · Pentagon · CIO


Joachim Høgby
23 March 2026 · 3 min read

Following its break with the Pentagon over unrestricted AI use, Anthropic is taking an unexpected step: the company is now recruiting an expert in chemical weapons and explosives.

Don't misread this. Anthropic isn't building bombs. Quite the opposite. The company wants to hire someone who can help formulate a clear policy on what Claude can and cannot assist with when it comes to dangerous information about chemicals and explosives.

The job posting describes the role as working around "how AI systems handle sensitive chemical and explosives information," in close collaboration with AI safety researchers.

The context is a turbulent period. In March 2026, it emerged that Claude is integrated into Palantir's Maven system, used by the US military for target selection and other operations in Iran. Anthropic objected to the system being used without restrictions, triggering an open break with the Pentagon. OpenAI's models are now set to gradually replace Claude on classified military networks.

But leaving the military market doesn't mean Anthropic avoids difficult trade-offs. With Claude available to millions of users, the company needs clear guidelines for what the model can assist with. Bringing in domain expertise is a sensible move for a company that markets itself on putting AI safety first.

Claude is still in use at the Pentagon for ongoing operations. The gap between Anthropic's official position and the actual use of its technology highlights just how challenging it is to navigate between safety ambitions and commercial reality.

