Anthropic Hires Chemical Weapons Expert After Pentagon Fallout
Following its break with the Pentagon over unrestricted AI use, Anthropic is taking an unexpected step: the company is now searching for an expert in chemical weapons and explosives.
Don't misread this. Anthropic isn't building bombs. Quite the opposite. The company wants to hire someone who can help formulate a clear policy on what Claude can and cannot assist with when it comes to dangerous information about chemicals and explosives.
The job posting describes the role as focusing on "how AI systems handle sensitive chemical and explosives information," in close collaboration with AI safety researchers.
The context is a turbulent period. In March 2026, it emerged that Claude is integrated into Palantir's Maven system, used by the US military for target selection and other operations in Iran. Anthropic objected to the system being used without restrictions, triggering an open break with the Pentagon. OpenAI's models are now set to gradually replace Claude on classified military networks.
But leaving the military market doesn't mean Anthropic avoids difficult trade-offs. With Claude available to millions of users, the company needs clear guidelines for what the model can assist with. Bringing in domain expertise is a sensible move for a company that markets itself on putting AI safety first.
Claude is still in use at the Pentagon for ongoing operations. The gap between Anthropic's official position and the actual use of its technology highlights just how challenging it is to navigate between safety ambitions and commercial reality.
📬 Did you like this?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.