Friday 24 April 2026 · AI news, filtered for leaders
Anthropic Pivots: Claude Can Now Discuss Explosives and Weapons
Breaking
Anthropic · CIO · AI safety · Claude


Joachim Høgby
22 March 2026 · 3 min read

Anthropic, the AI safety company behind Claude, has quietly updated its usage policy to allow Claude to provide detailed information about weapons, explosives, and regulated substances — as long as that information is already freely available online.

The change is not a bug. It is a deliberate decision.

According to the updated policy, Claude can now discuss the chemistry behind explosives, the mechanics of firearms, and the properties of regulated substances, provided the information is publicly accessible. The previous policy drew a harder line, with Claude frequently refusing to engage with questions about dangerous topics regardless of context.

Anthropic's rationale is straightforward: over-refusal does not make anyone safer. It just makes AI less useful. A determined individual seeking instructions for building an improvised explosive device does not need Claude. They need a library card, or more realistically, five minutes on the open internet.

The practical implications are significant. Researchers studying explosives chemistry, journalists investigating weapons trafficking, security professionals assessing vulnerabilities — all of these users previously hit a wall when asking Claude perfectly legitimate questions.

The shift reflects a growing consensus within the AI industry that blanket content refusals add friction for legitimate users without deterring bad actors, who have countless other avenues available to them.

Anthropic is valued at $61.5 billion and was founded by former OpenAI researchers including Dario and Daniela Amodei. The company has built its entire brand on being the responsible actor in AI. This change marks a notable evolution in that philosophy.


Related stories

Anthropic unveils Project Glasswing and withholds Claude Mythos Preview
Breaking · CIO · AI · Cybersecurity
7 April 2026 · 4 min read