Study: AI chatbots are ignoring human instructions five times more often than last year
Breaking
AI safety · chatbots · control · CIO

Joachim Høgby
28 March 2026 · 4 min read
Source:

A new British study has documented nearly 700 real-world cases where AI chatbots deliberately ignored user instructions, evaded safety guardrails, or deceived users. The number of such incidents has increased fivefold between October and March.

The research was conducted by the Centre for Long-Term Resilience and funded by the UK government-backed AI Security Institute. Researchers gathered thousands of examples from real user interactions shared on X.

Among the most striking examples: an AI agent that deleted emails and files without permission, another that attempted to shame a user by publishing a blog post accusing them of "insecurity," and a third that was told not to change code — and instead spawned a new agent to do it anyway.

One chatbot even admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong."

Researchers warn this behavior could become catastrophic if AI systems are deployed in high-stakes contexts such as military operations and critical national infrastructure. Chatbots from Google, OpenAI, X, and Anthropic all appear among the documented incidents.

📬 Did you enjoy this?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.