68% of organizations can't tell AI agents from human users — a security crisis
A new study from the Cloud Security Alliance (CSA), published on March 26, 2026, reveals a serious security gap: 68 percent of organizations struggle to distinguish actions performed by AI agents from those performed by human users in their systems.
The findings come at a time when AI agents are being deployed faster than security teams can adapt. Seventy-three percent of surveyed organizations expect AI agents to become critical within the next year, but day-to-day practice is lagging behind those expectations.
The gaps are concrete
CSA points to three primary issues: weak access control for AI agents, poor credential hygiene, and a lack of identity attribution. Many systems were designed with humans in mind. When an AI agent performs an action, there is not always an audit trail, role separation, or any signal telling the system that a machine, rather than a person, is acting.
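To make the attribution gap concrete, here is a minimal sketch of an audit record that captures whether an action came from a human or an AI agent, and which human the agent acted on behalf of. The field names, actor IDs, and resources are illustrative assumptions, not part of the CSA study.

```python
import json
from datetime import datetime, timezone

def build_audit_record(actor_id: str, actor_type: str, action: str,
                       resource: str, on_behalf_of: str | None = None) -> str:
    """Build a structured audit entry recording who acted, what kind of
    identity it was (human vs. AI agent), and the accountable human, if any."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor_id": actor_id,          # e.g. a service principal or user ID (hypothetical)
        "actor_type": actor_type,      # "human" or "ai_agent"
        "on_behalf_of": on_behalf_of,  # the human the agent acted for, if any
        "action": action,
        "resource": resource,
    }
    return json.dumps(record)

# With attribution, an agent action and a human action are distinguishable after the fact:
print(build_audit_record("agent-invoice-bot", "ai_agent", "approve_invoice",
                         "erp/invoices/1042", on_behalf_of="user:j.doe"))
print(build_audit_record("user:j.doe", "human", "approve_invoice",
                         "erp/invoices/1043"))
```

A record like this is also what makes the compliance questions below answerable: it ties every agent action back to a named, accountable human.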
For organizations with strict compliance requirements such as ISO 27001 or GDPR, this is problematic. Who is responsible for an action performed by an AI agent? And can you prove after the fact what happened?
What should CIOs do now?
Experts recommend treating AI agents as a new class of identities in IAM systems. This means separate credentials, minimal access following the least-privilege principle, and detailed logging of all agent actions.
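As a rough illustration of what that could look like in practice, the sketch below gives each agent its own credential reference and an explicit, minimal allow-list of scopes, and logs every authorization decision. The agent names, scopes, and vault path are hypothetical assumptions, not a prescribed implementation.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Hypothetical registry of agent identities: each agent has its own
# credential reference (never a shared human credential) and only the
# scopes it strictly needs.
AGENT_POLICIES = {
    "agent-support-triage": {
        "credential_ref": "vault://agents/support-triage",
        "allowed_scopes": {"tickets:read", "tickets:comment"},
    },
}

def authorize_agent_action(agent_id: str, scope: str) -> bool:
    """Allow an agent action only if the requested scope is explicitly granted,
    and log the decision so every agent action leaves an audit trail."""
    policy = AGENT_POLICIES.get(agent_id)
    allowed = policy is not None and scope in policy["allowed_scopes"]
    logging.info("agent=%s scope=%s allowed=%s", agent_id, scope, allowed)
    return allowed

# Least privilege in practice: reading tickets is permitted, deleting them is not.
authorize_agent_action("agent-support-triage", "tickets:read")    # allowed=True
authorize_agent_action("agent-support-triage", "tickets:delete")  # allowed=False
```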
For organizations already rolling out Copilot, Claude, or their own agent platforms, this should be a priority in Q2 2026 before scale makes the problem unmanageable.
Source: Cloud Security Alliance, March 26, 2026.