In March 2026, San Francisco once again became the epicenter of the cybersecurity world. Thousands of practitioners, vendors, and investors gathered at Moscone Center for the RSA Conference, where one theme dominated every keynote, panel, and booth conversation: Agentic AI. Not just AI as a tool, but AI as an actor.
From autonomous code generation to decision-making systems that initiate actions without human intervention, the industry is entering a new phase. Developments like Mythos, a next-generation AI framework capable of orchestrating complex, multi-step cyber operations, highlight both the promise and the risk of this shift.
The Cloud Security Alliance warns of a surge in simultaneous AI-powered attacks and urges defenders to fight AI with AI. OpenAI has responded by scaling its Trusted Access for Cyber program to support thousands of verified defenders and hundreds of security teams. Gartner reinforces the trend, forecasting that AI spending will grow 44 percent in 2026 and reach $47 trillion by 2029, far exceeding its projected $238 billion for information security and risk management solutions in 2026.
The Dual-Use Reality of Agentic AI
Technologies like Mythos reveal a fundamental truth: the same capabilities that benefit defenders also empower attackers. Adversaries are already using AI to enable autonomous reconnaissance and lateral movement, real-time adaptation to defenses, and scalable, low-cost attacks with minimal human involvement. This is not theoretical. Early rogue AI agents are probing environments, exploiting misconfigurations, and mimicking legitimate users. Attackers no longer need to control every step; they can deploy agents that behave like identities.
The dual-use nature of AI is not new, but the speed at which agentic AI can operate changes the calculus for defenders. Traditional signature-based detection and manual response are insufficient. The industry must shift from defending against known threats to anticipating unknown behaviors, a challenge that requires a new paradigm.
The Risk of 'One More Tool'
Every major shift in cybersecurity has led to a wave of point solutions. The result is predictable: tool sprawl, siloed visibility, and operational complexity. These gaps often benefit attackers. Agentic AI risks following the same path. Early signs are already visible: AI security posture management tools, AI runtime protection platforms, AI-specific anomaly detection engines, and AI governance solutions. Each may provide value, but every additional tool adds friction. Organizations do not need more dashboards. They need better context and control over the entities operating in their environments, whether human or machine.
At the parallel AGC Cybersecurity Investor Conference, AI experts and industry leaders reached a more pragmatic conclusion: organizations should treat AI like an identity. This perspective cuts through the hype. Rather than viewing AI as a new tool category that requires entirely separate security stacks, it places AI within the established and critical domain of identity security.
Fundamentally, agentic AI behaves like an identity. It authenticates via APIs, tokens, or credentials. It accesses systems and data. It performs actions within an environment. It can be compromised, misused, or go rogue. Once you accept this, the path forward becomes clearer and far less fragmented.
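To make the comparison concrete, here is a minimal Python sketch of what registering an agent as a first-class identity could look like. The names (AgentIdentity, issue_token, the scope strings) are illustrative assumptions, not any vendor's API: the agent gets its own principal, an accountable owner, least-privilege scopes, and short-lived credentials instead of a static key.

```python
# Minimal sketch: treating an AI agent as a first-class identity.
# All names and scopes are illustrative, not a real product API.
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str                                     # unique principal, like a service account
    owner: str                                        # accountable human or team
    scopes: list[str] = field(default_factory=list)   # least-privilege permissions

def issue_token(agent: AgentIdentity, ttl_minutes: int = 15) -> dict:
    """Issue a short-lived, scoped credential rather than a long-lived API key."""
    return {
        "sub": agent.agent_id,
        "scopes": agent.scopes,
        "exp": (datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)).isoformat(),
        "token": secrets.token_urlsafe(32),
    }

# Example: a reporting agent that may read tickets and write reports, nothing more.
reporting_agent = AgentIdentity(
    agent_id="agent:reporting-bot-01",
    owner="secops-team",
    scopes=["tickets:read", "reports:write"],
)
print(issue_token(reporting_agent))
```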
Identity Threat Detection as the Foundation
If AI is treated as an identity, identity threat detection and risk mitigation solutions become the logical control plane. This approach focuses on analyzing behavior across credentials and systems. It combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform.
Applied to AI, this enables behavioral visibility to detect anomalies such as unusual access, privilege escalation, or data exfiltration; risk-based controls to adjust access, enforce additional verification, or isolate suspicious agents; unified policy enforcement across human and machine identities; and lifecycle management to prevent orphaned or unmanaged agents.
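As an illustration of those risk-based controls, the following hypothetical Python sketch scores a few behavioral signals observed for an agent session and maps the score to an adaptive response. The signals, weights, and thresholds are assumptions that would be tuned per environment, not values from any specific platform.

```python
# Illustrative sketch of risk-based controls for an AI agent session.
# Signals, weights, and thresholds are hypothetical and environment-specific.

def risk_score(signals: dict) -> int:
    """Combine behavioral signals into a simple additive risk score."""
    score = 0
    if signals.get("unusual_access"):        # touching systems outside its baseline
        score += 40
    if signals.get("privilege_escalation"):  # requesting scopes it was never granted
        score += 35
    if signals.get("bulk_data_read"):        # data volume far above normal
        score += 25
    if signals.get("new_destination"):       # contacting an endpoint never seen before
        score += 15
    return score

def decide(score: int) -> str:
    """Map the score to an adaptive control, from allow through isolation."""
    if score >= 70:
        return "isolate_agent"          # cut sessions and revoke credentials
    if score >= 40:
        return "step_up_verification"   # require re-authentication or owner approval
    return "allow"

session = {"unusual_access": True, "bulk_data_read": True}
score = risk_score(session)
print(score, decide(score))  # 65 step_up_verification
```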
As rogue AI agents emerge—whether compromised or malicious—identity-driven security provides a practical defense. It enforces least privilege, continuously validates access, detects abnormal behavior, and automates response actions. These capabilities already exist in modern identity security frameworks and can be extended to AI without introducing new silos.
The identity-first approach also aligns with broader zero-trust principles. In a zero-trust architecture, no entity—human or machine—is trusted by default. Every access request must be verified, and behavior must be continuously monitored. Applying this to AI agents ensures that even if an agent is compromised, its ability to cause harm is limited. This contrasts with the 'one more tool' approach, which often leaves blind spots between disparate systems.
Historical Context and Industry Shifts
The concept of treating non-human entities as identities is not new. In the early days of cloud computing, service accounts and API keys were often overlooked, leading to massive breaches. The industry learned that ignoring machine identities creates vulnerabilities. Today, with the proliferation of IoT devices, robotic process automation, and now autonomous AI agents, the same lesson is being relearned.
Frameworks from NIST and ISO have begun to incorporate machine identity management, but adoption remains uneven. The RSA Conference 2026 highlighted that agentic AI accelerates the need for a unified identity framework. Without it, organizations risk creating a fragmented defense that attackers can easily bypass.
Experts at the conference noted that the average enterprise already manages tens of thousands of machine identities—service accounts, API keys, certificates, and now AI agents. As the number of agentic AI deployments grows, the identity attack surface expands exponentially. A single misconfigured AI agent could provide a foothold for lateral movement, data theft, or ransomware deployment.
The financial sector offers a cautionary tale. Banks were early adopters of AI for fraud detection and trading, but many failed to secure the underlying AI agents. In 2025, a rogue trading algorithm at a major European bank caused millions in losses before being detected. Post-incident analysis revealed that the agent had been granted excessive privileges and lacked behavioral monitoring. The identity-first approach would have flagged its unusual trading patterns and revoked access.
Similarly, healthcare organizations deploying AI for patient diagnosis and record management are discovering that these agents often have access to sensitive data without proper oversight. Identity threat detection can enforce data access policies and alert when an AI agent queries records outside its scope.
Practical Steps for Implementation
Transitioning to an identity-driven security model for AI agents does not require a complete overhaul. Organizations can start by inventorying all AI agents in their environment, classifying them by risk level, and assigning unique identities with just-in-time permissions. Next, integrate behavioral analytics to establish baselines for normal agent activity—such as typical access patterns, data volumes, and communication endpoints. Any deviation triggers an adaptive response, such as step-up authentication or isolation.
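The baselining step might look like the sketch below, assuming per-agent telemetry such as daily data volumes is already being collected. The statistics and the three-sigma cutoff are placeholders for illustration, not a prescribed method.

```python
# Sketch: establish a per-agent behavioral baseline and flag deviations.
# Assumes daily telemetry per agent is already collected; numbers are illustrative.
from statistics import mean, stdev

def baseline(history: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of an agent's historical daily data volume (MB)."""
    return mean(history), stdev(history)

def is_anomalous(today: float, history: list[float], z_cutoff: float = 3.0) -> bool:
    """Flag today's volume if it sits more than z_cutoff standard deviations above baseline."""
    mu, sigma = baseline(history)
    if sigma == 0:
        return today > mu  # flat history: any increase is suspicious
    return (today - mu) / sigma > z_cutoff

# An agent that normally reads ~50 MB/day suddenly reads 900 MB.
history = [48.0, 52.0, 47.0, 55.0, 50.0, 49.0, 51.0]
if is_anomalous(900.0, history):
    print("deviation detected: trigger step-up verification or isolate the agent")
```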
Lifecycle management is equally critical. AI agents should be provisioned, reviewed, and decommissioned like any other identity. Many enterprises have ghost agents—AI scripts or models that were created for a pilot project and then forgotten. These orphaned identities are prime targets for attackers. Regular audits and automated cleanup can mitigate this risk.
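A periodic audit of that kind could be as simple as the following sketch, which flags agents that have no accountable owner or have been idle past a review window. The inventory format and the 90-day window are assumptions made for illustration.

```python
# Sketch: flag "ghost" agents with no owner or no activity within the review window.
# Inventory format and the 90-day window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(days=90)

inventory = [
    {"agent_id": "agent:reporting-bot-01", "owner": "secops-team",
     "last_used": datetime.now(timezone.utc) - timedelta(days=3)},
    {"agent_id": "agent:pilot-summarizer", "owner": None,  # pilot project, never handed over
     "last_used": datetime.now(timezone.utc) - timedelta(days=200)},
]

def audit(agents: list[dict]) -> list[str]:
    """Return agent IDs that should be reviewed or decommissioned."""
    now = datetime.now(timezone.utc)
    flagged = []
    for agent in agents:
        if agent["owner"] is None or now - agent["last_used"] > REVIEW_WINDOW:
            flagged.append(agent["agent_id"])
    return flagged

print(audit(inventory))  # ['agent:pilot-summarizer']
```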
The Cloud Security Alliance also recommends using AI-specific threat intelligence feeds that track known malicious agent behaviors. By correlating this intelligence with identity telemetry, organizations can detect attacks early. For example, if a vendor's AI agent starts communicating with a known command-and-control server, identity-based policies can automatically block the session and quarantine the agent.
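The correlation described in that example could be sketched as follows, assuming an intelligence feed of known command-and-control endpoints and a stream of per-agent network events. The feed contents, event format, and response hooks are placeholders rather than any product's interface.

```python
# Sketch: correlate agent network telemetry with a threat-intelligence blocklist.
# Feed contents, event format, and response hooks are illustrative placeholders.

KNOWN_C2 = {"203.0.113.99", "badc2.example.net"}  # hypothetical threat-intel entries

def handle_event(event: dict) -> str:
    """Block the session and quarantine the agent if it contacts a known C2 endpoint."""
    if event["destination"] in KNOWN_C2:
        # In a real deployment this would call the identity platform's APIs:
        # revoke credentials, terminate sessions, and notify the agent's owner.
        return f"quarantine {event['agent_id']}: contacted {event['destination']}"
    return "allow"

event = {"agent_id": "agent:vendor-assistant", "destination": "203.0.113.99"}
print(handle_event(event))
```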
OpenAI's Trusted Access for Cyber program is a step in the right direction, but it focuses on protecting the AI supply chain. The broader need is for a universal identity layer that works across all AI platforms, whether developed in-house, purchased from vendors, or accessed via APIs. This is where identity threat detection and response (ITDR) platforms come into play. Leading ITDR solutions now offer connectors for major AI frameworks, enabling unified visibility and control.
The conversations in San Francisco this March made one thing clear: the future of cybersecurity will be shaped by entities that can act independently. Some will be human. Many will not. As technologies like Mythos continue to push the boundaries of what AI can do, the industry must evolve its defensive mindset accordingly. The most effective strategy may also be the simplest: If it can act, it should be treated like an identity. By anchoring AI security within identity threat detection and risk mitigation frameworks, organizations can protect against rogue agents—without adding yet another fragmented tool to an already complex defense arsenal.
Source: SecurityWeek News