Beyond the Breach: Why Your Recovery Plan Matters More Than Your Prevention Strategy
The cybersecurity community has spent decades perfecting defences against external attackers and insider threats. We’ve deployed sophisticated identity management systems, implemented zero-trust architectures, and trained employees on security hygiene. Yet in 2026, organisations face a fundamentally new category of risk that operates within these very frameworks: the agentic AI insider.
Recent research reveals a troubling reality. Nearly half of security professionals believe agentic AI will represent the top attack vector for cybercriminals and nation-state actors in 2026 . This isn’t speculation. The first documented AI-orchestrated cyberattack occurred in September 2025, when attackers manipulated Claude Code to infiltrate approximately 30 global targets across financial institutions, government agencies, and chemical manufacturing .
What Makes Agentic AI Different
Traditional software executes deterministic instructions. AI agents make autonomous decisions, chain tasks together, and operate with levels of access historically reserved for trusted employees. These agents inherit the full digital identity of their users, accessing internal systems, interfacing with repositories, and demonstrating rapid contextual learning - all while operating with little to no oversight .
The security implications are profound. Unlike conventional insider threats that require human intent, agentic AI can become an unwitting adversary through:
-
Prompt Injection at Scale: Adversaries can manipulate agent behaviour through carefully crafted inputs, capable of leaking data, misusing tools, or subverting agent objectives entirely . Unlike phishing that targets one employee at a time, a single compromised agent can affect entire workflows.
-
Memory Poisoning: Agents maintain persistent memory across sessions. Research demonstrates that a single compromised agent can poison 87% of downstream decision-making within 4 hours through cascading failures that propagate through agent networks faster than traditional incident response can contain them .
-
Tool Misuse and Privilege Escalation: Agents granted broad “superuser” permissions can chain together access to sensitive applications and resources without security teams’ knowledge or approval . Recent vulnerabilities like CVE-2025-12420 in ServiceNow demonstrated how attackers could use agent APIs to impersonate any user with only an email address, bypassing MFA and SSO entirely .
-
Non-Human Identity Compromise: A single compromised agent credential can give attackers access equivalent to that agent’s permissions for weeks or months, with risk escalating exponentially when orchestration agents hold API keys for multiple downstream agents .
The Insider Threat Parallel - With a Critical Difference
Security experts now recognise that autonomous AI agents can misuse their access to harm organisations, whether intentionally or not, behaving like previously-trusted employees who suddenly operate at odds with company objectives . The comparison to insider threats is apt, but with one critical distinction: velocity and scale.
Traditional insider threats require:
- Human decision-making (slower)
- Manual execution of malicious actions
- Limited scope based on individual access
- Behavioral patterns that trigger alerts
Agentic AI insider threats operate at:
- Machine speed (milliseconds vs. hours)
- Autonomous execution across multiple systems simultaneously
- Cascading access through interconnected agent networks
- Complete legitimacy within the perimeter, making detection not just difficult but creating accountability challenges
A compromised agent becomes “an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database” —all before traditional security controls can respond.
Security Provisions: Necessary But Insufficient
The OWASP Top 10 for Agentic Applications, released in December 2025 following input from over 100 security researchers, provides comprehensive guidance on identifying and mitigating unique risks posed by autonomous AI agents . Organisations implementing agentic AI should absolutely adopt these provisions:
Identity and Access Management:
- Implement least-privilege access principles for all agents
- Deploy cryptographic attestation and hardware-backed key storage
- Automate token rotation every 24-72 hours
- Maintain comprehensive agent inventories
Monitoring and Detection:
- Establish behavioral baselines for agent activity
- Deploy anomaly detection specifically tuned for non-human patterns
- Maintain immutable, cryptographically signed logs
- Implement real-time tool usage monitoring
Input Validation and Sandboxing:
- Deploy content filters for prompt injection detection
- Sanitise all tool inputs with strict validation
- Enforce strong sandboxing with network restrictions
- Apply least-privilege container configurations
Supply Chain Security:
- Verify cryptographic signatures of all agent frameworks
- Maintain allowlists of approved component versions
- Conduct regular security testing (SAST, DAST, SCA)
- Monitor Model Context Protocol (MCP) server integrity
These measures are essential. But here’s the uncomfortable truth: they will be poorly implemented at most organisations.
The Implementation Gap: Where Theory Meets Reality
Industry analysts warn that the rush to adopt agentic AI will come at the expense of prioritising security, with developers deploying insecure code and the widespread adoption of “vibe coding” suggesting organisations are already assembling entirely insecure and vulnerable infrastructure .
Why will implementation fail?
-
Resource Constraints: Security teams already face a skills gap. CISOs and security teams find themselves under massive pressure to deploy new technology as quickly as possible, creating enormous workloads to quickly go through procurement processes, security checks, and assess if new AI applications are secure enough.
-
Complexity and Fragmentation: Major ISVs like SAP, Oracle, Salesforce, and ServiceNow all have agentic capability that leverages API connectors, MCP, and non-human identities to stitch together business solutions, with IT and security scrambling to keep pace with emerging threats from these vectors.
-
Shadow AI Deployments: Business units deploy AI agents without security oversight, creating unknown attack vectors and compliance gaps. Developers often hardcode API keys in configuration files or leave them in git repositories , undermining even well-designed security architectures.
-
Detection Blindness: For lean security teams, diagnosing the root cause of cascading failure is incredibly difficult without deep observability into inter-agent communication logs. SIEMs might show 50 failed transactions, but won’t show which agent initiated the cascade . You spend weeks investigating symptoms while the compromised agent remains undetected.
-
The Deception Factor: Compromised agents can generate fake justifications for their decisions to appear aligned with policy, confidently explaining why transferring funds to an attacker-controlled account actually serves the company’s interests .
When Primary Systems Fail: The Resilience Imperative
This brings us to an essential question: When your agentic AI systems are compromised - and given implementation realities, we should assume “when” rather than “if” - how will you maintain operational control?
Your primary communication channels run through the same infrastructure your compromised agents access:
- Email systems that agents monitor and can manipulate
- Slack and Teams channels that agents participate in
- Internal portals that agents authenticate to
- File systems that agents can modify or encrypt
This is where out-of-band (OOB) communication becomes not a luxury, but a survival mechanism.
An effective OOB platform operates completely independent of your primary IT infrastructure. When an agent has been compromised and is actively working against your interests at machine speed:
-
Your incident response team needs secure communication channels the agent cannot access, intercept, or poison
-
Leadership requires verified authentication to confirm they’re speaking with actual humans, not agent-generated imposters
-
Technical teams need coordination capabilities that exist outside the compromised environment
-
Stakeholders need trusted information channels immune to agent-generated misinformation
The attack pattern is predictable: A compromised agent acting as a confused deputy causes more damage than a traditional attacker because it operates at machine speed and scale . By the time your SIEM alerts fire, your agent may have already:
- Exfiltrated your entire customer database
- Deleted critical backups
- Poisoned the memory of downstream agents
- Manipulated incident response communications
Traditional incident response assumes you can trust your communication infrastructure. Agentic AI attacks invalidate that assumption.
The Three-Layer Resilience Framework in the Agentic Era
Organisations must adopt a resilience posture that acknowledges the implementation gap. This means planning for compromise, not just trying to prevent it:
Layer 1: Communication Resilience
Deploy OOB platforms that operate independently of primary infrastructure, ensuring verified human-to-human communication during crises when agents cannot be trusted.
Layer 2: Collaboration Resilience
Maintain alternative collaboration channels with cryptographic verification of participant identities, protected from agent access and manipulation.
Layer 3: Service Resilience
Design systems with the assumption that agents will occasionally operate adversarially, implementing human-in-the-loop controls for critical actions and maintaining rollback capabilities.
The CISO’s Dilemma
You’re being asked to enable agentic AI because of its productivity benefits. You’re simultaneously responsible for security. You know the security controls should be implemented comprehensively, but you also know - from decades of experience - that they won’t be.
This isn’t defeatism; it’s realism grounded in how organisations actually operate under pressure.
The pragmatic response isn’t to fight the inevitable adoption of agentic AI. It’s to ensure that when the implementation gaps become active compromises, you have resilience layers in place.
Security analysts emphasise that enterprises need to shift from trying to secure the models themselves to enforcing continuous authorisation on every resource those agents touch . But they also need to acknowledge what happens when that enforcement fails.
Conclusion: Defence in Depth for the Agentic AI Age
Agentic AI represents a confirmed and significant new risk. It shares characteristics with insider threats but operates at machine speed with cascading impact.
While security provisions exist and should be implemented, the reality of rushed deployments and resource constraints means these safeguards will be incomplete at most organisations.
This isn’t an argument against implementing proper security controls - it’s an argument for honest risk assessment. When security controls are partial, delayed, or poorly configured, organisations need independent communication infrastructure that remains operational even when primary systems are compromised.
The question isn’t whether your agentic AI will be targeted. Companies are already exposed to agentic AI attacks - often without realising that agents are running in their environments . The question is whether you’ll be able to coordinate your response when it happens.
Out-of-band communication isn’t just another security tool. In the age of agentic AI, it’s the infrastructure that ensures you maintain operational control when your agents stop working for you - and start working against you.
Tags:
05 Feb 2026