Beyond the Model: The Expanded Attack Surface of AI Agents
This webinar from Airia, hosted on the Hacker News webinar series, features Rahul Parwani, Head of Product for Security Solutions at Airia, in conversation with CISO and moderator James Azar. The discussion explores why securing AI agents requires a fundamentally different approach than securing traditional AI models or LLMs alone.
The core argument: Guardrails alone are not enough. When organizations move from simple chatbots to agents with tools, data access, and autonomous capabilities, the attack surface expands dramatically. Traditional input/output filtering fails to address risks like indirect prompt injection, data exfiltration, and unintended tool actions.
Key Takeaways:
- Model security ≠ Agent security. Agents with tools and data access have far more attack vectors than standalone LLMs.
- Guardrails are necessary but insufficient. They can't stop malicious actions once a prompt injection bypasses detection.
- Indirect prompt injection is the top threat. Malicious instructions hidden in data sources can hijack agents without user input.
- Real exploits exist today. Microsoft Copilot vulnerabilities have enabled zero-click data exfiltration.
- Use multi-layered security. Combine guardrails, intent analysis, deterministic tool constraints, and parameter validation.
- "Security in the prompt" doesn't work. System prompt instructions can be overridden by injections.
- Apply least-privilege access to agents. Grant only the minimum permissions needed for each task.
- Agentic identity is emerging. Short-lived, scoped agent identities (via Entra, Okta, etc.) are becoming essential.
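To make the "deterministic tool constraints and parameter validation" takeaway concrete, here is a minimal sketch of an enforcement layer that sits between the model and its tools. All names (`ToolPolicy`, `validate_tool_call`, the `read_email` tool) are illustrative, not from the webinar or any specific product: the point is that the checks are deterministic code, so an injection that slips past guardrail filtering still cannot trigger an out-of-scope action.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Least-privilege policy for one agent: which tools it may call,
    and deterministic regex checks on each tool's parameters."""
    allowed_tools: set = field(default_factory=set)
    param_rules: dict = field(default_factory=dict)  # tool -> {param: regex}

def validate_tool_call(policy: ToolPolicy, tool: str, params: dict):
    """Return (ok, reason). Runs on every tool call the model emits,
    regardless of what the prompt or retrieved data instructed."""
    if tool not in policy.allowed_tools:
        return False, f"tool '{tool}' not permitted for this agent"
    for name, pattern in policy.param_rules.get(tool, {}).items():
        value = str(params.get(name, ""))
        if not re.fullmatch(pattern, value):
            return False, f"parameter '{name}' failed validation"
    return True, "ok"

# Hypothetical example: an email-summarizing agent may only read mail,
# never send it, and only from mailboxes on the corporate domain.
policy = ToolPolicy(
    allowed_tools={"read_email"},
    param_rules={"read_email": {"mailbox": r"[a-z.]+@example\.com"}},
)

print(validate_tool_call(policy, "read_email", {"mailbox": "alice@example.com"}))
print(validate_tool_call(policy, "send_email", {"to": "attacker@evil.example"}))
```

Because the allowlist and parameter rules are evaluated outside the model, they pair naturally with the short-lived, scoped agent identities mentioned above: the identity limits what credentials the agent holds, and the policy limits what it can do with them.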