Joakim E. Andresen
January 31, 2025

Securely enabling AI agents

AI agents are not just advancing; they are fundamentally transforming business operations. As major tech companies continue to push AI capabilities forward, these tools are evolving from simple automation assistants into autonomous systems capable of executing complex, real-world tasks.

Just last week, OpenAI launched Operator, an AI agent designed to navigate and interact with web interfaces independently. This marks a significant step forward in the kinds of AI agents coming onto the market: they are moving beyond internal workflow and automation tools to become active digital participants, engaging with the same systems and environments as human users.

With the global AI agent market expected to grow from $5.1 billion in 2024 to $47.1 billion by 2030, these tools are poised to reshape industries. However, their adoption raises a critical question: How do we effectively use and enable AI agents without compromising the security of systems and data?

The dynamic nature of AI agents

Unlike traditional applications or human users, AI agents operate autonomously and adapt to real-time changes. They aren’t static tools but active participants in enterprise ecosystems, executing tasks that span multiple systems and environments. For example, an AI agent in logistics might optimize delivery routes, monitor shipments, and alert teams to potential delays.

This autonomy and complexity are what make AI agents so powerful - but also difficult to manage securely. The dynamic, evolving nature of their operations requires security frameworks that can keep pace, enabling seamless functionality without introducing unnecessary risk.

Why existing security approaches fall short

The foundation of traditional access control systems lies in static roles and permissions, which assume predictable user behavior and predefined workflows. While this model works well for human users performing consistent tasks, it quickly becomes a bottleneck when applied to AI agents.

AI agents often interact with sensitive data and systems in unpredictable ways, requiring permissions that change based on the context of their tasks. Static permissions can either over-restrict AI agents - limiting their potential - or overexpose sensitive systems, increasing the risk of exploitation.

This disconnect highlights the need for smarter access control that is both dynamic and context-aware, adapting permissions in real time to meet the demands of modern AI-driven environments.

Key pillars of secure access control for AI agents

Effective access control for AI agents requires a strategic approach built on the following core principles:

  1. Context-Aware Access: AI agents often perform interconnected tasks across teams. Access decisions must consider the context of the request to ensure that actions comply with organizational policies (see the sketch after this list).
  2. Adaptive, Temporary Permissions: Persistent access creates vulnerabilities. AI agents should operate under dynamic provisioning models, where access is granted only when needed and revoked immediately afterward.
  3. Governance and Auditability: Enterprises must establish clear policies for AI agent activities and maintain comprehensive audit trails to ensure accountability and compliance.
  4. Scalability: As businesses deploy more AI agents, unified frameworks that dynamically adapt to complex environments are critical for effective management.
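To make the first pillar concrete, here is a minimal sketch in Python of what a context-aware access check could look like. The agent IDs, resource names, task labels, and the POLICY table are hypothetical, invented purely for illustration; a real deployment would evaluate requests against the organization’s actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str   # which AI agent is asking
    resource: str   # what it wants to touch
    action: str     # what it wants to do
    task: str       # the business task it is currently executing

# Hypothetical policy table: which tasks justify which actions on which resources.
POLICY = {
    ("shipment-db", "read"): {"optimize-routes", "monitor-shipments"},
    ("alerting-api", "write"): {"monitor-shipments"},
}

def decide(request: AccessRequest) -> bool:
    """Allow the action only if the agent's current task justifies it."""
    allowed_tasks = POLICY.get((request.resource, request.action), set())
    return request.task in allowed_tasks

# The same agent gets different answers depending on context:
req = AccessRequest("agent-42", "alerting-api", "write", task="monitor-shipments")
assert decide(req)        # justified by the task at hand
req.task = "optimize-routes"
assert not decide(req)    # same agent, same resource, wrong context: denied
```

The point is that the permission is attached to the task, not to the agent: identical requests succeed or fail depending on what the agent is actually doing at that moment.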

The rise of guardian agents

An emerging and popular idea is to use AI to secure AI by establishing ‘guardian agents’ - Gartner predicts that as many as 40% of CIOs will insist on using them by 2028.

While these guardians will most likely play a significant role in AI oversight, they cannot replace the fundamental need for adaptive security. Guardian agents may help mitigate risks, enforce policies, and ensure AI systems operate within ethical and operational boundaries, but they will only be as effective as the security and governance structures they are built upon.

How to securely scale AI agent adoption

Ensuring AI agents can operate effectively without introducing security risks requires a shift toward real-time, adaptive security models. Instead of relying on static permission frameworks, businesses need security controls that adjust dynamically based on the AI agent’s context and tasks.

This means implementing granular, time-bound access, ensuring AI agents only have the permissions they need, when they need them. Governance structures must also evolve, providing continuous oversight and auditability to track AI agent activities and enforce policies in a scalable way.
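As a rough sketch of what granular, time-bound access with built-in auditability could look like, the snippet below issues grants that expire on their own and records every decision. The in-memory stores and function names are assumptions made for illustration; in practice this logic would live in your IAM or secrets-management infrastructure, backed by a durable audit log.

```python
from datetime import datetime, timedelta, timezone

# Illustrative in-memory stores; a real deployment would use the IAM system
# of record and a tamper-evident audit log.
_grants: dict[tuple[str, str], datetime] = {}
_audit_log: list[str] = []

def _audit(event: str) -> None:
    _audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def grant(agent_id: str, resource: str, ttl_seconds: int = 300) -> None:
    """Issue a permission that expires on its own; no standing access."""
    _grants[(agent_id, resource)] = datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)
    _audit(f"GRANT {agent_id} -> {resource} (ttl={ttl_seconds}s)")

def enforce(agent_id: str, resource: str) -> bool:
    """Honor only unexpired grants, drop stale ones, and log every outcome."""
    expires = _grants.get((agent_id, resource))
    allowed = expires is not None and datetime.now(timezone.utc) < expires
    if not allowed:
        _grants.pop((agent_id, resource), None)  # lazy revocation of expired grants
    _audit(f"{'ALLOW' if allowed else 'DENY'} {agent_id} -> {resource}")
    return allowed

grant("agent-42", "shipment-db", ttl_seconds=60)
assert enforce("agent-42", "shipment-db")      # within the window: allowed
assert not enforce("agent-42", "billing-db")   # never granted: denied, and logged
```

Because every grant carries an expiry and every decision is logged, the governance questions - who could do what, when, and why - can be answered from the audit trail rather than reconstructed after the fact.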

While tools like guardian agents will assist in AI oversight, they should complement - not replace - the core security and governance foundations businesses must establish.

Organizations that proactively adopt flexible, scalable security measures will be best positioned to integrate AI agents safely, unlocking their full potential without compromising control or trust.

To learn more about empowering and securing AI agents in your enterprise, download the E-guide: Access Control for AI Agents.
