SailPoint: AI agent surge sparks urgent call for stronger security

The report highlights a notable contradiction: while 96% of technology professionals view AI agents as an emerging security risk, 98% of organisations intend to expand their use in the coming year.

Voice&Data Bureau

SailPoint, a provider of unified identity security solutions for enterprises, has published a new research report titled "AI Agents: The New Attack Surface – A Global Survey of Security, IT Professionals and Executives." The report underscores the urgent need for stronger identity security measures as AI agents become more widely adopted.

The survey was conducted by independent research firm Dimensional Research and included 353 qualified IT professionals responsible for AI, security, identity management, compliance, and operations at enterprise organisations. Participants, representing all levels of seniority, were based across five continents, offering a broad international perspective.

According to the findings, 82% of organisations are already using AI agents, yet only 44% have policies in place to secure them. The report highlights a notable contradiction: while 96% of technology professionals view AI agents as an emerging security risk, 98% of organisations intend to expand their use in the coming year.

The term “AI agent” (or “agentic AI”) broadly refers to autonomous systems capable of perceiving, making decisions, and taking actions to achieve specific goals within a given environment. These agents typically require multiple machine identities to access data, applications, and services. They also present unique challenges, such as self-modification and the potential to create sub-agents. Notably, 72% of respondents believe AI agents pose a greater risk than machine identities.
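To make the identity mechanics concrete, the sketch below (illustrative only, not drawn from the report) shows a Python agent loop in which each action runs under a short-lived, narrowly scoped machine credential instead of standing access. The broker, scopes, and method names are assumptions invented for this example.

```python
import time
from dataclasses import dataclass

@dataclass
class ScopedToken:
    """A short-lived credential tied to one agent and one scope."""
    agent_id: str
    scope: str          # e.g. "crm:read" or "finance:read"
    expires_at: float

class IdentityBroker:
    """Hypothetical issuer of per-action machine credentials."""
    TTL_SECONDS = 60    # tokens expire quickly: no standing access

    def request_token(self, agent_id: str, scope: str) -> ScopedToken:
        # A real broker would authenticate the agent and check policy here.
        return ScopedToken(agent_id, scope, time.time() + self.TTL_SECONDS)

def run_agent_step(broker: IdentityBroker, agent_id: str, action: str, scope: str) -> None:
    """Perceive -> decide -> act, acquiring a fresh identity per action."""
    token = broker.request_token(agent_id, scope)
    if time.time() >= token.expires_at:
        raise PermissionError("credential expired before use")
    print(f"{agent_id} performs {action!r} under scope {token.scope}")

broker = IdentityBroker()
# Each data source the agent touches maps to its own scoped identity,
# which is why a single agent can accumulate multiple machine identities.
run_agent_step(broker, "agent-42", "summarise open tickets", "crm:read")
run_agent_step(broker, "agent-42", "draft revenue report", "finance:read")
```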

Key concerns associated with AI agents include:

  • Access to privileged data (60%)

  • Potential to perform unintended actions (58%)

  • Sharing of sensitive information (57%)

  • Decision-making based on inaccurate or unverified data (55%)

  • Accessing or distributing inappropriate information (54%)

“Agentic AI is both a powerful tool for innovation and a potential source of risk,” said Chandra Gnanasambandam, EVP of Product and CTO at SailPoint. “These autonomous systems are reshaping the way work is done, but they also represent a new attack surface. Operating with broad access to sensitive data and systems, yet with limited oversight, they are particularly vulnerable to exploitation. As organisations expand their use of AI agents, it is critical to adopt an identity-first approach, treating these agents with the same level of governance as human users, ensuring real-time permissions, least privilege, and full visibility into their actions,” Gnanasambandam added.
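Gnanasambandam's prescription of real-time permissions, least privilege, and full visibility can be pictured with a small, hypothetical sketch: every agent action passes through a policy check, and every decision, allowed or denied, lands on an audit trail. The policy table and function names here are invented for illustration, not SailPoint's product API.

```python
from datetime import datetime, timezone

# Hypothetical policy store: which scopes each agent identity may use.
POLICY = {"agent-42": {"crm:read"}}
AUDIT_LOG: list[dict] = []

def authorize(agent_id: str, scope: str) -> bool:
    """Real-time permission check; every decision is recorded."""
    allowed = scope in POLICY.get(agent_id, set())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "scope": scope,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

# Least privilege in action: the CRM read succeeds, the finance write is denied.
for scope in ("crm:read", "finance:write"):
    verdict = "granted" if authorize("agent-42", scope) else "denied"
    print(f"agent-42 {verdict} {scope}")

# Full visibility: allowed and denied requests alike sit on the audit trail.
for entry in AUDIT_LOG:
    print(entry)
```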

AI agents are currently being used to access customer data, financial records, intellectual property, legal documents, supply chain information, and other highly sensitive material. However, many organisations express concerns about their ability to control what data these agents access and share. An overwhelming 92% of respondents said that governing AI agents is essential to enterprise security. Alarmingly, 23% reported incidents where AI agents were deceived into revealing access credentials. Furthermore, 80% indicated that their AI agents had taken unintended actions, including:

  • Accessing unauthorised systems or resources (39%)

  • Accessing sensitive or inappropriate data (31%)

  • Sharing sensitive or inappropriate data (33%)

  • Downloading sensitive content (32%)

AI agents are more than system components; they are a distinct category of identity. With nearly all surveyed organisations (98%) planning to expand their use of agentic AI in the next year, robust identity security frameworks must be in place. These should cover human, machine, and AI identities, providing comprehensive discovery, unified visibility, enforcement of zero standing privilege, and auditability. In an era of frequent data breaches, inadequately governed AI agents significantly heighten organisational risk.
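As one way to picture such a framework, the hedged sketch below models a unified registry spanning human, machine, and AI identities and flags any agent holding standing entitlements, the condition zero standing privilege forbids. All types and names are illustrative assumptions, not a description of SailPoint's implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class IdentityType(Enum):
    HUMAN = "human"
    MACHINE = "machine"
    AI_AGENT = "ai_agent"

@dataclass
class Identity:
    name: str
    kind: IdentityType
    # Empty by default: identities start with zero standing privilege.
    entitlements: set[str] = field(default_factory=set)

REGISTRY = [
    Identity("j.doe", IdentityType.HUMAN, {"crm:read"}),
    Identity("backup-svc", IdentityType.MACHINE, {"storage:write"}),
    Identity("agent-42", IdentityType.AI_AGENT),  # no standing entitlements
]

def audit_standing_privilege(registry: list[Identity]) -> None:
    """Unified visibility: flag any AI agent holding standing entitlements."""
    for ident in registry:
        if ident.kind is IdentityType.AI_AGENT and ident.entitlements:
            print(f"VIOLATION: {ident.name} holds standing access {ident.entitlements}")
        else:
            print(f"ok: {ident.name} ({ident.kind.value}) -> {sorted(ident.entitlements) or 'none'}")

audit_standing_privilege(REGISTRY)
```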