Tenable report warns of growing AI exposure gap in cloud environments

Tenable’s 2026 report finds organisations are accumulating AI and cloud security risks faster than they can manage them, particularly across identities and supply chains.

Voice&Data Bureau

Tenable has published its Cloud and AI Security Risk Report 2026, examining security risks linked to cloud and artificial intelligence (AI) adoption. The research indicates that organisations are accumulating AI-related cyber risks more quickly than they can manage them, creating what the company describes as a “zero-margin AI exposure gap”.


According to the report, engineering velocity, driven by AI adoption, the growing use of third-party code and the sheer scale of cloud environments, has outpaced the ability of security teams to assess, prioritise and remediate risks before they are exploited.

The AI exposure gap

The report characterises the AI exposure gap as a largely unseen layer of risk emerging across applications, infrastructure, identities, agents and data. Many security teams lack the visibility and integrated controls required to manage this complexity effectively. Analysis of cloud environments identified significant risks in four areas: AI security posture, software supply chain exposure, implementation of least-privilege access and cloud workload security. The report also outlines practical steps for security and business leaders to mitigate risk across cloud and AI environments.

Key findings

The findings show that 70 per cent of organisations have integrated at least one AI or Model Context Protocol (MCP) third-party package, embedding AI within applications and infrastructure, often without centralised security oversight. A further 86 per cent host third-party code packages containing critical-severity vulnerabilities, making the software supply chain a persistent source of exposure. In addition, 13 per cent have deployed packages with a known history of compromise, including those hit by the s1ngularity and Shai-Hulud worms.


The research also indicates that 18 per cent of organisations have granted AI services administrative permissions that are rarely audited, potentially creating accessible privilege pathways. Non-human identities, such as AI agents and service accounts, account for a higher proportion of risk at 52 per cent, compared with 37 per cent attributed to human users. These identities often form complex combinations of permissions that fragmented security tools fail to correlate. The report further notes that 65 per cent of organisations hold unused or unrotated cloud credentials, described as “ghost” secrets, with 17 per cent of these linked to critical administrative privileges. In addition, 49 per cent of identities with critical-severity excessive permissions are dormant.

Liat Hayun, Senior Vice President of Product Management and Research at Tenable, said AI systems embedded in infrastructure introduce additional risks that security leaders must address alongside emerging threats from AI and cloud technologies. She added that limited visibility and governance can leave organisations exposed to risks such as over-privileged cloud identities, and that a unified exposure management approach can help prioritise business risk more effectively.

Managing emerging risks

To address emerging risks, the report recommends strengthening oversight of AI integration through improved visibility and identity-centred controls. This includes enforcing least-privilege access for AI roles, addressing risks posed by dormant or unused credentials and reducing exposure from static secrets. It also highlights the need to treat third-party code and external accounts as extensions of organisational infrastructure, calling for unified visibility across software packages, virtual machines, identity access systems and cloud environments to reduce supply chain risk.


The 2026 report is based on analysis by Tenable Research using anonymised telemetry from public cloud and enterprise environments collected between April and October 2025, with AI-specific findings extended through December 2025. It defines exposure management as the practice of identifying, evaluating and prioritising risks across all potential entry points an attacker could exploit, including software vulnerabilities, misconfigurations, excessive user privileges, cloud security gaps and assets introduced through AI and third-party supply chains.