Gartner warns misconfigured AI could shut G20 infra by 2028

Gartner warns misconfigured AI in cyber-physical systems may shut down critical infrastructure in a G20 nation by 2028, urging stronger safeguards.

Voice&Data Bureau

Gartner, Inc. has warned that, by 2028, a misconfigured artificial intelligence (AI) system within cyber-physical systems (CPS) could lead to the shutdown of national critical infrastructure in a G20 country.

Gartner defines CPS as engineered systems that integrate sensing, computation, control, networking and analytics to interact with the physical world, including human operators. The term encompasses operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), the industrial Internet of Things (IIoT), as well as robots, drones and other Industrie 4.0 technologies.

According to Wam Voster, Vice President Analyst at Gartner, the next major infrastructure failure may not stem from a cyberattack or natural disaster, but from an internal configuration error. He noted that a well-intentioned engineer, a flawed update script or even a minor miscalculation could trigger widespread disruption. Voster emphasised the need for a secure “kill switch” or override mechanism, accessible only to authorised operators, to prevent unintended shutdowns resulting from AI misconfiguration.
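The override mechanism Voster describes could take many forms; as a minimal illustration only (all class and operator names here are hypothetical, not from Gartner), the core idea is a gate that blocks AI-initiated shutdowns unless an authorised human has not taken manual control:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"
    MANUAL_OVERRIDE = "manual_override"

class OverrideController:
    """Hypothetical sketch of a secure override ('kill switch') gate.

    AI-initiated actions pass through this gate; only operators on an
    authorised list may engage manual override, which blocks shutdowns.
    """

    def __init__(self, authorised_operators):
        self.authorised = set(authorised_operators)
        self.mode = Mode.AUTONOMOUS

    def engage_override(self, operator_id: str) -> bool:
        # Only authorised operators can take manual control.
        if operator_id in self.authorised:
            self.mode = Mode.MANUAL_OVERRIDE
            return True
        return False

    def allow_ai_action(self, action: str) -> bool:
        # While in manual override, AI-initiated shutdowns are blocked.
        if self.mode is Mode.MANUAL_OVERRIDE and action == "shutdown":
            return False
        return True
```

A real implementation would of course add authentication, audit logging and fail-safe defaults; the sketch shows only the access-control principle.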

Misconfigured AI systems can autonomously suspend essential services, misinterpret sensor inputs or initiate unsafe operational responses. Such actions may cause physical damage or large-scale service outages, directly affecting public safety and economic stability. Critical systems such as power grids and manufacturing facilities are particularly exposed to these risks.

In modern electricity networks, for example, AI models are increasingly used to balance supply and demand in real time. A predictive model that incorrectly interprets normal demand fluctuations as instability could trigger unnecessary grid isolation or load shedding across entire regions, or even at national level.
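The failure mode is easy to see in a toy sketch (the numbers and function below are illustrative assumptions, not a real grid model): the same demand readings are benign under a sensible tolerance but trigger action under a misconfigured one.

```python
import statistics

def flags_instability(readings, baseline, threshold_pct):
    """Flag 'instability' when mean demand deviates from baseline by
    more than threshold_pct percent. A miscalibrated threshold turns
    normal fluctuation into a false instability signal."""
    deviation = abs(statistics.mean(readings) - baseline) / baseline * 100
    return deviation > threshold_pct

# Normal evening ramp-up: demand runs ~6% above baseline.
readings = [1060, 1055, 1065]  # MW
baseline = 1000                # MW

# Well-configured tolerance of 10%: no action taken.
# Misconfigured tolerance of 2%: flags instability and could
# trigger unnecessary load shedding.
```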

Voster also highlighted the growing complexity of AI models, describing many as effectively “black boxes”. Even developers may struggle to anticipate how minor configuration changes could influence system behaviour. As these systems become more opaque, the potential consequences of errors increase, reinforcing the importance of maintaining human oversight and intervention capabilities.

To mitigate such risks, Gartner advises chief information security officers (CISOs) to adopt several measures. These include implementing secure override modes for all critical CPS environments to ensure human control during autonomous operations; developing full-scale digital twins of critical infrastructure systems to test updates and configuration changes before deployment; and establishing real-time monitoring with rollback mechanisms for AI-driven changes. Gartner also recommends the creation of national AI incident response teams to coordinate action in the event of system failures.
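The monitoring-with-rollback recommendation can be sketched in a few lines (a simplified assumption of how such a safeguard might look; the class and the health check are hypothetical): snapshot the configuration before every AI-driven change, then revert if a guardrail check fails.

```python
import copy

class ConfigManager:
    """Hypothetical sketch of AI-driven configuration changes
    guarded by snapshots and automatic rollback."""

    def __init__(self, config):
        self.config = config
        self.history = []

    def apply_change(self, key, value):
        # Snapshot the current state before any AI-driven change.
        self.history.append(copy.deepcopy(self.config))
        self.config[key] = value

    def rollback(self):
        # Restore the last known-good configuration.
        if self.history:
            self.config = self.history.pop()

def health_check(config) -> bool:
    # Placeholder guardrail: reject obviously unsafe setpoints.
    return 0 < config.get("load_shed_threshold_pct", 10) <= 50

mgr = ConfigManager({"load_shed_threshold_pct": 10})
mgr.apply_change("load_shed_threshold_pct", 90)  # unsafe AI-driven change
if not health_check(mgr.config):
    mgr.rollback()  # configuration reverts to the safe value
```

In production the health check would run against real telemetry rather than a static rule, but the snapshot-and-revert pattern is the core of the recommendation.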

Further analysis is available to clients in the report “Predicts 2026: Emergent Critical Risks of AI in CPS Security”, which outlines the evolving threat landscape associated with AI-enabled infrastructure.
