2026: When AI moves from software to the physical world

Analog Devices executives predict that by 2026 AI will become more physical, decentralised and embedded at the edge, reshaping robotics, consumer devices and industrial automation.

Ayushi Singh

For much of the past decade, artificial intelligence has largely lived on screens and in data centres, powering chatbots, recommendation engines, and analytics tools. By 2026, that model is expected to change. According to senior executives at Analog Devices (ADI), AI is entering a phase where it increasingly interacts with the physical world, reshaping industries ranging from manufacturing and robotics to consumer electronics and automotive systems.

ADI, a long-established semiconductor company, operates largely behind the scenes, supplying the analogue, mixed-signal and embedded technologies that convert real-world phenomena, such as sound, motion, vibration and heat, into electrical signals that digital systems can process. As AI moves closer to the edge, processing data where it is generated rather than sending it to the cloud, this conversion layer is becoming central to how intelligent systems function.

Executives at ADI working across robotics, automation, and emerging AI argue that 2026 will mark an inflection point: AI systems will become more autonomous, more decentralised, and more tightly integrated with physical environments.

From Digital Intelligence to Physical Intelligence

Paul Golding, Vice President of Edge AI and Robotics at ADI, expects the next wave of AI to move beyond text and images towards what he describes as “physical intelligence”. While the scaling laws behind large language and vision models will continue to hold, he says they will increasingly be applied to models trained on physical signals such as motion, sound, and magnetic fields.

These systems, Golding argues, will shift from the data centre to the edge, enabling machines to reason and act locally without relying on constant connectivity. For businesses, this could reduce latency, improve reliability, and lower infrastructure costs, particularly in environments such as factories, warehouses, and transport systems where real-time decisions matter.
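As a rough illustration of that local-first pattern, the sketch below keeps inference on the device and only batches telemetry for later upload, so the control loop never waits on a network round trip. It is a minimal assumption-laden sketch, not ADI code: `read_vibration_sensor` and `local_model` are invented stand-ins for a real sensor read and a real edge model.

```python
import random
import time
from collections import deque

# Hypothetical stand-ins: any small on-device model and sensor read would do.
def read_vibration_sensor() -> float:
    """Placeholder for an ADC read; returns a normalised vibration level."""
    return random.uniform(-1.0, 1.0)

def local_model(sample: float) -> str:
    """Tiny threshold classifier standing in for an edge AI model."""
    return "anomaly" if abs(sample) > 0.8 else "normal"

telemetry = deque(maxlen=1000)  # batched for occasional cloud upload

for _ in range(10):
    sample = read_vibration_sensor()
    label = local_model(sample)               # inference happens on-device
    if label == "anomaly":
        print("local action: slowing motor")  # act now, no network round trip
    telemetry.append((time.time(), sample, label))
# Telemetry is uploaded opportunistically; the control loop never waits on it.
```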

Instead of rigid automation, manufacturers may see robots that adapt to unexpected conditions, learning from limited examples and responding to new situations autonomously. Hybrid “world models” that combine physics-based reasoning with sensor-driven data are expected to become more common, allowing machines not just to describe their surroundings but to interact with them and learn through experience.
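A long-standing, minimal example of combining physics-based reasoning with sensor data is the complementary filter used in motion tracking. The hedged sketch below shows the idea at its simplest; the "world models" Golding describes would be far richer learned systems, and the rates and coefficients here are illustrative.

```python
# Complementary filter: a physics-based prediction corrected by sensor data.
def fuse(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend gyro integration (physics model) with an accelerometer reading."""
    predicted = angle_prev + gyro_rate * dt  # physics: integrate angular rate
    return alpha * predicted + (1 - alpha) * accel_angle  # correct with sensor

angle = 0.0
for gyro_rate, accel_angle in [(0.5, 0.01), (0.4, 0.02), (0.3, 0.025)]:
    angle = fuse(angle, gyro_rate, accel_angle, dt=0.01)
    print(f"fused tilt estimate: {angle:.4f} rad")
```

The physics term keeps the estimate smooth between measurements; the sensor term stops it drifting, which is the same division of labour hybrid world models aim for at much larger scale.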

Audio Emerges as a Key AI Interface

Another shift expected by 2026 is the growing role of audio as an AI interface in consumer and industrial devices. Golding points to advances in spatial audio, sensor fusion, and on-device processing that could turn hearables, in-vehicle systems, and augmented reality glasses into contextual companions rather than passive accessories.

For device makers, this could translate into improved noise cancellation, longer battery life, and new form factors, as audio-based AI begins to interpret intent, emotion, and environmental context. In sectors such as automotive, mobility, and consumer electronics, audio-driven interfaces may reduce reliance on screens while enabling more natural human–machine interaction.
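To make "on-device processing" concrete, the sketch below computes two classic lightweight audio features, frame energy and zero-crossing rate, of the kind a crude voice-activity gate might run on a microcontroller before waking a larger model. The thresholds and frame size are illustrative assumptions, not an ADI design.

```python
import math

def frame_features(samples):
    """Cheap per-frame features computable on a microcontroller."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return rms, zero_crossings

def is_speech(samples, rms_floor=0.05, zc_max=40):
    """Crude voice-activity gate: loud enough, but not noise-like."""
    rms, zc = frame_features(samples)
    return rms > rms_floor and zc < zc_max

# A 160-sample frame (10 ms at 16 kHz) of a synthetic 200 Hz tone:
frame = [0.3 * math.sin(2 * math.pi * 200 * n / 16000) for n in range(160)]
print(is_speech(frame))  # True: energetic, low zero-crossing rate
```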

Agentic AI and Digital Twins on the Factory Floor

Golding also expects 2026 to see wider adoption of agentic AI: systems that do not merely predict outcomes but make decisions and act on them. These models are likely to be trained and tested within physically accurate simulation environments, accelerating the use of digital twins across industrial settings.

For businesses, this could move AI from advisory roles into operational control. Rather than simply flagging a machine likely to fail, an AI agent could autonomously reroute production, adjust workloads, and coordinate with supply chain systems. While such autonomy raises governance and safety questions, it also promises productivity gains and reduced downtime in capital-intensive industries.
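The difference between advisory and agentic behaviour fits in a few lines. In this hypothetical sketch, with all names and the threshold invented, a predicted failure probability does not just raise an alert but triggers a rerouting action, subject to a guardrail threshold:

```python
# Hypothetical agent loop: a prediction becomes an action, within guardrails.
FAILURE_THRESHOLD = 0.7

def predict_failure_probability(machine) -> float:
    """Stand-in for a learned predictive-maintenance model."""
    return machine["vibration"] * machine["temperature_norm"]

def act(machine, prob):
    if prob > FAILURE_THRESHOLD:
        # Agentic step: don't just flag it, reroute work around the machine.
        print(f"rerouting jobs away from {machine['id']} (p_fail={prob:.2f})")
        return {"action": "reroute", "machine": machine["id"]}
    return {"action": "monitor", "machine": machine["id"]}

fleet = [
    {"id": "press-01", "vibration": 0.9, "temperature_norm": 0.9},
    {"id": "press-02", "vibration": 0.3, "temperature_norm": 0.5},
]
decisions = [act(m, predict_failure_probability(m)) for m in fleet]
```

An advisory system stops at the probability; the agent owns the `act` step, which is exactly where the governance questions concentrate.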

Smaller Models with Deeper Reasoning

Contrary to the focus on ever-larger foundation models, Golding predicts the rise of “micro-intelligences”: compact, task-specific AI systems capable of sophisticated reasoning within narrow domains. Running directly on chips and sensors, these models could orchestrate specialised agents at the edge.
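One way to picture such orchestration, as a loose sketch rather than any stated ADI architecture, is a dispatcher routing each sensor channel to its own tiny specialist model:

```python
# Hypothetical orchestration of small, task-specific models at the edge.
def bearing_wear_model(signal):      # narrow specialist #1
    return "worn" if max(signal) > 0.8 else "ok"

def belt_tension_model(signal):      # narrow specialist #2
    return "loose" if sum(signal) / len(signal) < 0.2 else "ok"

SPECIALISTS = {
    "bearing": bearing_wear_model,
    "belt": belt_tension_model,
}

def orchestrate(readings):
    """Route each sensor channel to its micro-intelligence."""
    return {name: SPECIALISTS[name](signal) for name, signal in readings.items()}

print(orchestrate({"bearing": [0.1, 0.9, 0.2], "belt": [0.1, 0.15, 0.1]}))
# {'bearing': 'worn', 'belt': 'loose'}
```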

For enterprises, this shift may lower barriers to deployment by reducing compute and energy requirements, enabling AI in environments where large models are impractical. It also signals a move towards new benchmarks that prioritise engineering problem-solving over abstract performance metrics.

Decentralised AI and Humanoid Robotics

From a hardware perspective, Massimiliano “Max” Versace, Vice President of Emergent AI at ADI, sees decentralised AI architectures entering early commercial use by the end of 2026, particularly in humanoid robotics. These systems draw inspiration from biology, distributing intelligence across sensors and local processing units rather than relying on a single central processor.

For robotics manufacturers, decentralised AI could mean smoother motion, faster reflexes, and lower power consumption, critical factors for robots operating in dynamic, real-world environments. By embedding neuromorphic and in-memory computing directly within sensors, robots can respond instantly to physical stimuli while freeing central systems for planning and learning.
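A toy version of that split might look like the following, where reflexes fire locally at each (hypothetical) sensor node and only compact summaries reach a slower central planner:

```python
# Sketch of a decentralised control split: reflexes live at the sensor,
# the central planner only sees summaries. All names are illustrative.
class SensorNode:
    def __init__(self, name, reflex_threshold):
        self.name = name
        self.threshold = reflex_threshold

    def process(self, reading):
        """Fast local loop: react immediately, report a summary upward."""
        reflex_fired = reading > self.threshold
        if reflex_fired:
            print(f"{self.name}: reflex, backing off actuator")
        return {"node": self.name, "reading": reading, "reflex": reflex_fired}

class CentralPlanner:
    def plan(self, summaries):
        """Slower loop: plans over aggregates, never raw sensor streams."""
        hot = [s["node"] for s in summaries if s["reflex"]]
        return f"replan around: {hot}" if hot else "continue plan"

nodes = [SensorNode("wrist-torque", 0.7), SensorNode("ankle-load", 0.9)]
summaries = [n.process(r) for n, r in zip(nodes, [0.85, 0.4])]
print(CentralPlanner().plan(summaries))
```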

The Return of Analogue AI Compute

Versace also points to the re-emergence of analogue AI computing, driven by the energy and latency constraints facing digital architectures. Unlike conventional systems that separate sensing and computation, analogue AI leverages the physics of the hardware itself to perform inference.
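The underlying trick is well documented in the research literature: a resistive crossbar performs a matrix-vector multiply physically, because Ohm's law multiplies each input voltage by a stored conductance and Kirchhoff's current law sums the products on each output wire. The sketch below simulates that readout digitally, with made-up conductance values:

```python
# Simulating the core analogue-compute idea: a resistive crossbar computes a
# matrix-vector multiply in one physical step. Each weight is stored as a
# conductance G[i][j]; applying input voltages V and summing the per-column
# currents yields I = V @ G (Ohm's law + Kirchhoff's current law).
G = [  # conductances (siemens), standing in for trained weights
    [0.002, 0.001],
    [0.003, 0.004],
    [0.001, 0.002],
]
V = [1.0, 0.5, 0.2]  # input voltages, one per crossbar row

# Column currents: I_j = sum_i V_i * G[i][j]
I = [sum(V[i] * G[i][j] for i in range(len(V))) for j in range(len(G[0]))]
print(I)  # [0.0037, 0.0034] amperes: the multiply result, read out as current
```

Because the multiply happens in the physics rather than in clocked logic, the energy and latency costs Versace highlights largely disappear, at the price of analogue noise and precision limits.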

Initial deployments are expected in robotics, wearables, and autonomous systems, where power efficiency and real-time responsiveness are essential. For businesses, analogue AI could unlock longer battery life, more natural interactions, and reduced system complexity, particularly at the edge.

What This Means for Businesses in 2026

Taken together, these trends suggest that 2026 will be less about AI becoming smarter in isolation and more about it becoming embedded, autonomous, and physically aware. For enterprises, this shift will require rethinking infrastructure, skills, and risk management, as intelligence moves closer to machines, sensors, and users.

Rather than treating AI as a cloud-based software layer, businesses may increasingly view it as part of their physical systems, shaping how factories run, how devices interact with users, and how robots navigate the real world. As sensing, computation, and response converge at the edge, AI’s next phase is likely to be defined not by what it can say, but by what it can do.
