Rami Rahim, EVP and GM of Networking at HPE, outlines the company’s AI-native networking strategy at HPE Discover Barcelona 2025.
Hewlett Packard Enterprise (HPE) has outlined a strategy for AI-native networking and autonomous IT operations through an integrated platform combining assets from Aruba and Juniper Networks. The announcement was made at HPE Discover Barcelona 2025, five months after HPE completed its acquisition of Juniper, and highlights what the company calls a self-driving network approach to address operational complexity and support AI at scale.
HPE said the expanded portfolio delivers a consistent AIOps experience across HPE Aruba Networking Central and HPE Juniper Networking Mist, while extending observability and automation across compute, storage, networking, and cloud. These developments, the company claims, are designed to address growing demand for operational intelligence and secure connectivity across hybrid IT environments.
“Customers need networks that are purpose-built with AI and for AI to handle the rapid growth of connected devices, complex environments, and increasing security threats,” said Rami Rahim, Executive Vice President and General Manager of Networking at HPE.
A Unified AI-Native Foundation for Hybrid Environments
Central to the announcement is the use of a shared agentic AI and microservices framework, which HPE said enables AIOps-driven operations across both HPE Aruba Networking Central and HPE Juniper Networking Mist platforms. The goal is to unify experiences across campus, data centre, and edge networks while protecting prior investments.
Juniper’s Mist Large Experience Model (LEM) is being integrated with Aruba Central to analyse data from collaboration applications, using synthetic data from digital twins for early detection and resolution of video issues. Meanwhile, Aruba’s Agentic Mesh will be adopted by the Mist platform, enhancing anomaly detection and root-cause analysis through assistive or autonomous actions.
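HPE has not published the underlying algorithms, but the detection-to-remediation loop described above can be illustrated with a deliberately simple sketch: a rolling z-score flags an anomalous metric sample, and a toy rule maps it to a candidate root cause and a suggested assistive action. The metric name, threshold, and rule below are illustrative assumptions, not part of Agentic Mesh or Mist.

```python
# Illustrative sketch only -- not HPE's Agentic Mesh or Mist implementation.
# A rolling z-score flags anomalous latency samples, and a toy rule table
# maps the anomaly to a candidate root cause and a suggested assistive action.
from collections import deque
from statistics import mean, stdev

WINDOW = 30          # samples of history to keep (assumed)
Z_THRESHOLD = 3.0    # flag values more than 3 standard deviations out (assumed)

history = deque(maxlen=WINDOW)

def check_sample(value: float):
    """Return (is_anomaly, suggestion) for one video-latency sample in ms."""
    if len(history) >= 5:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
            # Hypothetical root-cause rule: a sudden latency spike on a
            # collaboration metric is attributed to WAN congestion first.
            return True, ("wan_congestion", "raise QoS priority for video traffic")
    history.append(value)
    return False, None

# Example: a spike after a stable baseline triggers the rule.
for sample in [20, 22, 21, 23, 20, 22, 21, 95]:
    anomalous, hint = check_sample(sample)
    if anomalous:
        print(f"anomaly at {sample} ms -> cause={hint[0]}, action={hint[1]}")
```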
Further integration includes unified organisational insights and global NOC views across both platforms, as well as support for new Wi-Fi 7 access points designed to operate in either environment. HPE Aruba Networking Central On-Premises 3.0 now includes traditional and generative AIOps capabilities, AI alerts, client insights, and a new user interface for on-premises deployments.
Infrastructure Built to Support AI Scale and Edge Inferencing
To meet the requirements of AI workloads, especially inferencing at the edge, HPE has introduced new networking hardware. The HPE Juniper Networking QFX5250 switch, available in Q1 2026, is based on Broadcom’s Tomahawk 6 silicon and offers 102.4 Tbps of bandwidth. The switch is designed to connect GPUs within data centres and supports Ethernet transport for performance-intensive AI infrastructure.
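HPE quotes only the aggregate figure, so as a rough illustration the snippet below checks how 102.4 Tbps could break down into port counts at common Ethernet speeds for a GPU fabric; the specific port configurations are assumptions, not published specifications.

```python
# Back-of-the-envelope port math for a 102.4 Tbps switch ASIC.
# Port speeds and counts are assumptions for illustration; the article
# only states the aggregate bandwidth figure.
TOTAL_TBPS = 102.4

for port_speed_gbps in (1600, 800, 400):   # 1.6T, 800G, 400G ports
    ports = int(TOTAL_TBPS * 1000 / port_speed_gbps)
    print(f"{ports} x {port_speed_gbps}G = {ports * port_speed_gbps / 1000:.1f} Tbps")
# 64 x 1.6T, 128 x 800G, or 256 x 400G ports all reach the stated 102.4 Tbps.
```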
Also launched is the MX301 multiservice edge router, shipping in December 2025. The router delivers 1.6 Tbps of throughput with 400G connectivity and targets inference workloads close to the data source across metro, mobile, enterprise, and multiservice environments.
HPE said these new systems are aligned with AI deployment patterns that require reduced latency, lower cost of data transport, and compliance with data sovereignty frameworks.
In parallel, HPE announced new solutions developed through partnerships with NVIDIA and AMD. These include Juniper Networking long-haul data centre interconnect systems and on-ramps for AI factory networking, complementing HPE’s existing support for NVIDIA’s Spectrum-X platform and BlueField-3 DPUs.
AMD’s Helios AI rack-scale system, featuring a custom HPE Juniper scale-up Ethernet switch, is also part of the new offering. According to HPE, the system enables training for trillion-parameter models and delivers rack-scale inferencing using standards-based networking.
Self-Driving Operations with Unified Observability
HPE’s broader strategy is to unify IT operations by embedding agentic AIOps into its software and hardware stack. The company has extended OpsRamp integrations to include Juniper’s Apstra Data Center Director and Data Center Assurance software. These integrations are available through HPE’s GreenLake platform and offer full-stack visibility across compute, networking, storage, and cloud.
New features in Compute Ops Management, launching in December 2025, include integration with OpsRamp, an AI-enabled Compute Copilot, and support for root-cause self-service tools. The Model Context Protocol (MCP), which supports third-party AI agents for no-code integrations, is now available to select customers and expected to roll out more broadly in early 2026.
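MCP itself is an open protocol with a public SDK, and while HPE has not detailed its integration, the general pattern of exposing an operations capability to a third-party AI agent can be sketched with the open-source mcp Python package. The server name, tool, and returned data below are hypothetical and are not an HPE API.

```python
# Minimal MCP server sketch using the open-source `mcp` Python SDK
# (pip install "mcp[cli]"). The tool is a stub that only illustrates how
# an operations capability could be exposed to a third-party AI agent
# over MCP; it is not HPE's Compute Ops Management integration.
from mcp.server.fastmcp import FastMCP

server = FastMCP("ops-demo")

@server.tool()
def device_health(device_id: str) -> str:
    """Return a canned health summary for a device (stubbed data)."""
    return f"device {device_id}: status=healthy, open_alerts=0"

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable agent can discover and call it.
    server.run()
```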
GreenLake Intelligence has also been updated to include AI agents for HPE’s Sustainability Insight Center and Wellness Dashboard. These capabilities aim to eliminate data silos and provide a unified command centre for hybrid operations, according to the company.
To reduce the cost of adoption, HPE Financial Services has introduced two financing options. These include 0% financing for AIOps software purchased through term licensing, and a leasing programme with savings equivalent to 10% for customers upgrading to AI-native networking. An optional take-out service is available for older equipment, with resale revenue sharing included.