HPE President and CEO Antonio Neri presenting the company's AI-native infrastructure strategy at HPE Discover 2025 in Barcelona.
At HPE Discover 2025 in Barcelona, Hewlett Packard Enterprise signalled a bold transformation in its role—from technology provider to principal architect of sovereign, AI-native infrastructure for enterprises and cloud providers.
The company’s announcements, made alongside expanding partnerships with NVIDIA, AMD, Broadcom, and security specialists CrowdStrike and Fortanix, showcase HPE’s evolving blueprint: AI factories that industrialise the development and deployment of large-scale models, supported by modular, compliant, and intelligent platforms.
With a clear focus on simplifying AI infrastructure, ensuring regulatory alignment, and accelerating adoption, HPE’s strategy now hinges on tightly integrated innovations across compute, networking, storage, and operations. It is building an ecosystem that empowers customers to manage and scale their AI ambitions while retaining control over their data and digital sovereignty.
A Sovereign AI Vision Rooted in Infrastructure and Compliance
HPE’s vision for AI-ready data centres takes shape around the idea of the “AI factory”—a secure and scalable environment designed to produce intelligence at an industrial scale. Central to this are the newly announced AI Factory Labs: the first in Grenoble, France, and a second Private AI Lab in London. Developed in collaboration with NVIDIA and Carbon3.ai, respectively, these labs provide testbeds for enterprises to validate AI workloads in compliance with European regulations and data sovereignty requirements.
The labs are equipped with a combination of HPE Alletra storage, HPE servers, NVIDIA GPUs, Spectrum-X Ethernet, and HPE Juniper Networking gear. By using government-ready NVIDIA AI Enterprise software in an air-cooled setup, the Grenoble facility enables customers to build AI factories that are both sovereign and secure.
Antonio Neri, President and CEO of Hewlett Packard Enterprise, described the initiative as foundational. “Together, HPE and NVIDIA are showcasing our unique strengths to deliver true full-stack AI infrastructures that provide enterprises with a greater range of performance for more diverse workloads,” Neri said during the keynote in Barcelona.
This approach is not limited to lab environments. HPE’s Private Cloud AI platform has been expanded with new compliance-driven capabilities, including GPU fractionalisation via NVIDIA Multi-Instance GPU (MIG), STIG-hardened software for air-gapped setups, and reference architectures aligned with national regulations. These are supported by new Data Centre Ops Agents, developed by HPE in partnership with World Wide Technology (WWT) and NVIDIA, that offer enhanced management and compliance readiness for large-scale AI operations.
Ethernet Becomes the New Backbone for AI Factories
Alongside these compute innovations, HPE is rearchitecting the data centre network to meet the extreme bandwidth and latency demands of AI. The company’s integration of Juniper Networks, completed just five months after the acquisition closed, has already produced a harmonised, AI-native networking stack. This includes self-driving capabilities powered by AIOps, unified across the HPE Aruba Networking and HPE Juniper Networking platforms.
These advances were reinforced by the debut of the HPE Juniper Networking QFX5250 switch, the world’s first to use Broadcom’s Tomahawk 6 silicon for Ultra Ethernet transport. Delivering 102.4 Tbps of switching capacity, it is purpose-built for AI workloads that require lossless transport and scale-up compute clusters. Paired with the MX301 edge router for high-performance AI inferencing at the edge, these solutions close the loop between user data, cloud workloads, and AI factories.
“By delivering autonomous, high-performing networks, HPE is poised to disrupt the networking industry with future-ready solutions that redefine user experiences and provide robust, secure connectivity across all environments,” said Rami Rahim, Executive Vice President, President and General Manager of HPE’s networking division.
To encourage faster adoption of AI-native networking, HPE Financial Services has introduced zero-percent financing for networking AIOps software and a special programme offering the equivalent of 10% cash savings on AI-supporting infrastructure, including enterprise routing and data centre upgrades. The offer also includes a multi-OEM trade-in service with revenue sharing on resale.
Scaling AI Compute from Inference to Exaflops
While NVIDIA-powered AI Factory Labs focus on compliance and validation, HPE is also building production-scale AI compute platforms to support inference and training. The newly introduced NVIDIA GB200 NVL4 by HPE offers a high-density, power-efficient platform combining Grace CPUs with four Blackwell GPUs. With up to 136 GPUs per rack, the solution supports generative AI use cases, especially large language model inference, in constrained environments.
At the other end of the scale, HPE will soon offer the AMD Helios AI rack, built for large-scale training. The rack connects 72 AMD Instinct MI455X GPUs per unit, delivering 2.9 AI exaflops of FP4 performance, 260 TB/s of bandwidth, and 31 TB of HBM4. The rack-scale platform integrates liquid cooling and uses open standards such as UALoE over Ethernet, powered by Broadcom’s Tomahawk 6 switch and HPE Juniper Networking software. This marks the first time a scale-up switch has been purpose-built for AI over standards-based Ethernet—an alternative to proprietary fabrics.
“With the new AMD ‘Helios’ and our purpose-built HPE scale-up networking solution, we are providing our cloud service provider customers with faster deployments, greater flexibility, and reduced risk in how they scale AI computing in their businesses,” Neri explained.
The Helios platform complements the broader HPE-NVIDIA infrastructure, and together they form a spectrum of full-stack solutions: validation in the lab, deployment in sovereign clouds, and training in hyperscale racks. Each is designed to maximise openness, interoperability, and flexibility for customers scaling AI across verticals.
Data Intelligence, Security, and Full-Stack Observability
AI workloads depend not just on training power, but on how data is ingested, enriched, and moved. To address this, HPE introduced the Alletra Storage MP X10000 Data Intelligence Nodes, which turn storage into an active layer for inline analytics. By embedding NVIDIA accelerated compute and AI Enterprise software, these nodes classify and optimise data as it is ingested—making them a powerful enabler for AI pipelines that span edge, core, and cloud environments.
On the operations front, HPE showcased enhancements to its AIOps platforms, notably the integration of Juniper’s Apstra with OpsRamp for full-stack observability. The company is moving toward a unified hybrid operations model that uses GreenLake Intelligence and agentic AI to interpret telemetry from compute, storage, and network domains, allowing for root-cause analysis and predictive assurance.
Security also forms a core pillar of this infrastructure strategy. HPE is integrating CrowdStrike’s AI-native security platform into its Private Cloud AI solutions and collaborating with Fortanix to deploy NVIDIA Confidential Computing for sovereign AI workloads in regulated environments. These efforts ensure that as customers scale, they do not compromise on data protection or compliance.
From Lab to Production: A Cohesive AI Infrastructure Playbook
HPE’s announcements in Barcelona outline a clear playbook: offer modular, AI-optimised platforms for every phase of the AI lifecycle—from early prototyping in compliant lab environments to production-grade rack-scale deployment. The Helios AI rack, AI Factory Lab, sovereign private clouds, and agentic operations tools are all part of this strategy.
Crucially, this is a full-stack approach where the value lies in the tight integration of compute, storage, networking, and software layers. With reference architectures, open standards, liquid cooling, and sovereign-compliant frameworks, HPE is building infrastructure that works across countries, industries, and compliance regimes.
Its partnership model—with NVIDIA for GPU acceleration and AI software, AMD for open rack-scale compute, Broadcom for Ethernet innovation, and CrowdStrike and Fortanix for cybersecurity—shows HPE’s commitment to openness and flexibility.
Together, these moves signal HPE’s larger ambition: not just to serve AI workloads, but to industrialise AI infrastructure itself. By turning data centres into AI factories, and by making those factories sovereign, intelligent, and compliant by design, HPE is rewriting the blueprint for enterprise AI at scale.