Hewlett Packard Enterprise (HPE) has announced it will soon offer the AMD Helios AI rack-scale architecture, an open, integrated platform designed to accelerate the deployment of AI-ready data centres. Announced on the eve of HPE Discover 2025 in Barcelona, the solution is built for cloud service providers (CSPs), including neoclouds, and supports large-scale AI training and inference workloads.
The solution connects 72 AMD Instinct MI455X GPUs per rack, delivering up to 2.9 AI exaflops of FP4 performance and 260 terabytes per second of aggregated scale-up bandwidth. It offers 31 TB of fourth-generation high-bandwidth memory (HBM4) and 1.4 PB/s of memory bandwidth, enabling it to support trillion-parameter AI models with high inference throughput.
“For more than a decade, HPE and AMD have pushed the boundaries of supercomputing, delivering multiple exascale-class systems and championing open standards that accelerate innovation,” said Antonio Neri, president and CEO at HPE. “With the new AMD ‘Helios’ and our purpose-built HPE scale-up networking solution, we are providing our cloud service provider customers with faster deployments, greater flexibility, and reduced risk in how they scale AI computing in their businesses,” he added.
HPE and Broadcom Deliver Scale-up Ethernet for AI Workloads
At the heart of the Helios rack is a standards-based scale-up Ethernet networking solution built by HPE Juniper Networking in collaboration with Broadcom. The system is powered by Broadcom’s Tomahawk 6 chip and uses the Ultra Accelerator Link over Ethernet (UALoE) standard—an open, high-performance communication layer designed for the high-bandwidth, low-latency requirements of modern AI training workloads.
This marks the first time a scale-up switch has been purpose-built for AI over standard Ethernet, offering an alternative to proprietary interconnects. It integrates HPE’s AI-native automation and assurance capabilities, simplifying network operations and reducing the total cost of ownership.
“Broadcom is proud to take part in this collaboration to advance open Ethernet-based AI infrastructure for scale-up,” said Hock E. Tan, President and CEO at Broadcom. “Our high-performance silicon delivers industry-leading ultra-low latency, massive performance, and lossless networking with the scalability and efficiency modern AI workloads require,” he said, adding that together with AMD, HPE is enabling customers to build powerful AI data centres with standard Ethernet, maximising choice and flexibility while delivering exceptional scale.
The solution complements HPE’s existing scale-out and scale-across networking offerings, completing its full “networks for AI” portfolio.
Liquid Cooling and Open Standards for Better Rack Efficiency
Designed for thermal and operational efficiency, the Helios rack features a double-wide chassis conforming to the Open Compute Project’s Open Rack Wide (ORW) specification. It incorporates direct liquid cooling to support dense GPU configurations while maintaining serviceability and energy efficiency.
The rack’s open architecture integrates AMD ROCm open software and AMD Pensando networking technology, both of which are designed to lower the total cost of ownership and accelerate innovation. These components ensure the platform remains extensible and compatible with evolving AI demands, offering modularity without vendor lock-in.
“With Helios, we are bringing together the full stack of AMD compute technologies and HPE’s system innovation to deliver an open, rack-scale AI platform,” said Dr Lisa Su, Chair and CEO at AMD. “This drives new levels of efficiency, scalability, and breakthrough performance for our customers in the AI era,” she explained.
The company further stated that its Helios rack-scale AI solution will be made available globally by HPE in 2026 and delivered with support from HPE Services, drawing on the company’s experience in exascale systems and liquid-cooled infrastructure deployment.
Helios Adds Scale to HPE's Full-Stack AI Data Centre Vision
The Helios rack complements HPE’s broader AI infrastructure strategy, which includes the recently announced AI Factory Lab in Grenoble (France), developed in partnership with NVIDIA. That initiative enables enterprises to test and validate AI workloads in sovereign, regulatory-compliant environments within the EU.
While the NVIDIA-powered AI Factory Lab provides a secure environment for prototyping and compliance testing, the AMD Helios rack offers a scalable production environment for training large language models and enabling high-throughput inference. Together, they represent HPE’s full-stack vision for AI-ready data centres—combining compute, networking, storage, and security to address enterprise AI at every stage.
By collaborating with both AMD and NVIDIA, and integrating technologies from Broadcom and Juniper Networking, HPE is positioning itself as a key enabler of sovereign, open, and efficient AI infrastructure—from lab to production.