HPE has unveiled the AMD Helios AI rack-scale architecture, a turnkey solution designed to accelerate AI training and inferencing in large-scale data centres. Announced on the eve of HPE Discover 2025 in Barcelona, the rack integrates HPE Juniper Networking hardware and software with AMD’s latest GPUs and Broadcom’s high-performance networking silicon to create an open, scale-up platform for AI-ready data centres.
The Helios rack connects 72 AMD Instinct MI455X GPUs, delivering up to 2.9 AI exaflops of FP4 performance and 260 terabytes per second of aggregate scale-up bandwidth. It also includes 31 TB of fourth-generation high-bandwidth memory (HBM4) and 1.4 PB/s of memory bandwidth to support the demands of trillion-parameter model training and high-volume inferencing.
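As a back-of-envelope check, and assuming the published rack totals are spread evenly across the 72 GPUs (an illustrative simplification, not an AMD or HPE specification), the per-GPU figures can be derived as follows:

```python
# Illustrative per-GPU estimates derived from the published Helios rack totals.
# Assumes an even split across all 72 GPUs; not official per-GPU specifications.

GPUS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 2.9   # aggregate FP4 compute
RACK_HBM4_TB = 31         # total HBM4 capacity
RACK_MEM_BW_PBS = 1.4     # total memory bandwidth (PB/s)

# Convert rack totals to per-GPU figures (1 exaflop = 1000 petaflops, etc.)
fp4_per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
hbm4_per_gpu_gb = RACK_HBM4_TB * 1000 / GPUS_PER_RACK
mem_bw_per_gpu_tbs = RACK_MEM_BW_PBS * 1000 / GPUS_PER_RACK

print(f"FP4 per GPU:    ~{fp4_per_gpu_pflops:.0f} PFLOPS")   # ~40 PFLOPS
print(f"HBM4 per GPU:   ~{hbm4_per_gpu_gb:.0f} GB")          # ~431 GB
print(f"Mem BW per GPU: ~{mem_bw_per_gpu_tbs:.1f} TB/s")     # ~19.4 TB/s
```

These rough per-GPU numbers (tens of petaflops of FP4 and hundreds of gigabytes of HBM4 each) are what make trillion-parameter training feasible within a single scale-up domain.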
“For more than a decade, HPE and AMD have pushed the boundaries of supercomputing,” said Antonio Neri, President and CEO at HPE. “With the introduction of the new AMD Helios and our purpose-built HPE scale-up networking solution, we are providing our cloud service provider customers with faster deployments, greater flexibility, and reduced risk in how they scale AI computing in their businesses,” he added.
The solution adheres to Open Compute Project (OCP) standards and features Open Rack Wide (ORW) specifications optimised for power delivery, direct liquid cooling, and serviceability.
HPE and Broadcom Deliver Scale-up Ethernet for AI Workloads
At the heart of the Helios rack is an industry-first scale-up Ethernet networking solution developed by HPE and Broadcom. The system is powered by HPE Juniper Networking hardware and software and features the Broadcom Tomahawk 6 switch chip. It supports the Ultra Accelerator Link over Ethernet (UALoE) standard—an open, high-performance communication layer designed for next-generation AI architectures.
This standards-based Ethernet fabric allows AI data centres to scale without reliance on proprietary interconnects. The switch is engineered to support lossless, low-latency data flow between GPUs within the rack, meeting the throughput needs of AI model training at scale.
“Broadcom is proud to take part in this collaboration to advance open Ethernet infrastructure for AI,” said Hock E. Tan, President and CEO at Broadcom. “Together with HPE and AMD, we are enabling customers to build powerful AI data centres using standard Ethernet, maximising choice and flexibility while delivering scalability and efficiency for modern AI workloads,” he stated.
HPE Juniper Networking software provides AI-native automation, assurance, and telemetry capabilities that simplify network operations, reduce deployment times, and optimise performance across large AI workloads.
Liquid Cooling and Open Standards for Better Rack Efficiency
Designed for thermal and operational efficiency, the Helios rack features a double-wide chassis conforming to OCP’s ORW specifications. It incorporates direct liquid cooling to support dense GPU configurations while maintaining serviceability and energy efficiency.
The rack’s open architecture integrates AMD ROCm open software and AMD Pensando networking technology, both of which are designed to lower the total cost of ownership and accelerate innovation. These components ensure the platform remains extensible and compatible with evolving AI demands, offering modularity without vendor lock-in.
“With Helios, we are bringing together the full stack of AMD compute technologies and HPE’s system innovation to deliver an open, rack-scale AI platform,” said Dr Lisa Su, Chair and CEO at AMD. “This drives new levels of efficiency, scalability, and breakthrough performance for our customers in the AI era,” she explained.
HPE said the Helios rack-scale AI solution will be available globally in 2026 and will be delivered with support from HPE Services, drawing on the company’s experience in exascale systems and liquid-cooled infrastructure deployment.
Helios Adds Scale to HPE’s Full-Stack AI Data Centre Vision
The Helios rack complements HPE’s broader AI infrastructure strategy, which includes the recently announced AI Factory Lab in Grenoble (France), developed in partnership with NVIDIA. That initiative enables enterprises to test and validate AI workloads in sovereign, regulatory-compliant environments within the EU.
While the NVIDIA-powered AI Factory Lab provides a secure environment for prototyping and compliance testing, the AMD Helios rack offers a scalable production environment for training large language models and enabling high-throughput inference. Together, they represent HPE’s full-stack vision for AI-ready data centres—combining compute, networking, storage, and security to address enterprise AI at every stage.
By collaborating with both AMD and NVIDIA, and integrating technologies from Broadcom and Juniper Networking, HPE is positioning itself as a key enabler of sovereign, open, and efficient AI infrastructure—from lab to production.