SuperX Unveils GB300 NVL72 System, a Rack-Scale AI Supercomputer Powered by NVIDIA Grace Blackwell Ultra for Trillion-Parameter Model Training

17 October 2025 | NEWS


SuperX AI Technology Limited announced the launch of the SuperX GB300 NVL72 System, a groundbreaking, rack-scale AI supercomputing platform powered by the NVIDIA GB300 Grace Blackwell Ultra Superchip. Designed to overcome the physical and computational limits of training and deploying next-generation trillion-parameter models, the liquid-cooled GB300 System delivers a significant leap in performance density and energy efficiency, redefining the requirements for modern data center infrastructure.


The arrival of rack-scale AI systems like the SuperX GB300 NVL72 marks a critical inflection point for the industry. By delivering up to 1.8 exaFLOPS of AI performance in a single, liquid-cooled rack, the system achieves a level of compute density that traditional air-cooled data center designs and conventional alternating current (AC) power distribution struggle to support. Concentrating this much compute and power in a smaller footprint means that legacy infrastructure cannot adequately serve these next-generation workloads.


This technological shift elevates advanced power solutions, particularly 800-volt direct current (800VDC), from an efficiency advantage to a fundamental necessity. The ability to deliver massive amounts of power directly and efficiently to the rack is now critical for stability, safety, and operational viability. The SuperX GB300 NVL72 System is therefore positioned not as a standalone product, but as the centerpiece of SuperX's full-stack Prefabricated Modular AI Factory solution, which includes the liquid cooling system and 800VDC power infrastructure required for its deployment.
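As a rough illustration of why high-voltage DC distribution matters at this density, the sketch below compares conductor current for a dense AI rack fed at 800VDC versus three-phase 415VAC. The ~140 kW rack power and 0.95 power factor are illustrative assumptions, not SuperX specifications.

```python
import math

# Rough comparison of conductor current needed to feed a dense AI rack.
# The 140 kW rack power figure is an illustrative assumption, not a published spec.
RACK_POWER_W = 140_000

def current_800vdc(power_w: float) -> float:
    """Bus current on an 800 VDC feed: I = P / V."""
    return power_w / 800.0

def current_3ph_415vac(power_w: float, pf: float = 0.95) -> float:
    """Per-phase current on three-phase 415 VAC: I = P / (sqrt(3) * V_LL * pf)."""
    return power_w / (math.sqrt(3) * 415.0 * pf)

i_dc = current_800vdc(RACK_POWER_W)      # 175 A on the DC bus
i_ac = current_3ph_415vac(RACK_POWER_W)  # ≈205 A per phase
print(f"800 VDC bus current:       {i_dc:.0f} A")
print(f"415 VAC per-phase current: {i_ac:.0f} A")
```

Lower current per delivered watt is what makes direct, efficient rack-level power delivery tractable at this scale, alongside the reduced conversion losses of a DC bus.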

The Grace Blackwell Ultra Superchip Advantage

The SuperX GB300 System is built on NVIDIA GB300 Superchips, combining 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs in a 2:1 GPU-to-CPU ratio. This integration delivers extreme bandwidth through a 900GB/s chip-to-chip link that seamlessly connects the Grace CPU's high memory bandwidth to the Blackwell Ultra GPUs. It provides unified memory by combining 2,304GB of HBM3E with the Grace CPU's expansive LPDDR5X, enabling the system to handle the largest models and KV caches without I/O bottlenecks. At the same time, the Grace CPU's power-efficient processing complements the Blackwell Ultra GPU's compute-intensive performance, delivering superior performance per watt.
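The rack-level aggregates implied by this configuration can be sanity-checked with simple arithmetic. The GPU and CPU counts come from the announcement; the 480GB of LPDDR5X per Grace CPU is an assumed per-chip capacity used only for illustration.

```python
# Back-of-envelope rack aggregates for the 72-GPU / 36-CPU configuration.
GPUS_PER_RACK = 72   # from the announcement
CPUS_PER_RACK = 36   # from the announcement
LPDDR5X_PER_GRACE_GB = 480  # assumed per-CPU capacity, for illustration only

ratio = GPUS_PER_RACK // CPUS_PER_RACK                    # 2 GPUs per Grace CPU
lpddr5x_total_tb = CPUS_PER_RACK * LPDDR5X_PER_GRACE_GB / 1000
print(f"GPU-to-CPU ratio: {ratio}:1")
print(f"Total LPDDR5X:    ≈{lpddr5x_total_tb:.1f} TB")   # ≈17.3 TB
```

Under the assumed per-CPU capacity, the LPDDR5X total lands at roughly 17TB, consistent with the figure in the specification table below.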

Rack-Scale Exascale AI

The GB300 System is natively designed for massive horizontal scaling. A single SuperX GB300 System can scale up to the NVL72 rack configuration, linking 72 Blackwell Ultra GPUs together into a single, massive GPU system. Delivering breakthrough performance, the system achieves up to 1.8 exaFLOPS of FP4 AI compute within a single rack, redefining industry standards for large-scale AI training and inference. With 800Gb/s InfiniBand XDR connectivity, it ensures ultra-low latency across the most demanding AI clusters, unlocking significant scalability potential. To maintain this level of performance, the system incorporates an advanced liquid cooling design, enabling exceptional density and continuous 24/7 operation while maximizing energy efficiency.
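Dividing the quoted rack-level figures down to per-device numbers gives a feel for the scale involved. Only the 1.8 exaFLOPS, 72-GPU, and 800Gb/s figures come from the announcement; the derived per-GPU and per-port values below are straightforward arithmetic, not published specifications.

```python
# Per-device figures derived from the quoted rack-level numbers.
RACK_FP4_EXAFLOPS = 1.8  # quoted FP4 AI compute per rack
GPUS_PER_RACK = 72       # quoted GPU count per NVL72 rack
XDR_PORT_GBPS = 800      # quoted InfiniBand XDR port speed, Gb/s

per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK  # 25 PFLOPS FP4 per GPU
xdr_gbytes_per_s = XDR_PORT_GBPS / 8                       # 100 GB/s per port
print(f"FP4 compute per GPU: {per_gpu_pflops:.0f} PFLOPS")
print(f"XDR port bandwidth:  {xdr_gbytes_per_s:.0f} GB/s")
```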

Technical Specifications:

  • CPU (per rack): 36× NVIDIA Grace CPUs (2,592 Arm Neoverse V2 cores total)
  • GPU (per rack): 72× NVIDIA Blackwell Ultra GPUs
  • Total HBM3E memory: ≈165TB (GPU high-bandwidth memory)
  • Total LPDDR5X memory: ≈17TB (Grace CPU memory)
  • Peak AI performance: ≈1.8 exaFLOPS (FP4 AI)
  • Networking: 4× NVIDIA NVLink connectors (1.8TB/s); 4× NVIDIA ConnectX-8 OSFP ports (800Gb/s); 1× NVIDIA BlueField-3 DPU (400Gb/s)
  • Dimensions: 48U NVIDIA MGX rack, 2,296mm (H) × 600mm (W) × 1,200mm (D)

Market Positioning

The SuperX GB300 NVL72 System is the ideal infrastructure for organizations building the foundation of tomorrow's AI:

  • Hyperscale & Sovereign AI: For constructing national AI infrastructure, public cloud services, and massive enterprise AI factories that require exascale compute to train and serve the most complex multi-modal and large language models.
  • Exascale Scientific Computing: For governments and research institutions tackling grand challenges in physics, materials science, and climate modeling that necessitate highly efficient, rack-scale compute.
  • Industrial Digital Twins: For automotive, manufacturing, and energy sectors building large-scale, high-fidelity digital twins requiring the combined processing power of the Grace CPU and Blackwell Ultra GPU.