Having already built interconnects that link powerful AI GPUs within a facility, NVIDIA has introduced a solution that links multiple data centers together to deliver massive combined computing power.
### NVIDIA’s Spectrum-XGS Ethernet Technology: A Powerful Interconnect for AI Clusters
Scaling AI computing inside a single facility eventually hits physical limits. NVIDIA has therefore developed an interconnect platform that lets firms combine ‘distributed data centers into unified, giga-scale AI super-factories’. The new technology, called Spectrum-XGS, works around the geographical and cooling limitations that come with expanding a single data center.
According to NVIDIA CEO Jensen Huang, “With NVIDIA Spectrum-XGS Ethernet, we add scale-across to scale-up and scale-out capabilities, linking data centers across cities, nations, and continents into vast, giga-scale AI super-factories.”
Spectrum-XGS is an evolution of the company’s Spectrum-X Ethernet platform and a cornerstone for distributed AI computation. NVIDIA says it nearly doubles the performance of the NVIDIA Collective Communications Library (NCCL), which handles communication between multiple GPUs and nodes. It also includes features such as auto-adjusted distance congestion control and precision latency management to keep performance consistent despite the longer distances between interconnected data centers.
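To see why inter-data-center latency matters for NCCL-style workloads: the library's core job is collective communication, such as summing gradients held by many GPUs, and ring all-reduce is one of the classic patterns used for this. Below is a pure-Python sketch of that pattern for intuition only; it is not NCCL's actual implementation, and plain lists stand in for per-node GPU buffers.

```python
# Illustrative sketch of ring all-reduce, the kind of collective pattern
# NCCL-class libraries use to sum data across GPU nodes. Pure Python,
# no GPUs or networking -- structure is a simplifying assumption.

def ring_allreduce(buffers):
    """Sum N equal-length buffers so every 'node' ends with the totals.

    Phase 1 (reduce-scatter): in each of n-1 steps, node i forwards one
    chunk to its right neighbour, which adds it in. Phase 2 (all-gather):
    the fully reduced chunks circulate the ring so every node has them.
    Each step moves only 1/n of the data per node, which makes the
    pattern bandwidth-efficient but sensitive to per-hop link latency.
    """
    n = len(buffers)
    csize = len(buffers[0]) // n  # assume length divides evenly into n chunks

    # Reduce-scatter: after n-1 steps, node i holds the full sum of chunk (i+1) % n.
    for step in range(n - 1):
        for i in range(n):
            c = (i - step) % n          # chunk node i forwards this step
            dst = (i + 1) % n           # right neighbour in the ring
            for k in range(c * csize, (c + 1) * csize):
                buffers[dst][k] += buffers[i][k]

    # All-gather: circulate the reduced chunks so every node sees every sum.
    for step in range(n - 1):
        for i in range(n):
            c = (i + 1 - step) % n
            dst = (i + 1) % n
            for k in range(c * csize, (c + 1) * csize):
                buffers[dst][k] = buffers[i][k]
    return buffers


nodes = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
print(ring_allreduce(nodes))  # every node ends with [111, 222, 333]
```

Because every one of the 2*(n-1) steps crosses a network link, stretching those links across cities or continents multiplies the latency cost, which is exactly the problem Spectrum-XGS's congestion control and latency management aim to mitigate.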

Interestingly, CoreWeave will be among the first companies to link its data centers with Spectrum-XGS Ethernet. The move is expected to “accelerate breakthroughs across every industry” by delivering previously unattainable levels of compute.
NVIDIA continues to push the boundaries of interconnect technology, following recent innovations such as silicon-photonics network switches, and even more advanced interconnect mechanisms are likely on the way.