NVIDIA has unveiled its next-generation silicon photonics interconnect technology, known as Spectrum-X Ethernet Photonics. This innovative solution is designed to significantly improve power efficiency and reduce the cost of implementing optical interconnects in AI factories.
At Hot Chips 2025, NVIDIA showed that an AI factory requires around 17 times more optical power than a traditional cloud data center. The sheer number of GPUs demands a huge fleet of optical transceivers, which can consume up to 10% of an AI factory's total power budget. Spectrum-X Ethernet Photonics addresses this by moving to co-packaged optics, which improve power efficiency and cut the number of lasers required.
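The scale of those two figures can be made concrete with some back-of-envelope arithmetic. This sketch only rearranges the article's 17× and 10% numbers; the 50 MW facility size is an assumed example value, not an NVIDIA figure.

```python
# Illustrative power-budget arithmetic using the article's figures.
# The 50 MW facility size is an assumption for this example only.
facility_power_mw = 50.0   # assumed AI-factory power budget
optics_share = 0.10        # article: optics can reach ~10% of total power
optical_scaling = 17       # article: ~17x more optical power than a cloud DC

ai_optics_mw = facility_power_mw * optics_share
cloud_optics_mw = ai_optics_mw / optical_scaling

print(f"AI factory optics budget:     {ai_optics_mw:.1f} MW")
print(f"Comparable cloud DC optics:   {cloud_optics_mw:.2f} MW")
```

Under these assumptions, a 50 MW AI factory would spend about 5 MW on optics alone, versus roughly 0.3 MW for a cloud data center of comparable optical reach.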
Spectrum-X Ethernet Photonics uses 200 G/lane SerDes, the current leading edge in electrical signaling. The photonic engine (PIC) sits directly beside the switch ASIC, eliminating long PCB traces and reducing the number of lasers required for data transmission. For example, a 1.6 Tb/s link can be driven with just two lasers instead of eight, lowering power consumption and improving link reliability.
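The lane and laser counts in that example follow directly from the stated figures. A quick sanity check, using only the article's 200 G/lane and 1.6 Tb/s numbers:

```python
# Lane/laser arithmetic for a 1.6 Tb/s link at 200 G per lane,
# per the figures quoted in the article.
link_tbps = 1.6
lane_gbps = 200

lanes = int(link_tbps * 1000 // lane_gbps)  # 8 electrical lanes
lasers_pluggable = 8   # article: conventional pluggable transceivers
lasers_cpo = 2         # article: co-packaged optics with micro-ring modulators

print(f"electrical lanes:      {lanes}")                       # 8
print(f"lanes per CPO laser:   {lanes // lasers_cpo}")         # 4
print(f"laser count reduction: {lasers_pluggable // lasers_cpo}x")  # 4x
```

Each of the two lasers effectively serves four lanes, which is where the "4× fewer lasers" headline figure comes from.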
NVIDIA’s silicon photonics solution includes a CPO chip with ultra-high 1.6T throughput and MRMs (Micro-Ring Modulators) for enhanced bandwidth and reduced power footprint. A significant feature is the 3D stacking between the photonic and electronic layers, which simplifies routing and increases bandwidth density.
NVIDIA’s first full-scale switch with integrated photonics, the Spectrum-6 102T, offers several improvements:
– 2× higher throughput
– 63× better signal integrity
– 4× fewer lasers
– 1.6× higher bandwidth density
– 13× better laser reliability
– Replaces 64 discrete pluggable transceivers
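The "replaces 64 transceivers" figure is consistent with the switch's nameplate throughput, assuming "102T" means 102.4 Tb/s split into 1.6 Tb/s ports:

```python
# Sanity check: a 102.4 Tb/s switch divided into 1.6 Tb/s ports yields
# the 64 optical engines that stand in for 64 pluggable transceivers.
# Reading "102T" as 102.4 Tb/s is an assumption about the nameplate.
switch_tbps = 102.4
port_tbps = 1.6

ports = round(switch_tbps / port_tbps)
print(f"1.6 Tb/s ports: {ports}")  # 64
```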
In summary, NVIDIA’s Spectrum-X Ethernet Photonics is poised to deliver 3.5× better power efficiency and 10× better resiliency than traditional pluggable optics. This will let AI compute scale more effectively: a significant share of the power previously consumed by the network becomes available for GPU clusters, translating directly into more usable performance per facility.
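Combining the article's own numbers gives a rough sense of how much power that frees up. This is an illustrative calculation under the stated 10% optics share and 3.5× efficiency claim, not a figure from NVIDIA:

```python
# Rough estimate: if optics consume up to 10% of an AI factory's power
# and CPO improves optical power efficiency by 3.5x, about 7% of total
# facility power is freed for GPU compute. Illustrative only.
optics_share = 0.10
efficiency_gain = 3.5

new_share = optics_share / efficiency_gain
freed = optics_share - new_share
print(f"optics share with CPO: {new_share:.1%}")  # ~2.9%
print(f"power freed for GPUs:  {freed:.1%}")      # ~7.1%
```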