This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.
With GPU economics driving significant concern in financial circles, Morgan Stanley has highlighted NVIDIA’s GB200 NVL72 rack-scale systems as the most efficient solution for large-scale AI factories.
Each NVL72 AI rack contains 72 NVIDIA B200 GPUs and 36 Grace CPUs connected via NVLink 5, a high-bandwidth, low-latency interconnect. Currently, such a server rack costs around $3.1 million compared to about $190,000 for an H100 rack.
Morgan Stanley believes it is more economical to use NVIDIA’s latest rack-scale offering than the older-generation H100.
According to Morgan Stanley’s calculations, NVIDIA’s GB200 NVL72 systems generate a 77.6% profit margin for a given 100MW AI factory. Google’s TPU v6e pods rank second with a 74.9% profit margin.
Note that the exact pricing of Google’s TPU v6e pods is not public, but on average, renting a pod costs about 40-50% less than renting an NVL72 rack.
Interestingly, Morgan Stanley calculates negative profit margins for AI factories using AMD’s MI300 and MI355 platforms at -28.2% and -64%, respectively.
Morgan Stanley’s report estimates that a 100MW AI data center would incur $660 million in infrastructure costs, depreciated over 10 years. GPU costs range from $367 million to $2.273 billion, depreciated over 4 years. Operating costs are calculated based on power efficiencies and average global electricity prices.
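The cost structure described above can be sketched as a simple annualized model. The infrastructure and GPU figures are the ones quoted from the report; the operating-cost input is a labeled placeholder, since the report derives it from power efficiency and electricity prices that are not broken out here.

```python
# Annualized cost model for a 100MW AI factory, following the
# straight-line depreciation schedules quoted above. Figures in USD.

def annual_cost(infra_cost, infra_years, gpu_cost, gpu_years, annual_opex):
    """Infrastructure and GPU depreciation plus yearly operating cost."""
    return infra_cost / infra_years + gpu_cost / gpu_years + annual_opex

# Infrastructure: $660M over 10 years (from the report).
# GPU cost: the report's range is $367M to $2.273B, over 4 years.
# annual_opex = $50M is a placeholder, NOT a figure from the report.
low = annual_cost(660e6, 10, 367e6, 4, annual_opex=50e6)
high = annual_cost(660e6, 10, 2.273e9, 4, annual_opex=50e6)
print(f"annualized cost range: ${low/1e6:.0f}M - ${high/1e6:.0f}M per year")
```

With these inputs, yearly depreciation alone spans roughly $158M to $634M depending on which accelerator platform is deployed, which is why the choice of GPU dominates the economics.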
As a result, NVIDIA’s GB200 NVL72 systems have the highest Total Cost of Ownership (TCO) at $806.58 million, followed by AMD’s MI355X platform at $774.11 million.
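The report pairs each platform's TCO with rental revenue to reach its margin figures. The revenue side is not quoted above, but the relationship can be inverted to show what revenue a quoted margin and TCO imply; this is plain arithmetic on the published numbers, not a figure from the report itself.

```python
# Profit margin relates revenue R and total cost: margin = (R - TCO) / R.
# Rearranging gives the revenue implied by a quoted margin and TCO.

def implied_revenue(tco, margin):
    """Revenue consistent with a given total cost and profit margin."""
    return tco / (1.0 - margin)

# GB200 NVL72: TCO of $806.58M at a 77.6% margin (figures quoted above).
rev = implied_revenue(806.58e6, 0.776)
print(f"implied NVL72 revenue: ${rev/1e9:.2f}B over the same horizon")
```

Run on the NVL72 numbers, this implies roughly $3.6B of revenue against the $806.58M cost, which illustrates how a platform can carry both the highest TCO and the highest margin at once.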