Tesla is constructing a behemoth of a supercomputer training cluster at its Giga Texas factory: a whopping 50,000 Nvidia H100 GPUs, plus 20,000 of Tesla's own HW4 AI computers. Giga Texas is going all-in on AI training.
Elon Musk, never one to shy away from ambitious plans, spilled the beans on X about the Giga Texas cluster's power and cooling system, stating: “Sizing for ~130MW of power & cooling this year, but will increase to >500MW over next 18 months or so. Aiming for about half Tesla AI hardware, half Nvidia/other. Play to win or don’t play at all.”
Now, let’s put this into perspective, shall we? Two weeks ago, Adam Jonas dropped a bombshell, claiming that “US data center power usage may be equivalent to the power used by 150 million electric cars by 2030.” He went on to calculate that “the annual electrical energy of one H100 chip is equivalent to the power usage of 1.43 Teslas.”
Do the math, and you’ll find that this 50,000-unit Nvidia GPU cluster at Giga Texas could guzzle as much juice per year as roughly 71,500 Teslas. Crazy? You bet your bottom dollar it is!
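If you want to sanity-check that figure yourself, the back-of-the-envelope math is a one-liner. Note that the 1.43-Teslas-per-H100 ratio is Jonas's estimate, not a measured value, and the calculation ignores the 20k HW4 computers entirely:

```python
# Jonas's estimate: annual energy of one H100 ~= the power usage of 1.43 Teslas
TESLAS_PER_H100 = 1.43  # analyst estimate, not a measured figure

h100_count = 50_000  # reported size of the Giga Texas Nvidia GPU cluster

# Equivalent fleet size: 50,000 GPUs x 1.43 Teslas per GPU
equivalent_teslas = h100_count * TESLAS_PER_H100

print(f"{round(equivalent_teslas):,} Teslas")  # prints "71,500 Teslas"
```

Swap in the >500MW figure Musk quoted for the full build-out and the equivalent fleet grows accordingly; this sketch only covers the initial 50k-GPU phase.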
Here’s the million-dollar question: Is this massive power consumption worth it? On one hand, we’re looking at a supercomputer that could revolutionize AI development and potentially lead to breakthroughs in autonomous driving. On the other, we’re talking about an energy appetite that’d make even the staunchest environmentalists do a double-take.