You can’t accuse Elon Musk of thinking small when it comes to AI hardware. Musk revealed that he ordered Nvidia to redirect thousands of its latest H100 chips originally earmarked for Tesla over to his other companies instead.

Why the chip shuffle? Musk says Tesla’s nascent AI training infrastructure simply wasn’t ready to absorb the hardware. “Tesla had no place to send the Nvidia chips to turn them on,” Musk posted, “so they would have just sat in a warehouse.”
Tesla clearly has big plans for putting those AI accelerators to work. According to Musk, Tesla is close to finishing construction on a new data center extension at Giga Texas sized for 50,000 H100 chips, part of which will house a “super dense, water-cooled supercomputer cluster.” Combined with the 35,000 H100s already installed, that would give Tesla access to a staggering 85,000 of Nvidia’s latest and greatest AI processors.
And make no mistake, squeezing optimal performance out of that silicon behemoth won’t be easy. As Elon acknowledges: “I can’t overstate the difficulty of making 50k H100s train as a coherent system. No company on Earth has been able to achieve this yet.”
If Tesla manages to pull it off, the payoff could be immense for accelerating the AI training workloads behind its self-driving software. Musk estimates Tesla will spend up to $4 billion on Nvidia chips alone for AI training this year as it builds out internal AI clusters. That figure is part of the roughly $10 billion Musk says Tesla will invest in AI hardware and software in 2024.
Of course, Nvidia likely won’t be Tesla’s only AI silicon supplier forever. Musk reiterated that Tesla’s in-house “Dojo” AI training chips could one day outpace Nvidia’s offerings – just don’t hold your breath. As the CEO put it: “There is a path for Dojo to exceed Nvidia. It is a long shot…but success is one of the possible outcomes.”
For now, Tesla appears happy to leverage Nvidia’s silicon mastery and keep those H100s humming around the clock. Musk’s mission to achieve self-driving singularity demands computing power at a scale that makes even the biggest cloud providers blush.