Tesla AI Training Powerhouse: Giga Texas Goes All-In, 50k H100 GPUs and 20k HW4 AI computers

Tesla's supercomputer cluster, aptly named Cortex, is a powerhouse of computational ability. Housed at the company's Giga Texas facility, it represents a significant leap forward in Tesla's artificial intelligence and autonomous driving capabilities.
Initially planned as a 50k H100 chip cluster, Cortex has now doubled in scope. The supercomputer will feature an impressive array of 100k H100 or H200 chips, positioning it at the forefront of computational power. This expansion underscores Tesla’s commitment to pushing the boundaries of AI and machine learning.
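For a sense of scale, here is a rough back-of-the-envelope sketch of what 100k of those chips could deliver in aggregate. The per-GPU throughput below is NVIDIA's published H100 peak spec, and the utilization factor is purely an assumption for illustration, not a number Tesla has shared.

```python
# Back-of-the-envelope estimate of Cortex's aggregate training throughput.
# The 100k GPU count comes from the article; the per-GPU figure is NVIDIA's
# published H100 peak spec, and the utilization is an assumption, not a Tesla number.

NUM_GPUS = 100_000                 # H100/H200 chips planned for Cortex
PEAK_BF16_TFLOPS_PER_GPU = 989     # approx. H100 SXM dense BF16 Tensor Core peak
ASSUMED_UTILIZATION = 0.35         # assumed real-world model FLOPs utilization

peak_exaflops = NUM_GPUS * PEAK_BF16_TFLOPS_PER_GPU / 1e6   # TFLOPS -> EFLOPS
sustained_exaflops = peak_exaflops * ASSUMED_UTILIZATION

print(f"Theoretical peak: ~{peak_exaflops:.0f} EFLOPS (BF16)")
print(f"Assumed sustained: ~{sustained_exaflops:.0f} EFLOPS at {ASSUMED_UTILIZATION:.0%} utilization")
```

Even at that modest assumed utilization, the cluster works out to tens of exaFLOPS of training throughput.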
Elon Musk emphasized Cortex's role in advancing the company's Full Self-Driving (FSD) technology and the development of the Optimus humanoid robot. The supercomputer's massive storage capacity and processing power will be instrumental in training these systems, processing vast amounts of video data to refine their capabilities.
The sheer computational power of Cortex comes with significant thermal management challenges. Musk revealed that the cluster will require approximately 130 MW of power this year, with projections suggesting this could increase to over 500 MW within 18 months. To address this, Tesla has implemented an extensive cooling system, including large fans and four substantial water tanks.
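As a rough sanity check on those power figures, the sketch below estimates the draw of the GPU portion alone. The per-chip wattage and the facility overhead factor are assumptions for illustration, not figures Tesla has disclosed, and they cover only the H100/H200 chips rather than Dojo hardware or other equipment.

```python
# Rough sanity check of the ~130 MW figure quoted for Cortex this year.
# GPU power draw and the facility overhead factor (PUE) below are assumptions,
# not numbers from Tesla.

NUM_GPUS = 100_000          # planned H100/H200 count from the article
WATTS_PER_GPU = 700         # H100 SXM TDP is roughly 700 W
ASSUMED_PUE = 1.3           # assumed overhead for cooling, networking, CPUs, storage

it_load_mw = NUM_GPUS * WATTS_PER_GPU / 1e6        # GPU power in megawatts
facility_mw = it_load_mw * ASSUMED_PUE

print(f"GPU load: ~{it_load_mw:.0f} MW, facility load: ~{facility_mw:.0f} MW")
```

Even this GPU-only estimate lands in the same ballpark as the 130 MW figure before counting Dojo hardware and future expansion, which helps explain the scale of the cooling infrastructure.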
Tesla's investment in AI technology is substantial. Musk estimated that the company's expenditure on NVIDIA chips alone could reach $3 billion to $4 billion this year, out of total AI-related spending of roughly $10 billion. The remaining funds are allocated to in-house AI inference computers, vehicle sensors, and the development of the Dojo supercomputer.
Cortex represents more than just a technological achievement for Tesla. It signifies the company’s strategic pivot towards becoming a leader in AI and autonomous systems. While many in the tech industry are focusing on the productization and cost-efficiency of Large Language Models (LLMs), Tesla’s approach with Cortex demonstrates a commitment to advancing assisted driving through immense data processing and computing power.
The naming of the supercomputer cluster as Cortex is itself significant. Drawing parallels with the cerebral cortex—the brain’s center for advanced cognitive functions—Tesla is positioning this technology as the ‘brain’ behind its next generation of intelligent systems.
As Tesla continues to expand its computational capabilities, it is not just processing data; it is computing a new future for transportation and robotics. With Cortex at its core, Tesla is poised to make significant strides in AI and autonomous technology.