Tesla’s grand ambitions for an AI training monster powerful enough to rapidly iterate its autonomous driving capabilities just got a major boost. At its 2024 North America Technology Symposium this week, TSMC revealed it has already kicked off production of Tesla’s next-generation Dojo training tile.
The chip titan didn’t go into full details, but did confirm a few tantalizing tidbits about the packaging tech behind Dojo’s unique wafer-scale architecture. According to TSMC, the new Dojo tile leverages the company’s System-on-Wafer approach to essentially stitch together a 5×5 grid of pre-tested processors into a single, ultra-dense unified package.
It’s an innovative advance in wafer-scale integration that TSMC says will let Tesla’s chips sidestep many of the traditional throughput bottlenecks between discrete silicon dies by maintaining extremely high-bandwidth on-wafer interconnects. The result should be Dojo compute modules with terascale throughput for rapidly crunching the enormous datasets needed to train safe self-driving AI.
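To see why keeping those die-to-die links on the wafer matters, here's a toy back-of-the-envelope sketch. Nothing below reflects actual Tesla or TSMC specifications: the mesh topology, link count, and per-link bandwidth figure are all hypothetical, chosen only to illustrate how a 5×5 grid of dies with direct neighbor links accumulates aggregate cross-die bandwidth.

```python
# Illustrative sketch only (not Tesla/TSMC specs): model a 5x5
# System-on-Wafer grid where each die links directly to its
# north/south/east/west neighbors over on-wafer interconnects.

def mesh_links(rows: int, cols: int) -> int:
    """Count direct die-to-die links in a rows x cols mesh."""
    horizontal = rows * (cols - 1)  # links along each row
    vertical = cols * (rows - 1)    # links along each column
    return horizontal + vertical

def aggregate_bandwidth(rows: int, cols: int, gb_per_link: float) -> float:
    """Total cross-die bandwidth if every mesh link sustains gb_per_link GB/s."""
    return mesh_links(rows, cols) * gb_per_link

# A 5x5 grid has 40 direct on-wafer links; at a hypothetical
# 100 GB/s per link, that is 4,000 GB/s of aggregate die-to-die
# bandwidth that never has to cross a package boundary.
print(mesh_links(5, 5))                  # 40
print(aggregate_bandwidth(5, 5, 100.0))  # 4000.0
```

The point of the sketch: in a conventional multi-chip package, much of that traffic would instead funnel through a handful of off-package links, which is exactly the bottleneck wafer-scale integration is meant to avoid.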
As powerful as Tesla’s System-on-Wafer Dojo will likely be, TSMC has even bigger wafer-level ambitions already mapped out. The company teased that by 2027 it plans to roll out a next generation of its Chip-on-Wafer-on-Substrate (CoWoS) packaging technology, enabling full wafer-scale processors more than 40 times denser than today’s chiplet-based designs.
If achieved, components built with this next-generation CoWoS could effectively eliminate the interconnect bottlenecks holding back current multi-chip packages and architectures. That’s an enticing prospect for AI companies like Tesla looking to dramatically scale their compute.
With such ambitious wafer-level plans already becoming reality for customers like Tesla, the race is on for chipmakers to perfect the future of advanced heterogeneous integration. TSMC is wasting no time sprinting toward that monolithic finish line, and it’s working hand-in-hand with Tesla to get there first.