In 2018, ARK wrote that Tesla’s strategy of vertical integration would enable it to create novel use cases, deeper moats, and faster execution. At the time, Tesla controlled all of the hardware stack except for semiconductors: Nvidia provided the chips for training and self-driving.
At AI Day, an event focused on recruiting top artificial intelligence talent, Tesla unveiled an ambitious plan to vertically integrate every component of its hardware and software stacks, from silicon and training cluster to compiler and driving simulator. If fully implemented in 2022, Tesla would be the most deeply integrated “automotive company” in the world, as shown below.
Forecasts are inherently limited and cannot be relied upon. For informational purposes only and should not be considered investment advice, or a recommendation to buy, sell or hold any particular security. Source: ARK Investment Management LLC.
Tesla pulled back the curtain on Dojo, its custom-built AI training supercomputer capable of 1 exaflop of performance, roughly twice the raw performance of the most powerful supercomputer in the world today. Dojo is centered on the D1, an application-specific integrated circuit (ASIC) designed from the ground up for AI training and optimized for low latency and high bandwidth. With system-on-wafer packaging technology similar to that of Cerebras, Tesla combines 25 D1 chips on a single training tile for a whopping 9 petaflops of compute. A bespoke compiler designed to harness the D1’s custom instruction set architecture should be able to train Tesla’s neural networks with best-in-class hardware utilization.
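The arithmetic behind these figures is easy to check. A rough sketch, assuming Tesla’s AI Day per-chip figure of 362 BF16 teraflops for the D1 (a number from the presentation, not from this text):

```python
# Back-of-the-envelope check of Dojo's quoted throughput figures.
# Assumption: 362 BF16 teraflops per D1 chip (Tesla AI Day figure),
# 25 chips packaged on one training tile (stated above).

D1_TFLOPS = 362          # per-chip BF16 throughput (assumed from AI Day)
CHIPS_PER_TILE = 25      # D1 dies on one training tile

tile_pflops = D1_TFLOPS * CHIPS_PER_TILE / 1_000   # teraflops -> petaflops
print(f"Per-tile compute: {tile_pflops:.2f} PFLOPS")   # ~9 PFLOPS, matching the claim

# Tiles needed to reach the quoted 1-exaflop system scale
tiles_for_exaflop = 1_000 / tile_pflops
print(f"Tiles for 1 exaflop: {tiles_for_exaflop:.0f}")  # on the order of 110 tiles
```

The ~110-tile figure is only an extrapolation from these two inputs; the actual system configuration is Tesla’s to disclose.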
We believe that by owning the AI stack, from training silicon to data labeling to neural network design, Tesla should be able to iterate much faster than the competition. No longer dependent on third parties to ship chips or fix bugs, Tesla will be in full control.
Special thanks to James Wang for co-authoring this update.