Elon Musk has been vocal about Dojo for years — Tesla’s AI supercomputer that will be pivotal for the company’s ambitions in AI. In July 2024, Musk emphasized that his company’s AI team would “intensify efforts” on Dojo leading up to the much-anticipated unveiling of Tesla’s robotaxi in October.
But what is Dojo, and why is it crucial to Tesla’s future strategy?
Essentially, Dojo is Tesla’s bespoke supercomputer, engineered specifically to train its “Full Self-Driving” neural networks. Enhancing Dojo is intertwined with Tesla’s aim to achieve fully autonomous driving and launch a robotaxi service. While the FSD technology is already in hundreds of thousands of Tesla vehicles, it still requires human oversight for certain driving tasks.
Following the unveiling of Tesla’s Cybercab, the company is preparing to kick off an autonomous ride-hailing service with its own fleet in Austin this June. During its fourth-quarter earnings call in late January 2025, Tesla said it plans to roll out unsupervised FSD to U.S. customers in 2025.
Despite Musk’s previous statements indicating that Dojo would be essential for reaching complete autonomy, he has recently been less vocal about its prospects.
Instead, since August 2024, the focus has shifted to Cortex, an expansive new AI training supercluster under construction at Tesla’s headquarters in Austin, tasked with solving practical AI challenges. Musk has remarked that it will offer “extensive storage for video training of FSD & Optimus.”
Tesla’s Q4 shareholder presentation included updates on Cortex but made no mention of Dojo.
Tesla has committed to substantial investments in AI and its supercomputing efforts, particularly through Dojo and now Cortex, to achieve autonomy for both vehicles and humanoid robots. The company’s future growth largely depends on mastering this challenge, especially as competition in the EV sector intensifies. Thus, examining Dojo, Cortex, and their current status is vital.
Background on Tesla’s Dojo

Musk envisions Tesla as more than just an automaker or provider of solar products and energy storage; he aims for it to emerge as an AI powerhouse capable of revolutionizing self-driving cars by mimicking human perception.
Unlike other firms developing autonomous vehicle technologies that depend on an array of sensors—including lidar, radar, and cameras—combined with high-definition maps, Tesla is committed to achieving full autonomy utilizing only cameras to gather visual input. This data is then processed by advanced neural networks to make real-time decisions about vehicle maneuvering.
As articulated by Tesla’s former AI chief Andrej Karpathy at the company’s first AI Day in 2021, Tesla’s mission is to construct “a synthetic animal from scratch.” Although Musk had been talking about Dojo since 2019, Tesla officially announced it at that 2021 AI Day.
Competitors like Alphabet’s Waymo have developed Level 4 autonomous vehicles—defined by the SAE as systems capable of self-navigation without human input under specific conditions—through more traditional sensor and machine learning frameworks. Tesla, however, has yet to produce a fully autonomous system free from human oversight.
Approximately 1.8 million customers have subscribed to Tesla’s FSD, which currently costs $8,000 and has been priced as high as $15,000. The premise is that the AI software refined by Dojo will be pushed to customers through over-the-air updates. FSD’s widespread deployment also lets Tesla gather vast amounts of video data, which is crucial for training the system. The belief is that more data will hasten the journey to full self-driving capability.
Nevertheless, some industry analysts contend that simply relying on more data may not always lead to smarter models.
“There are financial limitations; soon it may become impractical to continue this strategy,” stated Anand Raghunathan, a professor in electrical and computer engineering at Purdue University. He further noted, “Some experts argue that we may deplete meaningful data necessary for training models. More data does not equate to more valuable information; it depends on whether that data contains useful insights to refine a better model and if the training mechanism can effectively extract that information.”
Despite these concerns, the trend toward data accumulation is likely to continue in the near future. Enhanced data means greater compute power is required to manage and train Tesla’s AI models, underscoring the importance of Dojo as a supercomputing solution.
Defining a Supercomputer
Dojo represents Tesla’s supercomputer system intended to serve as an AI training facility, primarily for FSD. The term “Dojo” signifies a place dedicated to martial arts practice.
A supercomputer is constructed from thousands of smaller computing units known as nodes. Each node is equipped with its own CPU (central processing unit) and GPU (graphics processing unit). The CPU orchestrates the node’s work, while the GPU handles the heavy calculations, breaking tasks into many small pieces that can be processed concurrently. GPUs are crucial for machine learning tasks like those involved in FSD training and also power large language models, which is why Nvidia’s valuation has surged.
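As a toy illustration (plain Python, not Tesla code), here is that divide-and-conquer idea in miniature: one big numeric job is split into independent chunks handed to parallel workers, just as a GPU splits work across thousands of hardware cores.

```python
# Toy illustration of data parallelism: split a dot product into chunks
# and let concurrent workers each handle one piece. A GPU applies the
# same idea across thousands of hardware cores instead of a thread pool.
from concurrent.futures import ThreadPoolExecutor

def partial_dot(chunk):
    """Multiply-accumulate one worker's slice of the two vectors."""
    xs, ys = chunk
    return sum(x * y for x, y in zip(xs, ys))

def parallel_dot(x, y, workers=4):
    """Split x and y into `workers` chunks, process them concurrently, sum."""
    step = (len(x) + workers - 1) // workers
    chunks = [(x[i:i + step], y[i:i + step]) for i in range(0, len(x), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_dot, chunks))

print(parallel_dot(list(range(8)), list(range(8))))  # 140
```

The speedup comes from the chunks being independent: no worker needs another worker’s result, so they can all run at once.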
Tesla still relies on Nvidia GPUs for training its AI (more details on that later).
The Necessity of a Supercomputer for Tesla
Tesla’s vision-centric strategy necessitates a supercomputer. The FSD neural networks are trained using copious amounts of driving data to identify and categorize objects in the vicinity of the vehicle, enabling real-time driving decisions. In essence, Tesla aims to replicate the human visual system and cognitive capabilities digitally.
To accomplish this, Tesla must process and store the extensive video data harvested from its global car fleet while executing millions of simulations for model training.
While Tesla currently appears to rely on Nvidia hardware for training, it aims to diversify, particularly since Nvidia’s chips are expensive. Tesla also aspires to build something better that increases bandwidth and reduces latency. Consequently, its AI division has embarked on a custom hardware initiative aimed at training AI models more efficiently than conventional solutions.
At the heart of this program lies Tesla’s proprietary D1 chips, claimed to be optimized for AI tasks.
Further Insights into the D1 Chips

Tesla aligns with Apple’s philosophy that hardware and software must be optimally integrated. Hence, the company is working to design custom chips to power Dojo, moving away from traditional GPU architectures.
Unveiled at AI Day in 2021, the D1 chip is a palm-sized piece of silicon. As of May 2024, the D1 was in production at Taiwan Semiconductor Manufacturing Company (TSMC), which fabricates it on a 7-nanometer process. With 50 billion transistors and a sizable 645-square-millimeter die, the D1 promises substantial processing power and efficiency for complex workloads.
“We can perform computation and data transfers concurrently, and our customized ISA, which stands for instruction set architecture, is finely tuned for machine learning applications,” shared Ganesh Venkataramanan, the former senior director of Autopilot hardware, during Tesla’s 2021 AI Day. “This is pure machine learning.”
For now, the D1 is not as powerful as Nvidia’s A100 chip, which TSMC also produces on a 7-nanometer process. The A100 packs 54 billion transistors into an 826-square-millimeter die, making it slightly more performant than Tesla’s D1.
To increase bandwidth and computing power, Tesla’s AI division has fused 25 D1 chips into a single tile that functions as one unified computing unit. Each tile delivers 9 petaflops of compute and 36 terabytes per second of bandwidth, and includes all the hardware needed for power, cooling, and data transfer. Each tile can be viewed as a self-sufficient computer formed from 25 smaller ones. Six tiles make up one rack, two racks make a cabinet, and ten cabinets form an ExaPOD. At AI Day 2022, Tesla indicated that scaling Dojo would mean deploying multiple ExaPODs, which together constitute the supercomputer.
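Tesla’s published hierarchy makes the scaling arithmetic easy to check. A quick sketch (the constants are Tesla’s AI Day figures; the roughly 1.1-exaflop ExaPOD total is simply what those figures imply, not an independently verified benchmark):

```python
# Dojo's stated scaling hierarchy, per Tesla's AI Day presentations.
CHIPS_PER_TILE = 25
TILES_PER_RACK = 6
RACKS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10
PFLOPS_PER_TILE = 9  # petaflops per tile, as stated by Tesla

tiles_per_exapod = TILES_PER_RACK * RACKS_PER_CABINET * CABINETS_PER_EXAPOD
chips_per_exapod = tiles_per_exapod * CHIPS_PER_TILE
exapod_pflops = tiles_per_exapod * PFLOPS_PER_TILE

print(tiles_per_exapod)  # 120 tiles per ExaPOD
print(chips_per_exapod)  # 3000 D1 chips per ExaPOD
print(exapod_pflops)     # 1080 petaflops, i.e. roughly 1.1 exaflops
```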
Tesla is also developing a next-gen D2 chip intended to alleviate information transfer delays by integrating the entire Dojo tile onto a single silicon wafer.
Tesla has not confirmed how many D1 chips it has ordered or expects to receive and hasn’t disclosed the timeline for operational Dojo supercomputers utilizing D1 chips.
In reply to a post on X in June about “Elon constructing a colossal GPU cooler in Texas,” Musk stated Tesla aims for “50% Tesla AI hardware and 50% Nvidia/other” within approximately 18 months. The term “other” could suggest AMD chips, as per Musk’s January comment.
The Implications of Dojo for Tesla

By taking charge of its own chip production, Tesla could potentially scale up its AI training capabilities at lower cost, especially as Tesla and TSMC expand manufacturing capacity.
This strategy may lessen Tesla’s reliance on increasingly scarce and pricey Nvidia chips.
During the second-quarter earnings call, Musk highlighted the overwhelming demand for Nvidia hardware, stating that securing GPUs is becoming increasingly difficult. He expressed concern regarding the consistency of GPU availability and advocated for intensified efforts on Dojo to ensure necessary training capabilities.
Nonetheless, Tesla continues to procure Nvidia GPUs for AI training. In June, Musk tweeted:
“Out of roughly $10 billion allocated for AI this year, half will be for internal projects, mainly our AI inference computer designed in-house and the sensors in our vehicles, along with Dojo configuration. Nvidia hardware constitutes around two-thirds of expenses related to our AI training superclusters, meaning my estimation for Nvidia purchases this year hovers between $3 billion to $4 billion.”
“Inference compute” refers to real-time AI computations executed by Tesla vehicles, distinct from the training compute handled by Dojo.
Ultimately, Dojo represents a high-stakes gamble—one that Musk has cautioned may not guarantee success.
In the long term, Tesla could theoretically carve out a new business model predicated on its AI division. Musk has suggested that the initial iteration of Dojo will be tailored for computer vision labeling and training at Tesla, beneficial for FSD and the development of Optimus, the company’s humanoid robot. However, its utility may be limited beyond this scope.
Musk has indicated that future versions of Dojo may be adapted for broader AI training applications. A potential obstacle could arise from existing AI software being largely optimized for GPUs, necessitating a redesign for it to function with Dojo.
An alternative would be for Tesla to sell compute services, much as cloud providers like AWS and Azure do. On a second-quarter earnings call, Musk also said he sees a path for Dojo to become competitive with Nvidia.
Morgan Stanley’s September 2023 report predicted that Dojo could add $500 billion to Tesla’s market valuation by unveiling new revenue spheres via robotaxis and software services.
In summary, Dojo’s chips are an insurance policy for Tesla, but one that could pay off substantially.
Progress of Dojo

According to Reuters, Tesla began production of Dojo in July 2023. However, a June 2023 post by Musk suggested that Dojo had already been “online and executing useful tasks for several months.”
Around the same time, Tesla projected that Dojo would rank among the world’s five most powerful supercomputers by February 2024, a claim that has yet to be publicly verified.
The company also projected its total computational power would achieve 100 exaflops by October 2024. (One exaflop corresponds to one quintillion computing operations per second. To attain 100 exaflops, assuming a single D1 chip can achieve 362 teraflops, Tesla would need over 276,000 D1 chips, or around 320,500 Nvidia A100 GPUs.)
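Those chip counts follow from simple division. A back-of-the-envelope check (the 362-teraflop D1 figure is the one quoted above; the 312-teraflop per-A100 rate is an assumption consistent with the roughly 320,500 figure cited):

```python
# Hypothetical sanity check on the chip counts above, not Tesla's math.
TARGET_FLOPS = 100 * 10**18  # 100 exaflops
D1_FLOPS = 362 * 10**12      # 362 teraflops per D1 chip (figure quoted above)
A100_FLOPS = 312 * 10**12    # assumed per-A100 training throughput

d1_chips = TARGET_FLOPS / D1_FLOPS
a100_gpus = TARGET_FLOPS / A100_FLOPS

print(round(d1_chips))   # 276243, i.e. "over 276,000 D1 chips"
print(round(a100_gpus))  # 320513, i.e. "around 320,500 A100 GPUs"
```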
In January 2024, Tesla committed $500 million to construct a Dojo supercomputer at its Buffalo gigafactory.
In May 2024, Musk disclosed intentions to allocate the rear section of Tesla’s Austin gigafactory for a “super dense, water-cooled supercomputer cluster.” We now understand that this will be Cortex, not Dojo, occupying that area in Austin.
Following Tesla’s second-quarter earnings call, Musk tweeted that Tesla’s AI team employs the Tesla HW4 AI computer (renamed AI4)—the hardware present in all Tesla vehicles—in the training process alongside Nvidia GPUs. He detailed that the configuration is approximately 90,000 Nvidia H100s and about 40,000 AI4 units.
“Dojo 1 will have an equivalent of approximately 8,000 H100s operational by year-end,” he continued. “While this isn’t massive, it’s not insignificant either.”
Tesla has yet to share updates on the status of those chips or their integration into Dojo. Notably, during the company’s fourth-quarter 2024 earnings call, Dojo was not mentioned at all. Instead, Tesla confirmed that the Cortex deployment was completed in Q4, crediting Cortex with enabling version 13 of FSD (supervised).
Original publication date: August 3, 2024. This article will be updated as new information surfaces.
Compiled by Techarena.au.


