For several years, Elon Musk has been vocal about Dojo, the supercomputer at the heart of Tesla's artificial intelligence ambitions. In July 2024, he underscored the initiative's importance by stating that the company's AI division would "double down" on Dojo as it prepared for the robotaxi unveiling in October.
But what precisely does Dojo entail? And what makes it so vital to Tesla’s long-term vision?
In short, Dojo is Tesla's custom-built supercomputer, designed specifically to train its Full Self-Driving (FSD) neural networks. Strengthening Dojo aligns with Tesla's aim of achieving full autonomous driving and launching a robotaxi service. FSD, which is active in hundreds of thousands of Tesla vehicles today, can handle some automated driving tasks but still requires a human to remain attentive behind the wheel.
With the reveal of Tesla's Cybercab now behind it, the company is preparing to launch an autonomous ride-hailing service in Austin in June. And during its fourth-quarter earnings call in January 2025, Tesla said it plans to roll out unsupervised FSD to U.S. customers in 2025.
Historically, Musk has claimed that Dojo would play a crucial role in realizing Tesla’s full self-driving objectives. However, with Tesla seemingly approaching that milestone, his comments regarding Dojo have become less frequent.
Since August 2024, the conversation has shifted towards Cortex, touted as Tesla’s “huge new AI training supercluster” being developed at the Texas headquarters for real-world AI challenges. Musk has also mentioned that Cortex will feature “massive storage for video training of FSD and Optimus.”
Tesla's Q4 shareholder deck provided updates on Cortex but made no mention of Dojo.
Tesla has positioned itself to invest heavily in AI—both with Dojo and now Cortex—in pursuit of autonomy in both vehicles and humanoid robots. The company's future growth depends significantly on its ability to deliver on those goals amid rising competition in the EV space. That makes it worth examining the status of Dojo, Cortex, and what they mean for Tesla.
The Background of Tesla’s Dojo

Musk envisions Tesla as more than just a carmaker or a provider of solar panels and energy storage solutions. He desires to transform Tesla into an AI powerhouse that achieves self-driving capability by emulating human perception.
Unlike many competitors developing autonomous technology with a mix of sensors like lidar, radar, and cameras, as well as high-definition maps for localization, Tesla aims to achieve full autonomy solely through cameras that gather visual data, processed by advanced neural networks for decision-making.
At Tesla's inaugural AI Day in 2021, Andrej Karpathy, then the company's head of AI, described the endeavor as essentially building "a synthetic animal from the ground up." (Musk had been teasing Dojo since 2019, and made the official announcement at that AI Day event.)
While companies such as Waymo have commercialized Level 4 autonomous vehicles—defined as systems that can drive themselves without human intervention under specific conditions—Tesla has yet to produce an autonomous system that operates without human oversight.
Around 1.8 million people have paid for Tesla's FSD, which currently costs $8,000, down from a peak of $15,000. The expectation is that software trained on Dojo will eventually be pushed to Tesla owners via over-the-air updates. Tesla has also amassed millions of miles of driving footage to refine FSD, on the theory that this vast dataset can meaningfully advance its self-driving capabilities.
Nonetheless, some experts caution that simply increasing data volume might not yield better intelligence in models.
“There’s a financial ceiling, and it will soon become too costly to continue this method,” remarked Anand Raghunathan, a professor at Purdue University. He added, “Some assert we might eventually exhaust significant data for training purposes. More data does not automatically equate to more insights; its usefulness depends on whether it possesses the relevant information necessary to refine the model effectively.”
Despite these concerns, the prevailing strategy, for now, is to keep acquiring more data. And more data demands more computational power to store and process it all in order to train Tesla's AI models—which is precisely where the Dojo supercomputer comes in.
Defining a Supercomputer
Dojo is Tesla's supercomputing system, designed primarily for AI training, and FSD training in particular. The name "Dojo" comes from the Japanese term for a martial arts training hall.
A supercomputer consists of thousands of interconnected smaller computers called nodes. Each node is equipped with a central processing unit (CPU) and a graphics processing unit (GPU). The CPU executes general node management, while GPUs handle intricate tasks, such as dividing workloads and processing them concurrently. GPUs are pivotal for machine learning endeavors, including FSD training simulations.
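As a loose, scaled-down illustration of that division of labor, here is a sketch (hypothetical function names, with threads standing in for nodes) of splitting a workload into chunks and processing them concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a GPU kernel: square each value in the chunk.
    return [x * x for x in chunk]

def parallel_map(data, workers=4):
    # Split the work into contiguous chunks, one per worker, run them
    # concurrently, then concatenate the partial results in order.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(process_chunk, chunks)  # map() preserves chunk order
    return [x for part in parts for x in part]

print(parallel_map(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In a real supercomputer the chunks are distributed across thousands of nodes over a high-speed interconnect rather than threads in a single process, but the split-process-recombine pattern is the same.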
Tesla itself uses Nvidia GPUs for its AI training today (more on this later).
The Necessity of a Supercomputer for Tesla
To make its camera-only approach work, Tesla needs a supercomputer. Its FSD neural networks are trained on vast amounts of driving data to recognize and classify objects around the vehicle and then make driving decisions. That means when FSD is engaged, the neural nets have to collect and process visual data continuously, at speeds that match the depth and velocity of human recognition.
Essentially, Tesla aims to construct a digital replica of human visual processing capabilities.
Achieving this goal necessitates extensive storage and processing power to handle vast video data sourced from its global fleet and to run millions of simulations to inform the AI models.
Currently, Tesla relies on Nvidia to power its existing Dojo training computer; however, it aims to diversify beyond this dependency—especially with the high costs of Nvidia’s chips. Tesla seeks to improve by developing customized hardware that enhances efficiency in training AI models.
Central to this initiative are Tesla’s proprietary D1 chips, which have been optimized for AI workloads.
Details on D1 Chips

Tesla shares the belief that optimal hardware and software should work in unison; thus, it is transitioning away from standard GPU configurations to develop its own chips for Dojo.
Tesla first introduced its D1 chip at AI Day 2021; the palm-sized chip entered production in mid-2023. Manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) on a 7-nanometer process, the D1 packs 50 billion transistors onto a die measuring 645 square millimeters, a combination intended to deliver considerable power and efficiency for handling tasks quickly.
"Our custom instruction set architecture is fully optimized for machine learning, allowing simultaneous compute and data transfer," said Ganesh Venkataramanan, then Tesla's senior director of Autopilot hardware, at AI Day 2021. "This is purely focused on machine learning."
Even so, the D1 is not as powerful as Nvidia's A100 chip, which TSMC manufactures on the same 7-nanometer process. With 54 billion transistors and a die size of 826 square millimeters, the A100 slightly outperforms the D1.
To enhance bandwidth and compute capabilities, Tesla has interconnected 25 D1 chips to form a single tile, which functions as a consolidated computing system. Each tile offers 9 petaflops of compute power and 36 terabytes of bandwidth, encompassing all necessary components such as power supply, cooling, and data transfer facilities. A rack comprises six tiles, two racks create a cabinet, and ten cabinets constitute an ExaPOD. At AI Day 2022, Tesla indicated plans to scale Dojo by utilizing multiple ExaPODs, collectively forming the supercomputer.
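Taking the figures above at face value, the aggregate compute of a single ExaPOD can be worked out directly (a back-of-the-envelope calculation, not an official Tesla spec sheet):

```python
# Dojo building blocks as described at Tesla's AI Day events.
TILE_PETAFLOPS = 9        # compute per tile
CHIPS_PER_TILE = 25       # D1 chips per tile
TILES_PER_RACK = 6
RACKS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

tiles_per_exapod = TILES_PER_RACK * RACKS_PER_CABINET * CABINETS_PER_EXAPOD
exapod_chips = tiles_per_exapod * CHIPS_PER_TILE
exapod_petaflops = tiles_per_exapod * TILE_PETAFLOPS

print(tiles_per_exapod)   # 120 tiles per ExaPOD
print(exapod_chips)       # 3000 D1 chips per ExaPOD
print(exapod_petaflops)   # 1080 petaflops, i.e. roughly 1.08 exaflops
```

By this arithmetic, each ExaPOD delivers a little over one exaflop, so reaching the 100-exaflop target Tesla has discussed would take on the order of 93 ExaPODs.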
Tesla is also developing a next-gen D2 chip, designed to eliminate information flow bottlenecks by integrating an entire Dojo tile on a single silicon wafer.
Currently, Tesla has not disclosed how many D1 chips it plans to order or the expected timeline for Dojo’s operational readiness.
In response to a June post asserting “Elon is building a giant GPU cooler in Texas,” Musk indicated that Tesla aims for a blend of “half Tesla AI hardware and half Nvidia/other” within the next 18 months. The “other” may encompass AMD chips, according to Musk’s note from January.
What Significance Does Dojo Hold for Tesla?

By taking charge of chip fabrication, Tesla could potentially scale AI training efforts rapidly and cost-effectively, especially as production ramps up with TSMC.
This strategy also diminishes Tesla’s dependency on Nvidia’s increasingly costly chips that are challenging to procure.
During Tesla’s second-quarter earnings briefing, Musk noted the soaring demand for Nvidia hardware, stating that securing GPUs is increasingly difficult. He expressed concern over obtaining a reliable supply of GPUs when needed, leading to enhanced focus on Dojo to meet training requirements.
Nevertheless, Tesla continues to purchase Nvidia chips for AI training. In June, Musk tweeted:
For about $10 billion in AI-related expenditures for this year, roughly half is allocated for internal use—primarily for Tesla-designed AI inference computing and sensors across our vehicles, plus Dojo. Of the costs for building AI training superclusters, Nvidia hardware comprises approximately two-thirds. I estimate Tesla’s purchases of Nvidia chips could reach between $3 billion to $4 billion this year.
“Inference computing” refers to real-time AI operations performed by Tesla vehicles, distinct from the training compute responsibilities assigned to Dojo.
Investing in Dojo represents a high-stakes gamble for Musk, who has acknowledged potential outcomes of failure.
In the long run, Tesla might form a novel business model focused around its AI division. Musk has indicated that Dojo’s initial version will cater to Tesla’s computer vision labeling and training, particularly beneficial for FSD and training Tesla’s humanoid robot, Optimus. However, this focus may limit its utility for broader applications.
Musk has suggested that future iterations of Dojo will be optimized for general-purpose AI training. A significant challenge lies in that almost all existing AI software has been developed for GPUs, requiring substantial rewriting to use Dojo effectively for general-purpose AI.
Alternatively, Tesla could monetize its compute capacity by renting it out, reminiscent of how AWS and Azure provide cloud services. Musk mentioned during Q2 earnings that he envisions “a pathway to compete with Nvidia utilizing Dojo.”
A report from Morgan Stanley in September 2023 projected that Dojo could add a staggering $500 billion to Tesla’s market capitalization by unlocking new revenue streams such as robotaxis and software services.
In short, the Dojo chips serve as a hedge for the automaker, while also offering the potential for a significant return on investment.
The Progress of Dojo

Reuters reported last year that Tesla began production of Dojo in July 2023, though Musk had already said in a June 2023 post that Dojo was "online and executing useful tasks for several months."
At the time, Tesla predicted that Dojo would rank among the five most powerful supercomputers by February 2024—a milestone the company has never publicly confirmed reaching, prompting skepticism that it was achieved.
Tesla also projected that Dojo’s processing capacity would reach 100 exaflops by October 2024. (To clarify, one exaflop equals 1 quintillion operations per second, and achieving 100 exaflops would require over 276,000 D1 chips, or around 320,500 Nvidia A100 GPUs, assuming a single D1 achieves 362 teraflops.)
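The chip counts in that parenthetical follow from dividing the 100-exaflop target by per-chip throughput. The D1's 362 teraflops is Tesla's own figure; the A100 throughput used below (312 teraflops at BF16 precision) is an assumption consistent with the roughly 320,500 estimate:

```python
import math

TERAFLOP = 10**12
EXAFLOP = 10**18

target = 100 * EXAFLOP     # Tesla's stated 100-exaflop goal
d1 = 362 * TERAFLOP        # throughput of one D1 chip (per Tesla)
a100 = 312 * TERAFLOP      # one Nvidia A100 at BF16 (assumed here)

d1_needed = math.ceil(target / d1)
a100_needed = math.ceil(target / a100)

print(d1_needed)    # 276244, i.e. "over 276,000 D1 chips"
print(a100_needed)  # 320513, i.e. "around 320,500 A100 GPUs"
```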
Additionally, in January 2024, Tesla committed $500 million towards establishing a Dojo supercomputer at its gigafactory in Buffalo, New York.
By May 2024, Musk disclosed that a section of Tesla’s Austin gigafactory would be designated for a “super dense, water-cooled supercomputer cluster.” It has since emerged that this is intended for Cortex, not Dojo.
Shortly after Tesla’s second-quarter earnings call, Musk posted an update on X indicating that the company’s AI team is using the Tesla HW4 AI computer (renamed AI4)—the hardware integrated within its vehicles—during the training loop alongside Nvidia GPUs. He noted a configuration of around 90,000 Nvidia H100s coupled with 40,000 AI4 computers.
“By the year’s end, Dojo 1 will incorporate training capabilities equivalent to approximately 8,000 H100s,” he elaborated. “This isn’t monumental, but it’s certainly not negligible either.”
Tesla has not provided updates regarding the activation of those chips or Dojo’s current operational status. During the fourth-quarter 2024 earnings call, no mention was made of Dojo. However, Tesla confirmed the completion of Cortex deployment in Q4, which enabled version 13 of supervised FSD.
This article was initially published on August 3, 2024, and will be updated as new information becomes available.