
A Chronological Overview of Tesla’s Dojo Development

by admin

Elon Musk envisions Tesla evolving beyond a mere vehicle manufacturer to become a leader in AI technology, with the ultimate goal of enabling fully autonomous driving capabilities in its cars.

A key component of this ambition is Dojo, Tesla’s bespoke supercomputer tailored to enhance the training of its Full Self-Driving (FSD) neural networks. Although FSD currently offers some automated driving functionalities, it necessitates a vigilant driver at the wheel. Tesla believes that by harnessing increased data, computing power, and extensive training, it can transition from quasi-autonomous to fully autonomous driving.

This is where Dojo plays a vital role.

Musk has been hinting at Dojo for some time, but in 2024, he intensified discussions surrounding the supercomputer. As we step into 2025, another supercomputer named Cortex has emerged, yet Dojo's significance remains pivotal for Tesla — especially as EV sales face a downturn and investors seek reassurance that the company can achieve true autonomy. Below, we present a timeline highlighting the key mentions and commitments regarding Dojo.

2019

Initial Mentions of Dojo

April 22 – During Tesla’s Autonomy Day, the company’s AI team took the stage to discuss Autopilot and Full Self-Driving, emphasizing the AI systems that support them. Tesla unveiled details about its unique chips engineered specifically for neural networks and autonomous driving.

At this event, Musk teased Dojo, indicating that it would serve as a supercomputer for AI training. He also mentioned that all Tesla cars manufactured at that time were equipped with the hardware necessary for full self-driving capabilities, pending a software update.

2020

Musk Kicks Off the Dojo Roadshow

Feb 2 – Musk stated that Tesla would soon exceed a million connected vehicles globally, all equipped with the sensors and computing power necessary for full self-driving, while emphasizing Dojo’s potential.

“Dojo, our training supercomputer, will handle immense video training data and run hyperspace arrays effectively, equipped with extensive memory and ultra-high bandwidth connections between cores. Further updates will follow soon,” he remarked.

August 14 – Musk reiterated Tesla’s initiative to create a neural network training supercomputer dubbed Dojo, capable of processing astronomical volumes of video data. He described it as “a beast,” and forecast the initial version to be available in approximately a year, targeting an August 2021 launch.

December 31 – Musk shared that although Dojo is not essential, it will enhance self-driving capabilities, emphasizing that Autopilot must ultimately surpass human drivers in safety by a significant margin.

2021

Official Announcement of Dojo

August 19 – At Tesla’s inaugural AI Day, the automaker made an official announcement about Dojo, aimed at attracting new talent to its AI team. During this event, Tesla also unveiled its D1 chip, which, along with Nvidia GPUs, would power the Dojo supercomputer. The AI cluster was expected to comprise around 3,000 D1 chips.

October 12 – Tesla published a whitepaper, “Dojo Technology,” detailing its configurable floating-point formats and arithmetic. This document outlined the technical standard for a novel type of binary floating-point arithmetic used in deep learning neural networks, capable of being implemented solely in software, solely in hardware, or in a combination of both.
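The core idea behind “configurable” formats is trading exponent bits (dynamic range) against mantissa bits (precision) within a fixed bit budget. Here is a minimal Python sketch of that trade-off — an illustration of the general concept, not Tesla’s actual specification (the function name, rounding behavior, and lack of subnormal handling are all simplifying assumptions):

```python
import math

def quantize(value: float, exp_bits: int, man_bits: int) -> float:
    """Round `value` to the nearest number representable with the given
    exponent/mantissa bit budget (sign bit implied, no subnormals).
    Illustrative only -- not Tesla's CFloat implementation."""
    if value == 0.0:
        return 0.0
    sign = -1.0 if value < 0 else 1.0
    mag = abs(value)
    bias = 2 ** (exp_bits - 1) - 1
    e = math.floor(math.log2(mag))
    e = max(min(e, bias), 1 - bias)      # clamp exponent to representable range
    frac = mag / 2.0 ** e                # significand, in [1, 2) after clamping
    step = 2.0 ** -man_bits              # spacing between representable significands
    frac = round(frac / step) * step     # round significand to nearest step
    return sign * frac * 2.0 ** e

# Same total bit budget, two different range/precision trade-offs:
print(quantize(3.14159, exp_bits=4, man_bits=3))  # -> 3.25 (finer precision)
print(quantize(3.14159, exp_bits=5, man_bits=2))  # -> 3.0  (wider range, coarser)
```

Shifting one bit from mantissa to exponent roughly squares the representable range while halving precision — the kind of knob a training system can turn per-tensor as gradients and activations demand different dynamic ranges.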

2022

Progress Updates on Dojo

August 12 – Musk announced that Tesla would “begin phasing in Dojo” and indicated a reduced need for additional GPUs in the forthcoming year.

September 30 – During Tesla’s second AI Day, the company disclosed the installation of its first Dojo cabinet and conducted a load test of 2.2 megawatts. Tesla reported it was producing one tile per day (comprising 25 D1 chips) and showcased Dojo generating AI images of a “Cybertruck on Mars.”

Crucially, the company set a target of completing a full Exapod cluster by Q1 2023 and planned to build a total of seven Exapods in Palo Alto.

2023

A ‘Long-Shot Bet’

April 19 – In Tesla’s first-quarter earnings call, Musk expressed that Dojo could significantly enhance training cost efficiency and potentially evolve into a sellable service akin to Amazon Web Services.

Musk characterized Dojo as “a long-shot bet,” but one that is “definitely worth pursuing.”

June 21 – The Tesla AI X account shared that the company’s neural networks were already integrated into customer vehicles. The update included a timeline graph of current and anticipated compute power, with production of Dojo slated to begin in July 2023, though it remained unclear whether this concerned the D1 chips or the supercomputer itself. Musk commented that Dojo was operational and handling tasks within Tesla’s data centers.

Tesla also anticipated that its computational capacity would rank among the top five globally by around February 2024 and projected achieving 100 exaflops by October 2024.

July 19 – In its second-quarter earnings report, Tesla announced the commencement of Dojo production. Musk also revealed a financial commitment exceeding $1 billion for Dojo through 2024.

September 6 – Musk shared on X that Tesla’s AI training capabilities were lagging, but improvements were on the horizon with both Nvidia and Dojo. He highlighted the challenges associated with managing data generated by roughly 160 billion frames of video daily from Tesla vehicles.

2024

Scaling Plans

January 24 – During Tesla’s fourth-quarter and full-year earnings call, Musk reiterated that Dojo represents a high-risk but potentially high-reward project. He indicated that Tesla was advancing in parallel tracks—utilizing both Nvidia and Dojo—and confirmed that “Dojo is operational” and completing training tasks. Plans for iterative versions such as Dojo 1.5, 2, and 3 were also mentioned.

January 26 – Tesla unveiled plans to allocate $500 million toward constructing a Dojo supercomputer in Buffalo. Musk tempered expectations slightly, stating that a $500 million investment is relatively modest compared to the larger AI competition expenses.

April 30 – At TSMC’s North American Technology Symposium, it was stated that Dojo’s next-gen training tile, identified as D2, had entered production. This tile would consolidate the entirety of the Dojo tile onto a single silicon wafer, eliminating the need for 25 separate chips.

May 20 – Musk mentioned that the addition to the Giga Texas factory would accommodate “a super dense, water-cooled supercomputer cluster.”

June 4 – A CNBC report indicated that Musk redirected a substantial number of Nvidia chips originally set aside for Tesla to X and xAI. Initially, he disputed the claims, but later disclosed that Tesla lacked an operational site to utilize the chips due to ongoing construction at Giga Texas, noting that the facility would house 50,000 H100 chips for FSD training.

He further stated:

“Regarding roughly $10B in AI-related costs projected for this year, approximately half is allocated internally, mainly for our custom-designed AI inference computer and the sensors in all vehicles, including Dojo. For constructing AI training superclusters, Nvidia equipment will account for around two-thirds of the cost. I currently estimate Tesla’s spending on Nvidia this year to be between $3B and $4B.”

July 1 – Musk disclosed on X that Tesla’s existing vehicles might lack the necessary hardware for the company’s next-gen AI model. He elaborated that the significant increase in parameter count anticipated with the upcoming AI would be challenging to achieve without enhancing the vehicle’s inference capabilities.

Challenges with Nvidia Supplies

July 23 – During Tesla’s second-quarter earnings call, Musk remarked on the intense demand for Nvidia’s hardware, stating that acquiring GPUs has become increasingly difficult.

“This situation prompts us to invest more in Dojo to guarantee we obtain the necessary training capabilities. We believe we can ultimately stand against Nvidia by leveraging Dojo,” he mentioned.

A graph in Tesla’s investor presentation suggested that Tesla’s AI training capacity could grow to nearly 90,000 H100-equivalent GPUs by late 2024, up from about 40,000 in June. Subsequently, Musk indicated that Dojo 1 would see “approximately 8k H100-equivalent training online by year-end.” He also shared images of the supercomputer, which appeared to feature a design reminiscent of Tesla’s Cybertrucks.

Transitioning from Dojo to Cortex

July 30 – Musk indicated that AI5 was approximately 18 months away from mass production, responding to a user who expressed concern about being left behind by the new AI hardware.

August 3 – Musk shared on X that he had toured “the Tesla supercompute cluster at Giga Texas (known as Cortex)” and noted that it would consist of roughly 100,000 H100/H200 Nvidia GPUs, incorporating vast storage capacity for video training on FSD and Optimus.

August 26 – Musk posted a clip showcasing Cortex, praising it as “the massive new AI training supercluster being assembled at Tesla HQ in Austin to tackle real-world AI challenges.”

2025

No Updates on Dojo in 2025

January 29 – Tesla’s Q4 and full-year 2024 earnings call did not mention Dojo. However, Cortex, Tesla’s newly established AI training supercluster at the Austin Gigafactory, was highlighted as part of the update. Tesla indicated in its shareholder presentation that deployment of Cortex, comprising approximately 50,000 H100 Nvidia GPUs, has been completed.

“Cortex facilitated the deployment of FSD V13 (Supervised), which features substantial enhancements in safety and comfort, achieving a 4.2x increase in data and improved video input resolutions, among other improvements,” the company mentioned in its communication.

During the call, CFO Vaibhav Taneja stated that Tesla expedited the establishment of Cortex to hasten the introduction of FSD V13. He noted that cumulative capital expenditures related to AI, inclusive of infrastructure, had amounted to about $5 billion thus far. Taneja also forecasted that AI-related capital expenditures would remain stable in 2025.

This article was initially published on August 10, 2024, and will be updated as new developments arise.

Compiled by Techarena.au.