TechCrunch News, February 7
Tesla Dojo: Elon Musk’s big plan to build an AI supercomputer, explained

 

Tesla's Dojo is its custom-built AI supercomputer, designed to train the neural networks behind "Full Self-Driving." Beefing up Dojo goes hand in hand with Tesla's goals of achieving full self-driving and bringing a robotaxi to market. Although Musk has previously said Dojo was the key to reaching Tesla's full self-driving goal, since August 2024 the conversation has shifted to Cortex, the "giant new AI training supercluster" Tesla is building at its Austin headquarters to solve real-world AI problems. Tesla is betting heavily on AI and Dojo (now Cortex) to achieve autonomy for both its cars and its humanoid robots, and the company's future success largely hinges on its ability to solve this problem as competition in the EV market intensifies.

💡Dojo is Tesla's custom supercomputer, designed to train its "Full Self-Driving" (FSD) neural networks, and it is key infrastructure for achieving full autonomy and launching a robotaxi.

🧠Tesla's FSD relies on cameras to capture visual data, which advanced neural networks process to make driving decisions; the goal is to replicate the human visual cortex and brain function and achieve true autonomous driving.

Chips are the heart of Dojo. Tesla introduced its custom D1 chip, optimized for AI workloads; 25 D1 chips are fused into a single tile that acts as one unified computer system, raising bandwidth and compute power. A next-generation D2 chip is in development to address information-flow bottlenecks.

💰Tesla plans to move to a chip supply mix of "half Tesla AI hardware, half Nvidia/other" over the next 18 months to reduce its reliance on a single supplier, possibly including AMD chips.

For years, Elon Musk has talked about Dojo — the AI supercomputer that will be the cornerstone of Tesla's AI ambitions. It's important enough to Musk that in July 2024, he said the company's AI team would "double down" on Dojo in the lead-up to Tesla's robotaxi reveal, which happened in October.

But what exactly is Dojo? And why is it so critical to Tesla’s long-term strategy?

In short: Dojo is Tesla’s custom-built supercomputer that’s designed to train its “Full Self-Driving” neural networks. Beefing up Dojo goes hand-in-hand with Tesla’s goal to reach full self-driving and bring a robotaxi to market. FSD, which is on hundreds of thousands of Tesla vehicles today, can perform some automated driving tasks but still requires a human to be attentive behind the wheel. 

Tesla’s Cybercab reveal has come and gone, and now the company is gearing up to launch an autonomous ride-hail service using its own fleet of vehicles in Austin this June. Tesla also said during its 2024 fourth-quarter and full-year earnings call at the end of January that it plans to launch unsupervised FSD for U.S. customers in 2025. 

Musk's previous rhetoric has been that Dojo would be the key to achieving Tesla's goal of full self-driving. Now that Tesla appears to be nearing that goal, Musk has been mum on Dojo.

Instead, ever since August 2024, the talk has centered on Cortex, Tesla's "giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI." Musk has also said it will have "massive storage for video training of FSD & Optimus."

In Tesla’s Q4 shareholder deck, the company shared updates on Cortex, but nothing on Dojo. 

Tesla has positioned itself to spend big on AI and Dojo — and now Cortex — to reach its goal of autonomy for both cars and humanoid robots. And Tesla’s future success really hinges on its ability to nail this down, given the increased competition in the EV market. So it’s worth taking a closer look at Dojo, Cortex, and where it all stands today. 

Tesla’s Dojo backstory

Image Credits: SUZANNE CORDEIRO/AFP via Getty Images

Musk doesn’t want Tesla to be just an automaker, or even a purveyor of solar panels and energy storage systems. Instead, he wants Tesla to be an AI company, one that has cracked the code to self-driving cars by mimicking human perception. 

Most other companies building autonomous vehicle technology rely on a combination of sensors to perceive the world — like lidar, radar and cameras — as well as high-definition maps to localize the vehicle. Tesla believes it can achieve fully autonomous driving by relying on cameras alone to capture visual data and then use advanced neural networks to process that data and make quick decisions about how the car should behave. 

As Tesla’s former head of AI, Andrej Karpathy, said at the automaker’s first AI Day in 2021, the company is basically trying to build “a synthetic animal from the ground up.” (Musk had been teasing Dojo since 2019, but Tesla officially announced it at AI Day.)

Companies like Alphabet's Waymo have commercialized Level 4 autonomous vehicles — which the SAE defines as a system that can drive itself without the need for human intervention under certain conditions — through a more traditional sensor and machine learning approach. Tesla has yet to produce an autonomous system that doesn't require a human behind the wheel.

About 1.8 million people have paid the hefty subscription price for Tesla’s FSD, which currently costs $8,000 and has been priced as high as $15,000. The pitch is that Dojo-trained AI software will eventually be pushed out to Tesla customers via over-the-air updates. The scale of FSD also means Tesla has been able to rake in millions of miles worth of video footage that it uses to train FSD. The idea there is that the more data Tesla can collect, the closer the automaker can get to actually achieving full self-driving. 

However, some industry experts say there might be a limit to the brute force approach of throwing more data at a model and expecting it to get smarter. 

“First of all, there’s an economic constraint, and soon it will just get too expensive to do that,” Anand Raghunathan, Purdue University’s Silicon Valley professor of electrical and computer engineering, told TechCrunch. Further, he said, “Some people claim that we might actually run out of meaningful data to train the models on. More data doesn’t necessarily mean more information, so it depends on whether that data has information that is useful to create a better model, and if the training process is able to actually distill that information into a better model.” 

Raghunathan said despite these doubts, the trend of more data appears to be here for the short-term at least. And more data means more compute power needed to store and process it all to train Tesla’s AI models. That is where Dojo, the supercomputer, comes in. 

Dojo is Tesla’s supercomputer system that’s designed to function as a training ground for AI, specifically FSD. The name is a nod to the space where martial arts are practiced. 

A supercomputer is made up of thousands of smaller computers called nodes. Each of those nodes has its own CPU (central processing unit) and GPU (graphics processing unit). The former handles overall management of the node, and the latter does the complex stuff, like splitting tasks into multiple parts and working on them simultaneously. GPUs are essential for machine learning operations like those that power FSD training in simulation. They also power large language models, which is why the rise of generative AI has made Nvidia the most valuable company on the planet. 
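The split of labor described above — a coordinating CPU that divides work and parallel GPUs that grind through the pieces — is the basic fan-out/combine pattern behind all large-scale training. A toy sketch in Python (threads standing in for GPU workers; the workload here is a trivial sum of squares, not real training):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a GPU's role: crunch one slice of the data
    # independently of the other workers.
    return sum(x * x for x in chunk)

def fan_out(data, workers=4):
    # Stand-in for a CPU's role: split the task into parts, hand each
    # part to a worker to run concurrently, then combine the results.
    size = -(-len(data) // workers)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))
```

The key property is that the parallel result matches what a single serial pass would produce — the work is merely divided, not changed.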

Even Tesla buys Nvidia GPUs to train its AI (more on that later). 

Tesla’s vision-only approach is the main reason Tesla needs a supercomputer. The neural networks behind FSD are trained on vast amounts of driving data to recognize and classify objects around the vehicle and then make driving decisions. That means that when FSD is engaged, the neural nets have to collect and process visual data continuously at speeds that match the depth and velocity recognition capabilities of a human. 

In other words, Tesla means to create a digital duplicate of the human visual cortex and brain function. 

To get there, Tesla needs to store and process all the video data collected from its cars around the world and run millions of simulations to train its model on the data. 

Tesla appears to rely on Nvidia to power its current Dojo training computer, but it doesn’t want to have all its eggs in one basket — not least because Nvidia chips are expensive. Tesla also hopes to make something better that increases bandwidth and decreases latencies. That’s why the automaker’s AI division decided to come up with its own custom hardware program that aims to train AI models more efficiently than traditional systems. 

At the core of that program are Tesla's proprietary D1 chips, which the company says are optimized for AI workloads.

Ganesh Venkataramanan, former senior director of Autopilot hardware, presenting the D1 training tile at Tesla's 2021 AI Day. Image Credits: Tesla/screenshot of streamed event

Tesla is of a similar opinion to Apple in that it believes hardware and software should be designed to work together. That’s why Tesla is working to move away from the standard GPU hardware and design its own chips to power Dojo. 

Tesla unveiled its D1 chip, a silicon square the size of a palm, at AI Day in 2021. The D1 chip has been in production since at least May 2024. Taiwan Semiconductor Manufacturing Company (TSMC) manufactures the chips on a 7 nanometer process node. The D1 has 50 billion transistors and a large die size of 645 square millimeters, according to Tesla. All of this is to say that the D1 promises to be extremely powerful and efficient, and to handle complex tasks quickly.

“We can do compute and data transfers simultaneously, and our custom ISA, which is the instruction set architecture, is fully optimized for machine learning workloads,” said Ganesh Venkataramanan, former senior director of Autopilot hardware, at Tesla’s 2021 AI Day. “This is a pure machine learning.”

The D1 is still not as powerful as Nvidia’s A100 chip, though, which is also manufactured by TSMC using a 7 nanometer process. The A100 contains 54 billion transistors and has a die size of 826 square millimeters, so it performs slightly better than Tesla’s D1. 
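A quick back-of-the-envelope check on the die figures quoted above. Note that transistor density is only a loose proxy: the D1 actually packs transistors more densely, while the A100's larger die carries more transistors in total, and neither number alone determines delivered performance:

```python
# Transistor counts and die sizes as quoted above.
d1_transistors, d1_die_mm2 = 50e9, 645
a100_transistors, a100_die_mm2 = 54e9, 826

# Millions of transistors per square millimeter.
d1_density = d1_transistors / d1_die_mm2 / 1e6
a100_density = a100_transistors / a100_die_mm2 / 1e6

print(round(d1_density, 1), round(a100_density, 1))  # 77.5 65.4
```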

To get a higher bandwidth and higher compute power, Tesla’s AI team fused 25 D1 chips together into one tile to function as a unified computer system. Each tile has a compute power of 9 petaflops and 36 terabytes per second of bandwidth, and contains all the hardware necessary for power, cooling and data transfer. You can think of the tile as a self-sufficient computer made up of 25 smaller computers. Six of those tiles make up one rack, and two racks make up a cabinet. Ten cabinets make up an ExaPOD. At AI Day 2022, Tesla said Dojo would scale by deploying multiple ExaPODs. All of this together makes up the supercomputer. 
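The arithmetic behind those building blocks is easy to check against the figures quoted above:

```python
# Dojo's building blocks, per Tesla's AI Day figures quoted above.
CHIPS_PER_TILE = 25
TILE_PETAFLOPS = 9
TILES_PER_RACK = 6
RACKS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

tiles = TILES_PER_RACK * RACKS_PER_CABINET * CABINETS_PER_EXAPOD
chips = tiles * CHIPS_PER_TILE
exaflops = tiles * TILE_PETAFLOPS / 1000  # 1,000 petaflops = 1 exaflop

print(tiles, chips, exaflops)  # 120 3000 1.08
```

So one ExaPOD works out to 120 tiles, 3,000 D1 chips, and a little over 1 exaflop of compute — which is where the "Exa" in the name comes from.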

Tesla is also working on a next-gen D2 chip that aims to solve information flow bottlenecks. Instead of connecting the individual chips, the D2 would put the entire Dojo tile onto a single wafer of silicon. 

Tesla hasn’t confirmed how many D1 chips it has ordered or expects to receive. The company also hasn’t provided a timeline for how long it will take to get Dojo supercomputers running on D1 chips. 

In response to a June post on X that said: "Elon is building a giant GPU cooler in Texas," Musk replied that Tesla was aiming for "half Tesla AI hardware, half Nvidia/other" over the next 18 months or so. The "other" could be AMD chips, per Musk's comment in January.

Tesla's humanoid robot Optimus Prime II at WAIC in Shanghai, China, on July 7, 2024. Image Credits: Costfoto/NurPhoto / Getty Images

Taking control of its own chip production means that Tesla might one day be able to quickly add large amounts of compute power to AI training programs at a low cost, particularly as Tesla and TSMC scale up chip production. 

It also means that Tesla may not have to rely on Nvidia’s chips in the future, which are increasingly expensive and hard to secure. 

During Tesla’s second-quarter earnings call, Musk said that demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.” He said he was “quite concerned about actually being able to get steady GPUs when we want them, and I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need.” 

That said, Tesla is still buying Nvidia chips today to train its AI. In June, Musk posted on X:

Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, Nvidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3B to $4B this year.

“Inference compute” refers to the AI computations performed by Tesla cars in real time and is separate from the training compute that Dojo is responsible for.

Dojo is a risky bet, one that Musk has hedged several times by saying that Tesla might not succeed. 

In the long run, Tesla could theoretically create a new business model based on its AI division. Musk has said that the first version of Dojo will be tailored for Tesla computer vision labeling and training, which is great for FSD and for training Optimus, Tesla’s humanoid robot. But it wouldn’t be useful for much else. 

Musk has said that future versions of Dojo will be more tailored to general-purpose AI training. One potential problem with that is almost all AI software out there has been written to work with GPUs. Using Dojo to train general-purpose AI models would require rewriting the software. 

That is, unless Tesla rents out its compute, similar to how AWS and Azure rent out cloud computing capabilities. Musk also noted during Q2 earnings that he sees “a path to being competitive with Nvidia with Dojo.”

A September 2023 report from Morgan Stanley predicted that Dojo could add $500 billion to Tesla’s market value by unlocking new revenue streams in the form of robotaxis and software services. 

In short, Dojo’s chips are an insurance policy for the automaker, but one that could pay dividends. 

Nvidia CEO Jensen Huang and Tesla CEO Elon Musk at the GPU Technology Conference in San Jose, California. Image Credits: Kim Kulish/Corbis via Getty Images

Reuters reported last year that Tesla began production on Dojo in July 2023, but a June 2023 post from Musk suggested that Dojo had been “online and running useful tasks for a few months.”

Around the same time, Tesla said it expected Dojo to be one of the top five most powerful supercomputers by February 2024 — a feat that has yet to be publicly disclosed, leaving us doubtful that it has occurred.

The company also said it expects Dojo's total compute to reach 100 exaflops in October 2024. (One exaflop is equal to 1 quintillion floating point operations per second. To reach 100 exaflops, assuming that one D1 can achieve 362 teraflops, Tesla would need more than 276,000 D1s, or around 320,500 Nvidia A100 GPUs.)
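The chip counts above follow from simple division. The D1 figure comes from the article; the A100 rate used here is an assumption based on Nvidia's published BF16 tensor-core peak, which reproduces the "around 320,500" estimate:

```python
import math

TARGET_TFLOPS = 100 * 1e6  # 100 exaflops, expressed in teraflops
D1_TFLOPS = 362            # per-chip figure quoted above
A100_TFLOPS = 312          # assumption: Nvidia's published BF16 tensor peak

d1_needed = math.ceil(TARGET_TFLOPS / D1_TFLOPS)
a100_needed = math.ceil(TARGET_TFLOPS / A100_TFLOPS)

print(d1_needed, a100_needed)  # 276244 320513
```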

Tesla also pledged in January 2024 to spend $500 million to build a Dojo supercomputer at its gigafactory in Buffalo, New York.

In May 2024, Musk noted that the rear portion of Tesla’s Austin gigafactory will be reserved for a “super dense, water-cooled supercomputer cluster.” Now we know that it’s actually Cortex, not Dojo, that is taking up that space in Austin. 

Just after Tesla's second-quarter earnings call, Musk posted on X that the automaker's AI team is using the Tesla HW4 AI computer (renamed AI4), which is the hardware that lives on Tesla vehicles, in the training loop with Nvidia GPUs. He noted that the breakdown is roughly 90,000 Nvidia H100s plus 40,000 AI4 computers.

“And Dojo 1 will have roughly 8k H100-equivalent of training online by end of year,” he continued. “Not massive, but not trivial either.”

Tesla hasn’t provided updates as to whether it has gotten those chips online and running Dojo. During the company’s fourth-quarter 2024 earnings call, no one mentioned Dojo. However, Tesla said it completed the deployment of Cortex in Q4, and that it was Cortex that helped enable V13 of supervised FSD. 

This story originally published August 3, 2024, and we will update it as new information develops.
