Tesla Demonstrates Cybertruck-Inspired Server Racks for Advanced Self-Driving Systems Training
Tesla's Dojo Supercomputer Advances with the Introduction of Dojo 2 Chips
Tesla's AI efforts are taking a significant leap forward with the Dojo 2 chip, designed in-house for improved AI training performance and efficiency. The new chip entered mass production at TSMC in mid-July 2025.
Dojo 2 Chip Details
The Dojo 2 supercomputer chip is a crucial component in Tesla’s autonomous driving AI and other AI capabilities, including Optimus humanoid robot training. The chip offers improved scalability and energy efficiency, and is optimized for Tesla’s specific deep learning workloads.
ExaPOD Units and Chip Deployment
Tesla has not disclosed exactly how many ExaPOD units (its modular AI training pods, each containing thousands of these chips) are installed at its compute data centers, but deployment is expected to grow as mass production ramps up. The Dojo 2 chip forms the core of these ExaPODs, which aim toward exascale computing capacity for Tesla's AI.
Use of D1/D2 Chips and Nvidia GPUs
Tesla initially deployed Dojo D1 chips in the original Dojo system and, with D2 chips now in mass production, is transitioning to the newer chips for greater performance. Tesla still purchases third-party GPUs, including Nvidia's H100 and B200, to supplement its AI training capacity, especially where flexibility and general-purpose GPU compute are needed. The emphasis, however, is on vertical integration with its own Dojo hardware for specialized AI workloads.
Locations
Both the Mojo Dojo Compute Hall in New York and Giga Texas contribute computing resources for AI training at Tesla. The Mojo Dojo hall is particularly associated with Tesla's custom supercomputer infrastructure, where ExaPOD rigs with D1 (and now D2) chips operate. Giga Texas is also expanding its computing infrastructure to support Autopilot and Full Self-Driving neural networks, though the exact breakdown between sites is not publicly disclosed.
New Server Cabinets
Tesla's new server cabinets, with an angular design inspired by the Cybertruck, are likely Dojo ExaPOD units. Their power draw of 2.3 MW per cluster aligns with Dojo's high power and cooling demands, and photos of the new cabinets match images Tesla shared last year of its Dojo 1 supercomputer.
AI Training Usage
Tesla's Dojo supercomputer is likely used for AI training of its Full Self-Driving (FSD) system. Elon Musk estimated Tesla's 2024 Nvidia spending at $3-4 billion, which could equate to roughly 75,000-133,000 GPUs. However, precise numbers of Nvidia GPUs currently in use at the Mojo Dojo Compute Hall or Giga Texas have not been publicly detailed as of July 2025.
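The 75,000-133,000 figure is a back-of-envelope division of the reported spend by an assumed per-GPU price; the $30,000-$40,000 unit prices below are our assumption for illustration, not figures confirmed by Tesla or Nvidia:

```python
# Back-of-envelope GPU count from the reported $3-4B Nvidia spend.
# Assumed unit prices of $30k-$40k per H100-class GPU (illustrative only).
spend_low, spend_high = 3e9, 4e9          # reported 2024 spend range, USD
price_low, price_high = 30_000, 40_000    # assumed per-GPU price range, USD

min_gpus = int(spend_low / price_high)    # lowest count: low spend, high price
max_gpus = int(spend_high / price_low)    # highest count: high spend, low price
print(f"{min_gpus:,} - {max_gpus:,} GPUs")  # 75,000 - 133,333 GPUs
```

The spread in the article's estimate comes entirely from pairing the low end of the spend with the high end of the price and vice versa.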
Looking Ahead
xAI plans to deploy a 300,000-GPU B200 supercomputer by summer 2025, implying a further purchase of 200,000 B200 GPUs. Cooling infrastructure is visible in the ceiling above the new server cabinets, though its design is not detailed beyond appearing futuristic and extensive. The cabinets' clean wiring suggests a well-organized, efficient setup.
As of July 2025, Tesla's Dojo AI supercomputer is undergoing a significant transition from D1 to mass-produced D2 chips, which power scalable ExaPOD units principally located at the Mojo Dojo Compute Hall in New York and at Giga Texas. Nvidia H100 and B200 GPUs remain part of the heterogeneous hardware mix supporting Tesla's AI workloads, although Tesla aims to rely increasingly on its optimized in-house hardware at scale. Detailed unit counts for ExaPODs, chip numbers, and Nvidia GPU deployments at specific sites have not been publicly released as of this date.