
DeepSeek AI is powered by Huawei's chip, a recent disclosure alleges

The LLM now runs efficiently on Huawei's Ascend 910C after being initially trained on Nvidia's H100.

Huawei silicon allegedly serves as the core driving force behind DeepSeek AI, according to a recent disclosure


In a recent development, Chinese AI company DeepSeek is making waves in the AI market by using Huawei's Ascend 910C chip to power its AI models. Despite the chip's performance and interconnect limitations compared to Nvidia's H100, DeepSeek's models are reportedly outperforming U.S. rivals such as ChatGPT.

According to tests conducted by DeepSeek's team, the Ascend 910C chip achieves approximately 60% of the performance of Nvidia's H100 chip for AI workloads, including large language models. Although the Ascend 910C is roughly a generation behind the Nvidia H100, Huawei leverages scale and interconnect enhancements to help offset this gap in some applications.

The Ascend 910C offers solid but not top-tier performance relative to Nvidia's H100. It reaches about 593 TFLOP/s compared to the H100's 700-750 TFLOP/s on dense matrix operations at BF16 precision. However, Huawei's strategy to leverage scale and interconnect innovations, combined with domestic manufacturing and price-performance balance, may enable competitive AI system deployments within China’s market and partially challenge Nvidia’s dominance.
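The gap between the quoted peak throughput figures and the ~60% workload figure cited earlier can be checked with a rough back-of-envelope calculation. The sketch below uses only the numbers quoted in this article (593 TFLOP/s for the 910C, 700-750 TFLOP/s for the H100 at dense BF16); these are reported figures, not official vendor specifications.

```python
# Back-of-envelope comparison using the dense BF16 throughput figures
# quoted above (reported figures, not official vendor specs).
ASCEND_910C_TFLOPS = 593            # Huawei Ascend 910C, dense BF16
H100_TFLOPS_LOW, H100_TFLOPS_HIGH = 700, 750  # Nvidia H100, dense BF16

# Ratio of peak dense throughput, across the quoted H100 range.
ratio_low = ASCEND_910C_TFLOPS / H100_TFLOPS_HIGH
ratio_high = ASCEND_910C_TFLOPS / H100_TFLOPS_LOW

print(f"910C peak throughput is roughly {ratio_low:.0%}-{ratio_high:.0%} "
      f"of the H100's")
```

Note that this peak-throughput ratio (roughly 79-85%) is noticeably higher than the ~60% figure reported for end-to-end LLM workloads: peak TFLOP/s ignores the interconnect and software-ecosystem differences the article mentions, which also shape real training and inference performance.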

DeepSeek is currently using the Ascend 910C chip for its R1 large-language model (LLM), which was initially trained using Nvidia's H100. The company's decision to use Huawei's chip is a strategic move to reduce its reliance on expensive U.S.-based chips.

DeepSeek's success in the AI market could potentially influence other companies to use less expensive, non-U.S.-based chips to power their AI models. In fact, DeepSeek is planning to train its next AI model (V4) using 32,000 Huawei 910C chips, which could significantly increase the demand for these chips.

The demand for Huawei's 910C chips may also increase due to the impressive performance of DeepSeek's AI. DeepSeek V3, the previous version of the AI, has demonstrated high efficiency in complex tasks such as coding and essay writing, outperforming many U.S. AI rivals.

It's important to note that Nvidia currently leads in raw chip performance and global software ecosystem integration, critical factors for large language model training and deployment. However, DeepSeek's strategy of using Huawei's chips could disrupt the market and challenge Nvidia's dominance.

In the global race for AI dominance, DeepSeek, a Chinese AI company, is making significant strides by using Huawei's Ascend 910C chip. The decision to adopt this chip despite its limitations relative to Nvidia's H100 reflects both a commitment to innovation and a strategy of reducing reliance on expensive U.S.-based chips.


  1. Despite the performance limitations of Huawei's Ascend 910C chip compared to Nvidia's H100, DeepSeek AI is reportedly achieving strong results on AI tasks, outperforming U.S. rivals such as ChatGPT.
  2. The success of DeepSeek AI on the Huawei Ascend 910C could prompt other companies to adopt less expensive, non-U.S. AI chips, potentially disrupting the current AI market landscape.
