AI Powerhouse AMD Takes the Rack-Scale Lead with Helios, While Intel Pivots Gaudi 3 Toward Cost-Efficient, Open AI Systems
In the rapidly evolving artificial intelligence (AI) market, Intel is pushing its own rack-scale AI solutions. The company's strategy centers on the Gaudi 3 AI accelerator and the 18A process node, emphasizing cost-efficient AI capacity and open ecosystems rather than brute-force performance.
Intel's Gaudi 3, now available in PCIe and rack-scale configurations, carries 128GB of HBM2e memory, a higher capacity than Nvidia's H100. While it does not necessarily surpass Nvidia in raw performance, the rack-scale design supports up to 64 accelerators per rack, targeting hyperscalers and large enterprises that need high bandwidth and efficiency for AI training and inference workloads.
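To put those figures in context, a quick back-of-envelope calculation shows what the rack-scale configuration adds up to in memory terms. This is a rough sketch using the numbers cited above (128GB per accelerator, 64 accelerators per rack) plus the H100's commonly cited 80GB for comparison; actual deliverable capacity depends on the specific system design.

```python
# Back-of-envelope: aggregate HBM capacity of a fully populated Gaudi 3 rack.
# Assumes the figures cited in the article: 128 GB HBM2e per accelerator,
# 64 accelerators per rack; H100 figure included only for comparison.
HBM_PER_ACCELERATOR_GB = 128   # Gaudi 3 on-package HBM2e
ACCELERATORS_PER_RACK = 64     # rack-scale reference configuration
H100_HBM_GB = 80               # Nvidia H100 SXM, for comparison

rack_hbm_tb = HBM_PER_ACCELERATOR_GB * ACCELERATORS_PER_RACK / 1024
print(f"Per-accelerator advantage vs H100: {HBM_PER_ACCELERATOR_GB - H100_HBM_GB} GB")
print(f"Aggregate HBM per rack: {rack_hbm_tb:.1f} TB")  # ~8 TB across 64 accelerators
```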
Complementing Gaudi 3 are Intel's Arc Pro B60 GPUs, designed for AI inferencing and creative workstations; they aim to compete through cost-effective performance and multi-GPU scaling for large AI models, as sketched in the example below. The 18A node, a cutting-edge manufacturing process built around gate-all-around transistors, is critical to Intel's hardware competitiveness but has experienced delays; it is expected to be a key driver in the company's future AI, CPU, and GPU products.
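As a rough illustration of what "multi-GPU scaling for large AI models" means in practice, the sketch below estimates how many cards are needed just to hold a model's weights. The 24GB-per-card figure, the FP16 assumption, and the example model sizes are illustrative assumptions, not Intel-published sizing guidance.

```python
# Rough sizing sketch: how many workstation GPUs are needed to hold model weights?
# Assumptions (illustrative, not vendor guidance): 24 GB of VRAM per card,
# FP16 weights (2 bytes/parameter), ~80% of VRAM usable after runtime overhead.
import math

def cards_needed(params_billions: float, vram_gb: float = 24.0,
                 bytes_per_param: int = 2, usable_fraction: float = 0.8) -> int:
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes/param / 1e9 bytes-per-GB
    return math.ceil(weights_gb / (vram_gb * usable_fraction))

for size in (7, 13, 70):
    print(f"{size}B-parameter model (FP16 weights): ~{cards_needed(size)} card(s)")
```

The point of the exercise is simply that models beyond roughly 10B parameters quickly exceed a single workstation card, which is why cost-per-GPU and multi-GPU scaling matter for this class of product.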
Intel's entry into the rack-scale AI market comes after competitors like Nvidia and AMD have already established themselves, particularly in AI training. Intel acknowledges that it has fallen behind in this area, conceding that it is "too late" to catch up. As a result, Intel is shifting more of its focus toward edge AI and agentic AI, stepping back from the hyperscale training market dominated by Nvidia and AMD.
In the broader rack-scale AI landscape, Nvidia and AMD continue to push advanced interconnect technologies that deliver extreme bandwidth and tight GPU-memory pooling at scale, while Intel is only beginning to compete with its Gaudi 3 rack-scale design. These architectures target cloud hyperscalers and large on-prem AI deployments, offering high performance at the price of high cost and complexity.
Despite these challenges, Intel's rack-scale AI solutions are commercially available now or imminently, with early hyperscaler adoption expected to translate into revenue between late Q2 and early Q3 of 2025. Nvidia and AMD, by contrast, already have market-leading products widely deployed.
In summary, Intel's rack-scale AI solutions offer a cost-effective, open alternative in a market dominated by the high-end performance and tight hardware-software integration of Nvidia and AMD. The strategic pivot toward edge AI and agentic AI, together with Intel's advanced packaging technology, reflects an adaptation to competitive realities and a delayed advanced-node ramp.
Meanwhile, AMD is performing well in the AI market, with Oracle already a customer for its current rack-scale solution built around MI355X GPUs. Intel's AI strategy, by contrast, remains uncertain amid a recent CEO change and upcoming layoffs. As the race for AI dominance continues, the open question is how Intel will navigate these challenges and compete in the market.
Finance and technology are intertwined in Intel's AI strategy: the company is betting on cost-efficient AI capacity, built on the Gaudi 3 accelerator and the 18A process node, and on a more open ecosystem and edge AI applications to compete with rivals Nvidia and AMD.