For much of the past few years, U.S. export controls on advanced AI chips have been justified publicly as a matter of urgency. Policymakers warned that Chinese chipmakers, backed by massive state support and forced into self-reliance, were on the verge of closing the gap with Nvidia in AI hardware. That fear has shaped decisions in Washington, including recent efforts to loosen restrictions on certain Nvidia silicon bound for China.
Now, a new report from the Council on Foreign Relations paints a very different picture. Based on performance data, manufacturing constraints at China’s leading foundry, and realistic production volume estimates, the analysis concludes that Huawei’s AI chip capabilities lag behind Nvidia’s by a wide margin, and that the gap is not narrowing. Indeed, by several measures, it is widening.
The report’s findings are especially significant because they directly address U.S. AI policy. If Huawei cannot plausibly catch Nvidia on AI hardware in the medium term, the rationale for relaxing export controls weakens. The question is not whether China is investing heavily in AI silicon. It clearly is. The question is whether those investments are translating into competitive hardware at scale. Right now, this report suggests they are not.
Huawei stuck several generations behind
Huawei’s flagship AI accelerators come from its Ascend line, most recently the Ascend 910C. On paper, the 910C is impressive. It is a large, power-hungry accelerator aimed squarely at data center AI training and inference. In practice, however, its performance ceiling is far below Nvidia’s current generation.
The CFR analysis estimates that the Ascend 910C delivers roughly 60% of the inference performance of Nvidia’s H100 under comparable conditions. That comparison already flatters Huawei, because Nvidia has moved well beyond the H100. The H200, which entered volume shipments in 2024 and was recently re-cleared for export to China, substantially increases memory capacity and bandwidth, while Nvidia’s Blackwell generation pushes further still.
Process technology is a major part of the problem. Huawei no longer has access to TSMC and must rely on SMIC for fabrication. SMIC’s most advanced production technology is widely understood to be a 7nm-class process achieved without EUV lithography. Yield rates are low, costs are high, and whether SMIC can scale beyond that node remains uncertain.
Nvidia, by contrast, continues to use TSMC’s leading-edge processes and advanced packaging. Its latest accelerators pair large compute dies with massive pools of high-bandwidth memory (HBM) using CoWoS interposers. That combination is critical for today’s AI workloads, where memory bandwidth and capacity ultimately dictate how accelerators perform in deployment.
Huawei’s Ascend chips simply cannot match that memory subsystem. Without access to large volumes of HBM and advanced packaging capacity, Ascend accelerators rely on slower memory configurations that bottleneck performance, particularly for large language models. Even Huawei’s own roadmap highlights the issue, showing that next year’s Ascend 950PR and 950DT will both offer lower total processing performance than the Ascend 910C.
According to projections cited in the CFR report, Huawei’s next-gen Ascend chips would at best approach H100-class performance around 2026 or 2027. By then, Nvidia will be multiple product cycles ahead. The report estimates that by 2027, the best U.S. AI chips could be more than 17 times more powerful than Huawei’s top offerings.
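That 17x figure is the product of compounding: modest Ascend gains set against much faster Nvidia product cycles. As a rough illustration of the arithmetic only, here is a minimal sketch. The ~60%-of-H100 starting point comes from the report; the per-generation growth factors and the number of cycles are illustrative assumptions, not figures from CFR or from either company.

```python
# Hypothetical compounding model of the projected performance gap.
# Only the ~0.6x-of-H100 starting point is taken from the CFR report;
# every growth factor below is an illustrative assumption.

huawei_perf = 0.6  # Ascend 910C inference performance, normalized to H100 = 1.0
nvidia_perf = 1.0  # H100 baseline

nvidia_gain_per_cycle = 3.0  # assumed gain per Nvidia product cycle (not a report figure)
huawei_gain_per_gen = 1.2    # assumed gain per Ascend generation (not a report figure)
cycles_to_2027 = 2           # assumed number of product cycles between now and 2027

for _ in range(cycles_to_2027):
    nvidia_perf *= nvidia_gain_per_cycle
    huawei_perf *= huawei_gain_per_gen

print(f"Illustrative gap by ~2027: {nvidia_perf / huawei_perf:.1f}x")
# With these made-up factors the gap works out to roughly 10x; the report's
# >17x estimate implies even faster Nvidia gains and/or slower Ascend progress.
```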