DeepSeek has delayed the launch of its next AI model, R2, after efforts to train it on Huawei’s Ascend chips failed, prompting a return to Nvidia systems for training. The company, which drew notice with its R1 model in January, faced official pressure to favor domestic hardware, according to people cited by the Financial Times.
When training began, engineers ran into persistent technical issues on the Ascend platform that prevented a successful end-to-end run. As a result, DeepSeek scrapped a planned May release and shifted training back to Nvidia hardware while it works to stabilize R2. The company is still exploring whether Huawei’s chips can handle the less demanding inference stage once training is complete.
Huawei even dispatched its own engineers to DeepSeek’s offices to help, but the teams were still unable to complete a working training run. Industry figures say the outcome is not surprising: training large models is far more resource-intensive and unforgiving than inference, and even small reliability gaps can derail multi-week jobs.
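A rough calculation shows why reliability matters so much at training scale. The sketch below uses illustrative numbers that are assumptions for the sake of the example, not figures reported for DeepSeek or Huawei hardware, to show how a tiny per-device fault rate compounds across thousands of accelerators over a multi-week job.

```python
# Back-of-envelope sketch of how small reliability gaps compound during
# large-scale training. All numbers are illustrative assumptions, not
# figures reported for DeepSeek or Huawei hardware.

num_devices = 2048               # assumed accelerator count for one job
hours = 24 * 21                  # a three-week training run
p_fault_per_device_hour = 1e-5   # assumed per-device, per-hour fault rate

device_hours = num_devices * hours

# Probability the whole run finishes with zero faults anywhere.
p_clean = (1 - p_fault_per_device_hour) ** device_hours

# Expected number of faults, each typically forcing a checkpoint restore.
expected_faults = device_hours * p_fault_per_device_hour

print(f"Device-hours in the run:    {device_hours:,}")
print(f"Chance of a fault-free run: {p_clean:.2e}")
print(f"Expected interruptions:     {expected_faults:.1f}")
```

Under these assumed numbers, a fault-free three-week run is essentially impossible (roughly a 0.003% chance, with about ten interruptions expected), so a platform that cannot recover cleanly from faults struggles to finish training at all. Inference, by contrast, runs as many short, independent requests, so a single fault costs one request rather than a week of progress.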
Huawei’s founder, Ren Zhengfei, said earlier this year that outside praise has overstated the company’s progress and that its best chips remain a generation behind leading rivals. Beijing continues to push major firms toward local hardware, with reports that companies must now justify purchases of Nvidia’s H20, the export-compliant chip still widely used for advanced AI work.
Inside DeepSeek, founder Liang Wenfeng has reportedly told staff he is dissatisfied with R2’s progress and wants a model capable of keeping the company among the industry leaders. The setback underscores what is at stake in choosing a compute platform: training demands sustained power, high memory bandwidth, and a stable software stack over weeks, while inference can be split across lighter, cheaper hardware.
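A hedged estimate makes that asymmetry concrete. The sketch below uses the common approximations of about 6ND floating-point operations to train a dense transformer with N parameters on D tokens, and about 2N operations per generated token at inference; the parameter and token counts are assumptions for illustration, not DeepSeek’s actual figures.

```python
# Rough sketch of the training-versus-inference compute gap, using the
# common approximations train_flops ~ 6*N*D and ~2*N FLOPs per generated
# token at inference. N and D are illustrative assumptions, not
# DeepSeek's actual model or data sizes.

N = 70e9    # assumed parameter count
D = 10e12   # assumed training tokens

train_flops = 6 * N * D     # one full pre-training run
flops_per_token = 2 * N     # one token generated at inference

# How many served tokens equal the compute of one training run?
equivalent_tokens = train_flops / flops_per_token

print(f"Training compute:   {train_flops:.2e} FLOPs")
print(f"Per-token serving:  {flops_per_token:.2e} FLOPs")
print(f"One run ~= serving  {equivalent_tokens:.1e} tokens")
```

Just as important, those training operations must execute as one tightly synchronized job on uniform hardware, while the equivalent inference workload can be spread across many small, independent machines. That is why a chip that falls short for training can still be a plausible fit for inference, which is exactly the split DeepSeek is reportedly exploring.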
For now, DeepSeek’s training stays on Nvidia hardware, and the R2 schedule has slipped while the company rebuilds its workflow on those systems. The episode highlights both China’s long-term push for self-reliance in chips and the practical limits facing AI developers today as they balance national goals, technical realities, and the race to ship stronger models.