
Nvidia Unveils Vera Rubin AI Platform at CES 2026
Nvidia has introduced its next-generation Vera Rubin AI computing platform at CES 2026, accelerating a launch that had been expected later in the year. The company is positioning Vera Rubin as a major step beyond its Blackwell lineup, which helped fuel a strong year for Nvidia’s data center business amid surging demand for AI hardware.
Ahead of its keynote, Nvidia senior director of HPC and AI infrastructure solutions Dion Harris described Vera Rubin as “six chips that make one AI supercomputer.” The platform combines the Vera CPU, the Rubin GPU, a sixth-generation NVLink switch, the ConnectX-9 network interface card, the BlueField-4 data processing unit, and the Spectrum-X 102.4T CPO switch. Nvidia says the system will support third-generation confidential computing and is designed to be the first rack-scale trusted computing platform, aimed at customers that need stronger security guarantees while running large-scale AI workloads.
Nvidia claims the Rubin GPU can deliver five times the AI training compute of Blackwell. The company also says the broader Vera Rubin architecture can train a large “mixture of experts” AI model in the same amount of time as Blackwell while using a quarter of the GPUs and reducing the cost per token to one-seventh. If those figures hold in real-world deployments, Rubin could shrink the hardware footprint required for top-tier model training while lowering operating costs.
The announcement arrives shortly after Nvidia reported record data center revenue, up 66 percent from the prior year. That growth was driven by demand for Blackwell and Blackwell Ultra GPUs, setting high expectations for Rubin as the next benchmark for AI infrastructure spending and a key indicator of how durable the current AI surge will be.
Nvidia said products and services built on Vera Rubin will be available through its partners starting in the second half of 2026, signaling that broader adoption will ramp over time rather than immediately following the CES reveal.