Thinking Machines Lab, the startup founded by former OpenAI executive Mira Murati, has reportedly signed a multi-billion-dollar deal to expand its use of Google Cloud's AI infrastructure, including systems powered by Nvidia's latest GPUs, according to sources familiar with the matter.
The deal, valued in the single-digit billions of dollars, covers access to Google's newest AI systems, built on Nvidia's new GB300 chips, along with the infrastructure services needed to train and deploy AI models.
Google has been striking cloud partnerships with leading AI developers as part of a strategy to pair its core cloud offerings, including storage, its Kubernetes engine, and the Spanner database, with AI compute. Earlier this month, for instance, Anthropic reached an agreement with Google and Broadcom to secure multiple gigawatts of capacity on Tensor Processing Units (TPUs), Google's in-house AI chips built for machine learning workloads.
Competition in AI infrastructure remains fierce, however. Anthropic also struck a new agreement with Amazon this week for up to 5 gigawatts of capacity to train and deploy its Claude AI models.
Earlier this year, Thinking Machines partnered with Nvidia, a deal that included a direct investment from the chipmaker. The new agreement, however, is the lab's first direct deal with a major cloud provider. It is not exclusive, leaving Thinking Machines free to work with other cloud providers in the future, but it underscores Google's push to secure early partnerships with fast-growing frontier AI labs.
Mira Murati left her role as OpenAI's chief technologist to found Thinking Machines in February 2025. The company, which quickly raised a $2 billion seed round at a $12 billion valuation, operated largely in secret until October, when it launched its first product, Tinker, a tool that automates the creation of custom frontier AI models.
Wednesday's announcement offered a rare look at Thinking Machines' work. In its press release, Google highlighted its ability to support the startup's reinforcement learning workloads, which underpin Tinker. Reinforcement learning, a training technique behind recent breakthroughs at labs such as DeepMind and OpenAI, is computationally expensive, and that cost helps explain the scale of the Google Cloud deal.
Thinking Machines is among the first Google Cloud customers to gain access to its GB300-powered systems, which Google says deliver twice the training and serving speed of previous-generation GPUs.
“Google Cloud got us running at record speed with the reliability we demand,” said Myle Ott, a founding researcher at Thinking Machines, in a statement.
The Editorial Staff at AIChief is a team of professional content writers with experience in AI and marketing. AIChief was founded in 2025.