Feb 11

Orbital AI: The Killer Economics of Space


Originally reported by TechCrunch

The concept of artificial intelligence operating in space has long been a subject of discussion among visionaries like Elon Musk and his associates, often drawing parallels to science fiction narratives depicting advanced civilizations governed by sentient spacecraft.

Musk now perceives a tangible opportunity to bring a version of this futuristic vision to fruition. His company, SpaceX, has sought regulatory approval to establish solar-powered orbital data centers. This ambitious project envisions a constellation of up to a million satellites, collectively capable of relocating as much as 100 gigawatts of compute power from Earth into orbit. Reports also indicate Musk's suggestion that some of these AI satellites could be constructed on the Moon.

“By far the cheapest place to put AI will be space in 36 months or less,” Musk asserted during a recent podcast hosted by Stripe cofounder John Collison.

This sentiment is not exclusive to Musk. The head of compute at xAI has reportedly wagered with his counterpart at Anthropic that 1% of global compute capacity will reside in orbit by 2028. Google, a significant investor in SpaceX, has unveiled its own space AI initiative named Project Suncatcher, with prototype vehicles slated for launch in 2027. Furthermore, Starcloud, a startup backed by Google and Andreessen Horowitz that has raised $34 million, recently submitted plans for an 80,000-satellite constellation. Even Jeff Bezos has acknowledged the potential of this future.

However, beyond the considerable buzz, what are the practical requirements for deploying data centers into space?

Initial analyses suggest that current terrestrial data centers remain more economical than their orbital counterparts. Space engineer Andrew McCalip has developed a calculator comparing these models, with his baseline findings indicating that a 1-gigawatt orbital data center could cost approximately $42.4 billion—nearly three times the expense of an equivalent ground-based facility, primarily due to the substantial upfront costs of satellite construction and launch.
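The shape of that comparison can be sketched with a toy model. The inputs below are illustrative assumptions, not values from McCalip's actual calculator: a spacecraft mass budget of 10 kg per kilowatt, the Falcon 9 launch price and satellite build cost cited elsewhere in this article, and a round $15 billion for a terrestrial gigawatt-scale facility.

```python
# Toy comparison of orbital vs. terrestrial data center capex for 1 GW.
# All inputs are illustrative assumptions, not McCalip's actual figures.

def orbital_capex(power_gw, kg_per_kw, launch_cost_per_kg, build_cost_per_kg):
    """Upfront cost: build the satellites, then launch their mass to orbit."""
    mass_kg = power_gw * 1e6 * kg_per_kw  # 1 GW = 1e6 kW
    return mass_kg * (launch_cost_per_kg + build_cost_per_kg)

# Assumed: 10 kg of spacecraft per kW; $3,600/kg launch (Falcon 9-class);
# ~$1,000/kg satellite build cost (a figure quoted later in the article).
space = orbital_capex(1.0, kg_per_kw=10, launch_cost_per_kg=3600, build_cost_per_kg=1000)
ground = 15e9  # assumed cost of a 1 GW terrestrial facility

print(f"orbital: ${space / 1e9:.1f}B")  # orbital: $46.0B
print(f"ground:  ${ground / 1e9:.1f}B")
print(f"ratio:   {space / ground:.1f}x")
```

Even this crude model lands in the same neighborhood as McCalip's baseline: a multiple of roughly three, dominated by launch and build costs rather than the chips themselves.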

Experts contend that altering this economic equation will necessitate advancements across multiple technological domains, immense capital investment, and extensive development of the supply chain for space-grade components. This shift also hinges on an increase in terrestrial costs as growing demand strains resources and supply chains on Earth.

A fundamental determinant for any space-based business model is the cost of transporting payloads into orbit. While SpaceX is actively working to reduce launch costs, analysts believe even lower prices are essential to validate the business case for orbital data centers. Thus, while space AI data centers might appear as a novel business venture preceding a potential SpaceX IPO, their feasibility ultimately depends on the successful completion of the company's long-standing, unfinished Starship project.

To illustrate, the reusable Falcon 9 currently offers a cost to orbit of approximately $3,600 per kilogram. According to Project Suncatcher’s white paper, making space data centers viable will demand prices closer to $200 per kilogram—an 18-fold improvement anticipated in the 2030s. At such a price point, the energy provided by a Starlink satellite today would become cost-competitive with a terrestrial data center.
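The gap between today's price and the white paper's target is simple to check from the two figures quoted above:

```python
falcon9_cost_per_kg = 3600  # current reusable Falcon 9, per the article
target_cost_per_kg = 200    # Project Suncatcher's viability threshold

improvement = falcon9_cost_per_kg / target_cost_per_kg
print(f"{improvement:.0f}x reduction needed")  # 18x reduction needed
```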

The expectation is that SpaceX’s next-generation Starship rocket will deliver these crucial cost reductions, as no other vehicle currently under development promises comparable savings. Nevertheless, Starship has yet to reach orbit or achieve operational status; the third iteration of the vehicle is expected to make its first flight in the coming months.

Even assuming Starship's complete success, the premise that it will immediately translate into significantly lower customer prices might be overly optimistic. Economists at the consultancy Rational Futures argue that, as with the Falcon 9, SpaceX will likely price its services just below its closest rivals to maximize revenue. For instance, if Blue Origin’s New Glenn rocket is priced at $70 million per flight, SpaceX is unlikely to fly Starship missions for external customers at a substantially lower price, which could leave launch costs well above the figures publicly projected by space data center developers.

“There are not enough rockets to launch a million satellites yet, so we’re pretty far from that,” stated Matt Garman, CEO of Amazon Web Services, at a recent event. “If you think about the cost of getting a payload in space today, it’s massive. It is just not economical.”

While launch costs represent a primary obstacle for all space ventures, the secondary challenge lies in production expenses.

“We always take for granted, at this point, that Starship’s cost is going to be hundreds of dollars per kilo,” McCalip informed TechCrunch. “People are not taking into account the satellites are almost $1,000 a kilo right now.”

Satellite manufacturing costs constitute the largest portion of this price tag. However, if high-powered satellites could be produced at roughly half the cost of current Starlink units, the economics begin to align. SpaceX has already made significant strides in satellite manufacturing efficiency with its Starlink network and aims for further improvements through economies of scale. The strategy of deploying a million satellites is undoubtedly influenced by the potential cost savings from mass production.

Nonetheless, the satellites designated for these missions must be sufficiently large to accommodate the intricate requirements of operating powerful GPUs, including substantial solar arrays, advanced thermal management systems, and laser-based communication links for data transmission.

A 2025 white paper from Project Suncatcher offers a comparison of terrestrial and space data centers based on the cost of power, the fundamental input for chip operation. On Earth, data centers typically spend between $570 and $3,000 per kilowatt of power annually, depending on local energy rates and system efficiency. SpaceX’s Starlink satellites generate power from on-board solar panels, but the combined cost of acquiring, launching, and maintaining these spacecraft translates to an energy cost of $14,700 per kilowatt annually. Simply put, satellites and their components require substantial cost reductions to compete with metered power on Earth.
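Using the per-kilowatt annual figures quoted from the Suncatcher paper, the size of the remaining gap can be computed directly:

```python
# How far satellite power costs must fall to match terrestrial rates,
# using the annual per-kilowatt figures quoted from the Suncatcher paper.
terrestrial_low, terrestrial_high = 570, 3000  # $/kW/year on Earth
starlink_effective = 14_700                    # $/kW/year in orbit today

print(f"vs. cheap grid power:     {starlink_effective / terrestrial_low:.0f}x gap")
print(f"vs. expensive grid power: {starlink_effective / terrestrial_high:.0f}x gap")
```

In other words, orbital power today is roughly 5 to 26 times more expensive than metered power on Earth, depending on the terrestrial baseline.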

Proponents of orbital data centers often suggest that thermal management is "free" in space, yet this is an oversimplification. In the absence of an atmosphere, the dissipation of heat is, in fact, more challenging.

“You’re relying on very large radiators to just be able to dissipate that heat into the blackness of space, and so that’s a lot of surface area and mass that you have to manage,” explained Mike Safyan, an executive at Planet Labs, which is developing prototype satellites for Google’s Project Suncatcher slated for a 2027 launch. “It is recognized as one of the key challenges, especially long term.”

Beyond the vacuum, AI satellites must also contend with cosmic radiation. Cosmic rays gradually degrade chips and can induce "bit flip" errors that corrupt data. Protection measures include shielding, the use of radiation-hardened components, or redundant error checks, but all these options entail expensive trade-offs in terms of mass. Google, for instance, has employed a particle beam to evaluate radiation effects on its Tensor Processing Units (chips specifically designed for machine learning). SpaceX executives have indicated on social media that the company has acquired a particle accelerator for a similar purpose.

Another set of challenges arises from the solar panels themselves. The underlying principle of the project is energy arbitrage: placing solar panels in space makes them five to eight times more efficient than on Earth, and if positioned in the correct orbit, they can receive sunlight for 90% or more of the day, further boosting their efficiency. Since electricity is the primary fuel for chips, more energy effectively translates to cheaper data centers. However, even solar panels present greater complexities in space.
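The arbitrage case rests on capacity factor: how many hours per year a panel actually produces power. The sketch below compares a terrestrial panel against one in a near-continuously sunlit orbit; the Earth-side capacity factor is an illustrative assumption, while the 90% orbital figure comes from the article.

```python
# Rough energy-arbitrage sketch: annual energy harvested per watt of
# panel capacity on Earth vs. in a near-continuously sunlit orbit.
HOURS_PER_YEAR = 8760

earth_capacity_factor = 0.20  # assumed: night, weather, atmosphere, sun angle
orbit_capacity_factor = 0.90  # per the article: sunlit ~90% of the time

earth_kwh = earth_capacity_factor * HOURS_PER_YEAR / 1000  # kWh per watt-year
orbit_kwh = orbit_capacity_factor * HOURS_PER_YEAR / 1000

print(f"orbit advantage: {orbit_kwh / earth_kwh:.1f}x")  # orbit advantage: 4.5x
```

Duty cycle alone yields a multiple at the low end of the five-to-eight-times range cited above; the rest comes from the absence of atmospheric losses.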

Space-rated solar panels crafted from rare earth elements are durable but prohibitively expensive. Silicon-based solar panels are more affordable and increasingly common in space, utilized by Starlink and Amazon Kuiper, but they degrade much faster due to space radiation. This degradation will likely limit the operational lifespan of AI satellites to around five years, necessitating a faster return on investment.
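Lifespan matters because upfront cost is amortized over the satellite's useful years. A minimal sketch, with an assumed (hypothetical) all-in capex per kilowatt:

```python
# Why a short satellite lifespan raises the effective cost of power:
# the same upfront cost is amortized over fewer years. Capex is illustrative.
def annual_cost_per_kw(capex_per_kw, lifespan_years):
    return capex_per_kw / lifespan_years

capex = 30_000  # assumed $/kW to build, launch, and operate

print(annual_cost_per_kw(capex, lifespan_years=5))   # 6000.0
print(annual_cost_per_kw(capex, lifespan_years=15))  # 2000.0
```

Halving the lifespan doubles the annualized cost, which is why a five-year radiation-limited lifespan demands a correspondingly faster payback.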

Nevertheless, some analysts consider this limited lifespan to be less of a concern, given the rapid pace of chip generation advancements. “After five or six years, the dollars per kilowatt hour doesn’t produce a return, and that’s because they’re not state of the art,” Philip Johnston, CEO of Starcloud, told TechCrunch.

Danny Field, an executive at Solestial, a startup developing space-rated silicon solar panels, notes that the industry views orbital data centers as a significant growth driver. He is currently in discussions with several companies regarding potential data center projects and believes that “any player who is big enough to dream is at least thinking about it.” As a seasoned spacecraft design engineer, however, he remains acutely aware of the inherent challenges within these proposed models.

“You can always extrapolate physics out to a bigger size,” Field remarked. “I’m excited to see how some of these companies get to a point where the economics make sense and the business case closes.”

A crucial unanswered question regarding these data centers is their intended application: Will they be general-purpose, optimized for inference, or for training? Based on current use cases, they may not be entirely interchangeable with terrestrial data centers.

A significant hurdle for training new models is the coherent operation of thousands of GPUs simultaneously. Most model training is not distributed but conducted within individual data centers. While hyperscalers are working to decentralize this process to enhance model power, it remains an unachieved goal. Similarly, training in space will demand seamless coherence among GPUs distributed across multiple satellites.

Google’s Project Suncatcher team highlights that the company’s terrestrial data centers connect their TPU networks with throughputs reaching hundreds of gigabits per second. In contrast, the fastest commercially available inter-satellite communication links today, which employ lasers, can only achieve approximately 100 Gbps.

This disparity has led to an innovative architectural proposal for Suncatcher: it involves flying 81 satellites in a tightly controlled formation, close enough to utilize the same type of transceivers found in terrestrial data centers. This approach, however, introduces its own set of challenges, particularly the autonomous navigation required to maintain each spacecraft’s precise station, even when maneuvers are necessary to avoid orbital debris or other spacecraft.

Despite this, the Google study includes a cautionary note: inference tasks can tolerate the orbital radiation environment, but further research is needed to fully comprehend the potential impact of bit-flips and other errors on more demanding training workloads.

Inference tasks do not require thousands of GPUs operating in perfect synchronicity. Such operations can be performed with dozens of GPUs, potentially on a single satellite, representing a kind of minimum viable product and a probable starting point for the orbital data center industry.

“Training is not the ideal thing to do in space,” Johnston asserted. “I think almost all inference workloads will be done in space,” envisioning applications ranging from customer service voice agents to ChatGPT queries being processed in orbit. He further noted that his company’s inaugural AI satellite is already generating revenue by performing inference tasks in orbit.

While specific details are scarce even in SpaceX’s FCC filing, the company’s orbital data center constellation appears to anticipate approximately 100 kilowatts of compute power per ton, roughly double the power of current Starlink satellites. These spacecraft are designed to operate interconnectedly, utilizing the Starlink network for information sharing; the filing claims that Starlink’s laser links can achieve petabit-level throughput.

For SpaceX, the recent acquisition of xAI, which is concurrently developing its own terrestrial data centers, strategically positions the company to explore both ground-based and orbital data center solutions, allowing it to assess which supply chain adapts more rapidly and effectively.

This flexibility underscores the advantage of fungible compute, assuming the operational challenges can be overcome. “A FLOP is a FLOP, it doesn’t matter where it lives,” McCalip observed. “[SpaceX] can just scale until [it] hits permitting or capex bottlenecks on the ground, and then fall back to [their] space deployments.”
