SpaceX’s AI compute rental agreement with Anthropic represents a seismic shift in how rival AI companies access infrastructure. On May 5, 2026, Elon Musk announced via X that SpaceX would rent full access to Colossus 1’s compute capacity—220,000 Nvidia GPUs and 300 megawatts of power—to Anthropic, an AI firm Musk had previously criticized as “misanthropic.” This deal matters because it exposes the brutal scarcity of AI compute and suggests that even bitter industry rivalries bend when silicon becomes the bottleneck.
Key Takeaways
- SpaceX rents all available Colossus 1 compute (220,000 H100/H200 GPUs, 300MW) to Anthropic under a multi-year deal.
- Colossus 1, located in Memphis, Tennessee, was built in 122 days—a record for supercomputer deployment speed.
- Musk personally vetted Anthropic’s leadership before approving the deal, remarking that “No one set off my evil detector.”
- SpaceX plans to expand Colossus infrastructure to 1 million GPUs and 1 gigawatt of power through 2027.
- Anthropic expressed interest in SpaceX’s orbital data center concepts, hinting at future satellite-based AI infrastructure.
Why Musk Reversed Course on Anthropic
Musk’s endorsement of Anthropic marks a dramatic reversal. He had previously derided the company as “misanthropic,” yet now states he is “comfortable proceeding” after spending time with its leadership. The shift signals that Musk’s primary concern is not ideological alignment but rather ensuring his compute infrastructure serves the broader AI ecosystem and generates revenue. This pragmatism matters: it suggests Musk views Colossus 1 as both a competitive asset for xAI and a commercial platform for external clients.
Dario Amodei, Anthropic’s CEO, confirmed the partnership on X, calling it “a major step” for scaling Claude while maintaining safety priorities. The deal is not trivial: Anthropic gains access to all available capacity from Colossus 1, immediately boosting Claude’s training and inference capabilities. For context, this compute pool more than doubles Microsoft’s 100,000-GPU clusters for OpenAI, positioning Anthropic as a genuine infrastructure peer to its rivals.
SpaceX AI compute rental and the global compute shortage
The SpaceX AI compute rental deal exists because demand for GPU capacity far exceeds supply. Anthropic previously relied on AWS Trainium chips and Google Cloud TPUs, but those alternatives cannot match the scale or performance that Colossus 1 offers. By securing exclusive access to SpaceX’s Memphis facility, Anthropic sidesteps the months-long waitlists plaguing other AI labs. This is not altruism—it is desperation meeting opportunity. SpaceX gains recurring revenue and validates Colossus 1’s commercial viability; Anthropic gains the horsepower to compete with OpenAI and Google at scale.
The broader context sharpens the significance: Colossus 1 became fully operational in April 2026, and SpaceX is already planning expansion to 1 million GPUs and 1 gigawatt of power across multiple phases through 2027. If those targets materialize, SpaceX will control one of the planet’s largest AI compute clusters. Musk’s willingness to rent to Anthropic suggests he sees a market opportunity in commercializing excess capacity—or that xAI’s own demands do not saturate Colossus 1’s full capability.
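As a back-of-the-envelope check on those expansion figures, the current and planned power budgets per GPU can be compared directly. This is illustrative arithmetic only; the numbers come from this article, not from SpaceX, and facility-level power includes cooling and networking overhead, not just GPU draw:

```python
# Figures as cited in this article: Colossus 1 today (220,000 GPUs, 300 MW)
# versus the planned 2027 build-out (1,000,000 GPUs, 1 GW).
current_gpus, current_mw = 220_000, 300
planned_gpus, planned_mw = 1_000_000, 1_000  # 1 GW = 1,000 MW

# Facility-level power budget per GPU, in kilowatts
current_kw_per_gpu = current_mw * 1_000 / current_gpus   # ~1.36 kW
planned_kw_per_gpu = planned_mw * 1_000 / planned_gpus   # 1.00 kW

print(f"Current: {current_kw_per_gpu:.2f} kW per GPU")
print(f"Planned: {planned_kw_per_gpu:.2f} kW per GPU")
print(f"GPU count scales {planned_gpus / current_gpus:.1f}x, "
      f"power only {planned_mw / current_mw:.1f}x")
```

Taken at face value, the planned phase would grow GPU count about 4.5x while power grows only about 3.3x, implying either more efficient hardware per GPU or tighter facility overhead than today’s figures suggest.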
Orbital data centers and the future of AI infrastructure
Perhaps the most speculative element of this deal is Anthropic’s stated interest in SpaceX’s orbital data center concepts. Musk has long discussed placing data centers in orbit, pairing them with Starlink satellite infrastructure to escape terrestrial power and cooling constraints. Anthropic’s curiosity about this technology hints at a potential future partnership beyond terrestrial GPU rental. Neither company has disclosed formal proposals, but the conversation signals that AI infrastructure is no longer bound to earth-based facilities. If orbital cooling and satellite-based power delivery prove viable, the next generation of supercomputers could operate in ways that terrestrial physics currently forbids.
How this reshapes AI industry alliances
Musk’s lawsuit against OpenAI and his public feud with Sam Altman created a narrative of irreconcilable industry division. This Anthropic deal shatters that narrative. It reveals that business logic trumps personal grievance when compute scarcity becomes acute. Musk is not endorsing Anthropic’s values or governance—he is recognizing that renting idle compute capacity to a well-funded competitor is more profitable than leaving it empty. This pragmatism is healthy for the industry: it decouples infrastructure provision from ideological purity and allows AI labs to compete on capability rather than compute access.
For readers outside the AI industry, the takeaway is simpler: the companies building large language models are so resource-starved that they will rent from rivals, negotiate with former enemies, and explore exotic solutions like orbital data centers. This arms race is not slowing down. It is accelerating.
Did SpaceX build Colossus 1 in record time?
Yes. Colossus 1 was constructed in 122 days, setting a record for the fastest supercomputer deployment. The Memphis, Tennessee facility houses over 220,000 Nvidia H100 and H200 GPUs, with a power capacity of 300 megawatts. This speed is remarkable because traditional data center buildouts take 18-24 months; SpaceX compressed that timeline roughly four- to six-fold.
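A quick calculation makes the comparison concrete (figures as cited above; months approximated at 30 days for simplicity):

```python
# Rough check of the build-time comparison: 122 days for Colossus 1
# versus a typical 18-24 month data center buildout.
colossus_days = 122
typical_low_days = 18 * 30    # 540 days
typical_high_days = 24 * 30   # 720 days

speedup_low = typical_low_days / colossus_days    # ~4.4x
speedup_high = typical_high_days / colossus_days  # ~5.9x
print(f"Colossus 1 came online roughly {speedup_low:.1f}x to "
      f"{speedup_high:.1f}x faster than a conventional buildout")
```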
Will Anthropic’s orbital data center plans actually happen?
Anthropic has expressed interest in SpaceX’s orbital concepts, but no formal proposals or timelines have been disclosed. The technology remains experimental: orbital cooling and satellite-based power delivery are theoretically plausible but unproven at scale. Expect research and prototyping over the next 2-3 years before any orbital facility becomes operational.
How does Colossus 1 compare to other AI supercomputers?
Colossus 1’s 220,000 GPUs exceed Microsoft’s clusters for OpenAI (100,000 GPUs) and rival Oracle’s planned 131,072-GPU setup. The Memphis facility is now among the largest AI compute clusters in operation. SpaceX’s announced expansion to 1 million GPUs would position it as a dominant global infrastructure provider, though those phases are still in development.
The SpaceX-Anthropic deal is not just a business transaction—it is evidence that AI infrastructure is becoming a commodity market. Musk’s willingness to rent compute to a rival signals maturity in the industry: the biggest constraint is no longer ideology or competition, but silicon and power. Expect more such deals as global compute demand continues to outpace supply. The real race is not between AI companies anymore—it is between whoever can build the most supercomputers fastest.
This article was written with AI assistance and editorially reviewed.
Source: Tom's Hardware