Big Tech AI capex is heading for $725 billion in 2026, a staggering 77% increase from the $410 billion these companies spent in 2025. Google, Microsoft, Meta, and Amazon are doubling down on AI infrastructure despite mounting skepticism about whether this spending spree makes economic sense.
Key Takeaways
- Big Tech AI capex is projected to reach $725 billion in 2026, up 77% from $410 billion in 2025.
- Hyperscaler capex guidance for 2026 averages $610 billion, roughly 3x 2024 spending levels.
- Analyst Dylan Passantino from Bernstein calls the bear thesis on Big Tech spending “garbage”.
- Recent announcements from Microsoft, Meta, and Google confirm escalating AI infrastructure commitments.
- Spending surge driven by GPU procurement, data center buildouts, and networking infrastructure.
The sheer scale of this investment reflects an industry-wide conviction that AI dominance requires massive computational infrastructure. No major player can afford to fall behind in the race to build the data centers, GPU clusters, and networking systems that power large language models and AI services.
Why the Bear Thesis on Big Tech Spending Keeps Failing
Skeptics have long questioned whether hyperscalers are throwing money at AI infrastructure without sufficient return on investment. That argument is crumbling. Analyst Dylan Passantino from Bernstein dismissed the bear thesis as “garbage,” pointing to sustained momentum in AI-driven capex across the industry. The evidence is hard to ignore: Microsoft, Meta, and Google all announced fresh billions in AI spending commitments recently, signaling confidence rather than hesitation.
The bear case rests on the assumption that these companies will eventually hit diminishing returns—that at some point, the cost of additional computing power will exceed the revenue it generates. But that inflection point appears nowhere in sight. Instead, each company is racing to secure enough GPU capacity and data center space to meet exploding demand for AI services, from enterprise customers and consumers alike. The competitive pressure is relentless. If one hyperscaler pulls back on capex, rivals gain ground in model training, inference speed, and feature richness.
What makes this different from past tech bubbles is tangible product adoption. AI features are shipping in consumer products, enterprise software, and cloud services. Companies are not just spending on theoretical capabilities—they are spending to support real, revenue-generating AI products already in market or launching soon.
The Scale of Big Tech AI Capex Compared to Historical Norms
To grasp the magnitude of this shift, compare 2026 projections to 2024 levels. Corporate guidance puts hyperscaler capex for 2026 at roughly $610 billion on average, about three times what these companies spent just two years earlier. That is not incremental growth—that is a structural transformation of how Big Tech allocates capital.
Google, Microsoft, Meta, and Amazon collectively spent $410 billion on capex in 2025 alone, and they are planning to exceed that by 77% in 2026. To put that in perspective, the entire capex budgets of most Fortune 500 companies pale in comparison to what a single hyperscaler now spends on AI infrastructure in a single year.
This spending trajectory shows no signs of slowing. Corporate guidance from these companies continues to point upward, suggesting that 2026 will not be a peak but rather another rung on an ascending ladder. The question is no longer whether Big Tech will spend heavily on AI infrastructure—it is how long they can sustain this pace and whether the returns will justify the investment.
What Big Tech AI Capex Means for the Broader Tech Industry
The implications ripple far beyond the four companies driving this spending. GPU manufacturers, networking equipment makers, data center operators, and power infrastructure providers are all benefiting from this capex surge. The spending creates a flywheel: more compute capacity enables better AI models, which drive more customer adoption, which justifies further capex to keep up with demand.
For startups and smaller tech companies, the message is clear. The barrier to entry in AI is not just talent or ideas—it is raw computational power. Hyperscalers are investing so heavily in infrastructure that they can offer AI services at scales and speeds that smaller competitors cannot match. This consolidates power in the hands of the biggest players, even as it accelerates AI innovation across industries.
The bear thesis may be “garbage,” as Passantino argues, but that does not mean this spending is risk-free. Execution matters enormously. These companies must successfully deploy capital at scale, train models efficiently, and convert infrastructure spending into profitable services. A misstep—a failed data center, a poorly optimized model, a service that fails to gain traction—could waste billions. But so far, the track record suggests these companies know what they are doing.
Will Big Tech AI Capex Ever Slow Down?
Not in the near term. Corporate guidance from Google, Microsoft, Meta, and Amazon all points to continued capex growth through at least 2027. The competitive dynamics are too intense, the market opportunity too large, and the technical challenges too demanding for any major player to reduce spending without risking obsolescence.
The real question is whether this pace becomes unsustainable. At some point, these companies will exhaust their capacity to deploy capital productively, and additional spending will start eroding returns. But that point appears distant. As long as AI services are driving revenue growth and customer demand remains strong, Big Tech will keep spending.
Is the $725 billion Big Tech AI capex figure accurate?
The $725 billion projection represents a high-end analyst estimate. Corporate guidance from the companies themselves averages closer to $610 billion for 2026. Both figures point in the same direction—massive spending—but the exact number depends on how aggressively each company executes its capex plans and whether they announce additional spending increases before year-end.
Why are hyperscalers spending so much on AI infrastructure?
Demand for AI services is growing faster than existing infrastructure can support. Training large language models, running inference at scale, and deploying AI features across consumer and enterprise products all require immense computational capacity. Hyperscalers are spending to meet this demand, secure competitive advantage, and ensure they have enough GPU and data center capacity to support their AI roadmaps for the next 2-3 years.
What happens if Big Tech's AI capex does not generate returns?
If AI services fail to drive meaningful revenue growth, or if these companies overshoot demand and end up with stranded assets, shareholder pressure could force spending cuts. But this scenario seems unlikely given current adoption trends. More probable is that Big Tech continues spending heavily through 2027-2028, then gradually moderates as the infrastructure base matures and becomes sufficient to support stable AI service operations.
The bear thesis on Big Tech spending is not garbage because the companies are infallible—it is garbage because it misreads the competitive and market dynamics driving this capex surge. As long as AI remains a strategic priority and customer demand keeps climbing, expect Big Tech AI capex to stay elevated. The real story is not whether these companies will spend, but how efficiently they will deploy capital and whether the returns justify the historic scale of this infrastructure bet.
This article was written with AI assistance and editorially reviewed.
Source: Tom's Hardware