AI memory shortage could persist until 2027 and beyond

By Craig Nash
AI-powered tech writer covering artificial intelligence, chips, and computing.

The AI memory shortage is now officially a multi-year crisis. Samsung and SK Hynix, which together control roughly half the global memory chip market, have warned that supply constraints could persist until 2027 and potentially extend to 2030, as artificial intelligence data centers consume HBM (high-bandwidth memory) faster than manufacturers can produce it. The crunch has already forced major customers to reserve capacity years in advance—a sign of desperation that reveals just how severe the imbalance between demand and supply has become.

Key Takeaways

  • Samsung’s order fulfillment rate hit a record low in Q1 as customers pre-booked capacity through 2027.
  • SK Hynix Q1 revenue surged 200% year-on-year, driven by HBM and DRAM demand exceeding supply.
  • SK Group Chairman warns AI memory shortage could extend until 2030 due to wafer constraints.
  • Global HBM production targets of 320,000 wafers monthly by 2025 trail demand by hundreds of thousands.
  • DRAM inventory has shrunk from 17 weeks to just 4 weeks, signaling tightening across the broader market.

Why the AI Memory Shortage Extends Beyond 2027

The AI memory shortage stems from a brutal mismatch between what data centers need and what chipmakers can deliver. SK Hynix forecasts 150,000 HBM wafers monthly by 2025; Samsung targets 170,000. Combined, that is roughly 320,000 wafers per month from the two market leaders, with industry-wide HBM wafer output at approximately 350,000 monthly. Demand still far exceeds those figures: new AI projects such as large-scale foundation model training require far more capacity than current output can deliver. The bottleneck is not just fabrication; it is physics. Building new fabs takes years, and even then, scaling advanced processes encounters manufacturing challenges that cannot be rushed.
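The supply math above can be sketched in a few lines. The supplier targets are the article's figures; the demand number is a placeholder for illustration only, since the article states only that demand exceeds supply by hundreds of thousands of wafers:

```python
# Monthly HBM wafer targets cited in the article (wafers/month, 2025)
sk_hynix_target = 150_000
samsung_target = 170_000
combined_supply = sk_hynix_target + samsung_target  # 320,000

# Placeholder demand figure for illustration only; the article does not
# give an exact number, only that the gap runs to hundreds of thousands.
assumed_demand = 500_000

shortfall = assumed_demand - combined_supply
print(f"Combined supply target: {combined_supply:,} wafers/month")
print(f"Shortfall at assumed demand: {shortfall:,} wafers/month")
```

Even under this conservative placeholder, the two leaders' combined targets cover less than two-thirds of demand.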

SK Group Chairman Chey Tae-won explicitly warned that the AI memory shortage could extend until 2030, citing wafer supply constraints and the physical difficulty of rapidly scaling production. This is not optimistic talk—it is a candid assessment from a company investing billions to expand capacity. Samsung’s memory business head Kim Jae-june stated plainly that supply growth in 2026 and 2027 would be limited, confirming that near-term relief is unlikely.

Customers Are Booking Memory Years Ahead

The desperation is visible in purchasing behavior. Samsung’s order fulfillment rate plunged to a record low in Q1, with customers pre-booking memory capacity for 2027 because supply falls far short of demand. HBM4, the latest generation, is already fully booked. This is not normal. Typically, chipmakers carry inventory and customers order on shorter timelines. When buyers lock in supply three years ahead, it signals they expect a sustained shortage and are willing to commit capital early to guarantee access.

SK Hynix Q1 revenue soared nearly 200% year-on-year to 52.6 trillion won, driven by explosive demand for HBM, DRAM, and enterprise SSDs. The price cycle accompanying this surge is expected to last longer than past industry cycles, according to SK Hynix CFO Kim Woo-hyun, who noted that demand from major customers was increasing across all memory segments while suppliers struggled to increase output. Translation: prices will stay elevated for years.

The Packaging Bottleneck Compounds the Crisis

Manufacturing HBM is not just about etching silicon. HBM stacks require through-silicon vias, precision bonders, and specialized substrates—components with their own supply chains and their own constraints. Packaging equipment suppliers like HANMI face 12-month backlogs on precision bonders, the machines that bond memory dies together. TSMC’s CoWoS packaging capacity, critical for AI chip assembly, is increasing by 25% in late 2026, but demand is growing faster than capacity can expand. Meanwhile, EUV lithography tool shortages at ASML and export controls are delaying production ramps across the industry.

These secondary constraints mean that even if foundries and memory makers increase wafer output, they hit walls elsewhere in the supply chain. A fab can produce wafers, but if packaging cannot keep pace, finished chips never reach customers. This layered constraint is why Samsung and SK Hynix are both planning new fabs with timelines stretching into 2027 and 2028. SK Hynix is building a mega-fab in Cheongju dedicated to HBM, with significant capacity additions expected in 2026, and accelerating its Yongin plant to completion by February 2027. The company is also constructing a massive P&T7 facility (the size of 32 soccer fields) dedicated entirely to HBM, with all production lines commencing by 2028. Samsung is bringing its fourth Pyeongtaek fab online in 2026 with large-scale output beginning in 2027, though it is splitting that facility with logic chip production. A fifth advanced HBM fab is under construction, but operations are unlikely before 2028.

The Ripple Effect: Consumer Markets Starve

The AI memory shortage is not isolated to data centers. An oligopoly of three companies—Samsung, SK Hynix, and Micron—controls roughly 93% of the global memory chip market. All three are prioritizing enterprise data center contracts for higher margins over consumer products. This means laptops, smartphones, and gaming PCs are being deprioritized. DRAM inventory has shrunk from 17 weeks to just 4 weeks, a dramatic contraction that signals tightening across the broader market. Prices for standard DRAM have climbed 60%, with HBM stacks rising even more. Consumers will feel this squeeze in higher laptop prices, delayed smartphone launches, and constrained gaming hardware upgrades.

Micron is expanding HBM production, with Idaho and Singapore facilities starting in 2027, a new Hiroshima fab beginning mass production in 2028, and Taiwan capacity from Powerchip ramping in the second half of 2027. Yet even with three major manufacturers racing to expand, the timeline and scale of these additions lag the pace of AI demand growth. SEMI forecasts global wafer capacity increasing by 7% in 2025, but leading-edge capacity—the most advanced nodes where HBM and latest logic are made—remains severely constrained.

When Will the Shortage End?

The most optimistic scenario has the AI memory shortage easing in 2027 as new fabs ramp production. The more realistic scenario, endorsed by SK Group’s chairman, extends shortages to 2030. By then, the fabs under construction now will be operational, but demand may have grown even further. The semiconductor industry has a history of boom-bust cycles, but the AI boom is different—it is driven by fundamental architectural needs (HBM for bandwidth) rather than incremental performance gains. As long as AI model training and inference require massive memory bandwidth, demand will remain intense.

What happens to consumer memory prices during the shortage?

Consumer DRAM prices will remain elevated as long as the AI memory shortage persists. Manufacturers prioritize enterprise data center contracts over consumer products because margins are higher and volumes are committed years in advance. Consumers should expect laptop and smartphone RAM pricing to stay above historical norms through at least 2026, with potential relief only if new fab capacity comes online faster than expected.

Can Micron catch up to Samsung and SK Hynix on HBM?

Micron is expanding aggressively, but it is starting from a smaller HBM footprint than its competitors. New facilities in Idaho, Singapore, Hiroshima, and Taiwan will increase output, but the timelines—2027 to 2028 for meaningful volume—mean Micron will remain a third player in the HBM market through this shortage cycle. The oligopoly structure will persist.

Why is HBM in such high demand?

HBM is essential for AI training and inference because it provides the bandwidth data centers need to move massive amounts of data between processors and memory quickly. Standard DRAM cannot keep up with AI workloads. As AI models grow larger and more complex, HBM demand accelerates, outpacing the industry’s ability to manufacture it at scale.
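A back-of-envelope comparison shows the scale of the gap. The interface widths and data rates below are typical published figures for HBM3 and DDR5-6400, not numbers from the article:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfers per second)."""
    return bus_width_bits / 8 * transfer_rate_gt_s

ddr5_channel = peak_bandwidth_gb_s(64, 6.4)    # one DDR5-6400 channel
hbm3_stack = peak_bandwidth_gb_s(1024, 6.4)    # one HBM3 stack, 1,024-bit interface

print(f"DDR5 channel: {ddr5_channel:.1f} GB/s; HBM3 stack: {hbm3_stack:.1f} GB/s")
```

At the same per-pin data rate, the wide stacked interface gives a single HBM3 stack roughly sixteen times the bandwidth of a DDR5 channel, which is why AI accelerators cannot simply substitute standard DRAM.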

The AI memory shortage is not a temporary blip. It is a structural crisis that will reshape pricing, supply chains, and product roadmaps through at least 2027 and potentially beyond. Customers are locking in capacity years ahead because they know relief is not coming soon. For consumers, this means higher prices on memory-dependent devices and delayed product launches. For the semiconductor industry, it means record profits now and a race to build capacity fast enough to meet an insatiable appetite for AI infrastructure.

This article was written with AI assistance and editorially reviewed.

Source: Tom's Hardware
