ZAM memory refers to Z-Angle Memory, an Intel-backed high-bandwidth memory architecture designed to challenge HBM in AI accelerators. The technology uses nine vertically stacked layers and reportedly delivers bandwidth approaching that of HBM4, the memory standard powering Nvidia’s Vera Rubin AI platform, with prototypes expected by early 2028 and commercial products targeted for 2029.
Key Takeaways
- ZAM memory stacks one logic layer plus eight DRAM layers using hybrid bonding for 3D chip placement.
- Each layer contains approximately 13,700 TSVs, enabling dense vertical interconnects across the stack.
- Bandwidth is reported at 0.25 Tb/s per square millimetre, reaching around 5.3 TB/s for a 10 GB module with a 171 mm² die area.
- Capacity sits at roughly 1.125 GB per DRAM layer, with the full stack delivering up to 9–10 GB per memory module.
- Prototypes are planned for early 2028, with commercial availability targeted for 2029.
What Is ZAM Memory and Why Does It Matter for AI?
ZAM memory is Intel’s proposed answer to HBM’s stranglehold on AI accelerator hardware. The architecture stacks a single logic layer at the base with eight DRAM layers above it, all connected through hybrid bonding, a chip-stacking technique that enables extremely tight layer-to-layer connections. For AI data centres burning through memory bandwidth at unprecedented rates, a credible HBM alternative is not just interesting; it is strategically essential.
The numbers behind ZAM are striking. Each layer in the stack uses approximately 13,700 through-silicon vias (TSVs) — the vertical electrical connections that carry data between layers. That density is what enables the bandwidth figures being reported: 0.25 Tb/s per square millimetre, translating to roughly 5.3 TB/s for a full 10 GB module with a 171 mm² die area. That puts ZAM in serious contention with HBM4, not as a theoretical curiosity but as a plausible production architecture.
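Those headline figures are at least internally consistent. The quick calculation below is a minimal sketch that multiplies the reported areal bandwidth density by the reported die area; the assumption that the density applies uniformly across the die is ours, not a claim from the source.

```python
# Sanity check of ZAM's reported bandwidth figures (reported, not measured).
# Assumption: the 0.25 Tb/s per mm^2 density applies uniformly across the die.

density_tbps_per_mm2 = 0.25  # reported areal bandwidth density (terabits/s per mm^2)
die_area_mm2 = 171           # reported die area

total_terabits = density_tbps_per_mm2 * die_area_mm2  # aggregate bandwidth in Tb/s
total_terabytes = total_terabits / 8                  # convert bits to bytes

print(f"{total_terabits:.2f} Tb/s = {total_terabytes:.2f} TB/s")
# Output: 42.75 Tb/s = 5.34 TB/s, in line with the reported ~5.3 TB/s
```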
The work is reportedly set to be presented at VLSI 2026 in June, which would give the broader semiconductor industry its first detailed look at the underlying engineering. That presentation will be the real test of whether ZAM’s claims hold up under expert scrutiny.
How Does ZAM Memory Compare to HBM4?
ZAM memory’s bandwidth targets place it close to, though not quite at, HBM4 levels, which is itself the benchmark for next-generation AI memory. HBM4 is the standard Intel and others are chasing, and the memory behind Nvidia’s Vera Rubin AI platform. ZAM’s reported 5.3 TB/s for a 10 GB module represents a serious challenge to that hierarchy, even if it doesn’t claim outright superiority.
The architectural difference is meaningful. HBM stacks are manufactured and sold by a small number of players — primarily SK Hynix and Samsung — giving those companies enormous pricing power over AI chip makers. ZAM, developed through an Intel-backed structure, represents a potential second supply chain for high-bandwidth memory. That’s not just a technical story; it’s a supply chain story that matters to every company building AI accelerators at scale.
Capacity is where the picture gets slightly murkier. The reported figure of roughly 1.125 GB per DRAM layer across eight layers works out to 9 GB per module, yet coverage cites both 9 GB and 10 GB totals. The discrepancy likely reflects different module configurations rather than contradictory claims, but it’s worth noting that ZAM’s per-module capacity is modest compared with the HBM stacks used in high-end AI training hardware, which reach far larger capacities. ZAM appears better positioned for AI inference workloads than for large-scale training runs.
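For what it’s worth, the per-layer figure lines up exactly with the lower of the two totals. A minimal sketch, assuming eight DRAM layers at the reported 1.125 GB each:

```python
# Capacity check from reported figures; the 10 GB variant would imply
# either more layers or higher per-layer density (configuration unknown).

gb_per_dram_layer = 1.125  # reported capacity per DRAM layer
dram_layers = 8            # reported DRAM layer count (plus one logic layer)

print(f"Stack capacity: {gb_per_dram_layer * dram_layers} GB")  # -> 9.0 GB
```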
When Will ZAM Memory Actually Ship?
ZAM memory is not close to shipping. Prototypes are planned for early 2028, with commercial products following in 2029 — a timeline that puts it squarely in the next hardware generation cycle for AI accelerators. That’s not a criticism; memory architecture at this complexity level takes years from research to production, and the 2028–2029 window is realistic for a technology that hasn’t yet had its first major public research presentation.
The implication for AI hardware buyers is clear: ZAM is not a purchasing decision for today. It’s a reason to watch the HBM supply landscape carefully over the next three years. If ZAM’s bandwidth claims hold up under scrutiny at the VLSI 2026 presentation, they could shift negotiating dynamics between AI chip makers and HBM suppliers well before a single ZAM module ships commercially. Announced competition has a way of affecting pricing even before products arrive.
Is ZAM memory the same as HB3DM?
HB3DM appears to be a related or earlier designation for the same technology family, referring to the first-generation design that uses nine stacked layers with hybrid bonding. ZAM — Z-Angle Memory — is the broader architectural name. The naming across different reports is inconsistent, which likely reflects early-stage research terminology rather than distinct competing products.
Who is developing ZAM memory?
ZAM memory is being developed with Intel backing, reportedly through a subsidiary structure also involving SoftBank. The exact corporate naming of the subsidiary varies across reports, so treat specific subsidiary names with caution until officially confirmed. The technology is intended for AI accelerators and AI data centre applications.
What AI platforms could use ZAM memory?
ZAM is positioned as an alternative to HBM in AI accelerator hardware — the same category of memory used in Nvidia’s Vera Rubin AI platform via HBM4. Given its capacity per module, ZAM looks better suited to inference-focused AI hardware than to the massive training clusters that require the highest possible memory capacity per chip.
ZAM memory won’t reshape AI hardware overnight — nothing in semiconductor development ever does. But the combination of near-HBM4 bandwidth, a nine-layer hybrid bonding architecture, and a credible 2029 commercial target makes it the most technically substantive HBM challenge to emerge in years. Watch the VLSI 2026 presentation closely. That’s where the real story begins.
Edited by the All Things Geek team.
Source: TechRadar