NEO Semiconductor’s 3D X-DRAM targets HBM replacement with proof-of-concept win

By Craig Nash
AI-powered tech writer covering artificial intelligence, chips, and computing.

NEO Semiconductor’s 3D X-DRAM memory technology has cleared a critical milestone: proof-of-concept validation completed on April 23, 2026, demonstrating that the architecture can actually work in silicon. The San Jose-based company is targeting nothing less than the replacement of both HBM (High Bandwidth Memory) and conventional DRAM in AI and data-centric systems—a market currently dominated by expensive, bandwidth-constrained alternatives that struggle to keep pace with GPU and accelerator demand.

Key Takeaways

  • 3D X-DRAM proof-of-concept achieved 15× better data retention than JEDEC standard (1+ second vs. 64 ms at 85°C)
  • Read/write latency under 10 nanoseconds; endurance exceeds 10¹⁴ cycles in POC testing
  • Leverages existing 3D NAND manufacturing infrastructure and processes, reducing development and production costs
  • Scalable to 512Gb density with IGZO-based variants; 10× density improvement over conventional DRAM
  • Developed with National Yang Ming Chiao Tung University in Taiwan; secured strategic funding to advance toward production

The proof-of-concept test chips were manufactured and tested at Taiwan’s National Institutes of Applied Research-TSRI, using the same mature 3D NAND infrastructure that already produces memory with over 300 layers in commercial production. This is the key insight: NEO did not need to invent new fabrication equipment or retrain entire foundries. The company’s innovation sits on top of an existing, proven manufacturing ecosystem.

Why 3D X-DRAM Memory Technology Matters for AI Right Now

Current HBM solutions command premium prices and face bandwidth bottlenecks that limit AI training and inference performance. Conventional DRAM, meanwhile, is hitting density and power efficiency walls. NEO’s 3D X-DRAM targets both problems simultaneously by stacking memory cells vertically—similar to how 3D NAND works—while maintaining compatibility with DRAM and HBM roadmaps. The company’s X-HBM variant claims up to 16× bandwidth for AI chips with direct GPU integration and up to 300 layers of stacking, which would reduce latency, cost, and power consumption compared to current solutions.

The POC results are specific and measurable. Read and write latency came in under 10 nanoseconds, fast enough for real-time AI workloads. Data retention exceeded 1 second at 85°C, roughly 15× the JEDEC industry standard of 64 milliseconds. Bit-line and word-line disturb immunity also held above 1 second, meaning the cells stay stable under repeated access. Endurance topped 10¹⁴ cycles, suggesting the technology can handle the punishing write patterns that in-memory computing demands.
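The headline 15× retention claim follows directly from the two figures reported above; a quick arithmetic sketch (using the 64 ms JEDEC baseline and the 1-second POC result from this article) confirms it:

```python
# Sanity-check the claimed ~15x retention improvement over the JEDEC baseline.
jedec_retention_ms = 64     # JEDEC standard retention at 85 C, per the article
poc_retention_ms = 1000     # ">1 second" demonstrated in the POC (lower bound)

improvement = poc_retention_ms / jedec_retention_ms
print(f"Retention improvement: {improvement:.1f}x")  # 15.6x, reported as "15x"
```

Since the POC figure is a lower bound ("exceeded 1 second"), the true ratio is at least 15.6×, which the article rounds to 15×.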

What makes this credible is the architecture itself. NEO’s 3D X-DRAM comes in three variants: 1T1C (one transistor, one capacitor) for high-density DRAM compatible with existing roadmaps; 3T0C (three transistor, zero capacitor) for AI and in-memory computing; and 1T0C (one transistor, zero capacitor) for floating-body designs targeting ultra-high density. Each variant trades off complexity for density or performance, giving system designers real choices rather than forcing a one-size-fits-all solution.
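The three variants and their stated targets can be summarized in a small lookup table. This is an illustrative sketch only: the variant names and target descriptions come from the article, while the selector helper is a hypothetical convenience, not a NEO API:

```python
# Illustrative summary of the three 3D X-DRAM cell variants described above.
# Variant data is from the article; the selector function is hypothetical.
VARIANTS = {
    "1T1C": {"transistors": 1, "capacitors": 1,
             "target": "high-density DRAM compatible with existing roadmaps"},
    "3T0C": {"transistors": 3, "capacitors": 0,
             "target": "AI and in-memory computing"},
    "1T0C": {"transistors": 1, "capacitors": 0,
             "target": "floating-body designs for ultra-high density"},
}

def variant_for(use_case: str) -> str:
    """Pick the cell variant whose stated target mentions the use case."""
    for name, info in VARIANTS.items():
        if use_case.lower() in info["target"].lower():
            return name
    raise ValueError(f"no variant targets {use_case!r}")

print(variant_for("in-memory computing"))  # 3T0C
```

The table makes the trade-off explicit: dropping the capacitor (the 0C variants) simplifies the cell at the cost of relying on transistor or floating-body charge storage.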

How 3D X-DRAM Memory Technology Compares to Current HBM

Today’s HBM stacks DRAM dies with a silicon interposer, connecting them through thousands of micro-bumps. It is expensive, power-hungry, and limited by the number of layers that can be reliably bonded. NEO’s approach inverts this: instead of stacking separate DRAM chips, it builds the entire memory structure as a single 3D array, much like 3D NAND, but with DRAM cell behavior. This eliminates the interposer, reduces interconnect losses, and allows for much higher layer counts—potentially 300+ layers versus the 12-16 layers typical of current HBM.

The cost advantage is substantial. Because 3D X-DRAM uses existing 3D NAND fabrication equipment, materials, and processes, NEO avoids the massive capital expenditure required to build new fabs or retrofit existing lines. Competitors like SK Hynix and Samsung have invested billions in HBM infrastructure; NEO is leveraging infrastructure that already exists and is already depreciated. For AI chip makers facing margin pressure, that cost advantage could be decisive.

Density is another differentiator. NEO’s IGZO-based 3D X-DRAM variants achieve up to 512Gb density with 450-second retention and ultra-low power in TCAD simulations, representing a 10× density improvement over conventional DRAM. While these numbers come from simulation rather than hardware POC, the fact that the company has validated the basic architecture in silicon lends them credibility. Scaling from the POC to production densities is an engineering challenge, not a physics problem.

What Comes Next for 3D X-DRAM Memory Technology

The proof-of-concept is a gating milestone, not a finish line. NEO has secured strategic funding to advance toward production, but the company has not announced a commercial launch date or manufacturing partner. The next steps are likely scaling the test chips to higher densities, validating performance across temperature and voltage ranges, and locking down a foundry partner willing to risk capacity on a new memory type.

The competitive window is open. AI demand for memory bandwidth is outpacing HBM supply, and every quarter of delay costs chip makers revenue. NEO’s 3D X-DRAM, if it reaches production within 18-24 months, could capture significant share in the AI accelerator market—especially for companies like Cerebras, Graphcore, or even custom silicon teams at cloud providers who have flexibility in their memory architecture choices.

The risk is execution. Moving from POC to production is a graveyard for semiconductor startups. Manufacturing yields, reliability under thermal cycling, and integration with GPU and accelerator interfaces are all unsolved problems. But the fact that NEO solved the most fundamental problem—proving the cells work—puts the company ahead of where most memory startups are when they announce funding rounds.

Can 3D X-DRAM Memory Technology Actually Replace HBM?

Not immediately, but potentially. HBM has a 15-year head start, and every major GPU and accelerator design is optimized for its pinout and bandwidth characteristics. Switching to 3D X-DRAM would require new chip designs, new software interfaces, and new validation cycles. That said, the next generation of AI chips—expected in 2027 and beyond—will be designed from scratch. NEO’s technology is positioned to compete for those designs.

What is the difference between 3D X-DRAM and conventional DRAM?

Conventional DRAM is a 2D array of cells arranged on a flat wafer. 3D X-DRAM stacks cells vertically, similar to 3D NAND, achieving much higher density and lower power per bit. The trade-off is complexity—vertical stacking introduces new failure modes and requires new testing and repair strategies. But the density and power gains justify the added complexity for AI workloads.

How long until 3D X-DRAM Memory Technology reaches production?

NEO has not announced a production timeline, but the company secured strategic funding after the POC, suggesting a 2-3 year path to commercial samples. Full production ramp would likely follow 12-18 months after that, putting real volume shipments in the 2028-2029 timeframe if execution goes smoothly. That is aggressive for a startup, but not impossible if a major foundry commits capacity and NEO’s yields are competitive from day one.

The 3D X-DRAM proof-of-concept validates the core technology, but manufacturing at scale, integrating with AI chips, and earning design wins are separate battles. NEO has cleared the first hurdle. Whether the company can convert that validation into revenue depends on execution speed and the willingness of chip makers to adopt a new memory architecture. For now, the company has proven that vertical DRAM stacking is not just theoretically possible—it actually works.

This article was written with AI assistance and editorially reviewed.

Source: Tom's Hardware
