The IBM and Arm mainframe collaboration represents a fundamental shift in how enterprises deploy AI and data-intensive workloads in regulated environments. The two companies announced a strategic partnership to develop dual-architecture hardware supporting both IBM- and Arm-based workloads on IBM Z mainframes and LinuxONE systems, enabling organizations to run modern cloud-native applications without migrating systems of record to public clouds.
Key Takeaways
- IBM and Arm are building virtualization tools to run Arm software directly on IBM Z mainframes and LinuxONE systems.
- The collaboration targets regulated industries and sovereign markets where data residency and security compliance prevent cloud migration.
- Arm processors now power close to half of all compute shipped to major hyperscalers in 2025, an efficiency trend this collaboration brings to the mainframe.
- Hardware includes the Telum II processor with 8 cores at 5.5GHz, 40% larger on-chip cache, and built-in AI accelerators for real-time inference.
- The partnership extends IBM’s 25-year mainframe Linux heritage into a new era of hybrid workload deployment.
Why Arm matters to mainframes now
Arm has become the dominant architecture for cloud efficiency. Close to half of all compute shipped to top hyperscalers in 2025 runs on Arm chips—AWS Graviton, Google Axion, and Microsoft Cobalt—each optimized for power efficiency and cloud-native workloads. Until now, mainframe environments have remained isolated from this efficiency trend. The IBM and Arm mainframe collaboration closes that gap by bringing Arm’s lightweight, power-efficient design philosophy into enterprise data centers where mission-critical systems cannot leave on-premises infrastructure.
This is not about replacing IBM’s own Telum processors. Instead, the collaboration creates a dual-architecture environment where Arm workloads run alongside traditional mainframe applications. Enterprises get flexibility—they can deploy containerized AI models, data processing pipelines, and cloud-native applications on the same physical infrastructure that runs their core transaction systems. For regulated industries like finance, healthcare, and government, this means modernizing without surrendering data residency or security controls.
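In practice, software running on such a dual-architecture system would see a different machine type on each side—Linux reports `s390x` on IBM Z and LinuxONE, and `aarch64` on Arm. A minimal sketch of architecture-aware dispatch follows; the workload-class names and the mapping itself are illustrative assumptions for this article, not an IBM or Arm specification:

```python
import platform

# Illustrative mapping from Linux machine identifiers to the workload
# class each side of a dual-architecture system might favor. The class
# names are assumptions for this sketch, not product behavior.
PREFERRED_WORKLOADS = {
    "s390x": "transaction-processing",  # IBM Z / LinuxONE cores
    "aarch64": "cloud-native-ai",       # Arm capacity
}

def preferred_workload(machine: str = "") -> str:
    """Return the workload class suited to the given machine type.

    Falls back to the current host's architecture when none is given.
    """
    machine = machine or platform.machine()
    return PREFERRED_WORKLOADS.get(machine, "general-purpose")
```

The same pattern—keying deployment decisions off the reported machine type—is how existing multi-architecture container tooling selects images today.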
The technical foundation: Telum II and beyond
IBM announced the Telum II processor at Hot Chips in August 2024; the chip is a critical foundation for this collaboration. The Telum II features 8 cores running at 5.5GHz with 360MB of on-chip cache—40% larger than its predecessor—and includes a built-in AI accelerator for real-time inference plus a data processing unit optimized for I/O-intensive workloads. This hardware architecture signals IBM’s intent to embed AI capabilities directly into mainframe infrastructure rather than offloading to separate systems.
Arm’s contribution centers on the Arm Agentic AI CPU, which orchestrates AI accelerators, manages memory and storage, schedules workloads, and optimizes data movement across the dual-architecture platform. The implementation may involve add-in cards with Arm CPUs hosting virtual machines that run Arm-native programs—Linux-on-Arm or Windows-on-Arm—similar to historical coprocessor designs that extended mainframe capabilities.
Who benefits: Sovereign and regulated markets
The IBM and Arm mainframe collaboration targets a specific market segment: enterprises operating in sovereign, air-gapped, or heavily regulated environments. These organizations face regulatory mandates requiring data to remain within national borders, operate disconnected from public cloud infrastructure, or manage systems too critical to trust to third-party cloud providers. Financial institutions, government agencies, and healthcare systems in this category have historically chosen between modernizing applications (by moving to cloud) or maintaining control (by staying on mainframes). This partnership offers a third path.
According to Rachita Rao, senior analyst at Everest Group, “This is a mainframe adjacency play. The intent is to extend IBM Z and LinuxONE environments by enabling Arm-compatible workloads to run closer to systems of record. While hyperscalers use Arm to lower their own internal power costs and pass savings to cloud-native tenants, IBM is targeting the sovereign and air-gapped market.” The focus areas include virtualization tools for Arm software, security and data residency compliance for regulated industries, and common technology layers to simplify software deployment across both architectures.
IBM’s mainframe Linux legacy meets modern architecture
This collaboration builds on IBM’s proven ability to integrate non-native workloads into mainframe environments. The company introduced the Integrated Facility for Linux in 2000, enabling Linux applications to run on its mainframes. LinuxONE, launched in 2015, extended this capability to dedicated Linux-only configurations. The IBM and Arm mainframe collaboration represents the next evolution of this strategy—not replacing Linux, but expanding the ecosystem to include Arm-native software, containerized applications, and AI workloads that thrive on Arm’s efficiency-focused architecture.
Mohamed Awad, executive vice president of the Cloud AI Business Unit at Arm, framed the partnership this way: “As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments. Our collaboration with IBM builds on this progress, extending the Arm ecosystem into mission-critical enterprise environments and giving organizations greater flexibility in how they deploy and scale these workloads.”
Market timing and strategic implications
The announcement arrives at a critical moment. Enterprises are scaling AI deployments, but many lack the infrastructure flexibility to experiment with new architectures without disrupting production systems. Public cloud migrations remain off-limits for regulated workloads. The IBM and Arm mainframe collaboration removes that constraint. Organizations can deploy AI models, analytics engines, and data processing applications on Arm-native software stacks running on mainframe-grade infrastructure, all without cloud migration.
Industry observers note this reflects deeper investment in long-term platform innovation than typical mainframe announcements. Analysts suggest the partnership signals a meaningful step toward a future where enterprises think differently about deploying and scaling modern workloads on mission-critical infrastructure. Hyperscalers proved Arm’s efficiency advantage in cloud environments; IBM and Arm are now proving it works in regulated enterprise settings where cloud is not an option.
What this means for enterprise infrastructure teams
For mainframe teams, the IBM and Arm mainframe collaboration opens new possibilities without requiring wholesale platform replacement. Existing IBM Z and LinuxONE investments remain valuable. New workloads can leverage Arm’s software ecosystem, AI-optimized designs, and power efficiency. Teams gain flexibility to choose the right architecture for each workload—traditional mainframe applications on IBM’s z/Architecture, cloud-native and AI workloads on Arm, all unified within a single data center footprint.
The collaboration also signals IBM’s commitment to mainframe relevance in the AI era. Rather than conceding modern workloads to cloud providers, IBM is embedding AI capabilities directly into mainframe infrastructure. This is not about nostalgia or legacy support—it is about acknowledging that regulated enterprises need powerful, flexible, secure computing platforms that cloud alone cannot provide.
How does the IBM and Arm mainframe collaboration compare to hyperscaler approaches?
Hyperscalers like AWS, Google, and Microsoft use Arm to reduce power costs and pass savings to cloud tenants. IBM and Arm are targeting a different use case: enterprises that cannot move to cloud. Hyperscalers optimize for scale and cost; IBM is optimizing for control, security, and regulatory compliance. The underlying Arm architecture is the same, but the deployment model and target market are fundamentally different.
When will the IBM and Arm mainframe collaboration be available?
IBM and Arm have not disclosed launch dates, availability timelines, or pricing for the dual-architecture systems. The companies have announced the strategic partnership and outlined the technical direction, but specific product availability remains unspecified. Enterprises interested in this capability should contact IBM directly for roadmap details and implementation timelines.
What workloads benefit most from dual-architecture mainframes?
AI inference, real-time analytics, data processing, and containerized applications are primary candidates for Arm deployment on mainframes. These workloads thrive on Arm’s efficiency and benefit from proximity to core transaction systems. Traditional mainframe applications—COBOL batch processing, IMS databases, transaction monitors—remain on IBM architecture. The dual-architecture approach lets teams optimize each workload for its architectural sweet spot.
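The workload split described above can be sketched as a simple placement rule. The workload names and the `aarch64`/`s390x` targets below are illustrative assumptions chosen for this sketch, not documented product behavior:

```python
# Toy placement rule mirroring the split described above: cloud-native
# and AI workloads target Arm capacity (aarch64), while classic
# mainframe workloads stay on z/Architecture (s390x). All workload
# names here are illustrative assumptions.
ARM_WORKLOADS = {"ai-inference", "real-time-analytics",
                 "data-processing", "containerized-app"}
Z_WORKLOADS = {"cobol-batch", "ims-database", "transaction-monitor"}

def place(workload: str) -> str:
    """Return the target architecture for a named workload class."""
    if workload in ARM_WORKLOADS:
        return "aarch64"
    if workload in Z_WORKLOADS:
        return "s390x"
    raise ValueError(f"no placement rule for workload: {workload}")
```

In a real deployment this decision would likely live in scheduler policy (for example, container node selectors keyed on architecture labels) rather than application code.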
The IBM and Arm mainframe collaboration represents a pragmatic response to enterprise computing reality: regulated organizations need modern AI and data capabilities, but cannot sacrifice control or move to public clouds. By extending Arm’s proven efficiency into mainframe environments, IBM is keeping the mainframe relevant for another generation of enterprise computing. The partnership signals that mainframes are not retreating into legacy support—they are evolving to meet the demands of AI-driven, security-conscious enterprises that cloud providers cannot serve.
This article was written with AI assistance and editorially reviewed.
Source: Tom's Hardware