IBM Debuts New Processor Technologies to Support Advanced AI


IBM is unveiling new processor innovations designed to accelerate AI initiatives across next-generation IBM Z mainframe systems. The new technologies, which include the IBM Telum II processor, an IO acceleration unit, and the IBM Spyre accelerator, aim to reduce latency, improve sustainability, and scale processing capacity to better accommodate the demands of AI and large language models (LLMs).

Key to moving AI projects, especially those using LLMs, from proof of concept to production is establishing a power-efficient, secure, and scalable architectural foundation to support them. Any AI initiative will require significant energy; according to Morgan Stanley research cited by IBM, the power demand of generative AI (GenAI) is projected to grow by 70% annually over the next few years, to the point that in 2027 GenAI could consume as much energy as the entire country of Spain did in 2022.

IBM’s latest processor innovations aim to support a variety of advanced, AI-driven use cases while striking the balance between high performance and power efficiency.

The IBM Telum II processor is a new chip engineered to power next-generation IBM Z systems with increased frequency and memory capacity. Delivering 40% more cache, an integrated AI accelerator core, and a coherently attached Data Processing Unit (DPU), Telum II provides low-latency, high-throughput, in-transaction AI inference that addresses the industry's complex transaction needs, according to IBM.

Part of Telum II is the IO acceleration unit, a new DPU integrated directly on the chip that accelerates complex IO protocols for networking and storage on the mainframe. This not only streamlines system operations but also improves the performance of key components.

The IBM Spyre accelerator adds AI compute capability that complements the Telum II processor and is designed to support ensemble methods of AI modeling, an approach that combines multiple machine learning (ML) or deep learning models with encoder LLMs. This multi-model strategy improves the accuracy and quality of results compared to using a single model, according to IBM.
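IBM does not detail how such ensembles are built; the following is a minimal, illustrative sketch of the general idea of combining scores from an encoder-LLM-based classifier with a traditional neural network via weighted soft voting. All function names and values here are hypothetical stand-ins, not IBM's implementation or software stack.

```python
# Illustrative sketch only: a soft-voting ensemble of a hypothetical
# encoder-LLM classifier and a traditional neural network.
import numpy as np

def llm_encoder_score(transaction: dict) -> np.ndarray:
    """Hypothetical stand-in: class probabilities from an encoder-LLM classifier."""
    return np.array([0.30, 0.70])  # [legitimate, fraud]

def neural_net_score(transaction: dict) -> np.ndarray:
    """Hypothetical stand-in: class probabilities from a traditional neural network."""
    return np.array([0.55, 0.45])  # [legitimate, fraud]

def ensemble_predict(transaction: dict, weights=(0.5, 0.5)) -> int:
    """Weighted soft vote over the two models' probability outputs."""
    probs = (weights[0] * llm_encoder_score(transaction)
             + weights[1] * neural_net_score(transaction))
    return int(np.argmax(probs))  # 0 = legitimate, 1 = fraud

if __name__ == "__main__":
    label = ensemble_predict({"amount": 9800, "merchant": "example"})
    print("fraud" if label == 1 else "legitimate")
```

In practice the weights and the individual models would be trained and validated on labeled data; the point of the ensemble is that the combined score can outperform either model alone.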

“Our robust, multi-generation roadmap positions us to remain ahead of the curve on technology trends, including escalating demands of AI,” said Tina Tarquinio, VP, product management, IBM Z and LinuxONE. “The Telum II processor and Spyre accelerator are built to deliver high-performance, secured, and more power efficient enterprise computing solutions. Years in development, these innovations will be introduced in our next generation IBM Z platform so clients can leverage LLMs and generative AI at scale.”

The Telum II processor and IBM Spyre accelerator are manufactured by Samsung Foundry, IBM's long-standing fabrication partner, on its high-performance, power-efficient 5nm process node. This advanced process technology helps power a range of AI use cases, including:

  • Insurance claims fraud detection with ensemble AI, increasing performance and accuracy by combining LLMs with traditional neural networks
  • Advanced anti-money laundering (AML) detection to drive compliance and reduce financial crime risk
  • AI assistants for a multitude of purposes, including application lifecycle acceleration, knowledge and expertise transfer, code explanation and transformation, and more

“We're really looking forward to seeing what our clients do when we put this new AI compute into the system,” said Chris Berry, distinguished engineer, microprocessor design at IBM. “We know that our clients have an appetite for more AI…[and] being able to enable the clients to do more with AI—whether it be inferencing or generative—is really exciting.”

To learn more about IBM’s latest processor technology, please visit https://www.ibm.com/us-en.
