IBM is releasing a new version of its mainframe hardware designed to accelerate the adoption of artificial intelligence.
On Monday, the company announced the IBM z17. The fully encrypted mainframe is powered by the IBM Telum II processor and is designed for more than 250 AI use cases, the company said, including AI agents and generative AI.
Mainframes may seem like outdated technology, but by one estimate, 71% of Fortune 500 companies still use them today. According to Market Research Future, the mainframe market was valued at an estimated $5.3 billion in 2024.
The z17 can process 450 billion inference operations per day, 50% more than its predecessor, the IBM z16, which was released in 2022 and ran on the original Telum processor. The system is designed to integrate fully with other open source hardware, software, and tools.
Tina Tarquinio, vice president of product management and design for IBM Z, told TechCrunch that this mainframe upgrade has been in the works for five years – long before the current AI hype that began with the release of OpenAI’s ChatGPT in November 2022.
IBM spent more than 2,000 hours on research and gathered feedback from more than 100 customers while creating the z17, Tarquinio said. She added that it is interesting to see, five years later, how closely that feedback lines up with where the market has ultimately headed.
“It was wild to realize that we were implementing an AI accelerator and then to see, especially in the second half of 2022, all the changes in the industry related to artificial intelligence,” Tarquinio told TechCrunch. “It’s been really exciting. I think the most important thing was that we don’t know what we don’t know about what’s coming next, right? So the possibilities are really unlimited in terms of what AI can help us do.”
The z17 is configured to adapt to wherever the AI market heads, according to Tarquinio. The mainframe will support 48 IBM Spyre AI accelerator chips at launch, with plans to increase that number to 96 within 12 months.
“We are purposefully building a safety margin,” Tarquinio said. “We’re deliberately working on the flexibility of AI. So when new models come out, [we] make sure that we build in space for bigger and bigger models – models that maybe need more local memory to communicate with each other. We anticipated this because we know that this is the approach that will change, right? New models will come and go.”
Tarquinio said one of the main features of the latest hardware – though she joked it was like being asked to pick a favorite child – is that the z17 is more power efficient than its predecessor, and likely than competitors as well.
“On the chip, we’ve increased the AI acceleration by a factor of seven and a half, but that’s five and a half times less power than you would need, for example, to multi-model on another type of accelerator or platform in the industry,” Tarquinio said.