Mainframes: A Workhorse for the Data Center and the Hybrid Cloud



Sky is the limit

The impending death of mainframes has been forecast for more than three decades, but these sturdy platforms still run 68% of enterprise production workloads, according to the SHARE user group. In this interview, Richard Baird, global CTO for Core Enterprise and zCloud at Kyndryl, explains why they are just as relevant in the hybrid cloud as in the data center.

Q: How critical is the mainframe to today’s data-centric business world?

It is as essential today as it has ever been, and it is continually evolving. For example, Telum, the central processor chip for the next generation of IBM Z and LinuxONE systems, has on-chip acceleration for artificial intelligence (AI) inferencing while a transaction is taking place. With the large volume of data that businesses need to process today, mainframes are a core part of their technology portfolio.

Q: What are some common misconceptions about the mainframe?

That the platform is dead and that it doesn’t support modern technologies or modern ways of doing work. It’s been written off before, like during the client/server phase in the early ’90s and when the internet and cloud computing arose. One of the remarkable things about the platform is its longevity, its ability to run applications created in the 1960s and ’70s. It just works and scales as the workload increases.

Another misconception is that the skills aren’t out there. At Kyndryl, the average age of our mainframe team is 30, and team members have continuous opportunities to build new skills on the technologies coming to IBM Z. IBM Z runs Node.js, Python, Git, Jenkins, Linux, containers, and other modern development and automation tools. And the number of transactions driven by mainframes continues to grow. If this were a platform in decline, you wouldn’t expect to see that.
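The point about modern tooling is easy to demonstrate in practice: Linux on IBM Z presents itself as ordinary Linux on the s390x architecture, so standard Python runs there unchanged. As a minimal sketch (not tied to any Kyndryl or IBM tooling), the same script can check at runtime whether it happens to be on a mainframe:

```python
import platform

def running_on_ibm_z() -> bool:
    """Return True when the interpreter is running on IBM Z hardware.

    Linux on IBM Z reports the machine architecture as "s390x", so a
    simple architecture check is enough; the rest of the program is
    identical on mainframe and commodity hardware alike.
    """
    return platform.machine() == "s390x"

# The same code runs everywhere; on an x86-64 laptop this prints False.
print(running_on_ibm_z())
```

Nothing about the code is mainframe-specific, which is precisely the point: the developer experience is the same Python, Git, and Jenkins workflow used anywhere else.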

Q: What factors make the modern mainframe the right choice for a hybrid cloud environment?

Workloads like those in the finance industry, which involve high volumes of transactions. It’s also a good choice where security is a priority: hardware encryption is available, and it’s a highly secure, resilient environment. You can scale up to a bigger machine without moving workloads (vertical scaling, or scaling up), rather than adding more servers as you must in the commodity hardware world (horizontal scaling, or scaling out). If you have high peak workloads, vertical scaling is often the best approach.

Q: What new options does Kyndryl present for customers seeking to modernize their portfolios?

I’ll use our alliance with Microsoft as an example. It opens up mainframe infrastructure to an integrated DevOps tooling suite. That’s not a new paradigm, but many customers struggle with how to do it. Rather than building applications with a traditional waterfall approach (which many customers think is the only way possible with the mainframe), you can use the same tools you use for agile development and even manage the source code repository in Git, with automated testing and deployment through Kyndryl services. A customer can take data on a Z platform and virtualize it into the Azure world, so it looks like Azure data but lives on a mature, trusted, and safe platform. Customers get the best of the cloud development tools, and developers don’t even realize they’re building applications on the mainframe.
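To make the automated-testing point concrete: the unit tests a team writes for distributed code look exactly the same when the deployment target is IBM Z. The sketch below uses hypothetical function names (`post_transaction` is illustrative, not part of any Kyndryl or Microsoft product); a CI pipeline such as Jenkins would run these pytest-style tests on every commit, whatever the target platform.

```python
# Hypothetical transaction-posting routine plus its unit tests.
# The same test file runs unchanged in any CI pipeline, whether the
# deploy target is commodity Linux or Linux on IBM Z.

def post_transaction(balance: int, amount: int) -> int:
    """Apply a credit (positive) or debit (negative) amount, in cents,
    to an account balance and return the new balance."""
    new_balance = balance + amount
    if new_balance < 0:
        raise ValueError("insufficient funds")
    return new_balance

def test_credit_increases_balance():
    assert post_transaction(10_000, 2_500) == 12_500

def test_overdraft_is_rejected():
    try:
        post_transaction(1_000, -5_000)
    except ValueError:
        pass  # expected: the debit exceeds the balance
    else:
        raise AssertionError("expected ValueError for overdraft")
```

The design choice here is the one the interview describes: test and deployment logic live in the repository alongside the code, so the mainframe is just another pipeline target rather than a separate waterfall process.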

Q: Are customer views of the future of the mainframe changing?

Some clients have moved off the mainframe, but others are driving more transactions through it. There’s a lot of logic in mainframe applications and a lot of data on those platforms. So it really is a question of “the right platform for the right workload.” Customers can decommission part of an application and move it to another platform. They’re realizing they have more options than ever, and the mainframe is part of their entire hybrid cloud strategy.
