Don’t write off the mainframe!

I’ve lost count of the number of headlines I’ve read over my adult life which publicise the failures of IT: “£1bn wasted as flagship IT project fails to deliver savings” or “Outrage as the taxpayer foots the bill due to incompetent IT staff and system inadequacies to the tune of $56M.”

Most articles go on to describe how the projects failed due to politics, miscommunication, poorly specified requirements and inadequate planning. I keep most of these articles hoping that they will teach me something, someday.

More recently, I read about another failed IT project in my part of the world: more than 90 million euros had been allocated to centralise the IT department and make it more efficient. The driver, as is so often the case these days, was the need to reduce cost and complexity, which in turn should reduce or consolidate the number of vendors.

In this case, an external agency “found” more than 15,000 different desktop applications after spending a large chunk of the IT budget cataloguing the IT estate. A second consulting agency then recounted, only to find that there were really only about 500 different applications… For that kind of money, I’d expect my consultants to be able to count. But I should move swiftly to my point.

It is my personal belief that many large “IT transformation” projects start out from a lack of understanding, or just plain ignorance. The result is invariably a full-blown, 80-page report which reveals that there are huge problems with how IT is organised, that IT is spending too much budget, or that it is not delivering services.

If these problems are not addressed quickly, they will only get bigger and more costly. The recommendations usually include that the home-grown applications need to disappear and be replaced by standard software or an ERP system.

My main gripe is that, without fail, the “old” Mainframe is judged too expensive and in need of replacement by distributed servers – all bringing more flexibility and enormous cost savings. These savings are usually derived from the assumption that newer technologies require fewer, cheaper staff.

New technologies are more often than not “one size fits all” – or can be modified “to fit all”, so where does the Mainframe fit into all this?

I recently visited a large company that had just spent a lot of money on an extensive report which basically said two things: replace everything possible with standard software, and remove the remaining applications from the Mainframe to run them on a cheaper platform (distributed or cloud). Based on past experience, the cost estimates and ROI I saw surprised me.

But what really scared me was the comparison between the costs of the current Mainframe and those of the future environment. The assumption was that in the near future, 50% of the applications would run in the cloud and the other 50% on distributed systems, and that running these applications would cost only 25-35% of what it costs to run them on the Mainframe. I made a few observations (a rough numerical sketch follows the list below):

  • Many of the Mainframe applications were very company specific, which was exactly what made the company competitive. Running these applications in the cloud would require major migration efforts and costs. Costs that could not be found in the report.
  • According to the report, the staff required to manage and run these cloud applications was less than 10% of what it takes to manage a distributed application, but there was no proof point or scientific basis for these numbers.
  • For the distributed applications that were supposed to replace the applications running on the Mainframe, assumptions were made without any consultation with the Mainframe staff. The sizing and complexity of applications that took 20 years to build and form the backbone of every transaction in this company were not taken into account.
  • According to the report, the staff needed to run the distributed applications and hardware was equal to the number currently managing them on the Mainframe. We all know that in real life the numbers will be different; a ratio of 1:10 for Mainframe vs distributed staff is not uncommon.
  • Integrating the 50% of applications running in the cloud with the 50% running on distributed systems was NOT taken into account at all. I have to say that I do not hear this raised in any cloud discussion I attend or read about. People seem to think either that it is not necessary or that it will all happen magically.
  • Integrating the cloud applications with the existing Mainframe applications while they are being implemented was also never mentioned. During the transition period, this will be an important and expensive job.
  • Software and hardware prices for the Mainframe were calculated with a yearly price increase, and depreciation was calculated according to the standard depreciation policies. Depreciation of the distributed servers was nowhere to be found for the five-year period that the report covered.
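
To make these observations concrete, here is a minimal, purely hypothetical sketch (in Python, my choice, not anything from the report) of the kind of five-year comparison involved. Every figure in it is an invented placeholder rather than a number from the report; the only thing it shows is how leaving out migration costs, realistic staffing ratios and server depreciation flips the conclusion.

```python
# Rough five-year cost comparison, purely illustrative.
# All monetary figures below are invented placeholders, NOT numbers from the
# report discussed above; they only show how the omitted items change the outcome.

def five_year_cost(annual_hw_sw, annual_staff, one_off_migration=0.0,
                   annual_depreciation=0.0, yearly_increase=0.0, years=5):
    """Sum hardware/software, staff and depreciation costs over `years`,
    applying a yearly price increase to hardware/software only."""
    total = one_off_migration
    for year in range(years):
        total += annual_hw_sw * (1 + yearly_increase) ** year
        total += annual_staff + annual_depreciation
    return total

# The current Mainframe, priced the way the report priced it: with a yearly
# price increase and standard depreciation included.
mainframe = five_year_cost(annual_hw_sw=2.0e6, annual_staff=1.5e6,
                           annual_depreciation=0.5e6, yearly_increase=0.03)

# The target environment as the report presented it: run costs at roughly a
# third of the Mainframe's, no migration effort, no server depreciation,
# and the same headcount as today.
report_view = five_year_cost(annual_hw_sw=0.6e6, annual_staff=1.5e6)

# The same target once a one-off migration, the 1:10 staffing ratio mentioned
# above and depreciation of the distributed servers are put back in.
realistic_view = five_year_cost(annual_hw_sw=0.6e6, annual_staff=1.5e6 * 10,
                                one_off_migration=8.0e6,
                                annual_depreciation=0.4e6)

print(f"Mainframe, 5 years:          {mainframe / 1e6:5.1f} M")
print(f"Report's view of the target: {report_view / 1e6:5.1f} M")
print(f"Target, omissions restored:  {realistic_view / 1e6:5.1f} M")
```

With these placeholder numbers, the report-style view comes out at roughly half the Mainframe cost, while the version with the omissions restored is several times more expensive. The exact figures are meaningless; the shape of the distortion is the point.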

Now, maybe it’s just me, but I have suspicions that many of these large IT consulting projects do not bring about the promised cost reductions and increased ROI at all. In some cases, the money and time spent on projects like this is three times the original budget, and the resulting complex infrastructure is even harder to manage and requires more staff than expected.

I could go on and on, but I think you’ve got the picture. In our industry, it sometimes looks as if spending thousands on an IT transformation report from a well-known consulting company is enough to go ahead, without looking at the real implications or understanding the technology.

I have personally been involved in projects like this, where a mainframe that was supposed to be dismantled after two years was still running happily after six. The problem was that the cost of the replacement project had exceeded the budget by so much that the ROI had to be calculated in decades instead of years; in other words, there would never be an ROI.

The Mainframe is sadly too often misunderstood. In our daily lives we are quick to write off old technology and replace it with newer, cooler stuff – but there we understand the implications of doing so. How many people really understand the Mainframe, and the implications of removing something that quietly gets on with running the critical business services? Young and cool is not always attractive or cost effective!

Let me end with an anecdote: seven years ago, I assisted a customer with a financial report on how IDMS was used in their company – what applications it was running, how much extra CPU it would take to migrate to another “more modern” DBMS, and so on.

About every 18 months, he calls me and says: “Marcel, I had to use your report again. It took them 15 minutes to understand. The project is off.” That said, not every situation is the same, and every potential project needs to be carefully analysed.

Marcel den Hartog is Principal Product Manager EMEA for CA’s Mainframe solutions. In this role, he is a frequent speaker at both internal (customer) and external events, where he talks about CA’s mainframe strategy, vision and market trends. Marcel joined CA in 1986 as a pre-sales consultant. Before this, he worked as a programmer/systems analyst on VSE and MVS systems, starting with CICS DL1/IMS and later with DB2. He is still an expert in CA Easytrieve and Cobol and has hands-on experience with many CA products. He was responsible for managing CA’s pre-sales teams in The Netherlands, Belgium and South Africa for a number of years. Prior to his current role, Marcel worked as a Linux Development Architect for CA’s Linux and Open Source team. In that role, he served almost two years as a board member of the Plone Open Source Community.