Why You Should Optimise Application Performance

Let’s all agree that many IT datacenters consume too much energy. The electricity needed to keep all that hardware performing at its optimum has long been a cause for concern. Energy prices are expected to increase by at least 30% over the next 20 years (EU Energy Trends to 2030).

And while this alone is reason enough for concern, additional taxes on emissions are also expected, and the number of EU directives and regulations on energy efficiency is growing. Enough reasons for IT executives to look for ways to reduce the amount of power needed.

The IT industry has already shown that there is a lot to gain: server virtualization, more energy-efficient datacenters and a move to more Cloud-based solutions are a few examples that come to mind. But server virtualization and the Cloud alone are not the solution.

According to Aberdeen Group, only 55% of all applications are virtualized on average, with an average server utilization rate of 45%. On the one hand, the fact that the companies in the survey plan to run, on average, 71% of their applications virtualized proves that virtualization is worth the effort. On the other hand, with the number of applications growing and the additional load of mobile applications adding to the overall CPU power needed, this will be an ongoing battle.

If there is one thing that has suffered from all the attention given to virtualization and Cloud initiatives, it is application performance management, along with the application tuning expertise to make this happen, and the skills to write better and more efficient software.

Even on a platform (the IBM Mainframe) where this was almost part of the DNA of those who managed it, application performance management (APM for short) has suffered due to a lack of funds and time. The result of this lack of attention to the performance of our applications is simple: we need more hardware to run them, resulting in additional capacity, which in turn needs more power.

Making software more efficient, however, is not simple. Many applications today are packaged applications, and the knowledge to tune them is either very expensive or takes a long time to acquire. If we write in-house applications, functionality and time-to-market are often more important than code efficiency.

And even when an application runs efficiently immediately after its first implementation, it often starts to suffer from performance problems in the months that follow. With applications crossing multiple platforms, the NMP (Not My Problem) syndrome makes performance management even worse. Many distributed applications that read data from a DB2 database running on a mainframe use SQL queries that give them the answer they want, but in the most inefficient way.
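
To make that concrete, here is a minimal sketch of the pattern, written against Python’s generic DB-API with a hypothetical ORDERS table (the table, the columns and the connection setup are illustrative assumptions, not taken from any specific application):

```python
# Sketch only: "conn" is assumed to be an open DB-API connection to DB2;
# the ORDERS table and its columns are hypothetical.

def open_orders_inefficient(conn, customer_id):
    """Ships every row off the mainframe and filters in the application."""
    cur = conn.cursor()
    cur.execute("SELECT * FROM ORDERS")  # full table scan, every row travels over the network
    # Position 1 is assumed to be CUSTOMER_ID in this made-up schema.
    return [row for row in cur.fetchall() if row[1] == customer_id]

def open_orders_efficient(conn, customer_id, since):
    """Lets DB2 filter (and use an index) before any data leaves the mainframe."""
    cur = conn.cursor()
    cur.execute(
        "SELECT ORDER_ID, ORDER_DATE, TOTAL "
        "FROM ORDERS "
        "WHERE CUSTOMER_ID = ? AND ORDER_DATE >= ?",
        (customer_id, since),
    )
    return cur.fetchall()
```

Both versions give the application the answer it wants; the first one simply burns far more mainframe CPU and network bandwidth to get there.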

When IT budgets were abundant, we solved this problem in an easy way: we bought more (faster) hardware, and then we simply waited (but for less time than before). We never solved the real problem (an application that performed sub-optimally); we simply found a workaround that we could afford at the time.

But now times have changed, and our attitude towards applications with high CPU usage has to change as well. In-house development teams should not only be made responsible for creating applications that perform well: maintaining that same level of performance AFTER the application has been in use for a while should also be assigned to these same teams. Apart from adding new functionality when needed, they should have the responsibility to go back on a regular basis and check whether performance is still on par with the initial expectations.

Third party application vendors also have the responsibility to guarantee that their applications still perform well after one, two or even three years. It is a fact that the behavior of an application changes over the years (and especially in the first few months) of usage. When a database grows, its behavior changes and queries that did well with 100 rows will probably not do so well with 100,000,000 rows…
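
As an illustration of how that happens, consider two queries that return the same answer (the ORDERS table and its ORDER_DATE index are hypothetical, used only for this sketch): one wraps the indexed column in a function, which generally forces DB2 to scan every row, while the other uses a plain range predicate that the index can serve, so its cost tracks the result size rather than the table size.

```python
# Illustrative only: assumes a hypothetical ORDERS table with an index on ORDER_DATE.

# Fine with 100 rows, painful with 100,000,000: applying a function to the
# column generally prevents DB2 from using the ORDER_DATE index, so the
# whole table is scanned on every execution.
query_scans_everything = """
    SELECT ORDER_ID, TOTAL
    FROM ORDERS
    WHERE YEAR(ORDER_DATE) = 2012
"""

# Same result set, but the bare-column range predicate lets the index do the
# work, so the cost grows with the number of matching rows, not the table size.
query_uses_index = """
    SELECT ORDER_ID, TOTAL
    FROM ORDERS
    WHERE ORDER_DATE BETWEEN '2012-01-01' AND '2012-12-31'
"""
```

Nobody notices the difference in a test database with a few hundred rows; production a year later is a different story.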

But performance tuning is typically something that is easily “forgotten”. Since there is no direct ROI, many managers prefer to run projects that are more visible and help support the business objectives. At the same time, everybody who USES applications knows that an application that does not perform well can (and will) cost money.

Loss of employee productivity, dissatisfied partners who have to wait too long for price quotes, a helpdesk that either suffers from complaints about these applications or spends too much time finding resolutions because it has to wait too long for a response from its own helpdesk system: these are all drains on your resources.

But more importantly, the money we could save by moving four applications onto one virtualized server instead of just three, or the money we would otherwise pay our hardware vendor for additional CPU power, can instead be spent on innovative new projects.

The ROI of a performance project is not an easy one to prove but, once you have done your homework, it often delivers very lucrative results. To make a performance project work, however, you need the right tools. If you don’t know what you are looking for and (more importantly) WHERE to look, the ROI case will be hard to make.

A solid APM suite will not only give you the transaction response times that the end user experiences, it will also give you drill-down capabilities into the components involved (whether on a distributed platform or on the mainframe) that help you pinpoint and solve problems. Without the right tools, even the best expert is helpless.

One final note for those in-house developers among you: it’s only after the application goes into production that the real work starts. Even the smartest algorithm may prove inefficient once the database gets filled with real data. It not only pays to test with production-like data (in quantity terms), it also helps you to set a baseline. And a good baseline is what you need once the application goes live, because it will show you well in advance when things start to go wrong.
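
As a minimal sketch of what setting and checking such a baseline could look like (the transactions being timed, the 25% threshold and the file location below are all placeholder assumptions, not part of any particular tool): time a few representative transactions against production-sized data before go-live, record the results, and compare every later run against them.

```python
import json
import time
from pathlib import Path

BASELINE_FILE = Path("perf_baseline.json")   # placeholder location, an assumption
THRESHOLD = 1.25                             # flag anything 25% slower than the baseline

def time_transaction(fn, *args, repeats=5):
    """Best-of-N wall-clock time for one representative transaction."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)                            # e.g. a key query run against production-sized data
        timings.append(time.perf_counter() - start)
    return min(timings)

def record_baseline(measurements):
    """Run once, just before go-live, against production-like data volumes."""
    BASELINE_FILE.write_text(json.dumps(measurements, indent=2))

def check_against_baseline(measurements):
    """Run regularly after go-live; report transactions drifting away from the baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    for name, elapsed in measurements.items():
        if name in baseline and elapsed > baseline[name] * THRESHOLD:
            print(f"{name}: {elapsed:.2f}s now vs {baseline[name]:.2f}s at baseline -- investigate")
```

Even something this small makes “well in advance” concrete: the drift shows up in a report long before your users start calling the helpdesk.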

Applications that are properly tuned will save companies money. Lots of money. Partly in reduced CPU capacity, but also in the amount of power needed to drive the IT infrastructure. And as a by-product, they will make your users more satisfied and more productive. A real win-win-win situation if you ask me.

Marcel den Hartog is Principal Product Manager EMEA for CA’s Mainframe solutions. In this role, he is a frequent speaker on both internal (customer) and external events where he talks about CA’s mainframe strategy, vision and market trends. Marcel joined CA in 1986 as a Pre-sales consultant. Before this, he worked as a programmer/systems analyst on VSE and MVS systems, starting with CICS DL1/IMS and later with DB2. He is still an expert in CA Easytrieve and Cobol and has hands-on experience with many CA products. He was responsible for managing CA’s pre-sales teams in The Netherlands, Belgium and South Africa for a number of years. Prior to his current role Marcel worked as a Linux Development Architect for CA’s Linux and Open Source team. In that role, he served almost two years as a board member of the Plone Open Source Community.