What do you do when your Web application isn’t performing?

If you run an organisation that relies on its website for business, or you are launching a service that thousands of people may be using at any one time, then performance is a big issue. When everything is working well, things are fine; but when problems occur, they can have a serious impact on how the business performs.

In the event of a problem, two things can happen to an app: either it goes down completely or it slows to a crawl. In a complex IT environment, an app isn’t stand-alone; it is usually tied to policy management, web servers, database servers and a myriad of other tools and services.

So the app can slow down or fail altogether for all kinds of reasons. The challenge is: how do you figure out where the problem is, and then deal with it effectively?

Infrastructure monitoring doesn’t cut it

On a technical level, infrastructure monitoring on its own is an approach that is pretty much bust, especially when it comes to modern applications. With the distributed, heterogeneous architectures businesses now run to support their web-based applications, infrastructure monitoring only really works if you can tie backend and frontend data together in real time, which is often difficult with internet-facing applications due to firewall and security constraints.

The cloud adds yet another level of complexity. Unless you manage each component and its inter-relationships, you can be in a situation where the infrastructure supporting each component is fine, yet overall the system doesn’t perform to standard (or perhaps at all).

And when – if – you become aware of a problem, the damage has already been done as far as the user is concerned. Application performance is not a static game, yet a static, after-the-fact view is exactly what you are forced into if you only monitor things. The emphasis should therefore be on proactive application management, as opposed to monitoring, which tends to be reactive and often too late to address problems.

Knowing what happens at the back end is not enough to understand the overall end-to-end user experience

With Ajax in particular, and with other client-side technologies too, code logic is spread between the backend and the point of consumption. So even if you could monitor the back end perfectly, you would have no idea how the app performs when users actually encounter it, unless you can integrate performance management probes into the user session.
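As an illustrative sketch (not any particular vendor’s probe), a small client-side script can read the browser’s Navigation Timing data to break a page load into phases. The field names below follow the W3C PerformanceNavigationTiming entry; the sample timings and the idea of posting the result to a collector are assumptions for illustration.

```javascript
// Minimal real-user-monitoring (RUM) sketch.
// summarizeNavigation() works on a PerformanceNavigationTiming-shaped
// object; in a browser you would feed it
// performance.getEntriesByType('navigation')[0] and ship the result
// home with navigator.sendBeacon() (collector endpoint up to you).
function summarizeNavigation(t) {
  return {
    dns:      t.domainLookupEnd - t.domainLookupStart,
    connect:  t.connectEnd - t.connectStart,
    ttfb:     t.responseStart - t.requestStart,   // time to first byte
    download: t.responseEnd - t.responseStart,
    domReady: t.domContentLoadedEventEnd - t.startTime,
    total:    t.loadEventEnd - t.startTime,
  };
}

// Sample timings in milliseconds, standing in for a real browser entry.
const sample = {
  startTime: 0, domainLookupStart: 5, domainLookupEnd: 25,
  connectStart: 25, connectEnd: 70, requestStart: 70,
  responseStart: 190, responseEnd: 230,
  domContentLoadedEventEnd: 800, loadEventEnd: 1400,
};
console.log(summarizeNavigation(sample));
// { dns: 20, connect: 45, ttfb: 120, download: 40, domReady: 800, total: 1400 }
```

The point is that this data only exists in the user’s browser: no amount of backend instrumentation can produce it.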

Application browser clients frequently embed affiliate code, usually JavaScript-based and intended to provide various types of business information. These and other third-party calls are completely invisible to any backend monitoring system that is in place.

Backend monitoring therefore gives you no insight when a sub-standard user experience is down to slow affiliate performance, which is more common than you might think. As an aside, affiliate use can be extremely fickle, and it is rapidly falling out of favour with technology analysts, so many businesses have redundant affiliate code littered across their websites. A little regular housekeeping to remove this unwanted and unnecessary code can make a surprising difference to user experience.
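The browser’s Resource Timing API is one way to make slow third-party calls visible. The sketch below filters resource entries for off-site requests above a latency threshold; the hostnames, sample entries and 500 ms threshold are invented for illustration (in a browser you would pass in `performance.getEntriesByType('resource')`).

```javascript
// Sketch: flag slow third-party resources from Resource Timing data.
// Returns entries served from a host other than our own that took
// longer than thresholdMs.
function slowThirdParty(entries, ownHost, thresholdMs) {
  return entries.filter((e) => {
    const host = new URL(e.name).hostname;
    return host !== ownHost && e.duration > thresholdMs;
  });
}

// Sample entries (invented) so the logic is self-contained.
const entries = [
  { name: 'https://ourshop.example/app.js',       duration: 80 },
  { name: 'https://affiliate.example/tracker.js', duration: 950 },
  { name: 'https://cdn.example/widget.js',        duration: 120 },
];
const slow = slowThirdParty(entries, 'ourshop.example', 500);
console.log(slow.map((e) => e.name));
// [ 'https://affiliate.example/tracker.js' ]
```

Running something like this in real user sessions quickly shows which affiliate scripts are earning their keep and which are just dragging the page down.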

Reliance on third-party offerings is fast becoming the norm, with a plethora of service-based offerings available covering search, content delivery, mobile site provision, security, credit referral, payment gateways, APIs and more.

It is vitally important to understand and monitor the performance of any third-party services that are part of your application infrastructure, as they can quickly become performance bottlenecks if left unattended. All of the above has obvious ramifications in terms of (internal and external) system adoption, and building user adoption and customer loyalty.

Lifecycle

Rooting out performance issues early on saves time and money later. Keep performance uppermost in mind as you build your application: think performance by design, and you won’t go far wrong. At this stage, it’s also worth linking client performance and backend together, so you can get a true picture of user experience before you roll it out.

Giving different teams access to equivalent information facilitates communication, so (for example) the QA team’s concerns can be accurately conveyed to developers. To this end, we recommend using a single APM tool across the application lifecycle – so you can easily take your dev and test APM profiles and port them into production. The benefits of this are realized to an even greater degree if you’re operating an Agile development environment, looking at continuous integration or have a DevOps approach.

Ultimately, the concern for the application owner and the business in general is: how does my application bear up in production? This leads to another question: why monitor fake users, when it’s the real ones that matter? While synthetic tracking can play a role in performance monitoring, visibility into real user data, as opposed to averages and synthetic transactions, is critical.
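To see why averages mislead, consider a toy set of real-user page-load times (the numbers are invented for illustration): a couple of very slow sessions barely disturb the mean, while a high percentile exposes them immediately.

```javascript
// Sketch: why real-user percentiles beat averages.
// Twenty page-load times in seconds: 18 fast sessions, 2 very slow.
const times = [...Array(18).fill(1.0), 8.0, 8.0];

const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

// p-th percentile via the nearest-rank method on the sorted sample.
function percentile(xs, p) {
  const sorted = [...xs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[rank - 1];
}

console.log(mean(times).toFixed(1)); // '1.7' -- looks acceptable
console.log(percentile(times, 95));  // 8 -- yet 1 in 10 users waits 8s
```

A dashboard showing only the 1.7-second average would never tell you that a tenth of your real users are suffering.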

In this way, you can be sensitive to performance degradation before it becomes an issue; and you’ll have the information that empowers you to tackle it. This is true end-to-end performance management that can make a difference to the business.

Ian Molyneaux is a seasoned IT professional with over 30 years' experience in the industry. As managing director at Intechnica, he is passionate about promoting a structured approach to performance assurance in an enterprise context, which has historically been the weak link in software quality assurance. Ian is the published author of a number of titles, including "The Art of Application Performance Testing".