Waiting eight seconds for a website to load might have been the standard a few years ago, but today that seems like an eternity. In this era of app-centric IT, we have become a generation obsessed with instant gratification. Not only have we become impatient with slow applications, but we also increasingly expect our IT team to resolve issues almost instantly.
A recent survey found that 94 percent of business end users believe application performance and availability directly affect their ability to do their job, with 44 percent saying it is absolutely critical. Similarly, 66 percent expect their IT team to fully resolve an issue within an hour of it being reported. These eye-opening statistics highlight a dramatic change in end-user mindset and the increased pressure being placed on the IT department.
With this in mind, it is no surprise that when corporate response times run slowly, employees become frustrated, productivity dives and the blame game flourishes. More often than not, the finger-pointing is directed at the database and the database administrator (DBA) team.
Although not everybody sees it, the database affects us daily. It may sit low in the technology stack, but it is highly complex and often one of the biggest contributors to the wait time we all experience every day. When performance slows, senior management’s immediate response might be to throw money at expensive new hardware. However, buying a new database server often comes with a hefty price tag – including increased administrative overhead and ongoing costs for the business.
Before you purchase new hardware, it is worth taking a closer look at application query performance. Make sure you have a system in place that can monitor 24/7, record query performance over time, and analyse historical data to identify the trends and anomalies that are key to improving query performance. This visibility can not only surface potential issues but also help alleviate bottlenecks and improve overall performance.
To help make the job easier, here are some tips to ensure your application queries run smoothly and efficiently:
- Check the base tables (not views) and verify the expected row counts. People are often unaware that a table with millions of rows is being joined to other tables of a similar size.
- Examine the filters – the WHERE and JOIN clauses determine the result set that will be returned, so it is essential to understand them.
- Know the selectivity of the table indexes, making sure to work with the smallest logical set whenever possible.
- Analyse the query columns, looking for SELECT * or scalar functions.
- Look for CASE, CAST and CONVERT. Do not overlook sub-queries.
- Review existing keys, constraints and indexes. This is crucial to do before you consider altering existing indexes or adding new ones.
- Examine the execution plan. Look for potentially expensive operators such as table or index spools, nested loop joins over large inputs, and fat pipes (arrows representing large row counts). Also look for warning icons flagging issues such as implicit conversions or missing indexes.
- Remember to focus on logical I/O (the number of logical reads), which is more repeatable than elapsed time because it is unaffected by caching and server load.
- Adjust the query, focusing on the most expensive operations first.
- Remember to make one change at a time, and record the results after each iteration.
- Consider index adjustments, especially if changing code is not an option.
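As a rough illustration of the first few checks, the T-SQL sketch below shows one way to verify base-table row counts, review existing indexes, and measure logical reads while inspecting a query. The `dbo.Orders` and `dbo.Customers` tables and their columns are hypothetical stand-ins, not part of any real schema.

```sql
-- Hypothetical tables: dbo.Orders and dbo.Customers.

-- 1. Verify the expected row counts of the base tables.
SELECT COUNT(*) AS order_rows    FROM dbo.Orders;
SELECT COUNT(*) AS customer_rows FROM dbo.Customers;

-- 2. Review existing keys and indexes before altering or adding any.
EXEC sp_helpindex 'dbo.Orders';

-- 3. Measure logical reads while running the query under investigation.
SET STATISTICS IO ON;   -- reports logical reads per table
SET STATISTICS TIME ON; -- reports CPU and elapsed time

SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM   dbo.Orders    AS o
JOIN   dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE  o.OrderDate >= '2015-01-01';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Run with the actual execution plan enabled, this gives you both the logical-read counts to track between iterations and the plan operators to scan for spools, fat pipes and warning icons.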
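And as an example of adjusting a query one change at a time, the sketch below replaces SELECT * with an explicit column list and rewrites a scalar function on the filtered column as a range, so an index on that column can be used. Again, the table and column names are made up for illustration.

```sql
-- Before: SELECT * returns columns the application may not need, and
-- wrapping OrderDate in a function prevents an index on that column
-- from being used for the filter.
SELECT *
FROM   dbo.Orders
WHERE  YEAR(OrderDate) = 2015;

-- After: name only the columns required, and express the filter as a
-- date range so it is sargable (index-friendly).
SELECT OrderID, CustomerID, OrderDate
FROM   dbo.Orders
WHERE  OrderDate >= '2015-01-01'
AND    OrderDate <  '2016-01-01';
```

Make one such change, re-run the query, and record the logical reads before trying the next adjustment.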
Before you write that cheque for new hardware, follow these tips to make sure your database is actually working the way it should!