Tackling Data Centre Inefficiencies

At the Data Centre and IT Operations Summit in November 2011, Rakesh Kumar, vice president of Gartner Research, said that energy costs in data centres will rise sharply over the next few years. In 2010, energy alone accounted for 12% of the cost of running a data centre, making it the third largest expense after people (29%) and software (22%).

However, the outlook is that this split will change in the next few years. While hardware costs (servers and storage devices) will go down, energy is forecast to account for a fifth of the cost of running a data centre.

It’s true that physical servers have shrunk over the past twenty years thanks to commoditisation, and the old fridge-sized devices have given way to today’s more manageable “pizza box” configurations. In addition, processing capacity has been roughly doubling on a two-year cycle for the past forty years, which compounds to around a million-fold (2^20) increase, and the trend is expected to continue until at least 2020.

However, this extra power density and increased processing capacity have contributed to the higher energy and cooling demands in modern data centres. Computing resources can become so concentrated in one area that there is insufficient power available for any extra capacity. This gives rise to a dangerous paradox: it becomes cheaper to build a whole new data centre than to retrofit an old one to create extra usable space.

The hidden costs of virtualisation

The rhetoric around energy and space savings has created a view that virtualised servers provide automatic cost benefits. Indeed, many companies have sought to mitigate data centre inefficiencies with virtualisation.

However, it is worth noting the areas where costs can creep up on you. For example, a server purchased to host virtual machines (VMs) is usually specified much higher than a standalone server: hosting multiple virtualised servers requires more CPUs and significantly more memory, both of which drive up power requirements and heat output.

After virtualising 100% of its physical servers, a company will run more operating system instances than it did previously: consolidate 100 physical servers onto ten hosts, for example, and there are 110 instances to manage. The physical and environmental footprint may have changed, but the IT department will still need to monitor, patch, secure, back up and license the virtualised servers, and now the host servers as well. Requirements such as high availability further increase infrastructure costs.

In reality, consolidation through virtualisation is a band-aid, not a long-term way to increase IT efficiency.

What’s useful?

Server rationalisation should be part of any virtualisation project. Today’s IT efficiency tools can identify which servers are doing useful work, and that insight feeds an ongoing process of reclaiming unused resources and avoiding unnecessary expenditure. Traditional systems and operations management tools struggle to report on the utility of each server.

CPU utilisation reports show how much “usage” took place, but not what that usage was. Most corporate servers carry a standard software load (antivirus, systems management, backup and event monitoring agents), so every server registers some amount of activity. Raw utilisation figures cannot reveal whether that activity is useful, in other words, whether it provides any business value.
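To make that distinction concrete, here is a minimal sketch, assuming Python with the psutil library, that samples per-process CPU time and splits it between known background agents and everything else. The agent process names are invented for illustration; a real useful-work metric would need a richer model of business value.

    import time
    import psutil

    # Hypothetical names of standard corporate agents whose CPU "usage"
    # carries no direct business value; adjust to match your own estate.
    OVERHEAD_AGENTS = {"av_scanner", "backup_agent", "mgmt_client", "event_monitor"}

    # Prime the per-process CPU counters, then measure over an interval.
    for proc in psutil.process_iter():
        try:
            proc.cpu_percent(None)
        except psutil.Error:
            pass

    time.sleep(10)  # measurement window

    overhead = useful = 0.0
    for proc in psutil.process_iter(["name"]):
        try:
            pct = proc.cpu_percent(None)
        except psutil.Error:
            continue  # process exited mid-sample
        if proc.info["name"] in OVERHEAD_AGENTS:
            overhead += pct
        else:
            useful += pct

    print(f"Background agents:  {overhead:.1f}% CPU")
    print(f"Everything else:    {useful:.1f}% CPU")

Even this crude split makes the point: a server showing healthy utilisation may be doing very little that matters to the business.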

Virtualisation has clear benefits, including a reduction in energy consumption and floor space, but to get the most out of it, organisations must invest in tools that monitor efficiency, specifically the amount of useful work their IT assets undertake.

An expensive software habit

In addition to the overprovisioning of servers, there is software licence waste on every corporate server. Businesses spend heavily on enterprise applications, some of which end up unused or infrequently used. According to analyst estimates, licensing and running servers accounts for approximately 80-90% of total software spend.

These applications each come with their own complex licensing rules. Deploying tools that reveal actual usage makes it possible to spot where a premium licence can be replaced by a standard edition, or cancelled altogether if the software is not in use.
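As an illustration of that kind of decision logic, here is a minimal sketch; the thresholds, field names and applications are invented for the example, not drawn from any particular licensing tool:

    from dataclasses import dataclass

    @dataclass
    class LicenceRecord:
        app: str
        edition: str         # e.g. "premium" or "standard"
        days_since_use: int  # fed by a usage-metering tool

    # Hypothetical policy thresholds; tune to your own audit position.
    CANCEL_AFTER_DAYS = 90
    DOWNGRADE_AFTER_DAYS = 30

    def recommend(rec: LicenceRecord) -> str:
        if rec.days_since_use >= CANCEL_AFTER_DAYS:
            return "cancel"
        if rec.edition == "premium" and rec.days_since_use >= DOWNGRADE_AFTER_DAYS:
            return "downgrade to standard"
        return "keep"

    estate = [
        LicenceRecord("dbms", "premium", 120),
        LicenceRecord("etl_suite", "premium", 45),
        LicenceRecord("monitoring", "standard", 2),
    ]
    for rec in estate:
        print(f"{rec.app}: {recommend(rec)}")

The usage data would come from whatever metering tool the organisation runs; the point is that a small, explicit policy beats deliberate overprovisioning when audit time comes.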

Diverse licensing models across different applications are complex to manage and can lead to deliberate overprovisioning as an expensive way to avoid the wrath of auditors. The outcome is IT waste and operational inefficiency.

Consolidation is key

As energy prices rise, it becomes ever more important for data centre managers to tackle the inefficiencies that abound in physical and virtual environments, resource usage and software licensing. It’s time for businesses to take control of their server estates by taking stock of their data centre assets and ensuring they are used efficiently. With big savings there for the taking, there’s no time to waste.

Andy Hawkins is a Product Manager at 1E, a global role he has held for three years. Reporting to the Head of Software, Andy is an integral part of the innovations team and is responsible for new innovations that enable end-user organisations to reduce IT costs. Most recently, he spearheaded the development of NightWatchman Server Edition, a power and efficiency management tool that launched in October 2009. During his eleven years at 1E, Andy has held a number of roles, including principal consultant, in which he led a team tasked with designing some of the UK’s largest IT infrastructures. He was seconded to EMC in 2000 as a storage architect, working with large UK organisations such as Orange and Norwich Union. Andy holds a BSc in Physics and Acoustics from the University of Surrey and has a passion for music production in his spare time.