Avoiding The Hidden Costs Of Virtualisation


Although virtualisation has further stimulated interest in saving costs within data centres, some of the complex management aspects and the implications of longer-term running costs are still not properly understood.

Virtualisation is the abstraction of physical resources into separate logical – virtual – units. It takes several distinct forms, including storage, network and server virtualisation. In some respects, virtualisation offers facility managers, CIOs and IT managers cost savings and improved efficiency, capacity, control and security, as well as other advantages including reduced power consumption, cooling requirements and system downtime.

However, the argument is a little more nuanced than that. As with most technologies, there are potential pitfalls that need to be avoided. These issues can be tackled with data centre infrastructure management tools, which give a new level of insight into performance and cost efficiencies in the data centre.

How has virtualisation changed the data centre?

The changes that virtualisation has brought to the data centre can be divided roughly into three categories: new physical infrastructure and potential downtime; changes to management techniques and productivity; and implications for system power consumption and costs.

First, looking at changes to physical infrastructure, partitioning individual physical servers into multiple virtual units has increased the capacity of individual machines. In theory this means greater capacity in the data centre as a whole, helping to improve machine utilisation and overall operational efficiency.

Secondly, this has changed data centre management techniques: facility managers have greater flexibility because they are no longer so constrained by their physical infrastructure. The benefits are obvious in situations where a technical failure would previously have meant server downtime; now the workload can be shifted to another machine with little difficulty. This flexibility essentially translates into more control for data centre professionals.

Thirdly, there is a big impact on power consumption and costs. By reducing the number of physical servers, you reduce the amount of energy needed to run the infrastructure (though there are caveats, discussed below). With fewer servers, associated management costs could in theory also fall, and cooling costs in particular should drop because there is less equipment to cool. In the face of the twin pressures of reducing costs and cutting carbon footprint, this saving in particular is a welcome one.
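As a rough sketch of that arithmetic, the figures below (server counts and average power draws) are purely illustrative assumptions rather than measurements:

# Back-of-envelope estimate of the energy impact of server consolidation.
# All figures (server counts, average power draws) are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def annual_kwh(server_count, avg_watts_per_server):
    """Annual energy use in kWh for a fleet of servers at a given average draw."""
    return server_count * avg_watts_per_server * HOURS_PER_YEAR / 1000.0

# Before: 100 lightly loaded physical servers averaging ~200 W each.
before_kwh = annual_kwh(100, 200)

# After: the same workloads consolidated onto 25 virtualisation hosts averaging
# ~450 W each (hosts draw more because they carry several virtual machines).
after_kwh = annual_kwh(25, 450)

print(f"Before consolidation: {before_kwh:,.0f} kWh/year")
print(f"After consolidation:  {after_kwh:,.0f} kWh/year")
print(f"IT energy saved:      {before_kwh - after_kwh:,.0f} kWh/year "
      f"({1 - after_kwh / before_kwh:.0%})")

Under these assumptions the IT energy bill falls by roughly 40 per cent, which is why consolidation is so attractive; the caveats below concern what happens to the rest of the facility.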

What are the challenges of virtualisation?

Virtualisation has been marketed in some quarters of the IT industry as a neat way to derive more from the corporate IT infrastructure; however, planning and managing a virtual computing strategy presents some demanding operational challenges. For instance, if an organisation opts to host multiple virtual servers on a single machine, that host will need more CPU capacity and significantly more memory.

However, by increasing the server estate’s capacity and workload over time, the organisation can expand its data centre to a point where it is arguably beyond the direct control of the IT function, raising management, resilience and cost questions. This expansion of the server infrastructure, or “server sprawl”, is one of the main causes of avoidable energy consumption in the data centre. Technology industry researchers have estimated that firms can spend up to 40% of their total IT budget on data centre running costs.

Virtualisation that is not managed with running costs in mind has specific impacts. It can undermine a data centre’s power usage effectiveness (PUE) if, after server consolidation, the power and cooling infrastructure is not resized to match the reduced IT load. While the IT load may shrink through consolidation, the ‘fixed losses’ in power and cooling plant become a higher proportion of total data centre energy use, which worsens (raises) PUE: the measure is generally at its best (lowest) under heavier IT loads and deteriorates as the computing load falls.
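A minimal sketch of that effect, using assumed figures for the IT load and a facility overhead (cooling, UPS and distribution losses) that is left unchanged:

# Why PUE can worsen after consolidation if the facility is not resized.
# PUE = total facility power / IT equipment power; lower is better.
# The IT loads and the fixed overhead below are assumed, illustrative figures.

def pue(it_load_kw, facility_overhead_kw):
    """Power usage effectiveness for a given IT load and facility overhead."""
    return (it_load_kw + facility_overhead_kw) / it_load_kw

FIXED_OVERHEAD_KW = 80  # cooling, UPS and distribution losses, left unchanged

print(f"Before consolidation: IT load 200 kW -> PUE {pue(200, FIXED_OVERHEAD_KW):.2f}")
print(f"After consolidation:  IT load 100 kW -> PUE {pue(100, FIXED_OVERHEAD_KW):.2f}")
# The IT load halves, but the overhead does not, so PUE rises from 1.40 to 1.80.

The total energy bill still falls, but the facility looks less efficient on paper unless the cooling and power plant are right-sized alongside the consolidation.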

Despite these management issues, there have recently been new developments in data centre monitoring and management that can be used to counter them. In particular, the rise of data centre infrastructure management (DCIM) is changing the game. This approach enables data centre and IT professionals to take a holistic approach to monitoring data centre computing and utilities operations and thereby manage the impact of strategies such as virtualisation on the cooling needs of their facility.

This approach arose because IT managers came under increasing pressure from their C-level executives for system energy metrics at a more integrated level. Major data centre equipment manufacturers such as HP and Dell took the lead in this new thinking and, in the last two years, both have developed or re-designed their own platforms to support enhanced monitoring and equipment that can run at higher temperatures, reducing overall running costs, including cooling.

As the data centre industry begins to move away from ‘first generation’ measures like PUE, DCIM will help monitor the power usage of separate processes, and even track consumption down to the level of an individual server.

This shift means IT professionals can plan and carry out in-depth analysis of how data centre power is being used, allowing corporate decision-makers to examine equipment usage and power-uptake trends across their data centres and identify where operational changes can reduce energy waste.

This visibility identifies the business units that use the most energy and, in turn, helps change the way the infrastructure is managed. DCIM is a critical breakthrough because it provides the information necessary to run an efficient and cost-effective data centre, along with overview information that board-level executives and IT professionals alike can use to manage the IT infrastructure, including the virtualisation strategy.

In essence, DCIM enables IT and FM professionals to monitor the data centre’s key parameters, develop an overview of the facility’s efficiency and drill down to the level of individual machines or server racks to identify under-utilised or even unproductive servers.
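As a simple illustration of the kind of granular check this makes possible, the sketch below flags servers that draw meaningful power while sitting almost idle; the readings, field names and thresholds are hypothetical rather than taken from any particular DCIM product:

# Hypothetical sketch of flagging under-utilised servers from DCIM-style
# per-server readings. The field names, thresholds and sample data are
# assumptions for illustration, not any particular DCIM product's API.
from dataclasses import dataclass

@dataclass
class ServerReading:
    name: str
    rack: str
    avg_power_w: float    # average power draw over the reporting period
    avg_cpu_util: float   # average CPU utilisation, 0.0 to 1.0

def underutilised(readings, cpu_threshold=0.10, min_power_w=100.0):
    """Servers drawing meaningful power while doing very little useful work."""
    return [r for r in readings
            if r.avg_cpu_util < cpu_threshold and r.avg_power_w > min_power_w]

readings = [
    ServerReading("web-01",    "rack-A", 210.0, 0.45),
    ServerReading("legacy-07", "rack-B", 180.0, 0.03),  # consolidation candidate
    ServerReading("batch-12",  "rack-B", 320.0, 0.62),
]

for server in underutilised(readings):
    print(f"{server.name} ({server.rack}): "
          f"{server.avg_power_w:.0f} W at {server.avg_cpu_util:.0%} CPU")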

This granular level of analysis then helps data centre managers identify a specific plan of action to fine-tune server infrastructure and virtualisation management strategies for optimum performance or for more energy- or cost-efficient operation, as required. According to industry analyst Gartner, DCIM will grow from its current 1 per cent market penetration to more than 60 per cent by 2015.

When acknowledging the benefits of virtualisation strategies for today’s 24/7-focused business, it is important to bear these longer-term management challenges in mind. A well-planned and well-executed virtualisation strategy can be extremely effective, and when combined with the right DCIM programme it can transform the data centre’s responsiveness while keeping management costs under closer control.


Chris Smith is the sales and marketing director for on365 and has over 20 years’ experience in the industry. Chris is a specialist in the planning, management and optimisation of physical IT infrastructure and utility services.