Data Centres: The Physical Foundation of Virtualisation and the Cloud


The continued growth of virtualisation and cloud technologies is changing the way in which data centres must be designed and configured. While much attention is given to the benefits that modern virtual and cloud environments bring to computing, it is important to understand the impact their introduction has on the physical infrastructure upon which they run: the data centre.

Virtual challenges to data centre design

One of the great advantages of a virtual system is the ability to match resources to demand. When the number of users of a given application increases, a virtual system will either allocate more resources on the application's current server or move the workload to a different server with greater capacity. This dynamic, fluid movement of load is certainly an advantage – if a system fails, for example, its workload can easily be relocated to another server – but for the data centre, it presents new configuration challenges.
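To make that placement decision concrete, here is a minimal sketch in Python (the Host class, place_workload function and CPU figures are all hypothetical, not any particular hypervisor's API): grow the workload in place where there is headroom, otherwise migrate to the server with the greatest spare capacity.

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        cpu_capacity: float  # total CPU available, e.g. vCPUs
        cpu_used: float = 0.0

        @property
        def free_cpu(self) -> float:
            return self.cpu_capacity - self.cpu_used

    def place_workload(extra_cpu: float, current: Host, hosts: list) -> Host:
        """Grow the workload on its current host if it fits, otherwise
        move it to the host with the most spare capacity."""
        if current.free_cpu >= extra_cpu:
            current.cpu_used += extra_cpu
            return current
        target = max(hosts, key=lambda h: h.free_cpu)
        if target.free_cpu < extra_cpu:
            raise RuntimeError("no host can absorb the extra load")
        # Simplification: a real migration also moves the VM's existing load.
        target.cpu_used += extra_cpu
        return target

    hosts = [Host("rack1-a", 16, cpu_used=14), Host("rack2-b", 32, cpu_used=8)]
    print(place_workload(4, hosts[0], hosts).name)  # rack2-b: rack1-a lacks headroom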

Before virtualisation, mission-critical applications would be maintained on the highest-spec servers and given additional power and cooling. Now, with the dynamic movement of applications on a virtual system, mission-critical applications no longer remain in one place; they move around. In a virtual environment it is therefore necessary to ensure that any or all of the servers in your data centre can adequately support these mission-critical applications.

The movement of applications around the data centre also makes it much more difficult to prepare for temperature hot spots, since they can now appear anywhere: like the virtual machines themselves, hot spots move around depending on load. Before virtualisation, it was possible to anticipate hot spots and take action accordingly. Now, the whole data centre must be configured to prevent spikes in temperature from affecting service delivery.

A virtual system also requires a certain amount of redundancy to be designed into the data centre – enough to allow for the peaks and troughs of usage – but how much is optimal in this new environment?

To answer these questions, the traditional rulebook for data centre development needs substantial revision. New techniques and approaches must be adopted to optimise your data centre for a virtual or cloud environment.

Data centre cooling

There are several steps you can take to resolve problems with hot spots and cooling while allowing your virtual machines to run unimpeded. The first is to use the ‘hot aisle/cold aisle’ method of temperature control across the entire data centre, in which hot or cold air is captured and held in a defined area so that the maximum benefit can be derived from cooling activities.

For this method to work efficiently, the racks must be properly sealed from front to back, cabling must not be allowed to get in the way of the airflow, and air from the cold aisle must not bypass equipment. Side-to-side airflow might also need correcting so that it flows front to back. Furthermore, you might need to watch that racks and cabinets do not create their own microclimate by re-circulating air internally.

The way in which the racks are sealed plays an important role in the correct implementation of ‘hot aisle/cold aisle’, and getting it right takes effort, but the savings speak for themselves: containment can bring a 23-30% reduction in cooling provisioning, which adds up to a substantial reduction in operating costs over the life of any data centre.
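As a back-of-the-envelope illustration (the annual cooling spend below is an assumed figure, not taken from any particular site), that quoted range translates into savings like these:

    # Python sketch: what a 23-30% cut in cooling provisioning is worth,
    # assuming a hypothetical annual cooling spend of £200,000.
    annual_cooling_cost = 200_000  # GBP per year (assumed figure)
    for reduction in (0.23, 0.30):
        saving = annual_cooling_cost * reduction
        print(f"{reduction:.0%} reduction -> £{saving:,.0f} saved per year")
    # 23% reduction -> £46,000 saved per year
    # 30% reduction -> £60,000 saved per year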

The modular approach

Designing a data centre for virtual and cloud environments is relatively easy when you are starting from scratch. It is more difficult when you are migrating an existing data centre to a design fit for virtual or cloud deployments.

With this problem in mind, companies are turning to modular data centre designs as solutions to their virtual and cloud migration needs. The benefit of modular designs is not only the speed at which they can be delivered, but also the speed at which you can get them up and running.

These units range from individual converged blocks of compute, storage and networking, such as VCE’s VBlock or NetApp’s FlexPod, through to full data centres delivered in shipping containers. In the case of VCE’s VBlock, the power and cooling systems, as well as the virtualisation layer and IT components, are pre-configured, enabling service delivery to the end user in days rather than months.

The modular approach also addresses some of the cooling and power consumption problems outlined above. Because modular data centres are built to a specific requirement in one go, the company manufacturing the unit takes responsibility for the cabling, cooling and airflow, optimising them as far as possible and ensuring that power is used efficiently. That said, the move to a modular data centre can still be a complex undertaking, and a logical step-by-step process can help simplify matters.

The first step is to make space for the modular data centre and then deploy a pre-configured virtualised engine module. Next, run the business applications in parallel on both the virtual and legacy systems. Then, once the new system is up and running, the legacy estate can be switched off and consolidated.

This description over-simplifies the process needed to achieve the migration – the first step of creating space for the module is itself a challenging undertaking, involving physically moving assets and freeing up precious power and cooling resources. The modular approach brings simplicity, but it still requires effort.

Furthermore, while one of the easiest ways to ensure an optimised data centre design in terms of cooling and power is to buy into a modular strategy, this route tends to work best only when you know ahead of time what your capacity and IT consumption will be. If you cannot provide reasonably accurate projections of these figures for the next three or so years, modular construction might actually create more problems than it solves.

This is because modular data centre units, as already mentioned, are tightly configured in compact spaces to specific requirements. Modular works best when requirements are specified once, are unlikely to change suddenly, and where additional growth can be met by bolting on new modules. The sheer speed of delivery that modular enables is beneficial to the business, but it does limit flexibility in future technology choices.

Have you got it right?

There are two mechanisms you can employ to understand whether you are running your reconfigured data centre optimally. The first is Power Usage Effectiveness (PUE). Developed by The Green Grid consortium, this is a ratio that data centre managers use to judge the efficiency and effectiveness of their installation.

PUE is the ratio of total facility power to the power delivered to IT equipment, so the aim is to attain a figure as close to 1.0 as possible; higher figures indicate lower efficiency. Hewlett Packard’s EcoPOD modular data centre is said to achieve a PUE of around 1.2, which compares well with an average ‘normal’ data centre figure of around 2.0.

PUE is supposed to measure data centre power and cooling efficiency and to mark how the data centre improves over time. However, business now tends simply to view a PUE close to 1.0 as excellent and anything well above it as wasteful. This simplistic view does not account for the fact that cooling is cheaper in winter and more expensive in summer, so PUE must be viewed over the long term and take seasonal fluctuations into account.
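A short Python sketch makes both points concrete. The monthly readings below are invented figures for an illustrative site; they show why a single winter or summer reading misleads and why an annualised figure is fairer:

    # PUE = total facility energy / IT equipment energy.
    # Monthly kWh readings (assumed figures) for one illustrative site.
    monthly_kwh = {
        "Jan": (130_000, 100_000),  # cheap winter cooling
        "Jul": (170_000, 100_000),  # expensive summer cooling
    }

    for month, (total, it_load) in monthly_kwh.items():
        print(f"{month}: PUE = {total / it_load:.2f}")  # Jan 1.30, Jul 1.70

    # Annualising smooths out the seasonal swing:
    total_all = sum(total for total, _ in monthly_kwh.values())
    it_all = sum(it for _, it in monthly_kwh.values())
    print(f"Annualised PUE = {total_all / it_all:.2f}")  # 1.50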

Data Centre Infrastructure Management (DCIM) is another tool that can show how efficiently your virtual or cloud data centre is running. DCIM covers environmental performance, helping you map the physical location of your assets within the data centre as they move around, and understand the space, power and cooling available. DCIM can even automate part of the deployment planning process.
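As a simple illustration of the bookkeeping involved (the Rack structure and placement rule below are hypothetical, not any DCIM vendor’s schema), deployment planning reduces to checking each rack’s power and cooling headroom before placing an asset:

    from dataclasses import dataclass, field

    @dataclass
    class Rack:
        location: str
        power_budget_kw: float    # electrical supply to the rack
        cooling_budget_kw: float  # heat the aisle layout can remove
        assets: dict = field(default_factory=dict)  # asset name -> kW draw

        def headroom_kw(self) -> float:
            used = sum(self.assets.values())
            return min(self.power_budget_kw, self.cooling_budget_kw) - used

    def plan_deployment(racks, asset: str, draw_kw: float) -> Rack:
        """Place an asset in the rack with the most power/cooling headroom."""
        target = max(racks, key=lambda r: r.headroom_kw())
        if target.headroom_kw() < draw_kw:
            raise RuntimeError("no rack has enough power and cooling headroom")
        target.assets[asset] = draw_kw
        return target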

If you are considering migration to a virtual or cloud environment, it is essential that you get the foundations right. The successful introduction of new technologies relies on an underlying physical infrastructure tailored to quite specific and different demands.

By introducing new approaches to cooling and power consumption, and using modular approaches where they suit you, you should find a balance between delivering the right amount of computing power and achieving the best power efficiency. This enables a seamless transition in your organisation’s move to virtualisation and the cloud.


David Palmer-Stevens is currently Systems Integrator Manager EMEA for PANDUIT, focusing on the company’s Green Data Centre Initiatives, Intelligent Infrastructure Management and new high-power PoE solutions. He has a long history in data centre design and in optimising it for cost efficiency and power savings. David has over 20 years’ experience in the communications industry, covering engineering, sales and marketing. Prior to PANDUIT, David initiated PowerDsine’s PoE channel strategy in Europe. His background is in the networking industry, with senior management roles at Enterasys, Xylan, Cabletron and Racal Milgo. David wrote and illustrated the Cabletron ‘Guide to Local Area Networks’, the Xylan ‘Switching Revolution’ booklet, the Advents ‘E-Business Strategy for Small Businesses’ and the Enterasys technology book ‘Enterprise Networking’. David has a Bachelor’s degree in Mathematics and an HNC in Electronic and Electrical Engineering.