Achieving Private Cloud Success


What do hotels and virtual environments have in common? The need for a reservation system. Hotels have long used reservation systems to refine their operations: placing guests appropriately, allocating resources, and managing current and future bookings, thereby matching supply with demand.

Without such a system, hotel operators would be forever building additional rooms just in case a coachload of tourists arrived with little or no warning, rather than basing their building plans on an actual profile of historical and predicted demand.

Classic over-provisioning – it must all sound very familiar to anyone who has ever managed a production virtual environment.

A hotel clearly could not operate without a reservation system to manage resource availability and match it to guests’ needs. Yet many companies attempt exactly that when managing their virtual and internal cloud environments.

Simply put, as with hotels, virtualised and private cloud infrastructures are all about sharing resources. Those resources might be storage, network or compute, but they are resources all the same. To ensure performance, compliance and cost control, companies need to take a leaf out of the hotelier’s book and optimise these environments by properly balancing capacity supply against application demand.

By applying the same principles used to manage a hotel’s available capacity to their own operations, IT organisations can significantly reduce risk and cost while ensuring service levels in virtual and cloud infrastructures. There are five reasons why the process of workload routing and capacity reservation must become a core, automated component of IT planning and management:

1. Complexity Of The Hosting Decision

Hosting decisions are all about optimally aligning supply with demand. However, this is very complex in modern infrastructures, where capabilities vary widely and workload requirements can significantly constrain what can go where. To make the optimal decision, three important questions must be asked:

2. Does The Infrastructure Satisfy The Workload?

This is commonly referred to as “fit for purpose,” and it determines whether a hosting environment is suitable for the kind of workload being hosted. The question has not always been top of mind, because the typical process for deploying a new application was to procure new infrastructure to very detailed specifications.

But the increasing use of shared environments is changing this, and understanding the specifications of the hosting environments already in operation is critical. Unfortunately, early virtual environments tended to be one-size-fits-all, and early internal clouds tended to focus on dev/test workloads, so fit-for-purpose decisions rarely extended beyond ensuring the environment had the right CPU architecture.
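In practice, a fit-for-purpose check is just a filter over environment attributes. The following is a minimal sketch in Python; the attribute names (cpu_arch, tier, hypervisor) and the example data are invented for illustration, not drawn from any particular product.

```python
# Minimal sketch of a "fit for purpose" filter.
# All attribute names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    cpu_arch: str          # e.g. "x86_64"
    tier: str              # e.g. "production", "dev-test"
    hypervisor: str        # e.g. "vSphere", "KVM"

@dataclass
class Workload:
    name: str
    cpu_arch: str
    required_tier: str
    supported_hypervisors: set

def fit_for_purpose(wl: Workload, env: Environment) -> bool:
    """True if the environment is the right *kind* of capacity."""
    return (env.cpu_arch == wl.cpu_arch
            and env.tier == wl.required_tier
            and env.hypervisor in wl.supported_hypervisors)

environments = [
    Environment("cluster-a", "x86_64", "production", "vSphere"),
    Environment("cluster-b", "x86_64", "dev-test", "KVM"),
]
wl = Workload("billing-db", "x86_64", "production", {"vSphere"})
print([e.name for e in environments if fit_for_purpose(wl, e)])  # ['cluster-a']
```

The point is that suitability is a hard filter applied before any capacity arithmetic: an environment that fails it is out of the running no matter how much free capacity it has.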

3. Will The Workloads Fit?

While the fit for purpose analysis is concerned with whether a target environment has the right kind of capacity, this aspect of making hosting decisions is concerned with whether there is sufficient free capacity to host the workloads. This is a more traditional capacity problem, but with a twist, as virtual and cloud environments are by nature shared environments, and the capacity equation is multi-dimensional.

Resources such as CPU, memory, disk I/O, network I/O and storage capacity must all be considered, along with the levels and patterns of activity, to ensure that new workloads “dovetail” with the existing ones. Furthermore, any capacity analysis must confirm not only that the workload will fit at the point it is deployed, but that it will continue to fit beyond that time.
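To make the “dovetailing” idea concrete, here is a minimal sketch of a multi-dimensional, pattern-aware capacity check, assuming hourly utilisation profiles; the resource names and figures are invented for illustration.

```python
# Minimal sketch of a multi-dimensional capacity check with
# time-pattern "dovetailing". All figures are hypothetical.
HOURS = 24

def fits(existing, new, capacity):
    """For every resource and every hour of the day, the combined
    demand must stay within the host's capacity."""
    return all(
        existing[res][h] + new[res][h] <= cap
        for res, cap in capacity.items()
        for h in range(HOURS)
    )

capacity = {"cpu": 100, "mem": 100}           # abstract units

# A batch workload that peaks overnight...
existing = {"cpu": [80 if h < 6 else 20 for h in range(HOURS)],
            "mem": [60] * HOURS}
# ...dovetails with an interactive workload that peaks during the day.
new = {"cpu": [10 if h < 6 else 70 for h in range(HOURS)],
       "mem": [30] * HOURS}

print(fits(existing, new, capacity))          # True
```

Note that a naive check that simply added the two workloads’ peaks (80 + 70 on CPU) would reject this placement; evaluating the patterns hour by hour shows they interleave safely. Running the same check across future time periods covers the requirement that the workload continues to fit after deployment.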

4. What Is The Relative Cost?

While suitability and fit are the critical factors in deciding where to host a workload, when multiple environments qualify the tiebreaker becomes relative cost. Many organisations are still not sophisticated enough to have an accurate chargeback model in place, but even without one it is possible to assess the relative cost of hosting a workload as a function of policy and placement.
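As a rough illustration, relative cost can be modelled as the capacity a workload would consume, weighted by per-unit rates that encode policy (for example, licensed clusters or premium storage costing more). The rates and figures below are invented for the example; a real model would draw on the organisation’s own cost data.

```python
# Minimal sketch of relative cost as a placement tiebreaker.
# All rates and figures are hypothetical.
def relative_cost(env, workload):
    """Weight the workload's demand by the environment's per-unit rates."""
    return sum(units * env["unit_rates"][res]
               for res, units in workload.items())

workload = {"cpu": 8, "mem": 32, "storage": 500}

candidates = [
    {"name": "cluster-a", "unit_rates": {"cpu": 5.0, "mem": 1.0, "storage": 0.10}},
    {"name": "cluster-b", "unit_rates": {"cpu": 4.0, "mem": 1.2, "storage": 0.08}},
]

best = min(candidates, key=lambda e: relative_cost(e, workload))
print(best["name"])   # 'cluster-b' (110.4 vs 122.0)
```

Even a coarse model like this is enough to break ties consistently, which is all the tiebreaker needs to do.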

5. Move Past Your ‘Gut’

Hosting decisions are far too important to be left to simplistic, best-efforts approaches. Where a workload is placed and how resources are assigned to it is likely the most important factor in operational efficiency and safety, and is even more critical as organisations consider cloud hosting models.

These decisions must be driven by the true requirements of the applications, the capabilities of the infrastructure, the policies in force and the pipeline of activity. They should be made in the context of the global picture, where all supply and demand can be considered and all hosting assumptions challenged. And they should be made in software, not brains, so they are repeatable, accurate and can drive automation.
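Put together, the three questions reduce to a single repeatable decision that can run in software. The sketch below is purely illustrative, with all names, rules and figures hypothetical: filter on suitability, check capacity, then break ties on relative cost.

```python
# Minimal sketch of the full placement decision as code.
# All names, rules and figures are hypothetical.
def place(workload, environments):
    """Filter on fitness and capacity, then pick the cheapest host.
    Returns None if nothing qualifies."""
    candidates = [
        env for env in environments
        if env["tier"] == workload["required_tier"]           # fit for purpose
        and all(env["free"][res] >= need                      # will it fit
                for res, need in workload["demand"].items())
    ]
    if not candidates:
        return None   # trigger procurement or rebalancing, don't guess
    return min(candidates, key=lambda env: env["cost_per_unit"])  # relative cost

environments = [
    {"name": "prod-east", "tier": "production",
     "free": {"cpu": 16, "mem": 64}, "cost_per_unit": 1.2},
    {"name": "prod-west", "tier": "production",
     "free": {"cpu": 32, "mem": 128}, "cost_per_unit": 1.0},
]
workload = {"required_tier": "production", "demand": {"cpu": 8, "mem": 32}}

chosen = place(workload, environments)
print(chosen["name"] if chosen else "no suitable host")   # 'prod-west'
```

Because the decision is encoded rather than remembered, it can be replayed against the full pipeline of pending workloads, audited against policy, and wired directly into provisioning automation.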

Andrew Hillier

Andrew Hillier, CTO and co-founder of CiRBA, has over 20 years of experience in the creation and implementation of mission-critical software for the world's largest financial institutions and utilities. He leads product strategy and defines the overall technology roadmap for the company. Prior to CiRBA, Hillier pioneered a state-of-the-art systems management solution that was acquired by Sun Microsystems and served as the foundation of their flagship systems management product, Sun Management Center.