Establishing The Value Of Virtualization

While a ruler of the ancient world would be perplexed by a smartphone, he would still understand what it takes to maintain a prosperous kingdom. King Midas of myth and the modern CIO share a number of fundamental desires: stability, consolidation, and service.

These are essential values that transcend time and retain their luster after thousands of years. And whether you’re trying to run a kingdom, a small-to-medium business, or an enterprise-level company, you are often occupied with acquiring or maintaining them.

Midas prayed for the ability to transmute anything to gold at a touch… but gold is a heavy anchor, as he soon discovered. Implementing a business plan without being fully apprised of a value proposition’s associated costs can be ruinous: golden apples are pretty, but hardly edible. The Midas-touch opportunity that is virtualization requires the same careful review and understanding from a CIO.

Talk of virtualization, private clouds, public clouds, and everything in between has become so commonplace that you’re the unpopular, uninformed dilettante at the ball if you don’t have an interesting anecdote to share on the topic. Yet amid all the noise, you’re still looking for the bottom line: Is this the direction that my business needs to move in? The field is changing and advancing so rapidly that it can be difficult to find your footing on the shifting ground. So where do you begin?

The old way

Businesses and enterprises alike are plagued with inefficiencies, runaway resource distribution, and escalating costs. The contributing factors are numerous, but they can be characterized broadly as underutilization and inflexibility. Even as server sprawl and workstation profusion escalate, storage administrators know that these resources sit well below full capacity.

Licensing constraints and application purposing dictate how resources are assigned, leading to a situation in which some servers pull triple duty while others sit nearly idle. Production floors are saturated with PCs of varying hardware tiers, and help desk inboxes overflow with upgrade requests and malfunction reports. Many businesses are still ignoring performance issues such as file fragmentation.

The problem is that this sounds normal. Because we have always dealt with the tight coupling of software and hardware, the solution path has traditionally led toward more distribution. Building out the network reactively has been easy to justify in terms of the production demands of the moment: a drive died, a new server is required, we need X additional workstations, and so on. Purchase orders pile up for additional software licensing, IT costs climb ever higher, and we convince ourselves that this is growth.

While there have been impressive advances in business continuity, server outages due to poor load balancing persist. The tools that automate these services are typically proprietary, which means hundreds of IT hours spent annually on learning, implementing, and maintaining what may be hardware-specific tools. The thought of migrating the business to a new architecture can paralyze a server administrator with fear, yet little choice remains when expansion is the only answer.

Solutions exist. Utilities such as disk fragmentation prevention and correction for direct-attached storage can improve system performance on current infrastructure and resolve some of the fundamental issues driving these larger problems. The advent of shared storage introduced us to the massive opportunities available in consolidation. True scalability, though, remains out of reach: we are still subject to the pitfalls of storage islands and their rising energy costs.

Gold as light as a cloud

Virtualization actually predates distributed computing. More than 30 years ago, IBM first implemented virtualization in order to logically partition its mainframes into separate ‘virtual machines.’ And the rationale then still holds true today: the company virtualized in order to fully leverage its resources.

To understand the principal benefits of virtualization, let’s review the core concept: the abstraction and separation of applications from hardware. When you tie a specific use case to server or workstation hardware, you tie the survival, investment, and productivity of that use case to that hardware.
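
To make that abstraction concrete, here is a minimal sketch using the open-source libvirt Python bindings, one common way to script a KVM host. The hypervisor, connection URI, and guests shown are assumptions for illustration, not anything this article prescribes. The point is simply that each guest is a definition the hypervisor schedules onto whatever hardware is available, not an installation welded to one box:

# Minimal sketch using the libvirt Python bindings (pip install libvirt-python).
# Assumes a local KVM/QEMU host; the URI and any guest names are illustrative.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

# Every guest is just a definition the hypervisor runs; none of them
# is tied to a particular physical server.
for dom in conn.listAllDomains():
    state, _max_kib, mem_kib, vcpus, _cpu_ns = dom.info()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
    print(f"{dom.name()}: {status}, {vcpus} vCPU(s), {mem_kib // 1024} MiB")

conn.close()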

It is little wonder, then, that IT purchasing has become so entrenched that preapproved budgets in the hundreds of thousands or even millions get rubber-stamped with minimal review. The very survival of mission-critical applications currently rests on these hardware expenses.

Introducing an internal virtual infrastructure, or ‘private cloud,’ is the first step toward escaping the current model. Separating applications from hardware lets a company evaluate hardware expense as a cost distinct from the cost of simply being able to do business, giving it a choice. At the same time, a virtual administrator gains dynamic control over the resources available to a virtualized application, since it is no longer tied to one set of physical resources.
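
As a concrete illustration of that dynamic control, the sketch below grows a running guest’s memory allocation through the same libvirt bindings used above. The guest name ‘web01’ and the 4 GiB figure are invented, and a live change like this also depends on the guest’s configured maximum memory and balloon driver support. Nothing comparable is possible when the application owns a physical box outright:

# Sketch: grow a running guest's memory without touching hardware.
# Assumes libvirt/KVM as above; 'web01' and the 4 GiB figure are illustrative.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")   # hypothetical guest name

new_mem_kib = 4 * 1024 * 1024      # 4 GiB, expressed in KiB as libvirt expects
# Apply to the live guest; succeeds only if the guest's configured
# maximum memory allows it and the balloon driver is present.
dom.setMemoryFlags(new_mem_kib, libvirt.VIR_DOMAIN_AFFECT_LIVE)

print(f"{dom.name()} now reports {dom.info()[2] // 1024} MiB of memory")
conn.close()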

Virtualization also opens the door to new conventions for performance and IT. Stepping away from the ‘one system, one application’ model yields new ways of measuring the advancement and growth of the enterprise. Putting a price tag on server or workstation usability, enterprise management, and energy costs gives the purchasing decision maker more granular control over spending.

Long-frazzled system administrators who have spent countless hours tapping pencil to forehead, struggling to word their proposals for IT investment, can now communicate the need for innovation eloquently and effectively, because the cost of performance is finally visible.

‘Public cloud’ offerings are rapidly multiplying as well. These remote virtual services offer the benefits of virtualization with even greater scalability, along with the reliability of dedicated virtual resources that can far exceed what a private cloud can deliver at any given price point. They encompass every form of virtualization possible, from individual applications and file stores to workstations and servers.

In contemplating a transition to a private cloud, a public cloud, or a hybrid of the two, it’s necessary to quantify what the realized benefits to your business will be. Consolidation and longer hardware life are not abstract concepts: they are reduced cost. Improved business continuity builds on previously understood and accepted data security standards. Improved performance means higher service levels, internally and externally.
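
One way to begin that quantification is back-of-the-envelope arithmetic like the sketch below. Every input in it (server count, consolidation ratio, power draw, energy rate) is a placeholder, so substitute your own inventory and utility figures:

# Back-of-the-envelope consolidation savings. All inputs are placeholders;
# plug in your own server inventory, power figures, and energy rates.
physical_servers = 40          # servers before consolidation (assumed)
consolidation_ratio = 8        # guests per virtualization host (assumed)
watts_per_server = 400         # average draw per physical box (assumed)
price_per_kwh = 0.12           # USD per kWh, assumed utility rate
hours_per_year = 24 * 365

hosts_after = -(-physical_servers // consolidation_ratio)  # ceiling division
servers_retired = physical_servers - hosts_after

kwh_saved = servers_retired * watts_per_server * hours_per_year / 1000
print(f"Hosts after consolidation: {hosts_after}")
print(f"Estimated energy savings: ${kwh_saved * price_per_kwh:,.0f} per year")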

All of these are positioned as the value proposition for virtualization, and rightly so. But they must be weighed against the costs of new storage requirements, the IT investment associated with implementation, and the new obstacles that virtualization can present. With virtual provisioning over shared storage, an administrator needs to understand and monitor resources with a far keener eye than ever before. When a bottleneck occurs, more than one resource suffers, so optimizing your new or existing virtual platform must be a paramount concern.
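
Here is a hedged sketch of what that keener eye might look like in practice: sampling per-guest write throughput on shared storage so a noisy neighbor is flagged before it starves everyone else. It again leans on the libvirt bindings; the disk device name ‘vda’, the threshold, and the polling interval are assumptions to adapt to your environment:

# Sketch: watch per-guest write throughput on shared storage and flag
# guests exceeding a threshold. Device name, threshold, and interval
# are illustrative only; not every guest exposes a disk named 'vda'.
import time
import libvirt

THRESHOLD_MB_PER_S = 50        # assumed alerting threshold
INTERVAL = 10                  # seconds between samples

conn = libvirt.open("qemu:///system")
domains = [d for d in conn.listAllDomains() if d.isActive()]

# blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs); index 3 is
# cumulative bytes written, so we take a delta across the interval.
last = {d.name(): d.blockStats("vda")[3] for d in domains}
time.sleep(INTERVAL)
for d in domains:
    mb_per_s = (d.blockStats("vda")[3] - last[d.name()]) / INTERVAL / 1e6
    if mb_per_s > THRESHOLD_MB_PER_S:
        print(f"{d.name()} writing {mb_per_s:.1f} MB/s -- possible bottleneck")
conn.close()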

For medium businesses and large-scale enterprises alike, virtualization will be the norm within a matter of years. It’s important to get educated now and to begin understanding not only the benefits but also the unique challenges of virtualization. Know when to virtualize based on your existing physical infrastructure and application use, and monitor the benefits as virtualization is instituted. Look before you leap, and get all of the gold your kingdom can handle without the heavy burden.

Damian Giannunzio has worked in Technology and Field Services for over ten years and has been with Diskeeper for 6 of those. He is a frequent tech blogger with a focus on storage.