Building Confidence In Cloud Computing With Converged Infrastructure

Converged Infrastructure

The dynamic nature of cloud computing means that the physical infrastructure must be highly automated, reliable and elastic in order to deliver trusted, agile and cost-effective computing services to successive layers of the cloud stack.

To do this, the physical infrastructure itself – the server hardware, virtual resources, the software payloads, the networks and the storage and communication I/O connections – must all be abstracted and defined in software. By doing this, it is possible to create a unified server fabric, commonly known as ‘Converged Infrastructure’ or ‘Unified Computing’.

This low-level abstraction of the physical infrastructure and its connections delivers the agile capacity that defines a cloud environment. It also provides reliability at the infrastructure level, which in turn means customers can set and guarantee service levels and priorities for their cloud-based applications.

The ability to guarantee compute resources is critical as clouds constantly expand and contract to meet user demand, and to ensure failures are automatically bypassed so end users are protected from interruptions. The physical layer is simply the only place in the cloud stack that can deliver that capability.

Because the Physical Infrastructure as a Service (P-IaaS) layer is where physical compute, I/O, networking, load-balancing and storage components are allocated, it must be actively managed. Any CPU in the fabric must be able to take on any workload and personality quickly and without manual intervention.
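The idea of assigning any workload and personality to any CPU can be sketched in a few lines: a node is stateless hardware until a personality (boot image plus network and storage addressing) is applied to it. The classes below are purely illustrative and are not Egenera's PAN Manager API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Personality:
    """The software identity a node assumes: image plus addressing."""
    workload: str        # e.g. "web-tier" (hypothetical name)
    boot_image: str
    mac_address: str
    storage_wwn: str

@dataclass
class ComputeNode:
    node_id: str
    personality: Optional[Personality] = None

    @property
    def is_free(self) -> bool:
        return self.personality is None

class Fabric:
    """A pool of stateless compute nodes; any node can take any workload."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def assign(self, personality: Personality) -> ComputeNode:
        # Take the first free node -- no manual intervention required.
        node = next(n for n in self.nodes if n.is_free)
        node.personality = personality
        return node

fabric = Fabric([ComputeNode(f"blade-{i}") for i in range(4)])
web = Personality("web-tier", "rhel9-web.img", "02:00:00:00:00:01", "wwn-0001")
node = fabric.assign(web)
print(node.node_id, node.personality.workload)  # blade-0 web-tier
```

Because the personality, not the hardware, carries the identity, the same workload can be re-applied to any other free node in the pool.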

This flexibility at the CPU level not only enables the unification, management and automation of physical and virtual infrastructure; it is also needed to maintain cost-effective yet demanding service-level agreements (SLAs).

Beyond provisioning infrastructure, the converged infrastructure fabric manager also provides cloud-bursting capabilities. This ensures capacity is delivered to the application that needs it, when it is required.
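Cloud-bursting is, at heart, an overflow policy: satisfy demand from the local pool first and spill the remainder into burst capacity. The sketch below uses hypothetical pool names and a deliberately simple allocator, not any particular fabric manager's interface:

```python
class CapacityPool:
    """A named pool of compute units with a fixed total capacity."""
    def __init__(self, name, total):
        self.name, self.total, self.used = name, total, 0

    @property
    def free(self):
        return self.total - self.used

def allocate(demand, local, burst):
    """Satisfy demand from the local pool first, bursting only on overflow."""
    from_local = min(demand, local.free)
    local.used += from_local
    overflow = demand - from_local
    if overflow > burst.free:
        raise RuntimeError("insufficient capacity even after bursting")
    burst.used += overflow
    return {"local": from_local, "burst": overflow}

local = CapacityPool("on-premise", total=10)
burst = CapacityPool("hosted-cloud", total=20)
print(allocate(8, local, burst))   # {'local': 8, 'burst': 0}
print(allocate(6, local, burst))   # {'local': 2, 'burst': 4}
```

The second request exceeds what remains locally, so the allocator bursts the overflow into the hosted pool, which is the behaviour the fabric manager automates.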

This management layer is also where SLAs are maintained and updated, where protection levels, policies and priorities are set, and where continuity levels are tracked and reported. It ensures that recovery from local failures and even entire data centre outages is verified, regardless of the workload and whether the service runs on physical servers, virtual servers or both.

In the event of a failure within a converged infrastructure, the entire state of the failed machine, including software, is replicated on a new piece of hardware, complete with its original addressing, storage naming and network names.
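That failover path can be sketched as a profile move: the server profile, which carries the addressing, storage naming and network names, is detached from the failed node and re-applied to a spare, so the workload comes back with its identity intact. The dictionary-based data model here is illustrative only:

```python
def fail_over(failed, spare):
    """Move the full server profile from a failed node to a spare one."""
    assert spare["profile"] is None, "spare must be unassigned"
    spare["profile"] = failed["profile"]   # same MACs, WWNs, hostnames
    failed["profile"] = None
    spare["state"] = "booting"
    return spare

failed = {"id": "blade-3", "state": "failed",
          "profile": {"hostname": "db01", "mac": "02:00:00:00:00:03",
                      "wwn": "wwn-0003"}}
spare = {"id": "blade-7", "state": "idle", "profile": None}

replacement = fail_over(failed, spare)
print(replacement["profile"]["hostname"])  # db01
```

Because the replacement boots with the original hostname, MAC and WWN, the rest of the stack sees the same machine reappear rather than a new one.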

The implication is clear: in most cases, this abstracted view of the physical infrastructure eliminates the need for clustering and high-availability systems at the virtualisation and/or software levels.

No matter what type of cloud environment you’re considering—a large public cloud, a customisable hosted cloud or a private cloud within your own enterprise—success depends on having the right technology foundation in place.

John Humphreys is VP of Marketing at Egenera. Egenera has been selling its mission-critical PAN Manager Software since 2001. Based on the Processing Area Network (PAN) concept, PAN Manager simplifies data centre infrastructure and management by pooling compute, I/O, network and storage resources. The system's embedded server profile creation, high-availability (HA) and disaster-recovery (DR) facilities ensure a wire-once, always-on environment that supports all virtualisation technologies as well as native operating systems.