Bringing Infrastructure Up To Hyper-Speed Through Convergence

Converged Infrastructure

The IT infrastructure world is evolving at an increasingly rapid pace, and technologies that were once considered a novelty are becoming obsolete within ever-decreasing lengths of time. The changes in the data centre environment in particular are highly dynamic and complex. Following the introduction of converged technology a couple of years ago, the focus has now moved on to hyper-convergence, especially within small and medium-sized businesses (SMEs). The basic driver for this early adoption among SMEs is the need to achieve more with fewer resources.

Hyper-convergence was born out of converged infrastructure products that combine storage, compute and networking in one box, and is a natural progression from that initial concept. The basic driving force behind hyper-convergence is virtualisation: systems that fall under the hyper-convergence category have the hypervisor built into them.

The major benefits that hyper-convergence brings are simplified management through a single-pane-of-glass view, and enhanced elasticity through scale-out computing. These benefits have had a significant impact on the growing popularity of hyper-converged infrastructure over the last year.

What makes these stacks appealing is that they reduce deployment times to near zero, increase resource efficiency, reduce OPEX, largely eliminate interoperability problems, and, most importantly, come vendor certified. The main shortcoming of this approach is the limited scope for customisation. An alternative approach was later developed and adopted to provide the maximum amount of flexibility and choice. This involves the use of reference architecture, which is simply a vendor-approved kit list that can be assembled to form a solution stack.

How It All Started

The inception of convergence came in the networking world, where vendors recognised the inherent shortcomings of maintaining two separate fabrics for network and storage (LAN and SAN). While Ethernet was the universal standard for the LAN, Fibre Channel was the protocol of choice for the storage world. The industry saw the immediate benefit of establishing a unified fabric that would allow both traffic types to be converged. This resulted in the creation of Converged Enhanced Ethernet.

This was followed by advances and innovations in the storage world to keep pace with the incessant demands of modern applications. To address the inherent limitations of spinning disks in delivering the required number of IOPS, there was a move towards leveraging SSDs. This came about initially as a caching layer in existing storage arrays that could be harnessed through automated tiering, and evolved further to include the use of server-side flash cards.

With the appropriate pieces now available, the first true attempt at convergence was made, and this can be linked to the use of pre-validated, integrated stacks. These are essentially pre-configured assemblies of discrete compute, network, storage and virtualisation elements with a common management layer on top.

Simply put, hyper-convergence delivers on the promise of a “data centre in a box”, combining the compute, storage, network and virtualisation layers into a unit that can be scaled up and out to build massive pools of resources for an organisation’s application needs. Both convergence and hyper-convergence are geared towards addressing the needs of a modern organisation and foster automation, business agility and responsiveness.

However, present hyper-converged solutions also seek to address basic constraints that have plagued traditional solutions. These include improving data granularity and IO handling, leveraging flash effectively, and providing real-time de-duplication and compression. Individual solutions approach the hyper-converged paradigm differently, either as an all-software solution or one built on specialised hardware.
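To make the de-duplication idea mentioned above concrete, here is a minimal sketch (illustrative only, not any vendor's implementation) of block-level de-duplication: incoming data is split into fixed-size blocks, each block is hashed, and only blocks with previously unseen hashes are physically stored.

```python
import hashlib

class DedupStore:
    """Toy block-level de-duplication store (illustrative sketch only)."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}  # hash -> block bytes, each unique block stored once

    def write(self, data):
        """Split data into fixed-size blocks; return the list of block hashes."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store only new blocks
            refs.append(digest)
        return refs

    def read(self, refs):
        """Reassemble the original data from a list of block hashes."""
        return b"".join(self.blocks[d] for d in refs)

store = DedupStore()
payload = b"A" * 8192 + b"B" * 4096  # two identical 4 KiB blocks of 'A'
refs = store.write(payload)
assert store.read(refs) == payload
# Three logical blocks were written, but only two unique blocks are stored.
print(len(refs), len(store.blocks))  # → 3 2
```

Production systems typically add reference counting, variable-size (content-defined) chunking and compression on top, but the core trade of CPU cycles (hashing) for physical capacity is the same.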

Splurging On Converging

With these benefits, it’s little wonder that interest in converged infrastructure has rocketed. Most data analyst reports indicate convergence to be a significant part of organisational road maps. Such is the scale of this that analyst firms such as ITCandor have estimated that the market will grow from $2.8bn to $5.5bn by 2018 as more businesses realise the benefits. Gartner estimates that by 2015, one third of all servers will be shipped as managed resources integrated into converged infrastructure.

Of course, hyper-converged infrastructure solutions may not be suitable for all businesses, and they should take a measured approach before jumping in. For example, organisations will have to modify their models for IT infrastructure investment across the entire enterprise in order to realise the full potential of hyper-converged infrastructure. As such, buying cycles need to be in line from a budgetary and purchase perspective. There is also a trade-off between purchasing granularity and management simplicity: because hyper-converged systems are bought in fixed building blocks, an organisation whose resource requirements do not grow linearly risks waste through over-resourcing.

Getting Ahead Of The Game

Investment in converged infrastructure solutions is accelerating, and will only continue to grow. The reality is that as organisations ask more of their CIOs, data centre automation and management at a reduced cost will become mainstream and a top priority. By combining servers, storage and networking hardware and management, businesses stand to make vast financial savings. At the same time, IT teams will benefit from simplified management automation, freeing up their time for higher-level tasks. As organisations continue to realise the benefits that converged and hyper-converged infrastructure can bring, we’ll see much higher levels of investment from businesses looking to enhance their competitive edge.

Kalyan Kumar

Kalyan Kumar is the Chief Technologist for HCL Technologies – ISD and leads all the Global Technology Practices. In his current role Kalyan is responsible for defining Architecture & Technology Strategy and New Solutions Development & Engineering across all Enterprise Infrastructure, Business Productivity, Unified Communication Collaboration & Enterprise Platform/DevOps Service Lines. Kalyan is widely acknowledged as an expert and pioneer in BSM/ITSM, IT Architecture and Cloud Platforms, and has developed many IPs for the company in these domains. He is also credited with building the HCL MTaaS platform from scratch, which today has a multi-million turnover and is a proprietary benchmark for Global IT Infrastructure Services Delivery.