Traditional load balancers were undoubtedly a useful (if sticking-plaster) solution to a problem born of the old challenges of WAN management. Effectively, these were appliances that maintained the equilibrium of networks so they didn’t go wonky when certain workloads tried to hog bandwidth. The load balancer was the appliance that tried, against the odds, to maintain a level playing field, calling the shots to ensure connectivity and performance when applications contended for bandwidth and other network resources.
The hardware load balancer was the Switzerland of the datacentre: an honest broker that tried to ensure peace and justice even when the wars between applications, connectivity, and compute resources were raging. They worked… up to a point. Hardware load balancers aren’t cheap and do little to advance intelligence, innovation, or speed—in many ways they are designed to maintain the status quo.
Today, that’s not enough – not even close. The crow’s nest seat once occupied by the physical load balancer needs to be filled by something that delivers more than approximate stability – it needs to provide a source of insight.
Imagine the datacentre as a busy town centre where planners want to fix traffic jams. The traditional load balancer operated like a traffic cop in the middle of an intersection. In light traffic, the cop may be a suitable solution, but as volume increases, traffic slows to a crawl. The flaw in this model is that the traffic cop is the centrepiece: the cop doesn’t move at the speed of traffic (your applications and end users); your traffic moves at the speed of the cop.
As the volume and speed of your business increase, you need a system that can keep pace. The challenge, to continue the analogy, is that too many look to the traffic cop as the solution when, in reality, he is the problem. Ideally, you’d want a system that has complete visibility of all the roads — not just a single intersection — and isn’t dependent on the waving of hands and blowing of whistles. Traffic-light systems combined with navigation apps (e.g. Waze) give us the fastest way to get from A to B; traffic cops are rarely necessary. The same is true of physical load balancers.
Software load balancers are delivering similar results across datacentres and clouds. These modern solutions operate as a distributed fabric that adjusts to real-time traffic patterns. Your applications and end-users, not the load balancer, are the centre of attention. These advancements in application delivery are why the physical load balancer is going away and being replaced by software. We all still need to balance loads, but that core ability has become a feature rather than a product.
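To make the idea concrete, here is a minimal sketch of one policy a software load balancer might apply: route each new request to whichever backend currently has the fewest active connections, so load follows real traffic rather than a fixed schedule. This is an illustrative toy, not any vendor’s implementation, and the backend addresses are invented.

```python
class LeastConnectionsBalancer:
    """Toy software load balancer: send each request to the
    backend currently serving the fewest connections."""

    def __init__(self, backends):
        # Track the number of in-flight requests per backend.
        self.counts = {b: 0 for b in backends}

    def acquire(self):
        # Pick the least-loaded backend and count the new connection.
        backend = min(self.counts, key=self.counts.get)
        self.counts[backend] += 1
        return backend

    def release(self, backend):
        # A request finished; that backend is now less loaded.
        self.counts[backend] -= 1

lb = LeastConnectionsBalancer(["10.0.0.11", "10.0.0.12"])
a = lb.acquire()  # first request goes to one backend
b = lb.acquire()  # the other backend is now least loaded
```

In a real distributed fabric this decision runs close to the application, fed by live health checks and latency data, rather than inside a single box at one “intersection”.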
Today, we expect not just load balancing but the ability to handle all sorts of activity and requests: SSL termination for security, caching and offloading for smarter delivery, and beyond. And our demands keep growing because, as Sun Microsystems used to say, “the network is the computer”: if we don’t sort out the network the way a PC’s I/O bus and microprocessor handle data, all we get is bottlenecks and a backed-up freeway of data going nowhere fast.
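This bundling of capabilities is visible in any modern software proxy. As an illustration only, a minimal nginx-style configuration combines backend balancing, SSL termination, and response caching in one place; the server addresses, certificate paths, and cache location below are invented for the example.

```nginx
http {
    # Cache responses on local disk (hypothetical path and zone name).
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    # Pool of application servers; least_conn picks the least-loaded one.
    upstream app_servers {
        least_conn;
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        # Terminate SSL here, offloading it from the backends.
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/example.crt;
        ssl_certificate_key /etc/nginx/certs/example.key;

        location / {
            proxy_cache app_cache;          # smarter delivery via caching
            proxy_pass http://app_servers;  # balancing is just one directive
        }
    }
}
```

Note how load balancing itself is a single directive among many: a feature, not the product.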
And today those networks are vast, from sophisticated in-house datacentres stacked with blade servers to the public clouds of the internet giants. Whether we’re Acme Corp. or a search engine, the requirement is the same: we need to prioritise requests and place data as close as possible to its destination through intelligent analysis of what will need to be retrieved and the best way to shuttle it there.
When you search Google to find out a fact, you’re really part of a modern miracle whereby the answer to your question is delivered in a fraction of a second. That speed is based on years of intelligence: Google can guess what you’re looking for because it knows where you’ve searched from, what you’ve looked for in the past, and what millions of others have looked for and done next. Based on those (and many other) contextual elements, it provides answers that are ranked and usually very relevant to what you intended, even if you misspelt the search term or expressed your request in a casual manner.
Today, we all want a piece of that action: the chance to orchestrate resources from a platform that sits where the physical load balancer sat and can anticipate traffic movements – an omniscient traffic system running on bog-standard x86 boxes from the maker of your choice. And guess what: it’s available now. So, if you’re still paying for a hardware load balancer, start thinking about alternative approaches… or risk seeing your traffic snarl up again.