Server-Based Computing Has Come Of Age

Thin-client computing, or more accurately server-based computing, has been around for years. Citrix first entered the market in 1993 with a product based on Novell’s NetWare, DOS and Quarterdeck’s Expanded Memory Manager (QEMM). In 1995, it shipped its first native Windows server-based computing product, WinFrame.

The idea behind server-based computing was to provide access to a desktop held on another machine (a shared server, rather than a PC) from a different device: at the time a specialised thin client, but now anything from a PC or laptop through to a tablet or smartphone.

However, in the early days, server-based computing was essentially used for task workers: those doing repetitive jobs within a single application. Employee mobility was low, remote connectivity was slow and costs were high, so organisations preferred to let the highly mobile salesperson or field engineer carry their world with them on dedicated laptops.

By providing task workers with specialist thin-client devices and consolidating the applications in a common place in the data centre, more control could be applied to what those workers were allowed to do, and hot-desking could be implemented, as the desktop environment was no longer tied to the access device itself. Server-based computing therefore made inroads into areas such as contact centres and claims management departments, but did not fare well outside that environment.

Attempts to move server-based computing into other areas of the organisation hit problems. The lack of voice and video capabilities in many thin clients, along with the lack of support for redirected printing and local USB devices, meant that those who needed more functionality than a task worker did not take to the technology.

Where server-based computing was implemented as a distributed office solution supporting remote and branch offices, wide area network performance issues meant that the response times experienced by users were not up to expectations. Even as mobile connectivity improved, the need for an “always-on” connection meant that executives could not work on planes, and field engineers and sales executives often found themselves unable to gain a solid enough, and fast enough, connection to carry out their jobs, often just at the point where that connection was most needed.

Another issue, found by technologists even if it remained hidden from the general user, was that moving the workload from the client to the server did not always meet the promises made by the salesperson involved. Many organisations kept their PCs and used them as clients to access the desktop, which meant the expected energy savings from moving from a 75W-or-more desktop to a 10W-or-less thin client were not realised, yet a new server farm was still required to run the desktop images, consuming yet more energy.
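
A rough back-of-the-envelope calculation shows the problem. The sketch below keeps the 75W and 10W figures from the text; everything else (estate size, server wattage, desktops per host, powered-on hours) is an illustrative assumption, not a measurement.

    # Illustrative energy comparison: all figures other than the 75W/10W device
    # wattages are assumptions made purely for this example.
    HOURS_PER_YEAR = 8 * 220            # assumed powered-on hours per access device

    def annual_kwh(watts, hours=HOURS_PER_YEAR):
        return watts * hours / 1000.0

    desktops = 500                                               # hypothetical estate size
    pc_only      = annual_kwh(75) * desktops                     # original, purely PC-based estate
    thin_clients = annual_kwh(10) * desktops                     # low-energy thin clients
    server_share = annual_kwh(400, 24 * 365) / 100 * desktops    # assumed 400W host, ~100 desktops per host, 24x7

    print(f"Original PC-only estate:        {pc_only:,.0f} kWh/year")
    print(f"PCs kept as access devices:     {pc_only + server_share:,.0f} kWh/year")
    print(f"Thin clients plus server farm:  {thin_clients + server_share:,.0f} kWh/year")

With these assumptions, keeping the PCs as clients uses more energy than the original estate did; the expected saving only appears once low-energy thin clients replace the PCs.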

Early implementations could only sustain a few desktops per physical server, so large installations of server-based computing required very large server farms.

Citrix went on an acquisition spree, buying companies such as NetScaler, Sequoia, XenSource and, more recently, Cloud.com and App-DNA to improve its capabilities. Others also entered the market, including VMware, which made acquisitions of its own, such as Thinstall. VMware launched View as a direct competitor to Citrix, built on vSphere and touting “desktops in the cloud”.

Citrix and VMware continue to battle for the minds of buyers; from my viewpoint, the choice of one or the other tends to come down to the buyer’s starting point. If the systems are going to remain under the management of a dedicated “desktop” team, the purchase generally comes down to Citrix. If the systems are to be managed as part and parcel of the data centre itself, then VMware tends to be the choice, as the server team will generally already be used to using VMware for virtualisation.

Improvements in server technology and the use of virtualisation rather than clustering meant that more desktops could be supported per server. Vendors such as Nutanix have introduced highly effective all-in-one appliances that provide the performance needed to manage virtualised desktop workloads at critical times, such as the spike of activity known as a “boot storm” that occurs when everyone comes into work at around the same time and accesses their desktops.
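
The reason such appliances matter is that the storage and compute layer has to be sized for this short-lived peak rather than for the average. A simple sizing sketch, using entirely illustrative figures rather than any vendor’s numbers, makes the point:

    # Illustrative boot-storm sizing: every figure is an assumption for the example.
    desktops = 1000              # hypothetical number of virtual desktops
    iops_per_boot = 50           # assumed storage IOPS one desktop generates while booting
    steady_state_iops = 10       # assumed IOPS per desktop once logged in and working
    login_window_fraction = 0.3  # assume 30% of users log in within the same short window

    boot_storm_peak = desktops * login_window_fraction * iops_per_boot
    steady_state    = desktops * steady_state_iops

    print(f"Steady-state load: {steady_state:,.0f} IOPS")
    print(f"Boot-storm peak:   {boot_storm_peak:,.0f} IOPS")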

Backed by improvements in connectivity and in wide area network and wireless performance, server-based computing is now making greater inroads into organisations, and has moved away from being seen as something just for the task worker. Additional improvements in how the virtual desktop itself performs now mean that the massive server farms of old are no longer required: a single, virtualised server rack can now serve up hundreds to thousands of desktop images, and support a hybrid delivery model as well.

The changing ecosystem around the main vendors of thin client computing is ushering in a new era of server-based computing. No longer is the choice a binary one between everything being held on the client device and everything being held as a server-based image. Now, the intelligence of the client device can be utilised, and it does not have to be a Windows- or Linux-based device to manage this.

Using client-side virtualisation, parts of a desktop can be streamed from a server to the device so that a given application runs in a secure environment where controls can still be applied. For example, the compute power of the client device can still be utilised while ensuring that data cannot be cut and pasted between the application and the local storage on the device, or vice versa, maintaining high levels of security while still providing a good user experience in terms of application performance and response.
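
Conceptually, such a control acts as a zone-based policy: data may move freely within the sandbox or within the local environment, but never across the boundary between them. The toy sketch below uses made-up zone names and rules purely to illustrate the idea; it is not any vendor’s actual policy engine.

    # Toy model of a client-side sandbox clipboard policy; zones and rules are invented.
    ALLOWED_TRANSFERS = {
        ("streamed_app", "streamed_app"),  # clipboard use inside the sandbox is fine
        ("local", "local"),                # purely local activity is untouched
    }

    def clipboard_transfer_allowed(source_zone: str, target_zone: str) -> bool:
        """Return True only if data may move between the two zones."""
        return (source_zone, target_zone) in ALLOWED_TRANSFERS

    print(clipboard_transfer_allowed("streamed_app", "local"))         # False: blocked at the boundary
    print(clipboard_transfer_allowed("streamed_app", "streamed_app"))  # True: allowed inside the sandbox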

Where necessary, data can be stored encrypted on a device, while the use of digital rights management (DRM) from the likes of Adobe and EMC, and data leak prevention (DLP) from the likes of Symantec, Trend Micro, McAfee and Check Point Software, means that data can be stored safely and that the employee can deal with that data in a manner unlikely to compromise the organisation.

However, the main change in the user experience is in the transparency of the desktops that can be provided. This is where the likes of Centrix Software, AppSense and RES Software come in. These companies offer easier ways to identify common application usage patterns, advise on what would make good “golden” images (desktop images that can be shared between a set of people) and implement these in the best possible way.
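
At its simplest, identifying golden-image candidates means finding groups of users whose real application usage overlaps enough to share one image. A minimal sketch of that grouping step, with invented users and applications, might look like this:

    # Group users by the exact set of applications they use; identical sets can
    # share one golden image. Users and applications are invented for the example.
    from collections import Counter

    usage = {
        "alice": {"office", "email", "crm"},
        "bob":   {"office", "email", "crm"},
        "carol": {"office", "email", "cad"},
        "dave":  {"office", "email", "crm"},
    }

    candidates = Counter(frozenset(apps) for apps in usage.values())

    for app_set, users in candidates.most_common():
        print(f"{users} user(s) could share an image containing: {sorted(app_set)}")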

By blending local, streamed, virtualised and server-side applications and functions, the desktop provided to the end user appears as a single, cohesive system. Backing this up with licence management and self-service application provisioning puts the employee more in control of their own environment while giving them the best possible performance from their systems.

Fully managed server-based computing also means that a bring your own device (BYOD) strategy is not just possible but something to be encouraged: the device becomes just an access mechanism, with everything the individual does on that device for the organisation controlled completely from the centre.

So, server-based computing has come of age. The technology is now extremely advanced, and what buyers should be looking for is where the real business value-add lies: areas such as being able to patch and update images en masse, and full licence management to ensure that over- or under-licensing is not occurring. The capability for Apple iOS, Android and other non-Windows systems to participate as fully as possible in a hybrid server-based environment should also be looked for, as this enables BYOD.
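
At its core, the licence management check described above is a reconciliation of entitlements bought against instances actually deployed. The sketch below uses made-up product names and counts purely to show the comparison:

    # Simple licence reconciliation; product names and figures are invented.
    entitlements = {"office_suite": 500, "cad_package": 40}
    deployed     = {"office_suite": 430, "cad_package": 55}

    for product, owned in entitlements.items():
        in_use = deployed.get(product, 0)
        if in_use > owned:
            print(f"{product}: under-licensed by {in_use - owned} seats (compliance risk)")
        elif in_use < owned:
            print(f"{product}: over-licensed by {owned - in_use} seats (wasted spend)")
        else:
            print(f"{product}: exactly licensed")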

Finally, don’t fall for the argument that server-based computing is all about cost. The complex transfer of energy from end devices to the data centre, combined with the additional need for systems management and maintenance in the data centre, can lead to additional costs. Strangely, it may be more cost-effective in the short to medium term to keep your existing PCs as clients, using whatever operating system is already on them (e.g. Windows NT) as the client.

The devices can then be cascade-replaced with low-energy thin clients as they fail, getting more value out of the PCs, although organisations must be able to identify the sweet spot where the cost of managing and maintaining such PCs outweighs the cost of replacement, even before device failure. What server-based computing should be about is the capability to better manage and secure the organisation’s intellectual property in a manner that still enables the end user to work flexibly. If this happens, the end result will be a more cost-effective system.
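
That sweet spot can be made concrete with a simple break-even comparison: keep the old PC while its annual support cost stays below the amortised cost of a thin-client replacement. The figures below are illustrative assumptions only:

    # Break-even sketch for cascade replacement; every figure is an assumption.
    pc_annual_support = 90     # assumed yearly cost to manage and maintain an ageing PC
    pc_support_growth = 1.25   # assume support costs rise 25% a year as the PC ages
    thin_client_price = 300    # assumed purchase price of a thin client
    tc_annual_support = 60     # assumed yearly cost to manage a thin client
    amortisation_years = 5     # write the thin client off over five years

    tc_annual_total = thin_client_price / amortisation_years + tc_annual_support

    cost = pc_annual_support
    for year in range(1, 8):
        if cost > tc_annual_total:
            print(f"Year {year}: keeping the PC ({cost:.0f}) now costs more than "
                  f"replacing it ({tc_annual_total:.0f}), so replace proactively")
            break
        cost *= pc_support_growth
    else:
        print("Within this horizon the old PC remains the cheaper option")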

Clive Longbottom is the founder of Quocirca and a highly respected, globally recognised industry analyst covering a range of business and technology areas. His primary coverage area is business process facilitation. He has been an ICT industry analyst for over 15 years and has worked with a range of large and small analyst companies, including META Group (now Gartner) as VP Europe. He holds a B.Sc. (Hons) in Chemical Engineering from the University of Aston in the UK.