Second-rate providers give cloud a bad name

The early stirrings of cloud computing are taking place – with mixed results. Totally contained cloud services, such as salesforce.com, Concur and Transversal, show how full-service functions held in the cloud can facilitate processes within a business without the need to implement and maintain costly hardware platforms – but will this remain the case going forward?

The problems will grow as cloud becomes less contained. If there is a move toward the “composite application” – one where different functions are brought together on the fly to deal with a process issue – a whole new raft of issues will need to be dealt with.

[Figure 1: How a composite application works]

Figure 1 outlines how a composite application works. A user, through their access device, makes a request for a task or process to be dealt with. This will generally be handled by a trusted partner (an aggregator), who will take responsibility for identifying and pulling together the functions required to facilitate that task or process.

Some of these functions may already be available within the user’s own data centre (through a private cloud, or simply as a standard function in an existing application), and others may be hosted by the aggregator itself; many, however, will have to be sourced from other providers.

It may well be that other providers already pull together multiple functions to provide a “functional assembly” that can deal with a larger part of the task. This has multiple benefits: such a provider can reduce the number of links in the chain, can help to minimise the number of contracts involved, and may be able to improve response rates by aggregating functions in a specific manner, for example by using providers who are all hosted within a single co-location facility.

The aggregator will have to negotiate on-the-fly technical and business contracts with other providers that it knows and trusts, and ensure that everything meets the needs of the end customer.
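
To make the mechanics concrete, here is a minimal sketch in Python of how an aggregator might chain independently sourced functions into a composite application. The provider names, the local stand-in functions and the cost field are all hypothetical illustrations, not any real product’s API; a real aggregator would be making remote, contracted, audited calls.

```python
# A minimal sketch of an aggregator composing a task from independently
# hosted cloud functions. All names and terms here are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class FunctionProvider:
    name: str
    cost_per_call: float                 # agreed in the on-the-fly business contract
    invoke: Callable[[dict], dict]       # the function the provider exposes

def credit_check(payload: dict) -> dict:
    """Stand-in for a remotely hosted credit-checking function."""
    return {**payload, "credit_ok": payload.get("amount", 0) < 10_000}

def tax_calc(payload: dict) -> dict:
    """Stand-in for a remotely hosted tax-calculation function."""
    return {**payload, "tax": round(payload.get("amount", 0) * 0.2, 2)}

def compose(providers: list[FunctionProvider], request: dict) -> dict:
    """Chain each provider's function over the request, as the aggregator
    would after negotiating terms with each provider it knows and trusts."""
    state = dict(request)
    for provider in providers:
        state = provider.invoke(state)   # in reality: a remote, audited call
    return state

if __name__ == "__main__":
    chain = [
        FunctionProvider("CreditCo", 0.01, credit_check),
        FunctionProvider("TaxCo", 0.005, tax_calc),
    ]
    print(compose(chain, {"customer": "ACME", "amount": 2_500}))
```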

So far, so good. But now let’s look at the underlying network issues that raise their ugly heads in such a scenario.

[Figure 2: Areas where cloud networking may be an issue]

Figure 2 shows a few of the areas where cloud networking may be an issue. Taking each one in turn, Quocirca advises the following:

  • Functional availability: the internet is, fortunately and unfortunately, under multiple ownership. This is fortunate because it offers a multi-path network capability: if any one part of the internet goes down, availability tends to be maintained as alternative paths can be taken. It is unfortunate because root cause analysis for network issues can be, at best, painful. Most good cloud function providers will use multiple networks to ensure high availability – but some may have only a single network provider, and even if the internet itself is fine, a break in service from that single provider can leave a core function of the composite application unavailable. Look for providers who have multiple network providers and, where possible, multiple data centre facilities. Also, ensure that you have multiple network providers yourself – a break in your own connection will mean no access to any functionality…
  • Performance: the great imponderable of the internet. The very general availability outlined above is also what makes performance across the internet difficult to guarantee: as a packet of data can take any route, delivery times will vary with network conditions. Look for cloud providers who offer quality and priority of service using technologies such as multi-protocol label switching (MPLS) and 802.1p/q. Also look at tunnelling or direct connection services, such as leased or dedicated lines, where core functions are concerned.
  • Failover: allied to the above, what happens should there be a failure in the chain? A good cloud provider will run multiple instances of a function and should be able to fail over gracefully to another instance. This requires certain network information to be maintained, however, otherwise transactions may become confused (the first sketch after this list illustrates the principle).
  • Contextuality: as part of failover, all functions need to be fully cognisant of what they are doing, and that context must be made available should any failure occur. Store-and-forward messaging is required in the cloud, so that any break in service leaves a known state from which processing can automatically resume once the failure has been addressed. Look for a cloud provider who offers a fully audited store-and-forward capability as part of their service.
  • Security: a knock-on issue from providing failover capabilities is that it can become easier for a chain to be hijacked by a malicious user. If they can inject themselves into the chain when the failure happens (a failure which will generally have been initiated by them anyway), the process can continue, blithely unaware that the whole chain has been compromised. Ensuring that no part of the chain can be hijacked in this way, through full contextuality, audit trails and cloud-based intrusion detection capabilities, will mitigate this issue.
  • Information control: as well as ensuring that the process chain is not hijacked through functional injection, the information in the chain must not be compromised either. Data leak prevention, content inspection and encryption will help here (the second sketch after this list shows the principle). Look for cloud providers who offer these as functions – but remember that the information is yours and, as such, the overarching responsibility for it resides with you.
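
The first sketch below, referenced from the failover and contextuality points above, shows one way the “known state” idea could work: each step’s output is journalled to durable storage, so a failed call can be retried against another instance and the chain resumed rather than restarted. The instances, the journal file and the error type are all hypothetical stand-ins, not any particular provider’s API.

```python
# A minimal sketch of failover with store-and-forward context: the last
# known state is persisted before and after each call, so the chain can
# resume from it if a provider instance fails. Names are hypothetical.

import json
from pathlib import Path

class ProviderError(Exception):
    pass

def flaky_instance(payload: dict) -> dict:
    raise ProviderError("instance unreachable")       # simulated failure

def healthy_instance(payload: dict) -> dict:
    return {**payload, "priced": True}                # simulated success

def call_with_failover(instances, payload: dict, journal: Path) -> dict:
    """Try each instance in turn; journal the known state so the process
    can resume from it rather than restarting the whole chain."""
    journal.write_text(json.dumps(payload))           # store...
    last_error = None
    for instance in instances:
        try:
            result = instance(payload)
            journal.write_text(json.dumps(result))    # ...and forward
            return result
        except ProviderError as exc:
            last_error = exc                          # audit point: log, try next
    raise RuntimeError(f"all instances failed: {last_error}")

if __name__ == "__main__":
    state = {"order": 42, "amount": 99.0}
    print(call_with_failover([flaky_instance, healthy_instance],
                             state, Path("chain_state.json")))
```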
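The second sketch, referenced from the information control point, illustrates keeping responsibility for the data with you rather than with the chain: the payload is sealed with an HMAC before it leaves your systems, so tampering by any compromised link is detectable on return. It covers integrity rather than confidentiality (encryption would be layered on in the same spirit), uses only the Python standard library, simplifies key handling, and applies to fields the providers do not need to modify.

```python
# A minimal, stdlib-only sketch of detecting tampering within the chain:
# the payload is HMAC-sealed before leaving you, with a key the
# providers never see. Key management is deliberately simplified.

import hashlib
import hmac
import json

SECRET = b"shared-only-with-yourself"   # never handed to the providers

def seal(payload: dict) -> dict:
    """Serialise the payload and attach an HMAC tag over it."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify(message: dict) -> dict:
    """Recompute the tag; reject the payload if any link altered it."""
    expected = hmac.new(SECRET, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise ValueError("payload was altered somewhere in the chain")
    return json.loads(message["body"])

if __name__ == "__main__":
    sealed = seal({"invoice": 7, "amount": 120.0})
    # ... sealed message travels through the provider chain ...
    print(verify(sealed))                             # round-trips cleanly
    sealed["body"] = sealed["body"].replace("120.0", "999.0")
    try:
        verify(sealed)
    except ValueError as err:
        print("detected:", err)
```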

To meet the promise of a fully functional environment, the cloud needs a solid foundation, and this will be dictated by the network it is dependent on.

Second-rate cloud providers will be the ones who give cloud a bad name through poor performance, availability and security. By meeting the above criteria, cloud aggregators will be able to provide enterprise-class services, providing certain functions themselves and using functions from others in a fully managed and audited manner.


Clive Longbottom is founder of Quocirca and a highly respected, globally recognised industry analyst covering a range of business and technology areas. His primary coverage area is business process facilitation. He has been an ICT industry analyst for over 15 years and has worked with a range of large and small analyst companies, including META Group (now Gartner) as VP Europe. He has a B.Sc. (Hons) in Chemical Engineering from the University of Aston in the UK.