“Yes”, The Cloud Can Be Secured

How can we expect users to trust the Cloud until it has really been put to the test? Well, it has been, and it works! Considering the business world’s dedication to efficiency and minimizing expenditure, the personal computer revolution of the 1980s could seem pretty perverse to later generations. Why fill the office with PCs loaded with identical software rather than centralise on one mainframe with simpler, cheaper workstations on the desk?

And yet the PC survived well into the age of the Internet, when it first became possible to deliver all software as a service from a central source. The idea of “Software as a Service” was good, but pioneering attempts failed, simply because broadband access was not yet widespread or fast enough to support the service. But with today’s widespread broadband it has become a practical proposition.

It’s now called “Cloud Computing” because the actual processing takes place at some unknown location, or in a dispersed virtual machine, across the Internet cloud. And it only becomes practical when Internet access is fast enough not to frustrate a user accustomed to the speed and responsiveness of on-board software. Similarly, the success of a virtual data centre must depend on network links fast enough to preserve the illusion of a single hardware server.

We do now have networks and access technologies fast enough to meet these challenges, but many organisations are held back because they do not have the confidence to enter the Cloud. Knowing what we do about the determination and skill of cyber-criminals, how can we secure a system as amorphous and connected as the Cloud? And, after decades of experience in which enthusiastic technology advocates have promoted systems too complex to be reliable, why should the public now put its trust in cloud computing?

The answer would be to find some way to test these shapeless and dynamic virtual systems with the same thoroughness and accountability as testing a single static piece of hardware. That is asking a lot, but it has been achieved – according to a recent report.

The performance challenge

Cloud computing potentially offers all the benefits of a centralised service – pay for what you actually use, professional maintenance of all software, a single contact and contract for any number of applications, processing on state-of-the-art hardware – but it has to match the speed, responsiveness and quality of experience of local software if the service is going to be accepted. So how does the provider ensure that level of service will be maintained under a whole range of real-world operating conditions, including attempted cyber attacks? The answer must lie in exhaustive testing.

But there is a fundamental problem in testing any virtual system, in that it is not tied to specific hardware. The processing for a virtual switch or virtual server is likely to be allocated dynamically to make optimal use of available resources. Test it now, and it may pass every test, but test it again and the same virtual device may be running in a different server and there could be a different response to unexpected stress conditions.

This is what worries the customer – is it really possible to apply definitive testing to something as formless as a virtual system? Can we trust the cloud? The answer now is ‘yes’. Virtual security works in theory but, until there was a way to test it thoroughly under realistic conditions, solution vendors had a hard time convincing their customers. With the use of combined physical and virtual test machines, the testing proved not only highly rigorous, but also quite simple to operate.

Maintaining the application library

Whether the central processing runs on a physical, virtual or cloud server, it needs to hold a large amount of application software to satisfy the client base, and that software needs to be maintained with every version upgrade and bug fix as soon as they become available. It’s a complex task, and it is increasingly automated to keep pace with development. There must be a central library keeping the latest versions and patches for each application package, and some mechanism for deploying these across the servers without disrupting service delivery.
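As a rough illustration of that kind of automation – not a description of any particular provider’s tooling – the sketch below walks a hypothetical pool of servers one at a time, upgrading an application only where the central library holds a newer version and checking health before moving on, so service delivery is never fully withdrawn. Every name and value in it is invented for the example.

```python
# Minimal sketch of a rolling upgrade driven by a central version library.
# All data and helper functions here are hypothetical placeholders.

import time

# Central library: the latest known-good version of each application package.
library_versions = {"crm": "4.2.1", "billing": "7.0.3"}

# What each server in the pool is currently running.
deployed_versions = {
    "server-a": {"crm": "4.2.0", "billing": "7.0.3"},
    "server-b": {"crm": "4.2.1", "billing": "7.0.2"},
}

def upgrade(server, app, version):
    """Placeholder for the real deployment step (package push, restart)."""
    print(f"{server}: upgrading {app} to {version}")

def healthy(server):
    """Placeholder health check; in practice an HTTP probe or test transaction."""
    return True

# Walk the pool one server at a time so the service is never fully withdrawn.
for server, apps in deployed_versions.items():
    for app, current in apps.items():
        latest = library_versions[app]
        if current != latest:
            upgrade(server, app, latest)
            time.sleep(1)  # allow the upgraded instance to settle
            if not healthy(server):
                raise RuntimeError(f"{server} failed its health check; halting rollout")
            apps[app] = latest
```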

At this stage the service provider is in the hands of the application developer – the service to the end user can only be as good as the latest version on the server. We hope the application developer has done a good job and produced a reliable, bug-free product, but the service provider’s reputation hangs on that hope until the software has been thoroughly tested on the provider’s own system.

In the case of a physical server, we do not expect any problem, because the application is likely to have been developed and pre-tested on a similar server. But virtualisation and cloud computing add many layers of complexity to the process. The speed of the storage network becomes a significant factor if the application makes multiple data requests per second, and that is just one of many traffic issues in a virtual server.

Faced with such complexity, predicting performance becomes increasingly difficult and the only answer is to test thoroughly under realistic conditions. One cannot expect clients to play the role of guinea pigs, so usage needs to be simulated on the network. It is critical to gauge the total impact of software additions, moves and changes, as well as network or data centre changes, and every change must be tested to prevent mission-critical business applications from grinding to a halt.
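To make the idea of simulated usage concrete, here is a minimal sketch – using nothing beyond the Python standard library – in which simulated concurrent users hit a hypothetical service endpoint and the observed 95th-percentile response time is compared against a pre-change baseline. The URL, user counts and thresholds are purely illustrative, not figures from the report.

```python
# Minimal sketch of simulated user load: N concurrent "users" each issue a
# series of requests, and the 95th-percentile response time is compared
# against a baseline recorded before the latest change. All values are
# illustrative; the endpoint is a hypothetical test target.

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SERVICE_URL = "http://test-target.example/app"   # hypothetical endpoint
USERS = 200                                      # simulated concurrent users
REQUESTS_PER_USER = 10
BASELINE_P95_MS = 250.0                          # measured before the change

def one_user(_):
    """One simulated user: time a series of requests, in milliseconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(SERVICE_URL, timeout=10) as resp:
            resp.read()
        timings.append((time.perf_counter() - start) * 1000.0)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    all_timings = [t for user in pool.map(one_user, range(USERS)) for t in user]

p95 = statistics.quantiles(all_timings, n=20)[18]   # 95th-percentile cut point
print(f"p95 response time: {p95:.1f} ms")
if p95 > BASELINE_P95_MS * 1.2:                     # allow 20% headroom
    print("Regression: the latest change degrades responsiveness under load")
```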

Application testing in a virtual environment

There are two aspects to testing applications in a virtual environment: first, functional testing, to make sure the installed application works and delivers the service it was designed to provide; then volume testing under load.

The first relates closely to the design of the virtual system – although more complex, the virtual server is designed to model a hardware server and any failures in the design should become apparent early on. Later functional testing of new deployments is just a wise precaution in that case. Load testing is an altogether different matter, because it concerns the impact of unpredictable traffic conditions on a known system.

To give a crude analogy: one could clear the streets of London of all traffic, pedestrians, traffic controls and road works then invite Michael Schumacher to race from the City of London to Heathrow airport in less than 30 minutes. But put back the everyday traffic, speed restrictions, traffic lights and road works and not only will the journey take much longer, it will also become highly unpredictable – one day it might take less than an hour, another day over two hours to make the same journey.

In a virtual system, and even more so in the cloud, there can be unusual surges of traffic leading to unexpected consequences. Applications that perform faultlessly for ten or a hundred users may not work so well for a hundred thousand users – quite apart from other outside factors and attacks that can heavily impact Internet performance.

So the service provider cannot offer any realistic service level agreement to the clients without testing each application under volume loading and simulated realistic traffic conditions.

The test solution

Network performance and reliability have always mattered, but virtualisation makes these factors critical. Rigorous testing is needed at every stage in deploying a virtual system. During the design and implementation phases it is needed to inform buying decisions and to ensure compliance. Then, during operation, it is equally important to monitor for performance degradation and anticipate bottlenecks, as well as to ensure that applications still work under load, as suggested above.

But large data centres and cloud computing pose particular problems because of their sheer scale. Today’s test platforms allow for this, meeting the need for scalability with rack systems supporting large numbers of test cards and scaling to several terabits per rack. These modular devices can be adapted to any number of test scenarios, specifically addressing the challenge of testing the performance, availability, security and scalability of virtualised network appliances as well as cloud-based applications across public, private and hybrid cloud environments.

This combination of a physical test device plus virtual test software provides exceptional visibility into the entire data centre infrastructure, where as many as 64 virtual servers, including a virtual switch with as many virtual ports, may reside on a single physical server and switch access port. With this combination, it is not only possible to test application performance holistically under realistic loads and stress conditions, but also to determine precisely which component – virtual or physical – is impacting performance.

To create realistic test conditions, the virtual software is used in conjunction with devices designed to generate massive volumes of realistic simulated traffic. The simulation can replicate real-world traffic conditions, complete with error conditions and realistic user behaviour, while maintaining over one million open connections from distinct IP addresses. By challenging the infrastructure’s ability to stand up to the load and complexity of the real world, it puts application testing in a truly realistic working environment.
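The generators referred to here are purpose-built hardware, but the basic idea of traffic arriving from many distinct source addresses can be sketched in software. The fragment below is a rough, stateless approximation using the scapy library: it emits TCP SYN packets from a block of spoofed source IPs, requires raw-socket privileges and a closed lab network, and holds no connection state – quite unlike the stateful, million-connection load described above. The target address and source block are made up.

```python
# Rough, stateless sketch of traffic from many distinct source addresses,
# using scapy (pip install scapy). Requires raw-socket privileges and must
# only be run on a closed lab network, since the source addresses are spoofed.
# The target address and source block are illustrative only.

import ipaddress
from scapy.all import IP, TCP, send

TARGET = "10.0.0.100"                                 # hypothetical device under test
SOURCE_BLOCK = ipaddress.ip_network("10.10.0.0/22")   # ~1,000 distinct sources

for i, src in enumerate(SOURCE_BLOCK.hosts()):
    # One SYN per simulated client, each from a different source address.
    pkt = IP(src=str(src), dst=TARGET) / TCP(dport=80, sport=1024 + (i % 60000), flags="S")
    send(pkt, verbose=False)
```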

Latency and security

Even minute levels of latency can become an issue across a virtual server. So how does one measure such low levels of latency, where the very presence of monitoring devices produces delays that must be compensated for? Manual compensation is time-consuming, and in some circumstances impossible, so we chose a test platform that provides automatic compensation, adjusting according to the interface technology and speed.
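In principle, automatic compensation amounts to calibrating the delay introduced by the measurement path itself and subtracting it from every subsequent reading. The standard-library sketch below conveys only that idea; a real test platform calibrates per interface technology and line rate rather than against a software timer, and every probe function here is a stand-in.

```python
# Illustrative sketch of latency measurement with automatic compensation:
# first time the measurement path with nothing under test (a "null" probe),
# then subtract that instrument overhead from readings taken through the
# device under test. The probes below are stand-ins for real instrumentation.

import statistics
import time

def timed(probe):
    """Return the elapsed time of one probe call, in microseconds."""
    start = time.perf_counter_ns()
    probe()
    return (time.perf_counter_ns() - start) / 1000.0

def null_probe():
    """Does nothing: measures only the overhead of the timing harness itself."""
    pass

def device_probe():
    """Stand-in for a probe through the device under test (here, a short sleep)."""
    time.sleep(0.0002)  # pretend the path adds roughly 200 microseconds

# Calibrate: the typical cost of the harness with nothing under test.
overhead_us = statistics.median(timed(null_probe) for _ in range(1000))

# Measure through the "device" and subtract the calibrated overhead.
raw_us = [timed(device_probe) for _ in range(100)]
compensated = [max(0.0, r - overhead_us) for r in raw_us]
print(f"harness overhead ~{overhead_us:.2f} us, "
      f"compensated median latency {statistics.median(compensated):.1f} us")
```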

The acceptability of cloud computing depends upon delivering a quality of experience as good as local processing, but without all the overheads of licensing and software version management. Quality of experience is a subtle blend of many factors, such as latency, jitter and packet loss, and all of these can be precisely monitored under wide-ranging traffic loads, both by running pre-programmed tests automatically and by allowing operator intervention via a simple user interface.
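Each of those factors reduces to a concrete number. The sketch below, working on made-up per-packet records, derives packet loss from gaps in sequence numbers and a smoothed interarrival jitter in the style of RFC 3550 – one common way of expressing jitter, not necessarily the method any particular test platform uses.

```python
# Sketch of two quality-of-experience metrics from per-packet records:
# packet loss derived from sequence numbers, and interarrival jitter
# smoothed in the style of RFC 3550. The packet records are invented.

# (sequence number, send time in s, receive time in s) -- illustrative capture
packets = [
    (1, 0.000, 0.021),
    (2, 0.020, 0.043),
    (4, 0.060, 0.079),   # sequence number 3 never arrived
    (5, 0.080, 0.104),
    (6, 0.100, 0.121),
]

# Packet loss: gaps in the sequence numbers over the observed range.
expected = packets[-1][0] - packets[0][0] + 1
loss_pct = 100.0 * (expected - len(packets)) / expected

# RFC 3550-style jitter: running average of transit-time variation.
jitter = 0.0
prev_transit = None
for _, sent, received in packets:
    transit = received - sent
    if prev_transit is not None:
        jitter += (abs(transit - prev_transit) - jitter) / 16.0
    prev_transit = transit

print(f"packet loss: {loss_pct:.1f}%")
print(f"smoothed jitter: {jitter * 1000:.2f} ms")
```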

As well as delivering a good quality of user experience, the cloud computing provider needs to allay clients’ fears about security in the Cloud. A hacker who accesses a soft switch can re-route traffic at will, so virtualisation creates potentially severe vulnerability across the whole business – and across the social infrastructure in the case of cloud computing. Again, the growth in virtualisation demands a corresponding increase in prior and routine testing.

Here there is not only a need to test under unusual load conditions – because those are the times when attacks are most likely to succeed – but also a need to simulate a whole range of attack scenarios. The application must still work when tested behind network security devices that are themselves handling attacks and attempted exploits.
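As a toy illustration of mixing attack-like traffic with legitimate load – not the methodology of any commercial test platform – the sketch below fires malformed requests at a hypothetical endpoint while ordinary requests run alongside, then checks that the legitimate traffic still succeeds. The endpoint, payloads and pass criterion are all invented.

```python
# Toy illustration of testing an application behind security devices while
# attack-like traffic runs alongside legitimate load. The endpoint, the
# "attack" payloads and the pass criterion are all hypothetical.

import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://test-target.example/app"   # hypothetical service under test

def legitimate_request(_):
    """One ordinary user request; returns True if it succeeds."""
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def attack_request(i):
    """Crude abusive traffic: oversized paths and headers. Rejections are fine."""
    req = urllib.request.Request(
        TARGET + "/" + "A" * 8000,
        headers={"X-Fuzz": "B" * 4096, "User-Agent": f"scanner-{i}"},
    )
    try:
        urllib.request.urlopen(req, timeout=5).read()
    except (urllib.error.URLError, TimeoutError):
        pass  # being blocked is expected; we only care about side effects

# Run the "attack" and the legitimate load in parallel pools so they overlap.
attack_pool = ThreadPoolExecutor(max_workers=25)
load_pool = ThreadPoolExecutor(max_workers=25)
attack_futures = [attack_pool.submit(attack_request, i) for i in range(200)]
load_futures = [load_pool.submit(legitimate_request, i) for i in range(200)]

results = [f.result() for f in load_futures]
for f in attack_futures:
    f.result()
attack_pool.shutdown()
load_pool.shutdown()

success_rate = 100.0 * sum(results) / len(results)
print(f"legitimate traffic success rate under simulated attack: {success_rate:.1f}%")
if success_rate < 99.0:
    print("Service degraded while handling the simulated attack")
```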

Conclusion

The key takeaway from the report is not only the strength of the security in the virtual data centre, but also that the test process was able to stress the security solution right to its limits. It is comforting to know you can survive an attack, but it is even more useful to be able to test your security to its limits – to know your limitations is to know your true strength.

People assume that security is the final objective, when what is even more important is to have some precise way to quantify and tailor the level of security in a complex system. “Tried and tested” means more than any amount of theoretical argument.

The economic benefits of cloud computing are overwhelming, but so are the security concerns of network operators and their customers. This independent report breaks that deadlock, as reliable testing now makes it easy for system vendors to mitigate the risks of migrating to the cloud, while optimizing resource utilization under an exhaustive range of real-world operating and threat scenarios. The only way to ensure success is to offer a tried and tested service.


Steve Broadhead runs Broadband-Testing Labs, an independent test organisation, in Andorra. Steve’s IT and networking experience dates back to the early 1980s, deploying and managing PC networks for two insurance companies, after which time he made a sideways move into the world of computer journalism where he single-handedly introduced the world of networking to the UK publishing industry. In 1991 he formed Comnet, which became The NSS Group, with Bob Walder, specialising in network product testing, and consultancy for vendors and the publishing industry. In 1998, Steve created the NSS labs and seminar centre in the Languedoc region of France, offering a wide range of test and media services to the IT industry.