Optimising Performance In Hybrid Cloud Environments

Hybrid Cloud

The move to the cloud has been fast-paced for a number of years now. But recently, the adoption of hybrid cloud has burgeoned. In its 2016 UK Cloud Adoption and Trends survey report, the Cloud Industry Forum found that over four in five UK organisations had formally adopted at least one cloud service, and that the percentage of organisations maintaining hybrid cloud environments was high; 36% of public sector and 48% of private sector organisations were utilising hybrid infrastructures.

The appeal of the hybrid cloud model lies in the ability to launch new business models and revenue streams quickly and with minimal capital investment, while maintaining control of key elements of the infrastructure. For example, hybrid cloud offers impressive automation, orchestration and self-service capabilities that make it possible to deploy a variety of infrastructure and application components to create new services, such as distributed business applications.

What’s more, hybrid cloud delivers the on-demand capacity that’s ideal for coping with seasonal or user demand peaks – services can be quickly scaled back to keep operational costs under control.

Initially the territory of early cloud adopters, hybrid cloud is now attracting large enterprises looking to transition their more traditional environments to hybrid scenarios. But, as cloud confidence grows, IT teams will need to address the twin concerns of increased data vulnerability and performance issues.

Alongside managing critical security issues, maintaining data and application responsiveness is a key priority for IT teams, as any degradation in service has a considerable impact on user productivity. While driving optimal performance from hybrid cloud environments can represent a challenge for IT departments, there are a number of best practices that IT teams can deploy to manage and maintain high performance – and simultaneously mitigate the security risks associated with hybrid cloud environments.

Infrastructure Monitoring

Utilising automated infrastructure monitoring solutions will deliver end-to-end visibility into the cloud infrastructure and enable analysis of network traffic patterns. Alongside the continuous monitoring of the entire environment – including network devices, and physical and virtual servers – alert and notification policies should be established to standardise escalation procedures. As part and parcel of this process, establish and enforce bandwidth usage policies based on incoming network traffic; a simple sketch of such checks follows the checklist below.

To assure the availability of critical infrastructure and maximise performance for end users, IT teams should monitor, at a minimum:

  • Internet connectivity, VPN sessions, network traffic and flow records
  • Servers, remote desktops, virtual machines and applications
  • Routers, switches, firewalls, load balancers and intrusion prevention systems.
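
By way of illustration, the minimal Python sketch below shows the kind of lightweight availability and bandwidth-policy check that a monitoring solution automates at scale. It is an example under assumed conditions: the device inventory, ports and the 800 Mbit/s policy threshold are placeholders, and in practice the measured traffic figure would come from SNMP counters or flow data rather than a hard-coded value.

```python
# Minimal sketch: poll device reachability and flag bandwidth-policy breaches.
# Host names, ports and thresholds are illustrative placeholders, not the
# configuration of any real monitoring product.
import socket
import time

DEVICES = {                      # hypothetical inventory
    "edge-router": ("10.0.0.1", 22),
    "core-switch": ("10.0.0.2", 22),
    "web-vm-01":   ("10.0.1.10", 443),
}
BANDWIDTH_LIMIT_MBPS = 800       # assumed policy: alert above 800 Mbit/s inbound


def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_bandwidth(current_mbps: float) -> None:
    """Escalate when inbound traffic exceeds the agreed usage policy."""
    if current_mbps > BANDWIDTH_LIMIT_MBPS:
        print(f"ALERT: inbound traffic {current_mbps:.0f} Mbit/s "
              f"exceeds policy limit of {BANDWIDTH_LIMIT_MBPS} Mbit/s")


if __name__ == "__main__":
    for name, (host, port) in DEVICES.items():
        status = "up" if is_reachable(host, port) else "DOWN - escalate"
        print(f"{time.strftime('%H:%M:%S')} {name:<12} {status}")
    check_bandwidth(current_mbps=912.0)   # value would come from SNMP/flow data
```

In a production environment this logic would run continuously and feed the alert and escalation policies described above, rather than printing to the console.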

Automated Log Collection

Effective analysis of the network to quickly detect unauthorised activity or security threats is a priority. Automating the collection, storage and backup of logs from firewalls, security appliances and load balancers – and the creation of alerts – will enable IT teams to proactively detect irregular activity and take fast corrective action. It also provides evidence for audit and compliance activities. Key target areas for automated log collection include access and permission changes to files, folders and objects, as well as the most common log types such as syslog, Microsoft Windows event logs and W3C/IIS logs.
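
As a rough illustration of what automated collection and alerting involves, the Python sketch below listens for syslog messages over UDP, archives every line for audit purposes and prints an alert when a message contains a suspicious keyword. The port number, archive file name and keyword list are assumptions for the example, not settings from any particular product.

```python
# Minimal sketch of automated log collection: a UDP syslog listener that
# archives every message and raises an alert on suspicious keywords.
# Port, file path and keyword list are illustrative assumptions.
import socketserver

LOG_ARCHIVE = "collected_syslog.log"                 # hypothetical archive file
ALERT_KEYWORDS = ("denied", "failed password", "unauthorized")


class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self) -> None:
        message = self.request[0].decode("utf-8", errors="replace").strip()
        with open(LOG_ARCHIVE, "a", encoding="utf-8") as archive:
            archive.write(message + "\n")            # store for audit/compliance
        if any(keyword in message.lower() for keyword in ALERT_KEYWORDS):
            print(f"ALERT: possible unauthorised activity -> {message}")


if __name__ == "__main__":
    # 5140 avoids the privileged default syslog port 514 for this sketch.
    with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as server:
        server.serve_forever()
```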

Flow Record Analysis

Evaluating network flow records enables IT departments to identify which applications consume the most expensive ISP bandwidth; full flow records also make it possible to analyse network traffic, based on flow, to identify users of non-business applications. Correlating information from NetFlow, sFlow, J-Flow and IPFIX records will deliver 360-degree visibility into overall network performance – and can ultimately be used to reduce ISP costs.
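
The minimal Python sketch below illustrates the idea, assuming a collector has already exported flow records to a CSV file with src_ip, dst_ip, dst_port and bytes columns (the file name and column names are assumptions for the example). It ranks destination ports, as a rough proxy for applications, and source hosts by total bytes transferred.

```python
# Minimal sketch of flow-record analysis: rank "applications" (by destination
# port) and hosts by bytes transferred, using flow records that a collector
# has already exported to CSV. File path and column names are assumptions.
import csv
from collections import Counter

FLOW_EXPORT = "flows.csv"        # hypothetical export: src_ip,dst_ip,dst_port,bytes

bytes_by_port = Counter()
bytes_by_host = Counter()

with open(FLOW_EXPORT, newline="") as f:
    for row in csv.DictReader(f):
        size = int(row["bytes"])
        bytes_by_port[row["dst_port"]] += size      # proxy for the application
        bytes_by_host[row["src_ip"]] += size        # who is generating traffic

print("Top applications by traffic (dst port, bytes):")
for port, total in bytes_by_port.most_common(5):
    print(f"  {port:>6}  {total:,}")

print("Top talkers (source IP, bytes):")
for host, total in bytes_by_host.most_common(5):
    print(f"  {host:>15}  {total:,}")
```

A dedicated flow analysis tool performs the same aggregation continuously and across all four flow formats, but the principle – correlate, aggregate, then rank – is the same.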

Network Testing

Conducting regular network tests will help to pinpoint any misconfigurations or software flaws that represent a potential vulnerability hackers could exploit. Running network penetration tests, for example, will enable the discovery of potential infrastructure security weaknesses – there are plenty of free open source penetration testing tools available to IT teams, such as Metasploit and BackTrack.
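
As a simple, hedged example of routine network testing (far short of a full penetration test), the Python sketch below checks a handful of well-known TCP ports on internal hosts so that unexpectedly open services can be spotted and investigated. The target addresses are placeholders, and such checks should only ever be run against infrastructure the team is authorised to test.

```python
# Minimal sketch of a basic network test: check a handful of well-known TCP
# ports on hosts you own, so unexpectedly open services can be investigated.
# Target addresses are placeholders; only scan infrastructure you are
# authorised to test.
import socket

TARGETS = ["10.0.1.10", "10.0.1.11"]                 # hypothetical internal hosts
COMMON_PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}


def open_ports(host: str, timeout: float = 1.0) -> list[tuple[int, str]]:
    """Return the common ports that accept a TCP connection on the host."""
    found = []
    for port, service in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, service))
        except OSError:
            continue
    return found


if __name__ == "__main__":
    for host in TARGETS:
        hits = open_ports(host)
        report = ", ".join(f"{p}/{s}" for p, s in hits) or "no common ports open"
        print(f"{host}: {report}")
```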

As part of the IT department’s testing procedure, it may also be worth testing the organisation’s security incident identification and response capabilities, as well as employee security awareness and security policy compliance.

By regularly evaluating the availability, performance and security of their hybrid cloud environment, IT departments can ensure their organisation is able to maintain control of on-premises infrastructure, while benefiting from the flexibility and efficiencies of the latest cloud services.

Michael Hack

Michael oversees Ipswitch’s entire EMEA business, responsible for its operations and partner structure. Michael has many years of experience in IT firms across different software segments and markets. Prior to taking his role at Ipswitch, Michael was president at Sitecore, a global leader in customer experience management software. He has previously also served as senior vice president of sales, EMEA & International, for the Enterprise Search Group at Microsoft. Further experience includes roles as sales director Central Europe at SABA, head of the SAP Competence Centre at IXOS Software AG (today OpenText), international sales and product manager at CompuNet AG, and sales manager for SAP’s software solution. Michael studied business economics and holds a degree in business administration.