Network Performance Is Imperative For A 21st Century Business

In the old days, those tasked with ensuring their organisation’s networks were secure, reliable and sufficient for their needs were dealing with known resources and predictable usage. Network equipment was confined to the organisation’s various premises, the larger of which were linked via dedicated leased lines; smaller locations were often deemed unworthy of network access.

The applications that ran over the network were nearly all planned and provisioned by the IT department. That has all changed in the last twenty years as the internet has become a fundamental business resource and employees have become far more mobile.

Today, ensuring the performance, reliability and security of network usage requires a holistic view of internal network resources, the internet and mobile network services. Only then can the network's impact on the end-to-end user experience be understood and a minimum acceptable service level be set.

The problem is exacerbated by unpredictable workloads. IT departments themselves have been loading networks with ever more resource-hungry applications, for example voice and video conferencing. They have also been cramming more and more processing power into data centres through the use of virtualisation, which means more network resources are required per physical server.

They are also using online resources to supplement internal infrastructure, which requires a reliable and suitably “broad” connection to the internet.

On-demand services also make it easy for lines of business to provision their own applications and IT resources. Employees can do this too, accessing social media sites and firing up mobile apps at will, sometimes for good business reasons, but more often for personal use. Such unplanned use makes ensuring network performance and security problematic, to say the least.

Data shows that the most common reason for application failure is a network communications breakdown of some sort. In other words, the network is the soft underbelly of most organisations’ IT infrastructure. Getting on top of this requires constant monitoring of the user experience and, when that experience falls short, an understanding of the network’s contribution to the problem.

Mitigation may require upgrades to network services or equipment, but in some cases it may be sufficient simply to adjust and optimise usage of the existing network. A port assessment by Networks First, a network management company, shows that in many cases network equipment is actually underutilised. With intelligent management, it should be possible to drive more performance out of existing resources.

For many it makes sense to hand the complexities of ensuring minimum network service levels to a third-party management company. The initial stage of any such assignment is discovery: what equipment and services are in place, and how do they fit together to form the overall network?

It may seem surprising that a given organisation does not already know this; however, most networks have been cobbled together over a number of years by a succession of network managers and contractors, often dealing with tactical issues without regard for an overall long-term network strategy.

Once the network components are understood, the network’s current base performance and loading can be assessed. Whether the results are good or bad, this provides a benchmark against which any improvement in service levels delivered by the management company can be measured.

The user experience then needs to be measured on an ongoing basis to ensure it does not regularly drop below a target baseline and, when it does, that the reasons are understood and, if necessary, remedied.

The tools required for monitoring and managing network performance tend to be sophisticated and expensive. Open source tools are available, but require good technical skills to use effectively. Smaller organisations may not have access to any such tools, and larger organisations may lack the time or wherewithal to get the most out of them.

Network management companies will have developed the expertise to use such tools and can spread their cost across many customers, making them accessible to organisations of any size.

Whatever steps are taken to ensure the ongoing performance, availability and security of a network, the cost of doing so must be justified on three counts. First, it must be possible to reduce running costs, or at least ensure better ongoing performance, without excessive short- to medium-term investment in new equipment and/or services.

Second, the business risks posed by the network and problems with its performance and security must be mitigated and minimum service levels guaranteed. Third, a stable network that performs well and has spare capacity should be one the business can rely on to provide new value as and when required.

The majority of businesses will not have the in-depth understanding of their networks needed to be sure of achieving these goals. Most will not even have had a recent network assessment; if they did, many would be surprised at how poorly their network is serving them and how much could be gained from addressing this.

A functional network is imperative for a 21st century business. A well-managed, high-availability, high-performance and secure network can be a distinct competitive advantage; a poorly managed one is a fundamental business risk.

Bob Tarzey joined Quocirca in 2002. His main area of coverage is route to market for ITC vendors, but he also focuses on IT security, network computing, systems management and managed services. Bob has extensive knowledge of the IT industry. Prior to joining Quocirca he spent 16 years working for US technology vendors including DEC (now HP), Sybase, Gupta, Merant (now Serena), eGain and webMethods (now Software AG). Bob has a BSc in Geology from Manchester University and a PhD in Geochemistry from Leicester University.