Next-Generation Firewalls: Understanding The Challenges And Opportunities

Next-generation firewalls have come a long way in a relatively short period of time, building on the evolution of application firewalls, which have come to feature ever more complex rule sets for standard services, such as sharing services.

Next-gen firewalls have the advantage of supporting application firewall features that rely on mandatory access control (MAC), also referred to as sandboxing, to protect vulnerable services. Against this backdrop, it comes as no surprise to learn that Gartner predicts spending on the management of next-gen firewalls will grow to 35 per cent of total firewall spending by 2016.

A number of firewall vendors – notably Check Point, Cisco, Fortinet, McAfee and Palo Alto Networks – have either developed, or are developing, their initial range of next-gen firewall offerings, with Palo Alto seen as slightly ahead of the industry thanks to its unified single-pass inspection engine. This approach differs from the industry-standard methodology in that traffic is inspected in a single unified pass, rather than IP traffic flows being handed over to a series of sub-modules.

Whilst some observers view this move as logical, given the hybrid nature of current security threats against the corporate IT platform, most experts agree that the resultant firewall technology throws up a number of challenges, not least on the regulatory front.

In the US, for example, FISMA – the Federal Information Security Management Act of 2002 – recognises the importance that IT security represents to the economic and national security interests of the US – and requires each federal agency to develop, document, and implement its own programme to provide IT security to all aspects of their operations.

It is worthy of note that FISMA requires “agency program officials, chief information officers, and inspectors general to conduct annual reviews of the agency’s information security program and report the results to the Office of Management and Budget.”

According to the Act, the term information security “means protecting information and information systems from unauthorised access, use, disclosure, disruption, modification, or destruction in order to provide integrity, confidentiality and availability.”

Put simply, FISMA imposes a stringent burden of annual security reviews – and by implication security plus governance audits – on public sector IT security professionals in the US. And perhaps worse, from an IT security perspective, there is the issue that the Act is very specific in what it requires.

And meeting those needs is no mean feat: even as far back as 2005 there were problems meeting these requirements. A 2005 report from Intelligent Decisions, for example, found that US Federal CISOs saw increasing software quality assurance as the number one area on which the private sector needed to focus, and pointed directly at ongoing issues surrounding the quality of software.

Amongst other areas, the study highlighted network compromise, patch management and FISMA compliance as major concerns that keep Federal CISOs awake at night. In what many now view as a prescient report, the 2005 study also identified unauthorised wireless access points and rogue WiFi devices – and the difficulty of preventing unauthorised wireless deployments – as being amongst the major wireless network security concerns.

And it’s not just FISMA compliance that causes potential headaches for IT security professionals. The rising tide of compliance required by the Payment Card Industry Data Security Standard (PCI DSS) – version 3.0 of which is expected to be announced later this year – which is imposed on organisations that process card transactions from clients, is increasingly being viewed with concern.

The PCI DSS standard was developed by the major credit card companies as a guideline to help businesses that process card payments prevent credit card fraud, hacking and various other security vulnerabilities and threats. Retailers who wish to continue accepting cards, and who are notified that they fall within the scope of PCI DSS, must validate their compliance on an annual basis.

The validation is normally conducted by auditors – registered PCI DSS Qualified Security Assessors – although smaller firms have the option of using a self-certification questionnaire. Whilst the PCI DSS governance rules are great as a basic audit benchmark for retailers, the real problem facing retailers is their failure to invest in IT security.

If we go back 15 years or so, experts were advising firms to spend between six and eight per cent of their total IT budget on security. In fact, it turned out that most firms invested only around two per cent of their budgets, and the industry is now reaping the results of this IT security underfunding.

Whilst PCI DSS clearly has its place in the overall security picture, it is my belief that effective governance and compliance comes down to taking a holistic view of your security needs – and too many companies fail to do this. What many firms overlook is that installing extra security can reap benefits beyond simply preventing fraud: it can also improve overall business efficiency, which is positive for every company.

Delving into the complexities of the PCI DSS v2.0 standard reveals a number of control objectives. Among the 12 main category requirements are the requirement to install and maintain a firewall configuration to protect cardholder data, and the requirement to encrypt the transmission of cardholder data across open, public networks such as the Internet.
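In practice, checking a ruleset against this kind of control can be reduced to a few lines of code. The sketch below is a minimal illustration only – the rule format, field names and network addresses are all hypothetical, not any vendor’s actual export format – and simply flags rules that allow cleartext protocols into an assumed cardholder data environment:

```python
# Hypothetical firewall rules modelled as simple dicts; the field names
# and addresses are illustrative assumptions, not a real vendor format.
CARDHOLDER_NET = "10.1.2.0/24"      # assumed cardholder data environment
UNENCRYPTED_PORTS = {21, 23, 80}    # FTP, telnet, HTTP: cleartext protocols

rules = [
    {"src": "any", "dst": CARDHOLDER_NET, "port": 443, "action": "allow"},
    {"src": "any", "dst": CARDHOLDER_NET, "port": 80,  "action": "allow"},
    {"src": "any", "dst": "10.9.0.0/16",  "port": 80,  "action": "allow"},
]

def pci_violations(rules):
    """Return rules that allow cleartext traffic into the cardholder network."""
    return [r for r in rules
            if r["action"] == "allow"
            and r["dst"] == CARDHOLDER_NET
            and r["port"] in UNENCRYPTED_PORTS]

for r in pci_violations(rules):
    print("violation:", r)
```

Here only the second rule is flagged: it allows unencrypted HTTP into the cardholder network, while the third rule, although cleartext, does not touch the protected segment.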

The audit and governance functions are covered – in part – by the need to regularly monitor and test networks, as well as to track and monitor all access to network resources and cardholder data, rounded off by the need to regularly test security systems and processes.

So far, so good on the governance front, but there is a school of thought in security circles that an annual audit may no longer be enough to fully meet the provisions of PCI DSS v2.0 – and with v3.0 of the governance standard bearing down on us at high speed, there is an increasing awareness of the need for continuous security audits.

These audits, by implication, must be carried out using an automated platform, since any manual audit can only represent the state of a system’s IT security at a given point in time. An automated security audit and governance function – working quietly in the background – is not only good for compliance and defence purposes; it also prevents the audit function from interfering with the efficiency of the organisation’s core business.
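A point-in-time audit becomes a continuous one simply by re-running the same check on a schedule and comparing results. The sketch below – again using an invented snapshot format rather than any real audit product’s – detects drift from an approved baseline by fingerprinting successive ruleset snapshots; in a real deployment this would run from a scheduler and raise an alert rather than print:

```python
import hashlib
import json

def ruleset_fingerprint(rules):
    """Stable hash of a ruleset, so two snapshots can be compared cheaply."""
    canonical = json.dumps(sorted(rules, key=lambda r: json.dumps(r, sort_keys=True)),
                           sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def audit(baseline_rules, current_rules):
    """Return any rules present now that were not in the approved baseline."""
    if ruleset_fingerprint(current_rules) != ruleset_fingerprint(baseline_rules):
        return [r for r in current_rules if r not in baseline_rules]
    return []

baseline = [{"src": "any", "dst": "10.1.2.0/24", "port": 443, "action": "allow"}]
# An unapproved telnet rule has appeared since the baseline was taken.
current = baseline + [{"src": "any", "dst": "10.1.2.0/24", "port": 23, "action": "allow"}]

print("unapproved changes:", audit(baseline, current))
```

Because the check runs quietly in the background, drift is caught within one polling interval rather than at the next annual review.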

The danger with this strategy is that the regulations start to impede the day-to-day efficiency of the business. My belief is that policies should be both detailed and easy to understand – and should also balance enforcement against productivity. If a security policy starts to impede productivity, then it is, by all conventional definitions, not a good policy, especially if it is misinterpreted by employees.

A classic case of this is the Data Protection Act in the UK – how many times are UK consumers told (incorrectly) that a certain request they have made cannot be completed because of the provisions of the Act? In some cases the Act’s legal restrictions are understandable, but in many instances there is a clear misinterpretation of the provisions of the legislation, leading to employee actions that – at best – reduce their efficiency and the level of service they offer to customers.

When it comes to IT security, there is a danger that people tend to `over-tech’ their interactions with computers, making the use of the technology more complicated than it actually needs to be. An example of this is that IT security professionals will often refer to a computer audit, rather than a computer-assisted audit, which is the correct term for automated IT audit functions.

Tapping the power of an effective security audit technology that supports a range of both automated and pre-defined reports means that IT security professionals can get on with the task of efficiently managing the security of their IT systems in the background, without impeding the efficiency of their colleagues elsewhere in the business.

This is particularly important when it comes to the generation – and the use – of the `what-if’ reports on which automated firewall management technology thrives, as these can then be used to develop effective risk analysis programmes.
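In code terms, a `what-if’ check is simply the same compliance test run against a proposed ruleset before it is deployed. The sketch below (hypothetical rule and policy formats, with an assumed list of ports that must never be open to any source) rejects a candidate change that would expose telnet, before the change ever reaches a production firewall:

```python
# Assumed internal policy, not a published standard: ports that must
# never be reachable from an "any" source.
FORBIDDEN_OPEN_PORTS = {23, 3389}   # telnet, RDP

def what_if(current_rules, proposed_rule):
    """Simulate adding a rule and return any policy breaches it would cause."""
    candidate = current_rules + [proposed_rule]
    return [r for r in candidate
            if r["action"] == "allow"
            and r["src"] == "any"
            and r["port"] in FORBIDDEN_OPEN_PORTS]

current = [{"src": "any", "dst": "10.1.2.0/24", "port": 443, "action": "allow"}]
proposed = {"src": "any", "dst": "10.1.2.0/24", "port": 23, "action": "allow"}

breaches = what_if(current, proposed)
if breaches:
    print("change rejected:", breaches)
```

The value of the what-if step is that the risk analysis happens on the candidate ruleset, so a bad change is caught as a report entry rather than as an incident.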

As Chief Security Architect, Michael Hamelin identifies and champions the security standards and processes for Tufin Software Technologies. Bringing more than 15 years of security domain expertise to Tufin, Michael has deep hands-on technical knowledge in security architecture, penetration testing, intrusion detection, and anomaly detection of rogue traffic. He has authored numerous courses in information security and worked as a consultant, security analyst, forensics lead, and security practice manager. He is also a featured security speaker around the world, and is widely regarded as a leading technical thinker in information security. Michael previously held technical leadership positions at VeriSign, Cox Communications, and Resilience. Prior to joining Tufin he was the Principal Network and Security Architect for ChoicePoint. Michael received Bachelor of Science degrees in Chemistry and Physics from Norwich University, and did his graduate work at Texas A&M University.