Understanding the correlation between data leakage and the security mission

In a number of cases, Internet-attached private systems are not maintained in an up-to-date state and, as such, are vulnerable to exploitation by mischief-makers and, of course, their cybercriminal peers.

With this as a given, it is my contention that most public/private-sector professional security installations enjoy a very healthy state of security and, in most cases, are secure against drive-by, targeted or opportunist hackers, and all the compromises that can result from this type of cyber-criminality.

However, as can be seen from the recent Shady RAT analysis from McAfee, it would appear that, despite this high level of resilience, some otherwise well-managed security installations are failing to deliver a 100 per cent security defence strategy.

For those readers unfamiliar with the August Shady RAT (Remote Access Trojan) report, the broad brush strokes are that there have been a series of multi-year, multi-system attacks on at least 72 US and other Western-allied government, contractor and other server systems.

Regardless of which entities are behind the attacks, however, the reality is that conventional IT security defences—when deployed alongside well-planned and executed security strategies—may no longer be considered sufficient to stop a determined and targeted attack.

This leads us into the interesting supposition that the majority of previous reports, many of which have been well researched by industry professionals, may be operating on a rationale that is a little out of date and, as such, may not give the reader a complete overview and explanation of the current state of play.

A second observation is that, whilst some of the report findings focus on the failings of current generations of defensive technologies, they do not account for the root causes of the problems caused by determined and targeted attacks.

These causes, I believe, centre on everyday working practices and security configurations, which are not always included in the standard security mission in a typical IT systems environment. There may also be further issues in the areas of security skill sets and a hands-on understanding, appreciation and anticipation of the potential for insecurity that may arise from adhering solely to the standard security mission of a given organisation.

Is this a criticism of the current status quo that exists in most corporate IT security operations? Far from it; my main aim here is to set the scene for my observations.

Recently I was fortunate enough to have a meeting with an incumbent IT security manager in a large organisation and, as part of the getting-to-know-you process, on both sides of the table, that all potential new candidates for IT security projects undertake, I was asked a number of key questions. Amongst those asked in connection with a senior IT security role were the following: (a) Do you know what access control is, and (b) Can you explain what “audit and log” means?

You could conclude that these were trick questions, but the reality is that questions of this type suggest that the person drawing up the list may not have been fully conversant with the in-house IT security function (and that is likely a generous assertion).

This brings us to one of the biggest challenges of the current age of cybersecurity: Advanced Evasion Techniques (AETs). At their most basic, AETs are a logical means by which attacks can be engineered to exploit a condition by re-engineering a vector of attack, and so circumvent any currently deployed defence or control, with the intention to invade, compromise and/or impact a targeted operational environment.

Developing a typical AET-enabled security attack is no mean feat, but the task is made easier by the fact that there are significant volumes of unintentionally published, but very available, intelligence on various IT platforms, which can assist a hostile actor in `footprinting’ a potential target and collating information on the system they are considering attacking.

This process then allows the hacker to decide what shape the attack profile of his/her AET should take, and how a network incursion can be engineered. At this point I’d like to introduce a supposition, namely that all of the above events, skills and knowledge can be used to develop a highly effective data leakage strategy. By its very nature, data leakage is opportunistically invasive and, unless understood, will always be present in the background, trapping, recording and then, without any malicious intent, making the information available to unauthorised persons.

In many organisations, AET-enabled data leakage is a potential disaster just waiting to happen. What many might interpret as mere snippets of information can be leveraged by an experienced cybercriminal to launch a highly effective attack on an organisation.

One of the biggest potential areas for data leakage, in my experience, lies in the hacker treasure trove that metadata has become. Because metadata is data-about-data, it is often classed as mere summary information when, in fact, possession of metadata, along with other snippets of information about a given potential target, can allow an experienced cybercriminal to develop one or more attack vectors with the same success rate potential as if they had possession of all the underlying data `summarised’ by the metadata.

But before we move on with this analysis, what is metadata? Metadata exists in all types of documents, and is present to assist the application, machine or user to manage the objects by, for example, allowing tagging or applying some other deeper hidden detail analysis which may assist with searching or document management. Despite its potential for darkware development, metadata’s underlying purpose is entirely above board.
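By way of a practical illustration of just how accessible this hidden detail is: a .docx file is simply a ZIP archive, and its core properties can be read with nothing more than the standard library. The sketch below is mine, not taken from any particular toolkit, and the file name is a hypothetical example.

```python
import zipfile
import xml.etree.ElementTree as ET

def docx_metadata(path):
    """Return the core document properties embedded in a .docx file.

    A .docx is a ZIP archive; the member docProps/core.xml holds
    Dublin Core metadata such as the creator, the last person to
    modify the document and the revision count.
    """
    ns = {
        "cp": "http://schemas.openxmlformats.org/package/2006/"
              "metadata/core-properties",
        "dc": "http://purl.org/dc/elements/1.1/",
    }
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    fields = {
        "creator": "dc:creator",
        "last_modified_by": "cp:lastModifiedBy",
        "revision": "cp:revision",
    }
    return {key: root.findtext(xpath, default="", namespaces=ns)
            for key, xpath in fields.items()}
```

Running this across a batch of publicly downloaded documents, e.g. `docx_metadata("annual_report.docx")`, routinely surfaces real account names in the `creator` and `last_modified_by` fields.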

Problems start to rear their ugly heads when the security implications of metadata are not fully understood. And it is here that we start to see the opportunities for data leakage creeping out of the security woodwork in a typical organisation, often as a direct result of the many document formats that exist in the modern IT environment: DOC, DOCX, PDF, PPT, PPS, XLS, XLSX, ODT, ODS, ODG, ODP and SVG, along with many others.

The sheer variety of data formats gives cybercriminals the ability to gain legitimate access to published documentation, download it, and then subject the data to analysis in order to locate snippets of information, such as user names. This can then lead to the identification of active user and/or email accounts, internal URLs, printer names, network and user paths, shared folders, and operating systems.
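To illustrate the kind of snippet-mining just described, the following sketch scans a blob of harvested document text or metadata for domain accounts, UNC share paths and email addresses. The patterns are illustrative assumptions on my part, not an exhaustive toolkit.

```python
import re

# Illustrative patterns for the kinds of snippets discussed above:
# Windows domain accounts, UNC share paths and email addresses.
PATTERNS = {
    "domain_account": re.compile(r"\b[A-Z][A-Z0-9]+\\[a-z][\w.]+"),
    "unc_path": re.compile(r"\\\\[\w.-]+\\[\w$.-]+"),
    "email": re.compile(r"\b[\w.+-]+@[\w.-]+\.[a-z]{2,}\b", re.I),
}

def harvest(text):
    """Return the footprinting snippets found in a blob of text,
    de-duplicated and sorted, keyed by pattern name."""
    return {name: sorted(set(p.findall(text)))
            for name, p in PATTERNS.items()}
```

Fed with the text of a few dozen published documents, even these naive expressions start to map out user naming conventions, file servers and mail domains.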

And this is before we even begin to talk about NetBIOS names, IP addresses, GPS data and applications, all of which supply `footprint’ intelligence to the would-be attacker, who gains a ready insight into your network platform.

Is this a threat? Most certainly yes. This type of footprinting, which may come as a surprise to many network admins, is a very effective method of working out how organisations operate on the inside: for example, by identifying the Admin Account used to produce BAU documents, or by locating the use of legacy operating systems and their applications.

This is, as any network security professional will attest, a perfect environment in which to create a crafted AET attack process and, by definition, represents a clear and present danger to the organisation’s IT system under the hacker microscope.

In some of my own analysis and research, I have found it perfectly possible to gather sufficient intelligence to identify those sensitive assets that can be attacked through the use of externally gathered data. In one instance, this process methodology allowed the identification and extraction of files containing hard-coded user IDs and their associated passwords.
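I am not going to publish the tooling used in that research, but a deliberately naive sketch of the general approach, walking a directory tree and flagging files that appear to contain hard-coded credentials, might look as follows. The patterns and file extensions are illustrative assumptions.

```python
import re
from pathlib import Path

# Naive credential pattern: a user/password-style key, then '=' or ':',
# then an unquoted or quoted value.
CRED_RE = re.compile(
    r"""(?P<key>user(?:name)?|passwd|password|pwd)"""
    r"""\s*[=:]\s*["']?(?P<value>[^\s"']+)""",
    re.IGNORECASE,
)

def scan_tree(root, suffixes=(".cfg", ".ini", ".txt", ".sql")):
    """Walk a directory and report (file, key, value) triples for
    anything that looks like a hard-coded credential."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in suffixes:
            continue
        for m in CRED_RE.finditer(path.read_text(errors="ignore")):
            findings.append(
                (str(path), m.group("key").lower(), m.group("value"))
            )
    return findings
```

The point is not the sophistication of the scanner; it is that anything this simple succeeding at all indicates how much sensitive material sits in plain text on reachable systems.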

In another instance, my approach allowed the identification of some very sensitive servers and associated information assets that were hanging off a third-party developer’s Web site.

As a conclusion, I believe that data leakage has become one of the most misunderstood conditions that engender a potential threat in modern security landscapes. I am also of the opinion that data leakage is one of the primary reasons why organisations are falling easy prey to hacktivists, hackers and cybercriminals.

A strong data leak prevention programme, implemented with the proper governance and assurance considerations, is critical. If the associated insecurity of data leakage issues were addressed as a matter of routine cybersecurity housekeeping, our industry would enjoy a noticeable reduction in the success rate of AET-enabled data incursions.
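As one small example of such routine housekeeping, a publication workflow could scrub the identifying core properties out of a .docx before it ever reaches a public Web server. The sketch below is a minimal illustration of mine using only the standard library; production DLP tooling would also need to address custom properties, comments, revision history and embedded objects.

```python
import zipfile

def scrub_docx(src, dst):
    """Copy a .docx, replacing docProps/core.xml with an empty
    core-properties element so author-identifying metadata is removed."""
    empty_core = (
        '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
        '<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org/'
        'package/2006/metadata/core-properties"/>'
    )
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if item.filename == "docProps/core.xml":
                zout.writestr(item.filename, empty_core)
            else:
                # Copy every other archive member through unchanged.
                zout.writestr(item, zin.read(item.filename))
```

Hooked into the point where documents are approved for external publication, a step like this removes exactly the footprinting material discussed earlier, at negligible cost.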

John Walker served in the Royal Air Force in the very early days of computer security, acting as an Investigator with a number of specialist signals units, as well as working with GCHQ, CESG and specialist agencies. Today he holds the appointment of Visiting Professor at a University in the Faculty of Computing and Informatics, and is MD of Secure-Bastion. He has worked on a number of international assignments, in countries including Argentina, Germany, Spain, India, the Netherlands, Denmark and Norway. John is a published author, with articles in Information Age, CONSPECTUS, Consultants Advisory, Computer Weekly, Computing, NT Magazine, Information Security (US), Elsevier (Computer Science) and Management Consultant. John is also engaged with Government and Parliamentarians, and has both contributed to and presented at public and Government debates for the House of Lords, House of Commons, DTI, EURIM and local Chambers of Commerce.