Stuxnet was an interesting wake-up call


Critical infrastructure protection has grabbed a lot of headlines lately, but talk of weaponized exploits targeting these systems has been mostly academic. Until now.

Stuxnet surfaced in July 2010 and spread fast. It was also the first attack to specifically target one of the US's critical infrastructures, throwing much of the information security industry into a frenzy.

One reason is that, although Siemens responded effectively and delivered a tool for detecting and removing Stuxnet, many saw the attack as a shot across the bow of the bulk energy systems: a potential strike in the elusive campaigns of cyberwar. The full effects of Stuxnet are not yet understood, but one thing is certain: whether or not it was a targeted attack, it is at the very least a proof of concept that such an attack could occur.

Stuxnet is an interesting and worrisome attack for a few reasons. First, it was sophisticated, using a zero-day exploit (CVE-2010-2568) as a delivery mechanism. Second, it was targeted, focusing on specific Siemens process control devices. Finally, it used a known (but not widely known) default password within those Siemens systems, which indicates that the attacker understood the target extremely well.

The first two, on their own, aren't anything new: there will always be zero-days, and almost anything can be targeted, for almost any reason. An outside threat that understands the inner workings of a control system, however, was new, and it's fundamental to understanding Stuxnet and the true threat it represents.

Historically, SCADA (supervisory control and data acquisition) and DCS (distributed control systems) have been extremely isolated: physically, digitally, and intellectually — but this isolation is rapidly deteriorating as business and control systems become increasingly interconnected. Stuxnet exploited a default password used to connect the WinCC and STEP 7 programs to their back-end database.

You can't simply walk into a home electronics shop, buy a Siemens control system, and hack at it until you find some internal default account to exploit. You also can't access someone else's control system to do that same reconnaissance, at least not easily, as these systems are — or should be — completely isolated from the Internet. Right?

The problem is that these systems are built for reliability, first and foremost, and that means products are designed to have life spans measured in decades. It’s for this reason that many such systems are replete with what most IT professionals would classify as “legacy” equipment, and most IS professionals would label as “vulnerable.”

This includes a variety of “dumb” line controllers and other devices that lack internal security monitoring or logging, many of which still operate serially (and some of those have been upgraded to operate serially over TCP/IP, using relatively insecure protocols such as Modbus or DNP3).
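The insecurity of those serial-over-TCP/IP protocols is easy to see on the wire. As an illustrative sketch (the field layout follows the published Modbus/TCP framing; the register address and value below are arbitrary), here is how little it takes to build a valid write request — note that nothing in the frame identifies, authenticates, or authorizes the sender:

```python
import struct

def modbus_write_single_register(transaction_id, unit_id, register, value):
    """Build a raw Modbus/TCP 'Write Single Register' (function 0x06) request.

    The MBAP header plus PDU carries no authentication or integrity field:
    any host that can reach TCP port 502 on a device can issue writes.
    """
    pdu = struct.pack(">BHH", 0x06, register, value)  # function, address, value
    mbap = struct.pack(">HHHB",
                       transaction_id,  # transaction identifier
                       0x0000,          # protocol identifier (0 = Modbus)
                       len(pdu) + 1,    # remaining byte count (unit id + PDU)
                       unit_id)         # unit (slave) identifier
    return mbap + pdu

# Arbitrary example values: write 0x0101 to holding register 10 on unit 0x11.
frame = modbus_write_single_register(1, 0x11, register=0x000A, value=0x0101)
print(frame.hex())
```

Twelve bytes, no credentials. DNP3 has a richer feature set, but in its common deployments it is similarly trusting of whoever is on the network.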

You can't simply take a critical asset offline for an upgrade, or to apply a patch, so even the best efforts of control system vendors can sometimes go unimplemented. Everything is structured, and myriad dependencies among assets ensure that everything works as intended to reliably deliver energy, pump water, or manufacture a vaccine.

This is the heart of the issue, and it’s a blessing and a curse all in one. Simply put, SCADA and DCS systems that run our nation’s critical infrastructures are predictable.

The good news is that this predictability is also a strength. What security operations analyst wouldn't love a reliable, dependable baseline of normal activity? Unlike the average enterprise network, which is a free-fire zone of sometimes near-random activity, control systems can be accurately defined and baselined.

Armed with that, any variation of activity — a sudden increase in traffic on a certain segment, or a new user logging into a system for the first time, or a file seen in use where it isn’t expected — can be considered suspect, and because very little happens that is out of the ordinary, those suspect events are manageable. If anyone ever doubted the intentions of NERC CIP, and the massive amounts of documentation and logging that it requires, it’s time to embrace the truth that knowledge is power.
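The deviation check described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the flow tuples and user names are invented, and a real monitoring system would learn its baseline from observed traffic rather than hard-code it:

```python
# Baseline of normal activity: known flows as (source, destination, port)
# tuples, and documented user accounts. All names here are illustrative.
baseline_flows = {
    ("hmi-01", "plc-03", 502),      # HMI polls PLC over Modbus/TCP
    ("historian", "hmi-01", 1433),  # historian reads the HMI database
}
baseline_users = {"operator1", "operator2"}

def suspect_events(flows, logins):
    """Return every observed flow or login that is absent from the baseline."""
    alerts = []
    for flow in flows:
        if flow not in baseline_flows:
            alerts.append(("unexpected-flow", flow))
    for user in logins:
        if user not in baseline_users:
            alerts.append(("unknown-user", user))
    return alerts

observed_flows = [("hmi-01", "plc-03", 502), ("laptop-7", "plc-03", 502)]
observed_logins = ["operator1", "wincc_default"]
for alert in suspect_events(observed_flows, observed_logins):
    print(alert)  # only the two out-of-baseline events are reported
```

Because so little in a control network changes, the alert volume from even this naive approach stays manageable, which is exactly the point.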

Working with definable baselines is one thing; understanding the policies, procedures, assets, and enclaves within your network allows you to do something even more powerful. Having documented your users, you can monitor user activity and look for accounts that aren't documented.

Identifying critical assets, and which other cyber assets are (or aren’t) allowed to communicate with them, allows you to look for invalid network communication paths. Understanding which applications are allowed makes it easy to spot unwanted activity, and understanding how authorized applications are supposed to operate lets you look for the more subtle threats — like Stuxnet — that are using legitimate application calls in illegitimate ways.


Eric Knapp is the Director of Critical Infrastructure Markets for NitroSecurity. He joined NitroSecurity in early 2007, bringing over a decade of experience in telecommunications and Internet security technology. He has previously held senior positions in Product Management at Cabletron Systems, Paradyne, and Zhone Technologies. Eric is an award-winning author, and is considered an expert in applied Ethernet technologies, specializing in the monitoring and security of critical infrastructure networks. He is the author of “Industrial Network Security: Securing Critical Infrastructure Networks for Smart Grid, SCADA, and Other Industrial Control Systems,” available in Summer 2011.