Let’s forget about security…

“Security” really has a lot to do with perception and point of view. Yesterday, preventing “whistle-blowing” was a compliance issue; today, post-WikiLeaks, allowing whistle-blowing is starting to look like it might become a compliance issue; and the US government has issued an information assurance memorandum “pertaining to security, counter-intelligence, and information assurance disciplines, with emphasis on their application in automated systems”. In this, the issues “are categorized as follows.

1) Management & Oversight
2) Counterintelligence
3) Safeguarding
4) Deter, Detect, and Defend Against Employee Unauthorized Disclosures
5) Information Assurance Measures
6) Education & Training
7) Personnel Security
8) Physical/Technical”

The memorandum goes on to list a fairly sensible set of self-assessment questions around common vulnerabilities and risks that could help to improve national security (although I’d be rather concerned if most of these issues aren’t being addressed already).

Well, I’m not sure that much of the material released by WikiLeaks was significant to national security (it was mostly gossip, and mostly confirmed much of what we all suspected anyway) and, as a taxpayer, I think that my government’s concern with secrecy may be mostly about keeping its mistakes and misconceptions away from the voters.

I’m also fairly sure that much of the security industry will welcome this memo as a chance to sell more security technology (it will almost certainly influence what is regarded as “good practice” outside of its immediate scope). Many of its readers will comply with the letter of its “requirements”, using money spent on technology as a metric, whilst ignoring its procedural and people-management implications—which could help them to implement its spirit. However, Europe’s “comply or explain” culture probably places it ahead of the USA in that last regard anyway.

Nevertheless, my reaction to what looks like yet another well-intentioned, knee-jerk response to a media security exposure is that we need to step back and reassess the place of “security” generally.

Perhaps the recent case of former Swiss banker, Rudolf Elmer, now on trial for “stealing” and publishing information about tax evaders and other possible criminals, in what he sees as the public interest, highlights some of the issues.

As Ed Macnair, CEO of Overtis, points out: “Whether you view him as a whistle-blower or a renegade, from an information security perspective Elmer’s case is yet another example of a trusted employee storing customer information to removable media and passing it to a third party. There is a growing recognition that employees with privileged access to data may become less trustworthy over time and so security should be user-centric. The only way to stay on top of your data governance is to put security in between your users and your data, so that policies are consistently enforced”.

I’m sure that is true—security is a people issue and it must be user-centric and policy-based. What people do in the open is usually in line with the mores of the culture around them, and if they work in an ethical culture, then their behaviour will be ethical too.

Overtis, for instance, sells software which, used appropriately and with sensitivity, can enable a good user-oriented “security culture”. It can be used, for example, to notify users when they are breaking security policies and record their explanation (which could be seen as “knowledge transfer” leading to policy improvement) rather than to prohibit an action mindlessly and perhaps impede business agility.
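That warn-and-record approach (notify the user, capture their explanation, let the action proceed, and feed the explanations back into policy review) can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not Overtis’s actual product or API; all names here are invented:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PolicyMonitor:
    """Warn-and-record enforcement: flag policy violations, capture the
    user's explanation for audit/policy review, and let the action
    proceed rather than prohibiting it mindlessly."""
    rules: dict[str, Callable[[dict], bool]]  # rule name -> predicate (True = violation)
    audit_log: list[dict] = field(default_factory=list)

    def check(self, action: dict, explain: Callable[[str], str]) -> bool:
        for name, violates in self.rules.items():
            if violates(action):
                # Notify the user and record their explanation -- this is the
                # "knowledge transfer" that can lead to policy improvement.
                self.audit_log.append({
                    "action": action,
                    "rule": name,
                    "explanation": explain(name),
                })
        return True  # the action always proceeds; governance lives in the log

# Example: copying customer data to removable media triggers a warning.
monitor = PolicyMonitor(rules={
    "no-removable-media": lambda a: a.get("target") == "usb",
})
allowed = monitor.check(
    {"user": "alice", "op": "copy", "target": "usb"},
    explain=lambda rule: "Needed a local copy for the client meeting",
)
print(allowed, len(monitor.audit_log))  # True 1
```

The design choice worth noticing is that `check` never returns `False`: the control is user-centric transparency plus an audit trail, not a hard block that might impede business agility.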

However, to me, this brings up a bigger issue: that of “ethics”. What if an organisation expects “ethical behaviour” of its employees, but those employees see the organisation, or its management team, behaving unethically itself? I can remember working in banking when Ethics was some place way out on the Central Line, where your staff honed their trading skills in early life. This made building an effective “security culture” rather hard, especially when security impeded doing business: the imperative seemed to be to comply with the letter of all regulations, while evading their spirit whenever possible.

I think that there’s scope for a radical rethink on the place of security that ensures it is built into ethically-directed business processes, automated and manual, from the very start. All too often, the scope of process improvement is limited to automation, and when automating a business process, the functional requirements are what developers focus on. Security, compliance, auditing and so on are seen as “non-functional requirements” and are often a bolt-on from a third-party provider.

This is usually very dysfunctional (bolting on security to an insecure system is expensive, can impact performance and usually doesn’t work very well) but it keeps a security silo in business. It also tends to promote security as an end in itself, often in opposition to business agility and innovation.

Wouldn’t it be better to forget all about “security” as such? Appropriate Confidentiality, Integrity and Availability are still vital, of course, but they are non-functional requirements for the holistic business process. Perhaps these non-functional requirements are satisfied by specific technology or by generic automated processes shared across a domain—or perhaps they’re handled by manual procedures. All in the context of ethical “good business practice”.

There are no security absolutes: what we call “security” is simply one aspect of holistic risk management. So, for example, you sometimes read that all information on external (and even internal) networks should be encrypted. However, in low-latency WANs involved in algorithmic trading (where milliseconds matter) the overheads of packet encryption/decryption are probably unacceptable and other means must be found to protect data privacy appropriately. Encrypting all information on the network is too simplistic a requirement.
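The trade-off above is easy to make concrete with some back-of-the-envelope arithmetic. The numbers below are purely illustrative assumptions (a 200 µs round-trip budget, 2 µs per AES operation, 3 hops), not measurements of any real trading network:

```python
def crypto_latency_budget(budget_us: float, per_packet_us: float,
                          hops: int) -> float:
    """Return the fraction of a round-trip latency budget consumed by
    encrypt/decrypt work: one encrypt and one decrypt at each hop,
    on both the outbound and the return leg."""
    crypto_ops = 2 * 2 * hops  # (encrypt + decrypt) x (out + back) x hops
    return crypto_ops * per_packet_us / budget_us

# Illustrative, assumed figures only -- "milliseconds matter" means the
# budget is tiny, so even microsecond-scale crypto work adds up fast.
frac = crypto_latency_budget(budget_us=200.0, per_packet_us=2.0, hops=3)
print(f"{frac:.0%} of the latency budget spent on crypto")  # 12%
```

Even with these generous assumptions, crypto eats a double-digit share of the budget, which is why blanket “encrypt everything on the network” requirements can be too simplistic and other privacy controls (physical isolation, access control at the endpoints) may be more appropriate.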

Appropriate (and user-centric) transparency and control are necessary to deliver a business outcome. In other words, good-practice business process (automated or otherwise) is needed to deliver a business vision, including compliance with any regulations and accepted business ethics. However, what we now think of as the security industry is really just an enabler for best-practice automated business service development.

So, don’t think “security”; think “enabling business outcomes”, including the identification and management of any associated risks and vulnerabilities. You will still buy security technology, of course, but now you will feel assured that it is delivering something essential to the business outcomes you want, not just satisfying a security officer’s checklist or keeping the security industry profitable.

David Norfolk is Practice Leader, Development and Governance, at Bloor Research. David first became interested in computers and programming quality in the 1970s, working in the Research School of Chemistry at the Australian National University. Here he discovered that computers could deliver misleading answers, even when programmed by very clever people, and was taught to program in FORTRAN. His ongoing interest in all things related to development culminated in his joining Bloor in 2007 and taking on the development brief. Development here refers especially to automated systems development, covering acronym-driven tools such as Application Lifecycle Management (ALM), Integrated Development Environments (IDE), Model Driven Architecture (MDA), automated data analysis tools and metadata repositories, requirements modelling tools and so on.