It Can’t Happen Here? Electoral Manipulation EU-Style


Swaying public opinion is now less about stirring oratory and brilliant policies and more about generating a tsunami of tiny hints that combine to overwhelm reasoned reflection and inflate existing prejudices. Isolation technology promises to restore security and bring some calm to the Internet experience. With all that information to hand, and a little more freedom to process it, might we not draw a little closer to the democratic ideal of a rational voting public?

From our side of the pond, US elections can seem like a crazy carnival of balloons, cheerleaders and hyped-up PR stunts – and the latest presidential election only seems to have underlined that view. How could so many millions of educated people allow themselves to be so readily manipulated?

Before we conclude that “it couldn’t happen here”, it is worth remembering just how open the Old World is to US cultural trends: rock ’n’ roll, fast food and personality-cult politics have long gone global. But surely, when it comes to something as serious as an election or referendum, our majorities would still opt for rational choices? Be guided by head and heart, rather than gut feeling?

The snag with rationality is that it relies on data to form conclusions. If the data is accurate it should lead to good decisions, but false or compromised data can result in poor choices. “Post-truth” was named Word of the Year 2016 by Oxford Dictionaries, and the recent French election, the Brexit referendum and the Trump campaign are just the most obvious examples.

Some 9 gigabytes of private data from Macron’s election campaign were dumped online, fuelling instant comparisons with the earlier cyber attack on the Democratic National Committee and the chairman of Hillary Clinton’s campaign – an attack that US intelligence agencies blamed on Vladimir Putin’s efforts to support Donald Trump for president.

It is tempting to believe that we have learned that lesson and that, once the truth about post-truth was exposed, it could never happen again. Established political parties deny such unethical behaviour, because they believe in democracy and an educated voting public.

Populist parties are likely to fall back on Putin’s excuse: that, while his government would never think of stooping so low, it is possible that some “patriotic” hackers might have interfered in the elections. What sort of ethics allows one to whip up a frenzy of patriotism against another nation, and then turn a blind eye to the resulting consequences?

True Or Not, It’s An Old Story

The overriding reason we can guarantee that voters will be manipulated in the future is that this is simply the latest version of something that has happened ever since the birth of democracy.

Politicians’ speeches have always said more about their rivals’ faults than their own policies, and all parties employ spin-doctors and PR companies to help sway the vote. What is different now is: a) that the untruths can be buried in such a flood of data that we simply do not have time to analyse and expose them, and b) that social media can make them look more like the opinions of good friends and respected authorities than political rhetoric.

This is where the change has taken place: a shift from open broadcast to what seems like personal contact; from the means of communication to the human recipient. A fiery speech might convince an audience of several hundred, whereas a fiery tweet could influence millions.

Until recently, defending the truth meant protecting computer systems from being hacked and loaded with false data; now it is increasingly about protecting minds from being loaded with such data. The same shift is happening with cybercrime: today’s hot security news is less about next-generation firewalls and intrusion prevention systems, and more about how to train people to resist phishing attacks.

Fake news will be readily accepted as long as it matches the prejudices of a particular group, whose members will eagerly pass it on without checking its truthfulness. This free distribution channel can be augmented by teams of human activists employed to spend all day posting comments to social media platforms. But now there are social bots to automate the process and insert millions of apparently human comments into sensitive contexts – possibly guided by sophisticated Big Data algorithms.

People were initially shocked by the revelation that a British company, Cambridge Analytica (“We find your voters and move them to action”), had been involved in so many US political campaigns. But is their apparent success proof of effectiveness? Or is the story itself fake news designed to boost the company’s profile?

Whether or not the company actually raised support for Donald Trump is still being debated, but the commercial success of advertising networks suggests that it could soon happen. Book a hotel online, and for months you will find appropriate ads popping up on every website you visit – though Big Data has yet to understand that a single night’s stay in a hotel does not mean that one is addicted to the experience.

If it is one step down the ladder of declining ethical standards to spread lies in order to make people love you, then it is one step further to spread lies, or even facts, to make people hate all others. The best way to discredit an opponent is to steal personal data, e-mails and sensitive information that can be leaked via the media or social channels. Neither clever direct marketing nor negative political campaigning is in itself against the law, but data theft leads us further into the realms of cybercrime.

Can Democracy Be Protected?

How can we counteract the abuse of social media for political ends? Restrictive legislation risks shooting incumbents in the foot – it would be a political disaster in a democracy. In any case, such measures have been proven time and again to be unworkable: even in North Korea and Iran there are ways and means to bypass official filters.

The debate between those crying out for Internet censorship and those defending freedom of expression will never be resolved. So what can be done in technological terms?

The first priority must be to find better ways to protect one’s own personal data – either to stop it from being used against you, as in a smear campaign or ransom attack, or to avoid becoming a target for certain types of misinformation. Credential theft is the most common way to get access to personal data: once someone or something knows your username and password you become “low-hanging fruit”, and there is an endless number of tricks used to steal those credentials.
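
One small, concrete defence against credential theft is to check whether a password has already leaked in a public breach. The minimal Python sketch below (the function name is my own) uses the k-anonymity model of the public Pwned Passwords range API, so only the first five characters of the password’s SHA-1 hash ever leave the machine.

```python
import hashlib
import urllib.request


def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach dumps.

    Only the first five hex characters of the SHA-1 hash are sent to the
    service; the full password never leaves this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    request = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-hygiene-sketch"},
    )
    with urllib.request.urlopen(request) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
    return 0


if __name__ == "__main__":
    hits = password_breach_count("password123")
    print(f"Seen {hits} times in known breaches" if hits else "Not found")
```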

As well as conventional anti-virus software for recognising known malware, there are a number of artificial intelligence solutions available that can recognise suspicious patterns and help defend against new, previously unseen attacks. In response, attackers keep developing new tricks to disguise malware in innocent-looking formats and even to conceal it within legitimate content.

Spear-phishing takes general phishing a stage further by using information already stolen to sharpen the targeting of the attack. For example, a spear-phishing e-mail might use your correct name, target your favourite hobby, refer to a recent purchase or appear to be signed by a friend. There are so many ways to inspire just enough trust for a too-busy victim to click the link offered, without realising where it is actually taking them.
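
To make “where it is actually taking them” concrete, here is a small, hypothetical heuristic in Python (the names are my own illustration) that flags one classic spear-phishing tell: a link whose visible text names one domain while the underlying href quietly points to another.

```python
import re
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (visible text, actual target) pairs for every <a> tag."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href") or ""
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def suspicious_links(html_body: str):
    """Flag links whose visible text shows one domain but which point elsewhere."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for text, href in auditor.links:
        shown = re.search(r"[\w-]+(\.[\w-]+)+", text.lower())
        actual = (urlparse(href).hostname or "").lower()
        if shown and shown.group(0) not in actual:
            flagged.append((text, href))
    return flagged


# The text claims "www.mybank.com" but the link goes somewhere else entirely.
print(suspicious_links(
    '<a href="http://login.evil-example.net/reset">www.mybank.com</a>'))
```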

The problem is that the Internet pleases its users by providing a rich, responsive multimedia experience – a far cry from the static pages of 20 years ago. All that responsiveness is possible because of the hidden “active content” that lies behind the surface appearance – the Flash, Java and other interactive elements.

Even a PDF or Word document is a lot more complicated than it looks on the surface – and it is in those hidden complexities that malware can be concealed, even on an otherwise legitimate and popular website.
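
To give a flavour of those hidden complexities, the toy Python script below, written in the spirit of keyword-triage tools such as pdfid, counts PDF name objects that signal embedded JavaScript or automatic actions. It scans only the raw bytes, so unlike real analysers it will miss anything hidden inside compressed object streams, and a hit means only that the file carries machinery a plain document never needs.

```python
import sys

# PDF name objects that commonly carry "active content" (the list is
# illustrative, not exhaustive).
RISK_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
                b"/Launch", b"/EmbeddedFile", b"/RichMedia"]


def triage_pdf(path: str) -> dict:
    """Count risky keywords in the raw bytes of a PDF.

    A non-zero count does not prove the file is malicious; it only shows
    that the document contains machinery a static page never needs.
    """
    with open(path, "rb") as handle:
        data = handle.read()
    return {marker.decode(): data.count(marker) for marker in RISK_MARKERS}


if __name__ == "__main__":
    for keyword, hits in triage_pdf(sys.argv[1]).items():
        if hits:
            print(f"{keyword}: {hits}")
```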

Keeping Our Hands Clean

The Internet is now hopelessly infected with malware. It is no longer just a question of knowing what is poison and blocking it, because the attacks are now being spawned automatically and evolve constantly.

Just as political campaigns are increasingly driven by spreading waves of false panic, so today’s malware spreads more through human misjudgement than through technological penetration. So people are being told that they must take ever more precautions: not just using and concealing more complex and less memorable passwords, but also checking the names of websites and looking for any number of signs that they might not be authentic.

The truth is that connecting to the Internet has become like connecting your plumbing to a source of foul water. The number of precautions entailed would be intolerable: boil all water before drinking, do not wash your hands in unboiled water or use cutlery that might have been washed in contaminated water, ask how clean a person’s hands are before you shake them… Instead of all those warnings, it would surely make more sense to sterilise the water on its way into the building?

The Internet equivalent of sterilised water is called isolation technology. Employees may think they are simply browsing the web and responding to e-mails in the ordinary way, but they are actually viewing “safe pages” that have passed through a secure process in the cloud, one that strips out all the hidden active content and re-creates a page that looks and behaves identically.

This in turn is transmitted between the cloud service and the user via secure servers and in strongly encrypted form – just like clean water being delivered through a clean pipe. In effect, it provides Internet access as a set of printed pages, with no possibility of active content or hidden malware – except that the pages still respond in the normal dynamic manner.
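
As a rough illustration of the stripping idea, and nothing more, the toy Python sketch below removes scripts, plugins and event-handler attributes from a page while keeping its visible content. A real isolation service goes much further, typically rendering the page remotely and delivering only a safe, interactive copy to the browser.

```python
from html.parser import HTMLParser

# Elements that carry executable or plugin content rather than visible text.
ACTIVE_TAGS = {"script", "iframe", "object", "embed", "applet"}


class ActiveContentStripper(HTMLParser):
    """Rebuild a page with scripts, plugins and event handlers removed."""

    def __init__(self):
        super().__init__()
        self.out = []
        self._skip_depth = 0  # > 0 while inside an active element

    def handle_starttag(self, tag, attrs):
        if tag in ACTIVE_TAGS:
            self._skip_depth += 1
            return
        if self._skip_depth:
            return
        # Drop inline event handlers (onclick, onload, ...) and javascript: URLs.
        safe = [(k, v or "") for k, v in attrs
                if not k.startswith("on")
                and not (v or "").lower().startswith("javascript:")]
        self.out.append("<" + tag + "".join(f' {k}="{v}"' for k, v in safe) + ">")

    def handle_endtag(self, tag):
        if tag in ACTIVE_TAGS:
            self._skip_depth = max(0, self._skip_depth - 1)
        elif not self._skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip_depth:
            self.out.append(data)


def strip_active_content(html: str) -> str:
    stripper = ActiveContentStripper()
    stripper.feed(html)
    return "".join(stripper.out)


# Example: the script and the onclick handler disappear, the text survives.
print(strip_active_content(
    '<p onclick="steal()">Hello <script>alert(1)</script>world</p>'))
```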

Protecting personal privacy might seem a far cry from reducing the sway of populist demagogues, but it is the basis of a broader process of disinfection from disinformation. The less one’s secrets are abroad, the less easy it is to target and play on prejudices, or to tempt with personalised PR. The less one is attracting floods of spam and adware, the more time one has to think and make reasonable choices.

The less time one is forced to spend washing and rewashing one’s hands, the more time is left for using them wisely. Democracy might once more be in safe hands.

Jason Steer

Jason is an engineer at heart and has been building and breaking computers and networks since 1996. He has worked at a number of successful technology companies over the past 15 years, including IronPort, Veracode and FireEye. He has appeared as a cyber expert with CNN, Al Jazeera and the BBC, and has worked with the EU and the UK Government on cyber-security strategy. Jason has spoken at numerous industry events, such as ENISE.