Artificial Intelligence: Choosing The Right Flavour


It is common to see technologists put artificial intelligence (AI) technology into a box and wax lyrical about their vision of how it will impact humanity. Elon Musk made headlines when he publicly stated that AI, in his opinion, posed a significant threat and was in need of regulation, going so far as to call it a “fundamental risk to the existence of civilisation”. Meanwhile, Mark Zuckerberg called such warnings “irresponsible”, and accentuated the benefits AI could provide in saving lives through medical diagnoses and driverless cars.

It is important to bear in mind that the form of AI being discussed by Musk and Zuckerberg relates primarily to artificial intelligence that has ‘human level’ cognitive skills, otherwise known as AGI or ‘Artificial General Intelligence’. Despite impressive progress in a range of specialities (from driving cars to playing Go), this technology is by no means imminent.

What current debates tend to ignore is that AI is already in common business use today, and that its real risks have nothing to do with civilisation-ending catastrophe. Instead of worrying about such scenarios, we should focus our energies on the very real risks this technology poses in the here and now if it is used incorrectly. These dangers include regulatory violations, diminished business value and significant brand damage. Though not cataclysmic in their impact on humanity, they can still play a major role in the success or failure of organisations.

As a refresher, not all artificial intelligence is created equal. AI comes in two flavours – Transparent and Opaque. Both have very different uses, applications and impacts for businesses and users in general. For the uninitiated – Transparent AI is a system whose insights can be understood and audited, allowing one to reverse engineer each of its outcomes to see how it arrived at any given decision. Opaque AI, on the other hand, is an AI system that cannot easily reveal how it works. Not unlike the human brain, any attempt to explain exactly how it has arrived at a particular insight or decision can prove challenging.
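To make the distinction concrete, here is a minimal sketch of what "reverse engineering a decision" looks like in practice. The feature names and weights are purely hypothetical; the point is that a Transparent model can return not just a score but the per-feature contributions behind it, so any outcome can be audited.

```python
def transparent_score(applicant, weights):
    """Linear scorer: each feature's contribution to the decision is visible,
    so the outcome can be reverse engineered feature by feature."""
    contributions = {f: applicant[f] * w for f, w in weights.items()}
    return sum(contributions.values()), contributions

# Hypothetical loan-scoring weights, for illustration only.
WEIGHTS = {"income": 0.5, "years_at_address": 0.3, "existing_debt": -0.7}

applicant = {"income": 4.0, "years_at_address": 2.0, "existing_debt": 1.0}
score, why = transparent_score(applicant, WEIGHTS)
# 'why' shows exactly how the score was reached; an Opaque system (a deep
# neural network, say) would only hand back the final number.
```

An Opaque system, by contrast, offers no equivalent of the `why` dictionary: you can observe its outputs, but not the reasoning that produced them.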

Though the labels ‘Opaque’ and ‘Transparent’ may have emotive connotations attached to them, we should not be influenced by these. There’s no such thing as ‘good’ or ‘bad’ AI – only appropriate or inappropriate use of each system, depending on one’s own needs. Opaque AI has a number of positive aspects to it which can prove useful in the right circumstances. Being transparent is a constraint on AI and will limit its power and effectiveness, and therefore in some cases an Opaque system might be preferable.

A potential problem with an Opaque system is bias creeping in. Without your knowledge, an Opaque AI system could start favouring policies that break your organisation’s brand promise. It’s surprisingly easy for an AI system to use neutral data to work out customer details which it can then use to make non-neutral decisions. So, for example, an Opaque AI in a bank could interpret customer data and use it to start offering better deals to people based on race, gender or other demographics – with disastrous results for the organisation.

In highly regulated industries the choice between Transparent and Opaque becomes quite crucial. In financial services, proper use of Opaque AI in lending will result in fewer errors and improved accuracy. But it becomes a challenge or even a liability if banks are required to demonstrate how these operational improvements were achieved through reverse engineering the decision process (as mandated by GDPR, for instance).

Organisations can determine whether or not they are using AI correctly, and which type of AI works best, by deciding how much they are willing to trust it. To place complete trust in an AI system, either the AI needs to be Transparent so that business management can understand how it works or, if the AI is Opaque, it needs to be tested before it is taken into production. These tests need to be extensive and go beyond verifying that the system delivers business outcomes, looking also for the types of unintended biases outlined above.
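The kind of pre-production test described above can be sketched simply: probe the Opaque system with labelled test cases and compare outcomes across demographic groups. The model, groups and threshold below are all hypothetical stand-ins; a real audit would use production-scale data and an agreed fairness metric.

```python
def opaque_model(applicant):
    # Stand-in for a black-box decision system whose internals are unknown.
    # Note: postcode can act as a proxy for demographics, as discussed above.
    return applicant["postcode_score"] > 0.5

# Hypothetical labelled test cases spanning two demographic groups.
test_cases = [
    {"group": "A", "postcode_score": 0.9}, {"group": "A", "postcode_score": 0.7},
    {"group": "A", "postcode_score": 0.6}, {"group": "B", "postcode_score": 0.4},
    {"group": "B", "postcode_score": 0.8}, {"group": "B", "postcode_score": 0.3},
]

def approval_rate(cases, group):
    members = [c for c in cases if c["group"] == group]
    return sum(opaque_model(c) for c in members) / len(members)

rate_a = approval_rate(test_cases, "A")
rate_b = approval_rate(test_cases, "B")
# A large gap between groups flags a bias the Opaque model cannot explain itself.
biased = abs(rate_a - rate_b) > 0.2
```

If `biased` comes back true, the system should not reach production as-is, regardless of how accurate it is overall.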

The GDPR will be enforced in Europe from May 2018, mandating that companies must be able to explain exactly how they reach certain algorithmic-based decisions about their customers. Organisations that can use some sort of switch to increase transparency – forcing the methods AI uses to make decisions from Opaque to Transparent – will have a distinct advantage: compliance becomes much easier.

Businesses are increasingly at a crossroads when it comes to selecting which AI system is right for them. One might think that a Transparent system would be the preferred choice of many if the choice were unencumbered, but in reality, it may be quite a tough decision to make. Would you, for instance, insist on Transparent AI to diagnose patients if you knew that an Opaque alternative was available which was more likely to diagnose correctly and save lives? In some cases, the deciding factor may be marginal, with a number of issues relating to profitability, customer experience and regulation to consider before organisations can come to a decision.

Though some technologists are vocal with dystopian views of AI rising up against humanity, let’s not overlook the risks posed by artificial intelligence here today and make sure we’re choosing the flavour of AI that’s right for each of us.

Rob Walker

Dr. Rob Walker is Vice President, Decision Management at Pegasystems.