Click, Touch, Wave, Talk: The User Interface Of The Future

First there was the Character User Interface (CUI, pronounced "cooo-eey"), typified by green letters on a black screen. Then the Graphical User Interface (GUI, pronounced "goo-eey") came along, with its mouse and icons. Pen interfaces existed in the GUI era, but now smartphones and tablets are driving many more interaction approaches built around touch.

Now the GUI itself is going through a rebirth on mobile platforms, with many more new types of user interface control than we have seen in the past; we have gone way beyond simple buttons, drop-down lists and edit fields.

Many devices also support voice-driven operation, and although voice recognition has been around for over two decades, the experience remains poor and has more recently been drastically oversold by the likes of Apple. However, this is an area that is likely to improve radically in the coming years.

The Microsoft Kinect gaming platform provides yet another innovation in user interaction: a touchless interface that uses a camera to recognise gestures and movement. Microsoft are already making moves to take this form of interaction into the mainstream beyond gaming (http://www.bbc.co.uk/news/technology-16836031), as are many other suppliers, and we should see phones and TVs supporting it this year.

However, even some old methods of interaction are being given a new lease of life, such as Sony's inclusion of a rear touchpad and dual analogue sticks on the PlayStation Vita.

So, with all these modes of interaction, what does this mean for User Interface Designers? Shouldn't they really be called User Interaction Designers? How do you decide the best mode of interaction for an application? Should you support multiple modes of interaction? Should you use different widgets for different interactions? Should the user choose their preferred mode of interaction, with the application responding accordingly? Should the mode of interaction be decided by what the device supports? And are there standards for ALL these modes of interaction?

This emerging complexity of interaction methods raises many more questions than I've listed above. So far I have found only a little research in this area, but it is a moving target. The other lesson from the mobile world is how rapidly user behaviour changes as people get used to working in different ways.

Initially, I would expect most applications to stick to basic interactions like touch and click, so that the widest possible range of devices can be used; a sketch of detecting what a device supports follows below. Those targeting specific devices will be the "early adopters" of the interaction mode common to that device (e.g. 3D gestures on Xbox Kinect).
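To make the "decided by what the device supports" idea concrete, here is a minimal sketch of capability detection in a browser, written in TypeScript. It assumes standard DOM and speech APIs; the InteractionMode names and the chooseDefaultMode helper are my own illustration, not part of any particular framework.

    // Pick a sensible default interaction mode from device capabilities.
    type InteractionMode = "touch" | "pointer" | "voice";

    function chooseDefaultMode(): InteractionMode {
      // Touch support: present on most phones and tablets.
      if ("ontouchstart" in window || navigator.maxTouchPoints > 0) {
        return "touch";
      }
      // Speech recognition: only some browsers offer it, often prefixed.
      const Speech = (window as any).SpeechRecognition ||
                     (window as any).webkitSpeechRecognition;
      if (Speech) {
        return "voice";
      }
      // Fall back to the lowest common denominator: mouse/pointer clicks.
      return "pointer";
    }

An application could call this once at start-up and then let the user override the result, which keeps the widest-possible-device goal intact while still respecting preference.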

In the very long term, standards will evolve, and interaction designers and usability experts will collaborate to design compelling new applications that are "multi-interactive": choosing the most appropriate interaction method for each action, and sometimes supporting several interaction methods for a single action.
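As an illustration of what "several interaction methods for a single action" might look like in code, here is a sketch, again in TypeScript and again assuming browser APIs, that wires one action to click/touch, keyboard and (where available) a voice command. The bindAction helper and the exact-phrase matching are assumptions for the example, not an established pattern.

    // Bind one action to click/touch, keyboard and an optional voice phrase.
    function bindAction(button: HTMLButtonElement, phrase: string,
                        action: () => void): void {
      // Mouse clicks and touch taps both fire "click" in modern browsers.
      button.addEventListener("click", action);

      // Keyboard: Enter or Space triggers the same action.
      button.addEventListener("keydown", (e: KeyboardEvent) => {
        if (e.key === "Enter" || e.key === " ") action();
      });

      // Voice: only if the browser exposes speech recognition.
      const Speech = (window as any).SpeechRecognition ||
                     (window as any).webkitSpeechRecognition;
      if (Speech) {
        const recogniser = new Speech();
        recogniser.continuous = true;
        recogniser.onresult = (e: any) => {
          const heard =
            e.results[e.results.length - 1][0].transcript.trim();
          if (heard.toLowerCase() === phrase.toLowerCase()) action();
        };
        recogniser.start();
      }
    }

The point of the sketch is that the action itself stays single-sourced; only the routes into it multiply.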

Multi-interactive interfaces will make users' lives easier, but are you ready to provide them?

Dharmesh Mistry is the CTO/COO of Edge IPK, a leading provider of front-end Web solutions. In his blog, "Facing up to IT", Dharmesh considers a range of technology issues, from Web 2.0 and SOA to mobile platforms, and how these impact business. Having launched some of the very first online financial services in 1997, delivered online solutions to over 30 FS organisations since then, and pioneered the Single Customer View (Lloyds Bank, 1989) and multi-channel FS (demonstrated on Tomorrow's World in 1999), Dharmesh can be considered a true veteran of both the financial services and technology industries.