Technology has a strange pattern: at first it solves a problem, and later it becomes a problem itself.
Why do we need to take out our mobile phones and press a button to switch ON the lights (smart lights)? Why do we need to take out a smartphone to switch ON the AC (smart AC)? Why do we need to take out our mobile phone to check the weather before carrying an umbrella (smart umbrella)? We have all been shown this so-called future by these and so many similar-looking pictures, all the time.
Unfortunately, our good intentions with interfaces are creating more problems than they solve. According to a 2013 study by the National Safety Council, more than a quarter of all car crashes in America are likely caused by cell phone use. Mobile addiction, a term almost unheard of until recently, is now widely acknowledged by society and medical science.
As technologists and end users, we all recognise that excessive screen dependence is detrimental in so many ways, whether mentally, psychologically, or simply as a matter of "lost time". Yet for every problem we think mobile "first". The internet is littered with recommendations for "5 apps that will change your life", and that list changes every month. A study conducted by Nielsen showed that the average American uses fewer than 30 apps. When last checked, Android had 2.2 million apps and Apple around 2 million, all aimed at keeping people glued to their screens. Most of these apps, in spite of all the good intentions and good investment, are ignored and fail.
The reason for a screen
Why is this a problem? What is wrong with sticking a screen onto every experience? The first big assumption we make is that the user has enough attention bandwidth and infinite memory to respond to every app's demands for input and data. This has been shown to be far from the truth. Research has shown that our brain can keep track of at most about seven things at any given time. What would you do when there are 100 apps vying for your attention?
The second erroneous assumption is that the user already knows how to work and interact with a screen. Unfortunately, only about 40% of people worldwide have internet connectivity today, and (this is also a fact of our world) less than 10% of the population in developing countries is digitally literate.
The third assumption we get wrong is that everyone knows how to negotiate the UI features and techniques on screens. Why do payment gateways not all offer the same experience? Why are there so many different ways to check out a product? The situation worsens further when a UI you are already familiar with changes completely in the name of an experience "upgrade".
The time has come to question the presence of visible interfaces at every point. Why do we need to take out our mobile phones and press a button to switch ON smart things? It is easy to spot the screen pattern across divergent tasks; interestingly, the screens have nothing to do with the use case, and yet they consume considerable effort and time. Over time, we have come to assume this is the only way of designing experiences.
We have ended up creating a disastrous formula by equating UI with UX. The role of a designer is not to create an engaging UI but to solve the problem and get the user's task done.
Invisible interface is a mindset, a way of thinking. It is a ‘screen-less’ mindset for a world filled with screens — of various sizes. It is a shift in paradigm — powerful and empowering.
Solving for the need itself
Empathy is much talked about in our communities, and for good reason. A screen-less mindset enables one to understand the real user need; it makes you want to solve for that need seamlessly, not by just sticking a screen onto the problem. Invisible interfaces are the humanistic approach to computing, which, by the way, is the core philosophy of design thinking. There are three important guiding principles, suggested by Golden Krishna (designer at Google) and Andy Goodman (design strategist at Fjord), for making interfaces invisible.
Embrace the process — Embracing the process reveals the underlying parameters required to serve the user's needs. Let us fix the smart light: by figuring out the owner's arrival time, the time of day, and the natural light conditions, the smart light, with the help of the mobile, can itself decide whether to switch ON, dim, or switch OFF. In my opinion, this is the true behaviour of a "smart" light; it is self-adjusting, and therefore frees up the user's mind and hands.
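The decision logic above can be sketched in a few lines. This is a minimal illustration, not a real product: the function name, thresholds, and sensor values are all assumptions chosen to show how the parameters (presence, time of day, ambient light) can drive the light without any screen interaction.

```python
from datetime import time

def decide_light_state(owner_home: bool, now: time, ambient_lux: float) -> str:
    """Pick a light state from context alone; no button, no app screen.

    All thresholds here are illustrative assumptions, not product values.
    """
    if not owner_home:
        return "off"                     # nobody home: save energy
    is_night = now >= time(19, 0) or now <= time(6, 0)
    if is_night and ambient_lux < 50:
        return "on"                      # dark evening: full brightness
    if ambient_lux < 200:
        return "dim"                     # dusk or overcast: top up natural light
    return "off"                         # plenty of daylight

print(decide_light_state(True, time(21, 30), 10.0))   # → on
```

The point is not the thresholds; it is that every input comes from sensors and schedules the system already has, so the user never has to supply them.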
Make the technology invisible — When you make computers invisible, you actually flip HCI on its head. It is no longer the human adjusting to the computer's needs, but the other way round. There is no learning curve, because the computer is invisible to you. You have completely removed any distraction between the human and the "computer"; it is screen-less (see!). Let us fix the smart AC: by figuring out daily usage patterns and learning the preferred comfortable temperatures, the "smart" AC self-regulates the temperature. As a user, you are always greeted with a comfortably temperature-controlled room. In order to remain invisible, the system takes decisions on the user's behalf, making the experience seamless.
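A sketch of that self-regulating behaviour, under stated assumptions: here the "learning" is a simple exponential moving average of the user's manual overrides per hour of day. The class name, default setpoint, and learning rate are illustrative, not a real thermostat algorithm.

```python
from collections import defaultdict

class SelfRegulatingAC:
    """Learns a preferred temperature per hour of day from manual overrides.

    Illustrative sketch: real systems would also weigh occupancy,
    season, and energy cost.
    """

    def __init__(self, default_setpoint: float = 24.0, alpha: float = 0.3):
        self.alpha = alpha
        self.preferred = defaultdict(lambda: default_setpoint)

    def record_override(self, hour: int, chosen_temp: float) -> None:
        # Each manual adjustment nudges the learned preference for that hour.
        old = self.preferred[hour]
        self.preferred[hour] = (1 - self.alpha) * old + self.alpha * chosen_temp

    def setpoint_for(self, hour: int) -> float:
        # The invisible part: the AC simply applies what it has learned.
        return round(self.preferred[hour], 1)

ac = SelfRegulatingAC()
for _ in range(10):              # the user repeatedly prefers 21°C at 10 pm
    ac.record_override(22, 21.0)
print(ac.setpoint_for(22))       # → 21.1 (converging on the user's preference)
```

After enough evenings, the override becomes unnecessary: the room is already at the temperature the user would have chosen, which is exactly the "flip" described above.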
Make it personal and contextual — Segmentation is not personalisation. In order to meet screen-size limitations, common features were bucketed into different categories. How can all 35-year-old working professionals in a city be the same? In order to make screens invisible, the system must understand the context in which it is operating. Let us fix the umbrella: the umbrella connects to a weather service and evaluates the probability of rain. If rain is likely, it gives visual feedback. It doesn't stop there: if the umbrella is also used as a sunshade, it can give similar feedback about strong sunshine in summer. We have just added another personalised feature without adding any new learning curve for the user.
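The umbrella's context-to-cue mapping might look like the sketch below. The signal names and thresholds are hypothetical; the point is that one existing feedback channel (a small light on the handle, say) carries both the rain and the sunshine cues, so the second feature adds no new learning curve.

```python
def umbrella_signal(rain_probability: float, uv_index: float) -> str:
    """Map weather context to a single visual cue; thresholds are illustrative."""
    if rain_probability >= 0.5:
        return "blink-blue"     # likely rain: take the umbrella along
    if uv_index >= 8.0:
        return "blink-amber"    # harsh summer sun: umbrella doubles as a parasol
    return "idle"               # nothing worth signalling

print(umbrella_signal(0.7, 2.0))   # → blink-blue
```

The user never opens a weather app; the context lookup happens behind the one object they were going to pick up anyway.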
All three guidelines are interconnected. It is encouraging that some organisations have already started moving in this direction and are seeing results. Once upon a time this was difficult, but now, with deep learning-based systems, it is well within the realm of the possible.
The downside
The counter-argument against invisible interfaces is the high implementation cost and extended timeline. But then, what is the point of a reasonably priced solution which does not, in real terms, solve the actual problem? In today's world, who has the time to check a weather app before carrying an umbrella?! Who will remember to switch ON the AC while coming out of a movie theatre?!
Making technology invisible requires unlearning a lot of what we have all been taught about HCI, but done correctly it leads to the best user experience. The time has come to move from a mobile-"first" to a mobile-"last" approach.