The human body as interface, and a new language of interaction
As it stands, we have moved through a few core paradigms in how we interact with computers and software: from the earliest days of physical levers and punchcards, through text (DOS and the like), on to the GUI of Windows- and Mac OS-style interfaces, and more recently touch, gesture and voice. These last three signal a major shift towards a more natural way of interacting, making use of the kinds of behaviours we understand intuitively, either because we have evolved to perform them, or because we learn them from a young age as useful behaviours in a range of contexts.
Touch interfaces can mimic real-world controls, as in the original skeuomorphic design trends apparent in Apple iOS apps, which leveraged our immediate recognition of physical buttons and controls and our intuition to touch them. Motion-control systems like the Kinect and the Wii let us perform almost the same movements we would in a real-life version of the games they support: swinging our arms with an imaginary tennis racket to hit a ball, or jumping and ducking to make a character do the same. Apple’s Siri makes use of perhaps our deepest and most universal behaviour: language. Individual quirks of these platforms aside, in concept they are all far easier to learn than text- or graphically-based interfaces, because at heart there is no learning curve. The techniques we use to control the software in each case are already known; it is just a matter of matching our existing behaviours to the functions we want the software to fulfill (understanding that a three-finger swipe fulfills a different function from a two-finger swipe on an iPad, for example, or that leaning left or right steers a boat left or right in a Kinect or Wii game).
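To make that "matching" idea concrete, here is a minimal sketch in Python of the kind of gesture-to-function lookup that sits at the heart of these interfaces. Every gesture name and command here is hypothetical, invented purely for illustration:

```python
# A minimal sketch of matching existing behaviours to software functions.
# All gesture names and commands are hypothetical, for illustration only.

GESTURE_BINDINGS = {
    ("swipe", 2): "scroll_page",      # two-finger swipe moves the content
    ("swipe", 3): "switch_app",       # three-finger swipe changes context
    ("lean", "left"): "steer_left",   # whole-body lean steers a boat in a game
    ("lean", "right"): "steer_right",
}

def handle_gesture(kind: str, qualifier) -> str | None:
    """Translate a recognised gesture into a software command, if one is bound."""
    return GESTURE_BINDINGS.get((kind, qualifier))

print(handle_gesture("swipe", 3))  # -> switch_app
```

The interface designer's work, in this framing, is choosing which rows go in that table so that the binding feels inevitable rather than learned.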
In each of these cases we develop a language of interaction, comprising all the micro-movements and behaviours that make up the larger movements and behaviours controlling the software and technology we are interacting with. The way we use a mouse, piloting the arrow around a GUI, or the way we swipe and tap through Flipboard, are small examples of these languages of interaction.
So what is the next frontier? It seems likely that the human body itself will become the interface in many scenarios. We are already at the dawn of the era of wearable tech, and rising fast from university science and engineering labs are examples of technology that is highly integrated with our bodies (smart contact lenses, for example). The language of interaction for this future era will be even more intuitive and subtle than before, likely, in many cases, to the extent that we can simply perform completely natural behaviours. Sensors worn on or implanted in our bodies are already more than sensitive enough to detect extremely small movements, so gestures like nodding and shaking the head can be easily translated into software commands. The Hollywood vision of Minority Report-style interfaces, with their bold, sweeping arm movements, will not be necessary at all. Small flicks of the wrist or taps of the fingertips will be more than enough to control even complex interfaces.
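As a rough illustration of how such small movements could become commands, here is a sketch that classifies a short window of gyroscope readings from a head-worn sensor as a nod or a shake. The axes, thresholds and function names are all assumptions made for the example, not any particular device's API:

```python
# A sketch of classifying head gestures from a head-worn gyroscope.
# Samples are assumed to be (pitch, yaw) angular velocities in deg/s;
# the thresholds and window size are invented for illustration.

NOD_THRESHOLD = 30.0    # average pitch motion (head tilting forward/back)
SHAKE_THRESHOLD = 30.0  # average yaw motion (head turning left/right)

def classify_head_gesture(samples: list[tuple[float, float]]) -> str:
    """Classify a short window of gyro samples as 'nod', 'shake' or 'none'."""
    pitch = sum(abs(p) for p, _ in samples) / len(samples)
    yaw = sum(abs(y) for _, y in samples) / len(samples)
    if pitch > NOD_THRESHOLD and pitch > yaw:
        return "nod"    # e.g. mapped to "confirm" in the software
    if yaw > SHAKE_THRESHOLD and yaw > pitch:
        return "shake"  # e.g. mapped to "cancel"
    return "none"

# Strong alternating pitch with little yaw reads as a nod:
print(classify_head_gesture([(45.0, 2.0), (-40.0, 1.0), (38.0, -3.0)]))  # -> nod
```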
The future job of Interaction and User Experience (UX) designers, then, will be to translate software functions into the most intuitive and easiest-to-perform movements and behaviours. Nodding and shaking the head is an obvious solution to a few key interaction problems, but it is beyond the obvious that things get interesting and exciting. A small rotation of the wrist can leave the palm facing up or down, and even trying this in the air in front of you elicits different associations and expected meanings: the upward-facing palm feels receptive and requesting, while the downward-facing palm indicates finality or halting. As with all languages, even languages of interaction, there are cultural considerations too. In many European countries and in North America, we hail cabs and get people's attention by raising our hands above our heads, sometimes with a finger or two extended. In Asia and parts of Africa this is done instead with the arm outstretched at or below shoulder height, palm down, making a patting motion, like patting a child on the head.
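That cultural variation could even be encoded directly in the interaction layer. A toy sketch, with hypothetical locale codes and gesture names, of how a single "summon" function might bind to different gestures per region:

```python
# A toy sketch of culturally aware gesture bindings: the same software
# function mapped to different gestures per region. Locale codes and
# gesture names are hypothetical, for illustration only.

SUMMON_GESTURES = {
    "en-US": "arm_raised_overhead",  # hand above the head, finger extended
    "fr-FR": "arm_raised_overhead",
    "th-TH": "palm_down_patting",    # arm at shoulder height, palm down, patting
}

def gesture_for_summon(locale: str) -> str:
    """Return the culturally appropriate 'summon' gesture for a locale."""
    return SUMMON_GESTURES.get(locale, "arm_raised_overhead")

print(gesture_for_summon("th-TH"))  # -> palm_down_patting
```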
There is a world of complexity and psychology just waiting for interaction designers to get stuck into, and the future, as is often the case, appears to be just around the corner.