Are you considered an expert at work because you are the one who always knows about the latest technologies, yet your attempts to get colleagues to try them out rarely succeed?
Or worse, is it your job to drive the adoption of new technologies across corporations that have been following the same processes for years, and are quite happy doing so?
The situations above are common in today’s digital environment, judging from my own observations and from conversations with people like the expert described earlier. The question is: isn’t it contradictory that, even though we live in a hyper-connected world, our adoption of disruptive technologies is relatively slow?
The answer is obvious but not simple, as I’ll explain.
The human brain and technology adoption
I recently came back from South by Southwest, commonly known as SXSW, a digital technology, music, and film festival held in Austin. Among the many topics discussed, the one that stood out for me was the influence of human emotions and needs on technology design, which is resulting in robots that feel like humans, and in apps and wearable tech that measure emotions, values, and more.
Attending SXSW inspired me to write about this from a behavioural perspective, one that can be summarised as the need to humanise technology. The consensus from behavioural economics and neuroscience is:
A. As humans we resist change because it consumes a lot of energy in the brain, energy that could otherwise go to more urgent and immediate tasks, and
B. Even though we find technology rewarding, we are not prepared to abandon behaviours we have mastered over the years merely in the hope of optimisation and productivity.
What interests me most about this is that making processes more efficient, saving time, and reducing work are precisely the objectives many companies have in mind when developing new technologies.
Compared with how the human brain works, the pattern doesn’t quite match. The brain likes tasks that require attention, and enjoys solving challenges to the extent that when something takes effort, people love it more (see Dan Ariely on the IKEA effect). At the same time, the brain craves a human element, so the idea of replacing humanness with digital platforms and processes isn’t as appealing as it might seem.
What does this mean for technology adoption?
It seems that the most successful platforms are those that blend the right amounts of human and technology; of emotion and logic.
This means there is an opportunity to apply non-conventional thinking to understanding how new technology is used and adopted.
For instance, what if, instead of promoting new platforms as productivity boosters, IT departments and technology evangelists focused on the emotional benefits of using them, or on how much human connectivity people can get out of them without having to replace real interactions entirely?
As a secondary reward, product designers, marketers, and technologists could emphasise making people feel empowered when engaging with their products: rather than positioning the products as what makes people better at something, they could inspire users to believe they can achieve productivity, or whatever else, for themselves.
The world described above is one where individuals have the agency to adopt the technology they want to use, but can be positively influenced by designers, technologists, and futurists to embrace all its uses in a way that benefits them. It is worth noting that such benefits go beyond functionality: they include values such as connectivity and inclusion, and emotions such as achievement and completion.
From this perspective, the more in touch with human emotions technologists and marketers are, the closer they will be to the end user. After all, technology is an enabler of something else, and chances are that something is a basic manifestation of humanness.