Given the incredible advances in natural language understanding (NLU), one would be forgiven for thinking that this is what artificial intelligence is all about.
Try asking Alexa or Siri for an Indian restaurant near you, and you will most likely be given the right answer. Or try having a text conversation with one of the many Facebook Messenger-powered chatbots, and you will be struck by how well it seems to ‘understand’ you.
I tried recently with my bank’s virtual assistant, and loved the way it responded so accurately to my request for my latest balance, or for help moving some money from one account to another. It was almost as if I were engaging with a human assistant.
However, my amazement ended when I asked for help choosing the best loan product. It was almost as if the chatbot had a brain freeze, and I was immediately offered a link to the loan products on their website and an offer for a financial advisor to give me a call.
The same thing happened when I was engaging with a self-service chatbot on my telco provider’s website. All went swimmingly until I asked for help with my WiFi, which was not working. At that point, I was quickly offered a link to a very unhelpful document on WiFi troubleshooting. As a result, I ended up calling the contact centre — which, by the way, also struggled to solve my issue.
This experience is not isolated. I have yet to encounter a self-service chatbot that is capable of diagnosing my issue and offering me a solution when my context is specific and there are many possible answers. In my experience, most chatbots simply act as digital assistants. With more advanced NLU, these assistants seem to understand me better and better, even when I use different words or expressions. However, they never seem able to offer me any meaningful advice. For that, I am always pointed in the direction of a human.
But why is that? Why can we make the self-service ‘front end’ so intelligent while the ‘back end’ remains far less impressive? What is preventing so many virtual assistants from upgrading themselves to virtual advisors?
Part of the problem seems to be the different logic that an assistant and an advisor need to apply. The assistant typically looks to answer a specific question or execute a specific instruction. The challenge for the assistant is first to understand the question or instruction, and then to locate the specific answer among all the possible answers available. If the answer does not exist, the assistant is stuck.
So if you ask, in connection with a specific image of a dog that you are looking at, ‘what is the breed of this dog?’, the assistant needs first to recognise the meaning of the question, then know what a ‘dog’ is, and what a ‘breed’ is in relation to a ‘dog’. It must also be able to recognise the dog in your image against the hundreds of images it may have access to.
Change the question slightly, say to ‘where does this breed of dog originate from?’, and a different answer must be given based on different logic patterns.
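To make that pattern concrete, here is a minimal sketch in Python of the ‘assistant’ logic described above: classify the question into a known intent, look the answer up in a fixed store, and hand off to a human when no answer exists. The intents, keyword matching and answer table are entirely hypothetical illustrations, not any real chatbot framework.

```python
# Hypothetical answer store: one canned answer per known (topic, intent) pair.
ANSWERS = {
    ("breed", "identify"): "This looks like a Border Collie.",
    ("breed", "origin"): "The Border Collie originates from the Anglo-Scottish border.",
}

def classify_intent(question: str):
    """Map a question onto one of the known (topic, intent) pairs."""
    q = question.lower()
    if "breed" in q and "originate" in q:
        return ("breed", "origin")
    if "breed" in q:
        return ("breed", "identify")
    return None  # the question falls outside what the assistant knows

def assistant(question: str) -> str:
    intent = classify_intent(question)
    if intent is None or intent not in ANSWERS:
        # No matching answer exists: the assistant is stuck and hands off.
        return "Let me connect you with a human who can help."
    return ANSWERS[intent]

print(assistant("What is the breed of this dog?"))
print(assistant("Where does this breed of dog originate from?"))
print(assistant("Which dog would suit my small flat?"))  # no answer on file
```

However sophisticated the intent classification becomes, the shape is the same: find the one pre-existing answer that matches, or give up.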
Just achieving a reasonable level of accuracy, given the millions of possible questions you can ask on any topic, is what makes this so incredible. And it truly is.
However, finding the right answer from millions of possibilities is not good enough if you are offering someone advice. Advice requires that you firstly understand the specific context of the problem, before you look to identify possible solutions. And in most cases, your solutions are based on prescribed logic rules that require a record proving the advice was given in line with these rules.
Achieving this structured, consistent and compliant level of logic requires more than mere pattern recognition. It requires the ability to execute a prescribed set of logic paths that can respond to any known context, and can be tracked. And it requires mapping that is not trapped by decision tree or knowledge base thinking.
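By way of contrast with the assistant sketch above, here is a minimal illustration in Python of what that ‘advisor’ logic might look like: the context is gathered first, a prescribed set of rules is evaluated in order, and every decision is recorded so the advice can later be shown to comply with those rules. The rules, thresholds and field names are entirely hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdviceSession:
    context: dict                       # facts gathered about the customer first
    audit_trail: list = field(default_factory=list)

    def record(self, rule_id: str, outcome: str) -> None:
        """Log which rule fired and what it concluded, with a timestamp."""
        self.audit_trail.append({
            "rule": rule_id,
            "outcome": outcome,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Prescribed logic paths: each rule is (id, condition, advice).
LOAN_RULES = [
    ("R1", lambda c: c["income"] < 20_000, "A personal loan is not advisable at this income."),
    ("R2", lambda c: c["existing_debt"] > 0.4 * c["income"], "Reduce existing debt before taking a new loan."),
    ("R3", lambda c: True, "A fixed-rate personal loan fits this profile."),
]

def advise(session: AdviceSession) -> str:
    for rule_id, condition, advice in LOAN_RULES:
        if condition(session.context):
            session.record(rule_id, advice)   # the compliance record of the path taken
            return advice
    session.record("NONE", "escalated to human advisor")
    return "Let me put you in touch with a financial advisor."

session = AdviceSession(context={"income": 45_000, "existing_debt": 25_000})
print(advise(session))        # R2 fires: debt exceeds 40% of income
print(session.audit_trail)    # the tracked record the rules demand
```

The point of the sketch is the order of operations: context first, then prescribed rules, then a record of which path was followed. That is a very different job from pattern-matching a question to a stored answer.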
This level of advice remains, for many, a digital dream. But it is one that organisations around the world are focused on solving, and they are seeing results.
Feature image: rawpixel via Pexels