What should we make of the fact that robots are now ‘self-aware’?

Three small, innocent-looking robots (called “Nao Bots”) were recently tasked with solving a slightly altered version of philosopher Luciano Floridi’s induction puzzle, “The King’s Wise Men”. The test set out to show that robots could simulate self-awareness through logical deduction… and one of them managed to solve it.

The test took place at the Rensselaer Polytechnic Institute in New York and was conducted by Prof. Selmer Bringsjord, the Head of the Cognitive Science Department. It went as follows:
The three robots were each given a hypothetical “pill” (a soft tap on the head). Two of these were “dumbing pills” that would render the recipient speechless, while the third was a placebo and would do nothing. They were then asked: “Which pill were you given?”


Now, as only one of them could actually utter an answer, it was up to that robot to solve the puzzle. It proceeded to deduce that, since none of them were speaking, there was no way for it to discern which one had taken the placebo. It shuffled to its feet and answered: “I don’t know”.

And then, a bright spark. Hearing the sound of its own voice, it excitedly realises that it is the lucky one that has been spared the induced speech impairment and quickly changes its answer: “Sorry, I know now. I was able to prove that I was not given the dumbing pill.”

Reading this, you might think it an amusing and entertaining stunt, but for a robot to achieve it is rather groundbreaking, and the achievement will probably exert great influence on the evolution of artificial intelligence.

The robot had to understand the question it was posed and recognise its own voice as distinct from the other robots’. It then had to realise that, by hearing its own voice, it now had the answer to the original question.
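
To make that chain of reasoning concrete, here is a minimal Python sketch of the two-step inference. It is a hypothetical toy illustration only, not the reasoning software the Nao robots actually run; the answer function and the heard_own_voice flag are assumptions made for the example.

```python
# Toy sketch of the "wise men" deduction described above (hypothetical code,
# not the Nao robots' actual reasoning system).

def answer(heard_own_voice: bool) -> str:
    """Return the robot's answer to 'Which pill were you given?'."""
    if not heard_own_voice:
        # Before anyone has spoken, nothing distinguishes the placebo
        # recipient from the silenced robots, so the robot cannot know.
        return "I don't know"
    # Hearing its own voice proves the robot can still speak, which rules
    # out the dumbing pill; the placebo is the only remaining possibility.
    return "Sorry, I know now. I was not given the dumbing pill."


if __name__ == "__main__":
    print(answer(heard_own_voice=False))  # first pass: "I don't know"
    print(answer(heard_own_voice=True))   # after hearing itself speak
```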

This is a level of self-awareness and deductive reasoning that has never been showcased by a robot before.

Now, this is nowhere near the level of self-awareness that has been fantasised about in sci-fi cinema (Ex Machina, A.I., Blade Runner). These Nao Bots have been programmed to be self-aware in a specific situation, adhering to a certain rule set.

“We’re talking about a logical and a mathematical correlate to self-consciousness, and we’re saying that we’re making progress on that,” Prof. Bringsjord told Motherboard.

That being said, it is still a significant step towards actually creating a fully aware AI. Presently, one of the main problems scientists face in this regard is that current AIs simply don’t have the capacity to process as much information as the human brain.

Advances in artificial intelligence have been taking place at such a rapid rate recently, though, that one must admit that day can’t be too far off. Have we taken the time to properly consider the dangers and implications of creating such a fully aware, self-thinking digital entity? Now might be a good time.

“This is a fundamental question that I hope people are increasingly understanding about dangerous machines,” says Prof. Bringsjord. “All the structures and all the processes, informationally speaking, that are associated with performing actions of malice could be present in the robot”.

It seems that in the not-so-distant future we will reach a crossroads where we will have to weigh the dangers of AI against its benefits.

Prof. Selmer Bringsjord will be unveiling his research next month at the IEEE RO-MAN Conference in Japan, which will deal with the theme “Interactions with Socially Embedded Robots”.
