AI smartens up as robot learns through trial and error

Over the past few years, robotics and artificial intelligence (AI) have come a long way. The technology has evolved from soccer-playing spiders to bipedal robots and even sex dolls. Now researchers are turning to reinforcement learning to create AI.

Researchers at the University of California, Berkeley haven't quite built the T-101, but they've made a start on its predecessor. The team has developed algorithms that enable a test unit, called BRETT (Berkeley Robot for the Elimination of Tedious Tasks), to learn tasks through trial and error.


BRETT is given simple tasks, such as fitting together two building blocks or assembling part of a toy plane, without any details about its surroundings. Each task takes the unit a while, but it eventually gets it right.

The algorithm controlling BRETT's functions has a built-in reward system based on how effectively the unit completes a task. Movements that helped BRETT finish a task faster and more completely score higher, so they are tried first when tackling similar tasks. It's similar to how humans learn to do things. For example, we know gripping a pen with the thumb, index and middle fingers is easier than using just the thumb and pinky.
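As a rough illustration of that reward idea, the toy Python sketch below picks among a few hypothetical pen grips, scores each attempt, and gradually favours the grip that succeeds most often. The grip names, success rates and exploration rate are made-up stand-ins; Berkeley's actual system uses deep reinforcement learning, not this simple loop.

```python
import random

# Toy sketch of reward-driven trial and error. The grips and their
# success rates are hypothetical, purely for illustration.
GRIPS = ["thumb+pinky", "thumb+index", "thumb+index+middle"]
TRUE_SUCCESS_RATE = {"thumb+pinky": 0.2, "thumb+index": 0.6, "thumb+index+middle": 0.9}

def attempt(grip):
    """Simulate one attempt; reward 1.0 on success, 0.0 on failure."""
    return 1.0 if random.random() < TRUE_SUCCESS_RATE[grip] else 0.0

# Running average reward per action: the "built-in reward system".
value = {g: 0.0 for g in GRIPS}
count = {g: 0 for g in GRIPS}

for trial in range(500):
    # Mostly reuse the best-scoring grip so far, occasionally explore.
    if random.random() < 0.1:
        grip = random.choice(GRIPS)
    else:
        grip = max(GRIPS, key=lambda g: value[g])
    reward = attempt(grip)
    count[grip] += 1
    value[grip] += (reward - value[grip]) / count[grip]  # incremental mean

print(max(GRIPS, key=lambda g: value[g]))  # converges to the easiest grip
```

After enough trials, the highest-scoring action dominates, which is the same "what worked best registers first" behaviour described above.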

This, according to Professor Pieter Abbeel of Berkeley's Department of Electrical Engineering and Computer Sciences, is the key to artificial intelligence:

The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.

This process can take up to three hours if BRETT is given no coordinates for the task; with coordinates, the same task takes about 10 minutes. The gap shows how much longer learning something new takes than repeating a known action. The ever-changing environments of homes and offices remain a challenge: robots struggle to adapt to new surroundings when the details aren't pre-programmed.

Berkeley researchers modelled this new way of learning on the human brain, using so-called "neural nets". These systems layer artificial neurons over sensory data, which, much as with Siri, helps robots recognise patterns and categories in the data they receive.
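To make "layering artificial neurons over sensory data" concrete, here is a minimal Python sketch of a forward pass through a small stack of layers. The layer sizes, random weights and the mapping from sensor readings to motor commands are illustrative assumptions, not the network Berkeley used.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of artificial neurons: weighted sum plus nonlinearity.
    In a real system, training would adjust these weights using the
    reward signal; here they are random stand-ins."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ w)  # ReLU activation

sensor_data = rng.normal(size=128)  # e.g. flattened camera/joint readings
h1 = layer(sensor_data, 64)         # early layers pick up low-level patterns
h2 = layer(h1, 32)                  # deeper layers form broader categories
motor_command = layer(h2, 7)        # e.g. torques for a hypothetical 7-joint arm
print(motor_command.shape)          # (7,)
```

Each layer transforms the previous one's output, so raw sensor data is gradually turned into the categories and patterns the robot acts on.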

Abbeel is optimistic about the progress of this new method:

With more data, you can start learning more complex things. We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.

