Computers have advanced a huge amount over the last few decades. They’ve become an indispensable part of how we work, how we relax, and, most importantly, how we relate to one another. But for all their importance, there is still one thing a computer cannot do: learn and reason.
That is, until now!
Google’s DeepMind subsidiary is exploring how reinforcement learning can teach computers to navigate complex situations. Its new AI has learned to walk all on its own, challenging the idea that computers can’t replicate this previously human-specific ability.
As you can see, the “walking” that the computer does is… interpretive. But it does work. That’s because the computer had to figure out how to “walk” effectively with no outside influences. All the DeepMind developers did was give it sensors for balance and send it on its way.
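That trial-and-error loop is the heart of reinforcement learning: the agent tries actions, gets a reward signal, and gradually learns which actions pay off. Here’s a minimal sketch of the idea, a toy Q-learning agent on a hypothetical one-dimensional track. This is not DeepMind’s actual setup (they trained simulated bodies in a physics engine with far more sophisticated methods); it just shows how behavior can emerge from rewards alone, with no walking rules coded in.

```python
# Toy reinforcement learning sketch (hypothetical example, not DeepMind's code):
# an agent on a 1-D track learns to reach a goal purely from reward signals.
import random

random.seed(0)

N_STATES = 6          # positions 0..5; position 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + future value.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy heads straight for the goal.
s, steps = 0, 0
while s != N_STATES - 1:
    a = max(ACTIONS, key=lambda act: q[(s, act)])
    s = min(max(s + a, 0), N_STATES - 1)
    steps += 1
print(steps)  # 5: the shortest path, discovered from rewards alone
```

Nobody told the agent to move right; it worked that out because moving right is what the reward encouraged, which is exactly the principle behind the wobbly simulated walkers.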
Eventually, flat ground got too easy, and the AI learned to jump over gaps in the floor.
There are three models so far: one with two legs, one with four legs, and a fully humanoid figure with arms and a head. All the models were successful, but the four-legged one was especially skilled at jumping.
In truth, computers have been capable of rudimentary “learning” for years. It’s how your browser knows which ads to show you. But the important thing here is that it suggests computers can learn and then apply that knowledge much the way a human would.
DeepMind is also a professional gamer. It has learned to play Space Invaders so well that, so far, no human player has been able to beat its high score.
The future seems bright for artificial intelligence. However, some worry that this could be the beginning of the end for humans. There’s little actual proof for this, so we’ll just have to wait and see.
For now, just be comfortable with the fact that the cutting edge of artificial intelligence looks like a wacky inflatable tube man.