The self-driving car that learned to play Go

Posted by Mike Walsh

9/20/16 11:35 AM


There is a wonderful turning point in the movie ‘Fresh’ (1994) when 12-year-old Michael, surrounded by violence, drug dealers and ghetto enforcers, applies his father’s chess lessons to eliminating the forces that oppose him. Games are more than mere pastimes; they are frameworks for thinking. That is also why they make great teaching tools for humans and machines alike.


I first started playing Go in my early twenties. I discovered the game through my obsession with Japanese culture, and brought an antique wooden board to the offices of the small digital startup I was running at the time, challenging my co-workers Mike Cannon-Brookes and Niki Scevak. It didn't take them long to exceed my own limited skill, at which point they simply played each other. Still, the irony was not lost on me: here we were at the dawn of the digital revolution, playing a game invented more than 2,500 years ago.


The rules of Go are deceptively simple. It is played on a 19x19 grid with black and white stones, and the object is to surround the most territory on the board by the end of the game. However, the number of potential games is vast: more, it is said, than the number of atoms in the known universe. All those permutations create a huge search tree of possible moves, making it difficult for computers to use the kind of brute-force approach that Deep Blue used to beat Garry Kasparov at chess in 1997. Instead, something closer to intuition is required.
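
To get a sense of that scale, here is a back-of-the-envelope sketch in Python. The branching factors and game lengths are rough, commonly cited averages (around 35 legal moves over 80 plies for chess, around 250 legal moves over 150 plies for Go), not exact figures, so treat the output as order-of-magnitude only.

```python
import math

# Back-of-the-envelope game-tree size: branching_factor ** game_length.
# These figures are rough, commonly cited averages, not exact counts.
CHESS_BRANCHING, CHESS_PLIES = 35, 80    # ~35 legal moves, ~80 plies per game
GO_BRANCHING, GO_PLIES = 250, 150        # ~250 legal moves, ~150 plies per game

def tree_exponent(branching: int, plies: int) -> int:
    """Order of magnitude (power of ten) of branching ** plies."""
    return round(plies * math.log10(branching))

print(f"Chess: ~10^{tree_exponent(CHESS_BRANCHING, CHESS_PLIES)} possible games")
print(f"Go:    ~10^{tree_exponent(GO_BRANCHING, GO_PLIES)} possible games")
print("Atoms in the observable universe: ~10^80")
```

Even with these generous approximations, chess comes out near 10^123 and Go near 10^360, which is why exhaustively searching the tree, Deep Blue style, is a non-starter for Go.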


When AlphaGo, a program created by DeepMind (a division of Google), beat Fan Hui, Europe's reigning Go champion, in five straight games, it used a very different approach from Deep Blue's. The creators of AlphaGo trained its neural networks to recognize winning patterns from a massive library of 30 million expert Go moves. That alone would have created a competent Go player. But to create a truly world-class player, they went one step further: they made the system learn by playing against itself.
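
To make the idea of self-play concrete, here is a toy sketch: a simple tabular policy that learns tic-tac-toe by playing itself and nudging its move preferences toward the games it wins. This is a made-up minimal example of the self-play stage only, nothing like AlphaGo's deep networks or tree search, and every name in it is hypothetical.

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. A board is a 9-character string
# of 'X', 'O' and '.', indexed 0-8 left-to-right, top-to-bottom.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

class TabularPolicy:
    """Learned preference weights for (board, move) pairs."""

    def __init__(self):
        self.weights = defaultdict(float)

    def choose(self, board, epsilon=0.1):
        moves = legal_moves(board)
        if random.random() < epsilon:            # explore occasionally
            return random.choice(moves)
        return max(moves, key=lambda m: self.weights[(board, m)])

    def reinforce(self, board, move, reward, lr=0.5):
        # Nudge the weight toward the final outcome of the game.
        self.weights[(board, move)] += lr * (reward - self.weights[(board, move)])

def self_play_episode(policy):
    """One full game in which the same policy plays both sides."""
    board, player, history = "." * 9, "X", []
    while winner(board) is None and legal_moves(board):
        move = policy.choose(board)
        history.append((player, board, move))
        board = board[:move] + player + board[move + 1:]
        player = "O" if player == "X" else "X"
    return history, winner(board)                # None means a draw

def train(episodes=20000):
    policy = TabularPolicy()
    for _ in range(episodes):
        history, win = self_play_episode(policy)
        for player, board, move in history:
            reward = 0.0 if win is None else (1.0 if player == win else -1.0)
            policy.reinforce(board, move, reward)
    return policy

if __name__ == "__main__":
    policy = train()
    history, win = self_play_episode(policy)
    print("Result:", win or "draw", "in", len(history), "moves")
```

The striking thing about this recipe, even in toy form, is that the system generates its own training data: the better it gets, the harder the opponent it faces becomes.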


For most of us raised on pop culture, the idea of robots building robots should set off a few alarms. Certainly, true believers in the Singularity are phobic about AIs designing smarter AIs, but unless there is actually a Cold War-era supercomputer somewhere playing tic-tac-toe against itself, we probably have little to fear in the immediate future. What is interesting, however, is to imagine where these game-trained neural networks might end up.


Maybe one day, as your self-driving car navigates the labyrinth of London, it will be a variant of the AlphaGo neural net that manages the complexity of millions of autonomous vehicles moving in concert. Similarly, the next time you visit the hospital, you might find yourself interacting with IBM's Watson, originally schooled on 'Jeopardy!', now diagnosing your ailment. There might be a battle AI from a multiplayer game incorporated as an obstacle-avoidance system in Amazon's drone delivery network, or a dating site's pattern-matching algorithm that finds new life as an automated customer service avatar.


Look deep into the provenance of the AIs that will run our future world, and I am certain you will find the games they were first trained on.

Topics: Technology
