Man vs Machine

If you haven’t heard already, there is a pretty interesting event going on right now. One of the best Go players (Lee Sedol) is playing against an AI from Google (Deep Mind).

Public opinion has always been that AIs still have a long way to go before they can beat humans. That’s not the case anymore. Deep Mind already beat a professional Go player (Fan Hui) last October, and now it’s playing against one of the best.

It’s best of five and the first two games have already been played. Here are the videos:

At the beginning of the second match they have someone from the Deep Mind team giving a short explanation of the AI.


Well… the MAIN limitation that Deep Blue had against Garry Kasparov was CPU and conventional storage. They actually ended up modifying the algorithms and such based on memory limits and CPU power.

This was in 1996. Long enough ago that children born in that era can now legally drink in most places of the world (except for maybe a couple). We have likely added a great deal of processing capacity since then.

If anything, the goal might shift toward back-porting rather well-done AIs to older hardware. You know, for a bit of a challenge.

Deep Blue was mainly a brute force algorithm with special hardware. That was enough for chess, but Go is a much more complex game.
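Just to put rough numbers on why brute force falls over: here is a back-of-the-envelope comparison of game-tree sizes, using the commonly cited average branching factors (~35 legal moves per position in chess, ~250 in 19x19 Go). These are ballpark figures for illustration only, not anything from the match coverage:

```python
def tree_size(branching_factor: int, depth: int) -> float:
    """Approximate number of leaf positions in a full search tree of the given depth."""
    return float(branching_factor) ** depth

# Commonly cited average branching factors (rough approximations).
for game, b in [("chess", 35), ("go", 250)]:
    for depth in (4, 8, 12):
        print(f"{game:>5}: depth {depth:2d} -> ~{tree_size(b, depth):.1e} positions")
```

At depth 12 that is roughly 3e18 positions for chess versus 6e28 for Go, which is why simply throwing more hardware at a plain brute-force search doesn’t get you very far.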

If hardware were a significant factor they would probably have talked about it in the video, but they didn’t mention it at all during the first match.


From my understanding, AI has reached a new level in the past few years thanks to deep learning. It’s not only a hardware problem anymore; the algorithms are essential.


I got forwarded an interesting paper on that AI… unfortunately I can’t seem to find an actual link anywhere. The guy it beat last year was the European champion, and it won 5-0: the first game by 2.5 points, and in the next four the human resigned. It’s already up 2-0 against the world champ. This wasn’t supposed to happen for another 10 years.

Algorithm-wise, it’s a conglomeration of several methods running in parallel. IIRC each method, running independently, beats the best existing Go AIs by itself. Emphasis is given to moves that are favored by multiple paths.
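As a toy illustration of that last point (this is just my reading of “emphasis is given to moves favored by multiple paths”, not how the actual system is wired), combining several independent evaluators can be as simple as a weighted vote over candidate moves:

```python
from collections import defaultdict

def combine_move_scores(evaluations, weights=None):
    """Toy weighted vote: each evaluator maps candidate moves to scores in [0, 1].
    Moves that several evaluators like end up with the highest combined score."""
    weights = weights or [1.0] * len(evaluations)
    combined = defaultdict(float)
    for weight, scores in zip(weights, evaluations):
        for move, score in scores.items():
            combined[move] += weight * score
    return max(combined, key=combined.get)

# Hypothetical scores from three independent evaluators for three candidate moves.
evaluator_a = {"D4": 0.6, "Q16": 0.3, "K10": 0.1}
evaluator_b = {"D4": 0.4, "Q16": 0.5, "K10": 0.1}
evaluator_c = {"D4": 0.5, "Q16": 0.2, "K10": 0.3}

print(combine_move_scores([evaluator_a, evaluator_b, evaluator_c]))  # -> D4
```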

They published a paper in Nature. Sadly it’s behind a paywall, but a good library should have it.

http://www.nature.com/nature/journal/v529/n7587/index.html
http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html

I haven’t read it yet, but the short explanation from the second video sounded straightforward. They have one neural net which generates candidate moves, another neural net to evaluate a position (not in points, but as a percentage chance to win), and with that they do a tree search.
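To make that a bit more concrete, here is a heavily simplified sketch of that idea. Everything in it is a placeholder of my own: the “nets” are toy stand-ins so the example actually runs, and the real system uses Monte Carlo tree search rather than this fixed-depth loop, so treat it as a cartoon of “policy proposes, value evaluates, search combines”:

```python
import random

# Toy stand-ins so the example runs; in the real system these are deep neural
# networks trained on expert games and self-play. A "position" here is just an int.

def policy_net(position):
    """Return candidate moves with prior probabilities, most promising first."""
    rng = random.Random(position)
    moves = [(f"move{i}", rng.random()) for i in range(8)]
    return sorted(moves, key=lambda m: m[1], reverse=True)

def value_net(position):
    """Return the estimated probability that the side to move wins."""
    return random.Random(position ^ 0xBEEF).random()

def apply_move(position, move):
    """Return the (toy) position reached after playing the move."""
    return hash((position, move)) & 0xFFFFFFFF

def search(position, depth=3, width=4):
    """Depth-limited search that only expands the policy net's top moves and
    scores leaves with the value net (a win probability, not a point count)."""
    if depth == 0:
        return value_net(position)
    best = 0.0
    for move, _prior in policy_net(position)[:width]:
        child = apply_move(position, move)
        # The child's value is the opponent's win probability; ours is the complement.
        best = max(best, 1.0 - search(child, depth - 1, width))
    return best

print(f"Estimated win probability from the toy start position: {search(0):.2f}")
```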

edit: You can find the paper if you search for the title “Mastering the Game of Go with Deep Neural Networks and Tree Search”. I’m not sure about the copyright, so I’m not linking it.


It was actually in a local newspaper, now.

Well, more that the grandmaster champion just won a single game after losing four games. He won it the same way Garry Kasparov won against Deep Blue: he started Ralph Wigguming it all up. He started making completely ridiculous, silly moves that made no sense, and that kind of had Deep Mind start to glitch out. Which… is what Garry Kasparov did to Deep Blue.

Some of you are aware of my weird obsession with having a stand-by strategy I’ve named after the Simpsons character Ralph Wiggum. The idea is that in some cases the best way to win a match is to emulate Ralph Wiggum for long enough that your opponent pretty much ends themselves, having no possible reaction to Ralph Wiggum going toe to toe with them. It’s my currently held notion that we’d also learn more about how to properly handle and work out strategies by figuring out, more accurately and competently, how to make our AIs act like Ralph Wiggum.


That doesn’t sound like the game I saw. Lee Sedol won the fourth game because he played exceptionally well.

He lost the first three games, and for a while it looked like he (or anybody else) couldn’t beat Alpha Go. He even said that he always had the feeling that Alpha Go was ahead.

The fourth game was different. At the beginning Alpha Go was ahead again, but in the middle game Lee Sedol found a brilliant move that Alpha Go didn’t see coming and couldn’t handle. That was the first time that Alpha Go was behind.

Go has a lot to do with balance. If both players play reasonable moves, the player who is ahead initially will win. The player who is behind has to play at least some unreasonable moves; he has to get just a bit more than is reasonable in order to catch up.

My impression is that Alpha Go has trouble finding these moves when it is behind. It looks like it can’t distinguish between moves that are just a tiny bit unreasonable and moves that are simply stupid. Alpha Go did play some stupid moves when it was behind in the fourth game.

The guy apparently stated in the interview that this was what he did: start making some unreasonable/foolish moves, which triggered a glitch in the AI.

Now, just because a newspaper says that some game player in a foreign country who doesn’t speak English said something doesn’t mean it’s accurate. It might have been messed up in translation.