In July, I wrote an article entitled “Being More Human,” which talked about some of the advancements in cognitive computing and about how we, as humans, need to find new ways of contributing in our increasingly machine-driven society.
One example cited was the vast computing power demonstrated by computer mastery of games like chess, where the real champion of our planet is a machine, not a man. The World Computer Chess Championship was held November 13-16, featuring the ten strongest chess programs (“engines”) in the world. “Stockfish 8”, an open source, purpose-built chess engine, won the November tournament, narrowly beating “Houdini” with a score of 3 wins, 2 losses, and 15 draws. Stockfish also won similar tournaments in 2016 and 2014. More information on this tournament can be found here.
As a chess enthusiast, I watched a human grandmaster play an exhibition game “with odds” against Stockfish while this tournament was underway. The handicap given to Stockfish was that it had to play from the beginning of the game with only one knight. In addition to the human observers, other chess engines were scoring the game, and we all watched in amazement as Stockfish intrepidly overcame the deficit of a knight to the point where the human GM was happy to accept a draw (playing on any longer may well have resulted in a loss).
Grandmasters heralded this event as the apex of chess computing. Chess was effectively “solved”: these engines relied on massive opening books and endgame “tablebases” to handle the start and end of the game, while deep computational power allowed programs like Stockfish to score millions of positions per second when selecting moves in the middlegame.
Then, something altogether unexpected happened.
On December 5, AlphaZero, an algorithm developed by Google’s DeepMind division, defeated not only the top program in chess, but also the leading programs in shogi and Go.
- After being given the rules of chess and nine hours to “teach itself” the game, AlphaZero demolished Stockfish by the astonishing score of 28 wins, 0 losses, and 72 draws.
- In 100 shogi games, AlphaZero defeated elmo (the 2017 World Computer Shogi champion), winning 90 games, losing 8, and drawing 2.
- AlphaZero had similar results in Go, except this time it was matched against its DeepMind predecessor, AlphaGo Zero, which was already the World Champion of Go.
The natural reaction is to assume AlphaZero won because it could search deeper or faster, but in reality it taught itself to search better. AlphaZero examines just 80,000 positions per second in chess, compared with the 70 million per second Stockfish can evaluate, but it compensates by using a deep neural network to focus its search on the most promising variations rather than relying on Stockfish-style brute force.
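To make that contrast concrete, here is a toy Python sketch. It is not AlphaZero’s actual method (which pairs Monte Carlo tree search with a deep neural network trained by self-play); the `evaluate` and `policy` functions below are invented stand-ins. It compares an exhaustive look-ahead that visits every move with a selective look-ahead that only follows the moves its “policy” ranks highest, and counts how many positions each one touches.

```python
# Toy comparison: exhaustive search vs. policy-guided selective search.
# This is only an illustration; real engines prune with alpha-beta, and
# AlphaZero actually uses Monte Carlo tree search guided by a neural network.

import random

BRANCHING = 10   # legal moves per position (toy value)
DEPTH = 5        # plies of look-ahead

def evaluate(path):
    """Stand-in for a position evaluation: a deterministic pseudo-random score."""
    return random.Random(hash(tuple(path))).uniform(-1.0, 1.0)

def policy(path):
    """Stand-in for a learned policy: ranks candidate moves at this position.
    A real network would output move probabilities; here we simply reuse the
    toy evaluation of each child as its 'prior'."""
    moves = range(BRANCHING)
    return sorted(moves, key=lambda m: evaluate(path + [m]), reverse=True)

def exhaustive(path, depth, counter):
    """Negamax over every legal move: pure brute force, no pruning."""
    counter[0] += 1
    if depth == 0:
        return evaluate(path)
    return max(-exhaustive(path + [m], depth - 1, counter)
               for m in range(BRANCHING))

def selective(path, depth, counter, top_k=2):
    """Negamax that only follows the policy's top_k moves: far fewer positions."""
    counter[0] += 1
    if depth == 0:
        return evaluate(path)
    return max(-selective(path + [m], depth - 1, counter, top_k)
               for m in policy(path)[:top_k])

if __name__ == "__main__":
    brute_nodes, guided_nodes = [0], [0]
    exhaustive([], DEPTH, brute_nodes)
    selective([], DEPTH, guided_nodes)
    print(f"exhaustive search visited {brute_nodes[0]:>7} positions")
    print(f"selective search visited  {guided_nodes[0]:>7} positions")
```

Even in this toy setting, the selective search reaches the same depth while touching orders of magnitude fewer positions, which is the essence of searching better rather than searching more; AlphaZero’s real advantage, of course, comes from the quality of the network doing the ranking.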
Most interestingly, AlphaZero won by playing more like a human (or maybe a superhuman), using intuition where the event horizon for brute-force calculation fails. AlphaZero often sacrificed material against Stockfish to gain persistent but less concrete positional advantages. Grandmaster Peter Heine Nielsen said: “I always wondered how it would be if a superior species landed on earth and showed us how they played chess. Now I know.”
For more info on how AlphaZero plays chess, read this article.
If computers can master chess after only a few hours of self-study, they may be able to do many things, good and bad, if given a little more time. How long until they develop self-awareness?
Be kind to your computer, and (hopefully) it will be kind to you.