AI Demonstrates Superiority in "Imperfect Information" Gaming

This post forms a trilogy of sorts with previous articles on advances in machine intelligence. In July 2017, I wrote an article entitled "Being More Human," which chronicled improvements in computer bots that let machines successfully pass the Turing Test. Then, in December, "Rise of the Machine" covered the remarkable achievement of the AlphaZero algorithm, which defeated the best purpose-built chess, shogi, and Go programs after only a few hours of self-study. (These posts can be found in this News thread by scrolling down.)

This article highlights another win for AI in the field of gaming and is based on the MIT Technology Review article here. This time, computer algorithms were teamed together to beat experienced DOTA 2 players. DOTA 2, developed by Valve, is an online battle-arena game that grew out of Defense of the Ancients, a community mod for Blizzard's Warcraft III (DOTA 2, Wikipedia).

Indeed, games like chess are difficult to master. The argument among chess enthusiasts these days is not whether computers are better at chess than humans, but whether chess can be "solved" by a computer. In this sense, "solving" the game means that, with best play on both sides, the outcome is known with certainty from the first move. An example of a solved game is Tic-Tac-Toe: against a second player who plays perfectly, the first player can do no better than a draw. People who argue chess can never be solved this way say we will never have the computing power to settle it definitively, because chess has as many legal variations in the first 20 moves as the grains of sand in...wait for it...the universe! State-of-the-art AI dominance in chess is achieved through neural networks, which don't rely on brute-force calculation but instead narrow the search to "best moves," closely mirroring the skill humans refer to as intuition.
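To make the idea of a "solved" game concrete, here is a minimal sketch (my own Python example, not from the article) that solves Tic-Tac-Toe by exhaustive negamax search. The value it computes for the empty board is 0, confirming that perfect play by both sides ends in a draw:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0-8 left to right, top to bottom.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value from the side-to-move's perspective: +1 win, 0 draw, -1 loss."""
    if winner(board):          # the previous player just completed a line
        return -1
    if "." not in board:       # board full, no winner
        return 0
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            opponent = "O" if player == "X" else "X"
            best = max(best, -value(nxt, opponent))
    return best

print(value("." * 9, "X"))  # 0 -> Tic-Tac-Toe is a draw with perfect play
```

The same exhaustive search is hopeless for chess, which is exactly the author's point about the size of its game tree.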

What's different in this latest chapter of the Man vs. Machine saga is that DOTA 2 is a game of "imperfect information," where the success of a strategy depends on how teammates (and the opposition) approach the game. (An analogy is the tuning dial on your stereo versus its presets: depending on the fidelity of your sound system, there is a near-infinite number of adjustments you can make.) By adding teammates, with their varying strategies and reactions to opponents, DOTA 2 presents machines working collaboratively with an entirely different set of problems than they encounter in a perfect-information, turn-based game like chess.
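One consequence of imperfect information is that optimal play is often a mixed strategy rather than a single "best move." As a toy illustration (my own sketch, not how the DOTA 2 bots actually work), here is regret matching in self-play on Rock-Paper-Scissors, the core update used in some imperfect-information game solvers; each player's average strategy drifts toward the equilibrium of playing every move one third of the time:

```python
# Regret-matching self-play on Rock-Paper-Scissors (a toy sketch; the real
# DOTA 2 bots were trained with large-scale reinforcement learning).
# Actions: 0 = Rock, 1 = Paper, 2 = Scissors.

def payoff(i, j):
    """Payoff to the first player: +1 win, 0 tie, -1 loss."""
    return 0 if i == j else (1 if (i - j) % 3 == 1 else -1)

def strategy(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

def selfplay(rounds=100_000):
    reg1 = [1.0, 0.0, 0.0]          # small initial bias to break symmetry
    reg2 = [0.0, 0.0, 0.0]
    avg1 = [0.0, 0.0, 0.0]          # running average of player 1's strategy
    for _ in range(rounds):
        s1, s2 = strategy(reg1), strategy(reg2)
        for a in range(3):
            avg1[a] += s1[a] / rounds
        # Expected payoff of each pure action against the opponent's strategy.
        u1 = [sum(s2[j] * payoff(a, j) for j in range(3)) for a in range(3)]
        u2 = [sum(s1[i] * -payoff(i, a) for i in range(3)) for a in range(3)]
        ev1 = sum(s1[a] * u1[a] for a in range(3))
        ev2 = sum(s2[a] * u2[a] for a in range(3))
        for a in range(3):
            reg1[a] += u1[a] - ev1  # regret: how much better action a would
            reg2[a] += u2[a] - ev2  # have done than what I actually played
    return avg1

print(selfplay())  # each entry approaches 1/3
```

Against a deterministic opponent you would always win at Rock-Paper-Scissors; it is the uncertainty about the other side that forces randomized play, which is a small-scale version of the problem DOTA 2 poses.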

Of course, the most interesting aspect of all of this is that the machines learn optimal strategies through self-play, not from someone writing purpose-built programs to automate them.
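As a toy illustration of self-learning (my own sketch; the actual DOTA 2 bots trained with large-scale reinforcement learning over enormous numbers of games), here is a bot that teaches itself one-pile Nim (take 1 to 3 stones, last stone wins) purely by playing random games against itself and backing up the results:

```python
import random

# Self-play on one-pile Nim: players alternate removing 1-3 stones from a
# pile; whoever takes the last stone wins.  The known optimal strategy is to
# leave the opponent a multiple of 4.  The bot discovers this on its own via
# a negamax-style tabular backup over random self-play games (a toy stand-in
# for large-scale reinforcement learning).

random.seed(0)
N = 12       # largest starting pile size
Q = {}       # Q[(stones, take)] = value of that move for the player moving

def moves(n):
    return range(1, min(3, n) + 1)

for _ in range(5000):            # self-play episodes from random positions
    n = random.randint(1, N)
    while n > 0:
        a = random.choice(list(moves(n)))
        if n - a == 0:
            Q[(n, a)] = 1.0      # taking the last stone wins
        else:
            # My value here is the negation of the opponent's best reply.
            Q[(n, a)] = -max(Q.get((n - a, b), 0.0) for b in moves(n - a))
        n -= a

def best_move(n):
    return max(moves(n), key=lambda a: Q.get((n, a), 0.0))

for n in (5, 6, 7):
    print(n, "->", best_move(n))  # 5 -> 1, 6 -> 2, 7 -> 3: leaves a multiple of 4
```

Nobody told the bot the "multiple of 4" rule; it falls out of nothing but self-play and win/loss feedback, which is the point the paragraph above is making.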

With machine learning, we are getting to the point where, if winning strategies exist (even abstract ones), a machine can master them, and much faster than we would have imagined even a few years ago. It reminds me of the 1983 movie "WarGames," in which the computer decides chess is a better game to play than global thermonuclear war because chess can be won. But what if the computers of the next generation determine that global thermonuclear war (or some other catastrophe for humans) is winnable, for them at least?

I will conclude my comments on AI with some notable quotes about the potential and dangers of Artificial Intelligence:

  • Stephen Hawking:  “AI is likely to be either the best or worst thing to happen to humanity...The development of full artificial intelligence could spell the end of the human race.  It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.” 
  • Larry Page: “Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”
  • Elon Musk: “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time-frame. 10 years at most...We need to be super careful with AI. Potentially more dangerous than nukes.”
  • Bill Gates: “A breakthrough in machine learning would be worth ten Microsofts...I am in the camp that is concerned about artificial intelligence.  First the machines will do a lot of jobs for us and not be super intelligent.  That should be positive if we manage it well.  A few decades after that though the intelligence is strong enough to be a concern.  I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” 

It is my sincere hope the machines will not hold this thread against me in the near future...