The ability of computers to beat humans at their own games has long been considered a benchmark for progress in artificial intelligence (AI). In this new study, researchers present AlphaZero - a computer program capable not only of achieving superhuman mastery of some of the most complex board games known, including chess, shogi and Go, but also of teaching itself to play them with no prior knowledge except each game's rules. The results represent an important step towards developing a game-playing AI that can learn to play - and master - any game.

In the decades since IBM's chess program Deep Blue defeated the human world champion, game-playing AIs have grown more advanced and able to beat humans at increasingly complex games. Other abstract strategy games, such as shogi and Go, each significantly more difficult than chess, have also been mastered by machines. However, the algorithms that drive these AI systems are often constructed to exploit the properties of a single game and rely on "handcrafted" knowledge - strategies supplied by their human developers, according to the authors.

Using self-play reinforcement learning, David Silver and colleagues at DeepMind developed AlphaZero, a generalized game-playing program that forgoes the need for human-derived information. It learned chess, shogi and Go by playing against itself - repeatedly - until each game was mastered. According to Silver et al., after just a few hours of self-training the system was able to beat state-of-the-art AI programs that specialize in each of these three games.

In a related Perspective, Murray Campbell writes that despite the immense complexity of games like chess, shogi and Go, recent advances in AI have made mastering them an essentially solved problem. As a result, AI researchers need to look to a new generation of games - multiplayer video games, for example - to provide the next set of challenges for AI systems.
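To give a rough sense of what "learning by playing against itself" means, the sketch below shows a minimal self-play loop in Python. It is only an illustration: AlphaZero actually couples a deep neural network with Monte Carlo tree search at large scale, whereas this toy uses a tabular value function and tic-tac-toe so the whole idea fits in a few dozen lines. Every name, constant, and design choice here is invented for illustration and is not taken from the paper.

```python
# Minimal self-play reinforcement learning sketch (illustrative only).
# AlphaZero pairs a deep neural network with Monte Carlo tree search;
# this toy instead learns a tabular value function for tic-tac-toe.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# Value table: estimated probability that "X" eventually wins from a state.
value = defaultdict(lambda: 0.5)
ALPHA, EPSILON = 0.2, 0.1  # learning rate and exploration rate (both arbitrary)

def play_one_game():
    board, player, history = ["."] * 9, "X", []
    while True:
        legal = moves(board)
        if random.random() < EPSILON:   # occasionally explore a random move
            move = random.choice(legal)
        else:                           # otherwise exploit the learned values
            def score(m):
                nxt = board[:]
                nxt[m] = player
                v = value["".join(nxt)]
                return v if player == "X" else 1.0 - v
            move = max(legal, key=score)
        board[move] = player
        history.append("".join(board))
        w = winner(board)
        if w or not moves(board):
            # Outcome from X's perspective: win 1.0, loss 0.0, draw 0.5.
            return history, (1.0 if w == "X" else 0.0 if w == "O" else 0.5)
        player = "O" if player == "X" else "X"

# Self-play training loop: the same value table drives BOTH players, and
# each finished game nudges every visited state toward the final outcome.
for _ in range(20000):
    history, outcome = play_one_game()
    for state in history:
        value[state] += ALPHA * (outcome - value[state])

print("learned value of opening in the centre:", value["....X...."])
```

The one property this sketch does share with AlphaZero is the key one: both sides of every training game are played by the same, continually updated evaluator, so the program generates its own training data from the rules alone rather than relying on databases of human games.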
###
Journal: Science