DeepMind’s superhuman AI is changing how we play chess. Machines have been stronger than humans at the game since 1997, when IBM’s Deep Blue beat world champion and chess legend Garry Kasparov in a six-game match.
Chess players have accepted that machines are stronger, and we have taken some comfort from the fact that we taught these machines how to play. But strangely enough, despite being programmed by humans, traditional chess engines don’t play quite like humans.
Despite the hand-crafted heuristics, the foundation of an engine’s superiority lies in calculation: sifting through vast numbers of moves to find concrete ways to solve a position.
In the early days of engine development, chess grandmasters were hired to evaluate a series of typical positions and describe the considerations behind their assessments, and programmers then turned those considerations into ever more sophisticated heuristics.
A chess program, or “engine,” like Stockfish searches through about 60 million positions a second. But an engine’s solution may look ugly to human eyes, even if it is unquestionably a winning move.
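The division of labor described above, a hand-crafted evaluation function guiding a brute-force search, can be sketched with a toy minimax search. This is an illustration only: the tiny game tree and the `evaluate` stand-in are invented here, and real engines like Stockfish add alpha-beta pruning, transposition tables, and far richer evaluations.

```python
def evaluate(position):
    # Stand-in for hand-crafted heuristics (material, king safety, ...).
    # In this toy, a "position" is just a number: its score for White.
    return position

def minimax(node, depth, maximizing):
    """Search the game tree; leaves are scored by the heuristic."""
    if depth == 0 or not isinstance(node, list):
        return evaluate(node)
    scores = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-built tree: inner lists are positions, numbers are leaves.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, depth=2, maximizing=True))  # prints 3
```

White picks the branch whose worst case (after the opponent’s best reply) is best, which is how an engine’s “ugly” but concrete moves emerge: the search cares only about the heuristic score, not about how natural a move looks.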
Enter DeepMind. The Google-owned AI company’s AlphaZero is a paradox. AlphaZero taught itself chess (as well as Go and shogi) starting with no knowledge about the game beyond the basic rules.
It developed its chess strategies by playing millions of games against itself and discovering promising avenues of exploration from the games it won and lost.
It also searches far fewer positions than Stockfish when it plays. The result is a chess player of superhuman strength with a strikingly human-like style.
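The self-play idea, starting from only the rules and improving from the outcomes of one’s own games, can be sketched on a much smaller game. The toy below is a one-pile Nim variant chosen purely for brevity; every name and the tabular win-rate learning are illustrative stand-ins, not AlphaZero’s actual method (which trains a deep neural network guiding a Monte Carlo tree search).

```python
import random
from collections import defaultdict

wins = defaultdict(int)   # (pile, move) -> games won after playing move
plays = defaultdict(int)  # (pile, move) -> games in which move was tried

def choose(pile, explore=0.1):
    """Pick a move: mostly the best observed, sometimes a random try."""
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: wins[(pile, m)] / (plays[(pile, m)] or 1))

def self_play(start=10):
    """Play one game against itself and learn from the result."""
    pile, player, history = start, 0, []
    while pile > 0:
        move = choose(pile)
        history.append((player, pile, move))
        pile -= move
        player ^= 1
    winner = history[-1][0]  # taking the last stone wins
    for p, s, m in history:
        plays[(s, m)] += 1
        if p == winner:
            wins[(s, m)] += 1

random.seed(0)
for _ in range(20_000):
    self_play()

# After training, greedy play from a pile of 2 takes both stones and wins.
print(choose(2, explore=0.0))  # prints 2
```

Nothing game-specific was programmed in beyond the rules inside `self_play`; the winning strategy (leave your opponent a multiple of three) emerges from the statistics of won and lost games, a tabular miniature of the “discovering promising avenues from the games it won and lost” described above.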