New AI-based Chess Engine to Play Like Humans

When it comes to chess, computers are already regarded as the experts.

Since world chess champion Garry Kasparov lost to IBM’s Deep Blue in 1997, advances in artificial intelligence have made chess-playing computers stronger and stronger. In the past 15 years, no human has beaten a computer in a chess tournament.

In new developments, a research team including Jon Kleinberg, a Tisch University Professor of Computer Science, developed an artificially intelligent chess engine trained to play like a human. Not only does this create an interesting chess-playing experience, it also sheds light on how computers make decisions differently from people, and on how humans might learn to improve.

‘Chess sits alongside virtuosic musical instrument playing and mathematical achievement as something humans study their whole lives and get really good at. And yet in chess, computers are in every possible sense better than we are at this point’, said Kleinberg. ‘So chess becomes a place where we can try understanding human skill through the lens of super-intelligent AI’.

Kleinberg is a co-author of ‘Aligning Superhuman AI With Human Behavior: Chess as a Model System’, presented at the virtual Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining held in August. In December, the Maia chess engine, which was developed from the research, was released on the free online chess server lichess.org, where it was played over 40,000 times in the first week. Agadmator, the most-subscribed chess channel on YouTube, spoke about the project and played two live games against Maia.

‘Current chess AIs don’t have any conception of what mistakes people typically make at a particular ability level. They will tell you all the mistakes you made ― all the situations in which you failed to play with machine-like precision ― but they can’t separate out what you should work on’, said co-author Ashton Anderson, Assistant Professor at the University of Toronto. ‘Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn’t, because they are still too difficult’.

The other co-authors of the paper are Reid McIlroy-Young, a doctoral student at the University of Toronto, and Siddhartha Sen of Microsoft Research.

As artificial intelligence exceeds human abilities in a wide array of areas, researchers are exploring how to design AI systems with human collaboration in mind. AI can inform and improve human work in many fields ― for example, in interpreting the results of medical imaging ― but algorithms approach problems differently from humans, which makes learning from them almost impossible and possibly even dangerous.

In this project, the researchers aimed to develop an AI that narrows the gap between human and algorithmic behavior by training the computer on individual human moves, rather than having it teach itself how to complete the task successfully. Chess offered the perfect opportunity to train AI models to do that.
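To make the distinction concrete, here is a minimal sketch of the general idea in Python, not the authors’ actual code: instead of rewarding the network for winning games (as self-play engines do), the model is trained as a supervised classifier whose label is the move a human actually played in that position. The network architecture, the board encoding, and the 1858-move output scheme are assumptions for illustration, loosely following AlphaZero-style conventions.

```python
import torch
import torch.nn as nn

class MovePredictor(nn.Module):
    """Maps an encoded board position to a distribution over moves."""
    def __init__(self, n_planes=12, n_moves=1858):  # assumed encoding sizes
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_planes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, n_moves),
        )

    def forward(self, board_planes):
        return self.net(board_planes)

model = MovePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(board_planes, human_move_index):
    """One supervised step: the target is the move the human chose,
    not the move a strong engine would recommend."""
    logits = model(board_planes)
    loss = loss_fn(logits, human_move_index)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```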

‘Chess has been described as the “fruit fly” of AI research’, Kleinberg said. ‘Just as geneticists often care less about the fruit fly itself than its role as a model organism, AI researchers love chess, because it is one of their model organisms. It’s a self-contained world you can explore, and it illustrates many of the phenomena that we see in AI more broadly’.

Training the AI model on individual human chess moves, rather than on the overall task of winning the game, taught the computer to mimic human behavior. It also produced a system more adaptable to different skill levels ― a difficult task for traditional AI.

Within each skill level, Maia matched human moves more than half of the time, with its accuracy growing as the skill level increased ― a higher rate of accuracy than two popular chess engines. Maia was also able to capture the kinds of mistakes players at specific skill levels make, and the point at which players become skilled enough to stop making them.

To develop Maia, the researchers customized Leela, an open-source system based on DeepMind’s AlphaZero program, which makes chess decisions with the same types of neural networks used to classify images or language. They trained different versions of Maia on games at different levels of skill, creating nine bots designed to play humans rated between 1100 and 1900 (ranging from more novice players to strong amateurs).
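A rough illustration of that rating-bucketed setup, under the assumption that games are grouped by the players’ average rating (the real pipeline parses game records from lichess.org; the game fields here are hypothetical):

```python
# Nine 100-point rating bands with lower bounds 1100 through 1900,
# one per Maia version.
RATING_BANDS = [(r, r + 100) for r in range(1100, 2000, 100)]

def bucket_games(games):
    """Assign each game to the band matching the players' average rating,
    so each Maia version trains only on games from its target band."""
    buckets = {band: [] for band in RATING_BANDS}
    for game in games:
        avg = (game.white_elo + game.black_elo) / 2  # assumed game fields
        for lo, hi in RATING_BANDS:
            if lo <= avg < hi:
                buckets[(lo, hi)].append(game)
                break
    return buckets
```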

‘Our model didn’t train itself on the best move― it trained itself on what a human would do’, Kleinberg said. ‘But we had to be very careful― you have to make sure it doesn’t search the tree of possible moves too thoroughly, because that would make it too good. It has to just be laser-focused on predicting what a person would do next’.
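Kleinberg’s point about limiting the search can be sketched as follows: rather than running a deep game-tree search over the network’s suggestions, the engine can play the single legal move the trained model rates as most human-likely, using one forward pass and no lookahead. This assumes a board object in the style of the python-chess library, and `encode_board` and `index_to_move` are hypothetical helpers.

```python
import torch

def pick_human_like_move(model, board):
    """Return the legal move the model predicts a human would play,
    with a single network evaluation and no tree search."""
    with torch.no_grad():
        logits = model(encode_board(board))  # one forward pass
    ranked = torch.argsort(logits.squeeze(), descending=True)
    for move_index in ranked.tolist():
        move = index_to_move(move_index, board)
        if move is not None and move in board.legal_moves:
            return move  # most human-likely legal move
```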

The research was supported partly by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a Multidisciplinary University Research Initiative grant, a MacArthur Foundation grant, a Natural Sciences and Engineering Research Council of Canada grant, a Microsoft Research Award and a Canada Foundation for Innovation grant.

By Marvellous Iwendi.

Source: Cornell University