
Improving Search with Supervised Learning in Trick-Based Card Games

2019-03-22
Christopher Solinas, Douglas Rebstock, Michael Buro

Abstract

In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds to the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms, with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given the move history. In particular, we use predictions about the locations of individual cards, made by a deep neural network trained on data from human gameplay, in order to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
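To make the idea concrete, the following is a minimal Python sketch of how network-guided sampling can plug into PIMC search. The helper names (`card_probs`, `evaluate`) and the sequential per-card assignment are illustrative assumptions, not the paper's exact procedure: the abstract only specifies that card-location predictions from a trained network are used to sample likely worlds, which are then evaluated with perfect-information search.

```python
import random
from collections import defaultdict

# Assumed inputs (hypothetical, for illustration only):
#   card_probs[card][player] -> network's probability that `player` holds `card`
#   evaluate(world, move)    -> perfect-information value of `move` in `world`

def sample_world(unseen_cards, hand_sizes, card_probs, rng):
    """Deal the unseen cards to the hidden hands, weighting each assignment by
    the network's per-card location probabilities instead of sampling uniformly."""
    world = defaultdict(list)
    remaining = dict(hand_sizes)  # cards still needed per hidden player
    for card in rng.sample(unseen_cards, len(unseen_cards)):
        players = [p for p in remaining if remaining[p] > 0]
        weights = [card_probs[card].get(p, 1e-9) for p in players]
        owner = rng.choices(players, weights=weights, k=1)[0]
        world[owner].append(card)
        remaining[owner] -= 1
    return dict(world)

def pimc_move_values(legal_moves, unseen_cards, hand_sizes,
                     card_probs, evaluate, n_samples=100, seed=0):
    """PIMC: average the perfect-information value of each move over sampled worlds."""
    rng = random.Random(seed)
    totals = {m: 0.0 for m in legal_moves}
    for _ in range(n_samples):
        world = sample_world(unseen_cards, hand_sizes, card_probs, rng)
        for m in legal_moves:
            totals[m] += evaluate(world, m)
    return {m: totals[m] / n_samples for m in legal_moves}
```

With uniform `card_probs`, this reduces to standard PIMC sampling; replacing them with learned, history-conditioned predictions biases the sampled worlds toward states that are consistent with how the hand has actually been played.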

URL

http://arxiv.org/abs/1903.09604

PDF

http://arxiv.org/pdf/1903.09604
