
Stable-Predictive Optimistic Counterfactual Regret Minimization

2019-02-13
Gabriele Farina, Christian Kroer, Noam Brown, Tuomas Sandholm

Abstract

The CFR framework has been a powerful tool for solving large-scale extensive-form games in practice. However, the theoretical rate at which past CFR-based algorithms converge to the Nash equilibrium is on the order of $O(T^{-1/2})$, where $T$ is the number of iterations. In contrast, first-order methods can be used to achieve an $O(T^{-1})$ dependence on iterations, yet these methods have been less successful in practice. In this work we present the first CFR variant that breaks the square-root dependence on iterations. By combining and extending recent advances on predictive and stable regret minimizers for the matrix-game setting, we show that it is possible to leverage “optimistic” regret minimizers to achieve an $O(T^{-3/4})$ convergence rate within CFR. This is achieved by introducing a new notion of stable-predictivity, and by setting the stability of each counterfactual regret minimizer relative to its location in the decision tree. Experiments show that this method is faster than the original CFR algorithm, although not as fast as newer variants, in spite of their worst-case $O(T^{-1/2})$ dependence on iterations.
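To make the idea of an “optimistic” (predictive) regret minimizer concrete, here is a minimal sketch of optimistic online gradient descent on the probability simplex, using the previous loss vector as the prediction of the next one. This is a generic illustration of the predictive-update principle, not the paper's stable-predictive construction or its per-node stability schedule; the class name, parameters, and step size `eta` are hypothetical.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

class OptimisticOGD:
    """Optimistic online gradient descent on the simplex (illustrative sketch).

    The played strategy takes an extra gradient step using the previous loss
    as a prediction of the next one; when losses change slowly (as in
    self-play), this optimism is what enables faster regret rates.
    """

    def __init__(self, n_actions, eta=0.1):
        self.eta = eta
        self.z = np.full(n_actions, 1.0 / n_actions)  # lazy (secondary) iterate
        self.last_loss = np.zeros(n_actions)          # prediction of next loss

    def next_strategy(self):
        # Optimistic step: act as if the next loss equals the last observed one.
        return project_simplex(self.z - self.eta * self.last_loss)

    def observe_loss(self, loss):
        # True update with the observed loss, then store it as the new prediction.
        self.z = project_simplex(self.z - self.eta * np.asarray(loss, dtype=float))
        self.last_loss = np.asarray(loss, dtype=float)
```

In a CFR-style loop, one such predictive regret minimizer would sit at each information set, with the losses given by counterfactual values; the paper's contribution is how to choose each minimizer's stability parameter by its position in the decision tree so the overall rate improves to $O(T^{-3/4})$.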

URL

http://arxiv.org/abs/1902.04982

PDF

http://arxiv.org/pdf/1902.04982
