Sam Ganzfried

Computing Strong Game-Theoretic Strategies and Exploiting Suboptimal Opponents in Large Games
Degree Type: Ph.D. in Computer Science
Advisor(s): Tuomas Sandholm
Graduated: May 2015

Abstract:

Designing successful agents for multiagent strategic settings is a challenging problem for several reasons. First, many games of interest are far too large to be solved (for a relevant game-theoretic solution concept) by the best current algorithms. For example, no-limit Texas hold 'em has approximately 10^165 nodes in its game tree, while the best algorithms for computing a Nash equilibrium only scale to games with around 10^17 states. A second challenge is that it is not even clear that computing a Nash equilibrium should be our goal in the first place. In games with more than two players, or two-player games that are not zero-sum (i.e., not purely competitive), playing a Nash equilibrium carries no performance guarantee. Furthermore, even in two-player zero-sum games, we can often obtain significantly higher payoffs by learning to exploit the mistakes of a suboptimal opponent than by playing a Nash equilibrium.
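
To make the equilibrium-computation challenge concrete, here is a minimal sketch (illustrative only, not taken from the thesis) of regret matching, a standard self-play procedure whose time-averaged strategies converge to a Nash equilibrium in two-player zero-sum games, applied to rock-paper-scissors. The scaling difficulty the thesis addresses is that such procedures must reason over every state of the game, which is hopeless at 10^165 nodes without further ideas.

    import random

    ACTIONS = 3  # rock, paper, scissors
    # PAYOFF[a][b] = utility to player 1 when the players choose a and b
    PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

    def strategy(regrets):
        # mix in proportion to positive cumulative regret (uniform if none)
        pos = [max(r, 0.0) for r in regrets]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

    def solve(iterations=200000):
        regrets = [[0.0] * ACTIONS for _ in range(2)]
        sums = [[0.0] * ACTIONS for _ in range(2)]
        for _ in range(iterations):
            strats = [strategy(regrets[p]) for p in range(2)]
            a, b = (random.choices(range(ACTIONS), weights=strats[p])[0]
                    for p in range(2))
            for p in range(2):
                for i in range(ACTIONS):
                    sums[p][i] += strats[p][i]
            # cumulative regret of each alternative action for both players
            for i in range(ACTIONS):
                regrets[0][i] += PAYOFF[i][b] - PAYOFF[a][b]
                regrets[1][i] += PAYOFF[a][b] - PAYOFF[a][i]
        # the time-averaged strategies approximate a Nash equilibrium
        return [[s / sum(row) for s in row] for row in sums]

    print(solve())  # both players' averages approach (1/3, 1/3, 1/3)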

The leading paradigm for addressing the first challenge is to first approximate the full game with a strategically similar but significantly smaller game, and then to solve this smaller abstract game. All of this computation is done offline in advance, and the resulting strategies are looked up in a table during actual game play. We have developed new algorithms for improving each step of this paradigm. Specifically, I present new algorithms for performing game abstraction and for computing equilibria in several game classes, new approaches for interpreting opponent actions that have been removed from the abstraction, and post-processing techniques that achieve robustness against limitations of the abstraction and equilibrium-finding phases.
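
As a toy illustration of the abstraction step (my own sketch; the thesis's abstraction algorithms use much richer features and clustering at scale), one can bucket states by a single hand-strength feature such as equity using 1-D k-means, solve the resulting abstract game offline, and have each real hand play its bucket's strategy. The equity values and bucket count below are made up.

    def kmeans_1d(values, k, iters=100):
        # crude 1-D k-means: seed with spread-out values, then alternate
        # assignment and mean-update steps
        centers = sorted(values)[::max(1, len(values) // k)][:k]
        for _ in range(iters):
            buckets = [[] for _ in range(k)]
            for v in values:
                j = min(range(k), key=lambda i: abs(v - centers[i]))
                buckets[j].append(v)
            centers = [sum(b) / len(b) if b else centers[i]
                       for i, b in enumerate(buckets)]
        return centers

    # hypothetical equity of each private hand versus a random opponent hand
    equities = [0.31, 0.33, 0.35, 0.50, 0.52, 0.71, 0.74, 0.92]
    centers = kmeans_1d(equities, k=3)
    # map each original state to its abstract bucket; hands in the same
    # bucket are treated identically when the abstract game is solved
    bucket_of = {e: min(range(3), key=lambda i: abs(e - centers[i]))
                 for e in equities}
    print(bucket_of)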

I then describe two new game-solving paradigms. In the first, relevant portions of the game are solved in real time to a greater degree of accuracy than the offline solution of the abstract game achieves under the leading paradigm. In the second, qualitative representations of the structure of equilibrium strategies are leveraged to speed up equilibrium finding. The latter paradigm can also be used to distill human-understandable knowledge from strategies, which are often represented as massive binary files, thereby enabling improved human decision-making.
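
The flavor of the qualitative-model paradigm can be seen in a classic closed-form case: von Neumann's [0,1] poker, used here purely as an illustration (the thesis's algorithms handle far richer games and do not assume this particular structure). Once the qualitative structure "player 1 bluffs with hands below a and value-bets above b; player 2 calls above c" is posited, equilibrium finding collapses to solving three indifference equations for three thresholds.

    def solve_thresholds(P, B):
        # P = pot size, B = bet size; hands are uniform on [0, 1].
        # The three indifference conditions: the caller is indifferent at c,
        # the bluffer at a, and the value-bettor at b. In closed form:
        b = 1 - P * (P + B) / (P * B + 2 * (P + B) ** 2)
        c = 2 * b - 1
        a = B * (1 - b) / (P + B)
        return a, b, c  # satisfying 0 < a < c < b < 1

    print(solve_thresholds(P=2, B=2))  # classic answer: a=1/9, b=7/9, c=5/9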

In the final portion of the thesis, I address the second challenge by presenting new algorithms for effectively learning to exploit unknown static opponents in large imperfect-information games after only a small number of interactions. Furthermore, I present new algorithms for exploiting weak opponents that also guarantee good worst-case performance against strong dynamic opponents. The approaches are domain independent and apply to any game within several very broad classes, which include many important real-world situations. While most of the approaches are presented in the context of two-player zero-sum games, they also apply to games with more than two agents, though in some cases at the cost of weaker theoretical guarantees. One application domain considered is two-player no-limit Texas hold 'em. Several of the approaches were used to create an agent that came in first place in the most recent (2014) AAAI Annual Computer Poker Competition, beating each opposing agent with statistical significance.
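
As a toy version of the learn-to-exploit idea (illustrative only; the thesis's algorithms operate in large imperfect-information games, and its safe variants retain worst-case guarantees that this naive best-responder lacks), one can model an unknown static opponent with a Dirichlet posterior over their mixed strategy and best-respond to the posterior mean.

    import random

    PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's utility
    counts = [1.0, 1.0, 1.0]                       # Dirichlet(1,1,1) prior

    def opponent():
        # unknown static opponent; its weights are hidden from the learner
        return random.choices([0, 1, 2], weights=[0.5, 0.3, 0.2])[0]

    total = 0
    for t in range(1000):
        mean = [c / sum(counts) for c in counts]   # posterior mean strategy
        my_action = max(range(3), key=lambda a: sum(
            mean[o] * PAYOFF[a][o] for o in range(3)))  # best response
        opp_action = opponent()
        total += PAYOFF[my_action][opp_action]
        counts[opp_action] += 1                    # Bayesian update
    print(total / 1000)  # approaches 0.3, the best-response value here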

Thesis Committee:
Tuomas Sandholm (Chair)
Avrim Blum
Geoff Gordon
Michael Bowling (University of Alberta)
Vincent Conitzer (Duke University)

Frank Pfenning, Head, Computer Science Department
Andrew W. Moore, Dean, School of Computer Science

Keywords:
Artificial intelligence, game theory, multiagent systems, game solving, game abstraction, equilibrium computation, qualitative models, opponent modeling.

CMU-CS-15-104.pdf (1.62 MB, 245 pages)