The Carnegie Mellon team celebrating Libratus' victory earlier this year. Photo via Carnegie Mellon University.

Human Poker Experts Defeated in AI Breakthrough

Researchers at Carnegie Mellon University created an AI program that bested four of the game’s top human players during January’s 20-day, 120,000-hand “Brains Vs. Artificial Intelligence: Upping the Ante” competition, a result now detailed at length in a study published on December 17, 2017, in the journal Science.

Responsible for the poker bots “Tartanian7” and “Baby Tartanian8,” which won the Annual Computer Poker Competition (ACPC) in 2014 and 2016 respectively, the Carnegie Mellon team created its new AI, christened “Libratus,” to tackle the challenge of heads-up no-limit Texas hold ’em (HUNL), a poker variant which “has been the primary benchmark and challenge problem for imperfect-information game solving” due to the “large size and strategic complexity” of the game. Prior to the Carnegie Mellon study, no AI had been able to defeat human experts at HUNL, but the university’s approach allowed Libratus to prevail over poker aces Jason Les, Dong Kim, Daniel McAulay, and Jimmy Chou. Libratus’ success stems in part from its approach to understanding and playing the game. Libratus wasn’t designed from the ground up to succeed only at poker; its strategic algorithms “do not use expert domain knowledge or human data and are not specific to poker; thus they apply to a host of imperfect-information games.”

Carnegie Mellon’s AI marks a breakthrough in machines’ ability to reason in situations of “imperfect information.”

So-called “imperfect information” games differ from “perfect information” games such as chess or Go in that players in perfect-information games can see the entire state of the game out in the open, whereas in games such as poker, information is “imperfect” and much of the game (the other players’ hands) remains hidden throughout the course of play. One theoretical approach for a machine to win at a game of imperfect information would be “to simply reason about the entire game as a whole” and map out every possible outcome; in the case of HUNL, however, there are a staggering 10^161 possible decision points, meaning that “pre-computing a strategy for every decision point is infeasible for such a large game.”
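To get a sense of that scale, the back-of-the-envelope sketch below (in Python) shows why brute-force enumeration is hopeless. The assumed rate of 10^18 evaluations per second is an arbitrary, generous figure chosen purely for illustration and does not come from the study.

```python
# Back-of-the-envelope illustration (not from the paper): even at an
# absurdly optimistic rate, enumerating every HUNL decision point is hopeless.
DECISION_POINTS = 10 ** 161          # figure cited for HUNL in the Science paper
EVALS_PER_SECOND = 10 ** 18          # hypothetical machine rate, assumed for illustration
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = DECISION_POINTS / (EVALS_PER_SECOND * SECONDS_PER_YEAR)
print(f"Years to enumerate every decision point: {years:.2e}")
# ~3e135 years -- vastly longer than the age of the universe (~1.4e10 years)
```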

Rather than work through such a vast and computationally intractable space of decisions, Libratus takes a three-pronged approach to the game, determining winning plays with three separate algorithmic modules. The first module develops a rough abstraction of the game’s strategy and end objectives. This strategic “blueprint” informs much of the AI’s decision-making during the early stages of the game and is assisted by two other algorithms: one that focuses on the “nested subgames” of specific hands over the course of a game, and a final algorithm that revises the first strategic algorithm and “computes a game-theoretic strategy,” honing the AI’s play as it gleans information about opponents’ strategies and its own successes.
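As a rough illustration of that division of labor, the Python skeleton below sketches the three modules as plain classes. The class names, the toy “situation” string, and the placeholder logic are invented for this sketch; they are not drawn from Libratus’ actual code.

```python
# Illustrative skeleton only: a highly simplified sketch of the three-module
# division of labor described above, not Libratus's actual implementation.
from typing import Dict

class BlueprintStrategy:
    """Module 1: a coarse, precomputed strategy over an abstracted game."""
    def __init__(self) -> None:
        # Map from (abstracted) situations to action probabilities.
        self.policy: Dict[str, Dict[str, float]] = {}

    def action_probabilities(self, situation: str) -> Dict[str, float]:
        # Fall back to a uniform default when a situation is unseen.
        return self.policy.get(situation, {"fold": 1/3, "call": 1/3, "raise": 1/3})

class SubgameSolver:
    """Module 2: re-solve the current hand's 'nested subgame' in finer detail."""
    def refine(self, blueprint: BlueprintStrategy, situation: str) -> Dict[str, float]:
        # The real system runs a game-theoretic solve in real time;
        # here we simply return the blueprint's answer as a placeholder.
        return blueprint.action_probabilities(situation)

class SelfImprover:
    """Module 3: patch gaps in the blueprint using situations opponents exposed."""
    def update(self, blueprint: BlueprintStrategy, situation: str,
               improved: Dict[str, float]) -> None:
        blueprint.policy[situation] = improved

# Usage: the blueprint guides early decisions, the subgame solver sharpens them,
# and the self-improver folds what was learned back into the blueprint.
blueprint = BlueprintStrategy()
solver = SubgameSolver()
improver = SelfImprover()

situation = "preflop:raise-pot"   # hypothetical encoding of a game state
probs = solver.refine(blueprint, situation)
improver.update(blueprint, situation, probs)
print(probs)
```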

This versatility and adaptability makes Libratus more than another groundbreaking victory for Carnegie Mellon’s poker bots: it is an exciting research breakthrough toward AI systems better able to “think” strategically in the kinds of imperfect-information “games” that comprise the majority of real-world situations in which an AI may be useful.