John M. Talmadge, M.D.

A Blog Covering Many Topics

Part 2: Texas Hold'em Poker, Human vs. AI

Texas Hold'em is a version of poker that takes five minutes to learn and a lifetime to master. This is the second article about poker professionals taking on the most powerful poker-playing computer yet invented. After two weeks of battle on the virtual felt in Pennsylvania, the "Brains vs. Artificial Intelligence" challenge has concluded. Although the raw numbers say there was a winner, a deeper look at those numbers has led to the contest being declared a draw.

The competition was set up by the Carnegie Mellon University School of Computer Science, which created a poker-playing program named 'Claudico' and was looking for a significant test. The program, which can "learn" as it plays and is thus considered an artificial intelligence, is the first of its kind in that it was created to play No Limit Texas Hold'em; every previous poker-playing computer played the more statistical Limit version of the game. Once the Carnegie Mellon staff nailed down the players – and the management of the Rivers Casino in Pittsburgh offered an exciting venue – the two sides set out on the 14-day competition.

From the start, the representatives of the human race – World Series of Poker bracelet winner and online wunderkind Doug 'WCGRider' Polk, Dong Kim, Jason Les and Bjorn Li – moved out to a financial edge that they would not relinquish. Playing a total of 80,000 hands of $50/$100 Heads Up No Limit Hold'em over the two-week period, the four men had built up a $587,231 edge only a week into play. They rode that advantage through the second half of the competition and, once the results were announced on Friday, both sides crowed about their achievement.

When the final tallies were completed, the "Brains" had vanquished their "Artificial Intelligence" foe by the sizeable figure of $732,713. Leading the way was Li, who accounted for an astounding $528,033 of the total winnings amassed by the humans. Polk didn't do badly either, racking up $213,671 in winnings, and Kim slipped past 'Claudico' with slightly more than $70,000 in earnings. Only Les disappointed the human race, dropping $80,482 as the lone player to lose money to 'Claudico.' (The four men divvied up a $100,000 prize provided by the Rivers Casino for their two weeks of work.)

The human players were a bit surprised at the skill that ‘Claudico’ demonstrated. “We know theoretically that artificial intelligence is going to overtake us one day,” Li said during the post-match celebration. “At the end of the day, the most important thing is that the humans remain on top for now.” Les, who had seen a previous version of ‘Claudico’ when it defeated professional players just last year, was stunned by the developers’ skills.

“The advances made in Claudico in just eight months were huge,” Les said, indicating that, at that rate of improvement, an Artificial Intelligence system might need only another year before it clearly plays better than professionals.

Polk seemed to be the only player to critique the play of 'Claudico' during the finale. "There are spots where it plays well and others where I just don't understand it," Polk noted, stating that some of its bets were highly unusual. Polk saw spots where a human might bet half or three-quarters of the pot, but Claudico would instead bet a miserly 10% of it or move all-in. "Betting $19,000 to win a $700 pot just isn't something that a person would do," Polk observed.
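For a sense of why that bet looks so strange, here is a quick back-of-the-envelope calculation of my own (not something from the players or the developers): a pure bluff of size B into a pot of size P breaks even only if the opponent folds at least B / (B + P) of the time.

# Break-even fold frequency for a pure bluff: risk B to win P.
# The pot and bet sizes below are taken from Polk's example.

def breakeven_fold_frequency(bet: float, pot: float) -> float:
    """Fraction of the time the opponent must fold for a bluff to break even."""
    return bet / (bet + pot)

pot, bet = 700, 19_000
print(f"Betting ${bet:,} into a ${pot:,} pot breaks even only if the opponent "
      f"folds {breakeven_fold_frequency(bet, pot):.1%} of the time.")

A bluff that has to work more than 96% of the time is a play almost no human would attempt, which is exactly Polk's point.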

So who won the event? While the overall numbers would suggest that the "Brains" crushed the "Artificial Intelligence," a closer look at those figures is necessary. Counting individuals, the humans again take a 3-1 edge, but a statistical analysis of the players' results indicates that the score might have been closer to 1-0-3: Li was the only outright winner, while the results of the other three fell within the range where their matches count as ties. It wasn't a point that was missed by the professor who helped to develop 'Claudico.'
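A rough sketch of that reasoning, under assumptions of my own (roughly 20,000 hands per player and a swing of about 100 big blinds per 100 hands, a common ballpark for heads-up no-limit; neither figure comes from the organizers), shows how the 1-0-3 reading can fall out of the same dollar totals:

# Back-of-the-envelope significance check of each player's result.
# Assumptions (mine, not the organizers'): each human played roughly
# 20,000 of the 80,000 hands against Claudico, and results fluctuate
# with a standard deviation of about 100 big blinds per 100 hands.

import math

BIG_BLIND = 100            # $50/$100 stakes, so one big blind is $100
HANDS_PER_PLAYER = 20_000
SD_PER_100 = 100.0         # assumed standard deviation, in bb per 100 hands

results_usd = {            # money results as reported after the match
    "Li": 528_033,
    "Polk": 213_671,
    "Kim": 70_000,         # "slightly more than $70,000"
    "Les": -80_482,
}

blocks = HANDS_PER_PLAYER / 100               # number of 100-hand blocks
std_error = SD_PER_100 / math.sqrt(blocks)    # standard error in bb/100

for name, dollars in results_usd.items():
    win_rate = dollars / BIG_BLIND / blocks   # win rate in bb per 100 hands
    z = win_rate / std_error
    verdict = "clear result" if abs(z) > 1.96 else "statistical tie"
    print(f"{name:>4}: {win_rate:+6.1f} bb/100, z = {z:+.2f} -> {verdict}")

Under those assumed numbers, only Li's margin clears the usual 95% significance bar; the other three results, including Les's loss, are small enough that ordinary card luck could account for them.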

"We knew Claudico was the strongest computer poker program in the world, but we had no idea before this competition how it would fare against four Top 10 poker players," said Dr. Tuomas Sandholm, the Carnegie Mellon University professor of computer science who helped to create 'Claudico.' "It would have been no shame for Claudico to lose to a set of such talented pros, so even pulling off a statistical tie with them is a tremendous achievement." Responding to Polk's criticism of the unorthodox play during the event, the Carnegie Mellon team admitted they were just as puzzled as to why 'Claudico' made the decisions it made.

Since the competition wrapped up, there has been no indication of another event on the horizon. The Carnegie Mellon team will no doubt head back to the laboratory to tweak 'Claudico' (or perhaps build a more potent creation?), while the human race waits for the next challenge to its 'superiority' in this world.

This may turn out to be the latest installment in a grand tradition of computers beating us at our own games. In 1997, IBM's Deep Blue computer famously beat chess great Garry Kasparov. Four years ago, IBM's Watson took part in the TV quiz show Jeopardy! and crushed two contestants with a strong track record. AI has even mastered the popular smartphone game 2048.

Still, poker is a tough nut to crack. In a game like chess, everyone knows where all the pieces are on the board. By contrast, poker is a game of imperfect information: players don't know for sure what cards the others hold or what will come up next in the deck. That makes it a challenge for any player, human or computer, to choose the right play.
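To make that concrete, here is a toy example of my own (a drastically simplified game, nothing to do with Claudico's actual methods): we hold a Queen in a three-card deck, the opponent has bet, and the right decision hinges entirely on reasoning about a card we cannot see.

# Imperfect information in miniature: a three-card deck (Jack, Queen, King),
# we hold the Queen, and the opponent has just bet. We don't know which of
# the two remaining cards they hold. The pot size, bet size, and the
# opponent's bluffing frequency below are all invented for illustration.

POT, BET = 2.0, 1.0     # chips already in the pot, and the bet we must call
BLUFF_FREQ = 0.5        # assumed: how often the opponent bets with the Jack

# A priori, the King and the Jack are equally likely. With the King the
# opponent always bets; with the Jack they bluff only BLUFF_FREQ of the time.
p_king_and_bet = 0.5 * 1.0
p_jack_and_bet = 0.5 * BLUFF_FREQ

# Bayes' rule: given that a bet was made, how likely is each hidden card?
total = p_king_and_bet + p_jack_and_bet
p_king = p_king_and_bet / total
p_jack = p_jack_and_bet / total

# Expected value of calling: win the pot plus their bet against the Jack,
# lose our call against the King.
ev_call = p_jack * (POT + BET) - p_king * BET

print(f"P(opponent holds the King | they bet) = {p_king:.2f}")
print(f"EV of calling = {ev_call:+.2f} chips -> {'call' if ev_call > 0 else 'fold'}")

Even in this tiny game, the right play depends on a guess about how the unseen card is being played; scale that up to two hidden hole cards, five community cards to come, and bets of any size, and it becomes clear why No Limit Hold'em is such a hard target for a computer.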

(This blog entry was compiled from various sources, and some attribution is lacking. I apologize and will correct this if I can.)