
What Reinforcement Learning Method Should I Use for Poker AI with LLMs?
Hey everyone,
I’m working on a poker AI project where I’m training a large language model (LLM) to predict poker actions (check, call, bet, raise, etc.) from given game states. My end goal is a model that plays poker at a high level, primarily through self-play and opponent modeling. However, I’m running into some challenges that I hope you can help me with!
Here's the situation:
- Training Method: I’m using supervised fine-tuning (SFT) on real poker hand history data to initially teach the LLM how to predict poker actions from game states. This means that the model learns from examples of past games, predicting the actions that players took in various situations.
- Self-Play Setup: I plan to eventually move to self-play, where the LLM plays against itself (or other models I create to simulate different play styles), and use those self-play sessions to improve the model over time (a rough sketch of the loop I have in mind is below this list).
- Opponent Pool: I’m creating 6 types of poker players (Loose Aggressive, Loose Passive, Tight Aggressive, Tight Passive, Maniac, and Nit), each trained at 5 different skill levels (Novice, Beginner, Intermediate, Advanced, Expert). This gives me a decent range of opponent behavior for training.
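To make the self-play setup concrete, here’s a rough sketch of the loop I have in mind (all the names here, like play_hand and the agent callables, are hypothetical placeholders, not a real library):

```python
import itertools
import random

STYLES = ["Loose Aggressive", "Loose Passive", "Tight Aggressive",
          "Tight Passive", "Maniac", "Nit"]
LEVELS = ["Novice", "Beginner", "Intermediate", "Advanced", "Expert"]

# 6 styles x 5 skill levels = 30 opponent configurations
OPPONENT_POOL = list(itertools.product(STYLES, LEVELS))

def play_hand(llm_act, opponent_act):
    """Play one hand and return (trajectory, reward) for the LLM's seat.

    Placeholder: a real version would step an actual poker environment.
    trajectory = list of (game_state_text, action_text) pairs;
    reward = big blinds won or lost on the hand."""
    state = "<serialized game state>"
    trajectory = [(state, llm_act(state))]
    opponent_act(state)  # in a real hand the opponent acts too
    reward = random.uniform(-10.0, 10.0)  # stand-in for the real result
    return trajectory, reward

def collect_self_play_data(llm_act, opponents, n_hands=10_000):
    """Sample opponents from the pool and log whole hands with their
    outcomes, so a reward-based update can consume them later."""
    data = []
    for _ in range(n_hands):
        style, level = random.choice(OPPONENT_POOL)
        trajectory, reward = play_hand(llm_act, opponents[(style, level)])
        data.append((trajectory, reward))
    return data
```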
The problem:
- The way I’m calling the LLM, it only outputs discrete actions (e.g., bet 3BB, raise to 10BB) with no access to action probabilities, so I can’t directly use policy-gradient methods (which need action log-probabilities) or value-based methods like Q-learning (which need per-action value estimates). This makes applying traditional RL methods a bit tricky; a minimal example of the interface is below.
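To be concrete about the interface, this is roughly all I can do (hypothetical wrapper; the point is that only a parsed action string comes back, never logits):

```python
def choose_action(llm, game_state_text):
    """All I get from the model is a single action string, e.g.
    "raise to 10BB"; no per-action probabilities are exposed."""
    return llm(f"Game state: {game_state_text}\nYour action:").strip()
```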
My question:
Given that I don't have access to action probabilities, what RL method or strategy should I pursue to improve my model? Specifically, I’m looking for a way to:
- Incorporate self-play with reward-based learning.
- Refine the model through reinforcement learning without needing action probabilities.
- Ensure the model doesn’t just overfit to its own prior behavior but learns to adapt and exploit different strategies in poker.
I’ve considered a few approaches, like reward-weighted supervised fine-tuning, simpler RL techniques like Monte Carlo updates, and Q-learning / Deep Q-learning, but I’m not sure which would work best with the LLM setup I have.
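In case it helps, here’s roughly what I mean by reward-weighted SFT: a standard causal-LM training step (Hugging Face-style model and tokenizer assumed) where each (state, action) pair is weighted by the hand’s outcome. The baseline subtraction and the clip-to-zero filter are my own assumptions, not an established recipe:

```python
import torch

def reward_weighted_sft_step(model, tokenizer, optimizer, batch, baseline=0.0):
    """One update: next-token cross-entropy on each (state -> action)
    example, scaled by the hand's reward in big blinds.

    Sketch only: in practice you'd mask the loss on the prompt tokens
    and normalize rewards per batch rather than use a fixed baseline."""
    losses = []
    for state_text, action_text, reward in batch:
        # "Keep the winners" weighting: hands at or below the baseline
        # contribute nothing; better hands count proportionally more.
        weight = max(0.0, reward - baseline)
        if weight == 0.0:
            continue
        enc = tokenizer(state_text + action_text, return_tensors="pt")
        out = model(**enc, labels=enc["input_ids"])
        losses.append(weight * out.loss)  # weighted cross-entropy
    if not losses:
        return None
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

As far as I can tell, this is in the same family as rejection-sampling fine-tuning / expert iteration, which only needs sampled actions and rewards rather than log-probabilities, so it seems compatible with my constraint.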
Any advice or suggestions on which RL approach I should take given my situation would be greatly appreciated!
Yes, I used AI to write this question. But it captures everything I want to say, and I suck at writing.