L.L.A.M.A Engine / Bot

A C++ L.L.A.M.A card game engine, accessible in the browser via wasm, with a simple API for writing L.L.A.M.A bots.

I went to a board game night with a buddy in Sunnyvale and had a good time playing this stupid simple game called "Don't L.L.A.M.A."

While playing it I couldn't stop thinking about writing an engine / bot.

For now, the page is located here. You can easily write your own bot or play against a variety of my bots.

Development

I had a Python game engine built within an hour or two. It could play random legal moves. I later rewrote it in C++ for speed. The plan was to use the C++ engine to generate a bunch of training examples, then use those examples to train a model in Python.
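
For flavor, here's a minimal sketch of that random-legal-move loop; the `Game` type, its stubbed methods, and the integer move encoding are hypothetical stand-ins for the real engine API, not the actual code.

```cpp
#include <random>
#include <vector>

struct Game {
    bool isOver() const { return true; }                 // stub
    std::vector<int> legalMoves() const { return {0}; }  // stub
    void applyMove(int /*move*/) {}                      // stub
};

// Drive one game to completion by always picking a uniformly random legal move.
void playRandomGame(Game& game, std::mt19937& rng) {
    while (!game.isOver()) {
        std::vector<int> moves = game.legalMoves();
        std::uniform_int_distribution<int> pick(0, static_cast<int>(moves.size()) - 1);
        game.applyMove(moves[pick(rng)]);
    }
}
```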

But then I thought it'd be even more fun to expose the C++ engine in the browser by exporting a wasm binary with bindings that let the game be driven from JavaScript.
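
As a sketch of what those bindings could look like with Emscripten's embind (the stubbed `Game` class and its method names are assumptions, not the engine's real surface):

```cpp
#include <emscripten/bind.h>
#include <vector>

class Game {
public:
    void reset() {}                                      // stub
    bool isOver() const { return true; }                 // stub
    std::vector<int> legalMoves() const { return {}; }   // stub
    void applyMove(int /*move*/) {}                      // stub
};

EMSCRIPTEN_BINDINGS(llama_engine) {
    // Expose std::vector<int> so legalMoves() is usable from JS.
    emscripten::register_vector<int>("VectorInt");
    emscripten::class_<Game>("Game")
        .constructor<>()
        .function("reset", &Game::reset)
        .function("isOver", &Game::isOver)
        .function("legalMoves", &Game::legalMoves)
        .function("applyMove", &Game::applyMove);
}
// Compiled with something like: emcc -O2 --bind engine.cpp -o engine.js
```

Once the module loads, JavaScript can drive the game with something like `const g = new Module.Game(); g.applyMove(g.legalMoves().get(0));`.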

I then created a simple bot loosely modeled on how I play L.L.A.M.A. In a 4-player game against three bots that play randomly, it wins roughly 850 out of 1,000 games. That isn't too impressive, though.
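
For illustration, a heuristic in that spirit might look like the sketch below; the `View` fields, the quit threshold, and even this reading of the rules (play if you can, otherwise draw or quit and keep your hand's penalty) are assumptions, not the actual bot.

```cpp
enum class Action { Play, Draw, Quit };

struct View {              // what the bot can see on its turn
    bool canPlay;          // at least one card in hand is playable
    int handPenalty;       // points we'd be stuck with if we quit now
    bool deckEmpty;
};

// Always play when possible; otherwise quit while the hand is still cheap,
// since drawing risks picking up more penalty points than it could save.
Action chooseAction(const View& v) {
    if (v.canPlay) return Action::Play;
    if (v.handPenalty <= 6 || v.deckEmpty) return Action::Quit;  // arbitrary threshold
    return Action::Draw;
}
```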

ML

I've tried several ML strategies.

Reinforcement learning is the obvious approach. I tossed some code together early on, but found it hard to get anything that could beat more than the random-move bots.

After those early attempts, I went on to make many modifications to the core engine and framework and fixed some bugs.

I also came up with another ML-based bot that uses simulation: it "tries" each legal move, then plays 5,000 games with a random policy from that point, and whichever action had the most wins becomes the label paired with the state the bot initially saw. A bot playing by this policy wins against the other, simpler bots about 60% of the time in a 4-player game. I then used those labels to train a simple supervised classification model to predict the best action given the state; it correctly picked the best action 72% of the time. All that said, I found it difficult to get good results in actual games. I'll return to this idea later.
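
A sketch of that labeling loop, again with a hypothetical stubbed `Game` interface standing in for the real engine:

```cpp
#include <random>
#include <vector>

struct Game {
    bool isOver() const { return true; }                 // stub
    std::vector<int> legalMoves() const { return {0}; }  // stub
    void applyMove(int /*move*/) {}                      // stub
    int winner() const { return 0; }                     // stub
};

// Finish the game (by value: simulates on a copy) with uniformly random moves.
int randomRollout(Game g, std::mt19937& rng) {
    while (!g.isOver()) {
        auto moves = g.legalMoves();
        std::uniform_int_distribution<int> pick(0, static_cast<int>(moves.size()) - 1);
        g.applyMove(moves[pick(rng)]);
    }
    return g.winner();
}

// Returns the move with the most rollout wins for player `me`; paired with
// the state the bot saw, this becomes one supervised training example.
int bestMoveBySimulation(const Game& game, int me, std::mt19937& rng,
                         int rolloutsPerMove = 5000) {
    int bestMove = -1, bestWins = -1;
    for (int move : game.legalMoves()) {
        Game afterMove = game;        // copy, then commit to this move
        afterMove.applyMove(move);
        int wins = 0;
        for (int i = 0; i < rolloutsPerMove; ++i)
            if (randomRollout(afterMove, rng) == me) ++wins;
        if (wins > bestWins) { bestWins = wins; bestMove = move; }
    }
    return bestMove;
}
```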

Fixing those bugs and seeing some promise in the simulation approach made me curious to try RL again. Eventually I was able to make a bot that quickly becomes significantly better than the basic heuristic bots. The [messy] code for this is here.

One thing I learned is that a model can overfit to a particular player order or bot lineup. The same is true in real life: if my turn always came right before a player who makes irrational moves, I'd play differently (probably more conservatively) than if I were seated before someone more predictable. With that in mind, I made a point of constantly varying the competitors' policies and the overall player order for every game played during training, as sketched below.
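
A sketch of that per-game randomization; the `Policy` names are made up for illustration.

```cpp
#include <algorithm>
#include <random>
#include <vector>

enum class Policy { Random, Heuristic, Simulation, Learner };

// One seat assignment for a 4-player game, in turn order: sample fresh
// opponent policies and shuffle the seating for every training game.
std::vector<Policy> sampleMatchup(std::mt19937& rng) {
    const std::vector<Policy> opponents = {
        Policy::Random, Policy::Heuristic, Policy::Simulation};
    std::vector<Policy> seats = {Policy::Learner};   // the bot being trained
    std::uniform_int_distribution<int> pick(0, static_cast<int>(opponents.size()) - 1);
    for (int i = 0; i < 3; ++i)                      // three opponents
        seats.push_back(opponents[pick(rng)]);
    std::shuffle(seats.begin(), seats.end(), rng);   // vary the turn order too
    return seats;
}
```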

Anyways, I'm kind of burnt out on this at the moment. I may return to it in the future. It's a great way to learn!