k4ng0u, 66th in legend (C++)
Thanks to the Codingame team for hosting this contest, which gathered more than 7k codingamers! The designs were really nice and the gameplay was interesting as well. I also appreciated that the game felt less random than the previous ones (thanks to the absence of fog of war?).
I started this contest with a simple JavaScript bot to get to Bronze. It would basically target a potion and then cast spells to fill the inventory until the potion could be brewed.
Then, since I didn’t find any good heuristic, I went down the simulation path. For a week I watched my bot make plays beyond my understanding that would turn out to be useful (or not, when the simulation or eval was wrong) a few turns later. To be honest, it was both rewarding and frustrating.
MC Simulation to Gold
My simulation was close to the game engine, except that I always applied the first (+3) and second (+1) potion bonuses, even when they had all been consumed.
At first I tried an MC search at depth 15 (ignoring the opponent, though his state info was available), with some tuning:
- On the first 7 turns, learn a free spell (I tried a dumb heuristic for picking one, but it was worse => the gifted tier-0 ingredients often gave my opponent a better start and made me lose the game)
- Only enable learning during the first 2 turns of a simulation
- After the first 5 turns, if a brew is possible, do it right away and ignore all other possible actions
- Stop the run if the game finishes or if there are no more actions possible
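The pruning rules above can be sketched as a small action filter. This is a toy illustration, not my actual code: `Kind`, `Action` and `filterActions` are hypothetical names, and the real bot works on the full simulated `GameState` rather than a bare action list.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

enum class Kind { Cast, Learn, Brew, Rest };
struct Action { Kind kind; };

// Apply the rollout pruning rules to a list of candidate actions.
std::vector<Action> filterActions(std::vector<Action> actions,
                                  int simDepth, int gameTurn) {
    // Rule: LEARN is only enabled during the first 2 plies of a simulation.
    if (simDepth >= 2) {
        actions.erase(std::remove_if(actions.begin(), actions.end(),
                          [](const Action& a) { return a.kind == Kind::Learn; }),
                      actions.end());
    }
    // Rule: after the first 5 game turns, a possible BREW is taken
    // right away and every other action is ignored.
    if (gameTurn > 5) {
        for (const Action& a : actions)
            if (a.kind == Kind::Brew) return {a};
    }
    return actions;
}
```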
The eval for a game state was as follows, with a decay factor of 0.9 per turn:
playerScore * 100 + sum(nbIngredientsOfATier * (tier + 1))
And on my 6th potion (end of game), a big bonus/malus applies depending on whether I win (score plus number of tier 1/2/3 ingredients higher than my opponent’s).
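The eval above can be written out as a small function. This is a minimal sketch under assumed names (`evalState`, `inv` as the per-tier inventory counts), without the end-game bonus/malus:

```cpp
#include <cmath>

// MC eval sketch: potions (playerScore) dominate, ingredients tie-break by
// tier, and states reached deeper in the rollout are decayed by 0.9 per turn.
double evalState(int playerScore, const int inv[4], int turnsAhead) {
    double v = playerScore * 100.0;
    for (int tier = 0; tier < 4; ++tier)
        v += inv[tier] * (tier + 1);  // nbIngredientsOfATier * (tier + 1)
    return v * std::pow(0.9, turnsAhead);
}
```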
This reached top 150 Gold on the last Saturday, but I felt its limits. The main issue was that, due to the poor performance of my simulation (~180k states, so about 12-20k rollouts per 48ms), it was hard to consistently find a good action chain.
Cheap Beam Search to Legend
Since my simulation was lacking performance, I didn’t even try to go for an MCTS and went for a Beam Search, as many mentioned in the chat. For lack of time, I reused my simulation and GameState. This might have cost me a little, because on every clone I was copying my info, the opponent’s info, and the corresponding cache structures that were needed to make the MC faster. But in the end I could still visit about 30k nodes (far from what others might have), which got me to a depth of 10-30 with a beam width of 400.
The main changes to the simulation were to only learn at depth 0 and to remove the condition on the brew.
The eval was updated (and no decay was applied):
// potions are still the most important but it also values ingredients in the inventory
playerScore + 0.5*(nbTier0 + 2*nbTier1 + 3*nbTier2 + 4*nbTier3)
And of course the big bonus/malus when the game ends still applies.
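The beam loop itself is simple. Here is a toy sketch of the idea, not my actual bot: the state is reduced to a single int, `expand` offers two fake actions (+1 and *2), and the eval is the value itself, whereas the real search expands CAST/LEARN/BREW/REST on a full GameState and scores nodes with the formula above.

```cpp
#include <algorithm>
#include <vector>

struct Node { int value; };  // toy stand-in for a full game state

// Generic beam search shape: expand every node of the current beam,
// sort the candidates by eval, keep only the `width` best, repeat.
int beamSearch(int start, int depth, int width) {
    std::vector<Node> beam = {{start}};
    for (int d = 0; d < depth; ++d) {
        std::vector<Node> next;
        for (const Node& n : beam) {
            next.push_back({n.value + 1});  // fake action 1
            next.push_back({n.value * 2});  // fake action 2
        }
        std::sort(next.begin(), next.end(),
                  [](const Node& a, const Node& b) { return a.value > b.value; });
        if ((int)next.size() > width) next.resize(width);
        beam = std::move(next);
    }
    return beam.front().value;  // eval of the best line found
}
```

In the real bot you would also remember the first action of each line, since that is the move actually played.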
In the end, my bot was quite simple, and I regret spending so much time trying to tweak my MC. I should have worked on the beam search from the start and tried to include the missing features such as:
- Take the opponent into account a bit (an easy improvement would be to at least check whether he can brew a potion, or the max number of tier-1+ ingredients he can hold at the end of the turn)
- Come up with a real spell draft
- Dedupe states (with a hash mechanism?) to avoid having the beam width full of equivalent states
- Better performance (a lot of people seemed to be able to compute more than 300k states)
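The dedupe idea could look like the following sketch. I never implemented it, so everything here is hypothetical: hash the fields that make two states interchangeable (here a toy inventory plus score) and drop candidates whose hash was already seen at the current depth.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Toy state: per-tier inventory counts plus current score.
struct State { int inv[4]; int score; };

// FNV-1a-style hash over the fields that define state equivalence.
uint64_t hashState(const State& s) {
    uint64_t h = 1469598103934665603ull;
    auto mix = [&](uint64_t x) { h ^= x; h *= 1099511628211ull; };
    for (int t = 0; t < 4; ++t) mix((uint64_t)s.inv[t]);
    mix((uint64_t)s.score);
    return h;
}

// Keep only the first occurrence of each equivalent state.
std::vector<State> dedupe(const std::vector<State>& nodes) {
    std::unordered_set<uint64_t> seen;
    std::vector<State> out;
    for (const State& s : nodes)
        if (seen.insert(hashState(s)).second) out.push_back(s);
    return out;
}
```

Hashes can collide, so a stricter version would compare the full states on collision, but for pruning a beam a rare false drop is usually acceptable.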
But it managed to get to the Legend league, so mission accomplished!