Fall Challenge 2020 - Feedbacks & Strategies

First of all, thanks CodinGame for this contest. It was fun. Simple rules, clear goal.

I will probably finish around 13th in the legend league.

My code is a simple beam search with iterative deepening and the following tricks:

  • I use a stupid dummy for my opponent, and I only simulate it on the first turn of my simulation. The dummy simply maximizes its score (it brews a potion or casts whichever spell best maximizes its score at the end of the game).
  • If my AI learns a spell during the simulation, I don’t add the tax to the other items. If you don’t simulate your opponent, your AI will think it can take the tax back on a later turn… when in most cases, your opponent just takes it.
  • I check the path of actions my AI wants to take. If I find a LEARN action before any BREW, I play that LEARN action immediately. You can always cast your spells later, but your opponent can learn the spell you want before you do…

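The last trick can be sketched roughly like this (a minimal illustration in Python; the function name and the representation of the path as a list of action strings like "LEARN 12" are my assumptions, not the author's actual code):

```python
def choose_action(best_path):
    """Pick the action to actually play, given the best action path
    found by the beam search, e.g. ["CAST 3", "LEARN 12", "BREW 45"].

    If a LEARN appears anywhere before the first BREW, play that LEARN
    now: spells can always be cast later, but the opponent might learn
    this spell first."""
    for action in best_path:
        if action.startswith("BREW"):
            break            # a brew comes first: no urgent learn
        if action.startswith("LEARN"):
            return action    # grab the spell before the opponent does
    return best_path[0]      # otherwise just play the first planned action

# Example: the plan casts first, but learning spell 12 is more urgent.
print(choose_action(["CAST 3", "LEARN 12", "BREW 45"]))  # LEARN 12
```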
I use a beam size of 1000 nodes. My evaluation function is simple:

  • 1 point for each score point
  • 1 point for each tier 0 ingredient in my inventory
  • 2 points for each tier 1 ingredient in my inventory
  • 3 points for each tier 2 ingredient in my inventory
  • 4 points for each tier 3 ingredient in my inventory
  • 1.1 point for each crafted potion
  • 0.5 points for each learned spell

I also use a patience coefficient of 0.95, so the score of a node in my beam search is eval*(0.95^depth). For each node, I add the score of its parent node.

Fun fact: my first bot was an MCTS without any evaluation function, and it reached 50th in the gold league.