First of all, thanks to the creators: it was a really fun contest with a lot of possible strategies and a good balance between sim and heuristic approaches.
I started the contest with heuristics on the first Friday since, when I first saw the rules, I thought "towers, creeps... this should be similar to BOTG". I reached Bronze with a simple heuristic bot that built 2 mines, then a barrack and then towers, building new mines when the old ones depleted. I couldn't code during the first weekend, and when I picked it back up on Monday I knew heuristic evasion would never be as good as with sims, so I started working on the engine. I don't agree with others who found it hard to code; I found it relatively simple to build a "good enough" engine in a few hours that could evade knights fairly well. At this point my code used heuristics for build, move and produce, and when being attacked I used a random search of depth 4 to evade (sketched below).
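Something along these lines, as a minimal sketch of a depth-4 random evasion search; simulate(), eval() and all the other names are illustrative stand-ins, not my actual code:

```cpp
// Minimal sketch of a depth-4 random evasion search; simulate() and eval()
// stand in for the real engine and scoring, all names are illustrative.
#include <cmath>
#include <random>

struct Move  { double dx, dy; };                 // queen direction for one turn
struct State { double queenHp = 100; /* knights, sites, ... */ };

State  simulate(State s, const Move&) { /* real engine step goes here */ return s; }
double eval(const State& s)           { return s.queenHp; /* plus knight distances, etc. */ }

Move bestEvasion(const State& start, int samples = 1000, int depth = 4) {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<double> angle(0.0, 6.2831853);
    Move best{0, 0};
    double bestScore = -1e18;
    for (int i = 0; i < samples; ++i) {
        State s = start;
        Move first{0, 0};
        for (int d = 0; d < depth; ++d) {        // random direction on each of the 4 turns
            Move m{std::cos(angle(rng)), std::sin(angle(rng))};
            if (d == 0) first = m;
            s = simulate(s, m);
        }
        double score = eval(s);
        if (score > bestScore) { bestScore = score; best = first; }
    }
    return best;                                 // only the first move is played
}
```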
Switch to sim
Then I figured that since I had an almost fully working engine, I should try to make all decisions by simulation, including creep production. I switched my code to a GA of depth 10 with a decay factor of 0.95 and fixed the details that were making my simulation fail (thanks a lot to RoboStac and Illedan for their help understanding some details that I had missed from the Referee and from Kotlin!).
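To make the idea concrete, here is roughly how a depth-10 candidate can be scored with the 0.95 decay; it's only a sketch, and simulate(), eval() and the types are stand-ins rather than my actual code:

```cpp
// Sketch of scoring one GA candidate: simulate 10 turns and sum the eval of
// each intermediate state, weighting later turns less via the 0.95 decay.
#include <array>

constexpr int    DEPTH = 10;
constexpr double DECAY = 0.95;

struct Action { /* queen move/build target, training orders ... */ };
struct State  { /* full game state ... */ };

State  simulate(State s, const Action&) { /* one full engine turn goes here */ return s; }
double eval(const State&)               { return 0.0; /* see the eval section below */ }

using Genome = std::array<Action, DEPTH>;

double score(const State& start, const Genome& g) {
    State s = start;
    double total = 0.0, weight = 1.0;
    for (const Action& a : g) {
        s = simulate(s, a);
        total += weight * eval(s);   // earlier turns count more than later ones
        weight *= DECAY;
    }
    return total;
}
```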
Regarding the depth, I'm not entirely sure why, but I was able to find good solutions despite achieving only about 250 sims per turn at depth 10 (the engine is expensive and my code is not the most efficient). I even had good results with depth 14. I think it's because I didn't use much rival information in my eval, so the impact of the rival's decisions usually takes several turns to affect my queen (knights attacking, for instance), which makes the previous turn's solution, shifted by one step, a very good starting point.
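In practice the seeding just means shifting the previous best plan by one action (reusing the Genome/Action names from the sketch above; randomAction() is an assumed helper, not shown):

```cpp
// Seed for the next turn: drop the action that was just played and append a
// fresh random action at the end of the plan.
#include <random>

Action randomAction(std::mt19937& rng);  // assumed helper, not shown

Genome shiftedSeed(const Genome& prevBest, std::mt19937& rng) {
    Genome g;
    for (int i = 0; i + 1 < DEPTH; ++i) g[i] = prevBest[i + 1];
    g[DEPTH - 1] = randomAction(rng);
    return g;
}
```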
I always find it difficult to write good eval functions. I find it hard to compare the value of, for instance, losing 1 unit of health versus building a mine or increasing a tower's hit points. In the end my eval takes into account all of the following (a rough sketch of how the pieces could combine comes after the list):
1) My queen health (a lot of value) and the rival queen health (very little value).
2) A score for each tower, mine or barrack. I used a logarithmic function to model the value of increasing a tower's HP: the first 200 HP are worth much more than the last 200 needed to reach the 800 cap. Mines are worth more if they are close to the edges of the field, towers are worth more the closer they are to the center, and barracks are worth more the closer they are to the enemy's side.
3) Distance to closest empty site.
4) Health of rival creeps.
5) On Sunday I added a bonus for having a Giant barrack and for producing Giants when my extraction rate is much higher than my rival's.
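Here is a rough sketch of how those pieces could fit together; every weight and field name is made up for illustration and the positional bonuses are reduced to comments, so this is not the actual eval of my bot:

```cpp
// Hedged sketch of the eval ingredients listed above; all weights are illustrative.
#include <cmath>
#include <vector>

struct Structure { double hp = 0, income = 0; /* position, type ... */ };

struct EvalState {
    double myQueenHp = 0, rivalQueenHp = 0;
    double distToClosestEmptySite = 0, rivalCreepHp = 0;
    double myIncome = 0, rivalIncome = 0;
    bool   haveGiantBarrack = false;
    std::vector<Structure> myTowers, myMines, myBarracks;
};

// Logarithmic tower value: the first 200 HP are worth far more than the
// last 200 on the way to the 800 cap.
double towerValue(double hp) { return std::log1p(hp); }

double eval(const EvalState& s) {
    double score = 0.0;
    score += 100.0 * s.myQueenHp;                 // 1) my health dominates
    score -=   1.0 * s.rivalQueenHp;              //    rival health matters far less
    for (const auto& t : s.myTowers) score += 5.0 * towerValue(t.hp);  // 2) * centrality bonus
    for (const auto& m : s.myMines)  score += 8.0 * m.income;          //    * edge bonus
    score += 3.0 * s.myBarracks.size();                                //    * forward bonus
    score -= 0.1 * s.distToClosestEmptySite;      // 3)
    score -= 0.5 * s.rivalCreepHp;                // 4)
    if (s.haveGiantBarrack && s.myIncome > 2.0 * s.rivalIncome)
        score += 50.0;                            // 5) Giant bonus when clearly ahead on income
    return score;
}
```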
Since symmetric sites have consecutive ids, I set the maxExtractionRate of the sites symmetric to my known sites. This let me estimate pretty accurately the rival queen's total extraction rate and also her gold.
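A sketch of that mirroring, under the assumption that the symmetric pairs are (0,1), (2,3), ..., so the twin of a site is id ^ 1 (this would need adjusting if the pairing were offset differently):

```cpp
// Copy the max extraction rate discovered on my side to the unseen
// symmetric site, assuming consecutive id pairs (0,1), (2,3), ...
#include <vector>

int mirrorId(int siteId) { return siteId ^ 1; }

void propagateExtractionRates(std::vector<int>& maxExtractionRate) {
    for (int id = 0; id < (int)maxExtractionRate.size(); ++id)
        if (maxExtractionRate[id] > 0 && maxExtractionRate[mirrorId(id)] <= 0)
            maxExtractionRate[mirrorId(id)] = maxExtractionRate[id];
}
```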
For rival prediction, I only used the following (sketched after the list):
1) She doesn't move.
2) She improves towers and mines if she is touching them.
3) She produces knights every time she can.
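As a tiny opponent model it could look like this (types, helpers and the knight-group cost check are illustrative, not my actual code):

```cpp
// Tiny opponent model: the rival queen stands still, improves whatever she
// is touching, and trains knights whenever she has the gold for a group.
struct RivalModel {
    double gold = 0, income = 0;
    bool   touchingOwnMineOrTower = false;
    bool   knightBarrackReady = false;
};

constexpr double KNIGHT_GROUP_COST = 80;   // cost of one group of knights

void predictRivalTurn(RivalModel& r) {
    // 1) She doesn't move: her position is left unchanged in the sim.
    // 2) She improves the mine or tower she is touching.
    if (r.touchingOwnMineOrTower) { /* bump its income or HP in the sim */ }
    // 3) She produces knights every time she can.
    r.gold += r.income;
    if (r.knightBarrackReady && r.gold >= KNIGHT_GROUP_COST) {
        r.gold -= KNIGHT_GROUP_COST;
        /* spawn a knight group at that barrack in the sim */
    }
}
```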
Good Saturday, Bad Sunday
On Saturday night I sent a version of the code without expecting much, and it ended up in 2nd place, right between Agade and RoboStac!
After that I spent most of my Sunday trying to simulate the rival queen with my GA during the first milliseconds of the turn, but I failed miserably while my ranking slowly dropped towards 10th place. I'm still not sure what went wrong, but my bot couldn't stop making stupid decisions and I couldn't find the bugs that caused that behavior. Pursuing that instead of improving my eval was the worst decision I made in the contest. On Sunday afternoon I decided to drop rival simulation, go back to my Saturday version and try to improve that. Until that moment I didn't have any Giant barracks in my moves, and my mining vs towering decisions were very conservative (max 2 mines unless I was in a pretty comfortable position). On Sunday night I made a change to be much more aggressive with mining, and I tested that code against Agade, Xyze and Risus; to my surprise, the aggressive behavior paid off and I was beating them more often than not! I submitted this expecting the same results as on Saturday night, but... the excessive aggressiveness paid off against some, yet it took too many risks and got destroyed in a lot of matches, even against players in the lower end of Legend.
Let's balance it out
Finally I made some more adjustments to balance the AI between the mining/towering dichotomy, and with that change I ended up 3rd/4th in the ranking on Sunday night (Agade/Azkellas were still computing at the time) and, after the rerun, finished in a comfortable 4th place (same as in BOTG). I really wanted to end up on the podium, but hey! Maybe next contest...
Thanks again to the creators, and congratulations to the top 3, especially to RoboStac, who wrote an amazing AI that dominated the competition almost from start to finish. Great work!