I finished 164th in Legend, using Rust. As always, I really enjoyed the challenge and the community. Thanks Codingame for these events, and a big thanks for avoiding the Christmas holidays and postponing it to March.
the bot
I used a beam search of width 50 and max depth 20, run for each snake individually. I first run it for the opponent snakes, using a total budget of 5ms (which stops them somewhere between depth 5 and 10). The remaining 40ms go to my snakes. After each search, I retain the best sequence of moves for that snake and use it in the subsequent searches for the other snakes. I also kept 15 random nodes per depth level to add some diversity to the search.
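To make the loop concrete, here is a minimal sketch of that per-snake beam search. Everything here (`Node`, the `expand` callback, taking the next 15 sorted candidates as the "random" survivors) is illustrative, not my actual code:

```rust
use std::time::{Duration, Instant};

// Hypothetical search node: a score for the reached state plus the
// move sequence that led to it for the snake currently being searched.
#[derive(Clone)]
struct Node {
    score: i64,
    moves: Vec<u8>,
}

const WIDTH: usize = 50;
const MAX_DEPTH: usize = 20;
const RANDOM_KEEP: usize = 15;

// Beam search for one snake: keep the best WIDTH nodes per depth,
// plus RANDOM_KEEP extra ones for diversity, until the time budget runs out.
fn beam_search(root: Node, budget: Duration, expand: impl Fn(&Node) -> Vec<Node>) -> Vec<u8> {
    let start = Instant::now();
    let mut beam = vec![root];
    let mut best: Option<Node> = None;
    for _depth in 0..MAX_DEPTH {
        if start.elapsed() >= budget {
            break;
        }
        let mut candidates: Vec<Node> = beam.iter().flat_map(|n| expand(n)).collect();
        if candidates.is_empty() {
            break;
        }
        // Best nodes first.
        candidates.sort_by(|a, b| b.score.cmp(&a.score));
        // Keep the top WIDTH, then a few extra survivors for diversity
        // (a real implementation would pick those pseudo-randomly; as a
        // placeholder we just keep the next RANDOM_KEEP in sorted order).
        let keep = (WIDTH + RANDOM_KEEP).min(candidates.len());
        candidates.truncate(keep);
        if best.as_ref().map_or(true, |b| candidates[0].score > b.score) {
            best = Some(candidates[0].clone());
        }
        beam = candidates;
    }
    best.map(|n| n.moves).unwrap_or_default()
}
```

The returned move sequence is what gets reused as the fixed plan for this snake in the later searches.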
At each search step I simulate the whole game, using the already computed moves of the preceding snakes, and a default direction for the snakes not yet processed (they keep going in the same direction).
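The move-selection rule during simulation fits in one small helper. This is a sketch with made-up names (`planned` holding each snake's already-searched move sequence), not the actual code:

```rust
// Direction of a snake's next move during simulation: use the move already
// computed by an earlier search if one exists at this depth, otherwise keep
// going in the snake's current direction.
fn next_direction(planned: &[Vec<u8>], snake: usize, depth: usize, current_dir: u8) -> u8 {
    planned
        .get(snake)
        .and_then(|moves| moves.get(depth))
        .copied()
        .unwrap_or(current_dir)
}
```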
For evaluation, as always, it is a mess composed of: the number of dead snakes (only mine), the sum of my snakes' body parts, and the Manhattan distance from each head to the closest power source. The Manhattan distances are computed only once at the beginning of a turn and never updated during the search.
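An evaluation in that spirit could look like the sketch below. The weights, struct, and field names are all hypothetical; only the three ingredients come from the description above:

```rust
// Illustrative snake state for evaluation purposes only.
struct Snake {
    alive: bool,
    body_len: i64,
    head: (i32, i32),
}

// `dist_to_power[y][x]` holds the Manhattan distance from (x, y) to the
// closest power source, precomputed once at the start of the turn and
// never updated during the search.
fn evaluate(my_snakes: &[Snake], dist_to_power: &[Vec<i64>]) -> i64 {
    let mut score = 0;
    for s in my_snakes {
        if !s.alive {
            score -= 1_000_000; // losing one of my snakes dominates everything
            continue;
        }
        score += 100 * s.body_len; // reward growth
        let (x, y) = s.head;
        score -= dist_to_power[y as usize][x as usize]; // pull heads toward power
    }
    score
}
```

Using a precomputed distance grid keeps the per-node evaluation cost at a few array lookups, which matters when the engine runs tens of thousands of turns per move.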
I spent some time working on the engine, and was able to simulate around 30,000 turns in 50ms.
workflow
Over the years I have tried to streamline my workflow to avoid wasting time during challenges. With this challenge I am happy with the way I was able to write and test the game engine, improve its performance, test my bots locally, etc.
- I started using a bundler to split my code and make it easier to manage. Big thanks to @MathieuSoysal for cg-bundler.
- I used `cargo test` to add unit tests for various parts, in particular the engine. Cargo really helps reduce the friction of writing tests by simplifying the process. I reworked the input-reading part to be able to read either from stdin or from a `Vec<String>`, which allowed me to easily take inputs from the web IDE and turn them into test cases. This helped me a lot when I was optimizing the engine, because I was not afraid of breaking things along the way. I even used `llvm-cov` to assess the coverage of my tests and make sure everything worked properly.
- For local testing I use the referee with an additional CommandLineInterface.java to produce an executable that I can use with psyleague. That allows me to run multiple bots in a local arena and quickly see how they perform. For bots using the full 50ms, the local referee often reports timeouts, so to avoid that I changed the turn limit in the referee to 100ms; but since this hits the global 30s maximum time limit, I also had to recompile the Codingame engine to increase that limit. The only downside of the local arena is that using too many similar bots sometimes does not reflect how a bot performs in the real arena, so I often had to clean up the bot pool to mitigate this.
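One possible shape for the stdin-or-`Vec<String>` abstraction is a small enum that yields lines either from real stdin or from a recorded test case; this is a sketch of the idea, not my actual implementation:

```rust
use std::io::BufRead;

// Input source for the bot: real stdin in the arena, a recorded list of
// lines (e.g. copied from the web IDE) inside unit tests.
enum InputSource {
    Stdin,
    Lines(std::vec::IntoIter<String>),
}

impl InputSource {
    fn from_recorded(lines: Vec<String>) -> Self {
        InputSource::Lines(lines.into_iter())
    }

    // Read the next line without its trailing newline, or None at end of input.
    fn next_line(&mut self) -> Option<String> {
        match self {
            InputSource::Stdin => {
                let mut buf = String::new();
                std::io::stdin().lock().read_line(&mut buf).ok()?;
                if buf.is_empty() {
                    return None; // EOF
                }
                Some(buf.trim_end_matches(&['\r', '\n'][..]).to_string())
            }
            InputSource::Lines(it) => it.next(),
        }
    }
}
```

The parsing code then only ever talks to `next_line`, so a web-IDE dump pasted into a `Vec<String>` becomes a regression test with no other changes.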