Code vs Zombies - Feedback & Strategies

First, I would like to thank Codingame for this contest. It was a lot of fun!

Here is my approach:

At each turn:

while there is some time left
    generate a random strategy
    simulate the strategy
apply the first move of the best strategy seen so far
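
For illustration, here is a minimal C++ sketch of the shape of this loop. The names and the callable parameters are made up for the sketch; the real simulate function has to re-implement the full game rules.

    #include <chrono>

    // Minimal sketch of the per-turn search: the random-strategy generator
    // and the simulator are passed in as callables just to keep the sketch
    // self-contained; the real simulator replays the game rules and returns
    // the score the strategy would achieve.
    template <typename Generate, typename Simulate>
    auto searchBestStrategy(double timeBudgetMs, Generate generateRandomStrategy,
                            Simulate simulate) {
        using Clock = std::chrono::steady_clock;
        const auto start = Clock::now();

        auto best = generateRandomStrategy();
        long long bestScore = simulate(best);

        // Plain Monte Carlo: while there is time left in the turn, try a new
        // random strategy and keep the best one seen so far.
        while (std::chrono::duration<double, std::milli>(Clock::now() - start).count()
                   < timeBudgetMs) {
            auto candidate = generateRandomStrategy();
            long long score = simulate(candidate);
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best; // only its first move is actually sent this turn
    }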

A strategy is:

repeat X times (X chosen randomly between 0 and 3)
    move to a random position
for each zombie still alive (the order of the zombies is also chosen randomly)
    move toward the zombie until he's killed
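
As a rough illustration, a strategy could be represented as a handful of random waypoints followed by a random permutation of the living zombies. The names and the exact representation below are just for the sketch.

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>

    // Hypothetical representation: a strategy is a few random waypoints
    // followed by the ids of the living zombies in a random order.
    // "Move toward the zombie until he's killed" is handled by the
    // simulation, which keeps chasing the current target until it dies.
    struct Strategy {
        std::vector<std::pair<int, int>> waypoints; // 0 to 3 random positions
        std::vector<int> zombieOrder;               // random permutation of living zombies
    };

    Strategy generateRandomStrategy(const std::vector<int>& livingZombies,
                                    std::mt19937& rng) {
        Strategy s;

        // Short-term part: up to 3 random intermediate moves (this is what
        // finds combo setups).
        std::uniform_int_distribution<int> countDist(0, 3);
        std::uniform_int_distribution<int> xDist(0, 15999); // map is 16000 x 9000
        std::uniform_int_distribution<int> yDist(0, 8999);
        int nbWaypoints = countDist(rng);
        for (int i = 0; i < nbWaypoints; ++i)
            s.waypoints.push_back({xDist(rng), yDist(rng)});

        // Long-term part: kill the remaining zombies in a random order (this
        // is what, for example, keeps at least one human alive when possible).
        s.zombieOrder = livingZombies;
        std::shuffle(s.zombieOrder.begin(), s.zombieOrder.end(), rng);
        return s;
    }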

And that’s it…
No complicated geometry, no hard-coded strategy, and the humans are not even taken into account (except for the simulation score computation)…
I tried many different modifications of this method, and everything else gave worse or very similar results.

Since the algorithm tries many (~1,000,000) different strategies, most of the time it will figure out whether it's better to run toward the zombies or wait for them, to save all the humans or not, etc.
Trying up to 3 random moves works well for the short term (e.g. finding a way to set up a good combo).
Trying different kill orders for the zombies works well for the long term (e.g. ensuring that at least one human stays alive).

Since the ranking is based on the sum of the hidden test case scores, it was more important to focus on the tests that allowed very big scores.
This is especially true when an algorithm starts to find very good combos, since the scoring formula (10 × H² × a Fibonacci combo multiplier) grows exponentially, which means that with a very good combo a single test case will make most of the points.
Among the public test cases, the one that seemed to scale the best was CG, so I tried to focus on it by running different modifications of my algorithm locally on this test hundreds of times.
I didn't know that the hidden test cases would be so similar to it.
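
To give an idea of how fast this grows, here is a small sketch of the combo scoring as I understand it: each kill is worth 10 × H² (H = humans still alive), and the nth zombie killed during the same turn gets a Fibonacci-style multiplier 1, 2, 3, 5, 8, ...

    #include <cstdio>

    // Sketch of the combo scoring as I understand it (not official code):
    // each kill is worth 10 * H^2, and the nth zombie killed during the
    // same turn is multiplied by 1, 2, 3, 5, 8, ...
    long long comboScore(int humansAlive, int zombiesKilled) {
        long long base = 10LL * humansAlive * humansAlive;
        long long multiplier = 1, nextMultiplier = 2;
        long long total = 0;
        for (int n = 0; n < zombiesKilled; ++n) {
            total += base * multiplier;
            long long tmp = multiplier + nextMultiplier;
            multiplier = nextMultiplier;
            nextMultiplier = tmp;
        }
        return total;
    }

    int main() {
        // With 20 humans alive, killing 10 zombies one turn at a time vs.
        // all 10 in a single turn: the combo is worth about 23x more.
        std::printf("one kill per turn: %lld\n", 10 * comboScore(20, 1)); // 40,000
        std::printf("10-zombie combo  : %lld\n", comboScore(20, 10));     // 924,000
    }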

Locally, the CG test case gives these scores (based on ~9,000 runs done today):

26,000 pts      1 time
42,000 pts      34 times
67,000 pts      735 times
109,000 pts     5,588 times
177,000 pts     1,811 times
286,000 pts     423 times
463,000 pts     298 times
750,000 pts     389 times
1,214,000 pts   9 times

As you can see, it's very random, but that's actually a good thing because you can always resubmit several times.
That’s why I tried to code an algorithm with the highest variance possible.
So during the last hour, I submitted between 15 and 20 times and finally got a 966k.
The big combo on the hidden test case is this one:
https://www.codingame.com/replay/solo/67664848

For future contests, I think it would be great to:

  • Make all tests matter (one way to do that is to normalize the score of each test: YOUR_SCORE/BEST_SCORE or YOUR_SCORE/MAX_THEORETICAL_SCORE; see the small sketch after this list)

  • Have more tests, so that the result depends less on luck

  • Re-run the tests at the end of the contest, in order to prevent what I and other people did: resubmitting several times

  • Ignore completion (the number of passed test cases) in the ranking, in order to avoid situations where you are stuck with a very poor ranking because you failed one hidden test case (95% completion) and cannot debug it.
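
As a rough sketch of the normalization idea (nothing official, just an illustration):

    #include <cstddef>
    #include <vector>

    // Each test contributes at most 1.0 to the total, regardless of how
    // high its raw score can go, so no single map dominates the ranking.
    double normalizedTotal(const std::vector<double>& yourScores,
                           const std::vector<double>& bestScores) {
        double total = 0.0;
        for (std::size_t i = 0; i < yourScores.size(); ++i)
            total += bestScores[i] > 0.0 ? yourScores[i] / bestScores[i] : 0.0;
        return total;
    }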
