First, I would like to thank Codingame for this contest. It was a lot of fun!
Here is my approach:
At each turn:
    while there is some time left:
        generate a random strategy
        simulate the strategy
    apply the first move of the best strategy seen so far
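The loop above can be sketched as follows. This is a minimal Python sketch (the post doesn't say which language was used); `generate_strategy` and `simulate` are hypothetical callbacks, and the time budget is an assumption based on CodinGame's usual ~100 ms per turn.

```python
import time

TIME_BUDGET = 0.095  # seconds per turn (assumed; CodinGame allows ~100 ms)

def best_first_move(state, generate_strategy, simulate):
    """Pure random search: keep the strategy with the best simulated
    score, and play only its first move this turn."""
    deadline = time.perf_counter() + TIME_BUDGET
    best_score, best_strategy = float("-inf"), None
    while time.perf_counter() < deadline:
        strategy = generate_strategy(state)   # random candidate strategy
        score = simulate(state, strategy)     # score of the full playout
        if score > best_score:
            best_score, best_strategy = score, strategy
    return best_strategy[0]                   # only the first move is applied
```

The key point is that the strategy is re-searched from scratch every turn, so a bad random draw on one turn can be corrected on the next.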
A strategy is:
    repeat X times (X chosen randomly between 0 and 3):
        move to a random position
    for each zombie still alive (the order of the zombies is also chosen randomly):
        move toward it until it's killed
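A strategy generator along these lines could look like the sketch below (Python is my choice of language here; the board dimensions 16000x9000 are the ones from the contest statement, and the function name is hypothetical).

```python
import random

WIDTH, HEIGHT = 16000, 9000  # board size from the contest statement

def generate_strategy(zombies, rng=random):
    """A strategy = up to 3 random waypoints, then the surviving
    zombies in a random kill order."""
    moves = [(rng.uniform(0, WIDTH), rng.uniform(0, HEIGHT))
             for _ in range(rng.randint(0, 3))]   # 0..3 random moves
    kill_order = list(zombies)
    rng.shuffle(kill_order)                       # random order of targets
    return moves, kill_order
```

The simulation then executes the waypoints, then chases each zombie in `kill_order` until it dies, scoring the resulting playout.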
And that's it…
No complicated geometry, no hard-coded strategy, and the humans are not even taken into account (except in the simulation score computation)…
I tried many different modifications of this method, and everything else gave worse or very similar results.
Since the algorithm tries many (~1,000,000) different strategies, most of the time it will figure out whether it's better to run toward the zombies or wait for them, to save all the humans or not, etc.
Trying up to 3 random moves seems good for the short term (like finding a way to set up a good combo).
Trying different kill orders of the zombies is good for the long term (for example, ensuring that at least one human stays alive).
Since the ranking is based on the sum of the hidden test case scores, it was more important to focus on tests that allowed very big scores.
This is especially true once an algorithm starts to find very good combos, since the scoring formula (10·H²·fib) grows exponentially, which means that, with a very good combo, a single test case will make most of the points.
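To illustrate why combos dominate, here is my reading of the scoring rule as a small Python sketch: each kill is worth 10·H² with H humans alive, and the nth kill of a single-turn combo is multiplied by a Fibonacci number. The exact multiplier indexing is from memory and may be off by one; treat it as an illustration of the growth, not an exact reimplementation.

```python
def fib(n):
    """1, 1, 2, 3, 5, 8, ... (1-indexed)."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def combo_score(humans, kills):
    """Points for killing `kills` zombies in one turn with `humans` alive.
    Assumed rule: nth kill of the combo is worth 10*H^2 * fib(n+1),
    i.e. multipliers 1, 2, 3, 5, 8, ... (indexing from memory)."""
    base = 10 * humans * humans
    return sum(base * fib(n + 1) for n in range(1, kills + 1))
```

With 10 humans alive, one kill is worth 1,000 points, but a 10-kill combo is worth 1,000 × (1+2+3+5+8+13+21+34+55+89) = 231,000, which is why a single lucky test case can outweigh everything else.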
On the public test cases, the one that seemed to scale the best was CG, so I focused on it, running different modifications of my algorithm on this test locally, hundreds of times.
I didn't know that the hidden test case would be so close to it.
Locally, the CG test case gives these scores (based on ~9,000 runs done today):

       26,000 pts      1 time
       42,000 pts     34 times
       67,000 pts    735 times
      109,000 pts  5,588 times
      177,000 pts  1,811 times
      286,000 pts    423 times
      463,000 pts    298 times
      750,000 pts    389 times
    1,214,000 pts      9 times
As you can see, it's very random, but that's actually a good thing because you can always resubmit several times.
That's why I tried to code an algorithm with the highest variance possible.
So during the last hour, I submitted between 15 and 20 times and finally got 966k.
The big combo on the hidden test case is this one:
https://www.codingame.com/replay/solo/67664848
I think it would be great for future contests to:
- Make all tests matter (one way to do that is to normalize the score of each test: YOUR_SCORE/BEST_SCORE or YOUR_SCORE/MAX_THEORETICAL_SCORE)
- Have more tests, so that the result depends less on luck
- Re-run the tests at the end of the contest, to avoid what other people and I did: resubmitting several times
- Ignore completion (the number of passed test cases) in the ranking, to avoid situations where you are stuck with a very poor rank because you failed one hidden test case (95% completion) and cannot debug it
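The per-test normalization suggested above is simple to state in code. This is a hypothetical sketch (function and parameter names are mine): each test contributes your score divided by the best score achieved on that test, so every test counts equally regardless of its absolute point scale.

```python
def normalized_total(your_scores, best_scores):
    """Sum of per-test ratios: each test contributes at most 1.0,
    so no single high-scoring test can dominate the ranking."""
    return sum(y / b for y, b in zip(your_scores, best_scores))
```

Under this scheme, a 1,214,000-point combo on CG would be worth no more than a perfect run on the smallest test.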