I’ve just looked and, in fact, there is something wrong with my program… Unfortunately, I didn’t have a clue what was wrong during the contest: on the equivalent test in the IDE, the program executes flawlessly, protecting the right human.
I have the same problem. At first, my strategy passed 100% of the tests several times, but then it started to pass 90-100% without any global changes (I use randomness, so the result was different each time). I thought it didn’t fit in the time limit (100 ms), but then I realized that, most likely, this is due to rounding errors. Also, the rules say that zombies go to the nearest human, but how do they behave if the distances to two survivors are the same? And is the distance rounded before being compared or not?
As a result, I have 235k points and I’m in 1469th place.
If the percentage didn’t influence the rating, it would be 20th place ><
Ok, I saw the result and I can’t understand why my program behaved so strangely.
One thing about the validation tests: they’re not preventing hardcoding at all. The test cases in CG are almost always simple transformations like reflection or translation, very easy to deduce. Maybe that was the intention: to reward clever coders who can find the rule of the test transformation, write a test-case detector, and finally use a hardcoded or offline-optimized solution. But if that wasn’t the intention, some more sophisticated hardcoding prevention should be applied.
I do agree. I also hope that the final test cases are far more numerous than these (hundreds), and that they cover a wide range of different scenarios (lots of zombies, few zombies, lots of humans, few humans, etc.) to at least make hardcoding difficult, and also to average out any fluctuation in the scores. If system load is the problem, one possibility would be, say, two test cases: a small one (20) and a big one (200). One could run the small case every minute, but the large case only every 10 minutes, with the final score being the total of both.
I think there is a design problem in CG’s test cases:

- This is score-oriented programming rather than saving as many people as you can; the score contradicts the idea of the game. (“I am sorry John, saving you costs me 90,000 points in CG; you’d better die.”)
- This encourages test-case-by-test-case hardcoding to a very high degree.

Ideas for improving the test case design:

- There should not be any situation where sacrificing a human scores higher.
- The real test cases should be more dynamic than just a y-axis inversion of the public test cases.
First, I would like to thank Codingame for this contest. It was a lot of fun!
Here is my approach:
At each turn:
    while there is some time left:
        generate a random strategy
        simulate the strategy
    apply the first move of the best strategy seen so far

A strategy is:
    repeat X times (X chosen randomly between 0 and 3):
        move to a random position
    for each zombie still alive (the order of the zombies is also chosen randomly):
        move toward the zombie until it is killed
And that’s it…
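To make this concrete, here is a minimal Python sketch of that loop. The `simulate` helper (a full playout of a strategy that returns its score) and the `state` layout are assumptions for illustration, not the actual contest code:

```python
import random
import time

# Assumed: state is a dict with a "zombies" list of ids, and
# simulate(state, strategy) plays the strategy out and returns its score.

def random_strategy(zombie_ids):
    # 0 to 3 random waypoints on the 16000 x 9000 map, then the
    # remaining zombies in a random kill order.
    waypoints = [(random.randint(0, 15999), random.randint(0, 8999))
                 for _ in range(random.randint(0, 3))]
    kill_order = list(zombie_ids)
    random.shuffle(kill_order)
    return waypoints, kill_order

def choose_move(state, simulate, budget_s=0.095):
    # Try random strategies until the turn's time budget runs out.
    deadline = time.perf_counter() + budget_s
    best, best_score = None, float("-inf")
    while time.perf_counter() < deadline:
        strategy = random_strategy(state["zombies"])
        score = simulate(state, strategy)
        if score > best_score:
            best, best_score = strategy, score
    waypoints, kill_order = best
    # Apply only the first move; everything is re-searched next turn.
    return waypoints[0] if waypoints else kill_order[0]
```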
No complicated geometry, no hard-coded strategy, and the humans are not even taken into account (except in the simulation’s score computation)…
I tried many different modifications of this method, and everything else gave worse or very similar results.
Since the algorithm tries many (~1,000,000) different strategies, most of the time it will find out whether it’s better to run toward the zombies or wait for them, save all the humans or not, etc.
Trying up to 3 random moves seems good for short term (like finding a way to make a good combo).
Trying different orders of zombies to kill is good for long term (like for example ensuring that at least one human will stay alive).
Since the ranking is based on the sum of the hidden test cases’ scores, it was more important to focus on the tests that allowed very big scores.
This is especially true once an algorithm starts to find very good combos, since the scoring formula (10 × humans² per zombie, times a Fibonacci combo multiplier) grows exponentially, which means that with a very good combo a single test case will make most of the points.
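To see why one combo can dominate, here is a small illustration of the scoring as I understand it: base value 10 × humans² per zombie, with successive kills in the same turn scaled by Fibonacci multipliers (1, 2, 3, 5, 8, …). Treat the exact multiplier sequence as my reading of the rules:

```python
def combo_score(humans_alive, kills_in_one_turn):
    # Base value per zombie, scaled by Fibonacci multipliers for combo kills.
    base = 10 * humans_alive ** 2
    a, b = 1, 2  # successive multipliers: 1, 2, 3, 5, 8, ...
    total = 0
    for _ in range(kills_in_one_turn):
        total += base * a
        a, b = b, a + b
    return total

print(combo_score(10, 10))       # one 10-kill combo: 231000
print(10 * combo_score(10, 1))   # ten separate kills:  10000
```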
On the public test cases, the one that seemed to scale the best was CG, so I tried to focus on it by running different modifications of my algorithm on this test locally, hundreds of times.
I didn’t know that the hidden test cases were so close to it.
Locally, the CG test case gives these scores (based on ~9,000 runs done today):
| Score (pts) | Occurrences |
| ---: | ---: |
| 26,000 | 1 |
| 42,000 | 34 |
| 67,000 | 735 |
| 109,000 | 5,588 |
| 177,000 | 1,811 |
| 286,000 | 423 |
| 463,000 | 298 |
| 750,000 | 389 |
| 1,214,000 | 9 |
As you can see it’s very random, but it’s actually a good thing because you can always resubmit several times.
That’s why I tried to code an algorithm with the highest variance possible.
So during the last hour, I submitted between 15 and 20 times and finally got a 966k.
The big combo on the hidden test cases is this one :
https://www.codingame.com/replay/solo/67664848
I think it would be great for future contests to:

- Make all tests matter (one way to do that is to normalize the score of each test: YOUR_SCORE/BEST_SCORE or YOUR_SCORE/MAX_THEORETICAL_SCORE)
- Have more tests, so that the result depends less on luck
- Re-run the tests at the end of the contest, in order to avoid what I and other people did: resubmitting several times
- Ignore completion (the number of succeeded test cases) in the ranking, in order to avoid being stuck with a very poor ranking because you failed one hidden test case (95% completion) and cannot debug it
I’m kinda amazed at how such simple logic can yield such astonishing results. I really need to try this out now…
> Re-running the tests at the end of the contest, in order to avoid what I and other people did: resubmitting several times

=> You could set a seed, submit, and if it didn’t work out, change the seed.
Then you remove a lot of complexity…
Having only a few hours to spend on this contest, I decided to write a simulator for the game and try my luck with a genetic algorithm.
It took about an hour to get the score right in my simulations; that was a good milestone.
I would then generate a population of 100 completely random solutions (each being a list of waypoints for Ash).
The basic single-split crossover operation seemed to work OK, and I later implemented a double-split operation that seems to work a little better.
The rest was tweaking the size of the population and the number of generations.
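For anyone unfamiliar with these operators, here is a rough sketch of what single- and double-split crossover over waypoint lists could look like; the exact operators used aren’t in the post, so take this as one plausible reading:

```python
import random

def single_split(parent_a, parent_b):
    # Cut both waypoint lists at one random index and swap the tails.
    # Assumes each parent has at least 2 waypoints.
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])

def double_split(parent_a, parent_b):
    # Swap the middle segment between two random cut points instead.
    # Assumes each parent has at least 3 waypoints.
    n = min(len(parent_a), len(parent_b))
    i, j = sorted(random.sample(range(1, n), 2))
    return (parent_a[:i] + parent_b[i:j] + parent_a[j:],
            parent_b[:i] + parent_a[i:j] + parent_b[j:])
```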
I was very happy with rank 38 (about 100k score) and decided to go to bed before the end of the competition (living in Sydney!).
I woke up and checked the ranking: 100th! Got a nice achievement, at least.
I am sorry I did not explain my idea clearly.
This game can be considered in the following way:

For all the possible endings (0 zombies and at least 1 human):
    Pick the endings with the most humans alive. (Our mission: to save humans. There may be millions of such endings.)
    Among those, pick the path with the highest score. (Combo skill, the way you kill zombies, should be a bonus only.)

That is what I mean by mission-oriented: on CG, the highest score is achieved without ending with the most humans alive.
Then again, that removes the choice to sacrifice a human, and that simplifies the problem a great deal.
Thanks a lot for your explanation, it is really helpful. Do you know the name of this kind of randomized method? Is there a theory behind it?
It’s called “Monte Carlo”. You try random strategies in the given time and you keep the best.
Hey, we would have loved to reveal the source code of the top players, as we know it really helps people improve. However, this would spoil the thinking for those who want to try the game, as we will place it into the game session!
Similar, but different
For anyone interested in knowing how the winner proceeded… Thanks eldidou for sharing!
I tried what MaximeC said:
[quote=“MaximeC, post:27, topic:1089”]
An algorithm that was enough to get 100% was: go to the nearest human that I can reach before a zombie and stay on it.
[/quote]
And so I almost reached the upper third of the leaderboard. YEAH!
Well, the amount of points is crappy, but who cares?
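For reference, the “can I reach this human before a zombie” check falls out of the unit speeds in the rules (Ash moves 1000 units per turn with a 2000-unit gun range, zombies move 400 per turn). A minimal sketch, assuming zombies beeline to the human and never retarget:

```python
import math

ASH_SPEED, ZOMBIE_SPEED, GUN_RANGE = 1000, 400, 2000  # from the game rules

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def savable_humans(ash, humans, zombies):
    # A human is (roughly) savable if Ash gets within gun range before the
    # closest zombie reaches them; this ignores zombies retargeting mid-run.
    saved = []
    for h in humans:
        zombie_turns = min(dist(z, h) for z in zombies) / ZOMBIE_SPEED
        ash_turns = max(0.0, dist(ash, h) - GUN_RANGE) / ASH_SPEED
        if ash_turns <= zombie_turns:
            saved.append(h)
    return saved
```

The strategy then reduces to moving toward the nearest human in that list and standing on them.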
100% agree
This was almost exactly my strategy, except I didn’t have the “repeat X times” part. I came in 81st place. Did you move toward the zombie’s current position, toward the zombie’s position next turn, or did you predict where the zombie would be when you got close to it? Also, for the “move to a random position”, was the position a random point within Ash’s circle of movement, or on the circumference of that circle?
When will Code vs Zombies be available as an optimization puzzle?