Code vs Zombies - Feedback & Strategies

It was a very nice challenge with great visuals.
I always have more fun when puzzles have nice visuals like this one. :slight_smile:

Like many CodinGamers, I chose the wrong strategy: saving humans first and then trying to do combos.
Now I’m just waiting for the puzzle to come back to improve my solution. :slight_smile:

Congrats to the winners !
And thanks to CodinGame for this awesome puzzle!


I have some questions:

  1. Will this game be available sometime soon?
  2. If yes, will it come with the IDE tests or the validator ones? I’d prefer the validator ones, since those are the ones I’m somehow failing.
    2a) If not, will there be a console and the ability to edit code on the current validator page, so I can figure out what mistakes my code has?

And good job with the challenge, keep it so.

  1. Yes, really soon; it should be available within the next day.
  2. I’m not sure I understand your question, but the tests will be the same as in the contest, i.e. the IDE tests and the validator tests are slightly different.

I was thinking: how about the system keeping track of my highest ranking submission, instead of me having to make sure that my last submission is the best one?

I assume it would multiply the load on the system if this were allowed for multiplayer games, but for single player games like this one, it would be great. I could submit a modified version without having to fear that I lose everything, especially towards the end of the contest. Thoughts?


My code successfully passed all the validators when I submitted it during the contest, but with the different validators available after the contest, it now fails three tests and I’d like to debug it.

This was my first contest on Coding Game, overall good fun !


I implemented a weighted scoring system based on several factors:

  • Distance from Ash to the zombies
  • Distance from Ash to the humans
  • Weighted distance between zombies and humans
  • Value of kills
  • Value of humans remaining alive
  • etc.

Then I tried different positions within Ash’s movement circle (around 1000 tries per round), keeping the one with the maximum score among those that guaranteed at least one human would stay safe.
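A sketch of what such weighted sampling inside Ash's movement circle could look like; the factors, weights, and function names below are illustrative placeholders, not the author's actual values:

```python
import math
import random

# Hypothetical weighted evaluation of a candidate position for Ash.
# The factors and weights are illustrative, not the author's real ones.
def evaluate(ash, zombies, humans, w_zombie=1.0, w_human=2.0):
    score = 0.0
    for z in zombies:
        score -= w_zombie * math.dist(ash, z)   # get close to zombies to kill them
    for h in humans:
        score -= w_human * math.dist(ash, h)    # stay near humans to protect them
    return score

def best_move(ash, zombies, humans, tries=1000, speed=1000):
    """Sample random points inside Ash's movement circle, keep the best."""
    best, best_score = ash, evaluate(ash, zombies, humans)
    for _ in range(tries):
        angle = random.uniform(0, 2 * math.pi)
        r = speed * math.sqrt(random.random())  # uniform over the disc
        cand = (ash[0] + r * math.cos(angle), ash[1] + r * math.sin(angle))
        s = evaluate(cand, zombies, humans)
        if s > best_score:
            best, best_score = cand, s
    return best
```

A real version would also simulate zombie movement and reject candidate moves that let a guaranteed-savable human die, as the author describes.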

Challenge backup plan

I passed all the tests, after debugging for 3 hours before realizing my Ash could not move up (which still let him pass all the IDE tests, but not the hidden ones)…

I did 70k on the first validated try, then 110k and 135k by killing zombies less aggressively and starting to circle around them more.

Two hours before the end of the contest, I realized I no longer had any idea what my weights represented (I know correlation doesn’t imply causation, but…), and overfitting the algorithm was taking too much time for any significant improvement, so I decided to port the code to VB.Net to at least take first place for that language.

I was quite surprised to score 168k in VB.Net vs 131k in C# with seemingly the same code.


My one regret is that, due to the Fibonacci scoring, most of the score came from only three or four tests. I did 40k on test 4 (“CG”) and 80k on test 7 (“combo”). I could do 62k on test 4 too, but only by scoring less on “combo”. I could see that test 19 could be a big earner, but I was unable to do well there without compromising on 4 and 7.

I would have preferred the tests to be rated as a percentage of the maximum possible score (on a log scale?); it would favor versatile algorithms over score-stacking ones. As it stands, I have the feeling you could reach the top 10 just by crushing test 7 and making sure to stay alive in the rest. This doesn’t take away from the skill of the top players (who probably have very versatile algorithms anyway).


We’re sorry, racribeiro. What happened is that a lot of people managed to get 100% in the IDE with a buggy algorithm (for instance, always going to the last human of the list works). Of course, such an algorithm was unlikely to also pass all the tests after submission. We realised this problem quickly, but it’s impossible for us to add a new test case during the contest.

An algorithm that was enough to get 100% was: go to the nearest human that I can reach before a zombie does, and stay on them.
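That baseline can be sketched as follows, assuming the contest's speeds (Ash moves 1000 units per turn, zombies 400) and a deliberately naive reachability check:

```python
import math

ZOMBIE_SPEED, ASH_SPEED = 400, 1000  # units per turn, per the contest rules

def turns(dist, speed):
    """Number of turns needed to cover a distance at a given speed."""
    return math.ceil(dist / speed)

def pick_target(ash, humans, zombies):
    """Nearest human whom Ash can reach before any zombie does, or None."""
    savable = []
    for h in humans:
        ash_turns = turns(math.dist(ash, h), ASH_SPEED)
        zombie_turns = min(turns(math.dist(z, h), ZOMBIE_SPEED) for z in zombies)
        if ash_turns <= zombie_turns:  # naive check, ignores kill range
            savable.append((math.dist(ash, h), h))
    return min(savable)[1] if savable else None
```

This ignores Ash's kill range, so it is stricter than necessary, but it captures the "nearest human I can reach in time" idea.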


This challenge let me reacquaint myself with distance functions (Euclidean and Manhattan), vector collinearity and even matrices! (not the movie, FYI)
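For reference, the two distance functions mentioned; the game itself uses Euclidean distance for movement and ranges:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    """Grid distance |dx| + |dy|, a cheaper approximation."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```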


  • For each zombie, I figure out which human it is walking toward (Ash included).
  • Can I save that human?
  • Then I try to save as many as possible by walking to them.

Simple, greedy, and it works. Not score-oriented at all!
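The first step (working out which human, Ash included, each zombie is walking toward) might be sketched as:

```python
import math

def zombie_targets(zombies, humans, ash):
    """Map each zombie to the human (or Ash) it will walk toward:
    zombies always head for their nearest target."""
    candidates = humans + [ash]
    return {z: min(candidates, key=lambda c: math.dist(z, c)) for z in zombies}
```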


This challenge was fun. Indeed it was interesting to find which mathematical tools had to be used in specific circumstances. Mainly vector space though.

Good concept, good challenge. It would be my pleasure to say “Good job!”.


I didn’t have much time to give to this game, but it left me with a “meh” impression.

It may be only my opinion, but optimisation games are real fun when you know what you are optimising: the length of the CoTR sequence, or the fuel consumption of the Mars Lander; those are nice, relevant parameters to optimise. The same applies to the solo games with specific achievements (number of bikes alive in The Bridge, number of jumps for Batman…)

Arbitrary metrics, on the other hand? Meh.

But the graphics and gameplay were very nice!


I did that, and it still failed…

I’ve just looked and, in fact, there is something wrong with my program… Unfortunately, during the contest I had no clue what was wrong, since on the equivalent IDE test the program executes flawlessly, protecting the right human.

I have the same problem. At first my strategy passed 100% of the tests several times, but then it started passing only 90-100% without any changes (I use randomness, and the result was different each time). I thought it wasn’t fitting in the time limit (100 ms), but then I realized it is most likely due to rounding errors. Also, the rules say that zombies go toward the nearest human, but how do they behave if the distance to a couple of survivors is the same? And is the distance rounded before being compared, or not?
As a result, I have 235k points and I’m in 1469th place :slight_smile:
If completion did not influence the rating, it would be 20th place ><

OK, I saw the result, and I cannot understand why my program behaved so strangely :slight_smile:

One thing about the validation tests: they don’t prevent hardcoding at all. The validator cases on CG are almost always simple transformations of the IDE ones, like reflection or translation. Very easy to deduce. Maybe that was the intention: to reward clever coders who can find the rule of the transformation, write a test-case detector, and finally use a hardcoded or offline-optimized solution. But if that wasn’t the intention, some more sophisticated hardcoding prevention should be applied.


I agree. I also hope the final test cases are far more numerous than these (hundreds), covering a wide range of scenarios (lots of zombies, few zombies, lots of humans, few humans, etc.), to at least make hardcoding difficult and to average out any fluctuation in the scores. If system load is the problem, one possibility would be to have two sets of test cases: a small one (20) and a big one (200). You could run the small set every minute but the large set only every 10 minutes, with the final score being the total of both.


I think there is a design problem in the “CG” test case.

  1. This is score-oriented programming rather than saving as many people as you can; the scoring contradicts the idea of the game. (“I am sorry John, saving you costs me 90,000 points on CG; you’d better die.”)
  2. It encourages test-case-by-test-case hardcoding to a very high degree.

Ideas for improving the test case design:

  1. There should never be a situation where sacrificing a human scores higher.
  2. The real test cases should be more dynamic than just a y-axis mirror of the public ones.

First, I would like to thank CodinGame for this contest. It was a lot of fun!

Here is my approach :

At each turn :

while there is some time left
    generate a random strategy
    simulate the strategy
apply the first move of the best strategy seen so far

A strategy is :

repeat X times (X chosen randomly between 0 and 3)
    move to a random position
for each zombie still alive (the order of the zombies is also chosen randomly)
    move toward the zombie until it is killed

And that’s it…
No complicated geometry, no hard-coded strategy, and the humans are not even taken into account (except for the simulation score computation)…
I tried many different modifications of this method, and everything else gave worse or very similar results.

Since the algorithm tries many (~1,000,000) different strategies, most of the time it will work out whether it’s better to run toward zombies or wait for them, to save all humans or not, etc.
Trying up to 3 random moves works well for the short term (like finding a way to make a good combo).
Trying different kill orders for the zombies works well for the long term (for example, ensuring that at least one human stays alive).
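A minimal sketch of that strategy generator; the board size matches the contest map (16000 x 9000), and the zombie list is a placeholder for whatever the simulation tracks:

```python
import random

def random_strategy(zombies, width=16000, height=9000):
    """0-3 random waypoints, then the remaining zombies in a random kill order."""
    moves = [(random.randrange(width), random.randrange(height))
             for _ in range(random.randint(0, 3))]
    kill_order = zombies[:]
    random.shuffle(kill_order)
    return moves, kill_order
```

Each generated strategy would then be run through the game simulation, and the first move of the best-scoring one played, as in the pseudocode above.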

Since the ranking is based on the sum of the hidden test case scores, it was more important to focus on tests that allowed very big scores.
This is especially true once an algorithm starts to find very good combos, since the scoring formula (10·H²·fib) grows exponentially, which means that with a very good combo a single test case yields most of the points.
Of the public test cases, the one that seemed to scale best was CG, so I focused on it by locally running different modifications of my algorithm on this test hundreds of times.
I didn’t know that the hidden test cases would be so close to it.
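The exponential growth comes from the Fibonacci combo multiplier. A rough sketch of the "10·H²·fib" shape (my reading of it; the referee's exact Fibonacci indexing may differ):

```python
def fib(n):
    """Fibonacci numbers: fib(0) = fib(1) = 1, fib(2) = 2, ..."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def combo_score(humans_alive, kills_this_turn):
    """Each kill is worth 10 * H^2, and the n-th kill of a combo is
    multiplied by a growing Fibonacci number (indexing is approximate)."""
    base = 10 * humans_alive ** 2
    return sum(base * fib(n + 1) for n in range(kills_this_turn))
```

This growth is why one huge combo on a single map can dwarf the scores of all the other tests combined.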

Locally, the CG test case gives these scores (based on ~9,000 runs done today):

26.000 pts    1 time
42.000 pts    34 times
67.000 pts    735 times
109.000 pts   5588 times
177.000 pts   1811 times
286.000 pts   423 times
463.000 pts   298 times
750.000 pts   389 times
1.214.000 pts 9 times

As you can see, it’s very random, but that’s actually a good thing, because you can always resubmit several times.
That’s why I tried to code an algorithm with the highest variance possible.
So during the last hour I submitted between 15 and 20 times and finally got 966k.

I think it would be great for future contests to:

  • Make all tests matter (one way is to normalize the score of each test: YOUR_SCORE/BEST_SCORE or YOUR_SCORE/MAX_THEORETICAL_SCORE)

  • Have more tests, so the result depends less on luck

  • Re-run the tests at the end of the contest, to make resubmitting several times (as I and others did) pointless

  • Ignore completion (the number of passed test cases) in the ranking, to avoid being stuck with a very poor rank because you failed one hidden test case (95% completion) that you cannot debug.


I’m kinda amazed at how such simple logic can yield so astonishing results. I really need to try this out now…

Re-run the tests at the end of the contest, to make resubmitting several times (as I and others did) pointless

=> You could set a seed, submit, and if it didn’t work out, change the seed. :slight_smile:


Then you remove a lot of complexity…

Having only a few hours to spend on the contest, I decided to write a simulator for the game and try my luck with a genetic algorithm.

It took about an hour to get the score computation right in my simulations; that was a good milestone.

I would then generate a population of 100 completely random solutions (each being a list of waypoints for Ash).

The basic single-split crossover operation seemed to work OK, and I later implemented a double-split operation that seems to work a little better.

The rest was tweaking the size of the population and the number of generations.
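The two crossover operations might look like this on waypoint lists (my reconstruction with hypothetical names, assuming both parents have at least 3 waypoints):

```python
import random

def single_split(parent_a, parent_b):
    """Basic crossover: take a prefix of one parent and a suffix of the other."""
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return parent_a[:cut] + parent_b[cut:]

def double_split(parent_a, parent_b):
    """Two-point crossover: splice a middle segment of B into A."""
    n = min(len(parent_a), len(parent_b))
    i, j = sorted(random.sample(range(1, n), 2))
    return parent_a[:i] + parent_b[i:j] + parent_a[j:]
```

Two-point crossover tends to preserve both the opening and closing waypoints of a good route, which may be why it performed slightly better here.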

I was very happy with rank 38 (about 100k points) and decided to go to bed before the end of the competition (living in Sydney!).

I woke up and checked the ranking: 100th! Got a nice achievement :wink:


I am sorry, I did not explain my idea clearly.

This game can be thought of in the following way:

For all the possible endings (0 zombies and at least 1 human alive):
Pick the endings with the most humans alive (our mission is to save humans; there may be millions of such endings).
Among those, pick the path with the highest score (combo skill, i.e. how you go about killing the zombies, should only be a bonus).

That is what I mean by mission-oriented: on CG, the highest score does not come from an ending with the most humans alive.