The topic where you can tell how you liked the challenge, and how you solved it. (Do not post full code.)
To me this looked more similar to the “Thor vs Giants” puzzle than to “Defibrillators”.
As always, very nice challenge, keep doing a great job!
This was my first contest and it was a lot of fun, I can’t wait until games are created from the engine so that I have a chance to play with it again.
I spent far too much time on what was ultimately a weaker strategy: I focused on mowing down zombies quickly while keeping as many humans alive as possible, rather than on clustering zombies and racking up big combos on groups. By the time I realized the latter approach was better, I was already too tired to do much about it.
Visually, it was a fun algorithm to watch, at least; it spread more gore across the level than blowing everything up in one spot would have.
Really interesting challenge. Had a lot of fun.
I would be very interested to know the strategies of the top scorers.
It was a really good challenge. However, I am a bit frustrated when I see the scores of the top players. They are so far ahead that I have surely missed something. My strategy was a basic one: I solved it as a search problem with limited depth (depth-first search in my case). To increase my maximum depth a bit, I reduced the number of zombies by clustering them with K-means. By tuning my algorithm I could obtain better results, but I do not think I could get ten times my score… It would be nice to know the best strategy, or to see it play. That would be fun.
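The K-means reduction described above could be sketched like this (plain-Python, not the poster's actual code; the zombie positions and cluster count are made up). Each cluster centroid can then stand in for its zombies in the depth-limited search:

```python
# Hypothetical sketch of the clustering step described above: group nearby
# zombies into k clusters with plain-Python k-means, so the search only has
# to consider k "super-zombies" (the centroids) instead of every zombie.
import random

def kmeans(points, k, iterations=20):
    """Cluster 2-D points; returns (centroids, labels)."""
    centroids = random.sample(points, k)          # pick k zombies as seeds
    labels = [0] * len(points)
    for _ in range(iterations):
        # assignment step: each point goes to its nearest centroid
        for i, (x, y) in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: (x - centroids[c][0]) ** 2
                                        + (y - centroids[c][1]) ** 2)
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# two obvious groups of zombies -> two clusters
zombies = [(100, 100), (120, 90), (8000, 4000), (7900, 4100)]
centroids, labels = kmeans(zombies, 2)
```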
Top 1 Player Replay : https://www.codingame.com/replay/solo/67239795
Doing one more combo can give you tons of points; +100K easily.
How do you know that you do not need to save the top human? It seems to be a manual strategy that works only for this case. Are the test examples included in the evaluation? If so, it was “easy” to overfit…
It seems it was the combos that gave the increased score, while I was trying to save all the humans.
The validators are always different from the provided test data. Similar, but different.
For the top players too?
The big disparity in points between the top three players leads me to believe the contest was too short for people to really develop good strategies.
@Aun: it is a solvable problem to determine whether to save a human or let them die; it just takes some good algorithms.
Hi, this strategy is very impressive on the CG test. I saw this video 20 minutes before the end of the challenge. I already had a strategy for when the humans are on one side and the zombies are on the other.
Not trying to save the human in the middle of that pack of zombies was a problem for me, because my first condition was to save every human whenever I could.
I didn’t have time to add a general condition to ignore a human inside a big pack of zombies when it is worth enough points.
So I added a condition to my detection of the opposite-sides situation to ignore the one human in the middle.
I earned +30k on submission, and I didn’t have time to do more, because I think it’s very difficult to pack zombies together while defending a human. My method for packing was not compatible with defending unless there was a large gap between the humans and the zombies.
I will try more when all this becomes a multiplayer game.
Great challenge, and thanks to CodinGame!
I liked the theme of the challenge and it was pretty fun.
I was disappointed, though, at how my AI that saved humans nearly optimally (as far as they could be saved) was worthless in terms of points. The combo points scale exponentially (Fibonacci) while the contribution to the score from having humans alive only scales quadratically, so keeping everyone except one bait human alive is the meta.
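To put numbers on that disparity: assuming the scoring rules from the contest statement (each zombie killed is worth humans_alive² × 10 points, and when several zombies die in the same turn the nth one is multiplied by the nth term of 1, 2, 3, 5, 8, …), one big combo crushes turn-by-turn kills. The scenario below is made up for illustration:

```python
# Illustrative comparison of the two strategies, assuming the scoring rules
# from the contest statement: a kill is worth humans_alive**2 * 10, and the
# n-th zombie destroyed in the same turn is multiplied by the n-th term of
# the sequence 1, 2, 3, 5, 8, ... (Fibonacci).

def turn_score(humans_alive, zombies_killed_this_turn):
    base = humans_alive ** 2 * 10
    fib = [1, 2]
    while len(fib) < zombies_killed_this_turn:
        fib.append(fib[-1] + fib[-2])
    return sum(base * fib[n] for n in range(zombies_killed_this_turn))

# 10 zombies, one kill per turn, all 5 humans kept alive:
spread_out = sum(turn_score(5, 1) for _ in range(10))  # 2500 points
# 10 zombies packed and killed in one turn, one bait human sacrificed:
one_combo = turn_score(4, 10)                          # 36960 points
```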
And when it came to making large combos, I did not find a brilliantly robust algorithm; I ended up with code whose score would wildly fluctuate whenever I changed anything in it. The slightest change in movement might not pack the zombies quite as well and would divide your score by 2. I can’t think of any other puzzle on the site where the scoring is so wild.
To me it felt impossible to “improve” my code; it was more about trying something and seeing if I got lucky. In the end my AI had bugs that I had to keep, because otherwise I would lose half my score.
I was able to make versions that would protect humans better and better, but the correlation between that and the score turned out to be 0.
I wonder how the top scores were achieved, because this puzzle felt very prone to hardcoding.
It was interesting but I wanted to point out these things.
If you have enough time, you can find an AI algorithm that finds the optimal solution. However, I would have needed to add a lot of special cases to do such a thing. One thing I may have gotten wrong is the representation of a state and how to get to the next state. How did you represent these things?
I agree. We should have Fibonacci score for humans and square score for zombies, and a STRIKE command instead of automatic shooting, to have more fun :).
Is there a way to see the code of other people? I would love to see how the top 3 did it
This was my first contest also. I’m pretty addicted to the CodinGame website, trying to push my CP as high as possible, and the contest was a good opportunity to go higher.
I have to say I got really pissed off (I’m seriously considering leaving CodinGame because of this) about the difference between the IDE tests (100%) and the validated code (95%). I tried 3 different approaches (kill as fast as possible, protect humans, protect the nearest human who can still be saved) in two languages (PHP and C), and with all of them I scored 100% in the IDE while getting the 95% mark upon validation.
I understand why you use different tests; what I don’t understand is why the IDE tests don’t reflect all the cases covered by the validation.
To make it worse, people on IRC were saying that getting 100% was easy, but then there were a few folks, like me, who were stuck at 95%! You can see it on the leaderboard too: a third of the competitors are at 95%, and they, like me, have a lot of scoring points (I was at 50K; not great, but certainly not “sleeping while coding”).
On the puzzles I also have this problem sometimes (3 or 4 times up to now), but by looking at the runs that fail, I can understand what the problem is. Sometimes the algorithm is wrong (and I hardcode the exception), other times it is my fault. Here it was impossible to understand what was going wrong or where it failed. (I’m really annoyed by it!)
For me this was the big problem with this contest. Otherwise it was as expected.
PS: I got a few errors on the website at the end of the contest while you were updating to the next one.
Here is my strategy, and I think doing Thor and the Giants this month helped me a lot.
1st hour of challenge :
My only logic was to save the humans. So at first I chose the human closest to me and stayed with him… It worked immediately, and I was very happy… and I thought: “Already finished?”. So I tried some more…
I replaced “human closest to me” with “human with the closest zombie”. I also had to add an “if I can save him” check to pass 100% of the tests. Then I changed my target to the attacking zombie instead of the protected human, because I thought the score would be better. So I submitted and got some 64k.
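That selection rule could look roughly like this (a hypothetical sketch, not the poster's code; the speeds of 1000 for Ash and 400 for zombies and the 2000-unit kill range come from the contest statement, and the straight-line reachability check is a simplification that ignores zombies retargeting):

```python
# Hypothetical sketch of "protect the human with the closest zombie, if I
# can save him": compare how many turns the nearest zombie needs to reach a
# human against how many turns Ash needs to get within kill range of him.
import math

ASH_SPEED, ZOMBIE_SPEED, KILL_RANGE = 1000, 400, 2000

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def turns_until_eaten(human, zombies):
    return min(dist(human, z) for z in zombies) / ZOMBIE_SPEED

def turns_to_protect(ash, human):
    # turns for Ash to get within kill range of the human's position
    return max(0.0, dist(ash, human) - KILL_RANGE) / ASH_SPEED

def pick_human(ash, humans, zombies):
    savable = [h for h in humans
               if turns_to_protect(ash, h) <= turns_until_eaten(h, zombies)]
    # among savable humans, go to the one in the most immediate danger
    return min(savable, key=lambda h: turns_until_eaten(h, zombies),
               default=None)
```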
Pause 1h30 to think…
End of the first day (till morning :)):
I had all the data in a structure and was ready to build a tree (fortunately I did not). So I decided to code the next situation: node1 -> node2. With this new situation, I could calculate how many zombies I would kill next turn. Then, when everything was clean, I began to give Ash some freedom. What if he doesn’t go where I tell him to go, but chooses between some other positions? Those positions would lie on a square grid with two parameters: side and step. Each position was simple to transform into coordinates within the circle limits, and to evaluate by computing node2 each time I needed to.
Then I had to tell Ash which position is the best. For me, the best was another value: the number of zombies to kill. So Ash could choose whether he kills 0 or N zombies. By playing with the grid values, I could give Ash less or more freedom. With a grid of 1000, Ash has more freedom of movement; with a grid of 100, he can only follow the right direction, with less choice.
The game analysis was then about deciding when Ash has to do exactly what I demand (when humans are everywhere and you have to run quickly to save them), and when Ash can have some free movement (when you have to save humans but there is a great number of zombies, so let Ash optimize).
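A minimal sketch of this grid-of-candidates idea, under my own simplifications (hypothetical code, not the poster's: the one-turn node1 -> node2 evaluation is reduced to counting zombies inside Ash's 2000-unit kill range, candidates are clamped to the 16000×9000 map instead of the circle limits, and humans are ignored):

```python
# Hypothetical sketch of "give Ash a square grid of candidate positions and
# pick the one that kills the most zombies next turn". `side` controls how
# many grid points there are around Ash, `step` their spacing.
MAP_W, MAP_H, KILL_RANGE = 16000, 9000, 2000

def candidates(ash, step, side):
    """Square grid of candidate positions centred on Ash, clamped to the map."""
    half = side // 2
    for dx in range(-half, half + 1):
        for dy in range(-half, half + 1):
            x = min(max(ash[0] + dx * step, 0), MAP_W)
            y = min(max(ash[1] + dy * step, 0), MAP_H)
            yield (x, y)

def kills(pos, zombies):
    # zombies inside Ash's kill range at the end of the move die this turn
    return sum(1 for z in zombies
               if (z[0] - pos[0]) ** 2 + (z[1] - pos[1]) ** 2 <= KILL_RANGE ** 2)

def best_move(ash, zombies, step=1000, side=2):
    return max(candidates(ash, step, side), key=lambda p: kills(p, zombies))
```

A larger `step` gives Ash more freedom, a smaller one keeps him close to the commanded direction, matching the 100/1000/2500 grids described above.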
Last day :
When I woke up (late), my position in the leaderboard had doubled… And I had no idea how to improve my score (82k). So I decided to let Ash do what he wanted: I tried a grid of 2500 to let him go.
But in this situation I had to give Ash another objective. So I remembered Thor, where I calculated the barycenter of the giants. I gave Ash the center of the zombies, so he could go where he wanted, but he would know that the center of the zombies was the better place. With some tries, Ash could pass the Combo test (140k).
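The barycenter objective mentioned above is just the centroid of the zombie positions; a trivial sketch:

```python
# Steer Ash towards the centroid (barycenter) of all zombie positions, so
# that zombies converging on him end up packed together for a combo.

def barycenter(zombies):
    n = len(zombies)
    return (sum(z[0] for z in zombies) / n,
            sum(z[1] for z in zombies) / n)

print(barycenter([(0, 0), (4000, 0), (2000, 3000)]))  # -> (2000.0, 1000.0)
```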
I spent the last hours of the contest evaluating in which situations I had to let Ash do what he wanted and in which I didn’t. But when I saw the CG test in a video, I realized that I could earn more points by not saving the humans. So I added a trick that told Ash: “If you want to kill a lot of zombies, let the humans die!”. I didn’t have time to improve my algorithm for packing zombies, because it is very “naïve” (seek the center but try to kill the most), but this trick gave me some more points (161k final score).
This was a great challenge (my 4th). Thanks a lot to Codingame.
(Sorry for my French English.)