Mean Max (CC01) - Feedback & Strategies


I tried minimax last time with WW and ended up spending too much time dealing with timeouts.
So I decided to go heuristic all the way this time.

My reaper follows this strategy:

  1. Score all wrecks

  2. Find the best wreck

  3. Go to the wreck

  4. If you are on the wreck and someone is trying to push you off it, fight back.

  5. If you are on the wreck and have some free time, check whether there is any skill opportunity.

  6. If no move has been chosen yet, find the tanker with the highest score.
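A minimal sketch of this decision cascade in Python (the entity tuples and the scoring weight are illustrative placeholders, not the author's actual code):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_reaper_target(reaper_pos, wrecks, tankers):
    """Pick the reaper's destination following the cascade above.

    wrecks/tankers are lists of (x, y, water) tuples; the scoring
    weight of 1000 per unit of water is a made-up placeholder.
    """
    def score(entity):
        x, y, water = entity
        # more water is better, distance is a penalty (weights are guesses)
        return water * 1000 - dist(reaper_pos, (x, y))

    if wrecks:
        best = max(wrecks, key=score)     # steps 1-2: score wrecks, pick the best
        return (best[0], best[1])         # step 3: go there
    if tankers:
        best = max(tankers, key=score)    # step 6: fall back to the best tanker
        return (best[0], best[1])
    return reaper_pos                     # nothing to do: stay put
```

The fight-back and skill checks (steps 4-5) would then override this target on top.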

My other two vehicles follow this pattern:

  1. If an enemy reaper will hit you next round:
    hit it back with full power.

  2. If blocking the way of my own reaper, move away.

  3. If you can reach the enemy reaper's target faster than they can, rush to the target.

  4. Check if there are places for the skill.

  5. If no action has been decided yet, go to the nearest enemy reaper.

Thanks for CG and the 4 Gamers for making this possible.

Best regards


Thanks for the contest. I liked the game quite a lot but I found it a bit too chaotic with so many units. The circular map with 1v1v1 is a very good idea IMHO.

I finished 16th.

For the search algorithm, I used a GA (genetic algorithm, reused from CSB). My genes were a type (move/skill), an angle, and a power/distance for each of my looters. I hesitated on the depth; my last submit used 4 turns, but I am not sure it is better than 2 or 3 (submit times were very long, with players improving their bots).

For evolution, I used random mutations (40% of the time), uniform crossover (40%), and two-point crossover (20%). I observed that a large mutation probability worked best for me. I did not try reducing the mutation range over time, but that is something I will try.

My population size was 10. I tried to change it but saw no obvious improvement. At the start, I used my dummy bots (used to simulate my opponents) to create one solution, reused the solution found at the previous turn (shifted by one), and filled the rest with random candidates. At each generation, I kept the best two candidates from the previous generation, introduced two random ones, and mutated/crossed the rest.
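The evolution scheme described above could look roughly like this in Python (the gene encoding, selection pool, and fitness function are guesses, not the author's code):

```python
import random

GENES_PER_TURN = 9          # 3 looters x (type, angle, power) -- illustrative
DEPTH = 4                   # turns simulated per candidate

def random_candidate():
    return [random.random() for _ in range(GENES_PER_TURN * DEPTH)]

def mutate(c):
    c = c[:]
    c[random.randrange(len(c))] = random.random()   # re-roll one gene
    return c

def uniform_crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def next_generation(population, fitness):
    ranked = sorted(population, key=fitness, reverse=True)
    new_pop = ranked[:2]                                  # elitism: keep best two
    new_pop += [random_candidate() for _ in range(2)]     # two fresh random ones
    while len(new_pop) < len(population):
        r = random.random()
        a, b = random.sample(ranked[:5], 2)               # parents from the top
        if r < 0.4:
            new_pop.append(mutate(a))                     # 40% mutation
        elif r < 0.8:
            new_pop.append(uniform_crossover(a, b))       # 40% uniform crossover
        else:
            new_pop.append(two_point_crossover(a, b))     # 20% two-point crossover
    return new_pop
```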

The first breakthrough in the quality of my AI was introducing dummies (used to model opponents and initialize my candidates).

For my dummies, I used:

  • Reaper: go to the nearest wreck that is not covered by an oil skill. The speed depends on the distance and the current motion vector. Checking for the oil skill improved my AI significantly.
  • Destroyer: go at full speed to the tanker nearest to the reaper's target. I tried ignoring wrecks covered by tar, but it did not seem to help.
  • Doof: go to the reaper of one of the adversaries. I settled on the highest-scoring player, but I tried various selections.

I think that using my dummies for the destroyer/doof helped a lot against some players but did the opposite against others (either I was losing a lot at the beginning of the submit, or at the end of it). I saw in one PM that it is possible to adapt to the opponent's behavior at run time; I will definitely try that :slight_smile:

For my eval, I used:

  • my score
  • my rage
  • the distance of my reaper to the best wreck not covered by oil. Best means: distance to the reaper weighted by a value that depends on its water and on the water of the other wrecks and their distances.
  • the distance of my destroyer to the best tanker not covered by tar. Best is defined the same way as for wrecks, but for tankers.
  • the distance of my doof to one of the reapers (see the doof dummy)
  • the scores of my opponents, with different coefficients depending on the rank of the opponents.

What I realized after a long time, and what gave me a good boost, is that the coefficient applied to the scores (and probably rage) needs to be very large compared to the rest. My interpretation is that we are only interested in the highest score; the rest is only there to select promising candidates for the evolution.

I used my eval at each turn of my simulations and applied an attenuation factor of 0.9. Using it only to evaluate the terminal position did not work for me, but I heard it worked well for other AIs. I don't know why (maybe better eval/dummies?).
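A sketch of this per-turn evaluation with the 0.9 attenuation factor (the state fields and all coefficients are placeholders; the score weight is deliberately large, as described above):

```python
def evaluate_state(state):
    """Illustrative per-turn eval: the score term dominates everything else."""
    return (10000 * state["my_score"]
            + 10 * state["my_rage"]
            - state["dist_reaper_to_best_wreck"])

def evaluate_rollout(states, attenuation=0.9):
    """Sum the eval of every simulated turn, each discounted by 0.9^t."""
    total, factor = 0.0, 1.0
    for s in states:
        total += factor * evaluate_state(s)
        factor *= attenuation
    return total
```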

I tried using CG Brutal Tester to check my AI against previous versions, but it did not help me this time. A new AI that performed worse locally was sometimes a lot better when submitted to CG, and vice versa. I don't know why.

I also tried using my GA to search for a solution for the opponents (against WAIT for me, or against a solution found with a short time budget) and then search for a solution for me against it, but it did not help. I did not try very long on this idea.

If you have any questions, don’t hesitate to ask.


Because your dummy is very good at predicting your own AI. Try removing the dummy when you are testing against yourself.


Mean Physics

Great job to the creators of the first (of many I hope) community contests! :smiley: Really well executed contest with interesting mechanics.

Couldn’t spend as much time on this contest as before (on vacation most of the week :stuck_out_tongue:) and only had the weekends to work on my bot, resulting in my abysmal ranking haha :upside_down: It was a great platform for some physics revision though \o/

After the previous contest, I figured out my bot was actually doing something pretty far from its design. So this time round, I decided to devote the first weekend to writing a proper visual tool to debug my algos.


Used the default tkinter package in Python (turtle graphics library) to draw each vehicle’s state, which made it much easier to break down the actions my vehicles took each turn.

Perhaps something like having two simple options to draw circles and lines on the debug-mode visual screen would be possible for the current game engine? It could be an additional regex pattern that accepts input like

"DEBUG [-circle | -line] [-pos] [-dist]..." 

in the referee, so our bots could have a visual ‘output’, especially for physics-heavy games like this where it’s pretty hard to imagine the vectors and it’s a lot easier to have visual feedback on what your bot is doing.
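The suggested syntax could be parsed on the referee side with a small regex. This grammar is only the poster's proposal, not an existing CG feature; the sketch below accepts integer arguments after a shape flag:

```python
import re

# Matches e.g. "DEBUG -circle 1000 2000 500" or "DEBUG -line 0 0 3000 4000"
DEBUG_RE = re.compile(
    r"^DEBUG\s+-(?P<shape>circle|line)\s+(?P<args>(-?\d+\s*)+)$"
)

def parse_debug(command):
    """Return (shape, [ints]) for a DEBUG drawing command, or None."""
    m = DEBUG_RE.match(command.strip())
    if not m:
        return None
    return m.group("shape"), [int(x) for x in m.group("args").split()]
```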


Most of my programming time was spent on implementing different navigation strategies for my Reaper which I focused on thinking that it would be the most important factor in a winning strategy:

  1. Collision Avoidance

    • The nearest obstacle intersecting the line from vehicle to target will exert a force onto the vehicle to push it away from a collision.
    • My bot got stuck many times, alternating between diverting left and right as the obstacle moved…
  2. Bug Pathing

  3. Gravity Pathing (For Destroyer)

    • Each body within a certain effective range exerts a ‘gravitational force’ on my Destroyer, empty Tankers repel, full ones attract, enemy Reapers attract, own Reaper repels etc…
    • Sum all forces to get resultant vector (current turn’s thrust)
    • Worked somewhat, but needs a lot more tuning of how the state of each vehicle changes its force on the Destroyer.
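A sketch of this force-summing scheme (the effective range and the 1/d² falloff are illustrative choices, not the poster's actual tuning):

```python
import math

def gravity_thrust(me, bodies, effective_range=4000.0):
    """Sum attract/repel contributions into one thrust vector.

    bodies: list of (x, y, weight) where positive weight attracts
    (full tankers, enemy Reapers) and negative repels (empty tankers,
    own Reaper).
    """
    fx = fy = 0.0
    for x, y, weight in bodies:
        dx, dy = x - me[0], y - me[1]
        d = math.hypot(dx, dy)
        if 0 < d < effective_range:
            # unit vector toward the body, scaled by weight / distance
            fx += weight * dx / (d * d)
            fy += weight * dy / (d * d)
    return fx, fy
```

The resulting vector is the current turn's thrust direction.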

In the end, these pathing strategies were not as important to the overall performance of the bot as I had hoped. It became clear in Gold League that the complexity introduced by the many collisions and skills quickly made such an approach untenable. Even a strong navigation algo (one that goes to wrecks fast and manages to brake in time to stop) won’t ensure your bot will win. Many times, the optimal path isn’t followed because enemy collisions, grenades and oils mess up your prediction.

Deciding Wrecks

Moving from distance-based selection to time-based selection (simulating Reaper pathing with no collisions and sorting by the number of turns instead of the distance to the wreck) was sufficient to go from Silver to Gold; thereafter I stalled :confused:
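A rough sketch of counting turns-to-reach instead of distance. The physics constants below are placeholders to be checked against the actual referee; the motion model (accelerate, move, then apply friction) mirrors the CSB-style scheme:

```python
import math

# Placeholder physics constants -- check against the referee's real values.
MASS, FRICTION, MAX_THRUST = 0.5, 0.2, 300

def turns_to_reach(pos, vel, target, radius=2000, limit=50):
    """Roughly simulate full-thrust travel toward target, no collisions,
    and return how many turns it takes to enter the target radius."""
    x, y = pos
    vx, vy = vel
    for turn in range(1, limit + 1):
        dx, dy = target[0] - x, target[1] - y
        d = math.hypot(dx, dy) or 1.0
        # accelerate toward the target, then move, then apply friction
        vx += MAX_THRUST / MASS * dx / d
        vy += MAX_THRUST / MASS * dy / d
        x, y = x + vx, y + vy
        vx *= 1 - FRICTION
        vy *= 1 - FRICTION
        if math.hypot(target[0] - x, target[1] - y) <= radius:
            return turn
    return limit
```

Wrecks are then sorted by `turns_to_reach` instead of raw distance, which accounts for the Reaper's current momentum.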

I tried improving the selection of wrecks by taking into account vehicles present on and around wrecks (hence blocking them), as well as nearby Doofs with the potential to block with oil. Neither of these strategies gave much improvement to the bot’s overall performance; instead I ended up with a lower ranking.

One thing that improved my bot’s performance slightly was the addition of ‘simulated’ wrecks. I scanned through the current wrecks and added a wreck at the intersection of each pair of overlapping wrecks, holding the sum of the overlapping wrecks’ water. My Reaper could then treat this as just another wreck to consider in its selection algo, allowing it to target collecting from multiple wrecks at once.
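The ‘simulated wreck’ idea could be sketched like this, using the midpoint of the two centers as a crude stand-in for the overlap region:

```python
import math

def merged_wrecks(wrecks):
    """For each overlapping pair of wrecks (x, y, radius, water), add a
    virtual wreck near the overlap holding the summed water."""
    virtual = []
    for i in range(len(wrecks)):
        for j in range(i + 1, len(wrecks)):
            x1, y1, r1, w1 = wrecks[i]
            x2, y2, r2, w2 = wrecks[j]
            if math.hypot(x2 - x1, y2 - y1) < r1 + r2:   # circles overlap
                mx, my = (x1 + x2) / 2, (y1 + y2) / 2    # crude midpoint
                virtual.append((mx, my, min(r1, r2), w1 + w2))
    return virtual
```

The virtual wrecks are appended to the real wreck list before running the selection algo.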

Deciding Oil Spills

I only added heuristics for using skills on the last day of the contest. Oil target selection was done by running my Reaper’s wreck-selection algo for each enemy and making my Doof go towards that target (oiling it when within range).

I hadn’t gotten to tars/grenades before time ran out for the contest :frowning:


Navigation turned out to still be slightly buggy (hurhur), and my Destroyer simply spams grenades at wrecks within range that have nearby enemies (very sophisticated heuristics). My Doof also isn’t aggressive enough towards enemy Reapers in denying them water collection on wrecks.

There’s still quite some work cut out for me to make my bot competitive enough to be promoted to Legend.

Game Design

One thing I loved in this game is the 1v1v1 mechanic; it’s the first 3-player multiplayer contest I’ve participated in (GitC does seem suitable to be extended to 3 players). Too much complexity, however, resulted in the demise of heuristic bots in this contest :confused:.

The state of the game cannot be easily predicted without running costly simulations, which limits the number of ‘metrics’ a heuristic bot can take into account. Overall the game isn’t suited for heuristics to prevail in the higher leagues.

Still this is an AI-programming competition and I think this just pushes me to properly implement search-based techniques (more widely applicable) instead of relying on heuristics.

:thumbsup: Thanks again Agade, pb4, Magus, reCurse for the amazing contest! Can’t wait to see what comes of the next one :slight_smile:

#### P.S. Some blatant advertising :stuck_out_tongue:

The annual Battlecode competition will be running in January next year. It’s an RTS-styled bot programming tournament (think programming a StarCraft II AI player). Feel free to join me while we wait for the next contest :smiley: hehe.



Hats off to Agade, pb04, Magus and reCurse, I believe Mean Max set the bar high.

What I liked:

  • 3rd player (1v1v1) adds more options, more uncertainty, more challenge
  • Full information - somehow this felt like a good thing to have, it fit the contest.
  • Always being P0 - this lets you focus on how you play instead of realizing mid-way through the match that yellow is the opponent :slight_smile: Who can say it never happened to them?
  • Creators in the leaderboards
    • It felt good from the beginning for you guys to be around, actually playing your game.
    • Sure, you had more time to think about how to solve your game, but I strongly believe that’s not what you were thinking about before the contest, not a lot anyway.
    • Fighting and beating you felt awesome even in the lower leagues; it must have been a great feeling in Legend, even if you were not trying your hardest.
    • All of us were trying to create bots that would beat everyone. And it’s everyone, not everyone except Magus, pb04 and Agade (I don’t believe I’ve seen reCurse’s bot), so accept it and let’s have fun all together.

What I was missing:

  • Seeing the speed and target in the debug mode would have simplified some debugging…

Anyway, thank you guys a lot for creating a great game. From the beginning I felt comfortable; from the description I had most of the information I ever needed to code and not be frustrated about “something working weird”.

My general approach:
This contest was a breakthrough one for me. I really wished I would finally get to Legend; in the end I reached my personal best, 209th in Gold, and learned a lot.

When I saw the contest name, I thought I should get back to CSB and spend some time with that (good guess!), but in the end I did not.

Seeing that most high-ranking people used some form of local simulation, I spent some time before the contest getting tools done. For me, the tools are:

  • One that’s able to merge multiple C# files into one, based on what classes are used.
    • Right before the contest I added “version control”, copying the latest merged file to a new location, sequentially numbered (my Doof’s message contained the version, so I always knew what I was looking at).
  • One that’s able to run many games of my AIs against one another.
  • One that’s able to fetch replays of my games from CG and test the referee
  • And finally, one that generates C# source from Java source

I know there are already proven tools available that do most of these things, but I wanted to make them myself.

And before anyone asks, I will not be providing any of the tools, however I’m more than open to discuss anything that I learned while making them and assist you with creating your own. Write me a PM here on the forum, if you’d like to ask something.

My AI:

  • I decided well before the contest that I would go with heuristics (my sim solutions didn’t do so well in the past)

  • On the first day I wrote some heuristics that would hopefully get me out of Wood. I went to sleep when my Wood 1 submit was at some 30% of evaluation.

  • On Saturday I woke up to see promotion to Bronze, it was time for step two.

  • I removed some obvious issues and went hacking on the referee. There were two problems: the tool didn’t know how to handle lambdas yet and had some issues with Java streams (that took me a few hours), and my referee tests were failing on small floating-point differences in collisions. But I ran three of my AIs against it and the results were as expected.

  • I tested the referee some more (when it said a change was bad and I submitted, I ended up worse than I was; when it said it was better, I went up in the ranking - check). I wanted to continue with the AI coding, so I left the referee alone, missing a serious bug.

  • Then I proceeded to write what was to become the AI that brought me to Gold:

    • Reaper: find a wreck that is closer to me than to at least one of the enemy reapers and go for it; brake when I would overshoot
    • Destroyer: find the tanker that’s closest to the Reaper and kill it
    • Doof: go for the enemy reaper with the higher score
    • On Sunday I added “course corrections” that would pull me towards more water and push me away from enemies if they were within some 3000 radius, with power decreasing exponentially with distance. It worked well most of the time, but not perfectly. (Today I know the issue was not normalizing the target vector to some fixed length, so the effect of the push and pull differed based on how far I wanted to go.)
  • On Monday and Tuesday I tried to write pathfinding based on A*, including collisions. The results were horrible.

  • On Wednesday I recovered from the changes I had made for the pathfinding, while keeping some improvements.

  • On Thursday I took a different approach to the pathfinding: some line-circle intersections and calculating tangents to avoid obstacles. The AI seemed to behave as expected, but my tests said it was worse than what I had already submitted (on Sunday!).

  • On Friday I learned to generate SVG from my bot. It contained the positions of all units, speed vectors, vectors to targets, future positions based on speed, and some lines I could draw from the AI as I calculated the solution.

  • This improved my debugging a lot.

  • I finally found the bug in the referee! The referee didn’t remove wrecks when they were out of water, so the endgame rendering would be a million wrecks, all with around -100 water remaining :frowning:

  • After fixing this, I climbed another 200 places, to around 100th in Gold.
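Generating an SVG like the one described on Friday takes only a few lines; a minimal sketch (field size and styling are arbitrary):

```python
def render_svg(units, width=12000, height=12000):
    """Emit a minimal SVG showing each unit (x, y, vx, vy, radius) as a
    circle plus a line for its speed vector."""
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'viewBox="{-width//2} {-height//2} {width} {height}">'
    ]
    for x, y, vx, vy, r in units:
        parts.append(f'<circle cx="{x}" cy="{y}" r="{r}" fill="none" stroke="black"/>')
        parts.append(f'<line x1="{x}" y1="{y}" x2="{x + vx}" y2="{y + vy}" stroke="red"/>')
    parts.append('</svg>')
    return "\n".join(parts)
```

Write the string to a file each turn and open it in a browser to inspect the state.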

There was just one big issue:

  • When the pathfinding failed to find a way, my units would just WAIT.
  • I tried to fix this issue the whole Sunday, but whatever I changed just increased the collision rate, making the AI worse.
  • I tried to bring back the “course corrections”, which I had ditched for the pathfinding, but they were not good enough with all the other changes I had made.
  • I fiddled with the skill usage and did several submits that improved me by tens of places, but the others were improving faster. The biggest issue was left unsolved.
  • And as I went to sleep on Sunday, my last submit slowly slipped from the 100s to the 200s…
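The normalization fix mentioned for the “course corrections” (rescaling the combined vector to a fixed length, so push/pull strength no longer depends on target distance) could look like this sketch; the helper and the fixed step length are illustrative:

```python
import math

def corrected_target(pos, raw_target, corrections, step=1000.0):
    """Combine the unit vector to the target with push/pull corrections,
    then normalize the result to a fixed length so the correction
    strength does not depend on how far the target is."""
    dx, dy = raw_target[0] - pos[0], raw_target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    tx, ty = dx / d, dy / d                # unit vector toward the target
    for cx, cy in corrections:             # already-scaled push/pull vectors
        tx, ty = tx + cx, ty + cy
    n = math.hypot(tx, ty) or 1.0
    return pos[0] + step * tx / n, pos[1] + step * ty / n
```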

What I learned

  • Simpler is better.
  • Stick to the plan. I’ve changed my mind at crucial moments before; I did so in this contest too, less than before, but I need to work on this.
  • My tools work well enough, but there’s a lot to be improved.
  • I spent around 5-6h each day, which is still too much (on top of 7h of coding at my job).
  • I was way too tired by the end of the contest; I honestly hated Mean Max on Sunday and wanted it to be over.

Today, I can say I really enjoyed the game, and the time spent was worth what I learned. In this post I focused on what went wrong for me; maybe somebody can take something away from it.

See you all at the next contest.

Once again, big thanks to Agade, pb04, Magus and reCurse, Mean Max was great.
Thank you!

Edit: I’m adding a pic of my renderer output (at the bottom my destroyer is aligning itself with the tanker, and the others are doing something too!).


The one thing I definitely didn’t like about this contest was the lack of direction lines, which are present in CSB but not here.


Hi, I finished 4th :slight_smile:
Here is my PM :


93rd - 2nd Javascript - Heuristics


I implemented an AI with only heuristics, because I use JavaScript and it doesn’t have the performance to compete with the other languages on metaheuristic algorithms. So I did not simulate anything, and my three looters act separately.


At the beginning of each turn my AI evaluates all wrecks against these specific criteria:

  • water in wreck
  • water in other wrecks whose centers are within 1500 of the wreck’s center
  • the ratio between the minimum distance of the other reapers to the wreck and the distance of my reaper to the wreck
  • the number of entities on the wreck
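These criteria could be combined like this (sketched in Python rather than JavaScript; all weights and radii are guesses, not the author's values):

```python
import math

def evaluate_wreck(wreck, my_reaper, enemy_reapers, other_wrecks, entities):
    """Score one wreck by the four criteria above; weights are illustrative."""
    x, y, water = wreck
    # water in other wrecks within 1500 of this wreck's center
    nearby_water = sum(w for (ox, oy, w) in other_wrecks
                       if math.hypot(ox - x, oy - y) < 1500)
    my_dist = math.hypot(my_reaper[0] - x, my_reaper[1] - y) or 1.0
    enemy_dist = min(math.hypot(ex - x, ey - y) for ex, ey in enemy_reapers)
    # entities sitting on the wreck (160 is a made-up wreck radius)
    blockers = sum(1 for (ex, ey) in entities
                   if math.hypot(ex - x, ey - y) < 160)
    return (water + 0.5 * nearby_water) * (enemy_dist / my_dist) - blockers
```

The enemy/my distance ratio rewards wrecks the other reapers are far from, matching the goal of picking a wreck not targeted by the other players.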

The main goal of this evaluation is to choose a wreck not targeted by the other players. If there are no wrecks, my reaper goes to the next tanker about to be destroyed, in order to be the first on that future wreck.


My destroyer first tries to use a grenade on my reaper, if it is in range and there are more than two entities near it. If these conditions aren’t satisfied, it targets a tanker.
Like wrecks, tankers are evaluated with these criteria:

  • ratio between water and distance to my destroyer
  • the ratio between the minimum distance of the other reapers to the tanker and the distance of my reaper to the tanker


Like many players, my AI uses its doof to prevent the other reapers from collecting water. So if the doof is near a wreck and near the best enemy’s reaper, it uses oil on that wreck. Otherwise, it moves closer to the best enemy reaper.

What did not work

Before succeeding with this strategy, I tried many other heuristics that did not work well. For example, I tried putting a penalty on my wreck evaluation if a doof with enough rage to use oil was near the wreck, and using tar on the best enemy reaper, but these attempts led to a lower rank.

Global appreciation

Thanks a lot to the 4 creators and all the CG team for this contest. I achieved my best score in this contest; for me it’s proof that I really enjoyed it. It was very fun and gratifying to have the opportunity to reach a good rank with heuristics.
The only things I didn’t appreciate were the long submit times and the debug mode, which was almost unhelpful.


The game was really fun! But unfortunately, I also experienced timeout issues in Scala. I finished 150th, with almost a third of my matches lost to a timeout on the first turn. I tried responding with a quick (WAIT, WAIT, WAIT) on the first turn to avoid this, but it did not help. I finally started rewriting the whole code in C++, but I did not manage to finish it in time… A little bit frustrating…


I think this has been the best game yet, kudos to all.

I loved that there was more than one way to attack the problem: heuristics, subsumption, search-space and simulation approaches were all there competing, and people who might not have the right skills to do a full sim with GA/annealing could still feel they got somewhere.

The only comments i have are…

  1. Could we have the full spec for the later leagues? Sometimes we build frameworks that become useless when new rules are added, so this would let us future-proof ourselves a bit.

  2. A few more comments in the referee. I often find myself helping people in chat with how things work. I know most people won’t need them, but a few people read the referee while trying to learn :slight_smile:

  3. A few of us in chat wished we could set some parameters for testing; working on algos with so much going on in the later leagues is a nightmare. Even if we can’t set up params, maybe we could test in a league we have already completed?

Overall I really liked this challenge. I’m annoyed with myself that I got sidetracked, but I’m hoping the next one will be even better.


This was my first contest on CodinGame!

I joined the site 3 or 4 weeks ago and didn’t really know what it was all about. I am an amateur programmer at best, and the things I learned in this contest are astounding.

I had never heard of a genetic algorithm before. I had to google “heuristics.” I really had no clue about anything.

Well, now I know a whole lot more. I did not use a GA, of course (I wouldn’t know where to start), but I did manage to make it to Gold league with the help of some people in the chat, and mostly just by observing the chat and stealing the ideas others offered up :slight_smile:

The most amazing thing to me was how simple it was to get to Silver league after learning about the destination = destination - velocity * 1.5 formula.
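The formula itself is a tiny helper; aiming behind the destination by 1.5 times the current velocity compensates for momentum, a sketch:

```python
def compensated_target(dest_x, dest_y, vx, vy, k=1.5):
    """Aim at the destination minus k times current velocity, so
    momentum carries the vehicle onto the real destination."""
    return dest_x - vx * k, dest_y - vy * k
```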

I think the lower-league example of analyzing the angle to the target and changing thrust made me think that was how to go about things, and my code was a mess of conditions; Bronze was the best I could do. After scrapping all of that and just using that formula, I shot up the board into Silver immediately.

Getting into Gold was another struggle, but again I was saved by a helpful tip (thanks MadKnight). This time it was a very simple wreck-scoring algorithm: (water content) * FACTOR - (distance to reaper). I forget what the factor was, but it was high, like 12000 or something. This got me into Gold with practically no SKILL use at all.
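That scoring rule is a one-liner (12000 here is only the poster's recollection of the factor):

```python
def wreck_score(water, distance_to_reaper, factor=12000):
    """Simple wreck scoring: (water content) * FACTOR - distance.
    A large factor makes water dominate distance."""
    return water * factor - distance_to_reaper
```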

I finished around 245th in Gold I think, which is beyond what I expected at the beginning of the contest, so I was happy. I am now much better equipped to handle future contests and really just enjoyed the experience immensely.

Thanks to the creators and congratulations to the winners!


Thank you for sharing your overall strategy :slight_smile:
Could you please give some details regarding the createMoveActionStopOnTarget method?
I’m not sure I get what you are doing. Where do these variables come from: “next”, “newVx”, “newVy”, “vx”, “vy”?
Thank you

Maybe a stupid question, but how do you retrieve the source code you used during the challenge, now that it’s closed? I’m left with just the base skeleton in the IDE, and I’m not sure I saved the last version of my code myself :frowning:

I have edited my post, as there was an error.

See the following picture to understand the computation.


Each contest has a report page which includes your final source code.


Thanks a lot !!

Thank you, this is clear now :wink:

First, a big thanks to the creators for this amazing contest!
One thing that seems to have helped me is creating a “wind rose” for my reaper. For every ~10 degrees of angle, and always at the same distance (let’s say 1000), investigate the next point where my reaper would land. For this I used the equations for the next speed given in the referee (in the thrust function). I would then select the best orientation according to the closest distance to the point I want to reach. This is a kind of one-turn simulation, only for the reaper and without taking collisions into account, but it seems to have helped quite a lot.
Thanks again, looking forward to the next CC
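The “wind rose” above can be sketched as follows; the thrust and mass constants are placeholders, and the real motion equations come from the referee's thrust function:

```python
import math

# Placeholder physics constants -- the real values come from the referee.
MASS, MAX_THRUST = 0.5, 300

def best_wind_rose_heading(pos, vel, goal, step_deg=10):
    """Try a heading every ~10 degrees at full thrust, apply one turn of
    motion, and keep the heading whose landing point is closest to the
    goal. Friction after the move does not change this turn's landing
    point, so it is omitted here."""
    best, best_d = None, float("inf")
    for deg in range(0, 360, step_deg):
        a = math.radians(deg)
        vx = vel[0] + MAX_THRUST / MASS * math.cos(a)
        vy = vel[1] + MAX_THRUST / MASS * math.sin(a)
        nx, ny = pos[0] + vx, pos[1] + vy        # one turn of movement
        d = math.hypot(goal[0] - nx, goal[1] - ny)
        if d < best_d:
            best, best_d = deg, d
    return best
```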


PHP - 415th / 2512 - Heuristic

This was my first Contest, and my first post on this forum :slight_smile:

My AI first searches for a strategy for the reaper, then for the doof, and finally for the destroyer.

Reaper :

If there is a wreck on the map:
I put a score on each wreck: a bonus the closer it is to my reaper, a bonus if it contains more water, and a malus if it is covered by oil.
My reaper goes to the highest-scored wreck.
This is my “wreck strategy”.

If there is a tanker on the map:
My reaper goes to the tanker closest to one of the three destroyers.
This is my “tanker strategy”.

If there is neither a wreck nor a tanker, the reaper returns to the center of the map.

Doof :

Second, the doof. I search for where the doof should put its oil. I test these cases in this order; if one is true, I put the oil:

  • on the wreck that is under the reaper of the best enemy (highest score)
  • on the wreck that is under the reaper of the other enemy
  • on the wreck my reaper is going to, if one of the other reapers is closer to that same wreck
  • on a wreck whose distance to the enemy reapers is more than the distance of my reaper to the wreck + 3000
    For each of these tests, I verify that the wreck is not already covered by oil.
    I also verify the distance; if a test failed only because of the distance, I store the wreck in a variable, in order to target it with one of the destroyer’s grenades.

If there is no wreck on the map, or if I don’t have enough rage, my doof follows the best enemy reaper.
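The oil priority cascade above could be sketched like this (in Python rather than PHP; the wreck radius and helper names are made up, and only the first two priorities are shown since the remaining ones follow the same pattern):

```python
import math

WRECK_RADIUS = 160.0   # illustrative; real wrecks vary in radius

def on_wreck(wreck, unit, radius=WRECK_RADIUS):
    """True if the unit's position lies on the wreck (both are (x, y))."""
    return math.hypot(wreck[0] - unit[0], wreck[1] - unit[1]) <= radius

def choose_oil_target(wrecks, best_enemy_reaper, other_enemy_reaper, oiled):
    """Walk the priority list; return the wreck to oil, or None if the
    doof should just follow the best enemy reaper instead."""
    candidates = [w for w in wrecks if w not in oiled]   # skip oiled wrecks
    for w in candidates:
        if on_wreck(w, best_enemy_reaper):      # under the best enemy's reaper
            return w
    for w in candidates:
        if on_wreck(w, other_enemy_reaper):     # under the other enemy's reaper
            return w
    return None
```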


My destroyer throws a grenade at the wreck that the doof wants to target but can’t because of the distance (distance doof <-> wreck > 2000).

Otherwise, if my reaper’s strategy is to follow a tanker, my destroyer follows the same tanker.

Otherwise, my destroyer follows the best enemy reaper.

These are my strategies for reaching Silver League.

With some improvements this AI can reach Gold League.