Fall Challenge 2022 - Feedbacks & Strategies

The topic where you can tell how you liked the challenge and how you solved it.

As the challenge game will be released soon as a classic game on the platform, please don’t share your code publicly.

Global leaderboard: https://www.codingame.com/contests/fall-challenge-2022/leaderboard/global


I finished in 8th place.

My bot splits the game into 2 phases: building and spawning+moving.

They run almost independently from each other. But first let’s talk about fundamentals.


We run a BFS (breadth first search) for our own and opponent territory to see who’s closer to each cell and will be able to claim it first. This gives us some frontier of cells that should be ours and are bordering the opponent and vice versa. From there we run another BFS from our frontier to know how far away each cell is from it (note: this is actually incorrect, as cells can get recycled while units are on the way. Therefore the distance from A to B can differ from B to A. But I failed to do it correctly in 50ms/turn).
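The two BFS passes can be sketched like this (toy grid and tie handling are my own; it shares the limitation the author mentions, since recycled cells are ignored):

```python
from collections import deque

def bfs_dist(grid, sources):
    """Multi-source BFS over walkable cells; returns dict cell -> distance."""
    dist = {c: 0 for c in sources}
    q = deque(sources)
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in grid and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

# Toy 1x5 strip: my cell at (0,0), opponent's at (4,0).
grid = {(x, 0) for x in range(5)}
mine = bfs_dist(grid, [(0, 0)])
opp  = bfs_dist(grid, [(4, 0)])
# A cell "should be ours" if we reach it strictly first (ties go to nobody here).
ours = {c for c in grid if mine[c] < opp[c]}
# Frontier: our cells adjacent to a cell that is not ours.
frontier = {(x, y) for (x, y) in ours
            if any(n in grid and n not in ours
                   for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))}
```

The second pass (distance of every cell from the frontier) is then just another `bfs_dist` seeded with `frontier`.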


We can further distinguish between recyclers that increase our income and those used for defending.


Defensive recyclers are placed when enemy units are nearby, or when placing multiple of them can cut off the entire enemy territory and win the game. I compare the number of cells I expect to own with and without the recycler (recomputing the frontier for each possible placement) and give a bonus for building if I can’t defend against the number of enemy units or if I’m behind in income.


I went with a simple formula of matter gained / cells lost. For the lost cells I take the predicted owner into account, so destroying a cell is not necessarily a bad thing. When placing a harvester, I also make sure that the distance from my robots to the frontier will not increase and force them to take a long path around it.
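A minimal sketch of such a ratio, with hypothetical predicted-owner weights (the author's exact weighting is not given):

```python
def recycler_value(matter_gained, cells_lost_by_owner):
    """Score a recycler spot as matter gained per *valuable* cell destroyed.
    cells_lost_by_owner maps predicted owner ('me'/'neutral'/'opp') -> count.
    Weights are hypothetical: destroying cells the opponent would have
    claimed anyway costs nothing."""
    weights = {'me': 1.0, 'neutral': 0.5, 'opp': 0.0}
    cost = sum(weights[o] * n for o, n in cells_lost_by_owner.items())
    return matter_gained / max(cost, 1e-9)

# Destroying mostly enemy-predicted cells is much cheaper than eating our own:
good = recycler_value(40, {'me': 1, 'opp': 3})
bad  = recycler_value(40, {'me': 4})
```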

When there is enemy contact, I will only build a recycler if it still leaves me with an advantage in predicted owned cells, or if it’s a really good ratio of matter gained to cells destroyed.


For movement I have a scoring function that tells how much I like a certain board situation. This does not look ahead further than the current turn and does not try to predict the opponent.

Scoring function

I group all robots by their target cell and count how many robots I want to send there. I also have an expected number of robots needed to defend or capture a cell: the sum of all opponent units surrounding it, plus one if I don’t own the cell already.

With that in mind I can check how many robots I’m missing on a cell.

I give points for having robots there, but only as long as I don’t exceed my targeted amount. The inner frontier has a higher factor than the central and outer frontier (=attacking the opponent).

For robots that are not at the frontier I score the distance to the frontier (to make them go closer) and also the value of a possible harvester if the cell is neutral.

After looping over the robots, I also loop over the frontier and give a penalty for the distance of the closest robot to each cell of it. That penalty is also what motivates my bot to spawn units before the fighting starts.

Moving existing units

This is done with hill climbing: loop through all robots and give them a new target, one robot at a time. Compute the score and keep the change if it’s better.

I extended this to mutate 2 robots at the same time if they are close enough together to affect each other, and also to move multiple robots along a chain (the first robot moves somewhere, then a robot from a neighboring cell fills that gap, a 3rd tailgates robot 2, and so on).
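The single-robot mutation loop might look like this (toy eval and parameters are mine, not the author's):

```python
import random

def hill_climb(robots, targets, score, iters=1000, seed=1):
    """Assign each robot a target; repeatedly re-target one robot and keep
    the change only if the board score improves (assumption: higher = better)."""
    rng = random.Random(seed)
    assign = {r: rng.choice(targets) for r in robots}
    best = score(assign)
    for _ in range(iters):
        r = rng.choice(robots)          # mutate one robot at a time
        old = assign[r]
        assign[r] = rng.choice(targets)
        s = score(assign)
        if s > best:
            best = s                    # keep the improvement
        else:
            assign[r] = old             # revert
    return assign, best

# Toy eval: robots should spread out over distinct targets.
robots, targets = ['a', 'b', 'c'], [0, 1, 2]
assign, best = hill_climb(robots, targets, lambda a: len(set(a.values())))
```

The 2-robot and chain mutations the author describes would just be additional neighbour moves in the same accept-if-better loop.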

Spawning units

For spawn targets there are obviously the cells I own. I also allow a spawn next to a non-moving robot, which will make that robot move and the spawn actually happen where the robot was previously waiting.

I have different motivations for spawning: first there is the possibility that a spawn will increase my score. Those have the highest priority obviously. When I still have matter left (even after building), I will just spend all the matter and attack the opponent.

Fallback movements

That’s the most annoying part, as it wasn’t documented. When you move all the units off a cell, you can give a fallback command: sometimes the opponent builds a recycler where you want to go. The game will either try to find a path for you and move you anyway, or it will let you stay, depending on your current and target location. If it lets you stay, you get another chance to move the units somewhere else. So you want to print 2 commands, just in case the first one fails. Alternatively, you could set a target 2 cells away instead of just 1 and let the game figure out a path for you.
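For example, emitting both commands on one line with the contest's `MOVE amount fromX fromY toX toY` syntax (actions are joined with `;` on CodinGame):

```python
def fallback_moves(amount, src, primary, secondary):
    """Emit a primary MOVE plus a fallback target, in case the first target
    becomes a recycler and the engine lets the units stay put."""
    sx, sy = src
    cmds = [f"MOVE {amount} {sx} {sy} {x} {y}" for x, y in (primary, secondary)]
    return ";".join(cmds)

print(fallback_moves(2, (3, 4), (4, 4), (3, 5)))
# MOVE 2 3 4 4 4;MOVE 2 3 4 3 5
```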

Combining spawn and build

My two phases are not really coupled. Therefore my spawn function takes an argument controlling whether or not to spend all the matter, in case there isn’t really anything important to do.

I first try the vital things only, then build my harvesters if applicable, then spend the remaining matter on nagging the opponent in this “playful competition”, as the statement calls it.


Legend #14

My bronze bot started by identifying the cells where I want my robots; there are 3 types of these “targets”:

  • attack target: a cell owned by the opponent next to one of my cells
  • defense target: my cell next to one of the opponent’s cells
  • exploration target: an unowned cell that is either closer to me than to the opponent, or equally distant from both players, but has a neighbour that’s closer to the opponent

I continue by running the Hungarian algorithm to assign my robots to these targets, with priority exploration > defense > attack; only one bot can be assigned to each of the attack/exploration targets; defense targets can be assigned to multiple bots (as many as there are enemy robots on the neighbouring cells). Attack and defense targets can be assigned only to the bots in the neighbouring cells, exploration targets can be assigned to any bot, but closer bots are preferred.

After that, I run the Hungarian algorithm once more, this time to assign to every robot the cell it will move to - there are multiple “tasks” corresponding to every cell, so I allow multiple robots to move to the same cell, but the reward for each subsequent robot going to the same cell is a bit lower; also, only the 1st robot receives a bonus for moving to an unowned cell.
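As a stand-in for Kuhn-Munkres, a brute-force minimum-cost assignment over a tiny made-up cost matrix shows what the algorithm computes (the real O(n³) algorithm finds the same optimum and is needed at game scale):

```python
from itertools import permutations

def optimal_assignment(cost):
    """Minimum-cost robot -> target assignment; brute-force stand-in for
    the Hungarian (Kuhn-Munkres) algorithm, fine only for tiny n."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)

# Rows: robots, columns: targets; entries: e.g. distances (made-up numbers).
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assign = optimal_assignment(cost)   # robot i goes to target assign[i]
```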

Spawning is handled as moving from a special cell that has all my owned cells as neighbours. I placed 0 or 1 recyclers per turn - the cell where I placed the recycler was the one with the highest [profit / my cells destroyed] ratio out of my cells that no robot wanted to move to. Whether or not I would place the recycler depended on owning a cell that would give enough scrap, the results of Voronoi, and the number of targets of each type (attack/defense/exploration).

This simple bot (occasionally with some extra heuristics that didn’t seem to have any impact on its winrate) was good enough to get to top 2-5 at the time, then I started to look for improvements, but I mostly ran into things that didn’t work:

  • beam search the early game: I usually managed to get to depth 6-8; it was better than what I had at that point, but I discarded it when I improved my recycler placement
  • any sort of search that takes the opponent’s moves into consideration - nothing seemed to work, I think because of the amount of rock/paper/scissors in this game
  • tuning the edge weights for the Hungarian algorithm - the algorithm (at least the way I used it) is just not good enough for the game and I decided to completely replace it

One thing that actually worked was using simulated annealing for recycler placement - I estimated the number of cells I can safely lose to recyclers, and I used simulated annealing to pick the best (still using the same formula as described above) recycler positions among the cells I owned, or those that were on a shortest path to one of the exploration targets assigned to my bots (then I rewarded moving towards those in the 2nd run of the Hungarian algorithm that I described above). I used 500ms in the first turn to calculate my plan for recyclers, then I spent a bit of time to update my solution every turn. The algorithm worked well in theory, but not in practice, because all the exploration targets had the same weight, so my Hungarian algorithm kept picking different ones every turn and the recycler plan kept changing all the time.
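A generic sketch of simulated annealing over recycler picks (the scoring function, cooling schedule, and parameters are placeholders, not the author's):

```python
import math, random

def anneal(candidates, k, score, iters=2000, t0=2.0, seed=7):
    """Pick k recycler cells from `candidates` by simulated annealing:
    swap one chosen cell per step; accept worse picks with prob exp(dE/T)."""
    rng = random.Random(seed)
    cur = rng.sample(candidates, k)
    cur_s = score(cur)
    best, best_s = list(cur), cur_s
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9            # linear cooling
        cand = list(cur)
        out = rng.randrange(k)
        cand[out] = rng.choice([c for c in candidates if c not in cand])
        s = score(cand)
        if s > cur_s or rng.random() < math.exp((s - cur_s) / t):
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = list(cand), s
    return best, best_s

# Toy score: higher-scrap cells (hypothetical values) are better picks.
scrap = {0: 1, 1: 5, 2: 2, 3: 8, 4: 3}
best, best_s = anneal(list(scrap), 2, lambda cells: sum(scrap[c] for c in cells))
```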

The Hungarian algorithm had to go - it was simply not viable to evaluate the “importance” of targets in a vacuum and get better than mediocre results, and picking the target and the next move separately is bad, and moving towards one exploration target is the same as moving towards another (which this algorithm completely ignores), and sometimes I want to move more than 1 robot towards an attack/exploration target (actually spawn stuff to get control of cells that are equally distant to both players, instead of giving them to the opponent after we trade 1 bot each there - this is probably the biggest problem of my bot), and sometimes defending is more important than exploring, and the distance malus for exploration targets is a pain (e.g. d=1 and d=3 vs d=2 and d=2), and …

My intention was to replace it with another simulated annealing that would assign both one of the targets and a neighbouring cell (in the direction of the target) to each of my bots. A neighbour (next iteration of SA) of one such assignment would be found by changing the target of one of the robots, and if there’s already another robot targeting the same cell, with some probability changing that one’s target as well… and so on - inspired by looking for an augmenting flow/path. This seemed to virtually always find the global maximum of the simple evals I wrote for testing, and it was already much better in the early game because the recycler SA actually worked; then it was time to write a good eval taking into consideration everything that I had in my bot… and then I found that I had no time for CG in the last week, so the Hungarian algorithm stayed.

Considering that I also never implemented any sort of reasonable decision making for building a recycler vs spawning, or for breaking deadlocks in late-game situations without recyclers, I’m surprised that I ranked this high - maybe placing recyclers somewhat efficiently and planning them for future turns is something only very few people did, and it actually does something, unlike most of the things I tried.


30th in Legend! Viva la neuroevolution! :partying_face:

As some may already have suspected or know, I used NN :open_mouth: for the contest. It was trained using evolution strategies. It was coevolution, where in each generation, all the genomes fought against each other and the winners were favored for reproduction. Most if not all the logic and strategy, that is recycler builds, spawns and movements, was done by NN. I think I got lazy, nowadays when I see a game I’m thinking about how I can NN it so I don’t have to think about anything. Though for this game I tried to understand some of the gameplay logic myself.

Wood → Bronze
Nothing interesting, just a crappy bot with random spawns that wandered randomly, preferring neutral and enemy cells.

Bronze and later
I’ve been playing with neuroevolution for quite some time, that is, randomly perturbing neural network weights in the hope of finding one better suited to the task. I had some modest success with it in other games, like a sub-11k time in Search Race on my alt account, or being 3rd in Clash of Bots (at the time of writing). The problem with this particular game was that it is multi-agent and there are many possible action sequences, not to mention the variable map size. So for starters I broke the problem into parts:

  • ask NN for each cell (my own cells, in random order) what to do (build, spawn, movement)
  • the inputs were the 5x5 square of vision around the cell, plus my matter, plus the opponent’s matter
  • I had 3 networks: first with one output for building recycler if it outputs > 0; second with one output for spawning if it outputs > 0; and third with five outputs for movement (wait, N, S, E, W), highest score wins
  • for now, the above networks only made a decision for spawning 1 robot or moving 1 robot, not multiple
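The per-cell setup might be sketched like this with numpy (sizes and input channels are illustrative; in neuroevolution the flattened weights are the genome that ES perturbs):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(n_in, n_hidden, n_out):
    """Random-weight 2-layer network; the weights are what evolution mutates.
    Architecture and sizes are illustrative, not the author's exact setup."""
    return (rng.normal(size=(n_in, n_hidden)), rng.normal(size=(n_hidden, n_out)))

def forward(net, x):
    w1, w2 = net
    return np.tanh(x @ w1) @ w2

# 5x5 vision with a few hypothetical channels, + 2 scalars (my/opp matter).
n_in = 5 * 5 * 4 + 2
build_net = mlp(n_in, 16, 1)   # output > 0 means "build recycler here"
spawn_net = mlp(n_in, 16, 1)   # output > 0 means "spawn here"
move_net  = mlp(n_in, 16, 5)   # wait/N/S/E/W, highest score wins

x = rng.normal(size=n_in)                       # stand-in cell features
direction = int(np.argmax(forward(move_net, x)))  # 0..4
do_build = forward(build_net, x)[0] > 0
```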

I used modified referee code (i.e. removed the parsing and applied actions directly) to have an accurate environment. I used pgpelib for the evolution strategies. My popsize was 200, and each genome played 2 games against every other one, so around 40k games per generation. Quite big; unfortunately ES is slow in this regard, as in, it is not sample-efficient. Nevertheless, after just a handful of generations, the winrate against my crappy wood->bronze bot spiked. Then when I uploaded the somewhat evolved bot, its rank jumped from ~800th/1500 to ~200th/1500, so it wasn’t a delusion, the ES really worked. I let it run for a few thousand generations and uploaded. Lo and behold, it was 20th; the rank was jumping up and down between 15th and 60th, but the ES proved that a NN is feasible in this game. It had no global picture of what was going on - each cell had vision of only 5x5 - yet it was beating at least semi-serious bots at the time. I think this early attempt was already enough to get into the gold league.

In the meantime people were DMing me for advice and hints on recycler placement and spreading. I wish I knew how to play this game :dizzy_face: . All the decisions were learned. Only later did I use more game-specific logic to, for example, restrict obviously stupid moves.

In the meantime, I was playing around with the right inputs, architecture etc. From Python code I was calling the Java referee, but this was somewhat slow, so I rewrote the game in C++. The biggest problem was the A* pathfinding implemented in the referee; I wanted the environment to be exactly the same as the one in the referee. It turned out to be the PriorityQueue: the Java one handled ties differently than the C++ one, so I had to implement, at least partly, a priority queue behaving the same as the Java one. But it worked and the speedup was considerable.

There are many ways to tackle this game from an NN perspective. I tested many minor and major ideas, some quickly, some thoroughly: different inputs, different architectures (more hidden units, different outputs, etc.). For example, I tested 7x7 vision, but it was much slower to show progress; maybe it would have hit a higher ceiling, but I didn’t have time for that. I considered putting the global map into the inputs, but that was out of the question, at least for now. I was talking with a fellow codingamer about ideas (he also used ES), and I used some of his ideas and he used some of mine.

The summary of the final net:

  • recycler and spawn networks merged into one with 2 outputs
  • the spawn output acted more like a ‘regression’: I used its magnitude to decide how many units to spawn (bigger output, more spawns)
  • I queried each cell with units up to 4 times for movement, each time updating the inputs with the moves already made, so my bot can move up to 4 robots at a time from 1 cell
  • the 5x5 inputs contained more information, e.g. each cell’s nearest BFS distance to an enemy non-recycler cell and its Euclidean distance to the center, scaled accordingly
  • added global inputs like map width or earlyFinishCounter
  • instead of querying cells in random order, I sort them by the max of the recycler/spawn net outputs

The bot finally had at least a partial idea of what was going on in a global sense. Also, I implemented BFS to find and fill my own islands. Not only did it speed up the learning generation-wise, but also time-wise, as there were fewer NN inferences: I didn’t ask the NN what to do for cells in my own islands. The evolution was progressing and that was reflected in the leaderboard. The bot was in the top 30 for most of gold, down to the top 60 when more serious bots appeared.

Gold → Legend
I kept my last experiment running for a few days and nights straight. After legend opened, the bot was at the top of gold, but that was not enough to get to legend. I added some if-elses; for example, I restricted spawning and moves onto soon-to-be-grass cells, because despite the training the NN would occasionally do that. It significantly improved the bot and I finally got into legend. Seeing how the evolution was still progressing, I let it run until the end of the contest. Patience is a virtue: I had been stopping the earlier experiments before reaching ~10k generations; maybe they could have been better if I had let them run for longer. The last run was about 25k generations, took around 5 days and nights, and was still improving when I finished the experiment.

As for the game, it seemed simple enough. I was surprised there was no fog of war; not that I missed it, but I expected a lot of RPS and ties among the top bots. Except for the alternative moves when someone places a recycler, the statement was clear. There was also plenty of time for experimentation. If I were in a more competitive mood, I would have thought up more game logic myself instead of relying only on the NN. Restricting obviously stupid moves yielded much better results; if I had used some Hungarian algorithm etc. for many other edge cases, I’m sure the bot would be much stronger. A NN is good for approximating stuff, but not for exact outcomes. I’m hyped to try this approach in other multi-agent games.


50th in Gold

My bot consists of two separate parts: fighting & navigation. The fighting algorithm does a search using a UCT Forest, or what Domiko told me is more commonly known here as Smitsimax (I thought it was called DUCT, but it’s all very similar anyway). The navigation is based on the Hungarian algorithm (Kuhn-Munkres).

For the fighting part I used multiple trees that sample based on UCT. At the moment they have a depth of 1, with an eval function on the resulting game state. One interesting thing to note is that I don’t use one tree for each robot, but instead for each front line cell that has at least one robot on it. The actions are then the possible ways to move some of its robots to the neighbouring front line cells (or stay). I also have a tree for each possible build or spawn depending on the amount of matter, where the actions are the front line cell in which to spawn or build. I have these trees for both my own cells and the opponent’s cells. In this setup I manage to do about 15k to 25k iterations per turn. The eval is based on the size of the territories of both players (flood filled), and the level of threat one player’s robots pose to the other player’s cells (many robots being opposed by mostly empty cells gets a penalty, etc).
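The one-tree-per-cell idea can be sketched as independent UCB1 bandits sampled jointly against a shared eval (a depth-1 toy with a made-up reward, not the author's eval):

```python
import math, random

class Tree:
    """One independent depth-1 UCT tree: picks its own action via UCB1."""
    def __init__(self, n_actions):
        self.n = [0] * n_actions      # visit counts
        self.s = [0.0] * n_actions    # summed rewards
        self.total = 0

    def select(self):
        for a, cnt in enumerate(self.n):   # try each action once first
            if cnt == 0:
                return a
        c = 1.4
        return max(range(len(self.n)),
                   key=lambda a: self.s[a] / self.n[a]
                   + c * math.sqrt(math.log(self.total) / self.n[a]))

    def update(self, a, reward):
        self.n[a] += 1
        self.s[a] += reward
        self.total += 1

# Two front-line cells, each with its own tree; all trees sample an action,
# the resulting joint state is evaluated, and every tree is updated with it.
rng = random.Random(0)
trees = [Tree(3), Tree(3)]
for _ in range(500):
    acts = [t.select() for t in trees]
    # Stand-in eval: action 1 is best for both cells, plus a little noise.
    reward = sum(1.0 if a == 1 else 0.0 for a in acts) + rng.random() * 0.1
    for t, a in zip(trees, acts):
        t.update(a, reward)
best = [max(range(3), key=lambda a: t.n[a]) for t in trees]
```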

My navigation goals are simple: go to the potential front line while spreading out and grabbing a few decent spots for harvesting. I pick harvesting spots that offer a good matter profit per lost cell (both due to becoming grass or becoming an island). I’m not so happy with the output of Hungarian actually, but tried to change that today and it got worse.

All low level calculations (flood fill, path planning, distance calculation, island detection, etc) are done using bit boards.

Quite happy with the result; first time trying the UCT Forest method and it seems to come up with decent moves. I don’t think that was the weakest part of my bot, at least. :wink: I didn’t have enough time to add every feature I wanted, and there are still a few known bugs, so it could have been worse. Fun fact: with all the different algorithms in one file, I had to reduce indentation from 4 to 3 spaces to stay within 100k characters. :smile:


Congratulations Delineate for your victory. You led the race from the beginning to the very end. It’s well deserved.

I loved this new 1-month format, even if Christmas + New Year is arguably the worst possible time slot. I hope it will become the new standard.


Great Challenge
Thanks CodinGame Team
I finished 53rd in Legend.

Main steps of my algorithm

Defend 1 (with spawn/build/stuck units)
List all cells in immediate danger (more attackers than defenders) and sort them with an eval.
If I have enough matter, I spawn the right number of units on the cells to defend;
otherwise I use builds when it’s worthwhile.
I keep already existing units stuck in place => they can be replaced by a spawn and re-used in the next stage.

Prepare Fight (with recyclers)
Before the fighting phase starts, I build recyclers based on the number of enemy units and my total matter production capacity (over all turns).
My objective is to prepare for the fight and have the same level of units and matter reserve as the opponent when fighting starts.
I also avoid building recyclers that block access to the free cells to be collected at the end of the game.

Deployment (spawn and move units) (reach the border with the opponent as fast as possible)
I construct a projection of the map with the two opponents at the end of the game, assuming both have infinite spawns per turn.
The deployment targets correspond to the border between the two projections on cells that are not already in fighting mode (i.e. free cells).
Then I look for the most interesting unit/target couple to reach the objective by spawning or moving.

Defend 2 (move)
If some units are still in danger, I move others to help.

Attack (spawn and move units)
Search for the most interesting move or spawn unit/target couple to attack the nearest enemy cell.

End of the game
When there is no more border between the players, I go collect the remaining cells.

For the move action I use pathfinding based on a map with cumulative coefficients, which avoids reusing the same path too many times when moving the same distance, and so covers more free cells.


Nice game, thx to organizers and community. Marathon format is quite good once in a while.
I tried a couple of different approaches early in the contest, but completely misjudged what should matter, ending up in dead ends. I finally moved to heuristics with MC-supported battles, although the desire to have some kind of simulation stopped me from improving the early stages of the game, which were quite important. By the time I realized that, it was already too late and I basically had no time to rewrite everything for the fourth time. Summing up, I’m once again disappointed in myself, but not in the game, and hope to see it in the multiplayer section =)


22nd in legend. It was better than I expected when I first saw the game. I am generally not good at multi agent games, so I’m quite happy with the result. My bot is enormous (3k+ lines, not including all the failed stuff). I’ll summarize the features and discuss failed attempts at alternatives:

General map features

  • Bitboarded voronoi: I could do 1 million voronoi maps/second. In the end I did not need it all that much, but it was a long contest and this sort of thing comes easy to me anyway.

  • Breadth first search. It’s nice to know the distances needed to travel.

  • Flood fill. Bitboarded, similar to the voronoi calculations.

In all instances I used a future bitboard for each of the coming 10 turns, to determine which cells can be travelled. This is important while doing the above map-related functions.
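A flood fill along these lines, using a Python int as the bitboard (edge masks keep shifts from wrapping around rows; the real bot presumably uses fixed-width integers in a compiled language):

```python
W, H = 6, 4          # toy board; real maps are larger
FULL = (1 << (W * H)) - 1
LEFT_EDGE  = sum(1 << (y * W) for y in range(H))
RIGHT_EDGE = sum(1 << (y * W + W - 1) for y in range(H))

def neighbors(bb):
    """All cells adjacent to set bits; edge masks stop horizontal wrap."""
    return (((bb & ~RIGHT_EDGE) << 1) | ((bb & ~LEFT_EDGE) >> 1)
            | (bb << W) | (bb >> W)) & FULL

def flood(seed, open_cells):
    """Grow seed through open cells until a fixpoint: one connected island."""
    region = seed & open_cells
    while True:
        grown = (region | neighbors(region)) & open_cells
        if grown == region:
            return region
        region = grown

# A grass row at y=1 splits the board into two islands.
grass = sum(1 << (1 * W + x) for x in range(W))
open_cells = FULL & ~grass
island = flood(1 << 0, open_cells)      # the island containing (0, 0)
```

Voronoi maps work the same way: grow both players' regions in lockstep and stop claimed cells from being re-claimed.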

Early expansion

I use the Kuhn–Munkres algorithm to assign robots to voronoi-border cells (targeting equidistant and voronoi-border opponent cells). I only allow moves that bring a robot closer to its objective. Spawns are also considered robots, except they have a variable location and need 1 more turn to travel, which features into the cost. This method gives an excellent spread and it’s hard to improve. I did find one improvement, and that’s to run munkres again with bordering neutral cells as objectives. This ensured that while the destination remained the same, robots would pick routes that captured as many cells as possible on the way.

Early recycler placement is heuristic, with a formula similar to what others used. I did make certain to avoid splitting off uncapturable islands and near the end of the contest, I added a check to avoid blocking my own robots. That did not help a lot, because my robots weren’t blocked that often and it limited my recycler choices.

After first contact

This is the most complicated part of my bot. I first place a single heuristic recycler if I can. At this point I can no longer place them economically, only to block opponents (and gain extra scrap while doing it).

The second part is another munkres. I make defensive objectives, offensive objectives and so called “exploration objectives”.

Defense: My highest priority. However, there is a catch here. For every cell that I need to defend, I run a separate munkres attempt. I assume my opponent defends all his cells, using recyclers and spawns and robots. Whatever is left after that is used to attack my defending square. Often this meant I needed fewer defensive objectives than other players would (1 objective for every threatening robot). You might think this is risky, but think of it this way: any player that tries to make use of the fact that I use less defense does the same thing I do, since I assume they defend their territory fully. To attack me with more, my opponent compromises his/her own defense.

I not only defend my own squares but also neutral squares that my opponent is threatening.

Offense: For every connecting opponent square I make an offensive objective for every opponent robot on it. I add 1 more, at a lower priority.

Exploration: The objectives are again voronoi border cells. Because the objectives are voronoi cells, this does not lead to my robots targeting my own uncaptured neutrals, just the contested ones.

Spawns usually end up being used on defense, because they can only start on my cells.

Win check: I use the bitboarded floodfill to try all recycler spots, most combinations of 2 recycler spots, some combinations of 3 spots (if possible) and a few combinations of 4 spots. This used only around 1 ms. If I can win by placing these, I immediately do so and only capture the remaining neutral cells on my side. I don’t bother doing anything else, so my offensive robots just fall still and do nothing. Before the end of the contest I also added a check to avoid placing losing recyclers.

Stalemate breaker: Often a game ends in a stalemate. If that happens and I am losing, I take 10 matter and spawn a robot on the biggest contested island. It starts capturing neutral tiles. If that is not enough, I repeat this for non-contested islands as well.

What didn’t work: I spent 3 full days (over 20 hours) on an early-game SA (simulated annealing) that included both movement and recyclers. The movement was quite sophisticated. I had movement maps (basically all tiles had a direction arrow) that I could mutate. This way routes retained memory even after mutations changed them. It could pretty easily create a good spread with lots of captured cells while still getting the robots to the front in minimal time. Adding recyclers sort of worked: it placed recyclers “ok”. But when I tested it, the end result kept being 50% vs my old bot, even after careful tuning. I think the munkres + heuristic recycler placement was just hard to improve upon. Beyond that, I think it was hard for the SA to properly explore the state space for recyclers on big maps. And even though it was as good as my other bot, it took a lot of calculation time. I kinda liked being able to do 10k games in 10 minutes locally; with the SA, that option was gone.

If anyone wants more detail on stuff I explained above, I’ll be happy to expand. Thanks to CG for a fun contest.


Gold #186

My agent was based almost entirely on dynamic programming (value iteration) to compute an ‘optimal’ policy for each action type. For each action (build, move, spawn, in that order), I set a value for each tile based on its qualities and type (build, neighbouring opponent bots, whether the opponent owns the tile, etc), and allow VI to propagate that value out across the board. This produces a single policy for each action type, for example allowing all bots to move greedily w.r.t. their largest-valued neighbour tile. The best parameters I found were gamma=0.8 and theta=0.005. Using numpy, all of my policies could be calculated in 15ms or less, updating the values of up to 25k cumulative tiles in that time. I felt this competition was nicely suited to numpy and the opportunities for vectorisation helped level the playing field with compiled languages :slight_smile:
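That propagation can be sketched with numpy as `v = reward + gamma * (best neighbour value)`, iterated until the largest change drops below theta (the board and rewards here are toys, not the author's feature set):

```python
import numpy as np

def value_iteration(reward, walkable, gamma=0.8, theta=0.005):
    """Propagate per-tile rewards across the board: each tile's value is its
    own reward plus gamma times the best neighbouring value, to convergence."""
    v = np.zeros_like(reward)
    while True:
        # Best neighbour value in the four directions (zero-padded edges).
        shifted = np.stack([
            np.pad(v, ((0, 0), (1, 0)))[:, :-1],   # value from the left
            np.pad(v, ((0, 0), (0, 1)))[:, 1:],    # value from the right
            np.pad(v, ((1, 0), (0, 0)))[:-1, :],   # value from above
            np.pad(v, ((0, 1), (0, 0)))[1:, :],    # value from below
        ]).max(axis=0)
        new_v = np.where(walkable, reward + gamma * shifted, 0.0)
        if np.abs(new_v - v).max() < theta:
            return new_v
        v = new_v

# Toy 1x4 strip with a prize on the right; values decay away from it,
# so a bot moving greedily to its best-valued neighbour walks right.
reward = np.array([[0.0, 0.0, 0.0, 10.0]])
walkable = np.ones_like(reward, dtype=bool)
v = value_iteration(reward, walkable)
```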


Builds were split into two phases. The first is for scrap collection, the second is for defense.
Scrap collection happens up until I ‘meet’ the opponent at a pre-calculated ‘frontier’. I try to build one recycler per turn until we meet, preferring builds with the highest ratio of expected lifetime scrap to number of tiles broken. I do not allow builds on tiles where more than 3 tiles will break.
The defensive recyclers are placed by using VI to calculate a policy for builds that only takes into account the number of opponent bots on each tile. I check if any of my owned tiles are empty, and if they also have neighbour enemy bots, then build a recycler on the highest valued of these tiles. Once the build actions are chosen, I pass them to the movement stage to take into account my build placements when calculating a movement policy.


For movement I compute 2 separate policies: the first allows the value of tiles to propagate through tiles with opponent bots on them, the second does not allow this (so the value must propagate around the opponent bots). I then combine these two policies linearly to compute the final movement policy. The intention of the second policy is to encourage surrounding my opponent. Even without this, my bots fan out roughly in tandem with the opponent: if they spread out to surround, my bots will roughly try to move to match it. There is also a small penalty for moving onto the same tile that another bot has moved to, to incentivise spreading out. My agent likes to move a bit too aggressively onto opponent tiles that are populated with bots, which isn’t desirable behaviour, but I didn’t have time to experiment with the weighting of each policy when combining (I just combine them linearly with untuned weights). I contemplated adding some simple heuristics to move/stay or spawn on my tiles which are outnumbered by neighbouring opponent bots, and even implemented a beamsearch on the final day to solve this as a CSP (constraint satisfaction problem), which mostly worked, but I could not iron out the issues and get it to run effectively within the time constraint on top of the VI calculations (boo python).


The policy for spawns is calculated taking all of the above into account. I calculate a spawn policy using a very similar approach to the movement stage, but taking into account builds and movement as well. At this stage I burn all of my remaining scrap, spawning bots, subtracting some constant from the value of that newly spawned state (to discourage spawning all in one place unless absolutely necessary). The spawning is also a bit hacky and not optimal, I would have liked to force spawns away from tiles where I match neighbouring opponent bots, but ran out of time.
I’m a bit disappointed with my placement and felt I could have gone further with more time, but that’ll have to wait until the next contest :slight_smile:

Contest feedback

Sorry, this is mostly negative. This contest was rather dull compared to previous ones. There was a tremendous amount of subtlety in the movement of bots, but almost all of it could be summarised as “move to the centre first, block opponent, try to surround”. Being defensive in confrontations was overall the best strategy. This didn’t make for a very exciting viewing experience in the IDE, and this contest was unique in that I had to make an effort to find it enjoyable at first. Debugging in the IDE was also super hard; there are numerous visual bugs where bots that move ‘remained’ on the tile they just left mid-action. All of the actions playing out in one small moment made it very hard to scrub through the timeline and visually understand the behaviour of mine and others’ code too.
Without an integrated chat it honestly felt lifeless too. Discord was not an adequate substitute, there were very few people talking there. Overall it was a much lonelier experience. I hope this experiment with no integrated chat can now be put to bed and reverted.
Finally, while I greatly appreciated the extended contest duration over the holidays, I didn’t have much time to participate because of the contest timing, and I feel like this may have also hurt overall participant numbers.
Overall, I had a good time, am grateful for the contest, learned a tremendous amount and I’m looking forward to the next one :slight_smile:


Thanks CodinGame team for hosting an amazing contest, I really hope you keep doing them :slight_smile:

I finished #9. It’s an honor to finish right behind the amazing @eulerscheZahl and to have fought against such great programmers and competitors. I want to especially congratulate delineate. His bot submitted mid-contest was undefeated for several days; we were all trying to chase it. And when it seemed he had finally been left behind, he came up with a dominating new bot with a positive win rate against every other bot in the top 10. Congratulations !

My bot is 100% heuristics based. The first thing I consider when starting a contest is whether I will go for a simulation-based bot or a heuristic one. I don’t know NNs so that’s out of the question for me @jacek :slight_smile:. Since the action tree seemed too big, I assumed simulations would not be possible within the time constraint.

The game is a very interesting resource allocation problem which seems simple, but there are a lot of really complex decisions to be made. Very often a subtle thing can define a game. The main strategy decisions to me are the following:

  • Build vs Spawn
  • Where to build
  • Where to spawn
  • Defend or attack

Information is key for decision making, in codingame as much as in life :slight_smile:. The most important information I compute beside the trivial (score, matter, unit count, etc) to make decisions is the following:
Projected score: The number of tiles owned by a player, plus the neutral tiles closer to that player’s territory, excluding all tiles that will die (recyclers and nearby neighbors with less or equal scrap)
All future unit count: The number of current units, plus the total amount of future gained scrap / 10
Connected neutrals: The number of neutral tiles connected to another neutral tile or to a player’s territory border
Lost tiles + will-die tiles: The number of dead tiles and tiles that will die on each side of the territory (own and opponent’s)
Tile risk: I assign a “risk to lose the tile” value to every one of my tiles and opponent tiles. The more neighbors it has, the riskier it is to lose. And if it has neutral neighbors, it is much riskier. The logic here is that if the opponent takes a tile with many connected neutrals, he can easily flood your side of the field. And if he takes a tile with many neighboring tiles, he has several choices to continue infiltrating your territory, or can place a recycler and do damage.

Battle front tiles: Tiles neighboring an enemy tile
Defense battle front: Tiles neighboring an enemy unit
My side tiles: Tiles closer to me at the start of the game
Many others…
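Much of this information boils down to multi-source BFS distances from each player's tiles. As a rough, hypothetical sketch (the grid encoding and function names below are mine, not the author's), the projected-score idea could look like this:

```python
from collections import deque

# Hypothetical sketch of "projected score": run a multi-source BFS from each
# player's tiles and credit every tile to whoever reaches it first.
# Grid encoding (my own): 'M' mine, 'O' opponent, '.' neutral, '#' grass.
def distances(grid, owner):
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    q = deque((r, c) for r in range(h) for c in range(w) if grid[r][c] == owner)
    for r, c in q:
        dist[r][c] = 0
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] != '#' and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def projected_score(grid):
    mine, opp = distances(grid, 'M'), distances(grid, 'O')
    my_score = opp_score = 0
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell == '#':
                continue
            dm = mine[r][c] if mine[r][c] is not None else float('inf')
            do = opp[r][c] if opp[r][c] is not None else float('inf')
            if dm < do:
                my_score += 1
            elif do < dm:
                opp_score += 1  # equidistant tiles stay unclaimed
    return my_score, opp_score

grid = ["M....O",
        "M.#..O"]
scores = projected_score(grid)  # the '#' lengthens my paths on the bottom row
```

This ignores the subtleties the author mentions (tiles dying under recyclers, unit counts), which would be layered on top of the same BFS.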

My bot follows a prioritized list of actions:

1) Close and win
If I win by placing a recycler in every cell of the battle front and have enough matter, I do it.

2) Voronoi moves
I prioritize a move that makes me “gain” territory. If a move makes me closer than the opponent to 2 neutral tiles, I do it.

3) Defend
For every tile in my defensive battle front, I try to block opponent neighbor units, giving priority to riskier tiles.

The priority for defense is the following:

3.1) If I’m winning (i.e., my projected score is greater than my opponent’s), I can build, and I would still be winning after building a recycler, then I build.
3.2) Cover as much as possible with neighbor units that are not touching an enemy unit (safe to move)
3.3) Cover with units in the tile
3.4) Cover with units in neighbor tiles, even if they are unsafe.
3.5) Cover with spawn.

I try to avoid defending with spawn to allow the possibility to build.

4) Additional Moves
In the early game I try to cover as much territory as possible while reaching for the middle. If I can get a neutral and get closer to the middle, I take it. Then, if I can get a neutral and remain at the same distance to the middle, I take it. I never move a unit farther from the middle in the early game.

In the mid-end game, with any surplus units not used to defend, I attack the riskiest neighboring rival tile.

5) Voronoi spawn
With all surplus matter not used for defending, I look for spawns that bring me closer to neutral tiles.

6) Fight for middle spawn
When there is a neutral tile that touches an opponent unit and one of my own, I spawn units there to fight for it.

7) Build recyclers
I compute the best recycler spots in my territory with a scrap gain / lost tiles value function. I evaluate single recyclers and groups of 2, waiting 1 turn between the first and the second. For the decision to build or not, I put a limit on lost tiles on my side, and I take into account whether I’m winning and the number of units of each team.

8) Additional spawns
With surplus matter I spawn units to attack in the mid-end game close to rival tiles with higher risk, and in the early game I spawn in tiles with neighboring neutrals as close to middle as possible.

Additional features:
Lone wolf
In the late game, if we are stuck fighting in a tile and I’m losing, I go away with 1 unit (the lone wolf) and try to get the needed neutrals to be ahead in the score.

In the end my code was so big I had to remove all the spaces and line breaks before submitting because I reached the size limit. Thanks again for a great contest. It was a fun month of programming and thinking about the game !


AI of arturgo, legend #15

Intro :
What is a Nash equilibrium, and how to compute it.

Suppose we have a 2-player game.
In this game, the first player can play a move x from a set X,
and the second player can play a move y from a set Y.
Both of them play at the same time.
The reward of the first player is f(x, y) for some function f,
and the reward of the second player is g(x, y) for some function g.
Both players want to maximise their expected reward.

For example, Rock-Paper-Scissors is a game like that :

  • We have X = Y = {Rock, Paper, Scissors}
  • f(x, y) = 1 if the first player wins, 0 otherwise.
  • g(x, y) = 1 if the second player wins, 0 otherwise.

To win a game like that, we can’t be deterministic, for example,
if you always play Paper, you’ll be beaten by a player who always plays Scissors.

What you really want is to compute a distribution of probability
over your possible moves (this is called a mixed strategy).
For Rock-Paper-Scissors, it’s well known that this distribution is (1/3, 1/3, 1/3).

More formally, we say that a pair (distribution of probability over X, distribution of probability over Y)
is a Nash equilibrium if no player can improve its expected reward by changing its distribution.
Nash proved that this always exists (it’s not necessarily unique).
It’s clear that in a zero-sum game, you really want to play a mixed strategy which is part of a Nash equilibrium.

But how to compute a Nash equilibrium ?
One way to do that is to use Multiplicative Weight Update (also called Learning with Experts) :

  • We start by assigning a weight 1 to every move in X and Y.
  • We sample a game, the two moves (x, y) being chosen proportionally to the weights.
  • For each move x’ in X, we multiply the weight of x’ by exp(eta * f(x’, y))
  • For each move y’ in Y, we multiply the weight of y’ by exp(eta * g(x, y’))

We can show that the distribution converges to an epsilon-approximate Nash equilibrium,
where epsilon is smaller if eta is smaller; but with a smaller eta the algorithm converges more slowly.

Link to the fall challenge :
The codingame fall challenge is one of these games. Since both players
move and spawn their units at the same time, we can try to design a scoring function
f and g that give the reward of players if the first player plays x and the second player plays y.

However, here X and Y are all possible moves/spawns of the units: this is exponential in the number of units.
The Nash equilibrium computation described above has no chance to pass the time limit.
We have to be smart.

To do that, we can approximate the distribution over the product of the units’ moves
by one independent distribution for each unit.
This lowers the number of weights from 5^units to 5 * units for the moves,
and similarly for the spawns.

There is also a way to improve the convergence of the algorithm:
we use a different eta for each unit, depending on the standard deviation of
the reward with respect to this unit.
It allows us to converge quickly even if one unit has reward deviations of ~1000
and another has deviations of ~1.

Now, my solution is basically :

  1. place the recyclers according to some heuristic: basically matter gained / cells destroyed

    • bonus if it blocks opponent units, malus if it disconnects the board or if the border is tiny, and depending on whether the opponent has more or fewer long-term units.
  2. compute the Nash equilibrium as explained above, and move accordingly. My scoring function is computed in this way:

  • The influence of a unit on a cell is 2^-distance.
  • The influence of an owned cell on a cell is 1.5 * 2^-distance.
  • The border is the set of cells on the edge of the Voronoi cells.
  • If a player has an influence of x on a border cell and the other an influence of y,
    the reward is w * (x - y) / (x + y), where w is the weight of the cell. It depends on the owner (I give priority to defense), and on whether it’s a weak point (if I lose this cell, the enemy will enlarge its Voronoi cell significantly).
  • I add bonus for cells which are good recyclers.
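As a toy illustration of this border-cell scoring (the helper names are hypothetical; the distances would come from the BFS/Voronoi computation):

```python
# Hypothetical sketch of the border reward described above: unit influence
# decays as 2^-distance, owned cells count 1.5 * 2^-distance, and a border
# cell with influences x (mine) and y (opponent) scores w * (x - y) / (x + y).
def influence(unit_dists, cell_dists):
    """Total influence on one border cell from units and owned cells."""
    return (sum(2.0 ** -d for d in unit_dists)
            + sum(1.5 * 2.0 ** -d for d in cell_dists))

def border_reward(weight, my_units, my_cells, opp_units, opp_cells):
    x = influence(my_units, my_cells)
    y = influence(opp_units, opp_cells)
    return weight * (x - y) / (x + y)

# one of my units at distance 1 and an owned cell at distance 2,
# versus one enemy unit at distance 2:
r = border_reward(1.0, my_units=[1], my_cells=[2], opp_units=[2], opp_cells=[])
# x = 0.5 + 1.5 * 0.25 = 0.875, y = 0.25, so r = 0.625 / 1.125 = 5/9
```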

I’m not very good at designing scoring functions, but that gave me #15, and a 66% win rate against delineate :stuck_out_tongue: GG to him !



As we had a game with perfect information, I really wanted to come up with a search algorithm instead of heuristics. However, the decision space is very large, and it seems important to take the opponent’s possibilities into account when units are close, which makes search algorithms hardly competitive against heuristics-only approaches. But there is one thing that helps them: it is possible to search only at depth 1 and evaluate the simulation results, since we can see the impact of each action right away: even the long-term gain and loss of recycler building can be evaluated immediately. So I decided to go for it.


Going through a large decision space while taking into account the opponent? The alternated Genetic Algorithm I wrote for SC2020 seemed like a good candidate:

I keep two populations of individuals, one for me and one for the opponent. Alternately, I run one generation for me, then one for the opponent, then one for me… until I run out of time. At each generation I keep the best individual, and the next generation will be played against this best version. That way, both me and the opponent keep improving and reacting to each other.
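A skeleton of this alternated loop might look as follows, with a deliberately trivial evaluation function (my sum minus the opponent's) standing in for the real one-turn game simulation:

```python
import random

# Toy sketch of the alternated GA: two populations, and each generation one
# side evolves against the other side's current best individual.
def evaluate(me, opp):
    # placeholder for "simulate one turn and score the resulting state"
    return sum(me) - sum(opp)

def evolve(pop, rival_best, rng):
    scored = sorted(pop, key=lambda ind: evaluate(ind, rival_best), reverse=True)
    best, survivors = scored[0], scored[: len(pop) // 2]
    children = []
    while len(survivors) + len(children) < len(pop):
        a, b = rng.sample(survivors, 2)
        cut = rng.randrange(len(a))
        child = a[:cut] + b[cut:]          # cross-over
        child[rng.randrange(len(child))] = rng.randint(0, 9)  # mutation
        children.append(child)
    return survivors + children, best

def alternated_ga(generations=40, pop_size=8, genes=5, seed=1):
    rng = random.Random(seed)
    mine = [[rng.randint(0, 9) for _ in range(genes)] for _ in range(pop_size)]
    theirs = [[rng.randint(0, 9) for _ in range(genes)] for _ in range(pop_size)]
    my_best, their_best = mine[0], theirs[0]
    for _ in range(generations):
        mine, my_best = evolve(mine, their_best, rng)      # my generation
        theirs, their_best = evolve(theirs, my_best, rng)  # opponent's turn
    return my_best, their_best

best_me, best_opp = alternated_ga()
```

In the real bot each individual would encode unit moves plus build/spawn actions, and `evaluate` would simulate the turn at depth 1.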

I tried to do something similar with Simulated Annealing, but without success, so I stuck with the GA.

The genes are the different moves for each unit and a build/spawn action for every 10 matter. A bit of pruning was done: terrible build actions are eliminated, and spawning can only be done on one of the nearest cells from a frontier cell.

Once the cross-over and mutations have been done, the game is simulated at depth 1 with one individual from each player, and evaluated.

As the game is sometimes a rock/paper/scissors, the GA can keep going around in circles instead of converging to a safe solution. To counter that I added a little trick: for the last 25% of the generations, each individual is played against the other player’s best individual, and also against an old individual. This old individual is updated when there is suddenly a big delta with the new best evaluation, as it might indicate that the current individual is risky, and we don’t want to fall into that trap again in the next generations.

While it helps stabilize the GA, it’s definitely not perfect and the GA sometimes takes too risky actions.


If each player is in a separate area, we can already know who the winner will be, so the evaluation is +inf/-inf/0 for win/lose/draw.

Otherwise, the evaluation is the difference of scores of each player, which is a combination of:

  • A voronoï to determine which cells each player can get. The first iteration of the voronoï is done only with the current units, and if both players can reach a neutral cell, the player with the maximum number of units is attributed that cell. For the next iterations I also take into account all the cells owned by each player, since units can be spawned on those cells. The number of units is no longer used for those iterations (it would make the computation time way higher, and it is anyway not always relevant since units can be spawned on the next turns and change the outcome). A picture is worth a thousand words, so here are the voronoï iterations:

The voronoï also takes into account on which turn the cells adjacent to a recycler will disappear, and the cells that will disappear anyway are of course not counted in the voronoï score.

  • The distance from each unit to the frontier (the frontier is computed with the voronoï).
  • The number of cells owned, useful for the initial exploration.
  • The additional matter the recycler will get
  • A bonus if the voronoï shows that I have a disadvantage, but I have at least 2 more units and future units than the opponent. This helps build recyclers to try to reverse a losing position.
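One possible reading of the first voronoï iteration is sketched below (the encoding is mine, and I use the unit count of the nearest stack as the tiebreak, which is only one interpretation of "maximum number of units"):

```python
from collections import deque

# Hypothetical sketch of the first voronoï iteration: expand from each
# player's unit stacks, and when both players reach a cell on the same turn,
# award it to the player whose nearest stack carries more units.
def unit_distances(grid, units):
    """BFS from (row, col, count) stacks; maps cell -> (distance, count)."""
    h, w = len(grid), len(grid[0])
    dist = {}
    q = deque()
    for r, c, n in units:
        dist[(r, c)] = (0, n)
        q.append((r, c))
    while q:
        r, c = q.popleft()
        d, n = dist[(r, c)]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = (d + 1, n)
                q.append((nr, nc))
    return dist

def first_iteration(grid, my_units, opp_units):
    mine, opp = unit_distances(grid, my_units), unit_distances(grid, opp_units)
    owner = {}
    for cell in set(mine) | set(opp):
        dm, nm = mine.get(cell, (float('inf'), 0))
        do, no = opp.get(cell, (float('inf'), 0))
        if dm < do or (dm == do and nm > no):
            owner[cell] = 'me'
        elif do < dm or (dm == do and no > nm):
            owner[cell] = 'opp'   # exact ties stay unclaimed
    return owner

# 3 units on the left vs 1 unit on the right: the contested middle cell
# is reached simultaneously, so the bigger stack claims it.
owner = first_iteration(["....."], [(0, 0, 3)], [(0, 4, 1)])
```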

My final AI still misses important features that I couldn’t add within the time limit of the contest and the time limit of 50ms (the evaluation needed to be quite fast to have enough generations):

  • early recycler planning (but I feel like I would have to run another search algo only for that)
  • spawning at the right frontier cell (there is no differentiation between frontier cells, so when attacking it picks a random one)
  • deadlocks

Contest Feedback

  • I wasn’t fond of the 1-month duration, I really prefer 10 days contests when I can be competitive without spending too much time.
  • I found it a bit disappointing that the bug/feature of moving to a spawning recycler was reported early in the contest, but we had no official answer from CG about it.
  • Otherwise, I really liked the contest rules: simple, perfect information, a large decision space that makes both heuristics and search algorithms viable, easy to look at a specific replay and understand why we lost… and with the decision of building more or fewer recyclers, it makes it one of the best designed bot contests on CG in my opinion :slight_smile:

Hi Saelyos, nice solution. Reading your PM, I think Smitsimax with depth 1 could have been a good option too, if I’m bored I might try it out in the multi…


First of all I’d like to thank CodinGame for this challenge. I really enjoyed the game, and the format: one month allows scheduling time to think, code, experiment, and have a life :slight_smile:

I finished #51 in legend.

My bot runs the same 3 stages each turn (described a little more below):

  1. Knowledge computation: compute a lot of static data based on the current game state, with almost no stateful data.
  2. Search space generation: based on the current state and the newly gained knowledge, generate an N-dimensional space. A vector in this space can be represented as the bot output.
  3. Search this space for the “best” solution (here comes an evaluation function) until the end of the allowed turn time. All the concerns of the game are prioritized by the search (moving, spawning, defense, attack, recyclers, …).

Knowledge computation

Based on the current gamestate, the bot computes (non-exhaustive list):

  • the future colliders (grass+recyclers) for next 10 turns
  • a floodfill, coloring cells of both players (and taking into account future colliders). From this it can compute the “potentially owned cells” of players, and more importantly, the “frontline”, i.e. all the cells that are equidistant from each player. This frontline can be cells owned by me, the opponent, or still unclaimed cells.
  • gradient field(s): with the frontline + current colliders, run a wavefront expansion algorithm starting at the frontline cells. This generates a map in which target cells (the frontline where I want robots to go) have the maximum potential, and this potential decreases the further a cell is from the frontline. This is used by the eval function for robot pathfinding: by simply adding the gradient field value of my bot to the scoring of the state, a cell with a better potential will score better, so my robots naturally move toward the frontline. This allows a performance-wise very cheap pathfinding (I never had to use A* or anything else).
  • a threat map: this is a map with a dilation (actually it is a bit more evolved than this, but it gives the picture) of the opponent robots’ placement
  • etc*
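The gradient-field idea can be sketched with a plain wavefront expansion (all names are hypothetical; in the real bot the frontline and walkable cells come from the floodfill described above):

```python
from collections import deque

# Hypothetical sketch of the wavefront-expansion gradient field: frontline
# cells get the maximum potential and it decreases with distance, so a robot
# "pathfinds" by simply stepping onto its best-scoring neighbour.
def gradient_field(walkable, frontline):
    h, w = len(walkable), len(walkable[0])
    field = [[-1] * w for _ in range(h)]
    q = deque(frontline)
    for r, c in frontline:
        field[r][c] = 100          # maximum potential at the frontline
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and walkable[nr][nc] and field[nr][nc] < 0:
                field[nr][nc] = field[r][c] - 1
                q.append((nr, nc))
    return field

def best_step(field, r, c):
    """Climb the gradient: no A* needed, just pick the best neighbour."""
    steps = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    valid = [(nr, nc) for nr, nc in steps
             if 0 <= nr < len(field) and 0 <= nc < len(field[0]) and field[nr][nc] >= 0]
    return max(valid, key=lambda p: field[p[0]][p[1]])

walkable = [[True] * 5 for _ in range(3)]
field = gradient_field(walkable, [(1, 4)])  # single frontline cell at (1, 4)
```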

All these tasks are done only once per turn, so I did not spend time to optimize anything particularly.

Search space generation

With this newly acquired knowledge, I generate a search space :

  • one search dimension per single robot, with its allowed moves
  • defense: for each cell next to or on the frontline, a dimension with values “wait”, build, and spawn from 0 to M-O robots (with M my robot count, and O the opponent robot threat)
  • attack moves: add “some” more dimensions depending on some KPIs to allow attack moves. Attacking is either spawning on frontline cells where there is no threat, or spawning on cells that are already in open conflict (as the defense part is already tackled by previous dimensions, the hypothesis here is that the cell already has a robot count aligned with the opponent threat). This allows already existing robots to attack opponent cells, while newly spawned robots come to reinforce defense.
  • recyclers: an optional sustain recycler
  • etc*


The search is a simple depth 1 genetic algorithm, without opponent simulation. Evaluation function includes, from most to least important:

  • sustain, so that my bot does not create a situation where it is unable to at least have the same number of bots as the opponent
  • defense! by heavily penalizing placements that are not aligned with the threat. This is enough to have robots collaborate for defense (for example by sliding together to fit defense needs)
  • then attacking
  • then moving, by using previously computed gradient field
  • etc*
  • The devil’s in the details :smiling_imp:

BlitzProg ~ 500th (Gold league) ~ 450th gold

Pretty straightforward approach on my side. I pathfind to neutral and opponent cells, and if enemy units are standing next to a cell I own, I block with a recycler - or spawn another unit if there is only one. The AI may decide to attack an empty opponent cell with the leftover scrap.
Occasionally, I will also drop a recycler in the middle of my territory if I only sacrifice a single cell doing so - while still earning a lot of scrap.

When completely separated from the opponent I spend all of my scrap in units to grab my territory.

To get into Gold League, I also used a Voronoi diagram to assist the movement of existing units. This computes a theoretical “territory” I own, and my units try to walk along its borders to secure it. The overall Voronoi score also helps decide on blocking: a single enemy will trigger a blocking recycler reaction if I am winning.


Indeed it was a pleasure to be Jacek’s fellow and by our difference of rank to show how skillful he is!


#1 on the first day of the contest
#47 legend at the end

This time I won’t be writing a long post mortem as I believe other people have more interesting stuff to say.
My bot was purely heuristics, I did simulate some stuff but can’t really call it a search.
I learned that I should be using way more asserts in my code, because I spent more time debugging than actually writing features, and asserts would have saved me a lot of that time.
(Basically hidden bugs that came out after a lot of replay watching.)
As usual the game surprised me with its depth, and in the end I had almost 2000 lines of code.
I liked this game a lot because it feels like there was a lot of different approaches that worked.

My logic was handled in this order, I pre-calculated some other stuff before:

  • spawnKillMoveRecyclers(); // detect winning recycler
  • staleMateDetection(); // seems obvious
  • cellsCallFreeDefendersWithAllyTilesOnly(); // units from the back support defense
  • spawnDefenseRecyclers(); // I aimed for 1 recycler per turn
  • captureNeighbourBorderTiles(); // was used for capturing border instead of staying and defending
  • comboAttack(); // 2 tiles that have more units than 1 common enemy tile
  • allInWhenSurrounded(); // when number of robots grew and one tile is surrounded by enemy I just send enough units to the tile with lowest amount of units while leaving enough to defend a small attack
  • stayToDefend(); // obvious, keeping the “border” stable
  • moveToBorder(); // was used mostly before the main fight to target border tiles, kind of like the Hungarian algorithm but simpler
  • moveToNeutralsDuringStalemate(); // it does what it says
  • myCellsCallDefenders(); // surrounding tiles that do not use robots for anything support the defense
  • summonToProtect(); // if there is not enough defense before then we summon there, usually we summon first where I have least robots like 0 v 1
  • spawnMacroRecyclers(); // if we still have money then I try to gain advantage by making recyclers based on some equation. This had a lot of potential like difference in cells but I mostly focused on scrap gained.
  • captureNeutralTilesInDirectFight(); // usually it’s better to move to a neutral tile on the enemy side of map and spread than move to enemy and die to a defender
  • moveToEnemyTilesDuringFight(); // if we didn’t use robots before, then we attack
  • summonToFlank(); // spawn nearby cells where enemy has no units and can be captured
  • moveToCapture(); // if we didn’t “spend” robots before they move to some tiles (usually they were already used)
  • moveToNeighbours(); // before fight starts, I guess if nothing was picked before then I just move to a neighbour that leads to the center
  • spawnAtBorders(); // only spawn that is not defense - it depends on the surrounding tile value, enemy units and my units
  • unitsMoveToWhateverTheyCan(); // this should not be used but if units did not move then they are sent to whatever they find, usually during stalemate or “captured” areas

For me it’s more fun to code in the beginning than to optimize the last 5% of my bot but maybe in one of the future contests I will try to aim for something more than just legend.
Congratulations to everyone who reached legend, it definitely was not easy this time due to length of the contest.


Here’s an overview of my strategy for this competition: GitHub - delineate/cg-2022-fall-challenge-strategy


148 Gold

My bot is pretty simple and has the following steps:

  1. Calculate utility information
    Calculate useful info for the current step, e.g. the amount of time left for tiles near recyclers, and detect isolated “islands” of the field:
  • If all tiles on the island have a single owner, ignore them
  • If there are only my tiles and neutral tiles - mark them as low priority
  • All other tiles are high priority
  2. Build recyclers
    Building has two modes:

    • Before contacting the enemy - build recyclers on any owned tile every few steps. Recyclers that will destroy the least number of neighbouring cells have higher priority.
    • After contacting the enemy - build recyclers on an empty tile if there are enemy units on neighbouring tiles.
  3. Spawn
    Bots are spawned on the tile that is closest to an opponent tile. If nothing is reachable, bots are spawned on the tile with the most neutral neighbours.

  4. Move
    Since all my attempts to implement decent spreading failed, I decided to designate two bots and move them to the top and bottom tiles in the center of the field. Other bots move to the closest tile that is not mine, prioritizing empty opponent tiles over anything else and moving in the direction of the enemy spawn. Each bot was calculated independently, and I tried to avoid sending bots to the same cell when possible.

I failed all attempts to add some enemy awareness and defensive movements aside from spawning recyclers like crazy, so I spent the last days adding small features and fixing bugs in existing code, e.g. ignoring tiles that will die on the next turn, switching to A* pathfinding, avoiding move/spawn to the tile where I’m building a recycler, and finished in the upper half of Gold. Thanks CG for the great contest!