Spring Challenge 2023 - Feedbacks & Strategies

#1 Legend


I enjoyed the contest and the game. It was very non-trivial, requiring several different algorithms to work together. And still, it is not clear to me what could be an optimal strategy.
Thanks to the organizers!
Also, special thanks to @aCat for support and discussions and to the rest of the University of Wrocław group. And of course, thanks to the great opponents!

The overall idea of my bot relies on two phases and is similar to the others. In the first phase, it makes a general project for the current path connections (tree), and in the second phase, it runs micromanagement modifying beacon weights. For calculating beacon placement, I did not see an alternative way to find the best move other than trying many configurations by simulation, so the branching factor is huge. As in the Cultist Wars contest, I tried to make efficiency my strength; this time, instead of exploring deeper, the effort went into exploring the game tree wider, which simply means trying more of my own moves.

I spent the first days encoding the algorithm from the referee and optimizing it. Additionally, I had to adapt to constantly changing rules. At that time, without any reasonable AI algorithm, I found it hard to reach even Bronze. But Bronze was required to test the full rules. The situation changed immediately after adding any path-making algorithm with equal beacons. However, since I saw that improving the micromanagement phase affects the results much more and actually compensates for weaknesses in the high-level design, I neglected the development of the first phase and meta-decisions for too long. Yet they were also very important when encountering strong, mature agents.

There were also some glitches worth noting: The rules are quite difficult (because the movement algorithm is a part of the rules, although undescribed) and they were changing through the first half of the contest. Therefore, implementing the engine early was risky, and indeed I had to redesign it a bit after some changes. While I understand and accept the concept that ants cannot be controlled directly, the algorithm behind it looks artificial, because it affects the ants globally. This globality also makes it difficult to design moves by hand, in favour of simulation. In my opinion, a more natural way to model ants would be, e.g., having each ant go to the nearest beacon it sees, following other ants when possible.

Technical details

The tree phase

The tree is computed in a greedy/Hill Climbing way. The resources (crystals or eggs) are added iteratively, each time evaluating the resulting constructions. The estimation is based on the value of the extracted resources divided by the build cost. The resource is connected to the existing tree with the cheapest path, assuming, e.g., that going through existing ants is cheaper and promoting going through other resources.
This is computed with the standard Dijkstra algorithm. It is not a minimum spanning tree (MST) on resources, because we can connect a new resource directly to a part of an existing path; if we knew exactly which resources we want to connect (but we do not), it would be a variant of the Steiner tree problem.
Instead of adding a new resource, it is also possible to increase the width of an already existing path.
The proposed width (the number of wanted ants) at each cell is also a part of the project, and it takes the opponent’s attack chains into account, increasing the width wherever needed along the way up to the base.

The construction is complete when its size exceeds the number of ants, with a slight surplus allowed.
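The cheapest-path connection described above can be sketched as a multi-source Dijkstra over the hex grid, where stepping onto a cell with our ants is discounted and stepping onto a resource cell is promoted. The concrete costs (10/4/7), the `Cell` struct, and the adjacency layout are illustrative assumptions, not the author's parameters:

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Sketch: cheapest connection cost from the existing tree to every cell,
// discounting cells with our ants and promoting cells with resources.
struct Cell { bool hasAnt = false, hasResource = false; };

std::vector<int> cheapestPathCosts(
    const std::vector<Cell>& cells,
    const std::vector<std::vector<int>>& neighbors,  // hex adjacency list
    const std::vector<int>& sources)                 // cells of the existing tree
{
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(cells.size(), INF);
    using Item = std::pair<int, int>;                // (cost, cell)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    for (int s : sources) { dist[s] = 0; pq.push({0, s}); }
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                  // stale entry
        for (int v : neighbors[u]) {
            int step = 10;                           // base cost of an empty cell
            if (cells[v].hasAnt) step = 4;           // cheaper through our ants
            else if (cells[v].hasResource) step = 7; // promote paths via resources
            if (d + step < dist[v]) { dist[v] = d + step; pq.push({dist[v], v}); }
        }
    }
    return dist;  // dist[r] is then used to connect resource r to the tree
}
```

A new resource is then attached via the predecessor chain of its cheapest cell, which naturally reuses existing ants and passes through other resources.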

The micromanagement phase

This is absolutely crucial for the strength of my agent. The variant with the tree only has only about a 0.82% win ratio against the full agent. Improving this part was a game-changer several times. The goal of micromanagement is not just moving ants as close as possible to the given high-level project; it can also extract nearby resources apart from the plan, adjust the width of chains, block the opponent, and maximize the extraction. Actually, this may lead to a trap of ignoring these factors in the first phase, and I fell into that trap for a few days.

Here, the algorithm takes the initial tree project and modifies it in a Hill Climbing manner. The initial project is applied and the algorithm identifies places that might be good to change: where there are fewer or more ants than proposed, not forgetting about nearby resources. There are several iterations, modifying either a pair of beacons or a single beacon (increase or decrease). If time permits, more candidates to try are added. What may distinguish this from other solutions is that I do not try random changes but rather search through all possible single changes exhaustively. And in this phase, I do not keep the constraint relating the number of beacons to the number of ants, which allows exploring the full spectrum of possible moves.
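The exhaustive single-change step can be sketched as follows. The toy objective (distance to a fixed target profile) is a placeholder assumption standing in for the two-turn simulation eval; only the search structure reflects the text:

```cpp
#include <cstdlib>
#include <vector>

// Toy objective standing in for the simulation-based eval (pure assumption):
// prefer beacon weights close to a fixed target profile.
static const std::vector<int> kTarget = {2, 1, 3};

double evaluateBeacons(const std::vector<int>& b) {
    double s = 0;
    for (size_t i = 0; i < b.size(); ++i) s -= std::abs(b[i] - kTarget[i]);
    return s;
}

// One hill-climbing pass: try EVERY single +1/-1 beacon change exhaustively
// (no random mutations) and keep the best improving one.
bool improveOnce(std::vector<int>& beacons) {
    double best = evaluateBeacons(beacons);
    int bestCell = -1, bestDelta = 0;
    for (int c = 0; c < (int)beacons.size(); ++c) {
        for (int delta : {+1, -1}) {
            if (beacons[c] + delta < 0) continue;   // beacon strength stays >= 0
            beacons[c] += delta;
            double score = evaluateBeacons(beacons);
            beacons[c] -= delta;                    // undo the tentative change
            if (score > best) { best = score; bestCell = c; bestDelta = delta; }
        }
    }
    if (bestCell < 0) return false;                 // local optimum: stop climbing
    beacons[bestCell] += bestDelta;
    return true;
}
```

Calling `improveOnce` in a loop until it returns false is the basic iteration scheme; pair changes would add a second nested loop over cells.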

The evaluation uses a simulation of two turns, applying the same move and adding the value after the second turn with a strong decay. The main components are, of course, the score and the number of ants, and also the agreement level with the initial project, as well as the formed attack chains against the opponent. Initially, I was using three turns. However, after the evaluation became more complex, the third turn stopped being profitable. And surprisingly, this is not just because it is more expensive, but the third turn prediction seems already too unreliable and useless. The bot using three turns achieves a win ratio of about 48% against the same bot using two turns when they are both limited to evaluate at most 10 000 moves instead of a time limit. Of course, its situation is worse if we take the time overhead into account (then it achieves only 45%).
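The two-turn evaluation with decay can be written out as below. `simulateTurn` and `evaluate` are stubs standing in for the game simulation and state scoring, and the decay constant 0.3 is an assumption, not the author's tuned value:

```cpp
// Sketch of the two-turn lookahead: apply the same move twice and add the
// second-turn value with a strong decay.
struct State { int value; };                                   // placeholder state

State simulateTurn(const State& s) { return State{s.value + 1}; }  // stub
double evaluate(const State& s)    { return s.value; }             // stub

double twoTurnEval(const State& s, double decay = 0.3) {
    State afterFirst  = simulateTurn(s);           // apply the candidate move
    State afterSecond = simulateTurn(afterFirst);  // repeat the same move
    return evaluate(afterFirst) + decay * evaluate(afterSecond);
}
```

A third term (`decay * decay * evaluate(afterThird)`) is exactly what the author tried and abandoned, since the third-turn prediction proved too unreliable.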

Additionally, before the entire phase, a simplified and short micromanagement for predicting the opponent is run, which is slightly better than assuming that it does not move. The predicted move consists of beacons at the places occupied by the opponent’s ants and adjacent to them.


I was seeing a big difference in results depending on the time budget, so I paid much attention to optimization, which is also the part that I like the most. The original movement algorithm is implemented in a very inefficient way. I got rid of generating and sorting pairs by computing them on the fly. For this, it was enough to precompute, for each cell and each distance, the list of cells at that distance. Also, the list of ant cells is dynamically modified. This still has worst-case quadratic time complexity but works much faster when the beacons are close to ants (the usual case) and there are enough beacons to assign the ants quickly.

As to the ant movement, it depends on the cell neighbors ordering, which is affected by the assignment (target cell), other ants (game state), and beacons (move). I sort the neighbors using dedicated sorting networks for 2–6 elements. Under careful measurement, it appeared slightly faster to sort this way than to select from unsorted neighbors, most likely because we may need to do this several times for the same ant cell. Here, I regret the rule change making beacons also affect the ants' path selection. Before that change, the neighbors order depended only on the state, and so could be precomputed for faster performance.
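For concreteness, here is what one of those dedicated networks looks like for 4 elements (the optimal 5-comparator network); plain ints stand in for the neighbor-ordering keys, which is an assumption for illustration:

```cpp
#include <algorithm>
#include <array>

// Conditional swap: the basic building block of a sorting network.
inline void cswap(int& a, int& b) { if (b < a) std::swap(a, b); }

// Optimal 5-comparator sorting network for n = 4. Branch pattern is fixed,
// so it is friendly to the CPU compared to a general-purpose sort.
inline void sort4(std::array<int, 4>& v) {
    cswap(v[0], v[1]); cswap(v[2], v[3]);
    cswap(v[0], v[2]); cswap(v[1], v[3]);
    cswap(v[1], v[2]);
}
```

The 5- and 6-element variants follow the same pattern with 9 and 12 comparators respectively.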

For computing the attack and harvest chains, I implemented Dial’s algorithm (Dijkstra with buckets as the priority queue). The maximum number of ants on one cell is usually small, so it works faster than Dijkstra with the traditional heap. Also note that updating the chains is cheaper, and there are some cases when updating is not needed.
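A minimal sketch of Dial's algorithm, assuming edge weights are small non-negative integers bounded so that `maxCost` covers the largest reachable distance (the graph layout here is illustrative):

```cpp
#include <limits>
#include <utility>
#include <vector>

// Dijkstra with an array of buckets as the priority queue: bucket d holds
// nodes whose tentative distance is d, so extraction is O(1) per node.
std::vector<int> dial(const std::vector<std::vector<std::pair<int, int>>>& adj,
                      int source, int maxCost)   // adj[u] = list of (to, weight)
{
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(adj.size(), INF);
    std::vector<std::vector<int>> buckets(maxCost + 1);
    dist[source] = 0;
    buckets[0].push_back(source);
    for (int d = 0; d <= maxCost; ++d) {
        while (!buckets[d].empty()) {
            int u = buckets[d].back(); buckets[d].pop_back();
            if (dist[u] != d) continue;                 // stale entry, skip
            for (auto [v, w] : adj[u]) {
                if (d + w < dist[v] && d + w <= maxCost) {
                    dist[v] = d + w;
                    buckets[dist[v]].push_back(v);      // re-insert at new cost
                }
            }
        }
    }
    return dist;
}
```

With the ant counts per cell acting as the small weights, the bucket array stays short, which is why this beats a binary heap here.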

Finally, everything that was possible was precomputed, like resource cells, quantities in different areas, and distances of various types.

How important was the efficiency? See the following comparison of my bot with different time limits (in milliseconds) with itself:

   MSz(5 ms) vs MSz(100 ms): 15%
  MSz(10 ms) vs MSz(100 ms): 23%
  MSz(30 ms) vs MSz(100 ms): 36%
  MSz(50 ms) vs MSz(100 ms): 42%
 MSz(200 ms) vs MSz(100 ms): 55%
MSz(1000 ms) vs MSz(100 ms): 60%

Conclusion: If it were 3 times slower, it would be of similar strength to @DomiKo's bot (based on the estimated win ratio of 34% after the final recalculation).


Designing meta strategies was also quite important. The most important rule is: harvest on the border first (the cells at an equal distance from the players' bases and around them), and when there is nothing left, go back to my own area. Resources threatened by the opponent but not in its area are also considered a priority. Additionally, when the bot detects that it will lose because there are not enough crystals, it tries to steal something from the opponent's area; however, when such a situation occurs, it is usually hopeless anyway.

Finally, there were many parameters to tune (e.g., egg values, localization bonuses, path costs, eval parameters, and the values controlling micromanagement). I find that there are more of them than in my agents for other games.

Ideas that did not work

It is natural that some improvement attempts do not work, and I had a lot of them. Among the ideas I remember: hashing the moves to avoid repeated eval computation, trying several paths from one resource, estimating the local density of resources to prefer rich locations, and predicting the game ending after 100 turns to affect meta decisions.


#4 Legend

My strategy consists of two phases.

  1. Choosing which resources (= eggs, crystals) on the map to use as target destinations, using a rule-based strategy, and then deciding the paths from my bases to the target destinations using BFS.
  2. Deciding how to place beacons with a hill-climbing algorithm. It maximizes the number of crystals and eggs and minimizes ant distances from the paths decided in phase 1.

[Phase 1]

Choosing target destinations

  • Eggs are more important than crystals for the first 5 turns.
  • After that, priority is given to crystals near the middle cells.
    (“Middle cells” are cells at the same distance from my bases and the opponent's bases.)
  • Put off the crystals that are close to my own bases (however, if the opponent's ant chain is closer than mine, go to defend them).
  • Exclude places where the opponent's attack chain is effective and I cannot create a more powerful chain.
  • In games where players are competing for a small amount of crystals in a single center cell, always mark that cell as an important target destination.

Deciding paths

  • Sort the target destinations by distance from my bases (closer is better), and connect them one by one by the shortest path to my bases or to already connected cells.
  • Repeat this process until the total path length reaches the number of ants I have, or all the target destination paths are decided.
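The connect-one-by-one step above can be sketched as a BFS from the growing set of connected cells toward each target, returning the shortest attaching path. The hex adjacency list and helpers are illustrative assumptions:

```cpp
#include <algorithm>
#include <queue>
#include <set>
#include <vector>

// Returns the shortest path (excluding the already-connected start cells)
// from any connected cell to `goal`, over a hex adjacency list.
// Assumes `goal` is reachable from the connected set.
std::vector<int> bfsPath(const std::vector<std::vector<int>>& nb,
                         const std::set<int>& connected, int goal)
{
    std::vector<int> prev(nb.size(), -1);
    std::queue<int> q;
    for (int c : connected) { prev[c] = c; q.push(c); }  // multi-source BFS
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (u == goal) break;
        for (int v : nb[u])
            if (prev[v] == -1) { prev[v] = u; q.push(v); }
    }
    std::vector<int> path;
    for (int c = goal; !connected.count(c); c = prev[c]) path.push_back(c);
    std::reverse(path.begin(), path.end());
    return path;
}
```

Each returned path's cells are then added to the connected set before the next target is attached, and the loop stops once the tree size reaches the ant count.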

[Phase 2]

Hill climbing to optimize the beacons placement.

  • The initial solution is to repeatedly place beacons one by one on the paths decided in phase 1 until the number of beacons equals the number of ants I have.
  • A solution change is to place and remove beacons on the paths decided in phase 1 or their 6 neighbours. The number of beacons and the number of ants always stay the same.
  • The two players alternately improve their moves. This is the same algorithm I used in Fall Challenge 2022.


  • In the early game, I give a high evaluation value to the number of ants.
  • In the middle or end of the game, I give a high evaluation value to the crystals targeted by the rule-based strategy in phase 1.
  • As a tiebreaker, I use the sum of distances between the ant positions and the beacons (closer is better). For this, I used a simple algorithm that sequentially runs a BFS from each beacon and assigns the nearest unassigned ant.
  • As a further tiebreaker, I call the phase 1 rule-based routine in the evaluation function and evaluate the sum of distances to its paths. This is useful when some cells will run out of resources this turn, to move surplus ants toward the next target destination in advance.
  • In games where players are competing for crystals in a single center cell, I give a high evaluation to getting the majority of them.
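The distance tiebreaker above can be sketched as follows: run a BFS from each beacon in turn and greedily consume the nearest still-unassigned ant (a simple sequential assignment, not an optimal matching). The grid representation is an illustrative assumption:

```cpp
#include <queue>
#include <vector>

// Sum of beacon-to-nearest-unassigned-ant distances over a hex grid.
// Each beacon BFS stops at the first ant found; that ant is consumed.
int sumAssignmentDistances(const std::vector<std::vector<int>>& nb,
                           const std::vector<int>& beacons,
                           std::vector<bool> antAt)   // by value: ants get consumed
{
    int total = 0;
    for (int b : beacons) {
        std::vector<int> dist(nb.size(), -1);
        std::queue<int> q;
        dist[b] = 0; q.push(b);
        while (!q.empty()) {                          // BFS until an ant is found
            int u = q.front(); q.pop();
            if (antAt[u]) { antAt[u] = false; total += dist[u]; break; }
            for (int v : nb[u])
                if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
        }
        // If no ant remains reachable, the beacon simply contributes 0.
    }
    return total;
}
```

Smaller sums mean the ants can reach the proposed beacon layout faster, which is exactly what the tiebreaker rewards.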

Thank you to the organizers for giving us this valuable opportunity to have fun and compete.


#32 Legend

Thanks CodinGame for this great contest, and congratulations to all my teammates at 1337 school for securing the 2nd place in the school’s leaderboard.

I used a simple heuristic from Wood to Gold, then to reach the Legend league I needed a search and simulation algorithm. That's why I ported the game engine from Java to C++. After that, I used a Genetic Algorithm (GA) for the search because it's the one I'm most comfortable with. In the first turn, I computed the distances between all cells and stored them. Each turn, I ran the GA for 100 milliseconds, combining random beacons in random cells, then called the simulation to get the score after the ants' movement. I repeated this process for each player, swapping to the other player in each GA iteration. The scoring mechanism was simple: I scored for eggs before half of the eggs on the map were taken, and I considered the crystals and a Voronoi diagram to guide the bot towards resources near the enemy. Since I only simulated three turns, I could only see the resources within three cells' distance; this is where I used the Voronoi diagram in the scoring. Thanks again for organizing this great contest.


#31 Legend

Thanks to CodinGame and my teammates at 1337 school.

My bot is a combination of heuristics and basic simulations.

    I need to know how many eggs are required to collect all the crystals in the minimum number of turns. Using the 1000 ms of the first turn, I simulate all combinations of decisions: each is a collection schedule of (crystals * N + eggs * M), where N and M are the numbers of turns spent claiming each of them. Then I save the best schedule, the one with the minimum number of turns to collect the (INIT_CRYSTALS_SUM / 2) needed to win. This tells me how many eggs are needed to end the game earlier, i.e., when to switch from collecting eggs to crystals.

    I build a tree of paths containing the minimum number of empty cells, using a map of targets. First, I mark my bases as targets and run BFS multiple times from all the resources not yet marked as targets to the nearest target; I save the shortest path found, mark all the cells of that path as targets, and repeat until either all the resources are targets or (number of targets + path.size()) exceeds MY_CURRENT_ANTS_SUM, to make sure I can reach all the cells of the tree.
    The BFS uses a priority_queue (paths with the fewest empty cells at the top) when adding a path to the tree.

    I did this part to get a good expansion of the tree. For it, I needed to know how the ants move when I place beacons of given strengths on cells, so I reimplemented the ants' movement from the game's source code in C++ to predict where the ants will move. Then, for 100 ms, I randomize the beacon strengths of the tree cells, each time predicting the next turn as if I used that move, and keep the best result according to a scoring function.

    I calculated the flow of ants from the bases through each cell that contains ants, for both me and the opponent, to estimate how many eggs each of us can collect; I then did it again after adding the claimed eggs to each base to estimate how many crystals we can collect, and used this for scoring:

        return (my_collected_eggs / opp_collected_eggs) * 2 + (my_collected_crystals / opp_collected_crystals);
        return (my_collected_eggs / opp_collected_eggs) + (my_collected_crystals / opp_collected_crystals) * 2;

#50 Legend

Thanks CodinGame for the contests. Once again, it has been a lot of fun.

About the Bot itself:


Even with the help of ChatGPT, it took me a while to get it fully working (with optimizations).
Around 2 full days, I guess…


During the first turn I compute an MST by pruning the tree using Kruskal's algorithm, since I saw it in a discussion on Discord.
(ChatGPT has gently written most of the code for me…)
But I also added direct paths from any of my bases to any of the resources.
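A minimal Kruskal sketch with union-find, along the lines of the MST computation mentioned above (resource/base cells as nodes, pairwise hex distances as edge weights); everything here is a generic sketch, not the author's code:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Edge { int a, b, w; };

// Union-find with path compression, enough for Kruskal on small graphs.
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int x, int y) {
        x = find(x); y = find(y);
        if (x == y) return false;
        parent[x] = y;
        return true;
    }
};

std::vector<Edge> kruskal(int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& l, const Edge& r) { return l.w < r.w; });
    DSU dsu(n);
    std::vector<Edge> mst;
    for (const Edge& e : edges)
        if (dsu.unite(e.a, e.b)) mst.push_back(e);  // keep edges joining components
    return mst;
}
```

The extra base-to-resource paths mentioned above would simply be appended to the MST edge list afterwards.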

For each path, there are 3 possible actions:

  • Put beacons of strength 1 on all cells of the path
  • Put beacons of strength 2 on all cells of the path
  • Put beacons of strength 3 on all cells of the path

Search algorithm:

My bot mostly uses a kind of MCTS search.

I was using a genetic search at first, but the randomness was not helping me understand where my bugs were.

In this kind of MCTS search, each node represents a path on which to put beacons with strength 1, 2 or 3.

When expanding, the algorithm can only choose actions using paths not already chosen by any of the parents, or a wait action.
After choosing a wait action, the node is not expanded anymore.

During the parent update, I don't compute the mean; I only keep the best score of the children.

I already used that approach during “Legends of Code & Magic”.


At each turn, I compute a match temperature score between 0 and 1.
A value of 1 means we are at the beginning of the game and should get some ants.
A value of 0 means the game is ending and crystals are the priority.

For each rollout, I run a simulation over 6 turns.
For each turn I compute turn_score = my score - opponent score,
with player_score = crystals + 10 * match_temperature * eggs.

Then the final_score uses a discount factor of 0.4:
final_score = current turn_score + 0.4 * turn_score in one turn + 0.4 * 0.4 * turn_score in 2 turns + …
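The discounted rollout score above, written out as a function; `turnScores[t]` is (my score - opponent score) at simulated turn t, and gamma = 0.4:

```cpp
#include <vector>

// Geometric discounting of per-turn scores:
// final = s[0] + 0.4 * s[1] + 0.4^2 * s[2] + ...
double finalScore(const std::vector<double>& turnScores, double gamma = 0.4) {
    double score = 0.0, weight = 1.0;
    for (double s : turnScores) {
        score += weight * s;
        weight *= gamma;   // decay the weight one step further each turn
    }
    return score;
}
```

With 6 simulated turns and gamma = 0.4, the last turn contributes only about 1% of the first turn's weight, so the rollout is dominated by the near future.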

What I will remember the most from this contest:

1. Using chatGPT to translate the Java referee code into C++ code.

Only the functions I was interested in. Not the whole thing.
That was helpful, since the logic of ant movement was quite hard to understand.
However, I had to rewrite most of it for performance reasons, but it was helpful anyway.

2. An epic fight against the Gold Boss 30 min before the end of the contest.

I had a hard time passing the Gold Boss.
On Sunday night, I decided that I could not do it.
On Monday morning, I woke up with a simple idea for a bug fix in my match temperature.
(It is still not working correctly, BTW…)

I pushed the change to the arena at 07:17. Just in case…
The servers were completely overloaded at that time.
It was slow as hell…
So when I finally got to 100% around 9:30, I had already drunk 10 coffees and was in a state of excitement that is hard to describe :wink:

However, I got lucky and finished in Legend league.
(But without these server lags, I guess that @egaetan and @BlitzProg would have passed the boss, too)


#84 Legend

My bot used a heuristic to find the shortest path from each resource to previously placed beacons, then approximated the beacon strengths using simulated annealing, scoring 5 turns ahead in each SA iteration.

The evaluation function was simple: scoring the resources gathered.
Neighbor generation: randomly choosing one beacon and assigning it a random strength from 0 to 3.

Performance: on large maps I did 6k iterations in the early rounds (not counting the 1000 ms first round); as the beacons and ants grow, I do ~500 iterations.

Enjoyed the contest, good job everyone!
Special thanks to @Sheeesh @AlgoAce and all 1337 team for lending their help when I was stuck.


Yeah, it was insane. Possibly the closest one can come to a league without getting into it, on my side.

  • Went in front of the boss because of a strength/weakness triangle (@bourgeof >> Gold boss > Me > @bourgeof)
  • @bourgeof got into legend before I could finish my games
  • My AI failed to sustain its rank and completed its games behind the boss by a mere 0.01 points
  • got pushed in front of the boss briefly (for only like 10 seconds), but the server was too laggy to notice
  • the boss got back in front at widely varying distances, which shortened all the way back to 0.01 points at the very end of the contest
  • @egaetan pushed right before the end; he only had to win once against the boss, or lose once to me - but that didn't happen
  • further matches widened the gap by 1.5 points, because his AI is very strong against mine but too weak against the boss

Since the contest, I have checked my code and found a few glitches, but removing their side effects made my AI perform slightly worse for some reason. So to this day I don't know whether there was any simple improvement that could have secured the promotion.


Feedback for the CG team on the game. Just a heads up this is mostly negative.


  • It was different from most contests of the past two years.
  • State representation (once parsed properly) was nice to work with.
  • Game rules made it interesting to explore very different approaches. Nice learning opportunity.
  • Great to see contests continue on this platform! This is what I’m here for.
  • Game felt uniquely well suited for advanced solutions, and also 100ms response time gives lots of breathing room for computation to support this.


  • Overall this was a horrible contest for newbies. Everything about it seemed to make engaging with the game more difficult rather than more intuitive and fun. Examples: black-box ant movement, unintuitive state representation (cells + neighbours), awkward cell ids, etc. I tried onboarding some colleagues to CG starting with this contest and it was probably the worst contest/game I could have picked to do it. Nobody liked it. Not a good introduction to what CG contests can be…
  • Visually it was so cluttered and chaotic that it was very difficult, nigh-on impossible, to parse and debug visually in the browser IDE.
  • The rules of the game (specifically ant movement) presented a much higher barrier to simulation (compared to previous contests) but also made it more difficult to use heuristics, as it wasn’t clear or explained anywhere what the rules for ant movement were. The only way to figure out ant movement was from the game source code, and that really shouldn’t be necessary for regular players to understand the rules.
  • Discord was dead. Not just dead, but dead dead. Community is the greatest strength of this platform. Discord clearly does not facilitate that sense of community and discussions during contests (and in clash of code). Bring back the in-browser chat. Somehow. Please.


  • Contest cover art was obviously AI generated and lacked inspiration (why are some antennae floating midair?). Big thumbs down. Please bring back human cover art.
  • Visually the game is the most boring I’ve seen, it is dull and grey and horrifically complex to look at. Where are the pretty visuals like in Spring 2021? This combined with the above made this contest ‘feel’ relatively low budget.

#16 Legend, C#

My best individual result so far, and also the best result of our University Team (the team's top 5 in the global top 30). Many thanks to all core team members and the other students who were here to help. I hope you had at least as good a time as I did.

Congratulations to our top 5: @MSz @DomiKo @marwar22 @Sko0owi as well as the rest of you in Legend: @surgutti @alechin28 @Zylo - great work. Special thanks and praise for @MSz and @DomiKo - extraordinary performance - it is a real privilege to be your friend and be able to work with you.

My Solution

Actually, nothing extremely fancy nor different from multiple other top-Legend approaches. My todo list was still quite large, and there were some obvious improvements I didn't have time to implement.

The first part is a greedy algorithm that creates a beacon tree by joining resources. Nodes of the tree are bases and previously joined resources. I check all pairs of tree nodes and unconnected resources and evaluate them, taking into account their base weights, amount, placement, etc., adding (with some small weight) the scores of the resources along the way. The cost of the path depends on how contested it is (the opponent's attack chain value) and how many ants I already have placed along the way. I choose the best path maximizing score/distance that fits within a budget (unassigned ants). Then an edge is created going from the new resource to the tree node (rather than to the closest cell with an already placed beacon). The edge is also chosen greedily, at each step prioritizing: shortest route, cells with already placed beacons, cells with the most ants, cells with resources.

Then, there is a random hill-climber. I randomize a beacon cell and weight to be added or subtracted (max delta is an average ants/beacon value). If the state evaluation after 2 turns is better than the current best - it stays. The state is evaluated mostly based on the number of my and opponent’s crystals and ants (somehow taking into account their placement - near me, near opponent, border), plus a medium reward for winning and smaller for drawing. There are some special modes that influence particular weights depending on the number of bases and when only one or two crystal cells are left.

The Game

Didn't like it. It turned out to be more interesting than I thought, but with forced, very artificial behaviors, mid-challenge bugfixes and rule changes, and an uninformative, hard-to-read visualization. It is also a real shame that all the hard graphics and programming work on the fancy ant visualization comes to nothing, as everyone uses only the debug mode.


What went well:

  • Efficiency matters, especially if you're not using C++. Counting time for each section of the code and keeping a close eye on performance bottlenecks pays off. For nearly the same state, I went from 54(!) state evaluations per 80 ms to ~1600.
  • Weights of the parameters are essential. Time spent tuning them is never wasted.

What did not:

  • Complaining about the rules and allowing destructive thoughts about not liking the game. Focus on coding, not complaining!
  • Leaving the code-writing (of important parts) to the last day. I luckily made 'some' progress, but it was too much work under stress and I couldn't properly focus on algorithms. The last day should be only for param tuning and adding small enhancements to quickly test.

It seems that a lot of codingamers used the hill-climbing algorithm. If, like me, you don't know what it is and you don't speak/read technical English but French, you can watch this lesson.

or read this


Thanks to CodinGame for this challenge, which always pushes you to learn new techniques; learning by playing is the best.


#2 Legend (first Gold Boss :wink: )

Thanks CodinGame for the contest. It was a bit of a bumpy adventure, but overall I really liked it.

First Challenge

First I have to tell you what the biggest challenge was at the beginning: MOTIVATION.
When the contest started, people started hating the game.
The main concern was, of course, the beacons.
So I told myself just to have fun and not bother with understanding how they worked.
That single thing helped me sooo much. I was able to test many ideas without losing any drive to reach a good rank.


Like everybody, I divided my algo into two phases.
First: build the tree.
Second: Hill Climbing to optimize ant movement.

Phase 1

I randomly create many trees for 35 ms, and then take the best one, evaluated with a sim over 6 turns.
I start every tree with my bases only.
Then I add a random path from a node in the tree to any cell with resources.
It isn't uniform: each edge has its own probability, (1 / distance) * resource_value.
Then I add that edge to the tree with a certain number of ants for each cell (random, with some constraints).
I keep adding edges to the tree until I run out of ants.
After each added edge I evaluate the tree.


Sim for 6 turns.

I assign a value to each resource.

First I was looking for a good value for eggs. It wasn't easy to decide whether eggs were still important or not.
I finally came up with this value: egg_value = (total_important_crystals / number_of_my_ants),
where total_important_crystals is the sum of crystals in the middle part of the board.

Then I learned that crystals in the middle of the board are much more important than the ones closer to my bases.
For each cell I calculated diff[pos] = abs(distance_from_my_base[pos] - distance_from_opp_base[pos]).

Then resource_value[pos] = pow(0.3, diff - min_diff).
That way, resources with minimal diff have a value equal to 1, and those far from the middle part are worth almost 0.
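The weighting above, as a function; the distance inputs are whatever per-cell base distances the bot precomputes (names here are assumptions):

```cpp
#include <cmath>
#include <cstdlib>

// Crystals near the middle line (equal distance from both bases) are worth
// 1; value decays geometrically (base 0.3) as the cell moves off-center.
double resourceValue(int distFromMyBase, int distFromOppBase, int minDiff) {
    int diff = std::abs(distFromMyBase - distFromOppBase);
    return std::pow(0.3, diff - minDiff);
}
```

So a cell two steps further off-center than the most central resource is already worth only 0.09, which is what makes the bot fight hard for the middle.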

Phase 2

Hill Climbing with the same eval as Phase 1, but with only 3 turns.

It can only change existing beacons.
With probability 15% (plus bonuses), it changes the strength of a beacon by ±2.
I give a little bonus to the ends of the paths.

That was pretty simple to write and immediately crushed the version without HC.
But wait there’s more!

It uses the same beacons for 3 turns - does that make sense?
Kind of, but not really. Imagine a simple example where you have a resource that will be harvested in one turn. You definitely don't want ants there for the next two turns.

So instead of having one set of beacons for 3 turns, I changed it to:

  • beacons for first turn,
  • beacons for second and third turn.

I start with the same beacons for turn 1 and turns 2/3.

BOOM 64% win rate against version without it!


The referee was super slow, so you have to optimize it a lot.
Remove the sorts, the sets, that BFS inside the ant pathfinding, etc.

In my sim the enemy was always waiting, but it was still harvesting alongside me, spawning new ants, etc.

Another big change was implementing the end-game tiebreakers.
It gave me a 58% winrate.
I changed my eval result from a single float to a tuple <float, crystal diff, ant diff>.
So when I reached a terminal state in my sim, the eval was just 1e9 - depth for a win, -1e9 for a loss.
It was crucial for me, because I had that custom evaluation of crystals and eggs.

Local Arena

It was crucial to test things locally. I made 30 versions, each of them giving me some improvement.
Sometimes it was 51%, sometimes 95%, but mostly around 60% at the beginning; with time it became much more challenging to gain even 53%.

In the end my bots played a few million games.

Final thoughts

As you can see, I didn't do anything crazy, but my bot was still really powerful. There were just many simple little things that gave it that power.

Congrats to @MSz for a dominating top 1.