Fall Challenge 2022 - Feedbacks & Strategies

16th in Legend.

As in previous multi-agent games, I went for a Genetic Algorithm, simulating only my own player for one turn.

The first (money / 10) entries of the gene stored the build or spawn actions.
The rest stored the move actions for each robot.
The crossover between 2 genes was a “uniform crossover”.
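
As an illustration, here is a minimal sketch of such a gene and of uniform crossover (Python; the action encodings and helper arguments are hypothetical, the post only specifies the layout described above):

```python
import random

def random_gene(money, my_robots, candidate_cells):
    """Sketch: the first money // 10 entries hold build/spawn actions,
    the remaining entries hold one move action per robot."""
    gene = []
    for _ in range(money // 10):
        # hypothetical encoding of a build or spawn action on some candidate cell
        gene.append((random.choice(["BUILD", "SPAWN"]), random.choice(candidate_cells)))
    for robot in my_robots:
        # hypothetical encoding of a move action for this robot
        gene.append(("MOVE", robot, random.choice(["N", "S", "E", "W", "WAIT"])))
    return gene

def uniform_crossover(parent_a, parent_b):
    """Uniform crossover: each entry is taken from either parent with probability 1/2."""
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]
```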

My evaluation function was a combination of:

  • number of units
  • expected territory so far after full recycles
  • units' distances to opponent territory
  • sum of distances to unowned territory
  • expected money after full recycles
  • vertical spread of my units

The opponent was simulated by a dummy AI that did not build recyclers and evaluated each possible spawn and move individually, using the same evaluation function.

Things tried without success:

  • Run a GA for the opponent and then a GA for myself.
  • NN for the evaluation function.
  • Dual GA (keep the best found so far for each player and improve both). Will probably try again using Saelyos' knowledge :slight_smile:

Thanks for the challenge!

22 Likes

GOLD 5th - First in Javascript or second in Typescript.

Hello to all!

After testing CodeRoyale and getting Gold, I started my first Codingame Challenge with my meager AI knowledge.

So I’m very happy to have arrived at the gates of Legend, and for a few hours I even had hope of getting in. Unfortunately the boss was too strong for me, and all my attempts to improve over the last 3 days were complete failures.

As the only search algorithm I know is Dijkstra, my AI is entirely heuristic and mainly uses Dijkstra to calculate the distance from a cell to any other cell.
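
For reference, a minimal distance-map sketch (Python; on a grid where every step costs 1, Dijkstra reduces to a plain BFS, and the `walkable`/`neighbors` helpers are assumed):

```python
from collections import deque

def distances_from(start, walkable, neighbors):
    """Distance from `start` to every reachable cell (sketch)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        for nxt in neighbors(cell):
            if walkable(nxt) and nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return dist
```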

My AI is relatively simple and breaks down into 5 steps:

  1. Defense
  2. Construction of recyclers
  3. Expansion
  4. Moving the remaining robots
  5. Building the robots

Defense:
I loop over all enemy robots and check whether each one has a neighboring square that belongs to me. In that case, I defend the square as follows:

  • if it has 2 or more robots, I build a recycler
  • otherwise I spawn robots if I have some matter left
  • otherwise I leave the robots on the square
  • I move robots from the neighboring square if necessary

This heuristic had a lot of problems: how do I choose the square to defend if the attacker touches several of my squares? How do I choose the best squares to defend if I can't defend them all? I did not find good answers to these questions.

Construction of the recyclers
If I didn’t build a recycler in the previous round and there are more than 130 squares left on the field, I build one that maximizes scraps/grassCreated.
I haven’t managed to implement anything better than this basic but overall effective version.

EDIT following @Lysk's question:

I have many experimental hardcoded thresholds in my AI to solve issues with my heuristics. It seems globally a bad idea, but I didn't manage to do better.

The exact condition to build a recycler is the following AND condition:

  • turn > 2 => to avoid blocking my robots in the first turns
  • !hasBuildLastRound => to avoid building too many recyclers
  • (notGrassBlocks.length >= 130 || opponentRecyclers.length > myRecyclers.length) => to avoid killing myself on small maps. The 130 threshold was originally 80, but it seems that 130 was giving better results. Therefore on small maps I only build if the opponent has more recyclers than me. I count every block on the map except blocks with 0 scrap.
  • (myRobots.length < 10 || myRobots.length <= opponentRobots.length + 5) => to build robots but not too many
  • myMatter < 40 => if I have 40 matter in one turn, I already have too many recyclers.

If all these conditions are true, I build the best recycler on a square that will bring me more than 20 scrap, with candidates sorted by scraps/grassCreated.
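
In code, this condition could look roughly like the following sketch (Python; the variable names simply mirror the bullets above, this is an illustration rather than my actual code):

```python
def should_build_recycler(turn, has_built_last_round, not_grass_blocks,
                          my_recyclers, opponent_recyclers,
                          my_robots, opponent_robots, my_matter):
    """Sketch of the AND condition described above."""
    return (
        turn > 2                                             # don't block my robots in the first turns
        and not has_built_last_round                         # don't build too many recyclers
        and (len(not_grass_blocks) >= 130                    # avoid killing myself on small maps
             or len(opponent_recyclers) > len(my_recyclers))
        and (len(my_robots) < 10
             or len(my_robots) <= len(opponent_robots) + 5)  # build robots, but not too many
        and my_matter < 40                                    # 40 matter means too many recyclers already
    )
```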

Expansion
In the first round I calculate the squares that separate the map in two. Each turn, if these squares are not occupied, I run a double loop over the separation squares and my robots to find the square/robot pair that minimizes the distance. It is a very expensive algorithm, in n*m^(min(n,m)) (n = robots, m = separation squares), which gave me many timeouts. I continue the algorithm until I have no more robots or until every cell of the separation has found a robot that will go towards it.

EDIT: after reading some post-mortems, I think I was just trying to write the Hungarian Algorithm, but mine is poorly written, running in O(n!) instead of O(n^3).

Moving the remaining robots
If a robot has not already moved for expansion, I move it with an evaluation function that checks the squares within range 5 and adds, for each square, 1/distance if it is neutral and 2/distance if it is enemy.
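
One possible reading of that evaluation, as a sketch (Python; `dist` and `owner_of` are assumed helpers, and scoring the candidate destination is my interpretation of the post):

```python
def destination_score(dest, all_cells, dist, owner_of):
    """For each square within range 5 of the candidate destination, add
    1/distance if it is neutral and 2/distance if it is enemy-owned."""
    score = 0.0
    for cell in all_cells:
        d = dist(dest, cell)
        if d == 0 or d > 5:
            continue
        if owner_of(cell) == "NEUTRAL":
            score += 1.0 / d
        elif owner_of(cell) == "ENEMY":
            score += 2.0 / d
    return score
```

The robot would then move towards the best-scoring candidate square.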

Construction of the robots
I spawn a robot on my square closest to an enemy square; in case of a tie, on the square that has the fewest robots.

I tried a lot of upgrades for each step, but I felt like I had reached a local optimum and any change made my rank go down.

Very happy to have participated in this Challenge. The people on the Discord were all cool, and it would have been even more fun if more people had been on Discord to discuss the Challenge :slight_smile:

10 Likes

Hi @Apofils, thanks for the write-up. Could you elaborate on the "[...] there are more than 130 squares left on the field" please? How did you come up with that threshold, and what do you include in "left on the field" (neutral, only accessible to you, something else)?

Cheers

1 Like

Hi @delineate, congratulations on an amazing contest and thanks for your post-mortem. I found your money vs squares analysis very interesting. Can you please elaborate on the graph you showed? Is it extra robots relative to the opponent's? Meaning if my opponent has 10 robots on a boundary of width 6, then with 13 I would get an extra 10 tiles. And the absolute number doesn't matter, only the difference? Thanks in advance.

1 Like

Thanks @IloIlo. Yes, that's right. The twelve lines correspond to widths ranging from 1 to 12. More specifically, I had maps where one player started in the top row and the other in the bottom row, and I had the AIs try to maximize their score regardless of winning or losing the game. The starting money was 10 vs 10, 10 vs 20, 10 vs 30, up to 10 vs 200 (so the AI would build the extra robots on the following turn). I updated the write-up and graphs to make it clearer (and fixed the x-axis).

I would guess only the difference in robots matters, since robots cancel each other out 1:1. This was all to get an approximate relationship between army_strength_difference, boundary_size, and extra squares.

It was later fine-tuned with arena testing on more realistic maps/conditions. It confirms the idea that if you have a big army advantage or disadvantage, building extra income recyclers isn't very helpful (they have diminishing returns).

9 Likes

Hello @Lysk,

Thanks for the question. I agree that the threshold seems to come from nowhere, and that is somewhat the case.

Let me explain:

I have many experimental hardcoded thresholds in my AI to solve issues with my heuristics. It seems globally a bad idea, but I didn't manage to do better.

The exact condition to build a recycler is the following AND condition:

  • turn > 2 => to avoid blocking my robots in the first turns
  • !hasBuildLastRound => to avoid building too many recyclers
  • (notGrassBlocks.length >= 130 || opponentRecyclers.length > myRecyclers.length) => to avoid killing myself on small maps. The 130 threshold was originally 80, but it seems that 130 was giving better results. Therefore on small maps I only build if the opponent has more recyclers than me. I count every block on the map except blocks with 0 scrap.
  • (myRobots.length < 10 || myRobots.length <= opponentRobots.length + 5) => to build robots but not too many
  • myMatter < 40 => if I have 40 matter in one turn, I already have too many recyclers.

If all these conditions are true, I build the best recycler on a square that will bring me more than 20 scrap, with candidates sorted by scraps/grassCreated.

I hope the 130 threshold is clearer now.

Cheers

2 Likes

My first attempt was Smitsimax, but I never tried depth 1 and rejected the idea quite early. The biggest problem for me was spawning many units in a single turn.

1 Like

I used Smitsimax with depth 1. It’s viable I think (if you get the little details and other heuristics right, which I didn’t yet).

1 Like

k4ng0u - 69th (11th gold)

Since the contest was during the end-of-year celebrations, I didn't know how much time I could spend on the game, so I started with a simple heuristic bot that would do the following:
  • defense
  • build recyclers
  • expansion
  • free moves

I didn’t use any search algorithm and based my whole bot on prioritizing tasks.

For defense, I would list all the threatened cells (owned cells with more than 1 opponent unit as a neighbour). I would give defense priority depending on the number of opponents threatening the cell, the number of neighbouring neutral cells that are closest to me, and some other micro heuristics. Then I would loop over the sorted cells to defend and try first to assign lazy robots (those that don't need to defend anything) to move there, then keep robots in place or spawn robots/build recyclers depending on conditions.

For recyclers, I would only build one if I had enough matter, had no cell to defend, and nbCellsLost <= 2, expectedGain/nbCellsLost >= 15 and expectedGain >= 20 (one recycler should at least give me the advantage of one new unit).

The expansion phase was my weak point. In my early experiments I decided to list the 12 cells closest to each frontline, sort them, then sort the frontline cells depending on my closest owned cell (either the distance to a unit, or the distance to an owned cell without a unit + 1). Then I would loop over the frontline and assign the best remaining unit/spawn to each cell. Retrospectively this approach was quite flawed, as my robots were often changing targets, going back and forth and losing turns.

Free moves were then handled entirely by a heuristic prioritizing neutral cells and cells closest to the border.

When Legend opened, I was around 20th in Gold. Over a few all-nighters, I tried to tweak the sorting/heuristic coefficients to improve my bot, but it was not enough. I probably should have rethought the expansion algorithm and actually introduced a notion of attack instead of relying on expansion and free moves.

Overall the challenge was quite nice and the various strategies from the PMs are quite refreshing. But for the future, I wouldn't recommend contests during the end-of-year celebrations, and I really didn't like the fact that the Legend league opened on a weekday and ended before a weekend.

Thanks CG for the contest and congrats to delineate for winning and leading the contest for quite a while!

5 Likes

Seems we have very similar heuristic-based bots :slight_smile:
I had exactly the same issue with my expansion phase, where my bots frequently changed targets and went back on their path.
GG for your 11th place!

56th in Legend.

I tried my hand at beam search for the first few days/one week, but got overwhelmed by the sheer amount of possible moves.
Reluctantly, I switched to a heuristics bot.
Every turn, I calculate all the distances from each tile to every other tile (using Floyd-Warshall), then reuse this information to check how far each tile is from any of my units or enemy units, which I can use later.
I also divide the board into islands.

On the first turn I calculate all of the midline/frontline tiles (my_dist - opp_dist + 1 < 2) (I try to weed out ‘fake’ midline tiles) and each turn after that I update it if I haven’t already grabbed it.
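
As a sketch, with `my_dist`/`opp_dist` being per-tile distance maps taken from the Floyd-Warshall tables mentioned above (Python; the filtering of 'fake' midline tiles is not shown):

```python
def frontline_tiles(tiles, my_dist, opp_dist):
    """Tiles satisfying my_dist - opp_dist + 1 < 2, i.e. roughly equidistant
    from both players (sketch; distance maps assumed)."""
    return [t for t in tiles if my_dist[t] - opp_dist[t] + 1 < 2]
```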

Before calculating my moves, I generate all combinations (within the time limit) of my BUILD/SPAWN actions, score each combination with one evaluation function for build actions and another for spawn actions, then grab the best combination of actions and perform it.

Then for each island, I grab my units and send them to midline tiles if any still exist (they can swap targets if it makes the total distance shorter; I saw people talking about the Hungarian Method in the last few days, but it was too late for me to implement it), otherwise I just send them towards the nearest enemy. This last part could have used some work, but I was too exhausted in the last 2 days to put actual effort into it.

Thanks for the contest, it was a lot of fun. Happy to have helped my uni take pole position, and to have my first ever top-100 finish!!!
I’m kind of on the fence about the duration of it, but it was at least fun for a change.

See you all next time.

5 Likes

Hi @delineate, congratulations on your contest.
Can you detail this: "add random noise for each cell's cost in the movement BFS searches, and then repeat (20-30 times) to find the best set of paths"?
Why add noise?
Thanks for your post mortem.

Thanks @hexabin

So the BFS search might pick a suboptimal path, especially when a lot of paths are close (or tied) in cost.

For example:

Imagine there are 2 possible boundary targets (A and B), and we want to find paths from X and Y.
The lowest-cost path might be X → B going through R, but the best global set of paths might be X → A and Y → B.

If we just reran the same search without any noise, it would just give the same set of paths every time. Adding some random noise helps generate different results each time.

The noise was just used for sorting which square was next in the priority_queue and the actual “true” cost (without noise) is also stored since that is needed too.

I experimented with different things (varying amounts of noise, different magnitudes, etc.), but a uniform random amount between 0 and 1 was just as good as anything else, so I kept that. Noise was added for starting squares, target squares, and intermediate squares. Basically, whenever anything is pushed onto the priority queue: sort_value = value + noise.
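
A minimal sketch of one noisy pass, assuming a Dijkstra-style priority queue over grid squares (Python; the real cost model and helpers are not shown in the post):

```python
import heapq
import random

def noisy_search(starts, targets, neighbors, step_cost):
    """One randomized pass of the search (sketch): whenever a square is pushed onto
    the priority queue, its sort key is the true cost plus uniform(0, 1) noise, while
    the true cost is stored alongside. Repeating this 20-30 times produces different
    near-optimal path sets to compare. Path reconstruction (parent pointers) is omitted."""
    best = {}          # square -> best true cost found in this pass
    heap = []
    for s in starts:
        heapq.heappush(heap, (random.random(), 0, s))   # noise on starting squares too
    while heap:
        _, cost, square = heapq.heappop(heap)
        if square in best and best[square] <= cost:
            continue
        best[square] = cost
        for nxt in neighbors(square):
            new_cost = cost + step_cost(square, nxt)
            heapq.heappush(heap, (new_cost + random.random(), new_cost, nxt))
    return {t: best[t] for t in targets if t in best}    # noisy-pass costs to the targets
```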

I’m sure there are more clever and efficient approaches, but this was easier and faster to implement.

7 Likes

Me too, one month is so cool for improving our bots! It was an excellent idea and an excellent contest! Thanks to the staff!

2 Likes

Nothing specific to add (I finished bottom Gold) other than it was a really interesting game, and thank you to CG for running it. Also thanks for giving the contest a specific name! The month-long format was cool for a game like this, but I'm not sure the Christmas/New Year period was the best idea; though I appreciate that CG had a fairly hectic year in terms of change and whatnot.
Still, it was fun. And congrats to @delineate for the clear win. Some very interesting approaches in this thread, much respect to all players :slight_smile:

3 Likes

Hello !

Thanks for your PM and GG for your rank!

I'm trying to improve my expansion with the Munkres algorithm, but I didn't manage to include spawns.
You're talking about variable locations, but I don't understand how you manage to fit that into the Munkres algo.

Could you help me to understand :slight_smile: ?

Cheers

Sure.

For Munkres you need a combination of an objective, a worker and a cost.

  • The objective is just any objective you want to assign the spawn to. It makes sense to use spawns for defense, but if you use them for early expansion, that works too.

  • The worker is the same as a robot would be, except that a robot is tied to 1 specific place and a spawn is not. It's just wherever you want it to spawn. This is no problem, because Munkres does not require a position as input.

  • The cost is where the difference between a spawn and a robot matters. You will probably factor in distance for some objectives. Robot distance is calculated with BFS. Spawn distance as well, but instead of using the distance from a specific origin to a specific destination, you use the distance from the closest owned tile (that has no recycler) to the destination, and you add 1 because of the spawn delay. You don't even have to know exactly where you are spawning it to use the Munkres algo, as long as the distance calculations are correct. After Munkres is done, you can choose from several owned tiles that are equidistant; perhaps some make more sense than others. (See the sketch below.)
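
A minimal sketch of that cost matrix, treating potential spawns as extra workers (Python, using SciPy's linear_sum_assignment as the Munkres solver; the distance helpers are assumed):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(robots, n_spawnable, objectives, robot_dist, nearest_owned_dist):
    """Rows are workers (real robots plus potential spawns), columns are objectives.
    robot_dist(robot, obj): BFS distance from that robot to the objective.
    nearest_owned_dist(obj): BFS distance from the closest owned, recycler-free tile."""
    n_workers = len(robots) + n_spawnable
    cost = np.zeros((n_workers, len(objectives)))
    for j, obj in enumerate(objectives):
        for i, robot in enumerate(robots):
            cost[i, j] = robot_dist(robot, obj)
        for i in range(len(robots), n_workers):             # spawn "workers"
            cost[i, j] = nearest_owned_dist(obj) + 1        # +1 for the spawn delay
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))                            # (worker index, objective index)
```

Only after the assignment is made do you pick the exact owned tile to spawn from, among those at that minimal distance.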

3 Likes

Thanks!
It's much clearer for me now!
My issue was with the distance, but I understand now that you can just use the closest one and decide the exact location afterwards.
Last question: if you can spawn n robots, do you put n more workers in the Munkres matrix?

10th

My bot uses a mix of Hill Climbing and Hungarian algorithms.
It tries several different options for action selection and chooses the best through evaluation of the resulting game state.
For instance, recyclers are added greedily one by one, after which moves and spawns are added by the Hungarian algorithm, and then completed again in a Hill Climbing manner.
Of course, I also compute the opponent's actions and mix the evaluations of the different possible resulting game states.

The evaluation uses a lot of ingredients and as it is quite costly, I had to optimize it a little to be able to evaluate more action sets.
The exact regions (islands that will not be recycled) and the distances to the borders (inverse Voronoi) are computed with an adaptation of Dial's algorithm, which works here in almost linear time.
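
For readers unfamiliar with it, here is a generic multi-source sketch of Dial's algorithm (Python; a bucket queue replaces the heap, so with small non-negative integer edge weights the run time is close to linear — this is not the exact adaptation used in the bot):

```python
from collections import defaultdict

def dial_distances(sources, neighbors, weight, max_dist):
    """Multi-source Dial's algorithm (sketch): buckets indexed by distance.
    `neighbors` and `weight` (small positive integers) are assumed helpers."""
    INF = float("inf")
    dist = defaultdict(lambda: INF)
    buckets = [[] for _ in range(max_dist + 1)]
    for s in sources:                      # e.g. all border cells for an inverse Voronoi
        dist[s] = 0
        buckets[0].append(s)
    for d in range(max_dist + 1):
        for cell in buckets[d]:
            if dist[cell] != d:            # stale entry, already settled at a smaller distance
                continue
            for nxt in neighbors(cell):
                nd = d + weight(cell, nxt)
                if nd < dist[nxt] and nd <= max_dist:
                    dist[nxt] = nd
                    buckets[nd].append(nxt)
    return dist
```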

The issue that appeared crucial to me is the units–terrain trade-off, determining how many recyclers to build. I tried to optimize this parameter through extensive experiments for each map size and the boundary size, which gave moderate success. Now I know it is not enough and I am impressed by the better solution of @delineate.

The problematic part of the rules was pathfinding, important for backup moves. I did not have enough time to rewrite the engine to calculate them exactly so I stayed with an easy heuristic here.

Thanks for the challenge!

5 Likes

Yes, for each spawn you can afford (matter / 10), you add a new worker to the Munkres matrix.