Spring Challenge 2020 - Feedback & Strategy

Finished 3rd!

Intro

Congratulations to everybody, especially Saelyos, Romka and the CG team for hosting these events.
I don’t know if it’s the prize pool or the marketing or both, but seeing almost 5k contenders is amazing!
Keep it up CodinGame, your work is wonderful.
I’m obviously very pleased with the outcome. This is my second contest podium after Code a la Mode, but this one was not purely heuristic, so I feel the competition level was higher. This is my first PM, I usually don’t feel like sharing.

Journey

Got to Bronze as fast as I could with Python to get a feel for the game with the full rules.
I worked on the simulation the first weekend and realized collision handling didn’t need to be perfect. Then I wondered what kind of algorithm could work well.
I tried pure random search at depth 10 with the simplest eval and felt like the results were OK.
I didn’t feel an aggressive bot could be effective until the last weekend, when I saw some amazing tricks done by other bots.
I almost instantly felt that FOW mitigation would become necessary, but this was intimidating, so I only started working on it last Saturday.
My bot focuses almost exclusively on farming.
Each feature was tested with CGBenchmark <3

Final bot

The main search algorithm is inspired by Magus’ Fantastic Bits genetic algorithm.
The point was that I felt this could make pacs cooperate as the wizards did, without needing to tell them to.
A particularity is that I randomize only over valid outputs for each pac, so I simulate each turn before randomizing the next.
Merge and mutate were done on a single whole pac path.
I managed 10k to 30k full depth-12 simulations per 50 ms.
For the enemy I run the same algorithm for 10 ms beforehand, with my pacs standing still. I only simulate enemy pacs if I’m sure of their position (line of sight, or if the FOW mitigation tells me so).
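To illustrate the "randomize only over valid outputs, simulating each turn before randomizing the next" idea, here is a minimal Python sketch (toy grid and names are hypothetical, not the author's code) of building one random candidate path move by legal move:

```python
import random

# Hypothetical 5x5 maze: '#' = wall, '.' = floor.
GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]

def legal_moves(x, y):
    """Moves that stay on floor cells (staying in place is always legal)."""
    opts = [(x, y)]
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if GRID[ny][nx] == '.':
            opts.append((nx, ny))
    return opts

def random_rollout(start, depth, rng):
    """Build one candidate path by sampling a legal move turn by turn,
    advancing the simulated position before sampling the next move."""
    path, pos = [], start
    for _ in range(depth):
        pos = rng.choice(legal_moves(*pos))
        path.append(pos)
    return path

rng = random.Random(42)
path = random_rollout((1, 1), 12, rng)
assert len(path) == 12
assert all(GRID[y][x] == '.' for x, y in path)
```

The payoff is that every sampled solution is already valid, so no evaluation time is wasted on paths that walk into walls.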

The eval is the simplest possible (myScore - oppScore), with a huge bonus/malus when the victory threshold is passed. I sum the eval each turn over the whole solution so pacs focus on the closest pellets.
At the end of the simulation I minimize, for each pac, the distance to the closest pellet, because I saw my pacs wandering endlessly at the end of games when no pellets were close by (none within depth 12).

The FOW mitigation system:
Each turn I compute all possible opponent paths, starting from symmetry. I then remove some of them, successively based on:

  • Line of sight (pacs, pellets present, pellets missing…)
  • Score: is this path possible given the opponent’s score difference between this turn and the previous one?
  • Observed collisions and deaths, used to trim paths
  • Shortening the paths themselves if they all share a common prefix, marking those cells as having no more pellets
  • Keeping only the last cell if all paths score 0
  • Deduplication
  • If the number starts to grow too high, removing shorter paths (meaning the opponent pacs would stand still or move only 1 cell instead of 2 on speed), and removing lower-scoring paths (meaning the opponent pac would be dumb)…
    This way I could generally get through mid game.
    Then, when fewer pellets remain, the number of paths explodes, so I clear them and go blind for that particular opponent pac until I see it again.
    I use the paths to:
  • Pinpoint an opponent pac when possible, to use it in the sim and avoid certain death
  • Reduce cell values (pellet scores) by the percentage of occurrences in the paths, for use in the sim (e.g. if 8 paths out of 10 contain cell (x, y), going there only yields 0.2 of the score)
  • Try to avoid enemy ambushes
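The occurrence-based pellet devaluation can be sketched in a few lines (hypothetical data, Python for illustration):

```python
# Hypothetical candidate opponent paths (lists of visited cells).
paths = [
    [(1, 1), (2, 1)],
    [(1, 1), (1, 2)],
    [(2, 2), (2, 1)],
    [(3, 1), (2, 1)],
]

def discounted_value(cell, paths, base=1.0):
    """Scale a pellet's value by the fraction of candidate opponent
    paths that do NOT pass through its cell."""
    hits = sum(cell in p for p in paths)
    return base * (1 - hits / len(paths))

# (2, 1) appears in 3 of 4 candidate paths, so only 25% of its value remains.
assert discounted_value((2, 1), paths) == 0.25
assert discounted_value((9, 9), paths) == 1.0
```

This keeps pellets that the opponent has probably eaten from attracting the search, without committing to any single guessed opponent path.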

I have a turn-0 brute force to exclude all actions that could lead to immediate death; this works defensively.

On the last night I also added a quick trapping system that can detect a trapped-opponent situation by taking advantage of the FOW mitigation; it doesn’t occur very often.

Misc

I’m very impressed by the bots whose pacs cooperate to trap!
Mine is only a farming machine. I only realized at the end that killing was an option, and the fact that two different approaches could both yield very good results makes me think this contest is a success. Surely future top multiplayer bots will combine the two.

I was hoping to win the contest when I saw a 10% overall win-rate improvement after implementing a new feature, when I was already top 3 on the leaderboard yesterday evening!

Looking forward to the next contest, hopefully with even more contenders and a nice prize pool so other top coders join us!

28 Likes

Hello! I found this website and the spring challenge on Thursday, and spent the weekend trying to break out of bronze league. I was able to crack silver within an hour of the final countdown.

It was so fun to write a basic AI and then tweak it by watching it play! I seriously spent probably 8 hours every day for the last three days just watching pacs and making code adjustments. It was very satisfying to finally get the bump up to silver! Can’t wait to join the next tournament from the beginning and have more time to mess with my code.

One thing I’d like to do differently in the future is to actually use the fancy BFS, etc. that people talk about here - all of my “AI code” was just spaghetti and if/else statements. I’m looking forward to learning a more advanced way to write this code.

9 Likes

I finished 112th in Legend and used Go.

I had tons of fun! I’ve never done a programming contest or any AI programming before. I have read the “Red Blob Games” blog before so I had some ideas. Excuse me if I use wrong terminology.

At first I thought I wouldn’t enjoy it much, but it was easy to get into and I was hooked.

My final approach was something like this:

First turn

Walk the map building a graph of nodes, and marking dead ends and corridors to dead ends. This made some heuristics easier to code later.

Game Turn

  1. Load in pacs and calculate visibility. If an enemy pac has gone missing, I try to find a likely path for it to have taken; otherwise I store its hidden location for later. Also process visible enemy pacs by finding their last location and assuming all pellets between the two locations have been eaten.
  2. Load in the pellets and remove missing pellets in line of sight. If a pellet is missing down a dead-end area, I mark the rest of that dead end as likely gone, which is used in heuristics.
  3. Analyze whether enemies should be attacked, or whether to change type in defense.
  4. Execute pathfinding: A* on all of the big pellets and on the 30 closest pellets
  5. Sort the paths by value per space traveled.
  6. The pac with the most valuable route will get first pick, and then mark all the steps in the route as assigned (used in heuristics).
  7. Reprocess the next most valuable routes based on this new data and repeat until all pacs are assigned a route.
  8. Output
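Steps 5-7 (value-per-step sorting, then greedy assignment with re-scoring) might look roughly like the following. This is a Python sketch with made-up data, not the author's Go code:

```python
# Hypothetical candidate routes per pac (tuples of cells) and pellet values.
routes = {
    0: [((1, 0), (2, 0)), ((1, 1), (1, 2))],
    1: [((2, 0), (3, 0)), ((3, 1), (3, 2))],
}
pellets = {(1, 0): 1, (2, 0): 1, (3, 0): 10, (1, 1): 1, (3, 1): 1, (3, 2): 1}

def route_value(cells, claimed):
    """Pellet value per step, ignoring cells already claimed by a teammate."""
    gain = sum(pellets.get(c, 0) for c in cells if c not in claimed)
    return gain / len(cells)

def assign_routes(routes):
    """Greedy assignment: the pac holding the best-valued route picks first,
    its cells are claimed, then the remaining pacs are re-scored."""
    claimed, out = set(), {}
    pending = dict(routes)
    while pending:
        pid, cells = max(((p, r) for p, rs in pending.items() for r in rs),
                         key=lambda pr: route_value(pr[1], claimed))
        out[pid] = cells
        claimed.update(cells)
        del pending[pid]
    return out

picked = assign_routes(routes)
# Pac 1 grabs the route through the big pellet first; pac 0 re-scores around it.
assert picked[1] == ((2, 0), (3, 0))
assert set(picked) == {0, 1}
```

Re-scoring after each pick is what stops two pacs from planning through the same pellets.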

A* heuristics

Pellets and empty paths cost very little to traverse, and pathing through a unit costs a ton. I didn’t want paths to be completely blocked by other units; otherwise pacs trapped by units would have found no path at all.

Paths also accumulated value based on:

  1. Pellets (duh)
  2. Neighbor pellets
  3. Far from other friendly units

Paths lost value when:

  1. Close to enemies (especially if they can eat the pac)
  2. The path is in a dead end
  3. The path includes another unit’s path this turn

Each turn typically took < 2ms.

Tools and Methods

A few days into the contest I got tired of waiting for results, so I built a small wrapper around the referee to run 1,000 games concurrently. I used my previous bot as the “champion” and the in-development bot as the “challenger.” Once I had made progress I would submit to rank up and then promote the challenger to the new champion.

I also got frustrated with regressions, so I started writing tests for situations my code was trying to address. I would take a small map as input and assert that the moves taken were correct. Here’s an example input for collision avoidance:

#############
#...#.:.#...#
#..####.#...#
#.....C.C...#
#.....#.#...#
#.....#:....#
#############

(C is a pac and : is a pellet)

The tests were a huge benefit as I could refactor code and try new things out knowing the bot still behaves in the way I expect.
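A test in this style can start by just parsing the ASCII map into a game state; here is a minimal Python sketch (the real tests were in Go and would then call the actual bot's move function):

```python
def parse(world):
    """Parse an ASCII test map: 'C' = pac, ':' = pellet, '#' = wall."""
    pacs, pellets, walls = [], set(), set()
    for y, row in enumerate(world.splitlines()):
        for x, ch in enumerate(row):
            if ch == 'C':
                pacs.append((x, y))
            elif ch == ':':
                pellets.add((x, y))
            elif ch == '#':
                walls.add((x, y))
    return pacs, pellets, walls

WORLD = """\
#############
#...#.:.#...#
#..####.#...#
#.....C.C...#
#.....#.#...#
#.....#:....#
#############"""

pacs, pellets, walls = parse(WORLD)
assert pacs == [(6, 3), (8, 3)]
assert pellets == {(6, 1), (7, 5)}
```

A real test would then feed this state to the bot and assert that the two adjacent pacs do not both move into the shared corridor cell.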

GGs

Thanks for this competition and great competitors! I look forward to learning new algorithms. Keep the write ups coming!

17 Likes

That contest was awesome!!

My goal was to reach Legend while streaming everything (on twitch.tv/thibpat), but I ended up in Gold, 220th in the general ranking.
My main missing piece was tracking the opponent positions, so in the end I’m happy to be that high in the ranking without this feature :sweat_smile:

AI
My bot precomputes the visible cells and all shortest paths between every pair of walkable cells on the first turn.
Every turn I try to find the best path out of all possible paths, with a score for each pellet (a close pellet is better than a far one).
I started to store the opponent positions when I see them, and tried to clear the pellets along the path between their two last positions if there was only one possible path between those positions.
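The "only one possible path between the two sightings" check can be done by counting shortest-path parents during a BFS; a hedged Python sketch on a toy grid:

```python
from collections import deque

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]

def neighbors(c):
    x, y = c
    return [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if GRID[ny][nx] == '.']

def unique_shortest_path(a, b):
    """Return the shortest path a -> b if it is the only one, else None.
    BFS records every shortest-path parent; walking back from b, a cell
    with two parents proves the path is ambiguous."""
    dist, parents = {a: 0}, {a: []}
    q = deque([a])
    while q:
        cur = q.popleft()
        for n in neighbors(cur):
            if n not in dist:
                dist[n], parents[n] = dist[cur] + 1, [cur]
                q.append(n)
            elif dist[n] == dist[cur] + 1:
                parents[n].append(cur)
    if b not in dist:
        return None
    path = [b]
    while path[-1] != a:
        ps = parents[path[-1]]
        if len(ps) > 1:
            return None  # two shortest paths: not safe to clear pellets
        path.append(ps[0])
    return path[::-1]

# Around the central wall there are two equally short routes: ambiguous.
assert unique_shortest_path((1, 1), (3, 3)) is None
# Along the left corridor the route is forced: safe to clear its pellets.
assert unique_shortest_path((1, 1), (1, 3)) == [(1, 1), (1, 2), (1, 3)]
```

Only when the path is forced can the pellets along it be removed with certainty.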

You can view everything (approx 17h) from the first lines of code to the last debugging issues on Twitch: https://www.twitch.tv/collections/dBeOIFg8EBZENg

Overall I enjoyed the contest, and I loved this fresh visual aspect :sparkles:

9 Likes

Hello, I finished rank 10.

I focused on the farming strategy for my bot and it was heuristic-based. I was very impressed by the bots that implemented trapping and good combat mechanics, this was an area my bot was weak in.

I’ll share what I did for the farming since I think this was the strength of my bot. The algorithm is as follows:

For each of my pacman:

  1. Perform a BFS starting at the position of the pacman in the map. Use the BFS to calculate the distance of each point from the pacman (i.e. a 2-D array bfsGrid where bfsGrid[x][y] is the distance of position (x, y) from the pacman).

  2. Perform a DFS from the pacman now with the following policy - for a given node you open at (x, y), you will only examine its neighbors (nx, ny) where bfsGrid[nx][ny] > bfsGrid[x][y].

    I think this is basically the same as if you run Dijkstra’s from the pacman, giving a graph where the path from the root to every node represents the shortest path from the pacman to that node. You then run DFS on this graph starting with the pacman at the root.

    So with that intuition in mind, here is the scoring fn that does this DFS. Point (x, y) represents
    a position on the board. The boolean thruPac refers to whether you visited a pacman during this DFS.

scoreFn(Point (x, y), boolean thruPac):

    # Get the tile score, if we visited a pacman, this is 0 b/c the pacman we pass through will get it.
    tileValue = 10 if (x,y) is a bigPellet | 1 if (x, y) is pellet | 0 if floor
    if thruPac:
        tileValue = 0
   
    # Base score is just tile value divided by distance to tile
    baseScore = tileValue / (distance from pacman to (x,y))
    
    # Get all the neighbors (up, down, left, right) neighbors of (x, y) that follow the bfsGrid requirement
    # described above.
    Point[] neighbors = {neighbor (x', y') of (x, y) where bfsGrid[x',y'] > bfsGrid[x,y]}        
    
    # for the next recursive calls, thruPac' is only true if the current position has a pacman or we
    # passed through one already AND if there is only 0 or 1 neighbors to visit.
    # If neighbors.size > 1, that means we can now take a diverging path to not go through a pacman.
    thruPac' = ((x, y) is a pacman OR thruPac) AND neighbors.size <= 1

    scores = []
    for n in neighbors:
       scores.add(scoreFn(n, thruPac'))
    
    # Note here we check thruPac and not thruPac'
    if !thruPac:
      return baseScore + max(scores)
    else:
      # If we pass through a pacman, they're gonna walk down the best path,
      # so we give ourselves the second highest path score (second highest value in scores[]).
      return baseScore + secondHighestNumberIn(scores)

This is the intuition of the scoring algorithm. I run scoreFn(n, thruPac=false) for every pacman on each of its {up, down, left, right} neighbors n, and move the pacman in the direction of the highest score.

This works pretty well; the pacmans naturally spread out and avoid each other thanks to the thruPac management. The scoring method needed to be tuned though; I did some things like:

  • If the current pacman is not the closest pacman to the position, multiply the tile score by 0.5
  • If distance > 10, multiply score by 0.5; etc.
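The BFS-plus-DFS scheme described above can be condensed into a runnable Python sketch (the toy board and names are illustrative, and the tuning bonuses are omitted):

```python
from collections import deque

# Hypothetical board: '#' wall, '.' floor, 'o' pellet, 'O' big pellet, 'C' teammate.
BOARD = ["#######",
         "#..C.o#",
         "#.###.#",
         "#o...O#",
         "#######"]

def bfs_dist(start):
    """Distance of every reachable cell from `start` (step 1 above)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if BOARD[n[1]][n[0]] != '#' and n not in dist:
                dist[n] = dist[(x, y)] + 1
                q.append(n)
    return dist

def score(cell, thru_pac, dist):
    """DFS over the shortest-path DAG (only strictly-farther neighbours)."""
    x, y = cell
    tile = {'O': 10, 'o': 1}.get(BOARD[y][x], 0)
    if thru_pac:
        tile = 0  # a teammate ahead of us on this corridor will collect it
    base = tile / dist[cell]
    nbrs = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if BOARD[n[1]][n[0]] != '#' and dist[n] > dist[cell]]
    # thru_pac stays set only while the corridor doesn't branch
    thru_next = (BOARD[y][x] == 'C' or thru_pac) and len(nbrs) <= 1
    scores = sorted((score(n, thru_next, dist) for n in nbrs), reverse=True)
    if not thru_pac:
        return base + (scores[0] if scores else 0.0)
    # the teammate takes the best branch, we keep the second best (if any)
    return base + (scores[1] if len(scores) > 1 else 0.0)

def best_move(pac):
    dist = bfs_dist(pac)
    moves = [n for n in ((pac[0] + 1, pac[1]), (pac[0] - 1, pac[1]),
                         (pac[0], pac[1] + 1), (pac[0], pac[1] - 1))
             if BOARD[n[1]][n[0]] != '#']
    return max(moves, key=lambda n: score(n, False, dist))

# The pac at (1, 1) ignores the corridor behind its teammate 'C' and
# heads down toward the pellets it can claim for itself.
assert best_move((1, 1)) == (1, 2)
```

The second-highest trick is what makes the pacs spread out: everything behind a teammate in an unbranched corridor scores zero for the pac at the root.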

Beyond this, the features I implemented:

  • Enemy tracking - The enemy tracking I did was to run the algorithm above on my enemies to guess where they would move if they were in fog. I would update the pellets as well based on these projected movements. If I see something to invalidate my opponent model for an enemy pacman I would remove that pacman from the list of tracked enemies and wait to see it again (i.e. seeing a pacman in a different location or seeing a pellet that the pacman should have collected still be present are invalidators).

  • Combat - I implemented a 1-ply search of all actions my opponent could do and check if my move would cause my pacman to die. If so, the move would be assigned a large negative score.

15 Likes

:laughing: …I wasn’t aware the Gold boss had such kindness.

1 Like

I finished 14th.

Another fog-of-war game. These are painful to debug, as you need the full match history (where have the pellets already disappeared? where have I seen the opponent?). I described my approach to handling this on tech.io. In the following I will use some illustrations from this replay.

I split my turn in three phases: tracking, killing (with higher priority) and harvesting.

Tracking

Initially the opponent positions are known (mirrored from your own).
Then I list all possible subsequent states for each individual opponent, as long as there aren’t too many of them. It’s possible to exclude some in later turns (when I see a pellet, it hasn’t been eaten) or within the same turn already (observing the score change, it’s easy to detect all opponents speeding at once). For an opponent path only a few things matter: the eaten pellets, the final position and the cooldown. It does not matter if the opponent randomly walks around while invisible to me.
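The same-turn exclusions can be sketched as a simple consistency filter (hypothetical states, and assuming every normal pellet scores 1 point):

```python
# Hypothetical candidate states for one hidden opponent pac:
# (position, cells eaten along the way).
states = [
    ((3, 1), frozenset({(2, 1), (3, 1)})),
    ((1, 2), frozenset({(1, 2)})),
    ((1, 1), frozenset()),
]

def filter_states(states, score_delta, visible_pellets):
    """Drop states inconsistent with this turn's observations: the eaten
    pellets must explain the opponent's score change, and a pellet we can
    still see cannot lie on a surviving state's path."""
    return [(pos, eaten) for pos, eaten in states
            if len(eaten) == score_delta and not (eaten & visible_pellets)]

# Opponent scored +1, but we still see a pellet at (1, 2): no state fits.
assert filter_states(states, 1, {(1, 2)}) == []
# Opponent scored +2: only the first state survives.
assert filter_states(states, 2, set()) == [states[0]]
```

The real tracker also has to account for big pellets and for several opponent pacs sharing one score delta, which is where the state count explodes.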

Let’s have a look at turns 2-4:
[image: turns 2-4 of the replay]
The opponent rock with ID 0 moved 2 cells at first. I know that because of the score gain. There are a few possible locations for the opponent. Then a super pellet goes missing, with ID 0 being the only opponent in range. So it’s clear who picked it up and which path was used. I can now safely remove all pellets on that path. When I see the opponent again later on, I can filter for the matching states. If there are multiple, they might at least share some of the eaten cells.

As you can see, there are floats at my pellets. These are probabilities that they still exist. They are not the result of a statistical analysis, just some numbers that seemed to work. I also apply some heuristics here. E.g. when I see the entrance cell of a long corridor with a missing pellet, I assume a lower probability for the remaining cells there as well: the opponent won’t start harvesting and then turn around. Even visible pellets may have a probability below 1 when an opponent is closer to them than I am.

Fighting

I only attack opponents with a cooldown greater than 0. Otherwise they could just switch and kill me. For each opponent and each group of 1-2 of my own units (only those capable of beating or at least blocking the opponent; you don’t attack a rock with a scissors whose own cooldown is > 0), I try the following:
List all possible cells where the opponent can go that are not blocked by me. Then move my units as close as possible to the opponent (taking the furthest possible opponent location for each of my own units). If my unit can beat the opponent, remove that cell from the possible opponent locations.
Repeat this until the opponent cooldown is 0 or there are no opponent positions remaining. If the latter happens first, we can go for the kill.
You can see a 1 vs 1 kill at the beginning of the replay linked above. A 2 vs 1 fight starts at frame 29 with a rock blocking the escape route and a paper eating the opponent.
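A toy version of this elimination loop, with a greedy Manhattan-distance chase standing in for the real maze pathing, might look like:

```python
# Tiny corridor: opponent trapped in a dead end. '#' wall, '.' floor.
GRID = ["######",
        "#....#",
        "######"]

def neighbors(x, y):
    return [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if GRID[ny][nx] == '.']

def can_confirm_kill(opp_cells, my_pos, opp_cooldown):
    """Each turn the set of possible opponent cells spreads by one step,
    while my pac (assumed to beat the opponent's current type) walks toward
    it and eliminates every cell it reaches or touches.  If the set empties
    before the opponent's cooldown allows a SWITCH, the kill is certain."""
    cells, pos = set(opp_cells), my_pos
    for _ in range(opp_cooldown):
        cells |= {n for c in cells for n in neighbors(*c)}  # stay or move
        target = min(cells, key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1]))
        pos = min(neighbors(*pos) + [pos],
                  key=lambda p: abs(p[0] - target[0]) + abs(p[1] - target[1]))
        cells -= {pos} | set(neighbors(*pos))  # cells my pac now controls
        if not cells:
            return True
    return False

# An opponent stuck in a dead end with cooldown 5 cannot escape my pac:
assert can_confirm_kill({(4, 1)}, (1, 1), 5)
# With cooldown 1 it can SWITCH before being cornered:
assert not can_confirm_kill({(4, 1)}, (1, 1), 1)
```

Working on sets of possible cells rather than a single guessed position is what makes the kill proof robust under fog of war.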

Harvesting

I use a beam search here, for one unit after another. When one is fixed, I keep its moves while simulating the next. I run the search twice, reversing the order of my units, to avoid one unit harvesting in the territory of the other and leaving it unemployed. The depth varies from 4 to 8 turns depending on the number of units I have to control.
My scoring rewards collecting pellets, with some decay for later turns. I also score the final position by taking into account, for each pellet, the minimum distance to the closest of my own units. I multiply these numbers for each pellet by its probability.
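A stripped-down beam search over a single unit, with decayed pellet scoring but without the probabilities and multi-unit passes, could look like:

```python
GRID = ["#####",
        "#ooo#",
        "#.#o#",
        "#ooo#",
        "#####"]

def beam_search(start, depth=6, width=8, decay=0.9):
    """Keep the `width` best partial move sequences each turn.
    The score is the decayed pellet pickup: a pellet eaten on turn t
    is worth decay**t, so early pellets matter more."""
    beam = [(0.0, start, frozenset(), ())]  # (score, position, eaten, path)
    for t in range(depth):
        candidates = []
        for sc, (x, y), eaten, path in beam:
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if GRID[n[1]][n[0]] == '#':
                    continue
                gain = decay ** t if GRID[n[1]][n[0]] == 'o' and n not in eaten else 0.0
                candidates.append((sc + gain, n,
                                   eaten | {n} if gain else eaten, path + (n,)))
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:width]
    return beam[0]

best_score, pos, eaten, path = beam_search((1, 2))
# The best depth-6 plan around this ring eats a pellet every single turn:
assert abs(best_score - sum(0.9 ** t for t in range(6))) < 1e-9
assert len(eaten) == 6
```

Pruning to a fixed width is what keeps depth 8 affordable where exhaustive search would blow up.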

Here’s an animation of the replay from above:


It made it easier to spot bugs like failing to remove some pellets.

23 Likes

The structure and tone of this PM is pretty similar to my OoC one.
Nice game, with lots of players and strong competition (besides the usual suspects, many marathon veterans from another site); gratz to the winners!

This time I reached #1 early and stayed there for most of the time, first with heuristics, then with simple search and prediction. The last few days were more challenging (+ the randomness of near-50% win rates); after a long weekend and nights I was still hoping for a top-3 result. But the recalc did justice once again, and I fell down to #6.

As usual my code is not based on a single “super-algorithm”, but on many heuristic features:
-search: multiple tries with varying order and depth (up to 17) of independent DFS for each pac
-eval: based on a single map (with all/predicted pacs) + a pellet probability map
-last resort: BFS to the nearest pellet (avoiding bumps or being killed)
-speed: positions with “x+y parity” are preferred (to avoid the 1-step waste and give more visibility); they need a delay, but give a speed advantage
-switch: when “forced”, especially when another type has a better win rate vs. the enemies than the current one
-bump/repeat: detect it, continue with it, but let other pacs go for the same targets
-chase/trap: check for a stuck enemy (BFS), or chase when I have a speed advantage or when it can’t see me (even with switching)
-enemy “prediction” (tracking a disappeared enemy): multiple steps, preferring pellet takes, setting probabilities
-enemy reappearance: pick the direct path and set pellet probabilities; if there is no direct path, use backward prediction
-superpellets: avoid if a better enemy pac has the advantage; handle the enemy just taking it
-pellet (post)prediction: handle newly empty positions and lower the probabilities “around” them
+1 learning enemy behaviour: an important thing which I had no time for

12 Likes

Hope this doesn’t start a religious war, but I notice C++ dominates the top of the leaderboard but not so much further down. Is it just a great language for this sort of thing, or is there some reason winners are especially drawn to it?

Do you have stats about how often that would happen? Or about how accurate this prediction model was?

Thanks for the PM :wink:

I never ended up coming up with stats for this, I just tested it by eye. I had each pacman print out where it thought the corresponding enemy was (with the same ID as my pacman). Some players (like Kovi :slight_smile:) printed out their next move in the game as well, so I would run games between myself and these players and then see how often the coordinates I printed out matched theirs in the replay. Here is an example: https://www.codingame.com/replay/467342481

I think it worked OK… :slight_smile: but yeah definitely can also go wrong at times. I wish I had a better way to quantify the accuracy.

2 Likes

Challenge was fun! Alas I wasn’t very successful this time - stuck in Silver.

Wood 2 to Wood 1
As I knew new rules would follow, I didn’t want to design anything big at this point. I wanted to make something small and dirty just to leave Wood 2. To make it more challenging I decided to make it as small and dirty as possible.

So I ended up with 100 bytes JS solution:

(f=_=>[x,y]=readline().split` `)(g=n=>n&&g(n-1,f()))
g(y)
while(!g(f(g(f(f())))))print('MOVE 0',x,y)

It can be easily turned into a one-liner, but I left it like that to keep readability, maintainability and self-documentingness.
All this code does is ignore most of the input and just move to the last pellet.

Wood1 to Bronze
This time I created a proper solution in Java.
If we consider that eating a pellet has negative weight, then all we need is to find the “shortest” path, i.e. the path with minimal weight (usually also negative).
I used Dijkstra’s algorithm; it’s not very suitable for negative weights, so I had to modify it a bit. Surprisingly, it worked pretty well and my bot reached Bronze right after the first submit.

Bronze to Silver
New rules and more intense competition required some fixes:
— implemented “fog of war” for pellets: every unseen pellet is considered to exist until proven otherwise;
— fixed an NPE when one of my pacs got eaten;
— fixed a bug where 2 pacs blocked each other;
— increased the weight of nearby pellets (previously the best path was chosen only by total number of pellets, regardless of how far away they were);
— implemented the Rock/Paper/Scissors mechanic to decide whether a path is blocked or not.

Each of these improvements moved me 300-500 places up, but with 2800 total players in Bronze that was not too significant.

Then I implemented SPEED command support and instantly my bot skyrocketed to Silver.

Silver
I found that sometimes my pacs walked in empty areas just because another allied pac had already been there. So my first big improvement was to choose paths that don’t intersect. The solution was pretty straightforward:

  1. initially every pac chooses a path independently
  2. cells that are planned to be visited are marked
  3. a second run of pathfinding is executed with respect to the plans of other pacs

This gave me pretty good results: from ~400th place to ~100th place.

Then I implemented enemy tracking: every time my pacs see an enemy pac, a table is calculated of the earliest turn that enemy pac may reach each cell on the map.
I used this table to avoid collisions with enemy pacs, or to intentionally collide if the enemy pac is “eatable”.

I expected it to skyrocket me to Gold, but it gave me nothing.
Then I found a nasty bug, and after fixing it… I dropped from ~100th to ~700th!
Yes, broken enemy tracking gave better results than the fixed one. That demotivated me and I gave up on further improvements.
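Such a reach table is a plain BFS whose distances are converted to turns. A sketch (the 2-cells-per-turn handling of SPEED is a simplification):

```python
from collections import deque
import math

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]

def reach_turns(enemy, speed_turns=0):
    """Earliest turn an enemy could occupy each cell: BFS distance from its
    last seen position, converted to turns assuming it covers 2 cells per
    turn while SPEED is still active."""
    dist = {enemy: 0}
    q = deque([enemy])
    while q:
        x, y = q.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if GRID[n[1]][n[0]] != '#' and n not in dist:
                dist[n] = dist[(x, y)] + 1
                q.append(n)
    def turns(d):
        boosted = min(d, 2 * speed_turns)  # cells covered while boosted
        return math.ceil(boosted / 2) + (d - boosted)
    return {cell: turns(d) for cell, d in dist.items()}

# A cell 4 steps away: 2 turns under SPEED, 4 turns without.
assert reach_turns((1, 1), speed_turns=2)[(3, 3)] == 2
assert reach_turns((1, 1))[(3, 3)] == 4
```

A move can then be flagged unsafe when a stronger enemy's reach turn for that cell is less than or equal to the turn my pac would arrive there.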

Here’s one of my Bronze games:

6 Likes

Thanks for the PM! Do you run this on each pac in sequence (decide pac 1 > decide pac 2…)? Wouldn’t that favor the first pac, since it would less often have thruPac set (being the only one to move)?

First, C++ is generally faster than most of the other languages here.

Second, the winners tend to be professionals or college students, so they’re either really good at C++ or they’re learning it in school.

At least, I think…

1 Like

I want to say that the companies competition is a great thing. It was the main motivator for me to improve, even when I thought I couldn’t find anything to improve.

Finished 236th (Gold league)

My solution
Unfortunately this time my solution became a bunch of "if"s very quickly.

For each pac:

  1. Decide whether it wants to use the ability:
    1.1) SWITCH if there is an enemy in the next cell who can beat us
    1.2) SWITCH if we want to resolve a collision by switching type (only if our score is less than the opponent’s)
    1.3) SPEED if we are not blocked
  2. Add everything to the obstacles grid (including other pacs and stronger enemies)
  3. If a stronger enemy pac is around, also add the cells it can attack as obstacles
  4. Calculate distances to each cell using the Lee algorithm
  5. Find the closest reachable super-pellet and choose it as the target path
  6. If no target path is selected, choose the most desirable pellet to go after (weight = pellet neighbor count - distance - how many steps ago we last saw it, divided by 10)
  7. If no target path is selected, find out if one of our pacs is standing nearby and make a step towards it (to its current position)
  8. If no target path is selected, just go to the center
  9. If we had a collision last frame, check that we didn’t bump into our own pac (a pac within 2 steps who also had a collision)
    9.1) If we did, and we’re not the first by order, don’t move this turn
  10. Calculate the next step we’re going to take to reach the target and mark it as an obstacle for other pacs this turn
  11. If we are sped up, calculate the second step we’ll take this turn
    11.1) If our target is further than one step away, use the step from the path calculation algorithm
    11.2) If our target is one step away
    11.2.1) Extrapolate the step and check that it’s not blocked by a wall or by one of our pacs within 2 turns
    11.2.2) Run the weight calculation again from the point we’ll reach in one step and choose the best move
    11.3) Save the second step to the obstacles for other pacs this turn
  12. MOVE to the appropriate position calculated above

Thanks a lot for an awesome event, and especially for updating the C# environment! (I would have skipped this contest if .NET Core hadn’t been deployed.) The game was balanced, with clear rules and not too many mechanics. (Maybe the SPEED implementation/visualization was a little bit confusing.)

What was really frustrating was the lack of a testing environment or local runner. I didn’t manage to use CGBenchmark; I would have tried to implement my own tool if the API were well documented.

The outcome of any tiny change is unpredictable; you are never sure whether it is a regression or an improvement. With so many enhancement ideas, I was paralyzed by the fear of accidentally breaking something, not being able to detect a regression, or falsely rejecting some good ideas.

Uncertainty :(

How do you organize your dev environment to test for regressions and benchmark improvements, to make progress more evidence-based?

(I hope the next event will be more “continuous”, like Code Royale or Coders Strike Back :))

2 Likes

@Lysk Good question. The algorithm is stateless; i.e. it doesn’t favor any pac based on the order you run it. It just propagates the scores back up to the root and tells you which direction you want to go, but doesn’t commit the pacman to moving any particular way.

The main idea is that because we are doing this DFS on the graph that would be produced by Dijkstra’s, if you open up a node that contains a pacman, that pacman has closer access to all the points in that subtree than the pacman at the root of the tree. So when we return the score and propagate it back to the parent, I choose to return the second-max value instead of the max.

2 Likes

“Can you give a single example where simulation isn’t better?”

That’s exactly the problem. These games have become mostly a feast of search algorithms, instead of favoring heuristics, logic and the implementation of some kind of AI. It’s all about brute force (and its variants, which just filter some results but are still a form of dumb testing of possible moves). We will never create a true AI by just testing every possible solution.

2 Likes

I ended up at the bottom of the Gold league. I was in the middle of it, then I changed something that dropped me significantly, and no matter what I tried, I couldn’t fix my leaderboard position.

As for the strategy, mine was pretty simple: BFS; track cells taken by my other pacs to avoid collisions; track cells at distance 1 and 2 (the surroundings) and identify any threats; and finally score cells based on what is in them.

Here’s a pseudocode for my rules and cell scoring: https://gist.github.com/depy/d6969158cedc81c6c323d099ed445bf8

All in all, given it was my first CG competition - I use CG but had never competed - and given I used Ruby, which is not blazingly fast, I’m pretty happy with the outcome.

If I had more time to refactor my code, or if I had written it better from the beginning, the next thing I would add would be checking whether enemies in sight are a threat and can move to the same cell I plan to move to, and avoiding that - I got eaten so many times because I planned to turn a corner and the enemy 2 cells away in front of me had the same plan for its next move.

The other thing I did not implement is taking into account that the “rules” change a bit if I have SPEED activated or the enemy has it activated.

One question I have for everyone is how did you avoid getting “corner killed”?

2 Likes

Finished #4
Many thanks to CodinGame and all the competitors for this very entertaining contest :slight_smile:

For the first two thirds of the competition time-span, I tried to implement a full tracker of the opponents’ paths, similar to what I did for Ocean of Code.
I really enjoyed this aspect in the UTG and OOC contests: trying to extract the maximum of hidden information from the few inputs we have.
It was very challenging due to the exponentially increasing number of states, and I was close to giving up, facing all these timeouts :wink: . I’m glad I persevered; this tracker was certainly the only strength of my bot :slight_smile:
It allowed me to remove a large number of the pellets gathered by the opponent, and to avoid some (sadly not all :slight_smile: ) of the bad encounters at the corners.

The rest of my bot selects an action successively for each pac, based only on heuristics over the current state of the game. I did not take the time to implement MCTS or another search algorithm simulating the next turns or trying to maximize farming.

Tracker
Each pac had a list of possible states (position, cooldown, type, and visited cells).
The tracker maintained a list of possible valid combinations of those states.

Expected information:

  • Sets of possible pac positions, types and cooldowns,
  • Removal of most of the pellets gathered by the opponent.

Information we had:

  • Disappeared pellets in the pacs’ visible area (+ big pellets),
  • The increments of our and the opponent’s scores,
  • The state of our pacs,
  • The state (or absence) of visible opponent pacs,
  • The possible actions for a pac, based on its state, or on combinations of states (valid moves imply no collision).

How to use it:

  • A disappeared big pellet means at least one pac was on this location last turn,
  • A disappeared pellet means at least one pac visited the cell on one of the previous turns,
  • A remaining pellet (or big pellet) means no pac ever visited the cell,
  • An increment of X in opponent’s score means that each valid new combination of pacs’ states, expanding a previous valid combination, must have eaten the exact amount of pellets leading to that score,
  • An unexpected loss of score means an opponent pac has taken the pellet or big pellet right under our nose,
  • Spotted pacs (and starting positions): self-explanatory :slight_smile:
  • A pac not spotted in our visible area means no pac is there now,
  • Unexpected blocking (or death) of one of our pacs means an opponent pac is near us trying to move to the same cell,
  • Absence of blocking or still alive pacs means the opposite.

Details (feel free to skip :sweat_smile:):

Pacs individual states :
Each pac state’s visited cells were reduced to their intersection with the possible remaining pellets at the beginning of the turn. The implied merging of states allowed to reduce now identical previously valid combinations of pacs formations.
Pacs were sorted by the amount of associated or expected states (dead first, then spotted, then pacs with the least previous states). Spreading of next evaluated pacs’ states could indeed benefit more pruning from the common properties of all new valid states for an already ‘expanded’ pac.
Taking into account the maximum and minimum amount of pellets gathered by other pacs and the score variation, we can deduce lower and upper bounds to the number of pellets gathered by the evaluated pac. In addition to gathered pellets count, verification also included (un)blocking, deaths (or absence of), invalid visited cells, etc.
The expansion of a state could identify previous states without children. Such an invalid state allowed the pruning of all previous combinations containing this state. Other pacs’ states referenced by invalidated combinations could themselves be invalidated if they did not appear in other valid combinations.
The intersection of visited cells for all remaining valid states associated with a pac allowed a first removal of remaining pellets.
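
The state reduction and the pellet-count bounds described above could look like this (a simplification with hypothetical names ; pellets are counted as one point each, ignoring big pellets) :

```python
def reduce_states(states, possible_pellet_cells):
    """Shrink each state's visited set to cells that could still hold pellets,
    merge states made identical by the reduction, and return the cells visited
    in *every* surviving state (those pellets are certainly gone)."""
    reduced = {s & possible_pellet_cells for s in states}  # shrink + merge
    surely_visited = frozenset.intersection(*reduced) if reduced else frozenset()
    return list(reduced), surely_visited

def pellet_bounds(score_delta, others_min, others_max):
    """Bound the pellets eaten by the evaluated pac, given the observed score
    increase and per-pac min/max bounds for the other opponent pacs."""
    lower = max(score_delta - sum(others_max), 0)
    upper = score_delta - sum(others_min)
    return lower, upper
```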

Combinations :
After all these expansions of individual states, a set of new valid combinations was created from each previous valid combination, taking into account each new state expanded from those composing it. At each level of the cascaded loops, the evaluation of a new combination was interrupted as soon as one of the requirements (gathering, killing, blocking, presence in one of the previous turns, etc.) could no longer be satisfied whatever state was chosen for the next pacs (using the common properties of the next pacs’ states).
Combinations implying impossible combined moves (collisions between opponent pacs) were filtered out. When a seemingly valid combination was found, a last check merged the root combination’s visited cells with the new pacs’ visited cells ; if the variation in gathered cells between these two did not match the variation in opponent score, the new combination was discarded.
The intersection of visited cells for all valid combinations (union of visited cells for each pac) allowed a second removal of remaining pellets (and more merging/pruning of states and combinations next turn :slight_smile:).
All states belonging to none of the new combinations were then erased, and a new intersection pass led to yet another removal of pellets…
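
Stripped of the early interruptions and the other checks, the expansion and its score filter might be sketched like this (hypothetical names ; pellets counted as one point each) :

```python
from itertools import product

def expand_combinations(prev_combos, children_of, pellet_delta):
    """Expand every previous valid combination with every child state of each
    of its pacs ; keep only candidates whose newly visited cells match the
    observed opponent score increase."""
    valid = []
    for combo in prev_combos:
        options = [children_of[state] for state in combo]
        for candidate in product(*options):  # the "cascaded loops"
            before = len(frozenset().union(*combo))
            after = len(frozenset().union(*candidate))
            if after - before == pellet_delta:
                valid.append(candidate)
    return valid
```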

Fallback :
It sometimes managed to track the opponent for the whole game, but often the combinatorics grew too high. In those cases, all combinations were cleared, but states expansion and pre-filters remained active. This reduced the tracker’s precision, but it was still usable : most of the important pellet information had already been deduced by then.
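
The fallback amounts to something like this (the budget value is illustrative, not the author’s) :

```python
MAX_COMBINATIONS = 100_000  # illustrative budget, not the author's value

def maybe_fallback(combinations):
    """Clear the joint combinations when tracking them becomes intractable ;
    per-pac states expansion and pre-filters keep running either way."""
    if len(combinations) > MAX_COMBINATIONS:
        return []
    return combinations
```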
All in all, that was quite a complicated tracker, not sure it was worth it, but it was fun to implement (not so fun to debug :wink: ).

Game logic
The next move of each of our pacs was selected by evaluating the following steps, in order of importance (once an action was assigned to a pac, later steps would not replace it) :

  • Attack : try to trap and kill pacs in dead end (some ifs involving cooldowns, length of dead end, pacs types, …),
  • Defense : flee, wait or switch to killer type if trapped in dead end (again, some ifs :slight_smile: ),
  • Enable speed : if cooldown ready, unless near big pellet or in danger (a known killer opponent is nearby, need to keep cooldown ready for switch or to flee),
  • Gather big-pellet (some ifs, could have largely been improved…),
  • Gather pellet (sort pellets by score for each pac ; select pac with the higher score then loop ; remove pellets on evaluated pac’s way for the evaluation of other pacs ; avoid dead-ends ; try to spread pacs),
  • Flee or fight if needed, instead of standing still.
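
The priority cascade above can be sketched as a loop over step functions, each returning an action or None (hypothetical names, not the author’s code) :

```python
def choose_actions(my_pacs, steps):
    """Assign each pac the action of the first (most important) step that
    produces one ; later steps never override an assigned action."""
    actions = {}
    for step in steps:
        for pac in my_pacs:
            if pac in actions:
                continue  # already decided by a more important step
            action = step(pac)
            if action is not None:
                actions[pac] = action
    return actions
```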

At each step, every pac was evaluated independently. But before the validation of an action, I checked that the pac would not collide with other already assigned pacs, and would not be exposed to a possible killer pac.
A set of possible opponent pacs’ states (up to 8 per pac ; beyond that, the pac was ignored) was deduced from the tracker, to help with the evaluation of those steps.

This game logic is relatively simple, with no deep search for the best possible moves. And I’d like to think that the efficiency of the bot was due to the accurate-enough information extracted from the tracker.

Thanks for reading :wink:
And again, many thanks for this very entertaining contest, and congratulations to the winners :slight_smile:
