Spring Challenge 2020 - Feedback & Strategy

I finished 14th.

Another fog of war game. These are painful to debug, as you need the full match history (where have the pellets already disappeared? where have I seen the opponent?). I described my approach to handling this on tech.io. In the following I will use some illustrations from this replay.

I split my turn in three phases: tracking, killing (with higher priority) and harvesting.

Tracking

Initially the opponent positions are known (mirrored from your own).
Then I list all possible following states for each individual opponent, as long as they don’t become too many. It’s possible to exclude some in later turns (when I see a pellet, it hasn’t been eaten) or already in the same turn (observing the score change, it’s easy to detect all opponents speeding at once). For an opponent path only a few things matter: the eaten pellets, the final position and the cooldown. It does not matter if the opponent randomly walks around while invisible to me.

Let’s have a look at turns 2-4:
[image: turns 2-4 of the replay]
The opponent rock with ID 0 moved 2 cells at first. I know that because of the score gain. There are a few possible locations for the opponent. Then a super pellet is missing with ID 0 being the only opponent in range. So it’s clear who picked it up and which path was used. I can now safely remove all pellets on that path. When I see the opponent again later on, I can filter for the according states. If there are multiple, they might share some of the eaten cells at least.

As you can see, there are floats on my pellets. These are probabilities that they still exist. They are not the result of a statistical analysis but some numbers that seemed to work. I also apply some heuristics here. E.g. when I see the entrance cell of a long corridor with a missing pellet, I assume a lower probability for the remaining cells there as well. The opponent won’t start harvesting and then turn around. Even visible pellets may have a probability below 1 when an opponent is closer than me.
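The corridor heuristic could be sketched roughly like this (a toy illustration, not the author's actual code; `entrance_factor` and `decay` are made-up numbers in the spirit of "some numbers that seemed to work"):

```python
def decay_corridor(prob, corridor, entrance_factor=0.2, decay=1.5):
    """corridor: cells ordered from entrance to dead end.
    prob: dict cell -> probability that its pellet still exists.
    When the entrance pellet is missing, assume the opponent kept harvesting:
    lower the probability of the remaining cells, recovering with depth."""
    p = entrance_factor
    for cell in corridor:
        prob[cell] = min(prob.get(cell, 1.0), p)
        p = min(1.0, p * decay)  # confidence recovers deeper in the corridor
    return prob
```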

Fighting

I only attack opponents with a cooldown greater than 0. Otherwise they could just switch and kill me. For each opponent and each group of 1-2 of my own units (only those capable of beating or at least blocking the opponent; you don’t attack a rock with scissors and a cooldown > 0) I try the following:
List all possible cells, where the opponent can go and that are not blocked by me. Then move my units as close as possible to the opponent (taking the furthest possible opponent location for each of my own units). If my unit can beat the opponent, remove that cell from the possible opponent locations.
Repeat this until the opponent cooldown is 0 or there are no opponent positions remaining. If the latter is true, we can go for the kill.
You can see a 1 vs 1 kill at the beginning of the replay linked above. A 2 vs 1 fight starts at frame 29 with a rock blocking the escape route and a paper eating the opponent.
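The loop described above might look roughly like this (a simplified sketch, not the author's code: types are ignored and every one of my units is assumed to beat the opponent; `step` returns the cells reachable in one turn, `dist` is the maze distance):

```python
def can_kill(opp_cells, opp_cooldown, my_units, dist, step):
    """Expand the set of cells the opponent could occupy (never through my
    units), move my units toward the farthest remaining possibility, and remove
    cells my units now cover. The kill is certain if the set empties before the
    opponent's cooldown does."""
    cells = set(opp_cells)
    while cells:
        if opp_cooldown <= 0:
            return False                        # opponent can SWITCH: abort
        cells = {n for c in cells for n in step(c)} - set(my_units)
        if not cells:
            return True
        my_units = [min(step(u), key=lambda v: max(dist(v, c) for c in cells))
                    for u in my_units]
        cells -= set(my_units)                  # covered cells are kills
        opp_cooldown -= 1
    return True
```

On a 5-cell corridor with a dead end, a single chaser corners the opponent if the cooldown is long enough; with cooldown 1 the opponent survives to switch.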

Harvesting

I use a beam search here. I do it for one unit after another. When one is fixed, I keep its moves while simulating the next. I run the search a second time in reverse order of my units, to avoid one unit harvesting in the territory of the other and leaving that one unemployed. The depth varies from 4 to 8 turns depending on the number of units I have to control.
My scoring rewards collecting pellets, with some decay for later turns. I also score the final position by taking into account the minimum distance from each pellet to the closest of my own units. I multiply these numbers for each pellet by its probability.
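A minimal version of such a beam search over harvesting paths could look like this (a sketch, not the author's code; `neighbors` yields adjacent cells, `pellet_prob` is the probability map described above, and the final-position term is omitted):

```python
def beam_search(start, neighbors, pellet_prob, depth=6, width=8, decay=0.9):
    """Each beam entry: (score, position, cells eaten along the path).
    Pellet rewards are decayed by `decay ** turn` and weighted by probability."""
    beam = [(0.0, start, frozenset())]
    for t in range(depth):
        candidates = []
        for score, pos, eaten in beam:
            for nxt in neighbors(pos):
                gain = 0.0 if nxt in eaten else pellet_prob.get(nxt, 0.0)
                candidates.append((score + decay ** t * gain, nxt, eaten | {nxt}))
        beam = sorted(candidates, key=lambda c: -c[0])[:width]
    return beam[0]
```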

Here’s an animation of the replay from above:


It helped me spot bugs, like failing to remove some pellets, more easily.


The structure and tone of this PM is pretty similar to my OoC one.
Nice game, with lots of players and strong competition (besides the usual, many marathon veterans from another site), gratz to the winners!

This time I reached #1 early and stayed there most of the time. First with heuristics, then with a simple search and prediction. The last few days were more challenging (plus the randomness of near-50% winrates); after a long weekend of late nights I was still hoping for a top-3 result. But the recalc did justice once again, and I fell down to #6.

As usual my code is not based on a single “super-algorithm”, but on many heuristic features:
-search: multiple tries with varying order and depth (up to 17) of independent DFS for each pac
-eval: based on a single map (with all/predicted pacs) + pellet probability map
-last resort: BFS to the nearest pellet (avoiding bumps or being killed)
-speed: positions with “x+y parity” are preferred (to avoid a 1-step waste and give more visibility); they need a delay, but give a speed advantage
-switch: when “forced”, especially when another type has a better winrate vs. the enemies than the current one
-bump/repeat: detect it, continue with it, but let other pacs go for the same targets
-chase/trap: check for a stuck enemy (BFS); chase when I have a speed advantage or when the enemy can’t see me (even with switching)
-enemy “prediction” (tracking of disappeared enemies): multiple steps, preferring pellet-taking moves, setting probabilities
-enemy reappearance: pick the direct path and set pellet probabilities; if there is no direct path, use backward prediction
-superpellets: avoid if a better enemy pac has the advantage; handle the enemy just taking it
-pellet (post)prediction: handle newly empty positions and lower probabilities “around” them
+1 learning enemy behaviour: an important thing I had no time for


Hope this doesn’t start a religious war, but I notice C++ dominates the top of the leaderboard but not so much further down. Is it just a great language for this sort of thing, or is there some reason winners are especially drawn to it?

Do you have stats about how often that would happen? Or about how accurate this prediction model was?

Thanks for the PM :wink:

I never ended up coming up with stats for this, I just tested it by eye. I had each pacman print out where it thought the enemy with the same ID was. There were some players (like Kovi :slight_smile:) who printed out their next move in the game as well, so I would run games between myself and these players and then see how often the coordinates I printed out matched theirs in the replay. Here is an example: https://www.codingame.com/replay/467342481

I think it worked OK… :slight_smile: but yeah definitely can also go wrong at times. I wish I had a better way to quantify the accuracy.


Challenge was fun! Alas I wasn’t very successful this time - stuck in Silver.

Wood 2 to Wood 1
As I knew new rules would follow, I didn’t want to design something big at this point. I wanted to make something small and dirty just to leave Wood 2. To make it more challenging, I decided to make it as small and dirty as possible.

So I ended up with 100 bytes JS solution:

(f=_=>[x,y]=readline().split` `)(g=n=>n&&g(n-1,f()))
g(y)
while(!g(f(g(f(f())))))print('MOVE 0',x,y)

It can easily be turned into a one-liner, but I left it like that to keep readability, maintainability and self-documentingness.
All this code does is ignore most of the input and move to the last pellet.

Wood1 to Bronze
This time I created a proper solution in Java.
If we consider that eating a pellet has a negative weight, then all we need is to find the “shortest” path, i.e. the path with minimal weight (usually also negative).
I used Dijkstra’s algorithm, though it’s not very suitable for negative weights, so I had to modify it a bit. Surprisingly, it worked pretty well and my bot reached Bronze right after the first submit.
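The idea can be illustrated without the Dijkstra modification (which isn't shown in the post): treat pellets as negative weights and search bounded-length simple paths for the minimal total weight. The `0.1` step cost and `max_len` here are made-up values, not the author's:

```python
def best_path(start, neighbors, has_pellet, max_len=10):
    """Minimal-weight simple path of bounded length; pellets weigh -1.0,
    empty cells +0.1. Plain Dijkstra assumes non-negative weights, so this
    sketch exhaustively searches bounded paths instead."""
    best = (0.0, [start])
    stack = [(0.0, start, [start], {start})]
    while stack:
        w, pos, path, seen = stack.pop()
        if w < best[0]:
            best = (w, path)
        if len(path) > max_len:
            continue
        for n in neighbors(pos):
            if n not in seen:
                cost = -1.0 if has_pellet(n) else 0.1
                stack.append((w + cost, n, path + [n], seen | {n}))
    return best
```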

Bronze to Silver
New rules and more intense competition required some fixes:
— implemented “fog of war” for pellets. It considered every unseen pellet to exist until proven otherwise;
— fixed an NPE when one of my pacs got eaten;
— fixed a bug where 2 pacs blocked each other;
— increased the weight of nearby pellets (previously the best path was chosen only by the total number of pellets, independent of how far away they are);
— implemented the Rock/Paper/Scissors mechanic to decide if a path is blocked or not.

Each of these improvements moved me up 300-500 places, but with 2800 total players in Bronze that was not too significant.

Then I implemented SPEED command support and instantly my bot skyrocketed to Silver.

Silver
I found that sometimes my pacs walk in empty areas just because another allied pac was already there. So my first big improvement was to choose paths that do not intersect. The solution was pretty straightforward:

  1. initially every pac chooses its path independently
  2. cells that pacs plan to visit are marked
  3. a second run of pathfinding is executed with respect to the plans of the other pacs

This gave me pretty good results: from ~400th place to ~100th place.
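The three steps above can be sketched as a two-pass planner (a toy illustration; `find_path` stands in for the actual pathfinding and is assumed to accept a set of reserved cells):

```python
def plan_paths(pacs, find_path):
    """find_path(pac, reserved) returns that pac's best path avoiding the
    cells in `reserved`."""
    # pass 1: everyone plans independently
    plans = {p: find_path(p, set()) for p in pacs}
    # pass 2: replan each pac with respect to the others' claimed cells
    for p in pacs:
        reserved = {c for q in pacs if q != p for c in plans[q]}
        plans[p] = find_path(p, reserved)
    return plans
```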

Then I implemented enemy tracking: every time my pacs see an enemy pac, a table is calculated of the turn on which this enemy pac may reach each cell on the map.
I used this table to avoid collisions with enemy pacs, or to intentionally try to collide if the enemy pac is “eatable”.
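Using such a table for the avoid/collide decision might look like this (a sketch; `enemy_tables` would hold one BFS reachability result per sighted enemy, and `beats_me` is a stand-in for the rock-paper-scissors relation):

```python
import math

def is_risky(cell, arrival_turn, enemy_tables, beats_me):
    """enemy_tables: enemy id -> {cell: earliest turn that enemy can be there}.
    A planned cell is risky if a stronger enemy can be there no later than
    we arrive."""
    return any(beats_me(e) and table.get(cell, math.inf) <= arrival_turn
               for e, table in enemy_tables.items())
```

The symmetric check (an "eatable" enemy that cannot reach the cell before us) would flip the comparison and the type relation.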

I expected it to skyrocket me to Gold, but it gave me nothing.
Then I found a nasty bug and after fixing it… I dropped from ~100th to ~700th!
Yes, broken enemy tracking gave better results than the fixed one. That demotivated me and I gave up on further improvements.

Here’s one of my Bronze games:


Thanks for the PM! Do you run this on each pac in sequence (decide pac 1 > decide pac 2…)? Would that favor the first pac since it would have less often a thruPac (being the only one to move)?

First, C++ is generally faster than most of the other languages here.

Second, the winners tend to be professionals or college students, so they’re either really good at C++ or they’re learning it in school.

At least, I think…


I want to say that the companies competition is a great thing. It was the main motivator for me to improve, even when I thought I couldn’t find anything to improve.

Finished 236th (golden league)

My solution
Unfortunately this time my solution became a bunch of "if"s very quickly.

For each pac:

  1. Decide whether it wants to use the ability:
    1.1) SWITCH if we have an enemy in the next cell who can beat us
    1.2) SWITCH if we want to resolve a collision by switching type (only if our score is less than the opponent’s)
    1.3) SPEED if we are not blocked
  2. Add everything to the obstacles grid (including other pacs and stronger enemies)
  3. If we have a stronger pac around, also add the cells that it can attack as obstacles
  4. Calculate distances to each cell using Lee algorithm
  5. Find the closest reachable super-pellet and choose it as the target path
  6. If a target path is not selected, choose the most desirable pellet to go after (weight = pellet neighbor count - distance - number of steps since we last saw it, divided by 10)
  7. If a target path is not selected, find out if one of our pacs is standing nearby and make a step towards it (to its current position)
  8. If a target path is not selected, just go to the center
  9. If we had a collision last frame, check whether we bumped into our own pac (a pac within 2 steps that also had a collision)
    9.1) If we did and we’re not the first by order, don’t move this turn
  10. Calculate the next step we are going to take to reach the target and mark it as an obstacle for other pacs this turn
  11. If we are sped up, calculate the second step we’ll take this turn
    11.1) If our target is further than one step away, use the step from the path calculation algorithm
    11.2) If our target is one step away
    11.2.1) Extrapolate the step and check that it isn’t blocked by a wall or by our own pac within 2 turns
    11.2.2) Run the weight calculation again from the point where we’ll stand in one step and choose the best move
    11.3) Save the second step to the obstacles for the other pacs this turn
  12. MOVE to the appropriate position calculated above
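Step 4, the Lee algorithm, is a plain BFS flood fill over the grid; a minimal sketch (my own toy representation: walls marked `#`, coordinates as `(x, y)`):

```python
from collections import deque

def lee_distances(grid, start):
    """Lee algorithm: BFS over a grid of '#' walls, returning a map of
    cell -> shortest distance from start. Unreachable cells are absent."""
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h \
                    and grid[ny][nx] != '#' and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist
```

Running it per pac with obstacle cells turned into `#` gives the distance term used in the weights of steps 5-6.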

Thanks a lot for an awesome event, and especially for updating the C# environment! (I would have skipped this contest if .NET Core hadn’t been deployed.) The game was balanced, with clear rules and not too many mechanics. (Maybe the SPEED implementation/visualization was a little bit confusing.)

What was really frustrating is the lack of some testing environment or local runner. I didn’t manage to use CGBenchmark; I would have tried to implement my own tool if the API were well documented.

The outcome of any tiny change is unpredictable; you are never sure if it is a regression or an improvement. With so many enhancement ideas I was paralyzed by the fear of breaking something accidentally and not being able to detect the regression, or of false-positively rejecting some good ideas.

Uncertainty :(

How do you organize your dev environment to test for regressions and benchmark improvements, to make progress more evidence-based?

(I hope the next event will be more “continuous”, like Code Royale or Coders Strike Back :))


@Lysk Good question - The algorithm is stateless; i.e. it doesn’t favor any pac based on the order you run it. It just propagates the scores back up to the root and tells you which direction you want to go, but doesn’t commit the pacman to moving any particular way.

The main idea is that because we are doing this DFS on the graph that would be produced by Dijkstra’s, if you open up a node that contains a pacman, that pacman will have closer access to all the points under that subtree than the pacman at the root of the tree. So when we return the score and propagate it back to the parent, I choose to return the secondMax value instead of the max.
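A toy version of this secondMax propagation (my own sketch, assuming the reachability graph has been reduced to a tree, with a `children` map, a pellet value per node, and `my_pacs` holding our other pacs' positions):

```python
def subtree_score(node, children, pellet, my_pacs):
    """DFS over a tree-shaped reachability graph. A node occupied by one of
    our other pacs reports its second-best branch upward, since that pac will
    claim the best branch for itself."""
    vals = sorted((subtree_score(c, children, pellet, my_pacs)
                   for c in children.get(node, [])), reverse=True)
    if node in my_pacs:
        best = vals[1] if len(vals) > 1 else 0
    else:
        best = vals[0] if vals else 0
    return pellet.get(node, 0) + best
```

With no friendly pac in the way the root sees the richest branch; placing a pac on that branch makes the root fall back to the second-best one.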


“Can you give a single example where simulation isn’t better?”

That’s exactly the problem. Those games have become mostly a feast of search algorithms, instead of favoring heuristics, logic and the implementation of some kind of AI. It’s all about brute force (and its variants, which just filter some results but are still a form of dumb testing of possible results). We will never create a true AI just by testing every possible solution.


I ended up at the bottom of the Gold league. I was in the middle of it and then I changed something that dropped me significantly, and no matter what I tried, I couldn’t fix my leaderboard position.

As for the strategy, mine was pretty simple: BFS, tracking cells taken by my other pacs to avoid collisions, tracking cells at distance 1 and 2 (the surroundings) to identify any threats, and finally cell scoring based on what is in the cell.

Here’s a pseudocode for my rules and cell scoring: https://gist.github.com/depy/d6969158cedc81c6c323d099ed445bf8

All in all, given that it was my first CG competition (I use CG but had never competed) and that I used Ruby, which is not blazingly fast, I’m pretty happy with the outcome.

If I had more time to refactor my code, or if I had written it better from the beginning, the next thing I would add would be checking whether enemies in sight are a threat and can also move to the same cell I plan to move to, and avoiding that. I got eaten so many times because I planned to turn a corner and the enemy 2 cells in front of me had the same plan for the next move.

And the other thing I did not implement is taking into account that “rules” change a bit if I have “speed” activated or the enemy has it activated.

One question I have for everyone is how did you avoid getting “corner killed”?


Finished #4
Many thanks to CodinGame and all the competitors for this very entertaining contest :slight_smile:

For the first two thirds of the competition time-span, I tried to implement a full tracker of the opponents’ paths, similar to what I did for Ocean of Code.
I really enjoyed this aspect in UTG and OOC contests, trying to extract the maximum of hidden information from the few inputs we have.
It was very challenging due to the exponentially increasing number of states, and I was close to giving up, facing all these timeouts :wink: . I’m glad I persevered; this tracker was certainly the only strength of my bot :slight_smile:
It allowed me to remove a large amount of the pellets gathered by the opponent, and to avoid some (sadly not all :slight_smile: ) of the bad encounters at the corners.

The rest of my bot is the selection of an action successively for each pac, only based on heuristics on the current state of the game. I did not take the time to implement a MCTS or other search algorithm, simulating next turns or trying to maximize farming.

Tracker
Each opponent pac had a list of possible states (position, cooldowns, type, and visited cells).
The tracker maintained a list of possible valid combinations for those states.

Expected information :

  • Sets of possible pacs positions, types and cooldowns,
  • Removal of most of the pellets gathered by the opponent.

Information we had :

  • Disappeared pellets in pacs’ visible area (+ big pellets),
  • Increment of our and opponent scores,
  • State of our pacs,
  • State (or absence) of visible opponent’s pacs,
  • Possible actions for a pac, based on its state, or based on combination of states (valid moves imply no collision).

How to use it :

  • A disappeared big pellet means at least one pac was on this location last turn,
  • A disappeared pellet means at least one pac visited the cell on one of the previous turns,
  • A remaining pellet (or big pellet) means no pac ever visited the cell,
  • An increment of X in opponent’s score means that each valid new combination of pacs’ states, expanding a previous valid combination, must have eaten the exact amount of pellets leading to that score,
  • An unexpected loss of score means an opponent pac has taken the pellet or big pellet right under our nose,
  • Spotted pacs (and starting positions) : self explanatory :slight_smile:
  • Non-spotted pacs in visible area means no pac is here now,
  • Unexpected blocking (or death) of one of our pacs means an opponent pac is near us trying to move to the same cell,
  • Absence of blocking or still alive pacs means the opposite.

Details (feel free to skip :sweat_smile:) :

Pacs individual states :
Each pac state’s visited cells were reduced to their intersection with the possible remaining pellets at the beginning of the turn. The implied merging of states allowed reducing previously valid combinations of pac formations that had now become identical.
Pacs were sorted by the number of associated or expected states (dead first, then spotted, then pacs with the fewest previous states). The expansion of the next evaluated pacs’ states could indeed benefit from more pruning thanks to the common properties of all new valid states of an already ‘expanded’ pac.
Taking into account the maximum and minimum amount of pellets gathered by other pacs and the score variation, we can deduce lower and upper bounds to the number of pellets gathered by the evaluated pac. In addition to gathered pellets count, verification also included (un)blocking, deaths (or absence of), invalid visited cells, etc.
The expansion of a state could identify previous states without children. Such an invalid state allowed the pruning of all previous combinations containing this state. Other pacs’ states referenced by invalidated combinations could themselves be invalidated if they did not appear in other valid combinations.
The intersection of visited cells for all remaining valid states associated with a pac allowed a first removal of remaining pellets.
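The pellet-removal rule behind these intersections boils down to a small set operation (a drastic simplification of the tracker described above):

```python
def remove_certain_pellets(pellets, hypotheses):
    """hypotheses: one set of opponent-visited cells per still-valid state.
    A pellet is certainly gone only if every hypothesis agrees it was eaten."""
    if not hypotheses:
        return set(pellets)
    return set(pellets) - set.intersection(*map(set, hypotheses))
```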

Combinations :
After all these expansions of individual states, a set of new valid combinations was created from each previous valid combination, taking into account each new states expanded from those composing the combination. At each level of the cascaded loops, the evaluation of the new combinations was interrupted if requirements (gathering, killing, blocking, presence in one of the previous turn, etc.) could not be satisfied, whatever the state we chose for the next pacs (taking into account the common properties of the next pacs’ states).
Combinations leading to impossible combined moves due to collision between opponent pacs were filtered. When a seemingly valid combination was found, a last check involved merging of root combinations pacs’ visited cell and new pacs’ visited cells. If the variation of gathered cells between these 2 did not match the variation in opponent score, the new combination was discarded.
The intersection of visited cells for all valid combinations (union of visited cells for each pac) allowed a second removal of remaining pellets (and more merging/pruning of states and combinations next turn :slight_smile:).
All states belonging to none of the new combinations were then erased, and a new intersection pass led to yet another removal of pellets…

Fallback :
It sometimes managed to track the opponent for the whole game, but often the combinatorics were too high. In those cases, all combinations were cleared, but state expansion and the pre-filters remained active. This resulted in a loss of performance, but was still usable. Most of the important pellet information had been deduced before.
All in all, that was quite a complicated tracker, not sure it was worth it, but it was fun to implement (not so fun to debug :wink: ).

Game logic
The selection of each of our pac’s next move was done through the evaluation of the following steps, by order of importance (when an action was assigned to a pac, next step would not replace it) :

  • Attack : try to trap and kill pacs in dead end (some ifs involving cooldowns, length of dead end, pacs types, …),
  • Defense : flee, wait or switch to killer type if trapped in dead end (again, some ifs :slight_smile: ),
  • Enable speed : if cooldown ready, unless near big pellet or in danger (a known killer opponent is nearby, need to keep cooldown ready for switch or to flee),
  • Gather big-pellet (some ifs, could have largely been improved…),
  • Gather pellet (sort pellets by score for each pac ; select pac with the higher score then loop ; remove pellets on evaluated pac’s way for the evaluation of other pacs ; avoid dead-ends ; try to spread pacs),
  • Flee or fight if needed, instead of standing still.

At each step, every pac was evaluated independently. But before the validation of an action, I checked that the pac would not collide with other already assigned pacs, and would not be exposed to a possible killer pac.
A set of possible opponent pacs’ states (up to 8 per pac; if more, the pac was ignored) was deduced from the tracker to help with the evaluation of those steps.

This game logic is relatively simple, with no deep search for the best possible moves. And I’d like to think that the efficiency of the bot was due to the accurate enough information extracted from the tracker.

Thanks for reading :wink:
And again, many thanks for this very entertaining contest, and congratulations to the winners :slight_smile:


Thanks to the creators, CG, and congrats to the winners! It was a good contest. For some reason I tend not to do very well in fog of war contests (WW, OOC) and this was no exception. I ended up #203, disappointing since I had a lot of the complex mechanics that many of the top players implemented. I think with some tweaking and bug fixing I should be able to rank higher.

The thing I did differently from the others was using Smitsimax as the search algorithm. That search gave me good results in other contests, but I struggled with it this time around, especially with the UCB parameters. I never managed to make it work properly, and in the end I just used it as the battle algorithm when one or more of my pacs were close to a rival. As the main search algorithm I used a depth-15 DFS for each pac, with reruns to prevent the pacs blocking each other.

Do you know when will the multi be available? Or at least some beta version of it…

Thanks again, and looking forward to the next contest (end of year, ouch…)


Hello! I finished 76th in Legend.

My bot used a Search that was based on Smitsimax. This was my first contest on CG that I worked on from start to finish, so I was very excited to reach Legend.

First Turn

  1. Fill in the location of all enemy pacmen and all pellets.

  2. Calculate which pacman has the shortest path to each super pellet.

  • If two pacmen are the same distance, but one pacman will kill the other, the weaker pacman should cede the super pellet to the stronger one.

Search Function

  1. For each pacman (mine and opponents), I built a tree of the pacman’s possible actions, with a depth of 4.

    a. Like most others, I only considered paths flowing outwards from the pacmen. They couldn’t turn around mid-path, unless they hit a wall.

    b. I did not make any assumptions about when a pacman has to use an ability, they could save it up if they wanted to.

  2. I ran Smitsimax. An action was chosen for each pacman, and those actions were simulated. This repeated until we reached a depth of four. At this point the board was scored. Repeat for N iterations.

Problems with my Search

At this point, I found that although Smitsimax was giving decent moves, it was hardly ever finding the optimal moves. So I added another layer to my algorithm. I decided to call it (for some reason) Lock and Rock. I think a lot of people did something similar, minus the Smitsimax.

Let’s say there are 8 pacmen. At this point we have the “best” set of actions for each pacman, as returned by Smitsimax.

Pacman1: NORTH, NORTH, EAST, EAST
Pacman2: SPEED, (NORTH, NORTH), (NORTH WEST), (WEST WEST)

Pacman8: SWITCH_SCISSORS, SOUTH, WEST, SOUTH

Let’s assume that Pacman2 - Pacman8 are going to follow those paths. But what about Pacman1? We can try every possible action in Pacman1’s tree and evaluate it against the other pacmen’s actions. So given the other pacmen’s “locked” paths, determine which path is truly optimal for Pacman1. Repeat this process for the rest of the pacmen. At this point all the pacmen are going to have a very good move.

I run this algorithm twice, and then I do a final analysis on the two best actions for each pacman and choose the better one.
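This "Lock and Rock" refinement reads like coordinate descent over the pacmen's action sequences; a sketch (with `candidates` and `evaluate` as stand-ins for the per-pacman tree and the joint scoring function):

```python
def lock_and_rock(paths, candidates, evaluate, rounds=2):
    """paths: pac -> current best action sequence (e.g. from Smitsimax).
    candidates(pac): every sequence in that pac's tree.
    evaluate(assignment): joint score of a full path assignment.
    Lock everyone else's path and exhaustively improve one pac at a time."""
    paths = dict(paths)
    for _ in range(rounds):
        for pac in paths:
            paths[pac] = max(candidates(pac),
                             key=lambda seq: evaluate({**paths, pac: seq}))
    return paths
```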

Evaluation Function

  • If pacmen have goals (to get a superpellet) +SCORE for getting close to that goal, or achieving it
  • -LOT for dying, +LITTLE for killing a pacman
  • +SCORE for eating pellets, -SCORE for opponent eating pellets
  • -SCORE for being far away from pellets
  • +SCORE for number of pellets in range (using something like voronoi)

Opponent Tracking

  • Because my Search also predicted the best moves for the opponent’s pacman, I assumed they made those moves, and marked any pellets that they would have eaten as “unlikely_pellets”
  • If I saw an opponent later in an unexpected spot, I would update their location.
  • If I didn’t see an opponent somewhere that I DID expect them, I would stop tracking them.
  • However, if I made a wrong guess I didn’t have any code to go backwards and figure out where they really might be, or what pellets they may or may not have eaten. I would have liked to do that if i had more time.

Overall, this was an excellent experience. Congrats again to all the winners, and thanks to CodinGame for a great contest!


26th Legend / Java

I had a very good time playing this contest; it was not so easy due to the fog of war and the number of move combinations!

Pretty proud to have contributed to the 1st place of Thales and 9th of INSA Toulouse :grinning:

Wood to Bronze

First, it is important not to spend too much time while we do not know all the rules, so I tried to reach Bronze quickly with a minimum of code and strategy.

Here, I just move pacs to the closest super pellet, or if there is none, to the closest pellet. I use speed whenever I can. For the distance, I used Manhattan at this point.

Strategy to reach Legend

Precomputed data
I use the first turn to compute as much data as possible to save time later:

  • position : instantiate a Position object for each coordinate. Just a trick to avoid garbage collector pressure from creating too many objects

  • distance : for each position, save the distance to each other position using a BFS

  • line of sight : for each position save the positions in sight

  • moves positions : for each position save the positions we can reach with speed 1 and speed 2

  • deadend : mark positions in dead ends and save the associated position list for the whole dead end, so it is possible to get its start and end easily

  • paths : list of possible paths for each position, computed using a DFS of depth 10. Turning around is done only when reaching a dead end

Init data
A pellet board is created marking all empty positions with no super pellet and no pac.
I use a bitboard for that, with one long per line.
At turn one, I deduce the opponent pacs’ positions and types from my own pacs’ positions.

Update data
Each turn, I update my pellet board with the information I have, using my pacs’ line of sight.
I also try to guess the opponent pacs’ positions. For that I keep a list of possible positions/visited cells for each opponent pac, which I update each turn, so when the opponent reappears I can update my pellet board by removing the visited positions.
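Such a bitboard, translated into a sketch with Python ints (the post uses one Java `long` per line; Python ints just keep the example short):

```python
class PelletBoard:
    """One integer per row; bit x of rows[y] is set while (x, y) may hold
    a pellet. Walls ('#') never get a bit."""

    def __init__(self, grid):
        self.rows = [sum(1 << x for x, c in enumerate(line) if c == ' ')
                     for line in grid]

    def clear(self, x, y):
        self.rows[y] &= ~(1 << x)       # pellet seen eaten or cell seen empty

    def has(self, x, y):
        return bool(self.rows[y] >> x & 1)
```

Clearing a whole line of sight then becomes a single AND with a precomputed mask per (position, direction).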

Engine
I implemented the engine so I can easily play actions and see what happens, also managing collisions. I thought I could do a depth evaluation at this point, but when I tried all move permutations it was really too slow and I timed out regularly with 5 pacs at depth 1… (my evaluation function was at fault too)

Compute
As I said before, my first try was to play all move permutations and evaluate the game. It let me reach Silver, but some timeouts on 5-pac maps prevented me from reaching Gold.
Then I tried to play iteratively: I compute the best move of one pac, then I compute the best move of the second one using the best move of the previous one, and so on. It works well and let me reach Legend (with some other stuff).

Evaluation
In this game it is important to control the whole map and to have a good distribution, so I figured it is much like the Tron game with Voronoi. My main evaluation is based on that.

  • evaluate positions : assign points based on the distance of each pellet to the closest pac, positive for my pacs and negative for opponent pacs. value = pelletValue / dist
  • score : 10 * (myScore - oppScore)
  • kill : +/-500 for each pac alive
  • best path evaluation : evaluate the best path using the precomputed DFS with a decrease at each depth: score += pelletValue * coeff^depth. I do not care about other pacs’ positions for the path, but a pellet on a position closer to another pac (which I cannot eat) brings 0
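The first evaluation term (value = pelletValue / dist, signed by which side's pac is closer) could be sketched as follows; the `max(…, 1)` guard is my addition to avoid dividing by zero when a pac stands on a pellet:

```python
def evaluate_positions(pellets, my_pacs, opp_pacs, dist):
    """For each pellet, credit the closest pac: + for mine, - for the
    opponent, scaled by pelletValue / dist (Voronoi-like map control)."""
    score = 0.0
    for cell, value in pellets.items():
        mine = min(dist(p, cell) for p in my_pacs)
        theirs = min(dist(p, cell) for p in opp_pacs)
        if mine <= theirs:
            score += value / max(mine, 1)
        else:
            score -= value / max(theirs, 1)
    return score
```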

Tips & Tricks

  • Start : the start strategy is really important, so I precompute some moves. I assign super pellets to the closest pac, and a pac can have multiple targets if it can reach one before the other pacs. I also compute the opponent’s start moves, so for some time I can avoid taking into account super pellets I am sure I can’t reach before the opponent. That way my pacs will not be in danger trying to reach them and will not go down paths where the pellets have already been eaten.
  • Ability : I use speed as much as possible, but I verify that my position is safe before using it. I can switch if I am in a dead end and my position is not safe.
  • Prevent Hunting : to avoid hunting an opponent who can flee, I simulate the opponent’s move if their position is not safe, so my pac attacks only if the opponent cannot flee.
  • Deadend Hunting : if I know I can hunt a pac into a dead end, I do it. For that I have to be able to eat him before he flees or recovers his ability
  • Pellets potentially eaten : as I try to guess the opponent, I know the potentially visited positions, so I decrease these pellets’ value in my evaluation function

Conclusion
Really pleasant contest. I can’t wait to see the leaders’ strategies!


Why do you need a separate DFS pass for paths if you already have a BFS pass for distances? Can’t you save all-pairs path info during the BFS phase?

DFS gives me all the paths of a given length, while BFS gives me only one, as I do not visit positions multiple times.


The challenge was very… “challenging”. I was hoping to see other people’s code! Is there any way to do so?
Maybe someone could send me their code in a message so that I could compare mine with theirs?