HyperSonic - Feedback & Strategy

Very fun contest. A few observations:

  • big plus for coming up with such an elegant challenge - complex enough, yet with a simple interface
  • the fact that the output format stayed the same throughout the leagues is really cool; a higher-league AI is “backwards compatible” with lower leagues - a big plus compared to other contests
  • also a big plus for the “all actions are resolved at the same time” rule

Strategy-wise, I found it very interesting that my “prototype” implementation, in which I took some shortcuts, performed so well. Initially I figured I would just pretend that all bombs explode on the same turn, and that proved to be very effective. I found that implementing a more accurate simulator to determine the forward game state ended up being very costly, and in fact it did not perform as well as the heuristic version, which I ended up improving in a few key ways. The biggest insight for me was realizing that the key was to find not the closest safe place to move to after placing a bomb, but a safe place as far away as possible, also taking into account some points of interest along the way.
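In rough C++, the “far safe cell” idea looks something like this (a simplified sketch, not my actual code; `grid` and `danger` are assumed precomputed, with every bomb treated as exploding on the same turn):

```cpp
#include <queue>
#include <vector>
using namespace std;

// Hypothetical inputs: grid[y][x] is true if the cell is walkable, and
// danger[y][x] is true if any bomb's blast covers it (all bombs are
// assumed to explode on the same turn, per the shortcut above).
struct Cell { int x, y, dist; };

// BFS from (sx, sy); returns the reachable safe cell farthest away,
// or {-1, -1, -1} if no safe cell is reachable.
Cell farthestSafeCell(const vector<vector<bool>>& grid,
                      const vector<vector<bool>>& danger,
                      int sx, int sy) {
    int h = grid.size(), w = grid[0].size();
    vector<vector<bool>> seen(h, vector<bool>(w, false));
    queue<Cell> q;
    q.push({sx, sy, 0});
    seen[sy][sx] = true;
    Cell best = {-1, -1, -1};
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        Cell c = q.front(); q.pop();
        if (!danger[c.y][c.x] && c.dist > best.dist)
            best = c;                              // prefer the farthest safe cell
        for (int d = 0; d < 4; d++) {
            int nx = c.x + dx[d], ny = c.y + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (!grid[ny][nx] || seen[ny][nx]) continue;
            seen[ny][nx] = true;
            q.push({nx, ny, c.dist + 1});
        }
    }
    return best;
}
```

Points of interest along the way (items, good bombing spots) can be folded in by replacing the plain distance comparison with a combined score.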

Overall, I had a lot of fun!

Thanks for the contest, it was really fun.

Just a few things.

  1. One week for a contest is not enough. Really, for someone who has a full-time job and a family, and is not a robot, the real challenge is finding time for it. The most interesting part of the contest - programming the killer logic - I think stayed behind the scenes for most contestants. The same happened in the previous contest, when only one or two contestants found time for the “shepherd” part of the contest.

  2. Starting another contest before the first one has finished is also a strange idea. Really, a lot of people need to take a rest and recover. I think many will skip 1-4 days of the Accountant contest, or maybe even skip it entirely. See 1.
    It is a really strange decision, really strange.

  3. The game mechanics are really unclear, and too many details stay underwater. As for me, instead of a big description of the mechanics spread across different parts of the contest description, it would be nice to have pseudocode, or something similar. I spent a lot of time trying to understand why my character was doing strange things, because “you can’t place the bomb you were given this turn, because you don’t have it yet”.

  4. Battle performance. It really takes too long to wait for results after a submit, or after pressing play. Each time I think about writing my own sandbox, and each time I can’t find the time to do it (see 1). Maybe it would be a good idea to give us the ability to play games on our local computers (backend and client)? It would save you a lot of money and resources.

2 Likes

Hello, rank 37 here.
I really enjoyed the contest, as it was fairly easy to get into, but very challenging with all the rules.


When I first started thinking about the problem, I considered implementing a kind of Monte-Carlo tree search (generating random sequences of moves and choosing the best based on simulation, maybe with some optimizations like a genetic algorithm). However, I decided not to go for it, and instead did a better analysis of the game state.

One of the key functions I had determined whether one can escape from the current position on a given grid, assuming no new bombs are placed. What it did was:

  • While there are bombs on the grid:
    • Find all the squares reachable from the previous turn’s set (by either moving or standing still)
    • Simulate one turn (reduce the bombs’ timers and blow up the bombs whose timer reaches 0)
    • Filter the reached squares, leaving only those that haven’t been blown up

If at any point there are no squares we can reach, it means we cannot escape this situation. This serves as a filter for which moves not to take. Alternatively, the function can return the first move that leads to at least one surviving position (after all the bombs blow up).
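A sketch of that loop (simplified, with illustrative names; the walkable map is kept fixed here, although in reality boxes disappear as they are blown up):

```cpp
#include <set>
#include <utility>
#include <vector>
using namespace std;

// Hypothetical inputs: danger[t][y][x] is true if cell (x,y) is blasted on
// future turn t (precomputed from bomb timers, chains included), and
// walk[y][x] is true if the cell is walkable.
bool canEscape(const vector<vector<vector<bool>>>& danger,
               const vector<vector<bool>>& walk, int sx, int sy) {
    int h = walk.size(), w = walk[0].size();
    set<pair<int,int>> reach = {{sx, sy}};
    const int dx[] = {0, 1, -1, 0, 0}, dy[] = {0, 0, 0, 1, -1};  // stay + 4 moves
    for (const auto& blast : danger) {             // one entry per turn with bombs left
        set<pair<int,int>> next;
        for (auto [x, y] : reach)                  // expand the reachable set by one step
            for (int d = 0; d < 5; d++) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h && walk[ny][nx])
                    next.insert({nx, ny});
            }
        for (auto it = next.begin(); it != next.end(); )   // drop blasted squares
            it = blast[it->second][it->first] ? next.erase(it) : ++it;
        if (next.empty()) return false;            // trapped: nowhere left to stand
        reach = std::move(next);
    }
    return true;
}
```

Returning the first surviving move instead of a boolean just means remembering which initial step each square in `reach` descended from.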

As for the logic for placing bombs, I simply targeted the square with the highest number of boxes I could blow up, walking the least to get there. If I had many bombs available, I would also place them as I walked, provided they would blow up at least one box and would not kill me.
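As a sketch, that target selection reduces to scoring every reachable square (weights and names here are illustrative; `boxes` and `dist` are assumed precomputed per square):

```cpp
#include <climits>
#include <utility>
#include <vector>
using namespace std;

// boxes[y][x]: boxes a bomb at (x,y) would destroy; dist[y][x]: walking
// distance there (-1 if unreachable). The weight 100 just makes boxes
// dominate, with distance breaking ties.
pair<int,int> pickBombSquare(const vector<vector<int>>& boxes,
                             const vector<vector<int>>& dist) {
    int bestScore = INT_MIN;
    pair<int,int> best = {-1, -1};
    for (int y = 0; y < (int)boxes.size(); y++)
        for (int x = 0; x < (int)boxes[y].size(); x++) {
            if (dist[y][x] < 0) continue;          // unreachable square
            int score = boxes[y][x] * 100 - dist[y][x];
            if (score > bestScore) { bestScore = score; best = {x, y}; }
        }
    return best;
}
```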

This strategy led to my bot almost never dying, and for a couple of days this kept me in the top 10. However, in the last 2 days people started submitting bots that were really good at racking up boxes, and I started falling behind.


What I didn’t like about the contest was:

  • The description had some problems for the first couple of days, not fully matching the actual mechanics of the game.
  • In almost all matches, a lot of time was spent after all the boxes had been blown up, and I feel this really slowed down the ranking. The timeout rule could have been like in chess: if the exact same state (with no bombs) is reached 3 times, the game should end.
7 Likes

BlitzProg, 25th.

Got past Wood 3 and Wood 2 with an AI that calculated where to place bombs so as to clear the most boxes possible within the next 8 turns; it made short work of the bosses.
The original Wood 1 boss held most of us back for a while. I got past it without realizing: when it was promoted to Bronze, that same boss became a dummy, because for a while it kept blowing itself up with its own bombs, so I got into the Silver league by not dropping any bombs.

I stayed in the Silver league for most of the week, because I have a job and it takes a lot of my time. I simply did the usual thing I do for almost every contest: a Monte Carlo approach. Magus helped me a bit with the patience trick for the heuristic (score += turn_score * (patience ^ depth), with 0 < patience < 1), and when properly set up, it went through the entire Silver league. It was really effective at racking up points much faster than everyone in there.
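For reference, that patience evaluation of a single playout looks something like this (a minimal sketch):

```cpp
#include <cmath>
#include <vector>
using namespace std;

// Discounted score of one simulated line: points earned on later turns
// count less, so the bot prefers taking them sooner rather than stalling.
double playoutScore(const vector<double>& turnScores, double patience = 0.9) {
    double score = 0.0;
    for (size_t depth = 0; depth < turnScores.size(); depth++)
        score += turnScores[depth] * pow(patience, depth);  // turn_score * patience^depth
    return score;
}
```

With patience close to 1 the bot plays patiently; close to 0 it grabs points greedily.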

Until then, many others were way ahead of me. I do remember some players asking me when I was finally getting promoted and what was taking so long :stuck_out_tongue:


The Gold boss came up, and I started working again late Friday. I improved my heuristic a bit, which brought me close to promotion. At some point many good players were submitting at the same time, farming the boss, nearly getting me promoted. But it wasn’t enough, and by the time I could start working more, the boss had reclaimed a solid 1st place, 2 points ahead of 2nd. My AI needed something new, mostly to avoid dumb deaths (such as entering a narrow gap after dropping a bomb, with a player ready to block the other side). I started tweaking my MC to try to predict what others would do (protip: don’t do that) - but it was a failure; most of the time my AI was giving up, thinking it was dead when it really wasn’t. What I needed was to block risky moves, not calculate everything based on the assumption that the opponent would actually make those moves.

That’s when I came up with the idea of running another quick and short Monte Carlo search before starting the main one: listing the 10 possible moves (4 directions + standing still, with and without a bomb) and figuring out whether it could find a way to sneak around and survive the next turns, assuming the opponent dropped a bomb. Many times this blocked a single move, helping my bot avoid obvious traps; and in the worst possible case, where no survival possibility was found, it ignored that calculation completely and just played ignoring the others.

That did amazingly well; I woke up Saturday in the top 30, inside Legend. I added a few extra fixes on Sunday that helped me stay there, sometimes peeking inside the top 10. I ended up surprising quite a few acquaintances, because I usually never rank that high on a multiplayer leaderboard. I could probably have added a 3rd search to trap others, but I didn’t want to risk it. :slight_smile:


Overall I really enjoyed this contest; as always, the league system really helped me set progressive goals. I wish I had a bit more time to work on it!

8 Likes

I finished 21st.

Here is an overview of my last version:

  • Monte Carlo search over 10 moves followed by 8 turns of waiting. I would simulate ~200k game turns per turn, so around 11k of these “10+8”-turn strategies. I don’t think my performance was very good compared to the other top AIs. But I’ve seen so many contests where performance didn’t really matter much, so I didn’t bother.

  • Evaluate the game state at every turn and score a simulation as the sum of score*pow(0.9, turn). This is the “patience coefficient”, which I learned from Xylo’s post-mortem of Poker Chip Race. It keeps your AI from doing nothing, always hoping for a better score one day.

  • The evaluation of the game state was boxes + smaller coefficients for range, bomb-carrying capacity (without encouraging more than 5 bombs), Manhattan distance to other players, and Manhattan distance to all boxes. Plus a small penalty for placed bombs, to discourage placing useless bombs.

  • To avoid numerous deaths from getting trapped by other players, I had the following scheme: simulate for 5ms under the pessimistic assumption that enemies place a bomb whenever they can. If a solution is found where I am still alive, simulate under this pessimistic assumption for the rest of the 100ms. But if the pessimistic assumption leads only to death, simulate without it for the rest of the turn. This lets your AI avoid traps while not killing itself just because it thinks the enemies are going to kill it.
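A sketch of that time-budget split (the 5 ms / 95 ms numbers and both callbacks are illustrative stand-ins, not my real code):

```cpp
#include <chrono>
#include <functional>

// Two-phase scheme: probe the pessimistic world (enemies always bomb)
// briefly; if anything survives, keep that assumption for the rest of the
// turn, otherwise fall back to the optimistic world.
void runSearch(const std::function<void(bool)>& simulateOnce,  // one playout; arg = pessimistic?
               const std::function<bool()>& survivorFound) {   // any line where we live?
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    auto ms = [&] {
        return std::chrono::duration_cast<std::chrono::milliseconds>(
                   clock::now() - start).count();
    };
    while (ms() < 5) simulateOnce(true);          // phase 1: assume enemies always bomb
    bool pessimistic = survivorFound();           // keep pessimism only if survivable
    while (ms() < 95) simulateOnce(pessimistic);  // phase 2: rest of the 100 ms turn
}
```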

Some details:

Place bombs if you think you’re going to die

Sometimes the “pessimistic assumption” that enemies place a bomb turned out to be optimistic. It was rare, but sometimes my AI would trap and kill itself because, under its “pessimistic” assumption, an enemy would have broken a box and I would have escaped my trap through there. Then the enemy doesn’t place the bomb, and I kill myself.

~400 lines of code

Feedback:

I liked the game; it was quite bruteforcy, with the top dominated by fast languages. But there were still people quite high up who didn’t use the full 100ms.

However, submits felt much slower than in other contests: 4h long at the best of times and 8h long at peak hours. From what I remember, this was not the case in previous contests. Personally, I was basically unable to test any coefficients in my eval, and I could hardly ever be sure whether my new versions were better than the previous ones. To me this was a big issue; developing without testing is a ridiculous idea.

8 Likes

This contest was pretty fun. I wish I had more free time for it :frowning:

If I can give one piece of advice: NEVER EVER EVER do a 1v1v1v1 game with 100ms for each player!! 50ms is enough. We can’t test anything when all players use the full 100ms. A single submit takes 4 hours!

Royale, 4th

My basic strategy was: do not die and try to maximize my score.

##Getting to Wood 1
At each turn, I tried putting a bomb on each cell and simulated the explosion to see how many crates were destroyed.
Then I issued BOMB to those coordinates (until the crates exploded). CodinGame’s automatic pathfinding did the work for me.
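The crate-counting simulation is simple; here is a sketch using the contest’s grid symbols (‘.’ floor, ‘0’-‘2’ boxes, ‘X’ walls), ignoring chained bombs and items for brevity:

```cpp
#include <string>
#include <vector>
using namespace std;

// Crates destroyed by a bomb at (bx, by) with the given blast range
// (range counts the bomb's own cell). A box or wall stops the blast ray.
int cratesDestroyed(const vector<string>& grid, int bx, int by, int range) {
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    int count = 0;
    for (int d = 0; d < 4; d++)
        for (int r = 1; r < range; r++) {
            int x = bx + dx[d] * r, y = by + dy[d] * r;
            if (y < 0 || y >= (int)grid.size() ||
                x < 0 || x >= (int)grid[y].size()) break;
            char c = grid[y][x];
            if (c == 'X') break;                           // wall blocks the blast
            if (c >= '0' && c <= '2') { count++; break; }  // box destroyed, blast stops
        }
    return count;
}
```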

##Getting to Silver
I skipped the rule where only opponents’ bombs hurt, and considered that all bombs could kill me (the Bronze rule).
In order to destroy crates and avoid bombs, I used a DFS algorithm that would:

  1. update the bomb timers
  2. simulate the bomb explosions and chain explosions
  3. for leaf nodes, make all the bombs explode whatever their timer
  4. for each crate destroyed by my bombs, increment my score
  5. if I am hit by a bomb, return with a -INFINITE score
  6. recursively consider all my valid actions (MOVE in 5 directions + BOMB in 5 directions) and keep the best one

Note that I ignored the opponents’ possible actions.
At first, my DFS searched at a fixed depth of 6.
As I added more features (picking up bonuses, a leaf-node heuristic), I switched to an iterative deepening DFS that tried to go as deep as possible without risking a timeout.
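The iterative deepening wrapper itself is tiny; a sketch (names and the 90 ms budget are illustrative; a real version would also check the clock inside the DFS so a deep iteration cannot overrun the turn):

```cpp
#include <chrono>
#include <functional>

// Re-run a depth-limited DFS with increasing depth while turn time remains,
// keeping the result of the deepest completed iteration.
int iterativeDeepening(const std::function<int(int)>& dfs) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    auto ms = [&] {
        return std::chrono::duration_cast<std::chrono::milliseconds>(
                   clock::now() - start).count();
    };
    int best = 0;
    for (int depth = 1; ms() < 90; depth++)  // deepen while time remains
        best = dfs(depth);                   // keep the deepest completed result
    return best;
}
```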

##Local simulator
From my experience in previous contests, the arena gets crowded near the end of the competition, so it can take a long time to get a new evaluation of your AI and decide whether it is better than the previous one.
So I wrote my own local simulator (most of the rules were already implemented in the DFS) in order to quickly run many simulations on my computer.
I tried to mimic the maps provided by CodinGame (same walls, same number of crates/bonuses).
By comparing game results for the same map between my local simulator and CodinGame, I found many bugs in my simulation (bombs did not explode/block/chain, or bonuses did not appear/disappear as expected).
I also used a smaller timeout value to run the simulations faster and emphasize the importance of the heuristic.

##Getting to Legend
After trying Monte-Carlo without much success, I kept the DFS algorithm until the end of the contest, tuning it for performance (changing my data structures several times).
My score evaluation was basically:

  • bonuses I pick up, depending on my current range and number of bombs
  • the number of crates I destroy
  • a big malus if I die
  • for leaf nodes: Manhattan distance to the remaining crates, and the number of escape routes I have

##“Fake” bombs
At this point, looking at the replays, I was sometimes dying stupidly because of a bomb placed by an opponent that would trap me in a dead end or link a chain of bombs that I had not considered.
On the other hand, I did not want to simulate the opponents’ moves, because I thought this would cost too much time.
So I just considered that each opponent with a bomb left could place it at his current position during the first turn of the simulation.
Then, during the DFS, if such a “fake” bomb exploded and killed me, it would lower my score (but not as much as a real death).
Also, my character was allowed to move through “fake” bombs, with another score malus.
This allowed my AI to avoid dangerous situations, but still escape if the opponent did not really attack me.
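In code, the fake bombs boil down to a pre-pass plus a few scoring constants; a sketch (struct fields, names and values are illustrative, only the 8-turn fuse is the game’s):

```cpp
#include <vector>
using namespace std;

struct Bomb   { int x, y, fuse, range; bool fake; };
struct Player { int x, y, bombsLeft, range; };

// Before the DFS: every opponent with a bomb left is assumed to drop one
// at his current position on the first simulated turn.
void addFakeBombs(vector<Bomb>& bombs, const vector<Player>& opponents) {
    for (const Player& op : opponents)
        if (op.bombsLeft > 0)
            bombs.push_back({op.x, op.y, 8, op.range, /*fake=*/true});
}

// During the DFS, fake bombs are scored differently from real ones: dying
// to one is a malus rather than game over, and walking through one is
// allowed with a smaller malus.
const int REAL_DEATH    = -1000000;  // hard fail
const int FAKE_DEATH    = -10000;    // strong discouragement
const int FAKE_CROSSING = -100;      // mild discouragement
```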

Finally, I would like to thank CodinGame and the staff for this contest, as usual it was really fun!

20 Likes

Thanks to the CG team for this great challenge; it had all the good ingredients: fun, simplicity of the problem, but complexity of solutions, a nice taste of old games nostalgia.

I’ll second a couple of the comments above:
. a single week is short; it indeed didn’t allow me to give much thought to the kill-mode strategies, which could have been a lot of fun, as I was too absorbed by fixing my bot so it wouldn’t kill itself in the first place! A two-week format may be a good idea.

. submit time is not reasonable

Rank 6. My first contest at CG, and the experience was amazing. Standing at rank 1 a few times, for some 20+ cumulative hours, was a huge thrill. :smiley:

Game feedback

  • The game itself is great. The complexity was balanced just right, and had enough room to elaborate different strategies.
  • The bizarre ordering of actions within each turn caused a lot of unnecessary frustration and bugs: “half-destroyed” boxes you can’t walk through, getting your bomb back one turn after the explosion, the input not giving the real/full state of the game, etc.
  • I wish there were different wall combinations and fewer items overall; it could have added more interesting depth to the game.
  • The endgame was boring and a waste of 100+ rounds. The game should have ended as soon as the last box was destroyed. That would have given us MUCH faster IDE and submit results.
  • Seconding Magus on 50ms per turn to help with the same issue, and maybe favor strategy over computing power.
  • The server issues on the last day or two were a nightmare. I wasted a lot of time (8+ hours) with servers timing out in over 9 games out of 10 at times.

My strategy

I’ll only summarize the final version, to keep it short. I might write a blog post with the full story and more details later.

  • I used a variation of GOAP (goal-oriented action planning) with A*, which on average planned ahead over the next 20-25 rounds.
  • To elaborate: instead of evaluating the set of immediate possible moves (directions and/or dropping a bomb), I defined medium-term actions: pick up this item, bomb this (set of) boxes, wait for an explosion to be over, etc.
  • The cost function is the number of turns elapsed after the current one.
  • The heuristic function is the negated cumulative score of each action taken. For example, bombing boxes had the biggest score, with a small bonus for boxes destroyed earlier or containing an item, another bonus for picking up an item, etc.
  • I later added a Voronoi diagram to favor areas I can reach faster than enemies, and to penalize the opposite (see the sketch after this list). It helped keep my bot safer and avoid wasting time on boxes the enemy would probably get first anyway.
  • Towards the end, I added a leash algorithm to check the possible bombs the enemy could place in the next 3 turns and determine whether a (suicidal) move this turn could lead to near-guaranteed death. Sadly, the final version still had a few bugs, and the long site/server issues did not help me test it as much as I would have wanted.
  • The leash pass was executed first; then, when planning, it would yank out the action nodes starting with the suicidal moves.
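The Voronoi part can be sketched as a multi-source BFS (an illustration of the idea, not my actual code; ownership here is first-come by BFS order, and tracking distances would be needed to mark exact ties as neutral):

```cpp
#include <queue>
#include <utility>
#include <vector>
using namespace std;

// walk[y][x]: cell is walkable. Returns owner[y][x] = index of the player
// who reaches the cell first (-2 if unreachable).
vector<vector<int>> voronoiOwners(const vector<vector<bool>>& walk,
                                  const vector<pair<int,int>>& players) {
    int h = walk.size(), w = walk[0].size();
    vector<vector<int>> owner(h, vector<int>(w, -2));   // -2 = unvisited
    queue<pair<int,int>> q;
    for (int p = 0; p < (int)players.size(); p++) {     // all sources at distance 0
        auto [x, y] = players[p];
        owner[y][x] = p;
        q.push({x, y});
    }
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walk[ny][nx]) continue;
            if (owner[ny][nx] == -2) {                  // first to arrive claims the cell
                owner[ny][nx] = owner[y][x];
                q.push({nx, ny});
            }
        }
    }
    return owner;
}
```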

Other notes

  • I wasted a lot of time trying to make a fast and reliable heuristic to evaluate traps and danger zones, when in the end the leash turned out better overall, even with its bugs.
  • It turns out I was actually losing a lot more to poor scores than to deaths. Focusing that much on preventing death was like optimizing a function that takes 5% of the running time - probably not worth it when you can optimize the other 95% instead. Doing it again, I would spend more time improving the scoring and planning heuristics.
  • Against my better judgement, I did not take the time to write functional tests. I paid for it several times, debugging issues after refactoring that could easily have been prevented.
  • Magus’ stats website was a tremendously helpful tool for figuring out which opponents/game modes to focus on next. I hope CG will provide stats like this in future games, with more metrics like timeouts, deaths, score differentials, etc.
  • The timing of The Accountant contest was very poor. I understand the marketing reasons behind it, and if it helps CG earn more money, all the better, but it still sucked to see attention turn away from the current contest towards the end.

Thanks to CG for the awesome contest and to the participants for providing an incredible challenge, kudos to Ixanezis and Romka who ended the contest in such a dominating fashion. I am eager to participate in the next multiplayer contest. :slight_smile:

8 Likes

I second thought #4! I found my constructive time severely limited by two things:

  1. The amount of time it takes to run a battle. Especially with 3 other players added in, when most or all of them were making the most of the 100ms timeout, it took a long time to debug a failure point in my code or evaluate a weakness in my logic.

  2. The settling time for ranking placement after submitting code. Sometimes I wanted to test a minor incremental change and try to scrape out a few more rankings. This was impossible, however, in the gold/legend ladders: it took hours for a submission to complete all of its battles. When I kept coding during that time, I often found myself working under the false assumption that my previous improvements were helpful; once the new rank settled, if it turned out worse, I had to backtrack and rework the newest changes.

Also, I would like to submit a request for future contests. In games like this one, some players had nondeterministic AIs: if you ran the same match against the same player under the same conditions, the result would not always be the same. It would be nice if the random seeding for player processes were the same each time the ‘replay under same conditions’ option was selected. Often I would discover a weakness in my AI exposed by one of these players, but be unable to reproduce it by replaying that game. Instead, I had to search through my battle history to find another instance of the same weakness being exposed.

But that’s impossible, or at least hard to implement, if a bot uses the current time as its rand seed.
If the input contained the map generation seed, though, bots could use it as their rand seed. But players can’t be forced to do that. It can only be some kind of gentlemen’s agreement.

As an alternative, the logs of all matches could be made accessible, not just those played in the IDE. That way you could easily reproduce a needed game state, provided you logged all your inputs.

1 Like

The ability to download the inputs is really necessary for debugging! I spent a lot of time inserting debug prints and re-running the game in the built-in IDE, trying to figure out what was going on. Sometimes opponents act randomly and states are impossible to reproduce. If I could run things locally, the load on the servers would also be lighter.

As an FYI, I used CG Spunk extensively in this contest to test out my bots. It keeps a log of STDERR for every match played, and every time I found an interesting situation that my bot didn’t handle correctly, it became a unit test for my code.

  • danBhentschel
1 Like

Can CG Spunk show the STDERR of a match that was not run in the IDE? An arbitrary replay?

Yeah, I used the same approach :slight_smile:
http://pastebin.com/nkfgxeUi

Moreover, for EACH game state I auto-generated a unit test which forbids the move the bot had just made:
http://pastebin.com/v7QSNkkP

No, but what it does is run many (hundreds, if you desire) matches against random opponents in the IDE. So I would frequently just start it up and walk away for a couple of hours. Then, when I came back, I could look through the matches to see if any produced unexpected behavior.

It won’t help you for a specific match not originally run in the IDE, though.

  • danBhentschel
1 Like

Kimiyuki, 3rd.

My strategy

It was a simple beam search: generate the 10 possible next states (MOVE/BOMB and 5 directions) for each current state and select the good ones; the depth was 7-8 turns.
I thought that the main difficulty of this game was the gap in time between an action (e.g. placing a bomb) and its effect (e.g. the explosion), so I tried to solve this.
I think the important points of my implementation are:

  • In the evaluation function, I estimate the boxes which will be broken in the future.
  • For each state in the beam search, I roughly calculate whether it is survivable or not.
  • For diversity of states, I restrict the state set to hold only a few states for the same player position.
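A sketch of one beam step with that per-position cap (state fields and names are illustrative):

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>
using namespace std;

struct State { int x, y; double score; /* bombs, boxes, timers, ... */ };

// Expand every state into its (up to) 10 children, sort by score, then keep
// the best ones while allowing only `perPosition` states per player cell,
// so the beam does not fill up with near-duplicates.
vector<State> nextBeam(const vector<State>& beam,
                       vector<State> (*expand)(const State&),
                       int beamWidth, int perPosition) {
    vector<State> all;
    for (const State& s : beam)
        for (const State& c : expand(s)) all.push_back(c);
    sort(all.begin(), all.end(),
         [](const State& a, const State& b) { return a.score > b.score; });
    map<pair<int,int>, int> used;                 // states kept per (x, y)
    vector<State> next;
    for (const State& s : all) {
        if (used[{s.x, s.y}] >= perPosition) continue;  // diversity cap
        used[{s.x, s.y}]++;
        next.push_back(s);
        if ((int)next.size() >= beamWidth) break;
    }
    return next;
}
```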

Other notes:

  • I didn’t implement much about the other players, so my AI was often killed by them.
  • I suspect that one step of the beam search should be one event (e.g. getting an item) instead of one turn.

Feedback

  • I was confused by some ambiguous rules. In particular, the ghosts of boxes were a bad thing.
  • I used the #pragma GCC optimize "O3,omit-frame-pointer,inline" magic. I think the -O3 command-line option should be set by default, rather than via pragma.

I enjoyed this game. Thanks to CodinGame.

11 Likes

Feedback

  • Overall fun contest and well executed. This is my second contest. CodinGame is the best thing I discovered this year.
  • Please improve the clarity of the game rules. I took it as my responsibility to resolve any questions I had by making hypotheses and testing them. This time I even wrote the server locally to get an absolute understanding of the rules. Reading the forums, it appears rule clarity is a universal problem.
  • Battle computation was unusually long in this contest. On the last day, when I beat the Bronze boss, it took 5 hours to get promoted to Silver. This happened again when the contest ended. Luckily CG allowed the computation to finish after the contest, and I got promoted to Gold.
  • Also please improve the overall page loading time of the site.
  • And the Accountant contest overlaps with Hypersonic? I love CG but cannot afford to spend that kind of time on it.

I ranked #339. The post below is not so much about strategy as a tl;dr account of my progression and lessons learned:

First of all, I’ve been on CG for several months, so my initial excitement was over. I am a family man, so I wasn’t going to do a contest at the expense of everything else. Still, I took the contest as a challenge to see what I could do with limited time, and for the extra CP gain (2000+!!).

I was thinking long term and wanted to build a foundation for tackling several multis in the near future. The first thing I did was to take the contest offline and build a client/server setup which I can reuse for other multis. I wrote the game sim so I could get a complete understanding of the rules (and of CG’s conventions for multis in general). My other idea was to place my bot in the arena and play/test it by playing against it interactively with the WASD keys. It would be useful for functional testing of the bot and server logic, and it would be extra fun for my kid, who likes to watch me working on CG.com

Writing the game sim was not without its costs. By the time I got the client/server, game sim and visualisation done, it was already Thursday. Looking back, this time was probably necessary: writing the simulator gave me certainty about the game rules. The bomb explosion and countdown rules are important in the upper leagues for calculating escape routes.

Wood 3 was easily beaten by reusing the logic I wrote for the game sim: I find hub points to plant bombs that destroy the most boxes, taking into account the steps I need to walk and the waiting time until my next bomb.

Wood 2 cost me another day, as I had to update the simulator logic to drop items and account for power-ups. The client-side logic was easy - I just updated it to pick up items.

Then I shot past Wood 1 without code changes, as my bot minded its own business and did not often run into the enemy bots.

Bronze was a big problem. I was tired and found it hard to muster the mental clarity to create the bomb-evasion logic. It took me the whole of Saturday. The final strategy was still the same goal-based one as in the earlier leagues, except that I added a “filter” which overrides the main logic and pulls the bot out of a situation before it’s too late to escape.

After Bronze, I landed in the middle of Silver. I was running out of time and found myself on par with the boss. Eventually I beat Silver by adding just a few lines of code: I assigned value to players the way boxes have value, so my bot would go after them. I finished in Gold.

My lessons learned

Taking the contest offline gave me the comfort of large console screens and step debugging. It forced me to understand the rules very well through writing the simulator.
But I should have watched more replays online to get inspired by other players’ strategies.

The interactive concept did not work as well as I expected. It’s mentally exhausting to play against my bot move by move, and some games, which require tedious thinking and precise controls, cannot be played this way. It’s less effort and more productive to just put 2 versions of my bot in the arena and watch the replays.

Thoughts on the strategy

I used to think the game mechanics of CodeBusters were more interesting due to the FOW and team aspects. I did not expect much from Hypersonic, as it was based on Bomberman, a game that ran on old systems and probably lends itself well to search techniques. But looking at replays and reading forum posts, I realised there were many subtleties to the strategy as well.

I also looked online for Bomberman AIs and previous contests with the same theme.

At first I didn’t think much of Monte Carlo, minimax or DFS techniques, as they require a good heuristic/scoring of the game situation to work effectively; and given the number of possible moves a player can make (10), I thought the depth could not go far enough when considering 10 moves for me and 10 for each other player. Reading the above posts, it seems these search techniques did well without considering other players’ actions.

If I had time I would look at bomb chaining and attacking logic.

2 Likes

Here’s the postmortem from speedrun: http://ankursheel.com/blog/3883/hypersonic-codingame-ai-contest-postmortem

3 Likes

Contest feedback

  • As with most CG contests, the strategy proved to be deeper than I expected at first glance. Kudos for making a challenging but easily understandable game.

  • The difficulty of making a serviceable AI was about right.

  • It was more challenging to do something useful than in Coders Strike Back. This is a good thing.

  • It was less difficult to get up and going than in Code Busters. This is also a good thing.

  • The endgame was a bit disappointing. With a fuse time of only 8 turns, it’s not very useful to place more than 3 bombs at a time. Given the limited board layouts available, this meant that once all the boxes were removed, it was practically impossible to kill a well-written AI. This could have been alleviated by:

  • Making more complex wall layouts, as has been mentioned before.

  • Adding the ability to kick and/or throw bombs, as in many classic versions of Bomberman.

  • Adding the ability to place bombs with longer than normal (12 turns) or shorter than normal (4 turns) fuses.

  • While I understand how these things happen, I found the mid-contest change to the Wood 1 boss a bit disruptive. Yes, it was hard. I agree. My personal opinion, though, is that it should have been left as-is.

  • Does anyone else find the two top-placing contestants a bit suspicious? :smirk:

Game logic

  • My logic was designed with reproducibility in mind. I wanted to be able to provide a set of inputs for a given turn, and always get a predictable output. Two main guidelines fed into this:

  • Stateless - Moves were calculated each turn, with no information saved from previous turns. The one exception to this was score information, which was not provided in the game inputs, and needed to be tracked over time.

  • Deterministic - No random elements. A couple of times I accidentally broke this guideline, and had to correct my mistake.

  • The first action of each turn was to build a 3D representation of the game board, with the 3rd dimension being “time”. Then, when planning out moves, I could look up the proper board state in the array to determine what the board would look like after N moves (see the sketch at the end of this section). To do this, I:

  1. Make a list of the bombs whose remaining fuse equals the current board index, and calculate the collection of squares exploded by those bombs.
  2. Add to the list any bombs located on an exploded square, and loop back to step #1 until no more bombs can be added.
  3. Replace any boxes (0, 1, or 2) with a ‘#’ character to indicate that the box has been destroyed, but is still impassable for this turn.
  4. Remove any items on an exploded square.
  5. Add a new board to the array. Remove all exploded bombs and destroyed boxes, and add new items (from destroyed boxes) where appropriate.
  6. Repeat this procedure until no more bombs are left. After this point, the board state is assumed to be static for subsequent turns.
  • Also at the start of each turn, I created up to seven additional hypothetical board state arrays, following the same procedure as above:

  • One in which I have placed a bomb in my current location.

  • One each for a bomb placed on the next turn in each reachable square adjacent to me.

  • One in which all my opponents have placed a bomb in their current location.

  • One in which all players (opponents and I) have placed a bomb in their current location.

  • I created a method called IsTrapped() which takes an X, Y location and a 3D board state as input, and returns a boolean indicating whether or not a player at the provided location is able to survive.

  • I created a list of every single reachable square on the board (and a history of moves to get there) using a BFS.

  • Squares were eliminated if I would be trapped by moving to them (according to the hypothetical board state in which opponents had placed bombs).

  • If a square had previously been “visited” by an alternate route, I compared the scores of the two routes to determine which was better, and dropped the one with the lower score.

  • All reachable squares were scored by:

  • Number of boxes that could be destroyed from that location

  • Number of items collected along the way

  • Distance to the location (above and beyond the number of turns required to replenish my bomb supply)

  • If the square was less than 2 turns away, then bonus points if an opponent would be trapped by placing a bomb there (according to the appropriate hypothetical board state array).

  • Given this list of candidate destinations, I would either simply MOVE to the first square in the move history of the highest-scoring destination, or BOMB if a logical combination of the following held true:

  • Placing a bomb would not leave me trapped (according to the hypothetical board state array where all players place a bomb)

  • My current location was the selected destination

  • The score for the current location was > 0 and I had more than one bomb available

  • If the list of candidate destinations was empty, it meant I was potentially trapped. I would create a new list of candidate destinations, this time without discarding any previously visited squares, using a DFS. This allowed me to “zigzag” to possibly avoid staggered explosions. This extra “Oh sh**!” logic got me out of a lot of tough situations, while keeping my default pathfinding much faster.
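The time-layered board described above boils down to a loop like this (a sketch with illustrative names; the one-turn resolver, which implements steps 1-5 of the list, is passed in rather than shown):

```cpp
#include <functional>
#include <string>
#include <vector>
using namespace std;

// Illustrative stand-in for the real bomb data structure.
struct Bomb { int x, y, fuse, range; };

// Builds one board per future turn, starting from the current one, by
// repeatedly applying a one-turn resolver (tick fuses, chain explosions,
// '#' ghosts, item removal, cleanup) until no bombs remain. After the
// last entry, the board state is static.
vector<vector<string>> boardsOverTime(
        vector<string> board, vector<Bomb> bombs,
        const function<void(vector<string>&, vector<Bomb>&)>& resolveOneTurn) {
    vector<vector<string>> timeline = {board};   // index 0 = current state
    while (!bombs.empty()) {
        resolveOneTurn(board, bombs);            // advance one turn in place
        timeline.push_back(board);               // board state after N turns
    }
    return timeline;
}
```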

Tests

As I mentioned before, I ran MANY games in the IDE using CG Spunk. I formatted the STDERR output so that it printed both the information read from STDIN and the move selected by my algorithm, as code that could easily be copied and pasted into my unit test framework. Whenever I came across a situation where my AI made a bad move, I could paste this information into a test of the format:

Given contents of STDIN
Should not select stupid move

Over time, my arsenal of stupid moves to avoid grew, and I was able to hone my logic without repeating past mistakes.

  • danBhentschel
8 Likes