But that’s impossible, or at least hard, to implement if the bot uses the current time as its rand seed.
If the input contained the map generation seed, though, bots could use it as their rand seed. But players can’t be forced to do that; it could only be some kind of gentlemen’s agreement.
As an alternative, the logs of all matches could be made accessible, not only those played in the IDE. That way you could easily reproduce any game state, provided you logged all your inputs.
The ability to download the inputs is really necessary for debugging! I spent a lot of time inserting debug prints and re-running the game in the built-in IDE, trying to figure out what’s going on. Sometimes opponents act randomly and states are impossible to reproduce. If I were running things locally, the load on the servers would also be lighter.
As an FYI, I used CG Spunk extensively in this contest to test out my bots. It keeps a log of STDERR for every match played, and every time I found an interesting situation that my bot didn’t handle correctly, it became a unit test for my code.
- danBhentschel
Can CG Spunk show the STDERR of a match that was not run in the IDE? An arbitrary replay?
Yeah, I used the same approach:
http://pastebin.com/nkfgxeUi
Moreover, for EACH game state I auto-generated a unit test that forbids the move the bot just made:
http://pastebin.com/v7QSNkkP
No, but what it does is run many matches (hundreds, if you desire) against random opponents in the IDE. So I would frequently just start it up and go away for a couple of hours. Then, when I came back, I could look through the matches run to see if any produced unexpected behavior.
It won’t help you for a specific match not originally run in the IDE, though.
- danBhentschel
Kimiyuki, 3rd.
My strategy
It was a simple beam search: generate 10 next states (MOVE/BOMB × 5 directions) for each current state and keep the good ones; the depth was 7–8 turns.
I thought that the main difficulty of this game is the distance of time between the action (e.g. putting a bomb) and the effect (e.g. explosion), so I tried to solve this.
I think the important points of my implementation are:
- In the evaluation function, I estimate the boxes that will be broken in the future.
- For each state in the beam search, I roughly calculate whether it is survivable or not.
- For diversity of states, I restrict the state set to only a few states per player position.
Other notes:
- I didn’t implement much about the other players, so my AI was often killed by them.
- I suspect that one step of the beam search should be one event (e.g. getting an item) instead of one turn.
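The beam loop described above can be sketched roughly like this. The state representation and the parameter values are my assumptions; only the beam mechanics and the per-position diversity restriction follow the post:

```python
def beam_search(start, next_states, score, depth=8, width=300, per_pos=2):
    """Generic beam search: expand every state in the beam, sort the
    candidates by score, and keep at most `width` of them. To preserve
    diversity, keep at most `per_pos` states per player position (a
    stand-in for the per-position restriction described above)."""
    beam = [start]
    for _ in range(depth):
        candidates = [n for s in beam for n in next_states(s)]
        candidates.sort(key=score, reverse=True)
        beam, per_position = [], {}
        for s in candidates:
            count = per_position.get(s["pos"], 0)
            if count >= per_pos:
                continue  # too many states at this position already
            per_position[s["pos"]] = count + 1
            beam.append(s)
            if len(beam) >= width:
                break
        if not beam:
            break
    return max(beam, key=score) if beam else start
```

In the real bot each state would carry the full grid, bombs and items, `next_states` would generate the 10 MOVE/BOMB actions, and `score` would include the estimated count of boxes broken in the future.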
Feedback
- I was confused by some ambiguous rules. In particular, the ghosts of destroyed boxes were a bad thing.
- I used the
#pragma GCC optimize "O3,omit-frame-pointer,inline"
magic. I think the `-O3` command-line option should be set by default, rather than via a pragma.
I enjoyed this game. Thanks for CodinGame.
Feedback
- Overall fun contest and well executed. This is my second contest. CodinGame is the best thing I discovered this year.
- Please improve the clarity of the game rules. I took it as my responsibility to resolve any questions I had, by making hypotheses and testing them. This time I even wrote the server locally to get an absolute understanding of the rules. Reading the forums, it appears rule clarity is a universal problem.
- Battle computation was unusually long in this contest. On the last day when I beat the Bronze boss it took 5 hours to get promoted to Silver. This happened again when the contest ended. Luckily CG allowed computation to finish after the contest, and I got promoted to Gold.
- Also please improve the overall page loading time of the site.
- And the Accountant contest overlaps with Hypersonic? I love CG but cannot afford to spend that kind of time on it.
I ranked #339 - The post below is not so much about strategy, but my tl;dr account of my progression and lessons learned:
First of all, I’ve been on CG for several months, so my initial excitement was over. I am a family man, so I wasn’t going to do a contest at the expense of everything else. Still, I took the contest as a challenge to see what I could do with limited time, and for the extra CP gain (2000+!!).
I was thinking long term and wanted to build a foundation for tackling several multis in the near future. The first thing I did was take the contest offline and build a client/server setup that I can reuse for other multis. I wrote the game sim so I could get a complete understanding of the rules (and of CG’s conventions for multis in general). My other idea was to place my bot in the arena and test it by playing against it interactively with the WASD keys. That would be useful for functional testing of the bot and server logic, and it would be extra fun for my kid, who likes to watch me working on CG.com.
Writing the game sim was not without its costs: by the time I got the client/server, game sim and visualisation done, it was already Thursday. Looking back, the time spent may have been necessary, since writing the simulator gave me certainty about the game rules. The bomb explosion and countdown rules are important in the upper leagues for calculating escape routes.
Wood 3 was easily beaten by reusing the logic I wrote for the game sim: I find hub points to plant bombs that destroy the most boxes, taking into account the steps I need to walk and the waiting time until my next bomb.
Wood 2 cost me another day, as I had to update the simulator logic to drop items and account for power-ups. The client-side logic was easy: I just updated it to pick up items.
Then I shot past Wood1 without code changes, as my bot is minding its own business and does not often run into the enemy bot.
Bronze was a big problem. I was tired and found it hard to muster the mental clarity to create the bomb-evasion logic. It took me the whole of Saturday. The final strategy was still the same goal-based one as in the earlier leagues, except I added a ‘filter’ that overrides the main logic and gets the bot out of a situation before it’s too late to escape.
After Bronze I landed in the middle of Silver. I was running out of time and found myself on par with the boss. Eventually I defeated Silver by adding just a few lines of code: assigning value to enemy players the way boxes have value, so my bot would go after them. I finished in Gold.
My lessons learned
Taking the contest offline gave me the comfort of large console screens and step debugging. Writing the simulator also forced me to understand the rules very well.
But I should have watched more replays online to get inspired by other players’ strategies.
The interactive concept did not work as well as I expected. It’s mentally exhausting to play against my bot move by move, and some types of games, those requiring tedious thinking and precise controls, cannot be played this way. It’s less effort and more productive to just put two versions of my bot in the arena and watch the replays.
Thoughts on the strategy
I used to think the game mechanics of CodeBusters were more interesting due to the FOW and team aspects. I did not expect much from Hypersonic as it was based on Bomberman, a game that ran on old systems and probably lends itself well to search techniques, but looking at replays and reading forum posts I realised there were many subtleties to the strategy as well.
I also looked online for Bomberman AIs and previous contests with the same theme.
At first I didn’t think much of Monte Carlo, minimax or DFS techniques, as they require a good heuristic/scoring of the game situation to work effectively. Given the number of possible moves a player can make (10), I thought the depth could not go far enough once you consider 10 moves for myself and 10 moves for another player. Reading the above posts, it seems these search techniques did well even without considering other players’ actions.
If I had time I would look at bomb chaining and attacking logic.
Here’s the postmortem from speedrun: http://ankursheel.com/blog/3883/hypersonic-codingame-ai-contest-postmortem
Contest feedback
- As with most CG contests, the strategy proved to be deeper than I expected at first glance. Kudos for making a challenging but easily understandable challenge.
- The difficulty of making a serviceable AI was about right.
- It was more challenging to do something useful than in Coders Strike Back. This is a good thing.
- It was less difficult to get up and going than in Code Busters. This is also a good thing.
- The endgame was a bit disappointing. With a fuse time of only 8 moves, it’s not very useful to place more than 3 bombs at a time. With the limited board layouts available, this means that, once all the boxes were removed, it was practically impossible to kill a well-written AI. This could have been alleviated by:
  - Making more complex wall layouts, as has been mentioned before.
  - Adding the ability to kick and/or throw bombs, as in many classic versions of Bomberman.
  - Adding the ability to place bombs with longer than normal (12 turns) or shorter than normal (4 turns) fuses.
- While I understand how these things happen, I found the mid-contest change to the Wood 1 boss to be a bit disruptive. Yes, he was hard. I agree. My personal opinion is that it should have been left as-is, though.
- Does anyone else find the two top-placing contestants a bit suspicious?
Game logic
- My logic was designed with reproducibility in mind. I wanted to be able to provide a set of inputs for a given turn and always get a predictable output. Two main guidelines fed into this:
  - Stateless - moves were calculated each turn, with no information saved from previous turns. The one exception was score information, which was not provided in the game inputs and needed to be tracked over time.
  - Deterministic - no random elements. A couple of times I accidentally broke this guideline and had to correct my mistake.
- The first action of each turn was to build a 3D representation of the game board, with the 3rd dimension being “time”. Then, when planning out moves, I could look up the proper board state in the array to determine what the board would look like after N moves. To do this, I:
  1. Make an array of the bombs whose remaining fuse is equal to the board index and calculate the set of squares exploded by these bombs.
  2. Add any bombs located on exploded squares to that array and loop back to step 1 until no more bombs can be added.
  3. Replace any exploded boxes (‘0’, ‘1’, or ‘2’) with a ‘#’ character to indicate that the box has been destroyed but is still impassable for this turn.
  4. Remove any items on an exploded square.
  5. Add a new board to the array: remove all exploded bombs and destroyed boxes, and add new items (from destroyed boxes) where appropriate.
  6. Repeat this procedure until no bombs are left. After that point, the board state is assumed to be static for subsequent turns.
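The chain-reaction fixed point in steps 1–2 can be sketched like this. This is a deliberate simplification of the full board-copying procedure: it tracks only bomb timing and ignores walls, boxes and items:

```python
def explosion_turns(bombs, bomb_range=3):
    """bombs: dict {(x, y): fuse_turns}. Returns {(x, y): turn_exploded},
    propagating chain reactions: a bomb caught in another bomb's blast
    explodes on that same turn (walls/boxes not modeled here)."""
    fuse = dict(bombs)
    exploded = {}
    turn = 0
    while fuse:
        turn += 1
        # Step 1: bombs whose fuse has run out this turn.
        frontier = [p for p, f in fuse.items() if f <= turn]
        while frontier:
            blast = set()
            for (x, y) in frontier:
                exploded[(x, y)] = turn
                del fuse[(x, y)]
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    for s in range(1, bomb_range):
                        blast.add((x + dx * s, y + dy * s))
            # Step 2: any still-ticking bomb in the blast goes off now too.
            frontier = [p for p in fuse if p in blast]
    return exploded
```

The full version would additionally copy the grid for each time index, marking destroyed boxes as ‘#’ for one turn and dropping items, as described in the steps above.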
- Also at the start of each turn, I created up to seven additional hypothetical board state arrays, following the same procedure as above:
  - One in which I have placed a bomb at my current location.
  - One each for a bomb placed on the next turn in each reachable square adjacent to me.
  - One in which all my opponents have placed a bomb at their current locations.
  - One in which all players (opponents and I) have placed a bomb at their current locations.
- I created a method called `IsTrapped()` which takes an X, Y location and a 3D board state as input and returns a boolean indicating whether or not a player at the provided location is able to survive.
- I created a list of every single reachable square on the board (and a history of moves to get there) using a BFS.
  - Squares were eliminated if I would be trapped by moving to that square (according to the hypothetical board state in which opponents had placed bombs).
  - If a square had been previously “visited” by an alternate route, I compared the scores of the two routes to determine which was better and dropped the one with the lower score.
- All reachable squares were scored by:
  - The number of boxes that could be destroyed from that location
  - The number of items collected along the way
  - The distance to the location (above and beyond the number of turns required to replenish my bomb supply)
  - If the square was less than 2 turns away, bonus points if an opponent would be trapped by placing a bomb there (according to the appropriate hypothetical board state array).
- Given this list of candidate destinations, I would either simply MOVE to the first square in the move history of the highest-scoring destination, or BOMB if a logical combination of the following held true:
  - Placing a bomb would not leave me trapped (according to the hypothetical board state array where all players place a bomb).
  - My current location was the selected destination.
  - The score for the current location was > 0 and I had more than one bomb available.
- If the list of candidate destinations was empty, it meant that I was potentially trapped. I would then create a new list of candidate destinations, this time using a DFS and without discarding any previously visited squares. This allowed me to “zigzag” to avoid staggered explosions. This extra “Oh sh**!” logic got me out of a lot of tough situations, while keeping my default pathfinding much faster.
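A minimal version of the `IsTrapped()` idea, searching forward in time through precomputed board states, might look like this. The encoding is my own simplification: `deadly[t]` and `blocked[t]` are sets of cells that explode or are impassable at turn t, derived from the 3D board array:

```python
from collections import deque

def is_trapped(x, y, deadly, blocked, horizon):
    """BFS over (position, time): try every move sequence up to
    `horizon` turns ahead. Returns True when no sequence keeps the
    player alive, i.e. the player is trapped."""
    queue = deque([(x, y, 0)])
    seen = {(x, y, 0)}
    while queue:
        px, py, t = queue.popleft()
        if t == horizon:
            return False  # survived the whole window: not trapped
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny, nt = px + dx, py + dy, t + 1
            if (nx, ny) in blocked[nt] or (nx, ny) in deadly[nt]:
                continue  # can't stand here on that turn
            if (nx, ny, nt) not in seen:
                seen.add((nx, ny, nt))
                queue.append((nx, ny, nt))
    return True  # every path dies before the horizon
```

The real method would also bound positions by the grid and treat bombs and ghost boxes as blocked cells; `horizon` would cover the longest pending fuse.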
Tests
As I mentioned before, I ran MANY games in the IDE using CG Spunk. I formatted the STDERR output so that it printed both the information read from STDIN and the move selected by my algorithm, as code that could easily be copy/pasted into my unit test framework. Whenever I came across a situation where my AI made a bad move, I could paste this information into a test of the form:
Given contents of STDIN
Should not select stupid move
Over time, my arsenal of stupid moves to avoid grew, and I was able to hone my logic without repeating past mistakes.
- danBhentschel