Tell us what your strategy was for the Back to the Code multiplayer contest, or just leave a comment about the event.
first back is ok, second is lol
Oh my… Fixed
When I began the challenge, I did not know where to go. First, I thought it worked on the same principle as Tron, but since it was possible to go through walls, that assumption had to be wrong. I thought about a general technique like alpha-beta, but my mind got stuck on the evaluation function: I had no idea what to evaluate or how to evaluate it. So I chose to begin simple and increase the complexity of my algorithm over time.
The first thing I did was pure exploration: simply go to the nearest free cell. The problem was that it never made closed loops. So I forced a direction that could only change one way (when possible). That made more loops. However, the loops were so big that they were very hard to close. Then I limited the size of the loops by limiting the number of steps in one direction. There were still problems closing the loops correctly.
So I created a loop coefficient: it represented, from a given point, the ability to create a loop. It consisted of exploring in one direction, turning, exploring in that new direction, and observing whether a wall was hit. That gave good results.
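The loop coefficient described above might look something like this minimal sketch; the grid encoding, the step limit, and the scoring formula are my assumptions, not the poster's actual code:

```python
# Hypothetical sketch of the "loop coefficient": walk in one direction,
# turn right, walk again, and reward positions where a blocking cell
# ('#' = wall or owned cell) is met quickly, since loops close fast there.

def loop_coefficient(grid, x, y, dx, dy, max_steps=5):
    h, w = len(grid), len(grid[0])

    def walk(x, y, dx, dy):
        steps = 0
        while steps < max_steps:
            nx, ny = x + dx, y + dy
            if not (0 <= ny < h and 0 <= nx < w) or grid[ny][nx] == '#':
                return steps, x, y, True   # hit a wall / owned cell
            x, y = nx, ny
            steps += 1
        return steps, x, y, False          # ran out of steps in the open

    s1, x, y, hit1 = walk(x, y, dx, dy)
    s2, x, y, hit2 = walk(x, y, -dy, dx)   # turn right: (dx,dy) -> (-dy,dx)
    total = s1 + s2
    # shorter walks that end on a wall score higher (easier loop)
    return (2 * max_steps - total) if (hit1 or hit2) else 0
```

The exact reward shape is a guess; the point is only that the two-leg walk gives each cell a cheap "how closable is a loop from here" number.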
I used this coefficient for exploration: I looked at the coefficients of the cells around me and chose the best one. Next, I corrected some generally stupid things my player was doing.
I implemented different strategies depending on the number of players: the maximum size of the loop, some priorities…
I tried to attack and use the back-in-time power. It was useful in 1v1. I first closed a loop, then went to where the other player was trying to close theirs. I can find those places by counting the number of foe cells met in the four directions around a cell. If my attack succeeds, great. Otherwise, I go back in time and continue my loops, so no time is wasted. I tried the opposite order, but most of the time it destroyed my first loop and I lost because I had built nothing consistent to score points.
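A rough sketch of the foe-cell counting described above, assuming a grid of strings where 'F' marks foe cells; treating "3-4 blocked directions means a loop is forming there" is my interpretation, not the poster's stated rule:

```python
# Scan the four axis directions from a cell and count how many of them
# run into a foe cell before leaving the grid. A cell hemmed in by foe
# cells on most sides is likely inside a loop the opponent is closing.

def foe_pressure(grid, x, y):
    h, w = len(grid), len(grid[0])
    count = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        cx, cy = x + dx, y + dy
        while 0 <= cx < w and 0 <= cy < h:
            if grid[cy][cx] == 'F':
                count += 1
                break          # one hit per direction is enough
            cx, cy = cx + dx, cy + dy
    return count
```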
I made a lot of small changes to the strategies. For example, I observed that the player who controls the center has a better chance of winning (like in chess), so I tried to go there first. I did not have enough time at the end of the challenge to improve my algorithm, but it worked fine; my final rank was around 60, which is decent for a "not so intelligent" algorithm.
I would like to know what kind of exploration techniques can be used here: did people use alpha-beta? Other kinds of exploration? It would be nice to share references.
Hi! Here's my story:
I started doing problems on CodinGame only a few weeks ago and Back to the Code was my first contest. I don't have much experience with optimization problems, and although I know the names of some popular algorithms, I just don't know how or when to use them.
So, my initial thought was that in order to win the game, I had, at least, to maximize the captured area while minimizing the number of steps to capture it.
My first implementation looked at the whole grid, assigning a score to each cell based on its surroundings. The idea was to have some sort of heatmap indicating which cells were the most critical given their position. For example, cells forming a potential loop closure had a higher score and would be privileged when choosing the most interesting cell to go to. It did not work very well, although the fact that the bot mostly moved diagonally seemed to disturb the other players at the time.
For the second implementation, I decided to allow less freedom in the choice of cells. Instead of trying to score each cell independently, I implemented a square detection algorithm to find the largest possible squares of free cells in the arena. Once I had a list of possible squares, it was simply a matter of choosing which square to target given its area, its distance, and the number of missing cells on its border. This was much better than the first version.
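The poster doesn't say how the square detection worked; one standard way to find the largest square of free cells is the classic maximal-square dynamic programme, sketched here purely for illustration (assuming '.' marks a free cell):

```python
# dp[y][x] = side length of the largest all-free square whose
# bottom-right corner is at (x, y); each entry extends the squares
# ending just above, just left, and diagonally up-left.

def largest_free_square(grid):
    h, w = len(grid), len(grid[0])
    dp = [[0] * w for _ in range(h)]
    best = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] == '.':
                if x == 0 or y == 0:
                    dp[y][x] = 1
                else:
                    dp[y][x] = 1 + min(dp[y-1][x], dp[y][x-1], dp[y-1][x-1])
                best = max(best, dp[y][x])
    return best
```

This runs in O(width × height), cheap enough to redo every turn; a real bot would also record where the best squares sit, not just their size.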
My last version is the same kind of thing, but instead of detecting squares, the bot detects closed regions and chooses the most interesting one using the same criteria as before. It also performs an initial split of the arena, using the initial positions of the players to create closed regions that aren't too big. At some point, in 2v2, I had an implementation that made a big 18x10 region on my side, just large enough to win most of the time, but people started to find the counter-measure, so I had to go with smaller regions.
Regarding the back-in-time power, I used it a few times, but most of the time it was detrimental because it drove me away from my areas just to try to piss off the other players. So in the end I think I didn't use it.
A few tricks that I used:
- Move diagonally whenever possible: when moving to a target cell far away, I try to move diagonally rather than letting the game go in straight lines. From the first implementation onwards it seemed to disturb other players a bit, especially the ones that do rectangles. It's not used very often though, as the target cells are usually close.
- Whenever close to an enemy cell, if there's a free cell on the other side, go take it from time to time. It costs around 4 turns in total, but it can cost the enemy a lot more if it is inside one of its regions, plus it's funny. I think I stole the idea from someone who used it against me. Can't remember who, sorry.
- Future prediction: given the enemies' positions this turn and the previous turn, assume they will keep going straight for the next few turns. This helped a lot to avoid suicidal positions, to close regions before the enemy comes to mess everything up, etc. Predicting too far ahead wasn't helpful though.
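The prediction trick above amounts to a simple linear extrapolation; this sketch is my own (the function name, the number of turns, and the 35x20 arena defaults with clamping are assumptions):

```python
# Extrapolate an enemy's last move for a few turns, clamped to the arena.

def predict_positions(prev, curr, turns=3, width=35, height=20):
    """prev, curr: (x, y) of an enemy on the previous and current turn."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    out = []
    x, y = curr
    for _ in range(turns):
        x = min(max(x + dx, 0), width - 1)    # stop at the arena edge
        y = min(max(y + dy, 0), height - 1)
        out.append((x, y))
    return out
```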
What I wanted to improve but couldn’t get right in time:
- Often I found myself trapped in the spiral of death: an enemy and I try to capture the same region at the same time, chasing each other in a spiral. After some time, when the potential gain decreases too much, my bot goes to another, more interesting region, leaving the rest of the cells to the enemy. There is probably something to do in this situation to avoid spending useless turns and leaving the spoils of war to the enemy…
- Take the enemies into account: all the decisions are taken without even looking at the enemy positions. I tried to integrate enemy proximity into the score for region selection, but it didn't seem helpful. It could probably be done right.
- Use the back in time power: it’s probably useful for something
In conclusion, this was really fun!
I like your heatmap idea, berenm. Too bad it wasn't useful for you.
One of the first things I implemented while testing the game was a function to generate the Voronoi map. This map tells you who would reach each cell first.
At the end of the game, when everyone tries to fill the last few cells of the map, it helps you go where you can actually score points, instead of following an opponent who will get everything.
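Such a Voronoi map is commonly built with a multi-source BFS; this is only an illustrative sketch, not the poster's Perl code (ties go to the lower player index here, which is an assumption — the real game resolves simultaneous arrivals its own way):

```python
# Multi-source BFS: seed the queue with every player's position; each
# cell gets the owner whose wave reaches it first.
from collections import deque

def voronoi(width, height, players):
    """players: list of (x, y) starting positions; returns owner grid."""
    owner = [[-1] * width for _ in range(height)]
    dist = [[None] * width for _ in range(height)]
    q = deque()
    for i, (x, y) in enumerate(players):
        owner[y][x], dist[y][x] = i, 0
        q.append((x, y))
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                owner[ny][nx] = owner[y][x]
                q.append((nx, ny))
    return owner
```

A real bot would also treat already-claimed cells as obstacles or skip them in the expansion; this sketch assumes an empty board.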
I also started to write an alpha-beta algorithm which follows this pattern:
1. Count the cells you have
2. Compute the Voronoi map
3. For each cell that you can reach first according to the Voronoi map:
3.1. Create a path to get there and add it on the map
3.2. If you closed an area doing so, fill the map
3.3. Count the number of cells you have
3.4. Evaluate this
My implementation took something like 20-25 ms to evaluate a candidate cell in Perl (18 ms for step 3.2). I lacked the time to improve this performance.
As for the back in time, I planned to use it to play greedily as long as I had it: if something I try doesn't work, just go back in time and play safe.
First some feedback: the game was awesome! I especially liked the fact that we are always player 0, so no need to include twisty bits everywhere to handle the fact that our player ID may change.
The game was quite nice by itself: no complicated mechanics, no need to polish our outputs too much (Tron-like outputs would have been more painful, for example).
About the duration, I found it either too short or too long. Shorter (say 24-36h), it would have been a speed contest, where the most efficient coders win. Longer (3 weeks), it would have been a real AI strategy game.
One week… well, either you have time and you are able to pull off elaborate strategies, or you don't, and it becomes a speed contest for you. It means that some coders started with a disadvantage: some of us had 8 days to code an AI, others had only 3 (Sunday, Saturday, Sunday; and I'm not sure I should count the last Sunday, given how hard it was to rate our AIs).
But varying the contest length is good anyway, so that everyone finds one to their taste.
About strategies:
In the first days I tried to find the farthest neutral point on the map, and implemented some pathfinding to reach that point efficiently. Then I realized the built-in pathfinding was almost good enough, so I simplified the whole thing. This was enough to get into the top 100 after 24h, but obviously it didn't last.
I tried to stick to my local solution until Wednesday, implementing other criteria for the choice of the target, and defensive measures for when I couldn't reach my target before an opponent. But I kept losing my zones because opponents could also enter from behind me.
So I restarted from scratch and tried to make the AI aware of what it was doing (going for a new area, surrounding one, shortcutting one in defense) and behave accordingly. By Saturday it was more or less working; the defense in particular appeared to work well, maybe even too well.
And, uh… at the end of the contest my old local AI was able to beat the new one consistently, so I just pretended the last 4 days never happened and submitted the old one.
All in all, I was ranked 500ish (and the AI from Saturday was much closer to 600ish). I probably should have spent more time on my offensive strategy. For example, I read a lot of people in the chat talking about checking rectangles, which might have allowed faster area scanning than my surface-like algorithms.
I came back from holiday two days before the end so I knew it would be hard to get a competitive bot going.
My first idea was to move around to a random position until the end of the game. It got me to about 1000th place or something, mostly because a lot of the opponents crashed during the games.
Next I generated 50 random positions and used an evaluation function to choose the best one.
The evaluation function simply calculated the ratio of empty cells on the path to the destination.
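That evaluation might be sketched as follows, under the assumption that the "path" is a straight horizontal-then-vertical walk to the destination (all names are made up; '.' marks an empty cell):

```python
# Score a candidate destination by the fraction of empty cells along
# the walk to it: horizontal leg first, then vertical.

def path_cells(src, dst):
    (x, y), (tx, ty) = src, dst
    cells = []
    while x != tx:
        x += 1 if tx > x else -1
        cells.append((x, y))
    while y != ty:
        y += 1 if ty > y else -1
        cells.append((x, y))
    return cells

def score(grid, src, dst):
    cells = path_cells(src, dst)
    if not cells:
        return 0.0
    empty = sum(grid[cy][cx] == '.' for cx, cy in cells)
    return empty / len(cells)
```

With this in place, "generate N random positions and keep the best" is just `max(candidates, key=lambda p: score(grid, me, p))`.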
This got me to about 600th (36h before the end of the contest). I was quite pleased with the result because there was so little code/logic.
The next improvement was to check whether, at any point in time, changing direction would hit a cell of mine (think raycasting). The problem was that my bot would turn all the time and fill the space line by line.
The solution was to estimate my position for the next turn and check if turning at that point would still hit one of my cells. In that case I would not turn yet, in order to build a bigger rectangle. This was the breakthrough that got me to just under 200.
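The raycast check described above might look like this sketch (the '0' marker for my cells and the function name are assumptions):

```python
# Look along a direction from (x, y) and report whether one of my
# cells ('0') lies somewhere on that ray before the grid edge.

def turn_hits_me(grid, x, y, dx, dy):
    h, w = len(grid), len(grid[0])
    x, y = x + dx, y + dy
    while 0 <= x < w and 0 <= y < h:
        if grid[y][x] == '0':
            return True
        x, y = x + dx, y + dy
    return False    # ran off the board without meeting my cells
```

Running it both from the current cell and from the predicted next cell reproduces the "don't turn yet if turning one step later still closes" idea.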
I did play a bit with coming back in time, only when I was leading and then detected that someone had overtaken me. I would then go back and rush to spoil the opponent's rectangle. It didn't work very well and I lost about 200 ranks, so I removed the feature.
The number of random positions was increased to a few hundred, and that brought me back to about 200th.
I was happy enough with that result and decided to spend Sunday with my girlfriend
If I had to improve the code while keeping this strategy, I would take my opponent’s position into account.
Final rank: 289
My strategy consisted of two main parts:
Part 1. Try to find the largest rectangle that takes the fewest turns to complete. Make sure that opponents cannot spoil the rectangle by getting there before we are able to complete it. Later I noticed that opponents are lazy and often don't bother to spoil your rectangles much, so I added an opponentLaziness=k factor, which assumed that opponents would take k times as long to reach my rectangle. I ended up using k=1.9. I also needed to ensure that I could make the rectangle before time ran out. Finally, I checked both directions (clockwise and anti-clockwise) to see which one would complete the rectangle faster. The score for each possible rectangle was M-T, where M is the number of cells I gain by completing the rectangle and T is the number of turns it takes to complete it (including the turns taken to reach it).
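A toy version of that score, combining the M-T formula with the laziness cutoff and the time check from Part 1 (parameter names and the exact order of the checks are my assumptions, not the poster's code):

```python
# Score one candidate rectangle. gained = M, cells won by completing it;
# turns_needed = T, turns to reach and close it. A rectangle scores 0
# when it can't be finished in time or a non-lazy opponent would spoil it.

def rect_score(gained, turns_needed, opp_turns_to_reach,
               turns_left, laziness=1.9):
    if turns_needed > turns_left:
        return 0    # cannot finish before the game ends
    if opp_turns_to_reach * laziness < turns_needed:
        return 0    # an opponent reaches it before we close it
    return gained - turns_needed    # M - T
```

The follow-up reply's alternative, gain/cost, would simply return `gained / turns_needed` on the last line instead.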
Part 2. If the best score from part 1 is 0, it means we weren't able to find a good rectangle, so we move to the closest empty cell. Here I make sure to choose a cell that is not adjacent to an opponent. This ensures that we choose cells our opponents are less likely to get, and it works quite well in the end game. Sometimes we could end up in a silly cycle, jumping left-right between two cells; to avoid this I added a rule that we never go to a cell we visited two turns before.
Optimisations. With the above approach I was able to scan 20%-100% of all possible rectangles on the board. I wanted to bring this closer to 100%, so I needed some optimisations. One optimisation is rejecting rectangles whose area is smaller than our current best, which can be done quickly. Another is to use the rectangle from the previous move as the starting point of the search. This allows us to reject many rectangles faster.
Things that didn't work. I tried looking for non-rectangular regions. To check whether a region got completed I needed to run a flood fill algorithm (starting at some point inside it) and see if it hit any opponents or grid borders. This method worked, but it was considerably slower; since I was no longer checking all the possible moves, it gave me a lower rank. I didn't even bother going back in time, because I didn't see much benefit in it against top 100 opponents.
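The flood-fill completion check can be sketched like this ('F' for opponent cells and '0' for my own are assumed markers; as described above, the fill fails as soon as it leaks off the grid or touches an opponent):

```python
# Flood fill from a point inside a candidate region through free cells
# ('.'); my own cells ('0') act as walls. If the fill reaches the grid
# border or an opponent cell ('F'), the region is not closed.

def region_is_closed(grid, start):
    h, w = len(grid), len(grid[0])
    seen, stack = set(), [start]
    while stack:
        x, y = stack.pop()
        if (x, y) in seen:
            continue
        seen.add((x, y))
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < w and 0 <= ny < h):
                return False    # leaked off the grid border
            c = grid[ny][nx]
            if c == 'F':
                return False    # an opponent sits in or beside the region
            if c == '.' and (nx, ny) not in seen:
                stack.append((nx, ny))
    return True
```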
Results: Throughout the week I was ranked in the top 100, but my final position was 143rd. This is probably because people submitted their best bots in the end. I loved the problem and I had a great time thinking about it. Looking forward to the next multiplayer contest.
“The score for each possible rectangle was M-T”
I worked with M / T; I find this scoring far more accurate. It's gain / cost.
For what it's worth, I'd like to mention that the end rankings still felt quite random. I have already risen 23 spots from my final ranking of 91 without doing anything: no resubmit, no changes. I ended up about 30 spots away from people I had been crushing all contest. I'm not sure how it happened, but I noticed that those last 100 games moved everyone around like crazy; maybe the system you use isn't meant to handle submits with people already ranked? The only fair thing I can come up with is playing games until convergence, e.g. play games until people's positions vary by no more than epsilon spots per x games, assuming that equilibrium exists.
Very, very similar to what I did (Aries / Haskell). My "opponentLaziness" was called "hope" and was just a (too?) simple constant, but the intent was the same. It is actually this hope that I reduced to 0 before going back into the past when noticing I wasn't able to close a rectangle.
This contest was my first experience with programming and instantly made me addicted.
It started out Saturday night with some experiments to figure out how programming really works, which resulted in my bot searching for the closest corner and moving towards it in a very roundabout way, with print statements everywhere.
I scrapped that code and started again, this time with a new plan: a list of options that shrunk or expanded, with a random pair of coordinates chosen from said list.
- I started with a list of the 4 spaces around me.
- Next I made him remove the occupied/illegal spaces, which worked until I got surrounded and the list was empty, so I needed a scanner.
- The scanner was a very simple algorithm, activated when the options list was empty, that checked all spaces within a range n around me for ".". Once a "." was found, it would add all empty spaces in range n to the list of options. This got me quite far, since I was now moving quite efficiently, but it rarely (if ever) made zones.
- Next was a square scanner, which checked all 4 directions for squares of increasing sizes and added the possible squares to the options list.

After combining my square scanner with my regular scanner, and adding a weight to each option, my final build was complete:

- Scan for squares around me and make the largest possible.
- If there are no squares available locally: scan the proximity for zones where squares could be made.
- If squares are not a realistic possibility: start randomly hopping around, avoiding taken spaces, to try to get the most points.
- Constantly check if the current goal is still available; otherwise recalculate.
This basic construction got me to place 270, which I'm quite proud of with only 5 days of programming experience. Now I'm setting out to complete most of the site so I can compete in earnest in the next contest. See you then!
I tried a very simple solution:
- find out who’s best
- copy its strategy
While this worked well in fights with 2 bots, it f*cked up with more than 2. So I ended in the last third of all places (around 1400).
I had a similar issue. My bot is like yours: it's not steady. More games statistically means more accuracy. I went from top 24 at 50% of phase 1 to top 75 at 100% of phase 2.
With a tweak I did for the multi, my bot is always top 30 now.
Randomness is part of this challenge, but as you can see with the top 5, which I often beat in 1v3, you can control it.
First contest I took part in. I cannot really say much about it. It was entertaining and fun… I kind of wish I had some more useful insight to offer on the matter.
When I first started, I would pick a random empty cell and create a two-part path that would maximize the number of neutral cells on my way there. This kept me in the top 25% of the ranks for the first two days (with some adjustments). I quickly added an initial path that would create a square between my starting point and the nearest corner. I also added interrupts into my pathing code to re-check the path goal more often as the game progressed, and started playing with the weights applied to my random variables.
On Wednesday or Thursday (because my weights heavily favored picking a direction over a random location), I created an AI that got rid of my pathing and would pick a random spot in the set of contiguous unclaimed grid points adjacent to my current location. This brought me back up in the rankings a bit, but I spent most of my time somewhere in the 30th to 40th percentile.
On Saturday, once I got back home (I had to leave town to go to my birthday party), I dug in to work on island generation code. This code used the paint bucket algorithm to find islands (contiguous unclaimed points) in the grid. I would then sort the islands in reverse order of their size divided by the number of wanted border points. This brought me up a good 80 spots in the rankings.
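The island pass might be sketched as follows; the flood fill ("paint bucket") part follows the description, while "wanted border points" is my guess at the island cells touching claimed territory, which you would need to walk to close the island off:

```python
# Group unclaimed '.' cells into islands via flood fill, estimate each
# island's border, and sort by size / border (largest ratio first).

def find_islands(grid):
    h, w = len(grid), len(grid[0])
    seen, islands = set(), []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != '.' or (sx, sy) in seen:
                continue
            cells, stack = [], [(sx, sy)]
            seen.add((sx, sy))
            while stack:
                x, y = stack.pop()
                cells.append((x, y))
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and grid[ny][nx] == '.' and (nx, ny) not in seen):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            # island cells adjacent to a non-'.' cell or the grid edge
            border = [(x, y) for x, y in cells
                      if any(not (0 <= x+dx < w and 0 <= y+dy < h)
                             or grid[y+dy][x+dx] != '.'
                             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
            islands.append((cells, border))
    islands.sort(key=lambda i: len(i[0]) / max(1, len(i[1])), reverse=True)
    return islands
```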
While I was watching the test cases for my new island code, I noticed that it would skip cases that would close a nicely sized square if my agent went on a tangent of non-border points. I created a random tangent function and tested it out, but the results were really mixed on my test cases. The random tangent function had me tied with the bot ranked at 200th place when there were 5 hours left in the contest, yet the same code was consistently losing (and quite badly) to the agent ranked at 300th. All my test results were very mixed: I saw a small improvement against some AIs and a vast decrease in performance against others.
So, with 4 hours left, I removed my random tangent function. I had considered breaking up the best-ranking island into more manageable components to help stop other AIs from crossing into islands before I could close them off, but it was 5:45 am and I wanted to go to bed. So I made sure that my original island selection code wasn't broken during the switch back, pushed that into the arena, and went to bed.