Thanks for this. Very interesting (and depressing :P) as always!
Just to be clear:
Before, the matchmaking pool was far smaller, and the top 3 played only against the top 3. @pb4 asked CodinGame to change that because the top 3 were just tweaking their code to beat each other.
This position on opponent sampling was formed after the CSB contest, for reasons different from what reCurse explains. Here were the main points:
- I do not think there was any score averaging back in the day
- Opponent selection was skewed at the top. The 4th player would play very rarely against the top 2, whereas the 3rd player would play often against the top 2.
- This was not an issue for the top 2, but it was a problem below: the 3rd player would be pushed down by repetitive losses against the top 2, until he was below the 4th player.
Hence, my position was formed in the past in different conditions, to avoid distorting the top N.
Today, with the current re-run procedure, I believe the system is "as good as we'll get it". More precisely: it's imperfect during the contest when there are few games, but good enough at the end once many games have been added.
I've found this island three times already, glad it's somebody else's turn
Thanks for the report! I particularly like the enemy detection algorithm. At first it looks impossible, given the branching factor. But with clever pruning it becomes fast! Nice
What did you do for the beginning of the game? Same algo with less info?
Good question. It's like you said: it uses less information at the beginning of the game. When the current state cannot be reached for whatever reason, it just uses the given input, so if a unit is invisible, it is treated as if it did not exist.
Here is my postmortem. Please tell me if you find a mistake or something I didn't explain.
Could you tell me how you write your referee programs? I don't know where to start, and I don't understand anything in your referee program.
Well, you need code to simulate the state of the game like you would have in your minimax, but you also need code to generate the initial state of the game. In this case CodinGame's referee had a certain way of generating the maps and spawning the players' pawns.
For me the real tricky part of writing my first referee was dealing with launching a process, feeding it inputs on its stdin and reading its outputs from its stdout. So I had to mess around with Linux pipes. It took me a while, but I made my first referee on Tron, and then I copy-pasted a large part of it for every referee, replacing the game-specific code and keeping the part that starts the AIs, feeds them inputs and gets their outputs.
I believe you can do it more easily in Java and Python in a non-OS-specific way (my referees use Linux pipes and only work on Linux). A lot of people like to use Magus's brutaltester.
If you want to write one from scratch like I did, to answer your question of "where to start": try to make a program that starts another program, feeds it some input and gets back some output.
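A minimal sketch of that "start a program, feed it input, read its output" loop in Python. The bot here is a hypothetical stand-in (an inline one-liner that echoes its input); a real referee would launch each AI the same way, once per player:

```python
import subprocess
import sys

# Hypothetical "bot": an inline Python one-liner that reads one line
# from stdin and echoes it back with a prefix. A real referee would
# point this at the actual AI executable instead.
bot_cmd = [sys.executable, "-c",
           "import sys; line = sys.stdin.readline(); "
           "print('ACK ' + line.strip()); sys.stdout.flush()"]

proc = subprocess.Popen(bot_cmd,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        text=True)          # line-based text mode

proc.stdin.write("MOVE 3 4\n")              # send the turn's input
proc.stdin.flush()                          # flush, or the bot never sees it
reply = proc.stdout.readline().strip()      # read the bot's answer
proc.stdin.close()
proc.wait()

print(reply)                                # prints: ACK MOVE 3 4
```

The flush after each write matters: pipes are buffered, and a referee that forgets it will deadlock waiting for a bot that never received its input.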
Thank you.
and I use Linux too
First of all, thanks to the organizers for an interesting contest. I'm glad that this time they did not find a reason to remove me from the leaderboard (yet).
About strategy
Reverse engineering
Detecting enemies in the fog is a must-have feature.
From all possible positions, I place the opponent's units where they have the optimum advantage, i.e. the worst case for me (based on the evaluation function). After that, we get a usual minimax game with full information.
Minimax
I used a variant of minimax: negamax with alpha-beta pruning and iterative deepening. As the branching factor of this game is around 100 at each ply, I sort all possible moves by the evaluation function and keep only the best 12, discarding the rest.
This way I usually achieved search depths between 4 and 8 ply.
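A sketch of this search scheme: negamax, alpha-beta, an eval-ordered beam of at most 12 moves, and a trivial iterative-deepening wrapper. The game here is a hypothetical stand-in (one-pile Nim: take 1-3 stones, the player who takes the last stone wins), since the real move generator and evaluation are game-specific:

```python
BEAM = 12  # keep at most the 12 best moves per ply, as described above

def moves(pile):
    # Legal moves in the toy game: take 1, 2 or 3 stones.
    return [take for take in (1, 2, 3) if take <= pile]

def eval_static(pile):
    # Toy heuristic from the mover's point of view: pile % 4 == 0 is lost.
    return -1.0 if pile % 4 == 0 else 1.0

def negamax(pile, depth, alpha=-2.0, beta=2.0):
    if pile == 0:
        return -1.0          # opponent took the last stone: we lost
    if depth == 0:
        return eval_static(pile)
    # Order moves so the child that is worst for the opponent (lowest
    # static eval from their side) is searched first, then keep only
    # the best BEAM of them.
    ordered = sorted(moves(pile), key=lambda t: eval_static(pile - t))
    best = -2.0
    for take in ordered[:BEAM]:
        best = max(best, -negamax(pile - take, depth - 1, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break            # alpha-beta cutoff
    return best

def iterative_deepening(pile, max_depth):
    val = eval_static(pile)
    for d in range(1, max_depth + 1):
        val = negamax(pile, d)   # in a real bot, shallower results would
                                 # also be reused to improve move ordering
    return val
```

With only 3 moves the beam never triggers here, but the structure (sort by eval, truncate, then search) is the same one that keeps the real branching factor of ~100 down to 12.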
Evaluation function
I calculated the shortest distances from my units to each cell, and the same for the opponent's units. The player whose unit is closest owns the cell (Voronoi?).
int res = 100*score;
for (int y = 0; y < game.Y; ++y)
    for (int x = 0; x < game.X; ++x)
    {
        int diff = d2[y][x] - d1[y][x];
        int val = 20*max(-3, min(3, diff));
        if (0 < diff)
            val += 50 + (3 - game.maze[y][x])*20;
        else if (diff < 0)
            val -= 50 + (3 - game.maze[y][x])*20;
        res += val;
    }
where d1 and d2 are the shortest distances from the first and second player's units.
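Each of d1 and d2 can be filled in with one multi-source BFS per player. A minimal sketch, assuming plain 8-connectivity and ignoring the climb rules (a real implementation would add the movement constraints to the neighbour test):

```python
from collections import deque

INF = 10**9

def bfs_distances(units, blocked, width, height):
    """Multi-source BFS: shortest move count from any of `units` to
    every cell, 8-connectivity. Cells in `blocked` (e.g. height-4
    towers) are unreachable."""
    dist = [[INF] * width for _ in range(height)]
    q = deque()
    for (x, y) in units:
        dist[y][x] = 0
        q.append((x, y))
    while q:
        x, y = q.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < width and 0 <= ny < height \
                        and (nx, ny) not in blocked \
                        and dist[ny][nx] == INF:
                    dist[ny][nx] = dist[y][x] + 1
                    q.append((nx, ny))
    return dist

d1 = bfs_distances([(0, 0)], set(), 3, 3)
# On an open 3x3 grid, (2, 2) is two diagonal steps from (0, 0).
```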
In addition, I added a bonus for possible move directions:
foreach (x, y) in neighbours of unit
{
    if (game.is_valid_move(unit, x, y))
    {
        res += 20*game.maze[y][x];
    }
}
Testing
Most of the time I tested the bot locally against previous versions. Thanks to Magus and Kevin for the tester and the referee.
What did not work
- Rear brake on my bicycle
- Placing enemies after my first move based on a full-depth search (when there were <= 3 possible placements).
- Adding a bonus to the score if my units' positions are undetermined for the opponent.
- Fine-tuning the evaluation function.
- A hardcoded solution.
Question to those who used a Voronoi diagram: is it only about comparing the distance between two tiles (Euclidean distance)? Or does it also take into account the height of tiles, accessibility, etc.?
134th programming with C#
During this challenge, as well as while preparing for it by playing other multiplayer games, 95% of the time I had only one opponent: my own bot (IDE code vs. code in the Arena). In the end I am satisfied to have managed to simulate a complete game turn, having tried minimax, a chaotic evaluation function, etc.
I loved the concept of CodinGame, and I will prepare for the next challenge using everything I've discovered before, especially trying to fully take on the challenge of competing against other participants.
Thanks to the CodinGame staff for offering us a great time in a totally unforgettable, super fun, quality game.
In my case I take into account path distance, so if there is no path the distance is infinite -> not euclidean distance.
I started the same as Agade, with Tron, playing around with ways to pass and read information. I used processes. If anyone is curious what a (poorly coded) Tron referee in Python 3 looks like, see here. (I wrote it on a Mac, but I don't think it is OS-specific. It only works on Python files, but can easily be adapted to anything you can run via the command line.) To get started, look at the code in the main function.
If you want to test it out, run the following in the command line
python tron_arena_battle.py <path-to-bot1>.py <path-to-bot2>.py -n <number-of-games-to-play>
Thank you.
Legend #46
I'm a bit late to the party... got carried away making graphics for illustration, and ended up writing as much code to generate the graphics as I did during contest time. Messed around with the Python turtle graphics library and got an interactive simulation for Wondev working. One in which you play as the enemy! \o/
Still figuring out how to isolate it to read/write to other processes though... the version I have now has my bot's Python code embedded inside it XD I will, however, upload it once it's ready.
Wood -> Gold
Only had time to work on this contest during the weekends this time round... almost didn't make it to Legend >.<
The ~200 lines of Python I rushed out on the first weekend surprisingly got me from Wood 1 to Gold.
Unsophisticated but somewhat effective
Some observations I made watching replays:
MOVE&BUILD
- Try to climb higher
- Prefer move to 3
- Build higher
- Donāt waste time building higher than you can climb to
- E.g. standing on a 1, building a 2 -> 3 would incur a penalty
- When moving from 3 -> 3, build lower (maximize time spent jumping around at height 3)
- When I have to build a 3 -> 4, I pick the one that gives maximum floodfill area
- That is: max accessible area when I run a BFS on the resulting state (allowing my bot to play a decent end-game)
PUSH&BUILD
- Push enemy lower
My early focus was on point-scoring and the MOVE&BUILD action, not paying much attention to PUSH&BUILD. Even so, by scoring moves according to the above naive rules, my bot climbed straight from Wood to the Gold league...
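The "maximum floodfill area" rule for forced 3 -> 4 builds boils down to a BFS cell count on each candidate resulting state. A minimal sketch under simplified rules: it only enforces "can't climb more than +1" and treats height-4 cells as walls, ignoring units and pushes:

```python
from collections import deque

def accessible_area(start, heights, width, height):
    """Count cells reachable from `start` via BFS (8-connectivity),
    treating height-4 towers as walls and allowing climbs of at most
    +1. Simplified: real Wondev rules also involve units blocking
    cells and pushes."""
    seen = {start}
    q = deque([start])
    while q:
        x, y = q.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < width and 0 <= ny < height \
                        and (nx, ny) not in seen \
                        and heights[ny][nx] < 4 \
                        and heights[ny][nx] - heights[y][x] <= 1:
                    seen.add((nx, ny))
                    q.append((nx, ny))
    return len(seen)
```

To pick a build, one would simulate each candidate 3 -> 4, run this count from the unit's cell, and keep the build with the largest result.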
Gold -> Legend
Climbing to Legend required some more sophisticated code haha. I tried my hand at coding a minimax for this, but by the end of Saturday I was so swamped with bugs that I switched back to my heuristic code from the previous weekend.
Made the shift from point-scoring to area control like many others, and finally reached Legend on Sunday evening with 2 major improvements to my code.
Enemy Tracking
After some debugging while generating the graphics, I now hesitate to call it "enemy tracking"... more like scoring for the "best" enemy position.
The approach my bot took to tracking enemies was simply to overlay possible moves (from the previous turn's predictions) against the cells from which the enemy could have built at the detected location.
So in the above image, the enemy was at (3, 6), then moved to (2, 7) and built at (1, 7). Pardon the turtle resting in the corner, she's tired from drawing all those squares...
- Green are the cells the enemy needs to be in to build at (1, 7).
- Red are the possible positions the enemy could have moved to from the locations tracked the previous turn.
- Yellow are the overlapping cells (predicted enemy locations this turn).
- Darker shades of yellow and red are the previous turn's predictions.
It becomes obvious that this won't work after a few moves, once all 8 cells adjacent to the build location could have been reached from the previous turn's guessed locations. However, with some filtering, and in cases of enemy pushes, the possibilities can be narrowed down.
- When (-1, -1) is read for enemy position, it provides information on where the enemy is not. Namely, the cells adjacent to your units.
- When pushed, the enemy could be in one of the 3 locations consistent with the push angle.
So with these two additional methods of filtering, in conjunction with the above "tracking", my bot was able to decrease the number of possible enemy locations. Using this set of possible locations, I then did a positional scoring on each of them and picked the top one as the guessed enemy location to be used for the rest of my turn.
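The core overlay step is just a set intersection: candidates = (previous candidates expanded by one move) ∩ (cells adjacent to the observed build). A sketch with hypothetical helper names, using plain 8-connectivity and omitting grid bounds, heights, and the push/(-1, -1) filters:

```python
def neighbours(cell):
    """The 8 cells adjacent to `cell` (bounds checks omitted)."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if dx or dy}

def update_candidates(prev_candidates, build_cell):
    """One tracking step: the enemy moved one step from some previous
    candidate, and must now stand adjacent to the cell it built on."""
    reachable = set()
    for c in prev_candidates:
        reachable |= neighbours(c)       # red cells in the image
    return reachable & neighbours(build_cell)  # yellow = red ∩ green

# Example from the post: enemy believed at (3, 6), build observed at
# (1, 7). Only cells one move from (3, 6) AND adjacent to (1, 7)
# survive: (2, 6) and (2, 7), the latter being the true position.
cands = update_candidates({(3, 6)}, (1, 7))
```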
Voronoi Diagram
The secret to area control, and the reason we want to track enemy positions. For each move, I generate a Voronoi diagram on the resulting state, and a large portion of the score for that move is based on the ownership of the cells.
In this example, red is where my enemy is and green is where my unit is. Based on these locations, I run a stepwise BFS, alternating between my units first and then the enemy's, taking into account the validity of each move (cannot climb higher than 1, cannot move onto other units). I then split the cells into 3 categories: self_control -> green, en_control -> red, contested -> yellow.
Contested cells are cells that take the same number of moves for either player to reach. Even though I can reach these cells first (you always make the first move), I scored them lower than cells in self_control, since I could be pushed off them by the adjacent enemy.
To make my bot more aggressive, I gave a bigger penalty to enemy-controlled cells, so it would try to cordon off the enemy and trap them. With this addition to my scoring function, my bot finally made the jump from Gold into Legend.
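A sketch of such an alternating BFS, under the simplifying assumption that heights, towers and pushes are ignored (the real version would filter neighbours by move validity). The three labels mirror self_control / en_control / contested; cells my wave just claimed that the enemy's wave of the same step also reaches become contested:

```python
def voronoi(my_units, en_units, width, height):
    """Alternating multi-source BFS. My frontier expands first in each
    wave (I always move first); a cell both waves reach in the same
    step is 'contested'. Returns dict cell -> 'self'|'enemy'|'contested'."""
    owner = dict.fromkeys(my_units, 'self')
    owner.update(dict.fromkeys(en_units, 'enemy'))
    my_front, en_front = set(my_units), set(en_units)

    def neighbours(x, y):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < width and 0 <= ny < height:
                    yield (nx, ny)

    while my_front or en_front:
        newly_mine = set()
        for x, y in my_front:            # my half-step first
            for c in neighbours(x, y):
                if c not in owner:
                    owner[c] = 'self'
                    newly_mine.add(c)
        newly_theirs = set()
        for x, y in en_front:            # then the enemy's half-step
            for c in neighbours(x, y):
                if c not in owner:
                    owner[c] = 'enemy'
                    newly_theirs.add(c)
                elif c in newly_mine:    # same move count for both
                    owner[c] = 'contested'
        my_front, en_front = newly_mine, newly_theirs
    return owner
```

On a 5x1 corridor with my unit at (0, 0) and the enemy at (4, 0), this labels (1, 0) self, (3, 0) enemy, and the middle cell (2, 0) contested. Scoring is then a weighted sum over the three categories.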
Thoughts
First off, congratulations to the top 3!!! The score difference in this contest is drastic... even though minimax was the way to go, it seems they nailed the implementation details and scoring functions just right.
Thanks to the CodinGame team for another great contest! This contest was more along the lines of Code4Life, maybe intentionally so based on the positive feedback? Nonetheless, the board-game-like nature of this contest is at least similar to that of C4L.
Looking forward to when the multiplayer comes out; it would be good to properly implement minimax and enemy tracking on this.