I finished 8th in the ranking using C++. If you want all the details, here they are.
###First Feelings
I guess everyone who participated in Coders vs Zombies thought: it's almost the same game. But when I tried to build my first GA, while the ranking of my Hypersonic bot was still being processed (it took 6 hours to reach 100%), I was really surprised.
Even though the differences from CvZ are small, they bring a lot of trouble. As you know, in CvZ, when any enemy is close to the player, he shoots instantly. Here, you have to select whom to shoot. Counting the total number of possible kill orders, even for ten enemies there are already 10! = 3628800 variants.
After the Hypersonic contest was over, and after taking some rest, I started struggling to figure out how to choose between shooting and moving. The first idea was: uniformly pick shoot or move. No success, just a very dumb bot. I played around a bit and then threw it away.
Meanwhile, there was a bug in the validator's mechanics: while moving by decreasing the x coordinate without changing y, sometimes y changed anyway. It was fixed later.
###MCTS appears on the scene
Later… someone mentioned MCTS. I decided: why not? Just to learn what it is, and also for fun.
My first implementation wasn't MCTS as it is supposed to be: I made a random choice at every step. It was as simple as: Select? Random! Simulate? Random in a loop! Expand? Explore 12 moves plus a shot at every enemy!
The main question: which random distribution? Uniform!
That was the issue: a uniform decision almost never shot, which gave a very, very low score (and less than 100% of test cases, of course).
Next, to deal with the lack of shooting, I decided to first roll a 50:50 dice between shoot and move, and only then pick the target or the destination. It started to work; still bad, but much better.
###Fooling around
While playing around with this poor bot, I was thinking about the main performance issues, and one point got stuck in my mind: the bottleneck would later be in the damage calculation. LOL.
So I started to optimize it… First consideration: it would be best to compute damage straight from the squared distance. I thought the best option would be an unrolled binary search, but for comparison I made three versions: a plain math formula, an unrolled binary search, and a list of threshold values (of the squared distance).
Due to a bug in the code, the list of threshold values appeared to be the fastest. Later it turned out that the unrolled binary search is the fastest, the list of values is second, and the plain math formula is last.
After playing with it, it was time to continue reading about MCTS and to try to understand it completely. I put in the UCT formula as it is. Almost no difference; still a dumb bot. I tried playing with the coefficients… No hope.
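For reference, the textbook UCT value I started from looks roughly like this; a minimal sketch, where the exploration coefficient C and the score scaling are exactly the knobs being tuned:

```cpp
#include <cassert>
#include <cmath>

// Standard UCT: exploit (average score of the child) plus an exploration
// bonus that shrinks as the child is visited more often. C is the
// exploration coefficient.
double uct(double totalScore, int childVisits, int parentVisits, double C)
{
    if (childVisits == 0)
        return 1e18;                       // unvisited children are tried first
    double exploit = totalScore / childVisits;
    double explore = C * std::sqrt(std::log((double)parentVisits) / childVisits);
    return exploit + explore;
}
```

This sqrt(log()) is also what the Select phase ends up computing over and over.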
###Suddenly
Then I replaced the MCTS simulation function with the simplest function ever: select the closest enemy, shoot him. It was like… Oo. How is that possible, he ROCKS!!! The score was ~30k, but only around 97% complete. The problem was the tests with very many units.
Thinking this was the right way, I just kept trying to tune the UCT coefficient or replace the formula with something else, while figuring out how to increase the speed. I did some optimizations and got ~39k & 100%.
##Optimizations.
###1. Damage calculation
I made an even faster function to calculate damage. You can't build a plain lookup table indexed by squared distance, because the largest value is about 337000000; such a table would take ~300 MB. So the idea is the following: we want the smallest possible switch statement, with small code in each case. A binary search uses comparisons; there are already few (four) of them, so each case should do fewer than three comparisons.
Question: can we get away with at most one comparison per case? Yes we can!
Write code that splits the whole range of squared distances into the fewest possible ranges, where each range's width is a power of two. Each such range becomes one case, and the code in each case stays small. The resulting code is:
```cpp
int dmg(int d)
{
    if (d < 90719)
        return 0;
    int z = table[(d - 90719) >> 20];
    switch (z)
    {
        // one case per power-of-two range:
        // either "return value;" or a single comparison
        ...
    }
}
```
Each case is either a return or a single comparison.
Moving the comparison cases to negative values of the case label lets us put the resulting value straight into the table whenever no comparison is needed.
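As a toy illustration of the trick (made-up thresholds, not the real game's damage values): precompute one table entry per power-of-two-wide bucket; a non-negative entry is the answer itself, while a negative entry points at the single threshold that still has to be compared.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy monotone "damage" function, used only to demonstrate the technique.
int slowDamage(int d2)
{
    return d2 < 100 ? 3 : d2 < 4000 ? 2 : d2 < 100000 ? 1 : 0;
}

const int MAXD2 = 1 << 20; // toy upper bound on squared distance
const int SHIFT = 10;      // bucket width = 2^SHIFT

struct Thr { int bound, below, above; };
std::vector<int> table;    // >= 0: the answer itself; < 0: index into thr
std::vector<Thr> thr;

void build()
{
    table.assign((MAXD2 >> SHIFT) + 1, 0);
    for (int b = 0; b < (int)table.size(); ++b) {
        int lo = b << SHIFT;
        int hi = std::min(MAXD2, lo + (1 << SHIFT) - 1);
        if (slowDamage(lo) == slowDamage(hi)) {
            table[b] = slowDamage(lo);       // whole bucket: one answer
        } else {
            // exactly one threshold falls inside this bucket; find it
            int l = lo, h = hi;
            while (l < h) {
                int m = (l + h) / 2;
                if (slowDamage(m) == slowDamage(lo)) l = m + 1; else h = m;
            }
            thr.push_back({l, slowDamage(lo), slowDamage(hi)});
            table[b] = -(int)thr.size();     // negative marker
        }
    }
}

// At most one comparison per lookup.
int fastDamage(int d2)
{
    int z = table[d2 >> SHIFT];
    if (z >= 0) return z;
    const Thr& t = thr[-z - 1];
    return d2 < t.bound ? t.below : t.above;
}
```

The bucket width just has to be small enough that no bucket contains two thresholds; the real dmg() above uses a shift of 20 over squared distances starting at 90719.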
###2. Don’t delete!
Simply don’t free any memory.
As simple as that. This was the reason for my 97%: I had started deleting nodes after the timer expired, and that was causing one validation test to fail.
###3. Closest unit
During simulation, you very frequently need to find:
- the closest data point for an enemy (or the closest enemy for a data point), to know where he is moving;
- the closest enemy to a data point, to know whether it should be deleted, or who will grab it.
The natural idea here would be to use some search trees, but in my case, with MCTS, you simply can't copy search trees for every new game state.
So my idea is the following: keep the links to the closest objects up to date.
It's the first thing that comes to mind, but whoever implements it may conclude: "It just doesn't work!" Why? The closest data point for an enemy works fine. The issue is with the closest enemy for a data point: when enemies move, the closest enemy can end up farther away than his own closest data point, so the cached link goes stale.
But nope! It works! All you need is one extra check: the enemy's target must be this data point. Then it works. (At least for me it never failed; I force a crash with an exception if the simulation result differs from the server reply.)
A side question: how to store the links? I'm using indexes into an array.
The algorithm is the following:
- If a data point's nearest index points to a dead enemy, find the closest alive enemy. If the nearest enemy is alive and has the same coordinates as the data point, mark the data point dead.
- Remove all dead data points while building a table of new indexes.
- Remove the killed enemy; there is at most one per step, so just remember his index.
- For each data point whose nearest-enemy index is greater than the removed one, decrease it. No data point still points to the dead enemy, thanks to the first step.
- Replace the enemies' old nearest indexes using the table.
- For every enemy who no longer knows his closest data point: find one. If this enemy is closer than the data point's currently recorded nearest enemy, update it.
I'm not sure how much it increased the speed, but I used it.
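The compaction and remapping steps above can be sketched like this (the field names are my own, not the actual game structs):

```cpp
#include <cassert>
#include <vector>

struct DataPoint { int x, y; bool dead; int nearestEnemy; };

// Remove dead data points in place and return a table mapping
// old indexes to new ones (-1 for removed entries), so that any
// stored "nearest data point" index can be remapped afterwards.
std::vector<int> compact(std::vector<DataPoint>& pts)
{
    std::vector<int> remap(pts.size(), -1);
    int w = 0;
    for (int r = 0; r < (int)pts.size(); ++r) {
        if (!pts[r].dead) {
            remap[r] = w;
            pts[w++] = pts[r];
        }
    }
    pts.resize(w);
    return remap;
}
```

An enemy that stored `nearestDataPoint = i` then looks up `remap[i]`; a result of -1 means the data point is gone and a new nearest one has to be searched for.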
###4. Middle step
I don't remember when exactly I introduced it, though.
It is one of the biggest optimizations. The idea is the same as in some previous contests I participated in: in this game, the actual player action takes place in the middle of a game step. This is what we will exploit.
Split the whole simulation of a single game step into three functions:
- EnemyMove - processes enemy movement.
- PlayerAction - processes the player's action.
- RemoveDead - does the cleanup.
I also had a helper function, PlayerIsAlive…, which was called after PlayerAction but before RemoveDead, because in the game logic dead enemies still kill.
What does this give us? In the PlayerAction phase we can check very quickly whether we can stay and shoot or have to move: just check whether any enemy (dead or alive) can reach us from his current position (no processing of movement!).
So my MCTS nodes were storing the state of the game in the middle of a game step. Confusing, but true.
###5. Towards
After realizing that I had been stupid, and that the damage calculation is called only once per step compared to other things running in loops, it became obvious that the bottleneck must be the towards function.
That's what I call it: it computes a position moved toward another position by a certain distance. I even did some research on fast calculation of inverse square roots and other hacks; one famous one is from Quake III. I wasn't satisfied with anything, because of the rather large loss of precision.
A funny fact: the same math using floats gave errors and no performance boost. I also tried storing coordinates as integers, since everything is integer except the middle of the calculations. Again, integers gained no speed. Doubles are the fastest.
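For reference, a straightforward double-based towards, the kind of function being discussed (a sketch; the clamping when the target is already within reach is my assumption):

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };

// Move from `from` toward `to` by at most `dist`.
Pt towards(Pt from, Pt to, double dist)
{
    double dx = to.x - from.x, dy = to.y - from.y;
    double d = std::sqrt(dx * dx + dy * dy);
    if (d <= dist || d == 0.0)
        return to;                  // already within reach
    double k = dist / d;            // the sqrt + division is the hot spot
    return { from.x + dx * k, from.y + dy * k };
}
```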
####SSE
Ha. Ha. Ha. I’m mad. Yeah.
I got this idea while looking into inverse square roots. Somewhere it was mentioned that the SSE instruction set has a fast inverse square root. Surprisingly for me, intrinsics worked on CodinGame! This inverse square root is, again, an approximation.
So, I made two versions of towards function using SSE.
One was for integers, other for doubles.
No speedup.
WHY? Why did I spend so much time on a useless thing?
Because I wanted to try!
It was very surprising that SSE gave no gain in speed.
The reason was in the GCC compilation flags. The CodinGame FAQ doesn't mention them, but if you wonder why no flags are listed, you'll figure out that the reason is simple: no flags are passed to GCC, only libraries.
###GCC flags
The question was: is it possible to set, for example, -O2 straight from the source?
Answer: yes!

```cpp
#pragma GCC optimize "O2"
```

In the end I was using:

```cpp
#pragma GCC target "tune=native"
#pragma GCC optimize "O3,omit-frame-pointer"
```
The speed gain was obvious.
Next, I tried SSE again. Still no visible gain in speed.
Later I came across an online compiler that shows the generated assembly, and used it to figure out what the issues were. After optimizing the code there was still almost no gain, so I dropped it. Perhaps the CodinGame system runs on virtual machines (no?) in a way that hurts SSE timing, or I don't know why…
###6. Allocation of memory in a single buffer
I made a custom allocator that hands out all memory from a single 700 MB buffer. Since I never free memory, it works.
Strangely, I was unable to see any speed gain; perhaps it's somehow related to virtualization. I don't know whether CodinGame uses virtual machines or not. I have no idea why their memory allocation is so freakishly fast.
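A bump allocator in that spirit; a minimal sketch where the interface is my assumption, and nothing is ever freed:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// One big buffer, one pointer that only moves forward. There is no free().
struct Bump {
    char*  buf;
    size_t used = 0, cap;
    explicit Bump(size_t bytes) : buf((char*)std::malloc(bytes)), cap(bytes) {}
    void* alloc(size_t n)
    {
        n = (n + 15) & ~size_t(15);  // round sizes up to 16 bytes (SSE-friendly)
        assert(used + n <= cap);
        void* p = buf + used;
        used += n;
        return p;
    }
};
```

New nodes are then created with placement new into `arena.alloc(sizeof(Node))`; since the buffer outlives the whole game, destructors never run.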
###7. EnemyMove using aligned SSE
Yeah, I wrote an aligned allocator and then a version of the EnemyMove function that uses aligned loads and stores. This did gain speed, though not more than x1.5.
###Overoptimized
In a state of disappointment over the fact that all my very cool optimizations gained almost no speed and no score… I decided to run my bot on local tests under a profiler.
After making it work without server replies, profiling showed that I had been right: with the simple EnemyMove function, the bottleneck is EnemyMove, and when I replaced EnemyMove with the SSE version, this bottleneck disappeared.
The second bottleneck was the function MCTS->Select. Nothing surprising: it computes sqrt(log()) a lot of times. But at that moment I couldn't think of any other optimizations, because locally I had started to run the bot with much bigger time limits and saw almost no gain in score.
That was The Issue.
###Speaking of MCTS
The main trouble for MCTS is the well-known "Exploration vs Exploitation" problem: you have to balance exploiting the best known moves so far against exploring the other ones. It's known that some games have good statistical properties that allow modern methods to be used.
Regarding this problem: I didn't have a few months to do research, propose theorems, write proofs, and so on. So I could only try ready-made methods and state: it does not work. Or maybe my hands just grow from the wrong place.
Anyway, almost all MCTS literature assumes that the result of a game is a win or a loss, so the range of outcomes is [-1, 1]. In our task a score of 7500 is possible, so the range is [0, 7500+]. We can normalize the outcome into [0, 1] somehow, but the simplest method, division by the best possible score, maps a normal score like 1100 to 0.1. In other words, normalization makes normal scores very close to zero, and as a consequence MCTS morphs into a simple BFS (Breadth First Search).
The other choice is to scale up the other side of the UCT formula to keep some balance between the terms. I can't say anything definite about it.
Anyway, UCT is proved for games with certain properties, and this game may not have the required ones.
I did test many variants: best score in the node divided by the best so far, average score, and so on. I don't remember which one won.
One more bad side: the supposed optimal score.
Let's say we want to somehow predict the best possible score without underestimating it. The first thing that comes to mind: imagine that we shoot each enemy at least once, that every hit deals MAX_DAMAGE because we are somehow able to shoot from the closest distance, and that we somehow save all data points still in the game.
The problem is the following: this optimal score decreases rapidly, which prevents us from cutting off branches of the MCTS tree with the rule: if the best score so far is greater than the optimal score of this game state, there is no need to check this game state.
Another problem: by increasing the branching of movements (not counting shots), you decrease the probability of exploring a game state where you kill an enemy who is a bit farther away, even though killing him would end up with a much higher score than just killing the closer one. On the other hand, restricting movement cuts off potentially better shooting positions.
There is a paper on an algorithm called SP-MCTS (Single-Player Monte Carlo Tree Search) where most of these issues for such games are discussed.
###Deal with it
There was a test called something like "Nearest to impossible to save". This test case exposed the issue of targeting a distant enemy with a lot of life, and of figuring out that you need to kill him. For MCTS to find this, it has to explore several nodes moving toward him and several nodes shooting at him, and not just "some of" them: only a very specific, small subtree leads to it.
I made a simple heuristic: find the enemy with the lowest distance to a data point, then analytically find the fastest way to reach that enemy while he heads to his data point, and kill him. Repeat. One note: if someone is closer than the enemies' killing range, retreat. I don't need to check movement, because I'm in the middle of a step (discussed above). I need to avoid enemies because if you get killed the score is zero, and that would not be better than the MCTS result.
I ran this heuristic after reading each game step; if it gave a better result than MCTS, I used its move instead. Using this method, on that particular test I got 660.
Let's name it strategy2. The previously discussed heuristic (shoot the closest enemy) is strategy1.
I tried to run MCTS using both strategy1 and strategy2 in the simulation, but the extra time this required lowered the total score.
###Don’t interleave
I tried the following rule: once you shoot an enemy, all further shots should go into the same enemy until he is dead.
The same for movements: if you move in the k-th direction, all subsequent movements (you may stop and shoot for a few turns in between) should be in the same direction until the first kill.
So the reset trigger is a kill: after a kill you can move anywhere and shoot anyone. I played around with it: increasing the number of directions, disabling one rule or the other… No hope. 32k-36k score.
###Last day
Around 10 hours before the end, I had a score of 40818 at 100%, rank #29. I had an analytical function that finds the fastest way to reach an enemy. Maybe instead of the enemy closest to a data point, select the one closest to the player, and walk?
I even added a calculation of the fastest possible kill of this enemy: if the fastest kill happens after he reaches his data point, you need to shoot right away; otherwise you can move closer. Heuristic codename: strategy3.
Riding that heuristic gave me a result of 41968.
Before this, my score on the "Extreme" test case was ~5100.
After: I got 6k, and this was a sign. Following the enemy makes sense: kills come faster, and it's easier to kill a more distant enemy.
This was still with the "don't interleave" rule turned on.
After disabling it again, I got 42751.
Then I switched back to my previous best and got a bit more: 43155.
###2 hours left
I decided to deal with the selection of the first victim at least, by… running enemyCount MCTS instances in parallel during the first step!
Why many MCTS? Because each MCTS should optimize its own task: killing a certain enemy first. This was inspired by the meta-search in SP-MCTS.
By the way: did you know that the time limit for the first step is 1 second?
So, during the first second there were many MCTS instances, each using a heuristic that tried to kill the enemy selected for that instance first.
Then, starting from the second step, the selected enemy is fixed, and we keep trying to kill him first.
The result was lower than my record. Then I tried to reduce the time spent on selection to 150 ms, and got a score of 43511.
###Last hour
First, I played with the "new" version a bit and somehow got 43535.
Then I tried to select the second victim too, by fixing the first selected one. The result was something like 43331 (funnily many 3s), but nope. I sent a few different variations during the last few minutes, but nope, still 43535.
###Account
##10/10
It was very fun, although two contests in a row is a bit uncomfortable.
Sad, though. Don't ask why. Perhaps because it's over.