The Accountant - Feedback & Strategy

The topic where you can tell how you liked the challenge, & how you solved it.

The next contest is Fantastic Bits, don’t forget to register:

The maximum known score was 46,293 points. Yet, no one reached it :’(
Validator 01: 162
Validator 02: 162
Validator 03: 302
Validator 04: 684
Validator 05: 1030
Validator 06: 546
Validator 07: 564
Validator 08: 1880
Validator 09: 638
Validator 10: 1058
Validator 11: 286
Validator 12: 616
Validator 13: 1830
Validator 14: 1020
Validator 15: 827
Validator 16: 192
Validator 17: 180
Validator 18: 314
Validator 19: 490
Validator 20: 1080
Validator 21: 652
Validator 22: 314
Validator 23: 660
Validator 24: 440
Validator 25: 1649
Validator 26: 1306
Validator 27: 410
Validator 28: 456
Validator 29: 7270
Validator 30: 3280
Validator 31: 2756
Validator 32: 5696
Validator 33: 4853
Validator 34: 2690

(Screenshots of validators V1-V34.)

9 Likes

BlitzProg, 126th.

It’s been a long contest and I couldn’t really focus on it much because, well… I am quite busy (I signed a permanent job contract as a web developer in the middle of the contest).

“Monte Carlo” were the first words I said when I saw the game screen. Considering how much this challenge looked like CvZ, I didn’t really take the time to think twice.

My approach was to pick a move and simulate the next 12 turns using a combination of random moves, then play the one that got the best score according to a heuristic.
It did well enough: 31.5k.
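
A minimal sketch of that kind of depth-limited Monte Carlo (State, Move, randomMove() and evaluate() are assumed helpers, not BlitzProg’s actual code):

// Hedged sketch: try many random 12-turn playouts and keep the
// first move of the playout with the best heuristic outcome.
Move pickMove(const State& start, int rollouts) {
    Move best{};
    double bestScore = -1e18;
    for (int r = 0; r < rollouts; ++r) {
        State s = start;
        Move first = randomMove(s);            // candidate first move
        s.apply(first);
        for (int t = 1; t < 12 && !s.gameOver(); ++t)
            s.apply(randomMove(s));            // random continuation
        double sc = evaluate(s);               // the heuristic score
        if (sc > bestScore) { bestScore = sc; best = first; }
    }
    return best;
}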

My code evolved the next week into a full game prediction, calculating the entirety of a game each time with an accurate score, and retaining the list of the best moves played so it could be re-used later. Unfortunately, that proved to be less effective on very busy maps, and I was unable to get anything above 26k.

In a desperate attempt to get something useful out of my 2nd week, in the last hour I did something crazy and started copy-pasting huge chunks of code from a previous submit, with the goal of using the work I did the first week whenever there were more than 8 enemies on the map. That just barely succeeded, and I nudged my score up by about 300 points with a last-second submit (literally), for a total of 31799 points.

4 Likes

Great contest, congrats to the winners … and CG!

Nice to see the Validators, but could we have the actual text files please?
I hate my score (too many ideas and only one day to go… finished outside the top 500 :frowning: ) and would love to work on it further!
'Cause it was really super fun :wink:

2 Likes

Hey there,

Last I checked, I was 6th; my best was around 43.7k.

I used a Monte Carlo approach: random sampling of moves while simulating the game. Since the search space was so huge, I used a few tricks to simplify the problem. The idea was to balance “the risky” and “the obvious”. So instead of generating completely random sequences of “move/shoot”, I generated sequences of “strategies”. There are basically 4 different strategies:

1- Going towards a Data point (full speed)
2- Going towards an enemy (full speed)
3- Moving randomly around me (between 100 and 1000 units of distance)
4- Killing the closest enemy (i.e. shooting it until it dies).

Each of these strategies has a given probability of happening. Every step, when I generate a move, I “cast a die” and apply the corresponding strategy. On top of that, there is an additional “overriding probability”: if I can one-shot an enemy, there is a very high chance that I will do it (but not every single time; on test case 26 of the IDE, for instance, it was a bad idea).

Last thing about the strategies: I added a repetition coefficient. For strategies 1-3 (not 4), I also cast a random number of repetitions: you will often need to move a lot in the direction of a data point or an enemy. This number repeats the chosen strategy (up to 5 times). It avoids the annoying case where the MC goes one step towards a DP, then one step towards another, and oscillates “stupidly”.
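
A rough sketch of that weighted strategy sampling with repetition (the weights here are illustrative placeholders, not pb4608’s tuned values):

#include <random>
#include <vector>

// Hedged sketch: pick strategies with a weighted die roll, repeating
// strategies 1-3 up to 5 times to avoid "stupid" oscillation.
enum class Strategy { ToDataPoint, ToEnemy, RandomMove, KillClosest };

std::vector<Strategy> generatePlan(int length, std::mt19937& rng) {
    std::discrete_distribution<int> die({30, 20, 20, 30}); // illustrative weights
    std::uniform_int_distribution<int> reps(1, 5);
    std::vector<Strategy> plan;
    while ((int)plan.size() < length) {
        Strategy st = static_cast<Strategy>(die(rng));
        int n = (st == Strategy::KillClosest) ? 1 : reps(rng); // 4 runs to the kill
        while (n-- && (int)plan.size() < length)
            plan.push_back(st);
    }
    return plan;
}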

A few more words that, I think, made a small difference :

  • I only used one square root (in the function that moves an entity). All distance comparisons are based on squared distances. Square roots are really expensive to compute; avoid them at all costs (see the sketch just after this list).
  • Same for pow: it is better to precompute the damage dealt statically instead of using the formula with pow.
  • Some cases are really hard, so it is also important to have a way to compare losing strategies. For instance, if two strategies lead to a game over, I will always prefer the one that lets me survive longer.
  • Never compute something you can pre-compute :stuck_out_tongue: For strategy 3, I use a table of precomputed random moves.
  • Some people don’t know this, so please note it :stuck_out_tongue: Every contest/multiplayer on the website allows 100ms of computation per turn, EXCEPT for the first. The first turn has one whole second to work with. Use this to generate as many solutions as possible.
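
For illustration, comparing distances on squared values needs no square root at all (a self-contained sketch; the Entity type is mine):

#include <cstddef>

// Hedged sketch: all "who is closer?" questions answered without sqrt.
struct Entity { double x, y; };

double dist2(const Entity& a, const Entity& b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;          // squared distance, no sqrt
}

bool closer(const Entity& p, const Entity& e1, const Entity& e2) {
    return dist2(p, e1) < dist2(p, e2); // ordering is preserved by squaring
}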

If you want to see my code (since the contest won’t be available as multi) : http://pastebin.com/stg3TcCJ

If I had more time, I would have :

  • Done a real statistical study of the probabilities of every action. Here, every coefficient was attributed by submit-based trial and error.
  • Used an MC to generate loads of solutions during the first time step, then used another method (a local optimizer) to improve the best solutions during the following turns.

Overall on the contest :

I loved the contest. It was extremely interesting to work on something so close to, and at the same time so different from, CvZ. The rules were pretty well defined from the beginning and no big server problems happened!

Downside: I am really surprised at the validators; I found the submission process extremely frustrating. Improving the general score in the IDE would often lead to a loss of points on submit. All in all, once I got to 41k, I got to 43.7k by exhausting every single submit while trying stuff completely blindfolded. Not very exciting to gain ranking without understanding what’s happening :slight_smile:

Another thing: I understand the will to limit the number of submits to avoid what happened during CvZ, BUT: for people working their day job and trying to code for the contest at night, it was a bit hard to have only 25 submits for a night. Moreover, if such a system is kept for the following optimization contests, could the other limitation, the one that blocks a user from submitting too often, be deactivated? Because it gets really annoying to know you have only 25 submits, try to win a few points while tuning your parameters… and end up stuck for half an hour because of the second limitation.

Anyway ! Thanks a lot for that contest once again ! Time to get some sleep after working on contests for three weeks !

15 Likes

This.

I personally didn’t use my submits until the last day of the contest. I tried a structured approach, selecting enemies based on the current dmg/hp ratio, with a random pick among the n first in this order.

I used a cache tree for the enemy positions, which allowed me to run quite a lot of simulations. I was steadily doing about 10k simulations per turn on the last “extreme” test case and achieving 40-42k on submit, with a peak at 43007, which granted me 14th place.

I basically simulated 3 attack paths (straight, left and right of the enemy), testing every step, starting from the moment I reached the number of shots needed to kill him from the current position (just an evaluation based on current dmg/hp).

I used some kind of binary GA on the random seeds of an LCG used to generate tries, to select the best options at a certain enemy depth. It seemed to perform better than a more gene-based GA or pure MC in the case of my algorithm… However, I noticed that adding some extra randomness to the target position sometimes allowed me to perform better.
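
For reference, an LCG is only a couple of lines, which is what makes a whole rollout reproducible from one seed, so a GA can crossover/mutate the seeds themselves (a sketch with the classic constants, not necessarily the ones used here):

#include <cstdint>

// Hedged sketch: a linear congruential generator. A full sequence of
// "random" tries is fully determined by one 32-bit seed, so a binary GA
// can operate on seeds instead of whole move sequences.
struct Lcg {
    uint32_t state;
    explicit Lcg(uint32_t seed) : state(seed) {}
    uint32_t next() {                        // Numerical Recipes constants
        state = state * 1664525u + 1013904223u;
        return state;
    }
    int below(int n) { return next() % n; }  // pick among n options
};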

With fixed positions I was scoring 7063-7270 on the last “extreme” test case, but by adding some randomness to the target position I was able to achieve a maximum of 7890, with a range going from 5xxx to 7890 instead…

I then started to submit again and ranged from 36k to 42k. However, I only had a day of submits left, so I did maybe 25 submits and wasn’t able to get past my previous score.

I was often getting random failures on submit. It was very frustrating to have something like 4 consecutive failed submits, each with a random validator failing, while I couldn’t fail a single test case across several “play all test cases” runs in a row.

It didn’t really feel like an optimization challenge, but more like a luck contest on submit.
It would have been much more of an optimization challenge if, first, you could not fail validators when you didn’t fail the test cases, and second, your score were based on an average score rather than on your best submit peak, which was mostly about luck and how many submits you did.

I’m not sure that trading “hardcoded solutions” for “best submit peak” is such a good deal, actually.

6 Likes

I finished 8th, using C++. If you want all the details, here they are.

### First Feelings
I guess everyone who participated in Coders vs Zombies thought: it’s almost the same. But when I tried to make my first GA, while the ranking of my Hypersonic bot was still processing (it took 6 hours to reach 100%), I was really surprised.

Even though the differences from CvZ are small, they bring a lot of trouble. As you know, in CvZ, when any enemy is close to the player, he instantly shoots. Here, you have to select whom to shoot. Counting the possible orders of killing enemies, even for ten enemies there are already 10! = 3628800 variants.

After the Hypersonic contest was over, and after taking some rest, I started to struggle with figuring out how to choose between shooting and moving. First idea: uniformly shoot or move. No success, just a very dumb bot. Played around a bit, then threw it away.

Meanwhile, there was a bug in the validator mechanics: while moving by decreasing the x coordinate without changing y, sometimes y changed anyway. It was fixed later.

### MCTS appears on the scene

Later… someone mentioned MCTS. I decided… why not? Just to learn what it is, and also for fun.
My first implementation wasn’t MCTS as it is supposed to be: I made a random choice at every step. So it was as simple as: Select? Random! Simulate? Random in a loop! Expand? Explore 12 moves and a shot at each enemy!

Main question: what kind of random? Uniform!
That was the issue, because uniform decisions produce almost no shots. This was gaining a very, very low score (and less than 100%, of course).
Next, to deal with the low number of shots, I decided to first roll 50:50 shoot or move, and only then roll for the target or the move direction. It started to work: still bad, but much better.

### Fooling around

While playing around with such a poor bot, I was thinking about the main performance issues, and one point stuck in my mind: the bottleneck later will be the damage calculation. LOL.

So I started to optimize it… First consideration: it is best to compute it straight from the square of the distance. I thought the best would be an unrolled binary search, but for comparison I made three versions: the plain math formula, an unrolled binary search, and a list of corner values (of squared distance).

Due to a bug in my code, the list of corner values appeared fastest at first. Later it turned out that the unrolled binary search is fastest, the list of values second, and the plain math formula last.

After playing with that, it was time to continue reading about MCTS and try to understand it completely. I put in the UCT formula as is. Almost no difference, still a dumb bot. I tried to play around with the coefficients… no hope.
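
For reference, the standard UCT selection rule picks the child maximizing mean value plus an exploration bonus (a sketch with assumed Node fields, not the author’s code):

#include <cmath>

// Hedged sketch of UCT selection: value/visits is exploitation,
// C*sqrt(ln(parentVisits)/visits) is exploration.
struct Node { double value; int visits; int childCount; Node** children; };

Node* uctSelect(const Node& parent, double C = 1.41421356) {
    Node* best = nullptr;
    double bestUct = -1e18;
    for (int i = 0; i < parent.childCount; ++i) {
        Node* ch = parent.children[i];
        if (ch->visits == 0) return ch;   // always try unvisited children first
        double uct = ch->value / ch->visits
                   + C * std::sqrt(std::log((double)parent.visits) / ch->visits);
        if (uct > bestUct) { bestUct = uct; best = ch; }
    }
    return best;
}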

### Suddenly

Then I replaced the MCTS simulation function with the simplest function ever: select the closest enemy, shoot him. It was like… Oo. How is that possible, he ROCKS!!! The score was ~30k, but only around 97% complete. The issue was the tests with very many units.

Thinking this was the right way, I kept trying to tune the UCT coefficient or replace the formula with something else, and to figure out how to increase speed. I did some optimizations and got ~39k & 100%.

## Optimizations

### 1. Damage calculation

I made an even faster function to calculate damage. You can’t make a plain lookup table indexed by squared distance, because the largest value is 337000000, so a plain lookup table would take ~300 MB. The idea is the following: we want the smallest possible switch, and the code for each case should be small. A binary search uses comparisons, and there are already a few (four) of them, so each case should do fewer than three comparisons.
Question: can we get down to at most one comparison per case? Yes we can!
Write code that splits the whole range of squared distances into the fewest number of ranges whose widths are powers of two. Each such range gets a case, and the code in that case is small. The resulting code is:

int dmg(int d)   // d = squared distance
{
  if (d < 90719)
    return 0;
  int z = table[(d - 90719) >> 20];   // one entry per 2^20-wide bucket
  switch (z)
  {
  // cases elided: each case is either a plain `return <damage>;` or a
  // single comparison; negative values of z mark the comparison cases
  ...
  }
}

Each case is either a return or a single comparison.
Reserving the negative case values for the comparisons lets us put the resulting damage value straight into the table whenever no comparison is needed.

### 2. Don’t delete!
Simply don’t free any memory.
As simple as that. This is why I had 97%: I was deleting nodes after the timer expired, and that was causing one validation test to fail.

### 3. Closest unit

During simulation, you very frequently need to find:

  1. the closest data point for an enemy (or the closest enemy for a data point), to know where he is moving.
  2. the closest enemy to a data point, to know whether it should be deleted, and who will grab it.

The idea here might be to use some search trees, but in my case, with MCTS, you simply can’t copy search trees for each new game state.

So my idea is the following: keep the links to the closest entities up to date.
It’s the first thing that comes to mind, but whoever implements it may conclude “it just doesn’t work!” Why? The closest data point for an enemy works fine; the issue is the closest enemy for a data point, because when enemies move, the closest enemy can end up farther away than his own closest data point.

But nope! It works! All you need to do is add a check: the enemy’s target must be this data point. Then it works. (At least for me it never failed; I force a crash with an exception if the simulation result differs from the server reply.)

Side question: how to store the links? I’m using indexes into an array.
The algorithm is the following:

  1. If a data point’s nearest index points to a dead enemy, find the closest alive enemy. If the nearest enemy is alive and at the same coords as the data point, mark the data point dead.
  2. Remove all dead data points while building a table of new indexes.
  3. Remove the enemy; there is only one, so just remember its index.
  4. For each data point whose nearest-enemy index is greater than the removed one, decrease it. No data point still points to the dead enemy, because of the first step.
  5. Replace the old nearest indexes for enemies using the table.
  6. For every enemy that no longer knows its closest data point: find one. If this enemy is closer than that data point’s current nearest enemy, update it.

I’m not sure how much it increased speed, but I used it.
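
Steps 4-5 of that bookkeeping might look like this (my simplified sketch; the entity arrays themselves are assumed to be compacted elsewhere):

#include <vector>

// Hedged sketch of the index remapping after removing dead data points
// (step 2 built newDpIndex) and one dead enemy (step 3 remembered it).
void remapNearest(std::vector<int>& nearestEnemyOfDp,   // per data point
                  std::vector<int>& nearestDpOfEnemy,   // per enemy
                  const std::vector<int>& newDpIndex,   // old dp -> new dp, -1 if removed
                  int removedEnemy) {
    for (int& e : nearestEnemyOfDp)
        if (e > removedEnemy) --e;    // step 4: shift past the removed enemy
    for (int& dp : nearestDpOfEnemy)
        dp = newDpIndex[dp];          // step 5: -1 means "find a new one" (step 6)
}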

### 4. Middle step
I don’t remember when exactly I introduced it.
One of the biggest optimizations. The idea is the following, the same as in some previous contests I participated in…

In this game, the actual player action takes place in the middle of a game step. This is what we will exploit.

Split the whole simulation of a single game step into three functions:

  1. EnemyMove: processes enemy movement.
  2. PlayerAction: processes the player’s action.
  3. RemoveDead: does the cleanup.

I also had a helper function PlayerIsAlive…, called after PlayerAction but before RemoveDead, because in the game logic dead enemies still kill.

What does this allow us to do? In the PlayerAction phase, we can check very quickly whether we can stay and shoot or whether we should move, simply by checking whether any enemy (dead or alive) can reach us from its current position (no processing of movement!).

So the nodes of my MCTS stored the state of the game in the middle of a game step. Confusing, but true. :grin:
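
A skeleton of that split (the types are assumed, not the actual code):

// Hedged skeleton of the three-phase game step; MCTS nodes store the
// state *between* EnemyMove and PlayerAction, i.e. mid-step.
struct Action { /* move target or enemy to shoot */ };

struct GameState {
    void EnemyMove();                  // phase 1: enemies walk toward their targets
    void PlayerAction(const Action&);  // phase 2: player moves or shoots
    bool PlayerIsAlive() const;        // checked here: dead enemies still kill
    void RemoveDead();                 // phase 3: cleanup of killed entities
};

void playStep(GameState& s, const Action& a) {
    s.EnemyMove();
    s.PlayerAction(a);
    if (!s.PlayerIsAlive()) { /* score this branch as a loss */ }
    s.RemoveDead();
}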

### 5. Towards
After realizing that I’m stupid, and that the damage calculation is called only once per step compared to the other things in the loop, it became obvious that the bottleneck should be the towards function.

That’s what I call it: it computes the position closer to another position by a certain distance. I even did some research on fast calculation of inverse square roots and other hacks; a famous one is from Quake III. I wasn’t satisfied with anything, because of the rather big loss of precision.

Funny fact: the same math using floats gave errors and no performance boost. I tried using integers to store coords, since everything is integer except the middle of the calculations. Again, integers gained no speed. Doubles are fastest.
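
A minimal version of such a towards function, with its single square root (my sketch, not the author’s exact code):

#include <cmath>

// Hedged sketch: move 'from' toward 'to' by at most 'step' units.
// This is the only place in the simulation that needs a sqrt.
struct Pt { double x, y; };

Pt towards(const Pt& from, const Pt& to, double step) {
    double dx = to.x - from.x, dy = to.y - from.y;
    double d2 = dx * dx + dy * dy;
    if (d2 <= step * step) return to;   // target within reach: snap to it
    double k = step / std::sqrt(d2);
    return { from.x + dx * k, from.y + dy * k };
}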

#### SSE
Ha. Ha. Ha. I’m mad. Yeah. :laughing:

I got this idea while looking for inverse square roots: somewhere, a fast inverse square root in the SSE instruction set was mentioned. Surprisingly for me, intrinsics work on CodinGame! The SSE inverse square root is, again, an approximation.

So I made two versions of the towards function using SSE:
one for integers, the other for doubles.
No speedup.
WHY? Why did I spend so much time on a useless thing?
Because I wanted to try!

It was very surprising that SSE gave no gain in speed.
The reason was the GCC compilation flags. They are not mentioned in the CodinGame FAQ, but if you wonder why no flags are listed, the reason turns out to be simple: no flags are passed to GCC at all, only libraries.

GCC flags

The question was: is it possible to set -O2, for example, straight from the source?
Answer: yes!

#pragma GCC optimize "O2"

I was using in the end:

#pragma GCC target "tune=native"
#pragma GCC optimize "O3,omit-frame-pointer"

The gain in speed was obvious.
Next, I tried to check SSE again. Still no visible gain in speed.
Later, I came across an online compiler that shows the assembly, and using it, I figured out what the issue was. After optimizing the code, there was still almost no gain, so I dropped it. Perhaps the CodinGame system uses virtual machines (no?) on which SSE doesn’t pay off, or… I don’t know why.

### 6. Allocation of memory in a single buffer
I made a custom allocator that allocates all memory from a single 700 MB buffer. As long as I never free memory, it works.
Strangely, I was unable to see any gain in speed; perhaps it’s somehow related to virtualization. I don’t know whether CodinGame uses virtual machines or not. I have no idea why memory allocation is so freakishly fast.
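
Such a bump allocator is only a few lines (a sketch under the same never-free assumption):

#include <cstddef>
#include <cstdlib>

// Hedged sketch: allocate from one big preallocated buffer, never free.
// Safe here only because the program never releases memory individually.
static char*  g_buf  = static_cast<char*>(std::malloc(700ull << 20)); // 700 MB
static size_t g_used = 0;

void* bumpAlloc(size_t n) {
    n = (n + 15) & ~size_t(15);   // keep 16-byte alignment (handy for SSE)
    void* p = g_buf + g_used;
    g_used += n;
    return p;                     // no bookkeeping, no free()
}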

### 7. EnemyMove using aligned SSE
Yeah, I wrote an aligned allocator, and then a version of the EnemyMove function that uses aligned loads and stores. This did gain speed: not more than x1.5.
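
Aligned SSE2 loads and stores over interleaved (x, y) doubles might look like this (my illustrative sketch; the per-enemy displacement vectors are assumed to be computed elsewhere):

#include <emmintrin.h>   // SSE2

// Hedged sketch: positions stored as interleaved x,y doubles in
// 16-byte aligned arrays, so one load/store handles a whole point.
void enemyMoveSse(double* pos, const double* step, int count) {
    for (int i = 0; i < 2 * count; i += 2) {
        __m128d p = _mm_load_pd(pos + i);    // aligned load of (x, y)
        __m128d s = _mm_load_pd(step + i);   // this turn's displacement
        _mm_store_pd(pos + i, _mm_add_pd(p, s));
    }
}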

Overoptimized

Disappointed by the fact that all my very cool optimizations gained almost no speed or score… I decided to run my bot on local tests under a profiler.

After making it work without the server replies, profiling showed that I was right: with the simple EnemyMove function, the bottleneck is EnemyMove; and when I replaced EnemyMove with the SSE version, this bottleneck disappeared.

The second bottleneck was the MCTS->Select function. Nothing surprising: it does sqrt(log()) a lot of times. But at that moment I couldn’t think of any other optimizations, because locally I had started to run the bot with much bigger time limits and saw almost no gain in score.
That was The Issue.

Speaking of MCTS

The main trouble for MCTS is the well-known “exploration vs exploitation” problem: you have to balance exploiting the best known turns so far against exploring the other turns. It’s known that some games have good statistical properties that allow the use of some modern methods.

Regarding this problem: I don’t have a few months to do research, propose theorems, write proofs, etc. Thus, I could only try ready-to-use methods and state: it does not work. :joy: Maybe my hands just grow from the wrong place.

Anyway, almost all MCTS variants assume that the result of a game is a win or a loss, so the range of the outcome is [-1, 1]. In our task, a score of 7500 is possible, so the range is [0, 7500+]. We could somehow normalize the outcome into [0, 1], but the simplest method, division by the best possible score, turns a normal score of 1100 into 0.1. In other words, normalizing makes normal scores very close to zero, and as a consequence… MCTS morphs into a simple BFS (Breadth First Search).

The other choice is to scale up the other side of the UCT formula to restore some balance between the two terms; I can’t say much about that.
Anyway, the UCT machinery is proven for games that have certain properties, and this game may not have those required properties.

I tested many variants: best in node divided by best so far, average score, and so on. I don’t remember which won.

One more bad side: the supposed optimal score.
Let’s say we want to somehow predict the best possible score, without underestimating it. The first thing that comes to mind: imagine that we shoot each enemy at least once, that each hit deals MAX_DAMAGE (as if we somehow always shot from the closest distance), and that we somehow save all the data points still in the game.

The problem is the following: this optimal score decreases rapidly, which prevents us from cutting off branches of the MCTS tree using the rule: if the best score so far is greater than the optimal score of this game state, then there is no need to check this game state.

Another problem: by increasing the branching of movements (not counting shots), you decrease the probability of exploring a game state where you kill an enemy who is a bit farther away but would yield much more score than just killing the closer one. And on the other side, restricting movement cuts off potentially better shooting positions.

There is prior work, and an algorithm called SP-MCTS (Single-Player Monte Carlo Tree Search), where most of these issues for such games are discussed.

### Deal with it
There was a test called something like “Nearest to impossible to save”. This test case exposed the issue of targeting a distant enemy with a lot of life, and of figuring out that you need to kill him. For MCTS, that means exploring several nodes moving towards him and several nodes shooting at him, and not just “some of them”: only a strictly very small subtree leads to it.

I made a simple heuristic that finds the enemy closest to a data point, then analytically finds the fastest way to reach that enemy while heading at his data point, and kills him. Repeat. One note: if someone is closer than the enemies’ killing range, retreat. I don’t need to check movement, because I’m in the middle of the step (discussed above). I do need to avoid enemies, because if you get killed the score is zero, and that would be no better than the MCTS result.

I ran this heuristic after reading each game step. If it gave a better result than the MCTS, I used it instead. Using this method, I got 660 on that particular test.

Let’s name it strategy2. The previously discussed heuristic, shooting the closest enemy, is strategy1.

I tried to run the MCTS using both strategy1 and strategy2 in the simulation. The time this required lowered the total score.

### Don’t interleave
I tried the following rule: once you shoot an enemy, all further shots must go into the same enemy until he is dead.
Same for movements: if you move in the k-th direction, all following movements (you can stop and shoot for a few turns in between) must be in the same direction until the first kill.

So the reset trigger is a kill: after a kill you can move anywhere and shoot anyone. I played around with this, increasing the number of directions, disabling one rule or the other… No hope. 32k-36k score.

### Last day
Around 10 hours before the end, I had a 40818 score at 100%, rank #29. I had an analytical function to figure out the fastest way to reach an enemy. Maybe instead of the enemy closest to a data point, select the one closest to the player, and walk?

I even added a calculation of the fastest possible kill of this enemy: if the fastest kill happens only after he reaches his data point, you need to shoot already; otherwise, you can move closer. Heuristic codename: strategy3.

Riding that heuristic gave me a 41968 result.
Before this, my score on the “Extreme” test case was ~5100.
After, I got 6k, and this was a sign: following the enemy makes sense, and killing faster makes it easier to kill a more distant enemy.

This was still with the “don’t interleave” rule turned on.
After disabling it again, I got 42751.
Then I changed back to my previous best and got a bit more: 43155.

### 2 hours left
I decided to deal with the selection of the first victim at least, by… running enemyCount parallel MCTS instances during the first step! :joy:
Why many MCTS? Because each MCTS should optimize its own task: killing a certain enemy first. This was inspired by the meta-search in SP-MCTS.

By the way: did you know that the time limit of the first step is 1 second?

So, for the first 1 second there were many MCTS instances, each using a heuristic that tries to kill the enemy selected for that MCTS first.
Then, starting from the second step, the selected enemy is fixed, and the search continues trying to kill him first.

The result was lower than my record. Then I tried to reduce the time spent on selection to 150 ms, and got a 43511 score.

Last hour

First, I played with the “new” version a bit and somehow got 43535.
Then I tried to select the second victim too, by fixing the first selected victim. The results were like 43331 (funny, so many 3s), but nope. I sent a few different variations during the last few minutes, but nope, still 43535.

### Account
## 10/10 :joy:
It was very fun. A bit uncomfortable to have two contests in a row.
Sad, though. Don’t ask why. Perhaps because it’s over.

16 Likes

As the closest non-lethal-range possible dmg was 14 at about 2020 units, a 14-entry table of squared distances was enough to get the damage for a given squared distance. I guess that’s what most people used.
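
If I recall the statement’s damage formula correctly, damage = round(125000 / dist^1.2), which matches the 14-damage-at-~2020-units observation, such a table can be built once at startup (a sketch; the formula and the rounding at bucket boundaries are my assumptions):

#include <cmath>

// Hedged sketch: bound2[v] = largest squared distance still dealing at
// least v damage. With the 2000-unit kill range, live play only ever
// sees damage 1..14, hence the 14 entries.
static long long bound2[15];

void initDamageTable() {
    for (int v = 1; v <= 14; ++v) {
        double d = std::pow(125000.0 / (v - 0.5), 1.0 / 1.2); // round() threshold
        bound2[v] = static_cast<long long>(d * d);
    }
}

int damageFromSq(long long d2) {
    int v = 0;
    while (v < 14 && d2 <= bound2[v + 1]) ++v;  // at most 14 comparisons
    return v;
}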

I also considered using SSE for the calculation of square roots, but as I was using a cache, I thought it wasn’t really needed… The problem with SSE is that you tend to lose time switching between SSE registers and doubles, and you really need to lay out your data in a way that lets you benefit from it. Also, square roots are quite fast nowadays; optimization seems to be mostly about avoiding cache misses more than anything else…

Good point on the GCC flags; I hadn’t tried much besides -Ofast, without noticing much change.

On the “Nearest to impossible to save any dataset” test case, I noticed that my fixed-position method was randomly shooting the 2 first low-hp targets or moving toward the large-hp guy, because it was only nailing down the option of saving both data sets at a late stage of the game.

However, by adding some extra randomness to the position, I was able to always nail the best solution from the first frame. I was quite enthusiastic about it; however, it created a lot of variation on test cases with large numbers of enemies, like the “extreme” one. I only had a few submits left, and about half of them had some validator randomly failing, unfortunately :frowning:

Thanks! I had been looking desperately in GCC’s docs for the pragma equivalent of -march=native!

By the way, if you are using inline functions, you should also add “inline” to the optimize pragma, otherwise your inline functions will be purely and simply ignored (well, just not inlined).
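
Combined with the pragma quoted earlier, that would presumably give something like:

#pragma GCC optimize "O3,omit-frame-pointer,inline"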

Aside from that, I also compared O3 and Ofast, but the latter gave me poorer results every time.

1 Like

Thanks for the validator screenshots! Would it be possible for you to share the raw inputs so we can play with them locally?

2 Likes

I finished 29th. I tried some Monte Carlo (MC), a Genetic Algorithm (GA) and Simulated Annealing (SA). The best version of my code was the SA.

Overview of my Algo:

I first tried to do some MC. I found out that the search space is so large that searching completely randomly more than a few moves ahead led to terrible results. My first decent code did the following: at every turn, try 3 random moves and then shoot the nearest enemy until game over, using the final score as the evaluation function. This code got a maximum of ~35K on submits. What I found really interesting was that searching 4 moves ahead did noticeably worse than searching 3 moves ahead. To me, this showed that the search space was really too large for the pure randomness of MC.

For the GA/SA, I used some milliseconds of my first turn to generate a solution with the 3+X MC described above, and then applied the GA/SA algorithms to improve that solution. The results were a little better, but nothing amazing.

The big improvement for me came when I changed my Random_Move_Target() function: instead of always moving randomly, I made it move randomly 50% of the time and move towards a data point at full speed 50% of the time. This guides the randomness towards moves that are more likely to be useful, and it had a huge effect on my score: I reached ~41K. Along the same lines, I tried adding moves like “intercept enemy trajectories” and “shoot nearest”, but they never seemed to make a difference.
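
That guided sampling is tiny in code (a sketch; Point, State, randomPoint() and towardsFullSpeed() are assumed helpers, not the author’s implementation):

#include <random>

// Hedged sketch: 50% pure random move, 50% full speed toward a random
// data point, biasing the search toward plausibly useful moves.
Point Random_Move_Target(const State& s, std::mt19937& rng) {
    if (rng() & 1)                                // 50%: unguided exploration
        return randomPoint(rng);
    const Point& dp = s.dataPoints[rng() % s.dataPoints.size()];
    return towardsFullSpeed(s.player, dp);        // 50%: head for a data point
}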

Towards the end, I made a local version of my SA and used it to run a lot of tests (much faster than IDE tests). This allowed me to see that the SA was finding all its best solutions around 100-500 degrees. So I changed my cooling schedule to a linear schedule from 500 to 0; the algorithm therefore spends 80% of its time in the 100-500 degree range. This improved my average score noticeably, but I didn’t have many submits left and only improved from 41K to 41.5K.
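
For reference, the SA acceptance rule with such a linear schedule might look like this (a sketch; Solution, neighbour() and score() are assumed helpers):

#include <cmath>
#include <random>

// Hedged sketch of simulated annealing with a linear cooling schedule
// from T0=500 down to 0, using the standard Metropolis acceptance rule.
Solution anneal(Solution cur, int iters, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    Solution best = cur;
    const double T0 = 500.0;
    for (int i = 0; i < iters; ++i) {
        double T = T0 * (1.0 - double(i) / iters);   // linear: 500 -> 0
        Solution cand = neighbour(cur, rng);
        double delta = score(cand) - score(cur);
        if (delta >= 0 || u(rng) < std::exp(delta / T))
            cur = cand;                              // accept worse moves while hot
        if (score(cur) > score(best)) best = cur;
    }
    return best;
}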

Outlook:
I did not understand why my further attempts to guide the randomness towards likely good moves (intercepting enemies, shooting closer enemies) did not improve my score.

I’m not sure about my Neighbour() function in the SA: it would change 1 move at random, go through the list of moves, remove obsolete ones (shooting a dead enemy; and I now realise I wasn’t checking for moves outside the board), and append “shoot nearest enemy” at the end if the game wasn’t over. Maybe something simpler was possible/better.

Feedback:
I liked the game, didn’t bother me that it was similar to Code versus Zombies (CVZ), it was different.

Unlike CVZ though, it seemed quite a few people were able to reach the optimal solutions in the first turn, which makes me wonder if the game/test cases were complicated enough. I don’t think you can get the optimal solution in 1s in Mars Lander or CvZ.

I was not a huge fan of limiting submits so aggressively. This was done because of what happened in CVZ: since you are ranked by your highest submit, people were encouraged to trade average score for high variance and spam submits. I think we can all agree that this is undesirable, as we would like an algorithm with a better average score to be ranked above an algorithm with a lower average score.
Limiting submits so aggressively does push people to make less of a trade-off between average score and high variance. But it doesn’t resolve the fundamental problem that you are ranked according to your max submit, and the reduced number of submits, in a way, only makes the error bar on people’s scores larger. It has other obvious side effects too: it advantages people who could connect regularly to spam more, it lowers people’s ability to test their code, and it makes development towards the end of the contest almost useless, as you won’t have enough submits left to improve your score with reasonable probability.
Solution: in some way, rank according to average score.

6 Likes

On validator 29 you can get either 7384, 7580 or 7890. It didn’t seem to happen on any submit, but those are scores I’ve seen relatively often in the IDE.

On the other hand, since the validators are slightly different from the test cases, maybe it’s another story…

Some people don’t know this, so please note it :stuck_out_tongue: Every contest/multiplayer on the website allows 100ms of computation per turn, EXCEPT for the first. The first turn has one whole second to work with. Use this to generate as many solutions as possible.

Can we get all of these tiny details stated somewhere on the description page? In these competitions, getting the first move right is critical (Smash the Code, anyone?), so it would be very nice if more than a handful of people knew these extra rules. I wonder how many tricks are out there used by only a few people. It feels a bit unfair, as we’re no longer playing by the same rules as everyone else. How do they know these, btw?

I tried to be clever and failed :stuck_out_tongue: I won’t write too much since I only scored 28.8K, but the approach was fun.

I framed the problem as achieving a sequence of goals that maximizes score, where each goal is a tuple (enemy_id, damage). A goal is achieved by using any number of move actions followed by one shoot action.

To achieve a goal, I decided I wanted to make a flawless god-algorithm that found the perfect movements, avoided enemies, strategically positioned Wolff for dealing with multiple enemies, and made me breakfast. This is the gist of it:

  1. The player’s path is a sequence of points p_1, …, p_n, and each move vector p_(i+1) - p_i has magnitude <= 1000.
  2. p_1 is anchored to the player’s location in the initial state.
  3. Each enemy is a circle whose origin is the enemy’s location and whose radius is 2000. Each point the player moves to is associated with the set of enemy circles, at that time step, that the point must be outside of.
  4. The goal is a circle whose origin is an enemy location and whose radius is based on the damage formula. p_n must be inside the goal circle.

The algorithm increments n (1 + the number of moves) until this system of constraints is satisfied. At each iteration, the player’s movement points are initialized to some arbitrary locations, then they are stupidly moved around using Verlet integration until they settle into positions that satisfy the constraints.
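
A compressed sketch of that relaxation, written as plain constraint projection rather than the Verlet integration used in the post (the data layout and iteration cap are my assumptions):

#include <cmath>
#include <vector>

// Hedged sketch: nudge path points until the constraints hold (anchored
// p_1, max step 1000, outside enemy circles, p_n inside the goal circle).
struct P { double x, y; };
struct Circle { P c; double r; };

static void pushOutside(P& p, const Circle& e) {
    double dx = p.x - e.c.x, dy = p.y - e.c.y, d = std::hypot(dx, dy);
    if (d < e.r && d > 0) { p.x = e.c.x + dx / d * e.r; p.y = e.c.y + dy / d * e.r; }
}

void relax(std::vector<P>& pts, const std::vector<std::vector<Circle>>& enemiesAt,
           const Circle& goal, int iters = 200) {
    for (int it = 0; it < iters; ++it) {
        for (size_t i = 1; i < pts.size(); ++i) {
            for (const Circle& e : enemiesAt[i]) pushOutside(pts[i], e);
            // clamp the move length to 1000 units
            double dx = pts[i].x - pts[i-1].x, dy = pts[i].y - pts[i-1].y;
            double d = std::hypot(dx, dy);
            if (d > 1000) { pts[i].x = pts[i-1].x + dx/d*1000; pts[i].y = pts[i-1].y + dy/d*1000; }
        }
        // pull the final point into the goal (shooting) circle
        P& last = pts.back();
        double dx = last.x - goal.c.x, dy = last.y - goal.c.y, d = std::hypot(dx, dy);
        if (d > goal.r) { last.x = goal.c.x + dx/d*goal.r; last.y = goal.c.y + dy/d*goal.r; }
    }
    // (a real version would verify all constraints and report success/failure)
}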

When I got this to work, I was like: hah! The hard part’s over, now I can slap together any tree search and do well! It turns out it doesn’t work that way. There’s no value in being able to find convoluted paths to achieve one of these goals, since if someone’s in your way you can just shoot them and move on with life :slight_smile: . It’s good at avoiding deaths, hitting the 14-damage regions, and stitching together multiple goals, but there are too many possible combinations of goals for the precision to be valuable.

Lesson learned: when the search space is huge, consider using a shotgun approach (Monte Carlo).

Feedback

The challenge was fun and the servers were responsive :smiley: The only thing I felt uncomfortable with was the validators. In particular, even though the validators were hidden, I still felt like people were fitting their solutions to them by making random tweaks and resubmitting. Maybe the rules could define the way enemies are distributed: for example, they could be laid out uniformly at random. Then the validation tests could be generated automatically, with enough of them that incidental tweaks don’t matter. It would also open up the possibility of showing the validation replays to the user, as long as old ones are periodically cycled out and replaced by new ones. This would also make it easier for users to test their algorithms locally.

2 Likes

I actually remembered that there was some difference in the first-frame time limit from a previous contest, but as I couldn’t find the info anywhere, I tweaked it manually, reaching 1125 ms on the first frame, to my surprise actually (I remembered only 150 ms from CSB). I later downgraded it to 1110 ms, wondering if some of my random fails were coming from there…

I reckon it’s really a big deal that this was not stated in the rules:
the first-frame choice is very important indeed and can greatly change the results,
for the simple reason that once you have started a move to chase or shoot an enemy, you can’t turn back and chase another without a considerable waste of time, so the first choice is crucial…

Even if you know that the first turn has a longer time limit, you can never be sure they won’t change it before the end of the contest if it’s not described in the rules. So you have the choice of going down a route that exploits the 1000 ms of the first turn, which might get “broken” if they “fix” it, or playing it safe and perhaps getting beaten by those who took the risk. It’s an unfortunate situation. In Hypersonic it was the same. I used the 1000 ms limit in both, but I didn’t really base my approach on that extra time.

This should definitely be clarified officially.

On the other hand, as most code relies on an iterative algorithm like a GA, MC, PSO or SA, it’s just about tweaking the time limit of the process, i.e. one line of code to change…

However, I also had in mind to tweak things further to take better advantage of the available time, but the reason you stated is exactly why I didn’t do it, so that I could change things back in a matter of seconds if needed… I agree this should be clarified.

What happened with T1024? Why is he not first in the Accountant leaderboard?

1 Like

38th.

Here is a brief summary of these 15 days.

1. Choice of strategy

As soon as I read the description of the challenge, I immediately thought of a “brute force” approach. Not pure brute force, but more of a Genetic Algorithm (GA). I was also wondering how far I could go without really thinking about strategy.

2. Local simulator

I started by implementing a local simulator, since everything about the inner workings of the game was explained (highly appreciated!). It took me about 4 evenings to get it right and well tested. I wrote several tests to make sure I would not break the simulation later on, when it would be time to focus on performance optimization. This was super useful.

The solution was implemented in C++, but I used Python and a Jupyter notebook for the local UI and tools. pybind11 makes it easy to call C++ code (the simulation) from Python (in the notebook). Also, with Python I could gather and display nice stats on how the GA was performing.

I made a quick visual representation of the game (no animation, just frame by frame) with the help of gizeh and moviepy. Here is how a homemade CG replay viewer looks:

Making a visual representation was useful in the beginning, to write the simulator and its tests. Once that was done, I made a batch runner to automatically run all the test cases and output the score for each of them, as well as the total score. I used this batch runner quite a lot to validate any subsequent improvement.

3. Implementing the GA

It was the first time I went with a GA during a contest. I had already done some tests a while ago with Mars Lander, but that was about it.

Once the first version was implemented, I was surprised to pass all the test cases (though not optimally yet). A few tweaks later, I was around 25k points.

The implementation was quite simple:

  • Population of 100 chromosomes.
  • Each chromosome has a fixed size of 80 genes, but not all of them are used at every turn.
  • A gene is either a SHOOT action or a MOVE action.
  • Generate the list of possible moves at the beginning. Moves are relative to the player, created from a circle of radius 1000 (full speed) and sampled every 2 degrees: dx = 1000 * cos(angle); dy = 1000 * sin(angle);. This creates 180 move actions to pick from (see the sketch after this list).
  • There is a 50% chance to SHOOT and a 50% chance to MOVE.
  • The enemy id for a shoot action is picked randomly: target_id = rand_int(0, init_nb_enemies - 1).
  • When the actions are processed by the simulation:
  • If it is a MOVE, add the dx and dy to the current player x and y.
  • If it is a SHOOT, shoot the enemy if it is still alive; otherwise (if it was killed during a previous turn), shoot the enemy closest to the player.
  • At the end, return a heuristic: score * 8 + (init_enemy_life - current_enemy_life) * 10. The second part of the heuristic encourages the bot to inflict damage on the enemies.
  • If the number of enemies is too high, go only 5 levels deep.
  • If the number of data points is too high, go only 15 levels deep.
  • After each generation, keep the 2 best candidates (elitism). This helped a lot to converge quickly and find good enough solutions, but it would make trade-offs on the more challenging test cases.
  • Use the initial 900 ms (first turn) to create a good starting population.
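
The move table from the sampling bullet above might be generated like this (a sketch of the described sampling, not the author’s code):

#include <cmath>
#include <vector>

// Hedged sketch: 180 full-speed moves sampled every 2 degrees on a
// radius-1000 circle, exactly as described in the list above.
struct Move { double dx, dy; };

std::vector<Move> buildMoveTable() {
    const double PI = 3.14159265358979323846;
    std::vector<Move> moves;
    moves.reserve(180);
    for (int deg = 0; deg < 360; deg += 2) {
        double a = deg * PI / 180.0;
        moves.push_back({1000.0 * std::cos(a), 1000.0 * std::sin(a)});
    }
    return moves;
}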

4. Tweaking parameters

The rest of the contest (the last 6 days) was just about tweaking GA parameters and making the code a bit faster.

One big change that produced a jump from 28k to 37k was going wider rather than deeper when searching for solutions. It worked better to go only 20 levels deep and run 100 generations than to go 40-50 levels deep but only perform 30 generations.

After this big improvement, I spent most of the time tweaking the crossover and mutation rates, which gave a few more thousand points to reach 40k. I spent a bit of time trying to push the GA and the scoring function to take more risks, especially on test case #26, which has a solution where the two data points can be saved. But it was a failure: the GA would converge too quickly most of the time. Same goes for test case #10, where I was a few hundred points behind the optimal solution. On the other hand, test case #32 would often get 7580 points (which is high).

In retrospect, it was quite challenging to find a “one size fits all” heuristic that would perform well on all tests and validation tests.

Final words

I liked:

  • The duration. I usually find the duration of the multiplayer contests (8 days) too short; it’s hard to free up time with a full-time job and everything else. On the contrary, 15 days was perfect. It gave me time to work on a good simulation and tools during the first 5 days, which turned out to be extremely useful for the rest of the contest (fast iteration time).
  • The fact that it was easy to iterate on and to make a local simulator for.
  • Simple game mechanics.
  • As mentioned above, the servers’ responsiveness. So much better than having to wait 6 hours before testing a change. But yes, I know, it wasn’t a multiplayer game this time, so it’s hard to compare.

However:

  • As said above: several submissions with the same code could sometimes show a difference of a few thousand points. It was very tempting to use as many submissions as possible, especially when tweaking parameters at the end.

5 Likes

Some players were removed from the leaderboard because they cheated.
As of now, the leaderboard shouldn’t change anymore.