Coders of the Caribbean - Feedback & Strategies

I totally agree with what has been said about the referee. It’s a real +

And I want to add that it’s a good thing that the referee is readable and not optimized in any way: it leaves room to do your own optimizations on the data structures or the algorithms.

1 Like

By now many of you will have become aware of my ‘code-sharing’ scandal :confused:

Maybe I was simply careless in using a public git repo, or perhaps I’m naive enough to think people won’t abandon their morals for a t-shirt zzz If the shirt is really all you want (and not what went into earning it), just drop me a message and I’ll make sure to send the next shirt I win to your address…

For what it’s worth, I apologize again for how my actions unfairly impacted the final rankings of both GitC and CotC :frowning:

So…now, if you haven’t already gotten your hands on my source code

Python!

Well…this competition was dominated by C/C++ and Java, probably due to the nature of the problem statement. It suits search algorithms well; in my opinion it is very much like Fantastic Bits on a discrete space (and without the physics).

That being said, kudos to my fellow Python comrade @Icebox for sticking it through with Spaghetti :wink: and reaching top 50!

Rough Algo

  • Sequential BFS for each ship (updating the state between ships) to find the next move that results in the best-scoring state after 3 moves
  • If WAIT, decide whether to FIRE or MINE

That’s it! My bot is essentially a bfs with some heuristics layered on top of it.

Heuristics

  1. Decide between running the main function or the ‘sacrifice’ option

    • The latter simply tweaks certain scoring factors in the bfs algo by adding a score for the sacrificial ship’s distance to the highest-health ship (and vice versa, so the highest-health ship moves closer to the sacrificial ship)
    • Once the two ships are sufficiently close together and the enemy ships are sufficiently far away that they cannot swoop in and steal the dropped barrel, either move into an already existing cannonball/mine or fire one to self-destruct
  2. Run bfs for enemy ships (3 moves)

    • Doesn’t account for obstacles, more of an approximation of enemy ships’ range
    • Based on simulation, predict possible locations enemy will MINE
    • Also adds a score to the list of visited locations, so my ships can track the probability of a hit based on enemy ships visiting each cell (something like what @RoboStac did), to determine where to fire upon enemy ships.
  3. Predict enemy fires (that will land in <= 2 turns)

    • Sometimes this makes a ship run into a mine in front of it, thinking that the enemy will shoot at its center…(so it picks the -25 option rather than the -50 one, even if the enemy doesn’t actually shoot that turn)
    • Although it breaks occasionally, it keeps my ships moving when the enemy is close and gives them greater flexibility to evade enemy fire. So ultimately it was an improvement to my bot’s performance.
  4. Stopping just short of the final few barrels to ration rum :stuck_out_tongue:

    • If you pick up the last barrel later than your opponent (and have more total rum), you’ve likely won that game unless your opponent does something cool like having their ships exhibit cooperative behavior to block your ship then shoot it…:open_mouth:

BFS + Scoring

Within the bfs function is where most of the magic happens :wink: Significantly, it’s where I teach my ships to ‘run into space’. Future flexibility of movement contributes a portion of the score after rum count and distance to barrels. Basic scoring like +k1/distTo(barrels), -l1/distTo(mines) is standard to determine some positional advantage after movement. In addition to that, I threw in an accumulative score for the free cells you can travel to after each move (see the sketch after this list).

  • Look ahead at what cells are reachable and weigh cells ahead of you higher than those you’ll have to turn to reach.
  • Filter out undesirable cells
  • Add a score based on how many of these cells exist
  • Tuned to be an order of magnitude lower than the distance scoring so it’s more of a tie-breaker (hidden if-else spaghetti inside scoring :stuck_out_tongue: )
  • So if barrels are close, distance scoring will outweigh this ‘space-finding’ weight, but otherwise, on my way there and after all barrels have been picked up, I take the path that gives me the greatest degree of freedom.
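To make the shape of this concrete, here’s a minimal Python sketch of such a scoring function. The constants and the free-cell count are illustrative stand-ins, not the actual tuned values:

def to_cube(x, y):
    # odd-r offset -> cube coordinates (standard hex-grid conversion)
    cx = x - (y - (y & 1)) // 2
    cz = y
    return cx, -cx - cz, cz

def hex_dist(a, b):
    ax, ay, az = to_cube(*a)
    bx, by, bz = to_cube(*b)
    return max(abs(ax - bx), abs(ay - by), abs(az - bz))

K_BARREL, K_MINE, K_SPACE = 10.0, 5.0, 0.1  # space weight an order of magnitude lower

def score_state(ship_pos, rum, barrels, mines, free_cells_ahead):
    score = float(rum)                       # rum count dominates
    if barrels:
        score += K_BARREL / max(1, min(hex_dist(ship_pos, b) for b in barrels))
    if mines:
        score -= K_MINE / max(1, min(hex_dist(ship_pos, m) for m in mines))
    score += K_SPACE * free_cells_ahead     # accumulative ‘space-finding’ tie-breaker
    return score

# e.g. score_state((10, 10), 75, barrels=[(12, 10)], mines=[(9, 9)], free_cells_ahead=7)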

For such scoring functions you really have to get a feel for how they perform by tweaking the constants…a GA would probably be able to auto-tune these for you to reach peak performance haha, but it was faster for me to tune them manually, as I didn’t have ready access to a GA and a local arena.

Optimizations

With Python I could simulate at most ~1k moves before my bot timed out…so with 3 ships, I could not push past a search depth of 3 moves into the future (though that was seemingly sufficient). Even 1k moves required me to change some of my code to optimize its execution speed.
Honestly, I’m not even sure these are legitimate optimizations for Python, but the following are some things I changed to make my code work under the time limit…

Dfs

Initially I went with a dfs search (since I wanted to compare end states rather than picking the first one that had a barrel in it - you might run into an unavoidable mine/cannonball on your next move with that sort of pathing), but it constantly made my bot time out even at depth 3 (5^3 = 125 moves). Apparently Python and recursion don’t go so well together. So I took @DarthPED’s advice from the previous contest and ported over to a bfs algo, and the while loop ran quite smoothly…for a while…

Bfs

Even with bfs, it took ~10ms per ship for depth 3 o.O Pretty surprised at that…Python is slow, but it shouldn’t be that slow…So after looking into my algo, I found that this line

#Declaring v[23][21][3][6]
v = [[[[False for i in xrange(6)] for j in xrange(3)] for k in xrange(21)] for l in xrange(23)]

by itself took ~8ms :open_mouth: So instead of declaring such a big sparse 4d array, I encoded the current state into a unique int:

#v_state = xxyyso
v_state = int(x*10000)+int(y*100)+int(speed*10)+int(orientation)

added it to a list, and searched the list when I needed to find out if I’d visited the state before (since my search depth wasn’t that large, this took significantly less time than declaring a huge array).
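For reference, the same encoding with a set instead of a list (which, as I note further down, would have been the better choice anyway, since set containment is O(1)); a minimal sketch, not the contest code:

def encode(x, y, speed, orientation):
    # v_state = xxyyso, as above
    return x * 10000 + y * 100 + speed * 10 + orientation

visited = set()

def seen_before(x, y, speed, orientation):
    state = encode(x, y, speed, orientation)
    if state in visited:   # O(1) with a set, O(n) with a list
        return True
    visited.add(state)
    return False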

Modifying game state between simulated moves

Something that I learnt playing FB multiplayer >.< Since Python function arguments are references to the same underlying objects, mutating the game state inside a function mutates the caller’s state too. So I had to explicitly copy the game state before modifying it, to prevent one simulation from inaccurately affecting another, and that added a huge overhead to my bfs. However, as changes to the game state between moves aren’t really that numerous, using a list to keep track of adjustments to the state between moves was far cheaper. So I had stuff like:

if (key in mine_key_list and key not in cur_rem_mines):

Once again I encoded objects’ grid locations into ints and stored those in a list rather than cloning and mutating a sparse 2d array.
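A minimal sketch of that delta-tracking idea (the names and values here are made up for illustration):

def encode_cell(x, y):
    # encode a grid location into a single int, as above
    return x * 100 + y

# encoded locations of mines at the start of the turn (hypothetical values)
mine_keys = {encode_cell(3, 4), encode_cell(7, 9)}

def mine_at(key, removed_mines):
    # a mine is present if it existed originally and this branch hasn't removed it
    return key in mine_keys and key not in removed_mines

# each simulated branch carries its own cheap list of adjustments
removed = [encode_cell(3, 4)]   # this branch detonated the mine at (3, 4)
assert not mine_at(encode_cell(3, 4), removed)
assert mine_at(encode_cell(7, 9), removed)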

Precomputing distances and adjacent cell locations

Because I stuck to a 2d array to work with hex coordinates, I had to do some work to access adjacent cells:

def cellAhead(loc, orien):
    # neighbour of loc in direction orien; hexAdj is indexed by row parity (even/odd)
    tmp_bow = (loc[0]+hexAdj[loc[1]%2][orien][0], loc[1]+hexAdj[loc[1]%2][orien][1])
    return tmp_bow

So to speed up my algo, since many such calls were necessary (to check where the bow and stern of your ship are after certain moves, for example), I precomputed a table of the 6 surrounding cells for each central cell location, as well as the distances from each cell to every other.
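Something like the following sketch; the even/odd row offsets are taken from my reading of the referee, so treat them as an assumption:

WIDTH, HEIGHT = 23, 21

# neighbour offsets per row parity, orientations 0..5 (E, NE, NW, W, SW, SE)
hexAdj = [
    [(1, 0), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1)],   # even rows
    [(1, 0), (1, -1), (0, -1), (-1, 0), (0, 1), (1, 1)],     # odd rows
]

# precomp_adj[x][y][o] = cell reached from (x, y) heading in orientation o
# (results can fall off the map, e.g. x = -1, hence the offsets mentioned below)
precomp_adj = [[[None] * 6 for _ in range(HEIGHT)] for _ in range(WIDTH)]
for x in range(WIDTH):
    for y in range(HEIGHT):
        for o in range(6):
            dx, dy = hexAdj[y % 2][o]
            precomp_adj[x][y][o] = (x + dx, y + dy)

# the lookup then replaces the arithmetic in cellAhead:
def cell_ahead(loc, orien):
    return precomp_adj[loc[0]][loc[1]][orien]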

These were the significant ‘optimizations’ I did to get my algo running smoothly under the time limit in Python >.< They probably won’t be necessary in another language like C++…As I’m writing this, I came across this document, which seems really useful for keeping track of algorithmic complexities in Python :slight_smile: - so I should’ve used a set instead of a list for my lookups, since containment is O(1) instead of O(n).

I’m very much too alive (Epic bugs)

  • Bfs that doesn’t even work (Silver -> Gold)

    • Instead of using if (v[cur_state]) continue; I carelessly did v[prev_state] instead…getting my bfs stuck on depth 1 all the time
  • if (fire.ttt-cur_depth < 2): (Gold -> Legend)

    • My simulation wrongly took cannonballs with a time to target of < 2 as ‘hit’ :confused: so dead cannonballs with a time of 0, -1, etc. would affect my pathing too
  • Indexing error (Legend -> Top 50)

    • In order to optimize my code, I precomputed a table of adjacent cells. But to account for stuff like precomp_adjCells[-1][0], I added some offsets to keep negative indices from breaking the lookup table, which I then forgot to add back when looking up the table in my code lol…

So lesson learnt here: fixing bugs > implementing new features. If you find yourself scratching your head over why you’re not getting that promotion despite implementing a ton of supposedly better new features…it’s time to take a closer look through your code, maybe even refactor it.

Thoughts on contest

Really loved the contest this time round, much more freedom than GitC (less mathematically deterministic) but sufficiently constrained (discrete rather than continuous space) so costly boilerplate physics simulation code was unnecessary. This left much more time to add the ‘I’ into your AI bot :slight_smile:

The Boss bots were perhaps a little too inaccessible for beginner players and as @marchete pointed out, many couldn’t get out of Wood…However, the Gold Boss was incredibly satisfying to finally beat haha and made Legend league that much more legendary :stuck_out_tongue:

It’s unfortunate that the kerfuffle with ‘code-sharing’ happened due to my negligence…I’m more sad about negatively impacting the contest experience than about losing my t-shirt hmph…I love the challenge of bot coding contests, so much so that any external incentive is a nice bonus to aim for but not my main purpose of participation. Ofc for the next contest you won’t be seeing my code on a public repo anymore :sweat_smile:

Thanks to the dev team once more for yet another fun contest, and to @marchete @mmmd @eulerscheZahl and many others who publicly supported me after finding out ppl cloned my bot :slight_smile: :+1:

EDIT: I’ve released a stripped-down version of my code for reference.

22 Likes

Somehow ended up rank 1… :slight_smile:

I just finished writing my post-mortem here. I may not have proofread it enough and probably wrote way too much, so my apologies. I just wanted to get it done ASAP so I can finally take a little break from pirates!

Let me know if you have any questions or comments, want more details (really?), or see things to improve.

Thanks CG again for this great contest!

30 Likes

Thanks for the detailed post-mortem!
You briefly mentioned that local testing didn’t help much; did you use it at all to tune evaluation parameters?

83rd

This challenge was a lot of fun; the hex-grid concept was quite original and added some complexity.
I’ll explain how I worked, from Wood 3 to Legend. I used Java during the whole contest.

Wood 3

In this league, we could only use the “MOVE” action. So clearly the goal was to pick up all the barrels as fast as possible. Already in this league, I built the map as a two-dimensional array of “Point”s during the first turn of the game. For each Point of the map, I determined all the neighbours, with the specificities of a hex grid (even or odd rows). Then, I used BFS to make my ship move toward the closest barrel. This was enough for the next league.

Wood 2

In this league came the commands “MINE” and “FIRE”. I chose to ignore mines at this point of the game. I added a cooldown for the FIRE command on the ship, and chose to do as follows:
if I can use the FIRE command and an enemy ship is close enough to fire on, then aim at the bow of the ship;
else, move to the closest barrel.
This was enough to get to Wood 1.

Wood 1

As I was already coding as if I had several ships, this league changed nothing for me. All of my ships had the same greedy behavior, and it worked well enough to reach the Bronze league. I still added a little rule for my FIRE command: if the targeted ship has a speed of 0, aim at the center of the ship. If the cannonball takes two turns to reach the target, you are sure to hit the center or stern of the ship; if you aim at the bow and the enemy ship rotates, you miss.

Bronze

Here came the commands “FASTER”, “SLOWER”, “PORT” and “STARBOARD”. I chose to ignore them for now, but I knew they would be key to getting to Gold or Legend, as they were the best way to avoid mines while looking for barrels. I improved my heuristics so that my ships would behave better. I split the decisions into two parts, and each part had two choices.
First part: Barrel part (true if there are barrels left on the map)
if I can fire on a ship, I try to do so as in previous leagues;
else, I go to the closest barrel.
Second part: No barrels left
if I can fire on a ship, I do as above;
else, if I have a ship with more rum than my enemy, I compute a “Safe Point”. A Safe Point is a cell of the map with no mine on it and no cannonball heading to it, and this must also hold for every neighbour of the cell (see the sketch below). Then, I use the MOVE command to go to the Safe Point;
if the enemy has more rum than me, I go toward him in order to get in firing range.
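The Safe Point test itself is simple; here is a hedged Python sketch of it (my bot was in Java, and the helper names here are hypothetical):

def is_safe_point(cell, mines, cannonball_targets, neighbours):
    # the cell must be clear of mines and incoming cannonballs,
    # and so must every one of its neighbours
    def clear(c):
        return c not in mines and c not in cannonball_targets
    return clear(cell) and all(clear(n) for n in neighbours(cell))

# toy usage with a stub neighbour function (not real hex adjacency):
mines = {(5, 5)}
balls = {(6, 6)}
neigh = lambda c: [(c[0] + 1, c[1]), (c[0] - 1, c[1])]
print(is_safe_point((2, 2), mines, balls, neigh))  # True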

I noticed in this league that it was useless to fire if the closest enemy was too far from me. As I was giving priority to the FIRE order, changing the FIRE distance allowed me to dodge more cannonballs and mines, because I fell more often into the “compute Safe Point” case. This led me to Silver.

Silver

In all the leagues above, I played with heuristics. This helped me discover the game and some basic strategies. To get to Gold, I chose to radically change my code in order to use the commands “FASTER”, “SLOWER”, “PORT” and “STARBOARD” instead of “MOVE”. In previous leagues, I gave priority to the “FIRE” command, and it worked well enough to get to Silver. But when you reach this league, the enemy ships start to move manually and to dodge every bit of damage they can. I decided to implement a greedy brute-force algorithm of depth 4, checking every possible movement among “WAIT”, “FASTER”, “SLOWER”, “PORT” and “STARBOARD”. Each ship simulated 625 cases, for a maximum of 1875 simulations with 3 ships. I could have simulated over 3000 cases, but there was no need for that, and depth 5 would have been too much. I hardcoded a two-dimensional array on the first turn of the game containing all the cases (sketched below). Then, each of my ships would test every case and keep the highest score in memory, in order to play the first move of the best case. If the first move was “WAIT”, I would check outside of the simulation whether I could replace it with a “FIRE”.
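The enumeration of those 5^4 = 625 cases can be built once on the first turn. A Python sketch of the idea (my bot was in Java, and simulate/evaluate stand in for the real simulation and evaluation):

from itertools import product

ACTIONS = ("WAIT", "FASTER", "SLOWER", "PORT", "STARBOARD")

# all 5^4 = 625 depth-4 move sequences, computed once
ALL_CASES = list(product(ACTIONS, repeat=4))

def best_first_move(state, simulate, evaluate):
    # simulate(state, case) -> final state; evaluate(state) -> score
    best_score, best_case = float("-inf"), None
    for case in ALL_CASES:
        score = evaluate(simulate(state, case))
        if score > best_score:
            best_score, best_case = score, case
    return best_case[0]   # play only the first move of the best sequence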
For the simulation, I copied the current state of the game. Then, for each simulated round, I moved every ship other than the one evaluating the case as if it were doing “WAIT”. I didn’t check for collisions for these moves. Then, I moved my ship according to the current case, checking for collisions with ships, barrels and mines. When my ship finished moving, I looked for cannonball explosions.
For my evaluation, I considered the following:

  • If the action is “FASTER” but the current speed is already 2, then stop testing the case, because it’s better to test the one with a “WAIT”.
  • If the action is “SLOWER” but the current speed is already 0, same as above.
  • If I spot a collision, I stop the evaluation too, because I consider it very dangerous.
  • If I hit a barrel, the score is increased by the health restored.
  • If I hit a mine or get hit by a cannonball, the score is decreased by the damage taken.
  • The score is increased by the speed at each turn of the simulation: if a ship has speed 2 during the whole simulation, it scores 8 bonus points. This encourages my ships to keep their mobility while still avoiding mines and cannonballs, because 25 > 8. Without this feature, as I wasn’t checking barrel distances, the ships would prefer to stay where they are until danger comes. With high mobility, the ships roam all over the map and pick up barrels when they come close enough to them.

This evaluation led me to Gold.

Gold

The thing that made me reach top Gold was implementing suicide. Whenever one of my ships with less than 25 rum was close enough to another of my ships, and far enough from the enemy, it would consider that going onto a mine scores 25 instead of -25. This way, the ship throws itself onto the mine and dies, leaving a barrel with more than 20 rum for my closest ship. I don’t suicide a ship if I’m in a winning position. But this wasn’t enough to get to Legend, so I decided to implement a way to put mines on the map. My AI was really good at avoiding mines, so I began with “if you can’t fire on a WAIT but your MINE cooldown is up, drop a mine”. This was more effective than I would have thought. But I improved this part a little later: if an enemy ship is behind me and has the possibility of moving onto the mine I’m going to drop, then I drop it. I gave priority to mines over firing, because there were fewer cases where dropping a mine this way was possible, and the cooldown was higher.
I also needed to improve my evaluation.

  • Still no use of “FASTER” or “SLOWER” if this is useless
  • Still avoiding collisions
  • Same score variation when hitting mines, barrels, cannonballs, but with the suicide implementation on mines
  • Same score variation with ship speed among the simulation
  • If there are some barrels left, the score is decreased according to the distance to the closest barrel. Take care with this one: you don’t want your ship to circle around a barrel, saying “That’s good, I’m close to a barrel”, without ever taking it because the next one is too far away.
  • If there are no barrels left and my ship with the highest rum has more than the enemy ships, the score is increased by the distance to the closest enemy (making my best ship flee battle when I’m in a winning position)

I tried a lot of parameter variations until I finally managed to get to Legend. I submitted quite a lot on Sunday afternoon because the boss was a little strong; I reached 2nd place in Gold after a lot of pushes, and achieved it in the end. :smiley:

3 Likes

Ended 31st with a Monte-Carlo algorithm.

First day:
I wrote some naive heuristics in Java in order to reach Bronze. This was very instructive for understanding the rules of the game. I found it very hard to imagine how my bots should work together; my AI looked horribly dumb, going through mines and cannonballs and firing into the middle of nowhere.

Next days
I rewrote a simulator in C, because I don’t like C++, or C++ doesn’t like me. Lots of debugging. I didn’t use a map of cells, but fixed-size lists of entities; this allowed me to revert the state of the world with a memcpy. I think I should have tried reverting only what had changed instead, but I didn’t have enough time and energy.

I met some very nice guys at the coding hub I organized on Wednesday evening; we had very constructive discussions about eval functions and ways to structure the data.
Slowly but surely, my AI became better and better, mainly through bug elimination and some minor improvements to the eval.

Friday
The Legend league opened, but I missed it at first by only a few ranks… I stayed in the top 5 of Gold for so many hours, it was very frustrating. I finally got pushed in by someone I don’t know.
Then I started to improve the performance of the simulator and the eval function, little step by little step, and tweaked some parameters of the eval. I tried to use CGSpunk in order to compare my AIs, but the results didn’t help: my win/loss rate was too close to 50% against the people around me to tell whether things were better or worse, so only submitting could do the job.

End of contest
Not a lot of time (thanks kids…). I wanted to try combining my MC with a minmax, but my ships became very careful; my eval wasn’t good for this. I ended up playing some simulations for the opponent (depending on the number of ships) against me WAITING, then me against his best move. My eval depends a little on the global situation, but the segregation of behaviors wasn’t very well done. I used the square of each ship’s rum; I found it better than the sum or the max. I fired only on the first turn of my simulation, at a random position around me; I found it better than firing at the simulated position of the enemy (he never moved the way I wanted). I tried to precompute all distances but found no performance improvement in my benchmark (comparing on the CG server in IDE play).

A disappointment
I discovered on Monday that during the rerun I lost a lot of matches because of timeouts, though I didn’t see any on Sunday evening. I will for sure take a bigger margin next time.
I also broke my simulator somewhere on Sunday afternoon, and I didn’t have time to re-debug it all, so I reverted some changes. I knew I should have written some tests and compiled my code locally!

+/-

  • It was very cool to play pirates
  • The referee is really very nice to have; it helps a lot to understand the mechanics of the game
  • Meeting other people at a coding hub made for a very good evening
  • The chat and the people there; I kept it open as much as I could
  • It was very frequent to have a win/loss ratio close to 50%/50% against a lot of other AIs
  • I really, really have to improve the way I write my code in C
  • I really hope we could have a visual indicator in the last battles for matches lost from a technical error; maybe I already had timeouts on Sunday evening and just didn’t see them.
6 Likes

Hi,

Great contest. I ended 245th, which is a big improvement over my preceding attempts. I spent something like 10 to 15 hours on this contest.

This was my second try with a simulation approach. I simply did a random search over 5 turns with an evaluation function. I wanted to reach Silver without firing, but was stuck in Bronze, so I decided to add the fire command, which took me to middle Silver.

I still had random timeouts, without any exception in the console. I ignored them for a while since they did not hurt enough to prevent my Bronze->Silver promotion. I decided to fix these timeouts on Saturday. I thought that, since there was no exception, I might be stuck in an infinite loop, so I added a crude watchdog to the two while loops (move, rotate). That was it. My simulation was buggier than I thought :slight_smile: To fix it, I wrote a simple fuzzer test (https://en.wikipedia.org/wiki/Fuzzing) that sent random commands to the boats and asserted that no collisions remained after the simulation step, that the boats were actually on the map, etc. This was enough to take me to Gold.
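For the curious, such a fuzzer really does fit in about 10 lines. A hedged Python sketch (my bot was in Scala), where simulate_turn and the state/ship accessors are hypothetical hooks into your own simulation:

import random

ACTIONS = ("WAIT", "FASTER", "SLOWER", "PORT", "STARBOARD")

def fuzz(simulate_turn, state, rounds=10000):
    # simulate_turn(state, commands) -> new state (your simulator)
    for _ in range(rounds):
        commands = [random.choice(ACTIONS) for _ in state.ships]
        state = simulate_turn(state, commands)
        cells = [c for ship in state.ships for c in ship.cells()]
        # invariants: no two ship cells overlap, every ship stays on the map
        assert len(cells) == len(set(cells)), "collision left after simulation step"
        assert all(0 <= x < 23 and 0 <= y < 21 for x, y in cells), "boat off the map"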

My last major improvement was to take the flying cannonballs into account. I simply added them to my simulation and the boats started avoiding them. It was like magic: simply say that if you’re hit by a cannonball you take damage, and the boat avoids it :slight_smile: I know for most of you this will seem normal, but I’m new to this AI/bot thing.

My bot is mostly focused on moving. I did not have time to make nice firing strategies; I could only reliably shoot motionless boats.

On the strategy side, my evaluation function said that if there is no barrel left and I’m losing, get close to the enemy to hopefully shoot him; if I’m winning, get away from the opponent. This gave some nice behavior: https://www.codingame.com/replay/212123063 In this replay, my boats turn around the opponent’s boat and move away once they have hit him and are in a winning position.

These are the rules I tried to apply during the contest:

  • refrain from adding features before you’re sure the existing code actually works. This is not so easy in such games, where you can watch the boats and think “OK, they seem to do what I think they should”.
  • write tests; the fuzzer test was key. I suggest it to everyone. It took 10 lines to write and found me several bugs.
  • stick to immutable data, the same approach as TrueLaurel. I used Scala and I’m happy I stuck to immutability as much as possible. E.g. it naturally gives you the sequence of world states when you simulate a sequence of moves; you don’t have to undo work when the move is bad. It’s a bit like copying the world at each step.
  • try not to get too excited by the competition, to stay focused. But it was hard :slight_smile: I got excited pretty quickly!!!

Thanks for a great contest!!

2 Likes

A few questions after reading your post-mortem for the first time. I have no doubt there will be more questions to come :slight_smile:

Can you confirm the depth 5 obtained in minimax? Is it depth 5 for each ship?

How do you explain the ability to effectively go more than 3 times deeper in the tree compared to brute force? I am only aware of an exponent of 0.75, in terms of number of nodes explored, gained on average with pruning.

You write that you used minimax “for all combinations of two ships”. Can you explain this sentence in a bit more detail?

You explained how you handled FIRE. You also explained that your AI became afraid of the opponent using FIRE against you, so I assume you handled FIRE in the tree search for the opponent. Same thing, any more details?

Regards,
pb4

1 Like

Yes, I still used it briefly on and off for one day (Friday, if I recall) to quickly test different sets of weights. I ended up keeping one bot that seemed to do well enough, and it indeed improved my real-world results too. I think at least 2 of these weights are still in the final code. But I needed real results during the last weekend, so I did not use it again. Maybe I should have. Such testing was critical in FB, and back then I did not have anything near the nice setup I have now.

Yes, 5 for each ship, but only in one-ship games. I preferred capping it at 5 to get faster results anyway, but maybe on rare occasions 6 could have been reachable; I am not sure how helpful that would be in practice. Two- and three-ship games were very often at 4, but sometimes only 3 for complex scenarios.

The only way to speed up tree search, other than faster state updates (which I also did between brute force and minimax, maybe making them 2-3x faster), is efficient tree pruning from all the sources I mentioned in the article, and you can still do a lot more if you keep digging through the chess wiki. I am not sure which pruning you are referring to for that number, nor on which games, but I don’t think you can describe the average so easily for different games, or even different scenarios of the same game. I could collect some stats from my bot later if you are interested.

I built a list of pairs, matching each allied ship with all the enemy ships, sorted by decreasing distance. I then ran minimax for each pair in succession, keeping the best moves found in between, for the other ships not involved in the search.

Exactly. I started by only evaluating firing at the sweet spots I talked about, but as the search got better I could handle many more firing locations, which ended up in this AI paranoia.

7 Likes

Hello everyone, great contest :slight_smile: I ended up 23rd with a genetic algorithm.

Metagame

Because of the huge number of rules in this game, I figured out very late what I think the “correct” metagame is.
Given that it’s actually quite hard to land a hit with a cannonball or a mine, many games end up with ships running out of rum over time, which means whoever has the ship with the most rum has a great advantage.
With that in mind, I made up 3 eval functions for my GA:

  • If there are still barrels around, pick up barrels
  • If there are no more barrels and I have the ship with the most rum, sail in circles and try not to get hit
  • If there are no more barrels and the other player has the ship with the most rum, focus all my energy on trying to hit this ship, and this ship only

Maybe the third case deserves a few more details:

  • All my ships had a reward for being in front of the bow of the target ship
  • I added an extra reward if the speed of the target ship was zero, which encouraged my ships to collide with it
  • I only allowed my ships to fire at the enemy ship with the most rum, and totally ignored the other ones

Simulation

I didn’t feel like implementing heuristic rules to avoid being hit by stuff in the game, so I re-implemented the referee in C. In the end I could run between 150k and 300k simulations of turns in ~40ms, which I believe is more than enough for this contest.

Some people wanted to have a look at the implementation, so here it is: link. (It compiles, and I believe for now it makes your ships shoot at themselves, but I don’t have the IDE to test it :confused: )

Genetic Algorithm

Nothing outstanding to say here, I had a depth of 5 turns, a population of 50, 50% mutation and 50% crossover.
The evaluation of the fitness was the sum of the fitness for the next 5 turns, with decreasing weights over time.

At each turn I used the best solution from the last turn as a starting point, gave myself 5ms to make my ships move, then 10ms for the enemy ships, then I dealt with cannonballs (see below), and the remaining time (somewhere around 20ms) to make my ships move and fire again.
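In outline, the per-turn budget looked something like this sketch (Python-flavoured for readability; the bot itself is C, and evolve/handle_cannonballs are hypothetical stand-ins for the real routines):

import time

def play_turn(state, previous_best, evolve, handle_cannonballs):
    start = time.time()
    within = lambda ms: (time.time() - start) * 1000.0 < ms

    # ~5 ms: evolve my ships' moves, seeded with last turn's best solution
    my_plan = evolve(state, seed=previous_best, mine=True, until=lambda: within(5))
    # ~10 ms more: evolve the enemy ships' moves against that plan
    enemy_plan = evolve(state, versus=my_plan, mine=False, until=lambda: within(15))
    handle_cannonballs(state, enemy_plan)
    # remaining ~20 ms: refine my ships' movement and firing
    return evolve(state, seed=my_plan, versus=enemy_plan, mine=True,
                  until=lambda: within(35))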

Firing

Ok, so, that’s the one idea I’m proud of in this contest. But, spoiler alert, it didn’t work at all. I think I ended up with the worst firing algorithm of the top 50.
Still, I’ll spend some time explaining it :smiley:

The starting point is simple: I wanted to quantify the gain of having a cannonball arrive at a given spot and turn, so that when I computed my fitness, I could take this gain into account when firing.
This way, one of my ships could change course in order to make a better shot and maximize its fitness.

The other thing I wanted to achieve was to not shoot to hit, but to shoot to minimize the fitness of the target. So a shot could force an enemy ship to slow down, get closer to a wall, etc. This way I could sort shots from a minor annoyance to a guaranteed hit.

In order to do that, I listed all couples (coordinates, delay) that had a chance of hitting the enemy ship with the most rum, and for each of these, I ran a GA with only 3 generations to try to find an escape route for the enemy ship.

The difference between the original fitness and the fitness assuming a cannonball was fired at a given coordinate+delay gave me the gain of firing at said coordinate+delay.
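In sketch form (Python here for readability, with escape_ga and fitness as hypothetical stand-ins for the 3-generation GA and the evaluation):

def firing_gain(state, target, coord, delay, base_fitness, escape_ga, fitness):
    # let the target search for its best escape from this hypothetical shot
    escaped = escape_ga(state, target, incoming=(coord, delay), generations=3)
    # how much worse off the target is, at best, if we fire there
    return base_fitness - fitness(escaped)

# the shot with the highest gain is the one worth folding into my own fitness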

The problem (I think) is that in only three generations, a GA often wrongly concludes that a cannonball cannot be dodged.

TODO list

Overall there are many areas in which my bot is lacking.

  • The prediction of what my opponent will do is atrocious (he can’t fire or mine)
  • I mine randomly
  • My shots are way off
  • I still get stuck in walls sometimes :disappointed:

General Feedback

Thanks again to all the CG staff for this really nice contest. As much as I enjoy reverse-engineering, I think providing the referee code is a very big plus.

Still, there are a few points I think could be improved in the following contests:

  • There was no place for low-efficiency languages here; I had to drop Python on day one in favor of C.
  • The rules were a bit overcomplicated: hex grid, collisions, three cells per ship, ‘clunky’ move function, etc. I feel like some of it could have been removed; for example, the fog of war was probably too much.
  • Really hard to follow cannonballs on the replay screen.
  • As marchete (brilliantly) pointed out, wood was really hard to pass.
  • Legend boss arguably a bit too tough

See ya all at the next one!

11 Likes

Thanks, I’m so glad to see such a post :slight_smile:

I have also been competing in Scala for a few years, and I’ll have a deep look at your GitHub project.

I also shared some Scala AI stuff that can relate to CodinGame, but the code is not really documented for the moment (except through the unit tests).

2 Likes

Hello Tyrcho, feel free to leave comments on the code.

This time, it took me a lot of time to handle the timeouts and to optimize the arena performance. I realized that operations with immutable Map and Set in Scala are time-consuming. I will continue the optimization work once the contest is released in practice mode.

Some articles I find interesting: www.lihaoyi.com/post/BenchmarkingScalaCollections.html and www.lihaoyi.com/post/StrategicScalaStylePrincipleofLeastPower.html

@reCurse thx for the awesome post-mortem. I learned lots of things. You mentioned a replay downloader in your post:

Replay downloader: A program to download the JSON files used to store replays on CG. The main use case is for producing a crude but practical suite of nearly exhaustive unit tests for the simulator code at a very low cost. Replaying thousands of matches and validating the expected result turns a simulator cannonproof very quickly, making optimizations and refactors very safe.

Could you please share more details about this tool? How do you download replays in JSON, and with which API? How do you make the tests?

2 Likes

It is completely undocumented and unsupported by CG, so you will need to reverse engineer all of it. :slight_smile: I think it would take longer to explain the details than figuring it out by yourself. Judging from a few who started doing just that in chat (presumably from reading my article), it seems straightforward enough. Just inspect the network traffic when using CG.

It took me 2 hours to code a replay downloader/analyser, but it’s my job to do this kind of thing. I don’t really want to publish it. It downloads all replays from someone’s last battles page. If everyone starts to use it, CodinGame will put a limit on it (like the submits with CGSpunk).

If you code in Java, all you need is Apache HttpComponents and something to read the JSON (I prefer Jackson, but you can use Gson or the standard Java JSONObject).

Assuming I know how to download replays in JSON :stuck_out_tongue: , how do you approach the testing part?

Thx

Thanks :slight_smile: I have read most of what LiHaoyi posted!

I’ll integrate some of your ideas in my next contests, like the Bundler and jmh tests.

A suggestion: keep common library code and code specific to each CG contest in separate repositories. Those should actually be in private repositories…otherwise someone could just copy/paste your solution and achieve top 100 in practice mode. I use GitLab for those.

Would you like to collaborate on a public repository of Scala libraries for CG / AI stuff? Maybe @csj would be interested too. Mine is a bit of a mess at the moment (https://github.com/tyrcho/scala-ai/); it would motivate me to organize it better :slight_smile: There are, for instance, some generic implementations of minimax/alphabeta/MCTS.

1 Like

24th, Java, referee-based

It took me some time to write the post-mortem, but here it is!
A great challenge as usual :slight_smile:

I reached the Bronze league with simple logic based on Manhattan distances, without any simulation.

I then started again from the referee code, and kept it for the whole competition. I did not have time to improve performance, but having the referee code was a huge time saver.
Regarding performance, with this code and my evaluation function I was able to run at most 600 simulations within the available 50 ms.

My bot first analyzes the situation to adjust some parameters (high-level AI, more details below). It then tries to find the best order by doing a brute-force search 3 turns ahead for each ship, considering that the other ships play their best move if already defined, and WAIT otherwise.
I start with the opponent’s ships and then compute this for my ships.
In the brute-force search (a MaxN tree with only one player) I consider mining and firing only on the first turn, in order to limit the branching factor (and the cost of computing fire targets) for the 2 following turns.
And that’s it :slight_smile: !

A few details:

High-level AI:

Depending on several conditions, I adapt a few parameters.
If there are more than two barrels left, I raise MAX_SHIP_HEALTH to 150 to trick the ships into believing they will get the entire rum of a barrel, so that a ship does not wait near a barrel just because it would be worth more rum in 3 turns if it waited.
If there are no barrels left, I adjust several parameters of the evaluation function in order to change my ships’ behavior. The main difference is, for example, the distance to the opponent’s ships, which I try to maximize if I am leading, and minimize otherwise. More details in the evaluation function part below.

Sailing distances vs hex distances:

When I brainstormed on paper, I noticed that depending on the orientation and speed of your ship, the hex distance and the minimum number of turns required to reach another cell are clearly different. Since my search doesn’t look many turns ahead, distinguishing between differently oriented ships can make the difference between a ship that turns and one that stays static because the barrel is behind it and, either way, it can’t get closer within 3 turns.
So during the first turn, I initialize 3 huge arrays of 50*50 integers, where I brute-force, over the first 5 moves of a ship placed in the center position, the minimum number of moves required to reach each cell. I then complete the entire map by assuming all ships will be at speed 2 starting from the borders of the initialized cells.
Each map represents the “sailing distance” for one speed (0, 1 or 2) with a fixed orientation (0).

Example (cropped): [images of the three precomputed sailing-distance maps, for speed 0, speed 1 and speed 2, all at orientation 0]

Notice how the cell right next to you when you have a speed of 2 is in fact already 5 turns away from you :smiley:

When I compute the sailing distance between 2 positions later on, I extract the ship’s speed and orientation, compute the initial vector it represents, perform a rotation of this vector to put the ship in orientation 0, and do a translation so that the ship is in the center cell. I then read the distance from the map.
Using the cube representation it’s quite easy to do, and with these maps precomputed during the first turn it costs almost nothing more than computing hex distances. Thanks CodinGame, by the way, for the article on hex grids; it was really helpful!! A sketch of the idea follows.
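A Python sketch of the lookup (my bot was in Java), assuming odd-r offset coordinates as in the article; the rotation formula is the standard cube-coordinate one, but the centering/indexing details are a best-effort reconstruction, not the exact code:

def offset_to_cube(col, row):
    x = col - (row - (row & 1)) // 2
    z = row
    return (x, -x - z, z)

def cube_to_offset(x, y, z):
    return (x + (z - (z & 1)) // 2, z)

def rotate_cw60(v):
    # rotate a cube vector by 60 degrees clockwise around the origin
    x, y, z = v
    return (-z, -x, -y)

def sailing_distance(ship_pos, ship_orient, ship_speed, target, dist_maps, center=(25, 25)):
    # dist_maps[speed][col][row]: the 50x50 maps for orientation 0, centered on `center`
    s = offset_to_cube(*ship_pos)
    t = offset_to_cube(*target)
    vec = tuple(b - a for a, b in zip(s, t))    # cube vector ship -> target
    for _ in range(ship_orient):                # rotate the vector into orientation 0
        vec = rotate_cw60(vec)                  # (direction depends on your convention)
    col, row = cube_to_offset(*(v + c for v, c in zip(vec, offset_to_cube(*center))))
    return dist_maps[ship_speed][col][row]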

Fire positions:

I first try to find barrels where an opponent ship is closer than me according to the sailing distance, and that I can fire on before the opponent ship can reach them (still based on sailing distances and the cannonball’s travel time).

If there is none, I keep an array of integers, initialized to 0, for each ship and for each cell of the map. During the brute-force search, I add the ship’s health to each cell where it is present, if that health is greater than or equal to its original health.
This builds a kind of presence probability for the next turns that takes into account the fact that the ships will try to avoid damage and favor capturing barrels.
I extract the cell with the maximum value, and if I can fire on it in less than 2 or 3 turns I keep it as a possible target.
I also consider shooting at mines, using the presence probability map to compute the sum of a mine’s neighbors divided by 3. I shoot at the cell with the greatest score, either a mine or a cell based on presence probabilities (see the sketch below).

This fire order is then added to the list of possible moves, and in my evaluation function I have a bonus when I fire, in order to favor this move.
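A sketch of the presence map and target choice (Python for brevity, though the bot was Java; the visits interface and neighbours helper are hypothetical):

WIDTH, HEIGHT = 23, 21

def build_presence(visits):
    # visits: (cell, health, original_health) tuples recorded while
    # brute-forcing one enemy ship's moves
    presence = [[0] * HEIGHT for _ in range(WIDTH)]
    for (x, y), health, original in visits:
        if health >= original:   # only count branches where the ship took no damage
            presence[x][y] += health
    return presence

def best_fire_target(presence, mines, neighbours):
    # neighbours(cell) must return only in-bounds cells
    best = max(((x, y) for x in range(WIDTH) for y in range(HEIGHT)),
               key=lambda c: presence[c[0]][c[1]])
    candidates = [(best, presence[best[0]][best[1]])]
    candidates += [(m, sum(presence[nx][ny] for nx, ny in neighbours(m)) / 3.0)
                   for m in mines]
    return max(candidates, key=lambda c: c[1])[0]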

Evaluation function:

My evaluation function takes into account several parameters:

  • Difference between my total health and my opponent’s. This one has by far the biggest coefficient.
  • Sailing distance to nearest barrel
  • Number of barrels whose sailing distance to one of my ships is less than or equal to the distance to the opponent’s ships
  • Distance to the nearest allied ship (sqrt(abs(distance-4))) so that it tries to avoid being too close
  • Distance to nearest enemy ship
  • Sailing distance to nearest mine
  • Speed
  • If the ship fired on first turn
  • If the ship is on the border of the map

Depending on the situation, some parameters change; for example, if there are no more barrels, I either try to flee from the opponent’s ships or try to get closer, depending on whether I am leading or not.

22 Likes

Finished quite low, at 808th.
Here is my post-mortem.

1 Like

The video is no longer available.
Did you download it…? :confused:
plzzz say “yes” :wink: