Coders of the Caribbean - Feedback & Strategies

I was 107th overall, 4th in Python3/Python.

This is mostly written as advice to other Python programmers (obviously not the ones who beat me—I’d be really interested to know how other Python bots handled or avoided the heavy computations.)

Genetic Algorithm

I also implemented a genetic algorithm. Before you dismiss that as too hard, the idea is quite simple (see also Magus’s Coders Strike Back Genetic Algorithm Tutorial):

  1. Make, say, 5 lists of 5 moves each (from WAIT, FASTER, SLOWER, PORT, STARBOARD) representing the next 5 moves:
  • One or two manually computed strategies (for example, code a quick and dirty navigation to the nearest barrel, ignoring mines and cannonballs)
  • (After the first turn) The best list of moves from your previous GA results (with the first move chopped off)
  • The remaining lists are random
  2. Simulate those move lists for 5 turns
  • This wasn’t easy, but any good strategy involves simulation.
  • See my notes below on using NumPy to speed up this part
  3. Score the runs using an evaluation function
  • Start with a simple evaluation function, like the net change in rum over the simulation, and then get more nuanced as your strategy improves.
  4. Choose the best (fittest) of your 5 trials and randomly select modifications to those to simulate.
  5. Repeat 2–5 until you run out of time (give yourself a generous buffer, like 10 or 15 ms). A minimal skeleton of this loop is sketched below.
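To make the loop concrete, here is a minimal Python sketch of steps 1–5. Everything in it is illustrative: simulate is a stand-in for your real 5-turn simulation plus evaluation (here it just returns a random score so the skeleton runs as-is), and the population sizes and time budget are placeholders.

import random
import time

MOVES = ["WAIT", "FASTER", "SLOWER", "PORT", "STARBOARD"]
POP_SIZE, DEPTH = 5, 5

def simulate(state, plan):
    # Stand-in for the real simulation + evaluation function.
    return random.random()

def mutate(plan):
    # Copy a parent plan and randomly replace one of its moves.
    child = plan[:]
    child[random.randrange(DEPTH)] = random.choice(MOVES)
    return child

def best_plan(state, budget=0.035):
    deadline = time.time() + budget
    # Step 1: initial population (seed with heuristic plans and the
    # shifted best plan from last turn here, if you have them).
    plans = [[random.choice(MOVES) for _ in range(DEPTH)]
             for _ in range(POP_SIZE)]
    # Steps 2-3: simulate and score each plan.
    scored = [(simulate(state, p), p) for p in plans]
    while time.time() < deadline:             # step 5: repeat until timeout
        scored.sort(key=lambda sp: sp[0], reverse=True)
        parents = scored[:2]                  # step 4: keep the fittest
        children = [mutate(p) for _, p in parents for _ in range(2)]
        scored = parents + [(simulate(state, c), c) for c in children]
    return max(scored, key=lambda sp: sp[0])[1]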

If my GA returned WAIT (or SLOWER/FASTER when my speed was 0/2), then I fired a cannonball or laid a mine instead. I found this worked really well at avoiding cannonball and mine damage and at collecting barrels. This basic approach got me to Gold. I spent the remaining time playing around with my evaluation function and general strategy. Unfortunately, I couldn’t get to Legend this time.

NumPy (made my Python simulation much faster)

I knew from my Coders Strike Back genetic algorithm that pure Python was going to be slow. So instead, I used NumPy to do the simulation part inside Python. The idea is that NumPy does parallel vector calculations fast. For example, I had NumPy arrays ship_q, ship_r, d_ship_q, d_ship_r, and speed, each of dimensions

(number of trials in a batch) x (number of ships on the playing field)

Here _q and _r refer to the axial coordinate system. My simulation included code such as
ship_q[speed > 0] += d_ship_q[speed > 0]
ship_r[speed > 0] += d_ship_r[speed > 0]

I was usually able to simulate 10-15 batches per turn (where a batch was 5 trials of 5 turns each, simulating every ship on the board simultaneously, including all explosions, collisions, etc.). I was surprised at how readable and natural the NumPy code was (at least before some optimizations). It wasn’t all fun and games (pun intended?), however. It took some work to handle collisions, since I had to compare all ships to all other ships.
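For the all-ships-against-all-ships comparison, broadcasting does the heavy lifting. A minimal sketch of the idea (the batch shape is illustrative, and this checks centre cells only; the real test also has to cover each ship’s bow and stern):

import numpy as np

trials, ships = 5, 6                      # illustrative batch shape
ship_q = np.random.randint(0, 23, (trials, ships))
ship_r = np.random.randint(0, 21, (trials, ships))

# Broadcast (trials, ships, 1) against (trials, 1, ships) to compare
# every ship with every other ship in each trial at once.
same_cell = ((ship_q[:, :, None] == ship_q[:, None, :]) &
             (ship_r[:, :, None] == ship_r[:, None, :]))
same_cell &= ~np.eye(ships, dtype=bool)   # ships don't collide with themselves
collided = same_cell.any(axis=2)          # (trials, ships) boolean mask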

Import precomputed structures with pickle > zlib > base64:

I don’t know if this ultimately helped me, but I precomputed a table of the fastest routes to get to any point starting at a given speed and orientation. Computing the table took a long time (especially when coded inefficiently :slight_smile: ), so I computed it on my laptop and then used these three functions to convert the table to plain text which could be pasted into my CodinGame code:

  • pickle.dumps - Encode any Python structure into a bytes object
  • zlib.compress - Compress bytes object to a smaller bytes object
  • base64.b64encode - Convert a bytes object to plain text (this is 2-3x shorter than the string representation of the bytes object)

Once pasted in my .py file, I decoded with

pickle.loads(zlib.decompress(base64.b64decode(string_repr_of_object)))

Of course, to avoid running up against CodinGame’s character limit, I found it helpful not to encode the table itself, but a more compact representation of it, for example a list of key-value pairs (or better yet a NumPy array) which can then be converted back into a table. (Actually, I did a number of other things to shrink this table. I took symmetries into account and only stored next moves.)
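Putting the whole round trip together (a minimal sketch; the dictionary is stand-in data for whatever compact representation you settle on):

import base64, pickle, zlib

# On your laptop: encode the precomputed structure as pasteable text.
table = {(0, 0, 2): "FASTER", (5, 3, 1): "PORT"}   # stand-in data
encoded = base64.b64encode(zlib.compress(pickle.dumps(table)))
print(encoded)   # paste this literal into your CodinGame .py file

# In the bot: decode the pasted literal back into the structure.
restored = pickle.loads(zlib.decompress(base64.b64decode(encoded)))
assert restored == table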

Summary

This was a hard puzzle, and I was surprised how much strategy mattered. Collecting more barrels faster doesn’t matter unless you also get the last barrel, or can trap and destroy the opponent, or sacrifice your own ships, etc. It was further complicated by the fact that a dead ship produces a barrel. I think that, due to the heavy computations (to avoid cannonballs if nothing else), one would have trouble getting first place without using C/C++. However, Python + NumPy did respectably, and I feel my not getting to Legend was due more to a lack of good end-game strategy than to a lack of speed.

17 Likes

Feedback about the challenge

Pros:
-The referee code was super useful. This is a must-have from now on. There are so many little things that the only way to get them right is by having the code.
-Fast matches; 1v1 symmetric games are the fairest.
-50ms is good for fast matches…
-2 full weeks are nice for many people.

Cons:

-Wood leagues held more than 50% of players.
I think this is frustrating for those players. The real challenge should start at Bronze, not Wood. I know there was the Twitch video, but it was not heavily promoted. So, to avoid player frustration, reaching Bronze should be easier.
-Crazy hard Legend boss: only 13 players passed in the first batch, so it was effectively a “Legend 15th” bot. This means a lot of good players couldn’t get past Gold when they deserved to. This was another BIG point of frustration; for me it’s a no-no.
-The contest ended on Monday at around 3-5am in America, which is a bad time.
-Bad timeouts on Sunday? Suddenly my bot started losing a lot due to timeouts, and I changed nothing.
EDIT: -Being 335th/3624 in this challenge gives more points than being 1st/1669 in a low-played challenge like Codebusters. Now tell me that being 1st on CB is easier than 335th on Pirates. This is a BIG problem for correct rankings. Scoring isn’t fair for low-played challenges.

About my bot
Final rank 40th legend.
I worked on this bot from Monday to Sunday, about 60 hours of work and 2200+ lines of code (I’m a bad, slow coder).
Most of the time went into polishing the simulator. Even with the referee code, converting it to another language isn’t trivial. Java manages Lists of objects as references, whereas a C++ vector makes a hard copy. That made me waste a day searching for why collisions got into an endless loop. Another issue was the difference in double values between the two languages: the angle() function gives different results in C++, which leads to a different moveTo() outcome.
I precalculated a lot of things on the first turn, mainly distance() and angle(), which are the most expensive. Both were 2D arrays, like dist[pointA][pointB]. I converted coords to an index with (((y+2)<<5)+x); the +2 was to avoid going out of bounds on the array due to negative positions from bow() or stern(). The array is bigger than the map because the <<5 index wastes space. I also stored neighbours in an array, neigh[point][orientation].
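The indexing trick, sketched in Python for illustration (the bounds are assumptions; the point is that the index is one shift and one add):

def point_index(x, y):
    # +2 absorbs the negative y that bow()/stern() can produce off-map;
    # << 5 reserves 32 slots per row for x, wasting some space but
    # keeping the computation trivial.
    return ((y + 2) << 5) + x

SIZE = point_index(22, 22) + 1              # generous upper bound
dist = [[0] * SIZE for _ in range(SIZE)]    # dist[pointA][pointB] table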

Once I had a simulator I started testing it with my own bot as the enemy. Since I know both sets of moves, I can apply them to both simulations, and the resulting GameState update must match the next turn in the game. I had a problem with that due to mines, because they aren’t visible at more than dist = 5 (another day wasted).
All this took me 40+ hours unfortunately, so I needed to rush the AI.

My AI is an almost pure Genetic Algorithm (GA) for the move actions, WAIT and MINE, with this genome:

class Genoma {
    // Only myShipCount is used for the GA, not always MAX_SHIPS. MAX_DEPTH was 5.
    Action action[MAX_SHIPS][MAX_DEPTH];
};

FIRE is issued manually when one of my ships has WAIT as its action. Monte Carlo also worked fine, but on my final submit it was a GA.
WAIT is like free time to do some shooting, so I manually wrote some basic targeting heuristics to shoot at the future position of the enemy. I first tried to replicate the moveTo actions for enemies, guessing they’d target the nearest rum barrel, but it didn’t work very well against speed=2 ships, so in that case I try to shoot at a barrel near the enemy and far from my ships. A late addition was the manual sacrifice of ships: ships drop rum when killed (up to 30), so I checked whether I was losing (less health than the enemy); in that case the ship with the least rum shoots itself, and I mark its id as sacrificed. This way I don’t do double shooting or anything.

About the fitness function: I have two stages, with or without barrels. With barrels I prioritize speed, health and rum captures, and penalize distance to the nearest rum barrel. That makes ships go naturally to the nearest barrel. I also give a bonus to my ship with the most rum. For mine placement I didn’t have time to do something good, so I just apply a big penalty if the ship has mineCooldown; that makes ships only place a mine if they think it will hit an enemy. For enemy ships I penalize their health.
Once there are no barrels I keep all the above but add more scores. I go into a cat & mouse game: if I’m winning OR the total health of my ships is greater than the enemy’s total health, I run away. I score speed and the distance between enemy ships and my ships positively, and penalize distance between my own ships. That makes my ships try to stay near each other, but away from the enemy. Then I have the right conditions to sacrifice the ship with the least life and buff the biggest one.
If I’m losing, then I penalize distance between enemy ships and mine, just that.
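A condensed Python sketch of that two-stage evaluation; all names and weights are illustrative, not the actual values:

from collections import namedtuple

Ship = namedtuple("Ship", "q r speed rum mine_cooldown")

def dist(a, b):
    return abs(a.q - b.q) + abs(a.r - b.r)   # stand-in for hex distance

def fitness(mine, enemy, barrels, winning):
    # barrels: any objects with q/r fields.
    score = sum(2 * s.speed + 3 * s.rum for s in mine)
    score -= sum(3 * s.rum for s in enemy)             # penalize enemy health
    score -= sum(50 for s in mine if s.mine_cooldown)  # no pointless mines
    if barrels:                                # stage 1: collect barrels
        score -= sum(min(dist(s, b) for b in barrels) for s in mine)
        score += max(s.rum for s in mine)      # bonus for the richest ship
    else:                                      # stage 2: cat & mouse
        spread = sum(dist(m, e) for m in mine for e in enemy)
        huddle = sum(dist(a, b) for a in mine for b in mine)
        score += (spread - huddle) if winning else -spread
    return score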
I have many flaws, especially with FIRE: I target so badly that I even hit myself, and I don’t focus fire on a single enemy either.
I don’t trap ships, my mines are bad, etc. etc.

But I’m happy with the results; the GA made some nice moves. My AI doesn’t have real pathfinding because the map is simple, so navigation is handled by the GA. And that means ships can run into a mine, or take hits, if the AI thinks the result is a better move.
If you watch from frame 142 in this replay https://www.codingame.com/replay/212958220 you’ll see how one of my ships just goes for a mine to die. That’s not intended or hardcoded by me; the GA thinks it’s better to “sacrifice” a ship and buff the biggest one. The other ship did the same at turn 156, and this was enough to win the game.
My ships also got hit on purpose to die (again without a manual sacrifice), as in this replay https://www.codingame.com/replay/212958005 at turns 172/174.
A GA (or even Monte Carlo) can do nice stuff: stupid moves that end well. This is hard to achieve with pure heuristics, because it’s not easy to code. But you need an almost perfect simulator, or things can go terribly wrong.

I don’t simulate offline at all, nor do I fine-tune coefficients. Everything was done in Notepad++, testing only in the CG IDE and sometimes CG Spunk.

I recommend new players learn about Monte Carlo and Genetic Algorithms; they are really interesting search algorithms. CodinGame is about to release a new learning platform, and I’m pretty sure there will be a lot of nice courses on these subjects. E.g. a quick hands-on for the Monte Carlo method that I wrote: https://tech.io/playgrounds/4003e618997c561e705af1ade9cfca4b432/monte-carlo-method-in-7-minutes (beta version, may change in the future). Stay tuned :slight_smile:

15 Likes

60th in legend.

Before the contest started, my ambition was to make it to Legend in the laziest way possible. Early in the contest I decided to use heuristics + some BFS instead of going through the hassle of making a simulator. In hindsight that was wrong, and I should have gone down the simulator path.

Heuristics easily got me to Gold, but it seemed almost impossible to get past the Gold boss. A major problem with heuristics is that you easily end up with tons of if/else statements that in the end become a monster to tune. Only when I systematically went through all the different weights and cases did I get something that made it to Legend, late on Sunday night.

Basic algorithm when there are still barrels left:

  • calculate distances (in number of moves) from barrels to all ships (also enemies) using a BFS that avoids mines (sketched after this list)
  • assign each ship a barrel using a matching algorithm
  • for each ship: try each action (left, right, wait, slower, faster) and check collisions with mines/cannonballs/other ships (done individually, which sometimes results in a completely different outcome than in the engine); also, from there, look ahead 1 move to check for unavoidable collisions with mines and cannonballs.
  • wait to take a barrel if we are closest and the resulting ship rum would be >100. I only did this when there were not many barrels left; I had no time to experiment.
  • fire at barrels that are closest to the enemy, or at the enemy itself
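A minimal sketch of that mine-avoiding BFS (the map size and parity-dependent neighbour offsets are assumptions; they must match the game’s odd-row offset convention):

from collections import deque

NEIGH = {0: [(1, 0), (-1, 0), (0, -1), (-1, -1), (0, 1), (-1, 1)],
         1: [(1, 0), (-1, 0), (1, -1), (0, -1), (1, 1), (0, 1)]}
W, H = 23, 21

def bfs_distances(barrel, mines):
    # Distance in moves from one barrel to every reachable cell,
    # never stepping onto a mine. barrel: (x, y); mines: set of (x, y).
    dist = {barrel: 0}
    queue = deque([barrel])
    while queue:
        x, y = queue.popleft()
        for dx, dy in NEIGH[y % 2]:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < W and 0 <= nxt[1] < H
                    and nxt not in mines and nxt not in dist):
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return dist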

This was a really interesting game with surprising depth. As always the graphics looked great, although the ships looked more like fish to me :). I had a lot of fun, thanks a lot to the organizers!

4 Likes

103rd, 1st in Scala

Thx CG, I had a great time working on this game. I want to also thx my wife for her support.
I worked more than 30 hours on this game.

This is the second time I developed an AI with the CodinGame-Scala-Kit and this time I integrated JMH for micro benchmarking.

Functional Reactive Bot Programming

Writing in a functional way was the hard constraint I placed on my coding style. Both the local arena and the player logic are developed with immutable data structures and function composition.

Performance

To optimize the simulation performance, I used JMH for micro-benchmarking (example here) and VisualVM for profiling. Every time I managed to reduce the simulation latency, I climbed in the ranking without modifying any game logic. That’s magic! It seems the entire job is to optimize!

Meta-Heuristic

What?! You are talking about Scala performance before those C++ guys do. Fortunately, the game is not all about performance. For those who want to learn about meta-heuristics, I highly recommend the book Essentials of Metaheuristics. I used an evolutionary algorithm from this book, MuPlusLambda.
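The (mu + lambda) scheme itself is tiny. A generic sketch (in Python for brevity; the contest version would evolve move sequences and use the simulator as the fitness function):

import random

def mu_plus_lambda(init, mutate, fitness, mu=3, lam=9, iters=200):
    # Keep the mu best individuals, breed lam children from them, and
    # select the next generation from parents and children combined.
    population = [init() for _ in range(mu + lam)]
    for _ in range(iters):
        population.sort(key=fitness, reverse=True)
        parents = population[:mu]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(lam)]
    return max(population, key=fitness)

# Toy usage: maximize f(x) = -(x - 3)^2.
best = mu_plus_lambda(lambda: random.uniform(-10, 10),
                      lambda x: x + random.gauss(0, 0.5),
                      lambda x: -(x - 3) ** 2)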

For the quality function, I want to give credit to @Manwe, who shared a hint about the ship-to-barrel distance function during Murex’s CoderHub. Instead of taking into account only the physical distance, he advised including orientation and speed factors as well. This was really helpful. I also met @Agade in the Hub, and I appreciated his great tips a lot. Join a Hub if you can next time. In Paris, you can find us at Murex.

Streaming

During the contest, I had the chance to be hosted by CodinGame for one hour of streaming (video here). I would do it again and make it better. If you want to discuss Functional Reactive Bot Programming in Scala, please join me next time.

Thanks for reading.
TrueLaurel

10 Likes

Nice game, I was really drawn to it. But a lack of solid knowledge and experience kept me far away from the first hundred :).
One thing I’d like to mention: IMHO it’s not right to provide the source code.
Yes, that way someone can notice bugs and they will be fixed.
But at the same time I think the game is more challenging if you treat it as a black box and have to find out the subtle details/circumstances yourself by experimenting.
Just my thoughts, keep up the great work!

2 Likes

Ended up on the 25th place.

Naive bot

Since I knew I would not have much time for this contest, I decided to kick-start the competition with a Python bot to get early into the league with the complete rule set.

I implemented the following naive heuristic AI:

  1. if there’s an enemy ship within 5 cells of me, extrapolate its trajectory and shoot there
  2. if there’s no enemy around, move to the closest barrel
  3. else pursue the closest enemy ship

I did not even bother working in hex coordinates; I used only Cartesian.
Cannon cooldown was handled by firing only on odd turns.
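Put together, the whole policy fits in a few lines. A sketch with stand-in helpers (positions are plain (x, y) tuples; dist and the enemy extrapolation are assumptions):

def dist(a, b):
    # Plain Cartesian stand-in, matching the no-hex shortcut above.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def act(my_pos, enemies, barrels, turn):
    # enemies: list of (position, extrapolated_next_position) pairs.
    pos, ahead = min(enemies, key=lambda e: dist(my_pos, e[0]))
    if dist(my_pos, pos) <= 5 and turn % 2 == 1:  # odd turn: cannon ready
        return "FIRE {} {}".format(*ahead)        # shoot where it will be
    if barrels:
        nearest = min(barrels, key=lambda b: dist(my_pos, b))
        return "MOVE {} {}".format(*nearest)
    return "MOVE {} {}".format(*pos)              # else pursue the enemy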

That was enough to get me to the Bronze League.

Simulator

Then I started writing the game simulator, and having the referee source code was super helpful (though one had to watch for bugfixes there).

Porting the referee from Java to C++ was a bit painful and I second @anon41962965 that the object design could have been better (especially the collision part which is overly complicated).

Scoring function

My scoring function turned out very simple yet super efficient; I came up with it very early, and most of the subsequent improvements to it were rather non-improving :frowning:

So I was maximizing the difference between my score and the enemy’s, which was comprised of:

  • big bonus for every ship alive
  • small bonus for total health across all friendly ships
  • a bonus for total speed
  • a penalty for bigger distance to the closest barrel
  • a penalty for relative angle to the closest barrel
  • big penalty for being stuck at the border of the map

Lastly, when all the barrels were collected, like most people here I penalized staying close to an enemy when I was winning, and vice versa.

Search

Initially I wrote a DFS with a depth of 2 moves (for the worst case of a 3v3 battle); that had me competing on par with the middle of Bronze. Then I started predicting enemy moves by applying my naive AI during the simulation, and implemented beam search, which propelled me into the top 50, and I never left it after that.
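For reference, the skeleton of such a beam search (a sketch: step and score stand in for the simulator and the scoring function above):

import heapq

ACTIONS = ["WAIT", "FASTER", "SLOWER", "PORT", "STARBOARD"]

def beam_search(start, step, score, depth=4, width=30):
    # step(state, action) -> next state; score(state) -> float.
    beam = [(score(start), start, None)]   # (score, state, first action)
    for _ in range(depth):
        candidates = []
        for _, state, first in beam:
            for action in ACTIONS:
                nxt = step(state, action)
                candidates.append((score(nxt), nxt, first or action))
        # Keep only the `width` best states at each depth.
        beam = heapq.nlargest(width, candidates, key=lambda c: c[0])
    return max(beam, key=lambda c: c[0])[2]   # best first action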

Smart things

…did not work well for me. I tried various advanced tactics, but at best they kept me at the same ranking:

  • forcing close combat when ships got stuck
  • firing at mines that obstruct navigation
  • firing at the last barrel when it’s reachable by the enemy
  • sacrificing own ship to gain extra health

Probably I did something very wrong.

Performance

Even though I was writing full-fledged OOP with polymorphic types, dynamic memory and the standard library, performance was not an issue. With the use of optimizer #pragmas I was able to simulate about 14k moves per turn, which in my opinion was more than enough for such a dynamic game.

Tools

I relied heavily on my unit tests (30 in total, covering mostly the scoring function and the search), which helped a lot to avoid regressions.

I used git flow to experiment with improvements on dedicated branches and discard them freely if they did not bring the desired result.

CG Spunk was my friend for estimating a new bot against real players before doing a time-consuming submission.

Finally, I used a small self-crafted Python script to compose all the source & header files into a single submission unit.

12 Likes

Small feedback before a more appropriate post-mortem:

I disagree with this comment by @anon41962965. The referee was super useful for understanding the intricacies of movement, and easily readable.

Sure, the data structures are not appropriate for high-performance simulations, but:

  • The referee code has absolutely no performance requirements; it’s only run once per turn
  • In the case of this contest, fast data structures would make for less readable code
  • This leaves some margin of improvement for the players who wish to implement higher-performance code.

I only have positive feedback with regards to how the rules & referee were handled during this contest. Thank you :+1:

8 Likes

Okay, let me explain my point a little more.

Of course it was useful, and it was nice to have it. And it is not 100% evil; it just could be better.

Reading it was not so easy, and neither was porting it. The rules explained how the turn is processed, step by step. But when I read the code, I saw that some steps perform calculations for later steps. For example, “moveShips” stores some information for the rotation step. When collisions occur, the ship’s move is reverted, and this “rotate” information probably has to change as well. So it looks as if the rotation step hasn’t been performed yet, but some of its logic is already involved. And that is a lovely place for bugs.
This got me thinking that the right approach was not to try to understand how it works, but just to port it “as is” and to treat any wrong behavior of this code as a feature, not a bug. The latest thing I discovered: I can’t place a mine directly onto another ship. What is the explanation for this? I didn’t check it in detail, but I fear that such a mine placement is ignored by the referee while the mine cooldown is still triggered.

The question concerns people who use the heuristic way to solve the problem. Changes in the referee can have a big impact on such strategies, and following referee changes is much harder in that case.

The same goes for the (to my mind) strange damage calculations. I don’t get them, really. I just took a quick look and decided to implement only the easiest case and the easiest calculations. The rules say “damage is applied”… but in the code I saw that this damage logic is spread across a lot of places (in particular, I don’t get the trick with zero damage: why it is needed, and why it is implemented in such a complex way).

Anyway, as I said, it is good enough, and of course nice to have. It is definitely a good improvement for the contest.

1 Like

I’m pretty sure all the 0 damage code was there to show explosions in the replay viewer.

1 Like

As a reminder, before GitC, the referee code was not public. Everything was the same (bugs in the referee, bugfixes later during the contest, hidden rules/behavior…).

Now we have the referee code. Everything is the same, but we can check it ourselves.

We don’t ask CodinGame to make a perfectly readable referee, and we don’t want that. Originally, the requested feature was to have the referee code published without any additional work from CodinGame. If they want to spend time making the referee readable, I won’t complain, but it is not what we asked for.

And yes, the 0 damage was there for the viewer data. You could simply have asked me on the chat; I could have answered you on that :smiley:

7 Likes

@Magus is indeed very helpful on the chat :thumbsup:

Ended 35th with a Genetic Algorithm, implementing the same kinds of things as previously mentioned (GA for the opponent, then for myself; evaluate health, distance, number of ships, …).
I was able to do around 100k simulations/turn, but that wasn’t all that useful. I guess finding the best action against only one possible move of your opponent isn’t that good.

A little thing I haven’t seen mentioned yet: my FIRE gene was neither “FIRE X Y” nor “FIRE TARGET”, but just “FIRE”. The program had to find the best possible target by itself (closest, …). A bit of heuristics in the GA that I think leads to a smaller and less random search space. I don’t know if it is a good idea; I just saw that I missed as often as the others :slight_smile:

+/- :

  • The game was really nice: perfect difficulty and fun. I especially liked the hexagonal cells, which were very refreshing, and the ships spanning 3 cells, which leads to a nice boat-like steering behaviour.
  • Having the referee is nice: it helps in understanding the game, and players who want to simulate but don’t have enough time can use it.
  • It was difficult to make a difference: a lot of AIs were able to dodge most of the cannonballs, and often the games were just ships running around, avoiding mines and waiting for the end. There doesn’t seem to be any difference between ranks 30 & 100. Maybe this could have been avoided with more powerful weapons, like faster cannonballs or area damage?
  • I found the graphics not as beautiful as in previous games, and they lacked clarity: for example, it was hard to tell the difference between mines and barrels. And the score panel took up a big part of the width. Maybe the game window could be bigger too?

Great contest, thanks CG.

4 Likes

Here is my postmortem.

Please point out mistakes and unexplained things to me in the chat.

A small point of feedback: it’s unfortunate that the referee had so many bugs. Even after all the bug fixes, the collisions between boats were a bit strange. If a boat runs into the back of another, it gets stopped and not the other; this is fine, but then why does the other boat get stopped if I go forward into its bow, or if I rotate and hit it in the back?

17 Likes

Hey there, I finished quite far away (665th, which is disappointing since I was top 50 in the previous contest).
I started a bit late, 6 days before the end of the contest.

First steps
I started with a crappy, basic heuristic AI using the MOVE command, to unlock manual navigation (Bronze league).
Then I updated it to use manual navigation (I used the code from the MOVE command in the referee, with a few modifications), which got me quickly into Silver (just dodge cannonballs and shoot the closest enemy, assuming it keeps going straight).

Let’s rewrite everything
On Friday, I realized I would not be able to build a good, evolvable AI by adding more if/else. I decided to start from scratch, trying a minimax in order to learn something and maybe end up with a good AI. Long story short: it still had bugs in it on Sunday (I was still hitting mines, damn it), and the new AI performed so poorly that I decided not to even submit it.

New beginners!
I introduced this contest to my younger brother (who studied programming briefly) and a designer friend (who can hack at code and add some if/else here and there, but doesn’t understand the concept of objects).
Both managed to reach Bronze (without my help). I mention this because I’ve seen some complaints about too many people not getting out of the Wood divisions. I’m really proud of my crew :smiley:

Bottom line
I feel this contest was quite hard compared to the previous one (which is not necessarily a bad thing), especially for non-specialists.
I feel you had to go and look into the referee code in order to achieve anything.
As for my brother and friend (n00bs), they didn’t even try to use manual commands (even though I advised them to, they felt it was “too complicated”), so they stuck with the MOVE command until the end. I feel they might have performed better in the previous contest.

We’re going to participate in the next contest. I’m probably going to think earlier about how my code will work (and not start to simulate 3 days before the end of the contest). And if a simulation is needed, I might switch from C# to C++, which would be a very good learning experience.

Last thing: I really liked the talks in the chat (even if I wasn’t participating); I enjoyed seeing Icebox fighting to reach Legend with his spaghetti code, picking up some basic strategy ideas, and the fun/interesting replays.

Thank you CG for this contest, and congrats to everyone for participating, no matter what ranking you got :slight_smile:

4 Likes

I tried to use the referee code to write a simple Monte Carlo simulation in Dart, but ran into horrible performance issues. I could only get 30-50 evaluations per turn. I think all of the additional object creation and list management really hurt my code. I attempted to convert the referee to more of the simulation style used by Magus in his CSB breakdown: re-using objects and “alive” flags instead of deleting objects and creating new ones. Unfortunately, I ran out of time before I was able to make any significant improvements. Dart is great for quickly writing and debugging code, but its garbage collection is a bit painful for these types of contests.

I loved the game itself, and thought it was really interesting.

Thanks for your post-mortem!
But do you agree that having the referee (even with weird bugs) is a must-have?
I remember trying to simulate other games like Poker Chip Race, and without any reference it is a hard, disgusting, complicated task.

Of course, it really helps, and as Magus said, we always had bugs, except now we can see them and get them fixed by CG more easily. Previous games like CSB had rules, such as its collisions, that weren’t properly explained and had to be reverse-engineered. The referee solves these silly issues.

Having a referee available is definitely a net positive; I really hope this trend continues.
It lowers the barrier to entry for search-based bots (imagine writing a sim for this game without a reference) and provides a way to quickly verify things that are unclear in the description.

I totally agree with what has been said about the referee. It’s a real plus.

And I want to add that it’s a good thing that the referee is readable and not optimized in any way: it leaves room to do the optimization on your own, in the data structures or the algorithms.

1 Like

By now many of you will be aware of my ‘code-sharing’ scandal :confused:

Maybe I was simply careless in using a public git repo, or perhaps I’m naive enough to think people won’t abandon their morals for a t-shirt zzz. If the shirt is really all you want (and not what went into earning it), just drop me a message and I’ll make sure to send the next shirt I win to your address…

For what it’s worth, I apologize again that my actions unfairly impacted the final rankings of both GitC and CotC :frowning:

So…now, if you haven’t already gotten your hands on my source code

Python!

Well…this competition was dominated by C/C++ and Java, probably due to the nature of the problem statement. It is one that suits search algos well; very much, in my opinion, like Fantastic Bits in a discrete space (and without the physics).

That being said, kudos to my fellow Python comrade @Icebox for sticking it through with Spaghetti :wink: and reaching top 50!

Rough Algo

  • Sequential BFS for each ship (updating the state between ships) for the next move that results in the best-scoring state after 3 moves
  • If WAIT, decide whether to FIRE or MINE

That’s it! My bot is essentially a bfs with some heuristics layered on top of it.

Heuristics

  1. Decide between running the main function or the ‘sacrifice’ option

    • The latter simply tweaks certain scoring factors in the bfs algo by adding a score (distance to highest health ship and vice versa for the highest health ship to move closer to the sacrificial ship)
    • Once the two ships are sufficiently close together and enemy ships are sufficiently far as to not be able to swoop in and steal the dropped barrel, either move into an already existing cannonball/mine or fire one to self-destruct
  2. Run bfs for enemy ships (3 moves)

    • Doesn’t account for obstacles, more of an approximation of enemy ships’ range
    • Based on simulation, predict possible locations enemy will MINE
    • Also adds a score to list of visited locations for my ships to track probability of hit based on enemy ships visiting that cell (something like what @RoboStac did) to determine where to fire upon enemy ships.
  3. Predict enemy fires (that will land in <= 2 turns)

    • Sometimes makes my ships run into a mine in front of it thinking that the enemy will shoot at the center…(so it picks the -25 option rather than the -50 one even if the enemy doesn’t actually shoot on that turn)
    • Although it breaks occasionally, it keeps my ships moving when the enemy is close and enables them to have greater flexibility to evade enemy fires. So ultimately it was an improvement to my bot’s performance.
  4. Stopping just short of the final few barrels to ration rum :stuck_out_tongue:

    • If you pick up the last barrel later than your opponent (and have more total rum), you’ve likely won that game unless your opponent does something cool like having their ships exhibit cooperative behavior to block your ship then shoot it…:open_mouth:

BFS + Scoring

Within the bfs function is where most of the magic happens :wink: Significantly, it’s where I teach my ships to ‘run into space’. Future flexibility of movement contributes a portion of the score, after rum count and distance to barrels. Basic scoring like +k1/distTo(barrels), -l1/distTo(mines) is standard to determine some positional advantage after movement. In addition to those, I threw in an accumulative score for the free cells you can travel to after each move.

  • Look ahead at what cells are reachable and weigh cells ahead of you higher than those you’ll have to turn to reach.
  • Filter out undesirable cells
  • Add a score based on how many of these cells exist
  • Tuned to be a magnitude lower score than distance scoring so it’s more of a tie-breaker (hidden if-else spaghetti inside scoring :stuck_out_tongue: )
  • So if barrels are close, distance scoring will outweigh this ‘space-finding’ weight; otherwise, on my way there and after all barrels have been picked up, I take the path with the greatest degree of freedom.
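The tie-breaker itself is tiny; a sketch with illustrative names and weights:

def space_score(cells_ahead, cells_turning, blocked):
    # Count reachable cells not blocked by mines/ships/borders,
    # weighting cells straight ahead above those that need a turn.
    free_ahead = sum(1 for c in cells_ahead if c not in blocked)
    free_turn = sum(1 for c in cells_turning if c not in blocked)
    # Scaled an order of magnitude below the distance scoring so it
    # only decides between otherwise-equal moves.
    return 0.01 * (2 * free_ahead + free_turn)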

For such scoring functions you really have to get a feel for how they perform by tweaking the constants in them…a GA would probably be able to auto-tune these for you to reach peak performance haha, but it was faster for me to tune them manually, as I didn’t have ready access to a GA and a local arena.

Optimizations

With Python I could simulate at most ~1k moves before my bot timed out…so with 3 ships, I could not move past a search depth of 3 moves into the future (but it was seemingly sufficient). Even 1k moves required me to change some of my code to optimize its execution speed.
Honestly, I’m not even sure these are legitimate optimizations for Python, but the following are some things I changed to make my code work under the time limit…

Dfs

Initially I went with a dfs search (since I wanted to compare end-states rather than picking the first one that had a barrel in it; you might run into an unavoidable mine/cannonball on your next move with that sort of pathing), but it constantly made my bot time out, even at depth 3 (5^3 = 125 moves). Apparently Python and recursion don’t go so well together. So I took @DarthPED’s advice from the previous contest and ported over to a bfs algo, and the while loop ran quite smoothly…for a while…

Bfs

Even with bfs, it took ~10ms per ship at depth 3 o.O Pretty surprised at that…Python is slow, but it shouldn’t be that slow…So after looking into my algo, I found that this line

#Declaring v[23][21][3][6]
v = [[[[False for i in xrange(6)] for j in xrange(3)] for k in xrange(21)] for l in xrange(23)]

by itself took ~8ms :open_mouth: So instead of declaring such a big sparse 4D array, I encoded the current state into a unique int:

#v_state = xxyyso
v_state = int(x*10000)+int(y*100)+int(speed*10)+int(orientation)

added it to a list, and searched the list when I needed to find out if I’d visited the state before. (Since my search depth wasn’t that large, this took significantly less time than declaring a huge array.)

Modifying game state between simulated moves

Something that I learnt playing FB multiplayer >.< Since Python passes variables to functions solely by reference, I had to explicitly copy the game state before modifying it, to prevent one simulation from inaccurately affecting another, which added a huge overhead to my bfs. However, as the changes to the game state between moves aren’t really that numerous, using a list to keep track of adjustments to the state between moves was far cheaper. So I had stuff like:

if (key in mine_key_list and key not in cur_rem_mines):

Once again I encoded grid locations of objects into an int and stored them in an array, rather than cloning and mutating a sparse 2D array.

Precomputing distances and adjacent cell locations

Because I stuck to a 2d array to work with hex coordinates, I had to do some work to access adjacent cells:

def cellAhead(loc, orien):
    tmp_bow = (loc[0]+hexAdj[loc[1]%2][orien][0], loc[1]+hexAdj[loc[1]%2][orien][1])
    return tmp_bow

So, to speed up my algo, since many such calls were necessary (to check where the bow and stern of your ship end up after certain moves, for example), I precomputed a table of the 6 surrounding cells for every central cell location, as well as the distances from each cell to every other cell.
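The precomputation itself is a one-off triple loop. A sketch using the same parity-dependent hexAdj offsets as cellAhead above (the offsets shown are assumptions and must match the game’s layout; boundary handling is omitted):

hexAdj = [[(1, 0), (-1, 0), (0, -1), (-1, -1), (0, 1), (-1, 1)],
          [(1, 0), (-1, 0), (1, -1), (0, -1), (1, 1), (0, 1)]]
W, H = 23, 21

# adjCells[y][x][orien] -> neighbouring cell, computed once at startup
# so the hot path is a plain table lookup instead of arithmetic.
adjCells = [[[(x + hexAdj[y % 2][o][0], y + hexAdj[y % 2][o][1])
              for o in range(6)]
             for x in range(W)]
            for y in range(H)]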

These were the significant ‘optimizations’ I did to get my algo running smoothly under the time limit in Python >.< They probably wouldn’t be necessary in another language like C++…As I’m writing this, I came across this document which seems really useful for keeping track of algorithmic complexity in Python :slight_smile: So I should’ve used a set instead of a list for my lookups, since containment checking is O(1) instead of O(n).

I’m very much too alive (Epic bugs)

  • Bfs that doesn’t even work (Silver -> Gold)

    • Instead of using if (v[cur_state]) continue; I carelessly wrote v[prev_state] instead…getting my bfs stuck at depth 1 all the time
  • if (fire.ttt-cur_depth < 2): (Gold -> Legend)

    • My simulation wrongly treated cannonballs with a time-to-target of < 2 as ‘hit’ :confused: so dead cannonballs with a time of 0, -1, etc. would affect my pathing too
  • Indexing error (Legend -> Top 50)

    • In order to optimize my code, I precomputed a table of adjacent cells. But to account for stuff like precomp_adjCells[-1][0], I added some offsets to prevent it from breaking the lookup table, which I forgot to add back when looking up the table in my code lol…

So, lesson learnt here: fixing bugs > implementing new features. When you find yourself scratching your head over why you’re not getting that promotion while implementing a ton of supposedly better new features…it’s time to take a closer look through your code, and even refactor it.

Thoughts on contest

Really loved the contest this time round: much more freedom than GitC (less mathematically deterministic) but sufficiently constrained (discrete rather than continuous space) that costly boilerplate physics simulation code was unnecessary. This left much more time to add the ‘I’ to your AI bot :slight_smile:

The Boss bots were perhaps a little too inaccessible for beginner players and, as @marchete pointed out, many couldn’t get out of Wood…However, the Gold Boss was incredibly satisfying to finally beat haha, and it made Legend league that much more legendary :stuck_out_tongue:

It’s unfortunate that the kerfuffle with ‘code-sharing’ happened due to my negligence…I’m more sad about negatively impacting the contest experience than about losing my t-shirt hmph…I love the challenge of bot coding contests, so much so that any external incentive is a nice bonus to aim for, not my main reason for participating. Of course, for the next contest you won’t be seeing my code on a public repo anymore :sweat_smile:

Thanks to the dev team once more for yet another fun contest, and to @marchete @mmmd @eulerscheZahl and many others who publicly supported me after finding out people cloned my bot :slight_smile: :+1:

EDIT: I’ve released a stripped-down version of my code for reference.

22 Likes