Platinum Rift II - Feedback

Hi all!

First, thanks CodinGame for this great challenge.
Lots of good suggestions above. Most helpful would be to have:

  • A summary of all matches played in the arena since the last push (fight in arena): timed-out matches (as stated above), lost matches, won matches, and wins against top-N players
  • Some kind of code history, or at least a way to save one version of the code separately, something like a “last good code”. If I recall correctly, after Platinum Rift 1 ended, versions of the code were visible
  • A higher code size limit. To stay under the limit I had to reformat code, delete comments, and replace the default 4 spaces the editor inserts instead of a tab. A simple replacement of 4 spaces with a tab reduced my code size from 50k to 40k. This tab issue is probably common to all contests.

Some languages permit more condensed code, and some have more built-in features than others. A code size limit does not reflect the complexity of an AI algorithm.

Was there a final evaluation after the end of the contest?
It would be interesting to evaluate the PR2 AIs with a minimum number of matches for each player. Having some players with more than 1000 battles and others with fewer than 150 in the final leaderboard is not fair.

There are also huge differences in the number of battles played between players with close rankings.
For example, ranks 14 and 15: 1352 vs. 398 matches. The reason is obvious: the 14th-ranked player last pushed at 09:18 and the 15th at 18:34. In my opinion, such differences should not be caused by the ‘commit’ time, as long as the commit was before the end of the contest. More matches played means a more accurate evaluation.

Overall, it was a great challenge.
Thanks CodinGame and all CodinGame-addicted folks!

Later edit: It would be great to have some kind of CodinGame plugin for a widely used IDE like Eclipse, to help with source versioning, faster code editing, and who knows what other useful features :smile:



As the TrueSkill rank is calculated as the mean score minus the score variance, I’d like to see those values to judge how stable the leaderboard really is.

Please provide a chart showing the score variance for each player.



True. I use statistics in my work every day, and I simply don’t trust any result that isn’t statistically proven and explained. For instance, the score difference between me and Neumann was 0.03 in the final results, but what were the score variances? Only a statistical test can tell us whether my result was better at a given confidence level (e.g. 95%). If such a test fails, we cannot say that either of us was better. There were a few more issues with TrueSkill, and I pointed them out in the previous topic (PR2 Your Rules). I suggest that everyone who hasn’t read about TrueSkill yet do so before the new contest begins. It’s extremely important to know the basics of this ranking method and to suggest to the admins how to tune it for the best possible fairness.
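As a rough illustration of why a 0.03 gap may not be significant: if we treat two players’ TrueSkill estimates as independent Gaussians (a simplifying assumption; the means and deviations below are made up for illustration), the probability that one true skill exceeds the other can be computed directly:

```python
import math

def prob_better(mu_a, sigma_a, mu_b, sigma_b):
    """Probability that player A's true skill exceeds player B's,
    modelling both skills as independent Gaussians."""
    # The difference of two Gaussians is Gaussian with summed variances.
    z = (mu_a - mu_b) / math.sqrt(sigma_a ** 2 + sigma_b ** 2)
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A 0.03 score gap with typical deviations is nowhere near 95% confidence.
print(prob_better(25.03, 0.8, 25.00, 0.8))
```

With these illustrative numbers the probability comes out barely above 50%, so a test at the 95% level would indeed fail to separate the two players.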


The current ranking system can be exploited, as poksh pointed out.
If someone has two AIs, one just slightly worse than the other, and submits the worse one many times before the deadline, the worse AI can boost the better AI’s TrueSkill rank, because the better one wins most of the time.
And it’s possible to resubmit the worse AI before it goes into battle with other higher-ranked AIs.

Also, if this is true, other cheats could be possible as well, e.g. security problems with open internet connections.

It’d be good to get these points cleared up before the next multiplayer battle.

Thanks :smile:


Yes, with open internet connections you do not even need “slightly worse” algorithms; they can be identical, but the AI of the “bad” account can identify the “good” AI and intentionally give way.
Also, with open internet connections an AI can obtain the probable name of its opponent (based on the latest matches played) and use it for optimizations.
Testing in sandboxes is the only way for these contests.

Anyway the contest was immense and fun.
Thanks for it!

Hello Everyone!

Thanks to CodinGame for this great challenge, and I’m glad that a new multiplayer contest is starting soon :slight_smile:

I think it would be great to run evaluation tests across all submitted codes to get an accurate leaderboard.
The most important thing is to run symmetric tests only, and each one twice: each player must play from each base.
This sounds pointless, but it is important. The map nodes are numbered from top-left = 0 to bottom-right = N.
If the arrays in the code are created and used in this order, the pods may move differently in the right part of the map than in the left part.
I hope you understand what I am trying to explain; there could be many other reasons for such differences, but this is one of them.
I saw many cases on symmetric maps where, if I swapped the players’ positions, the result was the complete opposite…
That means that if someone didn’t handle these cases in his code and had some luck, he could reach positions he would never have reached otherwise, and vice versa…

I would be curious to know how the leaderboard would be affected after tests!

Have a good day everyone! :slight_smile:


Strategy on high level

Initial input processing:

  • Pre-calculate all path lengths between zones, because there were about 900-1000 milliseconds for reading and processing before the first round.
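A minimal sketch of this pre-calculation, assuming the map is held as an adjacency list (zone id → linked zone ids); since all links have equal cost, a BFS from every zone suffices:

```python
from collections import deque

def all_pairs_distances(neighbors):
    """BFS from every zone over an unweighted adjacency list.
    neighbors: dict zone_id -> list of adjacent zone ids."""
    dist = {}
    for start in neighbors:
        d = {start: 0}
        queue = deque([start])
        while queue:
            z = queue.popleft()
            for n in neighbors[z]:
                if n not in d:
                    d[n] = d[z] + 1
                    queue.append(n)
        dist[start] = d
    return dist
```

For PR2-sized maps this fits comfortably in the generous first-round time budget.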

Input processing per round:

  • Set a zone’s owner ID and minable platinum source only when it is visible. The owner ID did not always contain the proper information as the game description claimed, only when the zone was visible to my own pods.

Zones processing per round:

  • Check if a zone is worth exploring, meaning the zone has platinum on it or has a linked zone that was not visible before. This check is vital in the initial phase of the battle and helps fast exploration. (I know this could be improved, but I didn’t have the time and brain for it :slight_smile:
  • For every newly explored zone (one that just became visible), calculate the ‘shortest reachable zones count’ for each of its linked zones. This helped a lot to distribute pods when they had to go in different directions.
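The post doesn’t define the ‘shortest reachable zones count’ exactly; one plausible reading, sketched here, is: for each neighbor of a zone, count the unexplored zones reachable through that neighbor without stepping back through the zone itself.

```python
from collections import deque

def reachable_counts(neighbors, zone, unexplored):
    """For each neighbor of `zone`, count unexplored zones reachable
    from that neighbor without passing back through `zone`.
    One possible reading of the 'shortest reachable zones count'."""
    counts = {}
    for start in neighbors[zone]:
        seen = {zone, start}
        queue = deque([start])
        count = 1 if start in unexplored else 0
        while queue:
            z = queue.popleft()
            for n in neighbors[z]:
                if n not in seen:
                    seen.add(n)
                    if n in unexplored:
                        count += 1
                    queue.append(n)
        counts[start] = count
    return counts
```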

Discover/Conquer zones strategy:

  • Quick and efficient discovery is a major part of my AI; it was improved continuously.
  • First, pods check whether their linked zones have platinum on them. If yes, they move to the zone with the most platinum.
  • Second, if there is no platinum in the vicinity, they move in the direction with the most unexplored neighbor zones. This accelerates exploration a lot.
  • Third, they check the ‘shortest reachable zones count’ calculated above and move in the most promising direction.
  • Pod distribution (when a zone has a lot of pods): send more pods in the directions with a higher ‘shortest reachable zones count’.
  • The tricky part is when all the neighbors of a zone are under my command (i.e. owned). At this point the AI has to decide in which direction(s) the pods are worth moving. I also calculated this using the ‘shortest reachable zones count’ of the possible destinations, but divided by the distance between the zones. This formula helped to sort the possible explorable zones and pick the most promising one.
  • Disrupt any discovery activity if an enemy-owned zone is sighted, and take it over. This helps track and hunt invisible infiltrators :slight_smile:
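The scoring in the second-to-last point boils down to ‘shortest reachable zones count’ divided by distance. A sketch (the function and dict names, and the division-by-zero guard, are illustrative):

```python
def rank_destinations(candidates, reach_count, dist_from):
    """Sort candidate destination zones by reachable-count / distance,
    best first. reach_count and dist_from are assumed precomputed
    (e.g. by the BFS helpers sketched earlier)."""
    def score(zone):
        return reach_count[zone] / max(dist_from[zone], 1)  # avoid division by zero
    return sorted(candidates, key=score, reverse=True)
```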

Attack and defence of zones:

  • If a neighbour has enemies on it but more platinum than the current zone, attack it with enough pods to conquer it.
  • If the current zone has platinum and there are enemies in the vicinity, stay on that zone.
  • If a neighbour zone has more platinum and there are enemies in the vicinity, move to that zone to protect it.
  • Only send pods to attack or defend if there are not already enough pods to keep/conquer the zone. This helps prevent “overprotecting/overattacking”.

Special tactics for already owned zones:

  • Check whether a zone might be in danger of takeover by an enemy hiding in invisible zones, i.e. is there any enemy-owned zone from which that zone can be reached.
  • If a zone with platinum is in danger of an unnoticed takeover by the enemy, send a guard to it at regular intervals.

Special tactics for long battles, which could lead to “undecided” matches:

  • There are maps with just a few narrow passages between the bases. These narrow passes can lead to battles that never really end in victory for either side, because the platinum sources are similar.
  • To help win some of these battles I implemented logic that kicks in after 125 rounds: command one pod to discover all the zones previously deemed not worth exploring (grey zones). I think this helped a little sometimes. ^^

Special tactics for maps where bases are really close to each other:

  • Check in the first round whether the distance between the bases is smaller than 9. I found that above 8 zones the exploration can be so extended that the “natural” defence of the bases is enough to repel fast attacks. (This “magic” value could be calculated from the map size, but that wasn’t worth the effort for me in practice.) If the enemy base is near, send a few pods towards it to give them a goodbye note :slight_smile:
  • If my own base is in danger (meaning the enemy has more pods within a certain distance - for example 3 - than I have), order enough of my own pods back to the base to defend it.
  • If the bases are really close to each other (less than 3 zones apart), order one pod to sit permanently in the base. This helps prevent sneaky invisible attacks when my AI sends pods out to explore but the enemy comes in through the shadows…

Possible improvements:

  • Distribute more pods where more platinum is discovered over time.
  • Send reinforcements where more enemies are sighted.
  • Movement could be improved with evasive manoeuvres: when the enemy is overpowering, a pod could try to sneak past them.

This wall of text only sketched some of the tactics at a high level.
Have fun :slight_smile:


Well described! This is exactly my code. Just a few more things:

  • First, before any list processing, SORT THE LIST! It can be sorted by the amount of platinum, the number of enemies, the distance from your base or from your opponent’s, or maybe a combination of all these criteria, but the order of processing is very important in this game. The maps are symmetric and the opponent probably uses the same tricks as you, so if you want to win, every detail counts. Every 20 turns, each unit of platinum you have more than your opponent means one more pod for you, and that pod will make the difference on the battlefield.

  • Second, create a way to call your troops where they are needed. Personally, I developed a system that calls in a pod each time one of my pods was destroyed. This helped a lot when the battle happened on bridges: when my army was losing ground on a specific bridge, backup was immediately sent there.

  • Finally, my best idea, the one that I think made me win, was to mark some zones as contaminated whenever an enemy was inside my area. Then, each turn, I contaminated every area surrounding a contaminated area. This way I could estimate where the enemy probably was and which zones I had to check, so I sent patrols only when really needed, and only to the zones that could have been stolen.

Also, just a remark: you should notice that sending more than 4 pods to the same place is completely useless, as the rules say you cannot destroy more than 3 pods each turn. It’s better to use this information to cover more ground.
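One turn of the contamination spreading described above could be sketched like this (the `visible_clear` parameter, zones currently seen to be free of enemies, is an assumption about how zones get cleaned again):

```python
def spread_contamination(contaminated, neighbors, visible_clear):
    """One turn of the contamination idea: every zone adjacent to a
    contaminated zone becomes contaminated, except zones we can
    currently see are enemy-free."""
    new = set(contaminated)
    for z in contaminated:
        for n in neighbors[z]:
            new.add(n)
    return new - set(visible_clear)
```

Run once per turn; patrols then only need to visit the contaminated set.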


I think one of my greatest mistakes was that each pod decided where to go. Near the end of the tournament, I became convinced that it makes much more sense for the zones to choose pods, but that would have required a near-complete rewrite of the code.
This is maybe the biggest difference between your solution and the others.

My AI is a bunch of patches trying to adapt to each new rule, but here is my favorite trick for dealing with the fog of war.

Detect symmetrical maps.

  • On these maps, when I discovered a new cell on my side, I updated the opposite cell. So when a pod succeeded in sneaking into the enemy camp, it could easily target platinum cells and did not have to wander around to discover them.

Among other things, this lets you avoid useless corners/branches/areas and saves a lot of time on the battlefield.


Find the ID of a symmetrical cell:

Sym_ID = Total_cells - 1 - Init_ID

Check that all cells of the map have the same pattern

Easy way: in a loop, check that every pair of cells has the same number of neighbors; break out of the loop if not.

Then check the bases

Check if my base is symmetrical to the enemy base.
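Putting the three checks together, a sketch of the detection (it can still give false positives, so a platinum mismatch on a later turn should cancel symmetric mode):

```python
def looks_symmetric(neighbor_counts, my_base, enemy_base):
    """Heuristic symmetry test: pair cell i with cell n-1-i, require
    equal neighbor counts for every pair, and require the two bases
    to be each other's mirror. neighbor_counts[i] is the number of
    links of cell i."""
    n = len(neighbor_counts)
    if any(neighbor_counts[i] != neighbor_counts[n - 1 - i] for i in range(n)):
        return False
    return enemy_base == n - 1 - my_base
```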


I didn’t notice a big difference in rank after submitting symmetric-map detection. I think there are actually many tricks that each help only in rare situations, but if an AI has a lot of them, they begin to add up.

About zone sorting, this approach was very helpful for me:

  1. build targets based on zones and enemy pods
  2. sort targets by “importance” (depending on platinum, enemy pods, and so on; if there are important targets near a zone, its value is also increased)
  3. select the most important target
  4. select my nearest pod
  5. determine the best target for this pod (taking the distance into account)
  6. if this is a new target, repeat from step 4
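A simplified sketch of the six steps above (the `importance / (1 + distance)` weighting in step 5 is an assumption; the post doesn’t give the exact formula):

```python
def assign_pods(targets, pods, dist):
    """Greedy target/pod matching. targets: dict target zone -> importance;
    dist[pod][zone]: precomputed distances."""
    assignments = {}
    free_pods = set(pods)
    remaining = dict(targets)
    while remaining and free_pods:
        target = max(remaining, key=remaining.get)                 # step 3
        pod = min(free_pods, key=lambda p: dist[p][target])        # step 4
        # steps 5-6: the chosen pod re-evaluates all targets by distance,
        # possibly switching to a better-placed one
        target = max(remaining, key=lambda t: remaining[t] / (1 + dist[pod][t]))
        assignments[pod] = target
        free_pods.discard(pod)
        del remaining[target]
    return assignments
```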

My AI also had “contaminated” zone detection, but based on splitting the map into “rooms” connected by gates (gates between “clean” and “contaminated” rooms were given higher importance). Unfortunately, my algorithm for marking rooms as “contaminated” was defective; it looked fine, but I only found the problem on some maps on the last day and did not have time to come up with a good fix.
darksharcoux’s solution looks much cleaner and more effective. In fact, it is a really important improvement; I think all of the top 10 had such ideas, and those with the best of them won.

Yes, yes! Sorting is important too :smile:

I wanted to detect symmetrical maps, but didn’t know how to do it. Could you please elaborate on your procedure?

This was my first competition on this site and I truly enjoyed it, so thank you! However, I lost interest halfway through when bases were introduced: it meant that my old strategy was useless and I would have needed a complete rewrite to fix it. So I will not describe my final bot, because it is rubbish. I just want to mention a nice solution I had for deciding where to spawn (under the original rules, before bases were introduced).

So suppose you can spawn K bots and you have N zones that will be attacked in the next turn: how should you distribute your K bots most effectively? Clearly you want to save the highest-producing zones, but a greedy approach does not always work here. Let Mi be the number of bots needed to protect zone i and Pi the platinum produced by zone i. Let Xi be a boolean variable for each zone i; if Xi = true we spawn Mi bots at zone i (there is no point spawning fewer, because we would lose it anyway), otherwise Xi = false and we ignore zone i (spawn 0 bots). This is a 0/1 knapsack problem: we want to maximize sum_i Xi*Pi such that sum_i Xi*Mi <= K, where K is the size of our sack, Mi the weight of each item, and Pi its value. Implementing this gave me an optimal spawning solution for defence. Of course this doesn’t solve defence entirely, as I also needed to move my bots correctly, but it was a big step towards a good solution. It seemed to work quite well in practice and allowed me to reach the top 10 in the early days.
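A sketch of the knapsack DP just described, maximizing the platinum saved subject to spending at most K pods (the field layout is illustrative):

```python
def spawn_defence(zones, k):
    """0/1 knapsack over threatened zones.
    zones: list of (m_i pods needed, p_i platinum saved); k: pods available.
    Returns (max platinum saved, indices of zones to defend)."""
    # best[c] = (platinum saved, chosen zone indices) using at most c pods
    best = [(0, [])] * (k + 1)
    for i, (m, p) in enumerate(zones):
        for c in range(k, m - 1, -1):  # iterate backwards: each zone used once
            cand = (best[c - m][0] + p, best[c - m][1] + [i])
            if cand[0] > best[c][0]:
                best[c] = cand
    return best[k]
```

With zones needing (2, 2, 3) pods and saving (3, 3, 5) platinum at K = 4, the greedy choice (the 5-platinum zone) saves only 5, while the DP correctly defends the two cheaper zones for 6, illustrating why greedy fails.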


@aude @FredTreg Is my request coming along or is it just not possible?

To detect symmetric maps on the first turn, you can compare zone link counts, starting from the edges towards the center.
Some maps are symmetric in structure but not in platinum placement, so on subsequent turns you should cancel symmetric mode after the first detected mismatch.


@lechium06_: We chose not to show the variance because it is not useful: once you have played 50 matches, it stays between 0.75 and 1 for all players in the top list. This is also why we are not using it in an algorithm that would schedule more or fewer matches based on it.
For example for the top 10 of PR2, the variance is:


Thanks for the response.