How will winners of The Great Escape be determined?

Hi everyone!

This is the first time I'm participating in a contest here at CodinGame. I couldn't find a description anywhere of how winners are determined at the end of the contest. Is it just the top three players at the last moment, or will there be some extra matches after the coding phase?

I’m sorry if I missed this information somewhere on the site, but I spent some time looking for it without any success.

Thanks in advance!


I am also very interested in knowing this, as it affects my submission strategy for the next couple of days.

The way it worked for PR2 was that submissions to the arena closed at the end of the contest, and then any in-progress matches were completed. There were no extra matches after the coding phase was done. I haven’t seen anything from the CodinGame team saying this will be different.

In this contest it is very important to play many additional matches. A strategy can be poor against lower-ranked players while being good against top players, which is especially the case for 3-player games. I’ve also noticed a certain “layering” of bots: bots that use a similar strategy end up clustered together on the leaderboard. If you can beat bots from the n-th layer, you advance to the next layer; if you cannot beat them, you stay in that layer even if you have a really strong bot. In any case, more games against a variety of opponents is best. The final top-20 ranking could be determined by playing each bot against every other bot in the top 20, like in a sports league.

I agree. I resubmitted the same code and dropped from 30th to 60th. Their ranking system seems very broken.

I’d also be interested in an official statement regarding how and when the leaderboard will be frozen. It can affect what I do if I think of a last-minute fix to my AI.

I totally agree. I feel that once you have a position in the ranking, it is quite difficult to move around. When you resubmit a new bot, you climb in the ranking and rapidly stabilize. However, this stable point is not the real value of the bot; you can easily test this by playing against top-ranked bots in the training interface.

Interestingly, once “settled” in the ranking, you then climb positions one by one, slowly. I am quite sure this happens because top players update their bots, so they disappear from the top and never climb back to the same level. This should not happen unless everyone makes their bots worse when trying to improve them, which looks unlikely.

I couldn’t agree more. We experienced all these issues with the ranking system in PR1 and PR2. So a very, very kind request to the admins: run additional matches, let’s say 5000 for each player in the top 10, 2000 for positions 10–20, and 1000 for positions 20–50.

My understanding is that, past a point, TrueSkill rankings do not get more stable as the number of matches increases. This makes sense, since the original purpose of the algorithm is to rank human players, who can get stronger or weaker over time. The solution, in my opinion, would be to use another algorithm more appropriate for ranking AIs, i.e. one less eager to forget the outcomes of previous matches as new ones are played.
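For context, here is a rough sketch of a TrueSkill-style 1v1 win/loss update. This is not CodinGame’s actual implementation, and the constants (BETA, TAU) are made up; but it shows the mechanism being discussed: the dynamics term TAU re-inflates each player’s uncertainty before every game, so the uncertainty never converges to zero and the rating keeps drifting no matter how many matches are played.

```python
import math

BETA = 4.0   # assumed performance noise per game
TAU = 0.08   # assumed dynamics noise: re-inflates sigma every game

def v(t):
    # Mean-update factor: N(0,1) pdf over cdf at t.
    phi = math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(t / math.sqrt(2)))
    return phi / Phi

def w(t):
    # Variance-update factor.
    return v(t) * (v(t) + t)

def update(winner, loser):
    """One win/loss update; winner and loser are (mu, sigma) pairs."""
    mu_w, sig_w = winner
    mu_l, sig_l = loser
    # Dynamics step: uncertainty grows a little before every game,
    # so sigma has a floor and the ranking never fully "locks in".
    sig_w = math.sqrt(sig_w ** 2 + TAU ** 2)
    sig_l = math.sqrt(sig_l ** 2 + TAU ** 2)
    c = math.sqrt(sig_w ** 2 + sig_l ** 2 + 2 * BETA ** 2)
    t = (mu_w - mu_l) / c
    mu_w += sig_w ** 2 / c * v(t)
    mu_l -= sig_l ** 2 / c * v(t)
    sig_w *= math.sqrt(max(1 - sig_w ** 2 / c ** 2 * w(t), 1e-6))
    sig_l *= math.sqrt(max(1 - sig_l ** 2 / c ** 2 * w(t), 1e-6))
    return (mu_w, sig_w), (mu_l, sig_l)
```

If you run a long streak of games between two players, sigma shrinks at first but then settles at a floor set by TAU instead of going to zero, which is the “eagerness to forget” mentioned above.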

Is the challenge over? My final bot only played around 230 games. It should really be 10 times that…

I don’t know if it is a good idea to “remember” the past ranking, as it is very easy to destroy a bot with a stupid bug. However, I think that the TrueSkill ranking assumes that the ranking is transitive, i.e. if A>B and B>C, then the algorithm assumes that A>C. Clearly, for The Great Escape, this is not the case.
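A toy illustration of that non-transitivity (the “bots” here are just fixed rock-paper-scissors moves, nothing to do with the actual game): pairwise results can form a cycle, in which case no single ordering of skill values can reproduce all the outcomes.

```python
# Each "bot" always plays one fixed move; wins(a, b) says whether
# bot a beats bot b head-to-head.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def wins(bot_a, bot_b):
    return BEATS[bot_a] == bot_b

A, B, C = "rock", "scissors", "paper"
assert wins(A, B)      # A > B
assert wins(B, C)      # B > C
assert not wins(A, C)  # ...yet A does not beat C
assert wins(C, A)      # C beats A: the "ranking" is cyclic
```

A rating system built on a single scalar skill per player has to average such cycles away, which is one reason head-to-head results can disagree with leaderboard order.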

With the current ranking system, most battles are “climbing battles”, where an AI has to beat many low-ranked AIs first.
When your AI has used all of its 100 battles (which can happen quickly if it hits a cluster of AIs it has trouble beating),
it has to rely on battles generated by other climbing AIs to get an accurate rank.

This means that:

  • Most battles involve one AI with an underrated mean and an overrated standard deviation, which is probably not the most efficient way to get an accurate ranking

  • It takes a lot of time to get feedback on the strength of your AI

  • It’s a very bad idea to push it in the arena in the last minutes/hours of the contest…

  • To keep the TOP 20 ranking from freezing, regular ranking matches had to be introduced
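The “underrated mean, overrated standard deviation” point is easiest to see with the conservative leaderboard score commonly used with TrueSkill-style ratings, mu − 3·sigma (whether CodinGame uses exactly this formula is an assumption on my part): a freshly pushed bot with a reset, large sigma ranks far below an identical bot whose sigma has converged.

```python
def conservative_score(mu, sigma):
    # Conservative skill estimate: penalize uncertain ratings.
    return mu - 3 * sigma

# Hypothetical numbers: same estimated skill, different uncertainty.
veteran = conservative_score(30.0, 1.0)  # settled bot
fresh = conservative_score(30.0, 8.0)    # same code, just pushed
assert fresh < veteran  # the fresh bot starts near the bottom
```

This is why every push sends you back down to fight through the climbing battles again.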

I think two things could be done:

  • Not resetting the ranking of the AI when you push it (or maybe resetting only the standard deviation part of its ranking).
    Sure, the ranking could easily be destroyed by a bug, but that’s what we already get every time we push an AI.

  • Or at least keeping the system as it is, but playing X extra battles for all (or the top N) AIs at the end of the contest without resetting the ranking.

Also, regarding the regular ranking matches for the TOP 20 (5 additional battles every 30 minutes): the matchmaking is the same for all 5 battles. I think it would be more accurate and stable to use a different matchmaking for each of the 5 battles (especially in games like this one, where “transitivity” is low and you can get 5 battles in a row against an AI you happen to be very good or very bad against).

Why was the ranking still changing after the deadline yesterday?

After the deadline, recently pushed AIs had to complete their 100 games, which affected the global ranking.

Why on earth didn’t they run more games after the deadline?!? We should have got something like 2000 games each, but I only got 230. I am so disappointed. If I had known this, I would not have submitted so late.

My first post said this is how they did it in previous contests, and that is exactly what happened. Considering there was no response from the CodinGame team here, there wasn’t really any reason to believe it would be different this time.

You only got around 230 games because that is roughly how many are run the first time you submit to the arena. The 2000 extra games you saw were played while your code sat in the arena and was matched against other players over time.

When they cancelled 4-player games, they wrote that it would now only take around 2000 games for the TrueSkill rank to stabilize. For some reason I assumed that this would be the case for the final rankings and that they would run at least 2000 games. I guess I was wrong, and this cost me around 20 places. My bot (like yours) requires many games to climb up the ranks. Oh well, I will be better prepared next time.

I agree with you, you should post your feedback to the TGE feedback thread if you haven’t already.