Platinum Rift II - Your rules

I think there won’t be any final ranking procedure: the standings at the deadline will be the final ranking. At least, that was the rule at every Monday’s deadline in the past weeks.

Yes.
For this challenge, once the deadline has passed, we will wait for all queued matches to complete and use the resulting rankings.

But it’s not fair! The ranking will be highly random: score differences between the top players are small, matches are played only at deploy time (old results are incrementally forgotten, and new deploys introduce bias), the number of matches is statistically small, and freshly deployed code has a high variance that lowers its score (TrueSkill works that way, I suppose). I strongly insist on a fair evaluation after the deadline, e.g. 5000 matches for each player in the top 20.
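To illustrate the last point, here is a minimal sketch using the open-source `trueskill` Python package. It assumes the leaderboard ranks players by the common conservative estimate mu - 3*sigma; CodinGame’s actual rating parameters are not public, so the numbers are only indicative.

```python
# Minimal sketch with the open-source `trueskill` package (pip install trueskill).
# ASSUMPTION: the leaderboard ranks by the conservative score mu - 3*sigma,
# a common TrueSkill convention; CodinGame's real parameters are not public.
import trueskill

def conservative(r):
    """Pessimistic skill estimate: a high sigma (uncertainty) drags the score down."""
    return r.mu - 3 * r.sigma

veteran = trueskill.Rating()
for _ in range(200):                        # a long match history shrinks sigma
    veteran, _ = trueskill.rate_1vs1(veteran, trueskill.Rating())

fresh = trueskill.Rating()                  # redeploying resets sigma to its prior
print(conservative(veteran))                # sigma is small, score is close to mu
print(conservative(fresh))                  # sigma is large, score starts far below mu
```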


Why can’t we afford to do the same thing as Rift 1? It would encourage players to test their solution until the last moment.


@mrv Are you saying it is random based on observations of Platinum Rift 2, or based on previous contests? It is true that Platinum Rift 1 was highly random. We added 300 matches at the end of Platinum Rift 1, but to tell the truth, it did not stabilize the leaderboard, due to the internal randomness of that game.

With Platinum Rift 2, the leaderboard seems stable. I am not checking every minute, but when I come back from time to time, the top of the leaderboard looks the same… This is why we do not see the need to run additional matches.

In addition, the last time we added matches for the final ranking, many CodinGamers complained that we had not warned them in advance about the new matches. In conclusion, we cannot satisfy everyone…

I previously mentioned a few things that are currently wrong in the scoring system. The only fair method to evaluate the top players is to use statistics, and TrueSkill gives you that possibility. Matches have to be played until the variance of the mean score is much smaller than the differences between the players’ mean values, and they have to be played between all pairs of players in, e.g., the top 10. Otherwise the ranking is heavily biased and the ranking positions are NOT reliable.
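The criterion can be made concrete. Below is a minimal sketch using the open-source `trueskill` package; the `separated` helper and the safety factor `k` are illustrative assumptions, not anything CodinGame actually runs.

```python
# Sketch of the reliability criterion above, using the open-source `trueskill`
# package. The helper and the safety factor k are illustrative, not CodinGame's.
import trueskill

def separated(a: trueskill.Rating, b: trueskill.Rating, k: float = 2.0) -> bool:
    """True when the gap between the means clearly dominates the combined
    uncertainty, so the relative order of a and b is unlikely to flip."""
    gap = abs(a.mu - b.mu)
    noise = (a.sigma ** 2 + b.sigma ** 2) ** 0.5
    return gap > k * noise

# A scheduler following this rule would keep pairing adjacent players in the
# top 10 and stop only once every adjacent pair is separated (or a match
# budget is exhausted).
```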

Hi Xormode, is it still true that the top 10 players in the final leaderboard will win a t-shirt?

@mrv: First, we did not implement our current ranking system out of the blue, without thinking or without trying a few things.
Then, we have to find the right balance between cost and fairness. Playing 5000 matches * 20 players = 100,000 matches is just something we cannot do without careful planning (how many days would it take to complete? how much would it cost if we wanted to accelerate it?).
Lastly, our very first implementation of an AI battle on CodinGame (the Tron beta) was based on reaching a minimum score variance. It did not work at all: for some players (at the bottom of the leaderboard), matches would be played indefinitely, while for the top players the system would stop playing matches quickly (the actual variance of top players is super low and no longer changes even when many more matches are added). So we had to find another way…
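To make the Tron beta anecdote concrete, here is a hedged sketch of such a stop-at-minimum-variance loop, again using the open-source `trueskill` package. The `play_until_confident` helper and the `tau` value are assumptions; one plausible reading of the failure is that the dynamics factor tau re-inflates sigma after every match, so an erratic player’s sigma plateaus above the threshold and the loop never exits.

```python
# Hedged sketch of a "play until the variance is low enough" scheduler, using
# the open-source `trueskill` package. All names and values are illustrative.
import trueskill

# tau models per-match skill drift; it puts a floor under sigma, which is one
# plausible reason erratic (bottom-of-leaderboard) players never converge.
trueskill.setup(tau=0.3)

def play_until_confident(player, play_match, threshold=1.0, budget=10_000):
    """Schedule matches until the player's sigma drops below threshold.
    play_match must return (opponent_rating, player_won). Returns the number
    of matches used; hitting the budget means the loop would run indefinitely."""
    for n in range(budget):
        if player.sigma < threshold:
            return n
        opponent, won = play_match()
        if won:
            player, _ = trueskill.rate_1vs1(player, opponent)
        else:
            _, player = trueskill.rate_1vs1(opponent, player)
    return budget
```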

I am not saying that our system cannot be improved in several ways, but for this contest it will have to make do.

@hedonplay: for this contest, it was one t-shirt for the top 3 players every week.

So the prize was modified. At the beginning of this contest, it was said that for the final ranking, the top 10 players would win a t-shirt.

@hedonplay: When did we say that? Maybe we provided incorrect information at some point, but I don’t recall it. If you could point me to the source of the error, that would be great. Anyway, it is not a matter of saying: it was clearly written on the challenge rules & awards page (on the t-shirt image).

If you have a commit history for your pages, you can check the description of the prize section as it was published at the beginning of the contest. I don’t have the old versions.

What about 1 pod vs 1 pod?

Wow, this is quite an old topic :wink:
If both sides have an equal number of PODs, they would both be reduced to 0.

Wanna duel on PR2?