True!
Also true.
True!
Also true.
So what is in the ‘ownerId’ input for a round?
My pods went crazy :-/
But maybe they just drank too much platinumbeer…
If you don’t see a cell, the ownerId is -1, even if it’s yours.
That does seem to be the case now… caused some consternation, I bet. It would be nice to at least see an update in the rules.
I must change strategy, and put guards on the edges of my empire …
Or something like that
That’s because if you don’t see a cell you never know if it is still yours.
If you see your platinum income dropping, it might be a hint that some of your zones are not yours anymore…
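Roughly what I do in my own parser, as a sketch only (the per-round input format and field names below are illustrative, not the official statement):

```python
import sys

# Minimal fog-of-war bookkeeping sketch. The exact line format here
# ("zoneId ownerId visible ...") is an assumption for illustration.
zone_count = int(sys.stdin.readline())  # hypothetical: number of zones on the map

last_known_owner = {z: -1 for z in range(zone_count)}  # what we last *saw*, not the truth

def read_round():
    """Read one round of zone data and update our (possibly stale) ownership map."""
    for _ in range(zone_count):
        # hypothetical line: zoneId ownerId visible podsP0 podsP1
        z_id, owner_id, visible, *_ = map(int, sys.stdin.readline().split())
        if visible == 1:
            # We can see the zone: ownerId is authoritative this turn.
            last_known_owner[z_id] = owner_id
        # If visible == 0, ownerId is reported as -1 even if the zone is ours;
        # keep the last known value, but remember it may be out of date.
```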
There’s a graphical bug we have yet to correct in which 2 pod squads, upon landing on the same hex, will stack on one another instead of merging. The game continues normally though. It is just an issue with the frame-by-frame display. I recommend debugging with speed x0.1 and Play until this has been sorted out.
The contest ends in a few hours. I have questions about final ranking procedure:
a) how many battles are going to be fought between the top players,
b) how many players are included in that intensely-played top group?
Instead of using a fixed number of battles, I suggest taking the score variance into consideration. Battles should be fought until the ranking positions are stable at a given confidence level. I think a simple ANOVA analysis would be a helpful first attempt.
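Not an official procedure, just a toy sketch of the idea (the match function and the 95% threshold are placeholders): keep playing matches between two bots until the confidence interval of the mean score difference no longer contains 0, or until a hard cap is reached.

```python
import random
from statistics import mean, stdev

def rank_until_stable(play_match, max_matches=2000, z=1.96):
    """Play matches until the 95% CI of the mean score difference excludes 0."""
    diffs = []
    for _ in range(max_matches):
        diffs.append(play_match())              # score_A - score_B for one match
        if len(diffs) >= 30:                    # need a few samples before testing
            m = mean(diffs)
            half_width = z * stdev(diffs) / len(diffs) ** 0.5
            if abs(m) > half_width:             # interval excludes 0: order is settled
                return ("A" if m > 0 else "B", len(diffs))
    return ("undecided", len(diffs))

# Example with a simulated match where bot A wins ~52% of the time.
winner, n = rank_until_stable(lambda: 1 if random.random() < 0.52 else -1)
print(winner, "after", n, "matches")
```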
I think there won’t be any final ranking procedure.
The standings at the deadline will be the final rank.
At least this was the rule in the past weeks at every Monday’s deadline.
Yes.
For this challenge, once the deadline is passed, we will wait for all queued matches to complete and will use the resulting rankings.
But it’s not fair! The ranking will be highly random: the score differences between the top players are small, matches are only played at deploy time (old results are incrementally forgotten, and new deploys cause bias), the number of matches is statistically small, and freshly deployed code has high variance which lowers the score (TrueSkill works that way, I suppose). I strongly insist on a fair evaluation after the deadline, e.g. 5000 matches for each player in the top 20.
Why can’t we afford to do the same thing as Rift 1? It would encourage players to test their solution until the last moment.
@mrv Are you saying it is random based on facts from Platinum Rift 2, or based on previous contests? Because it is true that Platinum Rift 1 was highly random. We added 300 matches at the end of Platinum Rift 1, but to tell the truth it did not stabilize the leaderboard, due to the internal randomness of that game.
With Platinum Rift 2, the leaderboard seems stable. I am not checking every minute, but when I come back from time to time, the top of the leaderboard seems stable… This is why we do not see the need to perform additional matches.
In addition, the last time we added matches for the final ranking, many CodinGamers complained that we had not warned them in advance about adding these new matches. In conclusion, we cannot satisfy everyone…
I previously mentioned a few things that are currently wrong in the scoring system. The only fair method to evaluate the top players is to use STATISTICS, and TrueSkill gives you that possibility. Matches have to be played until the variance of the mean score is much smaller than the differences between the players’ mean values, and matches have to be played between all pairs of players in, e.g., the top 10. Otherwise the ranking is heavily biased and the ranking positions are not reliable.
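To illustrate what I mean by the separation criterion, here is a sketch using the open-source `trueskill` Python package (which may or may not match CodinGame’s internal implementation); the 55% win rate and the “2 sigma” threshold are made-up numbers:

```python
import random
import trueskill  # pip install trueskill

# Two players are only reliably ordered once the gap between their mean skills
# (mu) is large compared with the remaining uncertainty (sigma).
a, b = trueskill.Rating(), trueskill.Rating()
for match in range(1, 1001):
    if random.random() < 0.55:        # assume A beats B 55% of the time
        a, b = trueskill.rate_1vs1(a, b)
    else:
        b, a = trueskill.rate_1vs1(b, a)
    gap = abs(a.mu - b.mu)
    noise = 2 * (a.sigma ** 2 + b.sigma ** 2) ** 0.5   # rough "2 sigma" threshold
    if gap > noise:
        print(f"order settled after {match} matches: mu_A={a.mu:.2f}, mu_B={b.mu:.2f}")
        break
else:
    print("still not reliably separated after 1000 matches")
```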
Hi Xormode, is it still true that the top 10 players in the final leaderboard will win a t-shirt?
@mrv: First, we did not implement our current ranking system out of the blue or without thinking or without trying a few things.
Then, we have to find the right balance between cost and fairness. Playing 5000 matches * 20 players = 100000 matches is just something we cannot do without careful planning (how many days will it take to complete? how much will it cost if we want to accelerate it?).
Lastly, our very first implementation of an AI battle on CodinGame (Tron beta) was based on reaching a minimum score variance. It did not work at all: for some players (at the bottom of the leaderboard), matches would be played indefinitely, while for top players the system would stop playing matches quickly (the actual variance of top players is super low and no longer changes even after adding many more matches). So we had to find another way…
I am not saying that our system cannot be improved in several ways, but for this contest it will have to make do.
@hedonplay: for this contest, it was one t-shirt for the top 3 players every week.