Ranking calculations update

As promised, I’m unearthing this old topic:
How is the Coding Rank calculated

Summary of the issues:

  • Challenges with a lot of players reward so many points that the winners of low-participation challenges (like Codebusters) earn fewer points than players ranked beyond 1000 in other games.
  • As we keep only the best 3 scores for contests, challenges with a lot of players pretty much decide the ranking (e.g. optimization contests like CvZ, because of players’ auto-submits).
  • Multiplayer bot programming earns more than twice the best contest points, and this keeps increasing as we publish new games.

Objectives for a “fair” ranking:

  • Allow new players to reach high ranks (past results should not decide the ranking).
  • Make all the games “equal”
  • Make it simple (as much as possible), robust (avoid edge cases) and durable (so it doesn’t become obsolete in 1 year).

@marchete’s proposal (helped by @Magus and @Neumann):

(BASE * min(N/500, 1))^((N - C + 1)/N)

Where N is the total number of players, C the rank, and BASE a fixed value to be determined (from 2k to 5k)
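A minimal Python sketch of this proposal, with function and parameter names of my own (the default BASE of 2000 is just one value from the suggested 2k–5k range):

```python
def coding_points(rank: int, players: int, base: float = 2000.0) -> float:
    """Proposed score: (BASE * min(N/500, 1)) ^ ((N - C + 1) / N),
    where N is the player count and C the rank (1 = winner).
    The min(N/500, 1) factor scales BASE down for low-participation games."""
    effective_base = base * min(players / 500, 1)
    return effective_base ** ((players - rank + 1) / players)

# The winner of a 2000-player game gets the full BASE,
# while the winner of a 100-player game gets only BASE * 100/500:
print(coding_points(rank=1, players=2000))  # 2000.0
print(coding_points(rank=1, players=100))   # 400.0
```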

@Palmipedus also mentioned the possibility to mimic the Tennis ATP scoring system.

Then, what about Clash of Code? Code golf?

Will the formula be applied to both contests and multiplayer puzzles, or just contests?

In my opinion, just remove the coding points earned with CoC. But I’m biased.

Code golf is fine, I think. All code golf puzzles can give 1000 points. Pretty good as it is.

Again in my opinion, contests should reward more points than multiplayer puzzles.

For example, the BASE for multiplayer puzzles should be something like 2500, and the BASE for contests something like 5000.

But don’t remove the rule of keeping only the best 3 contests. It’s a good rule.

1 Like

I think the proposed formula is pretty good. @Magus: note that it says “min” instead of “max”.

For multiplayer games it is necessary to limit the number of points you can get from them, since it will only get harder for new players to catch up on all the multis. The current system might even encourage some people to copy code found elsewhere. You could choose to limit the score to the best X multis, or at the very least strongly diminish the points to be gained from additional multis (one possible weighting is sketched below).
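To illustrate the “strongly diminish” variant, here is one hypothetical way to do it: a geometric weighting of multi scores, best first. The decay value is my own placeholder, not part of any proposal here.

```python
def weighted_multi_total(scores: list[float], decay: float = 0.8) -> float:
    """Weight the k-th best multi score by decay**k, so each
    additional multi is worth less than the previous one."""
    ranked = sorted(scores, reverse=True)
    return sum(s * decay ** k for k, s in enumerate(ranked))

# 30 multis at 1000 points each cap out near 1000 / (1 - 0.8) = 5000,
# instead of growing without bound as new games get published:
print(round(weighted_multi_total([1000.0] * 30)))  # 4994
```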

I also think Clash of Code should not play a role in the main ranking. It already has its own leaderboard and has very little, if anything, to do with the other categories. For the same reason I don’t think Code Golf should be relevant to the main ranking. If Code Golf is included, you might as well factor in how fast we can type, or how fast we can spell the alphabet backwards.

3 Likes

You are trying to find a good formula to add shoes, apples and hammers. Why not keep them separate and find a good way to compute an average of the rankings for a global position? (I’m not thinking of functions or maths, just asking.)

I’ll write my formula proposals in the next post; here I’ll list what I think a global leaderboard should measure.

  • It should measure all kinds of competitive games. Many of us don’t like Clash of Code much, but it’s a thing, so it must be measured.
  • Each category must have a fixed percentage of the total score. It can’t drift if one category increases in size over time, as happens with multiplayer.
  • New players must have equal opportunities to reach the Max Score. The real issue here comes from challenges, so keeping the best 3 challenges is enough to fix that.
  • The ATP idea can be interesting, so I added some score for the last 3 challenges. An active player should have a higher challenge score than someone who stopped doing challenges years ago. Giving points for the last 3 challenges is a simple and reliable way to achieve that. EDIT: giving points to the best 3 challenges in the last 12 months.
  • Make all the games “equal”: all games will use the same formula.
  • Make it simple (as much as possible): sorry, that’s hard to achieve with so many test cases… Drift is a hard thing to fix.
  • Robust (avoid edge cases): min(N/500, 1) fixes the edge cases of rarely played games. Statistically speaking, 500 is a good sample size for a population.
  • Durable (so it doesn’t become obsolete in 1 year): it doesn’t drift over time, as the Max Score per category is fixed.
  • Challenges encourage being fast-paced and active in short, intensive periods of time. Multiplayer encourages slow-paced, dedicated effort. A fellow CG player just got promoted to UTTT Legend after months of trying. This dedication must be rewarded, so I think all multiplayers must count towards the total score (not only the best X). Both veterans and new players can play all multi and optim puzzles; there is no real need for a limit like in challenges.
3 Likes

In all cases the formula for each challenge is the same as proposed:
f(x) = (BASE * min(N/500, 1))^((N - C + 1)/N)

> OPTION A

Different BASE for each category, then a drift correction on the categories that keep increasing in size; this way the total Max Score always remains the same (a sketch of the correction follows the table).

| Category | Best of | BASE | Formula | Drift Correction | Max Score |
| --- | --- | --- | --- | --- | --- |
| Last 3 Challenges* | 3 | 3,000 | f(x) | None | 9,000 |
| Best 3 Challenges | 3 | 10,000 | f(x) | None | 30,000 |
| Multiplayer | ALL | 3,000 | f(x) | 20/ALL | 60,000 |
| Optimization | ALL | 2,000 | f(x) | 6/ALL | 12,000 |
| Code Golf | ALL | 2,000 | f(x) | 3/ALL | 6,000 |
| Clash of Code | 1 | 3,000 | f(x) | None | 3,000 |
| **TOTAL** | | | | | **120,000** |

*EDIT: “Last 3 Challenges” can be changed to “Best 3 challenges in the last 12 months” to create a bigger activity window, as Palmipedus suggested.
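A minimal sketch of the Multiplayer drift correction under Option A, assuming the 20/ALL factor from the table (the scores are placeholders):

```python
def multiplayer_total(game_scores: list[float], base_games: int = 20) -> float:
    """Option A: every multi counts, but the sum is rescaled by
    base_games / ALL so the category cap (20 games at BASE 3,000
    -> 60,000) stays fixed as new games get published."""
    all_games = len(game_scores)
    if all_games <= base_games:
        return sum(game_scores)  # no drift yet, nothing to correct
    return sum(game_scores) * base_games / all_games

# 25 published multis with a perfect 3000 in each: still capped at 60000.
print(multiplayer_total([3000.0] * 25))  # 60000.0
```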

> OPTION B

Only the 20 best multis, 6 best optims and 3 best code golfs count (a top-K selection sketch follows below). PRO: it’s simpler to calculate. CON: it discourages participating in more multis and optims once you reach the limits (20 and 6).

I rather prefer Option A.

| Category | Best of | BASE | Formula | Drift Correction | Max Score |
| --- | --- | --- | --- | --- | --- |
| Last 3 Challenges* | 3 | 3,000 | f(x) | None | 9,000 |
| Best 3 Challenges | 3 | 10,000 | f(x) | None | 30,000 |
| Multiplayer | 20 | 3,000 | f(x) | None | 60,000 |
| Optimization | 6 | 2,000 | f(x) | None | 12,000 |
| Code Golf | 3 | 2,000 | f(x) | None | 6,000 |
| Clash of Code | 1 | 3,000 | f(x) | None | 3,000 |
| **TOTAL** | | | | | **120,000** |

*EDIT: “Last 3 Challenges” can be changed to “Best 3 challenges in the last 12 months”.
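Option B’s per-category selection is a simple top-K truncation; a quick sketch (limits from the table above, scores made up):

```python
def best_of_total(scores: list[float], keep: int) -> float:
    """Option B: only the `keep` best scores in a category count."""
    return sum(sorted(scores, reverse=True)[:keep])

# Multiplayer keeps the best 20, Optimization the best 6,
# Code Golf the best 3:
print(best_of_total([2400.0, 2900.0, 1500.0, 2800.0], keep=20))  # 9600.0
print(best_of_total([1800.0, 900.0, 1950.0], keep=6))            # 4650.0
```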

1 Like

Wow, a big brain fart from me… Thanks for that. I removed that part :smiley:

Just an idea for your option B:

We could have a “game of the month” (or 2 months, or whatever) that counts towards the score. The score would then be something like:

| Category | Number | BASE | Formula | Drift Correction | Max Score |
| --- | --- | --- | --- | --- | --- |
| Game of the month | 1 | 3,000? | f(x) | None | 3,000? |
| Multiplayer | Best other 20 | 3,000 | f(x) | None | 60,000 |
| Optimization | Best other 6 | 2,000 | f(x) | None | 12,000 |

In that case, the numbers 20 and 6 could be decreased.

The advantage is that it gives a goal to new players (having a good score in many multis/optims) while keeping veterans active if they want to maximize their score with the game of the month. It could also create a good dynamic around the selected game.

1 Like

I’m not convinced by the “game of the month”. We started with the idea of making the leaderboard more stable, and now we have a suggestion that would scramble it every month (same with the last 3 contests: if you miss one, you instantly drop some ranks).

I scraped some leaderboards (without codegolf and clash of code) to give an idea how the proposals would look like:
  • Marchete option A (rank all multis, max multiplayer score = 60k): pastebin
  • Marchete option B (best 20 multis, 6 optims): pastebin
  • Best 5 multis, 2 optims, bonus for the most recent 3 contests, max multiplayer score = 15k: pastebin
  • Best 5 multis, 2 optims, no bonus for recent contests: pastebin

The question is: should the leaderboard show the most skilled players at the top, or those who had the most time to write a decent bot for every arena (possibly without ever hitting the top 10 in a single game)?

5 Likes

Clash of Code should be moved into its own global category and be completely detached from the general leaderboard.
Perhaps the sprints (2-4h contests) should be added to this category, though. The last one had around 700 participants and NO impact on contest points for anyone who had participated in three week-long contests.

Speaking of new players: if the best-3-contests reward stays the way it is, then new players might be forever prevented from obtaining the maximum contest points. Some of the older contests had 3000+ participants. You perhaps need a completely different way to scale the points.

For example, rank 100 on the final leaderboard of a contest with 3600 participants yields more than 2800 points. You won’t reach this amount even by placing #1 in a contest with 2500 participants, and the players have only gotten better since then; yet a top-100 finish in any smaller contest means less CP regardless.
This means that even if someone ranked #1 in the past 8 contests in a row, they would be ranked lower than someone who got a good rank in 3 contests with 3000+ participants.
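A quick check of those figures, assuming the current contest formula is CP = N^((N - C + 1)/N) as discussed in the linked thread (an assumption on my part, not an official statement):

```python
# Assumed current formula: CP = N ** ((N - C + 1) / N),
# with N participants and rank C (1 = winner).
def current_cp(rank: int, players: int) -> float:
    return players ** ((players - rank + 1) / players)

print(round(current_cp(100, 3600)))  # ~2874: rank 100 of 3600 players
print(round(current_cp(1, 2500)))    # 2500: winning a 2500-player contest
```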

I also like the idea of having the last 3 contests give some bonus CP.

Continuing with the new player experience: imagine you had to write 30+ bots and put in some serious effort to compete on the general leaderboard. Exactly, most will prefer not to even bother. And as the multis pile up, they will be disadvantaged on the ladder. Even the suggested 20 multiplayers seems like an awful lot.

I suggest everyone can pick something like 5-10 favorite multiplayers to be ranked on.
The CP gained from multiplayers should be scaled so that any 2 multiplayer games reward the same maximum CP.

Perhaps reward more CP for participating in the last (maybe) 3 multiplayers, to encourage players to remain active.

I’ll finish this part with an example. Let’s say we have a codingamer “Hi_i_am_new” with the following statistics:
He picks 5 favorite games to count towards his total score on the general leaderboard: CSB, Mean Max, CR, Fantastic Bits, Poker Chip Race.
If each of those rewards a maximum of 5000 points (if we stick to that limit), the total would be 25000.
The last 3 multiplayers are LOCM, CR and Kutulu, which would reward another 15000 points at most.

So “Hi_i_am_new” gets 40k CP if he is #1 in 7 multiplayers. That’s far better than having to write 20-30 bots, a very mundane task. Maybe the favorite multiplayers could reward fewer maximum points, but this is just one of the many little details that need to be adjusted.
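Tallying the example: CR appears both among the favorites and among the last 3 multiplayers, so 7 distinct games produce 8 rewards. A tiny sketch, using the 5000-point cap assumed above:

```python
MAX_CP = 5000  # assumed per-game cap from the example
favorites = {"CSB", "Mean Max", "CR", "Fantastic Bits", "Poker Chip Race"}
last_three = {"LOCM", "CR", "Kutulu"}

# CR counts in both categories, so 7 distinct games yield 8 rewards.
print(MAX_CP * (len(favorites) + len(last_three)))  # 40000
print(len(favorites | last_three))                  # 7 distinct games
```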

If you still want to take all multiplayers into account, then anything beyond the 10th bot you have to write should give a much lower maximum CP.

I don’t really like code golf, but I don’t care if it stays either, as it doesn’t give that many extra points.

Optimization puzzles are fine the way they are imo, apart from the max number of puzzles counted towards the final score. If that number keeps growing, new players will end up facing the same daunting task they face in the multiplayer section.

I think the proposed formula is good. I agree that the BASE should be higher for some categories.

I think what is difficult here is the fact that we have two kinds of categories: those with a time limit, i.e. contests, and the multiplayers (past events) with no time limit. Here is what I propose for the ranking:

  • Time limit: taking the top N for each category is a way to reward presence; I believe a number between 3 and 5 is good. But it’s important that the selection window for the top N moves with time (I propose 1 year). This way, even if someone misses one or two contests, it should not have much impact on their ranking. The sliding-window selection is what will make it fair for new players: eventually a new player will have the same selection as an old player and will be able to compete fairly with everyone.

  • Unlimited time: as there is no time limit and this category grows over time, I would say the points should reward about 1/3 of what a contest does. We could apply the same logic: take the top N * 3 multiplayers and divide the reward by 3 (see the sketch after this list).
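A sketch of this scheme with N = 3, purely illustrative (the dates, scores, and the 1-year window are placeholders for the idea above):

```python
from datetime import datetime, timedelta

def total_cp(contests, multis, now, n=3):
    """contests: list of (score, date); multis: list of scores.
    Best n contests inside a sliding 1-year window, plus the best
    3*n multiplayers with their reward divided by 3."""
    window_start = now - timedelta(days=365)
    recent = sorted((s for s, d in contests if d >= window_start), reverse=True)
    multi_part = sum(sorted(multis, reverse=True)[:3 * n]) / 3
    return sum(recent[:n]) + multi_part

now = datetime(2019, 1, 1)
contests = [(2500, datetime(2018, 6, 1)), (1800, datetime(2017, 2, 1))]
print(total_cp(contests, [3000] * 12, now))  # 2500 + 9*3000/3 = 11500.0
```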

With this approach, any new player should be able to achieve a ranking reflecting their skills with only N contests, assuming they post those solutions to the multiplayers and solve 2N more multiplayers.

For established players, multiplayer will not be a defining factor, as everyone will have had time to post their solutions; the only thing that will really matter will be the contests and the contest rankings, thanks to the sliding window.

I believe this should achieve a stable leaderboard while still leaving room for any new player to reach the top. Players who stop playing for a year will go down in the ranking (contest points only, as multiplayer points will not be lost) and will be able to climb back by participating in another contest.

EDIT:
There is no sliding window on the unlimited-time category.
Each category can have subcategories; what matters is the weight of each of them.

Let’s say we want to have 50% of the points on time-limited play and 50% on unlimited time.
Each category can be further split into subcategories.

2 Likes

I think this leaderboard is a clear winner among the ones proposed.
Skilled players get rewarded whether they are new, e.g. MSmits, or veterans, e.g. Recar.
Meanwhile, those who are known for copying other people’s code rank lower.

Just for the sake of having the best possible base formula, I’d like to change this:

(BASE * min(N/500, 1))^((N - C + 1)/N)

To this:

(BASE * min(sqrt(N)/sqrt(500), 1))^((N - C + 1)/N)

This punishes winners of contests with fewer than 500 participants less severely.
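A quick comparison of the two small-contest penalties for the winner of a 125-player contest, assuming BASE = 5000 (my placeholder):

```python
import math

def winner_cp(n: int, base: float = 5000, use_sqrt: bool = False) -> float:
    """Winner's exponent is (N - 1 + 1)/N = 1, so the CP is just
    BASE times the small-contest penalty factor."""
    factor = min(math.sqrt(n / 500) if use_sqrt else n / 500, 1)
    return base * factor

print(winner_cp(125))                 # 1250.0 (linear: 125/500 = 0.25)
print(winner_cp(125, use_sqrt=True))  # 2500.0 (sqrt(125/500) = 0.5)
```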

1 Like

I think that part should be addressed separately. Yes, there are some copy-pasters with a high ranking, but it’s not the job of the leaderboard to filter them (done partially in that pastebin by giving twice the points for contests as for multiplayers); their stolen bots should just be removed to really solve the root cause.

3 Likes

Copy and paste has already been addressed:

It could benefit coders if it’s well used.

As for the best 5 multis: that would only make multi scoring useless for all kinds of players. Copy-pasters would have a much, much easier task, since finding 5 bots is easier than finding ALL of them. On the other hand, it simplifies things a lot for top contest players: you just need to excel in 3 challenges, reuse those bots in the multis, and then you only need two more multi bots to reach the max score (which can be copy-pasted too). Having 3 good challenge bots also means 3 free multi bots.

Do you think all these 4 players should have the same score in the global leaderboard?
https://cgmulti.azke.fr/players?p=_Royale%20Agade%20eulerscheZahl%20Recar
I don’t think so, sorry. The Max Score in leaderboards should reflect the total score of all possible games a new player can do (no matter if it’s “boring” or “takes too long”). Don’t fit the Max Score to your needs, but to all available multiplayer games, for the theoretical supercoder (top in all games, no matter the section).

Multi scoring is just a percentage of the global score (maybe 40%). Having 100% in multi alone won’t get you to the top of the leaderboard.

It’s just a matter of adjusting the percentage of each section (multi, challenges, etc.). You tend to promote challenges (fast, time-limited coding), but multis give the option to test more things, at another pace. CSB multi bots are much, much more evolved than the challenge ones.

Since it’s not strictly forbidden to copy someone else’s code, I’m going to be bold and say: yes, it is the job of the leaderboard to filter them.

> Do you think all these 4 players should have the same score in the global leaderboard?

No, but the suggested leaderboard does not give them the same rank, so I don’t see what the issue is?
But yes, I absolutely do think Recar is roughly on the level of the others. A skilled player should not need to dedicate his life to writing a quick Legend bot for every single multi to be rewarded with a good rank.

In my opinion:

Copy-pasters in multiplayer puzzles are removed from the leaderboards by CodinGame (when they take the time to do a little cleanup :smiley: ). I don’t think the global leaderboard should address that; it’s a separate issue, and since CodinGame is against it, it should be ignored here.

CodinGame’s goal should be to encourage people to stay on CodinGame (not all day of course, but in a long-term fashion). With that idea in mind, I really like Marchete’s proposal to give points for the last 3 contests. But I also think that every multiplayer puzzle (multi, code golf, optims…) must reward points. I know that for a new player, creating an AI for every multiplayer puzzle will be very hard. But it’s CodinGame’s goal to keep people here, so you have to find something that does.

Marchete’s OPTION A seems fine to me. Maybe tweak the bases a little.

For Clash of Code: I’m very biased because I don’t like CoC. But CoC is the only leaderboard where you lose points every day if you don’t do CoC every day… It’s very harsh and very time-consuming. When you are top 10 in a multiplayer puzzle, you’ll stay in the top 10 for a very long time; maybe one day you’ll be out of that top 10, but you’ll have to wait at least a year for that. That’s why I think CoC should not reward points in the global leaderboard. Just keep the CoC leaderboard separate.

In the end the question is: what should the global leaderboard represent? If we only give points for 3 contests and 5 multiplayers, the global leaderboard will just represent your capacity to win a contest and copy/paste your AI into the multiplayer puzzle.

If we want the global leaderboard to represent your overall level on CodinGame, we have to include all available multiplayer puzzles (we keep 3 contests because contests are not available anymore).

3 Likes

My only problem with the current formula is how much the number of participants influences the score. It seems the new formula proposed by Thibauld will address that.

Some people proposed to only keep the best X multiplayers; I don’t agree with that at all. It is a global leaderboard, so it should represent how you rank globally. I feel all multiplayers should still be worth points: since some games currently give so many points while others are meaningless, the ranking mostly represents how you did in those few games and not in all games.

I like heuristics; I find them more fun than a search algo, but that’s not a popular opinion around here. It means games like Codebusters, one of my favorites, aren’t worth much, so I have no reason to improve on that game, and it’s not really appealing for new players to tackle.

The same goes for contests: I gave up in the last week of LOCM, partly because I knew I wouldn’t be in the top 20 and, with that number of players, it would never make it into my top 3.

I agree with keeping the top 3 contests, since a contest is a one-time thing and new players can’t replay it, but I feel the way the top 3 is determined should change to reflect how well you did. I get that more players = harder to be on top, but not on the scale of 5k players.

Anyway, I think this would motivate new and old players to keep improving and give their best in new contests and new games.

3 Likes

I do not agree with this, but I know many people will agree with you.

Because, statistically speaking, it’s not true; we talked about it in the previous thread How is the Coding Rank calculated?

> Player count is a matter of sample size in a statistical sample. Having more players in theory won’t move your relative position much. If I’m in the top 20% among 1000 players, I’ll be in the same top 20% with a population of 20000 players (maybe with some confidence error).
Sample size is the basis of most statistical analysis and quantitative research. It has a big background in research and real use cases in all fields; it’s not something I made up just for CG rankings.
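A quick simulation of the sample-size point (purely illustrative, with uniform random “skill”): a player in the top 20% of 1000 players stays near the top 20% of 20000.

```python
import random

random.seed(42)
me = 0.8  # our player's skill on a 0..1 scale (true top 20%)

# The relative position barely moves as the field grows;
# only the confidence error around it shrinks.
for n in (1_000, 20_000):
    field = [random.random() for _ in range(n)]
    better = sum(s > me for s in field)
    print(f"{n} players -> top {100 * better / n:.1f}%")
```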