Ranking calculations update


The idea of a sliding window is very important to even out the conditions under which everyone competes. After one year, everyone has had the chance to go through exactly the same circumstances: the same contests, the same player base.

I would still exclude sprints, or at least put them into a separate category that is not part of the contests.

Without the sliding window, and by including all small contests (after you scale the formula to account for 200-500 participant contests), you get a chaotic leaderboard that isn't as worthwhile to compete in. For example, over time the number of players with three top-10 contest finishes accumulates, which takes away the motivation to go all in on it. A seasonal ranking, however, brings everyone closer to a fresh start, a blank slate, and is therefore more encouraging.

Participants who like the 10-day contests don't necessarily like the few-hour contests. Those might also not fit into everyone's free time, and counting them the same way as the 10-day contests only generates chaos, especially if the last 3 contests are to be considered for the contest rank calculation.

Alternatively, there could be a seasonal leaderboard that only displays what happened during the selected period. I would very much prefer this as the default leaderboard over "hey, someone got #1 in 3 contests that took place a decade ago" while today's competition and challenges are completely different. This also favors those who remain active.


Guys, don't take me out of context haha.

I'm just saying I understand the current formula as something like: "you are better than more players, so you deserve more points". The number of players influencing the score is basically my only problem with the formula, so I clearly don't agree with that part.

What I meant was that on a really small scale, like 10 players, you could get lucky and only face lazy coders who wrote a dumb bot and gave up. Add just a few more players and luck quickly becomes irrelevant: your relative rank won't change much between, say, 100 players and 5k players.

The proposed 500 seems fair: small enough to include all games, and big enough that you don't get millions of points just for being among the first submitters of a new multiplayer on its first day.


An annotation to Marchete’s Option A:

We will (soon?) have the option to create our own multiplayer arenas, see Vindinium. So it's quite likely that one day there will be arenas with fewer than 500 players. That will cause the leaderboard to drift towards contests rather than multiplayers, as it does now.
20/ALL is not an appropriate correction if we want to account for this.


It's true, aside from the gratuitous ad for Vindinium :laughing:. It would be better to work with the sum of first-place scores:

MaxScore(multi) / sum( f(C=1, game = multi))

In SQL you can just do SELECT multi, MAX(score) FROM nnn WHERE … GROUP BY multi. It's "simple" to implement, or at least I know how to do all the calculations in SQL (some subqueries, but it shouldn't be very slow).
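The correction above can be sketched in Python. This is a hedged illustration, not the actual backend: the game names, the per-game first-place scores, and the 10,000-point category budget are all made-up values chosen just to show the mechanism.

```python
# Sketch of the suggested drift correction: scale every multi's points so
# that first-place scores across all multis sum to a fixed category
# budget, no matter how many arenas exist.
# Game names, scores, and the 10_000 budget are illustrative assumptions.

def drift_correction(first_place, category_budget=10_000.0):
    """Scaling factor = budget / sum of all current 1st-place scores."""
    return category_budget / sum(first_place.values())

first_place = {"CSB": 5000.0, "UTG": 5000.0, "SmallArena": 500.0}

k = drift_correction(first_place)
scaled = {game: pts * k for game, pts in first_place.items()}

# Adding more community arenas lowers k, so the category total stays put.
print(round(sum(scaled.values())))  # 10000
```

The point of the division is exactly the one made above: as new (possibly tiny) arenas appear, each game's share shrinks proportionally instead of inflating the multiplayer category.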


At the moment my opinion on (possible) future community multiplayer puzzles is a weird one.

Don't get me wrong, I'd love the possibility to create a multiplayer puzzle without the Community Contest step. But I'm unsure whether these puzzles should reward coding points. I suppose CodinGame will decide on that point.


Hey, just started reading this post.

  • I do like the fact that the contest formula is changing, because it was designed assuming the number of contestants would always grow over time.

Quick question: what's the benefit of having 'min(N, 500)' and not just 500? With every game giving the same amount of points, people who want points will play either the ones they prefer and/or the ones with the least competition. It won't change things in the long term, but it will add an incentive to play the "low-played" games :thinking:


The benefit is that at the start of a new multiplayer puzzle we often have far fewer than 500 players. I suppose the goal is to avoid a new multiplayer puzzle rewarding 3000 points right away.

And maybe it's there to prepare the ground for future community multiplayer puzzles. But you are better informed than us.


It's to avoid edge cases, like very old challenges, which are "final": you can't add more players to them. If we're going to have a single formula for all games, we must ensure the points given are fair in every situation. With only a handful of players you don't have a representative sample of the real level of players. Maybe the game is only played by hardcore CGers, so you end up in a lower percentile than you deserve.



We had an internal discussion on this topic. We didn't have time to discuss everything though.

  1. First, we like that the new formula gives the same amount of points to games of the same type: (BASE * min(N/500,1))^((N-C+1)/N)
  2. We also like the idea of a seasonal ranking for contests: from January to December, only the top 3 contests are taken into account. We even thought it could become the main leaderboard of CodinGame (with the possibility to check the leaderboards of previous years).
  3. Then, what to do with multiplayers, optims, code golf… we're not sure yet.
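For concreteness, here is the proposed formula as a small Python sketch. The thread never pins down BASE, so BASE = 5000 below is purely an assumption for illustration; only the min(N/500, 1) cap comes from the formula itself.

```python
def points(rank, n, base=5000.0):
    """Proposed formula: (BASE * min(N/500, 1)) ** ((N - C + 1) / N).

    `rank` is C (1 = first place), `n` is the player count N.
    base=5000 is an assumption; the thread leaves BASE unspecified.
    """
    effective_base = base * min(n / 500.0, 1.0)
    return effective_base ** ((n - rank + 1) / n)

# A brand-new multi with only 50 players cannot pay out the full base:
print(round(points(1, 50)))      # 500
# Any game with at least 500 players awards the same points to #1:
print(round(points(1, 500)))     # 5000
print(round(points(1, 20000)))   # 5000
```

So every established game is worth the same to the winner, which is the "same amount of points to games of the same type" property from point 1, while the cap keeps brand-new arenas from handing out the full reward on day one.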


As I see it, we could have one of these solutions (independently of a special contest leaderboard per year):

  1. Go for Marchete’s option of a global leaderboard. Each category counting for a fixed amount of points and with drift corrections.

  2. Split the leaderboard per category: one for multis, one for optims… just like the CoC leaderboard. That's roughly what we have today when we sort the current leaderboard by "coding points category". With no default leaderboard, or perhaps just the contest one.

I tend to prefer the 2nd option because the first option doesn’t feel fair to newcomers (in the sense that it will take them forever to reach the top). At the same time, the 2nd option could make long-time CodinGamers feel bad.



In the 2nd option, we don't have a global leaderboard anymore? So what will we have on our profile page? Could we choose which rank is displayed (among the different leaderboards)?

I don't see how the 2nd option is fairer for newcomers. Reaching the top in multis is still very hard if you have to do all the puzzles. Reaching the top in optims is slightly easier, but not easy at all.


Personally I don't like the idea of more leaderboards. A global ranking is global; it should include everything that's worth points. We already have levels vs. points, and I don't feel we need another thing on top of that.


What about the following mix of all solutions above:

  • compute a multiplayer game rank by taking into account the 5 best scores on multi games, with the formula (BASE * min(N/500,1))^((N-C+1)/N)
  • compute an optimization rank by taking into account the 5 best scores on opti games (same formula)
  • same for code golf
  • the Clash of Code ranking remains as is
  • a contest ranking computed by taking into account the 3 best scores (season, reset each year, etc. can be discussed)
  • a global rank taking into account each of these rankings with a ratio (for example, 30% of the score for contests, 30% for multiplayer games, 20% for CoC, 10% for optim, 10% for code golf)
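A minimal sketch of how the ratio mix could combine the per-category rankings. The weights follow the example percentages in the post; normalizing each category score by the current category maximum is my own assumption, since the post doesn't say how the different point scales would be made comparable.

```python
# Weights from the example ratios in the post.
WEIGHTS = {"contest": 0.30, "multi": 0.30, "coc": 0.20,
           "optim": 0.10, "golf": 0.10}

def global_score(category_scores, category_max):
    """Combine per-category scores into one global score in [0, 1].

    Each category's contribution is its weight times the player's score
    relative to the best score in that category (an assumed normalization).
    """
    total = 0.0
    for cat, w in WEIGHTS.items():
        if category_max.get(cat):
            total += w * category_scores.get(cat, 0.0) / category_max[cat]
    return total

# Hypothetical player and per-category best scores:
player = {"contest": 4000, "multi": 12000, "coc": 800}
best = {"contest": 6000, "multi": 20000, "coc": 2000,
        "optim": 900, "golf": 400}

print(round(global_score(player, best), 3))  # 0.46
```

A player who never touches a category simply contributes 0 for its weight, so a CoC-only player tops out at 0.20 of the global score under these example ratios.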

When a codingamer starts to play one type of game, the specific ranking is displayed on their home page. So if they play CoC and multiplayer games, they can track both the CoC and multi ranks on the home page.

The global ranking and all these rankings are available in the leaderboard section.


If you want to add a global leaderboard with Bof5 and some ratio per section, that's fine. Many players will be happy with that approach.

I'd rather have a separate ranking for each section
(like "sort by" in the global leaderboard, but more visible) instead of a best-of-5 for multis.
I see Bof5 as flawed from the start: it will reflect who got 3 good challenge bots and reused them in the multis (+ points from -3vel CSB and another one). Almost like doubling challenge points. Top challengers will be packed within ±30 points, with lots of rank oscillations.


You could make the general leaderboard display the seasonal results (current year) by default. It would then take into account the multis of the ongoing year and ignore the rest.
Then an option to switch to the all-time results could be made available.

You could add even more options for recent results, breaking it down into: previous month, previous 3 months, previous 6 months, 1 year, all-time.

Splitting everything by category doesn't change much, as you already have that option. Folks who only care about clash can bookmark the clash leaderboard. You could probably add an option for which category each user wants to default to.


I think the goal of the leaderboard is to show who the main ("best"?) coders on the platform are, and overall it does that pretty well (in my opinion).
There is just one thing that hasn't been discussed here before and that I want to mention: I think the base formula should reward the best rankings more. Currently, on a 1000-player multi, going from #10 to #1 awards 60 points, which is less than going from #120 to #100 (65 points) (with the formula min(5000,N)^((N-C+1)/N)). On CSB, #10 to #1 only gives 9 points. I think there is a discrepancy between the difficulty and the reward.

One way to emphasize the best ranks would be to stop using a linear exponent. For example, BASE^(((N-C+1)/N)^2) (with BASE=1000) would award 120 points from #10 to #1, equivalent to #150 to #100 (123 points), which seems fairer to me.
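This comparison is easy to recompute. A quick check, assuming the current formula is BASE^((N-C+1)/N) and the proposal is BASE^(((N-C+1)/N)^2), both with BASE=1000 and N=1000 as in the example:

```python
# Compare the current linear exponent with the proposed squared exponent
# for a hypothetical 1000-player multi with BASE=1000.

N, BASE = 1000, 1000.0

def linear(c):
    """Current-style score: BASE ** ((N - C + 1) / N)."""
    return BASE ** ((N - c + 1) / N)

def squared(c):
    """Proposed score with a squared exponent."""
    return BASE ** (((N - c + 1) / N) ** 2)

print(round(linear(1) - linear(10)))       # 60: today, #10 -> #1
print(round(linear(100) - linear(120)))    # 65: today, #120 -> #100
print(round(squared(1) - squared(10)))     # 116: proposal, #10 -> #1
print(round(squared(100) - squared(150)))  # 124: proposal, #150 -> #100
```

The exact gains come out slightly different from the rounded numbers quoted above, but the shape is the same: under the squared exponent, each rank near the top is worth roughly twice as much as it is today.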

In addition to the current debate, an almighty AI from a contest would award fewer points in the multi if it is not improved afterwards (because losing ranks would have more impact).
The main issue is that it would flatten the middle ranks and ignore the low ranks, so a factor of 2 is probably too big, but I would like to see some thoughts in this direction.


As a relatively new player, you might think I'd want the score to be determined by a few multis and only the last few contests, but that's not really the case. There is one thing I like about the current system that I want to keep: many multiplayers to play and earn coding points on. In my opinion, the leaderboard isn't just about showing who is the most skillful player or who invested the most time. It is also meant to give a sense of progression while you play. If only the top 5 multis count, your progression will probably end fairly soon.

I am in favor of many multis counting for the leaderboard, but normalized within a reasonable range. The best 20 is fine, or even the best 15, but 5 is way too few. You would miss out on many great bot games because you'd mostly be motivated to improve your 5 best multis. It would turn into a micro-optimization fest, which is a lot less fun.

About the contests and multis: they do need a lower minimum for the best scores. 500 is fine. I'm glad most people agree on that.

Basically I prefer Marchete's option B with the best challenges in the last 12 months. I would really prefer that we keep a global leaderboard and not just a leaderboard per category.


Maybe it's OK to score just enough multis to be challenging, give a sense of progression, and be doable in a reasonable span.
Maybe 15, or even 10. MSmits is a "rookie" who excels both in challenges and multis (2nd in CSB in just 1 week…), so I think he has the best point of view of a challenging newcomer.

You could run a survey (5/10/15/20), but that could be problematic since some players don't even play multis. 5 is too low; 10 is reasonable.

With option B and a best-of-10/15 you don't have drift, so you just need to assign a percentage to each category.
The new formula fixes the player-count score bias, it's stable for the future, it won't drift, it's challenging yet doable for newcomers, and community multis and low-played games won't affect the rankings.
But for God's sake, don't give 20% to Clash of Code; 10% or 5% is better.

P.S. If you can, at least make some badges/achievements for completing/participating in more multis, like at least reaching Bronze. This would encourage people to at least try each multiplayer; there are nice multis that are underplayed.


What I understand here is that the XP given for passing leagues doesn't matter much for a feeling of progression in multis?


I totally forgot about XP. But XP comes from many sources; it doesn't guide you to anything specific.

It may be enough, yes.