Ranking calculations update

Copy and paste has already been addressed:


It could benefit coders if it’s well used.

And about the best 5 multis: that will only make multi scoring useless for all kinds of players. Copy-pasters will have a much easier task, since finding 5 bots to copy is easier than finding ALL of them. On the other hand, it simplifies a lot for top challenge players: you just need to excel in 3 challenges, reuse those bots in the multis, and you only need two more multi bots to reach the max score (which they can C&P if they want). Having 3 good challenge bots also means having 3 free multi bots.

Do you think all these 4 players should have the same score in the global leaderboard?
https://cgmulti.azke.fr/players?p=_Royale%20Agade%20eulerscheZahl%20Recar
I don’t think so, sorry. Max Score in leaderboards should reflect the total score of all possible games a new player can do (no matter if it’s “boring” or “takes too long”). Don’t fit Max Score to your own needs, but to all available multiplayer games, for the theoretical supercoder (top in all games, no matter the section).

Multi scoring is just a percentage of the global score (maybe 40%). Having 100% in multi alone won’t get you to the top of the leaderboard.

It’s just a matter of adjusting the percentages of each section (multi, challenges, etc.). You tend to promote challenges (fast, time-limited coding), but multis give the option to test more things, at another pace. CSB multi bots are much more evolved than the challenge ones.

Since it’s not strictly forbidden to copy someone else’s code, I’m going to be bold and say: yes, it is the job of the leaderboard to filter them.

Do you think all these 4 players should have the same score in the global leaderboard?

No, but the suggested leaderboard does not give them the same rank, so I don’t see what the issue is?
But yes, I absolutely do think Recar is roughly on the level of the others. A skilled player should not need to dedicate their life to writing a quick legend bot for every single multi in order to be rewarded with a good rank.

In my opinion:

Copy/pasters in multiplayer puzzles are removed from the leaderboards by CodinGame (when they take the time to do a little cleanup :smiley: ). I don’t think the global leaderboard should address that. This is a separate issue, and since CodinGame is against it, it should be ignored.

CodinGame’s goal should be to encourage people to stay on CodinGame (not all day of course, but in a long-term fashion). With that idea in mind, I really like Marchete’s proposal to give points for the last 3 contests. But I also think that every multiplayer puzzle (multi, code golf, optims…) should reward points. I know that for a new player, creating an AI for every multiplayer puzzle will be very hard. But it’s CodinGame’s goal to keep people here, so you have to find something that does.

Marchete’s OPTION A seems fine to me. Maybe tweak the bases a little.

For Clash of Code: I’m very biased because I don’t like CoC. But CoC is the only leaderboard where you lose points every day if you don’t play CoC every day… It’s very harsh and very time-consuming. When you are top 10 in a multiplayer puzzle, you’ll stay in the top 10 for a very long time. Maybe one day you’ll drop out of that top 10, but it will take at least a year. That’s why I think CoC should not reward points in the global leaderboard. Just keep the CoC leaderboard separate.

In the end the question is: what should the global leaderboard represent? If we only give points for 3 contests and 5 multiplayers, the global leaderboard will just represent your capacity to win a contest and to copy/paste your AI into the multiplayer puzzle.

If we want the global leaderboard to represent your overall level on CodinGame, you have to include all available multiplayer puzzles (we keep 3 contests because past contests are not available anymore).


My only problem with the current formula is how much the number of participants influences the score. It seems the new formula proposed by Thibauld will address that.

Some people proposed to only keep the best X multiplayers; I don’t agree with that at all. It is a global leaderboard, so it should represent how you rank globally. I feel all multiplayers should still be worth points; right now, since some games give so many points while others are meaningless, the leaderboard mostly represents how you did in those few games and not in all of them.

I like heuristics; I find them more fun than a search algo, but that’s not a popular opinion around here. That means games like CodeBuster, one of my favorites, aren’t worth much, so I have no real reason to improve at that game, and it isn’t very appealing for new players to tackle.

Same goes for contests. I gave up in the last week of Locm, partly because I knew I wouldn’t be in the top 20, and with that number of players it would never make my top 3.

I agree with the top 3 in contests, since they’re a one-time thing and new players can’t replay them, but I feel the way the top 3 is determined should change to reflect how well you did. I get that more players = harder to be on top, but not at the scale of 5k players.

Anyway, I think this would motivate new and old players to keep improving and give their best in new contests and new games.


I do not agree with this. But I know many people will agree with you.

Because statistically speaking it’s not true, and we talked about it in the previous thread, “How is the Coding Rank calculated?”

Player count is a matter of sample size in a statistical sample. Having more players, in theory, won’t move your relative position much. If I’m in the top 20% among 1,000 players, I’ll be in roughly the same top 20% with a population of 20,000 players (maybe with some confidence error).
Sample size is the basis of most statistical analysis and quantitative research. This has a big background in research and real use cases in all fields. It’s not something I made up just for CG rankings.
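The sample-size point can be illustrated with a quick toy simulation (a normal skill distribution, not CG’s actual data): a player of fixed skill keeps roughly the same percentile whether ranked against 1,000 or 20,000 random opponents.

```python
import random

random.seed(42)

def percentile(my_skill, n_opponents):
    """Share of random opponents beaten by a player of fixed skill."""
    opponents = [random.gauss(0, 1) for _ in range(n_opponents)]
    beaten = sum(1 for s in opponents if s < my_skill)
    return 100 * beaten / n_opponents

my_skill = 0.84  # roughly the top 20% of a standard normal population
for n in (1000, 20000):
    print(n, round(percentile(my_skill, n), 1))
```

Both runs land near the 80th percentile; growing the population mostly shrinks the confidence interval rather than moving the relative position.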

The idea of a sliding window is very important to even out the conditions under which everyone competes. Basically after 1 year everyone has the chance to go through exactly the same circumstances, same contests, same playerbase.

I would still exclude sprints, or at least put them into a different category, separate from the contests.

Without the sliding window, and by including all small contests (after you scale the formula to account for 200–500 participant contests), you get a chaotic leaderboard that won’t be as worth competing in. For example, over time the number of players with 3 top-10 contest finishes accumulates, which takes away the motivation to go all in on it. A seasonal ranking, however, brings everyone closer to a fresh start, a blank slate, and is therefore more encouraging.

Participants who like the 10-day contests don’t necessarily like the few-hour ones. Those might also not fit into everyone’s free time, and counting them the same way as the 10-day contests only generates chaos, especially if the last 3 contests are to be considered for the contest rank calculation.

Alternatively there could be a seasonal leaderboard: only what happened during the last selected period of time is displayed. I would very much prefer this as the default leaderboard over “hey, someone got #1 in 3 contests that took place a decade ago” while today’s environment is completely different, with completely different competition and challenges. This also favors those who remain active.

Guys, don’t take me out of context haha.

I’m just saying I understand the current formula being something like: “You are better than more players, so you deserve more points”. But the number of players influencing the score is basically my only problem with the formula, so I clearly don’t agree with that.

What I meant was: on a really small scale, like 10 players, you could be lucky and face only lazy coders who wrote a dumb bot and gave up. With just a few more players, luck quickly becomes irrelevant, and your relative rank won’t change much between, say, 100 players and 5k players.

The proposed 500 seems fair: small enough to include all games and big enough that you don’t get millions of points for being among the first submitters of a new multiplayer on day one.

An annotation to Marchete’s Option A:

We will (soon?) have the option to create our own multiplayer arenas, see Vindinium. So it’s quite likely that one day there will be some arenas with fewer than 500 players. That will cause the leaderboard to drift towards contests rather than multiplayers, as it does now.
20/ALL is not an appropriate correction if we want to account for this.


It’s true, aside from the gratuitous ad for Vindinium :laughing:. It would be better to work with the sum of 1st-place scores:

MaxScore(multi) / sum( f(C=1, game = multi))

In SQL you can also just run SELECT multi, MAX(score) FROM nnn WHERE … GROUP BY multi. It’s “simple” to implement, or at least I know how to do all the calculations in SQL (some subqueries, but it shouldn’t be very slow).
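As a sketch of that query, here is the max-score-per-multi aggregation run against an in-memory SQLite database; the table and column names (`scores`, `multi`, `player`, `score`) are made up for illustration, since the real schema is unknown.

```python
import sqlite3

# Hypothetical schema: one row per (game, player) with that player's score.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (multi TEXT, player TEXT, score REAL)")
con.executemany(
    "INSERT INTO scores VALUES (?, ?, ?)",
    [("CSB", "alice", 3200.0), ("CSB", "bob", 2900.0),
     ("UTG", "alice", 1500.0), ("UTG", "carol", 1800.0)],
)

# Best score per multiplayer game, i.e. the normalisation denominator.
for multi, best in con.execute(
    "SELECT multi, MAX(score) FROM scores GROUP BY multi ORDER BY multi"
):
    print(multi, best)
# → CSB 3200.0
# → UTG 1800.0
```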

At the moment my opinion on (possible) future community multiplayer puzzles is mixed.

Don’t get me wrong, I’d love the possibility to create a multiplayer puzzle without the Community Contest step. But I’m unsure whether these puzzles should reward coding points. I suppose CodinGame will decide on that point.


Hey, I just started reading this post.

  • I do like the fact that the contest formula is changing, because it was designed assuming the number of contestants would always grow over time.

Quick question: what’s the benefit of having ‘min(N, 500)’ rather than just 500? With every game giving the same amount of points, people who want points will play either the ones they prefer and/or the ones with the least competition. It won’t change things in the long term, but it would add an incentive for the “low-played” games :thinking:

The benefit is that at the start of a new multiplayer puzzle we often have far fewer than 500 players. I suppose the goal is to avoid a new multiplayer puzzle rewarding 3000 points right away.

And maybe it’s there to prepare the field for future community multiplayer puzzles. But you are better informed than us.

It’s there to avoid edge cases, like very old challenges that are “final”, where no more players can be added. If we are going to have a single formula for all games, we must ensure the points given are fair in any situation. With only a handful of players you don’t have a representative sample of the real level of players: maybe a game is only played by hardcore CGers, so you end up in a lower percentile than you deserve.

Hello,

we had an internal discussion on this topic. We didn’t have time to discuss everything though.

  1. First, we like the new formula giving the same amount of points to games of the same type: (BASE * min(N/500,1))^((N-C+1)/N)
  2. We also like the idea of a seasonal ranking for contests: from January to December, only the top 3 contests taken into account. We even thought it could become the main leaderboard of CodinGame (with the possibility to check the leaderboards of previous years).
  3. Then, what to do with multiplayers, optims, code golf… we’re not sure yet.
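For reference, formula 1 transcribes directly into code. This is a minimal sketch of the proposed expression; BASE = 5000 is an assumed value for illustration, not a number fixed in the thread.

```python
# score = (BASE * min(N/500, 1)) ** ((N - C + 1) / N)
# N = number of players in the game, C = your rank (1 = first place).
# BASE = 5000 is an assumption for this example.

def coding_points(base, n_players, rank):
    # Cap the pot at `base`, scaled down for arenas under 500 players.
    capped = base * min(n_players / 500, 1)
    # The exponent runs from 1 (first place) down towards 1/N (last place).
    return capped ** ((n_players - rank + 1) / n_players)

# First place in a full 500+ player arena gets the whole base...
print(coding_points(5000, 2000, 1))    # → 5000.0
# ...while a 100-player arena tops out at 5000 * 100/500 = 1000.
print(coding_points(5000, 100, 1))     # → 1000.0
# A mid-pack rank in the big arena earns far less:
print(round(coding_points(5000, 2000, 1000), 1))
```

The `min(N/500, 1)` cap is what keeps every mature game worth the same maximum, while small or brand-new arenas pay out proportionally less.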

As I see it, we could have one of these solutions (independently of a special contest leaderboard per year):

  1. Go for Marchete’s option of a global leaderboard. Each category counting for a fixed amount of points and with drift corrections.

  2. Split the leaderboard per category: one for multis, one for optims… just like the CoC leaderboard. That’s roughly what we have today when we sort the current leaderboard by “coding points category”. With no default leaderboard, or perhaps just the contest one.

I tend to prefer the 2nd option because the 1st doesn’t feel fair to newcomers (in the sense that it would take them forever to reach the top). At the same time, the 2nd option could make long-time CodinGamers feel bad.

Thoughts?


With the 2nd option, we wouldn’t have a global leaderboard anymore? So what would we have on our profile page? Could we choose which rank is displayed (among the different leaderboards)?

I don’t see how the 2nd option is fairer for newcomers. Reaching the top in multis is still very hard if you have to do all the puzzles. Reaching the top in optims is slightly easier, but not easy at all.

Personally I don’t like the idea of more leaderboards. A global ranking is global: it should include everything that’s worth points. We already have levels vs points; I don’t feel we need another thing on top of that.

What about the following mix of all solutions above:

  • compute a multiplayer game rank by taking into account the 5 best scores on multi games with the formula (BASE * min(N/500,1))^((N-C+1)/N)
  • compute an optimization rank by taking into account the 5 best scores on opti games (same formula)
  • same for code golf
  • the Clash of Code ranking remains as it is
  • a contest ranking is computed taking into account the 3 best scores (season, reset each year, etc. can be discussed)
  • a global rank taking into account each of these rankings with a ratio (for example 30% of the score for contests, 30% for multiplayer games, 20% CoC, 10% opti, 10% code golf)

When a CodinGamer starts to play one type of game, the specific ranking is displayed on their home page. So if they play CoC and multiplayer games, they can track both their CoC and multi ranks on the home page.

The global ranking and all these rankings are available in the leaderboard section.
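The weighted global rank from this proposal could be computed along these lines; the 30/30/20/10/10 split and the per-category scores below are just the example numbers from the post, not an official formula.

```python
# Example ratios from the post: 30% contests, 30% multi, 20% CoC,
# 10% optimization, 10% code golf. These weights are an illustration.
WEIGHTS = {
    "contest": 0.30,
    "multi": 0.30,
    "coc": 0.20,
    "optim": 0.10,
    "golf": 0.10,
}

def global_score(category_scores):
    """Weighted sum of per-category scores; untouched categories count as 0."""
    return sum(WEIGHTS[cat] * category_scores.get(cat, 0.0)
               for cat in WEIGHTS)

# A player strong in contests and multis who skips CoC and code golf:
print(round(global_score({"contest": 8000, "multi": 9000, "optim": 4000}), 2))
# → 5500.0
```

One design consequence worth noting: with fixed ratios, a player who ignores a whole category simply forfeits that slice of the global score, which matches the thread’s point that 100% in multi alone can’t reach the top.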


If you want to add a global leaderboard with Bof5 and some ratio per section, that’s fine. Many players will be happy with that approach.

I’d rather have a separate ranking for each section
(like “sort by” in the global leaderboard, but more visible) instead of Best of 5 multis.
I see Bof5 as flawed from the start: it will mostly reflect who got 3 good challenge bots and reused them in multis (+ points from a -3vel CSB bot and one more). Almost like doubling challenge points. Top challengers will be packed within about ±30 points of each other, with lots of rank oscillations.