Optimization puzzles are weighted too strongly

Thanks for including the optimization puzzles and multi-player games in the ranking; however, I feel the weighting of the optimization puzzles is excessive. Here’s my reasoning:

1.  Reward for effort is way out of proportion
All I had to do to get 600 points in the optimization section was copy my code from the easy and medium puzzles I had already solved. After a bit of minor tweaking, probably the equivalent of no more than one medium puzzle, I got my 600 points rather than the 60 a medium puzzle would have given me. Colleagues of mine managed over 1000 points using a better-suited language.

2.  Code golfing depends strongly on the language
Some languages are far better suited to code golfing than others. Currently the best-ranked Java solution for Thor is 167 characters, while languages like Bash and Perl can do it in under 60. Apart from the Mars Lander fuel test, you have only a fairly small choice of languages if you want a really good score.

3.  Code golfing is fun but leads to bad code
Sure, you can learn neat language features and tricks, but to win at code golfing you also need to make your code unreadable: one-character variable names; no formatting whitespace, comments, or type declarations; no input checks; and code tailored to just the specific tests.

4.  There is a limit
At some point the limit will be reached; for Thor it may be around 50 characters. Over time a very large cluster of people will form around this limit, which will skew the scoring.

I did enjoy doing the optimization puzzles, but I have learned far more from doing the other puzzles.


I agree with RobinHood. I managed to obtain more than 1100 points just by copying and pasting my solutions and golfing hard at the Thor problem.

My suggestion is a list of achievements, just like for the normal puzzles. For example: one achievement if you write the code in fewer than X characters (any language), but since golfing is limited by the language, another achievement if you write the code in fewer than X characters in language Y, just like the roller coaster in Clojure or Bash for the horses problem.

I also couldn’t find how the score is calculated.

I also totally agree with RobinHood. I had already formulated a similar post in my mind and was thinking of posting it soon when I bumped into this post, which nicely summarizes my problem with the optimization puzzles as well.

As for reasoning 2: being on Java, I have no chance of achieving a character count similar to those using Bash, Ruby or C. Just think of the main() signature, a strict formula eating up 35+ characters, “public static void main(String[] a)”, where leaving out any part results in a compiler error. Still, I consider myself competitive with other Java users and their code sizes, so I have no problem there. What I do find strange is that languages are mixed together to compute the final score; per-language scoring would make more sense here.


There are two problems:

  • It is too easy to get shitloads of CP just by code golfing in as many languages as possible. That’s unfair and unnecessarily rewarding, I think.
  • The amount of points you can get is way too high. That wouldn’t be a problem if everyone were participating, which is not the case and never will be. You can get more points by submitting poorly optimized code in 5 languages than by finishing 1st in a contest; that shouldn’t be possible.

Possible solutions:

  • Like williammeira said, adding milestone achievements could be nice.
  • Give a reasonable amount of points to the top XX only (XX being 50 or even fewer), per language, and thus stop mixing every language in the same leaderboard.

More generally, I do like the new ways of getting CP (training multiplayer contests, or these optimization puzzles). But the amount of points one can get through these new ways is too high.

EDIT: Regarding the fact that one can get N times his points just by rewriting his optimization code in N languages, a fair system would be this:

  • For code golf, separate each language into its own leaderboard. That way, optimization will happen among verbose languages too, which is not the case now. If you submit in different languages, you get points only from the language in which you have the best rank. And to be fair, change the amount of points so that it is no longer based on the number of contestants, which differs in every language (take the most populated language? I don’t know).
  • For fuel optimization in Mars Lander, limit it to 1 submission only, in 1 language. It’s ridiculously easy to rewrite the same algorithm in every language.

I fully agree with you, Neumann. I’d like to add that currently some people have more than 5000 points just because they have solved the puzzles in multiple languages. That’s way too much.

I would simply count the submission that brings the most points. And for code golfing, the points should be calculated based on the ranking of submissions in the same language.

Mars Lander should not be treated the same way as the code golfing puzzles. It’s closer to a multiplayer game. I don’t see the point of rewriting my solution in many other languages, except Bash for fun.

Hi guys, I am part of the CodinGame staff and I would like to share with you some ideas we have put together based on your proposals from this post and the other one (modification of the ranking score).

The main idea of what follows is to balance each puzzle mode (solo puzzle, contest, multi, opti…) in terms of rewards.

Solo puzzles

The rewards are more balanced compared to other game types:

  • about 50 CP for an easy puzzle
  • about 100 CP for a medium one
  • about 250 CP for a hard one
  • about 500 CP for a very hard one

Contest (solo and multi)

A contest allows you to win CP based on the current formula, which is: N^((N-C+1)/N), where N is the number of participants and C your rank.
BUT only your 3 best scores are kept. So if you have participated in 5 contests and your points are 1000, 500, 800, 600 and 200, you total 1000+800+600 = 2400 CP for the contest part.
This is fairer for new entrants, as they can catch up with the long-time best players in upcoming contests.
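To make the contest rules above concrete, here is a small sketch (my interpretation of the formulas as stated in this post, not CodinGame's actual scoring code):

```python
# Sketch of the contest scoring described above (assumed interpretation):
# a contest with N participants awards N^((N-C+1)/N) CP for rank C,
# and only your 3 best contest scores count toward the total.

def contest_cp(n_players, rank):
    """CP for finishing at `rank` in a contest of `n_players` people."""
    return n_players ** ((n_players - rank + 1) / n_players)

def contest_total(scores):
    """Total contest CP: only the 3 best scores are kept."""
    return sum(sorted(scores, reverse=True)[:3])

# First place earns exactly N CP, since the exponent is (N-1+1)/N = 1.
print(contest_cp(1000, 1))  # 1000.0

# The example from the post: 1000, 500, 800, 600, 200 -> 1000+800+600
print(contest_total([1000, 500, 800, 600, 200]))  # 2400
```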

Multi Games

Multi games are calculated the same way as contests, with these differences:

  • all multis are taken into account (not only the 3 best scores)
  • a maximum cap of 2500 CP can be earned per multi (if there are 5000 players in a multi, the first place only gains 2500 CP). So if you are first in 4 multi games (whoa!) with more than 2500 players each, you gain 2500*4 = 10k points.
  • if the cap is reached, the formula is still distributed across all the players, not only the first 2500.

Optimization (code size)

Code size puzzle points are calculated in the same way as the multi games but with these differences:

  • there is a leaderboard per language and the CP gain is per language (so you can no longer win CP on the mixed leaderboard, but it can still be consulted to find the best language for a puzzle)
  • the cap for a given language and a given puzzle is 250 CP, with a minimum of 50: for example, if a language has only 15 submissions, the best submission will gain 50 points; 136 points for 136 submissions; 250 points for 450 submissions
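Reading the three examples above, the per-language cap appears to be the submission count clamped between 50 and 250. A quick sketch of that interpretation (mine, not official):

```python
# Sketch of the per-language code-size cap described above (my reading
# of the three examples): the top submission in a language earns
# min(250, max(50, N)) CP, where N is the submission count in that language.

def code_size_cap(n_submissions):
    """Max CP for the top submission in a language with n_submissions entries."""
    return min(250, max(50, n_submissions))

# The three examples from the post:
print(code_size_cap(15))   # 50
print(code_size_cap(136))  # 136
print(code_size_cap(450))  # 250
```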

Optimizations (fuel, like Mars Lander)

These are calculated like multi games, with a cap of 1000 CP, and not per language like the code-size puzzles.

I would love to hear from you, so don’t hesitate to give your feedback :wink:


This is p̶e̶r̶f̶e̶c̶t̶, much better !


To my mind this is very good. However, while I agree with the idea of the “maximum cap”, it can be done in several ways and I’m not sure I understand which one you are suggesting. The first possibility is simply min(2500, CP); the other is to give CP only to the first 2500 players: N’=min(2500, N); N’^((N’-C+1)/N’).

Can you tell us which formula you have in mind? I have a preference for the latter.

@SaiksyApo: Are you sure ?

I was more thinking of something like that :

players < 2500: N^((N-C+1)/N)
players >= 2500: 2500^((2500-(C-1)*2500/N)/2500)

There is a bug in the optimization puzzles leaderboard: I cannot filter by “my language”, it simply does nothing.
I can only select a language directly; that works.

As Mattrero pointed out, the goal is to distribute on all players.

For example, if you are 5000/10000, that is in fact the same thing as being 1250/2500.
The goal is to allow everybody to have some points, because when you are 2600/15000, you are good and deserve to be rewarded.

To reply to other questions (in the other thread…), I don’t think we can keep the 3 best scores from solo contests and 3 from multi ones, because it would be too hard for new players to climb the ladder. That’s why we are thinking of merging them (multi and solo contests are not strictly alternated: we can have 3 solo contests then 1 multi, not necessarily 1 solo then 1 multi and so on).

And finally, yes, we are also thinking of removing the time criterion for optimization puzzles (but as @SaiksyApo says, this is an open debate), for the same reason as all the rest: we don’t want it to be too difficult for new entrants to climb the ladder. So, to be fair, the exact same code size gives you the same reward (so more than one person can have the 250 CP).

@bvs23bkv33 strange, I haven’t encountered that problem. We will look into it.

And what about: min(N, 2500)^((N-C+1)/N) ?

For N=10000: https://framapic.org/mhnBPb9ixF4u/OdcTxoke

For N=1000: (identical if we divide all the C values by 10)
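For anyone following along, here is a sketch of this capped formula. One nice property (my own check, not stated by anyone above): for N >= 2500 it is algebraically the same as the piecewise version posted earlier, since (2500-(C-1)*2500/N)/2500 simplifies to (N-C+1)/N.

```python
# Sketch of the capped formula proposed above:
# CP(C) = min(N, 2500)^((N - C + 1) / N) for rank C among N players.

CAP = 2500

def capped_cp(n_players, rank):
    """CP for rank `rank` among `n_players`, with the 2500 cap applied to the base."""
    base = min(n_players, CAP)
    return base ** ((n_players - rank + 1) / n_players)

# For N <= 2500 this is the original formula; for larger N first place
# earns exactly 2500 and the curve is still spread over all N players.
print(capped_cp(1000, 1))   # 1000.0 (uncapped)
print(capped_cp(10000, 1))  # 2500.0 (capped)

# Agreement with the piecewise version for N >= 2500:
n, c = 10000, 5000
alt = CAP ** ((CAP - (c - 1) * CAP / n) / CAP)
assert abs(capped_cp(n, c) - alt) < 1e-9
```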


You’re right, I made the equation too complicated :wink:

+1 for Maxime for finding the best and simplest formula, reflecting the idea perfectly :wink:

Yep, my bad, I misunderstood the formula! :blush:

Just my opinion, but why not separate the rankings between solo challenges and opti challenges?

I think (and it’s my case) people feel a bit disappointed when they code some non-trivial algorithm for a hard challenge and only get 40 CP (because the last test case needed for 100% is a bi**h :)), while you can get almost 300-500 CP by just using ternary expressions in several languages…


Is it already in effect? Because right now, the current score spread seems to be:

  • about 40 CP for an easy puzzle
  • about 60 CP for a medium one
  • about 80 CP for a hard one
  • about 100 CP for a very hard one

and it seems to me that it’s still really far from being enough to balance with the other categories.

Nope :wink:

I just wanted to ask for your feedback before implementing it! The modification will be developed soon (though not in the upcoming release).

I will post here when it’s done.

Ok, nice. I think the best way to tell if categories are balanced is to see how points are spread among categories for a ‘perfect’ user, relative to the difficulty of each. For me it makes a lot of sense to increase solo puzzle points, because difficult puzzles are where coders will spend most of their time.