Contest reports

I was interested in learning a new language, so I was perusing the blog posts of previous contest results (I wanted to see which languages were the better performers).

The thing that struck me was that it seems as if the “score by language” is not weighted.

For instance, if one person were to use a very unpopular language and score well, then that language would show up as a top scorer compared to C++, which usually has multiple top-10 placers.

Whereas if the languages were weighted by the proportion of the total programmers using them, then we would see which languages really shine.

Please ask me to clarify if this doesn’t make much sense.


Hey rootking,

You’re right: one person with a great score in a very unpopular language may propel that language above C++, because we are only computing an average.

However, your proposal doesn’t help determine the best-performing languages either. For instance, let’s say the top 10 of the contest all used C++, and all the other participants coded in Java.
With your weighting, Java would end up first.
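The flip described above can be checked with a quick sketch. The numbers and the exact weighting (each language’s average score multiplied by its share of participants, which is one reading of the proposal) are hypothetical:

```python
# Hypothetical contest: 10 C++ coders fill the top 10; 90 Java coders follow.
cpp_scores = [100 - i for i in range(10)]        # scores 100 down to 91
java_scores = [80 - i * 0.5 for i in range(90)]  # scores 80 down to 35.5

def avg(xs):
    return sum(xs) / len(xs)

total = len(cpp_scores) + len(java_scores)

# Plain average per language: C++ clearly wins.
print("average: C++ =", avg(cpp_scores), " Java =", avg(java_scores))

# Weighting each average by the language's share of participants
# flips the ranking, since Java has 90% of the coders.
cpp_weighted = avg(cpp_scores) * len(cpp_scores) / total
java_weighted = avg(java_scores) * len(java_scores) / total
print("weighted: C++ =", cpp_weighted, " Java =", java_weighted)
```

So the popularity weight rewards headcount more than performance, which is the objection being made here.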

IMHO, and only considering play on CodinGame, C++ has great execution-time performance, even though we do not enable many optimization flags when compiling it.
For scripting languages, I’d prefer Python. I feel it is better suited to AI coding than JavaScript or Perl, and I don’t really know much about the other scripting languages.

Yeah, that was silly of me…

I just read up on my statistics because I remembered a similar problem to this:

How about another graph with the 95% confidence intervals for each language?

Or the sample mean multiplied by 1/(distance between the upper and lower 95% confidence bounds), i.e. the mean divided by the width of the interval, so that languages with only a handful of samples get penalized.
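The idea above can be sketched quickly. This is a minimal example with made-up score samples, using a normal-approximation 95% interval (a t-value would be more careful for such small samples):

```python
import math
from statistics import mean, stdev

# Hypothetical per-language score samples (not real contest data).
scores_by_language = {
    "C++":    [95, 92, 90, 88, 85, 60, 55, 50],
    "Java":   [70, 68, 65, 64, 62, 61, 60],
    "Python": [80, 40],  # tiny sample -> very wide interval
}

Z_95 = 1.96  # normal approximation for a 95% confidence interval

def confidence_interval(samples):
    """Return (mean, lower, upper) for an approximate 95% CI."""
    m = mean(samples)
    half = Z_95 * stdev(samples) / math.sqrt(len(samples))
    return m, m - half, m + half

for lang, samples in scores_by_language.items():
    m, lo, hi = confidence_interval(samples)
    # The "mean divided by interval width" score proposed above:
    adjusted = m / (hi - lo)
    print(f"{lang:>6}: mean={m:.1f}  95% CI=({lo:.1f}, {hi:.1f})  adjusted={adjusted:.2f}")
```

With numbers like these, a one-off great score in a rarely-used language produces a huge interval, so its adjusted score drops even if its raw mean looks good.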