I think it would be nice to allow multiple submissions during a contest, especially if there is more than one problem.
A participant should be able to submit a solution and continue working on the problem. This way, a participant who solves 80% of a challenge in 1 hour and keeps working on it for the next 2 hours without managing to improve his solution would be ranked higher than a contestant who also solved only 80% of the challenge but took the entire time to get there. In the current system the two participants would be scored equally despite one having produced the solution much faster.
The final score should be the score calculated from the last submission.
I’d be curious to hear other people’s thoughts on this too.
You are not entirely right: the ranking is made first by percentage, then by time.
Two people with 80% will be ranked by how fast they submitted that 80%.
But I still agree with you that it would be nice if they let us submit more than once, because I ended up regretting giving up too soon and couldn’t do anything about it.
Well, yes, you are ranked based on percentage (score), and then how quickly you submit.
But here is where I think it’s not fair. For the sake of simplicity, let’s say there are only two tests, each worth 50%.
Say one participant solves Test 1 in 15 minutes and then continues to work on Test 2 for the remainder of the time (say 2 hours), but without success. Eventually he submits his code.
Another participant works on Test 1 and it takes him a little over 1 hour. At that point this participant figures that Test 2 is too hard and submits.
With the current rating system, they both score 50%, but the participant who gave up is ranked higher. I really don’t think that’s fair! In fact, it should be quite the other way around: Participant 1 should be rewarded for having solved Test 1 sooner than Participant 2 (if not also for not giving up).
If participants are allowed to submit multiple times, it would go like this:
Participant 1 solves test 1 in 15 minutes and submits (saves, whatever you want to call it). If this participant makes no further submissions, his rank will be computed based on this submission (score + time of submission). If he later solves Test 2 as well and submits again, then his rank would be computed based on Test 1 and Test 2 (at time of second submission).
Participant 2 solves Test 1 after 1 hour. He submits and doesn’t even attempt Test 2. His ranking is calculated based on score + time, but since he took longer to finish he ranks lower than Participant 1.
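Just to illustrate what I mean, here is a rough Python sketch of that ranking rule (the names are made up, this is not how the platform actually computes it): only each participant’s last submission counts, ordered by score and then by submission time.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Submission:
    participant: str
    score: float          # percentage of tests passed, 0-100
    submitted_at: datetime

def rank(submissions: list[Submission]) -> list[Submission]:
    """Keep only each participant's last submission, then rank by
    score (descending) and, on ties, by submission time (ascending)."""
    last: dict[str, Submission] = {}
    for s in submissions:
        prev = last.get(s.participant)
        if prev is None or s.submitted_at > prev.submitted_at:
            last[s.participant] = s
    return sorted(last.values(), key=lambda s: (-s.score, s.submitted_at))

# The two-test example above, with the contest starting at t0:
t0 = datetime(2015, 1, 1, 10, 0)
subs = [
    Submission("Participant 1", 50, t0 + timedelta(minutes=15)),  # solves Test 1 early, never resubmits
    Submission("Participant 2", 50, t0 + timedelta(minutes=65)),  # solves Test 1 later, gives up
]
print([s.participant for s in rank(subs)])  # ['Participant 1', 'Participant 2']
```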
I think so, too.
In the first challenge I did, I used the full 3 hours and reached 95% after 2 hours. But I submitted at 3 hours, so my ranking was bad.
In the last challenge I programmed more quick & dirty (because of the time), and after 2 hours I decided that solving the last 15% could take the rest of my time with no guarantee I would manage it, so I submitted after 2 hours to get a better rank, without trying to solve the last 15%.
Your idea of submitting several times with only the last submission rated would be fairer. If you reach 50% after 1 hour, you can submit and keep trying. When you reach a better percentage you can submit again, and if you don’t, your rated time reflects how long you actually needed to reach that percentage.
Yes, you’re right. It’s not exactly Percentage - Time, but what do you propose X and Y be in your equation? I don’t think you can pick any values for X and Y that would make it fair. It needs to be more sophisticated than a linear combination of time and percentage.
As I said above, I think that the easiest modification would be keeping the current ranking system but allowing participants to make multiple submissions, and only considering their last submission. That way, the back-end would not have to change much. It would simply take the last submission and, if there is a submission entry already, replace that with the new one.
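To make it concrete, here is a minimal sketch of that replace-on-resubmit idea, assuming a simple store keyed by contest and participant (the names are just placeholders, not the real back-end):

```python
from datetime import datetime

# (contest_id, participant_id) -> (score, submitted_at)
submissions: dict[tuple[str, str], tuple[float, datetime]] = {}

def record_submission(contest_id: str, participant_id: str,
                      score: float, submitted_at: datetime) -> None:
    """Insert the submission, or overwrite any existing entry, so that
    only the most recent submission per participant is kept for ranking."""
    submissions[(contest_id, participant_id)] = (score, submitted_at)
```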
I noticed something else to improve in the scoring system, based on my experience in several challenges: the score should be based on more “test cases” for each functionality.
Sometimes we can get lucky, and some test cases are validated for a functionality we hadn’t implemented. Or, on the contrary, the functionality is implemented and works with the “dev” test case, but is bugged for the only “scoring” test case, and we get no points at all.
If each functionality were validated by at least 3 “test cases”, it would be fairer.
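As a rough illustration of how that could be scored (the equal weighting per functionality is just my assumption): each functionality gets several test cases and contributes the fraction it passes, instead of all-or-nothing on a single case.

```python
def functionality_score(results: dict[str, list[bool]]) -> float:
    """results maps a functionality name to the pass/fail outcomes of its
    test cases (at least 3 per functionality, as suggested above).
    Each functionality contributes the fraction of its cases passed,
    and all functionalities weigh equally in the final percentage."""
    if not results:
        return 0.0
    per_func = [sum(cases) / len(cases) for cases in results.values()]
    return 100.0 * sum(per_func) / len(per_func)

print(functionality_score({
    "parsing":     [True, True, False],  # 2 of 3 cases passed
    "pathfinding": [True, True, True],   # all cases passed
}))  # ~83.3
```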