I would like to contribute and create a simple I/O puzzle (none of those interactive games yet heh). Is there some way to find or change the time limit?
The reason I ask is that I need to discourage people from using the built-in sort with its O(n log n) performance and push them towards O(n) sorting. The only way of doing that, it seems to me at least at a glance, is to set the time limit between the O(n) solution and the O(n log n) solution.
All ideas are welcome!
For reference: Lang versions FAQ
I also would like to know this, but I cannot find it anywhere in the documentation. I did a pretty stupid test with Python, inside the IDE of a classic I/O puzzle:
from time import sleep
sleep(5.95)  # varying the duration per run
With 5.95 seconds of sleep, it never timed out. With 5.96 seconds of sleep, it timed out most of the time. So I guess the time limit is somewhere around 6000 ± 50 ms.
For classic I/O puzzles the time limits aren’t published and vary by language (I’m pretty sure it’s 5 seconds for Python, 0.5 for C++). For other puzzles the timeout is configurable using the SDK (within limits) and is the same for all languages.
Sleeping without reading input probably isn’t the correct way to measure it - assuming it’s the same as bot programming, the server only starts measuring time from when the input is sent to your program (and most likely there is a pause to allow your program to start up - probably 1 second, which would match the 6 seconds total above).
Ah, that’s a much better answer, and pretty awesome that they vary per language. I always thought that optimization puzzles that are difficult in Python etc. should be a piece of cake in C etc. But still, why are the time limits not published?
As for this:
I now tested it with:
from time import sleep
open(0).read()  # read all of stdin first
sleep(5.95)
(If I’m not mistaken, open(0).read() simply reads all input.) Still the same result, so I guess for classic I/O puzzles the clock starts right from the start, and in any case your 5 seconds + 1 second for initialization is probably fully correct.
Let me confirm: you are both saying that it is useful to do input-independent computation before reading stdin, and that this way I gain up to 1 second for whatever computation remains?
But I think all (or almost all?) optimization puzzles are created using the SDK, so the time limit is the same for all languages as mentioned by RoboStac.
No, it seems I was wrong about that - there doesn’t appear to be any measurable time between the start of execution and the first input. I guess it’s not run in the same way as the SDK puzzles.
Oh, my bad - I used the word optimization, but I actually meant classic I/O puzzles that need some sort of optimization of the code in order not to time out (i.e., I assumed that with C you could often apply brute force where Python would quickly time out with a similar approach).
I believe that discouraging people from using the built-in sorting functions this way is very difficult, if not impossible. The asymptotic gap between O(n) and O(n log n) is so small that the constant factor still has a huge impact on performance.
In some interpreted languages, the implementation of the built-in sort is pre-compiled to machine code for improved performance, so anything interpreted will have a hard time competing with it. For example, CPython writes its built-in sort in C, and according to my own benchmark, it is about 1.17× as fast as my own counting sort implementation written entirely in Python when sorting 10⁶ integers ranging from 0 to 9 – an O(n log n) algorithm is faster than an O(n) algorithm!
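For reference, a pure-Python counting sort along these lines might look like the sketch below (the actual benchmarked implementation may have differed in details):

```python
def counting_sort(nums, max_value=9):
    """O(n + k) sort for integers in the range 0..max_value."""
    counts = [0] * (max_value + 1)
    for x in nums:          # one pass to tally each value
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)  # emit each value count times
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```

Every step here runs in the interpreter, which is exactly why CPython’s C-implemented Timsort can still beat it despite the worse asymptotic complexity.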
Furthermore, even in compiled languages, the performance of the built-in sort holds up pretty well against O(n) sorting algorithms. For example, according to my own benchmark, my own counting sort implementation written in Rust is no more than 3× as fast as the standard library’s slice::sort_unstable (a general comparison sort based on pdqsort) when sorting up to 10⁶ integers ranging from 0 to 9.
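A quick way to reproduce this kind of comparison yourself is a small timing harness like the one below (a sketch, not the benchmark used above; the data size, value range, and repetition count are arbitrary choices, and the numbers will vary by machine):

```python
import random
import timeit

def counting_sort(nums, max_value=9):
    """O(n + k) sort for integers in the range 0..max_value."""
    counts = [0] * (max_value + 1)
    for x in nums:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

# 10^6 random integers in 0..9, matching the scenario discussed above
data = [random.randint(0, 9) for _ in range(10**6)]

t_builtin = timeit.timeit(lambda: sorted(data), number=5)
t_counting = timeit.timeit(lambda: counting_sort(data), number=5)
print(f"built-in sorted(): {t_builtin:.3f}s  counting sort: {t_counting:.3f}s")
```

Whichever side wins, the ratio between the two times is what matters for setting a time limit, and on most machines it stays well within a small constant factor.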
Actually I agree with every point you made here.