I used an any-time approach.
I checked the time using rdtsc (handling both 32-bit and 64-bit architectures), which is very fast, especially in a sandboxed environment where system-call-based clocks can be slow.
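A minimal sketch of a timestamp read that works on both 32-bit and 64-bit x86 (the exact helper used is my guess; the rdtsc instruction itself writes the low 32 bits to EAX and the high 32 bits to EDX on both):

```cpp
#include <cstdint>

// Read the CPU timestamp counter. The same inline asm works on 32-bit
// and 64-bit x86: rdtsc always splits the 64-bit counter across EDX:EAX.
static inline uint64_t read_tsc() {
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return (static_cast<uint64_t>(hi) << 32) | lo;
}
```

A single rdtsc read costs on the order of tens of cycles, far cheaper than a clock_gettime syscall in a sandbox, which is why it can be polled so frequently.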
The time was checked frequently (roughly every ~1ms), and I kept a reasonable margin before stopping the algorithm (100ms for the first turn, 10ms for the others).
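The any-time loop could look something like the sketch below (the frequency constant, the check interval, and the helper names are my assumptions, not the actual code; the point is checking the deadline only every few iterations so the rdtsc read stays off the hot path):

```cpp
#include <cstdint>
#include <x86intrin.h>  // __rdtsc (GCC/Clang x86 intrinsic)

// Hypothetical hardcoded frequency -- must match the judge's CPU.
constexpr double CPU_MHZ   = 2300.0;  // assumed 2.3 GHz
constexpr int CHECK_EVERY  = 256;     // iterations between deadline checks

// Run improvement steps until budget_ms milliseconds (as measured by the
// hardcoded frequency) have elapsed; return how many iterations completed.
long run_anytime(double budget_ms) {
    // budget in cycles = ms * (cycles per ms) = ms * MHz * 1000
    uint64_t deadline = __rdtsc()
                      + static_cast<uint64_t>(budget_ms * CPU_MHZ * 1000.0);
    long iters = 0;
    for (;;) {
        // ... one cheap improvement step of the real algorithm goes here ...
        ++iters;
        if (iters % CHECK_EVERY == 0 && __rdtsc() >= deadline) break;
    }
    return iters;
}
```

The margin (100ms/10ms) would simply be subtracted from the turn budget before calling this, so the algorithm stops safely ahead of the real limit.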
Rdtsc requires hardcoding the CPU frequency as a constant in the code (to convert cycle counts into milliseconds), so if for some reason one server had a lower frequency than the others, that could explain what happened.
Another explanation is the time allocated for the first turn, which is always bigger than for the other turns. I didn't see it in the statement, so I tried different values to guess it and settled on 1000ms (so with the margin my algorithm runs for 900ms on the first turn). If the real value was smaller than 1000ms, that could also explain the timeouts (but then why did it only start failing near the end of the contest?).