Random timeouts in Hypersonic

I have a random timeout issue in Hypersonic.

I check both wall time and CPU time; both are around 95ms right before the command is printed.
I just use the standard C++ cout stream from the example code.

Most of the time it's fine, but occasionally I get a timeout in the middle of the game.

I have tried two ways to time the game (both sketched below):

1.) Right after the inputs.
I believe that was given as the right way to time by a CG member in a thread I read a while ago.

2.) Before the inputs.
This worked well in The Accountant but gives very large time variations and seems unusable in Hypersonic.
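
A minimal sketch of the two placements (made-up names, not my actual code). Note that option 2 also counts the time the process spends blocked on cin waiting for the referee to send the frame, which might explain the large variation:

#include <chrono>
#include <iostream>
#include <string>

int main() {
    using Clock = std::chrono::steady_clock;
    std::string line;

    // 2.) "Before the inputs": this start point also includes the time spent
    //     blocked on cin, waiting for the referee.
    const auto startBefore = Clock::now();

    std::getline(std::cin, line); // stand-in for reading the frame input

    // 1.) "Right after the inputs": only my own processing is measured from here.
    const auto startAfter = Clock::now();

    // ... search until the time limit, measured from one of the two start points ...

    const auto waited = std::chrono::duration_cast<std::chrono::milliseconds>(startAfter - startBefore).count();
    std::cerr << "waited on input: " << waited << "ms" << std::endl;
}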

Note that since I print the processing time right before outputting the command, and there is no processing between the time print and the cout, I am on time according to both chrono and clock.

Every time this happens, the cerr line with the time shows up but the cout line with the command doesn't.

The code is basically like this:

auto t = chrono::duration_cast<chrono::milliseconds>(chrono::steady_clock::now() - timeStart).count();
clock_t endt = clock();
cerr << "simcnt=" << simCnt << " time=" << dec << t << "ms"
     << " clock_t=" << ((float)(endt - startt) / CLOCKS_PER_SEC) * 1000 << endl;
cout << cmd << " " << dbg << endl;

*Dunno why the last line refuses to keep the line break; it prints a literal <br/> instead :frowning:

And I get output like this:

time=95ms clock_t=95.079
The command should be there but nothing shows up.
Game information:
Timeout: the program did not provide 1 input lines in due time… FredericBautista will no longer be active in this game.

The cout appears to be delayed enough to cause a timeout on occasion.
It happens maybe one in ten games or so.

By "right after the inputs" do you mean once you've read them all, or after you've read the first line? I'd imagine the server prints them all at the same time and then starts the timeout, so if you start your timer after you've read and processed all of the inputs you may be slightly late. Are the timeouts happening when there is a large number of entities?

Do you mean I should read one line and then start the timer?

I’m going to try that and see if it helps.

However, "processed" is just populating a few variables; there isn't any real processing done there, just system time spent in the cin stream calls. Anyway, I'll give it a try.

Unfortunately this still happens occasionally; I just had one in the IDE now (after a few days).
It's not frequent, and I suppose it's actually happening to everybody.

The processing order now is: read the very first line of input for the frame and start the timer right away.
I still print the time at the end of the processing, right before printing the command.

The last time it failed on a timeout I had this output:
time=95ms clock_t=95.035

I guess that occasionally the cout is slightly delayed for some reason…

The code outline is pretty much:

1 - read first frame input
2 - start timer
3 - read remaining inputs for frame
4 - process until time limit
5 - print time
6 - print move/bomb int int

The last two are really just one line of code each,
so it's pretty hard for a bug to hide in between…
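
In code, the outline above is roughly this (just a sketch: the busy search loop and the hard-coded MOVE command are placeholders, not my actual bot):

#include <chrono>
#include <iostream>
#include <string>

int main() {
    using Clock = std::chrono::steady_clock;
    const auto LIMIT = std::chrono::milliseconds(95); // small safety margin under the 100ms cap

    std::string firstLine;
    while (std::getline(std::cin, firstLine)) {  // 1 - read first frame input
        const auto start = Clock::now();         // 2 - start timer
        // 3 - read the remaining inputs for the frame here

        long simCnt = 0;
        while (Clock::now() - start < LIMIT) {   // 4 - process until time limit
            ++simCnt;                            // stand-in for one simulation step
        }

        const auto t = std::chrono::duration_cast<std::chrono::milliseconds>(Clock::now() - start).count();
        std::cerr << "simcnt=" << simCnt << " time=" << t << "ms" << std::endl; // 5 - print time
        std::cout << "MOVE 0 0" << std::endl;    // 6 - print the command (endl flushes)
    }
}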

Reviving this post because, with the recent platform changes, accurate timing is more important now.

How exactly do the CG servers measure turn time?
Is it CPU time, or wall time?
I imagine the CG servers do something like this on a submit/test:

  1. CG Server: compiles the code.
  2. CG Server: runs an arbiter process that executes the compiled codes (all of them). This process redirects the stdout/stdin of each compiled bot.
  3. CG Server: sends the initial inputs to each subprocess, all lines at once. It then starts the timeout, waiting for the 1st-turn outputs of our code.
  4. Our AI code: on the first turn the process starts and initializes everything (static constructors, etc.).
  5. Our AI code: reads the first-turn initial inputs. I think initialization time needs to be accounted for too; otherwise it would be unfair, because you could precalculate a lot of lookup tables with no time penalty.
  6. Our AI code: does magic stuff to win.
  7. Our AI code: sends its Console.WriteLine/cout before 1s has elapsed from starting the code (not from reading the inputs).
  8. CG Server: reads our couts, calculates the next turn and sends the next turn's inputs, resetting the timeout to 100/150ms.
  9. Our AI code: reads the inputs. The moment we read the first input line, the timer must be restarted with the 100/150ms timeout (see the sketch after this list).
  10. Our AI code: calculates magic stuff for the remaining time.
  11. Our AI code: sends the cout to the console.
  12. CG Server: reads our couts. Goto step 8. Rinse and repeat.
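
Assuming this model is right, the per-turn timing would look roughly like this (just a sketch; the 1000/100ms budgets come from the steps above, while the 5ms safety margin and the hard-coded MOVE command are my own placeholders, not official values):

#include <chrono>
#include <iostream>
#include <string>

int main() {
    using Clock = std::chrono::steady_clock;
    using std::chrono::milliseconds;

    // Step 4: initialization counts against the 1st-turn budget,
    // so anchor the first turn on the process start, not on the first read.
    const auto processStart = Clock::now();
    // ... static init, lookup tables, etc. ...

    bool firstTurn = true;
    std::string line;
    while (std::getline(std::cin, line)) {            // step 9: first input line of the turn
        const auto turnStart = firstTurn ? processStart : Clock::now();
        const auto budget    = firstTurn ? milliseconds(1000) : milliseconds(100);
        firstTurn = false;

        // ... read the rest of the turn's input here ...

        while (Clock::now() - turnStart < budget - milliseconds(5)) { // keep a small safety margin
            // step 10: magic stuff for the remaining time
        }
        std::cout << "MOVE 0 0" << std::endl;         // step 11: one flushed output line per turn
    }
}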

The FAQ doesn't explain this. Timing is a delicate issue, and an incorrect timer is a problem: either you get random timeouts, or you have to be conservative with your timings (70ms or so).
On cloud servers we now have less calculation time, so maximizing the available time is a plus.
It would be great if some of the CG gurus explained this and put it in an "Advanced FAQ".

I suspect that the CG developers might actually be having a hard time with timings in the cloud.

There could be some unknown variation when running on cloud platforms, which may run special invisible processes for security, maintenance or whatever else that they just might not have access to, or might not even have much information about.

This is just guessing on my end, but they already said (in a reply to another post) that the increase in timeouts was related to the move to cloud platforms.

I guess precise timing of a process somewhat goes against the fluffy concept of the cloud :slight_smile: