Hello.
I’m trying to set up local simulations for multiplayer games, and at the same time I’m writing an AI that measures how much time it spends itself. I couldn’t find any useful information on this forum, so I had to go digging elsewhere.
Taking Tic Tac Toe as an example, with a 1000 ms timeout for the first round and 100 ms afterward, I read the Java sources of the game engine, referee, etc. on GitHub and found that the timing is done with System.nanoTime() inside a rather tangled routine that reads the AI’s output. From what I could gather, nanoTime() uses clock_gettime with the monotonic flag behind the scenes, which is the same syscall std::steady_clock uses in C++ (the language I use). I’m not sure measuring only real time, rather than both real and CPU time, is intentional, but okay, whatever.
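For reference, here is a minimal sketch of how I measure time on my own side, assuming std::steady_clock is indeed backed by the same monotonic clock. Like nanoTime(), it only sees wall-clock time, not CPU time:

```cpp
#include <chrono>
#include <iostream>

// Minimal sketch: measuring my own turn time with std::steady_clock, which on
// Linux is typically backed by clock_gettime(CLOCK_MONOTONIC), i.e. the same
// monotonic clock System.nanoTime() maps to. Real (wall-clock) time only.
int main() {
    auto start = std::chrono::steady_clock::now();

    // ... one turn of actual work would go here ...

    auto elapsedMs = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
    std::cerr << "turn took " << elapsedMs << " ms (wall clock)" << std::endl;
    return 0;
}
```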
Going back to how and when you use nanoTime(), I figured I could sleep for more than a second right at the start of my AI and it would still pass. Indeed, it’s TOTALLY FINE DOODZ: I managed to sleep up to 1600 ms in the IDE (plus about 150 ms of real work) without running into trouble. The version of my AI currently running in your matchmaker sleeps 1 s on top of initialization, and it passes too. For subsequent rounds, I tried sleeping 120 ms per turn in my loop in the IDE (actual work is at most 0.2 ms) and it was fine until round 18 or so. A sketch of that experiment is below.
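This is roughly what the experiment looked like. The one-line-per-turn input and the "0 0" move are placeholders, not the real Tic Tac Toe protocol, but the sleep durations are the ones I actually tested:

```cpp
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Sketch of the sleep experiment described above. Input parsing and the move
// format are placeholders; only the sleep durations matter here.
int main() {
    bool firstTurn = true;
    std::string line;
    while (std::getline(std::cin, line)) {            // blocks until the referee sends this turn's input
        if (firstTurn) {
            // stated budget: 1000 ms, yet sleeping ~1600 ms still passed in the IDE
            std::this_thread::sleep_for(std::chrono::milliseconds(1600));
            firstTurn = false;
        } else {
            // stated budget: 100 ms, yet ~120 ms per turn only timed out around round 18
            std::this_thread::sleep_for(std::chrono::milliseconds(120));
        }
        std::cout << "0 0" << std::endl;              // dummy move, endl flushes
    }
    return 0;
}
```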
My conclusion is that the opponent’s last round time (capped at the timeout value), your JVM initialization and IO fiddling, and maybe more (simulating the round?) are given to me for free on top of the 1000 or 100 ms when I compute the current round, which is definitely not what the rules say. On the other hand, if your JVM decides to run the GC in between the two calls to nanoTime(), or the system schedules some other task, that eats into the time I’m actually allowed. See the sketch below for where I assume the clock starts and stops.
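Here is a sketch illustrating that assumption. Measuring from before the blocking read also counts the opponent’s turn and any engine/JVM overhead; measuring from right after the first read of the turn only counts my own work. The protocol is again a placeholder:

```cpp
#include <chrono>
#include <iostream>
#include <string>

// Sketch: comparing a measurement that starts before the blocking read (and so
// includes the opponent's turn plus engine overhead) with one that starts right
// after the first read of the turn (my own work only).
int main() {
    std::string line;
    while (true) {
        auto beforeRead = std::chrono::steady_clock::now();
        if (!std::getline(std::cin, line)) break;          // blocks while the opponent plays
        auto turnStart = std::chrono::steady_clock::now();  // my own turn effectively starts here

        // ... compute the move ...

        auto now = std::chrono::steady_clock::now();
        auto inclWaitMs = std::chrono::duration_cast<std::chrono::milliseconds>(now - beforeRead).count();
        auto ownTurnMs  = std::chrono::duration_cast<std::chrono::milliseconds>(now - turnStart).count();
        std::cerr << "including wait: " << inclWaitMs
                  << " ms, own turn: " << ownTurnMs << " ms" << std::endl;
        std::cout << "0 0" << std::endl;                    // dummy move
    }
    return 0;
}
```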
So basically, unless I went wrong somewhere, the takeaway is: please document how the timeout actually works, and fix it. Thank you.