Here is the thing: for a contest, I have a function that I need to call quite often (around 1000 times, give or take a factor of two). Since I have 100ms to answer, and this function is not the most time-consuming one, it should take well under 0.1ms per call. And indeed, that's what most of my measurements show.
Note also that the time this function takes should not vary much: it basically reads and writes values in a small matrix, and it is not recursive.
However, I sometimes get "time jumps" of more than 10ms (up to 30), for no apparent reason. Let me show you what I mean.
This is my function:
```python
def my_weird_function():
    t1 = get_time()
    # initialize some variables, basically a nanoseconds' worth of work
    t2 = get_time()
    tx = []
    for stuff in small_array_of_less_than_30_items:
        tx.append(get_time())
        # read & write some values in a list of fewer than 4 elements
    t3 = get_time()
    if t3 - t1 > 15:
        log(t1, t2, tx, t3)
```
where get_time() returns the time in milliseconds since the start of the game loop (reach 100 and you time out!):
```python
def get_time():
    return int(round(time.time() * 1000)) - START_TIME
```
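For what it's worth, here is how the same helper would look built on `time.perf_counter()`, which is monotonic and higher resolution than `time.time()` (just a sketch I wrote while investigating, not what my contest code currently uses):

```python
import time

START = time.perf_counter()

def get_time_monotonic():
    """Milliseconds since START, from the monotonic high-resolution clock."""
    return int(round((time.perf_counter() - START) * 1000))
```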
I only log the times when I lose more than 15ms in this function.
Since all of this should take less than a millisecond, most of the time nothing is logged, and when something is logged it looks like this:
25 25 [25, 25, 25, 25, 25, 25, 25, 25, 25, 25] 25
But in about one game loop out of three, with no correlation to any special game state (even two identical runs can behave differently), I get a log like this:
33 33 [33, 33, 33, 33, 62, 62, 62, 62, 62] 62
At first I thought there might be some special case in my loop that took a very long time, but the loop body just creates a handful (fewer than 5) of simple objects and assigns a few short variables: a single loop over a list of at most 3 elements, and no more than a few dozen variable assignments.
And as I continued logging, I found this :
5 29 [29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29] 29
Clearly, nothing between t1 and t2 could justify a 24ms gap. I left out the actual code above for readability, but here it is:
```python
t1 = get_time()
grid_copy = []
groups_copy = []
grid_copy = [[Ball(0, ball.pos) if ball is not None and ball.color == 0 else None
              for ball in col] for col in grid]
t2 = get_time()
```
That is: creating two empty lists and copying some elements of a 12×6 matrix, with the Ball constructor looking like this:
```python
def __init__(self, color, pos, group=None):
    self.color = color
    self.pos = pos
    self.group = group
```
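To convince myself the copy itself is cheap, I timed an equivalent copy of a 12×6 grid in isolation (a standalone sketch with a dummy grid, not my actual contest code):

```python
import time

class Ball:
    def __init__(self, color, pos, group=None):
        self.color = color
        self.pos = pos
        self.group = group

# a 12x6 grid like the one in the game, half filled with color-0 balls
grid = [[Ball(0, (x, y)) if (x + y) % 2 == 0 else None for y in range(6)]
        for x in range(12)]

start = time.perf_counter()
for _ in range(1000):
    grid_copy = [[Ball(0, ball.pos) if ball is not None and ball.color == 0 else None
                  for ball in col] for col in grid]
elapsed_ms = (time.perf_counter() - start) * 1000
# on my machine, 1000 of these copies finish in a few milliseconds total,
# i.e. far below 0.1ms per copy
```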
How can 24ms disappear there? The only explanation I have found is that something is wrong with the clock of the environment CodinGame uses, but since I'm not a Python expert, maybe it's a language-level issue that some wise old hands have dealt with before. Maybe the way Python handles repeated calls to the same function makes the timing behave strangely.
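One language-level suspect I could try to rule out is the garbage collector pausing the program mid-loop (pure guesswork on my part; `gc` is the standard-library module):

```python
import gc

# Rule out collector pauses: switch off automatic collection during the
# time-critical part, then collect manually when there is slack.
gc.disable()
try:
    work = [object() for _ in range(1000)]  # stand-in for simulation work
finally:
    gc.enable()
    collected = gc.collect()  # number of unreachable objects found
```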
But the problem is that, to tell my code to stop computing game simulations, I need to monitor the elapsed time and stop at, say, 95ms. With this unexplained "time jump" of sometimes more than 25ms, I'm forced to stop at 70ms to avoid accidental timeouts, which costs me about a quarter of my computing time, plus some valuable hours of my life trying to debug this hellish behaviour.
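Concretely, my stopping logic looks roughly like this (a simplified sketch; `run_simulations` and `MARGIN_MS` are my own names here, and the margin is the safety buffer I keep having to inflate because of the jumps):

```python
import time

TIME_BUDGET_MS = 100
MARGIN_MS = 30  # would be ~5 without the unexplained jumps

START = time.perf_counter()

def elapsed_ms():
    return (time.perf_counter() - START) * 1000

def run_simulations(max_iterations=10_000):
    """Run simulation steps until the safety margin is reached (hypothetical loop)."""
    count = 0
    while elapsed_ms() < TIME_BUDGET_MS - MARGIN_MS and count < max_iterations:
        count += 1  # one simulation step would go here
    return count
```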
So, wise old Python masters, if you could enlighten me with your inestimable wisdom, I'd be forever in your debt.