Inexplicable time jumps (Python 3) [Solved]

Hello,

Here is the thing: for a contest, I have a function that I need to call quite often (around 1000 times, give or take a factor of two). Since I have 100 ms to answer, and this function is not the most time-consuming one, it obviously takes less than 0.1 ms, and indeed that's what most of my measurements show.
Note also that the time this function takes should not vary much, since it basically reads and writes values in a small matrix and is not recursive.

However, I sometimes see "time jumps" of more than 10 ms (up to 30), for no apparent reason. Let me show you.

This is my function:

def my_weird_function():
    t1 = get_time()

    # initialize some variables; basically a nanoseconds' worth of work

    t2 = get_time()
    tx = []

    for stuff in small_array_of_less_than_30_items:
        tx.append(get_time())

        # read & write some values in a list smaller than 4 elements

    t3 = get_time()

    if t3 - t1 > 15:
        log(t1, t2, tx, t3)

Where get_time() returns the time in milliseconds since the start of the game loop (reach 100 and you time out!).

def get_time():
    return int(round(time.time() * 1000)) - START_TIME

I only log the times when I lose more than 15ms in this function.
Since all of this should take less than a millisecond, most of the time nothing is logged, and when something is, it looks like this:

25 25 [25, 25, 25, 25, 25, 25, 25, 25, 25, 25] 25

But about one out of three game loops, with no connection to any particular state of the game (even two identical runs can behave differently), I get a log like this:

33 33 [33, 33, 33, 33, 62, 62, 62, 62, 62] 62

At first I thought maybe some special case in my loop took a very long time, but it only creates a handful (fewer than 5) of simple objects and assigns some short variables. There is just one loop over a list of at most 3 elements, and no more than a few dozen variable assignments.
And as I continued logging, I found this :

5 29 [29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29] 29

Clearly, nothing between t1 and t2 could justify a 24 ms gap. I didn't include the actual code earlier for readability's sake, but here it is:

    t1 = get_time()

    grid_copy = []
    groups_copy = []
    grid_copy = [[Ball(0, ball.pos) if ball != None and ball.color == 0 else None for ball in col] for col in grid]

    t2 = get_time()

Creating two empty lists, and copying some elements of a 12×6 matrix, with the Ball constructor looking like this:

def __init__(self, color, pos, group=None):
    self.color = color
    self.pos = pos
    self.group = group

How can 24 ms disappear there? The only explanation I found is that something is wrong with the clock in the environment CodinGame uses, but since I'm not a Python expert, maybe it's a known language-level issue that some wise old hands have dealt with before. Maybe the way Python handles repeated calls to the same function makes it count time strangely.
The problem is that to tell my code to stop computing game simulations, I need to monitor the elapsed time and stop at, say, 95 ms. With this unaccountable "time jump" of sometimes more than 25 ms, I'm forced to stop at 70 ms to avoid accidental timeouts, losing about a quarter of my computing time, plus some valuable hours of my life trying to debug this hellish behaviour.

So, if some wise old Python master could enlighten me with their inestimable wisdom, I'd be forever in their debt.


It might very well be related to the time measurement.
Give time.perf_counter() a try; it is better suited to relative time measurement (no need to know the absolute wall-clock time):

import time

time_start = time.perf_counter()
# ... work to measure ...
elapsed = round((time.perf_counter() - time_start) * 1000)  # in ms

Even with perf_counter(), still the same issue :s

Maybe it's the garbage collector weighing in.

If I understand correctly, you create quite a lot of small objects that are no longer used after your function returns. Usually you don't have to worry about what happens to them: they are automatically destroyed and the memory is freed. This is convenient for most purposes, but from time to time you get unpredictable time spikes like these.

The bad news is that there is not much you can do about it.
You can uglify your code by using (an array of) global variables/objects initialised once at the global level, not inside your function. But the GC will weigh in anyway (the spikes might just be smaller). This is more or less true for any language without manual memory management.
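For instance, instead of rebuilding a fresh list-of-lists on every call, the copy could overwrite a scratch grid allocated once at module level. A hypothetical sketch (GRID_W, GRID_H and the cell values stand in for the real game state):

```python
GRID_W, GRID_H = 12, 6

# allocated once at import time, reused by every call
SCRATCH = [[None] * GRID_H for _ in range(GRID_W)]

def copy_grid_into_scratch(grid):
    # overwrite cells in place rather than allocating new columns,
    # keeping the per-call allocation count near zero
    for x in range(GRID_W):
        src, dst = grid[x], SCRATCH[x]
        for y in range(GRID_H):
            dst[y] = src[y]
    return SCRATCH
```

The Ball instances themselves would still need pooling to avoid allocation entirely, but this removes the per-call list churn.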

That would explain a lot, thanks !
I’ll try to make some more tests to confirm this but I think this is indeed the issue :slight_smile:

OK, so I tried calling gc.collect() at the start of my function and the problem disappeared! It dramatically raised the mean computing time, but without the spikes, which proves your point :p.
So in the end I called gc.disable() at the start of my code instead, and everything is fine now (even if it gives me the weird feeling that I'm putting a pillow over its face while whispering "shhh… no tears, only dreams").
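For anyone landing here later, the pattern that worked can be sketched like this (simulate_turn is a hypothetical stand-in for the real per-turn work):

```python
import gc
import time

gc.disable()  # no automatic collections in the middle of a turn

def simulate_turn():
    # stand-in for the real simulations: allocates temporary objects
    return [[i * j for j in range(6)] for i in range(12)]

for turn in range(3):
    t0 = time.perf_counter()
    simulate_turn()
    # if memory growth ever becomes a problem, collect manually at a
    # moment of our choosing (e.g. right after sending the answer),
    # not mid-computation
    gc.collect()

print("GC enabled:", gc.isenabled())
```

Note that with GC disabled, objects caught in reference cycles are never reclaimed, so memory can grow over a long game; the manual gc.collect() at a safe point covers that case.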

Thank you again :slight_smile:

Case closed