Speed up Python with JIT compiler

Since the discussion about the performance gap between scripted and compiled languages has resumed, I would like to offer an easy way to speed up Python, the most popular scripting language here. One of the simplest JIT compilers is Numba, a mature project included, for example, in Anaconda by default.

  • Numba allows JIT-compiling individual Python functions. It is not yet another Python interpreter, but a regular package that can be installed using conda or pip: https://numba.pydata.org/
    To have a function compiled, you only need to add a decorator before its definition.

from numba import jit
import math

@jit(nopython=True)
def go_fast():  # JIT compiled
    result = 0.0
    for i in range(1000):        
        result += math.sqrt(i)
    return result

def go_slow():  # standard CPython function
    result = 0.0
    for i in range(1000):        
        result += math.sqrt(i)
    return result

%timeit go_fast()  # 2.06 µs ± 3.93 ns per loop

%timeit go_slow() # 214 µs ± 1.25 µs per loop

  • Numba has some limitations: it cannot compile arbitrary user-defined classes or containers mixing different types. But all common Python operations - conditions, loops, built-in collections - are supported out of the box. You will not need to dig into a complicated type system as in Cython; this is just regular Python code, possibly with minor modifications.
    Supported Python features: https://numba.pydata.org/numba-doc/latest/reference/pysupported.html
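A minimal sketch of the point above: the function below uses only plain Python constructs (a `while` loop, a list, a condition), so Numba compiles it unchanged. The function name is made up for illustration, and the `ImportError` fallback is an assumed convenience so the snippet also runs where Numba is not installed.

```python
try:
    from numba import njit  # njit is shorthand for jit(nopython=True)
except ImportError:
    # assumed fallback: a no-op decorator so the code still runs without Numba
    def njit(func):
        return func

@njit
def count_even_digits(n):
    # plain Python only: loops, conditions and built-in lists
    digits = []
    while n > 0:
        digits.append(n % 10)
        n //= 10
    count = 0
    for d in digits:
        if d % 2 == 0:
            count += 1
    return count

print(count_even_digits(2468))  # -> 4, all four digits are even
```

The first call triggers compilation, so it is slower; subsequent calls run the compiled machine code.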

  • It goes especially well with numpy, allowing you to process arrays quickly. Numpy is rich in fast built-in vectorized functions, but in real applications they are not always enough, and you end up iterating and checking things with regular Python loops. The result is a dramatic loss of speed, and the benefits of numpy come to naught.
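To illustrate, here is a sketch of the kind of explicit array loop that is slow in CPython but fast once Numba compiles it. The function name and the condition are invented for the example; the `ImportError` fallback is an assumed convenience so the snippet still runs without Numba.

```python
import numpy as np

try:
    from numba import njit  # njit is shorthand for jit(nopython=True)
except ImportError:
    # assumed fallback: a no-op decorator when Numba is unavailable
    def njit(func):
        return func

@njit
def clipped_sum(arr, threshold):
    # explicit element-by-element loop over a numpy array:
    # pure-Python speed in CPython, near-native speed under Numba
    total = 0.0
    for i in range(arr.shape[0]):
        if arr[i] < threshold:
            total += arr[i]
    return total

data = np.arange(10.0)
print(clipped_sum(data, 5.0))  # 0+1+2+3+4 -> 10.0
```

Logic like this (sum only the elements passing a data-dependent test) is exactly the case where a single vectorized numpy call is not always available, and Numba lets you keep the readable loop.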

Benefits:

  • Regular and advanced Python users get performance comparable to Java/C#, and get the opportunity to compete in contests/multis and write competitive code in games that require some sort of brute force. This requires some minimal adjustments, but it is much easier than rewriting a Python bot in C++ in the middle of a contest.
  • These changes will not affect completely casual Python users; they will keep writing as they did in vanilla Python.
  • Perhaps for the CG team, adding a package to an existing language would be less work than adding a new language (PyPy, for example) to the backend.
12 Likes

There are some posts on the forum asking for Numba but without comments from the staff. So I also wonder if it’s hard/impossible to add or just “not a priority”.

1 Like

As a person who did rewrite a bot from Python to C++ in the middle of a contest just to make BFS fit in the time limit, I really appreciate any effort to help Python code run faster.

1 Like

Debian Sid has a python3-numba package.

Thanks for the suggestion. I’m listing all language update requests to propose to R&D the next time we do a language update.

4 Likes

Where do we suggest things to you?
I have been using Rust for a while now and would really like nalgebra; it is like numpy for Rust. Some other wishes:

  • A larger upload size. Because we have to code everything ourselves, it is easy to hit the size limit if you work on a game for a while, and once I hit the limit the game stops being fun: every new idea is constrained since it won’t fit, so I have to give up something else, and we aren’t meant to obfuscate our code, so it is difficult. I notice in your blog that the Rust files were the longest on average, so maybe this is a Rust problem…
  • Larger timeouts in the IDE, perhaps with a warning that we would have timed out in the real game.
  • Access to a neural network library like tch-rs would be wonderful to play with (this is meant to be fun, right?); it would also leave more space for my weights in the upload limit. Python used to have access to TensorFlow, I guess it still does?
  • Multiple files would be a thing of dreams.
  • While I’m dreaming… multiple cores would also be fun to play with, provided we got the libraries to help.
  • Access to my own Cargo file.
I have been using rust for a while now and would really like nalgebra, it is like numpy for rust. It would also be nice to get a larger upload size, because we have to code everything ourselves it is easy to hit the size limit if you work on a game for a while, and once I hit the limit the game stops being fun, every new idea is constrained as it won’t fit so I have to give up something else, and we aren’t meant to obfuscate our code so it is difficult. I notice in your blog, the rust files were the longest on average so maybe this is a rust problem… A larger timeouts in the IDE would be nice, perhaps with a warning we would have timed out in the real game. Access to a neural network library like tch-rs would be wonderful to play this is meant to be fun right?, it would also leave more space for my weights in upload limit, Python used to have access to tensor flow, I guess it still does? multiple files would be a thing of dreams. While I’m dreaming… multiple cores would also be fun to play with providing we got the libraries to help. Access to my own cargo file.