cg_referee: a generic tool to compare your bots locally

Hello everyone,

A few of us have been talking on the chat about local solutions for comparing different versions of your bots, and it’s often a pain to set up a proper framework for that.

So I tidied up my own “local arena” and made it into a generic arena for bots, so everyone on CG can benefit from this small piece of code. If you are interested in trying it, you can find it on my GitHub: .

I tried to make the tutorial as precise as possible, but bear in mind that all of this is a work in progress, and that I will need feedback to improve and correct the tool if you find it interesting. I have also provided a working example for Tron, so you can see what the framework requires to compare your bots.

If you have any comment, question or criticism, please feel free :slight_smile:



I didn’t realise Agade had already released an arena for Tron. In all fairness to him, I repost his code here!
The code is in C++ and is available on GitHub: . His code combines what I do in cg_referee with the actual code for Tron (plus options for fair spawn points, as he discusses in the topic).

I also have a generic Java arena:
Currently it only supports Tron and Ghostbuster (though the Ghostbuster support is pretty limited). When I want to support a new puzzle, I just have to implement the engine.


Just a little update : I’ve added a couple of features to the referee (seeds for the runs, multithreading, help to debug).

In the future I will post my engines for the games on github. And as a starter, I’ve added the GITC engine so you can continue working on your bots while waiting for the multi :wink:


Thanks for sharing that !
You probably should add how to output the rankings after the game ends in the readme.
I guess that for a 4-player game, it’s a permutation of “0 1 2 3”?

Also, for those who don’t want to recode the engine, I got your arena working with the Java referee provided by CG. I just had to auto-solve all the errors in Eclipse and rearrange the methods in the main. Then make an executable, change the command in the JSON, and you’re good to go.

You’re right, thanks for pointing that out. I’ll make the modifications to the instructions tonight :slight_smile:


Thanks for the referee tool :slight_smile:

I just want to point out for fellow Python 2 (and maybe Python 3) users that they’ll have to add a flush after their print for the referee to work.

For example:

import sys

print "WAIT"
sys.stdout.flush()

Hope it helps.
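For Python 3 users, the same fix can be wrapped in a small helper. This is just a sketch: `send_move` and the `"WAIT"` command are illustrations, not part of the actual cg_referee protocol.

```python
import sys

def send_move(move):
    """Write one move to the referee and flush immediately.

    When stdout is connected to a pipe (as it is under a local arena),
    Python buffers it, so without the flush the referee would wait
    forever for a line that never arrives.
    """
    sys.stdout.write(move + "\n")
    sys.stdout.flush()

# typical usage inside a bot's turn loop:
# send_move("WAIT")
```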

1 Like

As you said to me yesterday, there’s the same “problem” with my cg-brutaltester tool. I don’t know why CodinGame can read the output of a Python bot without needing a flush. Maybe a CodinGame dev knows the answer?


If you are searching for the answer, Plopx gave it to me:

If you start your code with a command line like python, you will need to call sys.stdout.flush(). So how does CodinGame run Python code without requiring a flush? Because they use a slightly more complex command line: they start your Python code with a command that contains additional arguments like stdbuf -o0 -e0. This is why you don’t need to call sys.stdout.flush() on CodinGame.

If you want to use your Python code with a local arena (like cg-referee or cg-brutaltester), you have to call the flush or use a command line with these arguments.
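For reference, here is how an arena could wrap a bot command with stdbuf, as described above. This is a hypothetical sketch of the idea, not code from cg-referee or cg-brutaltester, and it requires GNU coreutils (Linux).

```python
import subprocess

def spawn_unbuffered(cmd):
    """Launch a bot command wrapped in stdbuf.

    -o0 and -e0 set the stdio buffer size for stdout and stderr to 0,
    which is reportedly what CodinGame does; the bot's prints then
    reach the referee without any explicit flush.
    """
    return subprocess.Popen(
        ["stdbuf", "-o0", "-e0"] + cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
```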

Yeah, that’s strange. I’ve seen that for some codes, but I don’t understand how or when. For instance, we don’t need the flush in Python 3.

According to what Magus is saying, if I set the size of the pipe buffers between the processes to 0 it should do the trick and never require a flush. I’ll try to integrate this in the coming days.

Thanks for the feedback :slight_smile:


It depends on the Python environment, specifically whether PYTHONUNBUFFERED is set or not.
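For Python bots specifically, an arena could also set PYTHONUNBUFFERED when spawning the child process, which has the same effect as running `python -u`. A minimal sketch (the helper name and bot path are illustrative):

```python
import os
import subprocess
import sys

def spawn_python_unbuffered(script):
    """Launch a Python bot with PYTHONUNBUFFERED=1.

    With this variable set, the child interpreter does not buffer
    stdout, so the bot's prints reach the referee without an explicit
    sys.stdout.flush(). Equivalent to starting it with `python -u`.
    """
    env = dict(os.environ, PYTHONUNBUFFERED="1")
    return subprocess.Popen(
        [sys.executable, script],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        env=env,
    )
```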