I think most of the contestants are simply not writing tests. They probably just watch their AI play and output some kind of debug info on standard error.
I’ve been participating in contests for a while now (I started with the Planetwars Google AI Challenge in 2011), and I’ve been a professional developer for almost 10 years.
I believe tests are a must when you write code in a professional context, but for contests, I think you can skip them and still finish near the top.
Personally, I always keep tests alongside my AI. Knowing what to test, and how, is a difficult task. I tried different approaches, and I would recommend the following:
- Write tests on the basic parts of the code that you know won’t change: for example, the game state, game transitions, pathfinding algorithms, the game evaluation, etc. Those are the key pieces of your AI, and you can’t afford a bug in them.
- Write tests on the behavior you expect any AI to have. For this you will need to write simple tests such as: if I can win and it is my turn, then do it; if the opponent can win, I must place a wall; etc. I see these as integration tests, and you can rely on them even when you completely rewrite your AI. On my side, I always have methods in my AI that let me write tests using only the same inputs we receive on standard input, in string form.
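To make the two kinds of tests concrete, here is a minimal sketch in Python (the post names no language, so this is my choice) using a hypothetical tic-tac-toe-like game; `winning_move` and `parse_state` are stand-ins for whatever your own game-state code looks like. Note how `parse_state` takes exactly the string form you would read from standard input:

```python
import unittest

# Hypothetical minimal game: a 3x3 board as a flat list of 'X', 'O', '.';
# three identical marks on a line win.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def parse_state(raw):
    """Build the board from the same string we would read on stdin."""
    return list(raw.replace("\n", ""))

def winning_move(board, player):
    """Return a cell index that wins immediately for `player`, or None."""
    for line in LINES:
        cells = [board[i] for i in line]
        if cells.count(player) == 2 and cells.count(".") == 1:
            return line[cells.index(".")]
    return None

class BehaviorTests(unittest.TestCase):
    # "If I can win and it is my turn, do it."
    def test_takes_the_win_when_available(self):
        board = parse_state("XX.\nOO.\n...")
        self.assertEqual(winning_move(board, "X"), 2)

    # "If the opponent can win, block." The same helper spots the threat.
    def test_sees_the_opponent_threat(self):
        board = parse_state("XX.\nO..\n...")
        self.assertEqual(winning_move(board, "X"), 2)

if __name__ == "__main__":
    unittest.main()
```

The unit-test side of the advice is the `winning_move`/`parse_state` functions themselves being small and testable; the integration-test side is that both test cases go through the raw stdin string, so they keep working even if the AI behind them is rewritten.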
As you can see, when I write my AI, I rely on solid, tested building blocks, and I have integration tests that guarantee my AI behaves correctly in more or less obvious situations.
Then watch your AI play against other players, and pick your opponents according to the level you expect to reach. If you find an issue, a good idea is to write the game state to standard error as testing code, so you can copy-paste any step of the game and debug your AI directly in your favorite IDE.
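The copy-paste trick can be as simple as echoing every turn's raw input to stderr with a marker, plus a tiny helper that turns a pasted dump back into input lines. A sketch (the turn format and function names are my own assumptions, not from the post):

```python
import sys

TURN_MARKER = "--- turn ---"

def log_turn(raw_lines):
    """Echo the raw turn input on stderr so any step of the game can be
    copy-pasted from the contest viewer into a local debug session."""
    print(TURN_MARKER, file=sys.stderr)
    for line in raw_lines:
        print(line, file=sys.stderr)

def replay(dump):
    """Rebuild the list of input lines from a copy-pasted stderr dump,
    ready to feed into the same parser the bot uses in the arena."""
    return [l for l in dump.splitlines() if l and l != TURN_MARKER]
```

In the bot, you call `log_turn` right after reading the turn input; locally, you paste the stderr block into `replay` and step through your AI in the IDE from that exact game state.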
For TGE I also rewrote the game engine. Assuming my bot only produced legal moves, the rules were super simple to implement: I think I had the game ready in an hour. I reused the TrueSkill comparison tool I wrote when studying TrueSkill (you can read more, in French, in this post: https://groups.google.com/forum/#!topic/codingame-game-of-drones-fr/jOf9LZdOXl0), and I ended up with a local The Great Escape tournament where I can play matches between my AIs and rank them using TrueSkill. For me it was a great opportunity to tune my evaluation function parameters with a genetic-algorithm approach.
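The local-tournament loop can be sketched in a few lines. The post uses TrueSkill; to keep this sketch dependency-free I substitute a plain Elo update, which serves the same purpose here (rank bots from match results). `play_match` is a placeholder for your own engine; everything below is an illustration, not the author's actual tool:

```python
import random

def elo_update(ra, rb, score_a, k=32):
    """One Elo rating update (stand-in for TrueSkill).
    score_a is 1 for a first-player win, 0.5 for a draw, 0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    delta = k * (score_a - expected_a)
    return ra + delta, rb - delta

def run_arena(bots, play_match, games=200, seed=0):
    """Play random pairings between bots and rank them by rating.
    `bots` maps a name to whatever your engine needs to run that bot;
    `play_match(a, b)` must return the first player's score."""
    rng = random.Random(seed)
    ratings = {name: 1000.0 for name in bots}
    names = list(bots)
    for _ in range(games):
        a, b = rng.sample(names, 2)
        score = play_match(bots[a], bots[b])
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], score)
    return sorted(ratings.items(), key=lambda kv: -kv[1])
```

For the genetic-algorithm part, each "bot" would be one set of evaluation parameters; you run the arena, keep the top-rated parameter sets, mutate them, and repeat.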
Top players certainly spend a lot of time on their AI. I have the chance (or not :D) to spend around 1h30 on the train every working day, so I code on my laptop. With one or two evenings on the subject as well, that’s generally more than 20 hours spent coding on a contest like TGE.
One last thing: do not skip the documentation phase. TGE, for example, was very close to a game named Quoridor, and you could find theses on the internet about building an AI for that game. Even if you have to adapt them to the rules, they gave me several ideas (even if I found them quite basic…).
I also always spend a couple of hours with pen and paper, just writing down some ideas and trying to find ways to optimize things. Sometimes you see obvious things you would not have caught in your code.