Summer Challenge 2024 - Feedback and Strategies

Hi! Can I ask what other optimization contests on other platforms? :slight_smile:

Hi sprkrd!

Here is an almost complete list of the most common online coding competitions:

  • Google Code Jam
  • International Collegiate Programming Contest
  • Google Hash Code
  • ICFP Programming Contest
  • Microsoft Imagine Cup
  • Project Euler
  • Advent of Code
  • Company contests

I have two or three others in mind.
I guess someone will complete and adjust this list (perhaps it will be me).

Keep practicing and come back here for each spring, summer, fall and winter challenge.

CodeVanguard :cloud_with_snow::bat:

Code Jam and Hash Code are discontinued.
Also, OP is asking about optimization contests, not general competitive-programming contests.

Ended up #58 Legend.

Better late than never…
I’m really proud I could go so far despite my lack of experience. I’d love for this post-mortem to inspire people with better coding skills to implement this approach and show me how far it can go =) Please get in touch if you do.


I used a GA. My first implementation computed the fitness of an individual by blending together its performance in the 3 minigames (I never considered skating). I realized quite quickly this approach would hit a hard limit. The game isn't about playing all minigames decently at the same time, but about choosing which fights to take and which to concede.

Enter NSGA-II. The main idea is to evolve our populations without any assumption about which game to prioritize or how hard. Given two solutions that outperform each other on different games, we cannot say which is better; they are simply not comparable, and neither of them "dominates" the other. However, there are plenty of cases where a solution outperforms another in every game; that solution dominates the other one. We can then sort solutions into "Pareto fronts": front 0 contains solutions dominated by none, front 1 those dominated only by front 0, front 2 those dominated only by fronts 0 or 1, etc.
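The non-dominated sorting described above is short to sketch. Here a "solution" is just a tuple of per-minigame scores (higher is better); the actual genome representation and objective count are assumptions, and this is the classic O(n²) domination-count scheme, not necessarily the post's exact implementation:

```python
def dominates(a, b):
    """a dominates b if a is >= b on every objective and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_sort(population):
    """Return a list of Pareto fronts (lists of indices into population).
    Front 0 is dominated by nobody; front k only by earlier fronts."""
    n = len(population)
    dominated_by = [[] for _ in range(n)]   # i -> indices that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(population[i], population[j]):
                dominated_by[i].append(j)
                dom_count[j] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:   # all its dominators are in earlier fronts
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front
```

For example, with scores `[(3,1), (1,3), (2,2), (1,1)]`, the first three solutions are mutually incomparable (front 0) while `(1,1)` is dominated by all of them (front 1).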
If you want to delve deeper into it, I cannot recommend enough the last videos of this amazing playlist:

I simulated each player equally, so as to obtain (hopefully) every locally optimal solution, by which I mean the best-scoring solution for a specific trade-off between games. I then matched my solutions against all possible pairs of the enemies' locally optimal solutions, and chose the one giving me the best average "score". For the first half of the game I add the medal points together so my algo doesn't think 3 silvers on the first turn are better than 2 golds and 1 bronze; after a while I plug in the actual multiplicative formula.
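The two-phase scoring above might look like the sketch below. It assumes each minigame is worth 3·gold + silver points and that the contest's final score multiplies the minigames together; the constants and the switch turn are assumptions, not the post's exact values:

```python
# Early-game additive vs late-game multiplicative fitness, as described above.
# medals is a list of (gold, silver) pairs, one per minigame.
# The 3*gold + silver points and the multiplicative total are assumptions
# about the contest's scoring formula.

def minigame_points(gold, silver):
    return 3 * gold + silver

def fitness(medals, turn, switch_turn=50):
    pts = [minigame_points(g, s) for g, s in medals]
    if turn < switch_turn:
        # Summing early avoids the product's degenerate comparisons
        # (e.g. any zero-medal minigame zeroes the whole product).
        return sum(pts)
    prod = 1
    for p in pts:
        prod *= p
    return prod
```

Early on, `fitness` ranks 2 golds + 1 bronze above 3 silvers; once the real multiplicative formula kicks in, the product matches the actual leaderboard score.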

What worked

  • Eugenics! It is trivial to find optimal diving strategies. It is also relatively easy to generate perfect runners en masse; indeed there are often plenty of different perfect runners (there are 15 ways to cover 7 empty squares in 3 turns!). Kickstarting my evolution process with several of these gave amazing improvements.

  • Much bigger population size than expected. I assumed 100 would be a good number and ended up with 682 (683 would make my population occupy a bigger power-of-two number of bytes).

  • Using real time instead of CPU time greatly diminished the number of random timeouts I experienced.

  • Going all-in on not playing skating gave much better results than one could expect. Sure, all my losses could be attributed to poor skating performance. But getting better at the 3 other games meant my enemies would either get starved of medals in the other games or have to deprioritize skating and play the game on my terms.
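The "15 perfect runners" figure from the eugenics bullet can be checked by brute force. This assumes the hurdles actions advance a fixed number of squares per turn (LEFT = 1, UP = 2, DOWN = 2, RIGHT = 3, as I recall the rules; treat that mapping as an assumption):

```python
from itertools import product

# Assumed hurdles movement: each action advances a fixed number of squares.
STEPS = {"LEFT": 1, "UP": 2, "DOWN": 2, "RIGHT": 3}

# Count 3-action sequences that cover exactly 7 squares.
perfect = [seq for seq in product(STEPS, repeat=3)
           if sum(STEPS[a] for a in seq) == 7]
print(len(perfect))  # -> 15
```

Equivalently, the generating function (x + 2x² + x³)³ = x³(1+x)⁶ has coefficient C(6,4) = 15 on x⁷, matching the bullet's claim.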

Possible Improvements

  • My non-domination sort is quite crude. I was unable to understand the n log n approach explained in a paper. I was concerned the n² approach that precomputes every domination relation would be too slow. I ended up with an n² worst case that is often better in practice, but may be slightly off past front 0. There might be a faster implementation out there, a more exact one, or even both.

  • I did not implement the crowding-distance computation. That may hurt in two ways: by diminishing genetic diversity during evolution (even though we are guaranteed some baseline level of diversity due to sorting by non-domination), but most importantly, if an opponent has multiple similar solutions on front 0, I will overfit against these due to how I pick which solution of my front 0 to play.

  • The way I pick which of my front-0 solutions to play is very flawed… I suppose it worked great in Gold because people would pick some locally optimal solution. But assuming we did get all locally optimal solutions (most of the time I had < 5, peaking at ~20 for a few turns per game), this is a Nash equilibrium problem. Iterated elimination of strictly dominated strategies would be a clear improvement… then pick an "anytime" Nash equilibrium search algorithm.

  • My algo doesn’t even know about the existence of skating. I guess you could consider it a fourth dimension, even though it might just add too much noise in the limited computation time. I tried without success to bias my final choice depending on skating, but with mediocre heuristics. I’m pretty sure my algo would get better scores if I told it to drop everything on the last round to guarantee a gold medal in skating x)

  • I couldn’t set up a local testing environment, so there might be a lot of performance left in tuning the parameters.

  • When a game ends, my population diversity crashes; there must be a smarter way to deal with it, such as making the algo understand that once a game has ended we need to keep as many options as possible open in the next turns to accommodate the next round.
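For reference, the crowding-distance computation skipped in the improvements list is short. A minimal NSGA-II-style sketch over one front of objective tuples (the normalization and infinity-for-boundary convention follow the standard algorithm; the data layout is an assumption):

```python
def crowding_distance(front):
    """Crowding distance for a list of objective tuples (NSGA-II style).
    Boundary points on each objective get infinity; interior points
    accumulate the normalized gap between their two neighbours."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    num_objectives = len(front[0])
    for obj in range(num_objectives):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # no spread on this objective
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][obj] - front[order[k - 1]][obj]) / (hi - lo)
    return dist
```

Preferring high-distance individuals when truncating a front is what prevents several near-identical front-0 solutions from crowding out the rest, which is exactly the overfitting failure mode described above.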

A big thank you to the organizers for this thrilling competition!


Hello !

leojean890, 210/5237 (global), 15/1944 (python), 148/980 (gold)

Congratz to CG for having this interesting contest idea and organizing it, and congratz to all players !

My overall approach is a brute force (depth 5) applied each turn for each player.

First, I apply it to both opponents' moves. I score each state reached at depth 5.

eval: I consider that each opponent will play:

  • a move combination favoring several mini-games at once
  • mini-games where they have the fewest medals (counting 3*g+s)
  • mini-games where their opponents have the fewest medals (to deny them)
  • mini-games whose outcome is NOT certain (if the gap between 2 players is huge in a mini-game, we may sometimes consider its outcome fully or almost determined) => I apply some weighting to this criterion too
  • I use a lower coefficient for skating and normalized coefficients for the other mini-games
  • I ignore mini-games that won’t end before turn 100, and I don’t restart games finished during my depth-5 sim with random parameters.
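The depth-5 enumeration driving all of this can be sketched as below. `simulate` and `evaluate` are hypothetical stand-ins for the game's step function and the weighted eval criteria listed above; the 4-action move set matches the contest's shared input, but everything else is an assumption:

```python
from itertools import product

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT"]

def best_sequence(state, simulate, evaluate, depth=5):
    """Try all 4**depth move sequences for one player and return the
    sequence whose reached state scores best under `evaluate`.
    `simulate(state, move)` and `evaluate(state)` are placeholders for
    the real game step and the weighted eval described above."""
    best, best_score = None, float("-inf")
    for seq in product(ACTIONS, repeat=depth):  # 4**5 = 1024 sequences
        s = state
        for move in seq:
            s = simulate(s, move)
        score = evaluate(s)
        if score > best_score:
            best, best_score = seq, score
    return best, best_score
```

At depth 5 this is only 1024 rollouts per player, which is why running it once per opponent and then once for yourself (against their best reached states) fits comfortably in a turn's time budget.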

I store/return each opponent’s state maximizing this eval at depth 5 to use it in the second part.

Second, I apply this brute force to my own moves. I score each state reached at depth 5.

I use almost the same eval as described above, but mostly considering whether I'm beating each opponent's "best reached state at depth 5", and by how much, in each mini-game.

There is another criterion in the eval of my moves, with a bigger coefficient: I add virtual medals (who is currently winning each mini-game, or has already won it?)
and I calculate the real score (the multiplicative one) of each player considering the "new" amount of medals per mini-game per player.
This criterion is strongly weighted in the eval of my moves.

I play the move maximizing this global eval.



Sorry for the late reply. It was mostly the ICPC Challenge sponsored by Huawei (on Codeforces); I did that one 3 times. But I also did AtCoder once or twice and some Topcoder.

There were also the Russian AI Cups, very interesting bot-programming contests (harder than CG and lasting for a month; a nice local visualizer/tester was available). But I'm not sure what will happen with them in the future :slight_smile: