The Great Escape - Feedback

The topic where you can share how you liked the challenge, & more

I liked the challenge a lot. The rules of the game were simple to understand, yet not trivial to implement (think checking for legal walls). And implementing a basic strategy was not too hard, but the game is complex enough to reward good ideas and promote different kinds of strategies.
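
For anyone curious, the “legal walls” part boils down to a reachability test: a wall may only be placed if every player can still reach their goal side afterwards (on top of the overlap/crossing checks, which I leave out here). A minimal sketch of that test with BFS, using my own helper names rather than anything from the referee:

```python
from collections import deque

def has_path(start, goal_cells, blocked_edges, size=9):
    """BFS reachability: can the pawn at `start` still reach one of `goal_cells`?
    `blocked_edges` holds frozenset({cell_a, cell_b}) pairs that a wall cuts."""
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) in goal_cells:
            return True
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in seen
                    and frozenset({(x, y), nxt}) not in blocked_edges):
                seen.add(nxt)
                queue.append(nxt)
    return False

def wall_keeps_paths_open(new_wall_edges, players, blocked_edges):
    """`players` is a list of (pawn_cell, goal_cells) pairs; the wall is only
    acceptable if every pawn keeps at least one route to its goal."""
    candidate = blocked_edges | new_wall_edges
    return all(has_path(pawn, goals, candidate) for pawn, goals in players)
```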

I’m actually very curious as to what strategies other people employed. Maybe I should start a forum thread where we can share strategies :smile:

1 Like

Hey Khenra,
Thanks for the comment! That’s a real ray of sunshine in our day :slight_smile:
I’ve just created a thread for sharing strategies, btw.

I discovered codingame.com through the Platinum Rift challenge, and I have to admit I am now addicted!

As with Platinum Rift, the challenge was a lot of fun, and led me to discover and implement many algorithms I would never have tried otherwise. It was crazy how the bot levels evolved throughout the 2 weeks! At first a simple shortest path and a few walls were enough to reach the top, but towards the end you really needed a complex strategy, with defensive walls and a lot of other stuff!

I definitely noticed improvements compared to Platinum Rift. The matches were fair (all starting positions were permuted), and the organization team was very responsive to performance issues (multiple server upgrades during the challenge).

Although it’s a minor point, I was a bit disappointed by the graphics. The Platinum Rift maps were amazing, and still perfectly fluid, while here the board was way simpler and didn’t seem as fluid. I would have been fine with a text visualization as long as the UI was responsive :smile: Platinum Rift just spoiled me, I guess!

If I have one suggestion for the next multiplayer challenge, it is to explicitly give the other players’ output as input at the start of a round. In 1v1 it’s not important, but in 1v2 or 1v3 it should lead to much more socially oriented games, which is a super interesting part of multiplayer AI. How about a multiplayer game where 10 players compete at once and must form alliances to win? :smiley:

By the way, when will TGE be playable again in the multiplayer training? My final submission was buggy, and I’d really like to see what a correct implementation could have achieved!

I did not like this contest very much. PR2 was a lot more interesting and a lot more fun. Some problems with this contest:

  1. It seems this is a well-established game (“Quoridor”) in Europe, but here in the US we haven’t really heard of it. I didn’t realize there were strategies and even sample AIs available online until the day before the contest ended; apparently there are even papers written on this game.
  2. The game made it too easy for some simple bots to get to the top of the leaderboard, but then it required a lot of work to counter them. Having simple strategies that work really well but take a lot of work to counter does not make for a fun or fair game.
  3. The last battles section still needs work. There is no way to filter or do anything in it, so having to watch battles individually is very tedious.
  4. I would have liked a debug view with fewer graphics, like PR2 had.

There are a lot of problems with the leaderboard and ranking system.

  1. I know that when I submitted code, it would end up at one rank but then drift a lot over time. For example, I think earlier in the contest I was at 40 after submitting but drifted up to 15 over time without resubmitting. Then when I resubmitted I dropped way down. There were a few times when I resubmitted the exact same code and still dropped significantly (15 to 30, 40 to 70, etc.).
  2. It was taking way too long to re-rank after submitting changes: 2 hours towards the end of the contest. Making small changes and seeing how they affected the ranking took far too long to iterate in a meaningful fashion, which is one of the reasons I gave up towards the end of the contest.
  3. With so many people active it was impossible to know your real rank, because at any given time players above you would have their code running in the arena. Also, if you submitted your code at the same time as someone with a really high rank, it could skew your ranking downward. There needs to be a “current rank” and a “pending rank” (a rough sketch follows this list): the current rank is used for the leaderboard, the pending rank is used while code is being run in the arena, and your rank on the leaderboard doesn’t change until your new pending rank is finalized.
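
A rough sketch of that current/pending idea (hypothetical data structures of my own, not anything the platform actually implements):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LeaderboardEntry:
    """Sketch of the suggested split: the public leaderboard keeps showing
    `current_rank` while a resubmission is being evaluated; only `finalize`
    makes the new rank visible to everyone else."""
    player: str
    current_rank: int
    pending_rank: Optional[int] = None  # None when no arena run is in progress

    def start_evaluation(self) -> None:
        # A fresh submission starts its arena run from the old rank.
        self.pending_rank = self.current_rank

    def update_pending(self, new_rank: int) -> None:
        # Moves around freely while the arena matches are being played.
        self.pending_rank = new_rank

    def finalize(self) -> None:
        # The leaderboard only changes once the evaluation is complete.
        if self.pending_rank is not None:
            self.current_rank = self.pending_rank
            self.pending_rank = None
```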
3 Likes

Me too, I enjoyed PR2 more than TGE.

On one hand, TGE is a well-known game and there are lots of papers on the subject, which makes it less creative; on the other hand, I really can’t afford to wait more than 2 hours to see how my AI ranks.

I don’t much like contests lasting only two weeks; I can hardly find enough time to work on them.

Hope the next one will be better.

My feedback falls into two distinct categories: the game, and the ranking.

The game

  1. As a board game, Quoridor is great because it is simple enough for a new player to enjoy, while retaining a lot of complexity for players who want to be more serious about it. I believe this idea of a game that is easy to pick up but difficult to master is what made The Great Escape really enjoyable for me. During the contest, I could see a clear progression in my AI every time I added a new layer of complexity to my code. Find a simple and deep game that appeals to everybody attempting the challenge (that was great in TGE).

  2. I absolutely loved the 1v1 version of the game. 1v2, on the other hand, was much less interesting, mainly due to the asymmetrical nature of the starting positions. Granted, the ranking system improved fairness by playing all the possible starting positions, but this is only a band-aid. Every game should be as fair as possible with regard to the starting set-up (could have been better in TGE).

  3. Fixed strategies. I wasn’t really bothered by them: they are hardcoded, and there were easy counters that could be hardcoded too. The high-ranking players using a fixed strategy had a very good AI behind it to back them up.

  4. For the next challenges, what about other board games such as Abalone?

The ranking

The ranking can, and should, be improved in my opinion.

Disclaimer: my very last ranking match brought me from 2nd place to 4th place in the rankings. I am very aware of the volatility of the ranking system: all my comments are made knowing that maybe my real rank wasn’t even 4th, due to luck in matches 95 to 99. Maybe I should consider myself lucky to be 4th instead of 10th, who knows?!

  1. Communicate. Many questions came up regularly and the answers had to be dug out of the forums. In 1v2, does it make a difference to finish 50% / 0% / 50% compared to 33% / 33% / 33%? (I still don’t know.) What are the criteria for your AI to play a new game when it has already played its 100 ranking matches? How are your opponents chosen for ranking matches? etc.

  2. Bad/buggy AIs. Find a way to keep them from occupying 50% of the leaderboard. As suggested in another topic, include the Default AI in the leaderboard and introduce every new AI at the Default AI’s level in the rankings.

  3. Score fluctuations. It is not realistic for an AI to see its score fluctuate much over time: after playing hundreds or thousands of matches, the ranking should be very consistent. This topic was dealt with in very detailed forum posts, but basically: TrueSkill is not suited to AIs, which have an inherently constant level, and the last game played should not carry more weight than the first game played (see the toy simulation after this list). Inconsistent ranking can only be a source of frustration for the players.

  4. Ranking games should not be limited to 100. They should not be limited to 1000. They should not be limited to 10000. Ranking games should be played continuously. Your first hundred games should place you in a priority queue to quickly approximate your rank, but this rank should then be continuously re-evaluated as other AIs in the system are upgraded.

  5. End of challenge. Depending on the ranking system used, you should decide on an appropriate way to end the challenge. With the current imperfect system, it would have been a good idea to make the TOP 50 AIs play a hundred more games to get a better evaluation of their rank. The end of this challenge was a bit of a letdown…

  6. All of the above applies not only to the challenge, but also to the multiplayer games in the training session. Having witnessed the limitations of the ranking system implemented here, I have no interest in going back to play the other challenges because I’ll have to battle other AIs with only 100 games to play: that’s not enough… Make the games low-frequency, make people click on something every day to get a ranking game that day, do anything, as long as the number of ranking games isn’t hard-capped at 100.
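
To make the score-fluctuation point (3) concrete, here is a toy simulation of mine (simplified numbers, not TrueSkill itself): a bot whose strength never changes is scored once with a plain average over all games and once with a recency-weighted average; only the latter keeps wobbling no matter how many games are played.

```python
import random

random.seed(42)

TRUE_WIN_RATE = 0.60   # the bot's strength never changes
GAMES = 1000
ALPHA = 0.05           # weight given to the most recent result

plain_mean = 0.0
recency_weighted = 0.5

for n in range(1, GAMES + 1):
    result = 1.0 if random.random() < TRUE_WIN_RATE else 0.0
    plain_mean += (result - plain_mean) / n                   # all games weighted equally
    recency_weighted += ALPHA * (result - recency_weighted)   # last game weighted more

print(f"plain mean after {GAMES} games: {plain_mean:.3f}")
print(f"recency-weighted estimate:      {recency_weighted:.3f}")
```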

EDIT: Just in case it’s not apparent enough, I have absolutely LOVED this challenge and plan on participating in the next similar challenges. Key components: 1 week up to 4 weeks, AI vs AI, multiplayer.

2 Likes

This is my first time playing any multiplayer contest on CodinGame. To tell the truth, I really enjoyed it. It was so addictive that I found it hard not to work on it in the evenings, especially during the last few days.

I do have a few suggestions:
(1) The ranking system is good, but still needs a bit of improvement. There seems to be a large bias in favor of earlier pushes, as people have already mentioned. I haven’t studied it carefully, but maybe instead of asking the newcomer to climb the ranking ladder, randomly choose 10-20 players from the current ranking list to play against the new one to provide an estimate of its true score, then start the fine-tuning by matching it against opponents with similar scores (a rough sketch follows these suggestions).

(2) I didn’t even realize that this is a “well established” game until Ozzie pointed it out. This seems a bit unfair to the uninitiated. A quick search on the wiki gives you several papers to read on algorithms. I suggest that if a similarly “well known” game is chosen for a contest, it should be mentioned in the description to give everyone the same starting line.
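
Suggestion (1) could look roughly like this (all names are hypothetical, and it assumes each existing bot already carries a win-rate score on a 0-1 scale):

```python
import random

def seed_new_bot(new_bot, leaderboard, play_match, sample_size=15):
    """Hypothetical placement phase: play the newcomer against a random sample
    of existing bots to estimate its win rate, then pick the opponents whose
    own win rates are closest to that estimate for the fine-tuning games."""
    sample = random.sample(leaderboard, min(sample_size, len(leaderboard)))
    wins = sum(play_match(new_bot, opponent) for opponent in sample)  # 1 if new_bot wins
    estimate = wins / len(sample)

    neighbours = sorted(leaderboard, key=lambda o: abs(o.win_rate - estimate))
    return estimate, neighbours[:sample_size]
```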

1 Like

I have more or less the same feeling as other players.

  1. The game logic was very interesting (although some details were not completely clear and required trial and error to get a proper interpretation of the rules). Unfortunately, I realized only very late that this was an existing game, with algorithms implementing winning strategies already developed. This probably explains the decent level of many bots compared to previous challenges.

  2. The ranking system was flawed, which is, admittedly, frustrating. The fact that battles were organized among closely ranked bots made it possible to create “areas” of similar strategies that could be very difficult to cross; the best example was 3-player games against two AIs that never placed any walls, against which there is no winning strategy. No need for a complex algorithm: just run random battles with random players in random starting positions; a few hundred would be more than enough to get a precise ranking. The complex Bayesian algorithm implemented in the challenge is based on several assumptions (such as transitivity) that were clearly not fulfilled here. And everything was ruined by starting again at the bottom of the rankings with every submission. I think it would be interesting to rank the bots properly (an exhaustive matrix of battles?) and to compare both rankings, because I don’t know if the developers realized how flawed the ranking was.

1 Like

My first multiplayer contest with CodinGame. Thanks a lot! It was fun, challenging, and quick.
Great game choice!
As for the ranking system… well, the good thing is that you get feedback really fast: deploy your code and in 30 minutes you more or less understand whether it was good or bad. There was no such luxury in other contests (where you have to use local testing). However, it probably makes sense to change the way the final leaderboard is determined: just reset the scores after the submission deadline and run 1-2 more days of the contest to find the winners. It is also a good idea to filter out failing AIs.
It was also uncomfortable to code without a version control system, but that’s about CodinGame in general, I guess.
Thanks!

1 Like