Platinum Rift feedback

Oh, I thought that in a 3- or 4-player battle there is one winner and the others are equal losers… Are you saying that the exact place in a battle mattered? I did not find that in the rules :frowning:

The challenge time should be twice as long: some of us cannot spend 100% of our time on it…
TrueSkill looks cool to me; it may not have converged after only 100 battles, but that's nothing more than CPU time, isn't it?
500 battles would be better, and the final count (after the bell) should apply to all participants: some low-ranked AIs may change the TOP 100… It's quite a big issue with TrueSkill; I spent a long time studying that point on Game of Drones.

So the original concept is great! Congrats to the CodinGame team for the amazing integration job!

Now the concept has to improve and grow. Reproducing multi-player challenges many times may just keep repeating the same issues.

The main priority should be to allow the use of debuggers!
Writing trace logs is really an old way of programming and terribly time-consuming.

It should be possible to open sockets and redirect stdin/stdout onto them… It shouldn't be a big challenge to develop… Sounds like a joke…: is there a developer reading this? :slight_smile:
Writing and reading via a socket would let participants run their code on their own computer using their IDE, while the server side would stay on the server as it has so far.
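
To make it concrete, here is a minimal sketch of what the participant-side bridge could look like, assuming the server exposed a plain TCP endpoint per game session (the host, port and bot command are invented; nothing here is CodinGame code). The bot keeps reading stdin and writing stdout as usual, so it can run under a local debugger:

```python
# Hypothetical participant-side bridge (POSIX): the bot's stdin/stdout are
# bound to a TCP socket, so it runs locally while the referee stays remote.
# Host, port and bot command are made up for illustration.
import socket
import subprocess

HOST, PORT = "games.example.org", 9999   # invented endpoint

def run_bot_over_socket(bot_cmd):
    sock = socket.create_connection((HOST, PORT))
    # Pass the socket's file descriptor as the child's stdin and stdout:
    # the bot still just uses input()/print(), unaware of the socket.
    fd = sock.fileno()
    proc = subprocess.Popen(bot_cmd, stdin=fd, stdout=fd)
    proc.wait()
    sock.close()

if __name__ == "__main__":
    run_bot_over_socket(["python3", "my_bot.py"])
```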

EDIT: for various reasons, participants' code has to be compiled on the server. The server must be able to launch as many instances of the code as it needs. Some of them could stay running as long as a debugging session is active, with a timeout for when the developer falls asleep :wink:

Thanks for reading

I rarely debug my code because I cover all my core modules with unit tests. Unit tests are your best friends, believe me.
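
For illustration, this is the kind of test I mean; the `shortest_path` helper and the `my_bot` module are just stand-ins for whatever core module your bot has:

```python
# Illustration only: a unit test for a hypothetical pathfinding helper.
# Covering core modules like this replaces most interactive debugging.
import unittest
from my_bot import shortest_path   # hypothetical module under test

class ShortestPathTest(unittest.TestCase):
    def test_three_zone_chain(self):
        graph = {0: [1], 1: [0, 2], 2: [1]}
        self.assertEqual(shortest_path(graph, 0, 2), [0, 1, 2])

if __name__ == "__main__":
    unittest.main()
```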

In 2-3 hours you can create a test server and a game log fetcher.
It's not a big deal. I've done it for PR and my friend did it for TB (https://github.com/kvas-it/tron_battle).
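
The core of such a test server is just a loop that spawns the bots as subprocesses and shuttles turn data through a referee. A rough skeleton (the referee callbacks are placeholders, not code from either repo):

```python
# Rough skeleton of a local match runner. The referee callbacks
# (initial_state, encode_state, apply_orders, game_over, ranking) are
# placeholders to be filled in with the actual game rules.
import subprocess

def run_match(bot_cmds, referee):
    bots = [subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                             text=True, bufsize=1)
            for cmd in bot_cmds]
    state = referee.initial_state(len(bots))
    while not referee.game_over(state):
        orders = []
        for i, bot in enumerate(bots):
            # encode_state() should return the full turn input, newline-terminated
            bot.stdin.write(referee.encode_state(state, player=i))
            bot.stdin.flush()
            orders.append(bot.stdout.readline().strip())
        state = referee.apply_orders(state, orders)
    for bot in bots:
        bot.kill()
    return referee.ranking(state)
```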

I don't know, but I tried it and got a good score.
http://www.tibslab.com:3000/stats/apatrushev

Though with more games in the final stage my final rank would have been lower, I'm sure.
The TOP 20 factor described by @karalabe is really evil.

Some funny replays:

Specially hand-made. One of the techniques I used: try to win a specific battle (one that is not definitely lost) by hardcoding some decisions at key points, and then try to generalize the rule. This is an example of a hand-made retreat decision.

Brook. While the others are stable, you can collect money for the battle. BIG MONEY.
But you must define what stability means.

@apatrushev @lionel_herzog Regarding match simulators, here's my Platinum Rift one :wink: https://github.com/karalabe/cookiejar/tree/master/tools/arenas/codingame/platinumrift Though I haven't cleaned it up for general use. Just thought I'd throw it out here now that the game's done :slight_smile:

My purpose is to improve CodinGame's overall features instead of rewriting code that simulates server behavior. Rewriting such code may be a good help for implementing minimax-style search, but the whole CodinGame feature set and offering would improve through the use of sockets and remote instances of client AIs compiled with debug information.

By the way, you can't simulate other players' AIs in a simulation; the server can!

I'm sure that your version would be unusable or very difficult to implement for every language on the list.

My server can fetch and replay finished games from the CG site (without AI simulation, just by using the outputs, as the CG JS frontend does). That is enough to see the game situation from the code's point of view.
Also, when you write such a simulator you come to understand the game rules and techniques. Believe me, it's better than just reading the rules.
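
To illustrate the replay part (the log layout below is invented, not CodinGame's actual format): once a finished game is saved as a list of per-turn frames, stepping through it needs no AI at all.

```python
# Illustration only: step through a saved replay without running any AI.
# The JSON layout (a list of frames with per-player outputs) is assumed,
# not CodinGame's real format.
import json

def show_replay(path):
    with open(path) as f:
        frames = json.load(f)
    for turn, frame in enumerate(frames):
        print(f"--- turn {turn} ---")
        for player, output in frame["outputs"].items():
            print(f"{player}: {output}")

show_replay("game_1234.json")
```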

OK, we misunderstood each other: there is no language-related dependency.
The only feature to implement is the ability to share stdin and stdout between server and client. Stdin/stdout are simply redirected onto sockets. That's it.
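
On the server side the change would be just as small, at least in principle: accept one connection per remote player and hand its file handles to the referee in place of the local pipes. Again a sketch, assuming nothing about CodinGame's internals:

```python
# Sketch of the server side: treat an accepted socket as a remote player's
# stdin/stdout. Nothing here reflects CodinGame's actual architecture.
import socket

def accept_remote_player(port):
    srv = socket.create_server(("", port))
    conn, _addr = srv.accept()
    # makefile() yields file-like objects, so the referee can keep calling
    # readline()/write() exactly as it would on a local pipe.
    return conn.makefile("r"), conn.makefile("w")
```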

In fact, I found a hidden constraint. Redirecting in/out is a standard application of sockets, but the CodinGame HTML5 player does not look synchronous. It seems to play only battles that are already finished and stored along with their stderr messages… No calculation during playback, just messages stored with the game instructions…

So it appears that a debug session would be blind with the current HTML players (GoD & PR), but I may be wrong…

If the HTML5 player can play a streamed battle, there won't be any conceptual problem, but if (as I guess) the player can only play battles entirely stored in RAM data structures, the debug session would not see a graphical result on screen.

Nevertheless, I'm fairly sure that such a debug capability would dramatically improve the "big picture" of the concept, giving participants a way to run their program on their own machine and to debug and profile it with their usual tools.
On the other hand, the HTML5 player's synchronicity is the point whose impact on implementation complexity I cannot evaluate.

Only the CodinGame HTML5 maintainers could really evaluate it.

I will take a look at your server…

My three cents:

1 - Add a "remember my display settings" checkbox to the game display (because 99.99% of the time I want to debug my bot, not make fancy screenshots). Maybe you could also have a discussion with your talented web designers about Verschlimmbesserung (an "improvement" that makes things worse).

2 - Provide a default working IDE, but don't spend your energy competing with dedicated tools. You won't be able to provide an incremental compiler with advanced code completion anytime soon. Instead, define an API for code submission and replay, then wait for the community to write plugins for their favorite IDEs. Anyway, most of us are already playing this way: copy-pasting our code, then copying back the stderr output for some kind of local replay (at least for long-running contests).

3 - Regarding ranking issues, have a look at AI Challenge. It was basically the same thing as CG without the IDE (but with an alternate game server where players could connect their bots via TCP). I only participated in the Ants contest and remember well the same kind of discussion about ranking occurring on the forum (but I never fared well enough to worry about it).

So what good is the money if you don't end up using it? That last one was definitely a funny replay :slight_smile: with my one guy sitting on 36 and just wasting money…

I'm a little upset. I was hoping to win a Nexus 6… Before the start of the extra games, I was in 3rd place. After the first extra round, I was (probably) in first place. But I was only third at the finish. And now I'm back in first place in Platinum Rift training. I am always in the top 3. Therefore, I think that I'm a real winner of this contest :).
The "keep coding" email that came 8 hours before the end of the contest was a little disappointing. I was going to sleep at that moment.
The ranking system was terrible:

  1. With the same code I could take 40th place, 20th place, or 3rd place.
  2. On Sunday (24 November) I submitted my AI into the arena 5 times. Jeff06 fell 10 positions on the leaderboard during that time. I think it's because my AI was trained against him. This is not good. I'm really sorry for that…
  3. I think that the organizers did not achieve stabilization of player positions in the extra rounds, and the first 10 (or maybe 20) places are random. It would be great to see some confirmation or refutation of this, for example how likely it is that the first three places were distributed in this way…
  4. Maybe the problem with the ranking system is not a mistake by the organizers. Maybe it is all due to the nature of the game itself, and stabilization of places is almost impossible to achieve…

Despite all this, it was a wonderful contest. Thank you all, and especially CodinGame.

I agree with this. I found no use for the default display mode. Removing it entirely would be an option.

I fully agree too. There are many other points to improve besides code editing, which cannot compete with a local tool that takes advantage of file system access.

I would point out the particular profile of Platinum Rift: how lucky you are when you drop your first pods in round one has a huge impact… A poorly ranked AI has nothing better to do than drop on the same zones to make your AI lose all its chances. There is a very small difference between 1st and last place for a given game seed…

Game of Drones does not suffer from this.

Both games suffer from low-ranked suicide behavior, which AI programmers should take into account: fighting in the arena doesn't show how your AI handles such suicide-bombers, nor how they will affect your score after a while, because other players' battles come from anywhere on the board. The arena implementation just makes your AI fight its immediate score neighborhood.

As a consequence, fighting in the arena at the same time as other good players will change everyone's scores, because highly ranked AIs are, for a short time, considered poorly ranked (until they reach their nominal score).
As a result, you can lose a battle against a poorly ranked but very strong AI, which will cost you a lot of score.

But TrueSkill is really a good scoring system; it will stabilize as much as possible before the end of submissions.

I used this replay as the basis for an algorithm update. It's not so funny (not such big money) after the update.

Would you share your strategy, grmel? If not the code, maybe a description at least? I think your bot is indeed one of the strongest in 1v1 and I’m interested to know how it works.


I completely agree that the ranking system sucked. I was under the impression that everybody's battles would be reset at the end of the contest, otherwise I would have stopped trying to improve my AI earlier.
I made my last submission 3 minutes before the end of the contest, so my bot only ran 117 battles and ended up in position 272. The same code is now at 186 in the training.
I also noticed that my AIs have always fared better when given an opportunity to play many games (and especially when given the opportunity to play against top players), which I suspect is because I used the following strategy:

  • if playing against only one other player, try to win
  • if playing against 2 or 3 other players, ensure I'd get second position and, if possible, win the game, but most importantly make sure I don't lose (i.e. finish 3rd or 4th)

Because of that it would take many battles, but eventually my AI would climb into the top 100.
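
In code terms the whole policy boils down to a switch on the number of opponents (the names are made up, not my actual code):

```python
# Not my actual bot, just the shape of the policy described above.
def pick_objective(num_opponents):
    if num_opponents == 1:
        return "play_to_win"        # 1v1: nothing to protect, go for the win
    return "secure_second_place"    # 1v2 / 1v3: avoid 3rd/4th above all
```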

I have the following proposal for games like Platinum Rift:

  1. Variable map size based on player count:
  • 120 zones for 2 players
  • 180 zones for 3 players
    This way less luck is needed and a better AI is required for a higher rank.
  2. No 4-player battles on small maps like the current maximum size.

Maybe just 2-player games =)?

The problem with the ranking comes from TrueSkill, which expects an AI to be as good in 1v1 games as in 1v2 and 1v3. As more games are played, the level should stabilize if the AI behaves the same in all types of matches. But playing alternately against 1 or more opponents is like having an AI that changes all the time, so the ranking never stabilizes.

This is almost the same problem we had on Tron Beta, with the randomness of starting positions. The correction there was to alternate starting positions and make symmetrical matches, but here that is impossible.

In my opinion, here is what could have been done:

  • consider only 4-player matches
  • for each 4-player match, decompose it into all possible 1v1, 1v2 and 1v3 matches
  • play all these matches on the same map, without updating the ranking
  • combine the results into a single 4-player match and use it to update the TrueSkill ranking

That way, more 1v1 matches would be played, and every update to the ranking would come from a less random result.
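
To make the decomposition concrete (this is just one way to read it): from one 4-player seating you would replay every 2-, 3- and 4-player subset of those players on the same map, then fold the eleven results back into a single ranking update.

```python
# One reading of the decomposition step: every 2-, 3- and 4-player subset of
# the original seating, all played on the same map.
from itertools import combinations

players = ["A", "B", "C", "D"]
sub_matches = [combo for size in (2, 3, 4)
               for combo in combinations(players, size)]

for match in sub_matches:          # 6 duels, 4 triples, 1 full match
    print(" vs ".join(match))
```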


As a developer at CodinGame, on behalf of the whole team I want to thank you for all this feedback.
We always want to go further and offer a better experience each time. We take all your remarks, positive or negative, into account to improve our platform. To sum up:

  • Our mistake for this game was to add too much randomness to the game for the ranking. We will be more careful next time and we will keep searching for new improvements to our ranking system.
  • We will think about the size limit of the source file, even if we believe it is a good thing to have clear and compact code.
  • We will try to be as clear as possible about all modifications, to avoid any lack of transparency on our side. It's hard for us to always be here to answer (especially given the time difference and a 3-week contest), but we will do our best to be responsive.
  • Persistent state for an AI agent (between different games) is a really good improvement idea, and something we have planned. Maybe coming next year. :wink:

Some surprises will come this month, like including the community in part of a contest design! :smile:

Again, thank you everyone.

Keep Coding!
