Specially handmade. One technique I used: try to win a specific battle (one that is not definitely lost) by hardcoding some decisions at key points, and then try to generalize the rule. This is an example of a handmade retreat decision.
Brook. While others are stable, you can collect money for the battle. BIG MONEY.
But you must define what stability means.
My purpose is to improve CodinGame's overall features rather than rewriting code that simulates server behavior. Rewriting such code may be a good help for implementing minimax and the like, but the whole CodinGame feature set and offering would improve through the use of sockets and remote instances of client AIs compiled with debug information.
By the way, you can't simulate other players' AIs in a local simulation; the server can!
I'm sure that your version would be unusable, or very difficult to implement, for each language on the list.
My server can fetch and replay finished games from the CG site (without AI simulation, just by using the outputs, as the CG JS frontend does). That's enough to look at the game situation from the code side.
Also, when you write such a simulator, you come to understand the game rules and techniques. Believe me, it's better than just reading the rules.
OK, we misunderstood each other: there is no language-related dependency.
The only feature to implement is the ability to share stdin and stdout between server and client. Stdin/stdout are simply redirected onto sockets. That's it.
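A minimal sketch of that idea, under my own assumptions (the port handling, the bot program, and the "MOVE" protocol are placeholders, not anything CodinGame actually uses): a tiny "server" accepts a TCP connection and launches an unmodified stdin/stdout bot whose standard streams are wired directly onto the socket's file descriptor, so the bot needs no changes at all.

```python
import socket
import subprocess
import sys
import threading

# Hypothetical sketch (POSIX only): the "server" listens on a TCP port;
# the client side connects and starts an ordinary stdin/stdout bot process
# with BOTH standard streams redirected onto the socket.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # pick any free port
srv.listen(1)
port = srv.getsockname()[1]

def launch_bot():
    conn = socket.create_connection(("127.0.0.1", port))
    # The "bot" is a placeholder one-liner that echoes each game input
    # line back as a move; a real bot binary would work the same way.
    bot = subprocess.Popen(
        [sys.executable, "-u", "-c",
         "import sys\nfor line in sys.stdin: print('MOVE', line.strip(), flush=True)"],
        stdin=conn.fileno(), stdout=conn.fileno())
    bot.wait()
    conn.close()

threading.Thread(target=launch_bot).start()

game, _ = srv.accept()                  # server side of the bot's streams
game.sendall(b"42 17\n")                # one turn of game input
reply = game.makefile("r").readline()   # the bot's answer, read like stdout
game.shutdown(socket.SHUT_WR)           # EOF on the bot's stdin -> bot exits
print(reply.strip())                    # -> MOVE 42 17
```

The point of the sketch is that the redirection lives entirely on the launcher side: the bot remains the same program you would paste into the CG editor, which is why no per-language work would be needed.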
In fact, I found a hidden constraint. Redirecting in/out is a standard application of sockets, but the CodinGame HTML5 player does not look synchronous. It seems to play only battles that are already finished and stored along with their stderr messages… No computation during playback, just messages stored with the game instructions…
So it appears that a debug session would be blind with the current HTML players (GoD & PR), but I may be wrong…
If the HTML5 player can play a streamed battle, there won't be any conceptual problem; but if (as I suspect) the player can only play battles stored entirely in RAM data structures, a debug session would not see the graphical result on screen.
Nevertheless, I'm fairly sure that such a debugging capability would dramatically improve the concept's big picture: giving participants a way of running their program on their own machine and debugging and profiling it with their usual tools.
On the other hand, the HTML5 player's synchronicity is the one point whose impact on implementation complexity I cannot evaluate.
Only the CodinGame HTML5 developers could really evaluate it.
1 - Add a "remember my display settings" check box to the game display (because 99.99% of the time I want to debug my bot, not make fancy screenshots). Maybe you could also have a discussion with your talented web designers about Verschlimmbesserung (an "improvement" that makes things worse).
2 - Provide a default working IDE, but don't spend your energy competing with dedicated tools. You won't be able to provide an incremental compiler with advanced code completion anytime soon. Instead, define an API for code submission and replay, then wait for the community to write plugins for their own favorite IDEs. Anyway, most of us already play this way, by copy-pasting our code, then copying back the stderr output for some kind of local replay (at least for long-running contests).
3 - Regarding ranking issues, have a look at AI Challenge. It was basically the same thing as CG without the IDE (but with an alternate game server where players could connect their bots via TCP). I only participated in the Ants contest and remember well the same kind of discussion about ranking occurring on the forum (but I never fared well enough to worry about it).
I'm a little upset. I was hoping to win a Nexus 6… Before the start of the extra games, I was in 3rd place. After the first extra round, I was (probably) in first place. But I was only third at the finish. And now I'm back in first place in Platinum Rift training. I am always in the top 3. Therefore, I think I'm the real winner of this contest :).
The "keep coding" letter, which came 8 hours before the end of the contest, was a little disappointing. I was going to sleep at that moment.
The rank system was terrible:
With the same code I could take 40th place, 20th place, or third place.
On Sunday (24 November), I submitted my AI into the arena 5 times. Jeff06 fell 10 positions on the leaderboard during that time. I think it's because my AI was trained against him. This is not good. I'm really sorry about that…
I think the organizers did not achieve stabilization of player positions in the extra rounds, and the first 10 (or maybe 20) places are random. It would be great to see some confirmation or refutation of this, for example an explanation of why the first three places were distributed the way they were…
Maybe the problem with the ranking system is not a mistake by the organizers. Maybe it's all due to the nature of the game itself, and stabilization of places is almost impossible to achieve…
Despite all this, it was a wonderful contest. Thank you all, and especially CodinGame.
I agree with this. I found no use for the default display mode. Removing it entirely would be an option.
I fully agree too. There are many other points to improve besides code editing, which cannot compete with a local tool that takes advantage of file system access.
I would point out the particular profile of Platinum Rift: there is a huge impact from how lucky you are when you drop your first pods in round one… A poorly ranked AI has nothing else to do than drop on the same zones to make your AI lose all its chances. There is a very small difference between first and last place for a given game seed…
Game of Drones does not suffer from this.
Both games suffer from low-ranked suicide behavior, which AI programmers should take into account: fighting in the arena doesn't show how your AI handles such suicide bombers, nor how they will affect your score after a while, because other players' battles can come from anywhere on the leaderboard. The arena implementation just makes your AI fight its immediate score neighborhood.
As a consequence, fighting in the arena at the same time as other good players changes everyone's scores, because high-ranked AIs are, for a short time, considered poorly ranked (until they reach their nominal score).
As a result, you can lose a battle against a poorly ranked but very strong AI, which will make you lose a lot of score.
But TrueSkill is a really good scoring system; it will stabilize as much as possible before the end of submissions.
I completely agree that the ranking system sucked. I was under the impression that everybody's battles would be reset at the end of the contest; otherwise I would have stopped trying to improve my AI earlier.
I made my last submission 3 minutes before the end of the contest, so it only ran 117 battles and ended up in position 272. The same code now sits at 186 in the training.
I also noticed that my AIs have always fared better when given an opportunity to play many games (and especially when given the opportunity to play against top players), which I suspect is because I used the following strategy:
if playing against only one other player, try to win;
if playing against 2 or 3 other players, ensure I get second position and, if possible, win the game, but most importantly make sure I don't lose (i.e. finish 3rd or 4th).
Because of that it would take many battles, but eventually my AI would climb into the top 100.
The problem with the ranking comes from TrueSkill, which expects an AI to be as good in 1v1 games as in 1v2 and 1v3. As more games are played, the level should stabilize if the AI behaves the same in all types of matches. But playing alternately against 1 or more opponents is like having an AI that changes all the time, so the ranking never stabilizes.
This is almost the same problem we had on Tron Beta, with the randomness of starting positions. There the correction was made by alternating starting positions and making symmetrical matches, but here that is impossible.
In my opinion, here is what could have been done:
consider only 4-player matches;
for each 4-player match, decompose it into all possible 1v1, 1v2 and 1v3 matches;
play all these matches on the same map, without updating the ranking;
combine the results into a single 4-player match and use it to update the TrueSkill ranking.
That way, more 1v1 matches would be played, and every ranking update would come from a less random result.
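The combinatorics of that decomposition can be sketched as follows (player names are placeholders, and this is my reading of the proposal, not an official scheme): a "1vK" sub-match from one player's point of view is simply a subset of K+1 of the 4 participants, so the full decomposition is every subset of size 2, 3 or 4.

```python
from itertools import combinations

# Sketch of the proposed decomposition of one 4-player match.
# Subsets of size 2, 3 and 4 correspond to 1v1, 1v2 and 1v3 matches.
players = ["A", "B", "C", "D"]   # placeholder player names

sub_matches = [list(group)
               for size in (2, 3, 4)              # 1v1, 1v2, 1v3
               for group in combinations(players, size)]

# C(4,2) + C(4,3) + C(4,4) = 6 + 4 + 1 = 11 sub-matches, all playable on
# the same map; their results would then be folded back into one 4-player
# outcome before a single TrueSkill ranking update.
print(len(sub_matches))                        # -> 11
print(sum(len(m) == 2 for m in sub_matches))   # -> 6 pure 1v1 matches
```

Note that 6 of the 11 sub-matches are pure 1v1, which is exactly the point of the proposal: most of the signal feeding each ranking update would come from the least random match type.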
As a developer at CodinGame, on behalf of the whole team I want to thank you for all this feedback.
We always want to go further and offer a better experience each time. We consider all your remarks, positive or negative, to improve our platform. To sum up:
Our mistake for this game was to add too much randomness to the game for the ranking. We will be more careful next time, and we will keep searching for new improvements to our ranking system.
We will think about the size limit on the source file, even if we believe it is a good thing to encourage clear and compact code.
We will try to be as clear as possible about all modifications, to avoid any lack of transparency on our part. It's hard for us to always be here to answer (especially given the time difference and a 3-week contest), but we will do our best to be responsive.
Persistent state for an AI agent (between different games) is a really good improvement idea, and something we have planned. Maybe coming next year.
Some surprises will come this month, like including the community in part of a contest design!