The Fantastic Bits problem looks great but the simulation part is quite complicated.
There is nothing wrong with having a complicated simulation model; the problem is that, without knowing how it is implemented, it requires a huge amount of reverse engineering.
This is supposed to be an AI contest, not a simulation reverse-engineering contest.
There was an effort to explain how the simulation works in the problem statement, which is nice, but a statement will never fully explain how the simulation exactly works.
So here is the current situation:
There are still a lot of unanswered questions about rounding, collisions, etc.: just look at the forum and the chat.
Those who are unable to implement the simulation, for lack of time or skill, will never be able to try cool approaches like Beam Search, Genetic Algorithms, Minimax / Alpha-Beta, MCTS, etc., because all of them require a simulation.
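To make that dependency concrete, here is a minimal beam-search sketch. The `step` and `score` functions are placeholders for exactly the things you would have to reverse-engineer: a forward model of the game and an evaluation. Every name here is illustrative, nothing is CG's actual API:

```python
def beam_search(state, actions, step, score, depth=4, width=8):
    """Generic beam search: useless without a forward model `step`."""
    beam = [(score(state), state, None)]  # (value, state, first_action)
    for _ in range(depth):
        candidates = []
        for _value, s, first in beam:
            for a in actions:
                ns = step(s, a)  # <- this call IS the simulation we lack
                candidates.append((score(ns), ns, a if first is None else first))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:width]  # keep only the most promising states
    return beam[0][2]  # best first action found


# Toy usage: 1-D state, try to reach position 10.
best = beam_search(
    state=0,
    actions=[-1, 0, 1],
    step=lambda s, a: s + a,
    score=lambda s: -abs(s - 10),
)
# best is 1: moving toward the target is the best first action
```

The same `step` function would be what a GA, Minimax, or MCTS needs as well, which is why not having the simulation blocks that whole family of approaches at once.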
Those who are unable to implement the simulation won’t be able to test locally, and will need to submit every time they want to try something new, which means a longer feedback loop for them and more resource usage for CodinGame’s servers.
Those who did the tedious reverse-engineering work for the Coders Strike Back contest already have an implementation ready and have a head start.
If there is a difference between the server code and the problem statement, people will waste a huge amount of time investigating it.
Could CodinGame just give us the code of the simulation? That would solve all these issues.
Other algorithm contest websites do that, and it works well.
They just keep the problem statement simple and add something like “For more details about the simulation, see link_to_the_simulation_code”, and that’s it.
As a bonus, if there is a bug in the code, people can report it.
I agree. At the moment no one has been able to figure out how the collisions work (they are not elastic at all, and the minimum impulse of 100 sometimes just ends up being the raw speed at the end of the collision for no apparent reason). The rules in the statement are wrong, and many things are missing.
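For context, here is the kind of guess we are all working from: a Python adaptation of the collision routine the CSB community reconstructed (essentially Magus’s write-up), with the minimum impulse changed to 100 as mentioned above. Nothing here is confirmed server behaviour; that is precisely the problem.

```python
import math

MIN_IMPULSE = 100.0  # assumed from the forum; CSB used 120


def bounce(p1, p2):
    """Community-reconstructed collision between two units.

    Each unit is a dict with x, y, vx, vy, mass. Mutates both in place.
    """
    mcoeff = (p1["mass"] + p2["mass"]) / (p1["mass"] * p2["mass"])
    nx = p1["x"] - p2["x"]
    ny = p1["y"] - p2["y"]
    n2 = nx * nx + ny * ny
    dvx = p1["vx"] - p2["vx"]
    dvy = p1["vy"] - p2["vy"]
    product = (nx * dvx + ny * dvy) / (n2 * mcoeff)
    fx = nx * product
    fy = ny * product

    # apply half the impulse to each unit
    p1["vx"] -= fx / p1["mass"]; p1["vy"] -= fy / p1["mass"]
    p2["vx"] += fx / p2["mass"]; p2["vy"] += fy / p2["mass"]

    # enforce the minimum impulse before applying the second half
    impulse = math.hypot(fx, fy)
    if impulse < MIN_IMPULSE:
        fx *= MIN_IMPULSE / impulse
        fy *= MIN_IMPULSE / impulse

    p1["vx"] -= fx / p1["mass"]; p1["vy"] -= fy / p1["mass"]
    p2["vx"] += fx / p2["mass"]; p2["vy"] += fy / p2["mass"]
```

With this reconstruction, two equal-mass units in a fast head-on collision simply swap velocities, while a slow head-on collision gets boosted so the separation impulse reaches 100. Whether the server does the same, and how it rounds afterwards, is exactly what nobody can verify.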
I haven’t really been trying with the simulation so far; I just started to integrate the one I had for CSB, which was mainly based on your post and some code from Neumann.
I reckon that this guessing game about how exactly the simulation works is tedious, as it not only requires knowledge in a specific area but also a lot of time spent on trial and error just to eventually get the thing working.
It can be seen as an additional challenge, indeed; however, the blocking nature of the simulation, which keeps people from integrating any forecasting kind of AI, might be a little too much.
At the same time, releasing the simulation code now would be a little bit unfair to people who have already spent quite some time reimplementing it. But for future contests I believe it would be great if we could just start with a given simulation code instead of spending half the contest duration guessing it…
Pseudo code is a bad idea. You can make an error while writing the pseudo code, so the problem will be the same as now: if there is a bug in the source, we have to reverse-engineer everything to make sure it is a bug and ask CG to fix it. And then we have to wait at least one day for that…
I only agree with the source code release because of time limitations, not because I think it’s the correct way.
You said “majority of top coders”; please think of the community as a whole. CG players are not just the top 50 but also a lot of horrible, slow coders like me who use X language. You can’t make rules that benefit some languages over others. If they release a Java function, Java users will have an advantage.
Pseudo code is just another language; you can make almost the same errors translating from pseudocode as from a real language.
Of course it’s easier to have the full simulation code, but giving pseudocode with many unit tests is, IMO, a better way to convey precise simulation info, and more useful as a learning tool for new coders.
You implement your simulation from the pseudocode and unit test it against the given cases.
Unit testing is very useful, and I think this is a good way to learn it. It will help people a lot in real-life programming, whereas source code mostly serves as a copy-paste tutorial (useful too, to avoid reinventing the wheel). But for copy-paste tutorials I have Stack Overflow.
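As an illustration of what that could look like: alongside pseudocode for the movement step, CG could ship a handful of reference cases like the ones below. The friction value of 0.75 and the truncation toward zero are my assumptions for the example, not confirmed values from the referee.

```python
def move(x, y, vx, vy, t=1.0):
    # straight-line movement during the turn (no collision)
    return x + vx * t, y + vy * t


def apply_friction(vx, vy, friction=0.75):
    # end of turn: speed is multiplied by friction, then truncated toward zero
    # (friction coefficient and rounding rule are assumptions here)
    return int(vx * friction), int(vy * friction)


# reference cases that would ship with the pseudocode
assert move(0, 0, 100, 50) == (100.0, 50.0)
assert apply_friction(100, -50) == (75, -37)  # int() truncates toward zero
assert apply_friction(3, -3) == (2, -2)
```

If your own implementation passes a battery of official cases like these, you know it matches the referee without ever reading its source.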
Personally, I believe this discussion boils down to the question: What should a contest actually be about?
Should reverse engineering the physics simulation be a part of the challenge, or should the challenge focus on designing a solid AI? I am not advocating one or the other, but seeing this discussion steering toward the second option, I have to ask myself what the amazing people at CG are thinking on this. Obviously, they will have thought about this question long and hard and have decided that implementing the physics is part of the job.
Setting obvious problems like bugs and inaccurate descriptions aside for a moment, I think the CG approach does make sense since in “real life” more often than not, you have to implement some kind of simulation since it is simply not available from the get-go. Additionally, these simulations are usually inaccurate or really expensive - either way, approaching this step and knowing how to figure out some good heuristics is beneficial in “real life”.
On the other hand though, the allotted time frame is awfully short to do both while working full time and having adult responsibilities. I would love to have some kind of simulation I can run locally in order to develop advanced AI approaches.
Alas, to me, it all boils down to whether the scope of the contest and the time available are aligned in a sensible way. In my opinion, this is what should be discussed instead of whether or not including simulation code to toy around with makes sense or not. I feel that the latter decision is actually understandable from a “learning something to apply in the real world” point of view and on top of that this kind of decision is inherently opinionated. There are good arguments for both sides. Giving people enough time to actually work on the problem on the other hand seems to be a less opinionated question.
Thank you for this. I agree that this is somewhat ambiguous right now. Is CG aiming to be solely a site for battling AIs, or is it a site for battling coders? They are not necessarily the same thing, and if the latter is the focus, then reverse engineering, applying the output of previous projects to a new endeavor, and general hacking skills could certainly be a big element of the contest.
That being said, those elements are, understandably, not very popular, especially as a recreational activity. While they may be valid activities for a contest, they could also make that same contest relatively unpopular.
Another astute comment. After having spent a couple of hours on the contest, I have nothing worth submitting, and I am also starting to question whether I want to put any more effort into it at this time. We’ll see.
I have to say that actually writing the simulations is one of the activities I enjoy most. (I’m the one who liked writing compilers at uni.) Even though I’ve never gotten a sim that perfectly matches the CG one, they have definitely been good enough to experiment with genetic algorithms or neural networks for training. I’ve not gotten a GA or NN AI good enough to hit the arena, but I learned a lot and enjoyed the afternoons and evenings I spent on them. So maybe it’s not everyone’s cup of tea, but I certainly wouldn’t throw it out. And it’s certainly a bag of tricks to reuse.
And if you think they didn’t give us a sim, they did: it runs online. It’s just not one we can run ourselves in an IDE and step through. Maybe what you’re really asking for is the ability to hook up a debugger to code running on the engine?
My AIs are not very good either but I still submit them, and I have to say the best ones tend to be quite simple. I tend to pick out strategies by observing the AI that beat mine.
In the end, I don’t think writing the sim has helped me shape my bot for this contest (yet). A sim is still only as good as the bot using it, and mine are just “ok.” I’ve toyed with the idea of simulating multiple rounds, weighting the outcomes, etc., but never tried it. Maybe later in the week, work permitting.
Having people write simulators wouldn’t be as bad if every contest was unique.
Currently people who participated in CSB have enormous advantage both in pre-existing codebase and understanding of CG-style physics. I’ve pretty much lost interest in doing anything more than simple tinkering after realizing I’m competing against months of dev time.
Many people re-used their code from Tron and Back to the Code. If you have a “grid-like” game, many codingamers already have a big code base to copy/paste.
And before that, we had Code Busters. You may think “it’s ok, there was no contest like that on CodinGame before”. And you are right. But a well-known Russian website had run a very similar contest just before, so Russian codingamers could copy/paste a lot of code.
I don’t think you can find a contest where no one could copy/paste some old code.
Everyone has had access to similar CG multiplayer games for months already. If you practiced coding for those games, you have more code to build on for the contests. When I participated in my first contest, during Hypersonic, I had already coded 5 bots for other CG multiplayer games beforehand.
So in that sense, I think it is fair for everyone. It’s just like an exam actually. You will perform better if you have prepared for it.
The truth is that Magus (and others, too) even wrote a full document about CSB, the simulation and GA.
It’s like the reference document for any CSB AI.
Top players have helped other people a lot by releasing great info. They didn’t have to do it.
So I’m thankful to them; if they have earned an advantage, good for them. It must have been hard work.
Sometimes it’s like racing against Usain Bolt, but what new players (like me) must do is learn from them and slowly get better.
I might be at odds with you guys but I disagree with the whole perfect simulation logic.
Real-life problems cannot be perfectly simulated, and thus AI developers need to be able to build a model that fits as well as it can. I reckon some of you are using fitting techniques that reach their full potential only with perfect information, but I disagree that this is a prerequisite for developing a great AI.
I’m not discussing about the perfectness of the simulation logic here.
But instead of providing the full source code, wouldn’t it be easier and more maintainable to provide libraries for simulating?
In that case we do not lose the ‘secrets’ of the environment in which we must develop our AI, but we still have the advantage of testing our algorithms offline.
Not sure though about the feasibility of this approach, since CG supports a big number of different languages.