Fantastic Bits - Feedback & Strategy

Pluses: Good game, simple to get started, hard to really advance.

Minuses: Lack of some pertinent info in the turn input, like the score (!) or the opponent’s remaining magic - these could have been very useful for choosing “attack” vs. “defend” behavior.

My strategy:
After much experimentation, I found that Flipendo was far more powerful and predictable than the other spells, so I made it the primary behavior: Flipendo any snaffle that was directly between me and the opponent’s goal as soon as enough magic had accumulated. Further, I added Accio for when a snaffle I had specifically targeted (more on that below) was farther from the opponent’s goal than either of my wizards and no Flipendo shot could be made. I also experimented with the other spells but found that they cost too much magic to justify their benefits.
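The Flipendo condition above can be sketched roughly like this (not the author’s code; the cast cost and the alignment tolerance are placeholder values of mine to tune against the real statement):

```python
import math

FLIPENDO_COST = 20     # assumed cost -- verify against the statement
ALIGN_TOLERANCE = 300  # hypothetical corridor half-width in px

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:
        return dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    return dist(p, (ax + t * abx, ay + t * aby))

def choose_spell(wizard, snaffles, goal, magic):
    """Flipendo any snaffle roughly on the wizard->goal line,
    once enough magic has accumulated. (Accio fallback omitted.)"""
    if magic >= FLIPENDO_COST:
        for s in snaffles:
            between = dist(wizard, s) < dist(wizard, goal)
            if between and point_segment_dist(s, wizard, goal) < ALIGN_TOLERANCE:
                return ("FLIPENDO", s)
    return None
```

A snaffle sitting on the direct line to the goal triggers the cast; one well off the corridor, or insufficient magic, returns nothing.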

Moving: I worked through three different algorithms to determine whether my wizards should target the two snaffles closest to my goal or the two closest to the opponent’s goal - an attempt at determining “attack” vs. “defend” behavior. Generally, if there were lots of “open shots” (opponents not between the majority of snaffles and their goal) I would attack (target the two snaffles closest to their goal); otherwise I would target the two snaffles closest to my own goal in order to “herd” the snaffles. This resulted in mostly defensive behavior, and the defend-attack-defend transitions caused my wizards to spend too much time in transit from one side of the pitch to the other, which ultimately wasn’t very productive.
I revised this to just have my wizards target the two snaffles that in combination represented the shortest combined transit distance at all times, and I jumped up a hundred places in the ranking. I stuck with this and finished 231st overall - never getting higher than about 30th place in the Gold League.
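With only two wizards, that “shortest combined transit distance” assignment can simply be brute-forced over ordered snaffle pairs; a minimal sketch (my own illustration, not the original code):

```python
import math
from itertools import permutations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_targets(wizards, snaffles):
    """Assign one snaffle to each of the two wizards, minimizing the
    combined travel distance. With one snaffle left, both chase it."""
    if len(snaffles) == 1:
        return [snaffles[0], snaffles[0]]
    best, best_cost = None, float("inf")
    for s1, s2 in permutations(snaffles, 2):
        cost = dist(wizards[0], s1) + dist(wizards[1], s2)
        if cost < best_cost:
            best, best_cost = [s1, s2], cost
    return best
```

With up to 7 snaffles this is at most 42 pairings per turn, so the brute force is negligible next to any simulation budget.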

Throwing: Like everyone, I started by just throwing the snaffle directly towards the opponent’s goal regardless of surroundings. This often resulted in back-and-forth throwing with a close opponent, which had a purely random outcome. I then refined this with a procedure that checked whether a proposed throw would intercept an enemy within its first 2000 px. Using this, I checked four possible throws in the following priority order, and if all were “blocked”, I would just drop the snaffle:

  • Throw at the opponent goal
  • Throw in the direction I’m traveling
  • Throw straight down
  • Throw straight up
    Finally I added a check to make sure I wouldn’t score an “own goal”.
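The blocked-throw check described above might look like the sketch below; the 2000 px range is from the post, while the interception radius is an assumed placeholder:

```python
import math

THROW_RANGE = 2000   # only the first 2000 px are checked, per the post
BLOCK_RADIUS = 400   # assumed margin around an opponent that blocks a throw

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def throw_direction(holder, velocity, opp_goal, opponents):
    """Try the four throws in the post's priority; None means drop it."""
    hx, hy = holder
    candidates = [
        opp_goal,                              # at the opponent goal
        (hx + velocity[0], hy + velocity[1]),  # along current travel
        (hx, hy + THROW_RANGE),                # straight down (+y)
        (hx, hy - THROW_RANGE),                # straight up (-y)
    ]
    for target in candidates:
        dx, dy = target[0] - hx, target[1] - hy
        n = math.hypot(dx, dy)
        if n == 0:
            continue
        end = (hx + dx / n * THROW_RANGE, hy + dy / n * THROW_RANGE)
        if not any(point_segment_dist(o, holder, end) < BLOCK_RADIUS
                   for o in opponents):
            return target
    return None  # all four blocked: drop the snaffle
```

An opponent sitting on the goal line forces the fallback directions; with nobody nearby, the direct goal throw wins immediately.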
2 Likes

And what about some randomness in the engine to simulate wind, wizard tiredness, snaffle imperfections, spell accuracy, luck, broom breakdowns, temperature variations, uncomfortable clothes, lack of concentration, air resistance, landscape distraction, loss of balance, … ?

In real life nothing is perfect :slight_smile:

2 Likes

Some people complain about not knowing the actions of opponents. Personally I took it as a kind of fog of war, like the mist in Codebusters, and it didn’t shock me. And speaking of “not realistic” - well, we were wizards flying on brooms and chasing snaffles… complaining about realism seems a bit odd, doesn’t it?

1 Like

The good

  • I thought CG raised the bar with Hypersonic, but it has raised it even higher with Fantastic Bits. I was so excited about the possibilities after reading the spells; it motivated me to work harder, as I could not wait to see good AIs pull off crazy plays. As such, programming and watching the game was rewarding and entertaining.
  • Easy to get into, hard to master, with more depth than CSB.
  • Love the physics and visuals.
  • The stability and speed of the servers were stellar throughout the whole event. Whatever you changed was worth it, it is miles ahead of Hypersonic.
  • The complete transparency and feedback on bug reports and fixes is appreciated!
  • 1v1 is much better than mixing it up with 3 or 4 players games.
  • Finished in 2nd place, getting a bit better at this. :slight_smile:

The less good

  • Through the contests and multi games so far, I have noticed a consistent lack of vital information in the input given. I do not know if it is intentional, but I strongly believe we should be given ALL information about the game state, except where explicitly designed otherwise, like the fog of war in CB. Guessing basic stuff like goals scored and mana, or vital information like spells cast, a bludger’s last collision, etc. is distracting and an unnecessary burden. I’d rather spend that time on something more interesting.
  • The spells could be better balanced. Obliviate and Accio were interesting concepts and could have added a lot more to the game, but their numbers made them almost always an inferior choice to Petrificus and Flipendo.
  • I feel defending properly was maybe too difficult to be worth the bother, as it was often better to just go all-in on offense. Not sure what would be the best way to address this, but reducing mana on accio could be a good start.
  • I think the rerun system could be much better, as I do not think the final points accurately represent the performance differential at the top. Unfortunately there is not enough data left to analyze, as the battle history of the contest has been cut off at the last 1000 matches; there were a lot more. I will leave it to pb4’s thread to discuss this further.
  • The top 4 was clearly ahead in points, yet most matches were played against people outside that group, where one off match takes away a huge chunk of points. This makes the final result feel a bit random sometimes, depending on who you get and at which moment of the submit progression. It would be better if it favored matchups between people who are close in points.
  • Since 1000 matches were run for the top 15 at the end anyway, there seems to be no good reason not to just reset the whole league, making it 100% fair for everyone regardless of previous submission times/results. While I think Magus totally deserves the win, the repeated submits of pb4 on the last day seem to have given him a bigger lead at the end than I think he should have started with.
  • Difficult to avoid, but many bugs in the game physics slowed down progress during the first couple of days, which were spent debugging what was going on instead.

Other notes

  • Unlike a few others, I do not believe we should be given the real simulation code, for a few reasons. One is that it would favor submissions in whatever language it is written in. Additional effort would also be needed to properly document it. And coding a simulation yourself is actually a really good exercise.
  • However, we should be given complete, detailed and accurate information about the physics model used and its implementation. It was left almost completely unexplained except for a brief mention of elastic collisions, despite the thorough description of about every other game mechanic. If it is too much to put in the statement, then provide an external link with more information on elastic collisions and details on more obscure stuff like the minimum half-impulse, which is hard to find information about even on Google. Without Magus’ explanatory post on CSB, many would have just given up trying to figure it out.
  • On that thought, a training puzzle based on the same physics model would be a great way to get introduced to it.
  • I am not sure why choosing C++ and using an evolutionary technique takes so much flak from some in the community. As if it was cheating or too easy to do for some reason (it’s not). And you still need heuristics when you take that route.
  • I find all the code sharing going around post-contest a little awkward, I think it will flood the multi with copycats when it’s released and make it less interesting for everyone involved. Maybe changing the rules enough so it takes more than copy-paste to submit and do well would be a good thing?
  • Kudos to Magus, pb4 and Neumann for a hectic fight in the top 4. The pressure was intense!
  • Grats to Icebox for a top 20 with only heuristics in Python! Very impressive.
  • Thanks TheSauce for beating Magus and pushing me to a bigger lead for a while. :slight_smile:
  • Thanks player_one for CG Spunk, it’s an awesome tool.
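On the elastic collisions and minimum half-impulse mentioned in the notes above: a CSB-style bounce is sketched below. The 120 minimum and the equal masses come from community reverse-engineering of CSB; Fantastic Bits may use different values, so treat every number here as an assumption to verify against the real engine.

```python
import math

MIN_HALF_IMPULSE = 120.0  # CSB value; Fantastic Bits may differ

def bounce(p1, v1, m1, p2, v2, m2):
    """Elastic collision along the collision normal, with a minimum
    half-impulse: the impulse is applied once, scaled up to the minimum
    if it is below it, then applied a second time. Returns (v1, v2)."""
    nx, ny = p1[0] - p2[0], p1[1] - p2[1]
    n2 = nx * nx + ny * ny
    dvx, dvy = v1[0] - v2[0], v1[1] - v2[1]
    mcoeff = (m1 + m2) / (m1 * m2)
    product = (nx * dvx + ny * dvy) / (n2 * mcoeff)
    fx, fy = nx * product, ny * product
    # first application of the half-impulse
    v1 = [v1[0] - fx / m1, v1[1] - fy / m1]
    v2 = [v2[0] + fx / m2, v2[1] + fy / m2]
    # enforce the minimum, then apply a second time
    impulse = math.hypot(fx, fy)
    if 0 < impulse < MIN_HALF_IMPULSE:
        scale = MIN_HALF_IMPULSE / impulse
        fx, fy = fx * scale, fy * scale
    v1 = [v1[0] - fx / m1, v1[1] - fy / m1]
    v2 = [v2[0] + fx / m2, v2[1] + fy / m2]
    return v1, v2
```

For equal masses in a head-on collision the velocities simply swap; the minimum half-impulse only kicks in for slow, grazing contacts, where it adds a noticeable repulsion.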

My strategy

  • More on that later. Stay tuned for the blog :wink:
11 Likes

I beg to disagree. This is a big difference between your AI and mine. I have a “defense mode” :smiley:

1 Like

I strongly disagree on that one! The higher performance of C/C++ comes at a price - scripting languages often have higher expressiveness and allow far faster prototyping. If you want people to be able to choose from different programming languages, nerfing some of them seems absurd to me. If people think that’s unfair, there’s only one proper way to solve the problem: limit the contest to one (or a very few, similar) languages. I personally wouldn’t like that though.

I don’t find it awkward, I think how Magus shared it (without the evaluation) was a good way to do it. But I completely agree that there should be two or three rule changes, which should be big enough to throw off everybody who doesn’t even understand what he’s copy/pasting (not just something simple like “max thrust is now 200 instead of 100” or “Accio power is now 5000 instead of 3000” where you just need to change a constant).

Talking about rule changes, I actually did NOT like the fact that there were no rule changes after bronze.

4 Likes

Hi

This being my first contest I don’t really have much to compare with, but that said, I liked the challenge as it was.
I didn’t have much time to put into it until the last day and the evening before, so at the start of the competition I made a quick “grab the nearest snaffle and throw it at the center of the goal”. This worked quite well, and some improvements such as not throwing directly at an opponent or bludger, plus some other small tweaks, put me at 9th-10th place in the middle of the contest. By the last Saturday evening I had of course dropped quite some places as more players had improved their code. I spent nearly all of the last competition day writing a completely new version, including a simulation of the environment - and it failed miserably, landing me at place 96, which I am not too happy about.

I think my magic and my opponent’s should be available in the input. I can of course keep track of my own, but keeping track of my opponent’s magic reserve is nearly impossible (examining each entity’s movement could theoretically give this information). Furthermore, I find it illogical that the magic amount is per player and not per wizard.
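Tracking your own magic is indeed straightforward bookkeeping, which makes its absence from the input all the more annoying; a sketch, with the per-spell costs, regen rate and cap as placeholder values to verify against the statement:

```python
# Hypothetical numbers -- verify every one against the official statement.
SPELL_COST = {"FLIPENDO": 20, "ACCIO": 15, "OBLIVIATE": 5, "PETRIFICUS": 10}
MAGIC_PER_TURN = 1
MAGIC_CAP = 100

class MagicTracker:
    """Book-keeps our own magic: +1 per turn, minus the cost of each cast."""
    def __init__(self):
        self.magic = 0

    def start_turn(self):
        self.magic = min(MAGIC_CAP, self.magic + MAGIC_PER_TURN)

    def cast(self, spell):
        """Deduct the cost if affordable; return whether the cast happened."""
        cost = SPELL_COST[spell]
        if self.magic < cost:
            return False
        self.magic -= cost
        return True
```

Doing the same for the opponent would mean inferring casts from unexplained accelerations, which is exactly the burden the post complains about.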

1 Like

I know you do. I did not say defense is useless, but it is definitely harder to execute and get a consistent benefit from compared to offense, as reaching the top with barely any specific consideration for it should demonstrate. And I still did not need it to have an IDE version beating your last submit (but not pb4 and Neumann) :stuck_out_tongue:

I should have been clearer, I was not referring to Magus, but others did share a lot more than that.

Out of curiosity, do you refer to the code I shared when you say you feel “awkward” about it? For what it’s worth, there are 1500 lines of code missing. It would take a huge effort to make anything work from what I shared :slight_smile:

Independently from your answer, here are my thoughts on code-sharing. I see two very different reasons to read another participant’s code.

  1. As a beginner, to understand how other people organize their code and obtain good performance.
  2. As an advanced programmer, to find small tricks used by other players which might be too small to mention in a post-mortem.

During my time spent on Codingame, I have been very interested in both options. As a recent example of 2), I learned a few interesting tricks for an advanced beam search by reading Kimiyuki’s Hypersonic code. For 1), I will definitely learn to use polymorphism and inheritance by reading Magus’ and Jeff’s code.

Given that I learned everything by methods 1) and 2), I am very attached to at least some form of code sharing. In your view, what kind of guidelines should we follow when sharing things at the end of a contest?

5 Likes

I was not directly referring to you either, pb4. But GitHub repositories and archives of entire codebases have been posted in chat after the contest (not naming anyone). Though to be honest, sharing direct evaluation code like you did may indeed be a bit trickier than Magus’ simulation code.

You raise great points about learning from what others are doing, and I am definitely not arguing against sharing ideas, strategies and whatnot. What I do not completely agree with is the way to go about it. For instance, Magus’ blog on CSB is the perfect example of how to communicate ideas and concepts others can learn from, with bits of useful code here and there. In order to apply that knowledge, one still has to fully assimilate it and code it oneself in a functional form, as proof of learning.

Having huge chunks of code (or complete code) readily available as-is is a shortcut I disagree with, for a few reasons. A student watching a teacher solve a problem is not the same as a student having to apply the knowledge and solve it themselves. One can mindlessly copy-paste elastic collision simulation or evaluation (with some grunt work, granted) or whatever else, but what will have been learned from it? How are you encouraged to experiment and find cool and creative things on your own when you can just follow the beaten path without having to figure it out? Isn’t that the most important point of these puzzles and competitions on CG? What if the solution to every puzzle was available a click away on this website before you even attempted it?

It’s only a matter of philosophy of course, and I expect divergences from my point of view. It was certainly not my intention to make a big deal out of it (it’s really not). I am not even sure my opinion on the matter is fully thought out yet. My concern is more about making the multi fairer by changing the rules a bit and forcing adaptations to be made. I would rather compete against people who put in the work for it.

10 Likes

Hi, #4 here.
I’ll not explain much about my strategy as it is very similar to what the top 3 will soon explain in the blog post, but I have a few things to say anyway.

The original concept of the contest was nice, but there was a major issue in my opinion:

In most of the past contests, the way to go has always been the GA/MC way, i.e. the “let’s reproduce the engine and test a lot of stuff in a few milliseconds” way. I’m fine with that, since it still requires a lot of skill to tune such algorithms and get decent results.

The difficulty is either in making the engine as fast as possible (STC), or in making the evaluation function as intelligent as possible (CSB). I prefer the latter. But this time, I felt that the major difficulty, regardless of engine performance, was getting the engine complete and bug-free. I’m not the fastest coder, but it took me 5 days to code the complete engine without any bugs. I spent the rest of the contest on the “intelligent” part, the evaluation function, but in the end it was very basic, and I still managed to make the top 4 with it. I had the chance to look at and discuss the top 3 solutions, and they lacked the “Oh wow” factor that you get when you read TGE’s Recar post-mortem, for instance (or maybe I missed something, but I don’t think so). That’s because, imho, the difficulty was put on the wrong aspect of the contest, and there wasn’t enough time to spend on the interesting part. AI contests should be about making things “intelligent”, not about trying to make basic things work. I can’t blame CG for that, since they try to provide us with original concepts every time, and they succeed most of the time (they did with FB). Or maybe they tried to favor AIs based on heuristics, I don’t know, but it didn’t work well.

Designing multiplayer AI games is a complicated task, and the easy way to tackle the problem I mentioned earlier is to make the engine open source. Everyone would then start with the same material, and the contest would be more about strategies than a reverse-engineering exercise. The 1-week format is good, but it requires less complicated rules in order to make room for what makes an AI contest interesting.

8 Likes

I’m sure there will be a lot of people (myself included) who tried to write a sim but didn’t have the time to get it performing better than a simpler reflex/FSM AI. I think this was mainly down to my inexperience in writing a decent evaluation function that converges on a good solution, rather than the quality of my sim, which was far from perfect but possibly good enough to get ‘reasonable’ results from.

I know there are a lot of people giving the same feedback regarding the need for sim writing, and the need for optimization, but I would hope that this doesn’t cloud the fact that this really was an awesome competition which was very interesting to work on, watch and compete in.

Personally, I’d much rather see a more complex (but interesting) problem than one that has been given overly simplified mechanics, which may make it somewhat less involving.

For example, if you’d like the game simplified, in which way would you prefer?

  • Fewer spells?
  • Making collisions between entities simply stop both at the point of impact? Perhaps only having snaffles bounce off the walls?
  • Fewer snaffles?
  • Completely removing Bludgers?

For me, doing any of the above diminishes the level of interest/challenge of the game…

2 Likes

It took some time, but here is my post-mortem document.

https://drive.google.com/open?id=0BwV4JhqN8FZabDRDVllaM0ptNE0

I did not go into the details of why a metaheuristic works so well here: read Magus’ and reCurse’s articles for that, or my old post-mortems.

Hope you’ll like it !

20 Likes

Looks like an amazing document.
“Looks like” because I’ve not read it in detail: I’m not sure I want to spoil the most interesting/fun part of the game for myself, since during the contest I only worked on the game simulation, and I hope to do the AI part when the multi is released :slight_smile:

PS: where can we read Magus’ and reCurse’s articles?

I sent my article in French to CodinGame a few days ago. They have to translate it, and then it will be posted on the blog: https://www.codingame.com/blog/

2 Likes

Mine is still in progress, I have been more busy lately than anticipated, but it’s coming! :slight_smile:

1 Like

I made my FB code available to whoever happened to be in chat for that one hour, but I’ve been thinking about it a bit more since and would agree with reCurse. Little bits and pieces of code are much more helpful and constructive for people trying to learn, and I will probably refrain from sharing full code in the future.

I also don’t think having the full engine at the beginning of a contest would help much, especially if it’s only available in one particular language. FB was particularly difficult because of the complexity of the simulation rule set and the 100ms time limit, which turned the 1-week contest into a race of who could reverse-engineer the engine most accurately and quickly. A simpler rule set that gets expanded or modified in multiplayer would make more sense.

Other than that, this was a really fun contest (and my first one)! Really looking forward to the next one in February.

1 Like

My thoughts and post-mortem of the contest are here

Here’s my FB post-mortem for anybody that’s interested.
http://raphaelmun.com/2016/12/19/fantastic-bits-a-i-competition-post-mortem/

2 Likes