Fall Challenge 2020 - Feedbacks & Strategies

The topic where you can tell us how you liked the challenge and how you solved it.

As the challenge game will be released soon as a classic game on the platform, please don’t share your code publicly.

Global leaderboard: https://www.codingame.com/contests/fall-challenge-2020/leaderboard/global

Also, please take a few minutes to take our survey: https://codingame.typeform.com/to/aFt94JqN

Next event: Spring Challenge 2021
https://www.codingame.com/contests/spring-challenge-2021

Thanks a lot for participating!

15 Likes

First of all, thanks CodinGame for this contest. It was fun. Simple rules, clear goal.

I will probably finish at around 13th place in the Legend league.

My code is a simple beam search with iterative deepening and the following tricks:

  • I use a stupid dummy for my opponent, and only at the first turn of my simulation. The dummy just maximizes its own score (brew a potion, or cast the best spell to maximize its score at the end of the game).
  • If my AI learns a spell during the simulation, I don’t add the tax to the other tome items. Because if you don’t simulate your opponent, your AI will think it can take the tax back on a later turn … (and in most cases, your opponent just takes it).
  • I check the path of actions my AI wants to take. If I find a LEARN action before any BREW, I play that LEARN action immediately. You can cast your spells later, but your opponent can learn the spell you want before you get to it …

I use a beam size of 1000 nodes. My evaluation function is simple:

  • 1 point for each score point
  • 1 point for each tier 0 ingredient in my inventory
  • 2 points for each tier 1 ingredient in my inventory
  • 3 points for each tier 2 ingredient in my inventory
  • 4 points for each tier 3 ingredient in my inventory
  • 1.1 point for each crafted potion
  • 0.5 point for each learned spell

I also use a patience coefficient of 0.95, so the score of a node in my beam search is eval*(0.95^depth). For each node, I add the score of the parent node.
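
In a small C++ sketch, the node score looks roughly like this (the State struct and its field names are illustrative, not the actual code):

#include <cmath>

struct State {
    int score;     // in-game score points
    int inv[4];    // ingredients, tier 0 to 3
    int brews;     // potions crafted along this path
    int learns;    // spells learned along this path
    int depth;     // depth of the node in the beam search
};

double nodeScore(const State& s, double parentScore) {
    double eval = s.score
                + 1.0 * s.inv[0] + 2.0 * s.inv[1] + 3.0 * s.inv[2] + 4.0 * s.inv[3]
                + 1.1 * s.brews
                + 0.5 * s.learns;
    return parentScore + eval * std::pow(0.95, s.depth); // patience coefficient
}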

Fun fact: my first code was an MCTS without any evaluation function, and it was 50th in the Gold league.

32 Likes

Thanks for the contest.

I didn’t really manage to get into it, with timeouts at 20 ms or even less. They got very obnoxious, and lots of energy was wasted on fixing timeout issues. On my last submit I tested with a 40 ms timeout and there seem to be no timeouts at the moment, but this doesn’t help, as until now I wasn’t counting on having the extra 20 ms to do something with.

Not a fan of the witches. I like having a Legend of Link theme though.

Glad you removed the upgrade cards.

10 Likes

BlitzProg - 326th overall

Being in the middle of another time-consuming event, getting into the Gold league was difficult, especially since my Monte Carlo approach ended up being almost useless against the surprisingly strong Silver boss, after I first thought it would be enough.

I tried to follow the advice of others to get into the Gold league. The meta apparently was: just buy the first 8 spells, then BFS with what you have. This wasn’t enough, so I had to come up with a few upgrades: beam search, and starting the search queue with the possibility of buying a spell in exchange for a score “penalty”.

The latter would let my AI pay for a few extra spells in the specific cases where that would save a few turns, which helped immensely (games get very quick as the AIs get better; a one-turn difference will often change the course of a game). The penalty comes as a decay factor set one turn ahead for these nodes, so the AI would only buy spells that were really worth the time spent.

The decay coefficient is set to 0.9 and the maximum beam width to 256; I didn’t really try to tweak these. The decayed score doesn’t add the node’s full current score to the overall node score, only what was gained during that turn.

I ignore the opponent; it’s simply not part of my calculations.

I use simplified game states for my beam search (I do not copy the whole game state, only things such as the deltas, the score and a bit array of spell availability). Another trick I used was making no distinction between BREW, REST and WAIT - these nodes are all the same. Any time my AI wouldn’t cast a spell in the search, it would automatically try to brew any of the available orders, and if none could be brewed, it would “REST”. I don’t think the order of the BREW/REST commands makes any difference unless you’re doing advanced opponent prediction, maybe, but that sounds very complex.
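
A compact node along those lines might look like this (field names are illustrative, not my actual code):

#include <cstdint>

// A few bytes per node instead of the full game state.
struct Node {
    int8_t   inv[4];     // ingredient counts per tier
    int16_t  score;      // points gained along this path
    uint32_t castable;   // bit i set = spell i is available (the bit array)
    uint8_t  brewed;     // bit i set = client order i already brewed
    uint8_t  depth;
};

Keeping nodes this small makes copying them cheap, which matters when you expand thousands of nodes per turn.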

Overall quite complicated but fun challenge, thanks!

8 Likes

Mid Legend, C++, used beam search like most others.

For most of the contest I tried to make Monte Carlo work. It was quite OK, and I think that towards the end of the contest it could potentially have reached bottom Legend. I found a few heuristics while trying MC and kept them for my beam search:

  • Learning only one spell per rollout: easy to implement, and it turned out you don’t really want to learn more, except at the beginning, where I simply learned 4 zero-cost spells.
  • A reward for the full conversion of spells, i.e. for both spending and gaining. Simple logic: more is often better than less, and most spells have a positive conversion rate.
  • For the same reason, always casting the maximum number of times available. Using fewer than the maximum could sometimes lead to local maxima, where you would brew a potion earlier and leave yourself with an empty inventory.

That is probably it; I tried a lot of other things, none of which gave any result. I kept my simple MC solution with 1k rollouts for opponent prediction. It was able to catch most of the losses for sim termination and gave a reasonable approximation of the opponent’s score; basically, you don’t need much more.
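
A skeleton of that kind of flat Monte Carlo prediction (State, Move and their methods are stubs standing in for the real engine):

#include <random>
#include <vector>

struct Move { int id = 0; };
struct State {
    int turn = 0, oppScore = 0;
    bool terminal() const { return turn >= 30; }              // stub
    std::vector<Move> legalMoves() const { return {Move{}}; } // stub
    void apply(const Move&) { ++turn; }                       // stub
};

// Average the opponent's final score over fully random playouts.
double predictOpponent(const State& root, int rollouts = 1000) {
    std::mt19937 rng(12345);
    double total = 0;
    for (int i = 0; i < rollouts; ++i) {
        State s = root;
        while (!s.terminal()) {
            auto moves = s.legalMoves();
            s.apply(moves[rng() % moves.size()]);
        }
        total += s.oppScore;
    }
    return total / rollouts;
}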

Thanks to CG and all the staff members for such a great event. Thanks to all the participants, especially the ones who were hanging out in chat, making participation even more fun and interesting. We still have a recalc ahead of us, with quite a tough battle at the top. I'm looking forward to it and to the next contest.

13 Likes

#197 (117 in gold) #1 in India

First of all, hats off to all of the CG staff who organised such a big contest in such a good manner.

This game was quite easy to understand and even easy to start, but hard to master. On the first day of the contest I was torn between many algorithms - minimax, simulation, BFS, genetic algorithms, MCTS, etc. To be honest, I think any of these algorithms would have worked well, but I had to pick one, so I chose my favourite - simulation. As it was already night in India, I slept without reaching Bronze. The next morning I made Bronze and then started writing my simulation code. I have these classes in all -

class timeout;
class inv;//inventory
class gamestate;
class action;
class player;
class sim;

So, after making all the classes, I started by separating my moves from my opponent's moves, which is simple: just split the ‘OPPONENT_CAST’ moves out of your input and save each group in its own array.

#CHECKING VALIDITY OF MOVES

So, validating whether a move is playable at a particular gamestate looks something like this -

if (action.type == REST) return true; // REST is always a valid action

else if (action.type == LEARN && me.inventory.inv0 >= action.tomeIndex) return true; // enough tier-0 to pay the read-ahead tax

else if (action.type == BREW && me.inventory.inv0 >= -action.d0 && me.inventory.inv1 >= -action.d1 && me.inventory.inv2 >= -action.d2 && me.inventory.inv3 >= -action.d3) return true; // a brew only consumes, so no capacity check is needed

else if (action.type == CAST && action.castable == 1 && me.inventory.inv0 >= -action.d0 && me.inventory.inv1 >= -action.d1 && me.inventory.inv2 >= -action.d2 && me.inventory.inv3 >= -action.d3 && me.inventory.total() + action.d0 + action.d1 + action.d2 + action.d3 <= 10) return true; // a cast can add ingredients, so the result must fit in the 10 slots

else return false;

After this I developed the game engine. The next day was Diwali (one of the major festivals in India), so I wasn't able to code that day. By the next morning, i.e. the morning of Nov 15th, my simulation was ready, but with too many bugs: I was nearly 2000th with my heuristic and got to 1200th with that buggy simulation. I got nervous about what would happen if I couldn't fix my code, so I started fixing bugs, and that day I ended at 729th. The next day Silver was going to open, so I fixed some more bugs and reached 400th, and after Silver opened I fixed nearly every bug.

#WHAT'S NEXT??

So, after a lot of bug fixing + optimization, I read the rules again and - bingo - I had forgotten to include repeatable casts in my scope. I was treating them as non-repeatable casts, and fixing that gained me a lot.

Now let's talk about the effect of each action on the gamestate -

#BREW -

Brew basically makes a potion, so just add its price to your score and mark the order as visited so that you won't choose that action again in the same sim.

#LEARN

I found learning the most complex action of all. The deltas and repeatable value given in the input for a LEARN action are the deltas and repeatable value of the CAST spell you will learn. So, first subtract tomeIndex from your tier-0 ingredients (the read-ahead tax), then create a new valid CAST action with the same deltas and repeatable value as the LEARN action, and finally add the taxCount to your tier-0 ingredients - basically min(taxCount, left_space_in_inventory).
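
Using the classes above, applying a LEARN could look roughly like this (a sketch only: me.spells and the exact field names are assumptions, not my exact code) -

void applyLearn(player &me, const action &tome) {
    me.inventory.inv0 -= tome.tomeIndex;                  // pay the read-ahead tax in tier-0
    action spell = tome;                                  // same deltas and repeatable value
    spell.type = CAST;
    spell.castable = 1;
    me.spells.push_back(spell);                           // usable as a normal spell from now on
    int space = 10 - me.inventory.total();
    me.inventory.inv0 += std::min(tome.taxCount, space);  // collect the accumulated tax
}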

#CAST

So, for a cast, simply do -

me.inventory.inv0 += d0;
me.inventory.inv1 += d1;
me.inventory.inv2 += d2;
me.inventory.inv3 += d3;
castable = 0;//mark it visited
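// for repeatable spells, apply the deltas above once per repetition
// (a CAST action can be played with a "times" multiplier)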

#REST

It's the simplest action in this game: simply set castable = 1 on all of your CAST actions.

#WAIT

Really, just ignore it.

Now I will tell you about the backbone of the simulation: the evaluation system.

#EVALUATING EACH ITERATION

At each depth, when I apply an action to the gamestate, if that action.type == BREW then value += MAX_DEPTH - depth (so earlier brews are worth more).

and after completing one iteration I evaluate it further with -

value += money; // my in-game score
value += inv1 * 2;
value += inv2 * 3;
value += inv3 * 4;

But in the last 2 days my rank dropped a lot, so I changed to beam search with the same eval.

That's all I have for my code. More optimization and a better eval can give you a better rank.

Thanks :slight_smile:

12 Likes

finished 254th (mid gold)

First time joining a contest, loved it. Thanks a lot to the whole codingame crew.
I also loved the visual design.

My overall tactics:

Learn Phase:

Learn the 1st spell in the tome for 9 rounds. This is very stupid, but at least I can always grab the tax the opponent leaves to get a head start.

Play Phase:

1] Look for possible brew paths

BFS for BREW moves, with score evaluation on the final node:
score = 1 * tier0 + 3 * tier1 + 3 * tier2 + 3 * tier3 - turns * 3.5

Then pick the best path found for each potion.

2] Lock on to potions on the left (=best value)

Take the path that was found for the leftmost potion (but it must have a price greater than 10), then keep looking for better-scored paths for the same potion.

My code was able to search up to a depth of 6-9 (3,000-6,000 game states in 40 ms).

The scoring function was decided with a bit of trial and error, but the general idea is to allow longer paths if there is a high enough return in ingredients. This was the last version I was happy with.
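
Written out as a function, the step-1 score is simply (illustrative):

double pathScore(const int inv[4], int turns) {
    // 1 point per tier-0, 3 points per higher-tier ingredient, minus 3.5 per turn spent
    return 1.0 * inv[0] + 3.0 * inv[1] + 3.0 * inv[2] + 3.0 * inv[3] - 3.5 * turns;
}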

3 Likes

Hi all,

It was my first attempt after a long break for JEE prep, so I was very rusty :frowning:

For the language, I chose the usual favourite --> Python 3. But as performance became more important, I had to port my code, and after 6 months of doing no programming other than Java in school, I wasn't comfortable enough with any language other than Java.

A bonus for Java is that it's easy to build a simulation, as the open-source game referee at https://github.com/CodinGame/FallChallenge2020/tree/main/src/main/java/com/codingame/game is already written in Java :slight_smile:

So a basic random search that removed the really bad moves was the way I went, and this ended up being the final AI. The rest of the resubmits all came from random modifications to the eval.

To start with, it's a good idea for everyone to look at the excellent streams out there. I found the one by errichto particularly helpful.

About the contest itself:
Pros:

  1. Massive participation!
  2. Well organized.
  3. Lots of very strong newcomers.

The only “bad” part was the slow submissions, which is to be expected with 7K players. So I don't think it matters that much…

Well done organizers, the contest was run very smoothly.

And also, excellent job by the University of Tokyo, 42Tokyo and the other Tokyo universities; they completely dominated the top :slight_smile:

2 Likes

I am not sure if I understand this correctly - does it mean you skipped LEARN actions found after a BREW? Or did you force the LEARN to be done ASAP, so whenever the search returned CAST 1 -> CAST 2 -> LEARN 3 -> CAST 3 -> BREW, you switched the order of the commands to LEARN 3 -> CAST 1 -> CAST 2 -> CAST 3 -> BREW?

Thanks!

Yes, this is it.

But a path like CAST 1 CAST 2 BREW 1 LEARN 4 will stay as it is. I just force a LEARN ASAP when there is no brew before it.

1 Like

why not only score points?

I tested (locally and with CGBenchmark) and I was better with 1.1. I think the +0.1 is just useful in case of a tie between 2 nodes; crafting a potion is then used as a tie-break.

Hello! First of all, it was probably the best art of any contest so far. Good job to the artist(s). If there's any additional time that can be spent on the viewer for the multiplayer, I'd suggest 2 things:

  1. Most games are over when a player reaches 6 brewed potions; it would be nice to have some visual feedback on how many potions have been brewed, other than counting the notifications in the timeline.
  2. When watching a game, especially at higher speed, it can be confusing to see which potion was brewed by whom, since they spawn at the same level and can cross paths with a potion from the other player. It would be easier to watch if the potion spawned a little higher, over the cauldron, and then translated to the brew recipe, so it would be clearer which witch brewed it.

Other than that, about the game itself: it's really not my cup of tea - not enough interaction between players, almost an optimization game. I gave up on Friday after being stuck in the top 100 of Silver for 4 days. I usually don't like search games, and that was pretty much the only way here. It's the first contest since C4L that I've given up on because I didn't like it. I don't want to be a sore loser, haha - I mostly posted for my viewer suggestions.

And one last thing: can this one get a name afterwards? “Fall Challenge 2020” is OK for a contest, but for the everlasting multiplayer game it's a weird name. The spring one was an obvious reference to Pac-Man, so everybody calls it Pacman, but it would be nice to have an official name for the multiplayer.

18 Likes

Could name it Spice Girls

23 Likes

14th place

Thanks for another great contest and congratulations to the winners!

Initially my bot learns 6 spells, which have some hardcoded scores based on winrates. It runs a depth-2 min/max search against the opponent. That's the last time we care about the opponent.

Then beam search like Magus described, with some differences: 0.9 patience, and the goal is to brew 3 potions ASAP (unless fewer are enough to end the game). The scoring function was quite similar. It allows learning at most 2 more spells if that shortens the path to the goal.

I wish I had at least some opponent prediction - some replays show my bot getting stuck with a full inventory because of the opponent's next-turn brew - but I didn't have that much time at the end after starting to play a bit late.

12 Likes

Nice contest overall - thanks to CG for aiming for a top-quality experience, as always! For me, participation was fun and productive learning-wise; as for cons, I hope the technical issues will be investigated and solved.

This time getting to Bronze was quite easy: choosing a random valid action did the trick. Bronze to Silver wasn't complicated either: hardcoded learning of spells, checking for brewable potions, and random sampling from valid spells or REST.

After that, the difficulty level started to rise noticeably: there was a distinct need for a proper search. So in the end I spent most of the time playing with numpy rather than with the contest itself, and I still failed to put it to good use.

I would be grateful for some hints from fellow Pythonistas on how they got good results in such a tough performance-wise competition (my guess is some numpy/scipy tricks). I'm truly amazed some Python coders managed to reach Legend - great job!

9 Likes

For those who used a scoring system for the spells: how useful was it? I put something quick together for my bot and didn't bother any further with scoring individual spells like in LOCM:

blues scored as: sum(blues)/2 + 1
each green: 3 points
each blue: 4 points
each orange: 5 points

I counted the score as the duration to get there and ignored anything with a duration higher than 5.
I also ignored anything with a gain lower than 2, i.e. score of what you get - score of what you pay >= 2.

3 Likes

Big thanks for this contest. It was very unique and weird, in both a good and a bad sense.

I finished ~200th in Silver (~800th overall), which is rather shocking to me, as I usually end up in Legend. Many other usual suspects for the Legend league didn't make it either; some never even got out of Bronze! Perhaps it's not their type of game, or they just got bored.

Anyway, I used some simple sorting/heuristics to get out of Wood and Bronze, until it became apparent that I really needed some simulation/state-crunching. So I started with a randomized search: just pick a bunch of moves and try to apply them, ignoring the ones that are impossible to do. It was rather performant and much better than plain sorting/heuristics, but still not good enough.

So I ended up with an A* search. After some optimization it was able to explore 10K-20K nodes per turn (in C#), which should be almost enough even for a BFS to climb out of Silver, but it wasn't. I made the heuristic for node priority (a mix of the distance to the nearest potion and the average distance to all potions) independent from the node scoring (based on the priority, but including the potion's price). I think this should have made it perform similarly to everyone's favorite beam search, but the results disagree.
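
The split looked roughly like this (a C++ sketch for consistency with the other snippets in this thread; the distance estimate and the mixing weights are placeholders, not my real values):

#include <algorithm>
#include <array>
#include <vector>

// Rough "turns to afford this potion" estimate: missing ingredients, weighted by tier.
double distance(const std::array<int,4>& inv, const std::array<int,4>& cost) {
    double d = 0;
    for (int t = 0; t < 4; ++t)
        d += std::max(0, cost[t] - inv[t]) * (t + 1);
    return d;
}

// Node priority: mix of the distance to the nearest potion and the average distance to all.
double priority(const std::array<int,4>& inv, const std::vector<std::array<int,4>>& potions) {
    double nearest = 1e9, sum = 0;
    for (const auto& p : potions) {
        double d = distance(inv, p);
        nearest = std::min(nearest, d);
        sum += d;
    }
    return 0.7 * nearest + 0.3 * sum / potions.size(); // weights are arbitrary here
}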

What helped AI perform better:

  • Counterintuitively, removing [some] pruning. At first, I stopped the search when a certain depth was reached or a potion was brewed. Removing that consistently improved my ladder position by a few hundred places.

  • Adding an endgame. If either player has 5 potions, I score every potion the same (positive if I win on total score and negative if not), with a bigger penalty for depth.

What didn’t help:

  • Drafting changes. More spells or fewer; better spells by value or not; spells highly different from each other or not (I used the average and minimum distance between spell vectors); drafting after midgame or not; changing constants. Everything got similar results.

  • Altering the node-priority heuristic. I tried removing the minimum distance to a potion and using only averages. I tried adding the inventory score. I tried adding the price of the nearest potion. Every change made things worse.

Things I liked in the contest:

  • A nice change of contest type compared to the previous couple: no more “fog of war”, a thorough search of the state space is required, and heuristic approaches are mostly useless.

  • An optimization contest that is really multi-player. We had “solo” optimization contests before and they were rather boring. This one isn’t.

  • How drafting is implemented: not a separate phase, but seamlessly integrated into core gameplay.

  • I had to do some heavy code optimization work which I usually don’t do.

Things I didn’t like:

  • Nearly impossible to debug and understand. Either your AI finds something or it doesn't, and you have no idea why. Some things work, some do not, but you never know why. Looking at the games you lost gives you almost nothing.

  • Everyone is doing the same things, which, coupled with the previous point, makes everything much less fun.

So, in the end, I both regret not spending more time on this contest (I could have climbed higher) and don't (it would have been just more of the same).

13 Likes

Hello,

Great challenge, lots of fun, finished #4. Here is my postmortem.

Bronze

search

I went for a beam-search approach, with a hash system to avoid duplicate nodes.
The main issue with beam search is that it might overlook moves that only provide a gain in the long term (i.e. learns), depending on the evaluation function. And I had no idea how to take the learned spells into account in an evaluation function, so I did my searches without LEARN from the beginning.

Retrospectively, this might have been a bad idea. But now that my code is a complete mess, I don't have the courage to reimplement learns in my beam search to test whether it performs better.
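
For the hash system, one cheap approach is to pack everything that distinguishes two nodes into a single 64-bit key (this layout is illustrative, not my actual scheme):

#include <cstdint>
#include <unordered_set>

// 4 bits per ingredient count, one bit per spell, one bit per client order.
uint64_t stateKey(const int inv[4], uint32_t castableBits, uint8_t brewedMask) {
    uint64_t key = 0;
    for (int t = 0; t < 4; ++t)
        key = (key << 4) | (inv[t] & 0xF);
    key = (key << 32) | castableBits;
    key = (key << 8) | brewedMask;
    return key;
}

// Usage: keep an std::unordered_set<uint64_t> of seen keys and skip a
// candidate node if seen.insert(stateKey(...)).second is false.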

learn

Learning the first spell for the first 6 turns works extremely well in the early leagues.

Silver & early gold

search

Same as bronze. Only a couple of optimizations, bugfixes and minor improvements (such as managing the urgency bonus), but no major change.

learn

I spent way too much time trying to improve this. I ended up doing something like this:

  • turn 1 to 4: simulate the game with every spell (learned and learnable) but one, leaving a different spell out each time. The lowest scores obtained by these simulations identify the critical spells that should be learned early.

  • turn 5 onwards: one beam search per learnable spell, comparing the results with the simulation without learning. It is somehow a depth-1 BFS with a beam search as the evaluation function. It might be slightly better than including learns in the search, as each spell has its own beam, but on the other hand it doesn't consider sequences of learns.

Late gold and legend

search

I added another beam search for the opponent. The data from the opponent's beam search is then used to constrain the other beam searches.
More precisely:

  • Remove the potions that were taken by the opponent at the previous depth, considering the most profitable path for the opponent,
  • Remove the nodes where my in-game score is below the opponent's in-game score if he can get to 6 potions,
  • Do not make 6 potions if the opponent can get a better in-game score.

learn

I tried different approaches, but couldn’t do better than what I did in gold.

Conclusion

Huge thanks to the CG team for making this possible! And congratulations to pb4!

24 Likes