Fall Challenge 2020 - Feedbacks & Strategies

How do you implement vanilla beam search and beam stack search? Could you please share a link to pseudocode for both? I failed to find good descriptions that are free of confusing context like neural networks.

Do you need to use a priority queue (n-ary heap) for the frontier? Do you then need to evaluate all intermediate nodes, not only leaves? Is the evaluation function the same for leaves and inner nodes?

1 Like

First CG challenge for me. I ended in Gold one hour before the end of the challenge (400th in Gold).
I took part in the challenge to learn Rust and I aimed for Gold, so I'm happy since both of my goals were met.
As a bonus, I learned a lot about graph search and real-time constraints (so many timeouts).
My strategy for the challenge:
Two days before the challenge: made a CG account, did some puzzles in Rust and set up my dev env.
Wood: I checked all possible actions for the turn, sorted them by price and took the first one.
I set a safety stock of 2, meaning that I don't cast a spell if it would lower my ingredients below that stock, so my bot always had some of each tier available to brew.
I managed to get to Bronze without a search; I was only looking at the current turn.
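A minimal sketch of that kind of greedy turn (in Python for readability, with illustrative names and representations; the actual Rust bot differed in the details):

    # Greedy one-turn choice: brew the priciest affordable potion, otherwise cast a
    # spell that respects the safety stock, otherwise rest. All names are illustrative.
    SAFETY_STOCK = 2

    def pick_action(brews, casts, inventory):
        # brews/casts: lists of (action_id, delta, price); delta changes the 4 tiers.
        affordable = [b for b in brews
                      if all(inventory[i] + b[1][i] >= 0 for i in range(4))]
        if affordable:
            best = max(affordable, key=lambda b: b[2])   # highest price first
            return f"BREW {best[0]}"
        for action_id, delta, _ in casts:
            after = [inventory[i] + delta[i] for i in range(4)]
            consumed_ok = all(after[i] >= SAFETY_STOCK for i in range(4) if delta[i] < 0)
            if consumed_ok and all(x >= 0 for x in after) and sum(after) <= 10:
                return f"CAST {action_id}"
        return "REST"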
Bronze & Silver: First I just added a basic learn that took the production spells (not the repeatable ones). Then I needed a search.
I quickly got a BFS going, but it was very slow because each node was big (700 bytes) and copying it took a lot of time (using smaller types was a big optimisation).
My search included learns, rest and casts, and stopped at brews. But it was the first time I had to program a graph search and my code was not optimised at all.
Still, since I had implemented learns and repeatable casts, my bot was well placed and I got Silver when the league opened.
My search was very slow and I knew I had to optimise before I could even consider ranking solutions or finding paths with multiple brews.
I refactored for 5 days, using bitwise operations to store the states, heterogeneous collections for the spells and brews, and a single spellbook shared by all nodes (yes, I had a copy of the book in every node before). But it was a pain to add the repeatable casts, so I had to roll back to a version from 5 days earlier. (That was a tough call.)
Silver to Gold: After the rollback I added some of the optimisations I had come up with during those 5 days:
bitwise state, smaller nodes, one spellbook.
I was happy with the performance (depth of 9, 30 to 50k nodes, 8k solutions found).
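To give an idea of the bitwise-state trick, here is a rough illustration in Python (the field widths and names are guesses, not my actual Rust layout):

    # Rough illustration of packing an inventory into a small integer: each of the
    # 4 ingredient tiers holds 0-10, so 4 bits per tier (16 bits total) is enough.
    def pack_inventory(tiers):
        state = 0
        for i, count in enumerate(tiers):
            state |= (count & 0xF) << (4 * i)
        return state

    def unpack_inventory(state):
        return [(state >> (4 * i)) & 0xF for i in range(4)]

    assert unpack_inventory(pack_inventory([3, 0, 2, 1])) == [3, 0, 2, 1]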
I started to add other features, like looking at what my opponent was doing, etc.
But Gold was hard to reach…
Then I came up with a rough formula to sort my solutions and quickly got to the top of Silver.
I was about to go to sleep (it was 8 in the morning on the last day) but my bot was climbing out of Silver, so I got hyped and stayed awake to watch it reach Gold.

What helped me:

  • I could run tests offline and that was waaaaay faster than submitting on CG.
  • Smaller types
  • Tests

Where I failed:

  • My nodes could have been smaller (I learned about unions after the challenge)
  • I didn't use the 1 s allowed on the first turn
  • I didn't profile my code (I wanted to but didn't have the time to make it work)
  • I didn't know Rust well enough to spot the costly operations (mallocs, for example).

What I liked:

  • Loved Rust, the challenge, and the tough competition.

What I didn't like:

  • Everything in one file; I hated this so much that by the end I was just searching around for everything. (My file was 1k lines.)
  • The search makes your bot harder to understand. (Sometimes I look at replays and don't get what my bot is doing and why.)
  • Maybe the leagues were too big.

What I wished for:

  • An arena to pit bots against human players
  • More chatty bots, plz put your stats into the log ><

Here is one of my matches (one that I won :wink: )


Thanks to CG, and to all the competitors.

~1200th place overall

My simple approach was as follows:

  • at first learn some spells
    • no heuristic to determine the best one
    • learn the first one, so as not to spend anything on it
    • goal: learn 7 new spells
  • then for each move run a BFS (sketched below)
    • no special logic here
    • expand the action tree only if I can cast the spell (also check if spell is repeatable)
    • goal: run until I find the first potion I can brew
  • if I didn’t find a potion to brew in given time - cast a random spell

I also tried running the BFS until the most valuable potion was brewable, but usually I wasn't able to find one in time, and ended up doing a random move.
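A minimal sketch of that BFS (Python, with made-up state and action representations; rest/exhaustion and repeatable casts are ignored for brevity):

    from collections import deque

    # Expand states breadth-first and stop at the first state where any potion can
    # be brewed. Representations are illustrative, not the actual bot's code.
    def bfs_first_brew(start_inv, spells, potions, max_nodes=50000):
        # start_inv: tuple of 4 ingredient counts; spells/potions: {id: delta tuple}.
        queue = deque([(start_inv, [])])
        seen = {start_inv}
        while queue and len(seen) < max_nodes:
            inv, path = queue.popleft()
            for pid, cost in potions.items():
                if all(inv[i] + cost[i] >= 0 for i in range(4)):
                    return path + [f"BREW {pid}"]        # first brewable potion
            for sid, delta in spells.items():
                new_inv = tuple(inv[i] + delta[i] for i in range(4))
                if min(new_inv) >= 0 and sum(new_inv) <= 10 and new_inv not in seen:
                    seen.add(new_inv)
                    queue.append((new_inv, path + [f"CAST {sid}"]))
        return None  # nothing brewable found in time: fall back to a random spell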

1 Like

I finished #27 overall, and #1 in Python.

I am not going to lie: between the random timeouts and the natural speed disadvantage this language has, this contest was a bit frustrating for the Python users out there.

For me, it just meant I had to try and find different approaches to have a chance to compete. I spent a couple days doing just that, trying other things like turning the game into an optimization problem for example, but nothing worked. In the end I think this was just a game where the best approach was to try to model the game exactly in your bot, including not overfitting due to unknown future potions/spells, finding good heuristics to “guide it”, and then just let it do its thing.

I managed to get up to #3 gold on Sunday afternoon with a pruned BFS that got me to ~7-8 depth on average, and then converting this BFS to a beamsearch apparently (I wasn't there to see it :/) made me stomp the boss and led to my final ranking. A few things I did slightly differently from others probably each helped a bit, and a lot in aggregate; they came from being very constrained by the depth until I changed to beamsearch, which forced me to develop a solid heuristic. In more detail:

  • I didn't use 1 but 2 patience coefficients: one of 0.95 for your inventory value, and one for potions that started at 0.99 but gradually went down to 0.95 as you brewed more potions and got closer to the end game. The idea behind that is that brewing faster matters more and more as the game progresses, but doesn't really matter in the beginning. (A loose sketch of this eval follows after this list.)
  • Similarly, my eval of the inventory depended on how many potions p I had already brewed, taking into account for each ingredient type only max(6-p, actual count). This helped guide my AI towards not accumulating too much soon-to-be-worthless inventory. Once I reached branches that led to the end of the game, I only kept those and of course used my actual bonus inventory score as the inventory eval.
  • For learns, for the longest time I just forced my bot to learn a spell during the first 6 turns no matter what, otherwise it would often just start brewing. Ultimately, I added all the different learns you could do in 1 turn or in 2 turns (only allowing a learn on the 2nd turn if you also learnt on the first turn) and did not incorporate learns past that, and this approach, used with a beamsearch with a large depth, actually made my bot a smart learner! I removed the 6-turns learn rule, and it seemed to be able to pick the right combos before starting to brew. Interestingly, @kovi mentioned that among the top 30 I was the one that learned the fewest spells on average, and I noticed that sometimes I even learned only 4 spells and still somehow won.
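A loose Python sketch of such an eval; the tier weights, the potion coefficient's decay schedule, the way the two coefficients combine, and the min-based cap are all my assumptions here rather than the exact bot:

    # Loose sketch of a two-patience-coefficient eval. Tier weights, the potion
    # coefficient's decay schedule and the min-based cap are assumptions.
    TIER_WEIGHTS = [1, 2, 3, 4]

    def inventory_eval(inventory, brewed):
        # Cap each tier count so soon-to-be-worthless ingredients stop adding value.
        return sum(TIER_WEIGHTS[t] * min(inventory[t], 6 - brewed) for t in range(4))

    def node_score(rupees, inventory, brewed, depth):
        inventory_patience = 0.95
        # The potion coefficient tightens from 0.99 towards 0.95 as potions get brewed.
        potion_patience = max(0.95, 0.99 - 0.01 * brewed)
        return (rupees * potion_patience ** depth
                + inventory_eval(inventory, brewed) * inventory_patience ** depth)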

Once I moved to beamsearch, I only used 150 nodes, the lowest I’ve seen in the comments. I think that was fine-ish thanks to a good pruning, and allowed me to reach a depth of 15-20, which was beyond my expectations in Python.

Thank you very much to CG, this contest was great as usual and I fully understand why contests see more and more participants! A few small points of feedback:

  • I know this was discussed before and I don't want to open a can of worms, but for contests where it's basically a simulation arms race, it would be great if languages could somehow be made a little more equal. I hear the old argument of "use the best tool at your disposal", but being forced to always use the hammer isn't that fun either. I'm sure there are ways to at least close part of the gap, such as a conservative benchmark against C++, compensating for GC time, etc. Anyway, only 3/80 AIs in Legend used Python (but 2k / 7k users), which might be the lowest total yet (?) and is either way extremely low compared to the 60/80 using C++ (but 1.5k / 7k overall).
  • That's more CG being a victim of their (again, well deserved!) success, as @Razielwar said, but especially towards the end of the contest, submits literally took 2+ hours, which made it really hard to roll out several incremental features.
  • As others have mentioned, the random timeout issues were also a little frustrating.
  • More than usual, this felt like a single-player game where taking into account what the opponent does did not matter much.
17 Likes

@Magus, why did you decide to add iterative deepening to your beamsearch?

~Top-20 Silver, 560 overall, C#

But it’s not “the low-performance GC-managed language” to blame, it’s my lack of experience with game tree search. (And that Silver Boss was freaking tough!)

In fact, the modern .NET runtime and libraries allow you to write quite efficient code. Let me share some tricks:

// Rent a buffer from the shared pool instead of allocating a new array every turn
Action[] actionsFromPool = ArrayPool<Action>.Shared.Rent(actionCount);
// Patch a score byte into a packed ulong via a stack buffer, with no heap allocation
Span<byte> bytes = stackalloc byte[sizeof(ulong)];
BinaryPrimitives.WriteUInt64BigEndian(bytes, _data);
bytes[4] = unchecked((byte)score);
return BinaryPrimitives.ReadUInt64BigEndian(bytes);
// Hardware popcount, e.g. to count castable spells in a bitset
int castableCount = BitOperations.PopCount(leaf.CastableBitset);
// Cheap combined hash for deduplicating states
public override int GetHashCode() => HashCode.Combine(_data, CastableBitset, LearnedBitset);
3 Likes
  • Overall: 2355
  • Silver: 1812

To reach the Bronze league I implemented a simple algorithm that chose the best potion possible to brew. If there was no potion I could brew, I cast a random possible spell. If there were no possible spells, I rested.

To get to Silver I did a basic DFS over game states. I also enforced a limit of learning at most 6 spells.

I was not a big fan of this game.

  • First of all, the experience with the game itself: the rules had some weird quirks that were tedious to implement (e.g. the urgency bonus), the game mostly focused on speed, and most solutions were very similar (DFS / beam search + heuristic).
  • Second of all, the idea of brewing potions is not very exciting. I really enjoyed contests where the game was fun and relatable (e.g. Hypersonic). It is much more pleasurable to write a bot for a game that you would be eager to play yourself.
5 Likes

Because there is little difference (in the code) between a beamsearch at fixed depth and a beamsearch with iterative deepening.

And fixed depth is dangerous. If you fix the depth at 10, for example, you can time out because you may never reach that depth in 50 ms.

1 Like

When you are surrounded by C++ but you keep trying to fight ^^

By the way, I added a safety check to avoid being countered by my opponent when I think I can win with the 6th potion brewed. And I corrected two bugs: wrongly computed bonuses, and a completely wrong pages score (I'm starting to think that this part of my code is useless).

6 Likes

Ranking 1121 globally
Firstly, the game was pretty hard to understand and complicated. What I really hated was the rules changing when your code goes up in leagues, because every time I went up a league I needed to find every single place in my code that had to be changed or upgraded. Second, the game wasn't about thinking which AI algorithm to use and how to optimize it; the moment I realized it was just "write a really fast BFS and collect every bit of info you need as fast as you can, or there is no way to climb the leaderboard", it became more like an algorithms class from university and less like AI for games. Third, I prefer to write a simple simulator of the game split across a few files, which keeps the code organized and easy to optimize (running it a few hundred times to check how good the parameters are, etc.). For this game, after a painful few minutes of understanding the rules, I gave up almost instantly on writing one on my computer, mainly because I didn't know how potion recipes were randomized. And as everyone can see, CodinGame's IDE is bad for writing code: after getting into the Silver league I needed to scroll a few hundred lines up and down to check code in different places.

So, about my algorithm: I created a graph of 13310 nodes (I encoded an inventory as ingr[0] + ingr[1] * 11 + ingr[2] * 121 + ingr[3] * 1331). Of course there won't actually be as many as 13310 reachable nodes in this graph (a lot of codes are impossible to reach with this encoding, because ingr[0] + ingr[1] + ingr[2] + ingr[3] <= 10), but I didn't have the motivation to optimize this; the game killed it. Then there is an edge connecting nodes u and v if casting some spell in inventory state u gets me to inventory state v. So I run a BFS from my actual game state and find a path to each potion (or rather to every state that has enough ingredients to brew that potion). There was a really frustrating piece of code to write so I could check whether I need to rest before travelling along an edge. After that I have the potion prices and the number of turns needed to reach each potion, so I divide price by number of turns and choose the best (there were probably much better heuristics, but as I said, the game killed my motivation). I keep each path stored, so after the initial BFS I don't need to rerun it unless the game state changed (the enemy brewed a potion or I did).
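A small sketch of that encoding (Python; the helper names are mine):

    # Each of the 4 ingredient counts (0-10) becomes a base-11 digit, so every
    # inventory maps to a single integer node id.
    def encode(ingr):
        return ingr[0] + ingr[1] * 11 + ingr[2] * 121 + ingr[3] * 1331

    def decode(node_id):
        ingr = []
        for _ in range(4):
            ingr.append(node_id % 11)
            node_id //= 11
        return ingr

    # Only ids whose digits sum to at most 10 correspond to reachable inventories.
    valid_ids = [i for i in range(11 ** 4) if sum(decode(i)) <= 10]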

When the game made it possible to learn spells, I added extra BFS calls: if I have already found some path, then I have free time, so I call the BFS as many times as the time limit allows, and for every spell that can be learned I find new paths in the graph assuming I had learned that spell. If there is now a better path than without learning it, I learn it (if I have time to check several spells, I choose the one giving the biggest improvement). I had a hyperparameter to limit how many spells I would learn.

1 Like

Thanks for the postmortem. Can you explain the following eval?

0.2(1.25 Repeatable_Spells+ 1 Non_Repeatable_Spells)
- Past 11 spells

Can anybody share a bot which uses the beam search technique and made it to Legend or Gold? I could not find any practical example online where beam search is used for writing a good bot. All the references I see relate to NLP!

2 Likes

Maybe @Magus?

Sharing full code is forbidden.

And sharing my code for the beamsearch part would be useless. There is code specific to the current puzzle; I use 2 arrays instead of one to avoid sorting the nodes directly (I sort an array of pointers instead)…

If you are looking for some pseudocode, just look for the pseudocode of a simple BFS. A beamsearch is pretty much like a BFS: you just add a sort and you keep the N best nodes.
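In rough Python pseudocode, that idea looks something like this (expand() and evaluate() are assumed game-specific helpers; the beam width and time budget are placeholders, and this is only a generic sketch, not anyone's actual bot):

    import time

    BEAM_WIDTH = 1000   # illustrative value

    def beam_search(root, expand, evaluate, time_budget=0.045):
        # Like a BFS, but after expanding a whole depth the frontier is sorted by
        # score and truncated to the BEAM_WIDTH best nodes.
        start = time.time()
        frontier = [(evaluate(root), root, [])]        # (score, state, actions)
        best = frontier[0]
        while frontier and time.time() - start < time_budget:
            children = []
            for _, state, actions in frontier:
                for action, child in expand(state):
                    children.append((evaluate(child), child, actions + [action]))
            if not children:
                break
            children.sort(key=lambda c: c[0], reverse=True)
            frontier = children[:BEAM_WIDTH]           # keep only the N best
            if frontier[0][0] > best[0]:
                best = frontier[0]
        return best[2][0] if best[2] else None         # first action of the best line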

1 Like

So I didn't know of iterative deepening before you mentioned it, but from what I saw online it seems like you go through the whole tree from the start again at each new iteration. Is that actually what you are doing? Or are you keeping in memory the last depth's states you stopped at, and going one depth further from that last stopping point until you run out of time?

Do you sort the whole fringe/frontier or just the children of the current node? Is it possible to use a priority queue instead (basically a heap sort)? By which criterion do you sort? By the static evaluation of the node? Is it the same function as for scoring leaf nodes?

Here is some pseudo-code for beam search from @Agade's post-mortem; maybe that helps.

2 Likes

Iterative deepening is not the right term. I don't restart from the start; I use the nodes from the last depth.

The whole depth at once.

The score of the node. And the score of a node is parent.score + game.eval() * COEF_PATIENCE^depth
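In code form, that accumulation looks roughly like this (the coefficient value is only a placeholder):

    COEF_PATIENCE = 0.95   # placeholder value

    def child_score(parent_score, game_eval, depth):
        # Each node adds the state evaluation, discounted by its depth.
        return parent_score + game_eval * COEF_PATIENCE ** depth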

2 Likes

Ah, got it. I just couldn't understand the purpose if you had really done that, but it makes sense then: basically keep going until you run out of time. Thanks!

@Gronahak Congratulations on getting to gold with Python and such a low number of nodes visited.

Glad I'm not the only one who tried linear algebra! Here was my approach:

Given the matrix of costs of all spells (one spell per row), I estimate the cost of each ingredient, which I then just multiply with the potions' costs. My formula for the approximate ingredient cost, using the pseudo-inverse, was:

    import numpy as np
    # c: spell delta matrix (one spell per row); castable: 1 if castable now, else 0
    time_cost = np.asarray(2 - castable)  # using the castable property is optional: otherwise all 1s
    inv_c = np.matmul(np.linalg.inv(np.matmul(c.transpose(), c)), c.transpose())  # pseudo-inverse of c
    approx_cost = np.matmul(inv_c, time_cost)  # approximate cost of each ingredient, in turns

@Raf It works with any number of spells, and gives an approximate cost, which is somewhat the average exchange cost over all possible resources, measured in rounds.
Also, one way to ensure invertibility is that all spells must satisfy the "increasing quality of gems" property: either purely producing spells, or spells producing only ingredients of higher tiers than the ingredients they cost. So I used the following check to decide whether to buy a new spell or not:

    # deltas: the candidate spell's ingredient deltas, indexed from lowest to highest tier
    pos = np.where(deltas > 0)[0]   # tiers the spell produces
    neg = np.where(deltas < 0)[0]   # tiers the spell consumes
    interesting = True
    if len(neg) > 0 and len(pos) > 0:
        if min(pos) < max(neg):     # it produces something of lower tier than what it consumes
            interesting = False

Anyway, in the end I also used tree search :/, and in this exploration mode (rather than prediction), having more different spells seemed to bring more diversity and gains, so I dropped the linalg strategy.

My final tree implementation covered only 200 nodes and I didn't get far up the Silver League.

3 Likes