Fall Challenge 2020 - Feedbacks & Strategies

Maybe @Magus ?

Sharing full code is forbidden.

And sharing my code for the beam search part would be useless. There is code specific to the current puzzle; I use 2 arrays instead of one to avoid sorting the Nodes directly (I sort an array of pointers instead) …

If you are looking for pseudo-code, just look for the pseudo-code of a simple BFS. A beam search is pretty much like a BFS: you just add a sort and keep only the N best nodes.
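To illustrate, here is a minimal beam-search skeleton in Python. The `expand` and `evaluate` callbacks are hypothetical placeholders for the puzzle-specific move generation and scoring; this is a sketch of the general idea, not anyone's actual bot:

```python
def beam_search(root, width, max_depth, expand, evaluate):
    """BFS variant: at each depth, sort all children and keep the N best."""
    beam = [root]
    best = root
    for depth in range(max_depth):
        # Expand every node of the current depth, like a BFS layer.
        children = []
        for node in beam:
            children.extend(expand(node))
        if not children:
            break
        # The only difference from BFS: sort and truncate to the beam width.
        children.sort(key=evaluate, reverse=True)
        beam = children[:width]
        if evaluate(beam[0]) > evaluate(best):
            best = beam[0]
    return best
```

In a real bot the loop would also check the remaining time budget instead of a fixed `max_depth`.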

1 Like

So I didn’t know of iterative deepening before you mentioned it, but from what I saw online it seems like you go through the whole tree from the start again at each new iteration. Is that actually what you are doing? Or are you keeping the last depth’s states you stopped at in memory, and going one depth further from that last stopping point until you run out of time?

Do you sort the whole fringe/frontier or just the children of the current node? Is it possible to use a priority queue instead (basically a heap sort)? By which criterion do you sort — by the static evaluation of the node? Is it the same function as for scoring leaf nodes?

Here is some pseudo-code for beam search from @Agade’s post-mortem, maybe that helps

2 Likes

Iterative deepening is not the right term. I don’t restart from the start; I reuse the nodes from the last depth.

The whole depth at once.

The score of the node. And the score of a node is parent.score + game.eval() * COEF_PATIENCE^depth
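As a sketch, that scoring rule could be written as follows. The value of `COEF_PATIENCE` is assumed for illustration; the post doesn't give it:

```python
# Node score = parent's score + current state evaluation, discounted by depth.
# A coefficient < 1 makes rewards found earlier worth more ("patience").
COEF_PATIENCE = 0.95  # assumed value, not from the post

def node_score(parent_score, state_eval, depth):
    return parent_score + state_eval * COEF_PATIENCE ** depth
```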

2 Likes

Ah got it, I just couldn’t understand the purpose if you really did that but it makes sense then, basically keep going until you run out of time. Thanks!

@Gronahak Congratulations on getting to gold with Python and such a low number of nodes visited.

Glad I’m not the only one who tried linear algebra! Here was my approach:

Given the matrix of costs of all spells (one spell per row), I estimate the cost of each ingredient, which I then multiply with the potions’ costs. My formula for the approximate ingredient cost, using the pseudo-inverse, was:

    import numpy as np

    time_cost = np.array([2 - castable])  # using the castable property is optional: otherwise it would be all 1s
    # inv_c is the Moore-Penrose pseudo-inverse of the cost matrix c: (c^T c)^-1 c^T
    inv_c = np.matmul(np.linalg.inv(np.matmul(c.transpose(), c)), c.transpose())
    approx_cost = np.matmul(inv_c, time_cost)

@Raf It works with any number of spells, and gives an approximate cost, which is somewhat the average exchange cost over all possible resources, measured in rounds.
Also, one way to ensure invertibility is to require that all spells satisfy the “increasing quality of gems” property: either they purely produce ingredients, or every ingredient they produce is of a higher tier than every ingredient they consume. So I used the following check to decide whether or not to buy a new spell:

    # deltas: the spell's ingredient changes, ordered from lowest to highest tier
    pos = np.where(deltas > 0)[0]
    neg = np.where(deltas < 0)[0]
    interesting = True
    if len(neg) > 0:
        # reject spells that produce a lower tier than the highest tier they consume
        if min(pos) < max(neg):
            interesting = False

Anyway, in the end I also used tree search :/, and in that exploration (rather than prediction) mode, having a greater variety of spells seemed to bring more diversity and gains, so I dropped the linear-algebra strategy.

My final tree implementation only covered 200 nodes, and I didn’t get far up the Silver league.

3 Likes

I am interested in full bot code, not just pseudo-code. For example, I want to see the “Simulate(…)” function mentioned in the pseudo-code.

1 Like

Me too, because I could learn a lot from it. Unfortunately, however, the risk is just too high that, if a full Legend bot’s code were revealed, dozens of new clone bots from random level-1 users would enter the Legend arena. Some post-mortems with pseudo-code or limited code excerpts are detailed and of very high quality.

1 Like

My, my, it’s that time of the week already! Let’s get to it.


I got dragged into a contest. That hadn’t happened in a long time :slight_smile:

I ended #136 (so upper Gold), quite a bit better than I expected considering the experience lag I’m starting to accumulate in multis.

As for strategy, I’ve got the same run-of-the-mill BFS that I streamed to get myself out of Wood 1, just faster, deeper, and with an actually reasonable heuristic. I simulated a single opponent step, maximizing finisher moves.

I also wrote a more in-depth postmortem if you really must know.

5 Likes

I agree. But I don’t see any option other than getting access to the code. The contest is over anyway.

It’s a bit old, but you can look at my Fantastic Bits code: https://github.com/dreignier/fantastic-bits/blob/master/fantastic-bits.cpp

This is my full code from that previous contest. I just stripped away the evaluation function code.

1 Like

Awesome. Thanks for sharing @Magus.

Thanks for sharing your inventory storage.
What do you mean by this sentence?

See the Wikipedia article on two’s complement.

Example: a spell takes -3 of one type.
+3 is 00011 in binary. For minus 3 you invert all bits and add 1: 11101.

Let’s say our inventory is 4 and we add that -3: 00100 + 11101 = 100001. The leading 1 is an overflow, so we drop it and just have 1 left. 4 - 3 is 1, as it should be.
If however our inventory is only 2: 00010 + 11101 = 11111.
The leading bit here is a 1, but counting the number of bits, it’s not in the overflow position; it indicates a negative value instead. See my validity check: I look at this position to detect negatives.
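A small Python sketch of this 5-bit two’s-complement arithmetic on one packed inventory field (the constant and helper names are mine, not from the actual bot; inventories in this game are small enough that the top field bit is free to act as a sign flag):

```python
BITS = 5
MASK = (1 << BITS) - 1    # 0b11111: keeps only the 5 bits of the field
SIGN = 1 << (BITS - 1)    # 0b10000: set when the field went negative

def add_field(inventory, delta):
    encoded = delta & MASK               # e.g. -3 becomes 0b11101
    return (inventory + encoded) & MASK  # the & drops the overflow bit

def is_negative(value):
    # The validity check: this bit position reveals a negative field.
    return bool(value & SIGN)
```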

5 Likes

I was counting 1.25 x 0.8 points for each repeatable spell, and 1 x 0.8 points for each non-repeatable spell. I thought having more than a certain number of spells was not useful, so I had this “max spells” parameter: after 11 spells, each extra spell counts for a quarter of the value. This is the 0.2 that you show (0.8 / 4).
So the first 11 spells count “a lot” and any further ones count very little.
If I had, for example, 10 repeatable spells and 5 non-repeatable ones, I counted the repeatable ones with the full 0.8 (x 1.25), one non-repeatable one with 0.8 (x 1), and the 4 remaining non-repeatable ones with 0.2 (x 1). This is why I wrote “Up to 11 spells, with priority to repeatable ones” and “Past 11 spells”.
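If I read that right, the valuation could be sketched like this (the function and constant names are my own, for illustration):

```python
# Each of the first 11 spells is worth 0.8; any spell past the 11th is worth
# 0.2 (= 0.8 / 4). Repeatable spells get a 1.25x bonus and are counted first.
FULL, REDUCED, MAX_SPELLS = 0.8, 0.2, 11
REPEATABLE_BONUS = 1.25

def spells_value(repeatable, non_repeatable):
    total = 0.0
    counted = 0
    for _ in range(repeatable):
        weight = FULL if counted < MAX_SPELLS else REDUCED
        total += weight * REPEATABLE_BONUS
        counted += 1
    for _ in range(non_repeatable):
        weight = FULL if counted < MAX_SPELLS else REDUCED
        total += weight
        counted += 1
    return total
```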
What pb4 was doing to evaluate spells was much better.

1 Like

Thanks for the explanation, but I am not sure why it is called “Past 11 spells”.

Now I’m curious…

You switched to C on the last day of the contest with a clear practical objective: be as fast and efficient as possible within the limited timeframe. Given the constraints, you even limited yourself to passing everything by value, no pointers, etc…

Is there a reason why you didn’t pick C++ over C? Wouldn’t the STL data structures and the algorithm header have helped a lot?

Good question. Not sure I have the perfect answer to it. It’s a muddled mix of:

  • rash decision bias
  • On CG, C++ has a reputation for poor performance, most likely because template abstractions don’t get simplified away in the final assembly due to -O0. C, not so much.
  • Really, the only STL-type structure I’m using is <queue>.
  • C is more fun than C++
  • C is upgradable to C++ in a wink

That premise is more biased than wrong. I’d simply have said “faster and more efficient”. :smiley:

Which STL data structure and header would I be missing?