# Fall Challenge 2020 - Feedbacks & Strategies

Iterative deepening is not the right term. I don’t restart from scratch: I reuse the nodes from the last depth.

The whole depth at once.

The score of the node, where a node’s score is `parent.score + game.eval() * COEF_PATIENCE^depth`.
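That depth-discounted score can be sketched as follows. The `COEF_PATIENCE` value and the function names here are illustrative stand-ins; the post only gives the formula:

```python
# Sketch of the depth-discounted node score described above.
# COEF_PATIENCE < 1 is an assumed value: rewards found deeper in the
# tree are worth less than the same reward found sooner.
COEF_PATIENCE = 0.95

def node_score(parent_score, game_eval, depth):
    """score = parent.score + game.eval() * COEF_PATIENCE^depth"""
    return parent_score + game_eval * COEF_PATIENCE ** depth

# A reward of 10 at depth 3 is worth less than the same reward at depth 0:
print(node_score(0.0, 10.0, 0))  # 10.0
print(node_score(0.0, 10.0, 3))  # ~8.57
```

With a coefficient below 1 the search naturally prefers plans that pay off sooner, which is the usual reason for this kind of discounting.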


Ah got it, I just couldn’t understand the purpose if you really did that but it makes sense then, basically keep going until you run out of time. Thanks!

@Gronahak Congratulations on getting to gold with Python and such a low number of nodes visited.

Glad I’m not the only one who tried linear algebra ! Here was my approach :

Given the matrix of the costs of all spells (one spell per row), I estimate the cost of each ingredient, which I then multiply by the potions’ costs. My formula for the approximate ingredient cost, using the pseudo-inverse, was:

```python
import numpy as np

# c: the (n_spells x n_ingredients) matrix of spell deltas, defined earlier
time_cost = np.array([2 - castable])  # using the castable property is optional: otherwise all 1
# Moore-Penrose pseudo-inverse via the normal equations: (c^T c)^-1 c^T
inv_c = np.matmul(np.linalg.inv(np.matmul(c.transpose(), c)), c.transpose())
approx_cost = np.matmul(inv_c, time_cost)
```

@Raf It works with any number of spells, and gives an approximate cost, roughly the average exchange cost across all possible resources, measured in rounds.
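As a toy check of that formula: the 3×2 cost matrix below is invented for illustration, not contest data, but it shows the least-squares estimate the normal-equation pseudo-inverse produces:

```python
import numpy as np

# Made-up 3-spell, 2-ingredient delta matrix (not actual contest data):
c = np.array([[ 2.0, 0.0],   # spell 1 produces 2 tier-0
              [-2.0, 1.0],   # spell 2 trades 2 tier-0 for 1 tier-1
              [-1.0, 1.0]])  # spell 3 trades 1 tier-0 for 1 tier-1
time_cost = np.array([[1.0], [1.0], [1.0]])  # one round per cast

# Same normal-equation pseudo-inverse as above: (c^T c)^-1 c^T
inv_c = np.matmul(np.linalg.inv(np.matmul(c.T, c)), c.T)
approx_cost = np.matmul(inv_c, time_cost)  # least-squares cost per ingredient, in rounds
print(approx_cost.ravel())
```

`np.linalg.lstsq(c, time_cost, rcond=None)` gives the same least-squares solution more robustly when `c.T @ c` is close to singular.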
Also, one way to ensure invertibility is to require that all spells satisfy the “increasing quality of gems” property: either purely producing spells, or spells whose produced ingredients are all of higher tiers than the consumed ones. So I used the following check to decide whether to buy a new spell:

```python
# deltas: the candidate spell's per-ingredient deltas, ordered by tier
pos = np.where(deltas > 0)[0]
neg = np.where(deltas < 0)[0]
interesting = True
if len(neg) > 0 and len(pos) > 0:
    # reject spells that produce an ingredient of a lower tier
    # than one they consume
    if min(pos) < max(neg):
        interesting = False
```
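Wrapped as a function with made-up deltas (assuming index 0 is the lowest tier, as the “increasing quality of gems” property suggests), the check behaves like this:

```python
import numpy as np

def is_interesting(deltas):
    """Accept a spell only if every produced tier is above every consumed tier."""
    pos = np.where(deltas > 0)[0]  # tiers the spell produces
    neg = np.where(deltas < 0)[0]  # tiers the spell consumes
    if len(neg) > 0 and len(pos) > 0 and min(pos) < max(neg):
        return False
    return True

# Illustrative spell deltas, index 0 = lowest tier:
print(is_interesting(np.array([-2, 0, 1, 0])))  # True: consumes tier 0, produces tier 2
print(is_interesting(np.array([1, -1, 0, 0])))  # False: produces a lower tier than it consumes
```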

Anyway, in the end I also used tree search :/. In this exploration-rather-than-prediction mode, a wider variety of spells seemed to bring more diversity and gains, so I dropped the linalg strategy.

My final tree implementation covered only 200 nodes and I didn’t go far up the Silver League.


I am interested in the full bot code, not just pseudo-code. For example, I want to see his “Simulate(…)” function mentioned in the pseudo-code.


Me too, because I could learn a lot from it. Unfortunately, the risk is just too high: if a full legend bot’s code were revealed, dozens of new clone bots from random level-1 users would enter the legend arena. Some post-mortems with pseudo-code or limited code excerpts are detailed and of very high quality.


My, my, it’s that time of the week already! Let’s get to it.

I got dragged into a contest. That hadn’t happened in a long time.

I ended up #136 (so upper Gold), rather better than I expected, considering the experience lag I’m starting to accumulate in multis.

As for strategy, I’ve got the same run-of-the-mill BFS that I streamed getting me out of Wood 1, just faster, deeper, and with an actually reasonable heuristic. I simulated a single opponent step, maximizing finisher moves.

I also wrote a more in-depth postmortem if you really must know.


I agree. But I don’t see any option other than getting access to the code. The contest is over anyway.

It’s a bit old, but you can look at my Fantastic Bits code: https://github.com/dreignier/fantastic-bits/blob/master/fantastic-bits.cpp

This is my full code from the previous contest, Fantastic Bits. I just stripped away the evaluation function code.


Awesome. Thanks for sharing @Magus.

Thanks for sharing your inventory storage.
What do you mean by this sentence?

See Wikipedia.

Example: a spell takes -3 of one type.
+3 is 00011 in binary. For minus 3 you invert all bits and add 1: 11101.

Let’s say our inventory is 4 and we add that -3: 00100 + 11101 = 100001. The leading 1 is an overflow, so we just have 1 left. 4 - 3 is 1, as it should be.
If, however, our inventory is only 2: 00010 + 11101 = 11111.
The leading bit here is a 1, but counting the number of bits, it’s not in the overflow position: this one indicates a negative value. See my validity check, where I look at this position to detect negatives.
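That trick can be sketched in Python on a single 5-bit field. The field width and helper names here are illustrative; a real packed inventory would hold one such field per ingredient tier:

```python
# Sketch of the packed-inventory arithmetic described above,
# on one 5-bit field for clarity.
BITS = 5
MASK = (1 << BITS) - 1   # keep only the 5 field bits
SIGN = 1 << (BITS - 1)   # this bit set => the field went negative

def encode(delta):
    """Two's-complement encode a possibly negative delta into 5 bits."""
    return delta & MASK  # -3 becomes 0b11101

def apply_spell(inventory, delta):
    # The overflow bit (bit 5) simply falls off under the mask.
    return (inventory + encode(delta)) & MASK

def is_negative(field):
    # Validity check: a set sign bit means the spell was not affordable.
    return bool(field & SIGN)

print(apply_spell(4, -3))               # 1: 00100 + 11101 overflows to 00001
print(is_negative(apply_spell(4, -3)))  # False: castable
print(is_negative(apply_spell(2, -3)))  # True: 2 - 3 underflows
```

With multiple fields packed into one integer, the same addition updates every tier at once, which is what makes this representation fast for simulation.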


I was counting 1.25 × 0.8 points for each repeatable spell, and 1 × 0.8 points for each non-repeatable spell. I figured that having more than a certain number of spells was not useful, so I had this “max spells” parameter: after 11 spells, each additional spell counts for a quarter of the value. That is the 0.2 you noticed (0.8 / 4).
So the first 11 spells counted “a lot” and anything beyond that counted very little.
If I had, for example, 10 repeatable spells and 5 non-repeatable ones, I counted the repeatable ones at the full 0.8 (× 1.25), one non-repeatable at 0.8 (× 1), and the other 4 non-repeatable at 0.2 (× 1). This is why I wrote “Up to 11 spells, with priority to repeatable ones” and “Past 11 spells”.
What pb4 was doing to evaluate spells was much better.
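The counting scheme above can be sketched as follows. The function name and structure are mine; the 11 / 0.8 / 1.25 / 0.2 constants come from the post:

```python
# Sketch of the capped spell-count evaluation term described above.
MAX_SPELLS = 11
FULL = 0.8
REDUCED = 0.8 / 4  # the 0.2 applied past the cap

def spell_score(n_repeatable, n_plain):
    score = 0.0
    slots = MAX_SPELLS
    # Repeatable spells fill the full-value slots first, with a 1.25 bonus.
    full_rep = min(n_repeatable, slots)
    score += full_rep * FULL * 1.25
    slots -= full_rep
    full_plain = min(n_plain, slots)
    score += full_plain * FULL
    # Everything past the cap counts at a quarter of the value.
    score += (n_repeatable - full_rep + n_plain - full_plain) * REDUCED
    return score

# The post's example: 10 repeatable + 5 plain
# = 10 at 0.8*1.25, 1 at 0.8, 4 at 0.2
print(spell_score(10, 5))  # ~11.6
```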


Thanks for the explanation, but I am not sure why it is called “Past 11 spells”.

Now I’m curious…

You switched to C on the last day of the contest, with the clear practical objective of being as fast and efficient as possible within the limited timeframe. Given the constraints, you even limited yourself to passing everything by value, no pointers, etc…

Is there a reason why you didn’t pick C++ over C? Wouldn’t the STL data structures and the `<algorithm>` header have helped a lot?

Good question. Not sure I have the perfect answer to it. It’s a muddled mix of:

• rash decision bias
• CG C++ has a reputation for poor performance, most likely because of template simplifications that don’t make it into the final assembly due to -O0. C, not so much.
• Really, the only STL-type structure I’m using is `<queue>`.
• C is more fun than C++
• C is upgradable to C++ on a wink

That premise is more biased than wrong. I’d simply have said “faster and more efficient”.

Which STL data structure and header would I be missing?

From what I actually used during the week, in descending order of “oh, I’m happy this was here!”:

Either because I didn’t want to reimplement it, or because I didn’t want to spend any “brain juice” making sure I got it right.

• random_device
• unordered_map
• nth_element
• bitset (to print the bits of any variable)
• _GLIBCXX_DEBUG
• unique
• vector
• erase/remove

Some of those address problems you encountered, according to your PM.


I’ll only comment on those that applied:

• `unordered_map`: well, I had an almost drop-in for that
• `bitset`: yeah, that one will leave a bitter taste. Live and learn. But honestly, even if I’d gone with C++, I probably wouldn’t have thought of using it anyway. Typical “I-know-how-this-works-dammit” hubris. (To be at least a bit charitable with myself, I would have gotten less of it wrong with a bit more sleep, which I wasn’t able to account for because of itself.) (One of my mistakes might seem more ridiculous than it really is (but it still really is) to those not used to the Haskell way, where you just use a function called `setBit` and never need to pull out the bit-shifting operations.)
• `_GLIBCXX_DEBUG`: I forgot to put it in the PM, but I also forgot to disable asserts for the final submit
• `vector`: well, who uses `::at()`, really?

(`nth_element` wouldn’t have been my first pick if I had reached beam search; I’m more of a spontaneous-heap person. But they’re too dependent on memory access patterns to really be compared on paper. I’ll write up about this at some point.)

I’m sure you must know, but just in case you don’t, using the #pragma line solves that issue.

I know about the pragma. (Hey, I used it too!)
What’s not so obvious to me is whether it reaches the full level of expected optimization in the end.

Last I checked, the GCC documentation was relatively explicit about it being a function-level attribute. Templates… are not always so explicitly function-based. C++ is (well, was—has that much changed?) notorious for requiring a healthy deal of link-time optimization, which we rather obviously don’t have here.

Note that I’m not saying it doesn’t work. But I haven’t verified it personally.

@JBM Try this playground, compare the accessor with the pragma, without the pragma, and by specifying the compiler option. It works.