Fall Challenge 2020 - Feedbacks & Strategies

I am interested in full bot code, not just pseudo-code. For example, I want to see his “Simulate(…)” function that is mentioned in the pseudo-code.

1 Like

Me too, because I could learn a lot from it. Unfortunately, the risk is just too high: if a full Legend bot’s code is revealed, dozens of new clone bots from random Level 1 users will enter the Legend arena. Some post mortems with pseudo-code or limited code excerpts are detailed and of very high quality.

1 Like

My, my, it’s that time of the week already! Let’s get to it.


I got dragged into a contest. That hadn’t happened in a long time :slight_smile:

I ended #136 (so upper Gold), quite a bit better than I expected considering the experience lag I’m starting to accumulate in multis.

As for strategy, I’ve got the same run-of-the-mill BFS that I streamed getting me out of Wood-1, just faster, deeper, and with an actually reasonable heuristic. I simulated a single opponent step, maximizing finisher moves.
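For illustration only, a minimal depth-limited BFS skeleton in that spirit could look like the sketch below; the State fields, expand() and the scoring are placeholders, not the code from the postmortem:

```cpp
#include <queue>
#include <vector>

// Hypothetical state: a real one would hold the inventory, learnt spells, brews, score...
struct State {
    double score = 0;  // heuristic value of this state
    int    depth = 0;  // number of simulated turns from the root
};

// Placeholder move generator: a real one would enumerate CAST / LEARN / BREW / REST.
std::vector<State> expand(const State& /*s*/) {
    return {};  // ...generate successors here...
}

// Plain breadth-first search, keeping the best-scoring state seen so far.
State bfs(const State& root, int maxDepth) {
    std::queue<State> frontier;
    frontier.push(root);
    State best = root;
    while (!frontier.empty()) {
        State s = frontier.front();
        frontier.pop();
        if (s.score > best.score) best = s;
        if (s.depth >= maxDepth) continue;  // depth limit; a real bot also watches the time budget
        for (State child : expand(s)) {
            child.depth = s.depth + 1;
            frontier.push(child);
        }
    }
    return best;  // in practice you keep the first move on the path to `best` and play that
}
```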

I also wrote a more in-depth postmortem if you really must know.

5 Likes

I agree. But I don’t see any other option than getting access to the code. The contest is over anyway.

It’s a bit old, but you can look at my Fantastic Bits code: https://github.com/dreignier/fantastic-bits/blob/master/fantastic-bits.cpp

This is my full code from the previous contest: Fantastic Bits code. I just stripped away the evaluation function code.

1 Like

Awesome. Thanks for sharing @Magus.

Thanks for sharing your inventory storage.
What do you mean by this sentence?

See Wikipedia (two’s complement).

Example: a spell takes -3 of one type.
+3 is 00011 in binary. For minus 3 you invert all bits and add 1: 11101.

Let’s say our inventory is 4 and we add that -3: 00100 + 11101 = 100001. The leading 1 is an overflow, so we just drop it and have 1 left. 4 - 3 = 1, as it should be.
If, however, our inventory is only 2: 00010 + 11101 = 11111.
The leading bit here is a 1, but counting the number of bits, it is not in the overflow position. This one indicates a negative value. See my validity check: I look at this position to detect negatives.
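To make that arithmetic concrete, here is a tiny single-ingredient sketch of the idea, using 5-bit fields exactly as in the example above (an illustration, not the actual validity-check code):

```cpp
#include <cstdint>
#include <cstdio>

constexpr uint32_t FIELD_BITS   = 5;
constexpr uint32_t FIELD_MASK   = (1u << FIELD_BITS) - 1;  // 0b11111
constexpr uint32_t NEGATIVE_BIT = 1u << (FIELD_BITS - 1);  // 0b10000: "went below zero" indicator
// The overflow lands one bit above the field; masking with FIELD_MASK simply drops it.

// Encode a (possibly negative) spell delta in 5-bit two's complement: -3 becomes 0b11101.
uint32_t encodeDelta(int delta) { return static_cast<uint32_t>(delta) & FIELD_MASK; }

// Apply the delta; return false if the count would go negative.
bool applyDelta(uint32_t& count, int delta) {
    uint32_t result = (count + encodeDelta(delta)) & FIELD_MASK;  // drop the overflow bit
    if (result & NEGATIVE_BIT) return false;                      // top bit set => negative => invalid
    count = result;
    return true;
}

int main() {
    uint32_t count = 4;
    printf("%d %u\n", applyDelta(count, -3), count);  // "1 1": 4 - 3 = 1, valid
    count = 2;
    printf("%d %u\n", applyDelta(count, -3), count);  // "0 2": 2 - 3 would be negative, rejected
}
```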

5 Likes

I was counting 1.25 x 0.8 points for each repeatable spell, and 1 x 0.8 points for each non-repeatable spell. I thought having more than a certain number of spells was not useful, so I had this “max spells” parameter: past 11 spells, each one counted for only a quarter of the value. This is the 0.2 that you show (0.8 / 4).
So the first 11 spells counted “a lot”, and anything beyond that counted very little.
If I had, for example, 10 repeatable spells and 5 non-repeatable ones, I counted the repeatable ones with the full 0.8 (x1.25), one non-repeatable spell with 0.8 (x1), and the 4 remaining non-repeatable ones with 0.2 (x1). This is why I wrote “Up to 11 spells, with priority to repeatable ones” and “Past 11 spells”.
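In code, that spell-count term could look roughly like the sketch below (a reconstruction from the numbers quoted here, not the actual eval function):

```cpp
// Spell-count term of the eval, reconstructed from the numbers above:
// 0.8 base value, x1.25 for repeatable spells, and only the first 11 spells
// (repeatable ones first) get the full weight; the rest count 0.8 / 4 = 0.2.
double spellScore(int repeatable, int nonRepeatable) {
    constexpr int    MAX_SPELLS   = 11;
    constexpr double FULL         = 0.8;
    constexpr double CHEAP        = 0.8 / 4.0;  // the 0.2 mentioned above
    constexpr double REPEAT_BONUS = 1.25;

    double score = 0.0;
    int counted = 0;
    for (int i = 0; i < repeatable; ++i, ++counted)  // repeatable spells counted first
        score += (counted < MAX_SPELLS ? FULL : CHEAP) * REPEAT_BONUS;
    for (int i = 0; i < nonRepeatable; ++i, ++counted)
        score += (counted < MAX_SPELLS ? FULL : CHEAP);
    return score;
}
// Example from the post: spellScore(10, 5)
//   = 10 * 0.8 * 1.25   (10 repeatables, all under the cap)
//   +  1 * 0.8          (the 11th spell, non-repeatable, still full weight)
//   +  4 * 0.2          ("past 11 spells")
```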
What pb4 was doing to evaluate spells was much better.

1 Like

Thanks for the explanation, but I am not sure why it is called “Past 11 spells”.

Now I’m curious…

You switched to C on the last day of the contest with a clear practical objective to be as fast and efficient as possible within the limited timeframe. Given the constraints, you even limited yourself to passing stuff by value, no pointers, etc…

Is there a reason why you didn’t pick C++ over C? Wouldn’t the STL data structures and the algorithm header have helped a lot?

Good question. Not sure I have the perfect answer to it. It’s a muddled mix of:

  • rash decision bias
  • CG C++ has a reputation of sucking performance-wise, most likely because template simplifications don’t make it to the final assembly due to -O0. C, not so much.
  • Really, the only STL-type structure I’m using is <queue>.
  • C is more fun than C++
  • C is upgradable to C++ in a wink

That premise is more biased than wrong. I’d simply have said “faster and more efficient”. :smiley:

Which STL data structure and header would I be missing?

From what I actually used during the week, in descending order of “Oh, I’m happy this was here!”:

Either because I didn’t want to reimplement it, or because I didn’t want to spend any “brain juice” making sure I got it right.

  • random_device
  • unordered_map
  • nth_element
  • bitset (to print the bits of any variable)
  • _GLIBCXX_DEBUG
  • unique
  • vector
  • erase/remove

Some of those address problems you encountered according to your PM :slightly_smiling_face:

1 Like

:slight_smile:

I’ll only comment on those that applied:

  • unordered_map: well, I had an almost drop-in replacement for that
  • bitset: yeah, that one will leave a bitter taste. Live and learn. But honestly, even if I had gone with C++, I probably wouldn’t have thought of using it anyway. Typical “I-know-how-this-works-dammit” hubris. (To be the least bit charitable with myself, I would have gotten less of it wrong with a bit more sleep, which I wasn’t able to account for because of itself.) (One of my mistakes might seem more ridiculous than it really is (but it still really is) to those not used to the Haskell way, where you just use a function called setBit and never need to pull out the bit-shifting operations.)
  • _GLIBCXX_DEBUG: I forgot to mention it in the PM, but I also forgot to disable asserts for the final submit :sweat_smile:
  • vector: well, who uses ::at(), really?

(nth_element wouldn’t have been my first pick if I had reached beam search; I’m more of a spontaneous heap person. But the two are too memory-access-pattern-dependent to really be compared on paper. I’ll write about this at some point.)
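For readers who haven’t used it: the typical beam-search role of nth_element is to keep the best `width` candidates of a layer without a full sort. An illustrative sketch (not anyone’s contest code):

```cpp
#include <algorithm>
#include <vector>

struct Candidate {
    double eval;
    // state, first move, ...
};

// Keep only the best `width` candidates for the next beam layer.
// nth_element partitions in O(n) on average, versus O(n log n) for a full sort;
// the kept elements end up unordered, which a beam search doesn't care about.
void prune(std::vector<Candidate>& beam, std::size_t width) {
    if (beam.size() <= width) return;
    std::nth_element(beam.begin(), beam.begin() + width, beam.end(),
                     [](const Candidate& a, const Candidate& b) { return a.eval > b.eval; });
    beam.resize(width);
}
```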

I’m sure you must know, but just in case you don’t, using the #pragma line solves that issue.
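For anyone following along: “the #pragma line” refers to the usual CG trick of requesting optimization from inside the source, since the judge compiles with -O0. The exact flag list varies from player to player; a commonly seen form is:

```cpp
// Ask GCC to optimize this translation unit despite the judge's -O0.
#pragma GCC optimize("O3", "inline", "omit-frame-pointer", "unroll-loops")
```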

I know about the pragma. (Hey, I used it too!)
What’s not so obvious to me is whether it reaches the full level of expected optimization in the end.

Last I checked, the GCC documentation was relatively explicit about it being a function-level attribute. Templates… are not always so explicitly function-based. C++ is (well, was—has that much changed?) notorious for requiring a healthy deal of link-time optimization, which we rather obviously don’t have here.

Note that I’m not saying it doesn’t work. But I haven’t verified it personally.

@JBM Try this playground, compare the accessor with the pragma, without the pragma, and by specifying the compiler option. It works.

Oh, I do believe it works “a lot”. It’s “reaching 100%” I’m concerned about. I can’t afford to CX all of my code before deciding on a language.

It’s really “fear” vs “proof by example” at work here.

For reference, I’d have more consideration for your expert opinion telling me something along the lines of “I use C++ full-time professionally using GCC and the same subset of the STL as on CG, and have never observed any difference in performance between the translation-unit-based pragma and the command line” than for a specific contrived example.

(Example that isn’t as good as I’d hoped for, by the way: the optimizer seems smart enough not to write the a[] data, but not to skip the heap allocation. Yes but std::array; yes but POD; yes but yes but yes but…)

I wouldn’t dare call myself an expert, but here you go.

I have been using local compilation on CG since early 2017.
During the first 6 months, I did sometimes observe a ~10-15% improvement in my code compiled locally vs. my code compiled on CG with pragmas.
In the last 3.5 years, I have never observed a case where the locally-compiled-with-optimization-flags-on-the-command-line code is faster than the CG-compiled code.
Most of the time, both methods are equivalent.
There is one multi where local compilation makes my performance drop to roughly 1/3 of the speed of CG compilation. I’ve tried many flags, tunes, targets, etc., without success.

I am now wary of using local compilation when it isn’t necessary for code-size reasons.

1 Like

Success is the best revenge

By the way, I hope to beat you someday :nerd_face: :nerd_face:

1 Like