Fall Challenge 2020 - Feedbacks & Strategies

Ok! Thanks a lot, that made that a lot clearer! And you’re definitely right about basic statistics being way better than no statistics at all in that case!

99th overall (~32nd now by tweaking some magic numbers)

In other words, “I will take myself” :joy:

2 Likes

Thanks for sharing! Do you have any idea if your implementation is faster than @eulerscheZahl’s?

Hi,

For the list of possible actions for Casts, I use 46-bit bitmaps (representing the 46 Cast cards):

  • one per player for the list of learned casts.
  • one per player for the list of Casts not already played.

For each of the 1001 inventories I also store a 46-bit word containing the list of Casts that can be played with this inventory. (I store this in an unsigned long long)

This allows me to do functions like this one:

bool Game::isRestituable(int player)
{
    return learnedCasts[player] != notPlayedCast[player];
}

void Game::rest(int player)
{
    notPlayedCast[player] = learnedCasts[player];
}

unsigned long long Game::castActionList(int inventory, int player)
{
    return notPlayedCast[player] & canBePlayed[inventory];
}

To count the number of possible cast actions, I do :

unsigned long long castActions = game.castActionList(inventory, 0);
int count = __builtin_popcountll(castActions);
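To actually enumerate the actions (not just count them), one can peel off set bits one by one with __builtin_ctzll. A small sketch along the same lines (forEachCast is a hypothetical helper, not from the post):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper (not from the post): visit every playable cast id in
// an action bitmap. __builtin_ctzll yields the index of the lowest set bit;
// castActions &= castActions - 1 then clears that bit.
template <typename F>
void forEachCast(uint64_t castActions, F visit)
{
    while (castActions) {
        int castId = __builtin_ctzll(castActions);
        visit(castId);
        castActions &= castActions - 1; // clear the lowest set bit
    }
}
```

Calling it with a lambda visits each cast id in increasing order, so generating the move list costs one iteration per playable cast.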
2 Likes

12th

At the start I felt that the search space was too big and depth was neither possible nor feasible, as it seemed hard to predict the opponent, not to mention the later learns/brews ;). So I started with MC, with quick success (1st on Saturday). Soon I realized that speed mattered, so I added state encoding/bit magic to reach 1M+ checked states, and then with heuristics (e.g. learn only 1) reached/explored depth 11 properly. I probably entered parameter tuning too early, and even with that the approach slowly lost momentum in gold, and I lost motivation.
Finally I decided to rewrite to beamsearch. But I wanted safe legend first, so as a last resort I focused on reasonable opponent prediction: running MC for a shorter time to find the opponent's earliest brews, then assessing those separately as if they were picked. I then choose the first action with the best average score across the various best first enemy picks.
This occasionally led back to top 20, but with a careless resubmit I missed the first legend migration waves.

I had no choice but to finish the beamsearch. After it started to work, I had to reevaluate everything, and many of my earlier heuristics didn’t work at all. Fortunately, for beamsearch speed/depth was not that important any more; enough depth (15+) can be reached easily. For the record, my beamsearch had max depth 20, width 10k, learning only on the first 3 steps, and an eval very similar to everyone else’s.
Last day I calculated the stats for learnable spells, and adjusted to prefer (pay for) the better ones.

It was definitely not my favourite contest. From the beginning I felt that the interaction between players was too low, so it was too much of a solo optimization game with relatively narrow possible approaches (more tc-ish than cg-ish). Nevertheless, the strong competition made it challenging. Congratulations to the top 3, they played in a different league!

7 Likes

I was concerned/curious about how the timeout issue would affect the rerun. So, based on the rerun data downloaded by @eulerscheZahl (thanks!), here is the timeout leaderboard (TO = timeouts, G = games played):

Name                 TO   G    %
[CG]SaiksyApo        45   791  5.69%
MichaelPu            21   784  2.68%
The_Duck             19   792  2.40%
omu                  19   809  2.35%
Gonny                16   816  1.96%
NinjaDoggy           15   765  1.96%
Rafbill              14   805  1.74%
Csipcsirip           13   828  1.57%
MSz                  13   790  1.65%
bourgeof             13   808  1.61%
aCat                 13   829  1.57%
sensaur              13   788  1.65%
Rounddice            12   857  1.40%
jolindien            12   886  1.35%
egaetan              11   764  1.44%
Shun_PI              10   790  1.27%
Zorg1                10   808  1.24%
zasmu                9    877  1.03%
KawattaTaido         9    822  1.09%
maras                9    790  1.14%
viewlagoon           9    807  1.12%
_Royale              9    796  1.13%
bowwowforeach        8    817  0.98%
ValGrowth            8    812  0.99%
dbdr                 8    778  1.03%
y_kawano             8    801  1.00%
reCurse              7    888  0.79%
Magus                7    798  0.88%
DomiKo               7    956  0.73%
wlesavo              7    801  0.87%
Bondo416             7    779  0.90%
aropan               6    967  0.62%
Nerchio              6    700  0.86%
tanzaku              6    810  0.74%
eulerscheZahl        5    782  0.64%
Neumann              5    767  0.65%
Vadasz               5    773  0.65%
RiSuS                5    839  0.60%
trictrac             5    791  0.63%
SlyB                 5    644  0.78%
iwashi31             5    734  0.68%
ichyo                4    860  0.47%
MichalOp             4    802  0.50%
takumi152            4    836  0.48%
Illedan              4    832  0.48%
VincentBab           4    795  0.50%
pb4                  3    838  0.36%
YurkovAS             3    838  0.36%
BrunoFelthes         3    761  0.39%
wala                 3    790  0.38%
NightLuna            3    767  0.39%
k4ng0u               3    809  0.37%
ucarts               3    815  0.37%
Fangel               3    785  0.38%
blasterpoard         2    825  0.24%
Redstrike            2    785  0.25%
Jirotech             2    809  0.25%
siman                2    815  0.25%
bin101               2    787  0.25%
Jeff06               2    509  0.39%
Agade                1    827  0.12%
Risen                1    788  0.13%
eki                  1    549  0.18%
scolen               1    756  0.13%
kovi                 1    839  0.12%
karliso              1    863  0.12%
betrue12             1    796  0.13%
delineate            0    954  0.00%
Valdemar             0    804  0.00%
Mazelcop             0    910  0.00%
sumoru               0    799  0.00%
AT274                0    769  0.00%
Skril                0    785  0.00%
Marrggta             0    802  0.00%
ClosetAI             0    781  0.00%
yukine3              0    595  0.00%
Samsonov             0    823  0.00%
Manwi23              0    835  0.00%
Saelyos              0    719  0.00%
SeJ                  0    611  0.00%
8 Likes

46th Legend

Nice contest, nice game. This time it was more… well… mysterious - in both OoC and Spring Challenge the game was somehow more visual, “go there and collect that”, making it more interesting to watch or easier to spot bugs / unwanted bot behavior. At least for me. This game forced me to finally start using CG Benchmark and Brutaltester more rather than just watching a lot of games myself :smiley:

Anyway, I guess what I did was not unique - I chose C++ and started with DFS with iterative deepening (up to some fixed number of nodes seen) with a simple heuristic: the score of a state was the sum of the (discounted) value of the potions I made plus the inventory value (calculated as sum of i * tier_i). At first the discount meant subtracting 1 from the potion value for every level deeper in the search; then I tried the usual gamma thing and realized it was noticeably worse than the first idea. Now, reading other postmortems, I realize I probably should have tried more different gammas, but well. Oh, and for the sake of speed I was learning spells only at depth 1 (as the very first action).
When I finally debugged this it took me to low/mid gold.

To get to the legend league I did a bunch of upgrades.

  1. I switched to actually measuring time rather than counting nodes, and was ready to cut the search when no time was left, but apparently the chosen beam width and depth were enough to keep me away from timeouts.
  2. Beamsearch - I went with width 2000, and with a depth of 25 I rarely saw a timeout / cut in depth. I changed the sorting heuristic to value inventory as (i + 0.5) * tier_i, which somehow helped a lot. I was still learning spells only at depth 1.
  3. Forced learning spells as the first action - in the final version for 7 first rounds. It was a huuuge boost.
  4. Taking the opponent into account - if my opponent had 4 or more potions done, I calculated how many rounds were left in the ‘shortest time’ case - assuming they brew the rest of their potions as fast as they can. Then I searched only up to this depth for my own action and scored the leaf there as in the endgame (below).
  5. Endgame management - it wasn’t the most beautiful, but well - I assumed the game was close to ending if either I or my opponent had 4 potions done, and then: 1. doubled the beam width, 2. switched to a different heuristic for state value - the old value for not-yet-finished game states, and for ended games a big number * (potions scored in the same manner as before + inventory value, this time tier_1 + tier_2 + tier_3).
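The recipe described above (fixed width, depth cap, score-sorted pruning) has the same shape in most of this thread's bots. A minimal generic sketch, where the State fields and the toy expand() are hypothetical stand-ins, not the author's code:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Generic beam-search shape (simplified sketch): each State carries a
// heuristic score and remembers the depth-1 action to play at the root.
struct State {
    int score;
    int firstAction;
};

// Toy successor function for the sketch: two children per state.
std::vector<State> expand(const State& s)
{
    return { {s.score + 1, s.firstAction}, {s.score + 2, s.firstAction} };
}

State beamSearch(std::vector<State> beam, int width, int maxDepth)
{
    State best = beam.front();
    for (int depth = 0; depth < maxDepth; ++depth) {
        std::vector<State> next;
        for (const State& s : beam)
            for (const State& c : expand(s)) {
                next.push_back(c);
                if (c.score > best.score) best = c;
            }
        // Keep only the `width` best candidates for the next depth.
        if ((int)next.size() > width) {
            std::partial_sort(next.begin(), next.begin() + width, next.end(),
                [](const State& a, const State& b) { return a.score > b.score; });
            next.resize(width);
        }
        beam = std::move(next);
    }
    return best;
}
```

A real bot would also deduplicate states by inventory hash before pruning, as several posts in this thread mention.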

I stayed up to 4 a.m. yesterday (today?) to watch my bot beating gold boss. I wanted to be sure I’m in the legend league.

It was my first Legend, so overall - not that bad, I guess :wink:

9 Likes

I’m pretty sure all my timeouts are NPEs that I wasn’t able to debug due to C# being compiled in Release in the IDE :upside_down_face: But at least I’m first :tada:

3 Likes

I had a similar 32-bit integer of inventory and costs.

struct Pos {
    uint32_t p = 0;
    // Create a new pos from a sub-calculated 32-bit value
    explicit Pos(uint32_t x) : p((x + 0x40404040) & 0x3F3F3F3F) {}

    Pos(int a, int b, int c, int d) {
        p = (static_cast<unsigned>(0x100 + a) % 0x40)              +
            (static_cast<unsigned>(0x100 + b) % 0x40) * 0x00000100 +
            (static_cast<unsigned>(0x100 + c) % 0x40) * 0x00010000 +
            (static_cast<unsigned>(0x100 + d) % 0x40) * 0x01000000 ;
    }

    // I only used this for log output to display inventory counts
    int operator [](int i) const {
        int x = ((p >> (i*8)) % 0x40);
        return (x > 0x30) ? x - 0x40 : x;
    }

    // Test for valid positive numbers (inv counts)
    operator bool() const {
        if ((p & 0x30303030) ||                          // Negative is invalid
            (p + 0x05050505) & 0x10101010) return false; // Positive overflow (any item > 10)

        // Total must be < 11
        auto x = p + (p >> 16);
        x += (x >> 8);
        x &= 0xff;
        return x <= 10;
    }

    Pos operator+(Pos const & rhs) const
    {
        return Pos{p + rhs.p};
    }

    Pos operator-(Pos const & rhs) const
    {
        return Pos{p - rhs.p};
    }

    bool operator >=(Pos const & rhs) const {
        Pos x = Pos{p - rhs.p};
        return !(x.p & 0x30303030); // Negative is invalid
    }

    bool operator >(Pos const & rhs) const {
        return p != rhs.p && *this >= rhs;
    }
};

To determine whether a move was possible, I would add its cost to my inventory (pos for position). If the result had any negative numbers, the move was invalid.

I stored my Brew costs as positive numbers so I could easily compare my current position to the cost of a brew. This was simpler as >= rather than >, so I used that.

// Try to brew
if (inv >= brew.cost) {
    new_inv = inv - brew.cost;
    // ...

// Try to cast a spell
auto new_inv = inv + spell.cost;  // works for negative and positive numbers
if (!new_inv) continue;           // not valid.
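The byte-packed arithmetic can be exercised in isolation. A sketch mirroring the Pos encoding above, as free functions (pack/packedAdd/anyNegative are my names, not from the post):

```cpp
#include <cassert>
#include <cstdint>

// Pack four small signed counts into one uint32_t, 6 bits of value per byte,
// matching the Pos encoding above (negative values wrap modulo 0x40).
uint32_t pack(int a, int b, int c, int d)
{
    return  (static_cast<unsigned>(0x100 + a) % 0x40)
          | (static_cast<unsigned>(0x100 + b) % 0x40) << 8
          | (static_cast<unsigned>(0x100 + c) % 0x40) << 16
          | (static_cast<unsigned>(0x100 + d) % 0x40) << 24;
}

// Add two packed values and renormalize each field, as Pos::operator+ does.
uint32_t packedAdd(uint32_t x, uint32_t y)
{
    return (x + y + 0x40404040u) & 0x3F3F3F3Fu;
}

// A field in the 0x10..0x3F range (mask 0x30) means "went negative".
bool anyNegative(uint32_t x)
{
    return (x & 0x30303030u) != 0;
}
```

So adding a spell cost of (-3, +2, 0, 0) to an inventory of (5, 1, 0, 0) yields (2, 3, 0, 0) in one 32-bit addition, and an underflowing field is caught by the 0x30303030 mask.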
2 Likes

93rd overall - 13th Gold

Thanks for this great contest, really beautiful! It was a fun contest even if I was a little disappointed not to reach legend. :sweat_smile:
CodinGame, you are a victim of your success - more than 7000 players! But what a pain to wait for the rankings after a submit… I hope it will be faster for the next contest :innocent:

From Wood to Bronze

A Java bot with a little heuristic:

  • if I can brew a potion, brew it
  • try to cast to build the ingredients necessary for the best one

Bronze to Gold

I started developing a BFS, and on Monday I switched languages to give C++ a try, as I noticed it would be a hard AI problem requiring search depth, and I didn’t want to spend my time fighting the garbage collector and timeouts like in other contests.

On Tuesday night my BFS was ready and doing well.

As explained by other players, I used a BFS with pruning on inventory state, as there are only 1001 possible inventories. For that I used a map and a hash computed like this:

hash = inv[0] | (inv[1] << 4) | (inv[2] << 8) | (inv[3] << 12);
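Since each tier count is 0-10 and fits in 4 bits, each count gets its own nibble and the hash can also be decoded. A small sanity-check sketch (the function names are mine, not from the post):

```cpp
#include <cassert>

// Pack a 4-tier inventory (each count 0..10 fits in 4 bits) into one key.
int invHash(const int inv[4])
{
    return inv[0] | (inv[1] << 4) | (inv[2] << 8) | (inv[3] << 12);
}

// Recover one tier's count from the key.
int invTier(int hash, int tier)
{
    return (hash >> (4 * tier)) & 0xF;
}
```

The key fits in 16 bits, so a flat array of size 1 << 16 (or the 1001 precomputed inventories) can serve as the pruning table.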

Merge condition

  • take the one with the best eval
  • if equal, take the one with more castable spells (i.e. that don’t need a rest)

All my possible inventories are created statically, and the states in my BFS only store the hash key. I also use bitsets to know which potions are brewed (10010), which spells need a rest (000100001110), which spells are learned, and which learned spells need a rest. Thanks to Egaetan for sharing those tips with me!

For the evaluation, I used a decay: Coefficient ^ depth * eval. Eval takes into account the score and the inventory as 0.05 * (a + 2 * b + 3 * c + 4 * d).
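One possible reading of that evaluation as code - a hedged sketch, since the exact way the decay combines with the inventory term is my assumption, not spelled out in the post:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the described evaluation (my reading, not the
// author's exact code): the whole eval is decayed by coeff ^ depth, where
// eval = score + 0.05 * weighted inventory (a..d = tier-0..tier-3 counts).
double evalState(double coeff, int depth, int score, int a, int b, int c, int d)
{
    double inventory = 0.05 * (a + 2 * b + 3 * c + 4 * d);
    return std::pow(coeff, depth) * (score + inventory);
}
```

With coeff below 1, the same brew is worth less the deeper (later) it happens, which is what pushes the search toward faster potions.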

Note: as seen above, perhaps I didn’t spend enough time on the evaluation function. I will try again in the game version and hope to reach legend!

First turns LEARNING

I run my BFS but force the first action to be a learn, choosing the best one, for the first 7 turns.
Why? Because the boss was doing that :smile:

Middle game

I simulate my opponent for 15 ms, reaching depth 12-16 and 30-40k nodes. I would have liked to use that for defense (not aiming for potions I think my opponent will take before me) or offense (stealing potions I can take before him), but it didn’t work really well. Finally, I used it to build a best-score array to know whether I can brew the last potion without risk. If I can, I add extra points to the evaluation to force my AI to play this action.

Then I run my witch’s search during the remaining time, reaching depth 16-20 and 100-110K nodes.

End game

If I have brewed more than 4 potions and my score is better than the opponent’s + 5, I lower the decay coefficient to force my bot to brew potions faster.

Conclusion

A really fun and time-consuming contest :smile:
I now have some knowledge of C++, which is great, and I think I will try other contests with this language.

Thanks CodinGame for that great contest, I’m waiting for the next one :grinning:

4 Likes

Individual ranking: #67
School ranking: #1 University of Wrocław
Language: C#

Thanks.
First things first - I need to thank quite a lot of people: for the contest itself, for valuable advice, and just for friendly talk and support. I am starting with huge thanks to the CG staff for organizing the contest (@TwoSteps ). I would also like to thank the members of our Polish CG community (including but not limited to @Zylo , @Nerchio , and @TheSkolar ) - if anyone wants to join, just let Zylo or me know.

Last but definitely not least, I would like to thank our students who participated in this contest. I hope it was fun for you (even if it was a mandatory AI4Games exercise list…). In particular, huge congratulations to the awesome coding team forming our university’s representation: @DomiKo, @MichalOp, @maras, @Manwi23, and my colleague @MSz . In my wildest dreams, I didn’t imagine having six people in legend. But I love you even when you don’t get such high scores, don’t worry. :heart_eyes_cat:

My code.
I did (surprise) beamsearch. I started the contest with, of course, some fast-to-bronze short Python code, then switched to C# and ‘wasted’ more than two days encoding everything in bits: inventory / each spell / tome / potion fitting into simple ints. With a little bit of overthinking, too early, and with multiple inefficiencies on the way - but I was really curious whether I could do it. I could, I did, and with small extensions I stuck with this representation till the end (with only minor copy-paste inconveniences along the way).

Simultaneously with the representation, I was working on a basic BFS, gradually rewriting the engine and adding crucial parts. After finally encoding spells, I tried searching towards the endgame score, i.e., rupees + tier>0 resources. At that point, timeout issues kicked in, and I had to deal with the garbage collector and reuse memory (thanks to @Illedan for helping me). Then came beamsearch, and some more or less viable approaches to making the evaluation function depend on the potion-related inventory. I had some days of no coding or just small bugfixing, and then the gold league came out. I discovered a seriously stupid bug and, after a day or so, finally got gold. Ah, and of course, I always learn free spells for some constant number of first turns.

In gold, I finally implemented counting how many potions each player brewed, and tried some PvP strategies. I ended up with the following: when I discover the opponent has one potion left to win, I run a depth-5 BFS for him ending at his first potion. This gives me a threshold depth, and I perform BFS for myself up to this depth, trying to maximize my score. I slightly extended this with a search for when the opponent has 2 potions left to win and can brew a potion in the current turn. I did things this ‘weird way’ because of my limited computational resources. That’s the defense. For the offense, when my beamsearch had discovered a terminal state at depth <= 6 in the previous turn, I performed a BFS optimizing the endgame score, stopping at the first score better than what the opponent could make in the current turn (a depth-1 search with potions/spells). At this point, I had about eight situation-tuned BFS variants copy-pasted in my code and reached the character limit, so I had to cut out the unused and commented ones :-).

Then comes a long, last day of using CGBenchmark and optimizing parameters. (And constantly being in nearly-top gold.) Finally, the function turned out to be pretty simple:
PotionValue * 10 * 0.97^depth + Tier0 * 4 + Tier1 * 7 + Tier2 * 8 + Tier3 * 15
I also added a small fix to check for my one-turn win, and increased the initial learning turns to 10. I tried using hashtables, but they didn’t improve anything.

I don’t know when (I was finally sleeping, @Manwi23 kept watch) the code was promoted to legend. In legend, I just discovered that since I had already removed one of my expensive heuristics from the evaluation, I could increase the search depth limit. Now (using beam size 1000) I usually reach depths 9-11 with ~80-100k nodes per turn.

The game.
I didn’t like it. At all. It was a really awful game showing out what is worst at CodinGame. :pouting_cat:

The game required really fast performance, cutting out most of the slower languages users except some expert wizards. It had again (remember the post mortems from the Spring!) very low interaction between the players. Not fun. I still feel somehow bad that I made it an exercise list on my AI course. If not for the contest, and knowing it in advance, I wouldn’t use it in my classes.

And the visualization, although graphically nice, was terrible to analyze. It really should have some debug mode with even plain circles and arrows. Pausing each frame to understand which jumping potion comes from which cauldron to which order was a pain. Really.

On top of that comes server issues, extremely lengthy submissions, dead chat problems, etc.

Summary.
The contest was really awesome, and I will always remember it, but sadly not because of the quality of the game - the reason will be people and interactions, making it a great community/social experience. This cat thanks you all. :smiley_cat:

17 Likes

Hi there,

Interesting game. Like most players, I guess, I spent most of my time fighting timeouts.

Ended somewhere in the middle of silver.

Way to bronze:

just needed running code with 2-3 lines for some decisions, plus brewing.

Way to silver:

Here simulation was needed. I used BFS.

  • I started by saving all data for each simulation step, with REST as its own step. At some point I saw that REST could be handled with one property on the next step instead of creating a new REST step in the simulation.
  • Timeouts were the biggest problem. Possible ways to solve them:
    • A time budget for the simulation. I set it to 40 ms, and if a round took 45 ms, the next round’s budget became 35 ms.
    • A cap on the number of simulations. During the contest, a single step could take up to 30 ms.
      • Why? Memory. Specifically, the garbage collector would kick in during one step, which is what made that step so long.
      • What did I do? First, I triggered the garbage collector myself every 1,000 simulations. That was part of the solution, but it cost so much time that I could only run 16,000 simulations per round.
      • Later I triggered it only once per round, but still capped at 16,000 steps per round.

That was not enough to get into silver; I was around top 100. Then I added learning: always learn for the first 5 rounds. And I was in the top 10.

Because of other players who were better than the boss, I couldn’t get into silver right away, but I reached it when I ran the code the next day, when fewer players were on.

Silver:

There were no changes to the game rules here, so it came down to code performance and tactics.

What I did:

  • Previously I changed the castable state of spells and saved it for each step. I don’t need that, since I always have access to the current state of my spells and can extract the spell id from the action. If it is castable - cast it, else - REST.
  • At the end of each step I went through all spells, checked whether executing each one left my inventory in a valid state (all counts above -1 and the sum of items smaller than 11), and added it as a new step - but only if it didn’t push me past my maximum of 16,000 steps, because I would never get there anyway.
  • Also at each step I checked whether I could complete some order with the current inventory, and removed that order from the list, because the quickest way to it had been found. Once all orders are found, there is no need to run all 16,000 simulations.

At this point I reached 18,000 simulations without timeouts. After the event ended I could do 60,000 without any timeout problems. I don’t know why - 50 ms should always be 50 ms.

Order decision: the question here was what to aim for - the most rupees, or finishing as fast as possible. I ended up maximizing rupees while neither I nor the other witch had brewed 3 orders, then switching to quick orders in the hope of ending with more rupees. There is a lot of room to play with this decision.

2 Likes

My rank: 42 (legend)
Language: C++

Hello there!

First of all, thank you for the contest! It was a great fun!
I also wanted to thank @Domiko, @aCat, @Nerchio, @Zylo and the rest of the polish CG community for the invaluable tips!

The beginning

At the beginning, I felt quite overwhelmed. The rules were quite complex, with a lot of tiny details. I had some ideas about which algorithm to use; all of them required considerably fast simulations.

For the first few days I decided to focus on implementing a well-optimized DFS that I planned to modify into something more complex in the future.

Optimizing the simulation
The main idea was to fit everything into integers. Since there were only 46 possible spells (including the starting ones), I represented the entire spell list as one 64-bit integer. This also allowed me to perform the REST operation very quickly - a single OR operation.

The inventory consists of up to 10 ingredients of tiers 0-3. I also represented it as a single integer, obviously :wink: :

union Ingredients
{
    uint64_t value;
    char tiers[8]; 
    //tiers[0] - the amount of tier-0 ingredients
    //tiers[1] - empty
    //tiers[2] - the amount of tier-1 ingredients
    //tiers[3] - empty
    // ...

    uint64_t count() const
    {
        return tiers[0] + tiers[2] + tiers[4] + tiers[6];
    }
};

Why the empty spaces? Well, this way, if I want to add two ingredient lists, I can do it as follows:

const uint64_t clearMask = 0x00FF00FF00FF00FF;
Ingredients sum = ((ing1.value + ing2.value) & clearMask);

If I add a negative value, the borrow/carry spills into the next char. That’s why I needed to leave the empty spaces and clear them with the clearMask.
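The trick can be checked in isolation with free functions mirroring the layout above (packIng/addIng are hypothetical names; the spacing and mask follow the union described in the post):

```cpp
#include <cassert>
#include <cstdint>

// Pack four tier counts into the spaced layout described above: one data
// byte, then one empty byte, per tier. Negative costs are stored as
// two's-complement bytes; the empty byte absorbs the borrow/carry.
uint64_t packIng(int t0, int t1, int t2, int t3)
{
    return  (uint64_t)(uint8_t)t0
         | ((uint64_t)(uint8_t)t1 << 16)
         | ((uint64_t)(uint8_t)t2 << 32)
         | ((uint64_t)(uint8_t)t3 << 48);
}

const uint64_t clearMask = 0x00FF00FF00FF00FFull;

// One 64-bit addition updates all four tiers; the mask wipes the spill
// that landed in the empty bytes.
uint64_t addIng(uint64_t a, uint64_t b)
{
    return (a + b) & clearMask;
}
```

For example, an inventory of (3, 1, 0, 0) plus a spell cost of (-2, +1, 0, 0) gives (1, 2, 0, 0) in a single addition.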

I represented each spell and potion as the Ingredients entity, so when I wanted to check if I can brew a potion or cast a spell, I needed one addition and AND with a correct bitmask.

In this way, I could represent the state of the entire game with just a few 64-bit integers.

With this representation, I managed to get the DFS with the depth of 6 turns.

Beamsearch (how original)

The DFS was not enough to get through the silver league, I needed something better. I thought about the Monte Carlo approach, but having already the DFS, it was much quicker to implement the beamsearch. I had also heard some rumours from the CG chat and my friends that it performs quite well, so I decided to give it a shot.

I set the beam width to 500 and was able to search more than 60 moves ahead in 50 ms. I reached the gold league and the real struggle began. After many hours of parameter fitting and watching how the best players played, I drew some interesting conclusions:

  • I was unable to implement any reasonable opponent tracking – I completely abandoned that idea.

  • Predicting more than 20 turns ahead actually made my bot worse. The AI was very slow to act compared to the others: it played very safely and waited with brewing potions until it was too late. I decided to limit the depth and increase the beam width to around 1750. With duplicate-state removal, it usually reached a depth of 15-20.

  • Sacrificing the first few turns to learn spells - that was probably the most surprising one. After limiting the beamsearch depth to no more than 20 turns, the bot had trouble determining whether it needed a particular spell. I often ended up with 2-3 spells (aside from the starting ones) while my opponents had 5-10. Learning a 0-cost spell during the first 4 turns gave me a significant boost.

Cons
I’m not a very big fan of the game; there was almost no interaction between the players. It was really hard to analyze the games, at least for me. It often felt very random: slightly changing some parameters in the heuristic function drastically affected the results, and I’m still not sure why the particular values I ended up with were so much better than the others. The players’ bots were quite similar, especially at the top of the gold league.

Timeouts – the fast and slow CG machines
I often joked about my bot getting a faster or slower computer to run on during testing. I’m not sure if there’s any truth to it. Basically, after submitting, my bot usually had some significant win streaks and then suddenly lost 3-5 games in a row (including one timeout, of course :upside_down_face:). CG felt very unstable, especially on the final day (500+ submits at the same time!).

Summary
I really enjoyed the contest. I’m looking forward to the next one!

5 Likes

I was not planning to write a full-blown postmortem, since a lot has been mentioned already and my 3rd rank is too far from the top; I doubt it has much, if any, relevance once they publish theirs. But anyway, some asked, so here’s the gist.

  • Draft phase
    • Fixed number of turns.
    • Learn first available spell unless the second spell is significantly better than the first.
    • I was convinced draft should not be a distinct phase and be merged with the main game but failed to make it work.
  • Spell score
    • Determined by self-play for 50k games.
    • Each game skips the draft phase by forcing the same spells to each player, with the exception of one different spell on each side selected by matchmaking, e.g. one random spell vs another one of similar ranking.
    • Update the spell ranking according to game result.
    • Tried to evaluate spell synergy, failed to give improvements.
  • Main search
    • Beam stack search. I don’t like vanilla beam search because it loses the completeness property.
    • Use available recipes to give a value to each number of ingredients, as a heuristic to further guide the search.
    • Nodes are pruned if the same key (inventory + potions brewed + spell learned) had been reached in a depth lower or equal.
    • New nodes are also ‘manually’ pruned for more efficiency. A few examples:
      • Learn before cast unless that cast allowed you to learn it (due to tax).
      • Learn less taxed spells before the more taxed ones.
      • Do not cast a spell after rest if it could have been cast before.
    • Search opponent with same algorithm except 3x less time.
    • Use opponent solution when searching for self to prohibit brewing any of their potions if they brew them faster.
    • Perhaps the only CG game where better search is worse?
      • Thanks to the random recipe drawn after a brew which has the property to mess up any plans and transform a lead or a win straight into a loss.
      • Had to artificially limit the search depth to 15, any further decreased playing performance.
      • Too much effort wasted on optimization as a result.
    • Evaluation: Score, endgame reached, # of ingredients left according to tier, bonus for # of deliveries, bonus for faster brews.
  • Endgame search
    • During the main search, for each possible combination of potion brewed, keep the best path found to it. Also includes a path for maximizing score from ingredients.
    • When main search is over, build a move tree for each player out of those paths.
    • Perform simultaneous minimax on those trees with payoff matrix resolution to determine node value.
    • Leaf node value is the end result win/draw/loss. If the game doesn’t complete, consider it as lost. Failed to find better values.
    • If the root node value is good enough, override the search move with the minimax move.
    • Contrary to game theory, I found using a mixed strategy is weaker in practice than a pure one.
    • Works well for figuring out whether a game is won/lost (and how) up to 15 moves ahead.
    • Too much effort wasted, the prediction is too often useless. The random recipe draw can mess everything up.
  • Random thoughts
    • I briefly tried training a NN to make some basic decisions but failed to converge anything meaningful, so gave up quickly.
    • Random timeouts are a serious issue, there is no excuse if a language is not garbage collected.
    • The game is fun at first but very frustrating and opaque to improve at beyond the obvious. Better/smarter planning is of little use with RNG possibly changing all optimal paths.
    • Card draws are worse than fog of war, especially when that impactful. And I hate fog of war.
    • Debugging with viewer is as painful as it’s ever been. Please consider simpler and alternate debug views again.
    • Also please fix the scrolling, matching output with viewer should not be easier in a shared replay link.
    • emil. was my smurf/alt
      • I prefer playing anonymously these days, it’s much less stressful/pressure for me, and I can chat around without questions on my bot/performance.
      • For disclosure, I have done so a few times in the past to mess around and give up soon after. I may keep doing so for the same reasons.
      • This time was different when I unexpectedly ranked in the top early on and then decided to just play along until the last stretch.
      • I used Rust to ‘hide’ better and also to internally put pressure on CG to keep parity between IDE and arena. They were made aware of it.
        • The issue was raised months ago when they added Release mode only to arena, which very negatively affects local and batch testing, but was not addressed.
        • If that failed, my plan was to switch to C++ later in the week.
      • I apologize for the dead spot in legend, had no idea deleted accounts would not just disappear.

And obviously, many thanks to CG for organizing the contest, and the community for being so lively and fun during a challenge.

33 Likes

Could you please detail a bit more regarding this? What model did you use? What approach did you take to train it?

TLDR:

(details found in other posts)
C#, 56th, Beam search
Depth 30 with beam of 500, ending mostly between 10-20 depth
Run for enemy to find end of game.
Scoring based on:

  • ingredients needed for customers
  • offline spell learning score
  • closest customer in ingredients needed
  • Spells usable with given inventory + extra for repeats

Extra:

  • Beautiful UI, but terrible for debugging!
  • Too little opponent interactions for my taste
  • Randomness :scream:
  • Had given up as I had zero motivation since mid week, but my gf told me I would regret it if I didn’t make Legend => made me do an all nighter with coffee and red bull getting into Legend around 6 AM the last night :rocket: :zzz: #NoRegrets
  • Big thanks to CodinGame for having this great platform and free games, which makes me want to plan my year around the contests :heart:

And using VR for the contest was not the most efficient :sweat_smile:
( https://twitter.com/erik_kvanli/status/1326879016817750017?s=20 )

15 Likes

Hello Wala,
Thanks for your feedback! I like the idea of array with the size in the first element.
I would love to discuss your code, because mine in Java is really not optimized. If I didn’t cap the time limit at 40 ms, it would frequently time out.
I wasn’t aware of the JVM memory issues. Did you try removing that workaround to see if it really matters?

1 Like

What is the philosophy behind “sum(blues)/2 + 1”?

My scores are based on how long it takes to get a gem if all spells are exhausted.

You get 2 blues from the starting spell, plus one extra turn if you need to rest it. So I count a blue as half a turn.

I was considering adjusting scores based on the spells I learn, but never got around to doing that. Not yet.

Can you explain this factor “tomeIndex/3” and why you are dividing by 3?