Spring Challenge 2021 - Feedbacks & strategies

I went with simultaneous MCTS with UCB1. The only restriction was don’t seed if you already have a seed, and the evaluation function was just win/lose/tie. It showed promise, but the problem was performance: not enough sims per turn. I was in the process of switching to a bitboard but ran out of time.
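
For context, the UCB1 selection rule this relies on fits in a few lines; a minimal sketch (illustrative names, not the poster’s actual code):

#include <cmath>
#include <vector>

struct Node {
    double wins = 0;   // accumulated result (1 = win, 0.5 = tie, 0 = loss)
    int visits = 0;
    std::vector<Node*> children;
};

// UCB1: exploit average reward, explore rarely-visited children.
Node* selectChild(const Node& parent, double c = 1.41421356) {
    Node* best = nullptr;
    double bestScore = -1e18;
    for (Node* ch : parent.children) {
        if (ch->visits == 0) return ch;  // always try unvisited children first
        double score = ch->wins / ch->visits
                     + c * std::sqrt(std::log((double)parent.visits) / ch->visits);
        if (score > bestScore) { bestScore = score; best = ch; }
    }
    return best;
}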

Thanks for sharing and congrats on your work!

Do you have a trick to quickly convert the state into NN inputs?

Hi, rank 3306 here!
Some amazingly inspiring ideas already posted above, can’t wait to try them out. What could I possibly add?
Well, I’ve translated my code into a few languages on my quest to get that sweet Legendary achievement. And I think it’s a great exercise in many ways. But, for me specifically, it helped unearth how questionable the readability of my code remains, even to this day. Can recommend for sure.
I don’t know many platforms, but doing this on CodinGame surely was enjoyable. See you guys this fall in Gold, hell, maybe even in Legendary :>

Update: looking through some post-mortems and seeing that I was moving in the right direction, even though I did not implement MCTS/beam search or anything like it, now that is truly inspiring :100:

10 Likes

Hello,
Rank 172 in Master with Java here. This is my second competition, and I think I used a beam search approach, if I understand it correctly.
I did not use the back end code.
Some methods I used were:
Copying a game:
Here I had a big mistake where two-dimensional arrays were not copied correctly, which mattered a lot and was hard to find. Thank god for unit tests.

Applying a move to a game, calculating all possible moves for a game,
calculating all plausible moves:
I would trim the move list by capping the number of trees per size (mostly [1, 2, 4, 6]) and by not completing while my opponent had no size-3 tree, which helped a lot when my opponent made a sell-off in the midgame in the hopes of winning. (Also, selling when this happens would maybe have been better.)

This was my beam, kind of: I simulated a whole day of my own turns, maximizing the value of the game, until I reached a WAIT. My opponent did nothing meanwhile.

Then I had shadow calculation for all 6 directions and a potential shadow calculation, where I simulated my opponent using all grow actions which were possible at this point.

Then there is my game-evaluating algorithm… Oh my… I think it had like 20 variable parameters, because I had not much time and no game-breaking fast ideas left on Sunday night, so I made minor changes to close the gap from 60 gold to Legend.
First the basic stuff: everything was transformed to points, so score was always 1 and sun was 1/3. I also valued sun production for the next round at 1/6 at first, then 1/3, and 0 on the last turn.
Lastly, each tree had a complex score depending on size, [1.1, 2.2, 4.4, 9.6], until late game. I added decreasing scores for the next turns where it was not in a shadow, depending on size, and a decrease if it was in potential shadow. Seeds also had decreasing downgrades depending on neighboring trees of the same player. This made my last push possible, because I totally thought that the downgrade for shadows was enough, which was a big mistake. Lastly, some bonus score for the richness value of the cell. All of this was multiplied by a factor which decreased as the turns went by, to make completes more useful.
For the endgame I had slightly different values, where the base value of a tree was much lower, since trees are kind of worthless at the end if you did not complete them.
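
To make the shape of such an eval concrete, here is a minimal sketch in C++ (the bot itself was Java; all types and weights below are illustrative, not the actual tuned parameters):

#include <vector>

// Illustrative types; the real bot had many more fields and ~20 parameters.
struct Tree { int size; double shadowPenalty; double richnessBonus; };
struct Game { int score, sun, day; double sunIncomeNextTurn; std::vector<Tree> myTrees; };

double evaluate(const Game& g) {
    static const double treeValue[4] = {1.1, 2.2, 4.4, 9.6};
    double pts = g.score * 1.0 + g.sun / 3.0;      // score = 1 point, sun = 1/3
    pts += g.sunIncomeNextTurn / 3.0;              // next-round production (0 on the last day)
    double treeFactor = 1.0 - g.day / 24.0;        // decays so COMPLETEs pay off late
    for (const Tree& t : g.myTrees)
        pts += (treeValue[t.size] - t.shadowPenalty + t.richnessBonus) * treeFactor;
    return pts;
}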

I also started completing on day 11 at the earliest.

In total it was a fun challenge, where I was happy to reach Master for the first time, even without being able to code for three days. Next time I would like to have fewer fixed parameters, but rather a better algorithm. I would also write even more unit tests, since I had like 3-4 pretty game-breaking, kind of invisible bugs. It was a little bit annoying to manage the parameters at the end. It got late, and I got tired at 2 a.m. watching my last commit barely making it after an hour.

9 Likes

Nothing too clever, just the first layer of the NN being split in half to take an array of 9 uint64’s and an array of 6 floats. The uint64’s are then made from my gamestate (e.g. the first is 37 bits of ‘blocked’ and 27 bits of ‘active[player]’; the second is 10 bits of ‘active[player]’, 37 bits of ‘trees[0][player]’ and 17 bits of ‘trees[1][player]’). I precalculate all the shifts so I don’t need to do any looping through bits. The other binary inputs are a single 1 in a range, so I just calculate which bit to set.

In the NN part it then just loops through the set bits in each of the 9 uint64’s and, for each of the output values, adds the weights relating to that input as needed (no need for multiplications, as the input value is 1). It then does a normal matrix multiply for the 6 floats and combines the outputs (and the biases). In theory I should be able to run the first layer directly off the gamestate, without the intermediate array of uint64’s, for a small speedup, but I don’t want to make it game specific and it’s not a major part of my time usage.
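
A minimal sketch of such a sparse first layer (sizes and names are illustrative, not the poster’s actual code):

#include <cstdint>

const int IN_BITS = 9, IN_FLOATS = 6, OUT = 64;  // illustrative sizes
float wBits[IN_BITS * 64][OUT];   // one weight row per possible input bit
float wFloat[IN_FLOATS][OUT];
float bias[OUT];

// Binary inputs are 0/1, so "multiply and accumulate" collapses to
// "add the weight row of every set bit".
void firstLayer(const uint64_t bits[IN_BITS], const float floats[IN_FLOATS], float out[OUT]) {
    for (int o = 0; o < OUT; o++) out[o] = bias[o];
    for (int i = 0; i < IN_BITS; i++) {
        uint64_t b = bits[i];
        while (b) {
            int bit = __builtin_ctzll(b);   // index of the lowest set bit
            b &= b - 1;                     // clear it
            const float* w = wBits[i * 64 + bit];
            for (int o = 0; o < OUT; o++) out[o] += w[o];
        }
    }
    for (int i = 0; i < IN_FLOATS; i++)     // dense part for the 6 floats
        for (int o = 0; o < OUT; o++) out[o] += floats[i] * wFloat[i][o];
}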

Cheers once again for the great contest CG team & fellow Codingamers!

It’s been a while since I’ve devoted as much time to a codingame contest and written a PM :stuck_out_tongue: Looks like I’ll finish around #90-100 in Legend. I quite liked the simplified design of the game and appreciate the discrete state space. I’ll just refer to this contest as Tororo, it sounds better that way. Also, kudos to reCurse for an absolutely dominating performance! Looking forward to your PM. Edit: really interesting PMs all around! Love it :slight_smile: thanks for sharing!

Some of you may associate me with a particular high-level interpreted language I like to compete in, but sometime in the middle of the contest I decided python just wasn’t fast enough to simulate enough moves into the future for a search-based strategy to work decently well (I squeezed out ~1K simulations with evals). I also haven’t been programming much over the past year, so the additional C++ practice was most welcome.

Even though it was easier to translate to C++ with the python bot as reference, it still took me a couple of days to get it working. Once that was done, I managed to beat the Gold Boss just by increasing the search breadth/depth (~50-60K simulations with evals). Below I’ll briefly describe a couple of things I learnt during this contest.

Bitboarding?

Edit2: not fully a bitboard but merely a condensed state representation…? Since I still needed to iterate through indices to generate shadows :frowning: only some operations are faster with my representation.

Tororo’s game state quite intuitively translated over to a bitwise notation where the presence of a tree is just 0/1 on the i-th bit. This means we can represent any state as the following (depth & actions are for book-keeping during search):

struct State{
    uint64_t seed;          // | first 37 bits CELL has SEED        | --11 bits free-- |       16 bits Depth       |
    uint64_t sm;            // | first 37 bits CELL has size 1 TREE | --11 bits free-- |       16 bits Actions     |
    uint64_t md;            // | first 37 bits CELL has size 2 TREE | -----------------27 bits free----------------|
    uint64_t ta;            // | first 37 bits CELL has size 3 TREE | -----------------27 bits free----------------|
    uint64_t me;            // | first 37 bits CELL has OWN tree    | --11 bits free-- | 8 bits SCORE | 8 bits SUN |
    uint64_t opp;           // | first 37 bits CELL has OPP tree    | --11 bits free-- | 8 bits SCORE | 8 bits SUN |
    uint64_t dormant;       // | first 37 bits CELL is DORMANT      | --11 bits free-- | 8 bits NUTRI | 8 bits DAY |
};

With this, we can shorten some of our operations into a few bitwise operators which should speed things up significantly compared to storing and iterating over a size-37 (or more) array/vector:

// State-based macros
/* Patterns:
 * #define TREES 0x1FFFFFFFFFUL // First 37 bits
 * #define i64 ((uint64_t)1)
 *  get_(me/opp)_trees          : s.(seed/sm/md/ta)&TREES&s.(me/opp)                //Order does not matter
 *  get_(me/opp)_active_trees   : (~s.dormant)&s.(me/opp)&TREES                     //mask with inverted dormant bits
 *  cnt_(me/opp)_trees          : cntBits(s.(seed/sm/md/ta)&TREES&s.(me/opp))       //function call + first macro pattern
 *  can_seed_locations          : ((~(s.seed|s.sm|s.md|s.ta|SOIL_ZERO))&TREES)      //merge the first 4 bitmasks and places
 *                                                                                  //where SOIL is 0, invert them and
 *                                                                                  //mask again with TREES
 *  accumulating shadows        : if (state.sm&idx) // idx = (i64<<i)
 *                                    shadow.sm |= SHADOW_CELLS[d][i][1];           //For day d, tree i and size 1
 */ 

For cntBits I adapted the solution from https://stackoverflow.com/a/109025 to count set bits in constant time.
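
That answer’s SWAR version, adapted (plus the compiler builtin that usually maps to a single POPCNT instruction):

#include <cstdint>

// SWAR 64-bit popcount, adapted from https://stackoverflow.com/a/109025
int cntBits(uint64_t v) {
    v = v - ((v >> 1) & 0x5555555555555555ULL);
    v = (v & 0x3333333333333333ULL) + ((v >> 2) & 0x3333333333333333ULL);
    v = (v + (v >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
    return (int)((v * 0x0101010101010101ULL) >> 56);
}

// With GCC/Clang and -mpopcnt this is one instruction:
// int cntBits(uint64_t v) { return __builtin_popcountll(v); }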

Some Shadowing/Seeding tricks

// bitmask of adjacent cells for idx i at length l [i][l]
const u64 ADJ_CELLS[37][4]; // [0][0]: {0UL,126UL,524286UL,137438953470UL},
// bitmask of shadowed cells on day d for idx i at length l [d][i][l]
const u64 SHADOW_CELLS[6][37][4]; // [0][0]: {0UL,2UL,130UL,524418UL},

First, we can precompute all neighboring cells for each location at a given distance with a 37-bit bitmask. Similarly, we can do so for any cell on a given (day%6) where its shadow would be cast. So to check whether we can seed to a location, we only need to check whether our tree bits intersect with the neighbor bitmask (of radius tree height) around the seed_to location (an inverse check, as I iterate through possible seed locations instead of own trees). However, iterating through 0-36 for which index to seed_from still made my seeding action generation O(n^2) :(. For shadows, we can combine all the bitmasks together in one iteration over the tree indices to get the overall shadow ‘map’ for the day.
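
Putting the table and the state together, accumulating the day’s shadow map looks roughly like this (a sketch reusing the State struct and SHADOW_CELLS above):

#include <cstdint>
typedef uint64_t u64;

#define TREES 0x1FFFFFFFFFULL            // first 37 bits
extern const u64 SHADOW_CELLS[6][37][4]; // precomputed as described above

// OR together the shadow mask of every tree on the board for day d (d = day % 6).
// A size-s tree casts a shadow of length s, so the table is indexed with s.
u64 shadowMap(const State& s, int d) {
    u64 shadows = 0;
    u64 trees[4] = {s.seed & TREES, s.sm & TREES, s.md & TREES, s.ta & TREES};
    for (int size = 1; size <= 3; size++) {   // seeds (size 0) cast no shadow
        u64 b = trees[size];
        while (b) {
            int i = __builtin_ctzll(b);       // next tree index
            b &= b - 1;
            shadows |= SHADOW_CELLS[d][i][size];
        }
    }
    return shadows;
}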

Zobrist Hashing

Another trick was Zobrist hashing, which reduced the collisions for my hashmap and gave me a boost of ~5K more simulations (10%), even though it was more expensive to compute than the naive prime-multiply-and-add strategy used before. The Zobrist hashing table is statically pre-populated with random 64-bit unsigned long long integers. All in all, around 1/4 of my codebase was constants o.O

The general strategy for Zobrist hashing is to XOR together a random key for each component of the board state. Each cell can be in one of 9 unique states {empty, (seed, small, medium, tall) x (me, opp)}, in addition to a limit of 24 days, 21 nutrient values and, I assume, a max of 255 possible sun/score. Chuck them all into an XOR and we get back our collision-reduced hash.
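
A minimal sketch of that scheme (sizes as described above; in the contest the table would be baked in as constants rather than generated at runtime):

#include <cstdint>
#include <random>

// One random key per (cell, cell-state), plus keys for the scalar parts.
uint64_t Z_CELL[37][9];   // 9 states: empty + (seed/small/med/tall) x (me/opp)
uint64_t Z_DAY[24], Z_NUTRI[21], Z_SUN[2][256], Z_SCORE[2][256];

void initZobrist(uint64_t seed = 0x9E3779B97F4A7C15ULL) {
    std::mt19937_64 rng(seed);
    for (auto& row : Z_CELL) for (auto& k : row) k = rng();
    for (auto& k : Z_DAY)   k = rng();
    for (auto& k : Z_NUTRI) k = rng();
    for (auto& p : Z_SUN)   for (auto& k : p) k = rng();
    for (auto& p : Z_SCORE) for (auto& k : p) k = rng();
}

uint64_t hashState(const int cellState[37], int day, int nutrients,
                   const int sun[2], const int score[2]) {
    uint64_t h = Z_DAY[day] ^ Z_NUTRI[nutrients];
    for (int i = 0; i < 37; i++) h ^= Z_CELL[i][cellState[i]];
    for (int p = 0; p < 2; p++)  h ^= Z_SUN[p][sun[p]] ^ Z_SCORE[p][score[p]];
    return h;
}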

Optimization tricks

As per Magus and _CPC_Herbert: C++ and the -O3 compilation flag (https://www.codingame.com/forum/t/c-and-the-o3-compilation-flag/1670).

  1. #pragma GCC optimize "O3,inline,omit-frame-pointer" at the top of your file!
    • Declare functions explicitly as inline.
  2. Reuse global variables
    • struct State[] for the search and struct Shadow[] for computing shadow maps. To recover from an early-exit scenario (timeout when searching a layer), I keep a pair of State arrays and just alternate between them by indexing with depth%2. So when a layer is incomplete, we can revisit the old completed layer to evaluate those nodes instead.
  3. std is slow
    • Unfortunately I did not implement my own heapq or hashmap; the only two std data structures I really used were a std::priority_queue and a std::unordered_map in the search. It was faster to declare the pq as a function-local variable while reusing a global map and calling .clear() (heap log(n) removals I presume - but is there no way to wipe it in O(1)?).
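
Regarding that O(1) wipe: std::priority_queue keeps its container as a protected member c, so a thin subclass can expose a clear(), which is O(1) for trivially destructible elements. A sketch:

#include <queue>
#include <vector>

template <class T, class Cmp = std::less<T>>
struct ClearablePQ : std::priority_queue<T, std::vector<T>, Cmp> {
    void clear() { this->c.clear(); }  // 'c' is the protected underlying vector
};

// Usage: one global queue, wiped each turn instead of popping everything.
// ClearablePQ<int> pq;
// pq.push(3); pq.push(1);
// pq.clear();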

Search Strategy (pretty broken…)

Finally, some comments on my search strategy, although I think Euler’s implementation of beam search is far more accurate than the version I ended up with, which definitely feels a bit wonky now that I look at it more…

Roughly, I thought of it as a beam search where I limited the width of propagation at each search layer, but for Tororo we needed to consider two ‘layers’: actions and days. So I gave priority, BFS-style, to the fewest actions in the closest day, then broke ties by the state evaluation function. Not ordering by the number of actions taken gave me quite bad results, since a sequence with COMPLETE would always be weighted heavier in my heuristic evaluation (thinking back, maybe prioritizing by more SUN may be better than length of actions?). BUT after looking at Euler’s post, I’m sure my bot’s search is pretty broken… like, really, really, only-simulates-3-actions-deep-a-day broken… lol, so I shall skip describing it further…

Edit3: Fixed my beam over ACTIONS instead of DAYS and jumped to top 50 T.T For those interested in avoiding my embarrassing mistake

The only thing of note here was that I had a cheaper heuristic state evaluation (which I cached as a ‘score’ field in my State) to determine priority in the queue, and a more expensive playout policy for the final layer to select the best action. This allowed for more evals/states visited in exchange for a more approximate solution.

Heuristic Evaluations

My heuristic evaluation was a cheap next-day sun income and score (adding a bonus for shading opponent trees worked worse…) + a small weighted bonus for SOIL score. The final playout function uses a naive static WAIT policy… which works… surprisingly alright (a sketch follows the list below):

  1. Wait each turn to generate income. Compute shadow map. Since we’re always WAIT-ing, we can optimize slightly by unrolling the loop to only iterate through at most 6 days.
  2. We have a multiplicative decay of *0.95 for each day on sun income.
  3. Pessimistically assume nutrients will decrease by opponent_tall_trees when we harvest our trees. These additions were so that my bot didn’t hold onto trees for too long and prunes the board earlier.
  4. Add a bonus score for seeds that are in various levels of shade/no-shade. After all, an actual MCTS playout would have us grow these trees and produce some additional sun each turn.
  5. Return the simulated final score based on game rules + num_trees/100 (tie-breaker).
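
Condensed into code, the playout looks roughly like this (a sketch with illustrative types; the decay and nutrient pessimism follow the list above, and the seed bonus is omitted):

#include <algorithm>

struct PlayoutState {
    int day, sun, score, nutrients;
    int dailyIncome[6];   // income repeats with the 6-day shadow cycle
    int myTallTrees, oppTallTrees, numTrees;
};

double waitPlayout(PlayoutState s) {
    double sun = s.sun, decay = 1.0;
    for (int d = s.day; d < 24; d++) {
        decay *= 0.95;                        // future income is less certain
        sun += s.dailyIncome[d % 6] * decay;
    }
    double score = s.score;
    int nutrients = s.nutrients;
    for (int t = 0; t < s.myTallTrees; t++) { // harvest all size-3 trees at the end
        score += std::max(nutrients, 0);      // (+ richness bonus, omitted here)
        nutrients -= 1 + s.oppTallTrees;      // pessimistic: opponent harvests too
    }
    return score + sun / 3.0 + s.numTrees / 100.0;  // game rule + tie-breaker
}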

Remarks

Thanks again for the wonderful contest! Hopefully the things I learnt and shared about above may be helpful to some, definitely switched me back into programming mode from reading/writing philosophy all semester… Looking forward to the Fall Contest!

29 Likes

First of all, a shout-out to @BlitzProg. Like you, I poured a lot of time into this contest, with most of my code thrown away again. In the end, I magically made it to #184.

Getting to Bronze with a few lines of code was a breeze. Then I decided to recreate the referee in order to be able to simulate the game. I tried to implement MCTS for the first time in a contest but got frustrated after a while because nothing seemed to work, and I wasn’t sure if I should start searching for bugs (which were there for sure) or making the simulation faster.

Then I tried something more like beam search. A look-ahead of 5 steps and opponent prediction made my bot slower, but not really better. It was annoying to me that a depth of 1 was just as good as a heavy simulation. In hindsight, I’m pretty sure my bot wasn’t doing what I thought it was, resulting in weird behaviour. I should have tested every component more carefully and will definitely keep that in mind for the next contest.

For the sake of my rank, I eventually went back to the basics and wrote a heuristic bot. If it weren’t for watching Kung Fu Panda on Saturday, I don’t know if I’d have had enough strength to start my comeback. :panda_face: Here are the main ideas of my program:

  • an evaluation function calculates how many sun points should be expected in the next 6 turns or until the game ends; I assume that the board doesn’t change and use a slight decay factor (0.99) to account for uncertainty about the future; the function returns mySunExpectation – oppSunExpectation; taking the opponent’s sun into account was crucial here (see the sketch after this list)

  • I determine the “best” seeding, growing and completing action

  • seeding: maximize richness, then maximize positional advantage by evaluating with a tree of size 3 on that position; I only allowed seeding on cells without any of my existing trees (no matter the size) within a distance of 2; this seems a good compromise between being able to plant enough trees and not losing sun points by blocking my own trees

  • growing: maximize tree size (larger trees → more sun points for me, more shadow for opponent), then minimize amount of trees of the new size, then maximize evaluation with new tree size

  • completing: maximize evaluation, then maximize cell richness (in the endgame, this is switched to avoid losing richness points)

  • completing a tree is only allowed when I have at least daysLeft / 2 fully grown trees and if no other tree has been completed on that day (or if it’s the last day)

  • seeding is only allowed if I don’t own any other seed already and I have less than 8 trees in total

  • endgame: don’t waste sun points on seeding or growing if a tree is impossible to complete
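
A sketch of the sun-expectation function from the first bullet (illustrative types; the frozen-board assumption means each tree’s shadow status per phase of the 6-day cycle can be precomputed once):

#include <algorithm>
#include <vector>

// For each tree, precompute on which of the 6 shadow phases it is shadowed
// by an equal-or-taller tree (the board is assumed not to change).
struct SimpleTree { int size; bool shadowed[6]; };

double sunExpectation(const std::vector<SimpleTree>& trees, int day) {
    double total = 0.0, decay = 1.0;
    int lastDay = std::min(day + 6, 24);   // next 6 turns, or until the game ends
    for (int d = day; d < lastDay; d++) {
        decay *= 0.99;                     // slight decay for future uncertainty
        for (const SimpleTree& t : trees)
            if (!t.shadowed[d % 6]) total += t.size * decay;
    }
    return total;
}

// eval = sunExpectation(myTrees, day) - sunExpectation(oppTrees, day);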

Overall, I enjoyed this contest and am happy to end it in legend. Thanks, CodinGame, for providing this awesome challenge and congratulations to the winners – especially @reCurse - and everyone who tried hard! :deciduous_tree: :evergreen_tree: :whale: :evergreen_tree: :deciduous_tree:

11 Likes

Hi,

485th global here! My best result (yet?) in CodinGame contests!

I’m both surprised and happy about that! The contest was, all in all, really well done and well thought out, with very beginner-friendly first steps (which is definitely appreciable to bring new people to the game and create some curiosity), and, as shown by the PMs here, some variety in strategies!

In my case, I somewhat unwillingly took the lazy approach suggested by Bob in a few blog posts. I think this is the contest where I spent the least time coding. But I took my time to figure out how to write my initial code (status and such) and what I wanted to do with it.

I felt like doing a simulation would probably be out of my current range (but I want to try, based on some PMs, to see what would be possible), so I went with heuristics on a mix of the next days:

  • Seeding is influenced by the full sun cycle: avoid seeding in places where you’ll get shadows from yourself (I didn’t take the opponent too much into account, as you can shadow a tree which shadows you)
  • Growing is influenced by the next two to three days: what happens to my tree if I don’t grow it? Will it get shadowed, or can I prevent it? And furthermore, can I shadow one of my opponent’s trees?
  • Last but not least, except on the last days, completion happens only when a size-3 tree is about to get shadowed.

In the end, the idea for me was that the more sun points you get, the more likely you are to win! And in turns where a potential size-3 tree will be shadowed, I’d rather try to put a seed in that favorable position than keep a useless size-3 tree.

I think I didn’t do a big “programming job” here and kind of let the others do it? My approach to this contest wouldn’t have worked with the last two, as this strategy pretty much “reacts” to the opponent rather than being smart itself, but it was really interesting to think about strategies instead of going directly into a big simulation (which, again, I probably couldn’t have done that well).

Shoutout to @LuckyJ for the always interesting discussions about strategy and “value” of Sun Points!

And thanks again to CodinGame for the contest!

9 Likes

Thanks CG for organizing. The game was quite different from others on CG, with some interesting mechanisms that gave various strategies a chance. And all for free. :slight_smile: Good job!

Tried alpha-beta pruning with a NN for state evaluation. Failed because I made my NN too simple, so it converged at gold/600. Still learned a lot, and learned that I have yet a lot to learn.

Thanks!

8 Likes

Well done to all, and especially to @reCurse. Thanks again to CG for the contest.
So, if you weren’t watching the chat, you probably missed the fact that I wasn’t particularly keen on this one.
(Just my personal opinion ofc).
In fact I wasn’t going to bother going past Bronze. Still, @eulerscheZahl forced me to reach Gold.
If-else bot, no shadows.
Sort of enjoyed it (a bit) but server issues on Saturday killed my motivation to go any further.
PS. Not knocking the game or anything :stuck_out_tongue:

5 Likes

Top 10 bot with beam search

Heuristics

I tried to stick with it for the first few days (maybe too long) to get a better comprehension of the game, and also because it is fun. Most of it was about heavy pruning, use of next-turn shadows, and an endgame rule of thumb, complete when: 23 - day <= min(nutrients, max(player1tree3cnt, player2tree3cnt))

Beam search

Brought me back to top 20 (after fixing some major bugs).
Pretty similar to @eulerscheZahl’s but with a heavy eval: simulate sun collection till the end, then complete all trees. That is why my bot tended to turn the tide at the end.

Pruning

I used almost every ordering/limitation that was mentioned; some less frequent/important ones are:
  • (in the first turns) when there are no seeds, I do allow seeding before other actions
  • I only calculate the target of the seed action, and just use the best source tree for it on demand

Speedups

I started without a bitboard, because I felt that seeding would be the most costly part anyway. But later, for the heavy eval, I added the (simpler) bitboard representation already mentioned. I also utilized the fact that sun collection has a repeated pattern every 6th day (considering that my eval takes no actions).
Other, smaller optimizations didn’t really matter; the beam had reached its limit (the uncertainty coming from opponent actions).

Enemy prediction

That is what brought breakthrough #1, around the opening of the Legend league.
Part one is making automatic GROW moves (if there is no next-turn shadow).
The other part is a real prediction of the next 1-2 grow/complete actions, by running the same beam for the opponent (for a waiting opponent, or if the first action is waiting, I used the second action).

In contrast to my initial feeling of a simple optimization problem, it proved to be a very interesting contest. Heuristics were competitive till the last few days, and there was no clear winning search algorithm till…
congrats to @reCurse for the excellent (and can I say shocking) performance with a NN in a contest!

26 Likes

I really liked this challenge because both heuristic and AI-algorithm approaches could reach Legend (I only used heuristics and got to Legend).

6 Likes

I will probably finish somewhere just outside the top 100 in legend.

My bot was a heuristic bot, which selected actions based on the order:
COMPLETE
SEED
GROW
WAIT

with heavy conditions on COMPLETE and SEED based on the number of trees and days.

Once it had decided which of the actions to take, it then went into a secondary phase, selecting which tree to complete / grow / seed based on shadows - so if the opponent would be hurt or I would be helped by a GROW or COMPLETE command, it would prefer that one. One of the big benefits at the end of the contest was rewriting all of the special cases surrounding the shadows into an evaluation function that just worked out all of the shadows given a day and a set of trees, which meant all of the edge cases I got wrong were magically fixed and pushed me over the top into legend.

I didn’t have enough time working on this challenge to write a sim based bot and optimise it, so maybe I should have changed languages to get the legend achievement!

10 Likes

I did the exact same thing, but I struggled to incorporate this information properly into my evaluation. In my final version (ranked #33 in legend):

  • I considered the waiting opponent to get sun points at the end of the day in my simulation
  • I used the growing opponent to compute guaranteed (pessimistic) sun production and potential shadow cast on opponent (optimistic) as part of my heuristic.

But your 2/3 + 1/3 trick in the simulation is much more elegant, and if I switch to it (which is basically a 3-line diff), I get a 60% win rate against my version. Too bad I could not think of it…

2 Likes

I also think that training NN to output a value of a state could be a good idea in this game. Will probably try it myself sometime.

This is similar to my experience: the jump from if-else heuristics to true AI is very steep. Making an AI solution that beats the if-else heuristics is not easy.

2 Likes

Rank: 698/1608 Gold (925/6867 Global)
Language: Java (~1700 LOC)
Strategy: MCTS (simultaneous), tree reuse, pruning, eval
Time Spent: 50~60h (much of it debugging)

Overview

I’ve competed on CodinGame before, but this was the first time I really went for it. I’m happy that I implemented a simultaneous MCTS with state stored between turns, and managed to code up most of what I intended. However, due to the poor performance of my sims, taking too much time to calculate possible actions, and memory issues, my code couldn’t really make good use of tree search.

Overall despite the struggles I had a blast, learned a lot and I’m sure to come back for future contests.

Initial Strategy

I went for simulating the game pretty early. My initial strategy was, starting with the current board state, to go through each possible action and (as many times as possible) play through the game from that action using random moves. After the move time runs out, I pick the action with the highest win rate. Code snippet:

List<ActionWithScore> possibleActionsWithScore = board.getPossibleActionsWithScore();
while (hasTimeRemaining()) {
    for (ActionWithScore actionWithScore : possibleActionsWithScore) {
        actionWithScore.playThroughAndUpdateScore();
    }
}

// Sort by win rate (best first)
Collections.sort(possibleActionsWithScore);
return possibleActionsWithScore.get(0).getAction();

After ironing out my game simulation bugs, I was surprised this got me as far as ~700 in gold at the gold league opening. At best I was running ~2k sims a turn, and unfortunately this number didn’t really improve after many tweaks, due to performance issues.
I was certain the initial strategy was suboptimal because of the purely random move choice, so I wanted to try MCTS next.

Board Representation

I stored most of the information about the board as an int[37] where each int holds all the information of the cell. Later, I found out about bitboarding and the possibility of storing the entire board in just a few ints, and even computing the shadows with bit shifts on a hex bitboard. I chose not to pursue this representation because at all times my main bottleneck appeared to be generating the list of possible moves at each state. In hindsight, implementing the leaner bitboard representation would’ve allowed me to more easily compute shadows and do more interesting things with pruning the possible actions using heuristics.

MCTS (simultaneous)

This was my first time implementing any game tree search algorithm (I hadn’t even done minimax before), so it was definitely a struggle. My initial MCTS used alternating nodes for me and the opponent, until I realized this gives opponent nodes information about what action I had taken.
I re-wrote the MCTS to be simultaneous by storing both actions in a node and choosing actions independently for each player based on UCB. I believe this is MCTS DUCT (Decoupled UCT), although I found out about the paper later and haven’t read it carefully.
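
A minimal sketch of that decoupled selection step (in C++ here for brevity; the bot was Java, and all names are illustrative): each player picks its own action arm by UCB over per-player statistics, and the pair of picks indexes the child.

#include <cmath>
#include <vector>

// Per-player arm statistics stored in a simultaneous-move node.
struct Arm { double reward = 0; int visits = 0; };

int pickArm(const std::vector<Arm>& arms, int nodeVisits, double c = 1.4) {
    int best = 0; double bestScore = -1e18;
    for (size_t i = 0; i < arms.size(); i++) {
        double s = arms[i].visits == 0 ? 1e18   // unvisited arms first
                 : arms[i].reward / arms[i].visits
                   + c * std::sqrt(std::log((double)nodeVisits) / arms[i].visits);
        if (s > bestScore) { bestScore = s; best = (int)i; }
    }
    return best;
}

// In the tree descent, both players choose independently:
// int a = pickArm(node.myArms, node.visits);    // my action
// int b = pickArm(node.oppArms, node.visits);   // opponent action (no peeking!)
// child = node.children[a][b];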

I also implemented storing the tree between moves and changing the new root to the current node whenever it existed (i.e. my opponent’s move was one of the expanded children of the prior root). I thought this was really cool, but it didn’t end up mattering much due to general performance issues with my code.

Pruning

For pruning the possible moves I used fairly conservative rules and probably should’ve spent more time making the pruning more aggressive:

  • Seed: use preferred seeding indices (where the seeding tree won’t shadow itself) in the first 18 days. Also, in the first 14 days, don’t seed at all in a cell that would exchange shadows with any of my trees (assume tree size 3 for the seeded cell to make this easy)
  • Grow: don’t grow beyond 5 trees of the next size
  • Complete: don’t complete in the first 11 days. Also don’t complete if I have less than 3 trees of size 3 and less than 5 trees of size 2 in the first 20 days (to avoid sun starving).

Scoring

I spent a lot of time tuning my scoring function. My intuition is that if the scoring function is a perfect predictor of the end-game score from that state (assuming optimal moves), then you’ve basically solved the game. This is also why a NN might be a good fit (it just has to map a board state to a score).
My scoring, as usual, was fairly complex. It took into account current scores, nutrients, sun, the current day, the number of trees of each size, and the number of directions from which those trees can be shadowed (i.e. across all 6 directions). After tuning it in the web IDE, it looked to be a pretty decent predictor of who’s winning.

Lessons Learned

What went well

  • Unit testing and ensuring that simulating the game works correctly
  • Asserts (pre-submission) catching most of my bugs
  • Scoring function seemed to work well
  • Created a Timer utility class which was handy in diagnosing issues and could easily be turned off

What went poorly

  • Code bloat and trying to code in 1 file for the first few days (after which I split up the code files and wrote a python script to stitch them together)
  • A lot of time diagnosing random timeout spikes which turned out to be GC
  • Expensive MCTS Node representations (containing a map, multiple lists and few other variables each)

Where I got lucky

  • Adding a line to access all my static fields (to force my static initializer blocks to run) and adding a System.gc() at the start of main solved most of my timeout issues

PS (edit after reading the posts above): Thanks for the great PMs, you guys are truly inspiring and I’ll revisit the strategies without the time pressure over the next few months. Also thanks to the game makers and CG for hosting; this was a great game/competition!

12 Likes

Legend 42 with C#

I used minimax with alpha-beta pruning and iterative deepening, and my heuristic was based on score, accumulated sun, tree counts, and tree values. Weights were trained offline by exhaustively running through ranges of values for each variable.

My game state used bitboards (ulong) for tree-related stuff and ints for everything else, so cloning states was a decently fast operation.

At the heart of my search was good move filtering and ordering. Since the majority of possible moves in each position were seed moves, I filtered out any that would end up casting shadows on my other trees within 2 tiles. This allowed me to search much deeper while also spacing out my trees so they could collect more sunlight. I also filtered out moves based on a few dozen expert-system-like observations, to help nudge the search away from bad openings or from running out of momentum in the middle game.

For move ordering, I prioritized COMPLETE moves first, then GROW moves, then SEED moves, then WAIT moves. Weights for richness and existing tree counts at each tree level were also included. This move ordering greatly improved my cutoff rate and consequently my search depth.
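
As an illustration (a C++ sketch with made-up weights, not the actual C# code), such an ordering can be expressed as a single key so the sort puts likely-cutoff moves first:

#include <algorithm>
#include <vector>

enum MoveType { WAIT, SEED, GROW, COMPLETE };
struct Move { MoveType type; int richness; int treesOfTargetSize; };

// Higher key = searched first; good ordering means earlier beta cutoffs.
int orderingKey(const Move& m) {
    static const int typeWeight[4] = {0, 100, 200, 300}; // WAIT,SEED,GROW,COMPLETE
    return typeWeight[m.type] + m.richness * 10 - m.treesOfTargetSize;
}

void orderMoves(std::vector<Move>& moves) {
    std::sort(moves.begin(), moves.end(),
              [](const Move& a, const Move& b) { return orderingKey(a) > orderingKey(b); });
}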

Late in the week I switched from simulating moves to simulating states, which helped a lot since prior to this I wasn’t fully implementing the game rules. At the end of the competition I was pretty consistently searching 14 moves (7 turns) deep.

By far my biggest jump in rankings was when I fixed a bug that was charging all my opponent’s actions to my sun bank instead of theirs, and that took me from 200th in gold to 40th in legend. Probably the thing I wasted the most time on was fixing bugs for the way I was incrementally updating my shadow map during playouts.

Additionally I built tools for training weights, automatically combining my code from multiple files into one, generating images for game states, serializing inputs from real matches so I could easily debug specific states locally, and parsing games from top players so I could analyze patterns.

Oh yeah, and on my first move I precalculated lookup tables for seeds and shadows for every day (mod 6), tile, and tree size.

Thank you for running this competition! It was a lot of fun.

12 Likes

I’ll keep it (relatively) short. I promised to share some bitboard stuff, but Kodle already did so. I think I’ll write a separate article at some point with some sims for popular board games so that people can learn how to simulate them efficiently.

My search was beam search. In hindsight I probably should have gone with MCTS. I am better at that, and I spent way too much time fixing bugs in my beam search (no idea why, it ought to be simple). I coded a single-player MCTS on Sunday, in just 2 hrs, bug-free. It just wasn’t good. I should have tried a 2-player one, but there was not enough time left.

For beam search, eval parameters are the most important part. I had:

  • Richness (separate parameter for each tree type)
  • Suns
  • Future suns (similar to euler’s)
  • GameScore
  • Shadow region overlap penalty (if a tree is within 3 hexes on a straight line to another tree it adds a penalty)

Search depth in the end was roughly 6-7 days I think (30 turns, but waiting on the slowest state). A beam width of 1k was best. I could simulate 250k states (actions) per turn, so performance was really not an issue. This game was all about clever pruning and eval.
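
For reference, the skeleton of such a beam search is small (a generic sketch with the game-specific parts stubbed out, not my contest code):

#include <algorithm>
#include <vector>

struct BeamState { /* bitboards, sun, score, ... */ double eval; };

// Expand every state in the beam, keep the best `width` children, repeat.
std::vector<BeamState> beamSearch(BeamState root, int width, int depth,
                                  std::vector<BeamState> (*expand)(const BeamState&)) {
    std::vector<BeamState> beam = {root};
    for (int d = 0; d < depth; d++) {
        std::vector<BeamState> next;
        for (const BeamState& s : beam)
            for (const BeamState& c : expand(s)) next.push_back(c);
        // Keep only the `width` best states by eval (a partial sort is enough).
        if ((int)next.size() > width) {
            std::nth_element(next.begin(), next.begin() + width, next.end(),
                [](const BeamState& a, const BeamState& b) { return a.eval > b.eval; });
            next.resize(width);
        }
        beam = std::move(next);
    }
    return beam;
}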

I’m of two minds when it comes to my result. If you had told me at the start of the contest that I’d be lower legend, I’d have been disappointed, but after doing so poorly on Wednesday (still in silver) I was very relieved when I finally beat the gold boss on Saturday. I tried more things on Sunday, but none of them gave me improvements. So my final code is the code that beat the gold boss.

I had a good time. Thanks to struct for helping me set up brutaltester, and congratulations to the winners, especially recurse, with his monster of a NN.

14 Likes

Finished 150 legend with rules based bot.

My main approach landed me in gold. The thing that got me to legend was actually watching @reCurse’s games and synthesizing his strategy into rules. I had no idea it was a NN at all.

Here was the simple bot I ended up with:

  • seed on diagonals only, or at distance 3 in a straight line; break ties by richness. Nothing else for seeding
  • don’t complete before day 12; after day 12, complete one tree a day. Prefer the tree that is blocked next turn or that blocks your own tree next turn
  • only grow to size 3 if it would either block an opponent’s tree next turn or unblock the tree you’re growing from your own tree next turn
  • if you have any grows below size 2 that would block, or save your own tree from getting blocked next turn, do those
  • otherwise grow the tallest tree
  • some other stuff for the endgame (day >= 20) to make sure it’s not wasting energy

There were obviously a lot of other differences and behaviors that I didn’t capture from watching his strategy, but I thought it was pretty decent for just being rules. For example, sometimes reCurse’s bot would complete more than one tree in a turn, but I didn’t figure out the condition for that. I think it also prioritized seeds very differently from what I did, and its endgame was different as well.

17 Likes