22nd rank / Go - Congrats to the winners, and especially Agade. Thanks again for keeping your “best” bot in to enable others to get better.
As usual, family and work life prevented me from having the amount of time I’d like to put into the contest, but I was able to put in a little time throughout the contest, which was a good change.
I liked this contest because doing reasonably well didn’t require deep knowledge of AI techniques that I don’t have at my fingertips (or the time to acquire). Thoughtful problem analysis generated a reasonable return on time invested.
From previous contests, I found that I usually ended up with a game-state “class” and component “classes”. I created input-processing and state-init methods that wrap the common, unchanging boilerplate code. I chose to keep the code simple and isolated to allow for quicker iterations. I also decided to try not to overthink things.
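Roughly, the shape was something like this: a minimal sketch in Python, with all class and field names illustrative rather than taken from the actual code, and the referee-specific parsing elided. The point is only the separation of unchanging input boilerplate from the decision code.

```python
class Sample:
    """One sample as read from the referee each turn."""
    def __init__(self, sample_id, carried_by, rank, gain, health, cost):
        self.sample_id = sample_id
        self.carried_by = carried_by   # -1 = cloud, 0 = me, 1 = opponent
        self.rank = rank
        self.gain = gain               # molecule type granted as expertise
        self.health = health
        self.cost = cost               # dict: molecule type -> count needed

class Robot:
    """My bot or the opponent: current module plus storage/expertise."""
    def __init__(self, target, storage, expertise):
        self.target = target           # SAMPLES, DIAGNOSIS, MOLECULES, LABORATORY
        self.storage = storage         # dict: molecule type -> count held
        self.expertise = expertise     # dict: molecule type -> expertise level

class GameState:
    """Wraps the unchanging per-turn input parsing so the decision code
    stays small and isolated; parsing details are elided in this sketch."""
    def read_turn(self):
        self.me, self.opp = self._read_robots()
        self.available, self.samples = self._read_world()

    def _read_robots(self): ...
    def _read_world(self): ...
```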
The bot has two layered functions. I didn’t do anything with forward-looking searches or probability-based sample guessing, just a simple game-state eval function that made choices for all possible modules every turn. The first function returned a list of samples to complete (in order), a molecule to grab, a list of stored samples that could be built, and a list of stored samples that were buildable. These were rebuilt every turn. The only “search”-like piece was a permutation expansion of the samples to determine a good build/acquire order.
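Since a bot carries at most three samples, that expansion is cheap. A rough sketch of the idea (my own simplification, with invented names; the real code may score orderings differently):

```python
from itertools import permutations

STORAGE_CAP = 10  # molecules a bot can carry at once

def pick_build_order(carried_samples, expertise):
    """Try every ordering of the carried samples (at most 3, so this is cheap)
    and return the first ordering where each sample's remaining molecule cost
    fits under the storage cap, counting expertise earned along the way."""
    for order in permutations(carried_samples):
        exp = dict(expertise)
        feasible = True
        for s in order:
            # Molecules still needed for this sample given current expertise.
            need = sum(max(0, c - exp.get(m, 0)) for m, c in s.cost.items())
            if need > STORAGE_CAP:
                feasible = False
                break
            # Completing a sample grants +1 expertise in its gain type.
            exp[s.gain] = exp.get(s.gain, 0) + 1
        if feasible:
            return list(order)
    return list(carried_samples)  # nothing fits cleanly; keep the input order
```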
The second function took those “flags” and the bot’s location to generate the output command. While not coded as an FSM, it functions as one. The basic flow seems to match most other bots’ styles. It has very little optimization for score differential and time remaining.
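In practice that second function can be little more than a priority chain of “if the flag says X and I’m not at the right module, GOTO it; otherwise CONNECT”. A sketch of that dispatch (the command strings follow the game protocol; the flag and attribute names are invented for the example):

```python
def decide(me, flags):
    """Turn the per-turn flags plus the bot's location into one output line.
    Not a formal FSM, but the flags and current module determine the move."""
    if flags.samples_to_complete:                 # a carried sample is ready to turn in
        if me.target != "LABORATORY":
            return "GOTO LABORATORY"
        return "CONNECT {}".format(flags.samples_to_complete[0].sample_id)
    if flags.molecule_to_grab:                    # still gathering molecules
        if me.target != "MOLECULES":
            return "GOTO MOLECULES"
        return "CONNECT {}".format(flags.molecule_to_grab)
    if flags.rank_to_request:                     # need fresh samples
        if me.target != "SAMPLES":
            return "GOTO SAMPLES"
        return "CONNECT {}".format(flags.rank_to_request)
    if me.target != "DIAGNOSIS":                  # default: diagnose what we carry
        return "GOTO DIAGNOSIS"
    return "WAIT"
```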
There are only two places where I consider the other bot. When picking molecules, the system will attempt to stall the opponent’s samples when doing so doesn’t get too much in the way of mine. This is not great and other bots did much better, but it was enough to beat the boss in gold and hold my own. This is where I would spend more time improving. The other place is when choosing to complete samples: I will wait if it appears that the other bot is waiting for my completion. This is also the only data I carried across turns.
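For the molecule-denial part, even a crude heuristic goes a long way: among molecule types my own plan doesn’t need, grab one the opponent is tight on. A rough sketch of that kind of rule, again with invented names rather than the bot’s actual logic:

```python
def pick_stall_molecule(my_needs, opp_needs, available, my_free_slots):
    """Return a molecule type worth grabbing purely to slow the opponent,
    or None. my_needs/opp_needs map molecule type -> count still required;
    available maps type -> count left on the table."""
    if my_free_slots <= 0:
        return None
    best, best_score = None, 0
    for mol, opp_count in opp_needs.items():
        if my_needs.get(mol, 0) > 0:
            continue                          # never burn a type my own plan needs
        stock = available.get(mol, 0)
        if stock == 0 or opp_count < stock:
            continue                          # taking one wouldn't actually deny them
        score = opp_count - stock + 1         # how tight the denial would be
        if score > best_score:
            best, best_score = mol, score
    return best
```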
An example of not overthinking to reduce complexity is my bot’s algo for choosing which sample ranks to pull. It is a single static list (shown at the end of this post), tweaked periodically to address opponents’ choices. I’m sure it could be replaced with something more elegant, but choose your battles.
Things I’d spend more time on: molecule-blocking strategies, endgame optimizations, better waiting strategies, and a better sample-picking strategy (I’d love to try RoboStac’s “only take two to start” strat).
Interesting quirks I noticed: ties are strange, and I’m not sure how the ranking evaluation handles them. There was a period on Saturday where I was 2nd but would lose to many other bots, yet would tie Agade about 10% of the time (by Sunday that had dropped a lot). We would get stuck watching each other at two stations and then each complete three rank-1, health-1 samples and tie 3-3 or 5-5. It may not be anything, but I thought it was interesting.
This is the first contest where I actually had the “tools” set up and ready to use (CodinGame Sync and CG Spunk). After the last few contests, where I found that not getting large batches of runs in was hampering advancement, I spent time getting used to those two tools and figuring out how to use them. This seems to have helped as well.
With regard to bot hiding and bot “decay”, while some bots showed up really late from nowhere, it seems it was not as bad as in previous contests. I’d be interested in hearing what others thought.
The static list of sample ranks pulled (silly, but …):
1, 1, 1,
1, 1, 1,
1, 2, 2,
2, 2, 2,
2, 2, 3,
2, 3, 3,
2, 3, 3,
3, 3, 3,
3, 3, 3,
3, 3, 3
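For completeness, here is a sketch of how a static schedule like this might be consumed: index it by the number of samples already requested this game and clamp at the end. The list is the one above; the surrounding names are hypothetical.

```python
# The static rank schedule from the write-up, flattened into one list.
RANK_SCHEDULE = [
    1, 1, 1,  1, 1, 1,  1, 2, 2,  2, 2, 2,  2, 2, 3,
    2, 3, 3,  2, 3, 3,  3, 3, 3,  3, 3, 3,  3, 3, 3,
]

def next_rank(samples_requested_so_far):
    """Rank to request at the sample machine, given how many samples this bot
    has already pulled this game. Clamps to rank 3 once the list runs out."""
    if samples_requested_so_far < len(RANK_SCHEDULE):
        return RANK_SCHEDULE[samples_requested_so_far]
    return 3
```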