Code4Life - Feedback & Strategy

Hello there, 205th in Gold. Thanks for the game and the community, I really enjoyed it.

I wrote a greedy algorithm based on a state machine, with a focus on the MOLECULES state, where molecule picking happens.

Each turn I evaluate every molecule type by its contribution to both my samples and the opponent's. A molecule contributes to a sample if taking it makes progress on that sample.

Then I sum each molecule's contributions across all samples, both mine and the opponent's. I adjust a weight on contributions to the opponent's samples to control my aggressiveness; my bot becomes more aggressive as I gain expertise.
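That per-molecule scoring could be sketched roughly as follows; the structure, names, and the aggression weight are illustrative assumptions, not the author's actual code:

```python
def molecule_score(molecule, my_samples, opp_samples, aggression):
    """Hypothetical sketch of the greedy scoring described above: a molecule
    contributes to a sample if that sample still needs it, and contributions
    to the opponent's samples are weighted by an aggression factor."""
    def contributes(sample):
        # remaining_cost is assumed to map molecule type -> units still needed
        return sample.remaining_cost.get(molecule, 0) > 0
    mine = sum(1 for s in my_samples if contributes(s))
    theirs = sum(1 for s in opp_samples if contributes(s))
    return mine + aggression * theirs
```

With the aggression factor scaled up as expertise grows, the bot naturally shifts toward denying molecules the opponent needs.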

I wanted to try some combinatorial-optimization algorithms and minimax, but I ran out of time. :frowning: Hopefully the game will be released as a multiplayer for practice.

Congratulations to all legends bravo!

2 Likes

I liked this contest. Wish I had more time to explore some of the ideas I had. To give a window into my strategy, I will list all of my unit tests for my bot:

Should move to SAMPLES if no samples carried
Should move to SAMPLES if no actionable samples carried
Should get rank 1 if at SAMPLES and carrying fewer than 3
Should get rank 2 if at SAMPLES and total expertise > 4
Should get rank 3 if at SAMPLES and total expertise > 11

Should move to DIAGNOSIS for cloud sample if arms otherwise full
Should not move to DIAGNOSIS for cloud sample unless arms otherwise full
Should move to DIAGNOSIS with unknown samples
Should diagnose first sample when at DIAGNOSIS
Should diagnose subsequent samples when at DIAGNOSIS

Should correctly calculate needed molecules plan for single sample
Should correctly calculate needed molecules plan for multiple samples
Should correctly combine costs with expertise in plan
Should correctly combine costs with carried molecules in plan
Should correctly combine predicted costs with expected future expertise in plan
Should correctly calculate multiple plans for multiple samples with various combinations

Should go to DIAGNOSIS to get more samples if thwarted
Should pull attractive samples from the cloud if available
Should take expertise into account when calculating attractive samples
Should not try to pull samples carried by opponent from the cloud
Should prefer sample with lower cost
Should prefer sample with more health
Should prefer sample that creates a good plan when combined with currently carried samples

When pulling from cloud, should count on opponent's molecules if opponent at LABORATORY
When pulling from cloud, should count on opponent's molecules if opponent at MOLECULES
When pulling from cloud, should not count on opponent's molecules if opponent at SAMPLES
When pulling from cloud, should not count on opponent's molecules if opponent at DIAGNOSIS
When pulling from cloud, should not count on molecules opponent doesn't have

Should go to DIAGNOSIS to get rid of bad samples if carrying 3 impossible samples

Should move to MOLECULES if not already there when samples diagnosed
Should get first needed molecule
Should get next needed molecule
Should get next needed molecule when limited supply
Should get what's needed for second sample
Should abandon impossible sample

Should prioritize taking molecules needed by both me and opponent
Should pay attention to molecules opponent has when prioritizing molecules
Should pay attention to expertise opponent has when prioritizing molecules
Should not take more than total of 10 molecules when blocking opponent
Don't try to block when only one molecule left of a given type
Block first, take what you need later
Should prioritize taking scarce molecules first

Assume opponent will relinquish all molecules for completed sample when at lab
Should take opponent's expertise into account when predicting molecules relinquished
Should handle multiple opponent completions when predicting molecules relinquished
Should WAIT for needed molecules from opponent at LABORATORY

Should move to LABORATORY when no more needed molecules available
Should account for expertise when deciding to move to LABORATORY
Should put the completed sample in the LABORATORY
Should put prerequisite sample into lab first

Should move to DIAGNOSIS for attractive cloud sample
Should move to DIAGNOSIS for multiple attractive cloud samples
Should move to DIAGNOSIS from LABORATORY for multiple attractive cloud samples
Should not move to DIAGNOSIS for unattractive cloud samples

Should not print CONNECT 0 from https://www.codingame.com/replay/220695869
Should put correct sample from https://www.codingame.com/replay/220707762
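For flavor, here is a hypothetical sketch of how the first of these tests might look; GameState and decide are minimal stand-ins of my own, not the poster's actual code:

```python
import unittest
from dataclasses import dataclass, field

# Minimal stand-ins, only to illustrate the shape of a rule and its test.
@dataclass
class GameState:
    my_location: str = "START_POS"
    my_samples: list = field(default_factory=list)

def decide(state):
    # "Should move to SAMPLES if no samples carried"
    if not state.my_samples:
        return "GOTO SAMPLES"
    return "WAIT"

class MoveToSamplesTest(unittest.TestCase):
    def test_moves_to_samples_when_no_samples_carried(self):
        self.assertEqual(decide(GameState()), "GOTO SAMPLES")

    def test_waits_when_carrying_a_sample(self):
        self.assertEqual(decide(GameState(my_samples=["s1"])), "WAIT")
```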
7 Likes

47th (legend), 2nd scala

This was my favourite contest (again :smiley:), I really liked the similarity with board games. The change of rules in Silver was a great addition!

The first weekend I rushed to implement basic structures and get to Bronze for the full rules.
My code kept the same basic structure for the whole contest: depending on high-level features of the game, I decide to move to a module or act at one. My “state” object exposed several high-level methods, like completed samples, the cost to complete a sample with my upcoming expertise, molecules missing by me / the opponent…
I tried to write some unit tests for these methods but got lazy towards the end.

On Monday I implemented a basic offline arena to pit my code against its former versions. Thanks to the amazing work of @TrueLaurel (Scala project on GitHub) it was much easier. We will collaborate to improve this project and provide a good quickstart for Scala enthusiasts!
This arena helped me tune my parameters to select the right rank. When the new Silver rules were released I could quickly adapt, and I peaked for a short time at 10th place.

During the week I had little time (family, work), so I made minor tweaks, like taking molecules both of us can use in priority, moving to a module only when I can act at least twice there, using the cloud when appropriate… I stayed around 100th place overall.

The last weekend I improved my endgame (the last ~10 turns) so I don’t waste any turns producing before the game ends, and fixed several bugs (spotted with CG Spunk’s new module-bouncing feature, cool!), which got me to Legend.
My last commit keeps track of all undiscovered samples and checks how likely I am to complete them with my current molecules / expertise.

On Sunday night I tried to exploit the cheese (submitting samples without analyzing them), but the results were not so good; to make it work I had to change too much of my logic (like keeping molecules to reach 4 with my expertise) and it introduced too many behaviour bugs. Nevertheless it was a really fun try!

Btw, I was head to head with @Eeval the last weekend for 1st place in Scala. @Eeval, it seems we both live in or very near Lille; we should exchange tips like we did with TrueLaurel!

New stuff I used this time:

  • TrueLaurel’s Bundler (link above) to aggregate all my Scala sources into 1 file. It really helped me organize my code better and share reusable code for future contests. My submitted source is now ~1500 lines, which is huge for Scala.
  • his abstract structure to build a player, an arena… saved some time. I’ll try to improve the framework for next time.
  • keep a TODO list of ideas to try
  • splitting my code into:
    • a game-state class, which I can test and which changes less often
    • the player logic, which changes more often and which I test from the offline Arena / CG Spunk

What I want to improve:

  • apply TDD more strictly and have better unit tests
  • make smaller steps before testing in CG Spunk so it’s easier to improve continuously. 10 min coding should be a maximum before a test
  • build a reliable offline arena (with tests! mine had several leaks) upfront so I can iterate on tuning quicker.
  • try a std AI algorithm like MCTS / alphabeta / genetics
5 Likes

84th rank / Javascript

This was the first contest where I could dedicate a good amount of time, and I made it to the Legend league.

My bot was almost purely rule-based, with rules depending on the station it is currently at.

What got me over the line to Legend was the algorithm to block my opponent whenever fewer than 4 resources of a type were needed to deny my opponent’s most valuable sample.
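Under my reading of that rule, the check might look something like this (a hypothetical Python sketch, not the poster's JavaScript):

```python
def molecules_to_block(available, opp_missing):
    """For each type, how many units I must grab so that strictly fewer
    remain than the opponent still needs for their most valuable sample.
    available / opp_missing map molecule type -> count."""
    return {m: available[m] - opp_missing[m] + 1
            for m in opp_missing
            if opp_missing[m] > 0 and available.get(m, 0) >= opp_missing[m]}

def pick_block(available, opp_missing):
    """Return the cheapest blocking type if it costs fewer than 4 pickups,
    else None (mirroring the 'less than 4 resources' threshold above)."""
    options = {m: k for m, k in molecules_to_block(available, opp_missing).items()
               if k < 4}
    return min(options, key=options.get) if options else None
```

For example, with 5 of type A and 2 of type B available and the opponent still missing A3 B2, denying B costs a single pickup while denying A costs three, so B is chosen.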

What would have improved the code further:

  • checking how many turns are left and changing the strategy in the endgame
  • potentially blocking a scarce resource (e.g. my opponent has no expertise in type C molecules, so I could block that resource type completely, or take only 3 or 4 out of 5 once my opponent works on samples needing type C). Again, it was difficult to know whether blocking your opponent more is helpful or costs you too much by losing storage capacity.
  • having my own arena would have been great; there were a few parameters I was playing around with and I couldn’t figure out whether they improved the bot or not, and trying to infer this from the rank is impossible, as it takes too long to stabilize.

I didn’t write any tests and developed directly in the browser, which to my surprise worked out well. However, in other competitions it was much harder to see why my bot behaved a certain way, and troubleshooting on CodinGame without a testing strategy is very difficult. Next time I will try to use my own IDE and set my machine up properly.

I’d love to figure out a way to take the Referee.java that was provided on GitHub, build an arena around it, and somehow run my JavaScript bots against it. If anyone knows how to do that, let me know.

1 Like

You can use:


With

It allows you to run a lot of games against your own bots. But be aware: local != arena, so you can do better in local testing and still not improve your arena ranking much.

2 Likes

About the Challenge

I would like to thank Roche for this challenge, and I hope that they get good candidates from the player pool. I also hope that more and more companies follow this path for recruiting.
About the challenge itself, it has some good and bad things:
Pros:

  • Interesting game, simple rules but very complex to master.
  • This time the Wood Bosses weren’t that hard, so players should be less frustrated. Players were well distributed between leagues, with the majority in Silver.
  • CG listened to players and made nice rule changes. This is a good thing, and IMO the new rules added more strategy.
  • The public Referee is a plus, and the game was almost bug-free from the start.

Cons:

  • Sorry to repeat this, but CG points earned in challenges shouldn’t be based on player count. The 1st player in Code4Life earns fewer points than the 189th player in Coders of the Caribbean. It doesn’t reflect relative skill at all; it just rewards placing well in heavily played games (CotC, GitC and CvZ).
  • Many GUI and art problems: tracking samples and projects in the GUI was nearly impossible. Also the bots looked identical, and with the samples hiding their colors I was lost about who was who most of the time.
  • Ties are scored badly on a resubmit. If I tie all the initial matches, the system places me last. How!? Why!? That’s wrong; my bot tied a lot and I struggled to climb through the ranks.
  • No T-Shirt :disappointed:

My main goal in the challenge was just to get 1st in some language, for the achievements. I first tried Free Pascal, but CG’s version is totally outdated and terrible to code with, so I chose Dart in the end.

About my strategy

My strategy is based on a joke about “viable” strategies: https://forum.codingame.com/t/code4life-bugs/2804/86 In theory, going all-in with undiagnosed samples should be a “strategy” only if you are losing, but I managed to finish Legend 42nd with it as my main strat. My strategy is as follows:

  1. Start normally, with all rank 1 samples. Trying to not be blocked
  2. At some point, you have enough expertise to be able to complete any undiagnosed rank1 sample with expertise + molecules
  3. Some time later, you reach the Sweet Spot, where you complete them with expertise alone. At this point you get 3 rank-1 samples completed every 12 turns (1 sample every 4 turns).
  4. Once the 3 Science Projects are discovered, return to a “normal” strategy.

My bot was lacking a lot of things, like taking samples from the cloud, and had terrible endgame management. Even with that terrible lack of features, I managed to hold on well enough to stay in the top 50.
This is an example of my strategy working: Marchete vs Monstruo Carnal! Replay, starting from frame 240 :).
It’s viable but far from ideal, and very RNG-dependent. But it was fun to do; nobody expected the Spanish Inquisition! I like alternative strategies, and in this case it was somewhat viable :slight_smile: I’m not sure if many other players managed to get it working.

16 Likes

local != arena because I am competing against myself, right? So all the parameters are the same as in the arena, but if I get better against myself it doesn’t necessarily mean I’m improving against a different player with a different strategy.

Hi everybody,

A really great challenge. I began on Monday morning and passed into the Silver league when it opened, with these heuristics:

  • Each sample has .done (0: complete, <0: molecules missing), .possible (0: yes, <0: not enough molecules available), .possibleNow (0: yes, <0: not enough available or not enough space in my bag)
  • Each player has .nbDone, .nbPossible, .nbPossibleNow
  • EVERYWHERE: choose which module to move to, or act here
  • SAMPLES: take rank 1, 2 or 3, with a cost based on expertise
  • DIAGNOSIS: just analyse
  • MOLECULES: evaluate each molecule by the number needed by me and by the opponent, and take a few more molecules than needed
  • LABO: submit the sample with the best health

To enter the Gold league:

  • Evaluate samples in the cloud and samples in hand; drop the worst in hand for the best in the cloud. I had no time to code the full combo evaluation.

Top Gold :

  • Add combo samples to the list, built from all diagnosed samples in every possible order. A combo gets the sum of healths and the sum of each molecule type. Its sample id is the first sample’s id and the expertise gained is the last sample’s expertise gain. The first sample’s expertise gain is subtracted from that molecule type, so the combo reflects exactly the molecules needed to submit the samples in the right order. Note that a 3-sample combo is a combination of a 2-sample combo with a sample. With this trick, the “samples + combos” list is treated like a plain sample list in all the algorithms.
    eg : combo AABBE (id:0 health:1 gain:C) + CCCC (id:1 health:10 gain:D) = AABBECCC (id:0 health:11 gain:D)
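Under that representation, the merge step might be sketched like this (a hypothetical reconstruction; the (id, health, gain, costs) tuple layout is my own):

```python
def merge_combo(first, second):
    """Merge two (id, health, gain, costs) entries into a combo as described:
    the id comes from the first entry, the expertise gain from the last,
    healths and molecule costs add up, and the first entry's expertise gain
    pays for one molecule of that type in the remainder."""
    sid, h1, g1, c1 = first
    _, h2, g2, c2 = second
    costs = dict(c1)
    for m, n in c2.items():
        costs[m] = costs.get(m, 0) + n
    if costs.get(g1, 0) > 0:
        costs[g1] -= 1  # expertise earned by the first sample saves one molecule
    return (sid, h1 + h2, g2, costs)
```

On the example above, merging (id 0, health 1, gain C, cost AABBE) with (id 1, health 10, gain D, cost CCCC) gives (id 0, health 11, gain D, cost AABBECCC): the C earned by the first sample pays for one of the four Cs.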

I was 40th in Gold just before the Legend league opened, but only the top 20 were promoted. Then it was very difficult to climb back to the top of Gold. What I tried next:

  • Changing magic numbers to select ranks 1, 2, 3
  • Adapting my rank choice to the enemy’s rank
  • Moving to the LABO in the last rounds if needed
  • Changing aggressiveness (blocking enemy samples) depending on whether I’m winning or losing
  • Allowing free moves between modules instead of hard-coding the movement sequence
  • Adding a sample “blockable” state
  • I coded the rank 1 bug (LABO without diagnosis), just adding another sample state and changing the number of molecule types to (4 4 4 4 4). But I deactivated it because it was too hard to balance with the rest of the game.

My last code in Gold was between top 5 and top 100 on each submit. Too bad: I was top 3 in Gold on the last Monday morning, 0.7 points under the Boss, but I tried to resubmit. I know I shouldn’t have done that!!! Final rank: 189th (but 1st in my language :slight_smile:)

Very fun time, as always. Thanks Codingame

2 Likes

22nd rank / Go - Congrats to the winners, and especially Agade. Thanks again for keeping your “best” bot in to enable others to get better.

As usual, family and work life prevented me having the amount of time I’d like to put into the contest, but I was able to put in a little time throughout the contest which was a good change.

I liked this contest because it didn’t require large sets of knowledge around AI techniques that I don’t have at my fingertips (or time to make them so) to do reasonably well. Thoughtful problem analysis generated reasonable returns on time investment.

From previous contests, I found that I usually end up with a game-state “class” and component “classes”. I create input-processing and state-init methods that wrap the common, unchanging boilerplate code. I chose to keep the code simple and isolated to allow for quicker iterations. I also decided to try not to overthink things.

The bot has two layered functions. I didn’t do anything with forward-looking searches or probability-based sample guessing, just a simple game-state eval function that made choices for all possible modules all the time. The first function returned a list of samples to complete (in order), a molecule to grab, a list of stored samples that could eventually be built, and a list of stored samples that were buildable right now. These were rebuilt every turn. The only “search”-like piece was a permutation expansion of the samples to determine a good build/acquire order.

The second function took those “flags” and the bot’s location to generate the output command. While not coded as an FSM, it functions as one. The basic flow seems to match most of the bots styles. It has very little optimization for score differential and time remaining.

There are only two places where I consider the other bot. When picking molecules, the system will attempt to stall opponent samples that don’t get too much in the way of mine. This is not great, and other bots did much better, but it was enough to beat the boss in Gold and hold my own; this is where I would spend more time improving. The other place is when choosing to complete samples: I will wait if it appears that the other bot is waiting for my completion. This is also the only data I carried across turns.

An example of not overthinking to reduce complexity is my bot’s algorithm for choosing which sample ranks to pull. It is a single static list, tweaked periodically to address opponents’ choices. I’m sure it could be replaced with something more elegant, but choose your battles.

Things I’d spend more time on: Molecule blocking strategies, end game optimizations, better waiting strategies, better sample picking strategy (I’d love to try RoboStac’s only take two to start strat).

Interesting quirks I noticed: ties are strange, and I’m not sure how the ranking eval handles them. There was a period on Saturday where I was 2nd but would lose to many other bots, yet would tie Agade about 10% of the time (by Sunday that had dropped a lot). We would get stuck watching each other at two stations and then each complete 3 rank-1, health-1 samples and tie 3-3 or 5-5. It may not be anything, but I thought it was interesting.

This is the first contest where I actually had the “tools” set up and ready to use (CodinGame Sync and CG Spunk). After the last few contests, where I found that not getting large test runs in was hampering advancement, I spent time getting used to those two tools and figuring out how to use them. This seems to have helped as well.

With regard to bot hiding and bot “decay”: while some bots showed up really late from nowhere, it seems it was not as bad as in previous contests. I’d be interested in hearing what others thought.

List of samples (silly but …):
1, 1, 1,
1, 1, 1,
1, 2, 2,
2, 2, 2,
2, 2, 3,
2, 3, 3,
2, 3, 3,
3, 3, 3,
3, 3, 3,
3, 3, 3

4 Likes

Many of us agree with that, and we asked long ago for the formula to be changed. But CodinGame always responded “we will discuss it”; we never got any other response.

There are indeed problems with local arenas. For example, I have brutaltester, but I also have a local ELO arena that evolves my own code: I just tag some variables, and the arena runs a genetic algorithm to evolve them towards the best values. Using the ELO arena was a colossal failure in this contest, because my AI has some weaknesses, and the ELO arena simply exploited them. The result of the evolution destroyed my current AI (with an 80% winrate!), but it couldn’t do better than around 50th in the Legend league.

So you can use local arenas, just be careful and understand what you are doing :stuck_out_tongue:

3 Likes

#93rd

I am grateful to codingame.com for introducing a very interesting game. I really enjoyed it and dedicated a week to it. I basically implemented FSM-based heuristics.
The most difficult and important parts, I thought, were which sample rank to get and which molecule type to pick up, for myself or to block the opponent.

Sample Rank
I estimated the expected average cost based on my expertise and a magic number. My intuition and some experiments suggested that about 4 to 4.5 average cost per sample suited me and my heuristics. I checked the sample pools and removed samples already drawn, then calculated the average sample cost of each rank and found the best fit for my budget. I also tried checking against my current storage to find a more suitable cost, but it didn’t work well, since it would often return a lower rank than I expected.

Molecule Type
First, I just wrote simple logic to collect first the molecules both robots require. It seemed to work fine up to a certain level, but I couldn’t get further with it, and I didn’t have enough time to switch to an algorithm such as minimax.
I finally came up with another approach that calculates the very last safe turn by which I must start collecting molecules before the opponent can interfere, and vice versa. It depends heavily on where the opponent is. The basic formula is as follows.
very safe turn = molecule available - sample cost.
ex ) available : A4B4, cost A3B2 : distance = 4-3 + 4-2 = 3
This means I can collect all of my required molecules if I have 3 more turns to the opponent.
What if the opponent is already in the molecule module? The available resources should be divided by 2 in that case.
very safe turn = molecule available / 2 - sample cost.
ex) available : A4B4, cost A3B2 : distance = 4/2-3 + 4/2-2 = -1, impossible to collect them safely.
It is also possible to calculate this even if the opponent is only on his way to the molecules (the remainder of the division by 2 is rounded up due to the simultaneous-turn rules):
very safe turn = eta + (molecule available - eta) / 2 - sample cost
ex1) available : A4B4, cost A3B2, eta : 1 : distance = 1 + (4-1)/2-3 + 4/2-2 = 0. it’s still collectable if you choose A first.
But be careful to collect the right one while the opponent is coming.
ex2) available : A4B4, cost A3B2, eta : 2 : distance = 2 + (4-1)/2-3 + (4-1)/2-2 = 1.
ex3) available : A4B4, cost A3B2, eta : 2 : distance = 2 + (4-2)/2-3 + 4/2-2 = 0, be careful not to take A twice that led you lose one turn.

Finally, you can calculate all of your and the opponent’s safe turns and sort them; the lowest one (greater than or equal to 0) is the most urgent, whether it is yours or not.
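The two simplest cases (opponent far away vs. already at the molecule module) could be sketched as below; the ETA case with its rounding is omitted, and all names are mine:

```python
def safe_margin(available, cost, opponent_at_molecules):
    """Turns of slack before the opponent can starve this sample.
    available / cost map molecule type -> count; when the opponent is at the
    module, each type's supply is effectively halved by alternating picks."""
    margin = 0
    for m, c in cost.items():
        supply = available.get(m, 0)
        share = supply // 2 if opponent_at_molecules else supply
        margin += share - c
    return margin
```

Reproducing the examples above: with A4 B4 available and cost A3 B2, the margin is 3 when the opponent is elsewhere and -1 (impossible to collect safely) when he is already at the module.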

Unfortunately, I submitted this logic at the very last minute, so I didn’t have time to review it; I couldn’t even verify whether it works correctly. I was very disappointed not to have any chance to confirm that my theory was correct or worthwhile. Please tell me the result if someone else tried this approach.
Anyway, an awesome contest and I really enjoyed it. I hope to see C4L again as a multiplayer game in the future.
Thanks all.

4 Likes

12th
I implemented a random search (Monte Carlo).
When starting the turn at a machine, I might use it. Then I go to another machine to use it and to a third one to use that too.

Samples: give score for having samples. Choosing sample ranks is hardcoded. I randomize how many samples I take, as I might want to get samples from the cloud as well.

Diagnosis: first diagnose undiagnosed samples. Then randomize, how many samples to move to the cloud and how many to take. Returning a sample results in a negative score, as it costs time to do so and I prefer to have a sample that I might fill later when the opponent frees molecules.

Molecules: again, random. Take any amount and type of molecules. As it is hard to fill a sample by picking uniformly at random (e.g. ending up with 5A, which is very unlikely to be useful), the probability of taking a molecule is not uniformly distributed, but depends on my own samples, samples in the cloud and those carried by the opponent.
Score is awarded for taking a molecule, when only two are left and the enemy needs both.
The available molecules are time-dependent, so that I can simulate the opponent freeing them.

Laboratory: there are only up to 3! = 6 possible orders in which to turn in the samples (assuming you don’t walk away with a sample you could complete).
For each of these orders I try to complete the samples, add expertise where possible and return the molecules to the pool. So this is where I decide whether my random move at MOLECULES was a good idea.
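The turn-in-order evaluation could be sketched roughly like this (a simplified stand-alone version with my own representation; unlike the real bot, it does not feed the returned molecules back into the simulated supply):

```python
from itertools import permutations

def best_turn_in(samples, expertise, storage):
    """Try every turn-in order of the carried samples. A sample is completable
    when expertise + stored molecules cover its cost; completing it spends the
    molecules and grants one expertise of its gain type, which can make a later
    sample in the same order cheaper. Returns the best total health found."""
    best = 0
    for order in permutations(samples):
        exp, store, health = dict(expertise), dict(storage), 0
        for h, gain, cost in order:  # sample = (health, gain_type, cost_dict)
            pay = {m: max(0, c - exp.get(m, 0)) for m, c in cost.items()}
            if all(store.get(m, 0) >= n for m, n in pay.items()):
                for m, n in pay.items():
                    store[m] = store.get(m, 0) - n
                exp[gain] = exp.get(gain, 0) + 1
                health += h
        best = max(best, health)
    return best
```

Order matters: completing a cheap sample first can grant the expertise that makes an expensive sample affordable.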

I also check if I blocked the opponent with my chain of molecules:

// For each of the opponent's diagnosed samples, estimate whether my molecule
// picks delay him long enough to count as a block.
foreach (Sample s in enemy.Samples) {
	if (s.Health == -1) // undiagnosed: cost unknown, skip
		continue;
	// Collect the latest feasible pickup times for every unit the
	// opponent still needs, across all molecule types.
	List<int> times = new List<int> ();
	for (int m = 0; m < MOLECULES_COUNT; m++) {
		int cost = s.Cost [m] - enemy.Expertise [m] - enemy.Storage [m];
		if (cost <= 0)
			continue;
		// Latest start time such that enough units of this type are still
		// available 'cost' turns later (Available is simulated per turn).
		int startTime = -1;
		while (startTime + 1 < maxWaitTime && Machine.MOLECULES.Available [m, Math.Min (maxWaitTime - 1, startTime + cost)] >= cost)
			startTime++;
		for (int i = 0; i < cost; i++) {
			times.Add (startTime + i);
		}
	}
	times.Sort ();
	// If any required pickup must happen before the opponent can even
	// arrive (his ETA) and soon (< 4 turns), count the sample as blocked.
	for (int i = 0; i < times.Count; i++) {
		if (times [i] < i + enemy.ETA && times[i] < 4) {
			score += weightBlockChain * (s.Health + weightGainExpertise);
			break;
		}
	}
}

Some score is also given for having molecules left over (but not too many; I need free space to fill my own samples), preferably of types where the opponent has little expertise.

As I didn’t have time to fiddle with the parameters myself, I let brutaltester try them out for me, randomizing my parameters and comparing the results in matches against myself (fully automated).
Keeping all options open when randomizing my actions, and letting the scoring function decide, helped my bot perform actions like collecting molecules first and then going to diagnosis to fetch a sample matching my storage, or using newly gained expertise when turning in more than one sample, without my implementing any of it explicitly.

On the last day I added some basic waiting at the laboratory to block, but never found a replay where it happened before the end of the contest. But it did work, as I now know (frame 200).

I also have code to submit undiagnosed samples when I am about to lose anyway (hoping to be lucky), but didn’t get it working in time.

12 Likes

I’ll keep it short.

I had a fun approach. Five days before the start, I started fresh with a programming language I knew nothing about: Ruby. I had heard good things about it, and that it was “Python”-like, a language I am more than familiar with. And I used CodinGame as a crash course.

I spent those five evenings reading the first chapter of the book “Seven Languages in Seven Weeks” and then just joined the competition.

It was truly a crash course: thanks to the competition I had to look up a lot of syntax, and I would use and reuse the constructions I learned. Such good practice! I ended 10th in Silver; happy, given the constraints :smiley:

6 Likes

Simple and fun game! Overall a great contest, with rule changes that improved the gameplay.

Solution (C#)

I started off by trying simulation, without any luck, because I couldn’t figure out how to model unknown samples. I switched to heuristics on the last Sunday and had more luck, ending 69th in Legend. My solution is based on 2 important functions:

  • int[] FindBestCost(Robot r, int[] available)

    • Finds the cost of each molecule for the best subset (most health) of samples held by this robot, given the available molecules. Returning null indicates no solution was found.
  • List<Sample> FindBestSamples(Robot r, int maxRank)

    • Takes all the player’s samples, all cloud samples, and 3 of each rank but only 1 of rank 3 (from the samples module, with the health of each rank being 0, 9, 29). Removes all above maxRank, orders descending by health and returns the first 3 that have any solution according to FindBestCost(..). This could result in just 1 possible sample, but lack of time stopped me here.
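As a rough illustration of the FindBestCost idea (in Python rather than the author's C#, with a made-up sample representation):

```python
from itertools import combinations

def find_best_cost(samples, available):
    """Return the summed per-molecule cost of the highest-health subset of
    carried samples that fits the available molecules, or None if no
    non-empty subset fits. samples: list of (health, cost_dict);
    available: molecule type -> count."""
    best, best_health = None, -1
    for r in range(1, len(samples) + 1):
        for subset in combinations(samples, r):
            need = {}
            for _, cost in subset:
                for m, n in cost.items():
                    need[m] = need.get(m, 0) + n
            if all(n <= available.get(m, 0) for m, n in need.items()):
                health = sum(h for h, _ in subset)
                if health > best_health:
                    best_health, best = health, need
    return best
```

With at most 3 carried samples the brute-force subset enumeration stays tiny (7 non-empty subsets).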

Then I used these functions on the different modules.

Samples

Find the best samples, where maxRank is given by expertise thresholds 0, 9, 15. Then pick up samples from the best set which are not in the cloud.

Diagnosis

  • Diagnose everything.
  • Discard samples not in the best set.
  • Pick samples in the best set

Molecules

  • Pick items from the best cost, prioritizing blocking the enemy’s best subset (I didn’t block one sample in particular, but the best combination he had). I also blocked if the enemy needed 2 or more picks to block me.
  • Never waited for the enemy to deliver needed molecules.

Laboratory

  • Return everything, prioritizing the samples that give expertise useful for the others in hand.
  • Always finish all samples (if any were solvable at the time of turning in)

There was also some optimization, like not going to MOLECULES if all samples could already be finished, or going straight to SAMPLES if all samples in the best set were found in the cloud.

5 Likes

Oh wow, this is going to be fun to play with. Thanks for the tip!

What does ELO stand for?

I started off with a simple if-then-else bot that would follow a predetermined path, with some heuristics and calculations to figure out what it could make and which molecules to choose. Mostly this was because I really had no idea what the evaluation function for a minimax ought to look like for this game. While I was doing this I also wrote the simulation code for when I moved on to my actual AI. That got me to Silver.

When Silver opened I spent a day and a half messing around with a minimax but never got it to produce useful behavior… moving from station to station was hard to generalize in an evaluation. So I went to plan B: a mini-minimax for each station, plus some linking code (or maybe another minimax) for determining when and where to move once there was nothing left to do at the current station. All I got done was the molecule-selecting logic, but that was enough to get to Gold, even though I was still debugging it when I got promoted.

I submitted my dumb bot plus the fixed molecule picker and was 27th in Gold on Friday night. I figured that was a good place to start on my push to Legend when I woke up in the morning. Much to my surprise, I found myself in 2nd the next morning. Before lunchtime, I was in Legend. Everyone who was pushing their bots between those times, I thank you :slight_smile:

The rest of the weekend was frustration. I built my minibot for the diagnosis station but it wasn’t as good as what I had. I suspect that was because it needed to be more integrated with other minibots that I hadn’t written yet, but time was too limited to go all the way with it. I maintained a place between 30-55 all day Saturday and Sunday (leading to hopes of top-50 with some luck), but apparently everyone got better overnight. Finished at #87 in Legend.

Key points of the molecule picker:

  • Brute-force sample combinations to come up with the best samples I could make. This basically tried for double and triple turn-ins whenever possible, even if they might not score as many points; expertise was the focus. It did not take into account future expertise from turning in the samples, though it should have (I didn’t think of it until late).
  • The weighting of the samples differed between me being there alone and the opponent also being at the molecule station
  • Pick molecules for the best sample on my list in order, based on how many are available. More limited molecules are picked first.
  • If I can block the opponent by taking a single molecule, do that before picking for myself.
  • On a related note, if one molecule type only has 1 remaining in the stack, take it preemptively
  • After all that is done, if I still have 3 or more spots available for molecules, see if I can block the opponent with them.
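Under my reading of those priorities, a single pick per turn might be chosen like this (all names and the representation are illustrative assumptions, not the poster's code):

```python
def pick_molecule(available, my_need, opp_missing, free_slots):
    """One pick per turn, roughly following the priority order above.
    available / my_need / opp_missing map molecule type -> count."""
    # 1) if taking a single molecule denies an opponent sample (exactly as
    #    many left as he still needs), do that before picking for myself
    for m, left in available.items():
        if 0 < opp_missing.get(m, 0) == left:
            return m
    # 2) preemptively grab a type with only one unit remaining in the stack
    for m, left in available.items():
        if left == 1:
            return m
    # 3) otherwise take what my best samples need, scarcest supply first
    wanted = [m for m, n in my_need.items() if n > 0 and available.get(m, 0) > 0]
    if wanted:
        return min(wanted, key=lambda m: available[m])
    # 4) with 3+ spare slots, hoard something the opponent still needs
    if free_slots >= 3:
        for m, n in opp_missing.items():
            if n > 0 and available.get(m, 0) > 0:
                return m
    return None
```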

Great contest… it was nothing like previous ones and it forced me to think in new directions.

6 Likes

26th Legend

First version

My first version was a mix between a state machine for the global AI and a minimax for the molecule picks; I used this until the Legend league. My minimax eval was pretty simple:

evaluate(robot) {
  score = 0;
  for (Sample sample : mySamples) {
    if (complete) {
      score += health * 1000;
    } else {
      if (enoughSpaceAndMoleculesOnBoard) {
        score += health * completion * 10;
      } else {
        score += health * completion * 1;
      }
    }
  }
  return score;
}

evaluate() {
  return evaluate(me) - evaluate(him);
}

This worked pretty well in the first few days, but my AI wouldn’t quit the molecules stand unless it couldn’t pick any more molecules, so I added the “wait” move and a “trash molecules” component in the evaluation to counter that:

score -= 2 * trashMoleculeCount;

A trash molecule is a molecule that isn’t required by any of my samples. Then, if my minimax finds that “wait” is the best move, I let my state machine decide what to do (pay if something is to be paid, or go pick some samples). I also considered the enemy position with an ETA delay; if, for example, he’s at the DIAGNOSIS stand, I will consider only WAIT moves for his first 3 turns (most pessimistic case).
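Put together, the evaluation with the trash-molecule penalty might look like the sketch below. This is an illustrative Python rendering of the pseudocode, not the actual bot: the `feasible` flag stands in for "enough space and molecules on board", and the field names are assumptions.

```python
def evaluate(robot):
    """Score one robot's carried samples, as in the post: completed samples
    dominate, feasible partials score 10x an infeasible partial, and
    molecules no carried sample needs are penalised."""
    score = 0.0
    for s in robot["samples"]:
        if s["completion"] >= 1.0:      # sample fully paid for
            score += s["health"] * 1000
        elif s["feasible"]:             # enough carry space + board molecules
            score += s["health"] * s["completion"] * 10
        else:
            score += s["health"] * s["completion"]
    score -= 2 * robot["trash"]         # trash molecules discourage hoarding
    return score

def evaluate_state(me, him):
    """Zero-sum eval used at minimax leaves."""
    return evaluate(me) - evaluate(him)
```

The `-2 * trash` term is what makes "wait" eventually beat grabbing yet another useless molecule, which is the behavior the author wanted.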

I wasn’t happy with the results of this version in the Legend league, and couldn’t figure out whether my evaluation sucked or the rest of the state-machine AI sucked. Considering only the molecules pick in the minimax has a few drawbacks:

  • I would often stay at the molecules stand to block an enemy sample even though it would be better to leave, pay, and keep farming.
  • If you’re at the MOLECULES stand and the enemy is on his way to diagnose some samples, it’s often profitable to wait a few turns for him to diagnose them, and then benefit from his ETA to try to block his samples before leaving.

I could/should have treated those cases in the rest of my AI, but it felt “hacky”. So I decided to rewrite the whole AI from scratch on Saturday, using only heuristics and a new state machine:

Rewrite

I remembered Agade’s post-mortem on GITC and started with the most greedy/dumb stuff I could find, because apparently that’s what works. It’s basically a priority-based pile of behaviors (in this order):

  • I’m at the SAMPLES stand ? GO pick some, or go to DIAGNOSIS if I’m full
  • I’m at DIAGNOSIS stand ? GO diagnose the stuff that isn’t already
  • I’m at LABORATORY stand ? GO pay stuff
  • List every possessed/cloud sample, filter those that can be completed in the current state of the game, sort them by (health/leftToPick/killCost) and look at how many of them I’m currently carrying. If I carry fewer than X of them, then I go to DIAGNOSIS/SAMPLES to pick some new ones. X = 1 if I’m at LABORATORY or MOLECULES, 2 otherwise.
  • If everything else failed, I go pick molecules
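The priority pile above can be sketched as a first-match-wins function. This is a Python sketch under stated assumptions: the state fields (`module`, `full`, `undiagnosed`, `payable`, `completable`) and the action strings are hypothetical simplifications of the real game protocol.

```python
# X from the last rule: 1 at LABORATORY/MOLECULES, 2 everywhere else
CARRY_TARGET = {"LABORATORY": 1, "MOLECULES": 1}

def decide(state):
    """Priority-based pile of behaviors; the first matching rule wins."""
    # 1. At SAMPLES: pick more, or leave for DIAGNOSIS when full
    if state["module"] == "SAMPLES":
        return "GOTO DIAGNOSIS" if state["full"] else "CONNECT <rank>"
    # 2. At DIAGNOSIS: diagnose anything not yet diagnosed
    if state["module"] == "DIAGNOSIS" and state["undiagnosed"]:
        return "CONNECT <sample>"
    # 3. At LABORATORY: turn in anything payable
    if state["module"] == "LABORATORY" and state["payable"]:
        return "CONNECT <sample>"
    # 4. Refill if carrying too few currently-completable samples
    if state["completable"] < CARRY_TARGET.get(state["module"], 2):
        return "GOTO DIAGNOSIS"  # or SAMPLES, depending on the cloud
    # 5. Otherwise, go pick molecules
    return "GOTO MOLECULES"
```

The appeal of this shape is that each rule is independently testable, which fits the unit-test-driven approach described earlier in the thread.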

The molecules pick behaves as follows:

For every enemy sample, I calculate its kill cost; i.e., for each type of molecule, I compute the amount he still needs to pick, and based on the number of molecules available on the board, I compute the number of molecules I would have to pick to kill it:

getMoleculeKillCost(sampleOwner, sample, molecule) {
    actualCost = MAX(0, sampleCost[molecule] - sampleOwnerExpertise[molecule]);
    leftToPick = actualCost - sampleOwnerStorage[molecule];
    if (leftToPick <= 0) return 99; // he already holds enough of this type
    return boardMolecules[molecule] - leftToPick + 1;
}

getSampleKillCost(sampleOwner, sample) {
  return MIN(getMoleculeKillCost(sampleOwner, sample, ...))
}

I filter the samples that I can kill for sure; i.e., the ones with a kill cost lower than the number of molecules the enemy still needs to pick (I can kill it faster than he can complete it).

  • There is a sample that can be killed, regardless of its health or my score or anything else ? GO kill it
  • Else, try to complete some of my samples
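Combining the kill-cost formula with that filter gives a runnable sketch like the one below (Python rendering of the pseudocode; dict-based samples with a per-type `cost` are an assumption, the A–E molecule types are from the game).

```python
MOLECULES = "ABCDE"

def molecule_kill_cost(cost, expertise, storage, board, m):
    """Molecules of type m I must take so the owner can't finish paying it."""
    actual = max(0, cost[m] - expertise[m])
    left = actual - storage[m]
    if left <= 0:
        return 99  # he already holds enough of this type: unblockable
    # take enough so that fewer than `left` remain on the board
    return board[m] - left + 1

def sample_kill_cost(cost, expertise, storage, board):
    """Cheapest single-type block for this sample."""
    return min(molecule_kill_cost(cost, expertise, storage, board, m)
               for m in MOLECULES)

def left_to_pick(cost, expertise, storage):
    """Molecules the owner still has to pick to complete the sample."""
    return sum(max(0, max(0, cost[m] - expertise[m]) - storage[m])
               for m in MOLECULES)

def surely_killable(samples, expertise, storage, board):
    """Samples I can deny faster than the enemy can complete them."""
    return [s for s in samples
            if sample_kill_cost(s["cost"], expertise, storage, board)
               < left_to_pick(s["cost"], expertise, storage)]
```

Note the `<= 0` guard: if the owner already stores enough of a type (or his expertise covers it), that type can no longer be blocked at all.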

I order my samples by completion order: if sample1 needs more than 0 molecules of type A and sample2 gives expertise on A, then I choose to complete sample2 first. I completely ignore sample health in this sorting.
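That expertise-aware ordering can be sketched as a greedy pass (Python sketch; the `gain` field, holding the expertise type a sample grants, is a hypothetical representation).

```python
def order_for_completion(samples):
    """Greedy ordering: repeatedly pick a sample whose expertise gain is
    still needed by another carried sample, so its discount applies to the
    later ones. Health is deliberately ignored, as in the post."""
    ordered, remaining = [], list(samples)
    while remaining:
        pick = next((s for s in remaining
                     if any(o is not s and o["cost"].get(s["gain"], 0) > 0
                            for o in remaining)),
                    remaining[0])  # no helpful gain left: arbitrary order
        remaining.remove(pick)
        ordered.append(pick)
    return ordered
```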

Once a sample is chosen, I pick the needed molecule with the lowest kill score, because it reduces the risk of it being killed by the enemy.
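The pick itself then reduces to a min over the needed types by kill cost (sketch; `kill_cost` is assumed to be any callable giving the opponent's kill cost for a type, e.g. built from the formula above applied to my own sample).

```python
def pick_molecule(needed_types, kill_cost):
    """Among the types the chosen sample still needs, take the one with the
    lowest kill cost first: it's the one the opponent can deny cheapest."""
    return min(needed_types, key=kill_cost)
```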

One of the latest additions was to look for killable samples first, at the very beginning of my state-machine algorithm. If I can kill one enemy sample, I will go for it regardless of what I’m doing or where I am.

Issues

  • Because I always chose to kill an enemy sample before farming my own, I would sometimes kill an enemy sample and block myself, with no possible completion of my own samples because of all the extra trash molecules I had just picked… I tried to add some conditions to discard some possible enemy kills, but it failed.
  • Because I chose to attack whenever I could, I would sometimes lose a lot of time travelling the map just to kill a rank 1 sample. Again, I tried to add some conditions to restrict this aggressive behavior, but it worked poorly.
  • My AI lacks tons of features that I didn’t have enough time to implement correctly but that were mandatory for the top players imo (see next part)

Things I wanted but failed to implement

  1. Don’t pay for samples that release molecules needed by the enemy
  2. Account for sample collision during sample choice to speed up the farming process and reduce movements between stands (don’t take samples that all require the same type of molecules)
  3. Consider projects and try to finish some in late-game with rank 1 samples when you have the opportunity

Conclusion

Again, I lost tons of time on a solution that I couldn’t make work. The rewrite was good but too greedy/stupid. With more than 2 days on my second version, I would have been able to add the extra features it needed, but I managed my time poorly, even though I had lots of it for this contest (I spent several dozen inefficient hours on this one…)

The game was fun, even though it was a bit too RNG-dependent. It was hard to get good feedback on your AI’s level. Sure, you can see the results of a feature that drastically improves your AI, but the fine-tuning part felt like blind tuning.

Thanks CG and see you at the next contest.

15 Likes

Guys, for those of you who enjoyed the game mechanics of this contest, you should check out the tabletop game Splendor: https://boardgamegeek.com/boardgame/148228/splendor It is very similar to the contest; I suppose someone at Codingame knows this game, given the numerous similarities. The game can be played with 4 players and has less hidden information: you see part of the samples at the beginning, and there’s no need to diagnose.

1 Like