Code4Life - Feedback & Strategy

The topic where you can tell us how you liked the challenge and how you solved it.

This contest was hosted for Roche. Check out their software application jobs.

The next contest is Wondev Woman, don’t forget to register :


I had little time for this contest, so I didn't want to write a full heuristic bot: you have to test all the cases, which is too much work. So I used a minimax alpha-beta algorithm for everything. There are two major problems with that:

  • It's very difficult to find the right evaluation function. Your AI will want to pay samples at the laboratory every time it can.
  • Your AI will want to wait, because it's waiting for the opponent to pay a sample so it can take the returned molecules.

To avoid that, you have to find the right evaluation function, and you have to code a good heuristic to order/select the promising moves. Running out of time, I changed my mind a little: my minimax algorithm can't go to the laboratory. Its only goal is to maximize the completion of my current samples. When the minimax wants to wait, I check if I can pay a sample; if I can, I go to the laboratory.
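A minimal sketch of that decision layer in Python (the `alphabeta` search and the state fields here are hypothetical placeholders, not the author's actual code):

```python
# Sketch of the post's idea: the minimax only optimizes sample completion
# and is never allowed to propose GOTO LABORATORY; a wrapper intercepts
# WAIT and turns it into a laboratory trip when a held sample is payable.

def can_pay_any_sample(state):
    """True if some held, diagnosed sample is fully covered by
    expertise + carried molecules (hypothetical state layout)."""
    return any(
        all(state.expertise.get(m, 0) + state.storage.get(m, 0) >= c
            for m, c in sample.cost.items())
        for sample in state.held_samples
    )

def choose_move(state, alphabeta, depth=6):
    move = alphabeta(state, depth)   # restricted search: no laboratory moves
    if move == "WAIT" and can_pay_any_sample(state):
        return "GOTO LABORATORY"
    return move
```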

I coded this in 12 hours, so I'm pretty happy with the result. I know Agade is running pure minimax, so it's possible to find the right evaluation function if you have the time.


Blitzprog, ~80th

That challenge was nice. No strategy was obvious here, there was a lot of fail and retry, and greedy algorithms weren't that useful. List of stuff I did…

Wood2 to wood1: Pick 3 “good” samples (with good points/molecule ratio), grab the molecules, send to laboratory.

Wood1 to bottom gold : implemented a fairly complete strategy for all cases. Pick samples according to my total expertise, analyse, pick molecules and go to the laboratory without sending an invalid command.

Change of rules from bronze to silver really helped my bot for some reason. It did its job during the week. :smiley:

Bottom gold to middle legend : I’ve been busy this week-end. I re-wrote almost everything.

  • An ordering function that decides how best to use held samples, trying all possible combinations and saving the most molecules (e.g. using CCA(+A) before AAB(+B), because the +A expertise will make AAB(+B) cost 1 less)
  • Picking molecules carefully, looking at the stock. The lower the stock for a molecule I need, the greater the chance I’ll get to pick that one first.
  • When my bot is done picking its needed molecules, it keeps going and tries to reduce what the opponent can do by picking more, running the ordering function on the opponent to see which extra picks cause the most problems.
  • Laboratory: send samples in the best order. Also stall the opponent if my bot realizes he is waiting for me to give back the molecules.
  • Diagnosis sample upload: more work to get around molecule blocking and avoid running in circles: deposit samples I can't pay for if I hold 2 or more. Also detect when the opponent is holding a molecule he doesn't need at all, which would otherwise let me use the sample: I'll deposit that sample anyway.
  • Diagnosis sample download: simply see for each sample if it would help to have it. On the next turn it will do that for other samples if there are more and I have space. Not fully optimal but it did help me to win more games.
  • Time limit: react to the end of the game. The sample ordering function also has time management, so it can simply tell the bot to find the best way to score within the 200-turn limit.
  • I also shortcut from diagnosis to laboratory when no molecules are needed.
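The ordering function from the first bullet can be sketched like this (a Python sketch of the idea with a hypothetical sample representation, not Blitzprog's actual code):

```python
from itertools import permutations

# Try every order of the held samples and keep the one that spends the
# fewest molecules: each turned-in sample grants expertise that discounts
# the samples submitted after it. A sample is (cost_dict, expertise_gain).

def molecules_spent(order, expertise):
    exp = dict(expertise)
    total = 0
    for cost, gain in order:
        for m, c in cost.items():
            total += max(0, c - exp.get(m, 0))   # expertise discounts the cost
        exp[gain] = exp.get(gain, 0) + 1         # gained expertise helps later samples
    return total

def best_order(samples, expertise):
    return min(permutations(samples),
               key=lambda order: molecules_spent(order, expertise))

# The CCA(+A) / AAB(+B) example from the post:
cca = ({"C": 2, "A": 1}, "A")
aab = ({"A": 2, "B": 1}, "B")
```

Here `best_order([aab, cca], {})` puts CCA first, since its +A expertise makes AAB one molecule cheaper.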

All of these allowed me to progressively move from bottom gold to top gold (30th).

The last thing I wanted to do was tune the magic numbers. Thankfully, Bob was around and gave me a better set for when to start picking yellow and red samples: pick rank 2 samples when you have at least 15 expertise levels combined. That was enough to beat the gold boss. :slight_smile:

This is one of the first challenges where I entered the legend league using PHP, with a procedural approach.
Since I'm in legend, my bot gets to participate in the final re-run, which is running as I am typing this message. I hope I gain a few more ranks :smiley:

I hope we get to see this challenge in multiplayers!


Wasn't your bot non-deterministic at some point? Is it due to inconsistent computation times on the CG servers?

I use iterative deepening.
I first run the minimax algorithm at a depth of 6. If there is still time, I try depth 8, and so on. So yes, sometimes my AI is non-deterministic, because the computed depth is not the same.
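The loop looks roughly like this (a Python sketch; in a real bot the time check also happens inside the search so a deep iteration can abort cleanly, and the 40 ms budget is a placeholder):

```python
import time

# Iterative deepening: search at depth 6, then while time remains retry at
# depth 8, 10, ..., keeping the result of the last completed search.

def iterative_deepening(state, alphabeta, budget=0.040):
    start = time.perf_counter()
    best = alphabeta(state, depth=6)
    depth = 8
    while time.perf_counter() - start < budget:
        best = alphabeta(state, depth=depth)
        depth += 2
    return best
```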


This was my first contest and I didn't have much time during the first days. Bronze to Silver was fairly simple. In Silver I tried my code against some human players and realized that I would need a solution to getting blocked. I decided to circumvent it altogether.

  • I wrote some code to look at all the samples in the pool and determine what combination of molecules would give the highest chance of drawing a sample that can be completed. I did this for all ranks, and when the expected number of draws was low enough I would move up a rank.

  • When I was at molecules I calculated if I had any samples that could be safely completed without getting blocked. Each round checking what the highest priority molecule is for the target sample.

  • I calculated the best sequence to turn in my samples. I also used this code to see if any samples in the cloud can offer a better sequence than I currently have.

Calculating the combinations was pretty slow, so it took me a while to optimize performance and preload what I could in round 1.
Later I wanted to add blocking of opponent samples and change my algorithm to be greedier when a good fallback exists.

Sadly I never had time to add that, and I had a bug that skipped some combinations in step one, leading to bad moves. So I finished around rank 200 in Silver. Even so, I really enjoyed this contest and hope I will be able to revisit the game in the future.


I came 9th with a purely heuristic bot. I spent quite a while trying to get minimax to work, but never managed to get the eval function working well enough, so I gave up on it on Friday as I wasn't around for most of the weekend.

The main part of my bot was splitting the samples the bot held into done, possible and other. Done samples were possible without picking up any molecules, and I added the expertise to the bot / increased the cost of all other held samples by the molecules used for the done sample. This meant it would calculate the best order to hand the samples in, and all future calculations would ignore that sample and assume it had been handed in. Possible samples were samples that could be completed with the current molecules on the table (and if my score was ahead of the enemy it would include molecules they were carrying), and other samples couldn’t be completed without something changing.
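That done / possible / other split, including the expertise chain from handing in done samples, can be sketched as follows (hypothetical data layout, not RoboStac's code):

```python
# Samples are dicts: {"cost": {molecule: count}, "gain": molecule}.
# 'done' samples need no pickups; handing one in grants expertise and
# consumes carried molecules, which is replayed before judging the rest.

def classify(samples, expertise, carried, available):
    exp, carry = dict(expertise), dict(carried)
    done, remaining = [], list(samples)
    changed = True
    while changed:                    # sweep until no sample flips to done
        changed = False
        for s in list(remaining):
            need = {m: max(0, c - exp.get(m, 0) - carry.get(m, 0))
                    for m, c in s["cost"].items()}
            if not any(need.values()):
                done.append(s)
                remaining.remove(s)
                for m, c in s["cost"].items():      # spend carried molecules
                    carry[m] = max(0, carry.get(m, 0) - max(0, c - exp.get(m, 0)))
                exp[s["gain"]] = exp.get(s["gain"], 0) + 1
                changed = True
    possible = [s for s in remaining
                if all(available.get(m, 0) + carry.get(m, 0) + exp.get(m, 0) >= c
                       for m, c in s["cost"].items())]
    other = [s for s in remaining if s not in possible]
    return done, possible, other
```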

My bot would then go through all the possible samples for the enemy and work out if it was able to block the samples they were holding (if the enemy had 3 possible samples, then blocking 2 was enough). If it couldn't block, it would try to take molecules that were needed for all my possibles before focusing on the one needing the fewest molecules to complete. If no more samples were possible, it would take molecules until it had 10: first the molecules the enemy needed the most of, followed by the molecules I had the least expertise in.

One of the biggest differences in my bot was I only took 2 samples on the first turn. This allowed me to usually stop the enemy handing in all 3 samples at the start, as well as making it very hard to block me at the start (as I’d get to molecules 2 turns earlier).


I first looked for a solution using minimax, but wasn't able to find the right evaluation function. I also tried to find the evaluation function through a genetic algorithm, but got no result. After a few days without progress I decided to go for a heuristic solution, and finished 150th. In the heuristic solution, what looked most important to me was determining which sample rank to take: too high and you're slow at expertise, too low and you're slow at health. Apart from that:

  • Prefer samples in the cloud to samples in the module, as they are already diagnosed.
  • Wait for the opponent to deliver at the laboratory if I'm at the molecules module, to catch the molecules coming back, and reciprocally when I'm the one at the laboratory.
  • Select samples according to the science projects.
  • Take molecules needed by the opponent, if I'm not full, to block him.
  • Near the end, avoid samples I won't have time to complete.
  • Don't forget to take the expertise level-ups into account when counting the molecules needed to deliver several samples at once.
Thank you CodinGame, it was a great contest as usual, and congratulations to Agade.


I had great fun, once more, and finished 175th.

Congratulations to the winners :trophy:

My bot is heuristic-based with an FSM. Like RoboStac, I sorted samples into categories:

  • Free samples: paid entirely by my expertise.
  • Can samples: paid by my expertise + carried molecules + expertise gained from other carried samples.
  • Maybe samples: samples that I may build with the currently available molecules, minus a penalty based on the location of the opponent (the idea is to avoid samples that are too easy for the opponent to lock).
  • Impossible samples: samples that cannot be done currently. Impossible samples are candidates to be dropped in the cloud.

I did a similar sorting for the cloud samples.

Molecule picking strategy: I picked the rarest molecule first, to try to lock the opponent and avoid being locked. This is probably weak, but it improved my ranking compared to picking in A B C D E order.

Sample pickup: I tried something somewhat adaptive by calculating a “tension”. For rank 3, a tension of 1 means that I can get 7 molecules towards ABCDE, 0.8 means only 4, and 0.6 only 3. It is a kind of probability that I'll be able to build the sample. For rank 2 I used 5 molecules, and for rank 1 I used 3. I wasn't able to calibrate this system and fell back to hardcoding my sample-picking strategy.

I also handled several ending scenarios, in order to avoid arriving at the lab with 3 rank 3 samples on the last turn. If I was losing, I never assumed the opponent would release a molecule I was waiting for, to dodge bots that wait at the lab until the last turn.

On the coding side, I implemented early on a class enabling easy calculations on molecule sets. This allowed me to write things like if (me.expertise + me.available <= sample.cost). It made the code clearer and allowed faster experiments. I'm not satisfied with my testing method this time, since I wrote no tests. This is not good :slight_smile: My secondary objective in contests is to write nice code and improve my Scala-Fu; I believe I learnt and experimented with one or two new patterns this time.
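A Python analogue of that molecule-set class (the Scala original isn't shown in the post; this just illustrates the idea of element-wise arithmetic plus comparison):

```python
from dataclasses import dataclass

# Element-wise operations over the five molecule types make expressions
# like `me.expertise + me.available <= sample.cost` read naturally.

@dataclass(frozen=True)
class MoleculeSet:
    a: int = 0
    b: int = 0
    c: int = 0
    d: int = 0
    e: int = 0

    def as_tuple(self):
        return (self.a, self.b, self.c, self.d, self.e)

    def __add__(self, other):
        return MoleculeSet(*(x + y for x, y in zip(self.as_tuple(), other.as_tuple())))

    def __le__(self, other):
        return all(x <= y for x, y in zip(self.as_tuple(), other.as_tuple()))
```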

Regarding the game itself, there were a lot of complaints that it was too random, with two submits giving different rankings. I must say that this random aspect was difficult for me to handle. I thought about it during the contest; in the end I had some ideas to tackle it, but not the energy to implement anything new over the weekend. It seems there are ways to tackle this randomness, since Agade consistently stayed 1st, even after the rerun.

Looking forward to the next contest!


Hello there, 205th in gold. Thanks for the game and the community, I really enjoyed it.

I have a greedy algorithm based on state machines, with a focus on the MOLECULES state, where molecule picking happens.

Each turn I evaluate every molecule type by its contribution to both my samples and those of the opponent. A molecule contributes to a sample if it helps make progress on that sample.

Then I sum up a molecule's contribution across all samples, for both me and the opponent. I control the weight of the contribution to the opponent's samples to adjust my aggressiveness. My bot becomes more aggressive as I gain more expertise.
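That scoring can be sketched like this (my reading of the post, with a hypothetical data layout):

```python
# A molecule contributes to a sample if the sample still needs it after
# expertise and already-carried molecules; the opponent's contributions
# are weighted by an aggressiveness factor.

def contributes(molecule, sample_cost, expertise, storage):
    still_needed = (sample_cost.get(molecule, 0)
                    - expertise.get(molecule, 0)
                    - storage.get(molecule, 0))
    return 1 if still_needed > 0 else 0

def score(molecule, me, opp, aggressiveness):
    mine = sum(contributes(molecule, s, me["exp"], me["storage"])
               for s in me["samples"])
    theirs = sum(contributes(molecule, s, opp["exp"], opp["storage"])
                 for s in opp["samples"])
    return mine + aggressiveness * theirs
```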

I wanted to try some combinatorial optimization or minimax, but I ran out of time. :frowning: Hopefully the game will be released as a multi for practice.

Congratulations to all legends, bravo!


I liked this contest. Wish I had more time to explore some of the ideas I had. To give a window into my strategy, I will list all of my unit tests for my bot:

Should move to SAMPLES if no samples carried
Should move to SAMPLES if no actionable samples carried
Should get rank 1 if at SAMPLES and carrying fewer than 3
Should get rank 2 if at SAMPLES and total expertise > 4
Should get rank 3 if at SAMPLES and total expertise > 11

Should move to DIAGNOSIS for cloud sample if arms otherwise full
Should not move to DIAGNOSIS for cloud sample unless arms otherwise full
Should move to DIAGNOSIS with unknown samples
Should diagnose first sample when at DIAGNOSIS
Should diagnose subsequent samples when at DIAGNOSIS

Should correctly calculate needed molecules plan for single sample
Should correctly calculate needed molecules plan for multiple samples
Should correctly combine costs with expertise in plan
Should correctly combine costs with carried molecules in plan
Should correctly combine predicted costs with expected future expertise in plan
Should correctly calculate multiple plans for multiple samples with various combinations

Should go to DIAGNOSIS to get more samples if thwarted
Should pull attractive samples from the cloud if available
Should take expertise into account when calculating attractive samples
Should not try to pull samples carried by opponent from the cloud
Should prefer sample with lower cost
Should prefer sample with more health
Should prefer sample that creates a good plan when combined with currently carried samples

When pulling from cloud, should count on opponent's molecules if opponent at LABORATORY
When pulling from cloud, should count on opponent's molecules if opponent at MOLECULES
When pulling from cloud, should not count on opponent's molecules if opponent at SAMPLES
When pulling from cloud, should not count on opponent's molecules if opponent at DIAGNOSIS
When pulling from cloud, should not count on molecules opponent doesn't have

Should go to DIAGNOSIS to get rid of bad samples if carrying 3 impossible

Should move to MOLECULES if not already there when samples diagnosed
Should get first needed molecule
Should get next needed molecule
Should get next needed molecule when limited supply
Should get what's needed for second sample
Should abandon impossible sample

Should prioritize taking molecules needed by both me and opponent
Should pay attention to molecules opponent has when prioritizing molecules
Should pay attention to expertise opponent has when prioritizing molecules
Should not take more than total of 10 molecules when blocking opponent
Don't try to block when only one molecule left of a given type
Block first, take what you need later
Should prioritize taking scarce molecules first

Assume opponent will relinquish all molecules for completed sample when at lab
Should take opponent's expertise into account when predicting molecules relinquished
Should handle multiple opponent completions when predicting molecules relinquished
Should WAIT for needed molecules from opponent at LABORATORY

Should move to LABORATORY when no more needed molecules available
Should account for expertise when deciding to move to LABORATORY
Should put the completed sample in the LABORATORY
Should put prerequisite sample into lab first

Should move to DIAGNOSIS for attractive cloud sample
Should move to DIAGNOSIS for multiple attractive cloud samples
Should move to DIAGNOSIS from LABORATORY for multiple attractive cloud samples
Should not move to DIAGNOSIS for unattractive cloud samples

Should not print CONNECT 0 from
Should put correct sample from

47th (legend), 2nd scala

This was my favourite contest (again :smiley:), I really liked the similarity with board games. The change of rules in Silver was a great addition !

First WE I rushed to implement basic structures and get to Bronze for the full rules.
My code kept the same basic structure the whole contest: depending on high-level features of the game, I decide to move or to act at a module. My “state” object exposed several high-level methods, like completed samples, cost to complete a sample with my upcoming expertise, molecules missing by me / the opponent, …
I tried to write some unit tests for these methods but was very lazy by the end.

Monday I implemented a basic offline Arena to pit my code against its former version. Thanks to the amazing work of @TrueLaurel (scala project on Github) it was much easier. We will collaborate to improve this project to provide a good quickstart for scala enthusiasts !
This Arena helped me tune my parameters to select the right rank. When new Silver rules were released I could quickly adapt and peaked for a short time at 10th rank.

During the week I had little time (family, work) and I did minor tweaks, like taking molecules both can use in priority, moving to a module when I can act at least twice in it, using cloud when appropriate … I kept around 100th place overall.

Last WE I improved my endgame (last ~10 turns) so I wouldn't waste any turn producing before the game ends, and fixed several bugs (seen with CG Spunk's new feature about module bouncing, cool!), which got me to legend.
My last commit keeps track of all undiscovered samples and checks how likely I am to complete them with my current molecules / expertise.

On Sunday night I tried to exploit the cheese (submitting samples without analyzing them), but the results were not so good; to make it work I had to change too much of my logic (like keeping molecules to reach 4 with my expertise), and it introduced too many behaviour bugs. Nevertheless it was a really fun try!

Btw, I was head to head with @Eeval for 1st place in Scala during the last WE. @Eeval, it seems we both live in/very near Lille; we should exchange tips like we did with TrueLaurel!

New stuff I used this time :

  • TrueLaurel's Bundler (link above) to aggregate all my Scala sources into 1 file. It really helped to better organize my code and share reusable code for future contests. The submitted source is now ~1500 lines, which is huge in Scala.
  • his abstract structure to build a player, arena … saved some time. I’ll try to improve the framework for next time.
  • keep a TODO list of ideas to try
  • splitting my code into:
    • a gamestate class, which I can test and which changes less
    • the player logic, which changes more often and which I test from the offline Arena / CG Spunk

What I want to improve :

  • apply TDD more strictly and have better unit tests
  • make smaller steps before testing in CG Spunk so it’s easier to improve continuously. 10 min coding should be a maximum before a test
  • build a reliable offline arena (with tests ! mine had several leaks) upfront so I can iterate on tuning quicker.
  • try a std AI algorithm like MCTS / alphabeta / genetics

84th rank / Javascript

This was the first contest where I could dedicate a good amount of time, and I made it to the legend league.

My code was almost purely rule-based, depending on the station I am currently at.

What got me over the line to legend was the algorithm for blocking my opponent when fewer than 4 resources of a type are needed to block my opponent's most valuable sample.
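My reading of that blocking rule, as a sketch (the cost here is assumed to already be net of the opponent's expertise and carried molecules; the names are hypothetical):

```python
# Block when taking fewer than 4 molecules of one type leaves the opponent
# one short on their most valuable sample.

def blocking_take(opp_sample_cost, available, free_slots):
    """Return (molecule, count) to take for the block, or None."""
    best = None
    for m, needed in opp_sample_cost.items():
        to_take = available.get(m, 0) - needed + 1   # leave the opponent one short
        if 0 < to_take < 4 and to_take <= free_slots:
            if best is None or to_take < best[1]:
                best = (m, to_take)
    return best
```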

What would have improved the code further:

  • checking how many turns are left and changing the strategy in the endgame
  • potentially blocking a scarce resource (e.g. the opponent has no expertise in type C molecules, so I can block that resource type completely, or even only take 3 or 4 out of 5 once my opponent develops rank 3 samples). Again, it was difficult to know whether blocking your opponent more is helpful or costs you too much by losing storage capacity.
  • having my own arena would have been great; there were a few parameters I was playing around with, and I couldn't figure out if they improved the bot or not. Trying to tell from the rank is impossible, as it takes too long to stabilize.

I didn't write any tests and developed directly in the browser, which to my surprise worked out well. However, in other competitions it was much more complicated to see why my bot was behaving in a certain way, and troubleshooting on CodinGame without a testing strategy is very difficult. Next time I will try to use my own IDE and set my machine up properly.

I'd love to figure out a way to take the code that was provided on GitHub, build an arena around it, and somehow let it run with my JavaScript bots. If anyone knows how to do that, let me know.


You can use:


It allows you to run a lot of games against your own bots. But be aware, local != arena: you can do better in local testing without it improving your arena ranking much.


About the Challenge

I would like to thank Roche for this challenge, and I hope that they get good candidates from the player pool. I also hope that more and more companies follow this path for recruiting.
About the challenge itself, it has some good and bad things:

  • Interesting game, simple rules but very complex to master.
  • This time the Wood bosses weren't that hard, so players should have been less frustrated. Players were well distributed between leagues, with the majority in Silver.
  • CG heard players and made nice rules changes. This is a nice thing, and IMO the new rules added more strategy.
  • Referee is a plus, and the game was almost bug free from the start.


  • Sorry to repeat this, but CG points earned in challenges shouldn't be based on player count. The 1st player in Code4Life earns fewer points than the 189th player in Coders of the Caribbean. It doesn't reflect relative skill at all; it just rewards a good placing in highly played games (CotC, GitC and CvZ).
  • Many GUI and art problems: tracking samples and projects in the GUI was nearly impossible. Also, the two bots looked exactly the same, and with samples hiding their color I was lost about who was who most of the time.
  • Ties are scored badly on a resubmit. If I tie all the initial matches, the system places me last. How!? Why!? That's wrong; my bot tied a lot and I struggled going up through the ranks.
  • No T-Shirt :disappointed:

My main goal in the challenge was just trying to get 1st in some language, for the achievements. I first tried Free Pascal, but CG's version is totally outdated and terrible to code with. So I chose Dart in the end.

About my strategy

My strategy is based on a joke about “viable” strategies. In theory, going all-in with undiagnosed samples should be a “strategy” only when you're losing, but I managed to end 42nd in Legend with it as my main strat. My strategy is as follows:

  1. Start normally, with all rank 1 samples. Trying to not be blocked
  2. At some point, you have enough expertise to be able to complete any undiagnosed rank1 sample with expertise + molecules
  3. Some time later, you reach the Sweet Spot, where you complete them with expertise alone. At this point you get 3 rank 1 samples completed every 12 turns (one sample every 4 turns).
  4. Once the 3 Science Projects are discovered, return to a “normal” strategy.

My bot was lacking a lot of things, like taking samples from the cloud, and it had terrible endgame management. Even with my terrible lack of features, I managed to hold on to a top-50 spot:
This is an example of my strategy working: Marchete vs Monstruo Carnal! Replay, starting from frame 240 :).
It's viable but far from ideal, and very RNG-dependent. But it was fun to do; nobody expected the Spanish Inquisition! I like alternative strategies, and in this case it was somewhat viable :slight_smile: I'm not sure if many other players managed to get it working.


local != arena, because I am competing against myself, right? So all the parameters are the same as in the arena, but if I get better against myself it doesn’t necessarily mean I’m improving against a different player with a different strategy

Hi everybody,

A great challenge! I began on Monday morning and passed into the Silver league when it opened, with these heuristics:

  • Each sample has .done (0: complete, <0: molecules missing), .possible (0: yes, <0: not enough molecules available), .possibleNow (0: yes, <0: not enough available or not enough space in my bag)
  • Each player has .nbDone, .nbPossible, .nbPossibleNow
  • EVERYWHERE : Choose moving to which module or do action here
  • SAMPLES : take rank1, 2 3 with a cost based on expertise
  • DIAGNOSIS : just analyse
  • MOLECULES : evaluate each molecule by the number needed by me and the opponent, and take a few more molecules than needed.
  • LABO : Submit sample with the best health

To enter League Gold :

  • Evaluate the samples in the cloud and the samples in hand. Drop the worst in hand for the best in the cloud. I had no time to code the full combo evaluation.

Top Gold :

  • Add combo samples to the list: all diagnosed samples in every possible order. A combo gets the sum of healths and the sum of each molecule type. The combo's sample id is the first sample's id, and the expertise gained is the last sample's expertise gain. The first sample's expertise gain is subtracted from the molecule cost, so the combo reflects exactly the molecules needed to submit the samples in the right order. Note that a 3-sample combo is a combination of a 2-sample combo with a sample. With this trick, the “samples + combos” list is treated as if it were a plain sample list in all algorithms.
    eg : combo AABBE (id:0 health:1 gain:C) + CCCC (id:1 health:10 gain:D) = AABBECCC (id:0 health:11 gain:D)
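A sketch of that combo-merging trick in Python (hypothetical representation; the point is subtracting the first sample's expertise gain from the second's cost):

```python
# Merge two (possibly already merged) samples into one pseudo-sample whose
# cost is exactly the molecules needed to submit them in that order.

def combine(first, second):
    cost = dict(first["cost"])
    for m, c in second["cost"].items():
        discount = 1 if m == first["gain"] else 0    # expertise earned by 'first'
        cost[m] = cost.get(m, 0) + max(0, c - discount)
    return {"id": first["id"],
            "health": first["health"] + second["health"],
            "gain": second["gain"],
            "cost": cost}

# The post's example: AABBE (+C) combined with CCCC (+D) -> AABBECCC (+D)
aabbe = {"id": 0, "health": 1, "gain": "C", "cost": {"A": 2, "B": 2, "E": 1}}
cccc = {"id": 1, "health": 10, "gain": "D", "cost": {"C": 4}}
```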

I was 40th in Gold just before the Legend league opened, but only the top 20 were taken. After that it was very difficult to climb back up top Gold. What I changed next:

  • Changing Magic numbers to select rank 1 2 3
  • Adapting my rank choice to the enemy's rank
  • Move to LABO on last rounds if needed
  • Change aggressiveness (blocking enemy samples) depending on whether I'm losing or winning
  • Allow free moves between modules instead of hard coding the moving sequence
  • Add sample “blockable” state
  • I coded the rank 1 exploit (LABO without diagnosis), just adding another sample state and setting the molecule type counts to (4 4 4 4 4). But I deactivated it because it was too hard to balance with the rest of the game.

My last code in Gold ranked between top 5 and top 100 on each submit. Too bad: I was top 3 in Gold last Monday morning, 0.7 points under the Boss, but I tried to resubmit. I knew I shouldn't do that!!! Final rank: 189th (but 1st in my language :slight_smile:).

Very fun times, as always. Thanks CodinGame!


22nd rank / Go - Congrats to the winners, and especially, Agade. Thanks again for keeping your “best” bot in to enable others to get better.

As usual, family and work life prevented me having the amount of time I’d like to put into the contest, but I was able to put in a little time throughout the contest which was a good change.

I liked this contest because it didn’t require large sets of knowledge around AI techniques that I don’t have at my fingertips (or time to make them so) to do reasonably well. Thoughtful problem analysis generated reasonable returns on time investment.

From previous contests, I found that I usually ended up with a game-state “class” and component “classes”. I create input-processing and state-init methods that wrap the common unchanging boilerplate code. I chose to keep the code simple and isolated to allow for quicker iterations. I also decided to try not to overthink things.

The bot has two layered functions. I didn't do anything with forward-looking searches or probability-based sample guessing, just a simple game-state eval function that made choices for all possible modules all the time. The first function returned a list of samples to complete (in order), a molecule to grab, and lists of the stored samples that were buildable. These were rebuilt every turn. The only “search”-like piece was a permutation expansion of the samples to determine a good build/acquire order.

The second function took those “flags” and the bot's location to generate the output command. While not coded as an FSM, it functions as one. The basic flow seems to match most other bots' styles. It has very little optimization for score differential and time remaining.

There are only two places where I consider the other bot. When picking molecules, the system will attempt to stall the opponent's samples as long as that doesn't get too much in the way of mine. This is not great, and other bots did much better, but it was enough to beat the boss in gold and hold my own. This is where I would spend more time improving. The other place is when choosing to complete samples: I will wait if it appears that the other bot is waiting for my completion. This is also the only data I carried across turns.

An example of not overthinking to reduce complexity is my bot's algorithm for choosing which sample ranks to pull. It is a single static list, tweaked periodically to address opponents' choices. I'm sure it could be replaced with something more elegant, but choose your battles.

Things I’d spend more time on: Molecule blocking strategies, end game optimizations, better waiting strategies, better sample picking strategy (I’d love to try RoboStac’s only take two to start strat).

Interesting quirks I noticed: ties are strange. I'm not sure how the ranking eval handles them. There was a period on Saturday where I was 2nd but would lose to many other bots, yet would tie Agade about 10% of the time (by Sunday that had dropped a lot). We would get stuck watching each other at two stations, then each complete 3 rank 1, health 1 samples and tie 3-3 or 5-5. It may not be anything, but I thought it was interesting.

This is the first contest where I actually had the “tools” set up and ready to use (CodinGame Sync and CG Spunk). After the last few contests, where I found that not getting large numbers of test runs in was hammering my advancement, I spent time getting used to those two tools and figuring out how to use them. This seems to have helped as well.

With regard to bot hiding and bot “decay”: it seems that while some bots showed up really late from nowhere, it was not as bad as in previous contests. I'd be interested in hearing what others thought.

List of samples (silly but …):
1, 1, 1,
1, 1, 1,
1, 2, 2,
2, 2, 2,
2, 2, 3,
2, 3, 3,
2, 3, 3,
3, 3, 3,
3, 3, 3,
3, 3, 3


Many of us agree with that, and we asked long ago for the formula to change. But CodinGame always responded “we will discuss it”. We never got any other response.

There are some problems with local arenas, yes. For example, I have brutaltester, but I also have a local ELO arena for evolving my own code. I just tag some variables, and the arena performs a genetic algorithm to evolve those variables to be the best. Using the ELO arena was a colossal failure in this contest: because my AI has some weaknesses, the ELO arena just exploited those weaknesses. The result of my evolution was destroying my current AI (with an 80% winrate!), but it couldn't do better than 50th rank in the legend league.

So you can use local arenas; just be careful and understand what you are doing :stuck_out_tongue:



I am grateful for this very interesting game. I really enjoyed it and dedicated a week to it. I basically implemented FSM-based heuristics.
The most difficult and important parts, I thought, were deciding which sample rank to get and which molecule type to pick up, either for myself or for blocking the opponent.

Sample Rank
I estimated the expected average cost based on my expertise and a magic number. My thinking and some experiments showed that an average cost of about 4~4.5 per sample was good for me and my heuristics. I checked the sample pools, removed the ones already drawn, then calculated the average sample cost of each rank and found the best fit within my budget. I also tried checking against my current storage to find a more suitable cost, but it didn't work well, since it would often return a lower rank than I expected.
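The rank choice can be sketched roughly like this (a Python sketch of my understanding of the post; the pools in the test are illustrative, not the real Code4Life cost tables):

```python
# For each rank, average the net cost (cost minus expertise) of the samples
# still in the pool, then pick the rank whose average best fits the budget.

def avg_net_cost(pool, expertise):
    costs = [sum(max(0, c - expertise.get(m, 0)) for m, c in sample.items())
             for sample in pool]
    return sum(costs) / len(costs)

def pick_rank(pools_by_rank, expertise, budget=4.5):
    return min(pools_by_rank,
               key=lambda r: abs(avg_net_cost(pools_by_rank[r], expertise) - budget))
```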

Molecule Type
First, I just wrote a simple logic to collect first the molecules both robots require. It seemed to work fine up to a certain level, but I couldn't get further with it, and I didn't have enough time to switch to an algorithm such as minimax.
I finally came up with another logic that calculates the very last safe turn for collecting my molecules before the opponent can interfere, and vice versa. That calculation depends heavily on where the opponent is. The basic formula is as below.
safe turns = molecules available - sample cost
ex) available: A4 B4, cost: A3 B2 : distance = 4-3 + 4-2 = 3
This means I can collect all of my required molecules if I am 3 or more turns ahead of the opponent.
What if the opponent is already in the molecule module, or on the way? The available resources should be divided by 2 in that case.
safe turns = molecules available / 2 - sample cost
ex) available: A4 B4, cost: A3 B2 : distance = 4/2-3 + 4/2-2 = -1, impossible to collect them safely.
It is also possible to calculate this while the opponent is still on the way to the molecules (the remainder of the division by 2 is rounded, due to the simultaneous-turn rules):
safe turns = eta + (molecules available - eta) / 2 - sample cost
ex1) available: A4 B4, cost: A3 B2, eta: 1 : distance = 1 + (4-1)/2-3 + 4/2-2 = 0. It's still collectable if you take A first.
But you have to be careful to collect the right one while the opponent is coming.
ex2) available: A4 B4, cost: A3 B2, eta: 2 : distance = 2 + (4-1)/2-3 + (4-1)/2-2 = 1.
ex3) available: A4 B4, cost: A3 B2, eta: 2 : distance = 2 + (4-2)/2-3 + 4/2-2 = 0; be careful not to take A twice, which would cost you a turn.

Finally, you can calculate all of your own and the opponent's safe turns and sort them; the lowest value (greater than or equal to 0) is the most urgent, whether it's yours or not.
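The two simplest cases (opponent elsewhere; opponent already at the molecule module) can be sketched like this; the eta refinement with its rounding rules is left out:

```python
# "Safe turns": spare supply of each needed type before the sample becomes
# uncollectable. >= 0 means the molecules can still be gathered safely.

def safe_turns(available, cost, opponent_at_module=False):
    total = 0
    for m, c in cost.items():
        a = available.get(m, 0)
        usable = a // 2 if opponent_at_module else a   # alternate picks when contested
        total += usable - c
    return total
```

With the post's numbers, `safe_turns({"A": 4, "B": 4}, {"A": 3, "B": 2})` gives 3, and -1 when the opponent is already at the module.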

Unfortunately, I submitted this logic at the very last minute, so I didn't have time to review it; I couldn't even verify whether it works correctly. I was very disappointed that I didn't get any chance to confirm whether my theory was correct or worthwhile. Please tell me the result if someone else also tried this approach.
Anyway, a very awesome contest and I really enjoyed it. I hope to see C4L as a multiplayer game again in the future.
Thanks all.