The topic where you can tell how you liked the challenge, & how you solved it.
This contest was hosted for Roche. Check out their software application jobs.
The next contest is Wondev Woman, don’t forget to register :
I had little time for this contest, so I didn’t want to write a fully heuristic bot: you have to handle all the cases, which is too much work. Instead I wrote a minimax alpha-beta algorithm for everything. There are two major problems:
To avoid them, you have to find the right evaluation function, and you have to code a good heuristic to order/select the promising moves. Running out of time, I changed my mind a little: my minimax algorithm can’t go to the laboratory. Its only goal is to maximize the completion of my current samples. When the minimax wants to wait, I check whether I can pay for a sample. If I can, I go to the laboratory.
I coded this in 12 hours, so I’m pretty happy with the result. I know Agade is pure minimax, so it’s possible to find the right evaluation function if you have the time.
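For readers unfamiliar with the technique, here is a minimal sketch of depth-limited minimax with alpha-beta pruning. The game model (`children`, `evaluate`) is a hypothetical placeholder, not the poster’s actual code:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Return the best achievable evaluation from `state`."""
    kids = children(state, maximizing)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the opponent will avoid this branch
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

The pruning is what makes the depths mentioned later in the thread reachable within a turn’s time budget, and the move-ordering heuristic mentioned above matters precisely because better ordering means earlier cutoffs.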
That challenge was nice. No strategy was obvious here, it took a lot of trial and error, and greedy algorithms weren’t that useful. List of stuff I did…
Wood2 to wood1: Pick 3 “good” samples (with good points/molecule ratio), grab the molecules, send to laboratory.
Wood1 to bottom gold: implemented a fairly complete strategy for all cases. Pick samples according to my total expertise, analyse, pick molecules and go to the laboratory without sending an invalid command.
Change of rules from bronze to silver really helped my bot for some reason. It did its job during the week.
Bottom gold to middle legend: I’d been busy this weekend. I rewrote almost everything.
All of these allowed me to progressively move from bottom gold to top gold (30th)
The last thing I wanted to do was change the magic numbers. Thankfully, Bob was around and gave me a better set to start picking yellow and red samples from: pick rank 2 samples when you have at least 15 expertise levels combined. That was enough to beat the gold boss.
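The threshold rule above could be sketched as follows. Only the rank-2-at-15-expertise threshold comes from the post; the fallback to rank 1, and the absence of a rank-3 condition, are illustrative gaps:

```python
def pick_rank(total_expertise):
    # Threshold quoted in the post: rank 2 once combined expertise reaches 15.
    # Below that, fall back to rank 1 (the post gives no rank-3 condition).
    return 2 if total_expertise >= 15 else 1
```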
This was one of the first challenges I entered the legend league with using PHP, using a procedural approach.
So since I’m in legend, my bot gets to participate in the final re-run, which is running as I type this message. I hope I gain a few more ranks.
I hope we get to see this challenge in multiplayers!
Wasn’t your bot non-deterministic at some point? Is it due to inconsistent computation times on CG servers?
I use iterative deepening.
I first run the minimax algorithm at depth 6. If there is still time, I try depth 8, and so on… So yes, sometimes my AI is non-deterministic, because the computed depth is not the same.
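The deepening loop described above might look like this. The time budget and the `search` callable are hypothetical placeholders (CG turns allow on the order of 50 ms, so a budget below that is assumed):

```python
import time

def iterative_deepening(search, state, budget_s=0.045, start_depth=6, step=2):
    """Search at increasing depths while time remains; keep the last result."""
    deadline = time.perf_counter() + budget_s
    best = None
    depth = start_depth
    while time.perf_counter() < deadline:
        best = search(state, depth)  # deepen only if time remains
        depth += step
    return best, depth - step  # result and the deepest completed depth
```

This is also exactly why the bot is non-deterministic: the depth actually reached depends on how fast the server runs that turn.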
This was my first contest and I didn’t have much time for the first days. Bronze to Silver was fairly simple. In silver I tried my code against some human players and realized that I would need a solution to getting blocked. I decided to circumvent it altogether.
I wrote some code to look at all the samples in the pool and determine what combination of molecules would give the highest chance to draw a sample that can be completed. I did this for all ranks and when the expected number of draws was low enough I would move up a rank.
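The inner check of that pool analysis might look like the sketch below: given a pool of undrawn sample costs and a molecule holding, compute what fraction of the pool could be completed. The pool contents and dict representation here are made up for illustration:

```python
def completable_fraction(pool, molecules):
    """Fraction of sample costs in `pool` payable with the given molecules.

    `pool` is a list of cost dicts (molecule type -> count needed);
    `molecules` maps molecule type -> count held.
    """
    ok = sum(all(molecules.get(m, 0) >= need for m, need in cost.items())
             for cost in pool)
    return ok / len(pool)
```

Maximizing this over candidate molecule combinations, per rank, would give the “highest chance to draw a completable sample” figure the post describes.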
When I was at molecules I calculated whether I had any samples that could be safely completed without getting blocked, each round checking what the highest-priority molecule was for the target sample.
I calculated the best sequence to turn in my samples. I also used this code to see if any samples in the cloud can offer a better sequence than I currently have.
Calculating the combinations was pretty slow, so it took me a while to optimize performance and preload what I could in round 1.
Later I wanted to add blocking of opponent samples and change my algorithm to be greedier if a good fallback exists.
Sadly I never had the time to add that and I had a bug that skipped some combinations in step one that would lead to bad moves. So I finished around rank 200 in Silver. Even so, I really enjoyed this contest and hope I will be able to revisit the game in the future.
I came 9th with a purely heuristic bot. I spent quite a while trying to get minimax to work but never managed to get the eval function to work as well, so gave up on it on Friday as I wasn’t here for most of the weekend.
The main part of my bot was splitting the samples the bot held into done, possible and other. Done samples were possible without picking up any molecules, and I added the expertise to the bot / increased the cost of all other held samples by the molecules used for the done sample. This meant it would calculate the best order to hand the samples in, and all future calculations would ignore that sample and assume it had been handed in. Possible samples were samples that could be completed with the current molecules on the table (and if my score was ahead of the enemy it would include molecules they were carrying), and other samples couldn’t be completed without something changing.
My bot would then go through all the possible samples for the enemy and work out whether it was able to block the samples they were holding (if the enemy had 3 possible samples then blocking 2 was enough). If it couldn’t block, it would try to take molecules that were needed for all of my possible samples before focusing on the one with the fewest molecules needed to complete. If no more samples were possible it would take molecules until it had 10: it would take the molecules the enemy needed most, followed by the molecules I had the least expertise in.
One of the biggest differences in my bot was I only took 2 samples on the first turn. This allowed me to usually stop the enemy handing in all 3 samples at the start, as well as making it very hard to block me at the start (as I’d get to molecules 2 turns earlier).
I first looked for a solution using minimax, but wasn’t able to find the right evaluation function. I also tried to find the evaluation function through a genetic algorithm, but got no result. After a few days without progress I decided to look for a heuristic solution, and finished 150th. In the heuristic solution, what looked most important to me was determining which sample rank to take: too high and you’re slow to gain expertise, too low and you’re slow to gain health… Beyond that: prefer samples in the cloud to samples in the module, as they are already diagnosed; wait for the opponent to deliver at the laboratory if I’m at the molecules module, to catch the molecules coming back, and vice versa when it’s me at the lab; select samples according to the projects; take molecules the opponent needs, if I’m not full, to block him; near the end, avoid samples I won’t have time to complete; don’t forget to take the expertise level-up into account when counting the molecules needed to deliver several samples at once; …
Thank you CodinGame, it was a great contest as usual, and congratulations to Agade.
I had great fun once more, and finished 175th.
Congratulations to the winners!
My bot is heuristic-based with an FSM. Like RoboStac, I sorted samples into categories: Free samples (paid by my expertise); Can samples (paid by my expertise + carried molecules + expertise gained from other carried samples); Maybe samples (samples I may be able to build with the currently available molecules, minus a penalty based on the opponent’s location; the idea is to avoid samples that are too easy for the opponent to lock); Impossible samples (samples that cannot be done currently). Impossible samples are candidates to be dropped in the cloud. I did a similar sorting for the cloud samples.
Molecule picking strategy: I picked the rarest molecule first, in order to try to lock the opponent and avoid being locked. This is probably weak, but it improved my ranking compared to picking in A B C D E order.
Sample pickup: I tried something somewhat adaptive by calculating a “tension”. For rank 3, a tension of 1 means that I can get 7 molecules for ABCDE, 0.8 only 4, 0.6 only 3. This is a kind of probability that I’ll be able to build the sample. For rank 2 I used 5 molecules and for rank 1 I used 3. I wasn’t able to calibrate this system and fell back to hardcoding my sample-picking strategy.
I also handled several endgame scenarios, in order to avoid arriving at the lab on the last turn with 3 rank-3 samples. If I was losing, I never assumed the opponent would release a molecule I was waiting for, to dodge bots that wait at the lab until the last turn.
On the coding side, I implemented early on a class enabling easy calculations on molecule sets. This allowed me to write something like: if (me.expertise + me.available <= sample.cost). This made the code clearer and allowed faster experiments. I’m not satisfied with my testing method this time, since I wrote no tests. This is not good. My secondary objective for contests is to write nice code and improve my Scala-fu; I believe I learnt and experimented with one or two new patterns this time.
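The helper class described above might look like the following (in Python for brevity; the author’s version was Scala, and all names here are illustrative). Element-wise `+` and `<=` make expressions like `me.expertise + me.available <= sample.cost` read naturally:

```python
class Molecules:
    """A fixed-size multiset over the five molecule types."""
    TYPES = "ABCDE"

    def __init__(self, **counts):
        self.counts = {t: counts.get(t, 0) for t in self.TYPES}

    def __add__(self, other):
        return Molecules(**{t: self.counts[t] + other.counts[t]
                            for t in self.TYPES})

    def __le__(self, other):
        # True if every component fits within `other`.
        return all(self.counts[t] <= other.counts[t] for t in self.TYPES)
```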
Regarding the game itself, there were a lot of complaints that it was too random, with two submits giving different rankings. I must say this random aspect was difficult for me to handle. I thought about it during the contest; in the end I had some ideas to tackle it, but not the energy to implement anything new over the weekend. It seems there are ways to tackle this randomness, since Agade consistently stayed 1st, even after the rerun.
Looking forward to the next contest!
Hello there, 205th in Gold. Thanks for the game and the community, I really enjoyed it.
I have a greedy algorithm based on state machines, with a focus on the molecules state, where molecule picking happens.
Each turn I evaluate every molecule type by its contribution to both my samples and those of the opponent. A molecule contributes to a sample if it helps make progress on that sample.
Then I sum up a molecule’s contributions to all the samples, for both me and the opponent. I control the weight of contributions to the opponent’s samples to adjust my aggressiveness. My bot becomes more aggressive as I gain more expertise.
I wanted to try some algorithm for combinatorial optimization, or minimax, but I ran out of time. Hopefully the game will be released as a multiplayer puzzle for practice.
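The scoring described above could be sketched like this, where each molecule type is scored by how many of it my samples still need, plus a weighted term for the opponent’s needs. The names and the single `aggressiveness` weight are my own illustration of the idea:

```python
def molecule_scores(my_needs, opp_needs, aggressiveness):
    """Score each molecule type: own need + weighted opponent need.

    `my_needs` / `opp_needs` map molecule type -> how many of that type
    the respective player's samples still require.  A higher
    `aggressiveness` weight makes denying the opponent more attractive.
    """
    types = set(my_needs) | set(opp_needs)
    return {t: my_needs.get(t, 0) + aggressiveness * opp_needs.get(t, 0)
            for t in types}
```

Picking `max(scores, key=scores.get)` each turn at the molecules module, with the weight growing alongside expertise, matches the behaviour the post describes.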
Congratulations to all legends, bravo!
I liked this contest. Wish I had more time to explore some of the ideas I had. To give a window into my strategy, I will list all of my unit tests for my bot:
- Should move to SAMPLES if no samples carried
- Should move to SAMPLES if no actionable samples carried
- Should get rank 1 if at SAMPLES and carrying fewer than 3
- Should get rank 2 if at SAMPLES and total expertise > 4
- Should get rank 3 if at SAMPLES and total expertise > 11
- Should move to DIAGNOSIS for cloud sample if arms otherwise full
- Should not move to DIAGNOSIS for cloud sample unless arms otherwise full
- Should move to DIAGNOSIS with unknown samples
- Should diagnose first sample when at DIAGNOSIS
- Should diagnose subsequent samples when at DIAGNOSIS
- Should correctly calculate needed molecules plan for single sample
- Should correctly calculate needed molecules plan for multiple samples
- Should correctly combine costs with expertise in plan
- Should correctly combine costs with carried molecules in plan
- Should correctly combine predicted costs with expected future expertise in plan
- Should correctly calculate multiple plans for multiple samples with various combinations
- Should go to DIAGNOSIS to get more samples if thwarted
- Should pull attractive samples from the cloud if available
- Should take expertise into account when calculating attractive samples
- Should not try to pull samples carried by opponent from the cloud
- Should prefer sample with lower cost
- Should prefer sample with more health
- Should prefer sample that creates a good plan when combined with currently carried samples
- When pulling from cloud, should count on opponent's molecules if opponent at LABORATORY
- When pulling from cloud, should count on opponent's molecules if opponent at MOLECULES
- When pulling from cloud, should not count on opponent's molecules if opponent at SAMPLES
- When pulling from cloud, should not count on opponent's molecules if opponent at DIAGNOSIS
- When pulling from cloud, should not count on molecules opponent doesn't have
- Should go to DIAGNOSIS to get rid of bad samples if carrying 3 impossible
- Should move to MOLECULES if not already there when samples diagnosed
- Should get first needed molecule
- Should get next needed molecule
- Should get next needed molecule when limited supply
- Should get what's needed for second sample
- Should abandon impossible sample
- Should prioritize taking molecules needed by both me and opponent
- Should pay attention to molecules opponent has when prioritizing molecules
- Should pay attention to expertise opponent has when prioritizing molecules
- Should not take more than total of 10 molecules when blocking opponent
- Don't try to block when only one molecule left of a given type
- Block first, take what you need later
- Should prioritize taking scarce molecules first
- Assume opponent will relinquish all molecules for completed sample when at lab
- Should take opponent's expertise into account when predicting molecules relinquished
- Should handle multiple opponent completions when predicting molecules relinquished
- Should WAIT for needed molecules from opponent at LABORATORY
- Should move to LABORATORY when no more needed molecules available
- Should account for expertise when deciding to move to LABORATORY
- Should put the completed sample in the LABORATORY
- Should put prerequisite sample into lab first
- Should move to DIAGNOSIS for attractive cloud sample
- Should move to DIAGNOSIS for multiple attractive cloud samples
- Should move to DIAGNOSIS from LABORATORY for multiple attractive cloud samples
- Should not move to DIAGNOSIS for unattractive cloud samples
- Should not print CONNECT 0 from https://www.codingame.com/replay/220695869
- Should put correct sample from https://www.codingame.com/replay/220707762
47th (legend), 2nd scala
This was my favourite contest (again), I really liked the similarity with board games. The change of rules in Silver was a great addition!
The first weekend I rushed to implement basic structures and get to Bronze for the full rules.
My code kept the same basic structure the whole contest: depending on high-level features of the game, I decide to move or to act at a module. My “state” object exposed several high-level methods like completed samples, cost to complete a sample with my next expertise, molecules missing by me / the opponent…
I tried to write some unit tests for these methods but was very lazy by the end.
Monday I implemented a basic offline Arena to pit my code against its former version. Thanks to the amazing work of @TrueLaurel (scala project on Github) it was much easier. We will collaborate to improve this project to provide a good quickstart for scala enthusiasts !
This Arena helped me tune my parameters to select the right rank. When new Silver rules were released I could quickly adapt and peaked for a short time at 10th rank.
During the week I had little time (family, work) and made minor tweaks, like prioritizing molecules both of us can use, moving to a module when I can act at least twice in it, and using the cloud when appropriate… I kept around 100th place overall.
The last weekend I improved my endgame (last ~10 turns) so I don’t waste any turns producing before the game ends, and fixed several bugs (seen with CG Spunk’s new feature about module bouncing, cool!), which got me to Legend.
My last commit relates to keeping track of all undiscovered samples and checking how likely I can complete them with current molecules / expertises.
On Sunday night I tried to exploit the cheese (submitting samples without analyzing them), but the results were not so good; to make it work I had to change too much of my logic (like keeping molecules to reach 4 with my expertise) and it introduced too many behaviour bugs. Nevertheless, it was a really fun try!
Btw, I was head to head with @Eeval the last weekend for 1st place in Scala. @Eeval, it seems we both live in/very near Lille; we should exchange tips like we did with TrueLaurel!
New stuff I used this time :
What I want to improve :
This was the first contest where I could dedicate a good amount of time and make it to the Legend league.
My code was almost purely rule-based, depending on the station I was at.
What got me over the line to Legend was the algorithm to block my opponent: block if fewer than 4 resources of a type are needed to deny the most valuable sample of my opponent.
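That blocking rule could be sketched as follows. To deny a sample the opponent needs `need` molecules of some type, you take just enough so that fewer than `need` remain on the table; the rule triggers only when that costs fewer than 4 pickups. Names and the cost-dict representation are my own illustration:

```python
def blocking_move(opp_sample_cost, table):
    """Return (molecule_type, count_to_take) if blocking is cheap, else None.

    `opp_sample_cost` is the opponent's most valuable sample's cost
    (type -> count); `table` maps type -> molecules available.
    """
    for t, need in opp_sample_cost.items():
        remaining = table.get(t, 0)
        to_take = remaining - need + 1  # leave fewer than `need` on the table
        if 0 < to_take < 4:
            return t, to_take
    return None
```

A non-positive `to_take` means the sample is already uncompletable from the table, so no pickup is spent on it.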
What would have improved the code further
I didn’t write any tests and developed directly in the browser, which to my surprise worked out well. However, in other competitions it was much more complicated to see why my bot was behaving in a certain way, and troubleshooting on CodinGame without a testing strategy is very difficult. Next time I will try to use my own IDE and set my machine up properly.
You can use:
It allows you to run a lot of games against your own bots. But be aware: local != arena, so you can do better in local testing and it still won’t improve your arena ranking much.
I would like to thank Roche for this challenge, and I hope that they get good candidates from the players pool. And I also hope that more and more companies follow this path for recruiting.
About the challenge itself, it has some good and bad things:
My main goal in the challenge was just trying to get 1st in any language, for the achievements. I first tried Freepascal, but CG’s version is totally outdated, terrible to code with. So I chose Dart in the end.
My strategy is based on a joke about “viable” strategies: https://forum.codingame.com/t/code4life-bugs/2804/86 In theory, going all-in with undiagnosed samples should be a “strategy” only if you were losing, but I managed to finish 42nd in Legend with it as my main strat. My strategy is as follows:
My bot lacked a lot of things, like taking samples from the cloud, and had terrible endgame management. Even with that terrible lack of features, I managed to hold on to a top-50 place:
This is an example of my strategy working: Marchete vs Monstruo Carnal! Replay Starting from frame 240 :).
It’s viable but far from ideal, and very RNG-dependent. But it was fun to do; nobody expected the Spanish Inquisition! I like alternative strategies, and in this case it was somewhat viable. I’m not sure if many other players managed to get it working.
local != arena, because I am competing against myself, right? So all the parameters are the same as in the arena, but if I get better against myself it doesn’t necessarily mean I’m improving against a different player with a different strategy.
Great challenge. I began on Monday morning and passed into the Silver league when it opened, with heuristics:
To enter League Gold :
Top Gold :
I was 40th in Gold just before the Legend league opened, but only 20 players were promoted. After that it was very difficult to climb back to the top of Gold.
My last code in Gold ranked between top 5 and top 100 on each submit. Too bad: I was top 3 in Gold on the last Monday morning, 0.7 points under the Boss, but I tried to resubmit. I know I shouldn’t do that!!! Final rank: 189th (but 1st in my language).
Very fun time, as always. Thanks CodinGame!
22nd rank / Go - Congrats to the winners, and especially Agade. Thanks again for keeping your “best” bot in to enable others to get better.
As usual, family and work life prevented me having the amount of time I’d like to put into the contest, but I was able to put in a little time throughout the contest which was a good change.
I liked this contest because it didn’t require large sets of knowledge around AI techniques that I don’t have at my fingertips (or time to make them so) to do reasonably well. Thoughtful problem analysis generated reasonable returns on time investment.
From previous contests, I found that I usually ended up with a game state “class” and component “classes”. I create input-processing and state-init methods that wrap the common, unchanging boilerplate code. I chose to keep the code simple and isolated to allow for quicker iterations. I also decided to try not to overthink things.
The bot has two layered functions. I didn’t do anything with forward-looking searches or probability-based sample guessing: just a simple game-state eval function that made choices for all possible modules all the time. The first function returned a list of samples to complete (in order), a molecule to grab, a list of stored samples that could be built, and a list of stored samples that were buildable. These were rebuilt every turn. The only “search”-like piece was a permutation expansion of the samples to determine a good build/acquire order.
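Since the number of carried samples is at most 3, that permutation expansion is cheap. A minimal sketch, where `score_order` stands in for whatever order-sensitive evaluation the bot applies (a hypothetical placeholder, not the author’s actual function):

```python
from itertools import permutations

def best_order(samples, score_order):
    """Try every hand-in order of the carried samples, keep the best one."""
    return max(permutations(samples), key=score_order)
```

With 3 samples this enumerates only 6 orders, so it can be rebuilt every turn as the post describes.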
The second function took those “flags” and the bot’s location to generate the output command. While not coded as an FSM, it functions as one. The basic flow seems to match most bots’ styles. It has very little optimization for score differential and time remaining.
There are only two places where I consider the other bot. When picking molecules, the system will attempt to stall opponent samples that don’t get too much in the way of mine. This is not great, and other bots did much better, but it was enough to beat the boss in Gold and hold my own; this is where I would spend more time improving. The other place is when choosing to complete samples: I will wait if it appears that the other bot is waiting for my completion. This is also the only data I carried across turns.
An example of not overthinking to reduce complexity is my bot’s algorithm for choosing which sample ranks to pull. It is a single static list, tweaked periodically to address opponents’ choices. I’m sure it could be replaced with something more elegant, but choose your battles.
Things I’d spend more time on: Molecule blocking strategies, end game optimizations, better waiting strategies, better sample picking strategy (I’d love to try RoboStac’s only take two to start strat).
Interesting quirks I noticed: ties are strange. I’m not sure how the ranking eval functions handle them. There was a period on Saturday where I was 2nd but would lose to many other bots, yet would tie Agade about 10% of the time (by Sunday that had dropped a lot). We would get stuck watching each other at two stations, then each complete 3 rank-1, health-1 samples and tie 3-3 or 5-5. It may not be anything, but I thought it was interesting.
This is the first contest where I actually had the “tools” set up and ready to use (CodinGame Sync and CG Spunk). In the last few contests I found that not getting large numbers of runs in was hampering advancement, so I spent time getting used to those two tools and figuring out how to use them. This seems to have helped as well.
With regard to bot hiding and bot “decay”: while some bots showed up really late from nowhere, it appears it was not as bad as in previous contests. I’d be interested in hearing what others thought.
List of samples (silly but …):
1, 1, 1,
1, 1, 1,
1, 2, 2,
2, 2, 2,
2, 2, 3,
2, 3, 3,
2, 3, 3,
3, 3, 3,
3, 3, 3,
3, 3, 3
Many of us agree with that, and we asked long ago for the formula to be changed. But CodinGame always responded “we will discuss it”. We never got any other response.
There are some problems with local arenas, yes. For example, I use brutaltester, but I also have a local ELO arena evolving my own code: I tag some variables, and the arena performs a genetic algorithm to evolve those variables to be the best. Using the ELO arena was a colossal failure in this contest, because my AI has some weaknesses, and the ELO arena just exploited those weaknesses. The result of my evolution was destroying my current AI (with an 80% winrate!), but it couldn’t do better than 50th rank in the Legend league.
So you can use local arenas, just be careful and understand what you are doing.
I am grateful to codingame.com for introducing such an interesting game. I really enjoyed it and dedicated a week to it. I basically implemented heuristics with an FSM.
The most difficult and important parts, I thought, were which sample rank to get and which molecule type to pick up, for myself or to block the opponent.
I estimated the expected average cost based on my expertise and a magic number. My reasoning and some experiments suggested that about 4–4.5 average cost per sample would suit me and my heuristics. I checked the sample pools, removed the ones already drawn, then calculated the average sample cost of each rank and found the best fit for my budget. I also tried checking against my current storage to find a more suitable cost, but it didn’t work well, since it would often return a lower rank than I expected.
First, I just wrote some simple logic to collect first the molecules both robots require. It seemed to work fine up to a certain level, but I couldn’t go further with it, and I didn’t have enough time to change my algorithm to something like minimax.
I finally came up with another piece of logic that calculates the very last safe turn for collecting molecules before the opponent does, and vice versa. That algorithm depends heavily on where the opponent is. The basic formula is as below.
very safe turn = molecule available - sample cost.
ex ) available : A4B4, cost A3B2 : distance = 4-3 + 4-2 = 3
This means I can collect all of my required molecules if I have 3 more turns to the opponent.
What if the opponent is already in the molecule module or on the way? The available resources should be divided by 2 in that case.
very safe turn = molecule available / 2 - sample cost.
ex) available : A4B4, cost A3B2 : distance = 4/2-3 + 4/2-2 = -1, impossible to collect them safely.
It is also possible to calculate this even if the opponent is on the way to the molecules (the remainder of the division by 2 is rounded, due to the concurrency rules).
very safe turn = eta + (molecule available - eta) / 2 - sample cost
ex1) available : A4B4, cost A3B2, eta : 1 : distance = 1 + (4-1)/2-3 + 4/2-2 = 0. It’s still collectable if you choose A first.
But be careful to collect the correct one while the opponent is coming.
ex2) available : A4B4, cost A3B2, eta : 2 : distance = 2 + (4-1)/2-3 + (4-1)/2-2 = 1.
ex3) available : A4B4, cost A3B2, eta : 2 : distance = 2 + (4-2)/2-3 + 4/2-2 = 0; be careful not to take A twice, which would make you lose one turn.
Finally, you can calculate all of the safe turns, yours and the opponent’s, and sort them so that the lowest (greater than or equal to 0) is the most urgent, whether it’s yours or not.
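The two simple cases of the formula above could be sketched like this (the eta variant, with its rounding subtleties, is left out; the dict representation and function name are my own). A positive margin means the needed molecules can still be gathered safely:

```python
def safe_turns(available, cost, opponent_at_molecules):
    """Sum of (available - needed) per molecule type.

    When the opponent is at the molecules module, assume they take every
    other molecule, so only half of each type is effectively available.
    """
    margin = 0
    for t, need in cost.items():
        have = available.get(t, 0)
        if opponent_at_molecules:
            have //= 2
        margin += have - need
    return margin
```

This reproduces the worked examples: A4B4 against cost A3B2 gives 3 with the opponent far away and -1 with the opponent already at the module.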
Unfortunately, I submitted this logic at the very last minute, so I didn’t have time to take a look; I couldn’t even verify whether it works correctly. I was very disappointed that I didn’t get any chance to confirm my theory was correct or worthwhile. Please tell me the result if someone else tried this approach.
Anyway, a very awesome contest and I really enjoyed it. I hope to see C4L as a multiplayer game again in the future.