The topic where you can tell how you liked the challenge, & how you solved it.
Leaderboard: https://www.codingame.com/leaderboards/challenge/unleash-the-geek-amadeus/global
BlitzProg, 46th gold - 140 overall (2nd PHP)
I liked that contest! Unlike many others, a strategy focused on making the other player lose (instead of winning fair and square) was viable and powerful. There were so many ways to fool your opponent, and keeping a bot from losing robots to deceptive tactics ended up being quite complicated.
I had to rewrite my AI many times:
My ending strategy was:
I had a lot of fun, thanks =)
C#, and I should end up around 90th after being promoted 10 minutes before the deadline.
I fixed a few lines of code on my phone and it worked out.
Had a great time streaming the contest, even though I saw a lot of bugs the next day when I reviewed my stream (the price of coding 8 hours before the stream).
My bot is nothing special, just a bunch of behaviours:
I also created a GA bot doing full moves, but I was struggling to get it to select good moves, so I threw it away on the last Sunday and went back to writing IFs…
On the game:
Thx for a great contest, I had a fun and exhausting time!
Highest rank was 155 overall, finished 473rd for an unknown reason. Very triggering to lose 300+ places out of the blue, but there's nothing I can do now.
This is by far the largest amount of code I've written for anything on CG. Just over 1.6k lines with 53k+ chars.
Despite the code size, my strategy was simple:
After that, I check if I can kamikaze any enemy bots. If it's worth it, I pull the trigger.
I also try to avoid sending more than one bot into an area where something is likely to explode. That was fun to program.
This was quite the game. I personally love games like this, it's just sad that my placement got decimated like that.
EDIT: I had a different version in my IDE, that's why it was so bad.
I will finish around rank 20
This is one of the contests where I'm not sure if it's worth writing a post-mortem for my bot.
I mostly use heuristics to make my decisions, as I didn't see the big picture on how to abstract the game and implement a decent search.
I generate a list of possible commands first. These are placing radars, harvesting and kamikaze attacks. Placing traps was part of it too, but I removed that part in legend. My bot is a pacifist.
I don't have much to say about my radar placement; I probably lose some battles because of it.
The first radar is at (9,6) or (9,8) if there is no robot starting at y=6.
Then I try to cover large regions with my following radars, rewarding placements on known ore.
For harvesting I obviously prefer cells close to the base. I count the number of turns needed to reach the ore and bring it back to headquarters. A cell with more than 1 ore gets translated into multiple actions.
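A minimal sketch of such a turn estimate, assuming 4-cell Manhattan moves per turn, digging at distance <= 1, and delivery at the x = 0 column (the helper below is illustrative, not the actual bot):

```python
import math

MOVE_RANGE = 4  # assumption: a robot moves up to 4 cells (Manhattan) per turn

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def harvest_turns(robot, ore_cell):
    """Estimate turns to walk next to an ore cell, dig it, and carry it to x = 0."""
    approach = max(0, manhattan(robot, ore_cell) - 1)   # dig works at distance <= 1
    go = math.ceil(approach / MOVE_RANGE) + 1           # +1 turn for the DIG itself
    back = math.ceil(ore_cell[0] / MOVE_RANGE)          # reach the headquarters column
    return go + back

# Example: a robot at (0, 7) going for ore at (10, 4) -> roughly a 7-turn round trip.
print(harvest_turns((0, 7), (10, 4)))
```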
Kamikaze actions happen when I expect a good trade (like 2 vs 1), or an even trade (1 vs 1) while I'm in the lead. I also try that even trade when I'm not sure about the trap and I haven't lost a robot yet, as that's a relatively safe way to check whether the trap is real or fake.
I assign a score to each pair of robot and action. Then I reduce the number of actions by removing those with a low score for everyone. For the remaining actions I brute-force all possible assignments. I give an additional score to the overall plan (e.g. a malus for having 2 robots next to each other, making them vulnerable to traps), which is why I didn't use the more efficient Hungarian method.
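For illustration, a minimal sketch of such a brute-force assignment; pair_score and plan_malus are hypothetical placeholders, not the bot's actual scoring:

```python
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_assignment(robots, actions, pair_score, plan_malus):
    """Try every robot -> action assignment and keep the best-scoring overall plan."""
    best, best_plan = float("-inf"), None
    for perm in permutations(actions, len(robots)):
        plan = list(zip(robots, perm))
        score = sum(pair_score(r, a) for r, a in plan) - plan_malus(plan)
        if score > best:
            best, best_plan = score, plan
    return best_plan

# Toy data: 3 robots, 4 candidate dig targets; the malus punishes two targets
# that end up adjacent (and therefore vulnerable to a single trap).
robots = [(0, 3), (0, 7), (0, 11)]
actions = [(5, 3), (6, 4), (9, 8), (12, 11)]
pair_score = lambda r, a: -manhattan(r, a)
plan_malus = lambda plan: 10 * sum(
    1 for i in range(len(plan)) for j in range(i + 1, len(plan))
    if manhattan(plan[i][1], plan[j][1]) <= 1)
print(best_assignment(robots, actions, pair_score, plan_malus))
```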
When I plan to harvest a cell with more than 1 ore, I wait a turn so the opponent fears a trap and hopefully won't touch the cell, allowing me to come back later.
With my final destination cells in mind I can modify the movement paths in order to penalize 2 robots being in range of the same trap.
I placed 67th in Legend League. I got to be 6th for a short while early in the contest before you all came in! Very keen to hear what the higher levels were doing!
This was my strategy:
Haashi, ended up 230th overall.
It was my first ranked contest (my very first contest was thales-2018) and I didn't have much time after Wednesday. On Wednesday my bot ranked 80th overall, and I did not change anything afterwards. I was pretty happy staying competitive without changing anything.
My strategy used really, really simple heuristics:
* For every robot, score every possible action and do the one with the best score: dig, request trap, request radar, move to ore, wait at base.
* All radar placements were hardcoded.
* If one of my robots was near one of my traps, I checked the number of robots that would be destroyed; if more enemy robots than mine would be destroyed, I'd dig (see the sketch after this list). (I once killed my last robot lol)
* To check for enemy traps: if an enemy robot stays at base, I assume it has a trap until it waits again (very, very easy to trick).
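For illustration, a minimal sketch of that trade check, assuming a blast covers the trap cell plus its orthogonal neighbours and ignoring chain reactions (which the real referee does handle):

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def worth_triggering(trap, my_robots, enemy_robots):
    """Dig my own trap only if it would destroy more enemy robots than mine.

    Assumes the blast covers the trap cell and its orthogonal neighbours and
    ignores chain reactions with other traps.
    """
    mine = sum(1 for r in my_robots if manhattan(r, trap) <= 1)
    theirs = sum(1 for r in enemy_robots if manhattan(r, trap) <= 1)
    return theirs > mine

# One of my robots sits next to my trap and two enemies walked into the blast.
print(worth_triggering((6, 5), [(6, 4)], [(6, 6), (7, 5)]))  # True
```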
Thanks to everyone playing and discussing the game in the chats; it helps a lot to read them to get some ideas.
Ruby -> Kotlin, #72 and still falling in Legend.
I planned to make a beat-the-wood bot, so I started with a simple and dumb solution in Ruby. After a few days (and sneaking into the gold league) it had become an abomination of endless ifs, comments and stuff like if false and ..., so I decided to rewrite it in some other language - Kotlin. The new code was still a mess, but at least I fixed lots of bugs in the process.
My final bot was doing the following:
Other things:
* The first version of unsafe-spot prediction was a serious score boost and was dead simple - do not touch cells with holes.
* I tried different bomb placement patterns until I gave up and settled on a single-bomb solution.
* That single bomb was planted at the start of the match to counter guys who tried to steal my radars.
* The bomb wall tactic really worked for a few days, until people started countering it.
* I somehow decided to use Euclidean distance instead of Manhattan. I did not realise I was wrong until I looked at the referee code. Changing it was surprisingly successful and gave me a +100 places change (see the sketch below).
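A tiny illustration of why the metric matters, assuming 4-cell moves per turn:

```python
import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

a, b = (0, 0), (3, 2)
# Euclidean distance (~3.6) says the cell is reachable in one 4-cell move,
# but the game measures Manhattan distance (5), so it actually takes two turns.
print(math.ceil(euclidean(a, b) / 4), math.ceil(manhattan(a, b) / 4))  # 1 2
```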
Anyway, that was a really fun contest.
Thanks!
I finished 6th. At first I wasn't sure about the contest, but ended up really enjoying it. My initial thought was that bombs would be entirely pointless and it'd just be about fast farming, but I hadn't considered protecting ore until later, which made a massive difference. I never placed a single bomb as I was initially convinced they'd always be a waste of time (I now think this is wrong, but didn't have time to add bomb support).
Before ever submitting a version I started making a sim, using replays to validate it - other than the randomness in long movements this worked well to prove there weren't any issues. I used replays of the top bots at the time to develop / check bomb prediction. For both players I'd generate a list of possible bomb positions and which robots could be carrying bombs, and then check that against reality. This worked really well and made it easy to quickly advance up to the top of gold, as I basically never died to a 1v2 and would never mine on an enemy trap. It worked by tracking changes to holes / ore and enemies waiting, and matching them up. I also tracked radar timers, and if the enemy only waited once per possible radar I assumed they weren't bombing at all and cleared everything.
I never managed to make a good version that used the simulation - every version submitted was heuristic based. Every time I decided on a final move for a bot I'd work out where it would be at the end of the turn and then mask off any positions that could be exploded by an enemy (which could chain from any square within 5 of any enemy) to avoid multiple losses.
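For illustration, a minimal sketch of such a danger mask, assuming enemies can move 4 cells and dig at distance 1 (hence trigger anything within Manhattan distance 5), with blasts covering the trap cell plus its orthogonal neighbours and chaining through traps at distance <= 1; the data layout is invented:

```python
from collections import deque

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def unsafe_cells(enemy_robots, suspected_traps):
    """Cells I should not end my turn on because an enemy could detonate them."""
    triggerable = deque(t for t in suspected_traps
                        if any(manhattan(t, e) <= 5 for e in enemy_robots))
    exploding, seen = set(), set(triggerable)
    while triggerable:
        t = triggerable.popleft()
        exploding.add(t)
        for other in suspected_traps:           # chain reaction between traps
            if other not in seen and manhattan(other, t) <= 1:
                seen.add(other)
                triggerable.append(other)
    danger = set()
    for x, y in exploding:                      # blast area of each exploding trap
        danger.update([(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return danger

# An enemy at (4, 4) can reach the trap at (8, 4), which chains into (8, 5).
print(sorted(unsafe_cells([(4, 4)], [(8, 4), (8, 5), (20, 20)])))
```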
In the end I had a few rules for moves:
Once all those rules had been checked, the remaining bots would do a BFS to calculate the time taken to collect ore and return it (or return ore and collect the next if already carrying). The one with the shortest time would be chosen, and then the remaining bots would restart the search (as ore would be taken by the previous bot / moves would be masked due to explosion possibilities).
If I collected ore with a bot that had waited at base (or requested a radar) then I'd mark that square 'protected' and not collect from it until the last 75 turns.
I also tracked the maximum id seen when placing radars and worked out if it was possible to get to that id if the enemy had only asked for radars (as both bombs / radars increment the id). If there was a good enough chance they hadn't asked for bombs I'd ignore their bombs to try and steal their protected ore. This worked really well for a while, but there are a couple of people who only got a few bombs and didn't radar spam, which messed it up.
25th place. First of all I want to say that I enjoyed this contest very much. The game was surprisingly deep and contained many different aspects (e.g. scouting, enemy tracking, pathing, …) to master. A few cons follow, though. The statement lacked clarification on what happens when multiple robots try to dig one cell with ore. And on the 7th day my 2.2k lines of code hit the code size limit and I had to shorten my variable names, which was uncool.
The first part of my algorithm detects the enemy's inventory and possible enemy trap locations based on new holes, ore differences and score differences (I even lower the ore estimate in invisible cells if an enemy robot visited only that cell among its possible ore cells and then got +score). Changes in this part affected my rating the most.
Then I calculate the field map 15 moves ahead and mark every cell with the maximum number of neighbors (my robots) it can have at any point in time, to set up the Monte-Carlo for my robots. For example, neighbors of the enemy's item holder may have 1 neighbor max, potential enemy traps have 1 neighbor max, and in some situations, like a 1v1 with a lower score, I set a limit of 0 on some cells so my robot dodges the kamikaze guy. The Monte-Carlo was applied to tasks, not to moves. Example of a task: move to (14, 10) in 4 moves. I wrote a big pile of code to predict with high probability when all tasks can be realized under the restrictions, without needing to simulate robot coordinates mid-task. My calculations were so accurate that in half of my games the predicted minimal number of moves under restrictions was 100% accurate for the whole game. Priorities were hidden in task generation; for example, if I carry ore, I check the least number of moves to base cells, pick all base cells with that minimal number of moves, and select one of them with higher probability the closer its y coordinate is to the robot's. The best tasks are selected by an estimation function with decay over moves.
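For illustration, a minimal sketch of checking a candidate set of my robot positions against such a neighbour-limit map; the distance-1 neighbourhood and the limit values are assumptions drawn from the description above, not the actual implementation:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def respects_limits(my_positions, neighbor_limit):
    """Check a candidate plan against a per-cell cap on nearby robots of mine.

    neighbor_limit maps a cell to the maximum number of my robots allowed at
    Manhattan distance <= 1 from it (e.g. 1 around a potential enemy trap,
    0 around a kamikaze threat); cells not in the map are unrestricted.
    """
    for cell, limit in neighbor_limit.items():
        close = sum(1 for p in my_positions if manhattan(p, cell) <= 1)
        if close > limit:
            return False
    return True

# A potential enemy trap at (8, 4) tolerates at most one of my robots nearby,
# so this candidate plan with two adjacent robots gets rejected.
limits = {(8, 4): 1}
print(respects_limits([(8, 3), (8, 5), (2, 10)], limits))  # False
```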
In the last part I calculate the best commands to realize the chosen tasks under the restrictions; 95% of the time the default realization works, and the rest of the time I Monte-Carlo over commands to find the ones that best fit the tasks. It is possible to kamikaze me 1v2 almost only via chaining; it is almost impossible without chaining.
In one sentence: I detect enemies, Monte-Carlo at the task level 15 moves ahead, then realize the found tasks as commands.
Regrets:
I'll say it again: thank you for a great game!
Gold #263
My eval in my search looks like this:
score += 1000000 * oresCollectedByMe;
score += 10000 * myRadarsCnt;
For each enemy robot I score the following:
if (enemyRobotDied) score += 9000000;
For each of my robots I score the following:
if (myRobotDied) score -= 10000000;
else if (I just picked up an ore) score += oreHoldingScore;
else if (already holding an ore) score -= manhattan(0, orePickedUpLocation.y);
else if (I just picked up a radar) score += 1000 - manhattanToRadarTarget;
else if (already holding a radar) score -= manhattanToRadarTarget;
else {
    if (I'm the closest robot to HQ and none of my robots are holding a radar and there are no ores to collect) {
        score -= myRobot.x;
    }
    else if (turn < 2) score -= 10 * abs(myRobot.x - 8);
    else if (I have a safe closest ore location) score -= 5 * manhattan_distance;
    else if (I'm digging an unexplored cell) score += 1;
    else score -= manhattan distance to closest unexplored cell;
}
Yet another addictive contest. I enjoyed it as I tried various strategies (capturing the center, various fakes, detecting fakes, etc.). It's just that search was overpowered by decision-tree bots.
Very sad that CG is stopping community contests, but thanks to CG for promising to organize 2 contests a year. Please bring us at least 4 CG contests a year. We would love that.
Ended 158th, not bad for my first contest. My bot does the following:
Enemy analysis:
Any enemy robot that pauses 1 turn in base becomes a suspect.
Based on new holes or reduced ore, with some checks against enemy robot positions and my own actions, I mark dangerous tiles.
Radar Point:
A precalculated weighted score graph for radar placement: every cell in range 4 adds points, and any new hole within range deducts from the score. The closest point scoring at least 0.9 * the high score becomes my next radar point (a rough sketch of this scoring follows after this list).
Trap Point:
Building a great wall at x=1, using the available point closest to my robot.
Fake Point (for robot faking):
Helping build the great wall at x=1, using the available point closest to my robot.
Dig Point:
The closest available dig tile, after checking enemy robot positions and their dangerous points.
At game start, the robot with Y closest to the middle requests a radar, followed by a trap.
All robots pause at turn 1 and turn 2 to help fake the trap wall.
Kamikaze move when 2 enemy robots are predicted to come and one of mine is already there.
If I have more robots than the enemy, stop all kamikazes, fakes and traps.
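For illustration, a rough sketch of the radar scoring described above, assuming the 30x15 map and range-4 radars of this game; the per-cell weights and the hole penalty of 1 are made up:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

WIDTH, HEIGHT, RADAR_RANGE = 30, 15, 4

def radar_score(candidate, holes, weight):
    """Sum the weights of cells a radar here would cover, minus 1 per hole in range."""
    score = 0
    for x in range(WIDTH):
        for y in range(HEIGHT):
            if manhattan(candidate, (x, y)) <= RADAR_RANGE:
                score += weight.get((x, y), 1)
                if (x, y) in holes:
                    score -= 1
    return score

def next_radar_point(candidates, holes, weight, my_robot):
    scores = {c: radar_score(c, holes, weight) for c in candidates}
    best = max(scores.values())
    # Among candidates scoring at least 90% of the best, take the closest one.
    good = [c for c in candidates if scores[c] >= 0.9 * best]
    return min(good, key=lambda c: manhattan(c, my_robot))

print(next_radar_point([(9, 7), (14, 7), (19, 7)], {(14, 6)}, {}, (3, 7)))
```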
To do:
Need to analyze enemy robot patterns more.
Overall, thanks for a great game! I enjoyed it very much!
k4ng0u, 18th overall (1st Typescript)
As I participated in the Amadeus internal challenge I had quite a head start understanding the gameplay and strategies. But you guys caught up quite quickly since by Saturday night the top players were already using everything that was used during the internal contest!
The key concepts for me were:
At first everyone would trap and kamikaze as often as possible. Then people would focus on avoiding traps and not spend time putting traps themselves.
Personally here was my strategy:
With this, I was in legend as a pacifist bot that would only explode a few enemy traps in my quest for Amadeusium.
In legend a new strategy appeared: making every bot suspicious by having them wait 1 turn at the base. This slows down the enemy, who has to avoid all the suspicious cells and find his ore further away. Then at the end of the game, it's harvest time, for incredible comeback effects or just winning harder.
I went for this and it pushed me into the top 10.
However, a counter strategy came out: some people were systematically mining near my bots to determine whether a suspicious cell was really suspicious. And probably, if this worked X times, they would just mine without caring whether the cells were suspicious or not.
To face this, on Sunday evening I had to add a few traps to my algorithm, just for the dissuasion effect.
In the end I finished in 18th place, which is my best result so far. Paradoxically I am a little disappointed since I was in the top 10 most of the week. But well, I guess not everybody kept their best bot in the arena.
The fact that multiple strategies were countering one another was interesting and made it challenging to perform well against every opponent type.
On the downside, I would point to the viewer, which could have included the possibility to show the radar zones of both players.
Thanks codingame and Amadeus for this challenge. It was very entertaining!
I'm 132nd. I didn't have much time for this contest, so I tried some search algorithms:
I suppose I could have done better with simple heuristic code, but that was not fun.
On the contest itself:
In fact, I think the contest could have been better with a different trap mechanic and with a 1v1v1v1 mode (4 players).
I finished 30th in legend; it was the first time I reached legend, so that was great for me.
My strategy was to estimate the 5 best 'goals' for each robot (for example, dig a particular cell and bring the ore back, or take a radar and place it), then I optimized the combination of moves (by brute force, with 5**5 = 3125 possibilities) to avoid having two robots that can be killed by a kamikaze and to avoid two robots having the same goal. I tried to implement a 'yolo' strategy where I assume that there are no traps placed if I'm losing too much, to try to get back into the game, but it never really worked.
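For illustration, a minimal sketch of such a 5**5 brute force; the malus values and the distance-2 "kamikaze vulnerability" proxy are made up, not the actual bot's rules:

```python
from itertools import product

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def pick_goals(goals_per_robot, base_score, same_goal_malus=50, cluster_malus=30):
    """Brute-force all combinations of per-robot goals (5**5 = 3125 for 5 robots)."""
    best, best_combo = float("-inf"), None
    for combo in product(*goals_per_robot):
        score = sum(base_score(i, g) for i, g in enumerate(combo))
        for i in range(len(combo)):
            for j in range(i + 1, len(combo)):
                if combo[i] == combo[j]:
                    score -= same_goal_malus          # two robots, same goal
                elif manhattan(combo[i], combo[j]) <= 2:
                    score -= cluster_malus            # both could die to one kamikaze
        if score > best:
            best, best_combo = score, combo
    return best_combo

# Tiny example: two robots, three candidate dig targets each.
goals = [[(5, 3), (6, 4), (9, 8)], [(5, 3), (7, 9), (12, 11)]]
print(pick_goals(goals, lambda i, g: -g[0]))
```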
In addition, I also tried to obtain as much information as possible from the enemy's moves.
Overall, I found the challenge great, being simple and deep. I found it particularly interesting to discover the environment through the opponent's moves. However, there was a meta effect that was a bit too strong: the optimists beat the cautious, who beat the aggressive, who beat the optimists. As a result, after a certain point, it was difficult to know if a change would improve your bot.
I also think that the trap and radar mechanisms were unbalanced. If you lose one more robot than your opponent in the first three quarters of the game, you have basically just lost, so you cannot really adapt to the opponent's strategy. Similarly, I believe the radars were too powerful and just forced everybody to use them all the time. Maybe having 10 robots each instead of 5, and radars with 1 less range, would have allowed more varied strategies.
Also it would be great to have a special display for battles where your bot timed out.
I came in second, which is becoming a habit. Thank you very much to the creators, sponsors and organisers of this competition. As ever on CG, it was very smoothly run. Congratulations to @karliso on a great win. Going into the final hour, I knew I was winning 70-80% against all the top published bots; but karliso's last minute entry blew me away.
I'm going to spend most of this post talking about the meta around traps, the most interesting/difficult/important part of the game, and how my bot fitted with that. To do that, I'm going to go through strategies around traps, working from the simplest strategy through the various counters and counter-counters.
There's not much to be said here. The first iteration of my bot, and probably most bots, placed radar, mined ore, and never thought about traps. This quickly becomes untenable.
If you place random traps, opponents who ignore them will blow up their robots. This is a devastating thing to happen. You lose 20% of your mining power for the whole game - this is usually decisive.
I went through a phase of doing this. I found it effective for reaching the top ten (very early on), but you can avoid digging up traps by detecting them. There are at least two ways to achieve this:
I used these methods of trap detection. This allowed me to perfectly avoid traps, when I wanted to.
Here, you dig up a place you know (or suspect) is a trap, to take out two of your opponent's robots for one of yours. When I did traps, I did this.
This is avoided simply by never having two of your robots next to an enemy trap, or chain of traps. However, by the end chains were so rare that I only considered single traps, to improve pathfinding.
This is where things get interesting. Around Thursday of last week, a new meta emerged. @The_Duck was the best at this for a long time, though I don't know where it originated. The idea was:
In the high-level meta, most or all bots were using this by the end. I used it whenever I expected to dig a 2-ore square early in the game. I started picking up stashes at a hardcoded turn, or when the nearest ore to HQ got far enough away.
Stashing is so effective that countering it became (I believe) the key to the high-level meta at the end of the competition. I tried multiple methods to detect enemy stashes. Having done this, you can dig them up safely.
I tracked the expected result of the game, and took more of these risks when behind. Also, if there is an enemy robot next to a stash/trap, I dig it. If it's a trap, it's a one-for-one swap.
To pathfind, I guessed which ore each miner would go for. Then they got to select routes, starting with the one closest to their target. These routes were planned so that we'd never end up with two robots next to a trap/stash on this or a future turn. Within that restriction, I prioritised being close to enemy stashes but not my own.
My first few radars are hardcoded, except that I skip the risky second and third radar if the first doesn't turn up enough ore (so that I don't run out). After that, I mostly place them opportunistically as I mine, which is a very effective technique.
There's lots of other code in my bot, but none of it stands out as particularly interesting or important.
This section is a tad salty, so let me preface this by saying that I was beaten by a great bot, and by again expressing my thanks to everyone who had a part in the making of this great contest.
In the final evaluation, karliso and I had very comparable results against lower bots; perhaps mine are slightly stronger; the bulk of the gap between us comes from our head-to-head record. This record was 54-37. That is within the range of normal outcomes from a fair coin toss. For future contests, particularly with significant prizes, I think that there should be more games run in the evaluation; I think there is a 10% probability I would have won the contest in this case (and if the evaluation had been closer, this chance could be much higher!).
Again, karliso is a worthy winner, I have no bitter taste from this contest, and I hope to play many more CG contests in the future.
EDIT: @Neumann pointed out that not all the games in the final evaluation are on CGStats, because there is some maximum number tracked and they get forgotten over time. So the actual sample size is larger than I quoted here, and thus the evaluation is more rigorous than I thought.
24th in legend, very surprising because my bot lacks very basic stuff due to lack of time, for example it can be easily 1v2 kamikazed. Luckily, the better the opponents, the less this happened!
The only 'smart' thing my bot did that has not been mentioned already is that if an opponent starts harvesting a 'stash' (see teccles' post), and there is still ore left on the stash, then my bot realizes that there is no bomb there and will prioritize emptying that enemy stash. Don't know if it helped much though…
It was refreshing to have a 'heuristics only' contest; although not really my cup of tea, it makes programming much more recreational ('ah, let's add another if here and see what happens')… The rules were very simple, and still there was a lot of depth. Kudos to the organizers!
Hello,
I was one of the testers of the game and therefore out of the competition, but I can give you a bit of explanation of my strategy.
The main idea was to create a very fast collector that does not spend time setting traps.
My radar positions were hardcoded, nothing fancy. I decide whether to place a radar or to continue collecting based on the amount of visible 'disputed' ore. I choose my radar carrier to minimize the total travel distance.
I don't try to avoid 'dangerous cells' for radars because I am generally faster than my opponent. If the opponent destroys one of my radars, I restore the information from the previous turn. It does not affect me and I don't waste time setting it back.
A large part of my code analyzes the difference between the actual map and the expected map, correlating it with the movement of opponent robots.
This allows me to resolve some actions but not all. In case of ambiguity, I flag every cell with a new hole near a robot carrying an item as 'dangerous', without removing the 'carry item' flag on the robot.
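For illustration, a minimal sketch of that flagging step; the data layout and names are invented:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def flag_dangerous(prev_holes, curr_holes, enemy_positions, carrying_item):
    """Flag new holes that appeared within digging range of a suspect enemy.

    A new hole within distance <= 1 of an enemy robot flagged as possibly
    carrying an item becomes a 'dangerous' cell. The carry flag is kept, since
    one enemy may poison several cells before we can disambiguate.
    """
    new_holes = curr_holes - prev_holes
    dangerous = set()
    for hole in new_holes:
        for rid, pos in enemy_positions.items():
            if carrying_item.get(rid) and manhattan(pos, hole) <= 1:
                dangerous.add(hole)
    return dangerous

prev = {(3, 3)}
curr = {(3, 3), (8, 4)}
enemies = {0: (8, 5), 1: (20, 2)}
carrying = {0: True, 1: False}
print(flag_dangerous(prev, curr, enemies, carrying))  # {(8, 4)}
```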
When 1 or more opponent robots reach the base, I compare the score increment with the number of returning robots. If it matches, I know all of those robots were returning ore; they are now empty and I remove the 'carry item' flag from them.
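A similarly small sketch of that score-matching check (again, names are invented):

```python
def update_carry_flags(score_gain, arrived_ids, carrying_item):
    """If the enemy score rose exactly by the number of robots that just reached
    the base, all of them were delivering ore and are now empty-handed."""
    if arrived_ids and score_gain == len(arrived_ids):
        for rid in arrived_ids:
            carrying_item[rid] = False
    return carrying_item

carrying = {0: True, 1: True, 2: False}
print(update_carry_flags(2, [0, 1], carrying))  # {0: False, 1: False, 2: False}
```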
This gave me a pretty good vision of my opponent's tactics and I was able to do kamikaze attacks on 'dangerous cells'. Very powerful.
I also estimate whether my opponent is using traps before searching for kamikaze opportunities. I count how many items were requested during the last 5 turns. If it never goes above 1, you should be safe.
My code computes some stats about the map: how many 'dangerous cells', how much 'dangerous ore', how much 'protected ore' (under one of my radars), how much 'disputed ore', how many 'double ore' and how many 'triple ore' cells.
Depending on these stats, I sometimes choose to wait at the base before digging a double or triple ore. The cell is then marked as 'protected'; I don't expect the opponent to touch it, and the remaining ore is collected only at the end of the game.
I didn't create any code to protect against kamikazes, and didn't have any pathfinding to adjust my path.
The code is 860 lines long; I was very surprised to reach the legend league.
In the end, even if I never take a trap, it relies a lot on the fact that I could have set one.
Thank you guys for sharing the strategies. I have used many myself but there is one thing I didn't think of that many of you have mentioned:
Waiting a turn in base to protect your stashes with fake traps.
Ended up in Gold #252 with a 'simple' javascript bot that just had a list of moves ordered by priority:
Each move of course had some preconditions but nothing too sophisticated.
This worked well until Gold League. Moving 5 bots without a BFS algorithm proved quite difficult. Things started to get really complicated when I had to prevent multiple bots from doing the same move, for example, or going too close to each other. At this point adding/changing preconditions and deciding objectives was not the way to go. The final version was around 800 lines of code, mostly clean except for a couple of 30-LOC methods haha.
Anyways it was fun and I wasn't expecting much, just wanted to jump in quickly and spend some time being creative.
The starter AIs were super helpful BTW. Parsing inputs is such a pain in the ass!
What I struggled with the most was discovering whether a new version of the bot was better or not. Waiting till all battles are finished is boring, so I keep coding. By the time the battles end I have added or removed something, changed a parameter… and it is hard to clearly see which change is responsible for what. Did I drop because of X? Or was it Y? Maybe it is the combination of both. But wait, now that I have changed this, maybe the new version works better.
In other words: quick feedback. It is like running the tests while coding and finding out that something is broken 10 minutes later, but you cannot see which test is failing and your code base has changed in the meantime.
I haven't tried executing matches locally. Maybe I should try that and compete against previous versions of my code to get an idea of what works and what doesn't without having to wait that much.
PS: I have been writing some posts about my experience during the contest if anyone is interested https://salvatorelab.com/tag/unleash-the-geek/