Unleash the Geek - Feedback & Strategies

Gold #263

My eval in my search looks like this…

score += 1000000 * oresCollectedByMe;
score += 10000 * myRadarsCnt;

for each enemy robot, score the following:
    if (enemyRobotDied) score += 9000000;

for each of my robots, score the following:
    if (myRobotDied) score -= 10000000;
    else if (I just picked up an ore) score += oreHoldingScore;
    else if (already holding an ore) score -= manhattan(0, orePickedUpLocation.y);
    else if (I just picked up a radar) score += 1000 - manhattanToRadarTarget;
    else if (already holding a radar) score -= manhattanToRadarTarget;
    else {
        if (I'm the closest robot to HQ, none of my robots is holding a radar, and there are no ores to collect) {
            score -= myRobot.x;
        }
        else if (turn < 2) score -= 10 * abs(myRobot.x - 8);
        else if (I have a safe closest ore location) score -= 5 * manhattanToThatOre;
        else if (I'm digging an unexplored cell) score += 1;
        else score -= manhattanToClosestUnexploredCell;
    }

Yet another addictive contest. I enjoyed it, trying various strategies (capturing the center, various fakes, detecting fakes, etc.). It’s just that search was overpowered by decision-tree bots.

Very sad that CG is stopping community contests, but thanks to CG for promising to organize two contests a year. Please bring us at least four CG contests a year; we would love that.

2 Likes

Ended 158th, not bad for my first contest. My bot does the following:

Enemy analysis:
Any enemy robot that pauses one turn in the base becomes a suspect.
Based on new holes or reduced ore, cross-checked against enemy robot positions and my own actions, I mark dangerous tiles.

Radar Point:
A precalculated weighted score map for radar placement: every cell within range 4 adds points, and any new hole within range deducts from the score. The closest point scoring at least HighScore * 0.9 becomes my next radar point.
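
For illustration, such a radar score could be computed as in the sketch below (not the author's actual code; the 30x15 grid and range 4 are the game's, while the hole penalty, the 0.9 threshold applied here and all names are placeholders):

# Sketch of a weighted radar-placement score (illustrative, placeholder names).
WIDTH, HEIGHT, RADAR_RANGE = 30, 15, 4   # map size and Manhattan radar range
HOLE_PENALTY = 3                         # illustrative weight for a known hole

def radar_score(cx, cy, cell_weight, holes):
    """Sum the weights of every cell a radar at (cx, cy) would cover,
    deducting a penalty for each known hole inside that range."""
    score = 0
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if abs(x - cx) + abs(y - cy) <= RADAR_RANGE:
                score += cell_weight[y][x]
                if (x, y) in holes:
                    score -= HOLE_PENALTY
    return score

def next_radar_point(robot_x, robot_y, cell_weight, holes):
    """The closest candidate scoring at least HighScore * 0.9, as described above."""
    scores = {(x, y): radar_score(x, y, cell_weight, holes)
              for y in range(HEIGHT) for x in range(1, WIDTH)}  # skip the HQ column
    best = max(scores.values())
    good = [p for p, s in scores.items() if s >= 0.9 * best]
    return min(good, key=lambda p: abs(p[0] - robot_x) + abs(p[1] - robot_y))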

Trap Point:
Build a “great wall” at x=1, using the available point closest to my robot.

Fake Point (for robot faking):
Help build the great wall at x=1, again using the available point closest to my robot.

Dig Point:
The closest available dig tile, checked against enemy robot positions and their dangerous points.

At game start, the robot with Y closest to the middle requests a radar, followed by a trap.
All robots pause on turns 1 and 2 to help fake the trap wall.

Kamikaze move when a second enemy robot is predicted to arrive next to one that is already there.

If I have more robots than the enemy, stop all kamikazes, fakes and traps.

To do:
Analyze enemy robot patterns more.

Overall, thanks for a great game! I enjoyed it very much!

1 Like

k4ng0u, 18th overall (1st TypeScript)

As I participated in the Amadeus internal challenge I had quite a head start understanding the gameplay and strategies. But you guys caught up quite quickly since by Saturday night the top players were already using everything that was used during the internal contest!
The key concepts for me were:

  • suspicious robot: a robot that might have requested a trap (spent 2 turns at the base) and hasn’t dug since then.
  • suspicious cell: a cell that has been last dug by a suspicious robot. This notion evolved a lot for me. It started as any hole on the map around which the enemy stopped. And by the end of the week it was a bunch of cells that were backtracked depending on robots, suspicious robots, my digs, current/previous map state.
  • trap chains: chains of suspicious cells. In my algorithm I enriched them with the neighboring cells of suspicious robots, as those cells could become suspicious by the time you move.
  • dangerous zones: the neighborhood of the trap chains that are reachable by the enemy (Manhattan distance of 4 or less)
A lot of strategies were used during the contest and evolved over time. Interestingly enough, the evolution was similar in the internal and the public contests.

At first everyone would trap and kamikaze as often as possible. Then people would focus on avoiding traps and not spend time putting traps themselves.

Personally here was my strategy:

  • Decision planning order: kamikaze, request radar, put radar, request trap, put trap, go back to base, mine, go to base for the next radar if no radar is being targeted, go around the next radar spot, dig randomly.
  • Ore pathing: compute the 4 best ore cells for each of the mining bots and bruteforce the best combination (a small sketch follows this list). Initially I used the nbMiningBots best ore cells, but with 5 mining bots (5^5 combinations) my code was occasionally timing out.
  • On a given turn, avoid having more than one robot in the same dangerous zone (avoid multiple kills) => this is done after all actions have been planned, in a way that keeps robots as close as possible to their initial targets.
  • Whenever I am next to a suspicious cell with ore that has more enemies than allies around it, dig in it. (=> Worst case it’s a trap: you lose one unit and the enemy loses one unit. Best case you destroy a radar and get 1 ore. In between, you go away with 1 ore. And in all cases the cell is no longer suspicious.)
  • My radar positions were fixed (+/- 2 cells depending on whether cells are suspicious or not) and not replaced if destroyed, as I rely on my map history and good suspicious cell determination.
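
Here is a rough sketch of that kind of combination bruteforce (not k4ng0u's actual code; the distance-only cost and the "no shared cell" rule are illustrative placeholders):

from itertools import product

def best_assignment(best_cells_per_bot, bot_positions):
    """Try every combination of one candidate cell per bot (4^nbBots here)
    and keep the cheapest one that doesn't send two bots to the same cell."""
    best_combo, best_cost = None, float("inf")
    for combo in product(*best_cells_per_bot):
        if len(set(combo)) < len(combo):        # two bots target the same ore cell
            continue
        cost = sum(abs(bx - cx) + abs(by - cy)  # Manhattan distance to target
                   for (bx, by), (cx, cy) in zip(bot_positions, combo))
        if cost < best_cost:
            best_combo, best_cost = combo, cost
    return best_combo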

With this, I was in legend as a pacifist bot that would only explode a few enemy traps in my quest for Amadeusium.

In legend a new strategy appeared: making every bot suspicious by having it wait one turn at the base. This slows down the enemy, who has to avoid all the suspicious cells and find his ore further away. Then, at the end of the game, it’s harvest time for incredible comeback effects, or just for winning harder.
I went for this and it pushed me into the top 10.
However, a counter-strategy came out: some people were systematically mining near my bots to determine whether a suspicious cell was really suspicious. And presumably, if this worked well X times, they would just mine without caring whether the cells were suspicious or not.
To counter this, on Sunday evening I had to add a few traps to my algorithm, just for the deterrence effect.

In the end I finished at the 18th place which is my best result so far. Paradoxically I am a little disappointed since I was in the top 10 most of the week. But well I guess not everybody kept their best bot in the arena :stuck_out_tongue:
The fact that multiple strategies were countering one another was interesting and made it challenging to perform well against every opponent type.
On the downside, I would point to the viewer, which could have included the option to show the radar zones of both players.

Thanks codingame and Amadeus for this challenge. It was very entertaining!

7 Likes

I’m 132nd. I didn’t have much time for this contest, so I tried some search algorithms:

  • A bruteforce of every possible move for each robot: I bruteforce all moves for my first robot and keep the best, then I bruteforce all moves for the second robot and keep the best, and so on until the last. To evaluate a move, I use an evaluation function, but I also bruteforce all possible enemy traps (DIG a trap or explode one). A rough sketch follows this list.
  • ISMCTS: it’s like an MCTS but you randomize the unknown. It was terrible.
  • Monte Carlo at depth 1: not really good.
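
As a rough sketch, the first approach could be structured like this (the helper functions and the evaluation are placeholders, not the author's actual code):

def plan_turn(my_robots, state, legal_moves, apply_move, evaluate):
    """Sequential bruteforce: fix the best move for robot 1, then for
    robot 2 given robot 1's move, and so on until the last robot."""
    chosen = []
    for robot in my_robots:
        best_move, best_score = None, float("-inf")
        for move in legal_moves(state, robot):
            # evaluate() could itself worst-case over possible enemy traps
            score = evaluate(apply_move(state, robot, move))
            if score > best_score:
                best_move, best_score = move, score
        chosen.append(best_move)
        state = apply_move(state, robot, best_move)
    return chosen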

I suppose I could have done better with simple heuristic code, but that would not have been fun.

On the contest itself:

  • The concept is good
  • I think the trap mechanics could have been better. Not really sure how.
  • Incomplete information can be good, but because of it you get the “shifumi” (rock-paper-scissors) effect.

In fact, I think the contest could have been better with a different trap mechanic and a 1v1v1v1 mode (4 players).

5 Likes

I finished 30th in Legend; it was the first time I reached Legend, so that was great for me.
My strategy was to estimate the 5 best “goals” for each robot (for example, dig in a particular cell and bring the ore back, or take a radar and place it), then optimize the combination of moves (by brute force, with 5**5 = 3125 possibilities) to avoid having two robots that could be killed by one kamikaze and to avoid two robots having the same goal. I tried to implement a “yolo” strategy, where I assume that no traps have been placed if I’m losing by too much, to try to get back into the game, but it never really worked.
In addition, I also tried to obtain as much information as possible from enemy moves.

Overall, I found the challenge great: simple and deep. I found it particularly interesting to discover the environment through opponent moves. However, there was a meta effect that was a bit too strong: the optimists beat the prudent, who beat the aggressive, who beat the optimists. As a result, after a certain point it was difficult to know whether a change would improve your bot.

I also think that the trap and radar mechanisms were unbalanced. If you lose one more robot than your opponent in the first three quarters of the game, you have basically just lost, so you cannot really adapt to the opponent’s strategy. Similarly, I believe radars were too powerful and just forced everybody to use them all the time. Maybe having 10 robots each instead of 5, and radars with 1 less range, would have allowed more varied strategies.

Also it would be great to have a special display for battles where your bot timed out.

4 Likes

I came in second, which is becoming a habit. Thank you very much to the creators, sponsors and organisers of this competition. As ever on CG, it was very smoothly run. Congratulations to @karliso on a great win. Going into the final hour, I knew I was winning 70-80% against all the top published bots; but karliso’s last minute entry blew me away.

The trap meta

I’m going to spend most of this post talking about the meta around traps, the most interesting/difficult/important part of the game, and how my bot fitted with that. To do that, I’m going to go through strategies around traps, working from the simplest strategy through the various counters and counter-counters.

1. Mine ore, ignore traps

There’s not much to be said here. The first iteration of my bot, and probably most bots, placed radar, mined ore, and never thought about traps. This quickly becomes untenable.

2. Place random traps

If you place random traps, opponents who ignore them will blow up their robots. This is a devastating thing to happen. You lose 20% of your mining power for the whole game - this is usually decisive.

I went through a phase of doing this. I found it effective for reaching the top ten (very early on), but you can avoid digging up traps by detecting them. There are at least two ways to achieve this:

  • Avoid any square with a hole that your opponent’s robot has paused next to.
  • More sophisticated: only avoid such squares when the robot had paused on the HQ before stopping there.
    • This can be countered by robots holding traps visiting the HQ briefly. However, this is easily detected - your opponent’s score doesn’t increase, so you know something funny is going on.

I implemented both of these detection methods, which allowed me to avoid traps perfectly when I wanted to.
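
A minimal sketch of the second detection rule (all names are placeholders, not my actual code; `paused_at_hq` and `is_stationary` would be tracked from previous turns):

def update_suspicious(enemy_robots, holes, suspicious):
    """Flag holes on or next to an enemy robot that is standing still
    after having paused on the HQ column (i.e. it may carry a trap)."""
    for r in enemy_robots:
        if not (r.paused_at_hq and r.is_stationary):
            continue
        for cell in [(r.x, r.y), (r.x + 1, r.y), (r.x - 1, r.y),
                     (r.x, r.y + 1), (r.x, r.y - 1)]:
            if cell in holes:
                suspicious.add(cell)
    return suspicious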

3. Suicide on traps

Here, you dig up a place you know (or suspect) is a trap, to take out two of your opponent’s robots for one of yours. When I did traps, I did this.

This is avoided simply by never having two of your robots next to an enemy trap, or chain of traps. However, by the end chains were so rare that I only considered single traps, to improve pathfinding.

4. Stashes

This is where things get interesting. Around Thursday of last week, a new meta emerged. @The_Duck was the best at this for a long time, though I don’t know where it originated. The idea was:

  • Pause your robot on spawn, but don’t pick up a trap.
  • Dig a square with more than 1 ore. The opponent will think it might be a trap.
  • Leave (stash) the rest of the ore for later. When ore goes short, go back for the easy pickings.

In the high level meta, most or all bots were using this by the end. I was, whenever I expected to dig a 2 ore square early in the game. I started picking up stashes at a hardcoded turn, or when the nearest ore to HQ got far enough away.

5. Stash detection/guesswork

Stashing is so effective that countering it became (I believe) the key to the high-level meta at the end of the competition. I tried multiple methods to detect enemy stashes. Having done this, you can dig them up safely.

  • Entity IDs. Traps and radars get IDs which are sequential, shared between opponents, and set when the item is buried. This means that if you place IDs n and n+1, nothing buried between those placements is a trap. This is the only certain method in this section; the rest is guesswork. (A small sketch of this bookkeeping follows this list.)
  • Obvious radar. If there are no possible radar anywhere near the item buried, it’s a radar. If multiple holes dug after it is buried would be invisible without this item, it’s a radar.
  • Two enemy robots next to a square, when we have a robot nearby. This uses point 3 - the enemy wouldn’t take that risk. To avoid this being used against me, I avoided having two robots next to my own stashes at once.
  • Late traps. Very few good bots placed traps after around the halfway point in a match, so it was often worth the risk. This risk almost certainly ended up costing me the win - karliso did this much more, and many of my losses to them came from late traps.
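
As a sketch of the entity-ID bookkeeping from the first bullet (names and the exact inference window are mine, not teccles' code):

def item_free_turns(my_item_placements):
    """my_item_placements: (turn, entity_id) pairs for items I buried, in
    burial order. If two consecutive items of mine also have consecutive
    IDs, the enemy buried nothing in between, so anything they dug on the
    turns strictly between those two placements cannot be a trap."""
    placements = sorted(my_item_placements)
    safe = set()
    for (t1, id1), (t2, id2) in zip(placements, placements[1:]):
        if id2 == id1 + 1:
            safe.update(range(t1 + 1, t2))
    return safe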

I tracked the expected result of the game, and took more of these risks when behind. Also, if there is an enemy robot next to a stash/trap, I dig it. If it’s a trap, it’s a one for one swap.

Other aspects

To pathfind, I guessed which ore each miner would go for. Then they got to select routes, starting with the one closest to their target. These routes were planned so that we’d never end up with two robots next to a trap/stash on this or a future turn. Within that restriction, I prioritised being close to enemy stashes but not my own.

My first few radars are hardcoded, except that I skip the risky second and third radars if the first doesn’t turn up enough ore (so that I don’t run out). After that, I mostly place them opportunistically as I mine, which is a very effective technique.

There’s lots of other code in my bot, but none of it stands out as particularly interesting or important.

Final evaluation

This section is a tad salty, so let me preface this by saying that I was beaten by a great bot, and by again expressing my thanks to everyone who had a part in the making of this great contest.

In the final evaluation, karliso and I had very comparable results against lower bots; perhaps mine are slightly stronger; the bulk of the gap between us comes from our head-to-head record. This record was 54-37. That is within the range of normal outcomes from a fair coin toss. For future contests, particularly with significant prizes, I think that there should be more games run in the evaluation; I think there is a 10% probability I would have won the contest in this case (and if the evaluation had been closer, this chance could be much higher!).

Again, karliso is a worthy winner, I have no bitter taste from this contest, and I hope to play many more CG contests in the future.

EDIT: @Neumann pointed out that not all the games in the final evaluation are on CGStats, because there is some maximum number tracked and they get forgotten over time. So the actual sample size is larger than I quoted here, and thus the evaluation is more rigorous than I thought.

35 Likes

24th in legend, which is very surprising because my bot lacks very basic stuff due to lack of time; for example, it can easily be 1v2 kamikazed. Luckily, the better the opponents, the less this happened!

The only “smart” thing my bot did that has not been mentioned already: if an opponent starts harvesting a “stash” (see teccles’ post) and there is still ore left on it, my bot realizes that there is no bomb there and prioritizes emptying that enemy stash. I don’t know if it helped much, though…

It was refreshing to have a “heuristics only” contest. Although not really my cup of tea, it makes programming much more recreational (“ah, let’s add another if here and see what happens”)… The rules were very simple and there was still a lot of depth. Kudos to the organizers!

2 Likes

Hello,

I was one of the testers of the game and therefore out of the competition, but I can give you a few explanations of my strategy.

The main idea was to create a very fast collector that does not spend time setting traps.

My radar positions were hardcoded, nothing fancy. I decide whether to place a radar or to continue collecting based on the amount of visible “disputed” ore. I choose my radar carrier to minimize the total travel distance.

I don’t try to avoid “dangerous cells” for radars because I am generally faster than my opponent. If the opponent destroys one of my radars, I restore the information from the previous turn. It doesn’t affect me much and I don’t waste time setting the radar back.

A large part of my code analyzes the difference between the actual map and the expected map and correlates it with the movements of opponent robots.

  • Cells with an unexpected hole or unexpectedly missing ore are flagged as “touched cells”.
  • If an opponent robot stays in column 0, I flag the robot as “carry item”.
  • For each “touched cell”, I check how many opponent robots stayed near it. If it is only one, I associate that robot with the cell. If the robot is flagged “carry item”, I remove that flag and flag the cell as “dangerous”.

This allows me to resolve some actions but not all. In case of ambiguity, I flag all cells with a hole near an item-carrying robot as “dangerous”, without removing the “carry item” flag on the robot.

When one or more opponent robots reach the base, I compare the score increment with the number of returning robots. If they match, I know all those robots were returning ore; they are now empty and I remove the “carry item” flag from them.
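
A tiny sketch of that check (placeholder names, not vrampal's actual code):

def clear_carry_flags(returning_robots, score_delta):
    """If the opponent's score went up by exactly the number of robots that
    just reached the base, all of them were carrying ore, so none of them
    can still be carrying a trap or a radar."""
    if returning_robots and score_delta == len(returning_robots):
        for robot in returning_robots:
            robot.carry_item = False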

This gave me a pretty good view of my opponent’s tactics, and I was able to do kamikaze attacks on “dangerous cells”. Very powerful.

I also estimate whether my opponent is using traps before searching for kamikaze opportunities. I count how many items were requested during the last 5 turns; if it never goes above 1, you should be safe.

My code computes some stats on the map: how many “dangerous cells”, how much “dangerous ore”, how much “protected ore” (below one of my radars), how much “disputed ore”, and how many “double ore” and “triple ore” cells.

Depending on these stats, I sometimes choose to wait at the base before digging a double or triple ore. The cell is then marked as “protected”: I don’t expect the opponent to touch it, and the remaining ore is collected only at the end of the game.

I didn’t create any code to protect against kamikazes, and didn’t have any pathfinding to adjust my path.

The code is 860 lines long; I was very surprised to reach Legend league.
In the end, even though I never take a trap, it relies a lot on the fact that I could have set one. :smiley:

5 Likes

Thank you guys for sharing the strategies. I have used many myself but there is one thing I didn’t think of that many of you have mentioned:
Waiting a turn in base to protect your stashes with fake traps.

Ended up in Gold #252 with a “simple” JavaScript bot that just had a list of moves ordered by priority:

  • request a radar
  • go base to deposit ore
  • move and place a radar
  • dig big stacks once, before the enemy (this would have been much better combined with the fake bomb strategy)
  • move and extract closest ore
  • blind dig

Each move of course had some preconditions but nothing too sophisticated.

This worked well until the Gold league. Moving 5 bots without a BFS algorithm proved quite difficult. Things started to get really complicated when I had to prevent multiple bots from doing the same move, for example, or from getting too close to each other. At that point, adding/changing preconditions and deciding objectives was not the way to go. The final version was around 800 lines of code, mostly clean except for a couple of 30-LOC methods, haha.

Anyways it was fun and I wasn’t expecting much, just wanted to jump in quickly and spend some time being creative.

The starter AIs were super helpful BTW. Parsing inputs is such a pain in the ass!

What I struggled with the most was figuring out whether a new version of the bot was better or not. Waiting until all battles are finished is boring, so I keep coding. By the time the battles end I have added or removed something, changed a parameter… and it is hard to see clearly which change is responsible for what. Did I drop because of X? Or was it Y? Maybe it is the combination of both. But wait, now that I have changed this, maybe the new version works better.
In other words, I missed quick feedback. It is like running the tests while coding and finding out that something is broken 10 minutes later, but you cannot see which test is failing and your code base has changed in the meantime :smiley:

I haven’t tried executing matches locally. Maybe I should try that and compete with previous versions of my code to get an idea of what works and what doesn’t without having to wait that much.

PS: I have been writing some posts about my experience during the contest if anyone is interested https://salvatorelab.com/tag/unleash-the-geek/

2 Likes

Hey, if you’re running your bruteforce on your possible commands (collect ore, place radar or kamikaze), how do you know that 2 robots will be next to one another next turn if the target is more than 1 turn away?

I finished #16. I thought the contest was a lot of fun, and that the key part was actually how many different ways one could think of to extract as much information as possible from all situations. The strategies then just naturally came along to react to that information.

I did pretty much the same thing as eulerscheZahl and, funnily enough, I finished right behind him. I gave scores to all cells depending on what my robots were supposed to do, kept only the top scores and then ran a bruteforce to assign each robot to a given task for an optimal mix given my constraints. I also had features already discussed, like potential enemy trap detection (the exact same one vrampal described) and removal (same as DaFish), stash construction (starting with the ones with the most ore), opportunistic digs of enemy stashes, etc. Two other small features not yet discussed were determining the next objective right before returning to base, to save 1-3 movement cells on the next trip, and trying to blow up returning enemy robots when my own robot didn’t have time to collect one more ore before the end of the game.

I do believe that you had to drop at least a couple of random bombs to protect yourself against people trying to steal your stashes, which became the meta towards the end. However, I did not have time to implement any sort of protection against the enemy kamikazing on their bombs or mine, so that did backfire a few times (not to mention I got destroyed by the few bots trying to actively blow me up).

Thanks to the whole CG team for a great contest!

1 Like

I compute the arrival times. If they only differ by 1 (the digging time), I consider the robots in danger. You will still see it happen, but I prioritize other plans.

1 Like

I came a distant third after karliso and teccles. Thanks to CodinGame, Amadeus, and all the players for a fun competition :slight_smile:

Strategy

For a while I just mined straightforwardly and sometimes placed traps where I happened to mine.

Then for a while I ran a dedicated hunter-killer bot who stood with a trap at x=1. He tried to predict where returning enemy robots would be and intercept them with traps to get 2-for-1 trades. This worked at low levels but wasn’t useful against top bots, so eventually I turned it off.

At some point I noticed Mazelcop seemed to be leaving ore under their radars untouched until late in the game. I had to assume that ore was trapped, and Mazelcop could save it until later, cleaning up some fast easy ore at the end. Also I noticed that SlyB seemed to be fake-requesting traps sometimes. I started doing these things too: I made my robots always pretend to request a trap on returning to HQ (but I basically never actually got a trap). Then I would remember which squares the enemy would have to assume were trapped, and I would save this “claimed” ore until later. At turn 100 I would start to harvest this easy claimed ore. This seemed to give a ~2 point rating boost when I started doing it.

I never did the fancy stuff other top players did of guessing which possibly-trapped ore was actually safe. For the most part my bot plays it safe and tries to rigorously infer all it can about the map, and then assumes any square is trapped unless it can prove that it isn’t.

Inferring which squares might be trapped

I’m not sure how other people determine which enemy robot dug on which square, but I kind of liked my method. Each stationary enemy robot could have dug on one of five squares, or not dug at all. I go through each possible combination of those squares and check whether that combination of enemy digs is consistent with what I know about the map (basically, which holes appeared and how the ore on each square changed). I find all combinations of enemy dig locations that satisfy all the constraints. For example, if there are 3 stationary enemies I check whether the map is consistent with “enemy 1 dug left, enemy 2 dug right, and enemy 3 waited”, and each possible variant of that. This gives a list of possible dig squares for each enemy robot, and also determines whether each enemy robot must have dug or whether it could have just waited.
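
A much simplified sketch of that enumeration (the real consistency check looks at hole appearance and ore changes; here `is_consistent` and all names are placeholders):

from itertools import product

def possible_digs(stationary_enemies, is_consistent):
    """Each stationary enemy either dug its own cell, one of the four
    neighbours, or did not dig (None). Keep only the combinations that
    are consistent with the observed map changes."""
    options = []
    for e in stationary_enemies:
        cells = [(e.x, e.y), (e.x + 1, e.y), (e.x - 1, e.y),
                 (e.x, e.y + 1), (e.x, e.y - 1)]
        options.append(cells + [None])

    consistent = [c for c in product(*options) if is_consistent(c)]

    results = []
    for i, e in enumerate(stationary_enemies):
        dig_squares = {c[i] for c in consistent} - {None}
        must_have_dug = all(c[i] is not None for c in consistent)
        results.append((e, dig_squares, must_have_dug))
    return results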

Any enemy robot that paused on the HQ might be carrying a trap, until they definitely dig or return to HQ. [This assumption that robots that return to HQ don’t have traps isn’t necessarily true. A few people (deathpat, for example) would get a trap, fake planting it somewhere, return to HQ as if they were delivering ore, then head out on the map again and place the trap for real. My robot wouldn’t correctly flag these traps and might accidentally dig them. I missed the trick teccles described of detecting this by looking at the opponent’s score. Fortunately I don’t think any top bot ended up doing this.]

Whenever an enemy that might be carrying a trap possibly digs on a square, I write down the turn number.

As teccles described, by looking at my item ids, I can determine some turns on which the enemy definitely did not place any items. For example if I placed radar #11 on turn 5 and radar #12 on turn 7, the enemy did not place anything on turn 6 (I couldn’t tell from the referee code if I can infer anything about turns 5 or 7).

So, a square might be trapped if it was possibly dug by a possibly-item-carrying enemy on a turn when I can’t rule out that the enemy placed an item.

Also: If I dig a square and survive (or if a non-item-carrying enemy definitely digs a square), it’s not trapped. To rule out traps, I encourage my robots to dig possibly-trapped squares when an enemy is adjacent to that square.

Mining ore

Every turn, each miner assigns each ore spot a “cost”. Some components of the cost were:

  • Mostly the cost is how many turns it would take to get to that spot, mine, and return to HQ.
  • There is an extra cost for how many turns it would take to get to the ore (since if that’s longer the enemy might mine it before we get there)
  • There is a bonus (reduced cost) for mining 2- and 3-ore spots while the enemy thinks we might be holding a trap.
  • There is a bonus for mining a possibly-trapped square if it would lead to a 1-for-1 trade.
  • There is a huge penalty if there is not enough time left in the game to mine that square and get back to base.

I assign each miner to an ore spot such that the total cost of all the assignments is minimal (with the Hungarian algorithm, which I read about at http://timroughgarden.org/w16/l/l5.pdf)

Robots returning to HQ with ore count as miners; they just have to factor in the time to return the current ore before starting on the next one in their cost.
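
For illustration, the assignment step can also be done with an off-the-shelf Hungarian solver; the sketch below uses scipy's `linear_sum_assignment` on a made-up cost matrix (this is not The_Duck's implementation, which follows the lecture notes linked above):

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: estimated turns for miner i to reach ore spot j, dig and
# return to HQ, plus the bonuses/penalties listed above (values made up).
cost = np.array([
    [12, 15,  9, 20],
    [14, 11, 16, 18],
    [10, 13, 12, 22],
])

miners, spots = linear_sum_assignment(cost)  # minimises the total cost
for m, s in zip(miners, spots):
    print(f"miner {m} -> ore spot {s} (cost {cost[m, s]})")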

Evading traps

It’s critical not to let the enemy blow you up. There are basically three bad things that can happen:

  1. Your robot digs an enemy trap. This is solved by the system that tracks which squares might be trapped. I only dig them if it would be a 1-for-1 trade.

  2. The enemy digs a trap (of either player) and it blows up two of your bots.

  3. The enemy places a trap on a square adjacent to where two of your robots end up, so that next turn the enemy digs the trap and gets a 2-for-1 trade.

For 2 and 3, I made a list of all the possible explosions I was worried about. Then I made sure that no two of my bots ended up in the same explosion. This was a postprocessing step after I had already decided on preliminary moves for all the bots. Basically, I assigned a possible set of moves a cost. Mostly the cost was how many potential 2-for-1 trades the moveset exposed me to. Also the cost penalized moving robots away from where they were trying to go, and favored disrupting as few robots’ plans as possible. If the initial preliminary moveset wasn’t satisfactory (because it risked 2-for-1’s) then I did a combination of random search and hill climbing to find a better moveset (meaning, a moveset with a lower cost).
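
A compressed sketch of that postprocessing step (placeholder names and weights; the real cost also penalises disrupting robots' plans more finely):

import random

def moveset_cost(moves, blast_zones, preferred):
    """Heavily penalise any possible explosion zone that would contain two
    of my robots, plus a small cost per robot moved off its preferred cell."""
    cost = 0
    for zone in blast_zones:                    # each zone is a set of cells
        if sum(cell in zone for cell in moves) >= 2:
            cost += 100
    cost += sum(m != p for m, p in zip(moves, preferred))
    return cost

def fix_moves(preferred, candidates, blast_zones, iters=200):
    """Start from the preliminary moves and hill-climb single-robot changes
    until no zone holds two of my robots (or the budget runs out)."""
    best = list(preferred)
    best_cost = moveset_cost(best, blast_zones, preferred)
    for _ in range(iters):
        if best_cost < 100:                     # no 2-for-1 risk left
            break
        i = random.randrange(len(best))
        trial = list(best)
        trial[i] = random.choice(candidates[i]) # alternative cells for robot i
        trial_cost = moveset_cost(trial, blast_zones, preferred)
        if trial_cost < best_cost:
            best, best_cost = trial, trial_cost
    return best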

I wasn’t totally satisfied with this evasion system. I think ideally instead of being a final postprocessing step it would be more tightly integrated into my mining, so that my robots could select ore targets that would naturally lead to them not being near enough to each other to risk 2-for-1 trades.

18 Likes

Thanks for the great PM and congrats for your insane run :smiley:

I don’t know if these stats come from CGStats or if you tracked the results during the rerun, but as I said on Discord yesterday, CG doesn’t keep more than 500 games in the last battles. It means that by the end of the rerun, some games were already missing from the history. Furthermore, if you look at today’s stats, more games have been purged (186-ish remain at the moment, giving you 13 / 20 / 5 against Karliso). Having actual rerun stats would require tracking games as they get purged.

4 Likes

Ah, right. Yes, my stats were from CGStats, so they are wrong for the reasons you give. So this problem is not as bad as I thought, and perhaps non-existent. Thanks!

1 Like

If I recall correctly, the final rerun is 320 games for each player in the Legend league, so you’ll play an average of 640 games during a rerun of a 1v1 game.

1 Like

Thanks. I was looking at 500 of those, so the sample size was similar to what I saw. But I recall the early games being rather worse for me than the later ones, so I’m pretty sure that in this case the overall result is statistically significant.

1 Like

Hi everyone,

Finished 1266th this time. I just wanted to say I am really happy that I participated in this contest. As someone who changed workplaces a month ago, I needed something to boost my confidence. As you can see, my rank wasn’t high, but after reading a lot of your comments it is a big relief that I actually used a lot of logic similar to the people in the top 20. I just didn’t have enough time to handle all the problematic cases.

Thank you, everyone, who participated and enjoyed as I did, and I hope we meet again in the next tournament of power. <3

6 Likes

Hey, I couldn’t really go tryhard this time because of Canadian Thanksgiving, meaning I was away from Friday till the end. So I couldn’t finish my refactoring (well, it was done but all bugged :stuck_out_tongue: ) and I pretty much submitted my Tuesday code. I finished 200-something. It was still a fun game and I really wish it ends up as a multiplayer (hey Amadeus, wink wink, WINK WINKing your way).

The goal of my refactoring was:

  • Each bot determines its “5 best objectives” via heuristics.
  • A small sim determines how many turns it would take to “complete” each one.
  • Go through all of those and check “conflicts” between bots (e.g. don’t mine the same cell with only 1 ore), and determine which bot actually does it based on the sim result.
  • “Losing” bots move on to their next-best objective until everyone is happy or has reached their fifth objective.

It was working OK, but there were a couple of bugs where a bot would keep switching objectives, making it really suboptimal, and I didn’t have time to build further on the system with new stuff. My main goal was to compute the moment when a bot would end up without a proper objective and try to pop a radar before that. Sadly, I didn’t have time to implement that on top of the bugs. So yeah, make it a multi!!! :smiley:

3 Likes

Hello, I was ranked 150 and used Python. My overall strategy was not smart: it used a lot of ifs and very few smart functions. Here are some highlights:

  • At the start of the turn, I assign classes to each robot: at most one scout, one sapper (which will carry a trap), and the rest are harvesters.
  • Scouts go find a location that maximizes the potential ore available divided by the distance to the HQ (avoiding places with potential traps), while minimizing the distance to the robot.
  • The sapper goes to plant a trap, preferably close to other traps, in existing holes, close to the robot but not too close to the HQ.
  • Harvesters go to their assignments following the Gale-Shapley algorithm (a small sketch follows this list): first towards visible ore, then towards ore that I know is there but is no longer visible, and finally towards unknown locations. If a harvester has no target and a radar will be dropped, it goes to that location.
  • If I can trigger a mine to kill more enemy bots than allied bots, go for it.
  • Keep assignments from one round to the next if they are not dangerous.
  • We need a new radar if there is less than 20 ore on the map.
  • Detect trap and radar drops, without trying to differentiate one from the other. You can dig a trap if there is a neighboring robot.
  • If ore appears on the edge of my vision, consider that the neighboring tiles are part of a cluster and prioritize them for radars.
  • Add some random delays to simulate requesting a trap.
  • Pathfinding picks one of the possible destinations at distance 4, without too much effort.
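
A bare-bones Gale-Shapley matching between harvesters and ore targets, for illustration (preference lists and ranks would come from the distance/priority rules above; this is not the author's actual code):

def gale_shapley(robot_prefs, cell_rank):
    """robot_prefs[r]: ore cells in robot r's order of preference.
    cell_rank[c][r]: how much cell c 'prefers' robot r (lower is better,
    e.g. Manhattan distance). Returns a stable {robot: cell} matching."""
    free = list(robot_prefs)                    # robots not yet matched
    next_idx = {r: 0 for r in robot_prefs}      # next cell each robot proposes to
    holder = {}                                 # cell -> robot currently assigned

    while free:
        r = free.pop()
        while next_idx[r] < len(robot_prefs[r]):
            c = robot_prefs[r][next_idx[r]]
            next_idx[r] += 1
            if c not in holder:
                holder[c] = r
                break
            rival = holder[c]
            if cell_rank[c][r] < cell_rank[c][rival]:
                holder[c] = r                   # r bumps the rival out
                free.append(rival)
                break
    return {r: c for c, r in holder.items()}

# Example with two harvesters and three candidate ore cells (made-up data).
prefs = {"bot0": ["oreA", "oreB"], "bot1": ["oreA", "oreC"]}
ranks = {"oreA": {"bot0": 3, "bot1": 1}, "oreB": {"bot0": 2}, "oreC": {"bot1": 5}}
print(gale_shapley(prefs, ranks))               # {'bot1': 'oreA', 'bot0': 'oreB'}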

All in all, I’d rewrite it if I had the time, to try to find some more consistent formulas, but in the end there was no time, and the ranking was so unstable that there was no way to know whether a changed algorithm would do better or not…

Thanks for this challenge, it was fun!

2 Likes