Wondev Woman - Feedback & Strategies

Hi,

34 / java

Like a lot of legend players, I had:

  • Minimax with alpha/beta pruning
  • opponent detection in the fog of war (pretty solid, apart from a glitch when a push was cancelled and then repeated)
  • my eval :
    • flood fill from all the pawns I know about, based on cell accessibility (see the sketch after this list)
    • score
    • height
    • height of neighbors
    • position (maybe I was wrong, but I decided the center of the map was better)
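
A minimal Python sketch of that kind of accessibility flood fill; the grid/units representation and helper names are mine for illustration, not the author's actual code:

    from collections import deque

    def reachable_cells(grid, units, start):
        """Count the cells a pawn could eventually walk to, ignoring future builds.
        grid: {(x, y): height} with holes and height-4 cells removed;
        units: set of occupied coordinates."""
        seen = {start}
        queue = deque([start])
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nxt = (x + dx, y + dy)
                    if nxt in seen or nxt not in grid or nxt in units:
                        continue
                    if grid[nxt] <= grid[(x, y)] + 1:   # may climb at most one level
                        seen.add(nxt)
                        queue.append(nxt)
        return len(seen) - 1                            # exclude the start cell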

What didn’t work (certainly a bug of mine)

  • memoization of states (aka transposition tables as I understand it)

What I would have tried with more time :

  • biconnected components
  • choose the ‘best’ possible position for pawns in fog of war
3 Likes

I also included this in my eval function, but I feel like it's a bad thing on some maps

21 / C

My first non-heuristic based AI. I loved the challenge and was stunned by the top 5 elo scores: you’re good!
This should yield my first CG tshirt ever too, if I got the attribution rules right.

Same strategy as most in legend: min-max (w/o alpha-beta, keeping only the top ten scores per level/node) with a Voronoi-based evaluation.

One small feature I didn’t see mentioned: I detect isolation and fall back to single-player mode (max-max instead of min-max) to simulate deeper and harvest the maximum available points on the isolated zone.
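
A rough sketch of how such an isolation test could look, assuming a hypothetical `reachable_set(grid, units, start)` flood fill (like the one sketched earlier, but returning the set of reachable cells):

    def neighbours(cell):
        x, y = cell
        return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)}

    def is_isolated(grid, units, my_units, enemy_units):
        """True when no enemy unit touches the area my units can still reach."""
        reach = set()
        for u in my_units:
            reach |= reachable_set(grid, units, u)      # hypothetical flood fill
        return all(not (neighbours(e) & reach) for e in enemy_units)

    # if is_isolated(...): search only my own moves (max-max), go deeper, and
    # harvest the maximum points available in the isolated zone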

After one day of nightmarish detection (I assume as described by Magus), I realized I could use the min-max's move generation to brute-force the set of possible opponent positions (a couple of points each). It worked like a charm and proved rock solid, converging very fast. Two small points I overlooked initially:

  • when the opponent starts, the map (most probably) comes with a height-1 cell: it gives a hint about the location of one (or possibly two) of his positions
  • when the opponent accepts defeat (or times out), we receive -1 -1, but there are some position hints hidden in the legal moves provided (I used to discard them starting in bronze, but reintroduced them in legend) to refine the set of possible positions. Failing to do so may result in issuing an invalid move.

I finally use the set of possible positions as the first node in the min-max tree, since that choice belongs to the opponent.

It was a nice week with a really nice outcome for me. I enjoyed every bit of it and learned a lot.

PS: thanks egaetan for the Paris coding hub

4 Likes

#90 (23 in gold), C#

danbabble

I enjoyed this contest greatly. Thank you very much, CG. This was my 12th contest (awarding me the Happy Birthday achievement) and my highest placement yet. A modest victory, but a victory for me, nonetheless. I also enjoyed using C# since I have written my last few contest entries in Perl in an effort to finally obtain the Cherry on the Cake accomplishment. Having achieved this in C4L, I felt free to return to a much saner (and more fun) language.

Early contest

Up until Friday (3 days before the contest end), my code simply consisted of an evaluation of the legal moves provided to me. I did not attempt to track unseen opponents, nor to evaluate any future moves. My evaluator had many “magic” numbers in it (12, I think) and I was able to significantly improve the performance of my AI just by playing with these values.

Evaluation

This was enough to get me to about 70th in gold on Friday morning, and I held off on making any modifications with the expectation that my position would likely permit me a free pass to legend. No such luck! Only 17 got in at the opening of the legend league?!? Wow!

I will note, however, that based on the simplicity of my logic at the time, I don’t think that it really deserved the title of “legend”.

My evaluation is admittedly over-complex, but it served me decently well. As of Friday, it consisted of:

  • Me
    • Sum of the heights of all reachable squares (flood fill)
    • Plus what I call a “liberty” factor, consisting of a count of the number of neighboring tiles that were accessible to each of my pieces (more on this later)
    • Plus a “high-ground” factor: a multiple of the height of the tile under each of my pieces
    • Plus a “center-board” factor based on the distance of each of my pieces from the center of the board
    • Plus a bonus for scoring a point
  • Opponent (only calculated for pieces with a known position)
    • Minus the sum of the heights of all opponent-reachable squares
    • Minus the liberty factor for each of my opponent’s pieces
    • Minus the high-ground factor for each of my opponent’s pieces
    • Minus the center-board factor for each of my opponent’s pieces
    • Minus a bonus for scoring a point

Liberty factor

I think that the “liberty” factor was probably the most important component of my logic. I played with various ways to score this (squares, triangular numbers, etc.), but I eventually decided that:

  • Zero liberties is VERY bad
  • 2 - 4 is the sweet-spot
  • Above 4 is nice to have, but not so important

Given this, I made an arbitrary 9-element array to score the number of liberties on a sliding scale, with a severe penalty for 0, significant benefits for increasing from 1 through 4, and diminishing returns above 4.
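
For illustration, such a sliding scale can be as simple as a lookup table; the values below are invented, not the author's actual numbers:

    # index = number of accessible neighbouring cells (0..8):
    # harsh penalty at 0, sweet spot around 2-4, diminishing returns above 4
    LIBERTY_SCORE = [-100, 5, 20, 25, 30, 32, 33, 34, 35]

    def liberty_factor(num_liberties):
        return LIBERTY_SCORE[num_liberties]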

Center-board factor

As some people have mentioned, the distance from the center of the board wasn’t always a good indicator of tactical advantage. I decided that it had much more significance early in a game, and became a hindrance in the late game. Near the end of the contest, I updated my evaluator to give the center less importance every 5 turns for my first 30 turns, and to ignore it completely after the 30th turn.
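
One possible way to implement that decay (the schedule and base weight below are invented for illustration):

    def center_weight(turn, base=10.0):
        """Fade the centre-board factor out in 5-turn steps, gone after turn 30."""
        steps_left = max(0, 6 - turn // 5)   # 6 steps of 5 turns each
        return base * steps_left / 6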

Last few days

Once it became evident that I wouldn’t make it to legend based only on the merits of my evaluator function, I determined to add opponent tracking and move prediction.

Opponent tracking

This is by far the most complex portion of my code. 33 of my 53 unit tests are dedicated to this logic. The code keeps a list of each possible location for each of my opponent’s pieces. The list is updated based on new information each turn, and is maintained throughout the entire game.

To weed out all bugs, I made this component of the logic able to detect its own errors, and had it throw an exception if either an opponent appeared in a location that it shouldn’t have been able to reach, or if the logic came up with an empty array for possible enemy locations.

Before throwing the exception, my AI would spit out, to stderr, complete C# code for a unit test to reproduce the problem. I ran 100s of games in CG Spunk, which flagged the games that threw. Most of these became new unit tests for my logic.

Move prediction

I only had a little time on Sunday to add this component to my AI, and so it’s only partly baked. It only kicks in when both opponent positions are known (according to my opponent tracking). And since I was so pressed for time, I wrote quickly, using the most expedient methods, meaning it was not very performant code. I also didn’t write many unit tests (only 3!!!) for this. I was sloppy. :slight_smile:

When WW goes multi, this will be the first place I look to improve my logic.

  • danBhentschel
3 Likes

Hello,

First of all, thank you very much for this contest and congratulations to the winners. The elo score is very impressive. I had a great time, a lot of excitement and fun.

My evaluation function was: the sum of neighboring cells’ heights that are at my current height or my current height + 1. Going down or going too high was discarded. This makes the bot build and climb higher. Then I added the flood fill to avoid being trapped. I did not think about the Voronoi. I compute this for my pawns and the known opponent’s pawns. The board score is simply the difference. This quickly took me to the top 100 and I spent the rest of the week fighting to stay there. I tried to add other factors to the evaluation function, such as the distance between pawns or the “diameter” of the filled area, but this did not seem to change much. I called the diameter the surface / max depth of the BFS. I tried to avoid corridors and favor open areas in this way.

I wanted to implement the min-max. Since I’m using Scala, I spent most of my time optimizing my code. Thanks to VisualVM for that, a very nice tool for us JVM language boys :slight_smile: Results were a bit disappointing: I wasn’t able to get enough performance to do the full level 2. So, I decided to “adapt” the algorithm. I computed level-1 moves (my moves) and sorted them (best move first). Then, I iterated over these moves and computed the opponent’s responses, keeping the min score, as you would do for the min-max. While I iterated, I kept track of the move which had the maximum minimum score :slight_smile: I hoped that I would be able to reach it in time. Most of the time, this best move was in the first 10, and I was able to compute it before the timeout. Optimizations were crucial there. Before starting to optimize, I would barely evaluate ~300 boards. After, I was up to 3000.
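
A Python sketch of that "adapted" anytime search; the helpers (`evaluate`, `apply`, `opponent_moves`, `time_left`) are hypothetical, not the author's Scala code:

    def choose_move(state, my_moves):
        # sort my level-1 moves best first, using the static evaluation
        my_moves.sort(key=lambda m: evaluate(apply(state, m)), reverse=True)
        best_move, best_score = my_moves[0], float("-inf")
        for move in my_moves:
            if time_left() < 2:                      # stop before the timeout
                break
            after_me = apply(state, move)
            # the opponent replies with the move that is worst for me
            worst = min((evaluate(apply(after_me, r))
                         for r in opponent_moves(after_me)),
                        default=evaluate(after_me))
            if worst > best_score:
                best_move, best_score = move, worst
        return best_move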

My side goal was to stick to immutable code, which I mostly did. I did not reach the point where the copies hindered performance. There were bigger perf hogs :slight_smile:

In the end, I implemented the fog detector yesterday evening and night. Pressure was high, and I had to focus hard to stop watching my submits and actually code something. The detector somewhat worked and brought me back to ~85th. I was so excited I had a hard time sleeping. I even dreamt I was coding for the contest. So, this morning, I decided to fix one or two minor bugs I had thought about while trying to sleep. I made a final push which made me go down to 123rd :slight_smile: This is not a bug, this is a feature!

So far my best contest: I passed straight away to silver and gold. Legend is still another league for me. The step is quite high. Maybe next time :slight_smile:

8 Likes

55th at the end

There was already enough written about the eval function; for my part I failed to go deeper than depth 1 (worse results, close to timeout).

For the enemy tracking I stored all possible locations for both enemies. In every turn I then simulated every possible action from each candidate enemy location and checked if it would lead to the current board state.
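
A minimal Python sketch of that filtering step; `legal_actions`, `apply` and the grid comparison are hypothetical, and a real version would only compare the cells actually visible to my units:

    def filter_candidates(prev_grid, candidates, observed_grid):
        """Keep only the enemy positions from which some legal action
        reproduces the board state I actually observed this turn."""
        survivors = set()
        for pos in candidates:
            for action in legal_actions(prev_grid, pos):     # hypothetical helper
                new_grid, new_pos = apply(prev_grid, pos, action)
                if new_grid == observed_grid:                # compare visible cells only
                    survivors.add(new_pos)
        return survivors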
As my debug animations already got mentioned:

12 Likes

37th. A funny but hard challenge. The fog of war made for a huge headache in debugging.

Started off having fun with a Genetic Algorithm, which got me to the top 20 (without Voronoi) until Thursday night.
Only managed 5-10k sims.
The GA had a double for the move direction and one for the build direction; I would count the possible moves every round and pick one by:
MoveIndex (or BuildIndex) = (int)(GA_value * NumMoves);   // GA_value in [0, 1)

Friday when Legend opened I was around 35 and started doing MinMax.
Once I was done I instantly regretted doing the GA at all, since it took a whole lot of debugging to make it work, while the MinMax could be integrated into the game engine, making for easy undoing of moves.

Used some kind of iterative deepening without memoisation for my MinMax:

    // Iterative deepening over odd depths (1, 3, 5, 7), keeping the last
    // result that completed within the time budget.
    for (int i = 1; i < 9; i += 2)
    {
        if (TimeIsDone) break;
        var newMove = _game.MinMax(i, Evaluate);
        if (!TimeIsDone)
        {
            bestMove = newMove;
        }
    }

It would break out of the loop once time had run out, from the second iteration onwards.
Did a BFS from every node to every other node to find the distances, at the start of every round, which I used in my Voronoi score calculation (didn’t consider paths being blocked).
Had around 10-15k evaluations every round, which led to a depth of 3 early on and 5-7 late game.

Regrets:

  • Not doing enemy prediction first.
  • Not using unit tests for enemy prediction.
2 Likes

How I used Python3 to rank 44th,
…rank first in Python/Python3, and
…have my bot chosen to be the notorious Gold Boss

A lot of people expressed amazement in the chat that my Python bot made it to the leaderboard (at a time when all the other bots were in C/C++). The truth is that I didn’t do anything special.

My algorithm

  1. I predicted all possible enemy positions accurately, including the possibility that the opponent is deactivated either because of a bad move or a lack of moves. This wasn’t about fancy algorithms or fast search, it was about basic logical reasoning. It took me about a day to code and test. I found Python’s set and frozenset types to be very helpful here, as well as list/set comprehension.
  2. At the start of each turn, I just fixed an arbitrary pair of possible enemy positions, assuming the opponents are still active if possible.
  3. I went through all possible moves and picked the best one according to an evaluation function. My evaluation function took into account the heights around my units, and the positions I can travel to (using the move rules but with no building) and reach before my opponent (basically a modification of the Voronoi heuristic; a sketch follows this list). Finding the right version of the Voronoi heuristic is what bumped me into the legend range. Later evaluation functions also penalized being in a position from which I can be pushed to a lower level.
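
A minimal sketch of that modified Voronoi count, assuming a hypothetical multi-source `bfs_distances(grid, units, starts)` that returns `{cell: steps}` using the move rules but no building (tie-breaking on who moves first is omitted):

    def voronoi_score(grid, units, my_units, enemy_units):
        mine = bfs_distances(grid, units, my_units)
        theirs = bfs_distances(grid, units, enemy_units)
        score = 0
        for cell in grid:
            d_me = mine.get(cell, 10**9)
            d_them = theirs.get(cell, 10**9)
            if d_me < d_them:
                score += 1          # I reach this cell first
            elif d_them < d_me:
                score -= 1          # the opponent reaches it first
        return score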

What didn’t work

  • Minimax (or alpha-beta). I couldn’t get it to be fast enough.
  • Brute-force search. I thought this would be good at the end when I am no longer connected to the opponent. Then it is just about optimizing the score. It was fast enough at the end, but for some reason it was doing slightly worse. Maybe I had a bug.

What was reassuring to me is that many C++/Java/C# players were also complaining that they couldn’t get minimax to work.

My takeaways/advice

  • Take the time to deduce all the hidden information. Most players won’t and this gives you a huge advantage.
  • A good evaluation trumps extra search depth. For example, my Voronoi heuristic really helped get a big picture view of the game positions.
  • Even though Python is slow, it is easy to code up new ideas in, and that is a fair tradeoff in my view.
  • There is a luck element, but the more you can experiment and test, the better chance you have of hitting the jackpot.

A few other things

  • I have a CG contest template that makes it easy to get started on each new contest.
  • I keep track of the time and stop early if I am close to timing-out.
  • I try to adopt good programming style, including having classes for things like the board state. This makes it easy to modify and extend my code. (This time I even wrote my program as a bunch of separate files and wrote a tool [a hack of the module import system] which converts those many Python files into one file.)
  • I wrote my own arena app for this contest (and every recent contest). (I wrote it in Python using a template that I have). As Agade and others have said, having a personal arena app makes it really easy to test your ideas. I can run 1000 games in about 30 minutes. If a new version does really well against the current version, I know it is worth submitting. Also it has the advantage that I can run code that would time out on CG, and I can see if it is worth trying to speed up. (I did this with the Voronoi heuristic for example.) I also use the CG Spunk tool, but that is 10x slower than my app.

View of the contest

I like that the rules were simple. I also liked the hidden information aspect even though it seemed silly at first.

As for the artwork, I was frankly embarrassed to be playing this game in public. I don’t think the scantily clad motionless wonder women on the side of the board added anything to the game. (I did think the advertisement artwork was much better.) I really appreciate CodinGame focusing on women-themed puzzles, like Wondev Women, Coders of the Caribbean(?), Ghost in the Cell, and Codebusters(?). I’d encourage the trend to continue, but with better artwork. I also understand that it is a huge stretch to incorporate the theme into the puzzle, so I don’t want to be too harsh.

(I have seen a lot of sexism and racism in the chats here, and I am worried about this website being uninviting to women and others. I hope it is just a few outspoken trolls in the minority.)

17 Likes

600th, silver, C#

I wanted to implement a minimax with alpha-beta pruning. And I did. The challenge rules were ideal for this study. I have learned a lot. Many thanks to CG and to everyone who posted advice or post mortem. Agade’s hint to store JSON responses proved really helpful. Thanks again.

I jumped from league to league with depth 1 minimax. That means, no minimax :slight_smile:
50 ms is a really short delay to simulate possible moves and evaluate the situations. I spent the whole contest optimizing so that I could reach depth 2. Yeah!

I feel that I could have reached a better rank by playing with some constants in the evaluation function, but I really wanted to explore minimax.
Next time, I think I will try harder to play to win!

1 Like

91st, Gold, C#

This is the first time I am in the top 100.
My first approach was to simulate and evaluate the moves given in the input. By refining the eval function I could get to Gold with this (130th on Saturday).
To get better results I extended this with iterative deepening minimax with alpha-beta pruning, and basic move ordering at the top level. Jumped to 70th with this on Saturday.
I did not implement any kind of opponent tracking, so when I did not see any of them I played maximax instead of minimax.

Thanks for the contest! It was great!

1 Like

63rd legend, 1st Scala

My bot is based on four modules: a fog cleaner, minimax with alpha-beta pruning, move ordering, and an evaluation function.

I started simple with just an evaluation function. It counts the number of paths out of a given position. One can exit from one cell to another if the target is playable, not too high and not occupied. I use a BFS with a depth of 3. The final score is the sum of my exits minus 2 * the sum of the opponent’s exits. It’s all about trapping, not scoring.

This function, along with a simple fog cleaner, sent me to the gold league all the way from bronze.

To enter the legend league, I implemented a minimax with depth 2 (me and the opponent in total). In the beginning it timed out. Then I added move ordering: I evaluate pushes first and then builds, and the pruning works better. It still times out sometimes, but in most cases it only takes 20 milliseconds.

The game is tough in the legend league. It was already Sunday morning. I improved the fog cleaner a little bit. I’m very glad to be in the legend league and decided to stop.

Things for the future: more depth in minimax, learn Voronoi, better enemy prediction.

Thx CG and all players for making this a great game.

1 Like

I use JMH for microbenchmarking and Java Flight Recorder for profiling. These tools are integrated in the Scala kit https://github.com/TrueLaurel/CodinGame-Scala-Kit

I don’t use immutable code everywhere. For example, in my eval function I have arrays, vars and queues, but they don’t escape the function.

The thing that boosted my minimax algo the most is the move ordering. So it is not always the mutable/immutable code that weighs the most, but a deep understanding of the problem domain.
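
To illustrate the general idea (not this poster's actual code): before recursing, sort the children by a cheap one-ply evaluation so that the likely best move is searched first; `evaluate` and `apply` are hypothetical helpers.

    def ordered_moves(state, moves, maximizing):
        return sorted(moves,
                      key=lambda m: evaluate(apply(state, m)),
                      reverse=maximizing)

The earlier a good move raises alpha, the more of the remaining siblings alpha-beta can prune.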

361st (Gold), Python3

This was my first competition here on CodinGame; I only had the weekend for it, but I tried to make the most of it.

The backbone was iterative alpha-beta pruning with move ordering based on the scores from the previous iteration. The depths it could reach were satisfactory in the end game or with a simple eval function (around depth 4 (2 ply)), but not so when the units were in the open field (as little as depth 1). I went for this mainly to try out how well Python can perform against C++ in a brute-force battle.

The eval function ended up like this:

  • flood fill: the number of accessible tiles without taking builds into account (this caused problems when narrow passages were present, as some parts were not easily accessible; I should have gone for Voronoi :slight_smile: )
  • points gained
  • a penalty for standing near a deep pit (to avoid being pushed down too much)

I also added a bit of aggression (a push was highly rewarded if it caused a high fall), which helped a lot, as the opponent needs extra moves to regain the height.

Fails:
One thing I tried was to give the opponent a build-anywhere move when he has a hidden unit, but this caused my units to be too scared to go into any narrow passage.
I wrote the “opponent can build anywhere if hidden” move generator, forgot to check that -1 -1 is not a valid position for a unit, and wondered why it did not work.

I really enjoyed the contest, with high pressure until the last moment. I reached legend, 26th, with a last push on Monday at 4am (ouch!).

I used C; I do not want to write bad C++ because of optimisation settings. I was able to do about 5 scorings per microsecond without flood fill, and 0.5 with flood fill, on my machine.

I implemented a negamax for the first time with alpha/beta pruning, PV-nodes, and iterative deepening. I learnt some of its traps the hard way:

  • test alpha/beta pruning results against a full search; it should return the exact same result,
  • alpha/beta pruning only works if the score is computed at the leaf (or else alpha & beta should be offset, but it is simpler to propagate the score to the leaf — see the sketch after this list),
  • do not count on move ordering and PV-nodes to give a good solution when an iteration is interrupted (not sure why yet; this kept me awake late in the night).
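
A minimal negamax skeleton showing the "evaluate only at the leaf" point (in Python for readability; `moves`, `apply` and `evaluate` are hypothetical, and `evaluate` must score the position from the side to move):

    def negamax(state, depth, alpha, beta):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)          # score computed only at the leaves
        best = float("-inf")
        for m in legal:
            score = -negamax(apply(state, m), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break                       # cut-off
        return best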

Also another CodinGame trap: use gettimeofday, not clock!

To implement it, I used the Wikipedia article on negamax and many good articles on the Chess Programming Wiki.

My evaluation function computes closeness centrality using a flood fill algorithm (thanks SatinneChatounette for the algorithm idea; Dijkstra would not do it).
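
One common (harmonic) variant of such a centrality term, assuming the same kind of hypothetical `bfs_distances` helper as in an earlier sketch; the author's exact formula may differ:

    def closeness(grid, units, unit):
        """Sum of inverse BFS distances from `unit` to every reachable cell."""
        dist = bfs_distances(grid, units, [unit])    # hypothetical BFS helper
        return sum(1.0 / d for d in dist.values() if d > 0)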

I also have an opponent detection scheme I find elegant. It uses several steps:

  • check whether my previous action was successful; if it failed, compute a clue bitmap indicating where an opponent unit must be, and update known positions and the map,
  • check if there are any unknown positions and whether I saw a unit moving,
  • for each unit, compute a mask (bitmap) of possible positions before its action; also compute a mask of hidden positions now and before my move,
  • if my move was cancelled, apply the clue bitmap to the possible positions (more on that later),
  • check whether I was pushed (this means the opponent did not move and must be at one of three positions) or whether the opponent built something (this means one and only one of its units moved and must be around the build), and compute the clue bitmap,
  • build the mask of possible positions from the previous possible positions, using the information on who moved or not,
  • apply the clue bitmap,
  • finally, for each unit, starting with the one with the most confidence (fewest positions in its bitmap), assign a position, choosing the worst position for me if there are several solutions,
  • and the most important step: output a funny message based on our guess!

I built an arsenal of tools to use on my local computer:

  • The program outputs a base64 dump of its state so that I can replay part of a match easily. I planned to output it in the text bubble (it fits), but this is not really nice, so I stayed with stderr :slight_smile:
  • I have a tool to load this dump and output various information about it (global score, score for each cell, dump to CodinGame input format…).
  • I wrote a local arena, this is invaluable to detect obvious errors (timeout, crashes, bad moves…) and to test variations.
  • I use git to keep track of every version. I also keep a binary for easy comparison.
  • I have a script which computes the CRC of the source code and patches it in before putting it in the clipboard, so that my bot starts by saying its “version number” out loud.

Thanks for reading :slight_smile:

4 Likes

I also used mutation inside methods, but my classes had immutable interfaces, making them easy to reason about. I have a “dark area” with a Scala object dedicated to handling the fills, for speed reasons.

Can you elaborate on what you call move ordering here? For depth 2, aren’t you obliged to evaluate all elements of the “cartesian product” of your moves and the opponent’s responses?

I have not set up your toolkit yet. I will give it a try one day or another, since using multiple files would be really nice.

25th Legend C++

Hey! An amazing contest as usual! Easy to enter, hard to manage… Thanks CodinGame!!

Overview

Like most of the legend players, I used a variable-depth minimax with alpha-beta pruning; I track the opponent in the fog of war, and I take its position into account in my minimax and evaluation function.
My evaluation function gives the maximum weight to the number of cells under control, taking into account who’s playing first (a BFS building a Voronoi territory based on the cells you can reach without building anything), so that you can try to push the opponent and shut him in a small area.
Then it takes into account various factors that I have no guarantee are totally effective:

  • The height of the cell where you are
  • The number of reachable cells around, with a better factor for higher cells
  • A bonus if the position has at least 2 possible moves

Story and takeaways

I started the challenge on the train on Friday and continued it during the return trip on Sunday evening. I started in Java, even though I had the intuition that performance would be a key factor, because I can prototype faster in it.
I first implemented the legal moves generator and asserted that I was generating exactly the same thing as the game engine, to be sure. I added a few tests to ensure that executing and cancelling the actions had no side effects, because I wanted to avoid losing CPU time duplicating the game state. I already had the minimax with alpha-beta pruning and move ordering you can find on my repo, and was using the evaluation function I described above, without the tracking and Voronoi territory.
On Tuesday I decided to translate my code into C++ because the game engine was not too big, and… it took me until Friday to have everything up and running. I should have switched to C++ from the beginning…

What surprised me is that I did not see such a big difference in performance. I had the exact same behavior, and where in Java I was able to run between 8k and 16k evaluations per 40 ms, in C++ I was around 12k to 20k! I would have expected more of a difference, but the fact that I used fixed-size arrays to model most of the game and did not use streams may explain why it was closer. Thanks to the alpha-beta pruning, I was able to look 4 to 5 turns ahead, so the pruning was really effective.

While I reached silver and gold with the first cut, legend was another story… I had all the features in mind, so I started coding the tracking during Saturday afternoon and stabilised it on Sunday morning. Knowing the position of the opponent clearly helped, but I was stuck in the top 20 of the gold league. I then added the territory control using a BFS, and it was magic… My bot climbed the leaderboard super quickly, reaching first position at 11%, and taking up to 8 points off the Boss. People on the chat who were complaining about the boss’s strength were quite amazed :smiley: : it was clearly possible. In legend the bot stabilized around 15th position, and even though I tried a few sets of weights in my evaluation function, there was no proven improvement, and the bot that finished 25th is almost the same as the one getting out of the gold league. I lost almost a depth because the BFS was quite costly, but since I was still reaching depth 3 to 4 with around 10k evals per turn, a better evaluation function clearly mattered more.

I can only measure the difference between my bot and the top players: there is a huge gap… I had more time on this challenge, but most of the bonus time was spent on the translation. I clearly have no regrets, and had no more real ideas to implement. I’m impatient to read the top players’ strategies to understand where the gap is.
I find the community challenge a great idea and I’m excited to see what will be done. But don’t take only the top 4 players to build it, take up to the 24th, so I have a chance to win :smiley:

@MrAnderson did an amazing job translating most of my repo into C#, and I need to give it a try next time :slight_smile:
I’ll also organise a CodinHub in my new job area near Grenoble!

10 Likes

130th, Javascript

My first contest ever on this site, since I only joined quite recently. Picked JavaScript to adapt quicker to any new rules in the higher leagues. Turned out there were no new rules later on. If I had known this beforehand, I would have picked a language I want to practice a bit more, Python / PHP probably.

First few days, maybe first 3 days? I had no clue whatsoever what the winning moves were for the game if I played it personally, so I wasn’t sure what to tell the bot to do either.
And THIS exactly was the biggest issue I should have approached differently. Thinking back, I should have created my own offline version of the game and played it either against myself or against a basic, dumb AI. I wouldn’t normally play this kind of game. It looks very dull to me.
I was about to not even bother with this any longer. Even as an avid gamer, I seriously hate the game design of WW, but because it’s a contest I kept going.

So the first thing I did was output move[0]. Got Wood 2.
I just wanted to see how far I can get with very little.

Then I added values to the moves based on height and output the move with the highest value. Wood 3.
All I worked on was a few lines for the MOVE&BUILD action type. I was still thinking of just doing very basic code and seeing how far it gets.

I got to bronze and hadn’t touched PUSH&BUILD yet, so I added that when silver opened. Why did I add the push-build so late? Because, again, I was about to give up at any moment and just forget about this.

The push-build was only added to get to silver since basic move-build didn’t do it for me.

Because I had free time to do anything, I just continued with this contest. I fine-tuned the push-build a bit to make it more annoying and drop the enemies where the height difference was > 1, since that would turn into a difference of 2 and they wouldn’t be able to climb back.

Here comes the fun part:
Tweaked the code a bit and added extreme values for extreme cases.

  • For example, when there’s a narrow space and my builder would get walled in if it steps in there, I gave the move a huge negative value.
  • Push an enemy into a narrow space and lock them off for good.
  • And of course avoid burying the allied builder alive. :smiley:

This gave me a HUGE jump to silver 20. At which point I felt the motivation to try and stay at the top.

Soon enough people tried to drop me, so I kept adding minor tweaks. I liked staying at the top but didn’t feel like trying too hard.
I then found some mistakes within my code and fixed them. Then I dropped a lot of ranks. So I undid the fixes and jumped back up. After carefully analyzing the code, I realized I had made the bot play far riskier moves and be far more paranoid about getting locked out than I intended to. So I kept running the risk- and paranoia-inducing evals alongside the normal fixed code.
Btw the fixed code wasn’t perfect either; I managed to fix it on the last day of the contest though.

Gold showed up and I got to the top 50. But I kept dropping.
I continued to implement fixes, evaluation methods from scratch, and even added a last-position check, to see if I got pushed and react to that. It kept me up, but didn’t do too much.

What helped me a lot to stay up was watching replays and seeing what sort of cases would be a good addition. Added a few improvements here and there, such as pushing the enemy towards the edge and less towards the center, and building more towards the center.

Overall the code was a few evaluations of each move plus the extreme cases. But a lot of the moves had the same value, so I randomized the pick between same-value moves, and that gave me a temporary boost. I guess this is because most players were always expecting a ‘best move’.

Legend got released and I was a bit annoyed I didn’t make it. So I gradually added more fancy stuff, like the enemy tracker, which I then layered into:
visible - last seen - estimated
I would only use the estimated position when I had no clue about the whereabouts of the enemy builders.
If I started with completely ‘hidden’ opponents, I would use the ‘heatmap’ to see what changes were made between my turns to find the enemy.
I think this is the most basic and easiest tracking method you could implement.
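
A minimal sketch of that heatmap idea (grid representation and `neighbours` helper as in the earlier sketches; `my_build` is a hypothetical parameter for the cell I built on last turn):

    def enemy_hints(prev_grid, grid, my_build):
        """Cells where a hidden enemy might stand, deduced from height
        increases I did not cause (a unit always builds next to itself)."""
        hints = set()
        for cell, h in grid.items():
            if h > prev_grid[cell] and cell != my_build:
                hints |= neighbours(cell)
        return hints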

And I guess at this point the contest seemed slightly more interesting and appealing. The only thing I didn’t get around to doing was calculating the area available to my units.
The reason I didn’t add the area-check function is because I messed up somewhere in evaluating the enemy position after my move is performed. Somehow an error kept being triggered when the enemy was completely hidden, regardless of how hard I tried to prevent the check on the map with enemy input index -1, -1.

I really liked the tactical aspect the game had in the end, once I got into chasing the enemy based on the estimated position. But the game overall still lacks too much in flexibility and options.
It ends up being hardcore computation which you can’t keep up with, since you can’t evaluate good / bad moves as easily as the bot. Instead you have to rely on the methodology itself and the outcome to decide whether it’s good, since it ends up looking the same as chess engines, which appear to make strange moves, but clearly make the best moves if you look at their rating.
Why is that bad? You need an automated process that keeps improving the winning method by adding new evaluation methods itself. You aren’t the coder anymore at that point; the code becomes the coder, which pretty much defeats the purpose of you implementing the winning algorithm.

However, with more options and flexibility it all comes down to smart decision making.

Also of great help would be a neural network to find the best values for the evaluation functions. I wasted way too much time on tweaking those values myself. Way too much. Should have instead implemented the area-check function during that time.

2 Likes

Ended up on the lonely island known as 3rd place. :wink:

As per tradition you can read my post-mortem here. I changed the style a bit after feedback from my last one, so I hope you enjoy. Feel free to let me know if you want more details.

30 Likes

Matchmaking: This one is beyond my control and a bit off topic, but it’s important to mention. I think CG’s matchmaking system is broken when a large point distribution happens in the top 10. It was an issue during the Fantastic Bits contest, and it is still the case here. There is no reason to waste 80% of matches in a submit to players 8+ points below, and barely have 10% played against the only other opponent that can give meaningful results. Some say it’s because of the low number of games, but I believe the selection algorithm also needs to be addressed. It is really frustrating to have to wait until legend league opens to have good quality submits (and sometimes, even then…).

Before, the matchmaking selection pool was far smaller, and the top 3 played only against the top 3. @pb4 asked CodinGame to change that, because the top 3 were just tweaking their code to fight against each other.

And I find this interesting. If you look at @Agade and @T1024, @Agade has a slightly better win rate against the top 5 (slightly better = +1%). But @T1024 has a much better win rate against the top 5-10.

So, who should be first? The one with a little advantage against the top 5? Or the one crushing the top 5-10?

3 Likes

If the calculations are done like the chess ELO method, then you gain more points if you win against a good player. So points are balanced by wins/loses according to opponents levels. In chess, it would be a nonsense to be able to win Kasparov but a newbie. So with Magus explanations, I think it’s a good thing to fight against all levels and not only “top x”. But perhaps a normal distribution probability around your current ELO to find opponents could focus a little bit more around your level. (Perhaps this is already the case ?). Indeed, in real life Kasparov will more probably play against Karpov than against the local chess club member.