For the first few days, maybe the first 3 days, I had no clue whatsoever what the winning moves would be if I played the game personally, so I wasn't sure what to tell the bot to do either.
And THIS exactly was the biggest issue I should have approached differently. Thinking back, I should have created my own offline version of the game and played it either against myself or against a basic, dumb AI. I wouldn't normally play this kind of game; it looks very dull to me.
I was about to stop bothering with it altogether. Even as an avid gamer, I seriously hate the game design of WW, but because it was a contest I kept going.
So the first thing I did was output a move. That got me to Wood 2.
I just wanted to see how far I could get with very little.
Then I assigned values to the moves based on height and output the move with the highest value. Wood 3.
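As a sketch of what that looks like (the original code isn't shown, so every name here is mine, not the author's):

```python
# Hypothetical height-based move scoring. Heights run 0-3; stepping
# onto a height-3 cell scores a point in-game, so it dominates.
def score_move(dest_height, scoring_height=3):
    return 10 if dest_height == scoring_height else dest_height

def best_move(moves):
    # moves: list of (action, destination height) pairs.
    return max(moves, key=lambda m: score_move(m[1]))
```

Always taking `max` like this is exactly what makes many moves tie on the same value, which becomes relevant later.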
All I worked on was a few lines handling the MOVE&BUILD action type. I was still planning to write very basic code and see how far it would get.
I got to Bronze without touching PUSH&BUILD yet, so I added that when Silver opened. Why did I add the push-build so late? Because, again, I was about to give up at any moment and just forget about this.
The push-build was only added to reach Silver, since basic move-build alone didn't get me there.
Because I had free time to do anything, I just continued with this contest. I fine-tuned the push-build a bit to make it more annoying and drop enemies where the height difference was greater than 1, since that leaves a gap of at least 2 and they wouldn't be able to climb back.
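The drop rule follows from the movement constraint: a builder can climb at most one level per move. A minimal sketch of the check (hypothetical name, not the author's code):

```python
# A fall of 2 or more levels cannot be climbed back in one move,
# so an enemy pushed down such a drop is effectively stuck there.
def is_permanent_drop(from_height, to_height):
    return from_height - to_height > 1
```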
Here comes the fun part:
I tweaked the code a bit and added extreme values for extreme cases:
- For example, when there was a narrow space where my builder would get walled in by stepping inside, I gave the move a huge negative value.
- Conversely, pushing an enemy into a narrow space and locking them off for good got a huge positive value.
- And of course, avoiding burying the allied builder alive.
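One way to detect these "walled in" cases is to count the exits from a cell. This is a sketch under my own assumptions (square height grid, 4 = capped cell, anything outside 0..3 unusable), not the author's actual check:

```python
# Count neighbours of (x, y) a builder standing there could move to:
# in bounds, usable height 0-3, and at most one level higher.
def escape_count(heights, x, y):
    h = heights[y][x]
    exits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(heights) and 0 <= nx < len(heights[0]):
                nh = heights[ny][nx]
                if 0 <= nh <= 3 and nh - h <= 1:
                    exits += 1
    return exits
```

A move into a cell where this hits 0 would get the huge negative value; the same check on an enemy's cell after a push would flag the lock-in bonus.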
This gave me a HUGE jump to Silver 20, at which point I felt motivated to try and stay at the top.
Soon enough people tried to drop me, so I kept adding minor tweaks. I liked staying at the top but didn't feel like trying too hard.
I then found some mistakes in my code and fixed them, and promptly dropped a lot of ranks. So I undid the fixes and jumped back up. After carefully analyzing the code, I realized the bugs made the bot play far riskier moves and be far more paranoid about getting locked out than I had intended. So I kept running the risky, paranoia-inducing evaluations alongside the properly fixed code.
By the way, the fixed code wasn't perfect either; I only managed to get it right on the last day of the contest.
Gold opened and I got to the top 50, but I kept dropping.
I continued implementing fixes and new evaluation methods from scratch, and even added a last-position check to see whether I had been pushed and react to that. It kept me up, but didn't do too much.
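The last-position check can be as simple as comparing where I told each unit to go against where the referee says it is now. A sketch with made-up names:

```python
# expected_positions: where my own orders should have left each unit.
# observed_positions: the positions read from this turn's input.
# Any mismatch means the enemy pushed that unit since my last turn.
def pushed_units(expected_positions, observed_positions):
    return [i for i, (e, o) in enumerate(zip(expected_positions,
                                             observed_positions))
            if e != o]
```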
What really helped me stay up was watching replays to see what sorts of cases would make good additions. I added a few improvements here and there, such as pushing the enemy towards the edge rather than the center, and building more towards the center.
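Those two biases can be written as simple distance-to-center terms. A sketch assuming a square, odd-sized map and Chebyshev distance (both assumptions are mine):

```python
# Chebyshev distance from (x, y) to the middle of a size-by-size map.
def center_distance(x, y, size):
    c = size // 2
    return max(abs(x - c), abs(y - c))

# Pushing the enemy further from the center scores higher.
def push_target_bonus(x, y, size):
    return center_distance(x, y, size)

# Building closer to the center scores higher.
def build_target_bonus(x, y, size):
    return size // 2 - center_distance(x, y, size)
```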
Overall, the code was a few evaluations of each move plus the extreme cases. But a lot of moves ended up with the same value, so I randomized the pick among equally valued moves, and that gave me a temporary boost. I guess this is because most opponents were always expecting a deterministic 'best move'.
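The tie-break randomization is tiny to implement (a sketch; the names are made up):

```python
import random

# scored_moves: list of (move, value) pairs. Instead of always taking
# the first maximum, choose uniformly among all moves sharing the
# best value, so opponents can't predict a single "best move".
def pick_move(scored_moves, rng=random):
    best = max(value for _, value in scored_moves)
    candidates = [move for move, value in scored_moves if value == best]
    return rng.choice(candidates)
```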
Legend got released and I was a bit annoyed I didn't make it. So I gradually added fancier stuff, like the enemy tracker, which I layered into:
visible - last seen - estimated
I would only use the estimated position when I had no clue about the whereabouts of the enemy builders.
If I started with completely 'hidden' opponents, I would use a 'heatmap' of the changes made between my turns to find the enemy.
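That heatmap can be built by diffing the height grid between my turns: any cell that got taller was built on by someone, and a hidden builder must be standing adjacent to it. A sketch under those assumptions (names are mine):

```python
# Cells whose height rose between my turns; a build I didn't order
# must have been made by an enemy unit.
def changed_cells(prev_heights, curr_heights):
    return [(x, y)
            for y, row in enumerate(curr_heights)
            for x, h in enumerate(row)
            if h > prev_heights[y][x]]

# Enemy builders stand adjacent to the cells they built on, so the
# neighbours of every changed cell are the candidate positions.
def suspect_cells(prev_heights, curr_heights):
    suspects = set()
    for x, y in changed_cells(prev_heights, curr_heights):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if (0 <= ny < len(curr_heights)
                        and 0 <= nx < len(curr_heights[0])):
                    suspects.add((nx, ny))
    return suspects
```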
I think this is the most basic tracking method you could implement.
And I guess at this point the contest seemed slightly more interesting and appealing. The only thing I didn't get around to was calculating the area available to my units.
The reason I didn't add the area-check function is that I messed up somewhere in evaluating the enemy position after my move was performed. Somehow an error kept being triggered when the enemy was completely hidden, no matter how hard I tried to skip the map check for the hidden-enemy input of -1, -1.
I really liked the tactical aspect the game had in the end, once I got into chasing the enemy based on the estimated position. But overall the game still lacks too much in flexibility and options.
It ends up being hardcore computation you can't keep up with, since you can't evaluate good and bad moves as easily as the bot can. Instead you have to rely on the methodology itself and the outcome to decide whether it's good. It ends up looking like chess engines, which appear to make strange moves but clearly make the best ones if you look at their rating.
Why is that bad? You end up needing an automated process that keeps improving the winning method by adding new evaluation methods on its own. By that point you aren't the coder anymore; the code becomes the coder, which pretty much defeats the purpose of you implementing the winning algorithm.
However, with more options and flexibility, it would all come down to smart decision making.
A neural network to find the best values for the evaluation functions would also have been a great help. I wasted way too much time tweaking those values myself. Way too much. I should have implemented the area-check function in that time instead.