The topic where you can tell how you liked the challenge, & how you solved it.
Thanks again JohnnyYuge, nmahoude & eulerscheZahl for this amazing contest.
Great contest, very noob-friendly. Easy to start, hard to master. Heuristics can compete with simulations, but only up to a limit.
I'm not sure why the wood/bronze bosses were hard for many people. I found them too easy. A simple "run from wanderers towards explorers" instantly goes to Silver.
The only thing I wish had been done is to make non-grouping strategies viable. That could happen when sanityLossAlone = sanityLossInGroup, but I'm not sure it ever does. If it does, it's too rare to account for.
I believe I'll finish somewhere between 5th and 60th (1v1v1v1 ranking …)
My strategy is the same as in many contests:
My search algorithm is a minimax with alpha-beta pruning against the nearest player. Most of the time I can reach depth 12 or 14 (so it's 6 or 7 moves for each of us).
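The pruning idea behind this can be sketched on a toy game tree. This is a minimal, self-contained illustration, not the actual bot: nodes are nested lists, leaves are evaluation scores, and in the real bot the children would come from simulating legal moves for each player instead.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaf node or depth limit: return the static evaluation.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the minimizing player will avoid this branch
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # prune symmetrically for the maximizer
                break
        return value

# Depth 2 = one move for each of us; depth 12-14 in the post means 6-7 each.
tree = [[3, 5], [2, 9]]
best = alphabeta(tree, 2, float('-inf'), float('inf'), True)
```

The cut-offs are what make depth 12-14 reachable within the time limit: whole subtrees are skipped once it is proven the opponent would never let the game enter them.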
About the contest itself:
Once again my feeling is that the contest creators tried to create an "anti-simulation contest" because "eh eh, we will prevent simulation from winning, it will be fun". And once again simulations just crushed heuristics. A complex engine will not prevent simulations from winning. It will just make life harder for both simulations and heuristics… For the last time: if you want to create a "heuristic friendly" contest, there are only two solutions: a massive branching factor (GitC, Platinum Rift) or an unpredictable future (fog of war, like in Codebusters). Forget the complex engine.
Too many rules. Slashers too complex. And I'm surprised, because in the CodinGame guidelines we can find: "No extra rules for the sake of complexity, e.g. (no bonus pickups, no once-per-game boosts, no hero selection). The game's core must be enough."
1v1v1v1 games are often bad because of the ranking system. The only good example of a 1v1v1v1 game is Hypersonic, because Hypersonic is based on farming, with little interaction between players.
Many bugs in the referee. In fact, I don't really care about the bugs, since there were no game-breaking ones. But at least fix the statement…
The referee's conception and architecture are weird. The code was not easy to read, with a lot of dead code (I believe some features were abandoned near the end).
I don't know why, but the viewer just crashed and crashed again… same behavior in Chrome and Firefox. If I play too many games in the same tab, it crashes. If I submit, it crashes because the viewer tries to show up in the last-battles panel. I hope it will be fixed before the multiplayer puzzle.
But Code of Kutulu has an interesting new aspect: it is not a zero-sum game, because each player will try to stay near the others. It was interesting to think about. When you use a minimax or paranoid algorithm and put a distance criterion in your evaluation function, you build in the hypothesis that opponents will try to move away from you, and that is not a valid hypothesis in most cases.
My opinion on community contests at the moment is this:
When we created Mean Max, we knew it was complex. But Fantastic Bits was one year earlier. And god damn it, it was not a challenge to create even more complex contests after it… Stop that mess…
First things first: semi-cooperative would not have worked with fewer players, but maybe 2vs2 would have been a better choice.
First strategy got me to Wood 2:
MOVE 1 1
Second strategy got me to Bronze:
Third strategy went up to Gold:
No time for: handling slashers, finding a use for light and yell.
I really enjoyed the contest. I might have to agree with Magus on the contest being too complex, had I spent more time on it.
I really liked the idea behind this contest, having to work together with the other bots (and their unpredictability) while also working against them. Great job on the idea, nmahoude and JohnnyYuge; also thanks to our local Frog for bugfixing so much, I guess.
Finished ~30th in Gold. I still couldn't find the motivation to write anything other than if-else spaghetti, only looking at the next move, but I think it could have worked for Legend if I had implemented the Slasher movement correctly and hadn't run back into Wanderers when I got separated from the group.
This is how it went for me:
Wood 3 to Wood 1:
Wood 1 to Bronze:
Bronze to Silver:
Silver to low Gold:
I stayed on low Gold after that for some time, until I finally implemented the Slashers.
Low Gold to high Gold:
I have a whopping 172 if-statements in my code, but it kinda worked…
I really had fun with this contest, and I'm looking forward to the good work from other people still to come.
By the way, @_CG_SaiksyApo: It's Code of Kutulu, not Kukulu
BlitzProg 120th.
Took Monday and Tuesday off work for CodinGame, which I ended up using to fix a friend's computer and deal with some social events. When I went back to work, everyone had a cold, and they gave it to me for the rest of the challenge. I ran out of pokédollars so I couldn't fix myself with a Full Heal.
Wood 3 to mid Silver: quick AI to explore the early leagues.
I copy the map, put a fake wall on and around everything that's not an explorer. I then have two cases:
Mid Silver to high Gold: Monte Carlo with heuristic, depth 6.
Didn't have the motivation to finish my simulator completely, so it had prediction errors. In addition, I was also unable to get any AI working for the player dummies, because they would always make a misleading choice. So I ended up treating them as still ghosts that enemies do not target. (edited)
Because of the above I had to treat prediction errors in a fail-safe way, meaning I would dodge a threat even if it didn't happen. However, many players with better simulations would detect there was no threat, and I would end up making choices that separated me from the rest of the team.
Lots of adjustments were made to ensure I would prioritize being near an explorer, even if it meant I had to run in the path of a Wanderer.
Abilities are attempted whenever the Monte Carlo decides not to move. The higher my sanity, the harder I try to find a better opportunity (like causing more damage with a Yell, or using a plan near every player).
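The "Monte Carlo with heuristic, depth 6" approach above can be sketched in a few lines. This is a self-contained toy, not the author's code: the state here is just a position on a line with a hazard at the origin, standing in for the real maze, minions and sanity scoring. For each candidate first move, many random continuations are played out and the move with the best average heuristic score wins.

```python
import random

HAZARD = 0

def score(pos):
    # Heuristic stand-in: farther from the hazard is better.
    return abs(pos - HAZARD)

def rollout(pos, depth, rng):
    # Play a fully random continuation for the remaining depth.
    for _ in range(depth):
        pos += rng.choice((-1, 1))
    return score(pos)

def best_move(pos, depth=6, samples=200, rng=None):
    rng = rng or random.Random(42)   # seeded for reproducibility of the sketch
    averages = {}
    for m in (-1, 1):                # candidate first moves
        averages[m] = sum(rollout(pos + m, depth - 1, rng)
                          for _ in range(samples)) / samples
    return max(averages, key=averages.get)
```

Because the continuations are random, a move only looks good if it tends to end well on average, which naturally encodes the "fail-safe" dodging described above: a first move adjacent to a threat scores badly across rollouts even when the threat might not materialize.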
The contest was very interesting. I liked the idea of staying close to other explorers; it gave a cooperative touch to the contest. I wish to see more of that in the future.
Will probably finish in top 20 of gold (around 100 overall).
My strategy is simulation based (not all rules were fully implemented, e.g. LIGHT). It uses alpha-beta search; only enemy explorers within walk distance 4 are fully investigated. A small quiescence search is performed at the end to check if my bot can reach a junction without getting caught. This usually results in a depth of 3 moves (me + other explorers) when all are close. Evaluation is mostly based on my explorer's rank, sanity, and the average sanity of the other explorers.
However, this did not work well for me. I don't know why exactly, but probably a combination of:
Unfortunately I did not have time enough to investigate the cause of this failure or try alternative approaches. It will be interesting to read the other strategies.
But most importantly: I really liked this contest! There was a lot of fresh thinking by the game creators. I really liked the cooperative part, and also the Lovecraftian atmosphere that permeated the whole game. The rules were IMO the simplest of all community contests so far. And it is also nice that it is possible to create a decent bot (some python bots will make it to top 20). Thanks a lot to the creators! And to codingame for hosting this competition!
Thanks for the contest, I love the Lovecraftian theme!
I got to 203rd place (131st in Gold) using a very simple approach: just check how much sanity I'll have after moving one tile in each direction (give or take: going toward friends increases sanity, going towards creeps lowers it) or doing PLAN/YELL/WAIT, without any long-term planning. I even got to Gold while still using distance = abs(a.x-b.x) + abs(a.y-b.y), and switching to cached BFS distances only helped by about +40 ranks. But unit testing and keeping it simple helped a lot (e.g. I didn't do LIGHT).
If someone wants to keep playing offline and improving their solution, I've put the compiled cg-brutaltester and the game referee here, ready to run out of the box: https://github.com/fala13/CoK_artifacts (I'll do proper linking and code for both repos later).
Floyd-Warshall is a bad choice for an unweighted graph: the O(n^3) work could have been reduced to O(n^2) by running a BFS from every cell, which computes the same thing (the shortest path from every cell to every other cell).
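Concretely, the suggestion is one BFS per walkable cell. On a grid each BFS is O(cells) since every cell has at most four neighbours, so the full all-pairs table costs O(cells²). A minimal sketch on a made-up 5x5 map (the map and cell encoding are illustrative, not from the referee):

```python
from collections import deque

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]

def bfs_from(start):
    # Standard BFS on the unweighted grid: first visit = shortest distance.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if GRID[ny][nx] != '#' and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

# All-pairs shortest paths: one BFS per walkable cell.
cells = [(x, y) for y, row in enumerate(GRID)
         for x, c in enumerate(row) if c != '#']
all_pairs = {c: bfs_from(c) for c in cells}
```

After the first turn, any "real" walking distance is then a dictionary (or array) lookup, which is what several posts in this thread use for wanderer and explorer distances.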
Rank 24
I was going up and down around rank 5 before I broke my bot at the end of the contest, to avoid discussions about a tester affecting the winner. I rarely even hit the 1 ms/turn mark now.
I already had an accurate simulation of the game before the contest started (the silver boss is a nerfed version of my testing bot that waits every 10th turn), which allowed me to get about 4000 single simulation steps.
My basic algorithm is a breadth first search. I explore all possible first moves and further expand these nodes as long as I have time left. There are often several different move sequences that lead to the same resulting board state. These equivalent states can be detected efficiently with Zobrist hashing.
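The transposition detection can be sketched with Zobrist hashing: assign one random 64-bit key per (unit, cell) pair and XOR together the keys of the current placement. Two move sequences reaching the same placement XOR to the same hash, and a single move is an O(1) incremental update. The unit/cell counts below are arbitrary, not taken from the game:

```python
import random

rng = random.Random(0)            # fixed seed so the table is reproducible
N_UNITS, N_CELLS = 8, 200
ZOBRIST = [[rng.getrandbits(64) for _ in range(N_CELLS)]
           for _ in range(N_UNITS)]

def state_hash(positions):
    """positions[i] is the cell index of unit i."""
    h = 0
    for unit, cell in enumerate(positions):
        h ^= ZOBRIST[unit][cell]
    return h

def move(h, unit, old_cell, new_cell):
    # Incremental update: XOR out the old placement, XOR in the new one.
    return h ^ ZOBRIST[unit][old_cell] ^ ZOBRIST[unit][new_cell]
```

Because XOR is commutative, "unit 0 moves, then unit 1 moves" hashes identically to the reverse order, so equivalent search nodes can be dropped with a set lookup instead of comparing full states.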
I use Floyd-Warshall to precompute all paths and have a lookup table for the wanderer movements. Light effects are ignored completely, as my attempts to add them to the path length just made it worse.
Initially I try to predict my opponents' actions, to then follow the group and lose less sanity. I do that by running a search with a fixed depth of 5 for each opponent independently, assuming that everyone else just waits and without allowing skills (light, plan, yell).
After that I plan my own actions based on these predictions. As I can't really know how opponents will move (except if they were stunned before or I yell at them), I increase the minion distance a little bit for them and make wanderers attack me when I'm at a similar distance as the opponent. I only use plan and yell (and only on the first turn) in that step, no light at all.
Plans are cast if I'm low on sanity or if I have several explorers around me.
I yell if I'm low on sanity (don't waste that precious action), if the stunned opponent is a blocker that kills the minions for me and helps me escape (this is done by adding a cost to a yell, but less than the cost of colliding with a wanderer), or if it deals huge damage (at least three wanderers).
My scoring is mostly my sanity, but it also takes into account how close I am to other explorers (mostly the closest one, less the others). Not being targeted by wanderers is a bonus too. Being in LoS of a rushing slasher costs me the equivalent of 5 sanity (unless he attacks me, no need for a double penalty). I have a decay over time (factor 0.9), as future moves get increasingly uncertain and I want to avoid damage now and maybe take it later.
When I wrote about BFS above, that's not entirely correct, as I use beam search: after each depth of my search I remove all nodes that give me a bad score (the equivalent of 50 sanity less than my best plan) and only keep the best 250 nodes - though I only hit that second condition at the end of my own search, not for the depth-5 opponent prediction.
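That two-condition pruning (score margin below the best, then a hard width cap) can be sketched as one beam step. This is a toy: states are just (score, label) pairs and `expand` is a dummy move generator, where the real bot would simulate moves and evaluate the resulting boards.

```python
def beam_step(nodes, expand, width=250, margin=50):
    # Expand every surviving node to the next depth.
    children = [child for n in nodes for child in expand(n)]
    if not children:
        return []
    best = max(score for score, _ in children)
    # Condition 1: drop anything more than `margin` below the best plan.
    survivors = [c for c in children if c[0] >= best - margin]
    # Condition 2: keep at most `width` nodes, best first.
    survivors.sort(key=lambda c: -c[0])
    return survivors[:width]

def expand(node):
    # Dummy generator: each state branches into a good and a terrible successor.
    score, state = node
    return [(score + 1, state + 'L'), (score - 100, state + 'R')]

frontier = [(0, '')]
for _ in range(3):
    frontier = beam_step(frontier, expand, width=4, margin=50)
```

With a 100-point drop and a 50-point margin, the 'R' branches fall outside the margin at every depth, so only the 'L' line survives - the same effect that keeps the real search focused on plans that do not bleed sanity.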
After the beam search I replace wait actions by light, if there are any wanderers targeting me and another explorer is far enough away from myself (to make that explorer the new target).
In case only two explorers remain and mine has the higher sanity left, I completely ignore isolation and only avoid minions (when I'm isolated, so is my opponent, and he'll die before me).
Finally ranked 122nd in Gold.
I wanted to implement an MC tree search at first, but I thought it too complex to code within the scope of a simultaneous-move, 4-player game. Moreover, this contest is very challenging because of the cooperative aspect. I feared minimax or MCTS with a paranoid strategy would not give interesting results; maybe I was wrong and the best solutions did use some sort of tree search algorithm?
So I went with a simpler Monte Carlo simulation, 5 turns ahead, with wanderer moves and rushing slashers, up to the Silver league.
Then I added the plan rule and the other slasher states to go to Gold (I had to look at the referee code to clarify some statements). And finally I added the yell rule to gain a few places. I ignored the light effect, because I chose to only use distances precomputed at init time; I don't know if it would have improved things.
I didn't care too much about shelters either, as I essentially relied on other explorers and "stupidly" followed them!
I had been in the top 50 of Gold on Saturday but lost some places because I focused on simulation at the cost of a good heuristic.
Rank kind of irrelevant since the ladder system has its issues and these issues become even clearer with this contest.
Bot can rank between #45 and #60 in Legend. So far my best contest result.
My experience throughout this contest, quick write up:
Joined on Friday to watch the stream, but the TV was on with football playing, so I had to give up on it.
Reached Wood 2 by outputting "MOVE 11 8".
Wood 1 was indeed an easy, fun, short if-else write-up.
Stuck! I was actually stuck in Wood 1 and so were many others. The first Wood 1 boss was horrible. It felt like I needed to code for the Gold league already.
Also, slashers were confusing.
Bronze: I had to write a sim to get out of Wood 1.
Started off with my Wondev Woman approach (the code is available online if anyone is interested) and added a sim. The only thing I lacked was a better way to select the best direction, which I fixed by adding a simple algo; from there on the contest became very easy, mainly because of the sim.
When I reached Bronze I realized I was simulating the explorer with ID 0 instead of the first one given in the input, as I assumed the first in the input has ID = 0.
I didn't even look at the referee this entire contest because the code was hard to read. The best referee so far was Code Royale's; it's in fact the tidiest referee I've seen so far.
Maybe Kotlin helped with actually making it look cleaner.
So from here on I fixed some bugs, added slashers, added plans, added lights and finally added yells. This got me to around spot 20, where I stayed for most of the contest.
Silver: not much was changed. I kept tweaking my depth-0 sim. It was simulating my explorer's movements.
Gold: added "depth 0.1" to stay at the top of Gold.
This basically consisted of trying to predict how hurt my explorer gets in 2 moves. That's it. Still hovering around 20.
Around this time some people were finally done copying the referee and simulating, probably to a greater depth than I bothered doing it.
Added wanderers, slashers and the other explorers to my sim. Added a simple wall-in-between check to exclude the ones that are too far away to be dangerous.
Legend
I thought this was when I finally had to add pathing. Well, I might have added it, but I didn't have any time during the weekend.
What I intended to do was to either:
My code is around 500, maybe 550 lines of random stuff after last night's attempts to climb further into Legend, as I was more excited about experimenting with other things than doing the obvious.
I added an incomplete depth 1 to my sim. Slashers aren't worth the trouble IMO. And with the pathing I just described above, I guess rank 20 would have been easily within reach again.
About my solution and some hints:
Everything is very straightforward in terms of calculating the best next action. It's when you want to go deeper than depth 1 that things get messy and complicated, because each bot has a different approach to trying to survive.
When these approaches differ too much from one another, things start to look random. But overall there is still a best move, with occasionally 2 options.
I only checked possible moves next to me, and the overall inspiration for finding the best route/direction is the slime mold algorithm. It acts pretty much the same as my bot: it's looking for food (the goal) but has no idea where it is, so it expands where it can.
At its core the algo is basically:
To better explain: you can treat explorers and shrines as the food source, and all the minions wandering around can be treated as unappealing edges.
I even treated light and plan as food sources.
I reinforce preferred routes through the simulation. This is basically the most important part: reinforcing the correct route means you can reach a high rank in Gold easily with just one neighboring tile checked.
The best part is you don't need any pathing; you can just increase the depth, which is more reliable since paths are in a way blocked and unblocked by minions.
Overall impression
I did like it for most part, with few exceptions:
The good parts:
Honestly, itās a nice game overall.
Hello everyone!
I ended up 178th (107th in Gold).
Thanks for this contest, JohnnyYuge_of_kutulu, nmahoude and Toad_of_Kutulu.
I started the contest on Monday and I barely managed to score more points than the Wood 1 boss (to Bronze), but after this league I overtook around 800 players and ended up 100th.
This was due to the incredible number of players in the Wood 1 league when I started the contest (more than 500), the low number of rating matches and, I think, the RNG of this game.
Heuristic:
My heuristic was quite simple but can be improved. Here is Python-like pseudocode of my entire algo:
for each cell with manhattanDistance(cell, myExplorer) < 2 and cell is not a wall:
    cell.score = -5.0 * sum(1.0 / (manhattanDistance(cell, w) + 1.0)
                            for w in wanderers if w.state != 'SPAWNING')
    cell.score -= 2.0 * sum(1.0 / (manhattanDistance(cell, s) + 1.0)
                            for s in slashers if s.target == myExplorer.id
                            and (s.state == 'SPAWNING' or s.state == 'STALKING'))
    for p in opponentExplorers:
        if manhattanDistance(cell, p) < 2:
            cell.score += 1.0
        else:
            cell.score += 1.0 / (manhattanDistance(cell, p) + 1.0)

bestCell = cell with the best score
if bestCell == myExplorerCurrentCell and myExplorer is not using another item:
    if myExplorer.plansLeft > 0 and myExplorer.sanity < 210:
        print('PLAN')
    else if myExplorer.lightsLeft > 0:
        print('LIGHT')
if myExplorer did not use an item:
    printMoveToBestCell()
Improvements:
Change the score coefficients of the wanderers, slashers and explorers. But over 5 different submits, these numbers seemed acceptable.
Use the YELL. I tried to use it if the distance between my explorer and the closest explorer was strictly less than 2 and the distance between my explorer and the closest wanderer was strictly more than 0. Unfortunately, the submit after this change was worse than before, so I removed this feature.
I precalculated the "real" distance between each pair of cells of the map during the first round and used it instead of the Manhattan distance for the wanderer and slasher distances to my explorer (keeping the Manhattan distance for the distances to other explorers). The test submit was not really good (I ended up at the same rank). I actually kept the real distance for the wanderer score in my final algo (but not for the slasher distance: given my poor slasher-dodging abilities, I didn't need an accurate distance there). I think the real distance to the wanderers helped me reach this final rank, but not in a significant way. The only difference between the code above and my final code is the real distance for the wanderers (I wanted to show a simple way to reach Gold with this pseudocode).
The contest was fun. I figured out how to reach my top-200 goal with a basic scoring algorithm; my job is done.
Some clarifications,
There is a bug in the method acquireClosestTargets for Slashers: if only one explorer is on the same cell as the slasher, a list containing this explorer four times is returned, and a Slasher in WANDERING mode will not change its state to PREPARING_RUSH.
private List<PlayerUnit> acquireClosestTargets() {
    int minDepth = Referee.maze.dim.tchebychevDistance();
    List<PlayerUnit> targets = new ArrayList<>();
    // Check each direction one after the other
    for (int i = 0; i < Constants.MAX_DIRS; i++) {
        for (Cell cell = Referee.playfield.getCell(pos); cell.isWalkable(); cell = cell.cellFromHere(Constants.vdir[i])) {
            List<PlayerUnit> possibleTargets = Referee.playersOnCell(cell);
            if (possibleTargets.isEmpty())
                continue;
            int depth = cell.pos.tchebychevDistance(pos);
            if (depth < minDepth) {
                targets.clear();
                targets.addAll(possibleTargets);
                minDepth = depth;
            } else if (depth == minDepth) {
                targets.addAll(possibleTargets);
            }
            break;
        }
    }
    return targets;
}
I seem to be stabilizing around 53rd.
I made a heuristic AI because I wanted to get better at that. First time going to Legend.
Nothing really creative in my AI:
- watching the distance between me and each wanderer
- not being in LoS of slashers
- being close to other explorers (especially the nearest one)
About the contest: I think there were way too many rules, and with only 3 lights / 2 plans / 1 yell per target it was hard to settle on the best possible use per game.
I like the idea of the slasher, but I found it weird that it damages only the targeted cell and not the whole line between the slasher and the target, or at least stops at the first explorer encountered.
The idea of coop was pretty interesting, but it didn't let players be creative about strategies: even when the loss alone equals the loss grouped, the plan is better grouped and yell helps a lot with winning.
Shelters were almost useless, since a lot of games were done before turn 50 and they didn't give that much sanity back, given that you often can't stay more than 2 turns on one.
I enjoyed Code of Kutulu as it was my first time ending up in legend league (#44 at the moment) in a CodinGame contest.
Strategy
About The Contest
Thanks for this contest, people of Kutulu! May you scare away the Deep Ones and may you find a way out of the crypt of Kutulu, brave explorers.
Thanks JohnnyYuge, Nmahoude and Toad for this great game
Wood to Silver
A simple heuristic, as I did not want to waste too much time on it. The goal was only to have all the rules and, if possible, not to be stuck in a lower league:
In the Bronze league, between these two strategies, I added the use of PLAN and YELL.
And thatās all!
Silver to Legend
I spent the whole contest creating a simulation reusing parts of the referee (as I code in Java), paying attention to performance.
My first version, ready to use on Friday, helped me reach Gold. Fixes on Saturday and the addition of dummies sent me to Legend, beating Abdul!
It is the first time I wrote unit tests to validate, first my model, then my simulation. For that, I have a flag to display only the inputs of my model as sent by CG on the console (method toOutput in the referee). To test, I copy/paste the output of CG into a file and parse it, calling my simulation:
The random target choice of the minions makes the second test a little bit hard…
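The replay-test idea can be sketched like this: feed the recorded input of turn N to the simulation and check that its prediction matches the recorded turn N+1. Everything here is a hypothetical stand-in: `parse_state` and `simulate` replace the real model, and the "recorded" turns are a toy two-field state (turn counter, sanity) instead of the full CG console dump.

```python
def parse_state(lines):
    # Stand-in parser for the copy/pasted CG input of one turn.
    turn, sanity = map(int, lines)
    return {"turn": turn, "sanity": sanity}

def simulate(state, actions):
    # Toy rule standing in for the real engine: WAIT costs 1 sanity per turn.
    cost = 1 if actions == ["WAIT"] else 0
    return {"turn": state["turn"] + 1, "sanity": state["sanity"] - cost}

# Two consecutive turns as dumped on the console.
recorded = [["0", "250"], ["1", "249"]]

predicted = simulate(parse_state(recorded[0]), ["WAIT"])
assert predicted == parse_state(recorded[1])   # the sim reproduces the referee
```

As the post notes, the minions' random target choice makes strict equality checks hard for some turns; those tests either fix the random choice or compare only the deterministic parts of the state.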
As my simu was ready really late, I only implemented a Monte Carlo (fully random) algorithm.
First I used depth 1 and the WAIT action for the other players, so I could concentrate on my evaluation function.
When it worked pretty well, beating my heuristic, I set a depth of 3, and at the end 5.
To reach Legend I added dummies, so the other players do not always wait but try to avoid minions (the first part of my heuristic strategy).
At the end, on Sunday, I optimized my code to play 10-12k simulation steps against 3k before. I also use my evaluation function to play the first move of my dummies (only moves; no plan, light or yell for them).
Used to evaluate the state of a game, returning a score:
I evaluate each turn at each depth, multiplying by a decreasing depthCoefficient so that the best actions are done first.
Data are precalculated at turn 0.
Position
A Position instance (coords x, y) is created for each cell of the board and saved in an array, so I never create new Positions later and avoid wasting time on allocation and garbage collection.
Distance
For each Position I created an array of distances to every other position of the board using a breadth-first search.
Path
For each Position I created 4 arrays of next positions toward each destination position, also using a breadth-first search. It is used with a turn offset to determine the next minion moves.
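A next-move table like this falls out of a BFS run backwards from each destination: while flooding outwards, record for every cell the neighbour it was discovered from, which is by construction one step closer to the destination. Moving a minion toward its target is then one lookup per turn. The grid below is illustrative, not the real maze (and a single next-move per cell is shown, rather than the author's 4 direction arrays):

```python
from collections import deque

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]

def next_step_table(dest):
    """next_move[cell] = the cell to step onto in order to approach dest."""
    next_move = {dest: dest}
    queue = deque([dest])
    while queue:
        cur = queue.popleft()
        x, y = cur
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if GRID[nb[1]][nb[0]] != '#' and nb not in next_move:
                # First discovery during BFS = a shortest route toward dest.
                next_move[nb] = cur
                queue.append(nb)
    return next_move

toward = next_step_table((3, 3))   # e.g. a minion chasing an explorer at (3, 3)
```

Precomputing one such table per destination trades memory for time, which matters when the simulation replays thousands of minion moves per turn.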
Light and Plan Effect
For each Position I saved a list of positions where the effect will apply.
I also did some tricks using these data to really simplify the slasher target acquisition.
Really fun game, and really hard to get a good simulation. I think I can optimize a lot of things using pooling and changing the model a bit. The use of unit tests was really helpful to validate my simulation and avoid regressions when I optimized the code.
I'm waiting for the multiplayer game to try other strategies like minimax or beam search, as I did not have enough time during this contest.
I am ranked about ~50 in gold (121 overall).
I liked this contest because I was able to do well using heuristics.
First I would check if I would be safe for the next turn (i.e. not in LoS of a slasher and more than 1 tile away from all wanderers). If I was safe, then I would check whether it made sense to yell (if it would cause the person standing on me to get hit by an enemy), or whether I should plan/heal (based on current sanity and the number of explorers near me), or whether I should use light (based on the number of mobs targeting me).
If I didn't use a special, then I would move. My heuristic was based on adding costs to all the tiles, and then I would search either 3 tiles deep or a minimum of 8 tiles to find the lowest-cost tile to move to. For tiles more than 1 tile deep, I would average their cost with the earlier tiles, so that I wouldn't run through a pack of wanderers to get to a shelter.
Things that added cost to a tile:
Nearness to wanderer or slasher
LOS of slasher
Dead end
Wanderer spawn location
Things that reduced cost to a tile:
Shelter with energy
Near explorer
Near plan
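The depth-averaged cost idea above can be sketched on a tiny made-up graph. The costs and adjacency here are illustrative, not the author's weights: tile 1 carries a wanderer-like penalty, tile 2 is a free "shelter" behind it, and tile 3 is a mildly costly detour. Averaging a path's costs (instead of scoring only the endpoint) makes the cheap shelter unattractive when reaching it means crossing the expensive tile.

```python
COST = {0: 1.0, 1: 9.0, 2: 0.0, 3: 2.0}          # hypothetical per-tile costs
NEIGHBOURS = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}

def best_first_step(start, depth=3):
    best = (float('inf'), None)                   # (avg cost, first step taken)

    def walk(tile, path_costs, first, d):
        nonlocal best
        if first is not None:                     # don't score "stand still"
            avg = sum(path_costs) / len(path_costs)   # average over the path
            if avg < best[0]:
                best = (avg, first)
        if d == 0:
            return
        for nb in NEIGHBOURS[tile]:
            walk(nb, path_costs + [COST[nb]],
                 first if first is not None else nb, d - 1)

    walk(start, [COST[start]], None, depth)
    return best[1]
```

From tile 0 the search prefers the detour to tile 3: every path through the wanderer tile averages high even though the shelter behind it costs 0. Scoring only the endpoint would instead send the bot straight through the danger, which is exactly the failure mode the averaging was added to prevent.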
It took a lot of tweaking for my bot to do well. For instance, up until the last day my bot was so terrified of slashers that it would run through a gang of wanderers just to stay out of a slasher's LoS.
The last thing I did which helped a lot for the larger maps was to just tell my explorer to run to the nearest explorer for the first several turns.
Overall I thought this was a really great contest. My favorite part was how the game was semi-cooperative: you needed to stay near the other explorers to stay alive, even though you're all hoping for each other to go insane.