# A* Craft - Feedback & Strategies

The topic where you can tell us how you liked the challenge and how you solved it.

Thanks again Agade, Automaton2000 and Magus for this amazing contest.

The maximum known score on the testcases: 11982

```
Test	Score
01	23
02	46
03	22
04	44
05	11
06	100
07	60
08	29
09	86
10	82
11	144
12	104
13	354
14	128
15	516
16	618
17	580
18	218
19	585
20	988
21	790
22	1044
23	476
24	316
25	901
26	1063
27	771
28	249
29	784
30	850
```

The maximum known score on the final validators: 11658

Thatāsā why you might experience a lower score.

2 Likes

My code was a simple hill climbing algorithm.
At every step, I try to make 1 mutation, then 2 mutations, then 3 mutations, then 4 mutations, then 5 mutations. On the last test case I can make 650000 simulations.
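
As a rough illustration, the loop described above might look like this in Python; the `score` and `mutate` callbacks are placeholders for the game simulation and arrow changes, not the actual contest code:

```python
import random

def hill_climb(board, score, mutate, iterations=1000, rng=random):
    """Hill climbing as described in the post: repeatedly try 1, then 2, ...
    up to 5 mutations, keeping a candidate whenever it beats the best so far.
    `score` and `mutate` are problem-specific stand-ins."""
    best, best_score = board, score(board)
    for _ in range(iterations):
        for n_mutations in range(1, 6):
            candidate = mutate(best, n_mutations, rng)
            candidate_score = score(candidate)
            if candidate_score > best_score:
                best, best_score = candidate, candidate_score
                break  # improvement found: restart from 1 mutation
    return best, best_score
```

Each outer iteration costs up to five simulations, which is consistent with a budget of several hundred thousand simulations on the biggest test case.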

But I had 3 weeks to make my code, and I'll finish something like 30th. I'm such a noob at my own contest.

Iām very curious and iām waiting for the top 10 post mortems. I hope you enjoyed the contest !

9 Likes

I finished 36th with ~9000 points.

I had a simulated annealing search which would change 1 or 2 arrows from one attempt to the next. I had:

• 20% chance to place an arrow, 80% chance to try an empty platform cell
• Exponential annealing schedule: Temperature=100 exp(-5 t) with t the used time from 0 to 1
• Reset to best solution every 256 attempts
• Around 350 000 boards tried in 1s for test case 30
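
A sketch of that schedule with a standard Metropolis acceptance rule (the acceptance formula is an assumption; the post only specifies the temperature curve):

```python
import math
import random

def temperature(t):
    # The post's exponential schedule: T = 100 * exp(-5 t),
    # with t the fraction of time used, in [0, 1]
    return 100.0 * math.exp(-5.0 * t)

def accept(delta, t, rng=random):
    """Metropolis criterion: always keep improvements; keep a worsening
    move of score change `delta` (< 0) with probability exp(delta / T)."""
    if delta >= 0:
        return True
    return rng.random() < math.exp(delta / temperature(t))
```

Early on (T near 100) almost any move is accepted; by the end (T near 0.67) only improvements survive, which is what makes the reset-to-best every 256 attempts useful.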

There are a lot of things that would have improved my score but one thing I would have been really interested to try is to exploit the symmetries present in many of the test cases. Many test cases had an axial or rotation symmetry. This means that the optimal solution most likely also has this symmetry. Therefore one could reduce the search space dramatically by detecting said symmetries and only searching on half/a quarter of the map, assuming symmetrical placement of arrows on the rest of the map. This would also allow more simulations per second because you would only need to simulate the robots in one half/quarter of the map.

Edit: The symmetry idea is probably bad, see follow-up posts.

7 Likes

@Agade can you please give an example where symmetrical arrow placement (or nearly symmetrical) could have worked on big platforms?

I thought all the optimal solutions would involve circular or vortex like motions.

I enjoyed playing a bit since I wasn't spoiled by the game creation. The fact that the core simulation was simple was a plus.

My strat was very basic:

• Find sweet arrow placements (dead ends, intersections, or cells near an already placed arrow). I don't bother with corridors.
• Place (or not) random arrows on each point
• Remove deadlocks: two adjacent cells with opposite directions
• Run the map
• Keep the best and retry

It gave me around 8k points.
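
The deadlock-removal step can be sketched like this, assuming arrows are kept in a dict keyed by cell coordinates (map wrap-around, if any, is ignored):

```python
# Opposite direction and the offset from a cell to the cell its arrow points at
OPPOSITES = {'R': ('L', (1, 0)), 'L': ('R', (-1, 0)),
             'D': ('U', (0, 1)), 'U': ('D', (0, -1))}

def remove_deadlocks(arrows):
    """Delete pairs of adjacent arrows pointing at each other, which would
    trap a robot in an instant loop. `arrows` maps (x, y) -> 'U'/'R'/'D'/'L'."""
    dead = set()
    for (x, y), d in arrows.items():
        opp, (dx, dy) = OPPOSITES[d]
        if arrows.get((x + dx, y + dy)) == opp:
            dead.add((x, y))
            dead.add((x + dx, y + dy))
    return {c: d for c, d in arrows.items() if c not in dead}
```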

2 Likes

I got to 9.4k (and, for a few hours, to 1st place) on the test cases (my final score is still unknown) by writing a very simple GA: every platform cell that wasn't a corridor became one gene (./U/R/D/L) and I started with a single individual (no arrows placed). Then, until I ran out of time, I kept doing a fixed number of mutations (change 1 arrow) and crossovers (random combination of arrows from 2 individuals), and I kept the 80 best individuals (determined by simming the game).
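
A toy version of that GA loop; the ./U/R/D/L gene encoding and the keep-80 selection follow the post, while the fitness, mutation, and crossover details below are simplified stand-ins:

```python
import random

ARROWS = ".URDL"  # '.' means no arrow placed on that gene

def evolve(population, fitness, gene_count, keep=80, generations=40, rng=random):
    """Toy GA loop per the post: mutate and cross the pool each generation,
    then keep the `keep` fittest individuals (in the contest, fitness was
    the simulated game score)."""
    def mutate(ind):
        child = list(ind)
        child[rng.randrange(gene_count)] = rng.choice(ARROWS)
        return "".join(child)

    def crossover(a, b):
        # Pick each gene randomly from one of the two parents
        return "".join(rng.choice(pair) for pair in zip(a, b))

    for _ in range(generations):
        children = [mutate(rng.choice(population)) for _ in population]
        children += [crossover(rng.choice(population), rng.choice(population))
                     for _ in population]
        population = sorted(set(population + children), key=fitness,
                            reverse=True)[:keep]
    return population[0]
```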

Unfortunately, I had no time to code during the weekend, so after Friday evening I only changed some constants. Maybe if I had added some mechanism to keep the individuals from becoming too similar and changed the crossover to something that makes more sense in this game, the score would have gotten better.

Thanks to Agade and Magus for the contest, it looked really well-made (bug-free as far as I can tell, simple yet deep, and surprisingly punny considering that Agade got annoyed by my username) and I regret I couldn't spend more time with it.

6 Likes

About the recalc: why am I on the leaderboard if I never submitted?

1 Like

Finished 5th.

#### About the contest

I really enjoyed the contest. The rules are simple, yet the game has enough depth to keep one busy for a weekend. But not much longer, so the duration was well chosen too. Thanks to Magus and Agade for creating it.
I appreciate the rerun with validators different from the ones used during the contest, as it prevents hardcoding, reverse-engineering the validators, testing different random seeds, or just spamming submits. The only downside is that the result turns out to be somewhat random with only 30 testcases, half of which can and will be solved optimally by most top players. More testcases, or even running the same tests multiple times, would help here.

#### Approach

I prefill the board to prevent myself from falling down the edge. Other than that I keep the map unchanged.

Then I modify random fields, adding, removing, and redirecting arrows as long as the time limit allows. For each mutation I only change 1 to 3 arrows and keep the changes if they are at least as good as my previous best (or even slightly worse, so as not to get stuck in a local optimum too easily).
I rerun the code 4 times during the 1s, starting with my initial fill again. For the first third of the time of each run I only use one robot to find a long path, as that is usually good for the others too (and faster to evaluate).
The scoring is quite simple, just take the score that the game would give you.
My simulation is slow compared to others (about 125k on test 30, despite only using one robot for a while).

#### Allowed placements and heuristics

There are some cases where you can exclude placement options to reduce the search space:

For long corridors there's no point in adding arrows. Either they direct you the way you already go, or they make things worse (pushing you down the edge, shortening a loop).

Arrows pointing at each other aren't an option either, as that will result in an instant loop and end the run. The same goes for pointing at a void cell.

During the simulation I also allow my first bot in the movement order to rearrange arrows, if that avoids doing a 180 degree turn (as these always result in a loop).

When you watch a replay with high points (sorry, I don't have a better one), you will notice that most arrows are placed at the border, while the center is mostly empty (these cells can be traversed twice and give two points).
So we can take that into account when changing an arrow: if the cell we want to modify has an arrow on it and there is no preset arrow or void cell next to it, the most likely action is to remove the arrow in the mutation (I do that in 75% of the cases).
Otherwise take a random allowed arrow (without the excluded directions from above). If it's the same as before, just remove the arrow.
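
These exclusion rules can be sketched as a filter on candidate directions; the grid encoding ('#' for void, '.' for platform, letters for preset arrows) and the lack of wrap-around are assumptions of this sketch:

```python
DIRS = {'U': (0, -1), 'R': (1, 0), 'D': (0, 1), 'L': (-1, 0)}
FACING = {'U': 'D', 'D': 'U', 'L': 'R', 'R': 'L'}

def allowed_arrows(grid, x, y):
    """Directions worth trying on cell (x, y). Excludes arrows that point
    into the void and arrows that would face a preset arrow pointing
    straight back (instant loop)."""
    options = []
    for d, (dx, dy) in DIRS.items():
        nx, ny = x + dx, y + dy
        if not (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])):
            continue  # off the map counts as void here
        target = grid[ny][nx]
        if target == '#':
            continue  # would push the robot into the void
        if target == FACING[d]:
            continue  # two arrows facing each other loop instantly
        options.append(d)
    return options
```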

#### Components

Some maps are split into several components:

To detect them, start a breadth (or depth) first search from each robot and keep track of the reachable cells (preset arrows only have one neighbor cell of course).
If two robots share any empty cells, they are in the same component (even if they can't reach each other's starting position); otherwise they are not.
The components may overlap, as shown above, but only on cells with preset arrows.
Different components can be optimized independently of each other. That gives a higher score, as you don't try to modify two components at once and drop the score with a bad change in one of them, even if you improved the other.
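
A sketch of that component detection; the restriction that preset arrows only have one neighbor cell is ignored here for brevity:

```python
from collections import deque

STEPS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # U, R, D, L

def reachable(grid, start):
    """Cells a robot starting at `start` could ever visit: a BFS over the
    platform cells ('#' marks void)."""
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in STEPS:
            nx, ny = x + dx, y + dy
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])
                    and grid[ny][nx] != '#' and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen

def same_component(grid, robot_a, robot_b):
    # Per the post: robots sharing any reachable cell belong to one component
    return bool(reachable(grid, robot_a) & reachable(grid, robot_b))
```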

25 Likes

Your prefill seems to create a lot of 180° loops. Doesn't this go against the other heuristic?

1 Like

Yes it does, maybe I should have taken more time for that part. It still performed better than without.

Finished #19 with 9486 points.

It is the first time I get to be in the top 20 of a contest, so I'm really happy about this result. I briefly read the rules Friday night and started coding on Sunday afternoon.

My approach is to run 3 parallel hillclimbing searches to avoid getting stuck in a local suboptimal solution. If any search yields no update for more than 100ms I reset the solution to an empty board and start to search from there.

As for the mutations, I realized that not many arrows are needed in order to achieve long paths; on the contrary, sometimes the fewer the better. I tried different mutation strategies; unfortunately I didn't have enough time to benchmark every configuration properly, so I'll just list the last submit:

Moves:

• place/delete/flip single arrow (P = 30%)
• place/delete/flip 2 disjoint arrows (P = 20%)
• delete 5 arrows (star shaped: 1 center and 4 neighbours when mutable) (P = 35%)
• place/delete/flip 5 arrows (star shaped again) (P = 15%)
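
The move table above can be sampled with simple roulette-wheel selection, e.g.:

```python
import random

# Mutation mix from the post: (probability, move kind)
MOVES = [(0.30, "single"), (0.20, "pair"),
         (0.35, "star_delete"), (0.15, "star_mixed")]

def pick_move(rng=random):
    """Sample one mutation kind according to the listed probabilities."""
    r = rng.random()
    cumulative = 0.0
    for p, name in MOVES:
        cumulative += p
        if r < cumulative:
            return name
    return MOVES[-1][1]  # guard against floating-point round-off
```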

On testcase 30 I get an average of 330k sims, so I think that's not where I miss most of the score; it's probably due to my mutation strategy.

Thanks again to Agade and Magus for this awesome challenge, I enjoyed it very much!

5 Likes

Seems like 1400 people got submitted regardless to spike the numbers. Wish I knew this beforehand.

We might remove the “undefined” language submissions like we did for others.

BlitzProg - 74th

I use a genetic algorithm for this contest. I have a population of 200; crossing over brings a generation to 400, and mutation brings it up to 600. I select the 200 best of these 600 and start again.

The genetic system is very simple: each individual is simply the map of the game; to make a child when crossing over, I select each tile from one or the other parent. A mutation is a copy of the original, but with each possible cell having a chance to change to another arrow or just nothing. Individuals are simply ranked according to the score the bots would make on them.

When arrows are created, I replace them with a blank if for whatever reason the arrow would be a very bad choice (pointing at the void, or at another arrow in the opposite direction), which helped me make an extra 1000 points.

I also tried to remove arrows which had 6 neighboring voids (because it's not really useful to place arrows on narrow passages), but that didn't help much for some reason.

Lots of effort went into that challenge! It was very difficult, which is amazing for such a simple contest. The simulator was the easiest ever to code, but thinking of a good approach was not obvious.

3 Likes

Finished 6th with ~10,300 points.
Basically my solution was 8 consecutive simulated annealings of ~0.11 s each.

As for preprocessing, I generated all possible arrow placements and assigned weights to them. On the bigger maps (and on some of my handmade testcases) I took most of my highest scoring solutions and generated some statistics. The results showed that any cell having 1 or 2 neighbours is more likely to have an arrow than cells having at least 3. Also, cells having 2 adjacent neighbours are more likely to have an arrow than cells with 2 opposite neighbours. Just a few more ideas worth mentioning: no arrows inside corridors, no 2 adjacent arrows pointing at each other, etc. So I set my weights accordingly.

For SA, in each step I added or removed an arrow with 50-50% chance (no rotations or juggling with multiple arrows at once), picking the arrow based on those preset weights when adding, and completely randomly when removing. Then I simulated all robots one by one and summed up their scores.

For the simulation part, every robot kept track of its visited cells with direction info on its last run, and when executing an arrow addition/deletion step I checked if it affected the robot's path in any way (e.g. when adding an UP arrow, I checked that the robot didn't arrive at that cell from the top, left, or right at any point of its path [the whole process costs a few bitmasks]). If the arrow did not cause any direction change in the path, I reused the robot's last score.
On the bigger testcases I could reach 700k-2M+ (not so) full simulations in the IDE, reaching around 10,300 points every time (with different random seeds).
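
A simplified, list-based version of that check (the post packs the same information into a few bitmasks for speed; the `(cell, heading)` path encoding here is an assumption):

```python
def path_affected(path, cell, new_arrow):
    """`path` is the robot's last run as a list of (cell, heading) states.
    Placing `new_arrow` at `cell` only matters if the robot ever stood on
    that cell heading a different way; otherwise the cached score can be
    reused without re-simulating."""
    return any(c == cell and heading != new_arrow for c, heading in path)
```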

Well, basically that's it; I was surprised to see how far it got me.

Congrats to the guys in the top 5!

12 Likes

I finished 16th, never quite managing to break the 10K score on a submit.

I started off with a quick genetic algorithm implementation which got me to around 8500 by Saturday morning. I had a list of cells that could have arrows in and generated a random arrow (or nothing) for each cell, then did mutation / crossover to find the best solution.

I then tried a less random approach, doing a very narrow beam search with a lot of pruning. I changed my scoring to return the score and the list of squares visited (stored as a bitmask in 3 uint64s), meaning at each step of the search I would only try to add arrows to squares that were visited by the solution I was trying to improve. I could also use this to detect if a new solution was a subset of its parent (if no new squares were visited and the score was equal or lower) and not explore that branch any further. I also immediately threw away any solutions where the added arrow pointed into the void, and removed any solutions with useless arrows (while scoring I set a flag for every arrow if it actually changed the direction of a robot; this flag had to be set for every arrow at the end).
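
The visited-set bookkeeping can be sketched with a single Python int standing in for the 3-uint64 bitmask:

```python
def visited_mask(cells, width):
    """Pack a set of visited (x, y) squares into one integer bitmask
    (a compiled language would use fixed-width words for the same idea)."""
    mask = 0
    for x, y in cells:
        mask |= 1 << (y * width + x)
    return mask

def adds_nothing(child_mask, parent_mask):
    # The pruning test: a child visiting no new squares is a candidate
    # subset of its parent (combined with a score comparison in the post)
    return child_mask | parent_mask == parent_mask
```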

With the beam search I managed to get around 9300, as it wouldn't find optimal solutions that involved the same squares as a shorter one in a different direction (e.g. on Roundabout the optimal solution would get discarded, either because its score starts quite a bit lower, or, if the beam width was wide enough, because it would end up as a subset of the best solution just before adding the final arrow, as the route in the other direction included all squares). Without the subset filtering the tree expanded too quickly to be usable (it would find the optimum on Roundabout as it was simple, but score very low on the bigger levels). I was considering making the subsets be the full state masks (760 bits: one per direction per cell) instead of just visited cells, but didn't get a chance to try this.

I then combined the solutions, using the beam search to seed my GA, which didn't really work at all (it barely improved on my beam even with 50% of the time). I then went for more of an SA approach: I'd take my final beam solution and try up to 10 random changes, accepting any arrows that left me within 20 points of my best score (to allow changing away from a solution that just went horizontally). It kept doing this until it had tried 50 times without any improvement, at which point it reset to my current best solution. Whenever a new global best was set it would simplify it, leaving only the arrows that were used (I found it best to let the random search add useless arrows as they would sometimes be useful later).

As most of my solution was deterministic, my scoring was very stable, only differing by about 10 points if I submitted the same code. Running offline I was getting over 11000, meaning performance was still a major factor for my final score (I was getting about 20% more simulations offline).

10 Likes

Finished 3rd

Using a single simulated annealing for the whole time - running multiple was very similar on average so I didn't bother

The only possible mutations are changing the state of a single cell - this makes recalculating simulation scores a bit easier than complex mutations

Recalculating robot scores only for those that were passing through the modified cell, and resuming the simulation from the point the robot first touched that cell - this made checking scores somewhat faster, but updates still require full simulation, and in the end there were around 1M score calculations for test 30

Never allow placing arrows that would lead to falling off the grid next turn from that cell - I didn't have any other restrictions like corridors, but I like some of the ideas mentioned above

Modify the scoring function to add a penalty for using arrows on cells that don't have non-empty neighbours in 8 directions - this was the only clever trick not mentioned here that I had, and the reason is that for big empty tests you want to put arrows on the borders and nothing in the middle, so that each inner cell gets traversed twice. This was probably worth less than 500 points, but good for big scoring tests.
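
That penalty might be sketched as follows, assuming a grid where '.' is an empty platform cell and anything else (void '#', preset arrows, or off-map) counts as non-empty:

```python
def isolated_arrow_penalty(grid, arrows, penalty=1):
    """Penalty for arrows whose 8-neighbourhood contains only empty platform
    cells: such 'inner' arrows stop cells from being crossed twice."""
    total = 0
    for (x, y) in arrows:
        non_empty = False
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                off_map = not (0 <= ny < len(grid) and 0 <= nx < len(grid[ny]))
                if off_map or grid[ny][nx] != '.':
                    non_empty = True  # void or arrow next to this cell
        if not non_empty:
            total += penalty
    return total
```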

What I didnāt have was very fast simulation and using the fact that all robots usually follow same cycle and you only have to maximize one such cycle and connecting robots to that cycle is relatively easy. Probably it is possible to do and eulerscheZahl even mentioned doing something similar.

17 Likes

Finished #1, 11054 points.

Until the very last night, I used a single simulated annealing for the whole time. My only mutations were placing an arrow into an empty cell and removing a previously placed arrow, and my scoring function was just the objective function without any modifications. I tried adding more complex modifications and penalties for placing arrows, but none of this seemingly helped. In fact, more extensive testing (say, running a solution 100 times and averaging the score) would have helped me compare ideas to each other and probably move in the right direction. But I was somehow too lazy for that, so what I did was just run a new solution several times and look whether the score “seemingly” improved, so there was quite a lot of noise in my decisions.

I made two noticeable improvements in the last few hours which improved my score by several hundred points:

1. Do the big SA for half of the time. For the remaining half of the time, run several small SAs to improve the score. I took a random 5x5 square subgrid in my current best solution and spent some fixed number of iterations trying to improve the placement of arrows inside it. It has to be mentioned that I implemented this after being overtaken by eldidou. It probably helped me remember that he used a similar idea in a recent Topcoder Marathon Match and described it on the forums, which I'm grateful for.

2. Restrict the set of cells touched by the big SA to empty cells which have a neighboring (in eight directions) arrow or void cell. These cells will most likely contain arrows anyway, while arrows in cells with eight empty neighbors will likely screw my score up: every such arrow prevents several cells here and there from being visited twice, which is essential in test cases with large empty areas. It helped me draw pictures like this, which is suboptimal but still pretty good.

The final quick dirty hack was doing SA for the first 0.1 seconds while ignoring cells which only have neighboring arrows but no neighboring voids. It helped me get a score of 850 on the 30th test case almost instantly (which turns out to be the best known score), whereas if I also included cells with arrow neighbors, my score was often in the 700s. I figured it wouldn't hurt me too much on the other test cases either.

Thank you for the contest, I enjoyed it!

38 Likes

Hi! I was only able to take a short look at the description of the game, but due to private reasons I was not able to code. Will it be available as an optimization puzzle in the near future? Looking forward to it!

Finished 18th (almost 10k provisional, 9.5k final)

Multiple hill climbing runs
-within a run always continue with best state
-ended up with 4 runs, all using somewhat different tricks
-some runs reuse end state of a previous run

Initialization
-change dead arrow chain to void (*should have been limited to short ones)
-calculate cells worth having an arrow: ignore tunnels (gave up max score on some easy tests, *could have been compensated in the end)
-calculate other characteristics (single component, symmetry)

Random change:
-pick some worthy cells (many changes for start, but only 1 around the end of time)
-2way: always arrow, 3way: almost always, 4way: high chance to become empty
-avoid -> <-

Simulate early exit:
-do not continue if a robot score is much less than before

Simulate with one robot (*coded in last hours and messed up, not used)
-mark the path and, after reaching the end or a cycle, calculate the remaining length backwards for all cells
-later robots can stop when reaching those trail cells
-only if there are more than 2 robots and all robots are together (should be: in the same partition)

3way only run:
-for a few maps (eg. 15th, 30th) nothing else is needed to achieve a good score

Symmetric run:
-check if both map and robots are symmetric (center symmetry only)
-in a symmetric run, always do the same action on both sides
-use half of the robots when checking

Manual improve:
-try to find and add 2 adjacent unused cells to the path
-only for the final solution (to avoid blocking later improvement and costly check)

6 Likes