[Community Puzzle] Cloudy Weather

https://www.codingame.com/training/hard/cloudy-weather

Send your feedback or ask for help here!

Created by @Niako, validated by @JBM, @R2B2 and @Deltaspace.
If you have any issues, feel free to ping them.

Please help me. I use A* to find the path, but it times out… I can't understand how to deal with it.

Hard to tell without more details, but the way I handled it was to replace the usual "step by 1" of pathfinding algorithms by "step by X" to cover long distances in one go and save performance.
This worked fine for me with Dijkstra; I assume it should also be the case with A*.

2 Likes

A first good idea was given by @Djoums above. And by the way, the puzzle was designed to be seamlessly solvable with Dijkstra (even though, considering the given maps, A* would probably be faster).

Here is a more detailed hint in case it helps, though (/!\ spoiler)

Consider a simple map like the one on the left below, where the numbers (all 1 for now) indicate the heights of the lines and the widths of the columns.
As some consecutive lines/columns are identical, this map could be "compressed" into the smaller one on the right.

  1111111         232
1 .......       1 ...
1 ..███..   ->  2 .█.
1 ..███..       1 ...
1 .......
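
A minimal sketch of that compression in Python (my own illustration, not the reference solution; I'm assuming each cloud is given as a rectangle (x, y, w, h), and the start/destination coordinates should be added as cut points too):

def compress_axis(length, cuts):
    # Given cut positions along one axis, return (starts, sizes):
    # starts[i] is the original coordinate where compressed band i begins,
    # sizes[i] is how many identical original rows/columns it merges.
    cuts = sorted({c for c in cuts if 0 < c < length} | {0, length})
    starts = cuts[:-1]
    sizes = [b - a for a, b in zip(cuts, cuts[1:])]
    return starts, sizes

# The 7x4 map from the hint, with a single cloud at x=2, y=1, w=3, h=2:
clouds = [(2, 1, 3, 2)]
x_cuts = [c for (x, y, w, h) in clouds for c in (x, x + w)]
y_cuts = [c for (x, y, w, h) in clouds for c in (y, y + h)]
print(compress_axis(7, x_cuts)[1])  # [2, 3, 2] -> the column widths above
print(compress_axis(4, y_cuts)[1])  # [1, 2, 1] -> the row heights above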
2 Likes

My solution fails at IDE test 05 ("Small map #3").
I get 119 with the shortest path I find, while the solution is supposed to be higher: 157.
I don't understand why my solution is not accepted.
If anyone has some log output for the correct solution's track, I would appreciate your help.
(For all other tests I get the right answer, although for the last 3 cases I am still too slow online, not during pathfinding but during the map compacting.)

EDIT: I managed to pass all tests (and also to speed up the map compacting), although I still don't fully understand exactly why… :slight_smile:

EDIT2: I think I understand it now. I won't delete this post, in case others run into the same problem: when compressing the map, for a cloud at (x,y,x+w,y+h) it is not enough to include the outer coordinates (x-1), (y-1), (x+w), (y+h), which can be used to go around the cloud. I also needed to add the inner coordinates x and y, because otherwise the compacted map representation did not prevent the pathfinder from going through the cloud in some edge cases (e.g. going vertically at the nearest x' coordinate, in the compacted list, to the right of x).

4 Likes

I implemented A* + orthogonal JPS to solve this, but I was stuck on the big maps: I couldn't find a proper and efficient way to store the map (I tried a dict + the borders of the clouds, or checking whether a point is under a cloud with some <= comparisons and for loops). Thanks to your hint / spoiler, I found the trick!

Thanks a lot!

1 Like

@RoccoSimon I tried the same approaches you did and had the same problems. Did you end up compressing the paths and did that make all tests pass? Any resources you could share? Thanks!

Nice puzzle! I'm struggling a bit with this one, though. I'm failing test cases 3 and 10, but make it to 82% with the validators using the ideas from Niako and TBali above. Need to have a closer look at test case 3 tomorrow…

Turns out my priority queue implementation was buggy and apparently too slow for the test cases, but fast enough for the validators :slight_smile:

1 Like

I guess I need to store the original map before compressing it. But from test case 8 I get the following error when I try to initialize the array:
self.graph = np.empty((self.rows, self.columns), dtype=Node)

in codingame console:
numpy.core._exceptions.MemoryError: Unable to allocate 4.13 EiB for an array with shape (1081423233, 1101741227) and data type <U1

in IDE:
ValueError: array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size.

How can I store the original map?

@Zolcsika The original map is way too big (as these error messages indicate, it could reach ~10^18 bits)! You have to build the compressed one directly from the list of rectangles.

2 Likes

27%… damn you, timeout :smile:
I managed to handle the clouds in a proper way to avoid storing a huge list or object, but my A* implementation still sucks in terms of performance… I got the first 2 validators and somehow the fifth (for reasons :laughing:), but can't get it to work well.
I guess that taking bigger steps at a time is the way to go; I still have to figure out how to do that without breaking the counter…

You can use a list of notable x values and a list of notable y values, both sorted.
Keep track of your indexes in these lists and at each step, consider the previous/next index.
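
For example, if xs and ys are those sorted lists of notable values, a neighbor generator could look like this (just a sketch: the cloud check on the target point and the priority queue are still up to you):

def neighbors(i, j, xs, ys):
    # Successors of the node sitting at (xs[i], ys[j]): jump to the
    # previous/next notable value on either axis, paying the real distance.
    if i > 0:
        yield (i - 1, j), xs[i] - xs[i - 1]
    if i + 1 < len(xs):
        yield (i + 1, j), xs[i + 1] - xs[i]
    if j > 0:
        yield (i, j - 1), ys[j] - ys[j - 1]
    if j + 1 < len(ys):
        yield (i, j + 1), ys[j + 1] - ys[j]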

1 Like

A* works well for me on a compressed map, but takes too long for the 4 large maps. It appears that more than 90% of the time is spent on checking whether a point is under a cloud or not, so either this check is not optimized, or I check too many points (I'd go for the latter). For the compression I only keep the inside and outside borders of each cloud; is that too much?
Here is the map of the medium 2 test, with clouds in black, start in green, path in red, and visited points in blue (other points added to the graph are green). How could I reduce the number of visited points?

If I'm not mistaken, looking at your picture, the number of points you consider is ≤ (2N)² (for N ≤ 250), which is fine. Checking whether a point is under a cloud or not should be at most O(log N) (using sorted coordinates).
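
With sorted coordinates, the O(log N) part is just two binary searches to locate the compressed cell containing a point; something along these lines, where blocked_cells stands for whatever per-cell structure you precompute from the rectangles (the names are mine, for illustration only):

from bisect import bisect_right

def cell_of(px, py, xs, ys):
    # xs, ys: sorted start coordinates of the compressed bands.
    # Returns the (column, row) indices of the band containing (px, py).
    return bisect_right(xs, px) - 1, bisect_right(ys, py) - 1

# A point is under a cloud iff its band is marked as blocked, e.g.:
# covered = cell_of(px, py, xs, ys) in blocked_cells  # blocked_cells: a set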

1 Like

I decided to prebake the collisions after compressing the map, so that collision checks are O(1). Building the matrix takes a while (I think it's technically O(n^3) but in practice much faster, and N is at most 250 anyway), but you only have to do it once and it's very straightforward. Also, I'm not sure if I'm reading your blue points correctly, but it seems like there are too many. My compressed map for Medium #2 is approximately 250 x 250.
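
For reference, a small sketch of such a prebake on the compressed grid (my own variant, not necessarily what was described above; xs/ys are the sorted band start coordinates, and the clouds are (x, y, w, h) rectangles whose edges are already among the cut points):

from bisect import bisect_left

def bake_blocked(clouds, xs, ys):
    # Mark every compressed cell covered by a cloud, so that later
    # collision checks are a plain O(1) lookup.
    blocked = [[False] * len(ys) for _ in xs]
    for x, y, w, h in clouds:
        i0, i1 = bisect_left(xs, x), bisect_left(xs, x + w)
        j0, j1 = bisect_left(ys, y), bisect_left(ys, y + h)
        for i in range(i0, i1):
            for j in range(j0, j1):
                blocked[i][j] = True
    return blocked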

Also, great job Niako on this puzzle. It took many hours scattered over a week to build some foundation code (I wanted to make a nice priority queue for C#), but my code worked almost on the first try… except for the last test case, which unexpectedly throws an exception. Currently trying to figure out how that happened in the first place :confused:

EDIT: Oh, I need longs for the final test case

1 Like

Here's the map for medium #2; I removed the points that are hidden by the compression:


The map is now 251x241.
I'm not sure it's a good idea to prebuild the collision matrix: the size of my compressed map for large #4 is 752*752, so you'd have to check ~565k times whether you're under a cloud, whereas I only need about 230k checks when doing it directly from the A* loop. I'm now working on optimizing the check time, which should indeed be O(log n).
Anyway, I agree, fun puzzle Niako! That's my first A*, glad it doesn't work that badly!

1 Like

I sometimes obtain a wrong answer, but close to the correct one. It's hard to debug since it happens only with big maps. Did anyone have the same problem?
Medium map 1: found 1674 instead of 1672
Medium map 2: found 1158 instead of 1150
Large map 1: OK
Large map 2: 1516492458 instead of 1516492456
Large map 3: OK
Large map 4: timeout

I'll take care of the timeout later; I just want to understand what can be wrong.
Thanks in advance.

Never mind, I changed my A* for a Dijkstra, and now it works (timeouts on Large maps 3 & 4, but no wrong results).
It's difficult to explain, because I'm sure that the A* was good.
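
In case it helps anyone in the same spot, a bare-bones Dijkstra over the compressed nodes can look like this (heapq with lazy deletion; neighbors and blocked stand for helpers like the ones sketched earlier in the thread, not the puzzle's reference code):

import heapq

def dijkstra(start, goal, neighbors, blocked):
    # Shortest distance from start to goal; neighbors(node) yields
    # (next_node, cost) pairs, blocked(node) says whether a cell is cloudy.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale entry left over from an earlier relaxation
        for nxt, cost in neighbors(node):
            if blocked(nxt):
                continue
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None  # goal unreachable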

I understood why my A* is bad. And I wonder how some people solved this puzzle with A*, since it can be biased by the jumps.

.D.................................... Dest
.####################################.
.####################################.
.....................................S Start
56....................................
4####################################2
.####################################.
32...................................1

At step 2, the priority queue will choose the "2" that is closer to the destination. That's why, in the end, the result isn't the shortest path.