https://www.codingame.com/training/hard/cloudy-weather
Send your feedback or ask for help here!
Created by @Niako, validated by @JBM, @R2B2 and @Deltaspace.
If you have any issues, feel free to ping them.
Please help me. I use A* to find the path, but it times out... I can't figure out how to deal with it.
Hard to tell without more details, but the way I handled it was to replace the usual "step by 1" of pathfinding algorithms by "step by X" to cover long distances in one go and save performance.
This worked fine for me with Dijkstra, I assume it should also be the case with A*.
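To illustrate (just a sketch, not Djoums' actual code), here is what "step by X" neighbour generation can look like in Python, assuming you already have sorted lists of "notable" coordinates (cloud borders, start, destination) as described further down the thread; the names notable_xs / notable_ys are made up:

import bisect

def jump_neighbors(x, y, notable_xs, notable_ys):
    """Instead of the four unit-step neighbours, jump straight to the
    previous/next notable coordinate along each axis; the cost of the
    move is the real distance covered, not 1."""
    i = bisect.bisect_left(notable_xs, x)   # x is assumed to be a notable value
    j = bisect.bisect_left(notable_ys, y)
    if i > 0:
        yield (notable_xs[i - 1], y), x - notable_xs[i - 1]
    if i + 1 < len(notable_xs):
        yield (notable_xs[i + 1], y), notable_xs[i + 1] - x
    if j > 0:
        yield (x, notable_ys[j - 1]), y - notable_ys[j - 1]
    if j + 1 < len(notable_ys):
        yield (x, notable_ys[j + 1]), notable_ys[j + 1] - y

Collision checks against the clouds still have to be done before accepting a neighbour; later posts discuss how to make that check fast.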
A first good idea was given by @Djoums above. And by the way, the puzzle was designed to be seamlessly solvable with Dijkstra (even though, considering the given maps, A* would probably be faster).
Consider a simple map like the one on the left below, where the numbers (all 1 for now) indicate the heights of the lines and the widths of the columns.
As some consecutive lines/columns are identical, this map can be "compressed" into the smaller one on the right.
  1111111       232
1 .......     1 ...
1 ..███.. ->  2 .█.
1 ..███..     1 ...
1 .......
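To make the compression above concrete, here is a minimal Python sketch (illustrative only, not the reference solution) that merges identical consecutive rows of a small grid; columns are handled the same way on the transposed grid. On the real maps the grid is far too big to build explicitly, so in practice the notable coordinates have to come straight from the cloud rectangles, but the toy version shows the idea:

from itertools import groupby

def compress_rows(grid):
    """Merge identical consecutive rows; return the kept rows and the
    height (number of merged original rows) of each of them."""
    rows, heights = [], []
    for row, group in groupby(map(tuple, grid)):
        rows.append(row)
        heights.append(sum(1 for _ in group))
    return rows, heights

# The left map above ('#' = cloud, '.' = clear):
grid = [".......",
        "..###..",
        "..###..",
        "......."]
rows, heights = compress_rows(grid)   # heights == [1, 2, 1]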
My solution fails at IDE test 05 ("Small map #3")
I get 119 with this shortest path, while the solution is supposed to be higher: 157.
I don't understand why my solution is not accepted.
If anyone has some log output for the correct solution's track, I would appreciate it if you could help me out.
(For all other tests, I get the right answer, although for the last 3 cases I am still too slow online, not during pathfinding, but during the map compacting.)
EDIT: I managed to pass all tests (and also to speed up map compacting), although I still don't fully understand exactly why...
EDIT2: I think I understand it now. I'm not deleting this post, just in case others run into the same problem: when compressing the map, for a cloud at (x,y,x+w,y+h) it is not enough to include the outer coordinates (x-1), (y-1), (x+w), (y+h), which can be used to go around the cloud. I also needed to add the inner coordinates x and y, because otherwise the compacted map representation did not prevent the pathfinder from going through the cloud in some edge cases (e.g. going vertically at the nearest x' coordinate (in the compacted list) to the right of x).
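In case it helps others, here is a small Python sketch of that coordinate collection (illustrative only; the (x, y, w, h) rectangle format and the width/height bounds are assumptions based on the post above):

def notable_coordinates(clouds, start, dest, width, height):
    """Collect the x and y values worth keeping in the compacted map:
    the outer columns/rows used to walk around each cloud (x-1, y-1,
    x+w, y+h) *and* the inner edges x and y, so that the compacted
    representation still blocks paths through the cloud."""
    xs = {start[0], dest[0]}
    ys = {start[1], dest[1]}
    for x, y, w, h in clouds:
        xs.update((x - 1, x, x + w))
        ys.update((y - 1, y, y + h))
    xs = sorted(v for v in xs if 0 <= v < width)
    ys = sorted(v for v in ys if 0 <= v < height)
    return xs, ys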
I implemented A* + orthogonal JPS to solve this, but I was stuck on the big maps; I couldn't find a proper and efficient way to store the map (I tried a dict + the borders of the clouds, or checking whether a point is under a cloud with some <= comparisons and some for loops). Thanks to your hint / spoiler, I found the trick!
Thanks a lot!
@RoccoSimon I tried the same approaches you did and had the same problems. Did you end up compressing the paths and did that make all tests pass? Any resources you could share? Thanks!
Nice puzzle! I'm struggling a bit with this one, though. I'm failing test cases 3 and 10, but make it to 82% with the validators using the ideas from Niako and TBali above. Need to have a closer look at test case 3 tomorrow...
Turns out my priority queue implementation was buggy and apparently too slow for the test cases, but fast enough for the validators
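For reference, the usual way to get a priority queue that is both simple and fast enough in Python is heapq with lazy deletion; a generic sketch (not the poster's code):

import heapq

def dijkstra(adjacency, source):
    """adjacency[u] is an iterable of (v, weight) pairs; returns the dict
    of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left over from an earlier, worse push
        for v, w in adjacency[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist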
I guess I need to store the original map before compressing it. But on test case 8 I get the following error when I try to initialize the array:
self.graph = np.empty((self.rows, self.columns), dtype=Node)
in codingame console:
numpy.core._exceptions.MemoryError: Unable to allocate 4.13 EiB for an array with shape (1081423233, 1101741227) and data type <U1
in IDE:
ValueError: array is too big; arr.size * arr.dtype.itemsize
is larger than the maximum possible size.
How can I store the original map?
@Zolcsika The original map is way too big (as these error messages indicate, it could reach ~10^18 bits)! You have to build the compressed one directly from the list of rectangles.
27%... damn u time out
I managed to handle clouds in a proper way to avoid storing a huge list or object, but my A* implementation still sucks in terms of performance... I pass the first 2 validators and somehow the fifth (for reasons), but can't get it to work well.
I guess that taking bigger steps at a time is the way to go, but I still have to figure out how to do that without breaking the counter...
You can use a list of notable x values and a list of notable y values, both sorted.
Keep track of your indexes in these lists and at each step, consider the previous/next index.
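A possible reading of this hint in Python (just a sketch under assumptions: xs/ys are the sorted notable coordinates and blocked[i][j] tells whether the point (xs[i], ys[j]) is under a cloud):

def index_neighbors(i, j, xs, ys, blocked):
    """Nodes are index pairs into the sorted coordinate lists; a move goes
    to the previous/next index along one axis and costs the real coordinate
    difference.  Since every cloud edge is itself a notable coordinate, a
    cloud covering any skipped-over cell would also cover the cell we come
    from, so checking only the target point should be enough when expanding
    from free nodes."""
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < len(xs) and 0 <= nj < len(ys) and not blocked[ni][nj]:
            yield (ni, nj), abs(xs[ni] - xs[i]) + abs(ys[nj] - ys[j])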
A* works well for me on a compressed map, but takes too long on the 4 large maps. It appears that more than 90% of the time is spent checking whether a point is under a cloud or not, so either this check is not optimized, or I check too many points (I'd bet on the latter). For the compression I only keep the inside and outside borders of each cloud; is that too much?
Here is the map of the Medium #2 test, with the clouds in black, the start in green, the path in red, and the visited points in blue (other points added to the graph are in green). How could I reduce the number of visited points?
If I'm not mistaken, looking at your picture, the number of points you consider is ≤ (2N)² (for N ≤ 250), which is fine. Checking whether a point is under a cloud or not should be at most O(log N) (using sorted coordinates).
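One way to get that O(log N) check (a sketch; xs/ys are the sorted notable coordinates and blocked is a precomputed table over the compressed cells, like the one described in the next reply):

import bisect

def under_cloud(px, py, xs, ys, blocked):
    """Locate the compressed cell containing (px, py) with two binary
    searches, then a constant-time table lookup.  This relies on every
    cloud edge being a notable coordinate, so coverage is uniform inside
    each compressed cell.  Assumes xs[0] <= px and ys[0] <= py."""
    i = bisect.bisect_right(xs, px) - 1
    j = bisect.bisect_right(ys, py) - 1
    return blocked[i][j]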
I decided to prebake the collisions after compressing the map, so that collision checks are O(1). Building the matrix takes a while (I think it's technically O(n^3), but practically much shorter, and N is at most 250 anyway), but you only have to do it once and it's very straightforward. Also, I'm not sure if I'm reading your blue points correctly, but it seems like there are too many. My compressed map for Medium #2 is approximately 250 x 250.
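Here is roughly what that prebaking can look like in Python (a sketch, assuming the (x, y, w, h) rectangle format and that xs/ys contain every cloud's x, x+w, y and y+h):

import bisect

def build_blocked(clouds, xs, ys):
    """blocked[i][j] is True when the compressed cell starting at
    (xs[i], ys[j]) lies under some cloud.  Roughly O(clouds * cells)
    to build, but it only has to be done once."""
    blocked = [[False] * len(ys) for _ in xs]
    for x, y, w, h in clouds:
        i0, i1 = bisect.bisect_left(xs, x), bisect.bisect_left(xs, x + w)
        j0, j1 = bisect.bisect_left(ys, y), bisect.bisect_left(ys, y + h)
        for i in range(i0, i1):
            for j in range(j0, j1):
                blocked[i][j] = True
    return blocked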
Also, great job Niako on this puzzle. It took many hours scattered over a week to build some foundation code (I wanted to make a nice priority queue for C#), but then my code worked almost on the first try... except for the last test case, which unexpectedly throws an exception. Currently trying to figure out how that happened in the first place.
EDIT: Oh, I need longs for the final test case
Here's the map for Medium #2; I removed the points that are hidden by the compression:
I sometimes obtain a wrong answer, but close to the correct one. It's hard to debug since it happens only with big maps. Did anyone have the same problem?
Medium map 1: found 1674 instead of 1672
Medium map 2: found 1158 instead of 1150
Large map 1: OK
Large map 2: 1516492458 instead of 1516492456
Large map 3: OK
Large map 4: timeout
I'll take care of the timeout later; I just want to understand what could be wrong.
Thanks in advance.
Never mind, I changed my A* to a Dijkstra, and now it works (timeouts on Large maps 3 & 4, but no wrong results).
It's difficult to explain, because I'm sure that the A* was correct.
I understood why my A* is bad. And I wonder how some people solved this puzzle with an A*, since it can be biased by the jumps.
.D.................................... Dest
.####################################.
.####################################.
.....................................S Start
56....................................
4####################################2
.####################################.
32...................................1
At step 2, the priority queue will choose the "2" that is closer to the destination. That's why, in the end, the result isn't the shortest path.
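For what it's worth, standard A* theory says the jumps by themselves don't break optimality: as long as the g-cost of a jump is its real length (not 1) and the heuristic never overestimates (plain Manhattan distance to the destination is safe on a 4-directional grid), A* still returns the shortest path. A sketch of that priority computation (illustrative, not the poster's code):

import heapq

def push_move(heap, g_current, current, neighbor, dest):
    """Push a 'jumping' move with the real jump length as its cost and
    the Manhattan distance to the destination as an admissible heuristic."""
    jump_cost = abs(neighbor[0] - current[0]) + abs(neighbor[1] - current[1])
    g = g_current + jump_cost
    h = abs(neighbor[0] - dest[0]) + abs(neighbor[1] - dest[1])
    heapq.heappush(heap, (g + h, g, neighbor))

Closing nodes as soon as they are pushed instead of when they are popped, or counting each jump as cost 1, can produce slightly-too-long results like the ones described above.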