Surface puzzle discussion

Feel free to send your feedback or ask for some help here!

Ok, let me start.

At first I implemented a very simple flood fill. It gave me a wrong result because I forgot to “repaint” visited cells.
To check whether land was already repainted, I added it to a hash set of Point objects.
All tests but the last passed.

Then I added a micro-optimization: while parsing the map, check whether there is any land or any water at all, so that I can give an answer instantly (if everything is water the answer is w*h; if everything is land, 0).

But here comes the trick: the last test is 900 × 400, and it contains a few patches of land near the bottom.

My solution suffered from performance problems.
When I ran the performance wizard I saw that my program spent 90% of its time in HashSet operations: adding a checked item and checking whether an item on the map had already been checked (when queuing).

Then I realized the hash set had to be replaced. I just used a simple temporary two-dimensional boolean array and copied the original map into it before checking. With repainting done the right way, the problem was gone. 100%.
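A minimal sketch of that approach in Python (my own illustration, not the poster's code; I assume `'O'` marks water and `'#'` marks land):

```python
from collections import deque

def lake_size(grid, start):
    """Size of the lake containing `start`, via BFS with a 2D visited array.

    grid: list of strings, 'O' = water, '#' = land; start: (x, y) tuple.
    The boolean array replaces the hash set of Point objects.
    """
    h, w = len(grid), len(grid[0])
    x0, y0 = start
    if grid[y0][x0] != 'O':
        return 0
    visited = [[False] * w for _ in range(h)]
    queue = deque([(x0, y0)])
    visited[y0][x0] = True
    size = 0
    while queue:
        x, y = queue.popleft()
        size += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 'O' and not visited[ny][nx]:
                visited[ny][nx] = True  # "repaint" when queuing, not when popping
                queue.append((nx, ny))
    return size
```

The key detail is marking cells when they are enqueued rather than when they are dequeued; otherwise the same cell can be queued many times, which is exactly the blow-up the poster saw.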

Question to the developers: do you add test cases over time? I think a good one would be the following:
create a large map with a large lake on it (like the current last test),

then query ALL the points of the lake.

Thank you for the puzzle!

Just some feedback on my 100% solution (C++):
The main issue with the problem is, as often, performance. It is easy to get the proper solution for small maps, but trickier to get it in time for large maps. Among the different tricks, I had to use a hash algorithm instead of a classical find in vectors. I also realized that on many maps, some lakes are never queried; there is no need to compute everything in advance.

I had to “cheat” to get the 100%, as using the “flood” algorithm on the last large lake is very inefficient. My algorithm simply detects a “large lake” case (>99% water on the map) and then just returns the total amount of water; otherwise it runs the proper algorithm. I'm not very proud of the trick, especially because it could fail on some maps, but solving the problem properly looked too complicated.

I got those performance problems too when I used a hash set.
The problem was that 70% of the program's run time was spent in the HashSet.Contains(Point) method, which is obvious: those checks are frequent on large maps.

I just used a two-dimensional bit array with repainting, and it did the trick. Could you try that too?

Hi,
My code (Java) passes all tests in the IDE but fails 2 tests during evaluation (1000×1000 maps).
As a matter of fact, one of those two tests is included in the IDE set of tests.
Is that a performance issue? (And is there a difference between evaluation in the IDE and evaluation when submitting code?)
Thanks for answering.

My algorithm is the following: while reading the map, when I encounter a lake cell, I put its lake number into a Map under the key k = i*L + j. The lake number (as in Kruskal's algorithm) is used to track connectivity; it is a new number if there is no lake at index k-1 or k-L, and if there are different lakes at both k-1 and k-L, I merge them… I have another Map that gives, for every lake, the indexes of its cells (that map is used for merging and for giving the answer).
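A rough single-pass sketch of this idea (my own simplified variant: the poster's Java Maps become dicts, the merging is done with a plain union-find, and `'O'` is assumed to mark water):

```python
class DSU:
    """Minimal union-find keyed by flat indexes k = y*L + x."""

    def __init__(self):
        self.parent = {}
        self.size = {}

    def make(self, k):
        self.parent[k] = k
        self.size[k] = 1

    def find(self, k):
        while self.parent[k] != k:
            self.parent[k] = self.parent[self.parent[k]]  # path halving
            k = self.parent[k]
        return k

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:  # merge the two lake numbers
            self.parent[ra] = rb
            self.size[rb] += self.size[ra]

def lake_sizes(grid):
    """Map each water cell's key k = y*L + x to the size of its lake."""
    H, L = len(grid), len(grid[0])
    dsu = DSU()
    for y in range(H):
        for x in range(L):
            if grid[y][x] != 'O':
                continue
            k = y * L + x
            dsu.make(k)
            if x > 0 and grid[y][x - 1] == 'O':  # left neighbor, key k-1
                dsu.union(k, k - 1)
            if y > 0 and grid[y - 1][x] == 'O':  # upper neighbor, key k-L
                dsu.union(k, k - L)
    return {k: dsu.size[dsu.find(k)] for k in dsu.parent}
```

Checking only k-1 and k-L while scanning row by row is enough, since every adjacency is seen once from its right/lower side.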

Same problem: I pass everything in the IDE and fail the two next-to-last tests in validation (1000×1000 maps) with no feedback. My algorithm is basically a BFS, and since this is a sparse graph, performance is O(N) (N = number of tiles). I have a hard time believing anyone has something sublinear for this.

EDIT: The problem was that in my code the grid was a 1D array indexed with [y*L+x]. I assumed (y, x+1) was to the right of (y, x), when in fact it can be on another line if x+1 = L. Same thing with x-1. Now, what is mind-blowing is that you can pass every single test in the IDE with an error like this. I really think there should be an extra test, or one of the tests should be changed a tiny bit, so that this mistake shows up in the IDE.
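For illustration, the fix boils down to checking x bounds before flattening back to a 1D index (a sketch using the same k = y*L + x convention as the post):

```python
def neighbors(k, L, H):
    """4-neighbors of flat index k in an L-wide, H-tall grid stored as 1D.

    Naively using k+1 / k-1 wraps around line ends: k+1 is NOT a right
    neighbor when x == L-1. Check x explicitly before flattening again.
    """
    y, x = divmod(k, L)
    result = []
    if x + 1 < L:
        result.append(k + 1)   # right, only if not on the last column
    if x - 1 >= 0:
        result.append(k - 1)   # left, only if not on the first column
    if y + 1 < H:
        result.append(k + L)   # down
    if y - 1 >= 0:
        result.append(k - L)   # up
    return result
```

With L = 3, the naive version would claim k = 3 is a right neighbor of k = 2, even though it is the first cell of the next row.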

The input size can't really be 0 < L < 10000, 0 < H < 10000. The actual size should be more like 0 < L < 1000, 0 < H < 1000.

These are just guaranteed constraints to help you calibrate the limits of your code, not the actual minimal and maximal values of the input.
But maybe 10000 is stretching it a little. Thanks for the feedback.

Please, @songziang and everyone else, verify that a similar post doesn’t already exist before creating a new thread.

Moreover, the category that you chose (contests) was irrelevant too.

Thank you. The reason I think the limits on the input data should be precise is that they drive algorithm and design decisions. If the actual map were 10000 × 10000, I don't think any simple flooding algorithm could run and pass all tests within the time limit. That's why I was thinking at first about using some technique to shrink the graph.

Sorry about that.

I code in Pascal, and I made a recursive procedure for marking a lake and counting its size, but on the last test it segfaults after 174702 nested calls.
Already 347465 calls, but 360000 are needed… and solved.
Tricks upon tricks with tricks:
I copied my recursive procedure, and in the first copy I added a condition choosing whether to call the first copy or the second; same in the second copy. By shuffling the conditions and the order of the checks, the problem was solved at last.

You could have replaced that recursive function with an explicit stack instead: http://en.wikipedia.org/wiki/Flood_fill
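The recursion-to-stack conversion is mechanical; a sketch (mine, not the poster's Pascal), repainting cells in place so deep lakes cannot overflow the call stack:

```python
def fill_size(grid, x0, y0):
    """Count the connected 'O' region containing (x0, y0).

    grid is a mutable list of lists of chars; visited water is repainted
    to '*'. An explicit stack replaces the recursive calls, so the depth
    of the lake no longer matters.
    """
    h, w = len(grid), len(grid[0])
    if grid[y0][x0] != 'O':
        return 0
    stack = [(x0, y0)]
    grid[y0][x0] = '*'  # repaint when pushing, so a cell is pushed once
    size = 0
    while stack:
        x, y = stack.pop()
        size += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 'O':
                grid[ny][nx] = '*'
                stack.append((nx, ny))
    return size
```

The same structure works in Pascal or Python; in Python it also sidesteps the default recursion limit mentioned below.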

I’ve just written a recursive function too to solve this puzzle. I pass all tests except the last one, because I hit the maximum recursion depth in Python. I’ll try to rewrite the code using an iterative algorithm.

@Elliot Yep, for me too the recursive version couldn’t work no matter what; switching to the stack version was enough to make it work properly.

You can read about a non-recursive algorithm here.

@nicolas_patrois Never saw the “Learning” games before. How do we get to them?

You get to them from the forum; search for the post from @loick_michard.

@nicolas_patrois WoW A secret level :smiley:

Btw this level is too easy :joy:

#BFS + DP = GG :joy:

There is another one, easier.