Send your feedback or ask for help here!
Note that this is the second part of the puzzle.
For the first part please take a look at: https://www.codingame.com/training/hard/jump-point-search---preprocessing
Really loved learning about JPS+; I had a lot of fun solving the first part.
However, I have a few issues with the second part:
You haven't specified which heuristic to use. The paper does mention the octile heuristic (which is the one I used intuitively), but I think it is unfair to rely on it without specifying it in the description, especially since it is not actually described in the paper. (E.g. the Euclidean distance would work as well, but might lead to a different ordering of the nodes.) That being said, maybe your code does take this into account; I haven't checked.
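For reference, here is a minimal sketch of the octile distance I mean (the function name and signature are mine, not from the statement):

    import math

    def octile(x1, y1, x2, y2):
        # Straight steps cost 1, diagonal steps cost sqrt(2):
        # walk the diagonal for min(dx, dy) steps, then go straight.
        dx, dy = abs(x1 - x2), abs(y1 - y2)
        return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)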
'givenCost' is incorrectly computed for diagonal distances, as illustrated here: https://www.codingame.com/share-replay/511278161. It might be because 'DiffNodes' is computed as the sum of 'dx' and 'dy' rather than the max? It is always double what it should be. (The heuristic computation seems right, though.)
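For illustration, here is the update I expected (a sketch with my own names; in JPS+ the move from a node to its successor is either straight or purely diagonal, so on a diagonal |dx| == |dy| and summing them doubles the step count):

    import math

    def given_cost(parent_cost, dx, dy, is_diagonal):
        # The number of steps travelled is the max of the deltas, not their sum.
        diff_nodes = max(abs(dx), abs(dy))
        step = math.sqrt(2) if is_diagonal else 1.0
        return parent_cost + step * diff_nodes

E.g. from (0, 0) to (1, 1) this gives sqrt(2) ≈ 1.41, whereas the sum variant yields 2 * sqrt(2) ≈ 2.83, i.e. exactly double, which matches the replay.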
It seems you are re-expanding nodes: https://www.codingame.com/replay/511282960.
This one is weird:
This happened on the last test case (Rooms no path). No idea where this could come from.
Finally, whenever I fail a test case, I get the following error message: <<Warning: your code did not read all available input before printing an instruction, your outputs will not be synchronized with the game’s turns and unexpected behaviour may occur.>>, while I do parse all of the input.
I appreciate this is probably coming from CG and you cannot do anything about it, but I thought I would point it out anyway.
NB: maybe that warning is actually why I get the weird behaviour above. But then, I do read all of the input, so I don't know…
Thank you for the feedback.
I tried to solve this puzzle and ran into the exact same issues.
I was so upset that I fixed the stuff myself.
The heuristic to use is now given in the statement. Hopefully the octile distance was the intended one… at least, it is the one I use to achieve 100%.
givenCost is now computed correctly, with a max as you mentioned.
I have added the closed list, so that a node is never revisited.
When you provide an invalid node, the referee now gives a correct node in the error message.
Unfortunately, this final point can't be fixed easily. What we would need to do is provide some input each turn, but there is nothing meaningful to provide.
EDIT: @R2B2: Let me know if you still encounter some issues.
Now, I want to mention that I am really REALLY upset.
Neither the author nor the three approvers noticed that the distance between (0,0) and (1,1) was not computed correctly. This clearly shows that this contribution was approved without proper review and that it should be deleted by CodinGame.
Finally, I (among others) think that CG solo games are not the appropriate tool for this kind of contribution. Next time, I suggest trying tech.io instead.
@Stilgart: Thanks for fixing the aforementioned issues. Worked like a charm for me.
However, I think it would be a shame to delete the contribution. I probably would never have come across JPS+ otherwise, even though I am not sure I will need the algorithm any time soon. The fact that the format is a bit different from classical CG puzzles should not be a reason to discard it.
But as you pointed out, the validation process did not work well, which is the more important problem for me here.
I tested the game the best I could, and implemented all the fixes suggested by the reviewers (in private conversations). It is a shame that some obvious errors can slip through and go unnoticed by people implementing valid solutions, yet that happens.
Yes, it was somewhat rushed, sorry for that, but that is because I need the puzzle for my classes, and the review process on CG is horrible. At least some time ago there was an XP reward that made people interested in reviewing. Now it's simply a burden to review and fix someone's puzzle, especially the harder ones.
Regarding your second remark, I do not understand it at all. CG is all about visualization-based tasks testing some type of algorithmic/mathematical skill. You can write an article about it, OK, but there are plenty of lectures and materials describing JPS (and all the other algorithms) already. What is lacking is a place where you can implement such algorithms, and a CG puzzle is a perfect opportunity for that.
I also do not get the 'different from classical CG puzzles' thing. Text-based CG puzzles are an aberration that does not take the best from this site; they exist because a) they are simpler to make, and b) some time ago it was impossible to do solo games (I still think all my previous contributions should be rewritten with proper visualization to be really valuable). You may be referring to the 'implement algorithm X' format, but that is not a novelty on CG either: it ranges from some of the initial CG-authored puzzles to many other contributions suggesting, or even forcing, the user to implement an exact type of solution to pass the tests.
Mmm. I disagree with so much of that…
That’s a problem with your classes. Not one that has to be brought here. May I suggest tech.io, where you can publish on your own schedule as you see fit?
I’m afraid your opinion is biased.
For starters, the XP reward isn’t the panacea you allude to: it also got puzzles published without any scrutiny at all.
For a counterpoint: my contributions always ended up accepted after some editing back and forth. But, of course, my statements were never about following a very specific set of steps, even less so from an external resource.
I don't think it's yet that much of a burden to go out and moderate harder puzzles. Interesting harder puzzles get accepted all the time.
IMHO, it's a burden to get boring puzzles accepted. As it should be! That might definitely make the review process seem horrible if you're viewing it through the very specific lens of submitting puzzles that have a hard time getting accepted. Even more horrible if you add the self-inflicted constraint of having it done on a schedule.
But there’s a simpler explanation for that than “it’s got to be the review system.”
Dude, that’s, like, your own opinion. Here’s mine: CG is all for solving gamified programming puzzles.
I wouldn’t deny the CG puzzle format could be appropriate. But this is a shared platform. By publishing it, you’re putting it on every user’s todo list. And since it severely lacks both the fun and the problem-solving aspect, it sticks out like a sore thumb.
tech.io wouldn’t have that issue. A private area, if you can convince CG to implement it, wouldn’t have that issue.
Really, if you must have a CG puzzle format and aren’t willing to pay up, there’s always the option of sharing the link to a draft contribution.
But publication on the platform exposes it to the community’s opinion.
Being pre-existing doesn’t make it good.
Some of the initial CG-authored puzzles are widely recognized as lower-quality than what comes up nowadays.
Moreover, I don't recall any of them directing a specific implementation. Care to refresh my memory?
As I see it:
- JPS+ seems like an interesting approach, thanks for creating something around that in CG @aCat
- The current puzzle is indeed closer to what we see in playgrounds (created via Tech.io) than in classic CG puzzles. I personally don't like this kind of puzzle (even though the topic is interesting to me), but I understand other members can enjoy it
- It's a shame it was approved in a rush though (thank you @Stilgart for fixing its issues). As written by @JBM, contributions can be shared as drafts, too. Please consider that next time you want to publish something quickly for your students, instead of somewhat bypassing the moderation process by rushing it @aCat
- When contributing, please don't think about what you'd like to see more of on CG, but about what is best for the community. Some players love gamified puzzles with a viewer, others not so much. But I'm pretty sure most players enjoy a puzzle without bugs.
- I don't think the puzzle should be removed though (unless it still holds a bug that cannot easily be fixed). The moderation bot will remove the puzzle if its ratings are bad (which could happen considering its current format and issues)
I struggled with this puzzle much more than with the preprocessing part, but finally reached 100%…
I found the pseudocode in section 14.3 of the article a bit hard to follow. E.g. what is the difference between DiffNodesRow() and RowDiff()?
Also, in the second if branch (else if (direction is diagonal && …)) I had to add two additional checks, (DiffNodesRow > 0) and (DiffNodesCol > 0), that are not present in the pseudocode; otherwise some tests failed.
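For concreteness, here is roughly the condition my version of that branch ended up testing (a sketch with my own names, written over raw differences; I read DiffNodesRow()/RowDiff() and the column variants as the same absolute row/column differences, which is part of what confused me):

    def diagonal_branch_applies(row_diff, col_diff, jump_dist,
                                goal_in_general_direction):
        # row_diff / col_diff: absolute row/column distance to the goal.
        # jump_dist: the node's (possibly negative) jump distance in the
        # diagonal direction currently being explored.
        return (goal_in_general_direction
                and row_diff > 0 and col_diff > 0    # the checks I added
                and (row_diff <= abs(jump_dist)
                     or col_diff <= abs(jump_dist)))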
On a more general note, I have a problem with A*: its typical pseudocode (and also this puzzle) requires being able to change the priority of any node in the queue. But the abstract min-priority queue has access only to the top element, and that is also the case for PHP's standard library SplPriorityQueue.
In the end, I had to write a min-priority queue implementation by hand. Luckily this puzzle passed with a naive (easy but inefficient) implementation, not involving a proper heap data structure.
Or am I missing something here: how is one supposed to do A* without changePriority()?
If you use a basic heap you can just heappush the node again under its new priority instead of updating it, and at each step you burn the already-expanded nodes:

    while True:
        node = heapq.heappop(heap)
        if never_expanded[node]:
            break  # burn stale entries until a fresh node pops out
It's also convenient to keep an array priority_in_heap so you can check whether an encountered priority actually needs to be heappushed or not.
The performance lost to the many burns is actually not too bad. An updatable priority queue avoids those burns, but the structure required is far more sophisticated, so you would be trading burns for heavier operations.
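To make this concrete, here is a minimal self-contained sketch of A* with this lazy-deletion trick on top of Python's heapq. All names are mine; neighbors(node) is assumed to yield (successor, cost) pairs, heuristic(node) estimates the remaining cost to the goal, and nodes are assumed comparable (e.g. (x, y) tuples) so ties in priority don't break the tuple comparison:

    import heapq
    import math

    def a_star(start, goal, neighbors, heuristic):
        g = {start: 0.0}                          # best known cost so far
        best_pushed = {start: heuristic(start)}   # the priority_in_heap idea
        heap = [(best_pushed[start], start)]
        expanded = set()                          # the closed list
        parent = {start: None}
        while heap:
            prio, node = heapq.heappop(heap)
            if node in expanded:
                continue                          # stale entry: burn it
            if node == goal:
                path = []                         # rebuild path from parents
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            expanded.add(node)
            for nxt, cost in neighbors(node):
                new_g = g[node] + cost
                if nxt in expanded or new_g >= g.get(nxt, math.inf):
                    continue
                g[nxt] = new_g
                new_prio = new_g + heuristic(nxt)
                # Re-push only if this beats the best priority already pushed.
                if new_prio < best_pushed.get(nxt, math.inf):
                    best_pushed[nxt] = new_prio
                    parent[nxt] = node
                    heapq.heappush(heap, (new_prio, nxt))
        return None  # open list exhausted: no path exists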
Thanks for the advice, makes sense!
If I remember correctly, I once used such an A* with a "re-push with the new priority, leaving the old node there" approach for a puzzle or multi, and it did not cause any problem.
I got 100% in the IDE and 90% when submitting, and it's pretty hard to debug with the "referee" format.
Here’s the faulty validator replay (it’s the last validator):
Could someone who completed it tell me whether the expected solution takes 19 steps (which would mean my search is too slow) or longer (which would mean I have a bug to fix somewhere)?
Thanks in advance
Here's my log for the last validator; the last value is the priority in the queue (cost + heuristic value):
4 3 -1 -1 0 7.414213562373095
4 1 4 3 2 9.414213562373096
6 1 4 1 4 9.414213562373096
7 2 6 1 5.414213562373095 9.414213562373096
2 3 4 3 2 11.414213562373096
2 1 4 3 2.8284271247461903 12.242640687119286
1 1 2 1 3.8284271247461903 14.242640687119286
2 5 2 3 4 14.242640687119286
1 2 1 1 4.82842712474619 14.82842712474619
3 6 2 5 5.414213562373095 15.071067811865476
1 1 4 1 5 15.414213562373096
3 9 3 6 8.414213562373096 19.313708498984763
4 10 3 9 9.828427124746192 20.72792206135786
6 10 4 10 11.828427124746192 21.899494936611667
7 9 6 10 13.242640687119287 21.899494936611667
8 9 7 9 14.242640687119287 22.485281374238575
8 7 8 9 16.242640687119287 22.485281374238575
(I used the same preprocessed data as the last testcase with these initial entries:
w, h = 0, 13
xstart, ystart, xgoal, ygoal = 4, 3, 11, 2)
I found the bug: since the problem states that you must use a "priority queue", I thought you should not emulate the behaviour of an "updatable priority queue", and I printed all nodes, even the discarded ones.
It worked for all test cases, so I didn't question this interpretation; but actually you must NOT print the discarded nodes.
It's unfortunate that the expected behaviour of the priority queue is not exercised by the test cases at all.
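In other words, with the burn-based loop discussed earlier in the thread, a node should only be printed at the moment it is genuinely expanded, never when a stale entry is popped (a sketch; print() stands in for whatever output the referee actually expects):

    import heapq

    def pop_and_print(heap, expanded):
        # Pop until a never-expanded node surfaces; only that one is printed.
        while heap:
            prio, node = heapq.heappop(heap)
            if node in expanded:
                continue          # discarded (burned) entry: do NOT print it
            print(node, prio)     # the genuinely expanded node
            expanded.add(node)
            return node, prio
        return None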
OK, I finally "fixed" this problem: I simply made this case, which tests the priority queue behaviour, both a test case and a validator.