https://www.codingame.com/training/hard/jump-point-search---runtime
Send your feedback or ask for help here!
Created by @aCat, validated by @radekmie, @Zylo and @DomiKo.
If you have any issues, feel free to ping them.
Note that this is the second part of the puzzle.
For the first part please take a look at: https://www.codingame.com/training/hard/jump-point-search---preprocessing
Really loved learning about JPS+; I had a lot of fun solving the first part.
However, I have a few issues with the second part:
You haven't specified which heuristic to use. The paper does mention the octile heuristic (which is the one I used intuitively), but I think it is unfair to rely on it without specifying it in the description, especially given that it is not described in the paper. (E.g. Euclidean distance would work as well, but might lead to a different ordering of the nodes.) That being said, maybe your code does take this into account, I haven't checked.
"givenCost" is incorrectly computed for diagonal distances, as illustrated here: https://www.codingame.com/share-replay/511278161. It might be because "DiffNodes" is computed as the sum of "dx" and "dy" rather than the max? It is always double what it should be. (Heuristic computation seems right though; see the sketch below.)
It seems you are re-expanding nodes: https://www.codingame.com/replay/511282960.
This one is weird:
Finally, whenever I fail a test case, I get the following error message: <<Warning: your code did not read all available input before printing an instruction, your outputs will not be synchronized with the game's turns and unexpected behaviour may occur.>>, while I do parse all of the input.
I appreciate this is probably coming from CG and you cannot do anything about it. But I thought I would point it out anyways.
NB: maybe that's actually why I get the error above. But then, I read all of the input, so I don't know…
Thanks!
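For reference, here is a minimal sketch of the two points above, in Python with names of my own choosing (octile and diagonal_given_cost are not from the puzzle or the paper). Along a purely diagonal jump |dx| equals |dy|, so the multiplier for sqrt(2) should be max(|dx|, |dy|) rather than their sum; with the sum, a jump from (0,0) to (1,1) costs 2*sqrt(2) instead of sqrt(2), which matches the "always double" symptom.

    import math

    SQRT2 = math.sqrt(2)

    def octile(x0, y0, x1, y1):
        # Octile-distance heuristic: diagonal steps cost sqrt(2), straight steps cost 1.
        dx, dy = abs(x1 - x0), abs(y1 - y0)
        return max(dx, dy) + (SQRT2 - 1) * min(dx, dy)

    def diagonal_given_cost(parent, node):
        # Cost added by a diagonal jump from parent to node (|dx| == |dy| here),
        # so the multiplier is max(|dx|, |dy|), not |dx| + |dy|.
        dx, dy = abs(node[0] - parent[0]), abs(node[1] - parent[1])
        return SQRT2 * max(dx, dy)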
Thank you for the feedback.
I tried to solve this puzzle and ran into the exact same issues.
I was so upset that I fixed the stuff myself.
The heuristic to use is now given in the statement. Hopefully, the octile distance was the intended one… at least, this is the one I use to achieve 100%.
givenCost is now computed correctly, with a max as you mentioned.
I have added the closed list, so that a node is never revisited.
When an invalid node is provided, the referee will give a correct node in the error message.
Unfortunately, this final point can't be fixed easily. What we should do is provide some input each turn, but there is nothing significant to provide.
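For anyone curious, the closed list mentioned above boils down to something like the following rough Python sketch (not the referee's actual code; open_heap is assumed to be a heapq-managed list of (priority, node) tuples seeded with the start node):

    import heapq

    closed = set()                      # nodes that have already been expanded
    while open_heap:
        priority, node = heapq.heappop(open_heap)
        if node in closed:
            continue                    # already expanded once: never revisit
        closed.add(node)
        # ... expand node and heapq.heappush() its successors here ...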
EDIT: @R2B2: Let me know if you still encounter some issues.
Now, I want to mention that I am really REALLY upset.
Neither the author nor the three approvers noticed that the distance between (0,0) and (1,1) was not computed correctly. This clearly shows that this contribution was approved without proper review and that it should be deleted by CodinGame.
Finally, I (among others) think that CG solo games are not the appropriate tool for this kind of contribution. Next time, I suggest trying tech.io instead.
@Stilgart: Thanks for fixing the aforementioned issues. Worked like a charm for me.
However, I think it would be a shame to delete the contribution. I probably would never have come across JPS+ otherwise, although I am not sure I will need to use the algorithm any time soon. I do not think that the format being a bit different from classical CG puzzles should be a reason to discard it.
But as you pointed out, the validation process did not work well, which would be the more important problem for me here.
I tested the game the best I could and implemented all the fixes suggested by the reviewers (private conversations). It is a shame that some obvious errors can slip through and not be encountered by people implementing valid solutions, yet that happens.
Yes, it was somewhat rushed, sorry for that, but that is because I need the puzzle to be used during my classes, and the review process on CG is horrible. At least some time ago there was an XP reward that made people interested in reviewing. Now it is simply a burden to review and fix someone's puzzle, especially the harder ones.
Considering your second remark, I do not understand it at all. CG is all for visualization-based tasks testing some type of algorithmic/mathematical skill. You can write an article about it, OK. But there are plenty of lectures and materials describing JPS (and all the other algorithms). What is lacking is a place where you can implement such algorithms, and a CG puzzle is a perfect opportunity for that.
I also do not get the "different from classical CG puzzles" thing. Text-based CG puzzles are an aberration that does not take the best from this site; they exist because a) they are simpler to make, and b) some time ago it was impossible to do solo games (I still think all my previous contributions should be rewritten with proper visualization to be really valuable). You may refer to the "implement algorithm x" format, but this is also not a novelty on CG, going from some of the initial CG-authored puzzles to many other contributions suggesting or even forcing the user to implement an exact type of solution to pass the tests.
Mmm. I disagree with so much of that…
That's a problem with your classes. Not one that has to be brought here. May I suggest tech.io, where you can publish on your own schedule as you see fit?
I'm afraid your opinion is biased.
For starters, the XP reward isn't the panacea you allude to: it also got puzzles published without any scrutiny at all.
For a counterpoint: my contributions always ended up accepted after some editing back and forth. But, of course, my statements were never about following a very specific set of steps, even less so from an external resource.
I don't think it's that much of a burden yet to go out and moderate harder puzzles. Interesting harder puzzles get accepted all the time.
IMHO, it's a burden to get boring puzzles accepted. As it should be! Which might definitely make the review process seem horrible, if you're viewing it through the very specific lens of submitting puzzles that have a hard time getting accepted. Even more horrible if you're putting on a self-inflicted additional constraint of having it done on a schedule.
But there's a simpler explanation for that than "it's got to be the review system."
Dude, that's, like, your own opinion. Here's mine: CG is all for solving gamified programming puzzles.
I wouldn't deny the CG puzzle format could be appropriate. But this is a shared platform. By publishing it, you're putting it on every user's todo list. And since it severely lacks both the fun and the problem-solving aspect, it sticks out like a sore thumb.
tech.io wouldnât have that issue. A private area, if you can convince CG to implement it, wouldnât have that issue.
Really, if you must have a CG puzzle format and aren't willing to pay up, there's always the option of sharing the link to a draft contribution.
But publication on the platform exposes it to the communityâs opinion.
Being pre-existing doesn't make it good.
Some of the initial CG-authored puzzles are widely recognized as lower-quality than what comes up nowadays.
Moreover, I don't recall any of them directing a specific implementation. Care to refresh my memory?
As I see it:
I struggled with this puzzle much more than with the preprocessing part, finally 100%…
I found the pseudocode in section 14.3 of the article a bit hard to follow. E.g. what is the difference between DiffNodesRow() and RowDiff()?
Also, in the second if branch (else if (direction is diagonal && …)) I had to add additional checks for (DiffNodesRow > 0) and (DiffNodesCol > 0) that were not present in the pseudocode, otherwise some tests failed.
On a more general note, I have a problem with A*: in its typical pseudocode (and also in this puzzle) it requires being able to change the priority of any node in the queue, but the abstract min priority queue has access only to the top element. That is also the case for PHP's standard library SplPriorityQueue.
In the end, I had to make a min priority queue implementation by hand. Luckily this puzzle passed with a naive (easy but inefficient) implementation, not involving a proper heap data structure.
Or am I missing something here? How is one supposed to do A* without changePriority()?
If you use a basic heap, you can just heappush a new priority instead of updating it, and at each step you burn the already expanded nodes:
while True:
    node = heap.heappop()             # pop the entry with the best priority
    if never_expanded[node]: break    # stop at a fresh node; stale entries for already-expanded nodes are burned
It's also convenient to have an array priority_in_heap to check if an encountered priority needs to be heappushed or not.
The performance lost to the many burns is not too bad actually; if you use an updatable priority queue you avoid those burns, but the structure required is way more sophisticated, so you trade those burns for heavier operations.
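A slightly fuller sketch of that idea, assuming a Python heapq-based open list (the names pop_next, push_if_better, expanded and best_priority are mine; best_priority plays the role of the priority_in_heap array):

    import heapq
    import math

    def pop_next(open_heap, expanded):
        # Pop entries until we find a node that has not been expanded yet,
        # burning the stale duplicates on the way.
        while open_heap:
            priority, node = heapq.heappop(open_heap)
            if node not in expanded:
                return priority, node
        return None, None

    def push_if_better(open_heap, best_priority, node, new_priority):
        # Only push when this priority improves on the best one recorded for the node;
        # the older entry stays in the heap and is burned later by pop_next.
        if new_priority < best_priority.get(node, math.inf):
            best_priority[node] = new_priority
            heapq.heappush(open_heap, (new_priority, node))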
Thanks for the advice, makes sense!
If I remember correctly, I once used such an A* with a "re-push with new priority, leaving the old node there" approach for a puzzle or multi, and it did not cause problems.
I got 100% in the IDE and 90% when I submit, and it's pretty hard to debug with the "referee" format.
Here's the faulty validator replay (it's the last validator):
Could someone who completed it tell me if the expected solution is in 19 steps (which would mean my search is too slow) or if it takes longer (which would mean that I have a bug to fix somewhere)?
Thanks in advance
Edit:
Here's my log for the last validator - the last value is the priority in the queue (cost + heuristic value):
4 3 -1 -1 0 7.414213562373095
4 1 4 3 2 9.414213562373096
6 1 4 1 4 9.414213562373096
7 2 6 1 5.414213562373095 9.414213562373096
2 3 4 3 2 11.414213562373096
2 1 4 3 2.8284271247461903 12.242640687119286
1 1 2 1 3.8284271247461903 14.242640687119286
2 5 2 3 4 14.242640687119286
1 2 1 1 4.82842712474619 14.82842712474619
3 6 2 5 5.414213562373095 15.071067811865476
1 1 4 1 5 15.414213562373096
3 9 3 6 8.414213562373096 19.313708498984763
4 10 3 9 9.828427124746192 20.72792206135786
6 10 4 10 11.828427124746192 21.899494936611667
7 9 6 10 13.242640687119287 21.899494936611667
8 9 7 9 14.242640687119287 22.485281374238575
8 7 8 9 16.242640687119287 22.485281374238575
NO PATH
(I used the same preprocessed data as the last testcase with these initial entries:
w, h = 0, 13
xstart, ystart, xgoal, ygoal = 4, 3, 11, 2)
Edit 2:
I found the bug: as the problem states that you must use a "priority queue", I thought that you should not emulate the behaviour of an "updatable priority queue" and should print all nodes, even when discarded.
It worked for all testcases so I didn't question this interpretation, but actually you must NOT print the discarded nodes.
It's unfortunate that the expected behaviour of the priority queue is not tested in the testcases at all.
OK, finally I "fixed" this problem: I simply made this case, which tests the priority queue behaviour, both a test and a validator.