Send your feedback or ask for help here!
Hi, I am doing this problem in bash. The Inception test reveals a problem for me: I am having trouble printing strings that contain line breaks. Any suggestions?
I solved that problem. If you want to reproduce it: if you store a string containing a line break in a variable and echo it, it will not be displayed correctly.
Note that the distinction between single and double quotes is important here.
I finally made it in bash. For those of you who are doing it in bash and struggling with timeouts like me: try to use fewer subprocesses.
Hi, I am working on this problem in C++. My program works for all examples except the 3rd one. The expected output is “DECODE FAIL AT INDEX 186”, and I do print that, but before it I also print the sentence “Invalid : extra bit at the end of the encoded text”. I can’t find where this comes from; can someone help me understand it?
You’re supposed to exit if it fails.
In C++ you can put the fail message in a return statement so that it stops the main function.
Something like this:
return printf("DECODE FAIL AT INDEX %ld", index);
That is the problem: it is not failing, it gives the output “Invalid : extra bit at the end of the encoded text”.
I know this is quite late to the mix, but the reason it gives you that output is that the decoded string from Test 3 is exactly that message. In other words, the string still produces a seemingly valid decoded output, but the very last bits of the input string have no prefix code to use for their decoding… it’s effectively similar to the example given in the instructions: “0000” gives you an (r) for the first three zeros and then an error, because there is no prefix code for a single 0. What you’re doing is printing out that r!
I believe this is what @pardouin meant by saying that you need to exit/break from your code. Most likely you’re still outputting the decoded string even when no matching code is found.
I’m guessing this is a bit late to ask but still.
How do you detect that a decoding will fail at the first position?
I mean, how do you check that there’s a mistake at the very first position?
(As a concept, I don’t get it).
Detection is the same at every position: you compare the keys in the association table with the starting bits of the not-yet-decoded part of the encoded string. Decoding fails if none of the keys match.
I have a problem with the 3rd validator. It is the only one that fails for me. I get the sentence “Invalid : extra bit at the end of the encoded text”, which is 50 chars long. After that I get a decoding error at index 50, so I output “DECODE FAIL AT INDEX 50”, but the validator expects “DECODE FAIL AT INDEX 186”. The problem is that the decoding error occurs at index 50, not 186. Can anyone help?
Because decoding does not fail at index 50; the bits there still decode correctly:
000000 1000 10101 1001 10111 00100 0011 11 10100 11 011 00010 010
(I assume you are talking about test 3 instead of validator 3 because you mention “INDEX 186”.)
Thanks, I found my mistake. I was keeping track of the decoded char index instead of the raw “bit” index.
Excellent problem to solve; I appreciated doing it!