I’ve been playing with Guessing n Cheating ( https://www.codingame.com/training/medium/guessing-n-cheating ), but it feels like the test case named “Impossible” is wrong.
I’m supposed to find the first round that proves Alice cheated. The only constraint is that there are at most 50 guesses. I test every line as soon as I read it (I don’t read the entire input if I can prove Alice cheated earlier).
For the test case Impossible I get only two lines:
100 too low
1 too high
For me that’s cheating in round 2, because it contradicts round 1, so that’s what I report. But the expected result is “Alice cheated in round 1”. How could Alice possibly cheat in round 1 if there is no constraint on the number to guess? For any number, there would be numbers greater and smaller than it, so it’s impossible to prove cheating on the first round.
PS: I tested the rest of the test cases, and “Right at the end” has the same issue for me. Alice “cheated” in round 7, when she said the number 17 was right. Before that, 71 was still a possible and valid answer. The test case says she cheated in round 1, but that assumes she gives the initially chosen number at the end. What if she changed the number during the game? You can’t prove anything until round 7.
It’s impossible because the numbers are between 1 and 100 (both included).
Ok, that explains the “Impossible” test case, although I would expect to see this in the Constraints section. What about “Right at the end”?
In fact, you must print the line where what Alice says contradicts what she said before.
I agree. So in “Right at the end” she contradicts herself in round 7, because until then the answer could still be 71. Why does the test case say she cheated in round 1?
EDIT: Hmm, now it seems it’s the opposite: round 7 is expected. Maybe I read it wrong. If that’s the case, everything is clear, thanks.
I’m baffled by this puzzle too.
I simply keep track of the high and low limits according to Alice and detect inconsistencies based on that range of possible values. That is enough to pass all the tests, and yet half of the validators fail.
This is pretty frustrating.
Is there some subtle trick à la Ulam’s liar game here? Is Alice allowed to lie as often as she wants, or just once?
I don’t know; presumably you just have to print the rank of the first answer that is inconsistent.
I started to create a Giant Factory to solve this one…
Until I realized I could get the 100% by cheating!
The answer wording is pretty confusing.
Take the last test for instance. Alice lied all the way. The answer should be “Alice cheated in round 1 (and 2, 3, 4, 5 and 6 if you really want to know)”.
Round 7 is actually the only round where she didn’t cheat
What the puzzle wants us to find is the first round where it is possible to prove Alice cheated, with no knowledge of the following answers.
The examples prove Alice is allowed to lie as often as she pleases. She could give “too high” and “too low” answers at random for all she cares, as long as Bob doesn’t guess the right number.
All we can do is narrow down the range of possible values assuming Alice always says the truth and stop when Alice’s answer places the number to guess outside that range.
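That range-narrowing idea can be sketched in a few lines. This is a minimal, unofficial sketch of what the posts above describe, assuming numbers from 1 to 100 inclusive; the answer strings ("too low", "too high", "right on") and the function name are my own placeholders, not confirmed by the puzzle statement.

```python
# Sketch: narrow a [low, high] range from Alice's answers and report the
# first round whose answer leaves no possible value.
# Assumed answer strings: "too low", "too high", "right on".

def first_cheating_round(rounds):
    """rounds: list of (guess, answer) pairs; returns the 1-based round
    that proves Alice cheated, or None if no round proves it."""
    low, high = 1, 100  # numbers are between 1 and 100, both included
    for i, (guess, answer) in enumerate(rounds, start=1):
        if answer == "too low":
            if guess >= high:            # nothing above guess remains
                return i
            low = max(low, guess + 1)
        elif answer == "too high":
            if guess <= low:             # nothing below guess remains
                return i
            high = min(high, guess - 1)
        else:  # "right on"
            if not (low <= guess <= high):
                return i
            low = high = guess
    return None
```

On the “Impossible” input above, round 1 is “100 too low”: since the numbers stop at 100, no value above 100 exists, so the contradiction is already provable in round 1, which matches the expected answer.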
My first attempt kept track of a simple [low, high] range. It was enough to pass all the available tests, but for the life of me I couldn’t understand how the validation tests managed to make it fail.
I’ve finally solved the puzzle with an inglorious brute force approach (keeping track of every single value), but I had to look at the other solutions to understand why my previous attempt didn’t work.
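For reference, the brute-force approach the post above mentions (tracking every single value) can look something like this. It is a sketch under the same assumptions as before: numbers 1..100 and hypothetical answer strings; the actual solution the poster wrote is not shown in the thread.

```python
# Sketch: keep the explicit set of values still compatible with
# everything Alice said, and report the first round that empties it.
# Assumed answer strings: "too low", "too high", "right on".

def first_cheating_round_bruteforce(rounds):
    possible = set(range(1, 101))  # 1..100 inclusive
    for i, (guess, answer) in enumerate(rounds, start=1):
        if answer == "too low":
            remaining = {v for v in possible if v > guess}
        elif answer == "too high":
            remaining = {v for v in possible if v < guess}
        else:  # "right on"
            remaining = possible & {guess}
        if not remaining:
            return i  # this answer contradicts everything said before
        possible = remaining
    return None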
I’d say there is some critical case that only the validation tests cover. This is pretty unfortunate, as it can leave you stranded with a “nearly working” solution for ages.
Have you reached 100% at submission? My code passes all the tests but fails at 80% after submission. Have I missed some special case besides 1 and 100?
Thanks in advance
I’ve heard people are having a difficult time solving this puzzle. Here are some general suggestions that might help you look at the problem from a different viewpoint.
Tip #1: You are not Bob.
Do not fall into the trap of pretending to be Bob. You are not playing the guessing game. Your goal is not to find out the “correct” number hidden by Alice. There is no need to know the number. There is no number. There is no spoon.
Tip #2: Your goal is to prove Alice made a false statement.
We have the presumption of innocence: one is considered innocent unless proven guilty.
Whatever Alice says, as long as it (1) is in line with the rules of the game and (2) does not contradict what she previously said, we assume her new statement is true. Otherwise, the statement is false and you can catch Alice red-handed.
Tip #3: Know the rules of the game.
Read the statement carefully. Every sentence and every word has a purpose (otherwise I should not have allowed it to appear).
I know you feel how dumb Bob is in this game. But being dumb is not a crime. He adds to your confusion, but the game allows it.
Focus on logic. It is a logic game.
(If you want to play as Bob and be a smart hero, try your hand at the Batman puzzles “Shadows of the Knight”.)
I pass 100% of the tests, but only 80% on submit (validators 6 and 7 fail).
I see two problems with this puzzle:
the wording of the answer.
“Alice cheated on turn X” is plainly wrong. The turn we need to output is the one where we could deduce Alice cheated: “I knew Alice cheated on turn X”, or something like that.
This is yet another puzzle where a wrong yet convincing solution will fail validation while passing all the tests.
I don’t like puzzles that lull players into a false sense of accomplishment only to punish them with red validators.
Validators are supposed to guard against hard-coded solutions. In my opinion, a legitimate program should pass validation with flying colors. That means the tests should detect the same pitfalls as the validators do. This is clearly not the case here.
“I knew Alice cheated…”
And the problem is explained by more than a single output line: statement + input + output + sample cases together are supposed to be enough not to mislead.
About validators: unfortunately, no one can guarantee that ALL possible source codes passing the test cases will also pass the validators. In real-world programming projects, test cases during development help to reduce bugs, but they cannot eliminate unexpected cases that appear in production and crash the code. What the industry can do is throw in hundreds or thousands of test cases and automate the testing procedure, to reduce bugs as far as possible. But bugs can still be here and there after all that testing.
In this platform we’ve only a very limited number of test cases. However we’ve done our best to make the validators look similar to, but yet still different from, the test cases.
Well, it certainly misled a few people. It’s not such a big deal though; the examples were indeed quite clear.
I suppose the validators here could simply be carbon copies of the tests, with non-significant rounds added (Bob making the same guess a few times and getting the same answers from Alice).
That would guarantee the turn number could not be hard-coded while exercising the code in the exact same way (assuming it doesn’t deduce anything from two identical guess/answer pairs).
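The padding idea above could be sketched as a small test-case transformer. Everything here is hypothetical (function name, the pair-based test representation, the amount of padding); it only illustrates how duplicated rounds shift round numbers without adding information.

```python
import random

# Sketch: pad a test case with repeated, information-free rounds (the
# same guess with the same answer) so a hard-coded round number no
# longer matches, while a correct solution deduces the exact same
# range at every step.

def pad_with_duplicates(rounds, max_copies=2, seed=0):
    """rounds: list of (guess, answer) pairs. Returns a padded list in
    which each original round may be immediately repeated 0..max_copies
    times."""
    rng = random.Random(seed)  # deterministic for reproducible validators
    padded = []
    for pair in rounds:
        padded.append(pair)
        for _ in range(rng.randint(0, max_copies)):
            padded.append(pair)  # repeating a round adds no information
    return padded
```

One caveat, as the post notes: a solution would have to genuinely re-process each repeated round rather than skip duplicates, otherwise the validator and the test still exercise the same code paths.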
What bugged me in this challenge was seeing my mistake in someone else’s solution.
There is clearly a condition you must check while maintaining your [low, high] range that I missed in my analysis of the problem. I realized it as soon as I saw it in several variants of the solution.
I was surprised this condition was not checked in any of the tests while the validator caught it.
You can look at my solution if you like; I left a commented copy of the version that passes the tests and fails the validation.
Luckily for me, in that case I had a backup plan that worked, so I could pass the validation and understand what I had done wrong in the first place.
In the general case there is no such plan B and you’re stuck in front of uncooperative validators.
Maybe it’s just bad luck, I don’t hold a grudge against anybody, but that’s the reason why I voiced a critical opinion on this particular puzzle.
Any guesses on what you’re supposed to do after getting 100% on tests and failing on validators?
This is a situation I frequently encounter, not only on this site.
I’ll do several things:
- review the requirements, read all information again carefully.
- review my assumptions; get super suspicious of any info not stated in the requirements that I just ‘know’.
- review my source code. Add lots of debugging messages to it. Run critical blocks step-by-step trying to detect logical errors.
- if the problem is related to knowledge from external sources, find references for that extra knowledge. I enjoy this part. It is a great time to learn new things: new algorithms, new programming skills, new languages, new areas of knowledge.
- based on the refreshed knowledge and info, run my imagination engine: day-dreaming, night-dreaming, brainstorming. Think outside the box. Think outside myself, outside the path I would usually follow.
- do experiments - use the “what if” approach. What happens if I change this bit in the source? What happens if I change that bit in the input?
- construct my own inputs. For simple ones I can hand-craft a few and feed the brainstorming results into them. For complex inputs I might write a program to generate them, either from known patterns or at random.
- consult others: review forum messages and discuss with other people.
- if all in vain, put it down, forget it for some days. Pick it up again and rerun the above route.
+1 to this; this puzzle is badly designed. The IDE test cases should be updated to prevent this kind of issue.
Why don’t you simply update your test cases to prevent the issue several people have encountered? The issue has been pointed out; just fix it for the sake of future users trying to solve this.
Hello, when I submit my code, all validators pass except the one named “Elimination”. I tried different inputs of my own and my code handles them fine, but not that validation. What is the trap in this test?