Can I see console output while validating my result?

I’ve encountered a problem: my code passes all tests during development, but when I try to validate it, it fails a test in an unexpected way. And since I don’t have a console output window during validation, I can’t figure out what is wrong. Can I somehow see console output during validation?


I vote for this.

If you don’t pass all the “validators”, that means you have a bug in your program. I fully understand that this is very frustrating; it happened to me too. However, if you had access to the output window during validation, it would be exactly the same as moving the “validators” into the IDE as tests. It is essential that you check your program with different tests than the ones used to compute your score. For some problems, there are not enough tests to cover all possible mistakes, hence I’d recommend you write your own tests. With graph problems / backtracking, I’d suggest reversing the order of exploration.
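To make that last tip concrete, here is a minimal sketch in Python of running the same DFS with the neighbor order reversed; the graph and the function name are purely illustrative, not from any particular puzzle. If the two runs give different answers, your result depends on exploration order, which usually points at a bug:

```python
# Minimal sketch: detect order-dependent bugs by exploring neighbors
# in both orders and comparing the results. Graph data is illustrative.
def dfs(graph, start, reverse=False):
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        stack.extend(sorted(graph.get(node, []), reverse=reverse))
    return order

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(dfs(graph, 0, reverse=False))  # one exploration order
print(dfs(graph, 0, reverse=True))   # reversed order; any difference in your
                                     # final answer hints at an ordering bug
```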

I’m interested to know which puzzle you are dealing with.

It’s the Skynet Finale Level 1 puzzle. It seems a lot of people have run into problems with it, but the thing is, if I can’t pass some validation tests, I need to know what is going on in order to debug my code. And even if I can more or less understand what is going on, I don’t know the exact inputs and outputs, so I can’t reproduce and debug the failing test case, and I don’t have the part of the code that controls the agent the way it is controlled in the original puzzle (so I can’t reproduce the test case myself, I can only use the existing ones). Moreover, it looks like it fails in a way I can’t reproduce at all during development (for example, it seems to fail to output a command, but that isn’t possible in the initial code, as the output is outside of any conditions; so possibly it fails to output the result within the given time, but I don’t see such a message, so I can’t know for sure). So maybe validation should reveal not the full test, but at least enough information to reproduce it and debug.

In fact, I’ve solved the problem. I thought that all links were initially ordered as N1 < N2. It looks like I was wrong: when I added a check for links in both directions (N1 < N2 and N1 > N2), it worked. But it was a guessing game, so I still believe that at least some console output from the validation tests is needed to be able to debug your code when it fails.
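For anyone hitting the same wall: the fix boils down to treating each link as undirected. A minimal sketch in Python, assuming links arrive as pairs of node ids (the variable names are illustrative, not from the original code):

```python
# Minimal sketch: store each link in both directions so lookups work
# whether the input gives N1 < N2 or N1 > N2.
from collections import defaultdict

links = defaultdict(set)

def add_link(n1, n2):
    links[n1].add(n2)
    links[n2].add(n1)  # also register the reverse direction

add_link(3, 7)
print(7 in links[3], 3 in links[7])  # True True, regardless of input order
```

Normalizing each pair with min/max on insert works just as well; the point is never to assume the input order.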

See it as an exam. You learn and practice on tests beforehand, and once at the exam, nobody tells you what’s wrong except to go look back at your exercises (in some exams, you don’t even get your copy back).

If that’s not enough, let’s try this: if we gave you the output, what would be the point of making the validators different from the IDE test cases? The point is mostly to check that your code can handle any “user input”, and to do so we design them a little differently to avoid hardcoding.

I hope that’s helpful to you :wink:

Agreed. But there is a lack of info in the problem statement to see it that way. For example, as I mentioned earlier, it is not clear whether all initial links are given in “min max” order or whether “max min” is possible (based on the test cases it was always “min max”, but the solution only became correct when both variants were handled). Sometimes guessing is not the right tactic because there are too many possible variants. So my suggestion is to make the description more precise, with all the relevant details spelled out: in fact, not just a problem statement, but a tech spec for the puzzle (because you can’t conform to a test without a tech spec; you don’t know what the requirements and constraints are). You have part of a tech spec at the end of the problem statement, but it is not always detailed enough.

You’re looking at it the wrong way around: you shouldn’t be certain that whatever is passed to you will always be in a specific format. There is a saying in computer science that goes “never trust user input”.

Let’s imagine that you make an HTML form so that users can enter their email for a newsletter. You mustn’t assume what they are really going to do; you must put yourself in their shoes, or worse, in the shoes of someone trying to break your website. One of the first things I’d test as a user is to enter something like “owned” to see if you have safeguards, then why not "toto@gmail.com'; echo $sensitiveinformation" or someone else’s email to spam it, etc.

The point is that you mustn’t accept the idea that everything is the way you suppose it to be. You have to think outside the box and wonder “what if they send negative numbers? What if they are not in the proper order? What if there are no links?” and try everything.
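To make that concrete, here is a minimal sketch in Python of reading link input defensively; the stdin format and the specific checks are illustrative assumptions, not from any official statement:

```python
import sys

def read_links(n):
    """Read n links from stdin, tolerating either node order
    and skipping obviously malformed lines."""
    links = []
    for _ in range(n):
        parts = sys.stdin.readline().split()
        if len(parts) != 2:
            continue  # "what if there are no links on this line?"
        a, b = int(parts[0]), int(parts[1])
        if a < 0 or b < 0:
            continue  # "what if they send negative numbers?"
        links.append((min(a, b), max(a, b)))  # normalize the order
    return links
```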

Not only will it help you here, but also in future websites/applications that you’ll probably (or not) make.

Partially true. But here I don’t know much about the problem “engine”. Following your logic, I shouldn’t trust the problem description and should check for myself whether the input format matches it. Furthermore, I’d have to guess the answer to everything that isn’t described but might be important (like the operand order in my example, which is really not an ideal one). Since I have only a limited set of tests, I can test my code only on a limited set of test cases, so all the protective code will usually be skipped and never exercised. And then when I try to validate the code, it says: “Hey, we have another test case. You fail it, but guess what happened.”
I agree that the test cases should be different to avoid hardcoded answers, but in both phases they have to cover the same test intents (the same coverage as the final test set). For example, look again at the Skynet Level 1 problem: all the test cases during development follow the rule “for link input, N1 < N2”, so I can’t check the “N1 > N2” case (but it looks like such a case exists in the final tests). And I can’t create my own test case because I don’t have access to the problem engine. So such a test, with different links, should be added in the development phase so there is an opportunity to exercise that case. Or at least the final test’s name should point out that the operand order is different (because this one is simple to guess, but for harder cases you may not even guess what the case checks).

I see what you mean, and the truth is, the latest puzzles don’t have this issue.

Yes, possibly it’s fine now, because a lot of people ran into this issue. But it may still happen again. So a possible solution is to add a tool for creating custom test cases. Then users would be able to test all the cases they expect, not only those available by default.
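Until such a tool exists, a workaround is a small local harness that pipes a hand-written test case into your program, which at least lets you exercise inputs the IDE never provides. A minimal sketch, assuming a solution that reads stdin and writes stdout (solution.py and the sample input are hypothetical):

```python
# Minimal local harness: feed a custom test case to the solution
# and compare its output against an expected answer.
import subprocess

def run_case(input_text, expected):
    result = subprocess.run(
        ["python", "solution.py"],   # hypothetical solution file
        input=input_text, capture_output=True, text=True, timeout=5,
    )
    print("stderr:", result.stderr)  # keep your debug output visible
    return result.stdout.strip() == expected.strip()

# Hand-written case with links in "max min" order, which the
# official IDE tests never exercised (format is illustrative).
case = "4 4 1\n3 0\n2 1\n3 2\n1 0\n0\n"
print(run_case(case, "0 1"))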


Note: you can check the stdout and stderr output of your program during validation by opening the replay of the failed test case, from the report page.


I opened “Replay and share” for a validation test (in Skynet Finale) and could see stdout, but not stderr, even though my program writes plenty to stderr during testing.

It would be nice to see some output without having to kill your program! Still, this may help me realize the value of integrating test functions during development. Cheers, CodinGame – I love seeing my code do stuff besides console output for school!

I partially agree with this idea, especially since the claim that you cannot debug anything after deployment in real life is not completely true: unhandled exceptions are usually shown to the user. With that in mind, I agree that access to the full console output might be too much, but knowing whether the failure is due to an exception (like OutOfRange) or a timeout would be very valuable, while not “killing” the validation aspect and still preventing hardcoding.
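You can already approximate that distinction yourself: wrap the game loop so an unhandled exception is printed to stderr before the program dies. If the replay shows the trace (where stderr is displayed), it was a crash; if it shows nothing, a timeout is the likelier culprit. A minimal sketch in Python, where game_loop is a placeholder for your own code:

```python
import sys
import traceback

def game_loop():
    # placeholder for the actual puzzle logic
    raise IndexError("list index out of range")

try:
    game_loop()
except Exception:
    # Print the exception type and stack trace to stderr so a crash
    # is distinguishable from a timeout (which prints nothing).
    traceback.print_exc(file=sys.stderr)
    raise
```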


I’ve stumbled over the same problem two times now, and it’s really, really frustrating, to the point where I have no intention of doing any other puzzles and will just move on to the multiplayer stuff only…

I can’t get Skynet Revolution - Part 2 to pass the validators (the IDE test cases work fine); only validators 1 and 5 fail.

I don’t see any obvious reason why it should fail there; nothing is hardcoded in my code.

BUT if I hardcode a solution, I can get past the validators… (I tested it for the first two test cases and stopped then: https://pastebin.com/QHnazBPQ )

So the validators don’t prevent hardcoded solutions, but they won’t show me why my code fails. I think that’s a really shitty situation.

With the Skynet validators I can at least see how far my code gets, but on other puzzles where I had the same problem I see no output, nothing, so I don’t even have a starting point for what could be wrong…

Edit: I finally found the error by using the validator’s stdout to debug my code, but I would have saved at least two hours if I could have searched for it in the IDE normally.