Coding Games and Programming Challenges to Code Better
Send your feedback or ask for help here!
Created by @ElPeauDeLaBanana, validated by @DeanTheMachine, @FredericLocquet and @Matsoy.
If you have any issues, feel free to ping them.
Instead of comparing the frequency of letters in the message with the reference frequencies of the English language, I accidentally compared the number of occurrences of letters in the message with the frequencies-as-percentages of the English language.
I still got all the tests and validators correct despite not fixing the bug! Presumably because there are approximately 100 letters per message!
Maybe it is similar, I guess?
But there are definitely not 100 letters per test.
It goes from 76 letters in the first one to more than 700 in the last (counting only letters, not punctuation).
The evaluation function I used, for example, went through the 26 possible shifts, computed the sum of the letters' frequencies in the language, and took the maximum value as the correct one.
No comparison, only maximisation of a 'score' value, and I still got 100%.
I think there are just different ways to get the same result here: since the approach only maximises a score, scaling every letter's contribution by the same constant (roughly the message length divided by 100) never changes which shift wins.
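For what it's worth, here is a minimal Python sketch of that "maximise a score over the 26 shifts" idea; the frequency table and helper names are illustrative, not taken from anyone's actual submission:

```python
# Illustrative English letter frequencies in percent (rounded).
ENGLISH_FREQ = {
    'a': 8.2, 'b': 1.5, 'c': 2.8, 'd': 4.3, 'e': 12.7, 'f': 2.2,
    'g': 2.0, 'h': 6.1, 'i': 7.0, 'j': 0.15, 'k': 0.77, 'l': 4.0,
    'm': 2.4, 'n': 6.7, 'o': 7.5, 'p': 1.9, 'q': 0.095, 'r': 6.0,
    's': 6.3, 't': 9.1, 'u': 2.8, 'v': 0.98, 'w': 2.4, 'x': 0.15,
    'y': 2.0, 'z': 0.074,
}

def shift_letter(c, k):
    """Shift a lowercase letter c back by k positions in the alphabet."""
    return chr((ord(c) - ord('a') - k) % 26 + ord('a'))

def best_shift(message):
    """Return the shift whose decryption scores highest against English."""
    letters = [c for c in message.lower() if 'a' <= c <= 'z']
    def score(k):
        # Sum the English frequency of every decrypted letter: the right
        # shift turns the ciphertext into mostly common letters (e, t, a...).
        return sum(ENGLISH_FREQ[shift_letter(c, k)] for c in letters)
    return max(range(26), key=score)
```

Because the score is a plain sum over the letters, multiplying it by any positive constant (the message length, or 100) cannot change which shift comes out on top, which would explain why counts versus percentages made no difference.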
Weak tests: my solution finds the most frequent letter and determines the shift from it (assuming that letter is 'e' in the decoded string). If you pick the most frequent letter right away, while processing the input string, then in the third test the most frequent letter under strict comparison is Y; but if you look for the most frequent letter after the calculation has been done for all letters, the most frequent one in that same test is O. Obviously the test fails with the first approach but passes with the second. I am sure my solution should not pass with more accurate tests.
Please prove me wrong.
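In case it helps the discussion, the heuristic being described looks roughly like the Python sketch below (the function name is made up for illustration). Ties, or the exact point at which you compare counts, decide which letter wins, which is where a Y-versus-O discrepancy can come from:

```python
from collections import Counter

def shift_from_most_frequent(message):
    """Guess the shift by assuming the most common ciphertext letter decodes to 'e'."""
    letters = [c for c in message.lower() if 'a' <= c <= 'z']
    # most_common(1) keeps whichever tied letter was counted first,
    # so two letters with equal counts can silently flip the answer.
    most_common, _ = Counter(letters).most_common(1)[0]
    # 'e' encrypted with shift k becomes most_common, so recover k.
    return (ord(most_common) - ord('e')) % 26
```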
I'm using a chi-squared statistic here, and all the tests pass correctly. But for some reason validator 3 fails constantly when I submit.
I was initially just matching the most frequent letter in the message with the most frequent letter in English, but that sometimes failed in test 3 because two letters have the same frequency. Chi-squared fixed it and makes it pass every time. But I can't figure out why it would fail in the validation.
Is there some case that I'm failing to see?
I would have thought that chi-squared should certainly work, since just summing the differences works, and so does summing each difference divided by its natural frequency*. Maybe the squaring exceeds some type limit and throws an error, rather than it being logically wrong?
*(Assuming the frequencies are divided by the number of alphabetical characters in the message.)
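For reference, a chi-squared pass over the 26 shifts usually looks something like this Python sketch; it reuses the ENGLISH_FREQ table and shift_letter helper from the earlier snippet and is only an illustration, not anyone's exact submission:

```python
from collections import Counter

def chi_squared_shift(message):
    """Pick the shift whose decrypted letter distribution best fits English."""
    letters = [c for c in message.lower() if 'a' <= c <= 'z']
    n = len(letters)
    best_k, best_score = 0, float('inf')
    for k in range(26):
        counts = Counter(shift_letter(c, k) for c in letters)
        score = 0.0
        for letter, freq in ENGLISH_FREQ.items():
            expected = n * freq / 100.0            # expected count in English text
            observed = counts.get(letter, 0)
            score += (observed - expected) ** 2 / expected
        if score < best_score:                     # lower chi-squared = better fit
            best_k, best_score = k, score
    return best_k
```

With messages of a few hundred letters these scores stay far below any floating-point limit, so an overflow from the squaring seems unlikely, at least in Python; an integer overflow would be more plausible if the squared counts were held in a small integer type.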
Hi,
Would you say the problem is about probabilities rather than occurrences?
The table is clearly not about occurrences in the message, but the statement says it is.
thx

[quote="Stef_3, post:2, topic:200495, full:true"]
Instead of comparing the frequency of letters in the message with the reference frequencies of the English language, I accidentally compared the number of occurrences of letters in the message with the frequencies-as-percentages of the English language.
I still got all the tests and validators correct despite not fixing the bug! Presumably because there are approximately 100 letters per message!
[/quote]
Thanks for the input, it might be something like that. I've tested the exact same logic with Python and I got 100%.
My previous failing attempt was in Rust. It might have to do with limits, or maybe with how I handle the printing, as I'm using the str::chars method, which according to the documentation "might not match your idea of what a 'character' is". But it's solved now!
And you're right, I tried with just the difference instead of chi-squared and it works the same!