Bender - Episode 3 - Puzzle discussion


#43

I got 100%. This is the puzzle I felt the least confident about, so I just tried whatever and saw what happened.

To deal with the constant term from program initialization, I calculated the derivative of the input data so that the constant term could be ignored. Then I compared those derivatives against the derivatives of the ideal curves.
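In rough Python, the derivative step is something like this (an illustrative sketch only, not my actual code - the n/t values here are made up, and the candidate list is just the usual set):

```python
import math

# Made-up stand-in for the parsed input: n_values are the input sizes,
# t_values the measured times (faked here as O(n) plus a constant setup cost).
n_values = [100, 200, 400, 800, 1600]
t_values = [5.0 + 0.01 * n for n in n_values]

def differences(xs, ys):
    """Discrete derivative dy/dx between consecutive samples; any constant term drops out."""
    return [(y1 - y0) / (x1 - x0) for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:])]

# Derivatives of the measured data and of each ideal curve, computed the same way.
measured = differences(n_values, t_values)
candidates = {
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: float(n) ** 2,
}
ideal = {name: differences(n_values, [f(n) for n in n_values])
         for name, f in candidates.items()}
```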

To compare the derivatives, I chose to divide one gradient by another, which leads to errors with the O(1) curve because its gradient is zero. I hard-coded this case: if t hasn’t varied much, just output O(1).
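The comparison itself can then be a small fit score over the gradient ratios, plus the hard-coded O(1) check (again just a sketch - the tolerance value is arbitrary, and it assumes positive gradients):

```python
def looks_constant(t_values, tolerance=0.1):
    """Hard-coded O(1) case: if t hasn't varied much, skip the ratio test entirely."""
    return (max(t_values) - min(t_values)) < tolerance * max(t_values)

def ratio_spread(measured, ideal):
    """Divide one set of gradients by the other; a good match keeps the ratio nearly
    constant, so the spread of the ratios works as a fit score (assumes positive gradients)."""
    ratios = [m / i for m, i in zip(measured, ideal)]
    return max(ratios) / min(ratios)

# Using the variables from the snippet above:
# best = "O(1)" if looks_constant(t_values) else \
#        min(ideal, key=lambda name: ratio_spread(measured, ideal[name]))
```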

Also, calculating the O(2^n) curve leads to overflow in all test cases except the last one, due to the large n values in the input. I added a check to ignore this curve when overflow happens.
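The overflow guard can be as simple as evaluating 2^n only when it fits in a double (sketch; the limit is arbitrary):

```python
import math

def safe_exp2(n, limit=1e300):
    """2^n for the O(2^n) candidate, or None when it would overflow a double,
    in which case the curve is simply dropped from the comparison."""
    return 2.0 ** n if n * math.log(2.0) < math.log(limit) else None

print([safe_exp2(n) for n in (10, 50, 2000)])   # [1024.0, 1125899906842624.0, None]
```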

I was happy that I didn’t need to add any special handling beyond that - no need to cut or remove data, or to detect outliers.


#44

I saw “regression analysis” mentioned as a note to this problem, so I calculated fitted values for the various candidate functions (using the least squares method, as is usual in linear regression) and then the residual sum of squares (RSS). I then chose the function with the smallest RSS. With this approach, I have a problem only with the “simple” example, where the answer should be O(1).
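Simplified, the approach looks something like this (illustrative Python, not my real code - the candidate set and the sample data are just examples):

```python
import math

def fit_rss(xs, ys):
    """Least-squares fit y = a*x + b, returning the residual sum of squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

candidates = {
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: float(n) ** 2,
    "O(n^3)":     lambda n: float(n) ** 3,
}

def best_complexity(n_values, t_values):
    """Transform n by each candidate, fit a line, and keep the smallest RSS."""
    return min(candidates,
               key=lambda name: fit_rss([candidates[name](n) for n in n_values], t_values))

# Made-up measurements that grow roughly like n log n:
ns = [1000, 2000, 4000, 8000, 16000]
ts = [0.002 * n * math.log(n) + 3 for n in ns]
print(best_complexity(ns, ts))   # O(n log n)
```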

It seems that regression analysis is not appropriate for the first example (I can find other functions with a smaller RSS, which are therefore more optimal from this point of view). I handled this case separately with the ratio of the last and first values (if it is small enough, it is an O(1) problem); otherwise I use regression to find the best function. Do you have another way to recognize an O(1) problem?
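The separate O(1) check I use is essentially just this (the threshold here is arbitrary):

```python
def is_constant_time(t_values, threshold=2.0):
    """If the last measurement is not much larger than the first, call it O(1);
    otherwise fall back to the regression comparison."""
    return t_values[-1] / t_values[0] < threshold

print(is_constant_time([4.1, 4.3, 4.0, 4.2, 4.4]))   # True  -> O(1)
print(is_constant_time([1.0, 2.1, 4.2, 8.5, 16.9]))  # False -> use regression
```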


#45

Damn, this puzzle messed with my head a lot, but I solved it anyway ^^


#46

There’s a typo in the problem description: in “Let’s determin the complexity of each algorithm and choose the most optimal one.”, “determine” is misspelt.


#47

At first this task sounded easy. In the end it turned out that if you do not know what you are doing, it is quite complicated. It took me 4 days to figure out that I needed to build a regression line for each complexity model from the given input data for N, compare each of them to the regression line of the measured milliseconds, and choose the closest pair (the milliseconds regression line and the complexity model with the smallest error).
These videos helped me a lot: https://www.youtube.com/watch?v=ZkjP5RJLQF4&list=PLIeGtxpvyG-LoKUpV0fSY8BGKIMIdmfCi
I also made several diagrams to help me visualize the problem: https://docs.google.com/spreadsheets/d/1ApMmM5T38RL5B0XR_pRsKWzQmdQqjIVC0ChFh5VSwkQ/edit?usp=sharing
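Roughly, the idea boils down to something like this sketch (illustrative Python with numpy, not my actual program - the model list is just the usual set of candidates, and it ends up close to the RSS comparison described in #44):

```python
import numpy as np

# One transform of N per complexity model; fitting the milliseconds linearly against
# g(N) gives a regression line per model, and the model whose line tracks the
# measurements most closely wins.
MODELS = {
    "O(log n)":     np.log,
    "O(n)":         lambda n: n,
    "O(n log n)":   lambda n: n * np.log(n),
    "O(n^2)":       lambda n: n ** 2,
    "O(n^2 log n)": lambda n: n ** 2 * np.log(n),
    "O(n^3)":       lambda n: n ** 3,
    "O(2^n)":       lambda n: np.exp2(n),
}

def closest_model(n, ms):
    """Fit ms ~ a*g(n) + b for every model and return the one with the smallest
    residual sum of squares."""
    n = np.asarray(n, dtype=float)
    ms = np.asarray(ms, dtype=float)
    best_name, best_err = None, np.inf
    for name, g in MODELS.items():
        with np.errstate(over="ignore"):    # 2^n may overflow to inf for large n
            x = g(n)
        if not np.all(np.isfinite(x)):      # drop curves that overflow
            continue
        a, b = np.polyfit(x, ms, 1)         # regression line for this model
        err = np.sum((ms - (a * x + b)) ** 2)
        if err < best_err:
            best_name, best_err = name, err
    return best_name

print(closest_model([100, 200, 400, 800], [12.0, 45.0, 170.0, 650.0]))   # O(n^2)
```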