[Community Puzzle] 1D Spreadsheet

Hello,
I'm using Python with 2 functions: one that determines the operation to perform and one that determines the values.
My code passes all the tests except the 13th (Deep Birecursion):
"Failure
The process timed out. This may mean that your solution is not optimized enough to handle certain cases."
Any idea for optimizing?
Thanks in advance

@Elwithien Memoize the computed values (memoization) so you never have to recompute them later (that is a bit more than an “optimization”, by the way: it guarantees linear O(n) complexity instead of exponential O(2^n) in the worst case).
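In Python, that advice can look like the following minimal sketch (it assumes the VALUE/ADD/SUB/MULT operations and $-prefixed cell references from the puzzle statement; the recursion-limit bump is only needed for very deep chains):

```python
import sys

# Deep dependency chains (like the "Deep Birecursion" test) can exceed
# Python's default recursion limit, so raise it before recursing.
sys.setrecursionlimit(100_000)

n = int(input())
cells = [input().split() for _ in range(n)]  # each line: operation arg1 arg2
cache = {}                                   # cell index -> computed value

def value(arg):
    # "$k" is a reference to cell k; anything else is a literal number
    return evaluate(int(arg[1:])) if arg.startswith("$") else int(arg)

def evaluate(i):
    if i in cache:                # memoization: never compute a cell twice
        return cache[i]
    op, a1, a2 = cells[i]
    if op == "VALUE":
        res = value(a1)
    elif op == "ADD":
        res = value(a1) + value(a2)
    elif op == "SUB":
        res = value(a1) - value(a2)
    else:                         # MULT
        res = value(a1) * value(a2)
    cache[i] = res
    return res

for i in range(n):
    print(evaluate(i))
```

Without the cache, a cell referenced by two other cells is evaluated twice, and chained references compound that doubling into the O(2^n) worst case; with it, every cell is computed exactly once.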

5 Likes

Thank you :slight_smile:

Does anyone have any tips? Having a hard time wrapping my head around how to solve this one. Do we need recursion to solve the cells that other cells are dependent on first?

Really scratching my head lol :frowning:

So I got it, but it took about 2-3 hours of thinking. Man, I feel dumb.

Also, JavaScript is weird. -0 is a thing!

1 Like

Thank you very much @Niako; after a bit of research on memoization I managed to finish this level.

i tried to solve this puzzle in bash. the dependency graph was to be solved by parallel processes waiting for their dependencies to finish, similar to how systemd solves daemon dependencies during system startup using sockets (http://0pointer.de/blog/projects/systemd.html).

sadly it did not work for the last three test cases. the codingame environment has a hard limit of 200 file descriptors (ulimit -Hn). 100 parallel processes need a bit more than that. maybe it would have worked if i could have used bash coproc. but for now coprocs are limited to one background process (https://wiki-dev.bash-hackers.org/syntax/keywords/coproc).

my script uses background processes for concurrency, mkfifo to manage synchronization, flock to manage critical sections, and the filesystem to store state.

i tried to reduce the number of parallel processes by preprocessing the input using awk and delaying the start of the processes. but to no avail. also, with too much preprocessing i would have ended up solving the dependency graph in awk instead of letting the parallel processes solve it themselves. all the beautiful syncing and locking would be of no use.

i am sad that i cannot publish the script since it does not pass validation because of the last three test cases. i am writing this to mourn a bit for my script. and to brag a bit about my idea. and maybe the codingame devs will take pity on my plea and raise the file descriptor limit. although i do not know how many i would need. i was not able to determine the file descriptor count. lsof is a beast.

it also irks me a bit that this puzzle is categorized as easy. it seems more like a medium to me. but maybe my striving for this concurrent bash script solution biased my perception of the difficulty.

thank you for reading.

6 Likes

I think it’s a great idea and you ought to push it through.

A few thoughts:

  • You didn’t tell us specifically how you implemented your dependency tracking. This is likely to be crucial to your current blocker, the number of file descriptors.
  • Is systemd available on the CG platform? That would be a very fun way to implement a solution.
  • I’m pretty sure make is available; that could be another not-as-concurrent-but-still-fun way to go.

You piqued my interest; I’m going to have to try it out too. :smiley:

2 Likes

thank you for the encouragement.

i ended up solving it using a makefile (thank you for the hint). it feels a bit cheaty. all my script does is transform the input to a makefile. the makefile does the “heavy lifting” of solving the dependency graph. then again, letting parallel processes sort it out with blocking fifos basically means the linux kernel is solving the dependency graph.

still, using a makefile feels right too because i am using “the right tool for the job”: one of the main purposes of a makefile is to sort out the dependency graph.
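the script itself is not shown here, but the transformation could look roughly like this (a hypothetical sketch: the cell_<i> file names and the shell arithmetic in the recipes are my own choices, not necessarily the poster's):

```python
# emit one make target per cell: the recipe writes the cell's value to a
# file named cell_<i>, and referenced cells become prerequisites, so make's
# own dependency resolution decides the evaluation order (run the output
# with `make -j` to get the parallelism back).
OPS = {"ADD": "+", "SUB": "-", "MULT": "*"}

def shell_term(arg):
    # "$$" in a makefile recipe escapes to one "$" for the shell, so
    # "$$(cat cell_k)" substitutes cell k's stored value at run time
    return f"$$(cat cell_{arg[1:]})" if arg.startswith("$") else arg

def rule(i, op, a1, a2):
    deps = " ".join(f"cell_{a[1:]}" for a in (a1, a2) if a.startswith("$"))
    expr = shell_term(a1) if op == "VALUE" else \
        f"{shell_term(a1)} {OPS[op]} {shell_term(a2)}"
    return f"cell_{i}: {deps}\n\techo $$(( {expr} )) > cell_{i}\n"

n = int(input())
targets = " ".join(f"cell_{i}" for i in range(n))
print(f"all: {targets}\n\tcat {targets}\n")
for i in range(n):
    print(rule(i, *input().split()))
```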

but the parallel processing with syncing and locking in a bash script had some nerd cred feeling to it. oh well. i guess using a makefile has nerd cred too because all the script kiddies today do not even know what a makefile is when all they use is npm.

for your question about my implementation: i suspect that the limit i am hitting is not the file descriptor limit.

here is a rough sketch of my parallel process implementation:

a process roughly does the following (a python stand-in sketch follows the list):

  1. parse the operation and arguments
  2. if both arguments are available then calculate and exit. this is the trivial case.
  3. if an argument is missing then notify the other process (by placing a file) so it knows that this one is waiting. then read from a fifo which will block (until someone writes to the fifo).
  4. if the other process is done it will write to the fifo which will allow this process to continue.
  5. if this process is done collecting all arguments and calculating it will itself see whether some other process is waiting and write to their fifo to allow them to continue.
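to make steps 3 to 5 concrete, here is a tiny python stand-in for the blocking-fifo handshake (the real thing is bash with flock; the names and the hard-coded value are made up):

```python
import os
import tempfile
from multiprocessing import Process

def cell_1_worker(fifo):
    value = 21 + 21                  # pretend this is the real calculation
    with open(fifo, "w") as f:       # step 5: wake whoever is waiting
        f.write(f"{value}\n")

def cell_0_worker(fifo):
    with open(fifo) as f:            # step 3: opening for read blocks until
        dep = int(f.read())          # a writer opens, writes, and closes
    print("cell 0 resolved its dependency:", dep)

if __name__ == "__main__":
    # one fifo per cell: a process that needs cell 1's value blocks on the
    # fifo until the process computing cell 1 writes to it (step 4)
    fifo = os.path.join(tempfile.mkdtemp(), "cell_1.fifo")
    os.mkfifo(fifo)
    p = Process(target=cell_1_worker, args=(fifo,))
    p.start()
    cell_0_worker(fifo)
    p.join()
```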

file descriptors are used when reading from and writing to the fifo, and when locking critical sections using flock. the critical sections are in points 3 and 5 above (a race condition between checking for a value and writing a value).

no file descriptor action happens simultaneously within a given process. so the extra file descriptor usage of a process is at most one at any point in time. at least as far as i understand. i am by far no expert in linux kernel internals.

one thing i tried was to close stdin, stdout and stderr since i didn’t use them (other than for debug prints). it had no effect on the number of processes i was able to start before the error. so i suspected it was not a file descriptor limit.

the error message: /tmp/Answer.sh: fork: retry: Resource temporarily unavailable

a quick search came up with a handful of internet posts saying it is a file descriptor limit. but it seems that it could be any other limit from ulimit. i did not make a lot of effort to understand exactly which limit i hit.

i tried to optimize a bit here and there but it seemed clear to me that no amount of optimizing would reduce the amount of resources i need, other than solving the dependency graph before starting the processes. at which point the parallel processes are no longer doing anything cool.

i am curious whether you will be able to solve it using parallel processing in bash. i hope i did not spoil too much for you.

thank you again for the encouragement. i am happy that my idea and my rambling about my mourning inspired someone.

3 Likes

Hello everyone,

I submitted a Python 3 solution for this puzzle and it solved all test cases except “accounting is hard 1” (92% of the puzzle). I have been looking through it and can’t really figure out how to debug it. Is there a way to make my code visible / testable by the community so you could point me in the right direction as to what I did wrong?

If this is not the right place for this type of question, please feel free to redirect me and I will follow your instructions.

Thank you in advance for your help.
Halogen.

4 Likes

I am wondering if anyone could pass the last test case with a C++ implementation.

2 Likes

Did you end up using recursion? I tried it but it times out. I’m really out of ideas here. (Using C; got up to test case 10.)

1 Like

@OverCoder Of course, you can pass anything with C++.
@AbsoluteUnit Consider adding memoization (keep the computed cell values to avoid recomputing them several times) to your recursive implementation.

4 Likes

@Niako Thank you! Memoization worked for me.

Thank you very much!! That idea helped me solve the last test!

Memoization worked for me too, thank you!

The last test seems to be impossible to pass with Rust, even with memoization (caching the computed values of referenced cells).
I get a timeout on the last test.

I did the puzzle in Rust and my version still works, so maybe there is a huge bottleneck in your code?
I’m a little busy right now, but if you still can’t find the problem you could send me your code.

Way too hard for the easy section. Shadows Of The Knight EP1, for example, was much easier, yet it sits in the medium section.

5 Likes

Hmm, so the solution that I’m working with right now in Go is able to solve all but 2 of the test cases (Accounting is hard 1 and Accounting is hard 2)…

“Accounting is hard 1” shows:

Found: “81”
Expected: “-119”

“Accounting is hard 2” shows:

Found: “0”
Expected: “-8526”

I’m wondering if the way I’m resolving the dependency chains is causing some kind of order-of-operations issue. Does anyone know if this is a caveat that needs to be considered?

1 Like

Seriously, am I the first one to publish a solution?
lol

Anyway, for those who are stuck, here are some tips (a sketch follows the list):

  • create a table / array with 4 dimensions and use it wisely
  • when you resolve a cell, find the initial “VALUES” and cache them “on the way back”.
  • look whether you have a cached value first :wink:
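For those who want it spelled out, here is one way to read these tips in Python, with an explicit stack instead of recursion so the deep test cases cannot overflow the call stack (the “table” here is a list of [op, arg1, arg2] rows plus a separate cache rather than a literal 4-dimensional array, and it assumes the references are acyclic, as the puzzle guarantees):

```python
def evaluate_all(cells):
    """cells: list of [op, arg1, arg2] rows; returns every cell's value."""
    cache = {}                        # "look if you have a cached value first"

    def val(a):
        # "$k" resolves through the cache; anything else is a literal
        return cache[int(a[1:])] if a.startswith("$") else int(a)

    for root in range(len(cells)):
        stack = [root]
        while stack:
            i = stack[-1]
            if i in cache:            # already resolved along another path
                stack.pop()
                continue
            op, a1, a2 = cells[i]
            pending = [int(a[1:]) for a in (a1, a2)
                       if a.startswith("$") and int(a[1:]) not in cache]
            if pending:               # resolve the references first...
                stack.extend(pending)
                continue
            if op == "VALUE":
                cache[i] = val(a1)    # ...and cache "on the way back"
            elif op == "ADD":
                cache[i] = val(a1) + val(a2)
            elif op == "SUB":
                cache[i] = val(a1) - val(a2)
            else:                     # MULT
                cache[i] = val(a1) * val(a2)
            stack.pop()
    return [cache[i] for i in range(len(cells))]

n = int(input())
for v in evaluate_all([input().split() for _ in range(n)]):
    print(v)
```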

2 Likes