Code that uses ‘system’-like functions to execute code in another language seems (to me) to run counter to the concept and spirit of code golfing, yet in many or most cases it is required to be competitive on the leaderboards. On the learning angle, users currently have no way to judge how well they’ve golfed the intended language when looking at the leaderboard, since it’s not clear which submissions used such a cheat. I suggest creating a ‘pure’ leaderboard for each code golf puzzle that only showcases scores of ‘pure’ code that does not use such functions.

I’m not sure what would be the best way of achieving the required sandboxing (for lack of a better term), but would using seccomp-BPF to disallow the execve system call work? Or might that break some interpreters/runtime environments? The code submission process could then, in the background, try to run the submitted code sandboxed: on success, the score would be submitted to the ‘pure’ leaderboard; on failure, it would revert to un-sandboxed execution and submit to the legacy leaderboard on success.
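To illustrate, the background flow I have in mind could look something like the sketch below. ‘sandboxed_run’ and ‘submit_score’ are hypothetical placeholders (stubbed here so the script is self-contained), not real site tooling:

```shell
#!/bin/sh
# Sketch of the proposed submission flow. 'sandboxed_run' and
# 'submit_score' are hypothetical placeholders, not real tooling.
sandboxed_run() { false; }  # stands in for: run under a seccomp filter denying execve
submit_score() { echo "score submitted to $1 leaderboard"; }

if sandboxed_run ./solution; then
    submit_score pure        # ran fine without execve: counts as 'pure'
else
    submit_score legacy      # needed execve: re-run un-sandboxed, legacy board
fi
```

With the stub failing the sandboxed run, this falls through to the legacy branch; a real implementation would re-run the solution un-sandboxed before submitting.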
This problem has been raised for a very long time, and so far there is only one answer: it is not a priority.
For example: https://www.codingame.com/forum/t/code-golf-cross-language-bug/1250
Btw, if there were any way to filter out system-like functions, what would be the point of a ‘non-pure’ leaderboard existing at all?
There are a few interesting language interactions: one language is good for one task, another is good for another, and that allows mixed solutions.
But to be honest this kind of thing only applies to a few languages; for most solutions where system is used, it’s to run a script written entirely in another language, because that’s more efficient than a mixed solution.
And yes, it’s not very interesting when it’s just that.
If you want fair competition, there are a few languages where calling system is not worth it, because the cost of the system call doesn’t outweigh the benefit of using another language.
So for example in Python, JS, and Ruby, the top solutions on the leaderboard are purely in the expected language (except maybe for Chuck Norris in Ruby, which I believe calls Sed).
When searching other posts on this topic, I had skipped over that post because of the “bug” labeling, which seems misleading to me now that I’ve read the thread. I wonder if the unwillingness to tackle the problem is due to a failure of imagination in realizing how simple a solution could be. The OP states:
“I really dislike the system() call posibility”, to which CvxFous replied:
If this has been their line of thought on the topic, it’s no wonder they’ve brushed it off. Tackling the problem by performing language-specific parsing and intervening at the language-specific library level would, I imagine, be quite the kludge, taking considerable effort to pull off reasonably well. However, using a wrapper program to blacklist the execve system call (at the kernel level, not the library/language level) should be straightforward to implement, and should cover all the languages. Though, again, maybe there are one or more interpreters or runtime environments that would break under this condition? If someone knows, that would be valuable input to this discussion.
My concern was about not penalizing players who have already submitted solutions. I don’t fault players for submitting solutions that fit the system as it was designed; I fault the design of the system. Adding a second leaderboard would benefit those who want to compare their ‘pure’ solutions against other ‘pure’ solutions, without penalizing those who don’t see this ‘problem’ as a problem at all.