In the current edition of my AI4Games course, whose lab is mainly based on CodinGame, I decided to put more emphasis on puzzles - not only multiplayer games. As there were no classic puzzles satisfying my needs, I decided to create my own (A* exercise, Minimax exercise, MCTS exercise). And I was hit by everything I remembered as inconvenient from trying this activity last time, probably a year ago. Sadly, I just realized that many of these issues were already pointed out and foreseen by @player_one in a topic from 1.5 years ago: link.
Thus, I decided to share my feelings, thoughts, and ideas for improvement. Of course, I'm very interested in how the things I'm pointing out are seen by the rest of the CG community. I structured the post so that the sections correspond to the life cycle of a puzzle: creating, validating, and finally - hopefully - solving.
It's probably the most straightforward one. The interface for making puzzles is sometimes a painful copy-paste exercise, but it is mostly OK (although I wouldn't complain about some "upload from XML" option; and adding test cases between existing ones isn't the most user-friendly operation under the sun).
However, there are some simple (as one might think) things that are impossible to do when writing a puzzle. The most trivial and important one is inserting hyperlinks. The choice is either to make people google for the phrase (even though a common reviewing point is that there should be a link), or to paste the plain URL as text, which looks ugly, is unreadable, and requires copy-pasting to actually follow the link.
The second point concerns image insertion. It is probably more complex to add, but really - a picture is worth a thousand words - and this would be a very useful feature for the more complex puzzles. I know ASCII art is fancy, but it is not always the most readable.
The last, and most complex, improvement proposal repeats my idea from the previously mentioned topic: let us make interactive puzzles! This is not SPOJ, this is CodinGame. Community puzzles will always be dull next to official ones because they lack the graphics that make official puzzles fun. At the very least, give users the power to submit programs as validators - programs that can converse with the user's program. This is the scheme used by most official puzzles, and it is the other thing that distinguishes such game-like tasks from plain, boring output-checking tasks. The interactivity that gives a game feel really matters here. It would also allow implementing more interesting puzzles, and in a less 'artificial' way. Consider my Minimax puzzle - what's the point of alpha-beta if I have to read all the leaves anyway? No point at all. It's just the best approach I found within the current limitations. The situation with the MCTS puzzle is even worse.
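To make the alpha-beta point concrete: pruning only pays off when the program can *choose* which leaves to read, which requires an interactive referee. If the puzzle input forces reading every leaf up front, pruning saves nothing. A minimal sketch (the game tree here is a made-up example, not from my puzzle):

```python
def minimax(tree, maximizing, alpha=float("-inf"), beta=float("inf"),
            counter=None):
    """Alpha-beta minimax over a nested-list game tree.

    Leaves are ints; internal nodes are lists of children.
    `counter` (a one-element list) tracks how many leaves are evaluated.
    """
    if isinstance(tree, int):              # leaf: "read" and evaluate it
        if counter is not None:
            counter[0] += 1
        return tree
    if maximizing:
        value = float("-inf")
        for child in tree:
            value = max(value, minimax(child, False, alpha, beta, counter))
            alpha = max(alpha, value)
            if alpha >= beta:              # remaining children are skipped
                break
        return value
    value = float("inf")
    for child in tree:
        value = min(value, minimax(child, True, alpha, beta, counter))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# An 8-leaf tree: pruning reads only 5 of the 8 leaves,
# yet returns the same root value as plain minimax.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
pruned = [0]
best = minimax(tree, True, counter=pruned)
print(best, pruned[0])  # prints: 5 5
```

With a non-interactive, output-checking puzzle, all 8 leaves arrive in the input anyway, so the saved reads are purely cosmetic - which is exactly why an interactive validator would make the exercise meaningful.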
The reviewing process sounds great - really, on paper it shines. In practice it's… not working well, to put it euphemistically. However, I don't have any clear recipes; I can only point out the flaws. I won't focus on CoC puzzles because a) they are handled better, being simpler, and b) they are not an interesting case from my point of view.
One thing I know should be fixed, and strongly suggest fixing, is that the acceptance/rejection ratio is not visible directly on the puzzle. Currently we have to click the puzzle, then click the dots, to see the values - which is incomprehensible and inconvenient. You also have to click Play to see who approved the puzzle, and I still have no idea how to check who rejected it. (Notifications are another issue: they do not stay attached to puzzles after a month, two months, and so on.) I wouldn't mind puzzle/CoC filtering either, but that's a minor suggestion.
The problem is that there are around 80 puzzles in the queue, some more than a year old, mostly stuck in limbo. No one is patient enough to solve and approve them; no one is brave enough to reject them. The idea of community rejection/acceptance is good. There are even some (really minor) XP points to gain, but it still seems this is not motivating enough - or, alternatively, it is not clear enough how to do the reviews.
If I don't like a puzzle, that doesn't necessarily mean I'm rejecting it. I often feel I don't have the background to throw it in the bin right away - and probably multiple reviewers feel the same, so the puzzle stays untouched. On the other hand, there are puzzles I know are good, but writing a working solution is too much work and I have no time, so I won't accept them. Even if I do, waiting for the remaining two approvals can take months!
It's easier with clashes because they are smaller. But with community puzzles? I have to write my own solution, which takes as much time as solving any other unsolved community puzzle. Then I inform the author of my proposed improvements, errors found, etc. Then I wait for a response - if the author still cares, and some no longer do. The alternative is jumping into someone's contribution and changing it without their approval; as old versions are not stored, that does not seem like nice behavior. Also, I would have to know all the existing (>160) community puzzles to be able to spot a duplicate. (Just a note that tags would simplify the process, as they would let one beam-search through the list of accepted puzzles.)
So assume I solved the puzzle; now I have to give a verdict. That is problematic because of the lack of clear criteria. I know everything is subjective, but really: 'original enough + nicely themed' is no criterion at all. What's the global policy? Accept as many puzzles as possible? Accept the ones that teach something useful? Only hard ones? In the end, all of them are treated the same, so all this thinking is pointless. Ironically, IMHO it would be easier to accept a 'hard puzzle' or an 'easy puzzle' than just a 'puzzle'. At least we (I mean the reviewers) would know more clearly what is expected of us.
When opening the community puzzles page, you are thrown into a bag of more than 160 puzzles, and all you know is that the newest come first. In theory there are user grades, but: a) they are meaningless, b) nearly all of them have 4 stars (the rest have 5). Why meaningless? Given a grade, do you know whether the puzzle is easy or hard? No. Do you know whether it teaches something important? No. You only know that some (not all) of the users who finished the puzzle liked it - whatever that means to them. You don't even have access to your friends' grades.
OK, but we have the number of people who solved each puzzle. That gives some intuition about easiness - except when the puzzle is new, or easy but long and boring, or the statement is vague, or it has been long forgotten at the bottom of the list, or whatever.
My point is, there should be a tagging system. If we are here to train and learn, what is expected is a mechanism consistent with that of the classic puzzles: tags pointing out the topics covered, and difficulty assessed by 'experts', hopefully influencing XP gain. Some community puzzles are hard and interesting and deserve appropriately assigned rewards; some have the difficulty of CoC tasks.
Tags and difficulties should also go hand in hand with some mechanism for filtering/sorting the puzzles. Also, given the number of puzzles, some more achievements would be very appropriate (solve 50 community puzzles, solve 100, solve 10 hard ones - given the aforementioned tag system, etc.).
To summarize: as there are no new classic puzzles (why?), community puzzles are the only thing sustaining growth of the 'practice' subpage. And because they are not evolving, they are decaying. They are not that fancy, having no graphics and no interactivity. There are too many of them, and they are too diverse in difficulty and teaching potential, to be treated as an unordered list without any sorting/filtering. Citing @player_one's prediction, we now have a "haphazard list of hundreds of random puzzles". And the reviewing process is slow, unreliable, and unclear in its goals.
I'm very interested in your feelings: authors, reviewers, solvers. And CG team - as this situation is not actually new - what plans do you have to support this half(!) of CodinGame-based activities?