Today, whenever moderators want to validate a contribution, they must check that it meets certain criteria. Creators are often not even aware of these criteria: they can be found in the documentation, but they're scattered across several pages.
To improve the contribution process (both for creation and moderation), I'm proposing a new version of the guidelines. You can find them here. Edit: the guidelines are now up to date.
As you can see, there are several categories, each with a few guidelines: statement, test cases, referee, etc. The idea would be to show only these categories in the validation popup, with a link to the documentation page for reference. We would also like to add this to the popup contributors see when they publish their contributions.
What do you think? Is what is expected from creators and contributors clearer?
Is there something missing? Do you agree with everything?
The documentation is a GitHub repo, so don’t hesitate to contribute.
- “Bad” contributions could still be approved by moderators with the wrong intentions. If that happens and the contribution is really broken, let me know about it on the forum and I’ll take care of it. If the contribution is simply not good enough, it will most likely be rejected by the bot we’re putting in place: after enough votes, if the contribution’s ratings are too low, the bot will reject it. We’re still defining the rating thresholds.
- We’ve discussed internally how we should handle difficulty, especially for Clash of Code. CoC is by nature a speed game, and most CoC players are looking for a very short break, so we decided that easy CoC puzzles should not be rejected on that ground.
- We already have a lot of puzzles and games, so every new contribution should be original enough. Clash of Code is a bit different: it’s interesting for players to expand the pool of puzzles, so contributions close to existing ones (the same programming problem with a different description) should be allowed there.