[Community Puzzle] Othello

https://www.codingame.com/multiplayer/bot-programming/othello-1

Send your feedback or ask for help here!

Created by @struct, validated by @Astrobytes, @WINWINWIN and @jrke.
If you have any issues, feel free to ping them.

I loved this one so much when I was a kid! (Later, my love of chess took over from Othello, though.)

Now I decided to bring some codegolf into this multi and submitted a 126-character bot. (Reading all inputs = 119 chars; AI + output = 7 chars...) And now I am #11 - my best rank ever! :slight_smile: Okay, the leaderboard is still kinda short, but still...
And I could do even better with the same bot if @struct would please modify the referee to list the valid actions 'sorted' (by some eval score).


please modify the referee to list the valid actions 'sorted' (by some eval score)

Well done! Not bad to have achieved that result with just a few lines of code!

The concept of a "score" exists in your algorithm but not in the game platform. It is each bot's own duty to define and calculate any score for its own use, if there is such a need.

Yes, I know... :slight_smile: but in 7 bytes I can only echo back the last valid action I receive from the referee. I would rank even higher if the leaderboard were changed to TrueSkill points PER source-code character - a coding-efficiency ranking. (Don't take what I write too seriously.)
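
(For anyone curious, a non-golfed C++ sketch of the same "echo back the last legal action" strategy could look like the snippet below. The per-turn input layout - 8 board rows, then the number of legal actions, then the actions - is my assumption here; check the statement for the exact protocol.)

#include <iostream>
#include <string>

// Sketch: ignore the board, read the legal actions, play the last one.
// The input layout (8 board rows, then an action count, then the actions)
// is assumed, not taken from the referee source.
int main() {
    while (true) {
        std::string row;
        for (int i = 0; i < 8; ++i) std::cin >> row;   // skip the board
        int actionCount;
        std::cin >> actionCount;
        std::string action, last;
        for (int i = 0; i < actionCount; ++i) {
            std::cin >> action;                        // remember only the last action
            last = action;
        }
        std::cout << last << std::endl;                // play it
    }
}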

After watching the "battle TV" for a while, I found that battles between any two given players are very likely to always play out the same, because each bot has a fixed algorithm that produces the same response to the same input.
To make battles more watchable, I suggest the game engine shuffle the list of possible actions each round. That would produce more varied results and make the games less predictable.

Hi @java_coffee_cup, I think this is only the case with some of the lower-ranked bots that simply output either a random action from the list or the first/last action; I doubt most of those are even evaluating the moves.
I know that I, for one, and most of the upper part of the league do not pick moves that way at all.
Once there are more serious players, the issue should become less noticeable, I think.

I am wondering: as this is a game with no hidden information, does it make sense to introduce any random element into the AI of a stronger bot, even if it is NOT using a randomness-based algo like MCTS or GA? If the same two (strong) bots fight each other twice (with the same colours), why would the games be different?
I mean, the starting game state is the same, so the eval should yield the same 'best move' each turn in both games, and the resulting games would be exactly the same, no?
My guesses at an answer to my own question (but I am curious what more seasoned players think):

  • It might make sense for a strong bot to select randomly from the 2-3 best move candidates (only if their eval scores are 'close'), just to make its play non-deterministic and harder to prepare against - see the sketch after this list.
  • Depending on the other workload (e.g. background processes) on the virtual machine the bot runs on, the number of CPU cycles actually available within the 100 ms response limit might differ slightly each time. So even a deterministic minimax or alpha-beta might build a different tree, and thus pick a different best move.
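
A minimal sketch of the first idea (the struct, the epsilon margin and the move notation are mine, nothing CG-specific): score the moves with your own eval, then pick uniformly among those within a small margin of the best.

#include <algorithm>
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// A move in referee notation plus the score your own eval assigned to it.
struct ScoredMove { std::string action; double score; };

// Pick uniformly among the moves whose score is within `epsilon` of the best,
// so a strong but otherwise deterministic bot does not replay identical games.
std::string pick_near_best(const std::vector<ScoredMove>& moves, double epsilon, std::mt19937& rng) {
    double best = std::max_element(moves.begin(), moves.end(),
        [](const ScoredMove& a, const ScoredMove& b) { return a.score < b.score; })->score;
    std::vector<std::string> candidates;
    for (const auto& m : moves)
        if (m.score >= best - epsilon) candidates.push_back(m.action);
    std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
    return candidates[pick(rng)];
}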

Maybe it is time to introduce into the Othello rules a new game feature you guys all seem to love: fog of war! :slight_smile:

And ahh, @Astrobytes, you revealed my 7-byte AI publicly - that is not fair :slight_smile: As punishment, I will now ruin the game for everyone by disclosing my full AI code, even at the risk of a lifetime CG ban:

echo$s;

Unless there is some random element in the bot or in the game engine, battles between two bots will always follow the same path and end with the same final result. That is the current situation I have observed.

You want the outcome of a zero-sum, perfect information, deterministic game between two players using a deterministic algorithm to be different every time?

By the way, this is a perfect puzzle as a playground for trying out basic algos. Game representation in 2 longs (see the bitboard sketch after this list)... simple rules... no fog... no randomness. My Othello TODO list for the next 6 months:

  • do my 2nd ever MCTS implementation (after ultimate tic-tac-toe)
  • do my 2nd ever minimax with alpha-beta (after a solo puzzle)
  • do my 1st ever GA
    Though I think I have to give up golfing for these; 126 chars seems a bit too short.
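
Since "game representation in 2 longs" is the main attraction for me, here is a sketch of that representation (the a1 = bit 0 mapping is my own choice, not necessarily the referee's coordinate convention): one 64-bit word per colour, and legal moves generated by extending runs of opponent discs in the 8 directions.

#include <cstdint>

// One 64-bit word per colour; bit i = square i (a1 = bit 0 ... h8 = bit 63).
constexpr uint64_t NOT_A_FILE = 0xfefefefefefefefeULL; // clears file a after an eastward shift
constexpr uint64_t NOT_H_FILE = 0x7f7f7f7f7f7f7f7fULL; // clears file h after a westward shift

// Shift a bitboard one square in one of the 8 directions without wrapping.
uint64_t shift_dir(uint64_t b, int dir) {
    switch (dir) {
        case 0:  return  b << 8;                // north
        case 1:  return  b >> 8;                // south
        case 2:  return (b << 1) & NOT_A_FILE;  // east
        case 3:  return (b >> 1) & NOT_H_FILE;  // west
        case 4:  return (b << 9) & NOT_A_FILE;  // north-east
        case 5:  return (b << 7) & NOT_H_FILE;  // north-west
        case 6:  return (b >> 7) & NOT_A_FILE;  // south-east
        default: return (b >> 9) & NOT_H_FILE;  // south-west
    }
}

// Bitboard of all legal moves for the side holding `own` against `opp`.
uint64_t legal_moves(uint64_t own, uint64_t opp) {
    uint64_t empty = ~(own | opp);
    uint64_t moves = 0;
    for (int d = 0; d < 8; ++d) {
        uint64_t run = shift_dir(own, d) & opp;   // opponent discs adjacent to our own
        for (int i = 0; i < 5; ++i)               // a run can hold at most 6 opponent discs
            run |= shift_dir(run, d) & opp;
        moves |= shift_dir(run, d) & empty;       // the empty square just past the run
    }
    return moves;
}

Flipping discs after a move works the same way, just anchored on the placed disc instead of the whole own set.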

Beware of passes, or even multiple passes in a row.
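
To spell that out (reusing legal_moves() from the bitboard sketch above): a side with no legal move must pass and the opponent plays again, and only when neither side has a legal move is the game over - so a search needs an explicit pass node rather than treating an empty move list as terminal.

#include <cstdint>

uint64_t legal_moves(uint64_t own, uint64_t opp);    // from the bitboard sketch above

// No legal move: the side to move passes and the opponent moves again.
bool must_pass(uint64_t own, uint64_t opp) {
    return legal_moves(own, opp) == 0;
}

// Neither side can move: the game ends and discs are counted.
bool game_over(uint64_t own, uint64_t opp) {
    return legal_moves(own, opp) == 0 && legal_moves(opp, own) == 0;
}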

A good strategy guide, originally aimed at human players, that also provides ideas for a static game-state evaluation function, usable in any search algo that needs one: http://www.samsoft.org.uk/reversi/strategy.htm
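
For example, the guide's advice about corners and the dangerous squares next to them translates almost directly into a weighted-square table. The weights below are illustrative guesses of mine, not tuned values, and the bitboards are the same ones as in the sketch above.

#include <cstdint>

// Positional weights: corners are precious, the X- and C-squares next to them risky.
constexpr int WEIGHTS[64] = {
    100, -20,  10,   5,   5,  10, -20, 100,
    -20, -40,  -2,  -2,  -2,  -2, -40, -20,
     10,  -2,   3,   1,   1,   3,  -2,  10,
      5,  -2,   1,   0,   0,   1,  -2,   5,
      5,  -2,   1,   0,   0,   1,  -2,   5,
     10,  -2,   3,   1,   1,   3,  -2,  10,
    -20, -40,  -2,  -2,  -2,  -2, -40, -20,
    100, -20,  10,   5,   5,  10, -20, 100,
};

// Static evaluation from the point of view of `own` (same bitboards as above).
int evaluate(uint64_t own, uint64_t opp) {
    int score = 0;
    for (int sq = 0; sq < 64; ++sq) {
        if (own & (1ULL << sq))      score += WEIGHTS[sq];
        else if (opp & (1ULL << sq)) score -= WEIGHTS[sq];
    }
    return score;
}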


Hi everyone,
I'm currently trying to implement a Monte Carlo tree search algorithm to find the best move, but the algorithm doesn't have time to explore much (3 iterations within 2000 ms, and fewer than 2 iterations within 500 ms).
Is that normal, and is there a way to speed up the algorithm by a lot?

No, you have a major performance problem; you should get at least a few thousand simulations every turn. My personal experience with this game, though, is that minimax performs better than MCTS - the opposite of UTTT.
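
If you want to see where the time goes, a tiny timing harness like the sketch below (the names are mine, nothing CG-specific) reports how many iterations actually fit in the budget. At 3 iterations per 2000 ms the bottleneck is almost certainly per-node work - copying big board objects, allocating in the playout loop - rather than MCTS itself.

#include <chrono>
#include <functional>

// Run `iteration` (one MCTS pass, or one random playout) repeatedly and
// count how many complete within `budget_ms` milliseconds.
int iterations_within(double budget_ms, const std::function<void()>& iteration) {
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    int count = 0;
    while (std::chrono::duration<double, std::milli>(clock::now() - start).count() < budget_ms) {
        iteration();
        ++count;
    }
    return count;
}

// Usage with a hypothetical bot object:
//   int n = iterations_within(90.0, [&] { bot.run_one_mcts_iteration(); });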

Before I spend more time checking my algorithm: do you use any library for MCTS/minimax?
I couldn't find any for C++.

You won't find any MCTS library on CG. All bots are hand-made, polished and refined with love.


Hello everyone, what software is Jacek using? It's too powerful. How can I download and use it?

You can't. The whole point of CG is to write your own software.

You can't download it; everyone writes their own bot. If you're curious, the search is an MCTS variant that uses a neural network for evaluation; no opening books are used.
