Send your feedback or ask for help here!
I loved this one so much when I was a kid! (Later, chess love took over Othello though.)
Now I decided to bring some code golf into this multi and submitted a 126-character bot. (Reading all inputs = 119 chars; AI + output = 7 chars…), and now I am #11 - my best rank ever! Okay, the leaderboard is kinda short so far, but still…
And I could do even better with the same bot, if @struct could please modify the referee to list the valid actions ‘sorted’ (by some eval score).
“please modify the referee to list the valid actions ‘sorted’ (by some eval score)”
Well done! Not bad to have achieved that result with a few lines of code!
The concept of a “score” exists in your algorithm but not in the game platform. It should be each bot’s own duty to define and calculate any score it needs.
Yes, I know… but in 7 bytes I can only echo back the last valid action I receive from the referee. I would rank even higher if the leaderboard were modified to show TrueSkill points PER source-code character - a coding-efficiency ranking. (Don’t take what I write too seriously.)
After watching the “battle TV” for a while, I found that battles between any two players are highly likely to always play out the same, because each bot has a fixed algorithm that produces the same response to the same input.
To enhance the viewability of battles, I suggest the game engine shuffle the list of possible actions each round. It would produce more varied battle results and make the game less predictable.
Hi @java_coffee_cup, I think this is only the case with some of the lower-ranked bots, which simply output either a random action from the list or the first/last action; I doubt most of those are even evaluating the moves.
I know that I, for one, am not using those moves at all, and neither is most of the upper part of the league.
If more serious players join, the issue should become less noticeable, I think.
I am wondering: since this is a game with no hidden information, does it make sense to introduce any random element into the AI of a stronger bot, even if it is NOT using a randomness-based algorithm like MCTS or a GA? If the same two (strong) bots fight each other twice (with the same colours), why would the games be different?
I mean, the starting game state is the same, so the eval should yield the same ‘best move’ each turn in both games, and the resulting games would be exactly identical, no?
My guessed answers to my own question (though I am curious what more seasoned players think):
- It might make sense for a strong bot to select randomly among the 2-3 best move candidates (only if their eval scores are ‘close’), just to make its play nondeterministic and harder to exploit.
- Depending on the other workload (e.g. background processes) on the virtual machine the bot is running on, the number of CPU cycles actually available within the 100 ms response limit might differ slightly each time. So even a deterministic minimax or alpha-beta might build a different tree, and thus pick a different best move.
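The first idea above - random tie-breaking among near-equal candidates - can be sketched in a few lines. This is my own illustration, not anyone's actual bot; `ScoredMove` and the epsilon threshold are assumptions:

```cpp
// Sketch: pick randomly among the moves whose eval is within `eps` of the
// best score, so a deterministic bot becomes harder to exploit.
// ScoredMove and the scores are hypothetical placeholders.
#include <algorithm>
#include <random>
#include <vector>

struct ScoredMove { int move; double score; };

int pickNearBest(const std::vector<ScoredMove>& moves, double eps, std::mt19937& rng) {
    // Find the best evaluation among all candidates.
    double best = std::max_element(moves.begin(), moves.end(),
        [](const ScoredMove& a, const ScoredMove& b) { return a.score < b.score; })->score;
    // Keep every move that is "close enough" to the best.
    std::vector<int> candidates;
    for (const auto& m : moves)
        if (best - m.score <= eps) candidates.push_back(m.move);
    // Choose uniformly among the near-best moves.
    std::uniform_int_distribution<std::size_t> d(0, candidates.size() - 1);
    return candidates[d(rng)];
}
```

With `eps = 0`, this degenerates back to a deterministic best-move pick, so the randomness can be tuned or switched off.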
Maybe it is time to introduce a new game feature into Othello rules you guys all seem to love: fog of war!
And ahh, @Astrobytes, you revealed my 7-byte AI publicly - that is not fair! As punishment, I will now ruin the game for everyone by disclosing my full AI code - even at the risk of a lifetime CG ban:
Unless there is some random element in the bot or in the game engine, battles between two bots will always follow the same path and end with the same final result. That is the current situation I have observed.
You want the outcome of a zero-sum, perfect information, deterministic game between two players using a deterministic algorithm to be different every time?
By the way, this is a perfect puzzle as a playground for trying out basic algorithms. Game representation in 2 longs… simple rules… no fog… no randomness. My TODO list for the next 6 months for Othello:
- do my 2nd ever MCTS implementation (after ultimate tic-tac-toe)
- do my 2nd ever minimax with alpha-beta (after a solo puzzle)
- do my 1st ever GA
Though I think I will have to give up golfing for this; 126 chars seems a bit short for all that.
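The “2 longs” representation mentioned above is the classic bitboard layout: one 64-bit word per colour, one bit per square. A minimal sketch of the idea (all names are my own, not from any posted bot):

```cpp
// Bitboard sketch: one 64-bit mask per colour; bit (row * 8 + col)
// marks the square at (row, col).
#include <cstdint>

struct Board {
    std::uint64_t me = 0;   // discs of the side to move
    std::uint64_t opp = 0;  // discs of the opponent
};

inline std::uint64_t bit(int row, int col) {
    return 1ULL << (row * 8 + col);
}

inline bool occupied(const Board& b, int row, int col) {
    return ((b.me | b.opp) & bit(row, col)) != 0;
}

// Standard Othello starting position, seen from Black (the side to move).
Board initial() {
    Board b;
    b.me  = bit(3, 4) | bit(4, 3);  // black discs
    b.opp = bit(3, 3) | bit(4, 4);  // white discs
    return b;
}
```

Flipping discs then becomes bitwise shifts and masks along the eight directions, which is what makes a fast minimax or MCTS playout feasible within 100 ms.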
Beware of the passes or even multiple passes.
A good strategy guide, originally aimed at human players, that also provides ideas for a static game-state evaluation function, usable in any search algorithm that needs one: http://www.samsoft.org.uk/reversi/strategy.htm
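The guide's positional advice (corners are gold, the squares next to corners are poison) translates naturally into a weight-table evaluation. The weights below are illustrative values of my own, not taken from the guide, and the bitboard layout (bit i = square i) is an assumption:

```cpp
#include <cstdint>

// Illustrative positional weights: corners high, X- and C-squares
// (the squares adjacent to corners) strongly negative.
static const int W[8][8] = {
    { 100, -20, 10,  5,  5, 10, -20, 100 },
    { -20, -40,  1,  1,  1,  1, -40, -20 },
    {  10,   1,  3,  2,  2,  3,   1,  10 },
    {   5,   1,  2,  1,  1,  2,   1,   5 },
    {   5,   1,  2,  1,  1,  2,   1,   5 },
    {  10,   1,  3,  2,  2,  3,   1,  10 },
    { -20, -40,  1,  1,  1,  1, -40, -20 },
    { 100, -20, 10,  5,  5, 10, -20, 100 },
};

// Static eval from the point of view of `me`; both sides are bitboards
// with bit i representing square (i / 8, i % 8).
int evaluate(std::uint64_t me, std::uint64_t opp) {
    int score = 0;
    for (int i = 0; i < 64; ++i) {
        std::uint64_t m = 1ULL << i;
        if (me & m)  score += W[i / 8][i % 8];
        if (opp & m) score -= W[i / 8][i % 8];
    }
    return score;
}
```

A real bot would likely blend this with mobility and stability terms, and shift the weights as the game approaches the final disc count.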
I’m actually trying to implement a Monte Carlo tree search algorithm to find the best move, but the algorithm doesn’t have time to explore much (3 loops in 2000 ms and fewer than 2 loops in 500 ms).
Is that normal, and is there a way to increase the speed of the algorithm by a lot?
No, you have a major performance problem; you should get at least a few thousand simulations every turn. My personal experience with this game, though, is that minimax performs better than MCTS - the opposite of UTTT.
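A quick sanity check is to count how many playouts fit in a fixed time budget: the timed loop itself costs almost nothing, so if you only complete 2-3 iterations, the time is going into your board code or playout, not the loop. A sketch, where `stubPlayout()` is a hypothetical stand-in for a real random playout:

```cpp
// Count how many (stub) simulations fit into a time budget, to isolate
// where the time goes. Swap stubPlayout() for your real random playout
// to measure actual throughput.
#include <chrono>

int stubPlayout() { return 1; }  // placeholder for a real random game

long long simulationsWithin(int budgetMs) {
    using clock = std::chrono::steady_clock;
    const auto end = clock::now() + std::chrono::milliseconds(budgetMs);
    long long n = 0;
    while (clock::now() < end) {
        stubPlayout();
        ++n;
    }
    return n;
}
```

With a trivial stub this runs millions of iterations in 50 ms; comparing that against your real playout count tells you how expensive each simulation is.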
Before I spend more time checking my algorithm: do you use any library for MCTS/minimax?
I couldn’t find any in C++.
You won’t find any MCTS library on CG. All bots are hand-made, polished and refined with love.