[Community Puzzle] Connect 4


Send your feedback or ask for help here!

Created by @AshKetchum, validated by @darkhorse64, @jacek1 and @MSmits.
If you have any issues, feel free to ping them.


Thanks for this interesting puzzle!
I would like to ask how I could access some information from the Connect4Board class in the game’s code. Can I access it directly, or do I need to rewrite that part of the code in my program?

You can’t access the referee from your code. The only interaction you can have with it is via the I/O.
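For what it’s worth, here is a minimal sketch of parsing that I/O in Python. The input names (turnIndex, oppPreviousAction) come from the starter code discussed later in this thread, but the exact line order and the 7×9 board size are assumptions here, so check the puzzle statement rather than trusting this sketch.

```python
# Hedged sketch of one turn of input parsing; line order and board
# size (7 rows x 9 columns) are assumptions, not the verified protocol.
def parse_turn(lines):
    """Parse one turn's worth of input lines into a dict."""
    it = iter(lines)
    turn_index = int(next(it))                       # current turn number
    board = [next(it) for _ in range(7)]             # 7 board rows (assumed)
    num_valid = int(next(it))
    valid_actions = [int(next(it)) for _ in range(num_valid)]
    opp_previous_action = int(next(it))              # -1 on the first turn (assumed)
    return {
        "turnIndex": turn_index,
        "board": board,
        "validActions": valid_actions,
        "oppPreviousAction": opp_previous_action,
    }
```

In the real loop you would read the same fields from stdin each turn and print your chosen column, flushing stdout.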

That’s sad ><

After an hour of coding, the default code in Go seems to be buggy: turnIndex is scanned twice every turn, oppPreviousAction returns an incorrect value, and all of this ends with the game crashing…

Hmm the default code in Go works for me. :thinking:

While we’re on the Connect 4 forum, let’s appreciate some Connect 4 memes, shall we? https://9gag.com/gag/aeDNdGv


In the dev environment, on the first move the opponent (player 2) overwrites my first position (so far always bottom middle) if he chooses the same position as I did. It has now happened multiple times.

P.s.: using C# in Wood 1

That’s in the rules. It’s called the STEAL move or pie rule. On his first turn, the second player can either make a move or choose to become the first player. This is due to the advantage the first player has - the middle columns are too good, so it’s worth stealing them. The strategy for the first player is then to play the most balanced first move.
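As a hedged sketch of how a second player might handle the pie rule: steal only when the opponent opened near the center. The exact action string ("STEAL") and the 9-column width are assumptions taken from this thread, not verified against the referee.

```python
# Sketch only: "STEAL" as the action keyword and num_cols=9 are
# assumptions -- check the referee's statement for the real protocol.
def first_move_as_p2(opp_first_col, num_cols=9):
    """Steal if the opponent opened in a strong central column."""
    center = num_cols // 2
    # columns at or adjacent to the center are the strongest openings
    if abs(opp_first_col - center) <= 1:
        return "STEAL"       # take over the opponent's strong position
    return str(center)       # otherwise just claim the center ourselves
```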


I submitted to test in the arena and it has been spinning on the first 5 battles forever, just saying “No battles available” at the top. I submitted in C#. I checked other bot arenas and those are working normally. Am I doing something wrong?

Oh, nevermind. Seems CodinGame is having major server issues. Clashes won’t start either.

Perhaps one of the C++ coders can answer this question: if you use minimax with alpha-beta pruning for this Connect 4 puzzle, what depth can you reach within the time limit? I am using Python and I cannot go deeper than depth 4 (depth 4 = where you, without pruning, have to evaluate 9 * 9 * 9 * 9 game states). I know that C++ is faster than Python, but is it fast enough to reach depth 5?
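For reference, the kind of plain alpha-beta search I mean can be sketched like this; `children` and `evaluate` are placeholders you would supply for your own board representation.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning (sketch, not a full Connect 4 bot).

    children(state, maximizing) -> list of successor states (empty if terminal)
    evaluate(state) -> heuristic score from the maximizing player's view
    """
    kids = children(state, maximizing)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: opponent will never allow this branch
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break  # alpha cutoff
        return best
```

Good move ordering (trying central columns first) makes the pruning much more effective and is usually the cheapest way to buy an extra ply.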

I would say yes with the caveat that C++ is not efficient per se: the quality of the implementation matters a lot.


Thanks for your answer darkhorse64. You are saying “I would say” - so you didn’t use minimax in your solution? I just want to see if someone can confirm that it’s possible to reach depth 5, given that the implementation is correct and the evaluation is not too costly.

Yes, I am using Monte Carlo Tree Search and I represent the game state with bitboards for maximum efficiency. My reply is based on the fact that people usually see a 10x speed increase or more going from Python to C++.
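For readers unfamiliar with bitboards: the classic Connect 4 trick stores each player’s stones as bits, with one spare bit per column so shift operations don’t bleed between columns. The sketch below assumes the 9-column, 7-row board discussed in this thread; it is an illustration of the technique, not darkhorse64’s actual code.

```python
# Bitboard sketch for an assumed 7-row, 9-column board.
# Each column occupies HEIGHT+1 bits; the spare top bit per column
# keeps the shift tricks below from wrapping into the next column.
HEIGHT = 7
STRIDE = HEIGHT + 1  # bits per column

def has_four(bb: int) -> bool:
    """True if the bitboard contains four aligned stones.

    Shifts: 1 = vertical, STRIDE = horizontal, STRIDE +/- 1 = diagonals."""
    for shift in (1, STRIDE, STRIDE - 1, STRIDE + 1):
        pairs = bb & (bb >> shift)          # marks every two-in-a-row
        if pairs & (pairs >> (2 * shift)):  # two adjacent pairs = four
            return True
    return False

def drop(bb_mine: int, bb_all: int, col: int):
    """Drop a stone for the side to move in `col` (assumed not full).

    Adding the column's bottom bit to the occupancy mask carries up to
    the lowest free cell of that column."""
    move = (bb_all + (1 << (col * STRIDE))) & ~bb_all
    return bb_mine | move, bb_all | move
```

With this layout, win detection is four shift-and-mask passes regardless of board size, which is what makes millions of rollouts per second plausible.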


Thanks @darkhorse64, a 10x speed increase would nicely fit for the next depth. - If you don’t mind, let me ask a question about your MCTS approach. Do you use the basic MCTS approach, that is: repeatedly play a game to the end (!), then propagate the result back, updating the statistics in the nodes of your growing search tree, and decide what to do next using the UCT formula? This approach would use no heuristics for evaluation, just a check for win or loss. I really can’t imagine that this can work with this many possible ways of playing a game to the end. In short: with your MCTS, do you always roll out to the end of the game, or do you use an evaluation for game states that are not terminal?

Depending on the game, both approaches are valid. For Connect 4, I use full rollouts (= played until the end). I have implemented a very fast game engine that plays 2 million games per second. At this rate, the game statistics are really relevant.
I use other tricks to speed things up:

  • during rollouts, I avoid losing moves and play winning moves. Playing “smart” rollouts greatly enhances the search quality.
  • I have a solver. When expanding, if I reach the end of the game and the game is a win, it means the opponent played a losing move. If all moves are losses, it means the opponent played a winning move. These results can be backpropagated according to these rules. The solver routinely predicts the game outcome 15 to 20 moves before the end.
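A rough Python sketch of such a “smart” rollout policy; `legal_moves`, `play`, and `wins` are placeholder callbacks standing in for a real engine, not darkhorse64’s actual code.

```python
import random

# Sketch of a "smart" rollout move choice: take an immediate win,
# avoid moves that hand the opponent an immediate win, else random.
# legal_moves(state), play(state, move), wins(state) are assumed
# helpers you would implement on your own board representation.
def smart_rollout_move(state, legal_moves, play, wins):
    moves = legal_moves(state)
    # 1. take an immediately winning move if one exists
    for m in moves:
        if wins(play(state, m)):
            return m
    # 2. filter out moves that let the opponent win on the reply
    safe = [m for m in moves
            if not any(wins(play(play(state, m), r))
                       for r in legal_moves(play(state, m)))]
    # 3. otherwise (or if every move loses) play uniformly at random
    return random.choice(safe or moves)
```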

Amazing stuff @darkhorse64. Last question then, about your solver (for predicting the outcome): is it some externally trained neural net? If so, is your overall setup something like AlphaGo?

MCTS Solver is an enhancement to MCTS (https://dke.maastrichtuniversity.nl/m.winands/documents/uctloa.pdf) which changes how you backpropagate the result when your selection reaches a solved state (the first solved states are when the game is over, but as these results move back through the tree you solve earlier states as well), so it quickly gives more accurate scores (e.g. if it selects a node that looked good early but turns out to be a loss, that node’s value instantly becomes 0 instead of the average having to be dragged down over multiple visits). It doesn’t require any extra game knowledge / offline training.
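The propagation rule for proven values can be sketched like this. Values are stored from the perspective of the player to move at each node; this is an illustration of the paper’s idea, not RoboStac’s code.

```python
# MCTS-Solver sketch: proven game-theoretic values override averaged
# win statistics. Each node stores `proven` from the point of view of
# the player to move AT THAT NODE.
WIN, LOSS, UNKNOWN = 1, -1, 0

class Node:
    def __init__(self, proven=UNKNOWN, children=()):
        self.proven = proven
        self.children = list(children)

def backprop_solved(node):
    """Update a node's proven value from its children.

    If any child is a proven LOSS for the opponent, this node is a
    proven WIN; if every child is a proven WIN for the opponent, this
    node is a proven LOSS; otherwise it stays UNKNOWN and keeps using
    the usual averaged statistics."""
    if any(child.proven == LOSS for child in node.children):
        node.proven = WIN
    elif all(child.proven == WIN for child in node.children):
        node.proven = LOSS
    else:
        node.proven = UNKNOWN
```

Applying this rule on every backpropagation step is what lets solved endgame results bubble up the tree and decide positions many moves before the end.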

There are some neural net bots on C4 - the bot I currently have submitted is based on AlphaZero techniques. There are differences from AlphaZero (e.g. due to CG constraints the network is a small dense network), but the overall idea is the same (MCTS search guided by a NN giving policy / value information for each state). Marchete has made a guide for oware (Coding Games and Programming Challenges to Code Better) which shows how to get something like this working on CG.

edit: http://blog.gamesolver.org/ is a good resource for improving a minimax based C4 bot (and a lot of the same things apply to MCTS versions as well)


Thank you very much @RoboStac for your explanations and for sharing those links. Very, very interesting!