Smash The Code - Feedback & Strategy

The topic where you can tell us how you liked the challenge and how you solved it. (Do not post full code here.)

You can share your solution via GitHub/Pastebin, but every link will be removed once the competition becomes available in multi.

Bruteforce it all!

I optimized it up to depth 5: I checked all 22 moves for the current turn, but only the x-1 to x+1 column moves for the future turns.
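
A sketch of that pruned move generation (the Move struct and the 22-placement breakdown are illustrative, not the poster's actual code):

```cpp
#include <vector>

struct Move { int column; int rotation; };

// All 22 legal placements: rotations 1 and 3 are vertical (any of the 6
// columns); rotation 0 puts the second ball to the right (columns 0-4) and
// rotation 2 puts it to the left (columns 1-5). 12 + 5 + 5 = 22.
std::vector<Move> allMoves() {
    std::vector<Move> moves;
    for (int col = 0; col < 6; ++col) {
        moves.push_back({col, 1});
        moves.push_back({col, 3});
    }
    for (int col = 0; col < 5; ++col) moves.push_back({col, 0});
    for (int col = 1; col < 6; ++col) moves.push_back({col, 2});
    return moves;
}

// For turns beyond the first, only keep placements in columns x-1..x+1
// around the previous drop column x.
std::vector<Move> neighborMoves(int x) {
    std::vector<Move> pruned;
    for (Move m : allMoves())
        if (m.column >= x - 1 && m.column <= x + 1) pruned.push_back(m);
    return pruned;
}
```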

Part brute force, part trying to keep the board more or less tidy. I looked about 4 moves ahead, but at each level I only checked the game states that seemed to show the most promise (e.g. having a lot of same-colour blocks touching).

If it sees a good combo in those 4 moves, it goes for it. All the rest is placing blocks to make that combo more likely.

12th in the final ranking.

Monte Carlo Tree Search. I search for the opponent's solution for 15 ms, then I search for myself for 85 ms. The strategy is the following:

  • I calculate the "score to kill". It is the "one shot" score minus one line (and modified by my current nuisance)
  • I calculate the "score to be safe". I look at the largest gap between two columns on my opponent's grid; the score to be safe is the number of lines needed to fill that gap.
  • I calculate a floor combo score. This is scoreToKill*0.25.

At this point I can search for my solution. I ignore combos that are too small (under the floor combo score). If I play a combo greater than the score to be safe, I assume that the opponent will not send me a new combo during the next turns of my search.
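
A rough sketch of those three thresholds. The 70-points-per-skull and 6-skulls-per-line constants come from the game rules; the exact formulas and names are one reading of the post, not Magus's actual code:

```cpp
#include <algorithm>

const int POINTS_PER_SKULL = 70;  // game rule: one skull per 70 points scored
const int SKULLS_PER_LINE  = 6;   // the grid is 6 columns wide

// "One shot" score minus one line, adjusted by the nuisance already hanging
// over my own grid.
int scoreToKill(int opponentFreeLines, int myIncomingNuisancePoints) {
    int lines = std::max(opponentFreeLines - 1, 1);
    return lines * SKULLS_PER_LINE * POINTS_PER_SKULL - myIncomingNuisancePoints;
}

// Lines needed to fill the largest gap between two columns of the opponent.
int scoreToBeSafe(int largestColumnGap) {
    return largestColumnGap * SKULLS_PER_LINE * POINTS_PER_SKULL;
}

int floorComboScore(int killScore) { return killScore / 4; }  // scoreToKill * 0.25
```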

I searched for a good evaluation function during the whole contest, but I'm not in the top 10, so I assume my evaluation function is not that good :frowning: I use the following criteria:

  • Every group of 2 balls is 2 points. Every group of 3 balls is 6 points.
  • Every ball touching an empty cell adds 1 point. This one is important because you want to be able to access as many balls as possible, as fast as possible.
  • A late addition: every ball touching another ball (except a skull ball) is worth 0.2 points per touching ball. With the previous one and this one, your AI will try to build towers. This is the best way to be "skull proof".

And lastly, every score point is one bonus point, but with some additional rules:

  • If your combo is above the kill score, set it to the kill score. Don't waste your time building combos that are too big.
  • If the combo is too small, ignore it (as said above)
  • To stop my AI from chasing a better combo every time it sees one, I use a "patience" term. My patience term is 0.7.

For example, if I can win 100 points at turn 0 (the current turn), my eval function returns 100*0.7^0 = 100 points. But if I can win 500 points at turn 2, my eval function returns 500*0.7^2 = 245. This way, my AI will prefer a combo now over a bigger combo 5 turns away (500*0.7^5 ≈ 84 < 100).
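
Putting the three rules together, a minimal sketch (values from the post, names hypothetical):

```cpp
#include <algorithm>
#include <cmath>

const double PATIENCE = 0.7;

// Cap the combo at the kill score, drop it if below the floor, then discount
// it by the patience term for every turn of waiting.
double discountedScore(double comboScore, int turnsAhead,
                       double killScore, double floorScore) {
    if (comboScore < floorScore) return 0.0;             // too small: ignore it
    comboScore = std::min(comboScore, killScore);        // no oversized combos
    return comboScore * std::pow(PATIENCE, turnsAhead);  // 500 at turn 5 -> ~84
}
```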

15 Likes

My score heuristic:
Basic simulator here: find out how many points I'd get for a particular move.

My connection heuristic:
each group of n connected balls is given n² points (so 1, 4 or 9).
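
A minimal flood-fill sketch of that scoring (the grid encoding is assumed: -1 for empty or skull cells, 0-4 for colors):

```cpp
#include <utility>
#include <vector>

const int W = 6, H = 12;

int connectionScore(const std::vector<std::vector<int>>& grid) {
    std::vector<std::vector<bool>> seen(H, std::vector<bool>(W, false));
    int score = 0;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (grid[y][x] < 0 || seen[y][x]) continue;
            // Iterative flood fill counting the size of this group.
            int color = grid[y][x], n = 0;
            std::vector<std::pair<int, int>> stack = {{y, x}};
            seen[y][x] = true;
            while (!stack.empty()) {
                auto [cy, cx] = stack.back();
                stack.pop_back();
                ++n;
                const int dy[4] = {1, -1, 0, 0}, dx[4] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int ny = cy + dy[d], nx = cx + dx[d];
                    if (ny < 0 || ny >= H || nx < 0 || nx >= W) continue;
                    if (seen[ny][nx] || grid[ny][nx] != color) continue;
                    seen[ny][nx] = true;
                    stack.push_back({ny, nx});
                }
            }
            score += n * n;  // a group of n balls scores n^2 points
        }
    return score;
}
```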

My super greedy combo heuristic:
after the score heuristic is done, post-simulate by dropping a vertical mono-colored block for each color and each column (5 × 6 = 30 cases to test), and keep the biggest score!
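
A sketch of that probe; `fillColumnWithColor` and `resolveChains` stand in for the poster's simulator and are only declared here:

```cpp
#include <algorithm>
#include <array>

using Board = std::array<std::array<int, 6>, 12>;  // -1 empty, 0..4 colors

void fillColumnWithColor(Board& b, int column, int color);  // fill column to the top
int  resolveChains(Board& b);                               // play out chains, return score

int bestComboPotential(const Board& board) {
    int best = 0;
    for (int color = 0; color < 5; ++color)   // 5 colors
        for (int col = 0; col < 6; ++col) {   // 6 columns -> 30 probes in total
            Board copy = board;               // post-simulate on a copy
            fillColumnWithColor(copy, col, color);
            best = std::max(best, resolveChains(copy));
        }
    return best;
}
```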

With no tuning (naively deciding a move by summing up all 3 of these calculations), this got my AI to start filling the entire board with a single overkilling combo. Good start! :smiley:

Then I added a basic Monte Carlo search to look for better configurations ahead. I look for a combo that would fill half of the opponent's board (counting from the column that has the fewest cells). If one is found, try to do it.

If the opponent is about to activate a combo, try to activate mine; if not, build a combo.
If my opponent's combo heuristic is tingling my AI, my own combo heuristic score is decreased so that only the score & connection heuristics remain, to favor immediate action rather than building up and waiting for the right piece to come.

The MC search is of variable depth: it does depth 3, and if there's some time left, it searches a bit at depths 4 and 5. :slightly_smiling:

1 Like

I did a Monte Carlo too, but not quite as good, since I'm ranked "only" #55 as I write this.

My main concern has been performance. I've tried to squeeze every bit out of C# (forced inlining, usage of structs, …), but there's only so much you can do with a managed language. In the end I could simulate about 100k steps during the 100 ms (one step being "pick one move, play the combos, and calculate the score").

The main algorithm goes like this:

  • Exhaustively bruteforce the opponent's moves with a depth of 2, to detect danger (combos that will send at least 2 lines of skulls)
  • Run the simulations: pick n moves randomly and play them to see if I can find a combo; repeat until the 100 ms are spent (see the sketch after this list). "n" is picked dynamically: I use an 8-step depth most of the time, but reduce it to 6 in the endgame (it makes no sense to search 8 moves ahead when the grid is almost filled, so I prefer to reduce the depth and increase the probability of finding a short killer move). I pick the best moves based on the score: best move one turn ahead, best move two turns ahead, best move three turns ahead, etc. If for the same depth I find two moves with the same score, I use two other criteria for the choice: 1/ the variance of the column heights (so the AI will try to build towers), 2/ whether it makes clusters of 3 of the same color.
  • When the simulations end and I have the best move for each depth, it's a matter of deciding on how many turns I want to play:
    • If the opponent is going to make a combo in the next turn or the turn after that, I pick the corresponding depth
    • If there is a combo with more than 1500 points in the next 5 turns, I pick it
    • Otherwise, I use a "patience" term (as perfectly explained by Magus) of 0.4 (lowered to 0.2 in the end game, when my AI is just trying to survive)
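
A condensed sketch of that simulation loop with the dynamic depth choice (the original was C#; this C++ sketch uses illustrative names, and `randomMove`/`playAndScore` stand in for the poster's simulator):

```cpp
#include <chrono>
#include <random>

struct Board { int filledCells = 0; /* ... simulator state ... */ };
struct Move  { int column = 0, rotation = 1; };

Move randomMove(std::mt19937& rng);          // hypothetical: one of the 22 placements
int  playAndScore(Board& b, const Move& m);  // hypothetical: apply move, resolve combos

// Fills best[d] with the best first move found d turns ahead (bestScore[]
// must be initialized to a very low value by the caller).
void runSimulations(const Board& start, Move best[9], long bestScore[9]) {
    std::mt19937 rng(42);
    auto deadline = std::chrono::steady_clock::now()
                  + std::chrono::milliseconds(100);
    int maxDepth = (start.filledCells > 48) ? 6 : 8;  // shorter rollouts late game
    while (std::chrono::steady_clock::now() < deadline) {
        Board b = start;
        Move first{};
        long cumScore = 0;
        for (int d = 1; d <= maxDepth; ++d) {
            Move m = randomMove(rng);
            if (d == 1) first = m;
            cumScore += playAndScore(b, m);
            if (cumScore > bestScore[d]) {  // tie-breaks on height variance
                bestScore[d] = cumScore;    // and 3-clusters omitted here
                best[d] = first;
            }
        }
    }
}
```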

In the last day I tried a whole different approach, inspired by neural networks, where I would still simulate randomly, but build a tree of the simulated moves with associated weights. Whenever a desired trait was found on one branch of the tree (such as a combo, a cluster of 3, a construction in towers, etc.), I'd backpropagate and increase the weight of the nodes leading to that move. Conversely, when an undesired trait was found (scattered colors, for instance), I'd reduce the weights of the nodes. The random generator is rigged to pick moves along the branches of the tree with the highest weights. This way, I was hoping to find interesting combos, then refine the best way to get there (for instance, for similar scores, it's much more interesting to build in towers rather than flat). Unfortunately, even though the tree is lazily constructed, the number of allocations took a huge toll during garbage collections, and I couldn't optimize it enough to produce competitive results. So I finally dropped that solution.
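
The weighted-tree idea might look roughly like this (a toy sketch with illustrative names, not the dropped C# implementation):

```cpp
#include <algorithm>
#include <memory>
#include <random>
#include <vector>

struct Node {
    double weight = 1.0;
    Node* parent = nullptr;
    std::vector<std::unique_ptr<Node>> children;  // lazily constructed per move
};

// Walk back up the path that produced a (un)desired trait, nudging weights;
// weights are clamped positive so the distribution below stays valid.
void backpropagate(Node* leaf, double delta) {
    for (Node* n = leaf; n != nullptr; n = n->parent)
        n->weight = std::max(0.01, n->weight + delta);
}

// The "rigged" generator: pick a child with probability proportional to its
// weight (assumes the node has at least one child).
Node* pickChild(Node& node, std::mt19937& rng) {
    std::vector<double> w;
    for (auto& c : node.children) w.push_back(c->weight);
    std::discrete_distribution<int> dist(w.begin(), w.end());
    return node.children[dist(rng)].get();
}
```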

3 Likes

Patience term… gotta remember that. I spent a lot of time trying to teach my bot to accept 'good enough' when he saw something bigger coming down the line.

I did that by decrementing the score of far-away future moves, even down to negative values, relative to the strongest enemy move: the worse the move, the more I decremented the scores of future moves.

My less-than-optimal evaluation function was good enough for (what looks like it might be) a top-100 finish. Edit: finished #83 after hovering in the 90s throughout the final evaluation.

Number of blocks in each group, cubed
+1 for each group near another of the same color (the big timewaster I couldn't get rid of)
+1 if it's also above said group
-15 if the group has no open spaces adjacent to it
-2 if the group size is 1

Then, add to it the number of skull rows the move generated times a multiplier that changed a lot.

If the board was almost full, I'd subtract the number of blocks on the board times a high multiplier to make him focus on survival. That felt kludgy but it worked.
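
Put together, the evaluation might look like this (assuming groups were already extracted by a flood fill; the two multipliers are placeholders since the post says they changed a lot):

```cpp
#include <vector>

struct Group {
    int size = 0;
    int color = 0;
    bool nearSameColorGroup = false;  // another group of the same color adjacent
    bool aboveSaidGroup = false;      // and this group sits above it
    int adjacentOpenCells = 0;        // empty cells touching the group
};

int evaluate(const std::vector<Group>& groups, int skullRowsSent,
             int blocksOnBoard, bool boardAlmostFull) {
    int score = 0;
    for (const Group& g : groups) {
        score += g.size * g.size * g.size;         // group size, cubed
        if (g.nearSameColorGroup) score += 1;
        if (g.aboveSaidGroup)     score += 1;
        if (g.adjacentOpenCells == 0) score -= 15; // buried, unreachable group
        if (g.size == 1)          score -= 2;
    }
    const int SKULL_MULT = 50;     // placeholder: the post says it "changed a lot"
    score += skullRowsSent * SKULL_MULT;
    const int SURVIVAL_MULT = 100; // placeholder "high multiplier"
    if (boardAlmostFull) score -= blocksOnBoard * SURVIVAL_MULT;
    return score;
}
```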

I did brute force for 2 turns for both myself and the opponent using the same evaluation, then subtracted his score from mine to get my move. I'd continue my own evaluation until the time got close, so my brute force was more like 2.3 turns most of the time.

Since I applied the opponent's expected move to my second turn, the bot was good at firing off all my combos right before he was going to be buried, as long as he guessed right. If he guessed wrong, he tended to get owned. He also had a knack for waiting for a weak combo to land, then burying the opponent's now-shorter stacks in skulls.

1 Like

At first I remembered playing this kind of game back in the late '90s, so I started browsing the web in a quest for strategy hints or AI-related pointers. I came across this website: https://puyonexus.net/ which contains a lot of hints and strategies for a game called Puyo.

It was a good starting point to get a grasp of the game and start devising a strategy.

As I was kinda impressed by jeff06's article on Genetic Algorithms from the previous contest, I also looked in that direction and found this page with a few good hints on the matter: http://www.cs.columbia.edu/~evs/ml/MLfinalproj/suk/genetic.html

Seeking more, I ended up looking into this: https://dspace.jaist.ac.jp/dspace/bitstream/10119/10925/1/18704.pdf

In fact I spent most of the first 3 days writing code to simulate the game and looking for clues on the web. At first I just used a simple random distribution, then tried to place colors by their index using some sum mixed with randomness. It wasn't great, but it still got me to the Silver league.

Then, once my simulation was working, I quickly realized that pure brute force would not get me far with so many colors known in advance; the combinations had to be pruned in some way.

I made a pretty basic heuristic that looked for balls of the same color in the neighborhood, giving weights based on whether there was a "friendly" ball close enough or not. What I had in mind was to eliminate as many combinations as possible before attempting to brute force a solution.

And that's all my algorithm is about: I only check for an immediate opponent attack if I have a choice to make at that very moment; all the rest is just a few tweaked weights for pruning, followed by brute forcing to find the best possible combination.

Edit: On a side note, the reason my AI is so limited in terms of opponent analysis is that my algorithm showed poor ability at predicting upcoming moves: from one frame to the next, the prediction could change drastically because of the nature of the heuristic pruning (Edit 2: on second thought, this is actually likely to be a bug; I was too short on time to investigate further… I hope we get the game in multi soon so I can double-check it).

Hence I chose instead to give better weighting to tall structures, in order to maximize my chances of surviving an attack, and to tweak a corresponding coefficient to select my attacking target. I further refined this coefficient by the current remaining space, in order to unleash chains faster when I had less room left.

Then I also applied the same concept the other way around, to pressure the opponent when he was low on space. I guess there is a lot of room for improvement in my algorithm, especially on the heuristic end, which is really very basic and often makes poor decisions. The strong point of my algorithm is probably the number of brute-forced combinations I evaluate every frame: I average 80-100% of 22*4^8 (~1.44M) moves in 95 ms.

And that was enough to get me in the top 10 of the legend league ^^

Once again, it was a lot of fun, thanks!

5 Likes

Awesome challenge as always.

Feedback regarding the league system and the final ranking: it is very frustrating for players not in the Legend league to be unable to be ranked in relation to the higher leagues. Some of the low-Legend players have very low scores, which might indicate they could be beaten easily, giving points and therefore a higher ranking. I've taken the Legend league as an example, but it applies to every league.
One solution to this problem would be, once the challenge is over, to use the old ranking technique, removing the league system, which better reflects everyone's global ranking relative to all other participants.
I've noticed, for example, players going from Gold to Legend once the challenge was over and then being ranked with the Legend players, gaining another 50 to 100 places in the ranking due to weak low-Legend AIs.

The league system is really great for accessibility and for not scaring newcomers away with a really complex set of rules, but it is quite bad at reflecting a player's level relative to all other participants rather than just his own league (which, again, is fine as long as the challenge is not over).
Apart from that bit of frustration at the end (I knew that if I had managed to reach Legend, I would have gained maybe 50 ranks… that's a HUGE frustration), I had a lot of fun :slightly_smiling:

4 Likes

Don't forget that the score you see is calculated in relation to the opponents from different leagues.

That aside, I suspect that some people reached Legend using a mainstream language and then switched to some half-assed solution in an unpopular language, just to get the first-in-language award. This should be totally punishable IMO.

1 Like

I spent all my last hours on computing efficiency, and moved up from top 10 to top 3.

I simulate more than 300k steps in 100 ms.

My main solution-finding framework is a limited-width tree search, limited by two ranking lists:

  1. Heuristic information, like same-color neighbor counts and the highest and lowest columns (list length: 2000).
  2. Accumulated score with a decaying coefficient (list length: 100).

The first one is intended to find upcoming scoring moves.
The second one holds the good solutions among the moves already calculated.

Before the last day, my heuristic evaluation of the board was very complicated, and it was hard to make it efficient.
In the end, I kept only the same-color neighbor counts, the number of skulls, and the tallest and shortest columns. When a move does not score anything, the evaluation of the new board can be updated incrementally from the old value, which keeps the evaluation cheap.

I give some extra points to same-color neighbors in the vertical direction, because they stay stable when the base collapses, and a tall tower might save me when I'm covered by a lot of skulls.

Another important system is pace control, which tries to predict the next fighting turn. I did not polish this part very well and use some hard-coded rules (a toy sketch follows the list):

  1. The first fight might happen after 10 turns.
  2. The following strikes might come 5 turns after the previous one.
  3. If my tree search finds a big strike from my enemy in the following three turns, I'll quicken the pace.
  4. If my tree search cannot find any effective moves before the predicted fighting turn, I'll postpone the prediction.
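
A toy encoding of those rules (the turn numbers come from the post; everything else is illustrative):

```cpp
#include <algorithm>

struct PaceControl {
    int nextFightTurn = 10;                                  // rule 1

    void afterFight(int turn) { nextFightTurn = turn + 5; }  // rule 2
    void enemyStrikeFound(int strikeTurn) {                  // rule 3: speed up
        nextFightTurn = std::min(nextFightTurn, strikeTurn);
    }
    void noEffectiveMoveFound() { ++nextFightTurn; }         // rule 4: postpone
};
```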

I'm swinging a big axe against heavily armored warriors, trying to overpower them with raw striking power.
If my enemy plays with a dagger and cuts off my moves with little strikes, it can be a painful game for me. But it's very hard for the enemy to keep landing little strikes continuously. Once he fails, I'll punish him with my soaring anger.

I do not use English in my daily work, so this might have been a painful read for some. :slight_smile:

10 Likes

Mine was quite easy.

  1. Bruteforce for 4 turns, with one rotation case (chosen randomly) searched only 3 turns deep. Dropping that one rotation case made it possible to check all the others for 4 turns.

  2. Calculate placeScore and gameScore. gameScore comes from the game mechanics; it is plain points. placeScore was easy: 2 points for 2 blocks of the same color, 4 points for 3, and minus 99 points for 4 or more (see the sketch after this list). This gives me turns with a good enough game situation.

  3. If gameScore was higher than the target, I go for it. If less, I make the move with the best placeScore.

  4. The target by default was two lines of skulls. In the first 4 turns after dropping two lines, I try to drop one more line. If the situation is critical for me, I try to clear any blocks; if it is critical for the opponent, the target is one line of skulls. A critical situation is just a count of empty cells: if it is less than 6*4, the target becomes dropping one line of skulls.
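
A sketch of that placeScore, given flood-filled group sizes (the -99 presumably discourages popping a group before it is wanted):

```cpp
#include <vector>

int placeScore(const std::vector<int>& groupSizes) {
    int score = 0;
    for (int n : groupSizes) {
        if (n == 2)      score += 2;   // 2 same-colored blocks
        else if (n == 3) score += 4;   // 3 blocks
        else if (n >= 4) score -= 99;  // 4 or more: heavily penalized
    }
    return score;
}
```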

It got me 93rd place.
My evaluation function for placeScore was very simple, and I should have focused on it first. Instead I tried to track the opponent's situation, but finally decided not to submit that part, since it gave rather poor results :frowning:

Congratulations to the winner, it was very interesting :slight_smile:

1 Like

Feedback… This contest was nice. The game itself was well thought out, and I never got to the point where I would have needed info about the opponent's nuisance, so: simple input, simple output, perfect.

The tactics involved quite a lot of bruteforce, both in the early game and (from what I could read) in the late game. So this game felt really penalizing for slower languages.
I actually switched from python3 to C++ mid-contest for this reason (and I went from sometimes timing-out with only one simulation to simulating all moves to depth 2, with the same poorly-optimized algorithm).

In the end I settled for checking the first 7 valid moves (left to right, all rotations) up to depth 3, evaluating a static heuristic (rewarding large groups of up to 3 balls and groups of 5 or more, and penalizing dropping a ball of the wrong color near a large group).
This tactic was surprisingly decent (able to beat robossnik in about 20% of games). I suspect the fact that it tended to favor moves towards the left side of the board helped it a lot. I also tried prioritizing moves with my static evaluation function, but the end result was much worse.

End ranking: 375

2 Likes

14th here

I had very similar stuff to Magus because we talk a lot, but I stuck with plain Monte Carlo search. So I was just picking 8 moves at random and evaluating them: 25-50k simulations of 8 turns in 100 ms. One problem when you're doing MC vs MCTS is being careful when your board is almost full: the chance of randomly picking 8 moves that don't kill you is then pretty bad, so you have to pick from moves that are actually possible. One option would be to look at the board at every step, add the possible moves to a list and pick from it, but that seemed expensive to me. Instead I had a function that only told me whether a move existed at all; it usually returned instantly after having looked at the first two squares, and if a move was possible I would pick moves at random in a do-while loop. This allowed me to keep my performance when a lot of space was available and still do a lot better in low-space conditions.

If each random pick has a 1/4 chance of being valid, your chance of picking 8 valid moves in a row is (1/4)^8 = 1/65536. With the do-while loops, the problem only grows linearly, with an average of 4*8 = 32 picks to find a valid series of 8 moves.
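
A sketch of both tricks, the cheap existence check and the re-roll loop (the Board type and names are illustrative, not the actual code):

```cpp
#include <random>

const int W = 6, H = 12;
struct Board { int height[W] = {0}; };  // filled cells per column (toy state)

// A vertical pair fits any column with two free cells; a horizontal pair needs
// two adjacent columns with one free cell each. In practice this returns after
// looking at the first columns.
bool existsMove(const Board& b) {
    for (int x = 0; x < W; ++x) {
        if (b.height[x] <= H - 2) return true;
        if (x + 1 < W && b.height[x] < H && b.height[x + 1] < H) return true;
    }
    return false;
}

bool isLegal(const Board& b, int col, int rot);  // hypothetical full check

// Re-roll random moves until a legal one shows up. Only call this after
// existsMove() has confirmed one exists, so the loop terminates.
void pickRandomLegalMove(const Board& b, std::mt19937& rng, int& col, int& rot) {
    std::uniform_int_distribution<int> colDist(0, 5), rotDist(0, 3);
    do {
        col = colDist(rng);
        rot = rotDist(rng);
    } while (!isLegal(b, col, rot));
}
```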

But this is kind of a detail; for the goodies, look at Magus' post.

Feedback: I think the contest wasn't as interesting as the previous one. It was very bruteforcy; as a result, the first person using an interpreted language was 47th, in Python. Also, I don't think making the simplest game info, like score and nuisance points, accessible only by re-coding the game engine was a good move (and even then it was still a guess). I like the league system, but I think the slight decrease in participants compared to the last contest shows something; I expected an increase from player-base inflation alone.

7 Likes

It's funny.

After many tries, I ended up with the same evaluation values for the number of balls in groups (2 for 2 balls and 6 for 3 balls); I could not find better values. But I also had a score of -1 for lonely balls.

I thought about your other evaluations (balls touching empty cells, and so on) but I didn't have the time to implement them properly and find good weights.

1 Like

Who used Monte Carlo? Can you explain in detail how?

Just choose a random move for the first turn, a random rotation, and repeat to some depth?

Then again, again and again?

I used a genetic algorithm in JavaScript.

It could compute around 3-4k moves per 100 ms, with depth between 8 and 6.

Final position: 73.

As an addition to the 'usual' evolutionary algorithm, I also added a kill function to remove 'unhealthy' genomes (scoring too low, doing invalid moves, etc.) and improve the overall quality of the population in a shorter time frame. Added bonus: if the entire existing population dies, it means the AI is in a critical situation and needs to regenerate a new population with a smaller genome (fewer steps), so that it can try more sequences in the same time frame to get out of the critical situation.

The best 10% of each turn's population was stored and used as a seed for the next iteration (after shifting the genome by one).

I gave a score for potential damage (1 point for a 1-block group, 4 points for a 2-block group, 20 points for a 3-block group), multiplied by 1.5 if the group was accessible (1 empty space nearby).
More points were added based on the height of the layout and the number of skulls destroyed.

The total fitness of a genotype was the total score (real points + potential + height bonus + destroyed-skull bonus) divided by (1 + round).

This way, 600 points now are better than 800 next turn, 1400 next turn are better than 600 now, and 2000 at turn 4 are better than 2200 at turn 5.
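
Written out as code (the values come from the post; helper names are mine):

```cpp
// Potential damage of one group, with the 1.5x accessibility bonus.
double groupPotential(int size, bool accessible) {
    double p = (size == 1) ? 1 : (size == 2) ? 4 : 20;  // 1 / 4 / 20 points
    return accessible ? p * 1.5 : p;                    // 1 empty space nearby
}

// Total fitness: everything summed, then divided by (1 + round) so that
// earlier payoffs are preferred.
double fitness(double realPoints, double potential,
               double heightBonus, double skullBonus, int round) {
    return (realPoints + potential + heightBonus + skullBonus) / (1 + round);
}
// e.g. 600 now -> 600/1 = 600 beats 800 next turn -> 800/2 = 400,
// while 1400 next turn -> 700 beats 600 now.
```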

I also tried using enemy prediction (20 ms for the enemy simulation), but the result underperformed compared to the default version, so I had to revert.

Using JS, I was very short on computational power (my late-contest C++ code is much faster, but I didn't have the time to complete it), but I'm happy with the result I achieved and had a lot of fun in this contest!

6 Likes