Smash The Code - Feedback & Strategy

Don’t forget that the score you see is calculated in relation to the opponents from different leagues.

That aside, I suspect that some people reached legend using a mainstream language and then switched to some half-assed solution in an unpopular language, just to get the 1st in the language award. This should be totally punishable IMO

1 Like

I spent all my last hours on computing efficiency, and climbed from the top 10 to the top 3.

I simulate more than 300k steps in 100 ms.

My main solution-finding framework is a limited-width tree search, limited by two ranking lists:

  1. Heuristic information, like same-color neighbors and the highest and lowest columns (length: 2k)
  2. Accumulated score with a decaying coefficient (length: 100)

The first one is intended to find upcoming scoring moves.
The second one holds the good solutions among the already-calculated moves.
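
The two-ranking-list idea above is essentially a beam search with two beams per level. Here is a minimal sketch; the function names, toy state API and default widths are my own, not the author's code:

```python
import heapq

def limited_width_search(start, expand, heuristic, acc_score,
                         widths=(2000, 100), depth=8):
    """Two-beam limited-width tree search.
    Beam 1 keeps boards that look promising (heuristic),
    beam 2 keeps the lines with the best accumulated, decayed score."""
    frontier, best = [start], start
    for _ in range(depth):
        children = [c for s in frontier for c in expand(s)]
        if not children:
            break
        best = max([best] + children, key=acc_score)
        beam1 = heapq.nlargest(widths[0], children, key=heuristic)
        beam2 = heapq.nlargest(widths[1], children, key=acc_score)
        frontier = list(dict.fromkeys(beam1 + beam2))  # merge beams, drop duplicates
    return best
```

With real board states, `expand` would apply the 22 possible drops and `acc_score` would sum game points with a per-turn decay.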

Before the last day, my heuristic evaluation of the board was very complicated. It was hard to make it efficient.
In the end, I kept just the same-color neighbor counts, the skull count, and the tallest and shortest columns. When a move does not score anything, the evaluation of the new board can be updated from the old value, so the evaluation does not cost much time.
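
For the same-color-neighbor part of the evaluation, the incremental update can be sketched like this (a toy illustration, not the author's code: boards are lists of rows, `None` marks an empty cell):

```python
def count_pairs(grid):
    """Full scan: count same-color adjacent pairs (the eval base value)."""
    n = 0
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell is None:
                continue
            if c + 1 < len(row) and row[c + 1] == cell:   # right neighbor
                n += 1
            if r + 1 < len(grid) and grid[r + 1][c] == cell:  # down neighbor
                n += 1
    return n

def apply_drop(grid, cells, old_value):
    """Incremental update for a drop that clears nothing: only the two new
    cells can create pairs, so adjust the old value instead of rescanning.
    cells: list of ((row, col), color) for the freshly placed pair."""
    for (r, c), color in cells:
        grid[r][c] = color
    delta = 0
    for (r, c), color in cells:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc] == color:
                delta += 1
    # a same-color pair between the two new cells was counted from both sides
    (r1, c1), col1 = cells[0]
    (r2, c2), col2 = cells[1]
    if col1 == col2 and abs(r1 - r2) + abs(c1 - c2) == 1:
        delta -= 1
    return old_value + delta
```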

I give some extra points to same-color neighbors in the vertical direction, because they stay stable when the base collapses, and a high tower might save me when I'm covered by a lot of skulls.

Another important system is pace control, which tries to predict the next fighting turn. I did not polish this part very well and used some hard-coded rules.

  1. The first fight might happen after 10 turns.
  2. The following strike might come after 5 turns.
  3. If the tree search finds a big strike from my enemy within the following three turns, I'll speed up the pace.
  4. If the tree search cannot find any efficient moves before the predicted fighting turn, I'll postpone the prediction.
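
The four rules above might look something like this in code (my paraphrase of the post; the turn numbers are the post's, the function shape is mine):

```python
def predict_fight_turn(turn, last_fight_turn, enemy_strike_soon, have_move_ready):
    """Hard-coded pace control: first fight ~turn 10, later strikes ~5 turns
    apart, accelerate if the enemy threatens within 3 turns, postpone if no
    efficient move is ready by the predicted turn."""
    predicted = 10 if last_fight_turn is None else last_fight_turn + 5
    if enemy_strike_soon:              # big enemy combo found by tree search
        predicted = min(predicted, turn + 1)
    elif not have_move_ready and predicted <= turn:
        predicted += 1                 # nothing efficient yet: push it back
    return predicted
```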

I’m swinging a big axe to fight against some heavy armored warriors, and try to overpower them by strike power.
If my enemy plays with a dagger and cuts off my moves with little strikes, it might be a painful game for me. But it's very hard for the enemy to keep landing little strikes on me continuously. Once he fails, I'll punish him with my soaring anger.

I do not use English in my daily work, so this might be a painful read for some. :slight_smile:

10 Likes

Mine was quite easy.

  1. Bruteforce for 4 turns, with one randomly chosen rotation case limited to 3 turns. Dropping that one rotation case made it possible to check all the others for 4 turns.

  2. Calculate placeScore and gameScore. gameScore comes from the game mechanics: it is the points. placeScore was easy: for 2 blocks of the same color, 2 points; for 3, 4 points; for 4 or more, minus 99 points. This gave me the turns with a good enough game situation.

  3. If the game score was higher than the target, I went for it. If less, I made the move with the best placeScore.

  4. The target by default was two lines of skulls. For the first 4 turns after dropping two lines, I try to drop another single line. If the situation is critical for me, I try to kill any blocks. If it is critical for the opponent, the target is one line of skulls. A critical situation is just a count of empty blocks: if it is less than 6*4, the target is to drop one line of skulls.
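
For illustration, the placeScore described in step 2 can be computed with a flood fill over same-color groups. This is a sketch under my own board representation (a grid of rows, `None` for empty), not the poster's code:

```python
def place_score(grid):
    """Toy placeScore: +2 for a same-color group of 2, +4 for 3,
    -99 for 4+ (a 4+ group should already have been cleared)."""
    seen, score = set(), 0
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None or (r, c) in seen:
                continue
            # flood fill the same-color group starting at (r, c)
            stack, group = [(r, c)], set()
            while stack:
                rr, cc = stack.pop()
                if (rr, cc) in group:
                    continue
                group.add((rr, cc))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = rr + dr, cc + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == grid[r][c]:
                        stack.append((nr, nc))
            seen |= group
            score += {1: 0, 2: 2, 3: 4}.get(len(group), -99)
    return score
```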

It got me 93rd place.
My evaluation function for place score was so simple that I should have focused on it first. Instead, I tried to follow the opponent's situation, but finally decided not to submit that part, since it gave not-so-good results :frowning:

Congratulations to the winner, it was very interesting :slight_smile:

1 Like

Feedback… This contest was nice. The game itself was nicely thought out, and I never got to the point where I would have appreciated being given info about the opponent's nuisance, so: simple input, simple output, perfect.

The tactics involved quite a lot of bruteforce, both in the early game and (from what I could read) in the late game. So this game felt really penalizing for slower languages.
I actually switched from python3 to C++ mid-contest for this reason (and I went from sometimes timing-out with only one simulation to simulating all moves to depth 2, with the same poorly-optimized algorithm).

In the end I settled for checking the first 7 valid moves (left to right, all rotations) up to depth 3, evaluating a static heuristic (rewarding large groups of up to 3 and groups of 5 or more, and penalizing dropping a ball of the wrong color near a large group).
This tactic was surprisingly decent (able to beat robossnik in about 20% of games). I suspect the fact that it tended to privilege moves towards the left side of the board helped the tactic a lot. I also tried prioritizing moves with my static evaluation function, but the end result was much worse.

End ranking: 375

2 Likes

14th here

I had very similar stuff to Magus because we talk a lot, but I stuck with Monte Carlo search. I was just picking 8 moves at random and evaluating them: 25-50k simulations of 8 turns in 100 ms.

One problem when you're doing MC vs MCTS is being careful when your board is almost full. If your board is almost full, the chance of randomly picking 8 moves that don't kill you is pretty bad, so you have to pick from moves that are actually possible. One option would be to look at the board at every step, add the possible moves to a list and pick from it, but that seemed expensive to me. Instead, I had a function that only told me whether a legal move existed; it usually returned instantly after looking at the first two squares. If a move was possible, I would pick moves at random in a do-while loop. This let me keep my performance when a lot of space was available and still do a lot better in low-space conditions.

If you have a 1/4 chance of picking a valid move, your chance of picking 8 valid moves in a row at random is 1/(4^8) = 1/65536. With the while loop, the problem becomes only linearly bad, with an average of 4*8 = 32 steps to find a valid series of 8 moves.
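
A sketch of that existence-check-plus-rejection-sampling idea. I use column heights instead of a full board, and I assume rotations 1/3 are vertical and 0/2 horizontal (my reading of the game's move encoding, not the poster's code):

```python
import random

def move_is_legal(heights, col, rot, width=6, rows=12):
    """A vertical drop (rot 1 or 3) needs 2 free cells in one column;
    a horizontal drop (rot 0 or 2) needs 1 free cell in two adjacent columns."""
    if rot in (1, 3):
        return heights[col] <= rows - 2
    other = col + 1 if rot == 0 else col - 1
    return 0 <= other < width and heights[col] < rows and heights[other] < rows

def random_valid_move(heights, width=6, rows=12):
    """Rejection-sample a legal (column, rotation) pair.
    The caller must first verify that at least one legal move exists,
    otherwise this loop never terminates."""
    while True:
        col, rot = random.randrange(width), random.randrange(4)
        if move_is_legal(heights, col, rot, width, rows):
            return col, rot
```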

But this is kind of a detail, for the goodies look at Magus’ post.

Feedback: I think the contest wasn't as interesting as the previous one. It was very bruteforcy; as such, the first person using an interpreted language was 47th, in Python. Also, I don't think your choice of making the simplest game info, like score and nuisance points, only accessible by recoding the game engine was a good one (and even then it was still a guess). I like the league system, but I think the slight decrease in players compared to the last contest shows something; I expected an increase from the player-base inflation alone.

7 Likes

It’s funny.

After many tries, I ended up with the same evaluation values for the number of balls in the groups (2 for 2 balls and 6 for 3 balls). I could not find better values. But I also had a score of -1 for lonely balls.

I thought about your other evaluations (for balls touching empty cells, and so on), but I didn't have the time to implement them properly and find good weights.

1 Like

Who used Monte Carlo? Can you explain in detail how?

Just choose a random move for the first turn, a random rotation, and repeat for some depth?

Then again, again and again?

I used a genetic algorithm in javascript.

I could compute around 3-4k moves per 100 ms, with a depth between 6 and 8.

Final position: 73.

As an addition to the 'usual' evolutionary algorithm, I also added a kill function to remove 'unhealthy' genomes (scoring too low, doing invalid moves, etc.) and improve the overall quality of the population in a shorter time frame. An added bonus: if the whole existing population dies, it means the AI is in a critical situation and needs to regenerate a new population with a smaller genome (fewer steps), so that it can try more steps in the same time frame to get out of the critical situation.

The best 10% of the turn's population was stored and used as a seed for the next iteration (after shifting the genome by one).

I gave a score for potential damage (1 point for a 1-block group, 4 points for a 2-block group, 20 points for a 3-block group), multiplied by 1.5 if the group was accessible (1 empty space nearby).
More points were added based on the height of the layout and the number of skulls destroyed.

The total fitness of a genotype was the total score (real points + potential + height bonus + destroyed-skull bonus) divided by (1 + round).

This way, 600 points now are better than 800 next turn, 1400 next turn are better than 600 now, and 2000 at turn 4 are better than 2200 at turn 5.
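
The fitness discount can be written as a one-liner (assuming rounds are numbered from 0, which is my assumption):

```python
def fitness(score, round_reached):
    """Time-discounted fitness: total score divided by (1 + round),
    so the same score is worth more the sooner it arrives."""
    return score / (1 + round_reached)
```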

I also tried enemy prediction (20 ms for the enemy simulation), but the result underperformed compared to the default one, so I had to revert.

Using JS, I was very short on computational power (my late-contest C++ code is much faster, but I didn't have the time to complete it), but I'm happy with the result and had a lot of fun in this contest!

6 Likes

For the first 3 days, I used an exhaustive 4-level exploration (once for the max points, once for the max heuristic).

Then, I switched to a weighted Monte Carlo tree favoring central positions and the children's max points. I forgot to weight the children's points by depth, so it was not very effective :smiley:

On the last days, I added an attack / defense system:

  • Exhaustive 3-level exploration of P2's board
  • Attack was just selecting the first 3-skull-line combo, or waiting for a bigger one if there was no danger
  • Defense was trying to play a lower combo, if it could disturb a bigger P2 combo for more than 3 turns

[Edit] Yesterday, I tried to rewrite it completely to simulate a turn based strategy game with a real MCTS, but it never reached the RC state ^^

4 Likes

Hello ^^ First thing: thanks, it was fun :slight_smile:

  • League system => nice ^^ It has to be improved of course, but for a first time it was managed well ^^ (at the beginning, the step between Bronze and Silver was too high, but after the mid-week change it was much better ^^)

  • Game itself => cool, but I think the scoring must change when you put it on multiplayer, or it will end up with lots of people doing only tie fights, both finding the best combo fast… I think just making it "harder" to one-shot the other would help avoid that… (or maybe use 15-20 rows…)

=> My strat => Until Legend opened: full scan of 2 turns, for me and the opponent, to find the best combo at each turn, and the best move for me at turn 1 to combo this turn, the next, or later… choosing carefully ;p That was enough for Legend…

(but it was also determining "blocking" combo moves, the best strategy concerning colors, a "danger" value based on the space available for both of us, skulls already on the board… etc…)

When Legend opened, I began to check later turns: taking my x best moves, computing the 22 next moves for each, selecting my x best again from these x*22 moves, and so on… I don't like random ;p

But with the long weekend, I didn't have time to do it properly and test how far I could raise x to improve results without timing out… I found some particular cases where my choice was really bad, but they are hard to reproduce against an opponent using random ;p (and the opponent has to be "not too good" to let me reach that position ;p)

I pushed with x = 15, crossed my fingers, and finished only 126th… It's hard to wait until I can fix it and test it in multiplayer ^^

2 Likes

Hi,

#31 here. I didn't find inspiration during the contest, but here's my strategy anyway:

First, I simulate every combination of pieces the opponent can play at depth = 3 (it takes between 1 and 4 ms) and select the best one with the formula: 5*turn0 + 3*turn1 + 1*turn2. I save this sequence of 3 pieces for later.

Like a lot of other contestants, I chose Monte Carlo to select the optimal play.
I tried to implement MCTS but I had performance issues (it requires a lot of board copies, and I had timeout issues related to the JVM) so I settled for a full random Monte Carlo (eww).

In order to limit the size of the tree to explore, my AI decides beforehand whether it's worth triggering an explosion this very turn. If the sum of the three opponent plays that I previously selected (with a factor of 1 for every piece) is less than (my board's empty cells / 4) * 70, then my AI removes every piece that triggers an explosion from the possible plays for the first turn. It won't even check whether those pieces could trigger a BIG combo or not; it just ignores them…
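
The filter condition, as I read it (70 being the nuisance cost of one cell; the names are mine, not the poster's):

```python
def should_allow_trigger(opponent_play_scores, my_empty_cells):
    """Allow explosion-triggering first moves only when the opponent's
    predicted 3-play score sum reaches a quarter of my empty cells * 70;
    below that, triggering now is considered premature and those moves
    are pruned from the first turn."""
    return sum(opponent_play_scores) >= my_empty_cells / 4 * 70
```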

I then generate and simulate as many sequences of 8 pieces as I can until I reach 100 ms.

For the evaluation of the sequences, my AI computes a dynamic patience ratio. It's based on the opponent score sequence I selected at the beginning, and basically scales with it:
I start with a ratio of 10, and each turn I reduce it by (enemy_score_at_turn_X / score_that_kills_me). After turn 3 (I have no more vision of the enemy score at this point), the ratio is reduced by 2 each turn. This dynamic ratio allowed my AI to favor combos that occur before 'the enemy potential shitstorm'.
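
A sketch of that ratio schedule (the function shape and names are mine; the post only gives the rules):

```python
def patience_ratio(enemy_scores, lethal_score, horizon=8):
    """Start at 10; for the first 3 turns (where we have vision) subtract
    enemy_score / score_that_kills_me, then subtract a flat 2 per turn.
    A lower ratio at turn t means combos landing before t are favoured."""
    ratios, r = [], 10.0
    for t in range(horizon):
        ratios.append(r)
        r -= enemy_scores[t] / lethal_score if t < len(enemy_scores) else 2.0
    return ratios
```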

I also keep track of the best sequences I get and reuse them for the next turn (I just add 1 more random piece at the end). I was able to simulate between 100k and 300k single-piece plays in 100 ms.

Nothing too fancy, but a decent, simple AI.

5 Likes

Hi, this challenge was very fun, good job!

My strategy is built on many, many parameters, and in my last submit I changed just one, which turned out not to be a good idea. The main questions were how to place colors to maximize the point potential, and when to attack the opponent.

Deep exploration here is only necessary to build the color chains, because deciding the attack strategy with deep search would have made information collection very complicated, due to the rules of the game and the lack of data given about the opponent's actions and game status. So the attack decision depends only on the point potential of my current turn and the enemy's next turn.

Also, I think board evaluation here is very important, because potential color chains may depend on every move, even those made at the beginning. The main criteria are to keep an open game that maintains a lot of possible moves, which includes maintaining space, playing in the center of the board, avoiding closing off a color when there are only one or two of them, and, if possible, verifying the chain connections, which consists in evaluating the potential groups of four or more. The other reason I think evaluation is very important is that every move can close off opportunities far into the future, even at the very end of the game.

So the key here is knowing whether a move is closing the game or opening it, and whether it is building 3-color groups connected to potential groups of 4 or more.

  1. For the first part of the game, I used DFS, only on the centered columns:
  • Number of turns for this first part: 8
  • Depth: 4
  • First column: 1
  • Last column + 1: 5 (so the last column is 4)
  • Evaluation: f1 * points + f2 * cumul_points + f3 * eval_position, with f1, f2, f3 = 0, 1.2, 1
  2. I tried to go deeper than 4 with a random rule, based on a probability decreasing with depth, to prune some branches in the exploration, but it was not very efficient, so I didn't keep it (I didn't have time to learn and implement MCTS):
  • Probability to explore bad moves: 200 / (depth * 1000)
  • Probability to explore good moves: 2000 / (depth * 100)

I thought that randomly eliminating moves was not a good idea for me, for many reasons:

  • Every single move can potentially be a good move for a future destruction combo
  • Every move is strongly different from the others, and can be very good or very bad in the far future
  • There are not a lot of possible moves (22), so eliminating one is very risky.
  • Deep evaluation is mainly present in the game construction itself (like the destruction chain), so time is better spent evaluating every early move than gaining depth on some random future move.
  • Tree search algorithms are not the best skill I can improve in limited time :slight_smile:
  3. For the second part of the game (after the 8th round), I used DFS too, but on all columns:
  • Depth: 3
  • Evaluation: same as before (see the next parts for details)
  4. Each turn, I looked at the best enemy move in points at depth 1 and depth 2.

  5. Each turn, I looked at my best move in points and decided whether to strike based on some criteria:

  • Normal strike: points > 2300
  • Skull strike (if there are too many skulls): points > 420 if at least 40 skulls
  • Space strike (if there is no more space): points > 420 if fewer than 30 free cells are left
  • Fatal strike (chance to kill the enemy) if points >= (enemy space + 1) * 70: on / off
  • Defense strike: if enemy points > 2300 this move or the next, and my points > 700
  6. To build the game with chains of color:
  • Keep a free cell next to colors, with bonuses: +10 (1 color alone), +50 (2 same colors), +150 (3 same colors)
    and maluses if there is no free cell: -5 (1 color alone), -2 (2 same), -1 (3 same)
  • Multiply this bonus/malus by a factor depending on the number and proximity of the same colors in the A/B colors queue
  • Add a bonus for groups of 3 same colors, to avoid bad moves destroying only 4 balls: +150
    To compact same colors together:
  • Align colors on a specific column: malus of -4 * (color - column) if color - column > 1
    (the factor of -8 here in my last submit was the error, because I only tested it in the IDE, and -4 was my best-balanced value during all my submits :(()
  • Add a malus for height: -2 * height
  • Malus if a color is not in the center: -30 * distance from center
  • Malus if same colors are in columns too far apart: -2 * width when width > 2
  • Verify combos, comparing the 3 colors above each group of 3 with the bottom, left and right colors: +50 for each different color found making a group of 4 or more if the group of 3 is destroyed
  7. Other stuff:
  • Malus for killing skulls, based on the skull's height: -10 * height * height
  • Malus to preserve space: -100 when space < 24
  • and many other things I finally didn't keep…
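
The strike criteria above ("Normal strike" through "Defense strike") could be sketched as a single decision function. The thresholds are the post's values; the names, signature and the order of the checks are my own guesses:

```python
def should_strike(points, enemy_points_soon, my_skulls, my_free_cells,
                  enemy_free_cells):
    """Return the strike type to fire this turn, or None to keep building.
    70 points per cell converts points into skulls sent."""
    if points >= (enemy_free_cells + 1) * 70:   # enough to fill their board
        return "fatal"
    if points > 2300:
        return "normal"
    if points > 420 and my_skulls >= 40:        # too many skulls on my side
        return "skull"
    if points > 420 and my_free_cells < 30:     # running out of space
        return "space"
    if enemy_points_soon > 2300 and points > 700:
        return "defense"
    return None
```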

So, the last 2 days were about… how to combine all that? That was my big problem… :smile:
My last error was changing one parameter at 7:56 PM in my last submission, although I had found a nice balance during all these days. Too curious in the last 4 minutes (rank 63).

To improve my parameter choices, I think I should have simulated them with a local simulator, because the color construction is quite independent of the play strategy against the opponent. I didn't get the idea in time, but I now think it is the best thing to do in this type of game.

This challenge was very interesting. The league system is a very good idea. Thanks to all for your good job.

3 Likes

Great contest, again!
I was surprised how well gauged the league system was, at least from the middle of the week.
On the other hand, it was really a need-for-speed contest: only 3 players with interpreted languages made the top 100.

And since performance is always discriminating, here is my proposition.
Would it be possible to use Numba to accelerate Python?
It's a great LLVM-based JIT; it works as a simple Python module and… "a single function decorator results in a 1300x speedup of simple Python code". Incredible as it sounds, have a look at it.

Another option would be a standard Cython addition, available in many distros. In short, it's a well-known compiler for Python that gives you C-like performance.

Either of these should not be difficult to install, and would give real benefits to those who would like to use an "easy" language for the contest.

3 Likes

For the people interested, here is a write-up of how I approached the contest.

As for the exact heuristics used in the “Board Quality”, they were very similar to those previously described by ranarama69.

16 Likes

Hi,
I really appreciated the league system, it was way faster to be ranked after submission :+1:

For my strategy (16th, 1st in C#):

Pure branch & bound for exploration with good metrics, no random: not that many nodes explored (~20k), but efficient. I memorized the tree each turn and took into account the most likely opponent move. I also simplified the tree for special cases and forgot a big one; I only realized it on Monday, along with the bug that cost me 40 ms each turn ^^. Can't wait for the multi game to be published so I can fix this xD

1 Like

I used bruteforce for the first 3 hints and then a genetic algorithm for hints 4-8, choosing the best moves at each depth. The same strategy was used for inspecting the enemy grid. Once you have the numbers, it comes down to deciding how greedy you should be about getting more points and postponing your attack, at the risk of being attacked by the enemy and possibly losing your whole board advantage.

I ignored potential enemy attacks of one line of skulls, focusing only on the 2+ ones. I also disregarded deeper solutions with more points when they weren't worth it: e.g., getting 450 points at step 6 is not as good as getting 400 points at step 3, but getting 3000 points, on the other hand, may be worth it.
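
One hypothetical way to encode that trade-off is a per-step decay. The 0.8 factor below is purely illustrative, not from the post; it just happens to reproduce the examples given:

```python
def discounted(points, step, decay=0.8):
    """Value a score by when it lands: fewer steps to the same score is
    worth more, but a huge combo can still outweigh a small early one."""
    return points * decay ** step
```

With decay = 0.8, 400 points at step 3 beats 450 at step 6, while 3000 at step 6 still wins.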

When detecting a potentially dangerous attack of 2 skull lines or more, I switched from patience mode to counter-attack mode, where I tried to score enough points to add at least one line of skulls to the enemy's grid in fewer steps than the enemy's attack would take.

Unfortunately I finished my main algorithm in the last day of the contest and didn’t have time to play with the attack/wait parameters.

Final rank 251 - just 4 spots away from the Legend league :frowning:

1 Like

On the Coders Strike Back contest there was a Report button on the LeaderBoard page that let you see the code you submitted for the contest.

Is there a way to see my submitted code for this contest?

You should have received an email with exactly the same kind of report, including your final submission… at least I did, on May 9th.

Go to: https://www.codingame.com/leaderboards/global/challenge/smash-the-code/

Then replace :
/leaderboards/global/challenge/coders-strike-back/XXX -> /challengereport/XXX

Thanks that worked great.