Spring Challenge 2020 - Feedback & Strategy

Ended up in mid Silver. I spent most of my time trying to grind out of Bronze because of the ranking fluctuation; just beating the boss didn't feel like enough, but eventually I got through.

The vast majority of that time was spent debugging, especially making sure a pac with speed moves two spaces instead of one, while also registering that the pellet in between has been eaten.

The three major parts involved are as follows:

Defensive switching:

I had considered using a different tactic depending on how many pacs are spawned (with 2-3 pacs, win by killing them; with 4-5 pacs it's not worth the time, so just devour pellets), but I ended up with a simple rule of "oh, there's a scissors within 5 squares of me, so I should become rock," and that took care of most situations.
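In code, that rule could look something like this minimal sketch; Pac, beats() and the Manhattan-distance stand-in (a real bot would use BFS distance through the maze) are all assumptions, not the original code:

#include <cstdlib>
#include <string>

struct Pac { int x, y; std::string type; };  // type: ROCK / PAPER / SCISSORS

// the type that beats the given one
std::string beats(const std::string& type) {
    if (type == "ROCK") return "PAPER";
    if (type == "PAPER") return "SCISSORS";
    return "ROCK";
}

// stand-in distance: a real bot would use BFS distance through the maze
int dist(const Pac& a, const Pac& b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// "there's a scissors within 5 squares of me, so I should become rock"
bool shouldSwitch(const Pac& me, const Pac& enemy, int cooldown, std::string& newType) {
    newType = beats(enemy.type);
    return cooldown == 0 && dist(me, enemy) <= 5 && me.type != newType;
}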

A Pellet Map:

Like others mentioned, a map of pellets can be built over all the empty spaces (since we get that nice grid to work with), and whenever we see a pellet, we update it on the map. If we run over a square, or see an enemy pac run over a square, remove that pellet. If we or they have speed, remove all pellets one square around them after doing the calculations. If I'd had more motivation, I'd have implemented line-of-sight checking too, since the game doesn't tell us outright which visible cells no longer hold pellets; we have to deduce it. But I didn't bother with that.
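In code, that bookkeeping could look something like this minimal sketch, assuming a row-major grid where ' ' marks floor and '#' marks wall. PelletMap and its methods are illustrative names, and clearing every neighbor is a coarse stand-in for the "after doing calculations" part:

#include <string>
#include <vector>

// pellet knowledge per cell: 1 = assumed present, 10 = super pellet, 0 = known gone
struct PelletMap {
    int width, height;
    std::vector<int> value;  // 1D, row-major

    explicit PelletMap(const std::vector<std::string>& rows)
        : width(rows[0].size()), height(rows.size()), value(width * height, 0) {
        for (int y = 0; y < height; y++)     // every floor cell starts
            for (int x = 0; x < width; x++)  // with an assumed pellet
                if (rows[y][x] == ' ') value[y * width + x] = 1;
    }

    void seePellet(int x, int y, int v) { value[y * width + x] = v; }

    // we or a visible enemy ran over this square: the pellet is gone
    void clear(int x, int y) { value[y * width + x] = 0; }

    // a pac with speed may have crossed any neighboring square too
    void clearAround(int x, int y) {
        clear(x, y);
        const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) clear(nx, ny);
        }
    }
};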

The Pellet Finder

Obviously, this wasn’t the greatest as people had chess-engine-like searches, but I liked what I had done. Last time when I did this sort of thing, the first thing I controlled had priority, but that’s inefficient, especially for this, as pac 0 who was downtown would end up with priority and go for the big pellet while there’s a pac right next to it. The solution was to have a barrel of commands (since the order of what pac was commanded didn’t matter, this was easy to set up and have execute) and add to it when you find the nearest pellet. Pellets that we know exist (also easy to implement by just having the unknown ones start with a value of 0) were given a bit of priority since we know they’re there. Those with speed would try and target something at least 2 squares away, and if they had targetted something 1 square away, they were told to change their target.

Simple but effective. After a pac added its desired move to the barrel of commands (speed and switch were also commands), i.e. which pellet it targeted, the next pac was queried the same way. Previously, pacs just ignored pellets that were already targeted; now they would compare their distance to the pellet against the other pac's distance, and if closer, remove the other pac's command from the barrel and continue shuffling through pellets for the closest one before making their own command. This went on until every pac had been through once, then looped back through all unaccounted-for pacs again and again until everyone had a command, and then the commands were executed.
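A sketch of that allocation loop, under simplifying assumptions: Manhattan distance instead of maze distance, pellet targets only (no SPEED/SWITCH commands), and hypothetical names throughout:

#include <cstdlib>
#include <map>
#include <queue>
#include <utility>
#include <vector>

struct Cmd { int pacId; int pelletIdx; int dist; };

// one target per pac; pacs evict farther pacs from contested pellets
std::vector<Cmd> fillCommandBarrel(const std::vector<std::pair<int,int>>& pacs,
                                   const std::vector<std::pair<int,int>>& pellets) {
    auto dist = [](std::pair<int,int> a, std::pair<int,int> b) {
        return std::abs(a.first - b.first) + std::abs(a.second - b.second);
    };
    std::map<int, Cmd> barrel;  // pelletIdx -> command currently claiming it
    std::queue<int> todo;       // pacs that still need a command
    for (int i = 0; i < (int)pacs.size(); i++) todo.push(i);

    while (!todo.empty()) {
        int pac = todo.front(); todo.pop();
        int best = -1, bestD = 1 << 30;
        for (int p = 0; p < (int)pellets.size(); p++) {
            int d = dist(pacs[pac], pellets[p]);
            if (d >= bestD) continue;
            auto it = barrel.find(p);
            if (it != barrel.end() && it->second.dist <= d) continue;  // owner is closer
            best = p; bestD = d;
        }
        if (best < 0) continue;  // nothing left to claim
        auto it = barrel.find(best);
        if (it != barrel.end()) todo.push(it->second.pacId);  // evict the farther pac
        barrel[best] = {pac, best, bestD};
    }
    std::vector<Cmd> out;
    for (auto& kv : barrel) out.push_back(kv.second);
    return out;  // order doesn't matter, as noted above
}

Eviction only happens when a strictly closer pac claims a pellet, so each pellet's owner distance only ever decreases and the loop is guaranteed to terminate.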

Overall it was a lot of fun, even though I'm personally not a big fan of Pac-Man. I'm pumped and ready for the next challenge on the horizon!

3 Likes

Legend #106:

Wood2 to Wood1: Closest Pellet (prefer star pellets)
Wood1 to Bronze: Do the same with multiple pellets
Bronze to Silver (hardest league): Collision avoidance + SPEED + Defend from opponent
Silver to Gold: Pellet tracking, Team co-operation
Gold to Legend: Path finding, Track hidden opponent for avoiding deaths

I did not add much after reaching Legend, as nothing I added gave fruitful results. I had a simulation with custom depth: if I have 1 pac, I go up to depth 20; if I have 5 pacs, I go up to depth 12 (I only use speed at depth 0). I also assume the opponent's intention is to kill me, so I usually end up finding a safe path rather than an optimal one. In most games I would kill many opponents (even though I don't ask my bot to kill) but still lose, because they took an optimal path and ate a lot of pellets before their death.
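In code, this depth schedule could be as simple as the following sketch; only the endpoints (1 pac, depth 20; 5 pacs, depth 12) come from the post, and the linear interpolation in between is my guess:

// depth by pac count; 1 -> 20 and 5 -> 12 are from the post, the rest is interpolated
int searchDepth(int myPacs) {
    return 22 - 2 * myPacs;
}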

2 Likes

Finished #29. I’ll try to not make this post redundant with the 80 other posts above it :blush:

Small suggestions

Let me start with a couple of small feedback points for CG

  • Starter packs: this was my first contest with no starter pack. It didn't really matter for me, but I do think starter packs can be a huge help for new coders, as they give them a template to organize their code. That helps them avoid ending up with a big mess and lets them work more on their ideas, while also learning a bit about code organization!
  • Getting input files: an easy way to reproduce and debug a game seen on CG (when both bots are deterministic) is to grab the input file and feed it to your bot in your own IDE. Right now, to get the input, I add a bunch of prints to stderr for all inputs, then copy the output and strip the extra information. When I want to reproduce the end of several games, this gets pretty annoying. A few easy suggestions (any one of them would be great):
    • Just have an option to download the input file as a text file, that would be awesome!
    • Have an option to download exactly what was printed to the console, without the alterations added for new turns
    • Have an option to only show what was printed to the console, without the alterations added for new turns, so that it is at least very easy to copy-paste it without random numbers and/or text in the middle

Now, as for my bot itself, I'll focus on my algorithm and on inferring some of the incomplete information, as I think everything else I could say has already been covered.

Pathfinding algo

  • Start by running a DFS for all of my pacs that do not have a move assigned yet. Depth was 12 to 15 depending on my number of pacs alive. I did not allow visiting the same cell twice, except after reaching a dead end or a super pellet, in which case I just erased the visited history to make all cells accessible again (see the sketch after this list)
  • Keep the one with the highest score and assign it a move based on its best path
  • Propagate the effect of its movements to a copy of my game state. For example, if it is supposed to grab a pellet next turn, completely discount that pellet; if it is supposed to grab one in 10 turns, still discount it, but much less
  • Repeat
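A simplified sketch of this DFS, under stated assumptions: a toy character grid, a flat 0.9 discount per step, and by-value sets for clarity rather than speed; the real eval was surely richer:

#include <algorithm>
#include <set>
#include <string>
#include <utility>
#include <vector>

// toy maze: '#' wall, '.' pellet, 'O' super pellet, ' ' empty floor
std::vector<std::string> grid;

int pelletAt(int x, int y) {
    return grid[y][x] == 'O' ? 10 : grid[y][x] == '.' ? 1 : 0;
}

std::vector<std::pair<int, int>> neighbors(int x, int y) {
    std::vector<std::pair<int, int>> out;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int d = 0; d < 4; d++) {
        int nx = x + dx[d], ny = y + dy[d];
        if (ny >= 0 && ny < (int)grid.size() && nx >= 0 &&
            nx < (int)grid[ny].size() && grid[ny][nx] != '#')
            out.push_back({nx, ny});
    }
    return out;
}

// best discounted pellet haul from (x, y) in `depth` more moves
double bestPathScore(int x, int y, int depth, double discount,
                     std::set<std::pair<int, int>> visited,
                     std::set<std::pair<int, int>> eaten) {
    visited.insert({x, y});
    double gain = 0;
    if (!eaten.count({x, y})) {
        gain = pelletAt(x, y) * discount;  // later pellets count for less
        eaten.insert({x, y});
    }
    if (pelletAt(x, y) == 10)
        visited = {{x, y}};                // super pellet: erase visit history
    if (depth == 0) return gain;

    std::vector<std::pair<int, int>> next;
    for (auto& n : neighbors(x, y))
        if (!visited.count(n)) next.push_back(n);
    if (next.empty()) {                    // dead end: erase history, walk back out
        visited = {{x, y}};
        next = neighbors(x, y);
    }
    double best = 0;
    for (auto& n : next)
        best = std::max(best, bestPathScore(n.first, n.second, depth - 1,
                                            discount * 0.9, visited, eaten));
    return gain + best;
}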

Picking the pac with the highest score at each step meant I could not find optimal combinations of paths where none of the individual paths was optimal on its own. However, rerunning the DFS for all non-assigned pacs after each assignment meant they still worked pretty well together. The other downside was that I had to run about n²/2 DFS passes (n + (n-1) + … + 1, where n is the number of pacs), just because the eval changed with each new assignment. I did not find a way to avoid this though (if anyone sees a way, please let me know!)

Regarding abilities, I initially switched if I knew an enemy pac could try to kill me next turn, but I then realized that at the top of the leaderboard no one ever does that. I even saw one player who always assumed no one would try to kill his pacs if their cooldown was 0, and who would actually use the speed boost right next to one of my stronger pacs… Anyway, at that point I removed switching altogether, and since I did not implement any sort of attacking for sure kills, or preemptive defense against those, I just never switched.

I also saw a few people mention they only used their speed ability on (odd, odd) cells. I think this is slightly too aggressive, and I had another approach: since I have an estimated path of more than 10 cells, I just check whether I ever step on the same cell during moves 0 and 2, 2 and 4, 4 and 6, 6 and 8, or 8 and 10, and if not, I can go ahead and use speed now without wasting one of its uses.
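As a sketch, with path[i] being the cell I expect to occupy after move i:

#include <utility>
#include <vector>

// true if none of the speed turns (moves 0-2, 2-4, ..., 8-10) would end
// on the cell where it started, i.e. no use of the boost would be wasted
bool speedIsWorthItNow(const std::vector<std::pair<int,int>>& path) {
    if (path.size() < 11) return false;  // need the estimate up to move 10
    for (int i = 0; i <= 8; i += 2)
        if (path[i] == path[i + 2]) return false;
    return true;
}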

Inferring hidden information

This is what I spent most of my contest on. Unlike others, I don't find incomplete-information games annoying; on the contrary, it is quite fun to come up with more and more subtle ways to squeeze an extra sliver of information out. There is one caveat though: this is fun only if more information actually gives you more edge. Here, unfortunately, after a certain point more information about pellets completely stopped helping. I think this was due to the nature of this game being more random. Thinking about it, I believe this randomness (and therefore the leaderboard randomness) comes from the impossibility of always avoiding "unlucky" encounters across a corner, where you just die without being able to avoid it, so any decent bot can easily win a couple of games against #1. This was an unfortunate feature of the game, but in my opinion it was impossible for the creators to have predicted, so this is not a reproach in any way, just a fact.

Anyway, what made me jump to the top 15 was the following relatively easy feature: whenever I see an opponent, or see that a pellet I knew about just disappeared (usually super pellets) and could have been eaten by only one opponent, I enumerate all possible paths this opponent pac could have taken since its last known position. I then remove paths that cross a cell where I know there currently still is a pellet, and I intersect the remaining paths to get all cells this enemy pac MUST have visited. This very often leaves only one possible path.
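Here is a minimal sketch of that inference, ignoring speed and the option to stand still for brevity (maze and pellet knowledge are toy globals, and all names are mine):

#include <set>
#include <string>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;

std::vector<std::string> maze;  // '#' = wall
std::set<Cell> knownPellets;    // cells we know still hold a pellet right now

std::vector<Cell> around(Cell c) {
    std::vector<Cell> out;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int d = 0; d < 4; d++) {
        int x = c.first + dx[d], y = c.second + dy[d];
        if (y >= 0 && y < (int)maze.size() && x >= 0 &&
            x < (int)maze[y].size() && maze[y][x] != '#')
            out.push_back({x, y});
    }
    return out;
}

// enumerate every walk of exactly `movesLeft` steps ending at `goal`,
// skipping cells that still hold a pellet (the pac can't have crossed those)
void collectPaths(Cell cur, Cell goal, int movesLeft, std::vector<Cell>& path,
                  std::vector<std::vector<Cell>>& out) {
    if (knownPellets.count(cur)) return;
    path.push_back(cur);
    if (movesLeft == 0) {
        if (cur == goal) out.push_back(path);
    } else {
        for (Cell n : around(cur)) collectPaths(n, goal, movesLeft - 1, path, out);
    }
    path.pop_back();
}

// cells the enemy MUST have visited = intersection of all surviving paths
std::set<Cell> mustHaveVisited(Cell lastKnown, Cell seenAt, int turnsElapsed) {
    std::vector<std::vector<Cell>> paths;
    std::vector<Cell> tmp;
    collectPaths(lastKnown, seenAt, turnsElapsed, tmp, paths);
    if (paths.empty()) return {};
    std::set<Cell> common(paths[0].begin(), paths[0].end());
    for (size_t i = 1; i < paths.size(); i++) {
        std::set<Cell> cells(paths[i].begin(), paths[i].end()), inter;
        for (const Cell& c : common)
            if (cells.count(c)) inter.insert(c);
        common = inter;
    }
    return common;
}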

I then decided I could do better, and worked on adding memory of all cells at all times to remove even more paths. This is actually the exact same algorithm MSmits described in detail, so I won't repeat what he wrote, but like him I took a while to finish it and it yielded no results, which was extremely frustrating. Retrospectively, we both should have acknowledged the randomness sooner, realized that, given it, getting 90 or 92% of the information would not change anything, and moved on to mitigating the randomness with better enemy tracking, or to coding other features (my bot didn't even have sure-kill detection, which would have been helpful both for defense and offense, or super pellet allocation: I just hoped the furthest super pellet was within my DFS depth). While we wasted our time on this feature, we both slowly drifted down the leaderboard as others worked on more useful things :frowning:

Conclusion

I found this contest very entertaining, and I did learn a lot as well! A big thanks to the whole CG team for organizing it, and I am looking forward to the next one! :slight_smile:

11 Likes

Here is my PM: https://github.com/Saelyos/Spring-Challenge-2020 :slight_smile:

Don’t hesitate to ask for any detail.

28 Likes

I find the starter “files” enough to get started. What else would be included in that pack?

I think the only input of the game is the seed. Everything else is generated by the game engine.

For me, a good way to debug was to print all of stdin to my stderr. Then, with the great tutorial https://www.codingame.com/playgrounds/53705/contest-tools-and-workflow/introduction from @eulerscheZahl, you can easily retrieve your stderr, replay games and debug them. Personally, it was my first contest in C++ and I made great use of this to understand where my pointers/local variables went wrong in the global algorithm. In C++ I would have something like this:
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

// when replaying locally, feed a saved stderr dump back into cin
bool isIde = false;
if (isIde) {
    string myID = "1114834";
    ifstream* inputFile = new ifstream("resources/467005582-" + myID + "-stderr.txt");
    cin.rdbuf(inputFile->rdbuf());
}

3 Likes

Hi zogstrip,

Yes, the files are definitely a good start, but starter packs could include more structure, such as using classes. This is very helpful when you don't have much experience coding or using classes. Here are examples with the starter packs provided for UTG and COFI.

You can’t reproduce a battle against an opponent for which you don’t have the code you using it though. An example I run into very often is looking at my last arena battles, I notice one of my pacs becoming crazy and I want to dig more. I think the only way to reproduce that locally is to somehow get the input file.
K4ngOu pointed out you can use the CG API (which I did not know about; that's great for reproducing battles against non-deterministic opponents), but if I understand it properly it still relies on reading the output from stderr, so I still think that a way to very easily get the input file, or at least copy the stderr, would be very helpful :slight_smile:

2 Likes

@Zuabros

I’m gonna weigh in here too. The thing is, a simulation is just a way to see what happens when you make a certain action. That is necessary for any bot. Even a bot that focuses on heuristics is going to have some simulation. Even a pathfinding algorithm is a simulation, because you simulate where you end up when you do a certain move.

What you are referring to is mostly (I think) a search. Just because an algorithm is a search, it doesn’t mean you can call it a brute force algorithm or that you can say “it tries to test all solutions”. There are many assumptions and simplifications going into a search. You need to prune moves (= heuristics) to get deep enough to get a meaningful result. Good searches are actually quite intelligent and nothing like brute force in almost every case in a CG arena or contest.

Have you ever coded a simple minimax or MCTS and then tried to submit it to a CG arena? If you haven't, then the discussion is over, because you don't know what you are talking about. If you have, then you have noticed that the simple versions aren't enough to get to the top. You need far more than just the basic search algorithm in almost every case. That extra part is the intelligence and strategy you code into your bot.

Go look at some AI research papers involving games; just use Google. You will see that the things you find are similar to the things we do here, especially when it comes to the simpler board games. Without search algorithms, this field would be almost non-existent.

8 Likes

Ultimately, what you aim for with any search is to cover all scenarios and make your bot undefeatable. So he is right about the brute force aspect; it just seems to have a negative connotation for some unknown reason… The bot doesn't understand what it does, therefore it cannot connect the dots like a human, and neither does it have anything that resembles intuition. Maybe there's a lot more it would need to be called "true AI". But even the ML stuff doesn't count as "true AI", so w/e.

I don’t think anyone should take offense, just because someone points out the real goal of using a search. You go through all possible options to find the best. The fact that there are various techniques to reduce the amount of options you need to go through is a different topic. It’s the best approach to win and that’s it. No need to sugar coat it.

The real problem is @Zuabros didn’t say what he actually wants. He probably doesn’t know what he wants either.

This takes the discussion nowhere. He was obviously referring to AGI.

1 Like

Why did you choose Chokudai search? How does it compare to the standard beam search?

1 Like

I disagree. He is not right about the brute force aspect. The term brute force is used too loosely here.

Let's say we use this definition: a brute force algorithm systematically enumerates all possible candidates for the solution and checks each one.

We could use this definition to claim a simple minimax is a brute force algorithm, but as soon as you use alpha-beta pruning, it's no longer brute force.

Many searches explained in this PM thread are even more complicated than that, with improvements that make them more efficient at a small cost in accuracy. They are not brute force at all. I think using the term here diminishes the creativity people showed in solving the search problem for this game. It's impressive how many different solutions were found. Brute force algorithms don't require much thinking, but these surely did!

3 Likes

Beam Search has a width as a parameter, while Chokudai Search has a depth.
If we alter the evaluation function and the processing cost increases, Beam Search will decrease the depth and Chokudai Search will decrease the width.
I use Chokudai Search because I prefer fixed depth.

I don’t really understand the difference in performance between the two algorithms.
The developer of this algorithm says that Beam Search tends to select only nodes with similar states (because they have similar evaluations), whereas Chokudai Search seems to solve this to some extent.
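For reference, a bare-bones Chokudai search skeleton; State, eval and children() are placeholders, not anyone's contest code. Each sweep advances the single best node of every depth level by one step, so more time in the budget effectively buys more width at a fixed depth:

#include <functional>
#include <queue>
#include <vector>

struct State {
    double eval;  // heuristic value of this node
    // placeholder expansion: a real bot would generate successor states here
    std::vector<State> children() const { return {}; }
    bool operator<(const State& o) const { return eval < o.eval; }
};

State chokudaiSearch(const State& root, int maxDepth, std::function<bool()> timeLeft) {
    // one priority queue per depth; note that all levels stay in memory
    std::vector<std::priority_queue<State>> level(maxDepth + 1);
    level[0].push(root);
    while (timeLeft()) {
        for (int d = 0; d < maxDepth; d++) {  // one sweep, width 1 per level
            if (level[d].empty()) continue;
            State s = level[d].top();
            level[d].pop();
            for (const State& c : s.children()) level[d + 1].push(c);
        }
    }
    return level[maxDepth].empty() ? root : level[maxDepth].top();
}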

4 Likes

My strategy

I wrote a heavily search-based bot in C++ from the start. However, because it did not do any significant pruning, its search depth was very limited.

Using a search-based bot was not so effective here because of how little information it often had to work with. Sure, if you saw an enemy, and on the next turn you can see that two of the three positions it could have moved to are empty, you know where it is now. But make the number of elapsed turns greater than one, and you will quite often have almost no clue where they actually are. The fog of war was certainly a motivation-killer for me.

Additionally, relying on exhaustive search is not so great in the late game when there are very few pellets left: you need either to prune enough to reach the pellets through search, or to have a dedicated pathfinding algorithm that does nothing but find paths to distant pellets.

Feedback

Hopefully, by the next contest I will have learned to spend less time micro-optimizing the memory layout of my search nodes and making sure I never use less than 90% of the search time, and to instead focus on not exhausting several levels of subtrees that contain nothing but sadness.

As zasmu said above, "the fact that two different approaches could both yield very good results makes me think that this contest is a success." I agree with this statement; allowing different "macro" strategies to be viable is great. Also, the graphics of this contest felt really nice to look at.

I am hopeful that the next contest will be one without incomplete information (this seems to be a common desire in this thread) and be open for longer (I am probably the only one that wants this).

It's complicated. One way of looking at it is that the people at the top are usually fairly experienced. This also means they can choose between several languages.

I think you can implement your ideas faster (though not that much faster) in Python or Julia, maybe even in C#, and this seems to be a common opinion. However, C++ will usually compile to a faster executable, which lets you explore more states or run more heuristic tests in the same amount of time.

Therefore, you can reach a given rank earlier than others by writing your bot in a language you can develop faster in; however, in a week-long contest, this "delivery speed" ends up not mattering much, and it may make sense to develop in C++ all the way to avoid a rewrite in the last few days.

Interesting. My interpretation of beam search is that it also has a fixed depth, but if you want to stop at an arbitrary time limit, you have to choose the width dynamically at each level based on the remaining time.

Chokudai search achieves this anytime property without this complication, so that in itself could be an advantage.

On the other hand, Chokudai search requires keeping all levels in memory, while for beam search it's sufficient to hold only two consecutive levels. And beam search doesn't have the extra log n factor, because it can use the O(n) std::nth_element algorithm instead of a priority queue.
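The O(n) pruning step of a beam search level could look like this (Node is a placeholder):

#include <algorithm>
#include <vector>

struct Node { double eval; /* game state */ };

// keep only the `width` best nodes of a level, in O(n) rather than O(n log n)
void pruneLevel(std::vector<Node>& level, size_t width) {
    if (level.size() <= width) return;
    std::nth_element(level.begin(), level.begin() + width, level.end(),
                     [](const Node& a, const Node& b) { return a.eval > b.eval; });
    level.resize(width);
}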

Anyway, that’s another tool for the toolbox, thanks for the link!

3 Likes

I got interested in seeing how much randomness is introduced into games through accidental kills. So I downloaded replays from the gold and legend leagues and looked at the circumstances of each kill. When neither player had vision of the other, I classified the kill as blind, that is, accidental rather than planned. I also separately counted kills due to entrapment, i.e. those that happened in a dead-end tunnel.


In the gold league, about 60% of the kills were blind, with a very small difference in trap kills when the enemy was visible. On maps with more units, blind kills happened slightly less frequently, probably due to more of the map being visible.


In the legend league, there were more blind kills than in gold, and the number of units on the map made an even larger difference. Also, fewer kills happened in the open, especially with vision (+50% trap kills vs. gold), probably due to many bots either chasing enemies or opportunistically killing units in dead ends.

But do these kills help win games? It's likely complicated, as it probably depends on the timing of the kills (e.g. early vs. late game). I ignored such nuances and just looked at the distribution of the total number of kills performed by winners and losers. I did this separately for 4 categories of map sizes (based on map area) and unit counts.


In the gold league (left), the map size didn't matter much. Losers finished ~40% more games without kills, and when they did kill, they did it in smaller numbers. Especially in games with 2 units, a single kill seemed to lead to a win. With more units, the difference was smaller: with 5 units, 2 kills were needed to make a difference. Still, not killing resulted in more losses.

In the legend league (right), it was pretty much the same, with slightly fewer kills overall, especially on the smallest maps with more than 4 units. Probably, again, due to better vision there, and improved death avoidance compared to gold.

22 Likes

Nice infographics!

How do you “download” them? By the way, thanks for these graphics :slight_smile:

https://www.codingame.com/playgrounds/53705/contest-tools-and-workflow/the-codingame-api

2 Likes

I used the map as a 1D array for efficiency, with encode/decode functions to convert between 2D coordinates and 1D indices. It wasn't much of a problem.
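For illustration, the two helpers can be as small as this (assuming row-major layout and a known grid width):

#include <utility>

int encode(int x, int y, int width) { return y * width + x; }

std::pair<int, int> decode(int idx, int width) {
    return {idx % width, idx / width};
}

// e.g. with width 31, encode(3, 2, 31) == 65 and decode(65, 31) == {3, 2}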

2 Likes