Spring Challenge 2020 - Feedback & Strategy

Hi zogstrip,

Yes, files are definitely a good start, but starters could include more structure, such as using classes. This is very helpful when you don’t have much experience coding or using classes. Here are examples with the starter packs provided for UTG and COFI.

You can’t reproduce a battle against an opponent whose code you don’t have, though. An example I run into very often: looking at my last arena battles, I notice one of my pacs going crazy and I want to dig deeper. I think the only way to reproduce that locally is to somehow get the input file.
K4ngOu pointed out you can use the CG API (which I did not know about; that’s great for reproducing battles against non-deterministic opponents), but if I understand it properly it still relies on reading the output from stderr. So I still think that a way to very easily get the input file, or at least copy the stderr, would be very helpful :slight_smile:

2 Likes

@Zuabros

I’m gonna weigh in here too. The thing is, a simulation is just a way to see what happens when you take a certain action. That is necessary for any bot. Even a bot that focuses on heuristics is going to have some simulation. Even a pathfinding algorithm is a simulation, because you simulate where you end up when you make a certain move.

What you are referring to is mostly (I think) a search. Just because an algorithm is a search doesn’t mean you can call it a brute force algorithm or say that “it tries to test all solutions”. There are many assumptions and simplifications going into a search. You need to prune moves (= heuristics) to get deep enough for a meaningful result. Good searches are actually quite intelligent and, in almost every case in a CG arena or contest, nothing like brute force.

Have you ever coded a simple minimax or MCTS and then tried to submit it to a CG arena? If you haven’t, then the discussion is over, because you don’t know what you are talking about. If you have, then you have noticed that the simple versions aren’t enough to get to the top. You need far more than just the basic search algorithm in almost every case. That extra part is the intelligence and strategy you code into your bot.

Go look at some AI research papers involving games, just use Google. You will see that the things you find are similar to the things we do here, especially when it comes to the simpler board games. Without search algorithms, this field would be almost non-existent.

8 Likes

Ultimately what you aim for with any search is to cover all scenarios and make your bot undefeatable. So he is right about the brute force aspect; it just seems to have a negative connotation for some unknown reason… The bot doesn’t understand what it does, therefore it cannot connect the dots like a human, and neither does it have anything that resembles intuition. Maybe there’s a lot more it needs to have to be called “true AI”. But even the ML stuff doesn’t count as “true AI”, so w/e.

I don’t think anyone should take offense just because someone points out the real goal of using a search. You go through all possible options to find the best one. The fact that there are various techniques to reduce the number of options you need to go through is a different topic. It’s the best approach to win and that’s it. No need to sugar-coat it.

The real problem is @Zuabros didn’t say what he actually wants. He probably doesn’t know what he wants either.

This takes the discussion nowhere. He was obviously referring to AGI.

1 Like

Why did you choose Chokudai search? How does it compare to the standard beam search?

1 Like

I disagree. He is not right about the brute force aspect. The term brute force is used too loosely here.

Let’s say we use this definition:

We could use this definition to claim a simple minimax is a brute force algorithm, but as soon as you use alpha-beta pruning, it’s no longer brute force.
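To make the distinction concrete, here is a minimal alpha-beta sketch (GameState, legalMoves, apply and evaluate below are toy placeholders, not anyone’s contest code). Without the cutoff it is plain minimax visiting every node; the alpha >= beta check skips whole branches that provably cannot change the result:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Toy placeholders: a real bot would use its actual game state and move generation.
struct GameState { int value = 0; };
std::vector<int> legalMoves(const GameState&) { return {0, 1, 2}; }
GameState apply(const GameState& s, int m) { GameState next = s; next.value += m; return next; }
double evaluate(const GameState& s) { return s.value; }

// Plain minimax would visit every node down to `depth`; the alpha/beta cutoff
// skips branches that cannot change the final result.
double alphaBeta(const GameState& s, int depth, double alpha, double beta, bool maximizing) {
    if (depth == 0) return evaluate(s);
    if (maximizing) {
        double best = -std::numeric_limits<double>::infinity();
        for (int m : legalMoves(s)) {
            best = std::max(best, alphaBeta(apply(s, m), depth - 1, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (alpha >= beta) break;   // prune: the opponent will never allow this line
        }
        return best;
    } else {
        double best = std::numeric_limits<double>::infinity();
        for (int m : legalMoves(s)) {
            best = std::min(best, alphaBeta(apply(s, m), depth - 1, alpha, beta, true));
            beta = std::min(beta, best);
            if (alpha >= beta) break;   // prune
        }
        return best;
    }
}
```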

Many of the searches explained in this post-mortem thread are even more complicated than that, with improvements that make them more efficient at a small cost in accuracy. They are not brute force at all. I think using the term here diminishes the creativity people showed in solving the problem of searching this game. It’s impressive how many different solutions have been found. Brute force algorithms don’t require much thinking, but these surely did!

3 Likes

Beam Search takes a width as its parameter, while Chokudai Search takes a depth.
If we alter the evaluation function and the processing cost increases, Beam Search will lose depth and Chokudai Search will lose width.
I use Chokudai Search because I prefer a fixed depth.

I don’t really understand the difference in performance between the two algorithms.
The developer of this algorithm says that Beam Search tends to select only nodes with similar states (because they have similar evaluations), whereas Chokudai Search seems to solve this to some extent.
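For anyone unfamiliar with it, here is a rough sketch of the idea as described above (Node, expand and the toy scoring are assumptions for illustration, not the author’s actual code): one priority queue per depth, and each sweep advances the single best node at every depth one level deeper, so repeating sweeps until the time limit gradually widens the search:

```cpp
#include <chrono>
#include <queue>
#include <vector>

struct Node {
    double score = 0.0;
    // ... game state would go here ...
    bool operator<(const Node& o) const { return score < o.score; }  // max-heap by score
};

// Toy expansion for illustration: a real bot would apply the legal moves of the game.
std::vector<Node> expand(const Node& n) {
    return { {n.score + 1.0}, {n.score - 0.5}, {n.score + 0.3} };
}

Node chokudaiSearch(const Node& root, int maxDepth, std::chrono::milliseconds budget) {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + budget;

    std::vector<std::priority_queue<Node>> q(maxDepth + 1);  // one queue per depth
    q[0].push(root);
    Node best = root;

    // Each sweep pushes the single best node of every depth one level deeper
    // (width 1); repeating sweeps until the deadline widens the search over time.
    while (clock::now() < deadline) {
        for (int d = 0; d < maxDepth; ++d) {
            if (q[d].empty()) continue;
            Node n = q[d].top();
            q[d].pop();
            for (const Node& child : expand(n)) {
                q[d + 1].push(child);
                if (child.score > best.score) best = child;
            }
        }
    }
    return best;
}
```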

4 Likes

My strategy

I wrote a heavily search-based bot in C++ from the start. However, because my bot did not do any significant pruning, it had a very limited search depth.

Using a search-based bot was not so effective here because of how little information it often had to work with. Surely, if you saw an enemy and in the next turn you can see that two of the three positions it could have gone to are empty, you know where it is now. However, make the number of elapsed turns greater than one and you will quite often have almost no clue as to where they actually are. The fog of war was certainly a motivation-killer for me.

Additionally, relying on exhaustive search is not so great in the late game when there are very few pellets left, as you will need to either prune enough to reach the pellets through search or have a dedicated path-finding algorithm that does nothing other than find paths to distant pellets.
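As an illustration of that second option, such a dedicated helper can be as simple as the sketch below (the Tile struct and the names are illustrative assumptions, not my actual contest code): a plain BFS that returns the first step of a shortest path to the nearest remaining pellet.

```cpp
#include <queue>
#include <utility>
#include <vector>

struct Tile { bool wall = false; bool pellet = false; };

// Returns the (x, y) of the first move toward the closest pellet,
// or the start itself if no pellet is reachable.
std::pair<int, int> firstStepToNearestPellet(const std::vector<std::vector<Tile>>& grid,
                                             int sx, int sy) {
    const int h = grid.size(), w = grid[0].size();
    std::vector<std::vector<std::pair<int, int>>> parent(
        h, std::vector<std::pair<int, int>>(w, {-1, -1}));
    std::queue<std::pair<int, int>> q;
    q.emplace(sx, sy);
    parent[sy][sx] = {sx, sy};

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        if (grid[y][x].pellet) {
            // Walk the parent chain back to the cell adjacent to the start.
            while (parent[y][x] != std::make_pair(sx, sy)) {
                auto [px, py] = parent[y][x];
                x = px; y = py;
            }
            return {x, y};
        }
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (grid[ny][nx].wall || parent[ny][nx].first != -1) continue;
            parent[ny][nx] = {x, y};
            q.emplace(nx, ny);
        }
    }
    return {sx, sy};
}
```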

Feedback

Hopefully, by the next contest I will have learned to spend less time micro-optimizing the memory layout of the search nodes and making sure I am never using less than 90% of the search time, and instead focus on not exhausting several levels of subtrees that contain nothing but sadness.

As zasmu said above, “the fact that two different approaches could both yield very good results makes me think that this contest is a success.” I agree with this statement; allowing different “macro” strategies to be viable is great. Also, the graphics of this contest were really nice to look at.

I am hopeful that the next contest will be one without incomplete information (this seems to be a common desire in this thread) and will be open for longer (I am probably the only one who wants this).

It’s complicated. One way of looking at it is that the people at the top are usually fairly experienced. This also means they can choose between several languages.

I think you can implement your ideas faster (but not that much faster) in Python or Julia, maybe even in C#, and this seems to be a common opinion. However, C++ will usually compile to a faster executable, which allows you to explore more states or run more heuristic tests in the same amount of time.

Therefore, you can reach a given rank earlier than others by writing your bot in a language that you can develop faster in; however, in a week-long contest, this “delivery speed” ends up not mattering much, and it may make sense to develop in C++ all the way to avoid a rewrite in the last few days.

Interesting. My interpretation of beam search is that it also has a fixed depth, but if you want to stop at an arbitrary time limit, you have to choose the width dynamically at each level based on the remaining time.

Chokudai search achieves this anytime property without this complication, so that in itself could be an advantage.

On the other hand, Chokudai search requires keeping all levels in memory, while for beam search it’s sufficient to hold only two consecutive levels. And beam search doesn’t have the extra log n factor, because it can use the O(n) std::nth_element algorithm instead of a priority queue.
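For reference, the selection step in question can look roughly like this (a small sketch; the Node struct is an assumption): keep the `width` best candidates with a single O(n) partition instead of a full sort or a heap.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Node { double score = 0.0; /* game state */ };

// Keep only the `width` best candidates for the next beam level. Order within
// the beam does not matter, so std::nth_element's O(n) partition is enough and
// the extra log factor of a sort or priority queue is avoided.
void pruneToBeam(std::vector<Node>& candidates, std::size_t width) {
    if (candidates.size() <= width) return;
    std::nth_element(candidates.begin(), candidates.begin() + width, candidates.end(),
                     [](const Node& a, const Node& b) { return a.score > b.score; });
    candidates.resize(width);
}
```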

Anyway, that’s another tool for the toolbox, thanks for the link!

3 Likes

I got interested in seeing how much randomness is introduced into the games through accidental kills. So I downloaded replays from the gold and legend leagues and looked at the circumstances of each kill. When neither of the players had vision of the other, I assumed the kill was blind, that is, accidental rather than planned. I also counted kills due to entrapment separately, that is, those that happened in a dead-end tunnel.


In the gold league, about 60% of the kills were blind, with a very small difference in trap kills when the enemy was visible. On maps with more units, blind kills happened slightly less frequently, probably because more of the map was visible.


In the legend league, there were more blind kills than in gold, and the number of units on the map made an even larger difference. Also, fewer kills happened in the open, especially with vision (+50% trap kills vs. gold), probably because many bots either chase enemies or opportunistically kill units in dead ends.

But do these kills help win games? It’s likely complicated, as it probably depends on the timing of the kills (e.g. early vs. late game). I ignored such nuances and just looked at the distribution of the total number of kills performed by winners and losers. I did this separately for 4 categories of map size (based on map area) and unit count.


In the gold league (left), the map size didn’t matter much. Losers finished ~40% more games without kills, and when they did kill, they did so in smaller numbers. Especially in games with 2 units, a single kill seemed to lead to a win. With more units, the difference was smaller. For 5 units, 2 kills were needed to make a difference. Still, not killing resulted in more losses.

In the legend league (right), it was pretty much the same, with slightly fewer kills overall, especially on the smallest maps with more than 4 units. This is probably again due to better vision there, and improved dead-end avoidance compared to gold.

22 Likes

Nice infographics!

How do you “download” them? By the way, thanks for these graphics :slight_smile:

https://www.codingame.com/playgrounds/53705/contest-tools-and-workflow/the-codingame-api

2 Likes

I used the map as a 1D array, with encode/decode functions to convert between 2D coordinates and 1D indices. It was efficient and wasn’t much of a problem.
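For anyone curious, the idea is roughly the following (a minimal sketch; the WIDTH/HEIGHT values and the neighbors helper are illustrative assumptions, not taken from the referee):

```cpp
#include <vector>

constexpr int WIDTH = 35, HEIGHT = 15;   // illustrative grid size, not from the statement

inline int encode(int x, int y) { return y * WIDTH + x; }                        // 2D -> 1D index
inline void decode(int idx, int& x, int& y) { x = idx % WIDTH; y = idx / WIDTH; }

// Neighbors in the flat array: +/-1 moves horizontally, +/-WIDTH vertically,
// so BFS and other pathfinding can run directly on the 1D indices.
std::vector<int> neighbors(int idx) {
    std::vector<int> out;
    int x, y;
    decode(idx, x, y);
    if (x > 0)          out.push_back(idx - 1);
    if (x < WIDTH - 1)  out.push_back(idx + 1);
    if (y > 0)          out.push_back(idx - WIDTH);
    if (y < HEIGHT - 1) out.push_back(idx + WIDTH);
    return out;
}
```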

2 Likes

It is not easy to pathfind etc. in a 1D array.
I probably could have managed it a lot better… I did not put much focus on it after I set it up. Looks like you used it much better than I did.

Hi everyone, this was my very first CodinGame bot competition and I was very excited to be placed within the top 50 at one point in time. I eventually did a late resubmit and finished at 400th (in hindsight I shouldn’t have done that) and have now settled at around 200th.

I had two versions of my code.
The earlier version (up till Bronze) used an information matrix to store the total value of each grid square in order to decide which squares were more attractive for my pacs to go to. This total value was a combination of information value (length of the row and column of the square -> a proxy for pellet information gain) and pellet value. The pellet value is doubled if it is visible, to reward certainty. Thereafter, the top 5 valued positions were greedily assigned to the nearest pac via Manhattan distance.

The updated version of my code used BFS instead of Manhattan distance to assign super pellets. Also, after assigning super pellets, instead of using an information matrix, I evaluated the total sum of discounted rewards for each pac using a BFS from that pac. If the path runs into another pac, it is terminated. There is a discount factor of 0.9 for the early game (50% of score), 0.8 for the mid game (70% of score) and 0.7 for the late game (80% of score), in order to encourage exploitation of immediate rewards when rewards are sparse in the end game.
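In rough C++, that evaluation looks something like the sketch below (names such as Cell and discountedReward are assumptions for illustration, simplified from the actual code):

```cpp
#include <cmath>
#include <queue>
#include <tuple>
#include <vector>

// Assumed grid cell; the real bot tracks more than this.
struct Cell { bool wall = false; bool pac = false; double pellet = 0.0; };

// Sum of pellet values reachable from (startX, startY), discounted by `gamma`
// per BFS step; expansion stops at walls and at squares occupied by other pacs.
double discountedReward(const std::vector<std::vector<Cell>>& grid,
                        int startX, int startY, double gamma) {
    const int h = grid.size(), w = grid[0].size();
    std::vector<std::vector<bool>> seen(h, std::vector<bool>(w, false));
    std::queue<std::tuple<int, int, int>> q;  // x, y, depth
    q.emplace(startX, startY, 0);
    seen[startY][startX] = true;

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    double total = 0.0;
    while (!q.empty()) {
        auto [x, y, d] = q.front();
        q.pop();
        total += grid[y][x].pellet * std::pow(gamma, d);
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (seen[ny][nx] || grid[ny][nx].wall || grid[ny][nx].pac) continue;
            seen[ny][nx] = true;
            q.emplace(nx, ny, d + 1);
        }
    }
    return total;
}
```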

I have written two blog posts on the detailed execution of my solutions as I progressed through the various leagues for those interested:

Thanks again to CodinGame for this awesome challenge and I’m looking forward to the next one :) And thanks for the awesome discussions with eulerscheZahl, MSmits and many others - they really made the competition fun.

5 Likes

Hi CowZow,
may I ask how you manage to print the coordinates above the heads of the pacs? I could only print them to the console.
Thanks!
Nick

You can add a message after a command for a pac and it will appear above that pac. Like this:
MOVE 0 10 13 message

I am now playing in the spring challenge, and out of curiosity I checked the best of the best, the leaderboard of legend players, and I am so disappointed by the techniques they use. From what I observed, I can say that my code is even better than what they have. They don’t even change their pacman type against the opponent. I am now in the bronze league, and it is already hard to break out of it; out of 3000 people, I am at 462.
And I am using such sophisticated methods just to move 200 to 300 places forward, whereas comparing my pacs against that best league, my pacs behave more intelligently. They change type if an opponent is too close, they explore the map if there are no pellets in their surroundings, and they spread out evenly so that one pac does not walk in the area of another pac. They also adjust the size of their exploration areas: if one of my pacs dies, the other pacs take a bigger area to explore.
But looking at the best of the best, they have simpler logic… I could say that, while walking through the map, they only remember pellets where the route spreads in 3 directions.
I know this is a problem of the CodinGame site: the designers did not include comparisons against the best of the best from a lower league, or all against all…
But also, last year I noticed that it was harder to get past the wood 3 league than the silver league; there it was very easy for me to get a higher place. I am wondering how competitive the legend league would be if everybody started from the beginning, all together. What would the leaderboard look like then?