Improve an AI and evaluate it

Hi,

I wrote an AI for a multiplayer game and it is fighting in the arena. However, I wrote another one, and it is hard to tell whether it will be better. I tried some matches against players from the leaderboard, but a few manual matches do not approximate the score it would reach after many matches. In addition, running tests manually takes a long time. So it could be a good improvement to have two AIs in the arena at the same time: only the best one would count for the score, while the other one would just fight to get an idea of its real score.
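
To give an idea of the problem, here is a minimal sketch (Python, with a made-up `play_match` referee standing in for the real game) showing why a handful of manual matches is not enough: two bots of nearly equal strength need many games before the win rate separates from noise.

```python
import random

# Toy stand-in for the real referee: bot A wins with probability
# proportional to its strength. Replace with your actual match runner.
def play_match(strength_a, strength_b):
    return 'A' if random.random() < strength_a / (strength_a + strength_b) else 'B'

def compare(strength_a, strength_b, n_matches=1000):
    wins_a = sum(play_match(strength_a, strength_b) == 'A'
                 for _ in range(n_matches))
    rate = wins_a / n_matches
    # Rough 95% confidence interval for the win rate (normal approximation).
    half_width = 1.96 * (rate * (1 - rate) / n_matches) ** 0.5
    print(f"A wins {rate:.1%} +/- {half_width:.1%} over {n_matches} matches")

compare(1.1, 1.0)
```

With 10 matches the confidence interval is roughly +/- 30 percentage points, so you cannot tell two close bots apart; with 1000 matches it shrinks to about +/- 3.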

Bye.


I believe you can already test your current AI against the version of your AI that is in the arena. This is very handy.

That is what I am doing, but it is not automated, and it often does not help me understand what will work better against someone who does not play like me. What would be very cool is the ability to configure a sequence of automated tests and obtain statistics about the AI from them (mean score, approximate ranking, …). What would be even cooler (and it also applies to other kinds of games) would be the possibility to do machine learning, i.e. to let the program remember previous games. It could then tune its parameters and build more advanced AIs. For example, the possibility to train a neural network on a game, like it is done for MarI/O (https://www.youtube.com/watch?v=qv6UVOQ0F44), would be fantastic (even if it does not find the best solution, it is really impressive :slight_smile: )
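
For the statistics part, here is a sketch of what such an automated test sequence could report. The `matches` data is made up, and `estimate_rating` fits an Elo-style rating to results against opponents of known ratings; this is my own illustration, not an existing feature:

```python
def expected_score(rating_a, rating_b):
    # Standard Elo expected score of A against B.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def estimate_rating(results, lo=0.0, hi=4000.0, iters=60):
    """Bisect for the rating whose expected total score matches the
    observed one. `results` is a list of (opponent_rating, score),
    score being 1 for a win, 0.5 for a draw, 0 for a loss."""
    observed = sum(score for _, score in results)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        expected = sum(expected_score(mid, r) for r, _ in results)
        if expected < observed:
            lo = mid  # our bot scored more than this rating predicts
        else:
            hi = mid
    return (lo + hi) / 2.0

# Made-up results of an automated test sequence against leaderboard bots.
matches = [(1500, 1), (1600, 0), (1400, 1), (1550, 0.5), (1700, 0)]
print(f"mean score: {sum(s for _, s in matches) / len(matches):.2f}")
print(f"approximate ranking (Elo): {estimate_rating(matches):.0f}")
```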
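
And on the parameter-tuning side, even without full neural networks, something as simple as a hill climber over the AI's weights would already be useful. A sketch, where `win_rate` is a hypothetical hook into the match runner and the game itself is replaced by a toy objective:

```python
import random

def strength(params):
    # Toy hidden objective: parameters close to 0.7 play "better".
    return -sum((x - 0.7) ** 2 for x in params)

def win_rate(a, b, n=200):
    # Fraction of n noisy matches that parameter vector a wins against b.
    wins = sum(strength(a) + random.gauss(0, 0.05) >
               strength(b) + random.gauss(0, 0.05) for _ in range(n))
    return wins / n

def hill_climb(dim=4, steps=100, step_size=0.1):
    best = [random.random() for _ in range(dim)]
    for _ in range(steps):
        candidate = [x + random.gauss(0, step_size) for x in best]
        if win_rate(candidate, best) > 0.55:  # keep only a clear improvement
            best = candidate
    return best

print(hill_climb())
```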
