AI for Avalon
https://arxiv.org/pdf/1906.02330.pdf
This paper discusses the implementation and evaluation of an AI for the game Avalon, a hidden-role game in which players try to deduce who the good and bad players are within a fixed number of rounds. Currently, the AI relies only on players' actions and ignores all chat messages. I found it fascinating that the AI achieves such a high winrate without taking into account anything people say. When I play, I weigh what people say heavily, but the results suggest that actions matter far more. I wonder if the AI could be improved with a signed graph over players, where a positive edge represents one player backing up another and a negative edge represents an accusation.
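The signed-graph idea above could be sketched roughly as follows. This is purely my own hypothetical illustration, not anything from the paper: each endorsement adds a +1 edge, each accusation a -1 edge, and a player's net incoming weight serves as a crude trust score.

```python
from collections import defaultdict

class TrustGraph:
    """Hypothetical signed graph of player interactions:
    +1 edge when one player backs another up, -1 when one
    player accuses another. Not part of the paper's AI."""

    def __init__(self):
        # edges[a][b] accumulates the signed weight from speaker a to target b
        self.edges = defaultdict(lambda: defaultdict(int))

    def back_up(self, speaker, target):
        self.edges[speaker][target] += 1   # positive edge: endorsement

    def accuse(self, speaker, target):
        self.edges[speaker][target] -= 1   # negative edge: accusation

    def trust_score(self, player):
        # net incoming weight: a crude proxy for how trusted a player is
        return sum(weights[player] for weights in self.edges.values())

g = TrustGraph()
g.back_up("Alice", "Bob")
g.back_up("Carol", "Bob")
g.accuse("Dave", "Bob")
print(g.trust_score("Bob"))  # 2 endorsements - 1 accusation = 1
```

A score like this could then be fed into the AI as an extra feature alongside the action history it already uses.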
In the results, it’s noted that when matched against only humans, the AI performed better both as a teammate and as an opponent. This suggests the AI's strategy is an evolutionarily stable strategy (ESS): within a largely human population, it thrives and has a higher expected payoff (its winrate). It is also interesting that when multiple copies of the same AI are in a game together, the AI's winrate over humans rises further. I speculate this is because it is hard for the AI to adjust to the less-than-ideal actions that humans choose. Unlike chess, where you can exploit an opponent's poor moves, a weak teammate makes a cooperative game harder.
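The ESS claim can be made precise with the standard condition: a strategy S is evolutionarily stable against a mutant T if E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T). A minimal sketch, using made-up winrates purely for illustration (not numbers from the paper):

```python
def is_ess(payoff, s, t):
    """Standard ESS test for resident strategy s against mutant t.
    payoff[(x, y)] = expected payoff to a player of type x
    when the rest of the population plays type y."""
    if payoff[(s, s)] > payoff[(t, s)]:
        return True  # mutants do strictly worse against the resident
    # tie against residents: s must beat t in mutant-vs-mutant play
    return payoff[(s, s)] == payoff[(t, s)] and payoff[(s, t)] > payoff[(t, t)]

# Hypothetical winrates, not taken from the paper:
payoff = {
    ("AI", "AI"): 0.55, ("human", "AI"): 0.45,
    ("AI", "human"): 0.60, ("human", "human"): 0.50,
}
print(is_ess(payoff, "AI", "human"))  # True: humans cannot invade an AI population
```

Under these assumed numbers the AI does strictly better against its own population than a human invader would, which is exactly the sense in which a higher winrate both as teammate and opponent points toward stability.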