


An Application of Game Theory in Machine Learning

Source: “GANGs: Generative Adversarial Network Games”, authored by Frans A. Oliehoek, Rahul Savani, Jose Gallego-Posada, Elise van der Pol, Edwin D. de Jong, Roderich Gross.

Generative Adversarial Networks (GANs) are a machine learning framework consisting of two competing neural networks: a generator (G) and a discriminator (D). G tries to trick D into classifying its fake data as real, while D tries to correctly classify real data as real and fake data from G as fake. This framework has grown greatly in popularity recently across various applications, for example learning distributions of images such as faces or handwritten digits. However, classic GANs suffer from a variety of training problems. One of these is that when training via gradient descent, a popular optimization method for neural networks, one can get stuck in a local Nash equilibrium.
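The convergence trouble can be seen without any neural networks at all. A standard textbook illustration (not from the paper, but a minimal sketch of the same phenomenon) is the bilinear game f(x, y) = x·y, where one player descends on x and the other ascends on y. The unique saddle point is (0, 0), yet simultaneous gradient steps spiral away from it:

```python
# Simultaneous gradient descent-ascent on the bilinear game f(x, y) = x * y.
# The unique saddle point is (0, 0), but the coupled updates
#   x <- x - lr * y   (the minimizing, generator-like player)
#   y <- y + lr * x   (the maximizing, discriminator-like player)
# scale the squared distance from the saddle point by (1 + lr**2)
# at every step, so the iterates rotate and spiral outward.
def gda(x, y, lr=0.1, steps=100):
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

x0, y0 = 1.0, 1.0
x, y = gda(x0, y0)
# (x, y) ends up farther from the origin than (x0, y0).
```

Even this two-parameter "GAN" fails to converge under naive simultaneous gradient play, which motivates looking at the problem through a game-theoretic lens instead.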

To address this local-equilibrium problem, this paper, “GANGs: Generative Adversarial Network Games”, introduces Generative Adversarial Network Games (GANGs), which do not suffer from it. As the authors put it, “GANGs formulate adversarial networks as finite zero-sum games, and the solutions that we try to find are saddle points in mixed strategies.” A zero-sum game is one in which the interacting players’ aggregate gains and losses sum to zero. By modeling the GAN framework as a finite zero-sum game, any local Nash equilibrium in the space of mixed strategies is also a global one. These are all classical game theory concepts that we have discussed in class, applied here to the interesting field of machine learning. However, finding exact best responses is often intractable because of the massive number of pure strategies that the possible neural networks induce. The paper therefore introduces Resource-Bounded Best-Responses (RBBRs) and the corresponding Resource-Bounded Nash Equilibrium (RB-NE), which approximate the exact best responses.

There are many additional technical details I have not covered and entire portions of the paper I have left out, but in essence, using classical game theory, some of which we have discussed in class, the authors augment the classic GAN to form the GANG. They train the model using the concept of bounded rationality to reduce the size of the action space, draw on a rich set of methods for solving zero-sum games, and achieve promising results. Empirically, GANGs deal well with typical GAN training problems such as mode collapse, partial mode coverage, and forgetting.
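The bounded-rationality idea behind RBBRs can be sketched in a few lines. This toy example is my own (the payoff array and budget are invented for illustration): instead of scanning an intractably large pure-strategy space for the exact best response, a resource-bounded player evaluates only as many candidates as its budget allows and keeps the best one found:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a huge pure-strategy space: 10_000 "strategies",
# each with a (made-up) payoff against the opponent's current mixture.
payoffs = rng.normal(size=10_000)

# Exact best response: scan every strategy (intractable for real GANGs,
# where each pure strategy is a full neural network).
exact_br = int(payoffs.argmax())

def resource_bounded_br(payoffs, budget, rng):
    # Evaluate only `budget` randomly chosen strategies and keep the best.
    # This mirrors the RBBR idea: the best answer found within a fixed
    # computational budget, not the global optimum.
    candidates = rng.choice(len(payoffs), size=budget, replace=False)
    return int(candidates[payoffs[candidates].argmax()])

rb_br = resource_bounded_br(payoffs, budget=100, rng=rng)
# payoffs[rb_br] <= payoffs[exact_br], but it is typically near the top.
```

An equilibrium in which no player can improve using only such budget-limited search is the paper's Resource-Bounded Nash Equilibrium.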

September 2019