DeepFP for Finding Nash Equilibrium in Continuous Action Spaces

Citation:

Nitin Kamra, Umang Gupta, Kai Wang, Fei Fang, Yan Liu, and Milind Tambe. 2019. “DeepFP for Finding Nash Equilibrium in Continuous Action Spaces.” In Conference on Decision and Game Theory for Security (GameSec).

Abstract:

Finding Nash equilibrium in continuous action spaces is a challenging problem and has applications in domains such as protecting geographic areas from potential attackers. We present DeepFP, an approximate extension of fictitious play to continuous action spaces. DeepFP represents players’ approximate best responses via generative neural networks, which are highly expressive implicit density approximators. It additionally uses a game-model network which approximates the players’ expected payoffs given their actions, and trains the networks end-to-end in a model-based learning regime. Further, DeepFP allows using domain-specific oracles if available and can hence exploit techniques such as mathematical programming to compute best responses for structured games. We demonstrate stable convergence to Nash equilibrium on several classic games and also apply DeepFP to a large forest security domain with a novel defender best response oracle. We show that DeepFP learns strategies robust to adversarial …
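
For intuition, the sketch below shows a fictitious-play loop in the spirit of DeepFP on a toy two-player game with one-dimensional continuous actions. It is a minimal illustration under stated assumptions, not the authors' implementation: the payoff functions u1 and u2, the BRNet architecture, and all hyperparameters are invented for this example, and a known differentiable payoff stands in for the learned game-model network the paper trains end-to-end.

    import random
    import torch
    import torch.nn as nn

    # Hypothetical payoffs: strictly concave in each player's own action,
    # with a unique Nash equilibrium at (x, y) = (0, 0).
    def u1(x, y): return -x**2 + x * y
    def u2(x, y): return -y**2 - x * y

    class BRNet(nn.Module):
        # Generative best-response net: maps Gaussian noise to an action,
        # i.e. an implicit density over the player's action space.
        def __init__(self):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
        def forward(self, z):
            return self.f(z).squeeze(-1)

    torch.manual_seed(0)
    br = [BRNet(), BRNet()]
    opt = [torch.optim.Adam(m.parameters(), lr=1e-2) for m in br]
    mem = [[0.0], [0.0]]  # empirical strategies: all actions played so far
    payoff = [u1, u2]

    for step in range(3000):
        for p in (0, 1):
            # Sample opponent actions from the empirical average strategy,
            # as fictitious play prescribes.
            opp = torch.tensor(random.choices(mem[1 - p], k=64))
            a = br[p](torch.randn(64, 4))
            u = payoff[p](a, opp) if p == 0 else payoff[p](opp, a)
            loss = -u.mean()  # ascend the player's own expected payoff
            opt[p].zero_grad(); loss.backward(); opt[p].step()
        # Log one sampled action per player into the empirical strategy.
        with torch.no_grad():
            for p in (0, 1):
                mem[p].append(br[p](torch.randn(1, 4)).item())

    # Both empirical averages should drift toward the equilibrium at 0.
    print([sum(m) / len(m) for m in mem])

In the full method, the gradient used to train each best-response network would flow through the learned game-model network rather than a hand-written payoff, and, per the abstract, a domain-specific oracle (e.g. a mathematical program) could replace the gradient-trained best responder for structured games.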