Playing Security Games with No Prior Knowledge

Citation:

Haifeng Xu, Long Tran-Thanh, and Nick Jennings. 2016. “Playing Security Games with No Prior Knowledge.” In International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
2016_26_teamcore_mab.pdf (792 KB)

Abstract:

This paper investigates repeated security games in which both the game payoffs and the attacker's behavior are unknown to the defender. Existing work assumes prior knowledge of either the game payoffs or the attacker's behavior, and is therefore not suitable for this problem. Given this, we propose the first efficient defender strategy, based on an adversarial online learning framework, that provably achieves good performance guarantees without any prior knowledge. In particular, we prove that our algorithm achieves low performance loss against the best fixed strategy in hindsight (i.e., the strategy chosen with full knowledge of the attacker's moves). In addition, we prove that our algorithm achieves an efficient competitive ratio against the optimal adaptive defender strategy. We also show that for zero-sum security games, our algorithm efficiently approximates a number of solution concepts, such as algorithmic equilibria and the minimax value. Finally, our extensive numerical results demonstrate that, without any prior information, our algorithm still achieves good performance compared to state-of-the-art algorithms from the security games literature, such as SUQR [19], which require a significant amount of prior knowledge.
See also: 2016
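
The abstract above casts the defender's problem as adversarial online learning with bandit-style feedback (the defender only observes the outcome of the strategy she actually plays). Purely as an illustrative sketch, and not the algorithm from the paper, the snippet below runs a standard EXP3-style no-regret update over a small set of hypothetical defender pure strategies; the payoff function, the exploration parameter gamma, and the toy drifting attacker are all assumptions made for this example.

    # Illustrative sketch only: a generic EXP3-style no-regret learner for a
    # defender with bandit feedback. The strategy set, payoff model, and
    # parameters are hypothetical, not taken from the paper.
    import math
    import random

    def exp3_defender(n_strategies, n_rounds, payoff, gamma=0.1):
        """Play n_rounds of a repeated game over n_strategies pure strategies.

        payoff(t, i) returns the utility in [0, 1] of playing strategy i in
        round t; only the payoff of the strategy actually played is observed.
        """
        weights = [1.0] * n_strategies
        total_reward = 0.0
        for t in range(n_rounds):
            # Mix exponential weights with uniform exploration.
            w_sum = sum(weights)
            probs = [(1 - gamma) * w / w_sum + gamma / n_strategies
                     for w in weights]
            # Sample a strategy and observe only its payoff (bandit feedback).
            i = random.choices(range(n_strategies), weights=probs)[0]
            r = payoff(t, i)
            total_reward += r
            # Importance-weighted update: r / probs[i] is an unbiased estimate
            # of the full (unobserved) reward of strategy i this round.
            weights[i] *= math.exp(gamma * (r / probs[i]) / n_strategies)
            # Rescale to keep the weights numerically stable.
            m = max(weights)
            weights = [w / m for w in weights]
        return total_reward

    if __name__ == "__main__":
        # Toy adversary whose most valuable target drifts every 100 rounds.
        def toy_payoff(t, i):
            return 1.0 if i == (t // 100) % 5 else 0.2
        print(exp3_defender(n_strategies=5, n_rounds=1000, payoff=toy_payoff))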