Game-theoretic Security Patrolling with Dynamic Execution Uncertainty and a Case Study on a Real Transit System

Citation:

F. M. Delle Fave, A. X. Jiang, Z. Yin, C. Zhang, M. Tambe, S. Kraus, and J. P. Sullivan. 2014. "Game-theoretic Security Patrolling with Dynamic Execution Uncertainty and a Case Study on a Real Transit System." Journal of Artificial Intelligence Research, 50, pp. 321-367.

Abstract:

Attacker-defender Stackelberg security games (SSGs) have emerged as an important research area in multi-agent systems. However, existing SSG models yield fixed, static schedules, which fail in dynamic domains where defenders face execution uncertainty, i.e., domains where defenders may face unanticipated disruptions of their schedules. A concrete example is an application involving fare checks on trains, where a defender's schedule is frequently interrupted by fare evaders, making static schedules useless. To address this shortcoming, this paper provides four main contributions. First, we present a novel general Bayesian Stackelberg game model for security resource allocation in dynamic, uncertain domains. In this new model, execution uncertainty is handled by using a Markov decision process (MDP) to generate defender policies. Second, we study the problem of computing a Stackelberg equilibrium for this game and exploit problem structure to reduce it to a polynomial-sized optimization problem. Shifting to evaluation, our third contribution shows in simulation that our MDP-based policies overcome the failures of previous SSG algorithms. This enables us to build a complete system that handles schedule interruptions and, consequently, to conduct some of the first controlled experiments on SSGs in the field. Hence, as our final contribution, we present results from a real-world experiment on Metro trains in Los Angeles, validating our MDP-based model and, most importantly, concretely measuring the benefits of SSGs for security resource allocation.
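
The MDP-based treatment of execution uncertainty can be illustrated with a small toy example. The Python sketch below is not the paper's formulation: the station count, interruption probability, and coverage rewards are entirely hypothetical, and the attacker side of the Stackelberg game is omitted. It only shows the core idea that a defender's patrol can be planned as an MDP policy, computed here by value iteration, so that the plan remains meaningful even when moves are randomly delayed.

```python
# Minimal sketch (assumed values, not from the paper): a defender patrols
# stations over discrete time steps; each "advance" move may be delayed by an
# interruption (e.g., dealing with a fare evader). States are (time, station),
# actions are "stay" or "advance". Value iteration yields a policy maximizing
# expected coverage reward under this execution uncertainty.

N_STATIONS = 4                            # hypothetical line with 4 stations
HORIZON = 6                               # patrol shift length in time steps
P_INTERRUPT = 0.3                         # assumed probability a move is disrupted
COVERAGE_REWARD = [1.0, 2.0, 1.5, 3.0]    # assumed value of reaching each station

def transitions(t, s, a):
    """Return a list of (probability, next_state) pairs for action a in state (t, s)."""
    if a == "stay":
        return [(1.0, (t + 1, s))]
    # "advance": with probability P_INTERRUPT the defender is held up and stays put
    nxt = min(s + 1, N_STATIONS - 1)
    return [(1.0 - P_INTERRUPT, (t + 1, nxt)), (P_INTERRUPT, (t + 1, s))]

def value_iteration():
    # Terminal values at the end of the shift are zero.
    V = {(HORIZON, s): 0.0 for s in range(N_STATIONS)}
    policy = {}
    for t in reversed(range(HORIZON)):
        for s in range(N_STATIONS):
            best_a, best_v = None, float("-inf")
            for a in ("stay", "advance"):
                v = sum(p * (COVERAGE_REWARD[s2] + V[(t2, s2)])
                        for p, (t2, s2) in transitions(t, s, a))
                if v > best_v:
                    best_a, best_v = a, v
            V[(t, s)] = best_v
            policy[(t, s)] = best_a
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    print("Expected coverage value from (t=0, station 0):", V[(0, 0)])
    for t in range(HORIZON):
        print("t =", t, "-> action at station 0:", policy[(t, 0)])
```

In the paper's actual model, such an MDP is embedded inside a Bayesian Stackelberg game, so the defender's policy is optimized against attacker best responses rather than against a fixed coverage reward as in this sketch.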