Given the real-world deployment of attacker-defender Stackelberg security games, robustness to deviations from expected attacker behavior has emerged as a critically important issue. This paper provides four key contributions in this context. First, it identifies a fundamentally problematic aspect of current algorithms for security games: in many situations these algorithms face multiple equilibria and arbitrarily select one, which may hand the defender a significant disadvantage, particularly if the attacker deviates from its equilibrium strategy due to unknown constraints.
Second, for important subclasses of security games, it characterizes the situations in which such multiple equilibria arise.
Third, to address these problematic situations, it presents two equilibrium refinement algorithms that optimize the defender’s utility when the attacker deviates from its equilibrium strategy. Finally, it experimentally demonstrates that the refinement approach provides significant robustness against attacker deviations caused by unknown constraints.