Stackelberg security games (SSGs) are now established as a powerful tool in security
domains. To compute the optimal strategy for the defender in the SSG model, the
defender needs to know the attacker's preferences over targets so that she can predict
how the attacker would react to a given defender strategy. Uncertainty over attacker
preferences may cause the defender to suffer significant losses. Motivated by this, my
thesis focuses on addressing uncertainty in attacker preferences using robust and
learning-based approaches.
In security domains with one-shot attacks, e.g., counter-terrorism domains, the defender is interested in robust approaches that can provide a performance guarantee in the
worst case. The first part of my thesis focuses on handling the attacker's preference uncertainty with robust approaches in these domains. My work considers a new dimension
of preference uncertainty that has not been taken into account in the previous literature,
the risk preference uncertainty of the attacker, and proposes an algorithm to efficiently
compute the defender's robust strategy against uncertain risk-aware attackers.
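The worst-case reasoning above can be illustrated as a maximin computation: the defender chooses the coverage that maximizes her utility against the least favorable of several candidate attacker risk models. The two targets, payoffs, probability-weighting family, and grid search below are purely illustrative assumptions, a minimal sketch rather than the thesis's actual algorithm:

```python
# Hypothetical 2-target instance (targets, payoffs, and risk models are
# illustrative assumptions, not the thesis's actual data or method).
rewards = [10.0, 6.0]  # attacker reward (and defender loss) if an attack succeeds

def make_model(gamma):
    # Simple probability-weighting family for risk preferences:
    # gamma < 1 overweights the success probability, gamma > 1 underweights it,
    # gamma = 1 is risk-neutral expected value.
    return lambda p_success, reward: (p_success ** gamma) * reward

models = [make_model(g) for g in (0.5, 1.0, 2.0)]

def defender_utility(coverage, target):
    # The defender loses the target's value iff the attack evades coverage.
    return -(1.0 - coverage[target]) * rewards[target]

def worst_case_utility(coverage):
    # Each candidate attacker best-responds under its own risk model; the
    # defender is scored against the least favorable of those responses.
    responses = (max(range(len(rewards)),
                     key=lambda t: value(1.0 - coverage[t], rewards[t]))
                 for value in models)
    return min(defender_utility(coverage, t) for t in responses)

# One defender resource split over two targets: grid-search the split.
best = max(((c / 100.0, 1.0 - c / 100.0) for c in range(101)),
           key=worst_case_utility)
```

Because the three risk models disagree about which target is attacked, the robust coverage hedges between the two targets rather than optimizing against any single model.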
In security domains with repeated attacks, e.g., green security domains such as the
protection of natural resources, the attacker "attacks" (illegally extracts natural resources) frequently,
so it is possible for the defender to learn the attacker's preferences from their previous actions.
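One way to sketch such learning is to fit a one-parameter quantal-response model of the attacker by maximum likelihood on an observed attack history. The coverage vector, rewards, and simulated history below are illustrative assumptions, and quantal response is one common behavioral model rather than necessarily the thesis's:

```python
import math
from collections import Counter

# Hypothetical setup: coverage, rewards, and the attack log are
# illustrative assumptions, not data from the thesis.
coverage = [0.6, 0.3, 0.1]   # defender coverage probability per target
rewards  = [8.0, 5.0, 3.0]   # attacker reward if an attack succeeds

def attack_probs(lam):
    # Quantal response: P(attack t) proportional to
    # exp(lam * expected attacker utility of t).
    utils = [(1.0 - c) * r for c, r in zip(coverage, rewards)]
    weights = [math.exp(lam * u) for u in utils]
    z = sum(weights)
    return [w / z for w in weights]

def log_likelihood(lam, counts):
    probs = attack_probs(lam)
    return sum(n * math.log(probs[t]) for t, n in counts.items())

# Simulated history: target index -> number of observed attacks.
observed = Counter({0: 16, 1: 22, 2: 12})

# Grid-search maximum-likelihood estimate of the rationality parameter.
lam_hat = max((l / 100.0 for l in range(301)),
              key=lambda l: log_likelihood(l, observed))
```

Once the behavioral parameter is estimated, the defender can re-optimize her coverage against the fitted attack distribution, and refine the estimate as more attacks are observed.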