Stackelberg games have recently gained significant attention
for resource allocation decisions in security settings. One
critical assumption of traditional Stackelberg models is that
all players are perfectly rational and that the followers perfectly observe the leader’s strategy. However, in real-world
security settings, security agencies must deal with human adversaries who may not always follow the utility-maximizing
rational strategy. Accounting for these likely deviations is
important since they may adversely affect the leader's (security agency's) utility. In fact, a number of behavioral game-theoretic models have begun to emerge for these domains.
Two such models in particular are COBRA (Combined Observability and Bounded Rationality Assumption) and BRQR
(Best Response to Quantal Response), which have both been
shown to outperform game-theoretic optimal models against
human adversaries within a security setting based on Los Angeles International Airport (LAX). Under perfect observation
conditions, BRQR has been shown to be the leading contender for addressing human adversaries. In this work we
explore these models under limited observation conditions.
Due to human anchoring biases, BRQR’s performance may
suffer under limited observation conditions. An anchoring
bias arises when, given no information about the occurrence of
a discrete set of events, humans tend to assign equal
weight to the occurrence of each event (a uniform distribution). This study makes three main contributions: (i) we
incorporate an anchoring bias into BRQR to improve its performance under limited observation; (ii) we explore
appropriate parameter settings for BRQR under limited observation; (iii) we compare BRQR's performance with COBRA's under limited observation conditions.