Publications by Year: 2016

Thanh H. Nguyen, Arunesh Sinha, and Milind Tambe. 2016. “Addressing Behavioral Uncertainty in Security Games: An Efficient Robust Strategic Solution for Defender Patrols.” In ParSocial Workshop 2016.
Stackelberg Security Games (SSG) have been widely applied for solving real-world security problems — with a significant research emphasis on modeling attackers’ behaviors to handle their bounded rationality. However, access to the real-world data used for learning an accurate behavioral model is often limited, leading to uncertainty in the modeled attacker behavior. This paper therefore focuses on addressing behavioral uncertainty in SSG with the following main contributions: 1) we present a new uncertainty game model that integrates uncertainty intervals into a behavioral model to capture behavioral uncertainty; and 2) based on this game model, we propose a novel robust algorithm that approximately computes the defender’s optimal strategy in the worst-case scenario of uncertainty. We show that our algorithm guarantees an additive bound on its solution quality.
Yasaman Dehghani Abbasi, Nicole Sintov, Milind Tambe, Cleotilde Gonzalez, Don Morrison, and Noam Ben-Asher. 2016. “Adversaries Wising Up: Modeling Heterogeneity and Dynamics of Behavior.” In International Conference on Cognitive Modeling (ICCM).
Security is an important concern worldwide. Stackelberg Security Games have been used successfully in a variety of security applications to optimally schedule limited defense resources by modeling the interaction between attackers and defenders. Prior research has suggested that it is possible to classify adversary behavior into distinct groups of adversaries based on the ways humans explore their decision alternatives. However, despite the widespread use of Stackelberg Security Games, there has been little research on how adversaries adapt to defense strategies over time (i.e., the dynamics of behavior). In this paper, we advance this work by showing how adversaries’ behavior changes as they learn the defenders’ behavior over time. Furthermore, we show how behavioral game theory models can be modified to capture learning dynamics using a Bayesian-updating modeling approach. These models perform similarly to a cognitive model known as Instance-Based Learning in predicting learning patterns.
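The Bayesian-updating idea in this abstract can be sketched at its simplest: an agent refining a belief from round-by-round feedback. The toy Beta-Bernoulli update below is illustrative only; the paper applies Bayesian updating to behavioral game theory model parameters, not to this quantity.

```python
# Simplest possible sketch of Bayesian updating: an adversary keeps a
# Beta(alpha, beta) belief about how often a target is guarded and updates
# it after each round of feedback. Purely illustrative; the paper updates
# parameters of behavioral game theory models.

def beta_update(alpha, beta, guarded_observed):
    """Conjugate Beta-Bernoulli update from one observation."""
    return (alpha + 1, beta) if guarded_observed else (alpha, beta + 1)

alpha, beta = 1, 1                      # uniform Beta(1, 1) prior
for obs in [True, True, False, True]:   # was the target guarded each round?
    alpha, beta = beta_update(alpha, beta, obs)
posterior_mean = alpha / (alpha + beta)  # adversary's learned coverage belief
```

The conjugacy is what makes round-by-round learning cheap: each observation is a constant-time counter update rather than a full refit.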
Thanh Hong Nguyen. 2016. “Combating Adversaries under Uncertainties in Real-world Security Problems: Advanced Game-theoretic Behavioral Models and Robust Algorithms”.
Security is a global concern. Real-world security problems range from domains such as the protection of ports, airports, and transportation from terrorists to protecting forests, wildlife, and fisheries from smugglers, poachers, and illegal fishermen. A key challenge in solving these security problems is that security resources are limited; not all targets can be protected all the time. Therefore, security resources must be deployed intelligently, taking into account the responses of adversaries and potential uncertainties over their types, priorities, and knowledge. Stackelberg Security Games (SSG) have drawn a significant amount of interest from security agencies by capturing the strategic interaction between security agencies and human adversaries. SSG-based decision aids are in widespread use (both nationally and internationally) for the protection of assets such as major ports in the US, airport terminals, and wildlife and fisheries. My research focuses on addressing uncertainties in SSGs — one recognized area of weakness. My thesis provides innovative techniques and significant advances in addressing these uncertainties in SSGs. First, in many security problems, human adversaries are known to be boundedly rational, and often attack targets that do not have the highest expected value. I introduce novel behavioral models of adversaries which significantly advance the state-of-the-art in capturing the adversaries’ decision making. More specifically, my new model for predicting poachers’ behavior in wildlife protection is the first game-theoretic model which takes into account key domain challenges including imperfect poaching data and complex temporal dependencies in poachers’ behavior. The superiority of my new models over the existing ones is demonstrated via extensive experiments based on the largest real-world poaching dataset, collected in a national park in Uganda over 12 years.
Second, my research also focuses on developing new robust algorithms which address uncertainties in real-world security problems. I present the first unified maximin-based robust algorithm — a single algorithm — to handle all the different types of uncertainty explored in SSGs. Furthermore, I propose a less conservative decision criterion, minimax regret, for generating new candidate defensive strategies that handle uncertainties in SSGs. In fact, minimax regret and maximin can be used in different security situations which may demand different robust criteria. I then present novel robust algorithms to compute minimax regret for addressing payoff uncertainty. A contribution of particular significance is that my work is deployed in the real world; I have deployed my robust algorithms and behavioral models in the PAWS system, which is currently being used by NGOs (Panthera and Rimba) in a conservation area in Malaysia.
Debarun Kar, Fei Fang, Francesco M. Delle Fave, Nicole Sintov, Milind Tambe, and Arnaud Lyet. 2016. “Comparing Human Behavior Models in Stackelberg Security Games: An Extended Study.” Artificial Intelligence Journal (AIJ), Elsevier.
Several competing human behavior models have been proposed to model boundedly rational adversaries in repeated Stackelberg Security Games (SSG). However, these existing models fail to address three main issues which are detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries’ past actions (“attacks on targets”), they fail to take into account adversaries’ future adaptation based on successes or failures of these past actions. Second, existing algorithms fail to learn a reliable model of the adversary unless there exists sufficient data collected by exposing enough of the attack surface — a situation that often arises in initial rounds of the repeated SSG. Third, current leading models have failed to include probability weighting functions, even though it is well known that human beings’ weighting of probability is typically nonlinear. To address these limitations of existing models, this article provides three main contributions. Our first contribution is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary’s past actions on exposed portions of the attack surface to model adversary adaptivity; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary’s lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary’s true weighting of probability. Our second contribution is a first “repeated measures study” – at least in the context of SSGs – of competing human behavior models. This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects on the Amazon Mechanical Turk platform, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP. 
Our third major contribution is to demonstrate SHARP’s superiority by conducting real-world human subjects experiments at the Bukit Barisan Selatan National Park in Indonesia against wildlife security experts.
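The nonlinear probability weighting that SHARP integrates can be illustrated with the two-parameter Gonzalez-Wu form commonly used in this line of work; the parameter values below are placeholders, not fitted values from the paper.

```python
# Illustrative two-parameter probability weighting function (Gonzalez-Wu
# form), of the kind SHARP integrates to capture nonlinear perception of
# probabilities:
#   w(p) = d * p^g / (d * p^g + (1 - p)^g)
# The parameters gamma and delta below are placeholders, not fitted values.

def weight_probability(p, gamma=0.6, delta=0.8):
    """Overweights small probabilities and underweights large ones."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

# A 5% coverage probability "feels" larger; a 95% one "feels" smaller:
low, high = weight_probability(0.05), weight_probability(0.95)
```

With gamma below 1 the curve is inverse-S shaped, which is the empirical pattern the abstract refers to as humans' "true weighting of probability".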
2016. “Conquering Adversary Behavioral Uncertainty in Security Games: An Efficient Modeling Robust based Algorithm.” In 30th AAAI Conference on Artificial Intelligence (AAAI) (Student Abstract).
Real-world deployed applications of Stackelberg Security Games (Shieh et al. 2012; Basilico, Gatti, and Amigoni 2009; Letchford and Vorobeychik 2011) have led to significant research emphasis on modeling the attacker’s bounded rationality (Yang et al. 2011; Nguyen et al. 2013). One key assumption in behavioral modeling is the availability of a significant amount of data to obtain an accurate prediction. However, in real-world security domains such as wildlife protection, this assumption may be inapplicable due to limited access to real-world data (Lemieux 2014), leading to uncertainty in the attacker’s behaviors — a key research challenge of security problems. Recent research has focused on addressing uncertainty in behavioral modeling, following two different approaches: 1) one approach assumes a known distribution of multiple attacker types, each of which follows a certain behavioral model, and attempts to solve the resulting Bayesian games (Yang et al. 2014); and 2) another considers the existence of multiple attacker types whose behavioral models are perfectly known, but without a known distribution over the types. It then only considers the worst attacker type for the defender (Brown, Haskell, and Tambe 2014). These two approaches have several limitations. First, both still require a sufficient amount of data to precisely estimate either the distribution over attacker types (the former approach) or the model parameters for each individual type (the latter approach). Second, solving the resulting Bayesian games in the former case is computationally expensive. Third, the latter approach tends to be overly conservative as it only focuses on the worst-case attacker type. This paper remedies these shortcomings of state-of-the-art approaches when addressing behavioral uncertainty in SSG by providing three key contributions.
First, we present a new game model with uncertainty in which we consider a single behavioral model to capture decision making of the whole attacker population (instead of multiple behavioral models); uncertainty intervals are integrated with the chosen model to capture behavioral uncertainty. The idea of uncertainty intervals is commonly used in the literature (Aghassi and Bertsimas 2006) and has been shown to effectively represent uncertainty in SSG (Kiekintveld, Islam, and Kreinovich 2013). Second, based on this game model, we propose a new efficient robust algorithm that computes the defender’s optimal strategy which is robust to the uncertainty. Overall, the resulting robust optimization problem for computing the defender’s optimal strategy against the worst case of behavioral uncertainty is a non-linear non-convex fractional maximin problem. Our algorithm efficiently solves this problem based on the following key insights: 1) it converts the problem into a single maximization problem via a non-linear conversion for fractional terms and the dual of the inner minimization in maximin; 2) a binary search is then applied to remove the fractional terms; and 3) the algorithm explores extreme points of the feasible solution region and uses a piecewise linear approximation to convert the problem into a Mixed Integer Linear Program (MILP). Our new algorithm provides an O(ε + 1/K)-optimal solution, where ε is the convergence threshold for the binary search and K is the number of segments in the piecewise approximation.
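The binary-search step that removes the fractional terms can be sketched on a toy one-dimensional fractional objective; the functions and grid below are illustrative stand-ins, not the paper's actual defender problem.

```python
# Toy sketch of the binary-search idea used to remove fractional terms:
# maximize f(x)/g(x) by bisecting on the objective value r and checking
# whether max_x [f(x) - r*g(x)] >= 0 (valid when g > 0). In the paper this
# feasibility subproblem becomes a MILP; here it is a brute-force grid scan.

def solve_fractional_max(f, g, xs, tol=1e-6):
    """Bisection on r: f(x)/g(x) >= r  iff  f(x) - r*g(x) >= 0."""
    lo = min(f(x) / g(x) for x in xs)
    hi = max(f(x) / g(x) for x in xs)
    while hi - lo > tol:
        r = (lo + hi) / 2
        # Subproblem is fraction-free, hence amenable to (MI)LP machinery.
        if max(f(x) - r * g(x) for x in xs) >= 0:
            lo = r
        else:
            hi = r
    return lo

# Example: maximize (2x + 1) / (x + 2) over a discretized interval [0, 1];
# the ratio is increasing on [0, 1], so the optimum value is 3/3 = 1.
xs = [i / 1000 for i in range(1001)]
r_star = solve_fractional_max(lambda x: 2 * x + 1, lambda x: x + 2, xs)
```

The ε in the paper's O(ε + 1/K) bound corresponds to `tol` here; the 1/K term comes from the piecewise linear approximation, which this sketch replaces with an exact grid.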
Sara Marie Mc Carthy, Arunesh Sinha, Milind Tambe, and Pratyusa Manadhata. 2016. “Data Exfiltration Detection and Prevention: Virtually Distributed POMDPs for Practically Safer Networks.” In Decision and Game Theory for Security (GameSec 2016).
We address the challenge of detecting and addressing advanced persistent threats (APTs) in a computer network, focusing in particular on the challenge of detecting data exfiltration over Domain Name System (DNS) queries, where existing detection sensors are imperfect and lead to noisy observations about the network’s security state. Data exfiltration over DNS queries involves unauthorized transfer of sensitive data from an organization to a remote adversary through a DNS data tunnel to a malicious web domain. Given the noisy sensors, previous work has illustrated that standard approaches fail to satisfactorily rise to the challenge of detecting exfiltration attempts. Instead, we propose a decision-theoretic technique that sequentially plans to accumulate evidence under uncertainty while taking into account the cost of deploying such sensors. More specifically, we provide a fast scalable POMDP formulation to address the challenge, where the efficiency of the formulation is based on two key contributions: (i) we use a virtually distributed POMDP (VD-POMDP) formulation, motivated by previous work in distributed POMDPs with sparse interactions, where individual policies for different sub-POMDPs are planned separately but their sparse interactions are only resolved at execution time to determine the joint actions to perform; (ii) we allow for abstraction in planning for speedups, and then use a fast MILP to implement the abstraction while resolving any interactions. This allows us to determine optimal sensing strategies, leveraging information from many noisy detectors, and subject to constraints imposed by network topology, forwarding rules and performance costs on the frequency, scope and efficiency of sensing we can perform.
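The evidence-accumulation step underlying any POMDP formulation is the standard discrete belief update; the tiny two-state "clean / exfiltrating" model below is invented for illustration and is far simpler than the VD-POMDP in the paper.

```python
# Standard discrete POMDP belief update, the core operation a POMDP-based
# sensing plan performs at execution time:
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)
# The two-state host model and all numbers below are illustrative only.

def belief_update(belief, T, O, action, obs):
    """belief: state -> prob; T[a][s][s'] transitions; O[a][s'][o] sensor model."""
    new_belief = {}
    for s_next in belief:
        predicted = sum(T[action][s][s_next] * belief[s] for s in belief)
        new_belief[s_next] = O[action][s_next][obs] * predicted
    z = sum(new_belief.values())          # normalize
    return {s: p / z for s, p in new_belief.items()}

# A host is 'clean' or 'exfil' (exfiltrating data over DNS); compromise
# persists once it occurs, and the deployed sensor is noisy.
T = {'sense': {'clean': {'clean': 0.95, 'exfil': 0.05},
               'exfil': {'clean': 0.00, 'exfil': 1.00}}}
O = {'sense': {'clean': {'alert': 0.1, 'quiet': 0.9},
               'exfil': {'alert': 0.7, 'quiet': 0.3}}}
b = belief_update({'clean': 0.9, 'exfil': 0.1}, T, O, 'sense', 'alert')
```

A single noisy alert does not prove exfiltration; it shifts the belief, and the planner in the paper decides whether the shifted belief justifies the cost of further sensing.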
Fei Fang, Thanh H. Nguyen, Rob Pickles, Wai Y. Lam, Gopalasamy R. Clements, Bo An, Amandeep Singh, and Milind Tambe. 2016. “Deploying PAWS to Combat Poaching: Game-theoretic Patrolling in Areas with Complex Terrains (Demonstration).”
The conservation of key wildlife species such as tigers and elephants is threatened by poaching activities. In many conservation areas, foot patrols are conducted to prevent poaching but they may not be well-planned to make the best use of the limited patrolling resources. While prior work has introduced PAWS (Protection Assistant for Wildlife Security) as a game-theoretic decision aid to design effective foot patrol strategies to protect wildlife, the patrol routes generated by PAWS may be difficult to follow in areas with complex terrain. Subsequent research has worked on the significant evolution of PAWS, from an emerging application to regularly deployed software. A key advance of the deployed version of PAWS is that it incorporates the complex terrain information and generates a strategy consisting of easy-to-follow routes. In this demonstration, we provide 1) a video introducing the PAWS system; 2) an interactive visualization of the patrol routes generated by PAWS in an example area with complex terrain; and 3) a machine-human competition in designing patrol strategy given complex terrain and animal distribution.
Shahrzad Gholami, Bryan Wilder, Matthew Brown, Dana Thomas, Nicole Sintov, and Milind Tambe. 2016. “Divide to Defend: Collusive Security Games.” In Conference on Decision and Game Theory for Security (GameSec 2016).
Research on security games has focused on settings where the defender must protect against either a single adversary or multiple, independent adversaries. However, there are a variety of real-world security domains where adversaries may benefit from colluding in their actions against the defender, e.g., wildlife poaching, urban crime and drug trafficking. Given such adversary collusion may be more detrimental for the defender, she has an incentive to break up collusion by playing off the self-interest of individual adversaries. As we show in this paper, breaking up such collusion is difficult given the bounded rationality of human adversaries; we therefore investigate algorithms for the defender assuming both rational and boundedly rational adversaries. The contributions of this paper include (i) collusive security games (COSGs), a model for security games involving potential collusion among adversaries, (ii) SPECTRE-R, an algorithm to solve COSGs and break collusion assuming rational adversaries, (iii) observations and analyses of adversary behavior and the underlying factors including bounded rationality, the imbalanced-resource-allocation effect, coverage perception, and individualism/collectivism attitudes within COSGs with data from 700 human subjects, (iv) a learned human behavioral model that incorporates these factors to predict when collusion will occur, and (v) SPECTRE-BR, an enhanced algorithm which optimizes against the learned behavior model to provide demonstrably better performing defender strategies against human subjects compared to SPECTRE-R.
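The incentive the defender plays off can be sketched with a toy collusion decision: two adversaries collude only when a synergy-boosted joint utility, split between them, beats acting alone. All payoff numbers and the synergy factor are invented; the COSG model in the paper is far richer.

```python
# Toy illustration of the collusion incentive in a COSG-style setting.
# Assumptions (not from the paper): colluding multiplies the sum of solo
# utilities by a synergy factor, and the proceeds are split equally.

def collude_is_profitable(u_alone_a, u_alone_b, synergy):
    """Collusion pays iff each adversary's equal share beats going solo."""
    joint = synergy * (u_alone_a + u_alone_b)
    share = joint / 2
    return share > u_alone_a and share > u_alone_b

# Near-equal solo payoffs: modest synergy makes collusion attractive.
collude_small_gap = collude_is_profitable(4.0, 5.0, 1.2)
# Skewed solo payoffs: the stronger adversary prefers to go it alone,
# which is exactly the self-interest the defender can exploit.
collude_big_gap = collude_is_profitable(8.0, 2.0, 1.2)
```

This is the intuition behind the "imbalanced-resource-allocation effect" in the abstract: by skewing individual adversaries' prospects, the defender can make collusion individually irrational.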
Eric Shieh, Albert Xin Jiang, Amulya Yadav, Pradeep Varakantham, and Milind Tambe. 2016. “An Extended Study on Addressing Defender Teamwork while Accounting for Uncertainty in Attacker Defender Games using Iterative Dec-MDPs.” Multiagent and Grid Systems (MAGS) Journal.
Multi-agent teamwork and defender-attacker security games are two areas that are currently receiving significant attention within multi-agent systems research. Unfortunately, despite the need for effective teamwork among multiple defenders, little has been done to harness the teamwork research in security games. The problem that this paper seeks to solve is the coordination of decentralized defender agents in the presence of uncertainty while securing targets against an observing adversary. To address this problem, we offer the following novel contributions in this paper: (i) New model of security games with defender teams that coordinate under uncertainty; (ii) New algorithm based on column generation that utilizes Decentralized Markov Decision Processes (Dec-MDPs) to generate defender strategies that incorporate uncertainty; (iii) New techniques to handle global events (when one or more agents may leave the system) during defender execution; (iv) Heuristics that help scale up in the number of targets and agents to handle real-world scenarios; (v) Exploration of the robustness of randomized pure strategies. The paper opens the door to a potentially new area combining computational game theory and multi-agent teamwork.
Nicole Sintov, Debarun Kar, Thanh Nguyen, Fei Fang, Kevin Hoffman, Arnaud Lyet, and Milind Tambe. 2016. “From the Lab to the Classroom and Beyond: Extending a Game-Based Research Platform for Teaching AI to Diverse Audiences.” In Symposium on Educational Advances in Artificial Intelligence (EAAI) 2016.
Recent years have seen increasing interest in AI from outside the AI community. This is partly due to applications based on AI that have been used in real-world domains, for example, the successful deployment of game theory-based decision aids in security domains. This paper describes our teaching approach for introducing the AI concepts underlying security games to diverse audiences. We adapted a game-based research platform that served as a testbed for recent research advances in computational game theory into a set of interactive role-playing games. We guided learners in playing these games as part of our teaching strategy, which also included didactic instruction and interactive exercises on broader AI topics. We describe our experience in applying this teaching approach to diverse audiences, including students of an urban public high school, university undergraduates, and security domain experts who protect wildlife. We evaluate our approach based on results from the games and participant surveys.
Andrew Plucker. 2016. “The Future of Counterinsurgency Modeling: Decision Aids for United States Army Commanders”.
Shahrzad Gholami, Bryan Wilder, Matthew Brown, Arunesh Sinha, Nicole Sintov, and Milind Tambe. 2016. “A Game Theoretic Approach on Addressing Collusion among Human Adversaries.” In Workshop on Security and Multiagent Systems, International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Several models have been proposed for Stackelberg security games (SSGs) and protection against perfectly rational and boundedly rational adversaries; however, none of these existing models addresses collusion mechanisms between adversaries. In a large number of studies related to SSGs, there is one leader and one follower in the game, such that the leader takes an action and the follower responds accordingly. These studies fail to take into account the possible existence of groups of adversaries who can collude and cause synergistic loss to the security agents (defenders). The first contribution of this paper is formulating a new type of Stackelberg security game involving a beneficial collusion mechanism among adversaries. The second contribution of this paper is to develop a parametric human behavior model which is able to capture the bounded rationality of adversaries in this type of collusive game. This model is proposed based on human subject experiments with participants on Amazon Mechanical Turk (AMT).
Aaron Schlenker, Matthew Brown, Arunesh Sinha, Milind Tambe, and Ruta Mehta. 2016. “Get Me to My GATE On Time: Efficiently Solving General-Sum Bayesian Threat Screening Games.” In 22nd European Conference on Artificial Intelligence (ECAI).
Threat Screening Games (TSGs) are used in domains where there is a set of individuals or objects to screen with a limited amount of screening resources available to screen them. TSGs are broadly applicable to domains like airport passenger screening, stadium screening, cargo container screening, etc. Previous work on TSGs focused only on the Bayesian zero-sum case and provided the MGA algorithm to solve these games. In this paper, we solve Bayesian general-sum TSGs which we prove are NP-hard even when exploiting a compact marginal representation. We also present an algorithm based upon an adversary-type hierarchical tree decomposition and an efficient branch-and-bound search to solve Bayesian general-sum TSGs. With this we provide four contributions: (1) GATE, the first algorithm for solving Bayesian general-sum TSGs, which uses hierarchical type trees and a novel branch-and-bound search, (2) the Branch-and-Guide approach which combines branch-and-bound search with the MGA algorithm for the first time, (3) heuristics based on properties of TSGs for accelerated computation of GATE, and (4) experimental results showing the scalability of GATE needed for real-world domains.
Yundi Qian. 2016. “Handling Attacker's Preference in Security Domains: Robust Optimization and Learning Approaches”.
Stackelberg security games (SSGs) are now established as a powerful tool in security domains. In order to compute the optimal strategy for the defender in the SSG model, the defender needs to know the attacker’s preferences over targets so that she can predict how the attacker would react under a certain defender strategy. Uncertainty over attacker preferences may cause the defender to suffer significant losses. Motivated by that, my thesis focuses on addressing uncertainty in attacker preferences using robust and learning approaches. In security domains with one-shot attacks, e.g., counter-terrorism domains, the defender is interested in robust approaches that can provide a performance guarantee in the worst case. The first part of my thesis focuses on handling the attacker’s preference uncertainty with robust approaches in these domains. My work considers a new dimension of preference uncertainty that has not been taken into account in the previous literature, the risk-preference uncertainty of the attacker, and proposes an algorithm to efficiently compute the defender’s robust strategy against uncertain risk-aware attackers. In security domains with repeated attacks, e.g., the green security domain of protecting natural resources, the attacker “attacks” (illegally extracts natural resources) frequently, so it is possible for the defender to learn the attacker’s preferences from their previous actions.
G. R. Martins, M. Escarce Junior, and L. S. Marcolino. 2016. “A Hands-on Musical Experience in AI, Games and Art (Demonstration).” In 30th AAAI Conference on Artificial Intelligence (AAAI 2016).
AI is typically applied in video games in the creation of artificial opponents, in order to make them strong, realistic or even fallible (for the game to be “enjoyable” by human players). We offer a different perspective: we present the concept of “Art Games”, a view that opens up many possibilities for AI research and applications. Conference participants will play Jikan to Kukan, an art game where the player dynamically creates the soundtrack with the AI system, while developing her experience in the unconscious world of a character.
Chao Zhang, Shahrzad Gholami, Debarun Kar, Arunesh Sinha, Manish Jain, Ripple Goyal, and Milind Tambe. 2016. “Keeping Pace with Criminals: An Extended Study of Designing Patrol Allocation against Adaptive Opportunistic Criminals.” Games Journal.
Game theoretic approaches have recently been used to model the deterrence effect of patrol officers’ assignments on opportunistic crimes in urban areas. One major challenge in this domain is modeling the behavior of opportunistic criminals. Compared to strategic attackers (such as terrorists) who execute a well-laid-out plan, opportunistic criminals are less strategic in planning attacks and more flexible in executing them based on their knowledge of patrol officers’ assignments. In this paper, we aim to design an optimal police patrolling strategy against opportunistic criminals in urban areas. Our approach comprises two major parts: learning a model of the opportunistic criminal (and how he or she responds to patrols) and then planning optimal patrols against this learned model. The planning part, by using information about how criminals respond to patrols, takes into account the strategic game interaction between the police and criminals. In more detail, first, we propose two categories of models for modeling opportunistic crimes. The first category of models learns the relationship between defender strategy and crime distribution as a Markov chain. The second category of models represents the interaction of criminals and patrol officers as a Dynamic Bayesian Network (DBN) with the number of criminals as the unobserved hidden states. To this end, we: (i) apply standard algorithms, such as Expectation Maximization (EM), to learn the parameters of the DBN; (ii) modify the DBN representation to allow for a compact representation of the model, resulting in better learning accuracy and increased learning speed of the EM algorithm when used on the modified DBN. These modifications exploit the structure of the problem and use independence assumptions to factorize the large joint probability distributions. Next, we propose an iterative learning and planning mechanism that periodically updates the adversary model.
We demonstrate the efficiency of our learning algorithms by applying them to a real dataset of criminal activity obtained from the police department of the University of Southern California (USC), situated in Los Angeles, CA, USA. We project a significant reduction in crime rate using our planning strategy as compared to the actual strategy deployed by the police department. We also demonstrate the improvement in crime prevention in simulation when we use our iterative planning and learning mechanism compared to learning once and then planning. Finally, we introduce web-based software for recommending patrol strategies, which is currently deployed at USC. In the near future, our learning and planning algorithm is planned to be integrated with this software. This work was done in collaboration with the police department of USC.
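The first category of models in this abstract, learning the patrol-crime relationship as a Markov chain, amounts to maximum-likelihood estimation of transition probabilities from counts. A minimal sketch with fabricated shift data:

```python
# Minimal sketch of learning a Markov chain over (patrol level, crime level)
# states by maximum-likelihood counting of observed transitions. The shift
# sequence below is fabricated for illustration; the paper learns from real
# USC police data.
from collections import defaultdict

def estimate_transitions(sequence):
    """Estimate P(next | current) from a sequence of observed states."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for state, row in counts.items():
        total = sum(row.values())
        probs[state] = {nxt: c / total for nxt, c in row.items()}
    return probs

# Each state pairs a patrol level with the crime level observed that shift.
shifts = [('high_patrol', 'low_crime'), ('low_patrol', 'high_crime'),
          ('high_patrol', 'low_crime'), ('high_patrol', 'low_crime'),
          ('low_patrol', 'high_crime'), ('high_patrol', 'low_crime')]
P = estimate_transitions(shifts)
```

The planner then optimizes patrols against this estimated chain; the second model category replaces the counting step with EM over a DBN whose hidden state (number of criminals) is never observed directly.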
Arunesh Sinha, Debarun Kar, and Milind Tambe. 2016. “Learning Adversary Behavior in Security Games: A PAC Model Perspective.” In International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Recent applications of Stackelberg Security Games (SSG), from wildlife crime to urban crime, have employed machine learning tools to learn and predict adversary behavior using available data about defender-adversary interactions. Given these recent developments, this paper commits to an approach of directly learning the response function of the adversary. Using the PAC model, this paper lays a firm theoretical foundation for learning in SSGs and provides utility guarantees when the learned adversary model is used to plan the defender’s strategy. The paper also aims to answer practical questions such as how much more data is needed to improve an adversary model’s accuracy. Additionally, we explain a recently observed phenomenon that prediction accuracy of learned adversary behavior is not enough to discover the utility maximizing defender strategy. We provide four main contributions: (1) a PAC model of learning adversary response functions in SSGs; (2) PAC-model analysis of the learning of key, existing bounded rationality models in SSGs; (3) an entirely new approach to adversary modeling based on a non-parametric class of response functions with PAC-model analysis and (4) identification of conditions under which computing the best defender strategy against the learned adversary behavior is indeed the optimal strategy. Finally, we conduct experiments with real-world data from a national park in Uganda, showing the benefit of our new adversary modeling approach and verification of our PAC model predictions.
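The "how much data is needed" question the paper formalizes can be made concrete with the textbook PAC sample-complexity bound for a finite hypothesis class (Hoeffding plus a union bound); this is the generic bound, not the specific guarantee derived in the paper.

```python
# Generic PAC-style sample-complexity calculation for a finite hypothesis
# class H: m >= ln(2|H| / delta) / (2 * eps^2) samples suffice so that every
# hypothesis's empirical error is within eps of its true error with
# probability at least 1 - delta. Shown only to make the data-requirement
# question concrete; the paper's own analysis covers richer response-function
# classes.
import math

def pac_sample_size(num_hypotheses, eps, delta):
    """Uniform-convergence sample size for a finite class (Hoeffding + union)."""
    return math.ceil(math.log(2 * num_hypotheses / delta) / (2 * eps ** 2))

# E.g., a class of one million candidate adversary models, eps = 0.05,
# confidence 99%:
m = pac_sample_size(num_hypotheses=10**6, eps=0.05, delta=0.01)
```

The 1/eps² dependence is the practically painful part: halving the target error roughly quadruples the number of defender-adversary interactions needed, which is why data-scarce domains like wildlife crime motivate the paper's tighter analysis.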
Yasaman Dehghani Abbasi. 2016. “Modeling Human Bounded Rationality in Opportunistic Security Games”.
Security has been an important worldwide concern over the past decades. Security agencies have been established to prevent different types of crimes in various domains, such as illegal poaching, human trafficking, terrorist attacks on ports and airports, and urban crimes. Unfortunately, in all these domains, security agencies have limited resources and cannot protect all potential targets at all times. Therefore, it is critical for security agencies to allocate their limited resources optimally to protect potential targets from the adversary. Recently, game-theoretic decision support systems have been applied to assist defenders (e.g., security agencies) in allocating and scheduling their limited resources. The Stackelberg Security Game (SSG) is an example of a game-theoretic model that has been deployed to assign security resources to potential targets. Indeed, decision-support systems based on SSG models have been successfully implemented to assist real-world security agencies in protecting critical infrastructure such as airports and ports, or in suppressing crime in urban areas. SSG provides an approach for generating randomized protection strategies for the defender using a mathematical representation of the interaction between the defender and the attacker. Therefore, one of the key steps in applying SSG algorithms to real-world security problems is to model the adversary's decision-making process. Building upon the success of SSG applications, game theory is now being applied to adjacent domains such as opportunistic security. In this domain, the defender is faced with adversaries with special characteristics. Opportunistic criminals carry out repeated and frequent illegal activities (attacks), generally do not conduct extensive surveillance before performing an attack, and spend less time and effort planning each attack.
To that end, in my thesis, I focus on modeling opportunistic criminals’ behavior, for which modeling the adversary's decision-making process is particularly crucial to developing efficient patrolling strategies for the defenders. I provide an empirical investigation of adversary behavior in opportunistic crime settings by conducting extensive human subject experiments and analyzing how participants make their decisions, in order to create adversary behavior prediction models that can be deployed in many opportunistic crime domains. More specifically, this thesis provides (i) a comprehensive answer to the question of which of the proposed human bounded rationality models best predicts adversaries’ behavior in the opportunistic crime domain; (ii) enhanced human behavior models which outperform existing state-of-the-art models; (iii) a detailed comparison between human behavior models and a well-known cognitive science model, the Instance-Based Learning model; (iv) an extensive study of the heterogeneity of adversarial behavior; (v) a thorough study of how human behavior changes over time; and (vi) improvements to human behavior models to account for adversaries’ behavior evolving over time.
L. S. Marcolino, H. Xu, D. Gerber, B. Kolev, S. Price, E. Pantazis, and M. Tambe. 2016. “Multi-agent Team Formation for Design Problems .” In Coordination, Organizations, Institutions and Norms in Agent Systems XI. Springer-Verlag Lecture Notes in AI.Abstract
Design imposes a novel social choice problem: using a team of voting agents, maximize the number of optimal solutions, allowing a user to then make an aesthetic choice. In an open system of design agents, team formation is fundamental. We present the first model of agent teams for design. For maximum applicability, we envision agents that are queried for a single opinion, with multiple solutions obtained over multiple iterations. We show that diverse teams, composed of agents with different preferences, maximize the number of optimal solutions, while uniform teams, composed of multiple copies of the best agent, are in general suboptimal. Our experiments study the model under bounded time; we also study a real system, in which agents vote to design buildings.
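The diverse-vs-uniform contrast can be illustrated with a toy simulation (a hypothetical sketch, not the paper's model): several designs are tied for optimal, each stochastic agent is queried once per iteration for its favorite, and plurality voting over many iterations surfaces more distinct optimal designs for the diverse team.

```python
import random

quality = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 0.5}      # three tied optima
optimal = {d for d, q in quality.items() if q == 1.0}
rng = random.Random(0)

def make_agent(favorite):
    # Stochastic agent: votes its favorite design 70% of the time,
    # otherwise a uniformly random design.
    def vote():
        return favorite if rng.random() < 0.7 else rng.choice(list(quality))
    return vote

def run(team, iterations):
    found = set()
    for _ in range(iterations):
        ballots = [agent() for agent in team]
        winner = max(set(ballots), key=ballots.count)   # plurality, arbitrary ties
        found.add(winner)
    return found & optimal          # distinct optimal designs discovered

uniform = [make_agent("A")] * 3                      # three copies of one agent
diverse = [make_agent(f) for f in ("A", "B", "C")]   # three different preferences

u = run(uniform, 200)
d = run(diverse, 200)
```

The uniform team tends to reproduce the same winner every iteration, while the diverse team's differing preferences cover the full set of tied optima; this mirrors the abstract's claim only in spirit, since the paper's model and analysis are more general.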
Haifeng Xu. 2016. “The Mysteries of Security Games: Equilibrium Computation Becomes Combinatorial Algorithm Design .” In The 17th ACM Conference on Economics and Computation (ACM-EC).Abstract
The security game is a basic model for resource allocation in adversarial environments. Here there are two players, a defender and an attacker. The defender wants to allocate her limited resources to defend critical targets, and the attacker seeks his most favorable target to attack. In the past decade, there has been a surge of research interest in analyzing and solving security games motivated by applications from various domains. Remarkably, these models and their game-theoretic solutions have led to real-world deployments in use at major sites and agencies such as LAX airport, the US Coast Guard, and the Federal Air Marshal Service, as well as by non-governmental organizations. Across all this research and these applications, equilibrium computation serves as a foundation. This paper examines security games from a theoretical perspective and provides a unified view of various security game models. In particular, each security game can be characterized by a set system E that consists of the defender's pure strategies; the defender's best-response problem can be viewed as a combinatorial optimization problem over E. Our framework captures most of the basic security game models in the literature, including all the deployed systems; the set system E arising from various domains encodes standard combinatorial problems like bipartite matching, maximum coverage, min-cost flow, and packing problems. Our main result shows that equilibrium computation in security games is essentially a combinatorial problem. In particular, we prove that, for any set system E, the following problems can be reduced to each other in polynomial time: (0) combinatorial optimization over E; (1) computing the minimax equilibrium for zero-sum security games over E; (2) computing the strong Stackelberg equilibrium for security games over E; (3) computing the best or worst (for the defender) Nash equilibrium for security games over E.
Therefore, the hardness (respectively, polynomial solvability) of any of these problems implies the hardness (respectively, polynomial solvability) of all the others. Here, by “games over E” we mean the class of security games with arbitrary payoff structures but a fixed set E of defender pure strategies. This shows that the complexity of a security game is essentially determined by the set system E. We view drawing these connections as an important conceptual contribution of this paper.
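The flavor of reduction (1) can be made concrete for the simplest set system, where E lets the defender cover any k of n targets, so any marginal coverage vector with entries in [0, 1] summing to at most k is implementable as a mixed strategy. A minimal hypothetical sketch (not from the paper; function names are illustrative): in this special zero-sum case the minimax value can be found by a water-filling binary search instead of a general LP, assuming each target's covered payoff strictly exceeds its uncovered payoff.

```python
def minimax_coverage(u_cov, u_unc, k, iters=200):
    """Binary-search the largest value v the defender can guarantee, where
    her utility at an attacked target t is x[t]*u_cov[t] + (1-x[t])*u_unc[t]
    and the coverage vector x must satisfy 0 <= x[t] <= 1, sum(x) <= k."""
    def needed(v):
        # Least coverage per target so the defender gets at least v there;
        # None if v is unreachable at some target or exceeds the k resources.
        need = []
        for uc, uu in zip(u_cov, u_unc):   # assumes uc > uu for every target
            if v > uc:                     # even full coverage falls short
                return None
            need.append(max(0.0, (v - uu) / (uc - uu)))
        return need if sum(need) <= k else None

    lo, hi = min(u_unc), min(u_cov)        # the game value is bracketed here
    for _ in range(iters):
        mid = (lo + hi) / 2
        if needed(mid) is not None:
            lo = mid                       # feasible: push the value up
        else:
            hi = mid
    return lo, needed(lo)

# Hypothetical instance: 3 targets, 1 resource.
v, x = minimax_coverage([5, 3, 4], [-1, -2, 0], k=1)
```

The zero-sum attacker always hits the least-protected target, so the defender equalizes her utility across targets until the k resources are exhausted; for richer set systems E the abstract's reductions replace this water-filling step with combinatorial optimization over E.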