Algorithmic Experimental Game Theory
Dealing with Human Decision Making in Security Games

Current Members

Debarun Kar

Benjamin Ford

Shahrzad Gholami

Subhasree Sengupta

Nicole Sintov

Milind Tambe

Alumni

Rong Yang

James Pita

Chris Kiekintveld

Mohit Goenka

Fernando Ordonez

Richard John

Yasaman Dehghani Abbasi

Thanh Nguyen


Gratefully acknowledge support of:

Motivation

Security at major locations of economic or political importance is a key concern around the world, particularly given the threat of terrorism. Stackelberg games have received significant attention in recent years given their deployment for real-world security problems, such as critical infrastructure protection and robot patrolling strategies. Many of these real-world systems, including ARMOR, IRIS and GUARDS, have adopted the standard game-theoretic assumption that the adversaries are perfectly rational and strictly maximize their expected utilities. However, this assumption may not hold when facing human adversaries with bounded rationality and may reduce the effectiveness of these systems as a result.

Our goal is to relax the perfect rationality assumption of human adversaries in real-world Stackelberg security games. In particular, we aim to develop new models of the adversary that incorporate their bounded rationality. At the same time, we want to build new algorithms for efficiently computing the optimal defender strategy in real-world security problems.
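As a baseline for what is being relaxed: standard algorithms assume a perfectly rational attacker who best-responds to the defender's mixed (coverage) strategy. A minimal sketch in Python; all target payoffs and coverage values below are illustrative assumptions, not data from any deployed system.

```python
def attacker_eu(coverage, reward, penalty):
    """Attacker's expected utility at one target: reward if the attack
    succeeds (target uncovered), penalty if caught (target covered)."""
    return (1 - coverage) * reward + coverage * penalty

def best_response_target(coverages, rewards, penalties):
    """A perfectly rational attacker picks the target with maximum EU."""
    eus = [attacker_eu(c, r, p)
           for c, r, p in zip(coverages, rewards, penalties)]
    return max(range(len(eus)), key=eus.__getitem__)

coverages = [0.6, 0.3, 0.1]   # defender's marginal coverage probabilities
rewards   = [10, 6, 3]        # attacker reward if the target is uncovered
penalties = [-8, -5, -2]      # attacker penalty if caught

print(best_response_target(coverages, rewards, penalties))
```

The bounded-rationality models described below replace this hard arg-max with a stochastic choice rule.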

Approaches

There are several open questions we need to address in moving beyond perfect rationality assumptions. First, a large variety of alternative models have been studied in Behavioral Game Theory and Cognitive Psychology that capture some of the deviations of human decisions from perfect rationality. However, there is an important empirical question of which model best represents the salient features of human behavior in applied security contexts. Second, integrating any of the proposed models into a decision-support system (even for the purpose of empirical evaluation) requires developing new computational methods, since the existing algorithms for security games are based on perfectly rational attackers. Third, many of these models imply mathematically complex representations of the adversary's decision-making procedure (e.g., nonlinear and non-convex function forms), which in general leads to an NP-hard problem of computing the defender's optimal strategy. Therefore, developing efficient algorithms to solve such a computationally complex problem is critical for real-world security problems due to their massive scale.

We work towards answering these open questions by first developing mathematical models of adversary decision making based on fundamental theories from the literature of related fields, such as Cognitive Science and Behavioral Economics. At the same time, we develop efficient algorithms to compute optimal defender strategies. We also conduct experiments with human subjects to evaluate the performance of both the adversary behavioral models and the corresponding defender strategies.

We have developed models of adversary behavior for both single-shot game domains, such as counter-terrorism, and repeated SSG settings, such as wildlife poaching, which involve repeated interactions between the defenders and the adversaries. Assuming that attackers act independently in such domains, i.e., that they do not collude with each other to attack targets, we have developed several adversary behavior models, including QR, SUQR, Bayesian SUQR, Robust SUQR and SHARP. We have conducted extensive human subjects experiments on simulated games developed for such domains (see this page) to compare the performance of these behavioral models. From the data collected on human behavior in these games, we observed that: (a) humans' perception of probabilities is S-shaped in nature (see Fig. 1 in this page), contrary to what is popularly reported in the behavioral game theory literature, such as Kahneman and Tversky's Nobel Prize-winning work on prospect theory; and (b) behavioral models such as SHARP, which capture the adaptive nature of human adversaries, outperform the other models (see Fig. 2 in this page). See this paper for more details about our experiments and findings.
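The independent-adversary models above share a common stochastic-choice core: instead of always attacking the best target, the attacker chooses each target with a probability that grows with its (subjective) utility. A hedged sketch of the QR and SUQR choice rules; the λ value and SUQR feature weights below are invented for illustration, not fitted values from our experiments.

```python
import math

def qr_probs(eus, lam=0.8):
    """Quantal Response: attack probability is proportional to
    exp(lam * expected utility); lam = 0 is uniform, lam -> inf is rational."""
    exps = [math.exp(lam * u) for u in eus]
    z = sum(exps)
    return [e / z for e in exps]

def suqr_probs(coverages, rewards, penalties, w=(-9.0, 0.5, 0.2)):
    """SUQR: softmax over a weighted 'subjective utility' of target
    features (coverage probability, reward, penalty)."""
    su = [w[0] * c + w[1] * r + w[2] * p
          for c, r, p in zip(coverages, rewards, penalties)]
    exps = [math.exp(u) for u in su]
    z = sum(exps)
    return [e / z for e in exps]
```

Bayesian SUQR and SHARP extend this core by, respectively, maintaining a distribution over the weights and adapting them over repeated rounds.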

However, attackers may also collude with each other while conducting attacks. Since such adversary collusion can be more detrimental to the defender, she has an incentive to break up collusion by playing off the self-interest of individual adversaries. As we show in our study, breaking up such collusion is difficult given the bounded rationality of human adversaries; we therefore investigate defender algorithms assuming both rational and boundedly rational adversaries. The rational model shows that the defender's best strategy is an imbalanced resource allocation that places one adversary in an advantaged situation, making it strictly irrational for him to collude with the other, disadvantaged adversary; hence, collusion breaks. This approach assumes adversaries are utility maximizers; however, our human subject experiments showed that human adversaries do not necessarily maximize utility; rather, targets with higher expected utility are simply more likely to be chosen. With this stochastic approach, we were able to improve the performance of the security resource allocation algorithm against collusive adversaries. See this paper for more details about our models, experiments and findings.
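The rational collusion-breaking logic can be illustrated with a toy comparison: under an imbalanced defender allocation, the advantaged adversary's solo payoff exceeds an equal share of the joint collusive payoff, so collusion stops being in his self-interest. All payoff numbers here are invented for illustration.

```python
def prefers_collusion(solo_eu, joint_eu, share=0.5):
    """A rational adversary colludes only if his share of the joint
    collusive payoff beats what he can earn acting alone."""
    return share * joint_eu > solo_eu

# Advantaged adversary: light coverage makes acting alone lucrative.
advantaged = prefers_collusion(solo_eu=5.0, joint_eu=8.0)     # 4.0 > 5.0 -> False
# Disadvantaged adversary: heavy coverage makes collusion tempting.
disadvantaged = prefers_collusion(solo_eu=1.0, joint_eu=8.0)  # 4.0 > 1.0 -> True
```

Because collusion requires both parties, the advantaged adversary's refusal breaks it. The boundedly rational variant replaces this hard comparison with a stochastic (softmax-style) choice, which is what our human subject data supports.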

Fig. 1

Fig. 2

Fig. 3

Real-world Deployment

Models and algorithms based on the human behavior models developed in our group have been tested and deployed in various places. The Protection Assistant for Wildlife Security (PAWS) is an application initially developed with the goal of improving wildlife patrols. PAWS-based strategies have been tested in Queen Elizabeth National Park in Uganda in collaboration with Andrew Lemieux. MaxImin Defense Against SUQR (MIDAS) is a scalable and robust algorithm for generating game-theoretic patrols against multiple boundedly rational adversaries; MIDAS has recently been tested in the Gulf of Mexico by the US Coast Guard (USCG). We have also successfully developed PROTECT, a software system that helps the US Coast Guard schedule patrols to protect ports. At the heart of PROTECT is the PASAQ algorithm, which assumes a Quantal Response model of adversary decision making.
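The idea behind PASAQ can be sketched with a toy brute-force version: choose the defender coverage that maximizes defender expected utility against a Quantal Response attacker. Everything here (two targets, all payoffs, λ, grid search) is an invented illustration; the real algorithm solves the underlying nonconvex optimization at scale rather than enumerating coverages.

```python
import math

def defender_eu(x, lam=1.0):
    """Defender EU when one resource is split as (x, 1-x) over two targets
    and the attacker follows a Quantal Response. Payoffs are assumptions."""
    cov = [x, 1 - x]
    att_r, att_p = [8, 5], [-6, -4]    # attacker reward / penalty per target
    def_r, def_p = [4, 3], [-7, -5]    # defender reward (covered) / penalty
    eu_att = [(1 - c) * r + c * p for c, r, p in zip(cov, att_r, att_p)]
    exps = [math.exp(lam * u) for u in eu_att]
    z = sum(exps)
    q = [e / z for e in exps]          # QR attack distribution over targets
    return sum(qt * (c * dr + (1 - c) * dp)
               for qt, c, dr, dp in zip(q, cov, def_r, def_p))

# Brute-force search over coverage splits (stand-in for the real optimization).
best_x = max((i / 100 for i in range(101)), key=defender_eu)
```

Because the QR attack distribution makes the objective nonlinear and nonconvex in the coverage, naive enumeration does not scale beyond toy instances, which is exactly the computational challenge PASAQ addresses.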


Part I

Part II
Rong Yang Presenting at International Joint Conference on Artificial Intelligence (IJCAI), July 2011, Barcelona, Spain


Relevant Publications

  • D. Kar, F. Fang, F. M. D. Fave, N. Sintov, M. Tambe, A. Lyet, Comparing Human Behavior Models in Stackelberg Security Games: An Extended Study, In Artificial Intelligence Journal (AIJ), Elsevier, Volume 240, November 2016, Pages 65–103. DOI [.pdf]

  • A. Sinha, D. Kar, M. Tambe "Learning Adversary Behavior in Security Games: A PAC Model Perspective " In International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2016 [.pdf]

  • Thanh H. Nguyen, A. Sinha, S. Gholami, A. Plumptre, L. Joppa, M. Tambe, M. Driciru, F. Wanyama, A. Rwetsiba, R. Critchlow, C. Beale "CAPTURE: A New Predictive Anti-Poaching Tool for Wildlife Protection," In 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2016 [.pdf]

  • S. Gholami, B. Wilder, M. Brown, D. Thomas, N. Sintov, M. Tambe, "Divide to Defend: Collusive Security Games," In Proc. of the Conference on Decision and Game Theory for Security (GameSec 2016) [.pdf]

  • S. Gholami, B. Wilder, M. Brown, A. Sinha, N. Sintov, M. Tambe, "A Game Theoretic Approach on Addressing Cooperation among Human Adversaries," In Workshop on Security and Multiagent Systems (SECMAS) held at the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) [.pdf]

  • S. Gholami, B. Wilder, M. Brown, A. Sinha, N. Sintov, M. Tambe, "SPECTRE: A Game Theoretic Framework for Preventing Collusion in Security Games (Demonstration)," In Proc. of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) [.pdf]

  • S. Gholami, B. Wilder, M. Brown, D. Thomas, N. Sintov, M. Tambe, "Toward Addressing Collusion among Human Adversaries in Security Games," In Proc. of the 22nd biennial European Conference on Artificial Intelligence (ECAI 2016) [.pdf]

  • D. Kar, F. Fang, F. Delle Fave, N. Sintov, M. Tambe, " ‘A Game of Thrones’: When Human Behavior Models Compete in Repeated Stackelberg Security Games", (AAMAS’15, Istanbul, Turkey)[.pdf]

  • D. Kar, F. Fang, F. Delle Fave, N. Sintov, M. Tambe, "Conducting Longitudinal Experiments with Behavioral Models in Repeated Stackelberg Security Games on Amazon Mechanical Turk" In Human-Agent Interaction Design and Models (HAIDM) Workshop at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015) 2015 [.pdf]

  • D. Kar, F. Fang, F. Delle Fave, N. Sintov, A. Sinha, A. Galstyan, B. An, M. Tambe, "Learning Bounded Rationality Models of the Adversary in Repeated Stackelberg Security Games," In Adaptive and Learning Agents Workshop at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015) 2015 [.pdf]

  • D. Kar, F. Fang, F. Delle Fave, N. Sintov, M. Tambe, A. Van Wissen, "Effectiveness of Probability Perception Modeling and Defender Strategy Generation Algorithms in Repeated Stackelberg Games: An Initial Report", (Computational Sustainability Workshop at AAAI’15, Texas, Austin)[.pdf]

  • R. Yang, B. Ford, M. Tambe, A. Lemieux, "Adaptive Resource Allocation for Wildlife Protection against Illegal Poachers," International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2014 [.pdf]

  • B. Haskell, D. Kar, F. Fang, M. Tambe, S. Cheung, E. Denicola, "Robust Protection of Fisheries with COmPASS," Innovative Applications of Artificial Intelligence (IAAI) 2014 [.pdf]

  • R. Yang, A. X. Jiang, M. Tambe, F. Ordonez, "Scaling-up Security Games with Boundedly Rational Adversaries: A Cutting-plane Approach", International Joint Conference on Artificial Intelligence (IJCAI 2013) [.pdf]

  • T.H. Nguyen, R. Yang, A. Azaria, S. Kraus, M. Tambe, "Analyzing the Effectiveness of Adversary Modeling in Security Games", Conference on Artificial Intelligence (AAAI 2013) [.pdf]

  • R. Yang, C. Kiekintveld, F. Ordonez, M. Tambe and R. John, "Improving Resource Allocation Strategies Against Human Adversaries in Security Games: An Extended Study", Artificial Intelligence Journal, vol 195, February 2013, pp 440-469 [.pdf]

  • J. Pita, R. John, R. Maheswaran, M. Tambe and S. Kraus, "A Robust Approach to Addressing Human Adversaries in Security Games", European Conference on Artificial Intelligence (ECAI 2012) [.pdf]

  • R. Yang, F. Ordonez and M. Tambe, "Computing Optimal Strategy against Quantal Response in Security Games", In Proc. of the 11th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2012) [.pdf]

  • R. Yang, A. X. Jiang, F. Fang, R. Maheswaran and M. Tambe, "Designing Better Strategies against Human Adversaries in Graph-Based Security Games", In Proc. of the 11th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2012) [.pdf] (Short Paper)

  • E. Shieh, B. An, R. Yang, M. Tambe, C. Baldwin, J. DiRenzo, B. Maule, G. Meyer, "PROTECT: A deployed game theoretic system to protect the ports of the United States", In Proc. of the 11th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2012) [.pdf]

  • J. Pita, R. John, R. Maheswaran, M. Tambe, R. Yang and S. Kraus, "A Robust Approach to Addressing Human Adversaries in Security Games", In Proc. of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012) [.pdf] (Short Paper)

  • R. Yang, C. Kiekintveld, F. Ordonez, M. Tambe and R. John, "Improving Resource Allocation Strategy Against Human Adversaries in Security Games", In Proc. of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2011) [.pdf]

  • R. Yang, M. Tambe, C. Kiekintveld, F. Ordonez and R. John, "Improved Computational Models of Human Behavior in Security Games", In Proc. of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011) [.pdf] (Short Paper)

  • J. Pita, M. Jain, F. Ordonez, M. Tambe and S. Kraus, "Robust Solutions to Stackelberg Games: Addressing Bounded Rationality and Limited Observations in Human Cognition", Artificial Intelligence Journal, 174(15):1142-1171, 2010 [.pdf]

  • J. Pita, M. Jain, F. Ordonez, M. Tambe, S. Kraus and R. Magori-Cohen, "Effective Solutions for Real-World Stackelberg Games: When Agents Must Deal with Human Uncertainties", In Proc. of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009) [.pdf]

If you have any questions about the contents of this page please contact Debarun Kar ( dkar@usc.edu )