ARMOR: Research on game theory for security
(In use at LAX since 2007)
(ARMOR-LAX)

ARMOR-LAX is now managed by ARMORWAY

Current ARMOR Team:

Milind Tambe, Fernando Ordóñez, Parth Shah, James Pita, Manish Jain

Alumni:
Praveen Paruchuri, Janusz Marecki, Christopher Portway, Craig Western, Sarit Kraus, Shyamsunder Rathi


Gratefully acknowledge the support of the USC Homeland Security Center (CREATE).

Motivation

Security at major locations of economic or political importance is a key concern around the world, particularly given the threat of terrorism. Limited security resources prevent full coverage at all times, which allows adversaries to observe and exploit patterns in selective patrolling or monitoring; for example, they can plan an attack that avoids existing patrols. Randomized patrolling or monitoring is therefore important, but the randomization must weight different actions differently, based on their complex costs and benefits.

Security Challenge

Los Angeles International Airport has posed the following two key challenges to our team:

  1. When and where to place checkpoints on inbound roads
  2. When and where to allocate canine units to terminals

These challenges bring with them some other important security challenges:
  • Security measures are observable, making any patterns or predictability exploitable
  • A security force faces many different types of adversaries
  • Different adversary types occur with different probabilities

ARMOR: Assistant for Randomized Monitoring Over Routes

The ARMOR software casts the above patrolling/monitoring problem as a Bayesian Stackelberg game, allowing the program to weigh the different actions in its randomization appropriately, taking into account the different target values, the fact that the adversary will conduct surveillance, and the uncertainty over adversary types. According to Dr. Milind Tambe, a tenured Professor at USC and the lead developer of ARMOR, the program was built on DOBSS (Decomposed Optimal Bayesian Stackelberg Solver), the fastest known solver for Bayesian Stackelberg games at the time; its optimal mixed strategies provide the weighted randomization mentioned above. (Since the deployment of ARMOR, the USC CREATE team led by Dr. Tambe has continued to scale up its algorithms beyond DOBSS, so they can handle much larger domains, such as those presented by the US Coast Guard, and address more of the uncertainties those domains present.)
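
To make the weighted randomization concrete, here is a minimal, self-contained sketch (in Python, with invented payoffs rather than LAX data) of the leader-follower reasoning: the defender commits to coverage probabilities, the observing attacker best-responds, and the defender's optimal mix is deliberately skewed toward the high-value target rather than uniform.

    # Toy Stackelberg example; all payoffs are invented, not LAX data.
    # The defender commits to coverage probabilities over two targets; the
    # attacker observes them and attacks the target with the highest
    # expected payoff.

    # payoffs[t] = (defender if t covered, defender if t uncovered,
    #               attacker if t covered, attacker if t uncovered)
    payoffs = {
        0: (1.0, -5.0, -1.0, 5.0),   # high-value target
        1: (1.0, -2.0, -1.0, 2.0),   # lower-value target
    }

    def leader_value(x):
        """x = probability of covering target 0 (target 1 gets 1 - x)."""
        cover = {0: x, 1: 1.0 - x}
        att = {t: cover[t] * pc + (1 - cover[t]) * pu
               for t, (_, _, pc, pu) in payoffs.items()}
        t = max(att, key=att.get)                  # attacker best-responds
        dc, du, _, _ = payoffs[t]
        return cover[t] * dc + (1 - cover[t]) * du

    # Sweep the defender's mixed strategy: the optimum is not uniform
    # (x = 0.5); the high-value target earns a larger share of coverage.
    value, x = max((leader_value(x / 100.0), x / 100.0) for x in range(101))
    print("optimal defender value %.2f at coverage x = %.2f" % (value, x))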

The image to the left shows the main screen of the ARMOR interface. In this example, a week's worth of schedules is shown. All three types of constraints are visible (a possible encoding is sketched after this list):

  • Red marks times and locations that must not be scheduled
  • Green marks the reverse: a time and location that must be scheduled
  • Yellow regions must have at least one checkpoint scheduled
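
As an illustration only (ARMOR's actual data model is not public, so all names below are hypothetical), the three constraint colors could be encoded and checked roughly as follows:

    # Hypothetical encoding of the three scheduling-constraint types;
    # these names are illustrative, not ARMOR's internal representation.
    from dataclasses import dataclass
    from typing import List, Set, Tuple

    Slot = Tuple[str, int]  # (location, hour), e.g. ("Terminal 1", 14)

    @dataclass
    class ScheduleConstraints:
        forbidden: Set[Slot]           # "red": must NOT be scheduled
        required: Set[Slot]            # "green": MUST be scheduled
        at_least_one: List[Set[Slot]]  # "yellow": each region needs >= 1

        def satisfied_by(self, schedule: Set[Slot]) -> bool:
            return (not (schedule & self.forbidden)
                    and self.required <= schedule
                    and all(region & schedule
                            for region in self.at_least_one))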

Download Problem Set

News on the ARMOR Project

Recent Updates

ARMOR's Successful Deployment Leads to Work with Federal Air Marshals

ARMOR has now been successfully deployed at Los Angeles International Airport (LAX) since August 2007 to schedule canine patrols and police checkpoints. Based on the successful deployment at LAX, the United States Federal Air Marshals have recently commissioned TEAMCORE to work on a similar project for randomizing the assignment of Federal Air Marshals to flights. See the full details here: ARMOR Federal Air Marshals

Pictured here are the attendees of the debriefing celebration held at the end of the original six-month trial period at LAX. ARMOR has since been officially handed over to the LAX Police.

General Advances in Security Domains

Security, commonly defined as the ability to deal with intentional threats from other agents, is a major challenge for agents or agent-teams deployed in adversarial domains. Such adversarial scenarios arise in a wide variety of increasingly important situations, for example, agents patrolling to provide perimeter security around critical infrastructure or performing routine security checks.

These domains have multiple characteristics which must be carefully addressed:

  • The agent or agent-team needs to commit to a security policy, while the adversaries may observe and exploit the policy committed to.
  • The agent/agent-team potentially faces different types of adversaries and has varying information available about them (thus limiting the agents' ability to model their adversaries).
  • The adversary may have anywhere from limited to full knowledge of the security policy chosen by the agent/agent-team.
  • The adversary may be boundedly rational, causing him to deviate from what would otherwise be rational choices.

To address security in such domains, we have developed multiple types of algorithms, handling both cases where the agent has some knowledge of the adversary and cases where it has none.

In the case where the agent has no model of its adversaries, our key idea is to randomize the agent's policies to minimize the information gained by adversaries. To that end, we developed policy randomization algorithms for both Markov Decision Processes (MDPs) and Decentralized Partially Observable MDPs (Dec-POMDPs). Since arbitrary randomization can violate quality constraints (for example, resource usage should be below a certain threshold, or key areas must be patrolled with a certain frequency), our algorithms guarantee quality constraints on the randomized policies they generate. For efficiency, we provide a novel linear program for randomized policy generation in MDPs, and then build on this program for a heuristic solution for Dec-POMDPs.
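
As a rough illustration of the kind of linear program involved: the published algorithms maximize an entropy-based randomness objective, which is nonlinear, so this hedged sketch substitutes a simpler linear surrogate (raising the smallest per-state action flow) over the standard MDP occupancy-measure constraints, while keeping the reward-threshold quality constraint. The toy MDP numbers are invented, and the sketch assumes the PuLP library is available.

    # Sketch of a linear program over MDP occupancy measures that trades
    # reward for randomness. The published algorithms use an entropy-based
    # objective; as a stand-in, this sketch raises the smallest per-state
    # action flow instead. Toy MDP, invented numbers; requires PuLP.
    import pulp

    S, A = 3, 2                  # number of states and actions
    gamma = 0.95                 # discount factor
    alpha = [1.0 / S] * S        # initial state distribution
    # P[s][a][s2] = transition probability; r[s][a] = reward
    P = [[[0.5, 0.25, 0.25], [0.1, 0.8, 0.1]],
         [[0.3, 0.4, 0.3],   [0.6, 0.2, 0.2]],
         [[0.2, 0.2, 0.6],   [0.3, 0.3, 0.4]]]
    r = [[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]]
    E_min = 15.0                 # required expected discounted reward

    prob = pulp.LpProblem("randomized_policy", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (range(S), range(A)), lowBound=0)
    t = pulp.LpVariable.dicts("t", range(S), lowBound=0)

    prob += pulp.lpSum(t[s] for s in range(S))  # linear randomness surrogate
    for s in range(S):
        # flow conservation: out-flow = initial mass + discounted in-flow
        prob += (pulp.lpSum(x[s][a] for a in range(A))
                 == alpha[s] + gamma * pulp.lpSum(
                     P[s2][a][s] * x[s2][a]
                     for s2 in range(S) for a in range(A)))
        for a in range(A):
            prob += t[s] <= x[s][a]             # t[s] <= min_a x[s][a]
    # quality constraint: keep expected reward above the threshold
    prob += pulp.lpSum(r[s][a] * x[s][a]
                       for s in range(S) for a in range(A)) >= E_min

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    for s in range(S):                          # recover the policy
        total = sum(x[s][a].value() for a in range(A))
        print("state", s,
              [round(x[s][a].value() / total, 3) for a in range(A)])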

In the other case, when the agent has a partial model of the adversaries, we model the security domain as a Bayesian Stackelberg game, where the agent's model of the adversary includes a probability distribution over possible adversary types. While the optimal policy selection for a Bayesian Stackelberg game is known to be NP-hard, our solution approach based on an efficient Mixed Integer Linear Program (MILP) provides significant speed-ups over existing approaches while obtaining the optimal solution. The resulting policy randomizes the agent's possible strategies, while taking into account the probability distribution over adversary types. This is the approach used in the ARMOR program.
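
For the curious, the DOBSS mixed-integer program can be sketched on a tiny invented instance as follows. This follows the decomposed formulation in the papers listed below, but the priors and payoffs here are illustrative, and the use of the PuLP library is an assumption of this sketch, not part of ARMOR.

    # Sketch of the DOBSS MILP on a tiny invented instance (2 leader
    # actions, 2 follower actions, 2 adversary types). Requires PuLP.
    import pulp

    n, m, L = 2, 2, 2            # leader actions, follower actions, types
    p = [0.7, 0.3]               # prior over adversary types
    # R[l][i][j]: leader payoff; C[l][i][j]: follower payoff (type l)
    R = [[[2, -1], [-1, 1]], [[1, -2], [-1, 2]]]
    C = [[[-1, 2], [1, -1]], [[-1, 3], [2, -2]]]
    M = 100.0                    # big-M for the best-response constraints

    prob = pulp.LpProblem("DOBSS", pulp.LpMaximize)
    # z[l][i][j] linearizes x_i * q[l][j]; q[l][j] picks type l's response
    z = pulp.LpVariable.dicts("z", (range(L), range(n), range(m)), 0, 1)
    q = pulp.LpVariable.dicts("q", (range(L), range(m)), cat="Binary")
    a = pulp.LpVariable.dicts("a", range(L))   # follower value, per type

    prob += pulp.lpSum(p[l] * R[l][i][j] * z[l][i][j]
                       for l in range(L)
                       for i in range(n) for j in range(m))
    for l in range(L):
        prob += pulp.lpSum(z[l][i][j]
                           for i in range(n) for j in range(m)) == 1
        prob += pulp.lpSum(q[l][j] for j in range(m)) == 1
        for i in range(n):
            prob += pulp.lpSum(z[l][i][j] for j in range(m)) <= 1
        for j in range(m):
            prob += q[l][j] <= pulp.lpSum(z[l][i][j] for i in range(n))
            # q[l][j] = 1 only for a best response of type l (big-M trick)
            slack = a[l] - pulp.lpSum(
                C[l][i][j] * pulp.lpSum(z[l][i][h] for h in range(m))
                for i in range(n))
            prob += slack >= 0
            prob += slack <= (1 - q[l][j]) * M
    # the leader mixed strategy x_i = sum_j z[l][i][j] is shared by types
    for l in range(1, L):
        for i in range(n):
            prob += (pulp.lpSum(z[l][i][j] for j in range(m))
                     == pulp.lpSum(z[0][i][j] for j in range(m)))

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    x = [sum(z[0][i][j].value() for j in range(m)) for i in range(n)]
    print("optimal leader mixed strategy:", [round(v, 3) for v in x])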

We have also developed a new algorithm, similar to the one used in the ARMOR program, that accounts for adversaries who may be boundedly rational or have limited observational capabilities. This algorithm makes certain assumptions about the adversary's observational capabilities and rationality, and finds an optimal solution to the Bayesian Stackelberg game given these assumptions. We have already begun to show that, under certain conditions, this new algorithm better predicts the actions of human adversaries.
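
A simplified way to see the robustness idea (a toy stand-in with invented payoffs, not the published algorithm): instead of assuming a perfect best response, the defender can plan against the worst target among all attacker choices that come within some epsilon of the attacker's optimum, modeling an adversary who may not optimize exactly.

    # Toy illustration of robustness to an imprecise adversary: the
    # defender guards against the worst target among all attacker choices
    # within epsilon of the attacker's optimum. Invented payoffs.

    payoffs = {  # t: (def. covered, def. uncovered,
                 #     att. covered, att. uncovered)
        0: (1.0, -5.0, -1.0, 5.0),   # high-value target
        1: (2.0,  0.0, -1.0, 2.0),   # lower-value target
    }

    def robust_value(x, eps):
        cover = {0: x, 1: 1.0 - x}
        att = {t: cover[t] * pc + (1 - cover[t]) * pu
               for t, (_, _, pc, pu) in payoffs.items()}
        best = max(att.values())
        # any target within eps of the attacker's optimum might be chosen
        plausible = [t for t in payoffs if att[t] >= best - eps]
        return min(cover[t] * payoffs[t][0]
                   + (1 - cover[t]) * payoffs[t][1]
                   for t in plausible)

    for eps in (0.0, 1.0):
        val, x = max((robust_value(x / 100.0, eps), x / 100.0)
                     for x in range(101))
        print("eps = %.1f: guarantee %.2f at coverage x = %.2f"
              % (eps, val, x))

Even in this toy, guarding against epsilon-deviations shifts coverage further toward the high-value target and lowers the payoff the defender can guarantee.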

Relevant Papers and Presentations

Title | Authors | Published At | Year | Download
Quality-bounded Solutions for Finite Bayesian Stackelberg Games: Scaling up | Manish Jain, Milind Tambe and Christopher Kiekintveld | International Conference on Autonomous Agents and Multiagent Systems (AAMAS) | 2011 | download
Refinement of Strong Stackelberg Equilibria in Security Games | Bo An, Milind Tambe, Fernando Ordóñez, Eric Shieh and Christopher Kiekintveld | Conference on Artificial Intelligence (AAAI) | 2011 | download
Stackelberg vs. Nash in Security Games: An Extended Investigation of Interchangeability, Equivalence, and Uniqueness | Dmytro Korzhyk, Zhengyu Yin, Christopher Kiekintveld, Vincent Conitzer and Milind Tambe | Journal of AI Research (JAIR) | 2011 | download
A Framework for Evaluating Deployed Security Systems: Is There a Chink in your ARMOR? | Matthew E. Taylor, Christopher Kiekintveld, Craig Western and Milind Tambe | Informatica | 2010 | download
Effective Solutions for Real-World Stackelberg Games: When Agents Must Deal with Human Uncertainties | James Pita, Manish Jain, Fernando Ordóñez, Milind Tambe, Sarit Kraus and Reuma Magori-Cohen | AAMAS | 2009 | download
Using Game Theory for Los Angeles Airport Security | James Pita, Manish Jain, Fernando Ordóñez, Christopher Portway, Milind Tambe, Craig Western, Praveen Paruchuri and Sarit Kraus | AI Magazine | 2009 | download
ARMOR Security for Los Angeles International Airport | James Pita, Manish Jain, Fernando Ordóñez, Christopher Portway, Milind Tambe, Craig Western, Praveen Paruchuri and Sarit Kraus | AAAI Intelligent Systems Demonstrations | 2008 | download
Bayesian Stackelberg Games and their Application for Security at Los Angeles International Airport | Manish Jain, James Pita, Milind Tambe, Fernando Ordóñez, Praveen Paruchuri and Sarit Kraus | SIGecom Exchanges | 2008 | download
Efficient Algorithms to solve Bayesian Stackelberg Games for Security Applications | Praveen Paruchuri, Jonathan P. Pearce, Janusz Marecki, Milind Tambe, Fernando Ordóñez and Sarit Kraus | AAAI | 2008 | download
Deployed ARMOR protection: The application of a game-theoretic model for security at the Los Angeles International Airport | James Pita, Manish Jain, Craig Western, Christopher Portway, Milind Tambe, Fernando Ordóñez, Sarit Kraus and Praveen Paruchuri | AAMAS | 2008 | download
Playing Games for Security: An Efficient Exact Algorithm for Solving Bayesian Stackelberg Games | Praveen Paruchuri, Jonathan P. Pearce, Janusz Marecki, Milind Tambe, Fernando Ordóñez and Sarit Kraus | AAMAS | 2008 | download
Robust Solutions in Stackelberg Games: Addressing Boundedly Rational Human Preference Models | Manish Jain, Fernando Ordóñez, James Pita, Christopher Portway, Milind Tambe, Craig Western, Praveen Paruchuri and Sarit Kraus | AAAI | 2008 | download
An Efficient Heuristic Approach for Security Against Multiple Adversaries | Praveen Paruchuri, Jonathan P. Pearce, Milind Tambe, Fernando Ordóñez and Sarit Kraus | AAMAS | 2007 | download
Security in Multiagent Systems by Policy Randomization | Praveen Paruchuri, Milind Tambe, Fernando Ordóñez and Sarit Kraus | AAMAS | 2006 | download
Keep the Adversary Guessing: Agent Security by Policy Randomization | Praveen Paruchuri | Thesis Defense | 2006 | download

This project is funded by the USC Homeland Security Center (CREATE).

If you have any questions about the contents of this page, please contact James Pita (jpita@usc.edu)