Algorithmic Experimental Game Theory
Dealing with Human Uncertainty in Critical Adversarial Domains

 

Simulated Security Games

We design games that simulate security scenarios and recruit human subjects to play them in order to learn how people behave in security games. Experiments are conducted both in the lab (on campus with college students) and on the crowdsourcing platform Amazon Mechanical Turk. Below are four different types of security game simulations, each focused on simulating a particular type of security game.

The Wildlife Poaching Game (Non-collusive Adversaries): This game was developed to conduct longitudinal experiments with human subjects for "Green Security Games". In our game, human subjects play the role of poachers looking to place a snare to hunt a hippopotamus. The game interface is shown below. In the game, the portion of the park shown on the map is divided into a 5×5 grid, i.e., 25 distinct cells. Overlaid on the Google Maps view of the park is a heat-map representing the rangers' mixed strategy x: a cell i with higher coverage probability xi is shown in red, while a cell with lower coverage probability is shown in green. As the subjects play the game, they are given detailed information corresponding to each target i. However, they do not know the pure strategy that the rangers will play, which is drawn randomly from the mixed strategy x shown on the game interface. Thus, we model the real-world situation in which poachers know the past pattern of ranger deployment but not the exact location of ranger patrols when they set out to lay snares.
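
The relationship between the displayed mixed strategy and the sampled pure strategy can be illustrated with a short sketch. Assuming, for simplicity, a single ranger patrol per round so that the coverage probabilities over the 25 cells sum to 1 (real deployments may field multiple patrols), and with made-up rewards and penalties, a round of the game looks roughly like this:

    import numpy as np

    rng = np.random.default_rng(seed=7)

    # Mixed strategy x over the 5x5 grid: coverage probability per cell.
    # In the game this is the heat-map shown to the subjects; here we just
    # draw an arbitrary example strategy (single-patrol assumption: sums to 1).
    x = rng.dirichlet(np.ones(25))

    # The subjects see x, but the rangers' pure strategy for the round is a
    # fresh draw from it -- mirroring poachers who know past patrol patterns
    # but not the location of today's patrol.
    patrolled_cell = rng.choice(25, p=x)

    # A risk-neutral poacher's expected utility for attacking cell i, given
    # a reward R[i] on success and a (negative) penalty P[i] if caught:
    R = rng.uniform(1, 9, size=25)    # hypothetical animal-density rewards
    P = -rng.uniform(1, 9, size=25)   # hypothetical capture penalties
    expected_utility = x * P + (1 - x) * R
    print("cell patrolled this round:", patrolled_cell)
    print("best cell for a risk-neutral poacher:", int(expected_utility.argmax()))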


The Wildlife Poaching Game 2 (Collusive Adversaries): This game is similar to the Wildlife Poaching Game 1, but there are two poachers who may decide to collude with each other. Again, human subjects are asked to play the role of a poacher in a national park in Africa. The park is divided into two sections (left and right), and each human subject can attack only in one section; however, they can explore the whole park to obtain information about the other player's situation. To ensure repeatability of the experiments, the other side is played by a computer, not a real player. Since our goal is to study human adversaries, we do not reveal the identity of the other player to the human subjects. This creates a more realistic environment, since the subjects believe that they are playing against another human. Each section of the park is divided into a 3×3 grid, giving each player nine potential targets to attack.

In addition to all of the features of the Wildlife Poaching Game 1, we provide a table that summarizes all possible payoffs for both colluding and not colluding. The human subjects may decide to attack "individually and independently" or "in collusion" with the other player. In both situations they attack different sections separately, but if both agree to attack in collusion, they share all of their payoffs equally. In each game, the human player is given a set amount of time to explore the park and decide: (i) whether or not to collude with the other player and (ii) in which region of the park to place their snare. Data collected from this game are used to generate effective security resource allocations against collusive adversaries.
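
With equal sharing, the collusion decision reduces to a simple comparison: a rational player should agree to collude only if half of the combined payoff exceeds what it expects to earn alone. A minimal sketch with illustrative numbers (not the game's actual payoff table):

    # Illustrative solo expected payoffs; the game's actual payoff table differs.
    u_left = 6.0    # left-section player attacking individually
    u_right = 2.0   # right-section player attacking individually

    # Under collusion both still attack their own sections, but all payoffs
    # are pooled and split equally.
    shared = (u_left + u_right) / 2.0

    for name, solo in [("left", u_left), ("right", u_right)]:
        print(f"{name} player: solo={solo}, shared={shared}, "
              f"collusion rational: {shared > solo}")

With these numbers the right player gains from colluding while the left player loses, which is exactly the mechanism behind the resource-imbalance result discussed under "Sample results" below.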

The Opportunistic Security Game is a game developed to simulate the real-world scenario of opportunistic crimes (see pdf) and to conduct experiments with human subjects to evaluate how proposed models perform against them. The interface is shown below. This work is ongoing, so the game and experimental results are not described here in more detail.


The Network Security Game is a game developed to simulate the network security problem, in which the adversary must select a path through a network to reach one of the nodes designated as targets. At the same time, the defender sets up random checkpoints on the edges of the network to catch the attacker and block them from reaching a target. The latest version of the interface is shown below. This work is ongoing, so the game and experimental results are not described here in more detail.
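
Assuming the defender places checkpoints independently on edges with known marginal probabilities, the attacker is caught if any edge on its chosen path is checked, so it prefers the path minimizing that probability. A minimal sketch on a tiny illustrative graph:

    # Edge -> marginal probability of a checkpoint on it (illustrative numbers,
    # assuming checkpoints are placed independently across edges).
    coverage = {("s", "a"): 0.5, ("a", "t"): 0.1,
                ("s", "b"): 0.2, ("b", "t"): 0.3}

    # Two source-to-target paths through the example network.
    paths = [[("s", "a"), ("a", "t")],
             [("s", "b"), ("b", "t")]]

    def capture_probability(path):
        """Probability that at least one edge on the path is checked."""
        p_safe = 1.0
        for edge in path:
            p_safe *= 1.0 - coverage[edge]
        return 1.0 - p_safe

    for path in paths:
        print(path, "-> capture probability:", round(capture_probability(path), 3))

The attacker best-responds by taking the path with the lowest capture probability; the defender's problem is to choose edge coverage that raises this minimum subject to its checkpoint budget.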


Previous work in our group on Network Security Games was conducted with the Gulliver's Lost Treasure game, which is available online: [Gulliver's Lost Treasure]. The interface is shown below:

We have released The Guards and The Treasures game as a tool for human-subject experiments in the setting of Stackelberg games. The package includes the source code and instructions for using the tool, and can be downloaded here.

The Guards and The Treasures is a game we developed to simulate "Infrastructure Security Games", e.g., the security scenario at the LAX airport. In the game, the subjects are asked to select one gate to open (attack). Behind each gate there are treasures of varying value; at the same time, the guards try to protect the treasures. The game is available online: [The Guards and The Treasures]. The interface of the game is shown below.
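
In a Stackelberg security game like this one, the defender (leader) commits to a coverage strategy first and the attacker (follower) best-responds after observing it. A minimal sketch of the attacker's side with made-up gate values:

    # Illustrative gate payoffs, not the game's actual values. For each gate:
    # the attacker's reward if it is unguarded, the penalty if caught, and the
    # guard coverage probability the defender has committed to.
    gates = {
        "gate1": {"reward": 8, "penalty": -5, "coverage": 0.6},
        "gate2": {"reward": 4, "penalty": -2, "coverage": 0.3},
        "gate3": {"reward": 6, "penalty": -4, "coverage": 0.1},
    }

    def attacker_eu(g):
        """Expected utility of attacking a gate under the observed coverage."""
        c = g["coverage"]
        return c * g["penalty"] + (1 - c) * g["reward"]

    # The attacker observes the committed coverage and attacks the gate
    # with the highest expected utility.
    print({name: round(attacker_eu(g), 2) for name, g in gates.items()})
    print("attacker best response:", max(gates, key=lambda n: attacker_eu(gates[n])))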


Sample results

  • Non-collusive adversary models: From data collected about human behavior using the Wildlife Poaching Game, we observed that:

    • Human perception of probabilities is S-shaped in nature (see Fig. 1). This contradicts what is commonly reported in the behavioral game theory literature, such as in Kahneman and Tversky's Nobel Prize-winning work on prospect theory, where the weighting curve is inverse-S-shaped (a parametric sketch of both shapes appears after the figures); and

    • Behavioral models such as SHARP, which capture the adaptive nature of human adversaries, perform better than other models (see Fig. 2).

    [Fig. 1: S-shaped probability weighting curve. Fig. 2: performance of SHARP against other models.]
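
    The S-shape reported in Fig. 1 can be captured with a standard two-parameter probability weighting function (Gonzalez and Wu, 1999); the parameter values below are illustrative, not values fitted to our data. With gamma > 1 the curve is S-shaped (low probabilities underweighted, high probabilities overweighted), matching our observation, while gamma < 1 gives the inverse-S shape of classical prospect theory:

        import numpy as np

        def weight(p, gamma, delta=1.0):
            """Gonzalez-Wu weighting: w(p) = d*p^g / (d*p^g + (1-p)^g)."""
            return delta * p**gamma / (delta * p**gamma + (1.0 - p)**gamma)

        p = np.linspace(0.01, 0.99, 7)
        print("p        :", np.round(p, 2))
        # gamma > 1: S-shaped, as observed in our data
        print("S-shaped :", np.round(weight(p, gamma=1.8), 2))
        # gamma < 1: inverse-S, as in classical prospect theory
        print("inverse-S:", np.round(weight(p, gamma=0.6), 2))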

  • Collusive adversary models: From the human subject experiments, we observed:

    • Resource imbalance's effect on collusion does not follow rational models: Rational models suggest that when the reward distribution is identical for both adversaries, allocating a larger portion of the security resources to one side and leaving the other side with fewer resources should lead self-interested adversaries to avoid collusion. Assuming rational adversaries, we generated the optimal division rule for security allocation and conducted experiments based on it; however, human subject experiments on Amazon Mechanical Turk showed that some adversaries decide to collude even when it is not rational for them to do so.

    • Bounded rational models outperformed rational ones: Bounded rational models take a stochastic approach to describing human adversaries' decision-making (a minimal quantal response sketch appears after this list). We showed that these models result in lower defender loss.

    • Defender coverage perception is not linear: Human adversaries' probability weighting follows S-shaped curves independent of their decision about collusion, i.e., low probabilities are underweighted and high probabilities are overweighted.
    • Human adversaries who are collectivists are more likely to collude than individualists: The personal attitudes and attributes of participants can also influence their interactions in strategic settings. A key characteristic is the well-established individualism-collectivism paradigm, which describes cultural differences in how likely people are to prioritize themselves versus their in-group. We measured these factors by giving the participants a set of survey questions after they finished all of the games.
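
    The stochastic choice referred to in the bounded-rationality bullet above is canonically modeled by quantal response, in which attack probabilities are a softmax over the targets' expected utilities. A minimal sketch (the rationality parameter lambda and the payoffs are illustrative):

        import numpy as np

        def quantal_response(utilities, lam):
            """Attack probabilities proportional to exp(lam * EU). lam = 0 is
            uniformly random play; large lam approaches a rational best response."""
            z = lam * np.asarray(utilities, dtype=float)
            z -= z.max()                  # subtract max for numerical stability
            expz = np.exp(z)
            return expz / expz.sum()

        attacker_eu = [2.0, 1.5, -0.5, 0.8]   # illustrative per-target utilities
        for lam in (0.0, 1.0, 5.0):
            print(f"lambda={lam}:", np.round(quantal_response(attacker_eu, lam), 3))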

  • If you have any questions about the contents of this page, please contact Debarun Kar (dkar@usc.edu).