Adversarial reasoning is essential for modeling real-world problems in the presence of adversaries, competition, strategic interaction, or uncertainty. Research and applications related to adversarial reasoning span a broad variety of disciplines, including computer science, electrical engineering, economics, and biology. The focus of this workshop is to bring together the broad community working on adversarial reasoning in multi-agent systems, motivated by any of these domains.
One of the most successful instantiations of adversarial reasoning in the past decade has been in security domains. This has led to the development of security games, which have been used to model many real-world security problems. Beyond the research itself, several software assistants rooted in these results have been successfully deployed in the real world, demonstrating the high practical impact of such adversarial reasoning. Examples include, but are not limited to, patrolling assistants for seaports and airports, the scheduling of air marshals, ticket auditing in transit systems, fishery protection, and the prevention of illegal poaching. Recently, such models have shown increasing applicability to cyber and cyber-physical system (CPS) security, including adversarial machine learning methods (e.g., in intrusion detection systems), resilient sensor placement and monitoring strategies, and privacy-preserving data publishing and auditing systems. This shows that applications of adversarial reasoning are not restricted to physical security; in fact, we believe that adversarial reasoning is essential in almost every field exhibiting adversaries, competition, or uncertainty.
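To make the security-game idea concrete, here is a minimal sketch, assuming the simplest setting: a zero-sum game in which one divisible defender resource is split as coverage probability over targets and the attacker best-responds. The target payoffs, function names, and the binary-search approach are illustrative assumptions, not taken from any deployed system.

```python
# Illustrative zero-sum security game: the defender spreads one unit of
# coverage probability over targets; the attacker picks the target with
# the highest expected payoff. Payoff numbers here are made up.

def min_coverage_for_value(u_uncov, u_cov, v):
    """Coverage needed at each target so the attacker's expected payoff
    there is at most v (0 if it is already at most v uncovered)."""
    need = []
    for uu, uc in zip(u_uncov, u_cov):
        if uu <= v:
            need.append(0.0)
        else:
            # attacker payoff = uu*(1-c) + uc*c <= v  =>  c >= (uu-v)/(uu-uc)
            need.append((uu - v) / (uu - uc))
    return need

def solve_zero_sum_security_game(u_uncov, u_cov, budget=1.0, iters=60):
    """Binary-search the attacker's minimax value v: lower v requires more
    total coverage, so find the smallest v whose coverage fits the budget."""
    lo, hi = min(u_cov), max(u_uncov)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(min_coverage_for_value(u_uncov, u_cov, mid)) <= budget:
            hi = mid  # feasible: try to push the attacker's value lower
        else:
            lo = mid
    return hi, min_coverage_for_value(u_uncov, u_cov, hi)

# Three targets: attacker payoff if uncovered vs. if covered (zero-sum).
u_uncov = [5.0, 3.0, 1.0]
u_cov = [-1.0, -1.0, -1.0]
value, coverage = solve_zero_sum_security_game(u_uncov, u_cov)
```

With these payoffs the defender equalizes the attacker's payoff on the two most attractive targets (coverage 0.6 and 0.4) and leaves the third uncovered, holding the attacker to an expected payoff of 1.4. Real deployed systems solve far richer general-sum Stackelberg models with scheduling constraints, typically via mathematical programming rather than this simple waterfilling-style search.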
While there has been significant progress, many major challenges remain in designing effective approaches and in developing real-world applications beyond physical security. These include building predictive behavioral models of the players, dealing with information leakage and uncertainty in security, scaling up to large games, applying machine learning and multi-agent learning in adversarial settings, and broadening the study to crime prediction, privacy control, voting manipulation, and related problems. Addressing these challenges requires collaboration among different communities, including artificial intelligence, game theory, operations research, social science, and psychology. This workshop is structured to encourage a lively exchange of ideas between members of these communities.