===========================================================================
CALL FOR PAPERS
AAMAS 2011 Workshop
Multiagent Sequential Decision Making in Uncertain Domains

6th Workshop in the MSDM series
===========================================================================

Location & Organization

The 6th MSDM workshop is held in conjunction with AAMAS-2011 (the 10th International Joint Conference on Autonomous Agents and Multiagent Systems), in Taipei, Taiwan. It will take place on May 3, 2011, preceding the AAMAS conference.

Important Dates

January 31, 2011 (extended) Submission deadline
February 27, 2011 Notification of Acceptance
March 8, 2011 Camera-ready submission
May 3, 2011 Workshop

Attending MSDM & AAMAS 2011

For registration information, please visit the following link:
http://www.aamas2011.tw/AttendingAAMAS2011.html

Camera-Ready Submission

Revised papers should observe the same 8-page limit, use the camera-ready format found here, and be uploaded to EasyChair by March 8th. Note that categories, keywords, and general terms should be included.

Please ensure that the PDF file you submit uses paper size US Letter (8.5 x 11 inches). If you are compiling your paper using LaTeX, please see http://amath.colorado.edu/documentation/LaTeX/reference/faq/a4.html for instructions on how to output to Letter. If you are compiling your paper using PDFLaTeX, simply insert the following lines into the header of your document:
\pdfpagewidth=8.5truein
\pdfpageheight=11truein

Invited Talk

Rajiv Maheswaran (University of Southern California)
"Lessons Learned about Practicality in Probabilistic Planning"

Details: TBA

Accepted Papers

Zhengyu Yin, Milind Tambe and Kanna Rajan. Solving Continuous-Time Transition-Independent DEC-MDP with Temporal Constraints

Arnaud Canu and Abdel-Illah Mouaddib. Dynamic Local Interaction Model: framework and algorithms.

Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor and Milind Tambe. Robust Execution-time Coordination in DEC-POMDPs Under Model Uncertainty

Yifeng Zeng, Yingke Chen and Prashant Doshi. Approximating Behavioral Equivalence of Models Using Top-K Policy Paths

Matthijs Spaan, Frans Oliehoek and Christopher Amato. Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion

Matthew Brown, Emma Bowring, Shira Epstein, Mufaddal Jhaveri, Rajiv Maheswaran, Parag Mallick, Shannon Mumenthaler, Michelle Povinelli and Milind Tambe. Applying Multi-Agent Techniques to Cancer Modeling

Pradeep Varakantham, Shih-Fen Cheng and Nguyen Thi Duong. Decentralized Decision support for an agent population in dynamic and uncertain domains

Jesus Capitan, Matthijs Spaan, Luis Merino and Anibal Ollero. Decentralized Multi-Robot Cooperation with Auctioned POMDPs

Laetitia Matignon, Laurent Jeanpierre and Abdel-Illah Mouaddib. Distributed Value Functions for Multi-Robot Exploration: a Position Paper

Hala Mostafa and Victor Lesser. A Compact Mathematical Formulation For Problems With Structured Agent Interactions

Scott Alfeld, Kumera Berkele, Stephen Desalvo, Tong Pham, Daniel Russo, Lisa Yan and Matthew E. Taylor. Reducing the Team Uncertainty Penalty: Empirical and Theoretical Approaches

Inn-Tung Chen, Satinder Singh, Edmund Durfee and Stefan Witwicki. Influence-Based Multiagent Planning under Reward Uncertainty

Proceedings

The proceedings can be downloaded here.

Please note that the proceedings are not considered archival and may not always be available for download from this webpage.

Schedule

Each paper is allotted 20 minutes for the presentation plus 5 minutes for Q&A.

[9:00-9:10] Welcome and introductions

*Paper Session I: Exploiting Interaction Structure
[9:10-9:35] "Solving Continuous-Time Transition-Independent DEC-MDP with Temporal Constraints", Zhengyu Yin, Milind Tambe, and Kanna Rajan
[9:35-10:00] "A Compact Mathematical Formulation For Problems With Structured Agent Interactions", Hala Mostafa and Victor Lesser

[10:00-10:30] Group Discussion and Brainstorm

[10:30-11:00] COFFEE BREAK

*Paper Session II: Applications
[11:00-11:25] "Applying Multi-Agent Techniques to Cancer Modeling", Matthew Brown, Emma Bowring, Shira Epstein, Mufaddal Jhaveri, Rajiv Maheswaran, Parag Mallick, Shannon Mumenthaler, Michelle Povinelli, and Milind Tambe
[11:25-11:50] "Decentralized Multi-Robot Cooperation with Auctioned POMDPs", Jesus Capitan, Matthijs Spaan, Luis Merino, and Anibal Ollero
[11:50-12:15] "Decentralized Decision support for an agent population in dynamic and uncertain domains", Pradeep Varakantham, Shih-Fen Cheng, and Nguyen Thi Duong

[12:15-13:00] Invited Talk: "Lessons Learned about Practicality in Probabilistic Planning", Rajiv Maheswaran

[13:00-14:00] LUNCH

*Paper Session III: Uncertain Environments
[14:00-14:25] "Robust Execution-time Coordination in DEC-POMDPs Under Model Uncertainty", Jun-Young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, and Milind Tambe
[14:25-14:50] "Influence-Based Multiagent Planning under Reward Uncertainty", Inn-Tung Chen, Edmund Durfee, Satinder Singh, and Stefan Witwicki
[14:50-15:15] "Reducing the Team Uncertainty Penalty: Empirical and Theoretical Approaches", Scott Alfeld, Kumera Berkele, Stephen Desalvo, Tong Pham, Daniel Russo, Lisa Yan, and Matthew E. Taylor

[15:15-16:00] COFFEE BREAK

*Paper Session IV: Advancements in Scalability
[16:00-16:25] "Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion", Matthijs Spaan, Frans Oliehoek, and Christopher Amato
[16:25-16:50] "Approximating Behavioral Equivalence of Models Using Top-K Policy Paths", Yifeng Zeng, Yingke Chen, and Prashant Doshi

[16:50-18:00] Closing Discussion

Workshop Overview

In sequential decision making, an agent's objective is to choose actions, based on its observations of the world, that will maximize its performance over the course of a series of such decisions. In worlds where action consequences are nondeterministic or observations are incomplete, Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs) serve as the basis for principled approaches to single-agent sequential decision making. Extending these models to systems of multiple agents has become an increasingly active area of research over the past decade, and a variety of multiagent models have emerged (e.g., the MMDP, Dec-POMDP, MTDP, I-POMDP, and POSG). The high computational complexity of these models has also driven researchers to develop multiagent planning and learning methods that exploit structure in agents' interactions, methods geared towards efficient approximate solutions, and decentralized methods that distribute computation among the agents.
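For readers new to the area, the single-agent case described above can be made concrete with a minimal sketch: value iteration on a toy two-state, two-action MDP. The transition probabilities, rewards, and discount factor below are purely illustrative, not drawn from any paper at the workshop.

```python
# Value iteration on a toy MDP (states 0 and 1, actions 0 and 1).
# transitions[s][a] = list of (next_state, probability);
# rewards[s][a] = immediate reward for taking action a in state s.
transitions = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
rewards = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.0}}
gamma = 0.95  # discount factor

# Repeatedly apply the Bellman optimality backup until (approximate) convergence.
V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V = {
        s: max(
            rewards[s][a] + gamma * sum(p * V[s2] for s2, p in transitions[s][a])
            for a in transitions[s]
        )
        for s in transitions
    }

# Extract a greedy policy with respect to the converged value function.
policy = {
    s: max(
        transitions[s],
        key=lambda a: rewards[s][a]
        + gamma * sum(p * V[s2] for s2, p in transitions[s][a]),
    )
    for s in transitions
}
print(policy)  # maps each state to its optimal action
```

The multiagent models listed above (Dec-POMDPs, I-POMDPs, etc.) generalize exactly this backup, which is where the complexity explosion discussed at the workshop arises: joint actions, joint observations, and beliefs over other agents' behavior all enter the recursion.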

The primary purpose of this workshop is to bring together researchers in the field of MSDM to present and discuss new work, to identify recent trends in model and algorithmic development, and to establish important directions and goals for further research and collaboration. A secondary goal is to help address an important challenge: in order to make the field more accessible to newcomers, and to facilitate multidisciplinary collaboration, we seek to bring order to the large number of models and methods that have been introduced over the last decade. The workshop also aims to discuss interesting and challenging application areas (e.g., cooperative robotics, distributed sensor and/or communication networks, decision support systems) and suitable evaluation methodologies.

Topics

Multiagent sequential decision making comprises (1) problem representation, (2) planning, (3) coordination, and (4) learning during execution. The MSDM workshop addresses this full range of aspects. Topics of particular interest include:
	 
- Novel representations, algorithms and complexity results.
- Comparisons of algorithms.
- Relationships between models and their assumptions.
- Decentralized vs. centralized planning approaches.
- Online vs. offline planning.
- Communication and coordination during execution.
- Dealing with... 
		...large numbers of agents.
		...large or continuous state, observation, and action spaces.
		...long decision horizons.
- (Reinforcement) learning in partially observable multiagent systems.
- Cooperative, competitive, and self-interested agents.
- Application domains.
- Benchmarks and evaluation methodologies.
- Standardization of software.
- Past trends and future directions of MSDM research.
- High-level principles in MSDM.

Submission Procedure

Authors are encouraged to submit papers up to 8 pages in length in the AAMAS 2011 format. Submissions should be uploaded in PDF form at http://www.easychair.org/conferences/?conf=msdm2011. Each submission will be reviewed by at least two Program Committee members. The review process will be "single-blind"; authors therefore do not need to remove their names from submitted papers.

Organizing Committee

Prashant Doshi Department of Computer Science, University of Georgia
Abdel-Illah Mouaddib Lab. of GREYC-CNRS, University of Caen Basse-Normandie
Stefan Witwicki Distributed Intelligent Agents Group, University of Michigan
Jun-young Kwak Computer Science Department, University of Southern California
Frans A. Oliehoek Learning and Intelligent Systems Group, CSAIL, MIT

Program Committee

Martin Allen University of Wisconsin - La Crosse
Christopher Amato Aptima, Inc.
Aurelie Beynier University Pierre and Marie Curie (Paris 6)
Brahim Chaib-draa Laval University
Georgios Chalkiadakis University of Southampton
Alessandro Farinelli University of Verona
Piotr Gmytrasiewicz University of Illinois Chicago
Rajiv Maheswaran University of Southern California
Francisco S. Melo INESC-ID Lisboa
Hala Mostafa University of Massachusetts, Amherst
Enrique Munoz de Cote University of Southampton
Brenda Ng Lawrence Livermore National Laboratory
Simon Parsons Brooklyn College
David Pynadath Institute for Creative Technologies, University of Southern California
Xia Qu University of Georgia
Zinovi Rabinovich Bar Ilan University
Paul Scerri Carnegie Mellon University
Jiaying Shen SRI International
Matthijs Spaan Institute for Systems and Robotics - Lisbon
Karl Tuyls Maastricht University
Pradeep Varakantham Singapore Management University
Jianhui Wu Amazon
Makoto Yokoo Kyushu University
Shlomo Zilberstein University of Massachusetts, Amherst