The 6th MSDM workshop is held in conjunction with AAMAS-2011 (the 10th International Joint Conference on Autonomous Agents and Multiagent Systems), in Taipei, Taiwan. It will take place on May 3, 2011, preceding the AAMAS conference.
For registration, please visit the following link:
Submissions of revised papers should follow the same 8-page limit, but use the camera-ready format found here, and be uploaded to EasyChair by March 8th. Note that categories, keywords, and general terms should be included.
"Lessons Learned about Practicality in Probabilistic Planning"
Arnaud Canu and Mouaddib Abdel-Illah. Dynamic Local Interaction Model: framework and algorithms
Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor and Milind Tambe. Robust Execution-time Coordination in DEC-POMDPs Under Model Uncertainty
Yifeng Zeng, Yingke Chen and Prashant Doshi. Approximating Behavioral Equivalence of Models Using Top-K Policy Paths
Matthijs Spaan, Frans Oliehoek and Christopher Amato. Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion
Matthew Brown, Emma Bowring, Shira Epstein, Mufaddal Jhaveri, Rajiv Maheswaran, Parag Mallick, Michelle Povinelli and Milind Tambe. Applying Multi-Agent Techniques to Cancer Modeling
Pradeep Varakantham, Shih-Fen Cheng and Nguyen Thi Duong. Decentralized Decision support for an agent population in dynamic and uncertain domains
Jesus Capitan, Matthijs Spaan, Luis Merino and Anibal Ollero. Decentralized Multi-Robot Cooperation with Auctioned POMDPs
Laetitia Matignon, Laurent Jeanpierre and Abdel-Illah Mouaddib. Distributed Value Functions for Multi-Robot Exploration: a Position Paper
Hala Mostafa and Victor Lesser. A Compact Mathematical Formulation For Problems With Structured Agent Interactions
Scott Alfeld, Kumera Berkele, Stephen Desalvo, Tong Pham, Daniel Russo, Lisa Yan and Matthew E. Taylor. Reducing the Team Uncertainty Penalty: Empirical and Theoretical Approaches
Inn-Tung Chen, Satinder Singh, Edmund Durfee and Stefan Witwicki. Influence-Based Multiagent Planning under Reward Uncertainty
Please note that the proceedings are not considered archival and may not always be available for download from this page.
[9:00-9:10] Welcome and introductions
*Paper Session I: Exploiting Interaction Structure
[9:10-9:35] "Solving Continuous-Time Transition-Independent DEC-MDP with Temporal Constraints", Zhengyu Yin, Milind Tambe, and Kanna Rajan
[9:35-10:00] "A Compact Mathematical Formulation For Problems With Structured Agent Interactions", Hala Mostafa and Victor Lesser
[10:00-10:30] Group Discussion and Brainstorm
[10:30-11:00] COFFEE BREAK
*Paper Session II: Applications
[11:00-11:25] "Applying Multi-Agent Techniques to Cancer Modeling", Matthew Brown, Emma Bowring, Shira Epstein, Mufaddal Jhaveri, Rajiv Maheswaran, Parag Mallick, Shannon Mumenthaler, Michelle Povinelli, and Milind Tambe
[11:25-11:50] "Decentralized Multi-Robot Cooperation with Auctioned POMDPs", Jesus Capitan, Matthijs Spaan, Luis Merino, and Anibal Ollero
[11:50-12:15] "Decentralized Decision support for an agent population in dynamic and uncertain domains", Pradeep Varakantham, Shih-Fen Cheng, and Nguyen Thi Duong
[12:15-13:00] Invited Talk: "Lessons Learned about Practicality in Probabilistic Planning", Rajiv Maheswaran
*Paper Session III: Uncertain Environments
[14:00-14:25] "Robust Execution-time Coordination in DEC-POMDPs Under Model Uncertainty", Jun-Young Kwak, Rong Yang, Zhengyu Yin, Matthew Taylor, and Milind Tambe
[14:25-14:50] "Influence-Based Multiagent Planning under Reward Uncertainty", Inn-Tung Chen, Edmund Durfee, Satinder Singh, and Stefan Witwicki
[14:50-15:15] "Reducing the Team Uncertainty Penalty: Empirical and Theoretical Approaches", Scott Alfeld, Kumera Berkele, Stephen Desalvo, Tong Pham, Daniel Russo, Lisa Yan, and Matthew E. Taylor
[15:15-16:00] COFFEE BREAK
*Paper Session IV: Advancements in Scalability
[16:00-16:25] "Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion", Matthijs Spaan, Frans Oliehoek, and Christopher Amato
[16:25-16:50] "Approximating Behavioral Equivalence of Models Using Top-K Policy Paths", Yifeng Zeng, Yingke Chen, and Prashant Doshi
[16:50-18:00] Closing Discussion
In sequential decision making, an agent's objective is to choose actions, based on its observations of the world, that maximize its performance over the course of a series of such decisions. In worlds where action consequences are nondeterministic or observations are incomplete, Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs) serve as the basis for principled approaches to single-agent sequential decision making. Extending these models to systems of multiple agents has become an increasingly active area of research over the past decade, and a variety of multiagent models have emerged (e.g., the MMDP, Dec-POMDP, MTDP, I-POMDP, and POSG). The high computational complexity of these models has also driven researchers to develop multiagent planning and learning methods that exploit structure in agents' interactions, methods geared towards efficient approximate solutions, and decentralized methods that distribute computation among the agents.
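As a concrete illustration of the single-agent setting that these multiagent models generalize, the following is a minimal value-iteration sketch for a toy, fully observable MDP. The two-state model and its rewards are hypothetical, chosen only to show how an agent's long-term value of each state is computed from transition probabilities and rewards; real Dec-POMDP solvers must additionally handle partial observability and joint actions.

```python
# Toy finite MDP (hypothetical example, for illustration only).
# States: 0, 1; actions: "stay", "go".
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.9), (0, 0.1)]},
    1: {"stay": [(1, 1.0)], "go": [(0, 0.9), (1, 0.1)]},
}
R = {
    0: {"stay": 0.0, "go": 1.0},
    1: {"stay": 2.0, "go": 0.0},
}
GAMMA = 0.95  # discount factor


def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until values converge."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Best expected return over actions from state s.
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V


V = value_iteration(P, R, GAMMA)
# State 1 is more valuable: staying there yields reward 2 forever,
# so V[1] converges to 2 / (1 - 0.95) = 40.
```

Multiagent extensions such as the Dec-POMDP replace the single action with a joint action and condition each agent's choices on local observation histories, which is what drives the complexity results discussed at the workshop.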
The primary purpose of this workshop is to bring together researchers in the field of MSDM to present and discuss new work, to identify recent trends in model and algorithmic development, and to establish important directions and goals for further research and collaboration. A secondary goal is to address an important challenge: in order to make the field more accessible to newcomers and to facilitate multidisciplinary collaboration, we seek to bring order to the large number of models and methods that have been introduced over the last decade. The workshop also aims to discuss interesting and challenging application areas (e.g., cooperative robotics, distributed sensor and/or communication networks, decision support systems) and suitable evaluation methodologies.
- Novel representations, algorithms, and complexity results
- Comparisons of algorithms
- Relationships between models and their assumptions
- Decentralized vs. centralized planning approaches
- Online vs. offline planning
- Communication and coordination during execution
- Dealing with large numbers of agents; large numbers of (or continuous) states, observations, and actions; and long decision horizons
- (Reinforcement) learning in partially observable multiagent systems
- Cooperative, competitive, and self-interested agents
- Application domains
- Benchmarks and evaluation methodologies
- Standardization of software
- Past trends and future directions of MSDM research
- High-level principles in MSDM
Authors are encouraged to submit papers of up to 8 pages in the AAMAS-2011 format. Submissions should be uploaded in PDF form at http://www.easychair.org/conferences/?conf=msdm2011. Each submission will be reviewed by at least two Program Committee members. The review process is single-blind; authors therefore do not have to remove their names when submitting papers.