Problem Subclasses, Models, and Methods
The NEXP-complete complexity of solving the general Dec-POMDP has led researchers to carve out problem subclasses in which traction may be gained. For better or worse, it has become common practice within this research community to define and publish a new model for each such subclass, leading to a veritable alphabet soup of acronyms. The ambition behind this section of the wiki is therefore to bring some order to the large body of work that has emerged.
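For reference, the Dec-POMDP is commonly defined as the tuple ⟨I, S, {A_i}, T, R, {Ω_i}, O⟩: a set of agents, a state space, per-agent action sets, a joint transition function, a single shared reward, per-agent observation sets, and a joint observation function. A minimal sketch of this model as a plain Python container follows; the class and field names are illustrative, not taken from any particular library:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A minimal container for the Dec-POMDP tuple <I, S, {A_i}, T, R, {Omega_i}, O>.
@dataclass
class DecPOMDP:
    agents: List[str]                   # I: the set of agents
    states: List[str]                   # S: the finite state space
    actions: Dict[str, List[str]]       # A_i: each agent's local action set
    transition: Callable[..., float]    # T(s, joint_a, s'): transition probability
    reward: Callable[..., float]        # R(s, joint_a): the single shared reward
    observations: Dict[str, List[str]]  # Omega_i: each agent's local observations
    observe: Callable[..., float]       # O(s', joint_a, joint_o): observation probability

# A toy two-agent, two-state instance, just to exercise the container:
# the system moves to s1 exactly when both agents choose "go".
toy = DecPOMDP(
    agents=["a1", "a2"],
    states=["s0", "s1"],
    actions={"a1": ["stay", "go"], "a2": ["stay", "go"]},
    transition=lambda s, ja, s2: 1.0 if (s2 == "s1") == (ja == ("go", "go")) else 0.0,
    reward=lambda s, ja: 1.0 if s == "s1" else 0.0,
    observations={"a1": ["noisy0", "noisy1"], "a2": ["noisy0", "noisy1"]},
    observe=lambda s2, ja, jo: 0.25,  # uninformative observations, for brevity
)
```

Each subclass described below can be read as a restriction on one or more of these components (e.g., on the observation functions, or on how T and R factor across agents).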
For each of the MSDM models/frameworks that follow, help us to maintain a succinct description, referencing the motivating problem domains, key assumptions, theoretical properties, and solution methods to date. This will make all of the work more accessible to researchers entering the field, as well as to practitioners choosing the most suitable model and method for their application.
— Stefan Witwicki 2014/05/06 07:00
List of currently maintained subclasses
Here are links to class and/or model descriptions. Each delineates a subclass of the general Decentralized POMDP (Dec-POMDP), the canonical model for cooperative agent decision making. For the moment, let us restrict attention to decision-making frameworks for cooperative agents.
Above, you find partially-completed descriptions, and below stubs that are to be filled in. If you are an author of one of the frameworks below, or else know the framework intimately, I implore you to contribute a description, taking care not to plagiarize. Keeping with the motivation of this wiki (stated above), please strive for consistent section headings and description style among the members of this list.
This list is nowhere near exhaustive. You can help to expand it by adding descriptions of models/frameworks, by filling in missing content, or by updating content to reference recent results.
Documented Characterizations of the Model / Problem Space
Through the evolution of a tutorial at AAMAS, a couple of nice graphics have emerged that illustrate the differences in complexity among a few of the problem classes. The first places the Dec-POMDP, Dec-MDP, and MMDP in the context of the overarching Partially Observable Stochastic Game (POSG), as well as the single-agent MDP and POMDP models.
The second explores subclasses of the Dec-MDP model, some of which have significantly lower complexity.
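Though the graphics themselves are not reproduced here, the relationships they depict can be summarized from widely known results in the complexity analyses cited below (e.g., Bernstein et al. 2002; Goldman and Zilberstein 2004). A small sketch of the worst-case complexities of finite-horizon planning and the subclass containments:

```python
# Worst-case complexity of finite-horizon planning for each model.
COMPLEXITY = {
    "MDP": "P-complete",
    "MMDP": "P-complete",        # a centralized multiagent MDP is still an MDP
    "POMDP": "PSPACE-complete",
    "Dec-MDP": "NEXP-complete",  # with two or more agents
    "Dec-POMDP": "NEXP-complete",
    "Dec-MDP, independent transitions and observations": "NP-complete",
}

# Strict subclass containments: each key is a special case of its value.
SUBCLASS_OF = {
    "MDP": "POMDP",          # full observability is a special case of partial
    "MMDP": "Dec-MDP",       # individual full observability implies joint
    "POMDP": "Dec-POMDP",    # the single-agent case
    "Dec-MDP": "Dec-POMDP",  # joint full observability is a special case
    "Dec-POMDP": "POSG",     # identical rewards are a special case
}

def ancestors(model: str) -> list:
    """Walk the containment chain from a model up to the most general class."""
    chain = []
    while model in SUBCLASS_OF:
        model = SUBCLASS_OF[model]
        chain.append(model)
    return chain
```

Note the jump the first graphic highlights: decentralizing control (MMDP to Dec-MDP) raises the planning problem from P-complete to NEXP-complete, while restricting a Dec-MDP to independent transitions and observations brings it back down to NP-complete.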
These illustrations were inspired by publications such as the following, which provide detailed complexity analyses:
The Complexity of Decentralized Control of Markov Decision Processes. Daniel S. Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. Mathematics of Operations Research, 27(4):819-840, 2002.
Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis. Claudia V. Goldman and Shlomo Zilberstein. Journal of Artificial Intelligence Research, 22:143-174, 2004.
Agent Interaction in Distributed MDPs and its Implications on Complexity. Jiaying Shen, Raphen Becker, and Victor Lesser. Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 529-536, 2006.
Complexity of Decentralized Control: Special Cases. Martin Allen and Shlomo Zilberstein. Proceedings of the Twenty-Third Neural Information Processing Systems Conference (NIPS), pp. 19-27, Vancouver, British Columbia, Canada, 2009.
More recent studies attempt to characterize various classes of problems based on the problem structure they exploit:
Towards a Unifying Characterization for Quantifying Weak Coupling in Dec-POMDPs. Stefan J. Witwicki and Edmund H. Durfee. In Proceedings of the Tenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS-2011), pages 29-36. Taipei, Taiwan. May 2011.
Formal Models and Algorithms for Decentralized Decision Making under Uncertainty. Sven Seuken and Shlomo Zilberstein. Autonomous Agents and Multi-Agent Systems, 17(2):190-250, 2008.