===== Overview =====
  
In sequential decision making, an agent's objective is to choose actions, based on observations of its environment, that will maximize the expected performance over the course of a series of such decisions. In worlds where action consequences are non-deterministic or observations are incomplete, Markov Decision Processes (MDPs) and Partially-Observable MDPs (POMDPs) serve as the basis for principled approaches to single-agent sequential decision making. Extending these models to systems of multiple agents has become the subject of an increasingly active area of research. Over the past decade, a variety of different multiagent models have emerged, both for fully-cooperative agents (e.g., MMDP, MTDP and Dec-POMDP) and self-interested agents (e.g., I-POMDP and POSG), and under an assortment of different assumptions about agents' capabilities to communicate (e.g., Dec-MDP-Com, COM-MTDP), observe (e.g., Dec-MDP) and influence other agents (e.g., TI-Dec-MDP, ND-POMDP). Often, high computational complexity has driven researchers to develop multiagent planning and learning methods that exploit structure in agents' interactions, methods geared towards efficient approximate solutions, decentralized methods that distribute computation among the agents, and new ways for agents to model and reason about their interactions with other agents.
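The single-agent setting described above can be made concrete with a short value-iteration sketch. The two-state MDP below is purely hypothetical, chosen only to illustrate how an agent computes action values that maximize expected discounted performance; it is not drawn from any MSDM benchmark.

```python
# Value iteration on a hypothetical two-state MDP (illustrative only).
# P[s][a] is a list of (probability, next_state, reward) outcomes.
GAMMA = 0.9  # discount factor

P = {
    0: {'stay': [(1.0, 0, 0.0)],
        'go':   [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {'stay': [(1.0, 1, 2.0)],
        'go':   [(1.0, 0, 0.0)]},
}

def value_iteration(P, gamma=GAMMA, tol=1e-6):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Best action value: expected reward plus discounted next-state value.
            v = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

V = value_iteration(P)
# Staying in state 1 yields reward 2 forever, so V[1] converges to
# 2 / (1 - 0.9) = 20 (up to the tolerance).
```

The multiagent models listed above generalize exactly this kind of computation to settings where several agents' observations and actions are coupled, which is where the computational-complexity challenges mentioned in the overview arise.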
  
The purpose of this workshop is to bring together current and future researchers in the field of multiagent sequential decision making (MSDM) to present and discuss promising new work, to identify recent trends in model and algorithmic development, and to establish important directions and goals for further research and collaboration. This workshop also strives to develop consensus within the community on benchmarks and evaluation methodology in order to compare and validate alternative approaches and models. Further, we hope that these active discussions and collaborations will help us to overcome the challenges of successfully applying MSDM methods to real-world problems in security, sustainability, public safety and health, and other challenging domains.
  
This year, the MSDM workshop will also include an extensive tutorial geared towards introducing the fundamental concepts of multiagent sequential decision making, acclimating researchers to the broad landscape of MSDM models and methods, and informing potential practitioners of the state of the art.
  
===== Topics =====
  * Fundamental modeling challenges, e.g.,
    * model specification: how should models be derived?
    * model granularity: how should one decide on an appropriate level of abstraction to express decision-making models?
  
  * Novel representations, algorithms and complexity results
  * Standardization of software
  * High-level principles in MSDM: past trends and future directions