Multiagent Decision Process Toolbox
The Multiagent Decision Process (MADP) Toolbox is a free C++ software toolbox for scientific research in decision-theoretic planning and learning in multiagent systems (MASs), developed by Frans Oliehoek and Matthijs Spaan. The term MADP refers to a collection of mathematical models for multiagent planning: multiagent Markov decision processes (MMDPs), decentralized MDPs (Dec-MDPs), decentralized partially observable MDPs (Dec-POMDPs), partially observable stochastic games (POSGs), etc.
The toolbox is designed to be rather general, potentially providing support for all these models, although so far most effort has gone into planning algorithms for discrete Dec-POMDPs. It provides classes modeling the basic data types of MADPs (e.g., actions and observations) as well as derived types for planning (observation histories, policies, etc.). It also provides base classes for planning algorithms and includes several applications built on this functionality: for instance, applications that use JESP or brute-force search to solve problems specified in .dpomdp files for a particular planning horizon. In this way, Dec-POMDPs can be solved directly from the command line. Furthermore, several utility applications are provided, for instance one that empirically determines the control quality of a joint policy by simulation.
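As a rough illustration of the C++ API, the following minimal sketch is modeled on the DecTiger example from the toolbox's documentation (class and method names may differ between versions, so treat them as assumptions to verify against your copy). It solves the built-in DecTiger problem with exhaustive JESP for horizon 3 and prints the expected reward of the resulting joint policy:

    #include <iostream>
    #include "ProblemDecTiger.h"
    #include "JESPExhaustivePlanner.h"

    int main()
    {
        // Instantiate the built-in DecTiger benchmark problem.
        ProblemDecTiger dectiger;

        // Set up an exhaustive JESP planner for horizon 3.
        JESPExhaustivePlanner jesp(3, &dectiger);
        jesp.Plan();

        // Report the expected value of the joint policy found.
        std::cout << jesp.GetExpectedReward() << std::endl;
        return 0;
    }

Compiled against the toolbox's libraries, this mirrors what the command-line applications do internally: load a model, run a planner for a given horizon, and report the value of the resulting joint policy.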
Code and documentation are available from the toolbox's homepage.
If you only care about solving a particular problem, you can also upload your problem file to THINC Lab's online POMDP portal.
Nonlinear programming code
At the UMass Dec-POMDP page you can download code for solving POMDPs and Dec-POMDPs via nonlinear programming, as well as some problem specifications.
ND-POMDPs
USC's Distributed POMDP page provides code and datasets for networked distributed POMDPs (ND-POMDPs) and other models.