Multiagent decision process toolbox
The Multiagent decision process (MADP) Toolbox is a free C++ software toolbox for
scientific research in decision-theoretic planning and learning in
multiagent systems (MASs). It is jointly developed by Frans
Oliehoek and me. We use the term MADP to refer to a collection of
mathematical models for multiagent planning: multiagent Markov
decision processes (MMDPs), decentralized MDPs (Dec-MDPs),
decentralized partially observable MDPs (Dec-POMDPs), partially
observable stochastic games (POSGs), etc.
The toolbox is designed to be rather general, potentially providing support for
all these models, although so far most effort has been put in planning
algorithms for discrete Dec-POMDPs. It provides classes modeling the basic data
types of MADPs (e.g., actions, observations) as well as derived types for
planning (observation histories, policies, etc.). It also provides base classes
for planning algorithms and includes several applications using the provided
functionality, for instance applications that use JESP or brute-force search
to solve .dpomdp files for a particular planning horizon. In this way,
Dec-POMDPs can be solved directly from the command line. Furthermore, several
utility applications are provided, for instance one which empirically
determines a joint policy's control quality by simulation.
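The .dpomdp problem files mentioned above extend Tony Cassandra's .pomdp format to multiple agents. The fragment below is a sketch from memory, loosely modeled on the Dec-Tiger example problem; the exact syntax and the full set of keywords are documented with the toolbox's example problems.

```
# Sketch of a .dpomdp file (illustrative, check the toolbox examples
# for exact syntax). One line of actions/observations per agent.
agents: 2
discount: 1.0
values: reward
states: tiger-left tiger-right
start:
uniform
actions:
listen open-left open-right
listen open-left open-right
observations:
hear-left hear-right
hear-left hear-right
T: listen listen :
identity
O: listen listen : tiger-left : hear-left hear-left : 0.7225
R: listen listen : * : * : * : -2
```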
Code and documentation are available from the toolbox's homepage.
Perseus approximate POMDP solving software
Matlab parser for Tony's POMDP file format
Perseus approximate POMDP algorithm implementation
POMDPs with continuous spaces
Josep Porta cleaned up and reimplemented the code we used for the JMLR 2006
paper; it is available from his webpage.