Research Overview

Keywords: planning under uncertainty, sequential decision making, autonomous robots, smart grids, cooperative multiagent / multi-robot systems, (decentralized) partially observable Markov decision processes (POMDPs / Dec-POMDPs), reinforcement learning, machine learning and artificial intelligence in general.

During my PhD I became interested in formal ways of modeling robot decision processes, in particular how to plan under uncertainty in sensing and acting. The major contributions of my PhD work concern approximate POMDP planning machinery, such as Perseus, a fast approximate POMDP solver that is easy to implement (JAIR 2005). We extended Perseus to continuous action spaces (ICRA 2005) and generalized approximate POMDP planning to fully continuous domains (RSS 2005, JMLR 2006). At that time, we also considered planning for teams of communicating agents that optimize their use of communication primitives (AAMAS 2006).
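
To give a flavor of Perseus, here is a minimal sketch of one randomized point-based value iteration stage in Python. It assumes a flat POMDP given as numpy arrays (T[a,s,s'] transitions, Z[a,s',o] observation probabilities, R[a,s] rewards) and a fixed set of belief points B; all names and the toy problem are illustrative assumptions, not the original JAIR 2005 implementation.

    import numpy as np

    def backup(b, V, T, Z, R, gamma):
        """Back up belief b against the current value function V
        (a list of alpha-vectors); returns the resulting alpha-vector."""
        best_alpha, best_val = None, -np.inf
        n_actions, _, n_obs = Z.shape
        for a in range(n_actions):
            g_a = R[a].astype(float)
            for o in range(n_obs):
                # g[alpha](s) = sum_s' T[a,s,s'] * Z[a,s',o] * alpha(s')
                candidates = [T[a] @ (Z[a, :, o] * alpha) for alpha in V]
                # Keep the candidate that is best at this particular belief.
                g_a = g_a + gamma * max(candidates, key=lambda g: b @ g)
            if b @ g_a > best_val:
                best_alpha, best_val = g_a, b @ g_a
        return best_alpha

    def perseus_stage(B, V, T, Z, R, gamma, rng):
        """One Perseus stage: improve the value of every belief in B
        while backing up only a randomly chosen subset of them."""
        old_vals = np.array([max(b @ a for a in V) for b in B])
        V_new, todo = [], list(range(len(B)))
        while todo:
            i = todo[rng.integers(len(todo))]
            alpha = backup(B[i], V, T, Z, R, gamma)
            if B[i] @ alpha < old_vals[i]:
                # No improvement at this point: keep its best old vector.
                alpha = max(V, key=lambda a: B[i] @ a)
            V_new.append(alpha)
            # Drop all beliefs whose value V_new has already improved.
            todo = [j for j in todo
                    if max(B[j] @ a for a in V_new) < old_vals[j]]
        return V_new

    # Toy 2-state / 2-action / 2-observation problem to exercise the sketch.
    rng = np.random.default_rng(0)
    T = np.tile(np.eye(2), (2, 1, 1))                     # actions keep the state
    Z = np.tile([[0.85, 0.15], [0.15, 0.85]], (2, 1, 1))  # noisy state observations
    R = np.eye(2)                                         # action a pays off in state a
    B = [np.array([p, 1.0 - p]) for p in np.linspace(0, 1, 11)]
    V = [np.zeros(2)]                                     # lower-bound initialization
    for _ in range(30):
        V = perseus_stage(B, V, T, Z, R, 0.95, rng)

The key trick is in perseus_stage: a single backup often improves the value of many belief points at once, so each stage typically backs up far fewer points than a full sweep would.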

After my PhD I have been pursuing three main lines of research. First, I have been working on planning under uncertainty for multiagent and multi-robot systems, in particular developing theory as well as solution methods for Dec-POMDPs. For instance, we developed the currently fastest optimal planner for general Dec-POMDPs (AAMAS 2009, IJCAI 2011). It is based on an algorithm that speeds up a key Dec-POMDP operation (the backup) by up to 10 orders of magnitude on benchmark problems (AAMAS 2010). These advances build on a journal paper that laid the foundations for value-based planning in Dec-POMDPs (JAIR 2008).
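
For context, the quantity that value-based Dec-POMDP planners optimize is the expected cumulative reward of a joint policy. In its standard textbook form (not specific to the cited papers), for a horizon-h Dec-POMDP with n agents:

    % Value of a joint policy \pi = (\pi_1, ..., \pi_n), where each \pi_i
    % maps agent i's private observation history \bar{o}^i_t to an action.
    V(\pi) = \mathbb{E}\left[ \sum_{t=0}^{h-1} R(s_t, \mathbf{a}_t) \,\middle|\, b_0, \pi \right],
    \qquad
    \mathbf{a}_t = \bigl( \pi_1(\bar{o}^1_t), \dots, \pi_n(\bar{o}^n_t) \bigr)

Each policy conditions only on its agent's private observation history, which is what makes computing an optimal joint policy so much harder than in the single-agent case.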

A second avenue of work concerns exploiting domain structure and communication in multiagent planning under uncertainty. In many domains, such as smart grid infrastructures, agent interactions are localized, which allows for gains in scalability (see the sketch below). We have explored local interactions in factored Dec-POMDP models (AAMAS 2008a, arXiv:1108.0404) as well as in models in which interactions are defined at the task level (AAMAS 2008b). Furthermore, explicit communication between agents can improve task performance and lower computational complexity. In contrast to most of the relevant literature, we focus on cases in which communication channels are not perfect: messages can be delayed (ICAPS 2008, AAMAS 2012). Combining the two concepts, we investigated how sparse interactions can lead to sparse communication needs (NIPS 2011). Finally, we have been exploring fuzzy reinforcement learning for multiagent POMDPs and Dec-POMDPs (IEEE-FUZZ 2010, 2011).
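
As a minimal illustration of why localized interactions buy scalability (a generic sketch, not code from any of the cited papers): if a joint value decomposes into pairwise terms along a chain of agents, the maximizing joint action can be found by dynamic programming over the chain instead of by enumerating exponentially many joint actions. The chain structure and random payoff tables below are assumed purely for illustration.

    import numpy as np

    def argmax_chain(pairwise):
        """Best joint action for a value function that is a sum of pairwise
        terms along a chain: pairwise[i] has shape (|A_i|, |A_{i+1}|)."""
        n = len(pairwise) + 1
        msg = np.zeros(pairwise[-1].shape[1])   # value-to-go past the last edge
        best = [None] * (n - 1)
        for i in range(n - 2, -1, -1):          # backward pass over the chain
            table = pairwise[i] + msg           # broadcast over a_{i+1}
            best[i] = table.argmax(axis=1)      # best a_{i+1} for each a_i
            msg = table.max(axis=1)
        actions = [int(msg.argmax())]           # forward pass: recover actions
        for i in range(n - 1):
            actions.append(int(best[i][actions[-1]]))
        return actions, float(msg.max())

    # 10 agents with 3 actions each: 3**10 = 59049 joint actions by brute
    # force, versus 9 small table scans here.
    rng = np.random.default_rng(1)
    pairwise = [rng.standard_normal((3, 3)) for _ in range(9)]
    print(argmax_chain(pairwise))

The cost grows linearly in the number of agents rather than exponentially, which is exactly the kind of gain factored models aim for.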

A third line of research applies approximate POMDP planning techniques to robotic applications, for instance Network Robot Systems. I have demonstrated successful POMDP-based cooperation between surveillance cameras and mobile robots (ICAPS 2009, ICRA 2010, IROS 2010). Surveillance cameras provide an incomplete and inaccurate global view, which can be enhanced by a robot's local sensors (IROS 2010). POMDPs form a sound framework for modeling such active cooperative perception tasks. For the case in which multiple robots are involved, we developed POMDP task auctions (ICRA 2010), a flexible way of coordinating many robots (a sketch of the idea follows below). Recently, we combined a POMDP task auction with a decentralized data fusion filter to solve a cooperative tracking application, demonstrated on a set of real robots (ICRA 2012). Also, I mapped a dynamic sensor selection problem to a POMDP that selects a subset of active sensors (ICAPS 2009), which was successfully tested in ISR's testbed (IROS 2009).
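
To make the auction idea concrete, here is a minimal sketch of a sequential single-item task auction in which each robot bids an estimate of the value of taking on a task, for instance derived from its POMDP value function. The interface and the distance-based surrogate bid are illustrative assumptions, not the ICRA 2010 implementation.

    def auction_tasks(robots, tasks, value_estimate):
        """Greedily assign each task to the highest-bidding robot.

        robots:         list of robot ids
        tasks:          list of task ids
        value_estimate: (robot, task, tasks_already_won) -> bid
        """
        assignment = {r: [] for r in robots}
        for task in tasks:
            # Each robot bids its marginal value for this task, conditioned
            # on the tasks it has already won.
            bids = {r: value_estimate(r, task, assignment[r]) for r in robots}
            winner = max(bids, key=bids.get)
            assignment[winner].append(task)
        return assignment

    # Toy usage: two robots on a line, negative travel distance as a crude
    # stand-in for the expected value computed by a POMDP planner.
    positions = {"r1": 0.0, "r2": 10.0}
    task_pos = {"t1": 2.0, "t2": 9.0, "t3": 5.0}

    def value_estimate(robot, task, assigned):
        start = task_pos[assigned[-1]] if assigned else positions[robot]
        return -abs(task_pos[task] - start)   # closer task => higher bid

    print(auction_tasks(["r1", "r2"], ["t1", "t2", "t3"], value_estimate))

Because each robot only needs to report a scalar bid per task, this kind of auction coordinates many robots with very little communication, which is what makes it attractive in networked robot settings.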