Communication for Improving Policy Computation in Distributed POMDPs

Citation:

Ranjit Nair, Makoto Yokoo, Maayan Roth, and Milind Tambe. 2004. "Communication for Improving Policy Computation in Distributed POMDPs." In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-04).

Abstract:

Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork, where a group of agents works together to jointly maximize a reward function. Since the problem of finding the optimal joint policy for a distributed POMDP has been shown to be NEXP-complete when no assumptions are made about the domain, several locally optimal approaches have emerged as viable solutions. However, the use of communicative actions within these locally optimal algorithms has been largely ignored or applied only under restrictive domain assumptions. In this paper, we show how communicative acts can be explicitly introduced in order to find locally optimal joint policies that allow agents to coordinate better through the synchronization achieved via communication. Furthermore, the introduction of communication allows us to develop a novel compact policy representation that yields savings in both space and time, which are verified empirically. Finally, by imposing constraints on communication, such as requiring that agents never go more than K steps without communicating, even greater space and time savings can be obtained.
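To give a rough sense of where the space savings come from, the sketch below (not taken from the paper; all names and the counting scheme are illustrative assumptions) compares the size of a full policy tree, which must branch on the entire observation history over the horizon, against policies for segments between synchronizing communications of length at most K. Because communication resynchronizes the agents, only trees of bounded depth K are needed between sync points.

```python
# Illustrative sketch, not the paper's exact representation: compares the
# number of policy-tree nodes for a horizon-T policy without communication
# against a policy that synchronizes via communication at least every K steps.
# num_obs is the number of possible observations per step.

def tree_nodes(num_obs: int, depth: int) -> int:
    """Nodes in a complete policy tree of the given depth:
    one action node per observation history of length < depth."""
    return sum(num_obs ** t for t in range(depth))

def no_comm_policy_size(num_obs: int, horizon: int) -> int:
    # Without communication, the policy must branch on the full
    # observation history, so the tree depth equals the horizon.
    return tree_nodes(num_obs, horizon)

def k_comm_policy_size(num_obs: int, horizon: int, k: int) -> int:
    # With synchronizing communication at least every k steps, agents
    # reach a common belief at each sync point, so only trees of depth
    # <= k are needed between communications. For simplicity this counts
    # one depth-k tree per segment, ignoring branching over the possible
    # synchronized beliefs.
    segments = -(-horizon // k)  # ceil(horizon / k)
    return segments * tree_nodes(num_obs, k)

if __name__ == "__main__":
    num_obs, horizon, k = 2, 12, 3
    print("no communication  :", no_comm_policy_size(num_obs, horizon))   # 4095
    print(f"sync every {k} steps:", k_comm_policy_size(num_obs, horizon, k))  # 28
```

Under these illustrative numbers the exponential dependence on the horizon is replaced by an exponential dependence on K only, which is the intuition behind both the compact representation and the additional savings from the K-step communication constraint.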