Teamwork and Coordination under Model Uncertainty in DEC-POMDPs

Citation:

Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, and Milind Tambe. 2010. “Teamwork and Coordination under Model Uncertainty in DEC-POMDPs.” In AAAI 2010 Workshop on Interactive Decision Theory and Game Theory (IDTGT).

Abstract:

Distributed Partially Observable Markov Decision Processes (DEC-POMDPs) are a popular planning framework for computing (near-)optimal plans for multiagent teamwork. However, these methods assume a complete and correct world model, an assumption that is often violated in real-world domains. We provide a new algorithm for DEC-POMDPs that is more robust to model uncertainty, with a focus on domains with sparse agent interactions. Our STC algorithm relies on the following key ideas: (1) reduce planning-time computation by shifting some of the burden to execution-time reasoning, (2) exploit sparse interactions between agents, and (3) maintain an approximate model of agents’ beliefs. We empirically show that STC is often substantially faster than existing DEC-POMDP methods without sacrificing reward performance.
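To make the third idea concrete, the sketch below shows the standard single-agent POMDP Bayesian belief update, the kind of execution-time computation an agent could run to maintain an approximate model of a teammate's belief. This is only an illustration, not the STC algorithm itself; the transition model `T`, observation model `O`, and the two-state example numbers are all hypothetical.

```python
def belief_update(belief, action, observation, T, O):
    """One step of the standard POMDP belief update:
    b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).

    belief: dict state -> probability
    T: dict (state, action) -> dict of successor-state probabilities
    O: dict (state, action) -> dict of observation probabilities
    """
    states = list(belief)
    new_belief = {}
    for s2 in states:
        # Predict: probability of landing in s2 after taking `action`.
        pred = sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in states)
        # Correct: weight the prediction by the observation likelihood.
        new_belief[s2] = O[(s2, action)].get(observation, 0.0) * pred
    norm = sum(new_belief.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under the model")
    return {s: p / norm for s, p in new_belief.items()}


# Tiny two-state example with hypothetical numbers: an "alarm" observation
# shifts belief toward the "bad" state.
T = {
    ("good", "wait"): {"good": 0.9, "bad": 0.1},
    ("bad", "wait"): {"good": 0.2, "bad": 0.8},
}
O = {
    ("good", "wait"): {"ok": 0.8, "alarm": 0.2},
    ("bad", "wait"): {"ok": 0.3, "alarm": 0.7},
}
b = belief_update({"good": 0.5, "bad": 0.5}, "wait", "alarm", T, O)
```

In a DEC-POMDP setting, the planning-time cost the abstract mentions comes from reasoning over all agents' possible joint beliefs; shifting updates like this to execution time, and performing them only when agents actually interact, is what the sparse-interaction focus aims to exploit.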