Given a large group of cooperative agents, selecting the right coordination or conflict resolution strategy can have a significant impact
on their performance (e.g., speed of convergence). While performance models of such coordination or conflict resolution strategies
could aid in selecting the right strategy for a given domain, such
models remain largely uninvestigated in the multiagent literature.
This paper takes a step towards applying the recently emerging distributed POMDP (partially observable Markov decision process)
frameworks, such as MTDP (Markov team decision process), in
service of creating such performance models. To address issues
of scale-up, we use small-scale models, called building blocks, that
represent the local interactions within a small group of agents. We
discuss several ways to combine building blocks for performance
prediction of a larger-scale multiagent system.
We present our approach in the context of DCSPs (distributed
constraint satisfaction problems), where we first show that there is
a large bank of conflict resolution strategies and that no single strategy dominates all others across different domains. By modeling and combining building blocks, we are able to predict the performance of
five different DCSP strategies for four different domain settings,
for a large-scale multiagent system. Our approach thus points the
way to new tools for strategy analysis and performance modeling
in multiagent systems in general.
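To make the DCSP setting concrete, the following is a minimal sketch of the kind of local interaction a "building block" might capture: two agents, one variable each, a single inequality constraint, and a priority-based conflict resolution strategy in which the lower-priority agent changes its value. The class names, the `resolve` helper, and the strategy itself are illustrative assumptions, not the paper's actual formulation.

```python
class Agent:
    """A DCSP agent holding one variable (illustrative only)."""

    def __init__(self, name, priority, domain):
        self.name = name
        self.priority = priority
        self.domain = list(domain)
        self.value = self.domain[0]  # start at the first domain value


def resolve(a, b, max_cycles=10):
    """Run cycles until the constraint a.value != b.value holds.

    On each conflict, the lower-priority agent moves to its next
    domain value (a simple, hypothetical conflict resolution strategy).
    Returns the number of cycles to convergence, or None on timeout.
    """
    for cycle in range(max_cycles):
        if a.value != b.value:
            return cycle
        loser = a if a.priority < b.priority else b
        idx = loser.domain.index(loser.value)
        loser.value = loser.domain[(idx + 1) % len(loser.domain)]
    return None


x = Agent("x", priority=2, domain=[0, 1, 2])
y = Agent("y", priority=1, domain=[0, 1, 2])
cycles = resolve(x, y)
```

Counting cycles to convergence in such a small block, under varying strategies and domain settings, is the kind of local performance measurement that could then be composed to predict large-scale system behavior.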