I envision a tomorrow where intelligent agents, broadly defined as
autonomous software entities, will pervade our lives, making
decisions on our behalf by working closely with other agents as part
of multiagent teams. Such teams will increasingly be entrusted
with complex tasks in which decisions must be made under
uncertainty and carry critical consequences, such as the loss of
valuable resources.
My research has focused on the multiagent theory and practice that I
believe will make this vision a reality. In particular, my work has centered on distributed partially observable Markov decision problems (distributed POMDPs) and their application to a range of complex domains, including
sensor networks, mission rehearsal simulations, disaster rescue, and personal assistant teams. My previous work includes a hybrid BDI-POMDP framework for multiagent teaming, joint equilibrium-based search for policies (JESP) for distributed POMDPs, and communicative JESP. My most recent work applies distributed constraint optimization (DCOP) approaches to networked distributed POMDPs.