Adjustable autonomy for the real world

Citation:

Milind Tambe, Paul Scerri, and D. V. Pynadath. 2002. “Adjustable autonomy for the real world.” In AAAI Spring Symposium on Safe Learning Agents.
2002_9_teamcore_ss02.pdf (168 KB)

Abstract:

Adjustable autonomy refers to agents’ dynamically varying their own autonomy, transferring decision-making control to other entities (typically human users) in key situations. Determining whether and when such transfers of control must occur is arguably the fundamental research question in adjustable autonomy. Previous work, often focused on individual agent-human interactions, has provided several different techniques to address this question. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous techniques. First, these techniques use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects of actions) to an agent’s team due to such transfers of control. To remedy these problems, this paper presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from the agent to the user or vice versa), and (ii) actions to change an agent’s pre-specified coordination constraints with others, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. These strategies are operationalized using Markov Decision Processes to select the optimal strategy given an uncertain environment and costs to individuals and teams. We present a detailed evaluation of the approach in the context of a real-world, deployed multi-agent system that assists a research group in its daily activities.
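
Illustrative sketch (not part of the paper): the Python program below shows, under invented assumptions, how a transfer-of-control strategy could be selected with a small finite-horizon MDP. Actions are A (agent decides autonomously), H (transfer control to, or keep waiting on, the human), and D (delay the coordination deadline at a one-time team cost); the solver reads off a strategy such as H -> D -> A. All parameters (HORIZON, P_HUMAN_RESPONDS, EQ_HUMAN, EQ_AGENT, WAIT_COST, DELAYED_WAIT_COST, DELAY_COST) and the state/action model are hypothetical, not taken from the paper.

"""
Illustrative sketch only: a tiny finite-horizon MDP that evaluates
transfer-of-control strategies in the spirit of the paper's approach.
All numeric parameters and the state model are made-up assumptions.
"""

from functools import lru_cache

# --- Hypothetical model parameters -----------------------------------------
HORIZON = 5              # number of discrete decision points before a forced decision
P_HUMAN_RESPONDS = 0.4   # chance the user answers during one time step
EQ_HUMAN = 10.0          # expected quality of a human-made decision
EQ_AGENT = 6.0           # expected quality of an autonomous agent decision
WAIT_COST = 1.5          # per-step miscoordination cost while the team waits
DELAYED_WAIT_COST = 0.5  # per-step cost after a deadline-delay (D) action
DELAY_COST = 2.0         # one-time team cost of executing the D action

# A state is (t, controller, delayed); controller is 'agent' or 'human'.

@lru_cache(maxsize=None)
def value(t, controller, delayed):
    """Return (expected value, best action) from this state onward."""
    if t == HORIZON:
        # Out of time: the agent must act autonomously.
        return EQ_AGENT, 'A(decide)'

    step_cost = DELAYED_WAIT_COST if delayed else WAIT_COST
    options = {}

    if controller == 'agent':
        # Decide now autonomously.
        options['A(decide)'] = EQ_AGENT
        # Transfer control to the human and wait one step.
        options['H(transfer)'] = -step_cost + expected_human_step(t, delayed)
    else:  # human currently has control
        # Keep waiting for the human for one more step.
        options['H(wait)'] = -step_cost + expected_human_step(t, delayed)
        # Take control back and decide autonomously.
        options['A(take back)'] = EQ_AGENT
        if not delayed:
            # D action: delay the coordination deadline at a one-time cost.
            options['D(delay)'] = -DELAY_COST + value(t, controller, True)[0]

    best = max(options, key=options.get)
    return options[best], best


def expected_human_step(t, delayed):
    """Expected value of one step spent with the human in control."""
    respond = P_HUMAN_RESPONDS * EQ_HUMAN
    no_respond = (1 - P_HUMAN_RESPONDS) * value(t + 1, 'human', delayed)[0]
    return respond + no_respond


def best_strategy():
    """Read off the optimal action sequence assuming the human stays silent."""
    t, controller, delayed = 0, 'agent', False
    actions = []
    while t <= HORIZON:
        _, a = value(t, controller, delayed)
        actions.append(a)
        if a.startswith('A'):
            break
        if a.startswith('H'):
            controller, t = 'human', t + 1
        elif a.startswith('D'):
            delayed = True
    return actions


if __name__ == '__main__':
    print('Optimal strategy:', ' -> '.join(best_strategy()))
    print('Expected value  :', round(value(0, 'agent', False)[0], 2))

With these invented numbers, delaying the deadline can be worthwhile when the cheaper post-delay wait cost outweighs DELAY_COST; otherwise the optimal strategy collapses to an immediate autonomous decision or a short wait on the human followed by A.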