Teams of heterogeneous agents working within and
alongside human organizations offer exciting possibilities for streamlining processes in ways not possible with
conventional software [4, 6]. For example, personal software assistants and information-gathering and scheduling agents can coordinate with each other to achieve a variety of organizational tasks, e.g., facilitating the teaming of experts in an organization for crisis response and aiding in the execution and monitoring of such a response.
Inevitably, due to the complexity of the environment,
the unpredictability of human beings, and the range of situations with which multi-agent systems must deal, there
will be times when the system does not produce the results its users desire. In such cases human intervention
is required. Sometimes simple tweaks are needed to correct
system failures. In other cases, perhaps because a particular user has more experience than the system, the user will
want to "steer" the entire multi-agent system on a different course. For example, some researchers at USC/ISI,
including ourselves, are currently focused on the Electric Elves project (http://www.isi.edu/agents-united). In
this project humans will be agentified by providing agent
proxies to act on their behalf, while entities such as meeting schedulers will be active agents that can communicate
with the proxies to achieve a variety of scheduling and
rescheduling tasks. In this domain, at an individual level,
a user will sometimes want to override decisions of their
proxy. At the team level, a human will want to fix undesirable properties of overall team behavior, such as large
breaks in a visitor's schedule.
However, requiring a human to take complete control
of an entire multi-agent system, or even a single agent,
defeats the purpose for which the agents were deployed.
Thus, while the multi-agent system should not assume
full autonomy, neither should it be a zero-autonomy
system. Rather, some form of Adjustable
Autonomy (AA) is desired. A system supporting AA is
able to dynamically change the autonomy it has to make
and carry out decisions, i.e., the system can continuously
vary its autonomy from being completely dependent on
humans to being completely in control. An AA tool needs
to support user interaction with such a system.
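The idea of continuously variable autonomy can be sketched as follows. This is a minimal illustration, not the system described here; the class, the confidence threshold, and the transfer-of-control callback are all assumptions made for the example.

```python
# Minimal sketch of adjustable autonomy (AA). All names and the
# threshold rule are illustrative assumptions, not the actual system.

class AAAgent:
    """An agent whose autonomy level can be varied continuously."""

    def __init__(self, autonomy=1.0):
        # autonomy in [0.0, 1.0]: 0.0 = completely dependent on
        # humans, 1.0 = completely in control.
        self.autonomy = autonomy

    def set_autonomy(self, level):
        # Dynamically adjust autonomy, clamped to [0.0, 1.0].
        self.autonomy = max(0.0, min(1.0, level))

    def decide(self, decision, confidence, ask_human):
        # Act autonomously only when confidence in the decision
        # exceeds what the current autonomy level demands; otherwise
        # transfer control to the human.
        if confidence >= 1.0 - self.autonomy:
            return decision
        return ask_human(decision)

agent = AAAgent(autonomy=0.5)
# With autonomy 0.5, a decision held with confidence 0.8 is taken
# autonomously; one held with confidence 0.3 is deferred to the human.
taken = agent.decide("reschedule meeting", 0.8, lambda d: f"human: {d}")
deferred = agent.decide("cancel meeting", 0.3, lambda d: f"human: {d}")
```

Lowering the autonomy level makes the agent defer more decisions to the human; raising it toward 1.0 makes the agent act on its own.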
To support effective user interaction with complex
multi-agent systems, we are developing a layered Adjustable Autonomy approach that allows users to intervene either with a single agent or with a team of agents.
Previous work in AA has looked at either individual
agents or whole teams but not, to our knowledge, at a layered approach to AA. The layering of AA parallels
the levels of autonomy that exist in human organizations.
Technically, the layered approach separates out the issues relevant at different levels of abstraction, making it easier to
provide users with the information and tools they need to
interact effectively with a complex multi-agent system.
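The two layers of intervention can be sketched schematically: a user either overrides one proxy's decision or issues an intervention that applies to the whole team. The classes, method names, and routing rule below are hypothetical, introduced only to illustrate the layering.

```python
# Hypothetical sketch of layered AA: user interventions are routed
# either to a single agent or to the whole team. All names here are
# assumptions for illustration, not the actual system's API.

class Agent:
    def __init__(self, name):
        self.name = name
        self.overrides = []

    def override(self, decision):
        # Agent-level intervention: the user overrides one proxy's decision.
        self.overrides.append(decision)
        return f"{self.name}: overridden with {decision!r}"

class Team:
    def __init__(self, agents):
        self.agents = {a.name: a for a in agents}

    def intervene(self, level, target, decision):
        # Team-level interventions adjust overall team behavior (e.g.
        # fixing large breaks in a visitor's schedule); agent-level
        # interventions touch only one proxy.
        if level == "agent":
            return self.agents[target].override(decision)
        elif level == "team":
            return [a.override(decision) for a in self.agents.values()]
        raise ValueError(f"unknown level: {level}")

team = Team([Agent("proxy-alice"), Agent("proxy-bob")])
single = team.intervene("agent", "proxy-alice", "decline meeting")
whole = team.intervene("team", None, "compress schedule")
```

Separating the two entry points mirrors the abstraction levels in the text: the agent layer exposes individual decisions, while the team layer exposes properties of overall team behavior.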