Publications by Year: 2000

Milind Tambe, D. V. Pynadath, N. Chauvat, A. Das, and Gal Kaminka. 2000. “Adaptive agent architectures for heterogeneous team members.” In International Conference on Multi-agent Systems (ICMAS).
With the proliferation of software agents and smart hardware devices, there is a growing realization that large-scale problems can be addressed by integration of such standalone systems. This has led to an increasing interest in integration architectures that enable a heterogeneous variety of agents and humans to work together. These agents and humans differ in their capabilities, preferences, the level of autonomy they are willing to grant the integration architecture, and their information requirements and performance. The challenge in coordinating such a diverse agent set is that potentially a large number of domain-specific and agent-specific coordination plans may be required. We present a novel two-tiered approach to address this coordination problem. We first provide the integration architecture with general-purpose teamwork coordination capabilities, but then enable adaptation of such capabilities for the needs or requirements of specific individuals. A key novel aspect of this adaptation is that it takes place in the context of other heterogeneous team members. We are realizing this approach in an implemented distributed agent integration architecture called Teamcore. Experimental results from two different domains are presented.
Milind Tambe, T. Raines, and S. Marsella. 2000. “Agent Assistants for Team Analysis.” AI Magazine.
With the growing importance of multi-agent teamwork, tools that can help humans analyze, evaluate, and understand team behaviors are becoming increasingly important as well. To this end, we are creating ISAAC, a team analyst agent for post-hoc, off-line agent-team analysis. ISAAC's novelty stems from a key design constraint that arises in team analysis: multiple types of models of team behavior are necessary to analyze different granularities of team events, including agent actions, interactions, and global performance. These heterogeneous team models are automatically acquired via machine learning over teams' external behavior traces, where the specific learning techniques are tailored to the particular model learned. Additionally, ISAAC employs multiple presentation techniques that can aid human understanding of the analyses. This paper presents ISAAC's general conceptual framework and its application in the RoboCup soccer domain, where ISAAC was awarded the RoboCup scientific challenge award.
T. Raines, Milind Tambe, and S. Marsella. 2000. “Automated agents that help humans understand agent team behaviors.” In International Conference on Autonomous Agents (Agents).
Multi-agent teamwork is critical in a large number of agent applications, including training, education, virtual enterprises and collective robotics. Tools that can help humans analyze, evaluate, and understand team behaviors are becoming increasingly important as well. We have taken a step towards building such a tool by creating an automated analyst agent called ISAAC for post-hoc, off-line agent-team analysis. ISAAC’s novelty stems from a key design constraint that arises in team analysis: multiple types of models of team behavior are necessary to analyze different granularities of team events, including agent actions, interactions, and global performance. These heterogeneous team models are automatically acquired via machine learning over teams’ external behavior traces, where the specific learning techniques are tailored to the particular model learned. Additionally, ISAAC employs multiple presentation techniques that can aid human understanding of the analyses. This paper presents ISAAC’s general conceptual framework, motivating its design, as well as its concrete application in the domain of RoboCup soccer. In the RoboCup domain, ISAAC was used prior to and during the RoboCup’99 tournament, and was awarded the RoboCup scientific challenge award.
Milind Tambe, D. V. Pynadath, and N. Chauvat. 2000. “Building dynamic agent organizations in cyberspace.” IEEE Internet Computing 4 (2).
With the promise of agent-based systems, a variety of research and industrial groups are developing autonomous, heterogeneous agents that are distributed over a variety of platforms and environments in cyberspace. Rapid integration of such distributed, heterogeneous agents would enable software to be quickly developed to address large-scale problems of interest. Unfortunately, rapid and robust integration remains a difficult challenge. To address this challenge, we are developing a novel teamwork-based agent integration framework. In this framework, software developers specify an agent organization called a team-oriented program. To recruit agents for this organization, an agent resources manager (an analogue of a “human resources manager”) searches the cyberspace for agents of interest to this organization, and monitors their performance over time. Agents in this organization are wrapped with TEAMCORE wrappers that make them team-ready, and thus ensure robust, flexible teamwork among the members of the newly formed organization. This implemented framework promises to reduce the software development effort in agent integration while providing robustness due to its teamwork-based foundations. A concrete, running example, based on heterogeneous, distributed agents, is presented.
2000. “Conflicts in agent teams.” In Conflicting Agents. Kluwer Academic Publishers.
Multi-agent teamwork is a critical capability in a large number of applications. Yet, despite the considerable progress in teamwork research, the challenge of intra-team conflict resolution has remained largely unaddressed. This chapter presents a system called CONSA (COllaborative Negotiation System based on Argumentation) to resolve conflicts using argumentation-based negotiations. The key insight in CONSA is to fully exploit the benefits of argumentation in a team setting. Thus, CONSA casts conflict resolution as a team problem, so that the recent advances in teamwork can be fully brought to bear during conflict resolution to improve argumentation flexibility. Furthermore, since teamwork conflicts often involve past teamwork, recently developed teamwork models can be exploited to provide agents with reusable argumentation knowledge. Additionally, CONSA also includes argumentation strategies geared towards benefiting the team rather than the individual, and techniques to reduce argumentation overhead. We present the detailed algorithms used in CONSA and show a detailed trace from CONSA's implementation.
Paul Scerri, Milind Tambe, H. Lee, and D. V. Pynadath. 2000. “Don't cancel my Barcelona trip: Adjusting the autonomy of agent proxies in human organizations.” In AAAI Fall Symposium on Socially Intelligent Agents --- the human in the loop.
Teamwork is a critical capability in multiagent environments. Many such environments mandate that the agents and agent teams must be persistent, i.e., exist over long periods of time. Agents in such persistent teams are bound together by their long-term common interests and goals. This paper focuses on flexible teamwork in such persistent teams. Unfortunately, while previous work has investigated flexible teamwork, persistent teams remain unexplored. For flexible teamwork, one promising approach that has emerged is model-based, i.e., providing agents with general models of teamwork that explicitly specify their commitments in teamwork. Such models enable agents to autonomously reason about coordination. Unfortunately, for persistent teams, such models may lead to coordination and communication actions that, while locally optimal, are highly problematic for the team's long-term goals. We present a decision-theoretic technique based on Markov decision processes to enable persistent teams to overcome such limitations of the model-based approach. In particular, agents reason about expected team utilities of future team states that are projected to result from actions recommended by the teamwork model, as well as lower-cost or higher-cost variations on these actions. To accommodate real-time constraints, this reasoning is done in an anytime fashion. Implemented examples from an analytic search tree and some real-world domains are presented.
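The decision-theoretic idea in this abstract can be illustrated with a minimal sketch: an agent compares the teamwork model's recommended coordination action against cheaper or riskier variations by one-step expected team utility. This is not the paper's implementation; all names, probabilities, and utilities below are hypothetical.

```python
# Illustrative sketch (hypothetical numbers, not the paper's algorithm):
# compare a model-recommended coordination action against a cheaper variation
# by expected team utility minus communication cost.

def expected_team_utility(action, outcomes):
    """Expected utility of an action over its (probability, utility) outcomes."""
    return sum(p * u for p, u in outcomes[action])

def select_coordination_action(outcomes, cost):
    """Pick the action maximizing expected team utility net of its cost."""
    return max(outcomes, key=lambda a: expected_team_utility(a, outcomes) - cost[a])

# Hypothetical decision: communicate a commitment change to teammates, or skip?
outcomes = {
    "communicate": [(1.0, 8.0)],                 # team stays coordinated for sure
    "skip":        [(0.7, 10.0), (0.3, -5.0)],   # saves cost but risks miscoordination
}
cost = {"communicate": 2.0, "skip": 0.0}

best = select_coordination_action(outcomes, cost)
```

Here "communicate" nets 8.0 - 2.0 = 6.0 while "skip" nets 5.5 in expectation, so the lookahead keeps the model's recommendation; with different team-state utilities the cheaper variation could win instead.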
D. V. Pynadath, Milind Tambe, Y. Arens, and H. Chalupsky. 2000. “Electric Elves: Immersing an agent organization in a human organization.” In AAAI Fall Symposium on Socially Intelligent Agents --- the human in the loop.
Future large-scale human organizations will be highly agentized, with software agents supporting the traditional tasks of information gathering, planning, and execution monitoring, as well as having increased control of resources and devices (communication and otherwise). As these heterogeneous software agents take on more of these activities, they will face the additional tasks of interfacing with people and sometimes acting as their proxies. Dynamic teaming of such heterogeneous agents will enable organizations to act coherently, to robustly attain their mission goals, to react swiftly to crises, and to dynamically adapt to events. Advances in this agentization could potentially assist all organizations, including the military, civilian disaster response organizations, corporations, and universities and research institutions. Within an organization, we envision that agent-based technology will facilitate (and sometimes supervise) all collaborative activities. For a research institution, agentization may facilitate such activities as meeting organization, paper composition, software development, and deployment of people and equipment for out-of-town demonstrations. For a military organization, agentization may enable the teaming of military units and equipment for rapid deployment, the monitoring of the progress of such deployments, and the rapid response to any crises that may arise. To accomplish such goals, we envision the presence of agent proxies for each person within an organization. Thus, for instance, if an organizational crisis requires an urgent deployment of a team of people and equipment, then agent proxies could dynamically volunteer for team membership on behalf of the people or resources they represent, while also ensuring that the selected team collectively possesses sufficient resources and capabilities. 
The proxies must also manage efficient transportation of such resources, the monitoring of the progress of individual participants and of the mission as a whole, and the execution of corrective actions when goals appear to be endangered. The complexity inherent in human organizations complicates all of these tasks and provides a challenging research testbed for agent technology. First, there is the key research question of adjustable autonomy. In particular, agents acting as proxies for people must automatically adjust their own autonomy, e.g., avoiding critical errors, possibly by letting people make important decisions while autonomously making the more routine decisions. Second, human organizations operate continually over time, and the agents must operate continually as well. In fact, the agent systems must be up and running 24 hours a day, 7 days a week (24/7). Third, people, as well as their associated tasks, are very heterogeneous, having a wide and rich variety of capabilities, interests, preferences, etc. To enable teaming among such people for crisis response or other organizational tasks, agents acting as proxies must represent and reason with such capabilities and interests. We thus require powerful matchmaking capabilities to match two people with similar interests. Fourth, human organizations are often large, so providing proxies often means a big scale-up in the number of agents, as compared against typical multiagent systems in current operation. Our Electric Elves project is currently investigating the above research issues and the impact of agentization on human organizations in general, using our own Intelligent Systems Division of USC/ISI as a testbed.
Within our research institution, we intend that our Electric Elves agent proxies automatically manage tasks such as: (1) selecting teams of researchers for giving a demonstration out of town, planning all of their travel arrangements, shipping relevant equipment, and resolving problems that come up during such a demonstration (e.g., a selected researcher becomes ill at the last minute); (2) determining the researchers interested in meeting with a visitor to our institute, and scheduling meetings with the visitor; (3) rescheduling meetings if one or more users are absent or unable to arrive on time at a meeting; and (4) monitoring the location of users and keeping others informed (within privacy limits) about their whereabouts. This short paper presents an overview of our project, as space limitations preclude a detailed discussion of the research issues and operation of the current system. We do have a working prototype of about 10 agent proxies running almost continuously, managing the schedules of one research group. In the following section, we first present an overview of the agent organization, which immerses several heterogeneous agents and sets of agents within the existing human organization of our division. Following that, we describe the current state of the system, and then conclude.
Gal Kaminka. 2000. “Execution Monitoring in Multi-Agent Environments”.
Agents in complex, dynamic, multi-agent environments face uncertainty in the execution of their tasks, as their sensors, plans, and actions may fail unexpectedly, e.g., the weather may render a robot's camera useless, its grip too slippery, etc. The explosive number of states in such environments prohibits any resource-bounded designer from predicting all failures at design time. This situation is exacerbated in multi-agent settings, where interactions between agents increase the complexity. For instance, it is difficult to predict an opponent's behavior. Agents in such environments must therefore rely on runtime execution monitoring and diagnosis to detect a failure, diagnose it, and recover. Previous approaches have focused on supplying the agent with goal-attentive knowledge of the ideal behavior expected of the agent with respect to its goals. These approaches encounter key pitfalls and fail to exploit key opportunities in multi-agent settings: (a) only a subset of the sensors (those that measure achievement of goals) are used, despite other agents' sensed behavior that can be used to indirectly sense the environment or complete the agent's knowledge; (b) there is no monitoring of social relationships that must be maintained between the agents regardless of achievement of the goal (e.g., teamwork); and (c) there is no recognition of failures in others, though these change the ideal behavior expected of an agent (for instance, assisting a failing teammate). To address these problems, we investigate a novel complementary paradigm for multi-agent monitoring and diagnosis. Socially-Attentive Monitoring (SAM) focuses on monitoring the social relationships between the agents as they are executing their tasks, and uses models of multiple agents and their relationships in monitoring and diagnosis.
We hypothesize that failures to maintain relationships would be indicative of failures in behavior, and diagnosis of relationships can be used to complement goal-attentive methods. In particular, SAM addresses the weaknesses listed above: (a) it allows inference of missing knowledge and sensor readings through other agents' sensed behavior; (b) it directly monitors social relationships, with no attention to the goals; and (c) it allows recognition of failures in others (even if they are not using SAM!). SAM currently uses the STEAM teamwork model, and a role-similarity relationship model to monitor agents. It relies on plan-recognition to infer agents' reactive-plan hierarchies from their observed actions. These hierarchies are compared in a top-down fashion to find relationship violations, e.g., cases where two agents selected different plans despite their being on the same team. Such detections trigger diagnosis which uses the relationship models to facilitate recovery. For example, in teamwork, a commitment to joint selection of plans further mandates mutual belief in preconditions. Thus a difference in selected plans may be explained by a difference in preconditions, and can lead to recovery using negotiations. We empirically and analytically investigate SAM in two dynamic, complex, multi-agent domains: the ModSAF battlefield simulation, where SAM is employed by helicopter pilot agents; and the RoboCup soccer simulation, where SAM is used by a coach agent to monitor teams' behavior. We show that SAM can capture failures that are otherwise undetectable, and that distributed monitoring is better (correct and complete detection) and simpler (no representation of ambiguity) than a centralized scheme (complete but incorrect detection, requiring representation of ambiguity).
Key contributions and novelties include: (a) a general framework for socially-attentive monitoring, and a deployed implementation for monitoring teamwork; (b) rigorously proven guarantees on the applicability and results of practical socially-attentive monitoring of teamwork under conditions of uncertainty; (c) procedures for diagnosis based on a teamwork relationship model. Future work includes the use of additional relationship models in monitoring and diagnosis, formalization of the social diagnosis capabilities, and further demonstration of SAM's usefulness in current domains and others.
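The top-down hierarchy comparison described in this abstract can be sketched briefly: given two teammates' inferred reactive-plan stacks, walk from the root downward and flag the first level where the selected plans diverge, which then triggers diagnosis. This is an illustrative sketch only, not SAM itself; the plan names below are hypothetical.

```python
# Illustrative sketch (not SAM's implementation): top-down comparison of two
# teammates' plan hierarchies, flagging the first level at which agents on
# the same team have selected different plans.

def first_divergence(hierarchy_a, hierarchy_b):
    """Return (depth, plan_a, plan_b) at the first differing level, or None."""
    for depth, (a, b) in enumerate(zip(hierarchy_a, hierarchy_b)):
        if a != b:
            return (depth, a, b)
    return None

# Hypothetical plan stacks inferred by plan-recognition (root plan first)
pilot1 = ["execute-mission", "fly-route", "travel-low"]
pilot2 = ["execute-mission", "fly-route", "wait-at-point"]

violation = first_divergence(pilot1, pilot2)
# A divergence between teammates is a candidate relationship violation;
# diagnosis might then ask whether the agents disagree on the diverging
# plans' preconditions, leading to recovery via negotiation.
```

The comparison is top-down because a shared prefix (here, the joint mission and route plans) establishes that the agents were coordinated up to the point of divergence, localizing the failure.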
S. Marsella, J. Adibi, Y. Alonaizan, Gal Kaminka, I. Muslea, and Milind Tambe. 2000. “Experiences acquired in the design of Robocup teams: A comparison of two fielded teams.” Journal of Autonomous Agents and Multi-agent Systems, special issue on Best of Agents '99, 4, Pp. 115-129.
Increasingly, multi-agent systems are being designed for a variety of complex, dynamic domains. Effective agent interactions in such domains raise some of the most fundamental research challenges for agent-based systems, in teamwork, multi-agent learning, and agent modelling. The RoboCup research initiative, particularly the simulation league, has been proposed to pursue such multi-agent research challenges, using the common testbed of simulation soccer. Despite the significant popularity of RoboCup within the research community, general lessons have not often been extracted from participation in RoboCup. This is what we attempt to do here. We have fielded two teams, ISIS97 and ISIS98, in RoboCup competitions. These teams have been in the top four teams in these competitions. We compare the teams and attempt to analyze and generalize the lessons learned. This analysis reveals several surprises, pointing out lessons for teamwork and for multi-agent learning.
M. Asada, M. Veloso, Milind Tambe, H. Kitano, I. Noda, and G. K. Kraetzschmar. 2000. “Overview of RoboCup'98.” AI Magazine, Spring 2000.
The Robot World Cup Soccer Games and Conferences (RoboCup) are a series of competitions and events designed to promote the full integration of AI and robotics research. Following the first RoboCup, held in Nagoya, Japan, in 1997, RoboCup-98 was held in Paris from 2–9 July, overlapping with the real World Cup soccer competition. RoboCup-98 included competitions in three leagues: (1) the simulation league, (2) the real robot small-size league, and (3) the real robot middle-size league. Champion teams were CMUNITED-98 in both the simulation and the real robot small-size leagues and CS-FREIBURG (Freiburg, Germany) in the real robot middle-size league. RoboCup-98 also included a Scientific Challenge Award, which was given to three research groups for their simultaneous development of fully automatic commentator systems for the RoboCup simulator league. Over 15,000 spectators watched the games, and 120 international media provided worldwide coverage of the competition.
Ranjit Nair, T. Ito, Milind Tambe, and S. Marsella. 2000. “RoboCup Rescue: A Proposal and Preliminary Experiences.” In ICMAS workshop on RoboCup Rescue.
RoboCup Rescue is an international project aimed at applying multi-agent research to the domain of search and rescue in large-scale disasters. This paper reports our initial experiences with using the RoboCup Rescue simulator and building agents capable of making decisions based on observation of other agents' behavior. We also plan on analyzing team behavior to obtain rules that explain this behavior.
Gal Kaminka and Milind Tambe. 2000. “Robust agent teams via socially attentive monitoring.” Journal of Artificial Intelligence Research (JAIR), 12, Pp. 105-147.

Agents in dynamic multi-agent environments must monitor their peers to execute individual and group plans. A key open question is how much monitoring of other agents' states is required to be effective: The Monitoring Selectivity Problem. We investigate this question in the context of detecting failures in teams of cooperating agents, via Socially-Attentive Monitoring, which focuses on monitoring for failures in the social relationships between the agents. We empirically and analytically explore a family of socially-attentive teamwork monitoring algorithms in two dynamic, complex, multi-agent domains, under varying conditions of task distribution and uncertainty. We show that a centralized scheme using a complex algorithm trades correctness for completeness and requires monitoring all teammates. In contrast, a simple distributed teamwork monitoring algorithm results in correct and complete detection of teamwork failures, despite relying on limited, uncertain knowledge, and monitoring only key agents in a team. In addition, we report on the design of a socially-attentive monitoring system and demonstrate its generality in monitoring several coordination relationships, diagnosing detected failures, and both on-line and off-line applications.

Milind Tambe and W. Zhang. 2000. “Towards flexible teamwork in persistent teams: extended report.” Journal of Autonomous Agents and Multi-agent Systems, special issue on 'Best of ICMAS 98,' 3, Pp. 159-183.
Teams of heterogeneous agents working within and alongside human organizations offer exciting possibilities for streamlining processes in ways not possible with conventional software [4, 6]. For example, personal software assistants and information gathering and scheduling agents can coordinate with each other to achieve a variety of coordination and organizational tasks, e.g., facilitating teaming of experts in an organization for crisis response and aiding in execution and monitoring of such a response [5]. Inevitably, due to the complexity of the environment, the unpredictability of human beings, and the range of situations with which the multi-agent systems must deal, there will be times when the system does not produce the results its users desire. In such cases human intervention is required. Sometimes simple tweaks are required due to system failures. In other cases, perhaps because a particular user has more experience than the system, the user will want to “steer” the entire multi-agent system on a different course. For example, some researchers at USC/ISI, including ourselves, are currently focused on the Electric Elves project. In this project humans will be agentified by providing agent proxies to act on their behalf, while entities such as meeting schedulers will be active agents that can communicate with the proxies to achieve a variety of scheduling and rescheduling tasks. In this domain, at an individual level, a user will sometimes want to override decisions of their proxy. At a team level, a human will want to fix undesirable properties of overall team behavior, such as large breaks in a visitor's schedule. However, to require a human to completely take control of an entire multi-agent system, or even a single agent, defeats the purpose for which the agents were deployed. Thus, while it is desirable that the multi-agent system not assume full autonomy, neither should it be a zero-autonomy system.
Rather, some form of Adjustable Autonomy (AA) is desired. A system supporting AA is able to dynamically change the autonomy it has to make and carry out decisions, i.e., the system can continuously vary its autonomy from being completely dependent on humans to being completely in control. An AA tool needs to support user interaction with such a system. To support effective user interaction with a complex multi-agent system, we are developing a layered Adjustable Autonomy approach that allows users to intervene either with a single agent or with a team of agents. Previous work in AA has looked at either individual agents or whole teams but not, to our knowledge, a layered approach to AA. The layering of the AA parallels the levels of autonomy existing in human organizations. Technically, the layered approach separates out issues relevant at different levels of abstraction, making it easier to provide users with the information and tools they need to effectively interact with a complex multi-agent system.
H. Jung, Milind Tambe, W. Shen, and W. Zhang. 2000. “Towards large-scale conflict resolution: Initial results.” In Proceedings of the International Conference on Multi-Agent Systems (ICMAS) (poster).
With the increasing interest in distributed and collaborative multi-agent applications, conflict resolution in large-scale systems becomes an important problem. Our approach to collaborative conflict resolution is based on argumentation. To understand the feasibility and the scope of the approach, we first implemented the process in a system called CONSA and applied it to two complex, dynamic domains. We then modeled this approach in distributed constraint satisfaction problems (DCSP) to investigate the effect of different conflict resolution configurations, such as the degree of shared responsibility and unshared information, on large-scale conflict resolution via argumentation. Our results suggest some interesting correlations between these configurations and the performance of conflict resolution.
D. V. Pynadath, Milind Tambe, N. Chauvat, and L. Cavedon. 2000. “Towards team-oriented programming.” In Intelligent Agents, Volume VI: Workshop on Agent Theories, Architectures, and Languages. Springer, Heidelberg, Germany.
The promise of agent-based systems is leading towards the development of autonomous, heterogeneous agents, designed by a variety of research/industrial groups and distributed over a variety of platforms and environments. Teamwork among these heterogeneous agents is critical in realizing the full potential of these systems and scaling up to the demands of large-scale applications. Indeed, to succeed in highly uncertain, complex applications, the agent teams must be both robust and flexible. Unfortunately, development of such agent teams is currently extremely difficult. This paper focuses on significantly accelerating the process of building such teams using a simplified, abstract framework called team-oriented programming (TOP). In TOP, a programmer specifies an agent organization hierarchy and the team tasks for the organization to perform, but abstracts away from the large number of coordination plans potentially necessary to ensure robust and flexible team operation. We support TOP through a distributed, domain-independent teamwork layer that integrates core teamwork coordination and communication capabilities. We have recently used TOP to integrate a diverse team of heterogeneous distributed agents in performing a complex task. We outline the current state of our TOP implementation and the outstanding issues in developing such a framework.