Multi-agent teamwork is critical in a large number of agent
applications, including training, education, virtual enterprises
and collective robotics. Tools that can help humans analyze,
evaluate, and understand team behaviors are becoming
increasingly important as well. We have taken a step towards
building such a tool by creating an automated analyst agent
called ISAAC for post-hoc, off-line agent-team analysis.
ISAAC’s novelty stems from a key design constraint that
arises in team analysis: multiple types of models of team
behavior are necessary to analyze different granularities of
team events, including agent actions, interactions, and global
performance. These heterogeneous team models are
automatically acquired via machine learning over teams’
external behavior traces, where the specific learning
techniques are tailored to the particular model learned.
Additionally, ISAAC employs multiple presentation
techniques that can aid human understanding of the analyses.
This paper presents ISAAC’s general conceptual framework,
motivating its design, as well as its concrete application in
the domain of RoboCup soccer. In the RoboCup domain,
ISAAC was used prior to and during the RoboCup’99
tournament, and was awarded the RoboCup scientific challenge award.
With the promise of agent-based systems, a variety of research/industrial groups are developing
autonomous, heterogeneous agents that are distributed over a variety of platforms and environments in
cyberspace. Rapid integration of such distributed, heterogeneous agents would enable software to be
rapidly developed to address large-scale problems of interest. Unfortunately, rapid and robust integration
remains a difficult challenge.
To address this challenge, we are developing a novel teamwork-based agent integration framework.
In this framework, software developers specify an agent organization called a team-oriented program. To
recruit agents for this organization, an agent resources manager (an analogue of a “human resources manager”) searches the cyberspace for agents of interest to this organization, and monitors their performance
over time. Agents in this organization are wrapped with TEAMCORE wrappers that make them team-ready, and thus ensure robust, flexible teamwork among the members of the newly formed organization.
This implemented framework promises to reduce the software development effort in agent integration
while providing robustness due to its teamwork-based foundations. A concrete, running example, based
on heterogeneous, distributed agents is presented.
Multi-agent teamwork is a critical capability in a large number of applications.
Yet, despite the considerable progress in teamwork research, the challenge of
intra-team conflict resolution has remained largely unaddressed. This chapter
presents a system called CONSA that resolves conflicts using argumentation-based
negotiations. The key insight in CONSA (COllaborative Negotiation System
based on Argumentation) is to fully exploit the benefits of argumentation in a
team setting. Thus, CONSA casts conflict resolution as a team problem, so that
the recent advances in teamwork can be fully brought to bear during conflict
resolution to improve argumentation flexibility. Furthermore, since teamwork
conflicts often involve past teamwork, recently developed teamwork models can
be exploited to provide agents with reusable argumentation knowledge. Additionally, CONSA also includes argumentation strategies geared towards benefiting the team rather than the individual, and techniques to reduce argumentation
overhead. We present detailed algorithms used in CONSA and show a detailed
trace from CONSA’s implementation.
Teamwork is a critical capability in multiagent environments. Many such environments mandate that the agents and agent-teams must be persistent, i.e., exist over long periods of time. Agents in such persistent teams are bound together by their long-term common interests and goals.
This paper focuses on flexible teamwork in such persistent teams. Unfortunately, while previous work has investigated flexible teamwork, persistent teams remain unexplored. For flexible teamwork, one promising approach that has emerged is model-based, i.e., providing agents with general models of teamwork that explicitly specify their commitments in teamwork. Such models enable agents to autonomously reason about coordination. Unfortunately, for persistent teams, such models may lead to coordination and communication actions that, while locally optimal, are highly problematic for the team's long-term goals. We present a decision-theoretic technique based on Markov decision processes to enable persistent teams to overcome such limitations of the model-based approach. In particular, agents reason about expected team utilities of future team states that are projected to result from actions recommended by the teamwork model, as well as lower-cost or higher-cost variations on these actions. To accommodate real-time constraints, this reasoning is done in an anytime fashion. Implemented examples from an analytic search tree and some real-world domains are presented.
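The anytime, decision-theoretic reasoning described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the function names, the toy state-projection model, and the candidate actions are all invented for illustration.

```python
# Sketch (hypothetical names): a persistent team compares the expected
# long-term utility of the coordination action recommended by the teamwork
# model against lower-cost or higher-cost variations, by projecting the
# team states each action would lead to.

def expected_utility(action, state, project, utility, depth):
    """Finite-horizon lookahead: average the utilities of team states
    reachable under `action`, weighted by their projected probabilities."""
    if depth == 0:
        return utility(state)
    total = 0.0
    for prob, next_state in project(state, action):
        total += prob * expected_utility(action, next_state,
                                         project, utility, depth - 1)
    return total

def choose_coordination_action(candidates, state, project, utility, max_depth):
    """Anytime selection: deepen the lookahead one level at a time,
    always holding the best candidate found so far, so the reasoning
    can be interrupted after any depth with a usable answer."""
    best = candidates[0]
    for depth in range(1, max_depth + 1):
        best = max(candidates,
                   key=lambda a: expected_utility(a, state, project,
                                                  utility, depth))
    return best
```

In this toy setup, `project` returns probability-weighted successor states, so an expensive but locally "correct" communication act can lose to a cheaper variation once its long-term cost to the team is projected.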
Future large-scale human organizations will be highly agentized, with software agents supporting the traditional tasks
of information gathering, planning, and execution monitoring, as well as having increased control of resources and
devices (communication and otherwise). As these heterogeneous software agents take on more of these activities,
they will face the additional tasks of interfacing with people and sometimes acting as their proxies. Dynamic teaming of such heterogeneous agents will enable organizations
to act coherently, to robustly attain their mission goals, to
react swiftly to crises, and to dynamically adapt to events.
Advances in this agentization could potentially assist all organizations, including the military, civilian disaster response
organizations, corporations, and universities and research institutions.
Within an organization, we envision that agent-based
technology will facilitate (and sometimes supervise) all collaborative activities. For a research institution, agentization
may facilitate such activities as meeting organization, paper composition, software development, and deployment of
people and equipment for out-of-town demonstrations. For
a military organization, agentization may enable the teaming of military units and equipment for rapid deployment,
the monitoring of the progress of such deployments, and the
rapid response to any crises that may arise. To accomplish
such goals, we envision the presence of agent proxies for
each person within an organization. Thus, for instance, if
an organizational crisis requires an urgent deployment of
a team of people and equipment, then agent proxies could
dynamically volunteer for team membership on behalf of
the people or resources they represent, while also ensuring
that the selected team collectively possesses sufficient resources and capabilities. The proxies must also manage efficient transportation of such resources, the monitoring of the
progress of individual participants and of the mission as a
whole, and the execution of corrective actions when goals
appear to be endangered.
The complexity inherent in human organizations complicates all of these tasks and provides a challenging research
testbed for agent technology. First, there is the key research
question of adjustable autonomy. In particular, agents acting as proxies for people must automatically adjust their own
autonomy, e.g., avoiding critical errors, possibly by letting people make important decisions while autonomously making the more routine decisions. Second, human organizations operate continually over time, and the agents must operate continually as well. In fact, the agent systems must be
up and running 24 hours a day 7 days a week (24/7). Third,
people, as well as their associated tasks are very heterogeneous, having a wide and rich variety of capabilities, interests, preferences, etc. To enable teaming among such people for crisis response or other organizational tasks, agents
acting as proxies must represent and reason with such capabilities and interests. We thus require powerful matchmaking capabilities to match two people with similar interests.
Fourth, human organizations are often large, so providing
proxies often means a big scale-up in the number of agents,
as compared against typical multiagent systems in current research.
Our Electric Elves project is currently investigating the
above research issues and the impact of agentization on human organizations in general, using our own Intelligent Systems Division of USC/ISI as a testbed. Within our research
institution, we intend that our Electric Elves agent proxies
automatically manage tasks such as:
- Select teams of researchers for giving a demonstration out of town, plan all of their travel arrangements, and ship relevant equipment; also, resolve problems that come up during such a demonstration (e.g., a selected researcher becomes ill at the last minute)
- Determine the researchers interested in meeting with a visitor to our institute, and schedule meetings with the visitor
- Reschedule meetings if one or more users are absent or unable to arrive on time at a meeting
- Monitor the location of users and keep others informed (within privacy limits) about their whereabouts
This short paper presents an overview of our project, as
space limitations preclude a detailed discussion of the research issues and operation of the current system. We do
have a working prototype of about 10 agent proxies running almost continuously, managing the schedules of one
research group. In the following section, we first present an
overview of the agent organization, which immerses several
heterogeneous agents and sets of agents within the existing
human organization of our division. Following that, we describe the current state of the system, and then conclude.
Agents in complex, dynamic, multi-agent environments
face uncertainty in the execution of their tasks, as their
sensors, plans, and actions may fail unexpectedly, e.g., the
weather may render a robot's camera useless, its grip too
slippery, etc. The explosive number of states in such
environments prohibits any resource-bounded designer
from predicting all failures at design time. This situation is
exacerbated in multi-agent settings, where interactions
between agents increase the complexity. For instance, it is
difficult to predict an opponent's behavior.
Agents in such environments must therefore rely on runtime execution monitoring and diagnosis to detect a failure,
diagnose it, and recover. Previous approaches have focused
on supplying the agent with goal-attentive knowledge of
the ideal behavior expected of the agent with respect to its
goals. These approaches encounter key pitfalls and fail to
exploit key opportunities in multi-agent settings: (a) only a
subset of the sensors (those that measure achievement of
goals) are used, despite other agents' sensed behavior that
can be used to indirectly sense the environment or
complete the agent's knowledge; (b) there is no monitoring
of social relationships that must be maintained between the
agents regardless of achievement of the goal (e.g.,
teamwork); and (c) there is no recognition of failures in
others, though these change the ideal behavior expected of
an agent (for instance, assisting a failing teammate).
To address these problems, we investigate a novel
complementary paradigm for multi-agent monitoring and
diagnosis. Socially-Attentive Monitoring (SAM) focuses on
monitoring the social relationships between the agents as
they are executing their tasks, and uses models of multiple
agents and their relationships in monitoring and diagnosis.
We hypothesize that failures to maintain relationships
would be indicative of failures in behavior, and diagnosis
of relationships can be used to complement goal-attentive
methods. In particular, SAM addresses the weaknesses
listed above: (a) it allows inference of missing knowledge
and sensor readings through other agents' sensed behavior;
(b) it directly monitors social relationships, with no
attention to the goals; and (c) it allows recognition of
failures in others (even if they are not using SAM!).
SAM currently uses the STEAM teamwork model, and a
role-similarity relationship model to monitor agents. It
relies on plan-recognition to infer agents' reactive-plan
hierarchies from their observed actions. These hierarchies are compared in a top-down fashion to find relationship
violations, e.g., cases where two agents selected different
plans despite their being on the same team. Such detections
trigger diagnosis which uses the relationship models to
facilitate recovery. For example, in teamwork, a
commitment to joint selection of plans further mandates
mutual belief in preconditions. Thus a difference in
selected plans may be explained by a difference in
preconditions, and can lead to recovery using negotiations.
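The top-down comparison of inferred plan hierarchies described above can be illustrated with a minimal sketch. The function and plan names here are invented for illustration and are not SAM's actual code.

```python
# Sketch (hypothetical names): each agent's reactive-plan hierarchy,
# inferred via plan recognition, is represented as a list of plans from
# root to leaf. Teammates committed to jointly selecting team plans
# should agree at every level; the first mismatch flags a relationship
# violation and triggers diagnosis.

def find_divergence(hierarchy_a, hierarchy_b):
    """Compare two plan hierarchies top-down.
    Returns (depth, (plan_a, plan_b)) at the first level where the
    agents selected different plans, or None if they agree."""
    for depth, (plan_a, plan_b) in enumerate(zip(hierarchy_a, hierarchy_b)):
        if plan_a != plan_b:
            return depth, (plan_a, plan_b)
    return None
```

For example, if one pilot agent's inferred hierarchy is `["execute-mission", "fly-route"]` while a teammate's is `["execute-mission", "wait-at-point"]`, the mismatch at depth 1 would suggest the teammates hold different beliefs about the plans' preconditions, pointing diagnosis (and then negotiation-based recovery) at those beliefs.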
We empirically and analytically investigate SAM in two
dynamic, complex, multi-agent domains: the ModSAF
battlefield simulation, where SAM is employed by
helicopter pilot agents; and the RoboCup soccer simulation
where SAM is used by a coach agent to monitor teams'
behavior. We show that SAM can capture failures that are
otherwise undetectable, and that distributed monitoring is
better (correct and complete detection) and simpler (no
representation of ambiguity) than a centralized scheme
(complete but incorrect, requiring representation of
ambiguity). Key contributions and novelties include: (a) a
general framework for socially-attentive monitoring, and a
deployed implementation for monitoring teamwork; (b)
rigorously proven guarantees on the applicability and
results of practical socially-attentive monitoring of
teamwork under conditions of uncertainty; (c) procedures
for diagnosis based on a teamwork relationship model.
Future work includes the use of additional relationship
models in monitoring and diagnosis, formalization of the
social diagnosis capabilities, and further demonstration of
SAM's usefulness in current domains and others.
Increasingly, multiagent systems are being designed for a variety of complex, dynamic domains. Effective agent interactions in such domains raise some of the most fundamental research challenges for agent-based systems, in teamwork, multiagent learning, and agent modelling. The RoboCup research initiative, particularly the simulation league, has been proposed to pursue such multiagent research challenges using the common testbed of simulation soccer. Despite the significant popularity of RoboCup within the research community, general lessons have not often been extracted from participation in RoboCup. This is what we attempt to do here. We have fielded two teams, ISIS'97 and ISIS'98, in RoboCup competitions. These teams have been in the top four teams in these competitions. We compare the teams and attempt to analyze and generalize the lessons learned. This analysis reveals several surprises, pointing out lessons for teamwork and for multi-agent learning.
The Robot World Cup Soccer Games and Conferences
(RoboCup) are a series of competitions and
events designed to promote the full integration of
AI and robotics research. Following the first
RoboCup, held in Nagoya, Japan, in 1997,
RoboCup-98 was held in Paris from 2–9 July, overlapping
with the real World Cup soccer competition.
RoboCup-98 included competitions in three
leagues: (1) the simulation league, (2) the real
robot small-size league, and (3) the real robot middle-
size league. Champion teams were CMUNITED-98
in both the simulation and the real robot small-size
leagues and CS-FREIBURG (Freiburg, Germany) in
the real robot middle-size league. RoboCup-98 also
included a Scientific Challenge Award, which was
given to three research groups for their simultaneous
development of fully automatic commentator
systems for the RoboCup simulator league. Over
15,000 spectators watched the games, and 120
international media provided worldwide coverage
of the competition.
RoboCup Rescue is an international project aimed at applying multiagent research to the domain of search and rescue in large-scale disasters. This paper reports our initial experiences with using the RoboCup Rescue Simulator and building agents capable of making decisions based on observation of other agents' behavior. We also plan on analyzing team behavior to obtain rules that explain this behavior.
Agents in dynamic multi-agent environments must monitor their peers to execute individual and group plans. A key open question is how much monitoring of other agents' states is required to be effective: The Monitoring Selectivity Problem. We investigate this question in the context of detecting failures in teams of cooperating agents, via Socially-Attentive Monitoring, which focuses on monitoring for failures in the social relationships between the agents. We empirically and analytically explore a family of socially-attentive teamwork monitoring algorithms in two dynamic, complex, multi-agent domains, under varying conditions of task distribution and uncertainty. We show that a centralized scheme using a complex algorithm trades correctness for completeness and requires monitoring all teammates. In contrast, a simple distributed teamwork monitoring algorithm results in correct and complete detection of teamwork failures, despite relying on limited, uncertain knowledge, and monitoring only key agents in a team. In addition, we report on the design of a socially-attentive monitoring system and demonstrate its generality in monitoring several coordination relationships, diagnosing detected failures, and both on-line and off-line applications.
Teams of heterogeneous agents working within and
alongside human organizations offer exciting possibilities for streamlining processes in ways not possible with
conventional software [4, 6]. For example, personal software assistants and information gathering and scheduling
agents can coordinate with each other to achieve a variety of coordination and organizational tasks, e.g., facilitating teaming of experts in an organization for crisis response and aiding in execution and monitoring of such a response.
Inevitably, due to the complexity of the environment,
the unpredictability of human beings, and the range of situations with which the multi-agent systems must deal, there
will be times when the system does not produce the results its users desire. In such cases human intervention
is required. Sometimes simple tweaks are required due to
system failures. In other cases, perhaps because a particular user has more experience than the system, the user will
want to “steer” the entire multi-agent system on a different course. For example, some researchers at USC/ISI,
including ourselves, are currently focused on the Electric Elves project (http://www.isi.edu/agents-united). In
this project humans will be agentified by providing agent
proxies to act on their behalf, while entities such as meeting schedulers will be active agents that can communicate
with the proxies to achieve a variety of scheduling and
rescheduling tasks. In this domain at an individual level
a user will sometimes want to override decisions of their
proxy. At a team level a human will want to fix undesirable properties of overall team behavior, such as large
breaks in a visitor’s schedule.
However, to require a human to completely take control
of an entire multi-agent system, or even a single agent,
defeats the purpose for which the agents were deployed.
Thus, while it is desirable that the multi-agent system
not assume full autonomy, neither should it be a
zero-autonomy system. Rather, some form of Adjustable
Autonomy (AA) is desired. A system supporting AA is
able to dynamically change the autonomy it has to make
and carry out decisions, i.e. the system can continuously
vary its autonomy from being completely dependent on
humans to being completely in control. An AA tool needs
to support user interaction with such a system.
To support effective user interaction with complex
multi-agent systems, we are developing a layered Adjustable Autonomy approach that allows users to intervene either with a single agent or with a team of agents.
Previous work in AA has looked at either individual
agents or whole teams but not, to our knowledge, at a layered approach to AA. The layering of the AA parallels
the levels of autonomy existing in human organizations.
Technically, the layered approach separates out issues relevant at different levels of abstraction, making it easier to
provide users with the information and tools they need to
effectively interact with a complex multi-agent system.
With the increasing interest in distributed and collaborative multi-agent applications, conflict resolution in large-scale systems becomes an important problem. Our approach
to collaborative conflict resolution is based on argumentation. To understand the feasibility and the scope of the approach, we first implemented the process in a system called
CONSA and applied it to two complex, dynamic domains.
We then modeled this approach in distributed constraint satisfaction problems (DCSP) to investigate the effect of different conflict resolution configurations, such as the degree of
shared responsibility and unshared information, and their
effects in large-scale conflict resolution via argumentation.
Our results suggest some interesting correlations between
these configurations and the performance of conflict resolution.
D. Pynadath, M. Tambe, N. Chauvat, and L. Cavedon. 2000. “Towards team-oriented programming.” In Intelligent Agents, Volume VI: Workshop on Agents, Theories, Architectures and Languages. Springer, Heidelberg, Germany.
The promise of agent-based systems is leading towards the development of autonomous, heterogeneous agents, designed by a variety of research/industrial groups and distributed over a variety of platforms and environments. Teamwork among these heterogeneous agents is critical in realizing the full potential of these systems and scaling up to the demands of large-scale applications. Indeed, to succeed in highly uncertain, complex applications, the agent teams must be both robust and flexible. Unfortunately, development of such agent teams is currently extremely difficult. This paper focuses on significantly accelerating the process of building such teams using a simplified, abstract framework called team-oriented programming (TOP). In TOP, a programmer specifies an agent organization hierarchy and the team tasks for the organization to perform, but abstracts away from the large number of coordination plans potentially necessary to ensure robust and flexible team operation. We support TOP through a distributed, domain-independent teamwork layer that integrates core teamwork coordination and communication capabilities. We have recently used TOP to integrate a diverse team of heterogeneous distributed agents in performing a complex task. We outline the current state of our TOP implementation and the outstanding issues in developing such a framework.
Within the ATAL community, the belief-desire-intention (BDI) model has come to be
possibly the best known and best studied model of practical reasoning agents. There are
several reasons for its success, but perhaps the most compelling are that the BDI model
combines a respectable philosophical model of human practical reasoning (originally
developed by Michael Bratman), a number of implementations (in the IRMA architecture and the various PRS-like systems currently available), several successful
applications (including the now-famous fault diagnosis system for the space shuttle, as
well as factory process control systems and business process management), and finally, an elegant abstract logical semantics, which has been taken up and elaborated
upon widely within the agent research community [14, 16].
However, it could be argued that the BDI model is now becoming somewhat dated:
the principles of the architecture were established in the mid-1980s, and have remained
essentially unchanged since then. With the explosion of interest in intelligent agents
and multi-agent systems that has occurred since then, a great many other architectures have been developed, which, it could be argued, address some issues that the
BDI model fundamentally fails to address. Furthermore, the focus of agent research (and AI in
general) has shifted significantly since the BDI model was originally developed. New
advances in understanding (such as Russell and Subramanian’s model of “bounded-optimal agents”) have led to radical changes in how the agents community (and
more generally, the artificial intelligence community) views its enterprise.
The purpose of this panel is therefore to establish how the BDI model stands in relation to other contemporary models of agency, and in particular where it can or should
In a complex, dynamic multi-agent setting, coherent team actions are often jeopardized by conflicts in
agents’ beliefs, plans, and actions. Despite the considerable progress in teamwork research, the challenge
of intra-team conflict resolution has remained largely unaddressed. This paper presents CONSA, a system
we are developing to resolve conflicts using argumentation-based negotiations. CONSA is focused on
exploiting the benefits of argumentation in a team setting. Thus, CONSA casts conflict resolution as a
team problem, so that the recent advances in teamwork can be brought to bear during conflict resolution
to improve argumentation flexibility. Furthermore, since teamwork conflicts sometimes involve past
teamwork, teamwork models can be exploited to provide agents with reusable argumentation knowledge.
Additionally, CONSA also includes argumentation strategies geared towards benefiting the team rather
than the individual, and techniques to reduce argumentation overhead.
Execution monitoring is a critical challenge for agents in dynamic,
complex, multi-agent domains. Existing approaches utilize goal-attentive models which monitor achievement of task goals.
However, they lack knowledge of the intended relationships
which should hold among the agents, and so fail to address key
opportunities and difficulties in multi-agent settings. We explore
SAM, a novel complementary framework for social monitoring
that utilizes knowledge of social relationships among agents in
monitoring them. We compare the performance of SAM when
monitoring is done by a single agent in a centralized fashion,
versus team monitoring in a distributed fashion. We experiment
with several SAM instantiations, algorithms that are sound and
incomplete, unsound and complete, and both sound and complete.
While a more complex algorithm appears useful in the centralized
case (but is unsound), the surprising result is that a much simpler
algorithm in the distributed case is both sound and complete. We
present a set of techniques for practical, efficient implementations
with rigorously proven performance guarantees, and a systematic evaluation.
As the agent methodology proves more and more useful in organizational enterprises, research/industrial groups
are developing autonomous, heterogeneous agents that are
distributed over a variety of platforms and environments.
Rapid integration of such distributed, heterogeneous agent
components could address large-scale problems of interest
in these enterprises. Unfortunately, rapid and robust integration remains a difficult challenge. To address this challenge, we are developing a novel teamwork-based agent integration framework. In this framework, software developers specify an agent organization through a team-oriented
program. To locate and recruit agent components for this
organization, an agent resources manager (an analogue of
a “human resources manager”) searches for agents of interest to this organization and monitors their performance
over time. TEAMCORE wrappers render the agent components in this organization team ready, thus ensuring robust,
flexible teamwork among the members of the newly formed
organization. This implemented framework promises to reduce the development effort in enterprise integration while
providing robustness due to its teamwork-based foundations. We have applied this framework to a concrete, running example, using heterogeneous, distributed agents in a
problem setting comparable to many collaborative enterprises.
In complex, dynamic and uncertain environments
extending from disaster rescue missions, to future
battlefields, to monitoring and surveillance tasks, to virtual
training environments, to future robotic space missions,
intelligent agents will play a key role in information
gathering and filtering, as well as in task planning and
execution. Although physically distributed on a variety of
platforms, these agents will interact with information
sources, network facilities, and other agents via
cyberspace, in the form of the Internet, Intranet, the secure
defense communication network, or other forms of
cyberspace. Indeed, it now appears well accepted that
cyberspace will be (if it is not already) populated by a vast
number of such distributed, individual agents.
Thus, a new distributed model of agent development has
begun to emerge. In particular, when faced with a new
task, this model prescribes working with a distributed set
of agents rather than building a centralized, large-scale,
monolithic individual agent. A centralized approach suffers
from problems in robustness (due to a single point of
failure), exhibits a lack of modularity (as a single
monolithic system), suffers from difficulty in scalability
(by not utilizing existing agents as components), and is
often a mismatch with the distributed ground reality. The
distributed approach addresses these weaknesses of the
centralized approach. Our hypothesis is that the key to the
success of such a distributed approach is teamwork in
cyberspace. That is, multiple distributed agents must
collaborate in teams in cyberspace so as to scale up to the
complexities of the complex and dynamic environments
mentioned earlier. For instance, consider an application
such as monitoring traffic violators in a city. Ideally, we
wish to be able to construct a suitable agent-team quickly,
from existing agents that can control UAVs (Unmanned
Air Vehicles), an existing 3D route-planning agent, and an
agent capable of recognizing traffic violations based on a
video input. Furthermore, by suitable substitution, we wish
to be able to quickly reconfigure the team to monitor
enemy activity on a battlefield or illegal poaching in
forests. Such rapid agent-team assembly obviates the need
to construct a monolithic agent for each new application
from scratch, preserves modularity, and appears better
suited for scalability.
Of course, such agent teamwork in cyberspace raises a
variety of important challenges. In particular, agents must
engage in robust and flexible teamwork to overcome the
uncertainties in their environment. They must also adapt by
learning from past failures. Unfortunately, constructing
robust, flexible, and adaptive agent teams is currently
extremely difficult. Current approaches suffer from a lack
of general-purpose teamwork models that would enable
agents to autonomously reason about teamwork, its
communication, and its coordination, and to improve team
performance by learning at the team level. The absence of
such teamwork models gives
rise to four types of problems. First, team construction
becomes highly labor-intensive. In particular, since agents
cannot autonomously reason about coordination, human
developers have to provide them with large numbers of
domain-specific coordination and communication plans.
These domain-specific plans are not reusable, and must be
developed anew for each new domain. Second, teams
suffer from inflexibility. In real-world domains, teams face
a variety of uncertainties, such as a team member’s
unanticipated failure in fulfilling responsibilities, team
members’ divergent beliefs about their environment
[CL91], and unexpectedly noisy or faulty communication.
Without a teamwork model, it is difficult to anticipate and
preplan for the vast number of coordination failures
possible due to such uncertainties, leading to inflexibility.
A third problem arises in team scale-up. Since creating
even small-scale teams is difficult, scaling up to larger
ones is even harder. Finally, since agents cannot reason
about teamwork, learning about teamwork has also proved
to be problematic. Thus, even after repeating a failure,
teams are often unable to avoid it in the future.
To remedy this situation and to enable rapid development
of agent teams, we are developing a novel software system
called TEAMCORE that integrates a general-purpose
teamwork model and team learning capabilities.
TEAMCORE provides these core teamwork capabilities to
individual agents by wrapping each of them in a
TEAMCORE layer; we call this individual TEAMCORE
"wrapper" a teamcore agent. A teamcore agent is a pure "social agent",
in that it is provided with only core teamwork capabilities.
Given an existing agent with domain-level action
capabilities (i.e., a domain-level agent), we make it team-ready by interfacing it with a teamcore agent. Agents made
team-ready will be able to rapidly assemble themselves
into a team in any given domain. That is, unlike past
approaches such as the Open Agent Architecture (OAA),
which integrates a distributed set of agents via a
centralized blackboard facilitator, TEAMCORE is
fundamentally a distributed team-oriented system.
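The wrapper relationship between a teamcore agent and a domain-level agent can be sketched as follows. The class and method names are hypothetical; the sketch only illustrates the division of labor described above, in which the wrapper handles all teamwork coordination and delegates domain-level work:

```python
class DomainAgent:
    """An existing agent with only domain-level action capabilities."""
    def act(self, task: str) -> str:
        return f"performing {task}"

class TeamcoreAgent:
    """A teamcore 'wrapper': a pure social agent supplying core
    teamwork capabilities on behalf of the domain-level agent it wraps."""
    def __init__(self, domain_agent: DomainAgent):
        self.domain_agent = domain_agent
        self.inbox = []

    def receive(self, message: dict) -> None:
        # Team coordination and communication are handled by the wrapper,
        # never by the wrapped domain-level agent.
        self.inbox.append(message)

    def execute(self, task: str) -> str:
        # Domain-level work is delegated to the wrapped agent.
        return self.domain_agent.act(task)

# Making an existing agent team-ready by interfacing it with a teamcore agent.
team_ready = TeamcoreAgent(DomainAgent())
team_ready.receive({"performative": "tell", "content": "role assigned"})
result = team_ready.execute("patrol sector 4")
```

Because the teamwork logic lives entirely in the wrapper, the same domain-level agent can participate in different teams without modification.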
Our goal is a TEAMCORE system capable of generating
teams that are:
1. Flexible and robust, able to surmount the uncertainties
in their environment
2. Capable of scaling up to hundreds of team members
3. Able to improve team performance by learning at the
team level and avoiding past team failures.
An initial version of the TEAMCORE system, based on the
Soar [Newell90] integrated agent architecture, is currently
up and running. A distributed set of teamcore agents can
form teams in cyberspace. The underlying communication
infrastructure is currently based on KQML. The rest of this
document now briefly describes the TEAMCORE design,
architecture and implementation.
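Since the communication infrastructure is KQML-based, the messages teamcore agents exchange take the standard KQML performative form. A minimal formatter, assuming hypothetical agent names and content, might look like:

```python
def kqml(performative: str, **params: str) -> str:
    """Format a KQML message string; the keyword fields (:sender,
    :receiver, :content, ...) follow standard KQML syntax."""
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

# A message one teamcore agent might send another over the KQML layer;
# the agent names and content expression are illustrative only.
msg = kqml("tell", sender="teamcore-1", receiver="teamcore-2",
           content="(role-assigned scout)")
# msg == "(tell :sender teamcore-1 :receiver teamcore-2 :content (role-assigned scout))"
```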
D. V. Pynadath, M. Tambe, N. Chauvat, and L. Cavedon. 1999. "Toward team-oriented programming." In Agents, Theories, Architectures and Languages (ATAL'99) workshop; to be published in Springer Verlag's Intelligent Agents VI.

Abstract. The promise of agent-based systems is leading towards the development of autonomous, heterogeneous agents, designed by a variety of research/industrial groups and distributed over a variety of platforms and environments.
Teamwork among these heterogeneous agents is critical in realizing the full potential of these systems and scaling up to the demands of large-scale applications.
Unfortunately, development of robust, flexible agent teams is currently extremely
difficult. This paper focuses on significantly accelerating the process of building
such teams using a simplified, abstract framework called team-oriented programming (TOP). In TOP, a programmer specifies an agent organization hierarchy and
the team tasks for the organization to perform, abstracting away from the innumerable coordination plans potentially necessary to ensure robust and flexible
team operation. Our TEAMCORE system supports TOP through a distributed,
domain-independent layer that integrates core teamwork coordination and communication capabilities. We have recently used TOP to integrate a diverse team
of heterogeneous distributed agents in performing a complex task. We outline the
current state of our TOP implementation and the outstanding issues in developing
such a framework.
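To illustrate what a programmer specifies in TOP, the following sketch writes down an organization hierarchy and a list of team tasks and nothing else; the subteam names and tasks are hypothetical, and the coordination plans that TOP abstracts away are deliberately absent:

```python
# Hypothetical sketch of a team-oriented program: the programmer specifies
# only the organization hierarchy and the team tasks; the domain-independent
# teamwork layer is assumed to supply coordination and communication.
organization = {
    "task-force": {                      # root of the organization hierarchy
        "flight-team": ["uav-pilot", "route-planner"],
        "monitoring-team": ["violation-detector"],
    }
}

team_tasks = [
    # (assigned subteam, abstract team task) -- no coordination plans listed.
    ("flight-team", "fly-route"),
    ("monitoring-team", "detect-violations"),
]

subteams = list(organization["task-force"])
```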