Teamwork in cyberspace: Using TEAMCORE to make agents team-ready

Citation:

Milind Tambe, W. Shen, M. Mataric, D. Goldberg, Pragnesh J. Modi, Z. Qiu, and B. Salemi. 1999. “Teamwork in cyberspace: Using TEAMCORE to make agents team-ready.” In AAAI Spring Symposium on Agents in Cyberspace.

Abstract:

In complex, dynamic, and uncertain environments, ranging from disaster rescue missions to future battlefields, monitoring and surveillance tasks, virtual training environments, and future robotic space missions, intelligent agents will play a key role in information gathering and filtering, as well as in task planning and execution. Although physically distributed on a variety of platforms, these agents will interact with information sources, network facilities, and other agents via cyberspace, in the form of the Internet, intranets, secure defense communication networks, or other networks. Indeed, it now appears well accepted that cyberspace will be (if it is not already) populated by a vast number of such distributed, individual agents. Thus, a new distributed model of agent development has begun to emerge: when faced with a new task, this model prescribes working with a distributed set of agents rather than building a centralized, large-scale, monolithic agent. A centralized approach suffers from poor robustness (due to a single point of failure), lacks modularity (as a single monolithic system), is difficult to scale (since it does not reuse existing agents as components), and is often a mismatch with the distributed ground reality. The distributed approach addresses these weaknesses.

Our hypothesis is that the key to the success of such a distributed approach is teamwork in cyberspace: multiple distributed agents must collaborate in teams so as to scale up to the complexities of the dynamic and uncertain environments mentioned above. For instance, consider an application such as monitoring traffic violators in a city.
Ideally, we wish to be able to construct a suitable agent team quickly from existing agents: agents that can control UAVs (Unmanned Air Vehicles), an existing 3D route-planning agent, and an agent capable of recognizing traffic violations from video input. Furthermore, by suitable substitution, we wish to be able to quickly reconfigure the team to monitor enemy activity on a battlefield or illegal poaching in forests. Such rapid agent-team assembly obviates the need to construct a monolithic agent for each new application from scratch, preserves modularity, and appears better suited for scalability.

Of course, such agent teamwork in cyberspace raises a variety of important challenges. In particular, agents must engage in robust and flexible teamwork to overcome the uncertainties in their environment, and they must adapt by learning from past failures. Unfortunately, constructing robust, flexible, and adaptive agent teams is currently extremely difficult. Current approaches to teamwork suffer from a lack of general-purpose teamwork models, which would enable agents to reason autonomously about communication and coordination in teamwork and to improve team performance by learning at the team level. The absence of such teamwork models gives rise to four types of problems. First, team construction becomes highly labor-intensive: since agents cannot autonomously reason about coordination, human developers have to provide them with large numbers of domain-specific coordination and communication plans, which are not reusable and must be developed anew for each new domain. Second, teams suffer from inflexibility. In real-world domains, teams face a variety of uncertainties, such as a team member’s unanticipated failure to fulfill its responsibilities, team members’ divergent beliefs about their environment [CL91], and unexpectedly noisy or faulty communication.
Without a teamwork model, it is difficult to anticipate and preplan for the vast number of coordination failures such uncertainties can cause, leading to inflexibility. A third problem arises in team scale-up: since creating even small-scale teams is difficult, scaling up to larger ones is harder still. Finally, since agents cannot reason about teamwork, learning about teamwork has also proved problematic; even after repeating a failure, teams are often unable to avoid it in the future.

To remedy this situation and to enable rapid development of agent teams, we are developing a novel software system called TEAMCORE that integrates a general-purpose teamwork model and team learning capabilities. TEAMCORE provides these core teamwork capabilities to individual agents by wrapping them; we call an individual TEAMCORE “wrapper” a teamcore agent. A teamcore agent is a pure “social agent”, in that it is provided with only core teamwork capabilities. Given an existing agent with domain-level action capabilities (i.e., a domain-level agent), it is made team-ready by interfacing it with a teamcore agent. Agents made team-ready can rapidly assemble themselves into a team in any given domain. Unlike past approaches such as the Open Agent Architecture (OAA), which provides a centralized blackboard facilitator to integrate a distributed set of agents, TEAMCORE is fundamentally a distributed team-oriented system. Our goal is a TEAMCORE system capable of generating teams that are:

1. Flexible and robust, able to surmount the uncertainties mentioned above.
2. Capable of scaling up to hundreds of team members.
3. Able to improve team performance by learning at the team level and avoiding past team failures.

An initial version of the TEAMCORE system, based on the Soar [Newell90] integrated agent architecture, is currently up and running. A distributed set of teamcore agents can form teams in cyberspace.
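To make the wrapper idea concrete, here is a minimal sketch, in Python, of how a teamcore agent might wrap an existing domain-level agent. All class and method names are hypothetical illustrations of the pattern, not the paper's actual implementation (which is built on Soar):

```python
# Hypothetical sketch of the "wrapper" pattern: a teamcore agent supplies
# generic teamwork capabilities around an existing domain-level agent.
# Names are illustrative, not taken from the TEAMCORE system itself.

class DomainAgent:
    """An existing agent with only domain-level action capabilities."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # mapping of action name -> callable

    def execute(self, action, *args):
        return self.actions[action](*args)


class TeamcoreAgent:
    """A pure 'social' wrapper: tracks joint commitments and queues the
    coordination messages that a general teamwork model would require."""
    def __init__(self, domain_agent):
        self.domain_agent = domain_agent
        self.joint_commitments = set()  # goals the whole team is committed to
        self.outbox = []                # coordination messages for teammates

    def commit(self, goal):
        self.joint_commitments.add(goal)

    def achieve(self, goal, action, *args):
        result = self.domain_agent.execute(action, *args)
        if goal in self.joint_commitments:
            # A general-purpose teamwork model obliges the agent to inform
            # teammates when a jointly committed goal has been achieved.
            self.joint_commitments.discard(goal)
            self.outbox.append(("tell", goal, "achieved"))
        return result


# Usage: make an existing route-planning agent team-ready and achieve
# a team goal; the wrapper handles the coordination message.
planner = DomainAgent("route-planner", {"plan_route": lambda a, b: [a, b]})
tc = TeamcoreAgent(planner)
tc.commit("survey-area")
route = tc.achieve("survey-area", "plan_route", "base", "zone-1")
```

The point of the design is that `TeamcoreAgent` contains no domain knowledge at all, so the same wrapper could make a UAV controller or a traffic-violation recognizer team-ready without new coordination code.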
The underlying communication infrastructure is currently based on KQML. The rest of this document briefly describes the TEAMCORE design, architecture, and implementation.
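For readers unfamiliar with KQML, its messages are s-expressions built around a performative (such as `tell` or `ask-if`) with keyword parameters like `:sender`, `:receiver`, and `:content`. The following small Python sketch formats such a message; the specific agent names and content are invented for illustration and are not from the paper:

```python
# Minimal sketch of a KQML-style message, the kind of coordination
# message teamcore agents could exchange. Field values are illustrative.

def kqml_message(performative, sender, receiver, content):
    """Format a KQML message as an s-expression string."""
    return (f"({performative} :sender {sender} :receiver {receiver} "
            f':content "{content}")')

msg = kqml_message("tell", "teamcore-1", "teamcore-2",
                   "goal survey-area achieved")
# -> (tell :sender teamcore-1 :receiver teamcore-2
#          :content "goal survey-area achieved")
```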