Publications

2000
Ranjit Nair, T. Ito, Milind Tambe, and S. Marsella. 2000. “RoboCup Rescue: A Proposal and Preliminary Experiences.” In ICMAS Workshop on RoboCup Rescue.
RoboCup Rescue is an international project aimed at applying multiagent research to the domain of search and rescue in large-scale disasters. This paper reports our initial experiences with using the RoboCup Rescue simulator and building agents capable of making decisions based on observation of other agents' behavior. We also plan to analyze team behavior to obtain rules that explain this behavior.
2000_1_teamcore_rescue.pdf
Gal Kaminka and Milind Tambe. 2000. “Robust agent teams via socially attentive monitoring.” Journal of Artificial Intelligence Research (JAIR), 12, Pp. 105-147.

Agents in dynamic multi-agent environments must monitor their peers to execute individual and group plans. A key open question is how much monitoring of other agents' states is required to be effective: The Monitoring Selectivity Problem. We investigate this question in the context of detecting failures in teams of cooperating agents, via Socially-Attentive Monitoring, which focuses on monitoring for failures in the social relationships between the agents. We empirically and analytically explore a family of socially-attentive teamwork monitoring algorithms in two dynamic, complex, multi-agent domains, under varying conditions of task distribution and uncertainty. We show that a centralized scheme using a complex algorithm trades correctness for completeness and requires monitoring all teammates. In contrast, a simple distributed teamwork monitoring algorithm results in correct and complete detection of teamwork failures, despite relying on limited, uncertain knowledge, and monitoring only key agents in a team. In addition, we report on the design of a socially-attentive monitoring system and demonstrate its generality in monitoring several coordination relationships, diagnosing detected failures, and both on-line and off-line applications.

2000_3_teamcore_jair_socmon.pdf
Milind Tambe and W. Zhang. 2000. “Towards flexible teamwork in persistent teams: extended report.” Journal of Autonomous Agents and Multi-agent Systems, special issue on 'Best of ICMAS 98,' 3, Pp. 159-183.
Teams of heterogeneous agents working within and alongside human organizations offer exciting possibilities for streamlining processes in ways not possible with conventional software [4, 6]. For example, personal software assistants and information gathering and scheduling agents can coordinate with each other to achieve a variety of coordination and organizational tasks, e.g., facilitating the teaming of experts in an organization for crisis response and aiding in the execution and monitoring of such a response [5]. Inevitably, due to the complexity of the environment, the unpredictability of human beings, and the range of situations with which the multi-agent system must deal, there will be times when the system does not produce the results its users desire. In such cases human intervention is required. Sometimes simple tweaks are required due to system failures. In other cases, perhaps because a particular user has more experience than the system, the user will want to “steer” the entire multi-agent system on a different course. For example, some researchers at USC/ISI, including ourselves, are currently focused on the Electric Elves project (http://www.isi.edu/agents-united). In this project, humans will be agentified by providing agent proxies to act on their behalf, while entities such as meeting schedulers will be active agents that can communicate with the proxies to achieve a variety of scheduling and rescheduling tasks. In this domain, at an individual level, a user will sometimes want to override decisions of their proxy. At a team level, a human will want to fix undesirable properties of overall team behavior, such as large breaks in a visitor's schedule. However, to require a human to completely take control of an entire multi-agent system, or even a single agent, defeats the purpose for which the agents were deployed. Thus, while the multi-agent system should not assume full autonomy, neither should it be a zero-autonomy system.
Rather, some form of Adjustable Autonomy (AA) is desired. A system supporting AA is able to dynamically change the autonomy it has to make and carry out decisions, i.e., the system can continuously vary its autonomy from being completely dependent on humans to being completely in control. An AA tool needs to support user interaction with such a system. To support effective user interaction with a complex multi-agent system, we are developing a layered Adjustable Autonomy approach that allows users to intervene either with a single agent or with a team of agents. Previous work in AA has looked at either individual agents or whole teams but not, to our knowledge, at a layered approach to AA. The layering of the AA parallels the levels of autonomy existing in human organizations. Technically, the layered approach separates out issues relevant at different levels of abstraction, making it easier to provide users with the information and tools they need to effectively interact with a complex multi-agent system.
2000_11_teamcore_aa_barcelona.pdf
H. Jung, M. Tambe, W. Shen, and W. Zhang. 2000. “Towards large-scale conflict resolution: Initial results.” In Proceedings of the International Conference on Multi-Agent Systems (ICMAS) (poster).
With the increasing interest in distributed and collaborative multi-agent applications, conflict resolution in large-scale systems becomes an important problem. Our approach to collaborative conflict resolution is based on argumentation. To understand the feasibility and the scope of the approach, we first implemented the process in a system called CONSA and applied it to two complex, dynamic domains. We then modeled this approach in distributed constraint satisfaction problems (DCSP) to investigate the effect of different conflict resolution configurations, such as the degree of shared responsibility and unshared information, and their effects in large-scale conflict resolution via argumentation. Our results suggest some interesting correlations between these configurations and the performance of conflict resolution.
2000_15_teamcore_icmas2000jung.pdf
D. Pynadath, M. Tambe, N. Chauvat, and L. Cavedon. 2000. “Towards team-oriented programming.” In Intelligent Agents, Volume VI: Workshop on Agents, Theories, Architectures and Languages. Springer, Heidelberg, Germany.
The promise of agent-based systems is leading towards the development of autonomous, heterogeneous agents, designed by a variety of research/industrial groups and distributed over a variety of platforms and environments. Teamwork among these heterogeneous agents is critical in realizing the full potential of these systems and scaling up to the demands of large-scale applications. Indeed, to succeed in highly uncertain, complex applications, the agent teams must be both robust and flexible. Unfortunately, development of such agent teams is currently extremely difficult. This paper focuses on significantly accelerating the process of building such teams using a simplified, abstract framework called team-oriented programming (TOP). In TOP, a programmer specifies an agent organization hierarchy and the team tasks for the organization to perform, but abstracts away from the large number of coordination plans potentially necessary to ensure robust and flexible team operation. We support TOP through a distributed, domain-independent teamwork layer that integrates core teamwork coordination and communication capabilities. We have recently used TOP to integrate a diverse team of heterogeneous distributed agents in performing a complex task. We outline the current state of our TOP implementation and the outstanding issues in developing such a framework.
2000_14_teamcore_atal2000.pdf
1999
S. Marsella, J. Adibi, Y. Alonaizan, Gal Kaminka, I. Muslea, and Milind Tambe. 1999. “On being a teammate: Experiences acquired in the design of RoboCup teams.” In International Conference on Autonomous Agents (Agents '99).
Increasingly, multiagent systems are being designed for a variety of complex, dynamic domains. Effective agent interactions in such domains raise some of the most fundamental research challenges for agent-based systems, in teamwork, multiagent learning, and agent modelling. The RoboCup research initiative, particularly the simulation league, has been proposed to pursue such multiagent research challenges, using the common testbed of simulation soccer. Despite the significant popularity of RoboCup within the research community, general lessons have not often been extracted from participation in RoboCup. This is what we attempt to do here. We have fielded two ISIS teams in RoboCup competitions; these teams have placed among the top four teams in these competitions. We compare the teams and attempt to analyze and generalize the lessons learned. This analysis reveals several surprises, pointing out lessons for teamwork and for multi-agent learning.
1999_3_teamcore_robocup_agents99.pdf
M. Georgeff, B. Pell, M. Pollack, Milind Tambe, and M. Wooldridge. 1999. “The Belief-Desire-Intention model of agency.” In Agents, Theories, Architectures and Languages (ATAL).
Within the ATAL community, the belief-desire-intention (BDI) model has come to be possibly the best known and best studied model of practical reasoning agents. There are several reasons for its success, but perhaps the most compelling are that the BDI model combines a respectable philosophical model of human practical reasoning (originally developed by Michael Bratman [1]), a number of implementations (in the IRMA architecture [2] and the various PRS-like systems currently available [7]), several successful applications (including the now-famous fault diagnosis system for the space shuttle, as well as factory process control systems and business process management [8]), and finally, an elegant abstract logical semantics, which has been taken up and elaborated upon widely within the agent research community [14, 16]. However, it could be argued that the BDI model is now becoming somewhat dated: the principles of the architecture were established in the mid-1980s, and have remained essentially unchanged since then. With the explosion of interest in intelligent agents and multi-agent systems that has occurred since then, a great many other architectures have been developed, which, it could be argued, address some issues that the BDI model fundamentally fails to. Furthermore, the focus of agent research (and AI in general) has shifted significantly since the BDI model was originally developed. New advances in understanding (such as Russell and Subramanian's model of “bounded-optimal agents” [15]) have led to radical changes in how the agents community (and more generally, the artificial intelligence community) views its enterprise. The purpose of this panel is therefore to establish how the BDI model stands in relation to other contemporary models of agency, and in particular where it can or should go next.
1999_7_teamcore_bdi_panel.pdf
Milind Tambe and H. Jung. 1999. “The benefits of arguing in a team.” AI Magazine, 20(4), Winter 1999.
In a complex, dynamic multi-agent setting, coherent team actions are often jeopardized by conflicts in agents' beliefs, plans and actions. Despite the considerable progress in teamwork research, the challenge of intra-team conflict resolution has remained largely unaddressed. This paper presents CONSA, a system we are developing to resolve conflicts using argumentation-based negotiations. CONSA is focused on exploiting the benefits of argumentation in a team setting. Thus, CONSA casts conflict resolution as a team problem, so that the recent advances in teamwork can be brought to bear during conflict resolution to improve argumentation flexibility. Furthermore, since teamwork conflicts sometimes involve past teamwork, teamwork models can be exploited to provide agents with reusable argumentation knowledge. Additionally, CONSA also includes argumentation strategies geared towards benefiting the team rather than the individual, and techniques to reduce argumentation overhead.
1999_8_teamcore_tambe00benefits.pdf
Gal Kaminka and Milind Tambe. 1999. “I'm OK, You're OK, We're OK: Experiments in Centralized and Distributed Socially Attentive Monitoring.” In International Conference on Autonomous Agents (Agents '99).
Execution monitoring is a critical challenge for agents in dynamic, complex, multi-agent domains. Existing approaches utilize goal-attentive models which monitor achievement of task goals. However, they lack knowledge of the intended relationships which should hold among the agents, and so fail to address key opportunities and difficulties in multi-agent settings. We explore SAM, a novel complementary framework for social monitoring that utilizes knowledge of social relationships among agents in monitoring them. We compare the performance of SAM when monitoring is done by a single agent in a centralized fashion, versus team monitoring in a distributed fashion. We experiment with several SAM instantiations, algorithms that are sound and incomplete, unsound and complete, and both sound and complete. While a more complex algorithm appears useful in the centralized case (but is unsound), the surprising result is that a much simpler algorithm in the distributed case is both sound and complete. We present a set of techniques for practical, efficient implementations with rigorously proven performance guarantees, and systematic empirical validation.
1999_2_teamcore_agents99_monitor.pdf
D. V. Pynadath, Milind Tambe, and N. Chauvat. 1999. “Rapid integration and coordination of heterogeneous distributed agents for collaborative enterprises.” In DARPA JFACC Symposium on Advances in Enterprise Control.
As the agent methodology proves more and more useful in organizational enterprises, research/industrial groups are developing autonomous, heterogeneous agents that are distributed over a variety of platforms and environments. Rapid integration of such distributed, heterogeneous agent components could address large-scale problems of interest in these enterprises. Unfortunately, rapid and robust integration remains a difficult challenge. To address this challenge, we are developing a novel teamwork-based agent integration framework. In this framework, software developers specify an agent organization through a team-oriented program. To locate and recruit agent components for this organization, an agent resources manager (an analogue of a “human resources manager”) searches for agents of interest to this organization and monitors their performance over time. TEAMCORE wrappers render the agent components in this organization team-ready, thus ensuring robust, flexible teamwork among the members of the newly formed organization. This implemented framework promises to reduce the development effort in enterprise integration while providing robustness due to its teamwork-based foundations. We have applied this framework to a concrete, running example, using heterogeneous, distributed agents in a problem setting comparable to many collaborative enterprises.
1999_4_teamcore_jfacc_symp.pdf
Milind Tambe, W. Shen, M. Mataric, D. Goldberg, Pragnesh J. Modi, Z. Qiu, and B. Salemi. 1999. “Teamwork in cyberspace: Using TEAMCORE to make agents team-ready.” In AAAI Spring Symposium on Agents in Cyberspace.
In complex, dynamic and uncertain environments extending from disaster rescue missions, to future battlefields, to monitoring and surveillance tasks, to virtual training environments, to future robotic space missions, intelligent agents will play a key role in information gathering and filtering, as well as in task planning and execution. Although physically distributed on a variety of platforms, these agents will interact with information sources, network facilities, and other agents via cyberspace, in the form of the Internet, Intranet, the secure defense communication network, or other forms of cyberspace. Indeed, it now appears well accepted that cyberspace will be (if it is not already) populated by a vast number of such distributed, individual agents. Thus, a new distributed model of agent development has begun to emerge. In particular, when faced with a new task, this model prescribes working with a distributed set of agents rather than building a centralized, large-scale, monolithic individual agent. A centralized approach suffers from problems in robustness (due to a single point of failure), exhibits a lack of modularity (as a single monolithic system), suffers from difficulty in scalability (by not utilizing existing agents as components), and is often a mismatch with the distributed ground reality. The distributed approach addresses these weaknesses of the centralized approach. Our hypothesis is that the key to the success of such a distributed approach is teamwork in cyberspace. That is, multiple distributed agents must collaborate in teams in cyberspace so as to scale up to the complexities of the complex and dynamic environments mentioned earlier. For instance, consider an application such as monitoring traffic violators in a city. 
Ideally, we wish to be able to construct a suitable agent-team quickly, from existing agents that can control UAVs (Unmanned Air Vehicles), an existing 3D route-planning agent, and an agent capable of recognizing traffic violations based on a video input. Furthermore, by suitable substitution, we wish to be able to quickly reconfigure the team to monitor enemy activity on a battlefield or illegal poaching in forests. Such rapid agent-team assembly obviates the need to construct a monolithic agent for each new application from scratch, preserves modularity, and appears better suited for scalability. Of course, such agent teamwork in cyberspace raises a variety of important challenges. In particular, agents must engage in robust and flexible teamwork to overcome the uncertainties in their environment. They must also adapt by learning from past failures. Unfortunately, currently, constructing robust, flexible and adaptive agent teams is extremely difficult. Current approaches to teamwork suffer from a lack of general-purpose teamwork models, which would enable agents to autonomously reason about teamwork or communication and coordination in teamwork and to improve the team performance by learning at the team level. The absence of such teamwork models gives rise to four types of problems. First, team construction becomes highly labor-intensive. In particular, since agents cannot autonomously reason about coordination, human developers have to provide them with large numbers of domain-specific coordination and communication plans. These domain-specific plans are not reusable, and must be developed anew for each new domain. Second, teams suffer from inflexibility. In real-world domains, teams face a variety of uncertainties, such as a team member’s unanticipated failure in fulfilling responsibilities, team members’ divergent beliefs about their environment [CL91], and unexpectedly noisy or faulty communication. 
Without a teamwork model, it is difficult to anticipate and preplan for the vast number of coordination failures possible due to such uncertainties, leading to inflexibility. A third problem arises in team scale-up. Since creating even small-scale teams is difficult, scaling up to larger ones is even harder. Finally, since agents cannot reason about teamwork, learning about teamwork has also proved to be problematic. Thus, even after repeating a failure, teams are often unable to avoid it in the future. To remedy this situation and to enable rapid development of agent teams, we are developing a novel software system called TEAMCORE that integrates a general-purpose teamwork model and team learning capabilities. TEAMCORE provides these core teamwork capabilities to individual agents, i.e., it wraps them with TEAMCORE. Here, we call the individual TEAMCORE “wrapper” a teamcore agent. A teamcore agent is a pure “social agent”, in that it is provided with only core teamwork capabilities. Given an existing agent with domain-level action capabilities (i.e., the domain-level agent), it is made team-ready by interfacing with a teamcore agent. Agents made team-ready will be able to rapidly assemble themselves into a team in any given domain. That is, unlike past approaches such as the open-agent-architecture (OAA), which provides a centralized blackboard facilitator to integrate a distributed set of agents, TEAMCORE is fundamentally a distributed team-oriented system. Our goal is a TEAMCORE system capable of generating teams that are: 1. flexible and robust, able to surmount the uncertainties mentioned above; 2. capable of scaling up to hundreds of team members; and 3. able to improve team performance by learning at the team level and avoiding past team failures. An initial version of the TEAMCORE system, based on the Soar [Newell90] integrated agent architecture, is currently up and running. A distributed set of teamcore agents can form teams in cyberspace.
The underlying communication infrastructure is currently based on KQML. The rest of this document now briefly describes the TEAMCORE design, architecture and implementation.
1999_9_teamcore_aaai_spring99.pdf
D. V. Pynadath, Milind Tambe, N. Chauvat, and L. Cavedon. 1999. “Toward team-oriented programming.” In Agents, Theories, Architectures and Languages (ATAL'99) workshop; to be published in Springer Verlag's 'Intelligent Agents VI'.
The promise of agent-based systems is leading towards the development of autonomous, heterogeneous agents, designed by a variety of research/industrial groups and distributed over a variety of platforms and environments. Teamwork among these heterogeneous agents is critical in realizing the full potential of these systems and scaling up to the demands of large-scale applications. Unfortunately, development of robust, flexible agent teams is currently extremely difficult. This paper focuses on significantly accelerating the process of building such teams using a simplified, abstract framework called team-oriented programming (TOP). In TOP, a programmer specifies an agent organization hierarchy and the team tasks for the organization to perform, abstracting away from the innumerable coordination plans potentially necessary to ensure robust and flexible team operation. Our TEAMCORE system supports TOP through a distributed, domain-independent layer that integrates core teamwork coordination and communication capabilities. We have recently used TOP to integrate a diverse team of heterogeneous distributed agents in performing a complex task. We outline the current state of our TOP implementation and the outstanding issues in developing such a framework.
1999_5_teamcore_atal99.pdf
T. Raines, Milind Tambe, and S. Marsella. 1999. “Towards automated team analysis: a machine learning approach.” In Third International RoboCup Competitions and Workshop.
Teamwork is becoming increasingly important in a large number of multiagent applications. With the growing importance of teamwork, there is now an increasing need for tools for analysis and evaluation of such teams. We are developing automated techniques for analyzing agent teams. In this paper we present ISAAC, an automated assistant that uses these techniques to perform post-hoc analysis of RoboCup teams. ISAAC requires little domain knowledge, instead using data mining and inductive learning tools to produce the analysis. ISAAC has been applied to all of the teams from RoboCup'97, RoboCup'98, and PRICAI'98 in a fully automated fashion. Furthermore, ISAAC is available online for use by developers of RoboCup teams.
1999_6_teamcore_isaac99.pdf
Z. Qiu, M. Tambe, and H. Jung. 1999. “Towards flexible negotiation in teamwork.” In Third International Conference on Autonomous Agents (Agents).
In a complex, dynamic multi-agent setting, coherent team actions are often jeopardized by agents' conflicting beliefs about different aspects of their environment, about resource availability, and about their own or teammates' capabilities and performance. Team members thus need to communicate and negotiate to restore team coherence. This paper focuses on the problem of negotiations in teamwork to resolve such conflicts. The basis of such negotiations is inter-agent argumentation based on Toulmin's argumentation pattern. There are several novel aspects in our approach. First, our approach to argumentation exploits recently developed general, explicit teamwork models, which make it possible to provide a generalized and reusable argumentation facility based on teamwork constraints. Second, an emphasis on collaboration in argumentation leads to novel argumentation strategies geared towards benefiting the team rather than the individual. Third, our goal, to realize argumentation in practice in an agent team, has led to decision theoretic and pruning techniques to reduce argumentation overhead. Our approach is implemented in a system called CONSA.
1999_11_teamcore_agents99_poster.pdf
Milind Tambe, Gal Kaminka, S. Marsella, I. Muslea, and T. Raines. 1999. “Two fielded teams and two experts: A RoboCup challenge response from the trenches.” In International Joint Conference on Artificial Intelligence (IJCAI '99).
The RoboCup (robot world-cup soccer) effort, initiated to stimulate research in multi-agents and robotics, has blossomed into a significant effort of international proportions. RoboCup is simultaneously a fundamental research effort and a set of competitions for testing research ideas. At IJCAI’97, a broad research challenge was issued for the RoboCup synthetic agents, covering areas of multi-agent learning, teamwork and agent modeling. This paper outlines our attack on the entire breadth of the RoboCup research challenge, on all of its categories, in the form of two fielded, contrasting RoboCup teams, and two off-line soccer analysis agents. We compare the teams and the agents to generalize the lessons learned in learning, teamwork and agent modeling.
1999_1_teamcore_ijcai99_isis.pdf
Milind Tambe, Jafar Adibi, Yasar Alonaizan, Ali Erdem, Gal Kaminka, Ion Muslea, and Stacy Marsella. 1999. “Building agent teams using an explicit teamwork model and learning.” Artificial Intelligence, 110, Pp. 215-240.

Multi-agent collaboration or teamwork and learning are two critical research challenges in a large number of multi-agent applications. These research challenges are highlighted in RoboCup, an international project focused on robotic and synthetic soccer as a common testbed for research in multi-agent systems. This article describes our approach to address these challenges, based on a team of soccer-playing agents built for the simulation league of RoboCup—the most popular of the RoboCup leagues so far.

To address the challenge of teamwork, we investigate a novel approach based on the (re)use of a domain-independent, explicit model of teamwork, an explicitly represented hierarchy of team plans and goals, and a team organization hierarchy based on roles and role-relationships. This general approach to teamwork, shown to be applicable in other domains beyond RoboCup, both reduces development time and improves teamwork flexibility. We also demonstrate the application of off-line and on-line learning to improve and specialize agents' individual skills in RoboCup. These capabilities enabled our soccer-playing team, ISIS, to successfully participate in the first international RoboCup soccer tournament (RoboCup'97) held in Nagoya, Japan, in August 1997. ISIS won the third-place prize among the more than 30 teams that participated in the simulation league.

1999_10_teamcore_tambe98building.pdf
1998
Z. Qiu and Milind Tambe. 1998. “Flexible Negotiations in Teamwork: Extended Abstract.” In AAAI Fall Symposium on Distributed Continual Planning.
In a complex, dynamic multi-agent setting, coherent team actions are often jeopardized by agents' conflicting beliefs about different aspects of their environment, about resource availability, and about their own or teammates' capabilities and performance. Team members thus need to communicate and negotiate to restore team coherence. This paper focuses on the problem of negotiations in teamwork to resolve such conflicts. The basis of such negotiations is inter-agent argumentation (based on Toulmin's argumentation structure), where agents assert their beliefs to others, with supporting arguments. One key novelty in our work is that agents' argumentation exploits previous research on general, explicit teamwork models. Based on such teamwork models, it is possible to categorize the conflicts that arise into different classes, and, more importantly, to provide a generalized and reusable argumentation facility based on teamwork constraints. Our approach is implemented in a system called CONSA (COllaborative Negotiation System based on Argumentation).
1998_6_teamcore_fss98.pdf
Milind Tambe. 1998. “Implementing agent teams in dynamic multi-agent environments.” Applied Artificial Intelligence, 12, Pp. 189-210.
Teamwork is becoming increasingly critical in multi-agent environments ranging from virtual environments for training and education, to information integration on the internet, to potential multi-robotic space missions. Teamwork in such complex, dynamic environments is more than a simple union of simultaneous individual activity, even if supplemented with preplanned coordination. Indeed in these dynamic environments, unanticipated events can easily cause a breakdown in such preplanned coordination. The central hypothesis in this article is that for effective teamwork, agents should be provided explicit representation of team goals and plans, as well as an explicit representation of a model of teamwork to support the execution of team plans. In our work, this model of teamwork takes the form of a set of domain independent rules that clearly outline an agent’s commitments and responsibilities as a participant in team activities, and thus guide the agent’s social activities while executing team plans. This article describes two implementations of agent teams based on the above principles, one for a real-world helicopter combat simulation, and one for the RoboCup soccer simulation. The article also provides a preliminary comparison of the two agent-teams to illustrate some of the strengths and weaknesses of RoboCup as a common testbed for multi-agent systems.
1998_1_teamcore_aai.pdf
1998. “Social comparison for failure detection and recovery.” In Intelligent Agents IV: Agents, Theories, Architectures and Languages (ATAL). Springer Verlag.
Plan execution monitoring in dynamic and uncertain domains is an important and difficult problem. Multi-agent environments exacerbate this problem, given that interacting and coordinated activities of multiple agents are to be monitored. Previous approaches to this problem do not detect certain classes of failures, are inflexible, and are hard to scale up. We present a novel approach, SOCFAD, to failure detection and recovery in multi-agent settings. SOCFAD is inspired by Social Comparison Theory from social psychology and includes the following key novel concepts: (a) utilizing other agents in the environment as information sources for failure detection, (b) a detection and repair method for previously undetectable failures using abductive inference based on other agents’ beliefs, and (c) a decision-theoretic approach to selecting the information acquisition medium. An analysis of SOCFAD is presented, showing that the new method is complementary to previous approaches in terms of classes of failures detected.
1998_2_teamcore_atal97fin.pdf
Milind Tambe, W. L. Johnson, and W. Shen. 1998. “Adaptive agent tracking in real-world multi-agent domains: a preliminary report.” International Journal of Human-Computer Studies, 48, Pp. 105-124.
In multi-agent environments, the task of agent tracking (i.e., tracking other agents' mental states) increases in difficulty when a tracker (tracking agent) only has an imperfect model of the trackee (tracked agent). Such model imperfections arise in many real-world situations, where a tracker faces resource constraints and imperfect information, and the trackees themselves modify their behaviors dynamically. While such model imperfections are unavoidable, a tracker must nonetheless attempt to be adaptive in its agent tracking. In this paper, we analyze some key issues in adaptive agent tracking, and describe an initial approach based on discrimination-based learning. The main idea is to identify the deficiency of a model based on prediction failures, and revise the model by using features that are critical in discriminating successful and failed episodes. Our preliminary experiments in simulated air-to-air combat environments have shown some promising results but many problems remain open for future research.
1998_9_teamcore_adaptive.pdf