Publications

1995
Milind Tambe, K. Schwamb, and P. S. Rosenbloom. 1995. “Constraints and design choices in building intelligent pilots for simulated aircraft: Extended abstract.” In AAAI Spring Symposium on 'Lessons Learned from Implemented Software Architectures for Physical Agents'.
This paper focuses on our recent research effort aimed at developing human-like, intelligent agents (virtual humans) for large-scale, interactive simulation environments (virtual reality). These simulated environments have sufficiently high fidelity and realism [11, 23] that constructing intelligent agents requires us to face many of the hard research challenges faced by physical agents in the real world -- in particular, the integration of a variety of intelligent capabilities, including goal-driven behavior, reactivity, real-time performance, planning, learning, spatial and temporal reasoning, and natural language communication. However, since this is a synthetic environment, these intelligent agents do not have to deal with issues of low-level perception and robotic control. Important applications of this agent technology can be found in areas such as education [14], manufacturing [11], entertainment [2, 12] and training.
1995_5_teamcore_writeup4.pdf
Milind Tambe. 1995. “Recursive agent and agent-group tracking in a real-time dynamic environment.” In International Conference on Multi-Agent Systems (ICMAS).
Agent tracking is an important capability an intelligent agent requires for interacting with other agents. It involves monitoring the observable actions of other agents as well as inferring their unobserved actions or high-level goals and behaviors. This paper focuses on a key challenge for agent tracking: recursive tracking of individuals or groups of agents. The paper first introduces an approach for tracking recursive agent models. To tame the resultant growth in the tracking effort and aid real-time performance, the paper then presents model sharing, an optimization that involves sharing the effort of tracking multiple models. Such shared models are dynamically unshared as needed -- in effect, a model is selectively tracked if it is dissimilar enough to require unsharing. The paper also discusses the application of recursive modeling in service of deception, and the impact of sensor imperfections. This investigation is based on our on-going effort to build intelligent pilot agents for a real-world synthetic air-combat environment.
1995_4_teamcore_tambe95recursive.pdf
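The model-sharing idea above lends itself to a compact illustration. The Python sketch below is a toy rendering under assumed interfaces (the Model and TrackerGroup classes and their methods are invented for the example, not taken from the paper): models whose predictions agree are tracked as one shared effort, and a group is split, i.e., unshared, the moment its predictions diverge from the observation.

```python
class Model:
    def __init__(self, name, predict):
        self.name = name
        self.predict = predict  # maps a world state to a predicted action

class TrackerGroup:
    """A set of candidate models tracked jointly while predictions coincide."""
    def __init__(self, models):
        self.models = models

    def step(self, state, observed_action):
        # Partition the shared models by what they predict in this state.
        by_prediction = {}
        for m in self.models:
            by_prediction.setdefault(m.predict(state), []).append(m)
        # Models consistent with the observation keep sharing one tracking
        # effort; diverging models are unshared into separate groups.
        unshared = [TrackerGroup(ms) for pred, ms in by_prediction.items()
                    if pred != observed_action]
        self.models = by_prediction.get(observed_action, [])
        return unshared

aggressive = Model("aggressive", lambda state: "turn-toward")
evasive = Model("evasive", lambda state: "turn-away")
group = TrackerGroup([aggressive, evasive])
group.step(state={}, observed_action="turn-toward")
assert [m.name for m in group.models] == ["aggressive"]
```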
Milind Tambe and P. S. Rosenbloom. 1995. “RESC: An approach for dynamic, real-time agent tracking.” In International Joint Conference on Artificial Intelligence (IJCAI), Pp. 499-522.
Agent tracking involves monitoring the observable actions of other agents as well as inferring their unobserved actions, plans, goals and behaviors. In a dynamic, real-time environment, an intelligent agent faces the challenge of tracking other agents' flexible mix of goal-driven and reactive behaviors, and doing so in real-time, despite ambiguities. This paper presents RESC (REal-time Situated Commitments), an approach that enables an intelligent agent to meet this challenge. RESC's situatedness derives from its constant uninterrupted attention to the current world situation: it always tracks other agents' on-going actions in the context of this situation. Despite ambiguities, RESC quickly commits to a single interpretation of the on-going actions (without an extensive examination of the alternatives), and uses that in service of interpretation of future actions. However, should its commitments lead to inconsistencies in tracking, it uses single-state backtracking to undo some of the commitments and repair the inconsistencies. Together, RESC's situatedness, immediate commitment, and single-state backtracking conspire in providing RESC its real-time character. RESC is implemented in the context of intelligent pilot agents participating in a real-world synthetic air-combat environment. Experimental results illustrating RESC's effectiveness are presented.
1995_3_teamcore_resc.pdf
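As a rough sketch of the control regime the abstract describes, the loop below commits immediately to the first viable interpretation and reconsiders alternatives only for the current observation. The helper functions interpretations_for and consistent are assumptions for the example, and the paper's repair of earlier commitments is elided.

```python
def track(observations, interpretations_for, consistent):
    """interpretations_for(obs) yields candidate interpretations, best first;
    consistent(history, interp) checks a candidate against prior commitments.
    Both are assumed helpers, not the paper's machinery."""
    history = []
    for obs in observations:
        candidates = iter(interpretations_for(obs))
        interp = next(candidates, None)
        # Single-state backtracking: on inconsistency, only the current
        # choice is undone and the next candidate is tried; the committed
        # history is never unwound wholesale.
        while interp is not None and not consistent(history, interp):
            interp = next(candidates, None)
        if interp is not None:
            history.append(interp)  # immediate commitment
        # (RESC's repair of earlier commitments is elided in this sketch.)
    return history
```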
W. Lewis Johnson and Milind Tambe. 1995. “Using Machine Learning To Extend Autonomous Agent Capabilities.” In Summer Computer Simulation Conference.
What kinds of knowledge can Soar/IFOR agents learn in the combat simulation environment? In our investigations so far, we have found a number of learning opportunities in our systems, which yield several types of learned rules. For example, some rules speed up the agents' decision making, while other rules reorganize the agent's tactical knowledge for the purpose of on-line explanation generation. Yet, it is also important to ask a second question: Can machine learning make a significant difference in Soar/IFOR agent performance? The main issue here is that battlefield simulations are a real-world application of AI technology. The threshold which machine learning must surpass in order to be useful in this environment is therefore quite high. It is not sufficient to show that machine learning is applicable "in principle" via small-scale demonstrations; we must also demonstrate that learning provides significant benefits that outweigh any hidden costs. Thus, the overall objective of this work is to determine how machine learning can provide practical benefits to real-world applications of artificial intelligence. Our results so far have identified instances where machine learning succeeds in meeting these various requirements, and therefore can be an important resource in agent development. We have conducted extensive learning experiments in the laboratory, and have conducted demonstrations employing agents that learn; to date, however, learning has not yet been employed in large-scale exercises. The role of machine learning in Soar/IFOR is expected to broaden as practical impediments to learning are resolved, and the capabilities that agents are expected to exhibit are broadened.
1995_6_teamcore_johnson_tambe95.pdf
Milind Tambe, K. Schwamb, and P. S. Rosenbloom. 1995. “Building intelligent pilots for simulated rotary wing aircraft.” In Conference on Computer Generated Forces and Behavioral Representation.
The Soar/IFOR project has been developing intelligent pilot agents (henceforth IPs) for participation in simulated battlefield environments. While previously the project was mainly focused on IPs for fixed-wing aircraft (FWA), more recently, the project has also started developing IPs for rotary wing aircraft (RWA). This paper presents a preliminary report on the development of IPs for RWA. It focuses on two important issues that arise in this development. The first is a requirement for reasoning about the terrain: when compared to an FWA IP, an RWA IP needs to fly much closer to the terrain and in general take advantage of the terrain for cover and concealment. The second issue relates to code and concept sharing between the FWA and RWA IPs. While sharing promises to cut down the development time for RWA IPs by taking advantage of our previous work for the FWA, it is not straightforward. The paper discusses the two issues in some detail and presents our initial resolutions of these issues.
1995_2_teamcore_rwa_final.pdf
Milind Tambe, W. L. Johnson, R. M. Jones, F. Koss, J. E. Laird, P. S. Rosenbloom, and K. Schwamb. 1995. “Intelligent Agents for Interactive Simulation Environments.” AI Magazine 16 (1), Pp. 15-39.
Interactive simulation environments constitute one of today's promising emerging technologies, with applications in areas such as education, manufacturing, entertainment and training. These environments are also rich domains for building and investigating intelligent automated agents, with requirements for the integration of a variety of agent capabilities, but without the costs and demands of low-level perceptual processing or robotic control. Our project is aimed at developing human-like, intelligent agents that can interact with each other, as well as with humans, in such virtual environments. Our current target is intelligent automated pilots for battlefield simulation environments. These are dynamic, interactive, multi-agent environments that pose interesting challenges for research on specialized agent capabilities as well as on the integration of these capabilities in the development of "complete" pilot agents. We are addressing these challenges through development of a pilot agent, called TacAir-Soar, within the Soar architecture. The purpose of this article is to provide an overview of this domain and project by analyzing the challenges that automated pilots face in battlefield simulations, describing how TacAir-Soar is successfully able to address many of them (TacAir-Soar pilots have already successfully participated in constrained air-combat simulations against expert human pilots), and discussing the issues involved in resolving the remaining research challenges.
1995_1_teamcore_tambe95intelligent.pdf
1994
Milind Tambe, R. M. Jones, J. E. Laird, P. S. Rosenbloom, and K. Schwamb. 1994. “Building believable agents for simulation environments: Extended abstract.” In AAAI Spring Symposium on 'Believable Agents'.
The goal of our research effort is to develop generic technology for intelligent automated agents in simulation environments. These agents are to behave believably like humans in these environments. In this context, believability refers to the indistinguishability of these agents from humans, given the task being performed, its scope, and the allowable mode(s) of interaction during task performance. For instance, for a given simulation task, one allowable mode of interaction with an agent may be typewritten questions and answers on a limited subject matter. Alternatively, a different allowable mode of interaction for the same (or different) task may be speech rather than typewritten words. In all these cases, believability implies that the agent must be indistinguishable from a human, given the particular mode of interaction.
1994_2_teamcore_symp_paper.pdf
Milind Tambe and Paul S. Rosenbloom. 1994. “Event Tracking for an Intelligent Automated Agent.” In Time94: An International Workshop on Temporal Representation and Reasoning.
In a dynamic, multi-agent environment, an automated intelligent agent is often faced with the possibility that other agents may instigate events that actually hinder or help the achievement of its own goals. To act intelligently in such an environment, an automated agent needs an event tracking capability to continually monitor the occurrence of such events and the temporal relationships among them.  This capability enables an agent to infer the occurrence of important unobserved events as well as obtain a better understanding of interaction among events. This paper focuses on event tracking in one complex and dynamic multi-agent environment: the air-combat simulation environment. It analyzes the challenges that an automated pilot agent must face when tracking events in this environment. This analysis reveals some novel constraints on event tracking that arise from complex multi-agent interactions. The paper proposes one solution to address these constraints, and demonstrates it using a simple re-implementation of an existing automated pilot agent.
1994_5_teamcore_time_final1994.pdf
Milind Tambe and P. S. Rosenbloom. 1994. “Event tracking in complex multiagent environments.” In Conference on Computer Generated Forces and Behavioral Representation.
This paper focuses on event tracking in one complex and dynamic multi-agent environment: the air-combat simulation environment. It analyzes the challenges that an automated pilot agent must face when tracking events in this environment. This analysis reveals three new issues that have not been addressed in previous work in this area: (i) tracking events generated by agents' flexible and reactive behaviors, (ii) tracking events in the context of continuous agent interactions, and (iii) tracking events in real-time. The paper proposes one solution to address these issues. One key idea in this solution is that the (architectural) mechanisms that an agent employs in generating its own flexible and reactive behaviors can be used to track other agents' flexible and reactive behaviors in real-time. A second key idea is the use of a world-centered representation for modeling agent interactions. The solution is demonstrated using an implementation of an automated pilot agent.
1994_1_teamcore_pr_final.pdf
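A toy example may help make the world-centered idea above concrete. In the sketch below (the event names and the inference rule are invented for illustration, not taken from the papers), observed events update a single shared world model, and an unobserved event is inferred once its known preconditions appear.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """One world-centered state shared by all tracked agents: a toy stand-in
    for the representation discussed above."""
    facts: set = field(default_factory=set)

    def observe(self, event):
        self.facts.add(event)
        self._infer()

    def _infer(self):
        # Hypothetical rule: a missile launch followed by evasive turning
        # implies the (unobserved) event that the opponent detected the launch.
        if {"launch-missile", "evasive-turn"} <= self.facts:
            self.facts.add("opponent-detected-launch")

world = WorldModel()
world.observe("launch-missile")
world.observe("evasive-turn")
assert "opponent-detected-launch" in world.facts
```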
Paul S. Rosenbloom, W. Lewis Johnson, Randolph M. Jones, Frank Koss, and John E. Laird. 1994. “Intelligent Automated Agents for Tactical Air Simulation: A Progress Report.” In Fourth Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL.

This article reports on recent progress in the development of TacAir-Soar, an intelligent automated agent for tactical air simulation. This includes progress in expanding the agent's coverage of the tactical air domain, progress in enhancing the quality of the agent's behavior, and progress in building an infrastructure for research and development in this area.

1994_4_teamcore_tacairsoar.pdf
M. Tambe and P. S. Rosenbloom. 1994. “Investigating production system representations for non-combinatorial match.” Artificial Intelligence (AIJ), 68, 1, Pp. 155-199.

Eliminating combinatorics from the match in production systems (or rule-based systems) is important for expert systems, real-time performance, machine learning (particularly with respect to the utility issue), parallel implementations and cognitive modeling. In [74], the unique-attribute representation was introduced to eliminate combinatorics from the match. However, in so doing, unique-attributes engender a sufficiently negative set of trade-offs, so that investigating whether there are alternative representations that yield better trade-offs becomes of critical importance.

This article identifies two promising spaces of such alternatives, and explores a number of the alternatives within these spaces. The first space is generated from local syntactic restrictions on working memory. Within this space, unique-attributes is shown to be the best alternative possible. The second space comes from restrictions on the search performed during the match of individual productions (match-search). In particular, this space is derived from the combination of a new, more relaxed, match formulation (instantiationless match) and a set of restrictions derived from the constraint-satisfaction literature. Within this space, new alternatives are found that outperform unique-attributes in some, but not yet all, domains.

1994_3_teamcore_aij94.pdf
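The trade-off the article studies can be pictured with a small contrast, sketched here under assumed data structures (the dict-based working memories are invented for the example, not the article's formulation): multi-attribute match must enumerate combinations of values, while unique-attribute match performs one lookup per condition and never branches.

```python
# Multi-attribute working memory: (id, attribute) -> set of values.
multi_wm = {("r1", "link"): {"r2", "r3"}, ("r2", "link"): {"r4", "r5"},
            ("r3", "link"): {"r4"}}

def multi_match(start, attrs):
    """Enumerate all bindings for a chain like (x ^link y)(y ^link z):
    cost grows with the product of the value-set sizes (combinatorial)."""
    results = [(start,)]
    for attr in attrs:
        results = [r + (v,) for r in results
                   for v in multi_wm.get((r[-1], attr), set())]
    return results

print(multi_match("r1", ["link", "link"]))  # 3 full chains from 2 conditions

# Unique-attribute working memory: (id, attribute) -> exactly one value.
uni_wm = {("r1", "link"): "r2", ("r2", "link"): "r4"}

def uni_match(start, attrs):
    """One lookup per condition; match cost is linear in chain length."""
    current = start
    for attr in attrs:
        current = uni_wm.get((current, attr))
        if current is None:
            return None
    return current

print(uni_match("r1", ["link", "link"]))  # 'r4'
```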
1993
A. Acharya and Milind Tambe. 1993. “Collection-oriented match.” In Conference on Information and Knowledge Management.
Match algorithms capable of handling large amounts of data without giving up expressiveness are a key requirement for successful integration of relational database systems and powerful rule-based systems. Algorithms that have been used for database rule systems have usually been unable to support large and complex rule sets, while the algorithms that have been used for rule-based expert systems do not scale well with data. Furthermore, these algorithms do not provide support for collection (or set) oriented production languages. This paper proposes a basic shift in the nature of match algorithms: from tuple-oriented to collection-oriented. A collection-oriented match algorithm matches each condition in a production with a collection of tuples and generates collection-oriented instantiations, i.e., instantiations that have a collection of tuples corresponding to each condition. This approach shows great promise for efficiently matching expressive productions against large amounts of data. In addition, it provides direct support for collection-oriented production languages. We have found that many existing tuple-oriented match algorithms can be easily transformed to their collection-oriented analogues. This paper presents the transformation of Rete to Collection Rete as an example and compares the two based on a set of benchmarks. Results presented in this paper show that, for large amounts of data, a relatively underoptimized implementation of Collection Rete achieves orders of magnitude improvement in time and space over an optimized version of Rete. The results establish the feasibility of collection-oriented match for integrated database-production systems.
1993_1_teamcore_cikm_final.pdf
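A minimal sketch of the tuple-versus-collection distinction, with the data and helper names invented for illustration: one collection-oriented instantiation stands in for the many tuple-oriented instantiations a conventional matcher would build.

```python
def collection_match(condition, tuples):
    """Return the whole collection of tuples satisfying one condition."""
    return [t for t in tuples if condition(t)]

employees = [("alice", "sales"), ("bob", "sales"), ("carol", "eng")]
# Tuple-oriented match would yield one instantiation per sales employee;
# collection-oriented match yields a single instantiation carrying them all.
sales_instantiation = collection_match(lambda t: t[1] == "sales", employees)
assert sales_instantiation == [("alice", "sales"), ("bob", "sales")]
```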
Wilson A. Harvey and Milind Tambe. 1993. “Experiments in Knowledge Refinement for a Large Rule-Based System.” School of Computer Science, Carnegie Mellon University, Technical Report CMU-CS-93-195.
Knowledge refinement is a central problem in the field of expert systems [Buchanan and Shortliffe, 1984]. It refers to the progressive refinement of the initial knowledge base of an expert system into a high-performance knowledge base. For rule-based systems, refinement implies the addition, deletion and modification of rules in the system so as to improve the system's empirical adequacy, i.e., its ability to reach correct conclusions in the problems it is intended to solve [Ginsberg et al., 1988]. The goal of our research effort is to understand the methodology for refining large rule-based systems, as well as to develop tools that will be useful in refining such systems. The vehicle for our investigation is SPAM, a production system (rule-based system) for the interpretation of aerial imagery [McKeown et al., 1985, McKeown et al., 1989]. It is a mature research system having over 600 productions, many of which interact with a variety of complex (non-rule-based) geometric algorithms. A typical scene analysis task requires between 50,000 to 400,000 production firings and an execution time of the order of 2 to 4 cpu hours.

Large, compute-intensive systems like SPAM impose some unique constraints on knowledge refinement. First, the problem of credit/blame assignment is complicated; it is extremely difficult to isolate a single culprit production (or a set of culprit productions) to blame for an error observed in the output. As a result, the methodology adopted in well-known systems such as SEEK and SEEK2 [Politakis and Weiss, 1984, Ginsberg et al., 1988], or KRUST [Craw and Sleeman, 1991], cannot be directly employed to refine knowledge in SPAM. These systems identify errorful output and backward-chain through the executed rules to localize the source of the error. A second problem is that SPAM's long run-times make it difficult to rely on extensive experimentation for knowledge refinement. Iterative refinement and generate-and-test methods typically require many experiments to improve the knowledge base. In particular, SPAM's long run-times prohibit a thorough search of the space of possible refinements.

Given SPAM's constraints, we have employed a bottom-up approach for knowledge refinement. Specifically, we begin by refining small portions of SPAM, and then attempt to understand the interactions of these refinements and their impact on intermediate results. Fortunately, SPAM is already divided into four phases, facilitating the identification of the "modular" pieces on which we focus our refinement efforts, as well as our evaluation efforts (discussed below). We begin by identifying gaps and/or faults within small portions of SPAM's individual phases by comparing their output to that of an expert. The knowledge is modified to more accurately match the expert's output. We then evaluate the new output to see how well the refined knowledge is performing. This method chains forward from the refined knowledge to an evaluation of its performance. This is in contrast to the backward-chaining systems cited above, which identify errorful output and reason backward to find the faulty knowledge (i.e., assign credit/blame). Backward chaining is difficult when the interactions between rules are complex. In particular, in SPAM, this implies backward-chaining through hundreds of thousands of rule firings as well as applications of complex algorithmic constraints, which would be extremely difficult.

However, forward-chaining does not obviate the credit/blame assignment problem. It introduces the problem in a new form: that of evaluating the impact of the knowledge refinements. In particular, in forward-chaining systems, given complex rule interactions, refinement in one part of the system need not improve the end result. This may occur because the refined portion may contribute to overall improvement in a relatively small number of cases, or because a second culprit component involved in the chaining may actually minimize the impact of the refinements. Hence, there is a need for evaluation and analysis of intermediate results. Of course, these intermediate results should do more than exactly evaluate the parts of the system refined as a result of interaction with the expert; they should progressively yield a better understanding of how to change the rules to better the system's overall performance. SPAM's decomposition into phases helps us to a certain extent: the endpoints of the phases serve as potential intermediate evaluation points.

In our work so far, we have focused on the second phase in SPAM: local-consistency (LCC). This phase was chosen because most of SPAM's time is spent in this phase, and because it shows the most potential for future growth. LCC performs a modified constraint satisfaction between hypotheses generated in SPAM's first phase. It applies constraints to a set of plausible hypotheses and prunes the hypotheses that are inconsistent with those constraints. For example, objects hypothesized as hangar-buildings and objects hypothesized as parking-aprons would participate in the constraint "hangar-buildings and parking-aprons are close together." This rule is realized as a distance constraint on pairs of objects hypothesized to be hangar-buildings and parking-aprons. A successful application of an LCC constraint provides support for each pair of hypotheses, and an unsuccessful application decreases support for that pair of hypotheses. In essence, each constraint in the LCC phase classifies pairs of hypotheses: either the constraint supports that pair, or it does not. (LCC and SPAM are described in further detail in Section 2.)

In working toward refining these constraints, we posed several questions:
1. What role does this constraint play in the interpretation process?
2. If the constraint does play a role, is it positive (helpful) or negative (unhelpful)?
3. If the role is positive, and the constraint is numeric, can we optimize the constraint values?
4. How effective (powerful) is this constraint?
5. What is this constraint's impact on run time?

To address these questions, we began by asking a user to manually compile a database of correct intermediate outputs: the ground-truth database. The database and the system outputs were compared. An automatic procedure was created to adjust the SPAM knowledge base so that the correspondence between the ground-truth database and the knowledge base improved. This adjusted knowledge was then re-run through the system and the entire set of intermediate outputs (refined or otherwise) was evaluated. The somewhat surprising result of this analysis was that the expert's input did not help as much as expected. This result raised questions about why the expert's input was not as helpful as anticipated, and how the results should be evaluated. In order to better understand both the interactions between constraints and the effects of modifying the knowledge base, we did a brute-force evaluation of the results, providing some answers to the questions we posed above. Namely, the distance constraints do play positive roles in SPAM's interpretation process, though they are not very powerful. More importantly, we now see that refined knowledge may apply selectively. That is, the set of objects affected by individual constraints largely overlaps, implying that SPAM could reduce computation by selectively applying constraints and still achieve similar performance. Tools eliciting the expert's input must carefully take these interactions into consideration. Finally, we can also conclude that intermediate result evaluation is not straightforward. For example, the complex structures that SPAM generates using the results of constraint application, called functional-areas, must be matched and these matches evaluated.

Having provided some of the motivation for this work, we begin by describing the SPAM image interpretation system. Following this background material, we present our refinement methodology and an analysis of our results. We then re-evaluate the methodology and discuss further experimental results. Finally, we outline some areas of future work.
1993_3_teamcore_cmucstechreport1993.pdf
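The LCC distance constraints discussed above can be pictured with a short sketch. Everything concrete here (the threshold, field names, and the unit support increment) is an assumption for illustration, not SPAM's actual constraint machinery.

```python
import math

def apply_distance_constraint(pairs, max_dist, support=1):
    """For each (hangar, apron) hypothesis pair, add support if the distance
    constraint holds and subtract it otherwise."""
    for h, a in pairs:
        delta = support if math.dist(h["pos"], a["pos"]) <= max_dist else -support
        h["support"] = h.get("support", 0) + delta
        a["support"] = a.get("support", 0) + delta

hangar = {"pos": (0.0, 0.0)}
apron = {"pos": (30.0, 40.0)}  # 50 units away, within the assumed threshold
apply_distance_constraint([(hangar, apron)], max_dist=100.0)
assert hangar["support"] == 1 and apron["support"] == 1
```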
Milind Tambe and P. S. Rosenbloom. 1993. “On the Masking Effect.” In National Conference on Artificial Intelligence (AAAI-93).

Machine learning approaches to knowledge compilation seek to improve the performance of problem-solvers by storing solutions to previously solved problems in an efficient, generalized form. The problem-solver retrieves these learned solutions in appropriate later situations to obtain results more efficiently. However, by relying on its learned knowledge to provide a solution, the problem-solver may miss an alternative solution of higher quality - one that could have been generated using the original (non-learned) problem-solving knowledge. This phenomenon is referred to as the masking effect of learning.

In this paper, we examine a sequence of possible solutions for the masking effect. Each solution refines and builds on the previous one. The final solution is based on cascaded filters. When learned knowledge is retrieved, these filters alert the system about the inappropriateness of this knowledge so that the system can then derive a better alternative solution. We analyze conditions under which this solution will perform better than the others, and present experimental data supportive of the analysis. This investigation is based on a simulated robot domain called Groundworld.

1993_2_teamcore_mask_final.pdf
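A minimal sketch of the cascaded-filter remedy summarized above, with placeholder filters and fallback planner (none of these names come from the paper or its Groundworld domain): each filter may veto a retrieved learned solution, in which case the system derives a fresh one from its original knowledge.

```python
def solve(problem, learned_rules, filters, derive_from_scratch):
    solution = learned_rules.get(problem)
    if solution is not None and all(f(problem, solution) for f in filters):
        return solution  # learned knowledge judged appropriate
    return derive_from_scratch(problem)  # fall back past the masking rule

learned = {"reach-exit": "use-corridor-A"}
blocked = {"use-corridor-A"}  # pretend perception reports a blockage
filters = [lambda problem, solution: solution not in blocked]
print(solve("reach-exit", learned, filters, lambda problem: "use-corridor-B"))
# -> use-corridor-B: the filter flags the learned rule as inappropriate
```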
1992
M. Tambe, D. Kalp, and P. Rosenbloom. 11/1992. “An efficient algorithm for production systems with linear-time match.” In International Conference on Tools of Artificial Intelligence.
Combinatorial match in production systems (rule-based systems) is problematical in several areas of production system application: real-time performance, learning new productions for performance improvement, modeling human cognition, and parallelization. The unique-attribute representation in production systems is a promising approach to eliminate match combinatorics. Earlier investigations have focused on the ability of unique-attributes to alleviate the problems caused by combinatorial match [33]. This paper reports on an additional benefit of unique-attributes: a specialized match algorithm called Uni-Rete. Uni-Rete is a specialization of the widely used Rete match algorithm for unique-attributes. The paper presents performance results for Uni-Rete, which indicate over 10-fold speedup with respect to Rete. It also discusses the implications of Uni-Rete for non-unique-attribute systems.
1992_2_teamcore_tai92.pdf
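The source of the speedup can be sketched in a few lines. In this illustrative rendering (not the actual Uni-Rete implementation), a join node's memory holds at most one token, because unique-attributes guarantee a single value per (id, attribute) pair, so each join is a constant-time lookup rather than an iteration over a set of partial matches.

```python
class UniReteNode:
    """A join node specialized for unique-attributes: the memory holds at
    most one token instead of a list of tokens (illustrative only)."""
    def __init__(self, attr):
        self.attr = attr
        self.token = None  # at most one partial match survives here

    def left_activate(self, token, wm):
        # Extend the single partial match with the unique value, if any.
        value = wm.get((token[-1], self.attr))
        self.token = token + (value,) if value is not None else None
        return self.token

wm = {("b1", "on"): "b2", ("b2", "on"): "table"}
node1, node2 = UniReteNode("on"), UniReteNode("on")
t = node1.left_activate(("b1",), wm)
print(node2.left_activate(t, wm))  # ('b1', 'b2', 'table')
```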
C. Iain Stobie, Milind Tambe, and Paul S. Rosenbloom. 11/1992. “Flexible integration of path-planning capabilities.” Mobile Robots VII.
Robots pursuing complex goals must plan paths according to several criteria of quality, including shortness, safety, speed and planning time. Many sources and kinds of knowledge, such as maps, procedures and perception, may be available or required. Both the quality criteria and sources of knowledge may vary widely over time, and in general they will interact. One approach to address this problem is to express all criteria and goals numerically in a single weighted graph, and then to search this graph to determine a path. Since this is problematic with symbolic or uncertain data and interacting criteria, we propose that what is needed instead is an integration of many kinds of planning capabilities. We describe a hybrid approach to integration, based on experiments with building simulated mobile robots using Soar, an integrated problem-solving and learning system. For flexibility, we have implemented a combination of internal planning, reactive capabilities and specialized tools. We illustrate how these components can complement each other's limitations and produce plans which integrate geometric and task knowledge.
1992_4_teamcore_path_planning.pdf
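The single-weighted-graph baseline that the abstract argues against is easy to make concrete. Below is a standard Dijkstra search over a toy graph whose edge weights collapse two criteria (length and risk) into one number via a fixed trade-off; the graph and weights are invented for the example, and the paper's point is precisely that such a collapse handles symbolic or uncertain data and interacting criteria poorly.

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: node -> list of (neighbor, cost). Returns (cost, path) or None."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None

def edge_cost(length, risk, w_len=1.0, w_risk=2.0):
    return w_len * length + w_risk * risk  # fixed trade-off between criteria

graph = {"A": [("B", edge_cost(1, 0.1)), ("C", edge_cost(2, 0.0))],
         "B": [("G", edge_cost(5, 0.9))],
         "C": [("G", edge_cost(3, 0.1))]}
print(dijkstra(graph, "A", "G"))  # safer route A-C-G wins under these weights
```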
R. Doorenbos, M. Tambe, and A. Newell. 1992. “Learning 10,000 chunks: What's it like out there?” In National Conference on Artificial Intelligence (AAAI).
This paper describes an initial exploration into large learning systems, i.e., systems that learn a large number of rules. Given the well-known utility problem in learning systems, efficiency questions are a major concern. But the questions are much broader than just efficiency, e.g., will the effectiveness of the learned rules change with scale? This investigation uses a single problem-solving and learning system, Dispatcher-Soar, to begin to get answers to these questions. Dispatcher-Soar has currently learned 10,112 new productions, on top of an initial system of 1,819 productions, so its total size is 11,931 productions. This represents one of the largest production systems in existence, and by far the largest number of rules ever learned by an AI system. This paper presents a variety of data from our experiments with Dispatcher-Soar and raises important questions for large learning systems.
1992_3_teamcore_aaai92_128.pdf
A. Acharya, M. Tambe, and A. Gupta. 1992. “Implementation of production systems on message passing computers: Simulation results and analysis.” IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS), 3 (4), Pp. 477-487.
In the past, researchers working on parallel implementations of production systems have focused on shared-memory multiprocessors and special-purpose architectures. Message-passing computers have not been given as much attention. The main reasons for this have been the large message-passing latency (as large as a few milliseconds) and high message-handling overheads (several hundred microseconds) associated with the first generation message-passing computers. These overheads were too large for parallel implementations of production systems, which require a fine-grain decomposition to obtain a significant speedup. Recent advances in interconnection network technology and processing element design, however, promise to reduce the network latency and message-handling overhead by 2-3 orders of magnitude, making these computers much more interesting for implementation of production systems. In this paper, we examine the suitability of message-passing computers for parallel implementations of production systems. We present two mappings for production systems on these computers, one targeted toward fine-grained message-passing machines and the other targeted toward medium-grained machines. We also present simulation results for the medium-grained mapping and show that it is possible to exploit the available parallelism and to obtain reasonable speedups. Finally, we perform a detailed analysis of the results and suggest solutions for some of the problems.
1992_1_teamcore_ieee1992.pdf
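As a schematic of the kind of mapping the paper studies (the routing scheme below is an assumption for the sketch, loosely inspired by the paper's mention of a concurrent distributed hash table), working-memory tuples are assigned to processing elements by hashing, so match activity for different partitions can proceed in parallel through messages.

```python
NUM_NODES = 8

def home_node(wme):
    # Which processing element stores and matches this tuple. Note: Python
    # string hashing is randomized per process, so placement varies by run;
    # a real mapping would use a stable hash.
    return hash(wme) % NUM_NODES

queues = {n: [] for n in range(NUM_NODES)}
for wme in [("b1", "on", "b2"), ("b2", "color", "red")]:
    queues[home_node(wme)].append(wme)  # "send" each tuple to its home node
```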
1991
W. Harvey, D. Kalp, M. Tambe, D. McKeown, and A. Newell. 1991. “The effectiveness of task-level parallelism for production systems.” Journal of Parallel and Distributed Computing (JPDC), 13 (4), Pp. 395-411.
Large production systems (rule-based systems) continue to suffer from extremely slow execution, which limits their utility in practical applications as well as in research settings. Most investigations in speeding up these systems have focused on match parallelism. These investigations have revealed that the total speed-up available from this source is insufficient to alleviate the problem of slow execution in large-scale production system implementations. In this paper, we focus on task-level parallelism, which is obtained by a high-level decomposition of the production system. Speed-ups obtained from task-level parallelism will multiply with the speed-ups obtained from match parallelism. The vehicle for our investigation of task-level parallelism is SPAM, a high-level vision system, implemented as a production system. SPAM is a mature research system with a typical run requiring between 50,000 and 400,000 production firings. We report very encouraging speed-ups from task-level parallelism in SPAM: our parallel implementation shows near-linear speed-ups of over 12-fold using 14 processors and points the way to substantial (50- to 100-fold) speed-ups. We present a characterization of task-level parallelism in production systems and describe our methodology for selecting and applying a particular approach to parallelize SPAM. Additionally, we report the speed-ups obtained from the use of virtual shared memory. Overall, task-level parallelism has not received much attention in the literature. Our experience illustrates that it is potentially a very important tool for speeding up large-scale production systems.
1991_1_teamcore_jpdc91.pdf
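A schematic of the high-level decomposition described above, with the task granularity invented for the example (per-region analyses standing in for SPAM's actual decomposition): independent tasks run in parallel worker processes, and any match-level parallelism inside a worker would multiply with this task-level speedup.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_region(region):
    # Placeholder for running one production-system task to completion.
    return region, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    regions = ["northwest", "northeast", "southwest", "southeast"]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(analyze_region, regions))
    print(results)
```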
Milind Tambe, Dirk Kalp, and Paul Rosenbloom. 1991. “Uni-Rete: Specializing the Rete match algorithm for the unique-attribute representation.” Carnegie Mellon University Computer Science Department Technical Report.
The combinatorial match in production systems (rule-based systems) is problematical in several areas of production system application: real-time performance, learning new productions for performance improvement, modeling human cognition, and parallelization. The unique-attribute representation is a promising approach to eliminate match combinatorics. Earlier investigations have focused on the ability of unique-attributes to alleviate the problems caused by combinatorial match [Tambe, Newell and Rosenbloom 90]. This paper reports on an additional benefit of unique-attributes: a specialized match algorithm called Uni-Rete. Uni-Rete is a specialization of the widely used Rete match algorithm for unique-attributes, and it has shown over 10-fold speedup over Rete in performing match.
1991_2_teamcore_uni_rete.pdf
