This paper focuses on our recent research effort aimed at developing human-like, intelligent agents (virtual humans) for large-scale, interactive simulation environments (virtual reality). These simulated environments have sufficiently high fidelity and realism [11, 23] that constructing intelligent agents requires us to face many of the hard research challenges faced by physical agents in the real world -- in particular, the integration of a variety of intelligent capabilities, including goal-driven behavior, reactivity, real-time performance, planning, learning, spatial and temporal reasoning, and natural language communication. However, since this is a synthetic environment, these intelligent agents do not have to deal with issues of low-level perception and robotic control. Important applications of this agent technology can be found in areas such as education, manufacturing, entertainment [2, 12] and training.
Agent tracking is an important capability an intelligent agent requires for interacting with other agents. It involves monitoring the observable actions of other agents as well as inferring their unobserved actions or high-level goals and behaviors. This paper focuses on a key challenge for agent tracking: recursive tracking of individuals or groups of agents. The paper first introduces an approach for tracking recursive agent models. To tame the resultant growth in the tracking effort and aid real-time performance, the paper then presents model sharing, an optimization that involves sharing the effort of tracking multiple models. Such shared models are dynamically unshared as needed -- in effect, a model is selectively tracked if it is dissimilar enough to require unsharing. The paper also discusses the application of recursive modeling in service of deception, and the impact of sensor imperfections. This investigation is based on our on-going effort to build intelligent pilot agents for a real-world synthetic air-combat environment.
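The model-sharing optimization lends itself to a small illustration. The sketch below is not the paper's implementation; `Model`, `ModelGroup`, and `step` are hypothetical names, and the dissimilarity test is reduced to exact disagreement on the next predicted action: models that predict alike are tracked once as a group, and a model is unshared into its own group the moment its prediction diverges.

```python
class Model:
    """Illustrative candidate model of another agent's behavior."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy  # maps an observation to a predicted action

    def predict(self, observation):
        return self.policy(observation)

class ModelGroup:
    """Candidate models that currently predict the same behavior and
    are therefore tracked with a single shared effort."""
    def __init__(self, models):
        self.models = list(models)

def step(groups, observation):
    """Advance tracking by one observation; unshare dissimilar models."""
    new_groups = []
    for group in groups:
        # One shared prediction pass per group, partitioned by outcome.
        by_prediction = {}
        for m in group.models:
            by_prediction.setdefault(m.predict(observation), []).append(m)
        # Models that still agree stay shared; the rest are selectively
        # tracked in their own, newly unshared groups.
        for models in by_prediction.values():
            new_groups.append(ModelGroup(models))
    return new_groups
```

In this toy version the cost of a step is one prediction per model but only one group bookkeeping pass per shared group, which is where the savings of sharing would accrue in a richer model hierarchy.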
Agent tracking involves monitoring the observable actions of other agents as well as inferring their unobserved actions, plans, goals and behaviors. In a dynamic, real-time environment, an intelligent agent faces the challenge of tracking other agents' flexible mix of goal-driven and reactive behaviors, and doing so in real-time, despite ambiguities. This paper presents RESC (REal-time Situated Commitments), an approach that enables an intelligent agent to meet this challenge. RESC's situatedness derives from its constant uninterrupted attention to the current world situation -- it always tracks other agents' on-going actions in the context of this situation. Despite ambiguities, RESC quickly commits to a single interpretation of the on-going actions (without an extensive examination of the alternatives), and uses that in service of interpretation of future actions. However, should its commitments lead to inconsistencies in tracking, it uses single-state backtracking to undo some of the commitments and repair the inconsistencies. Together, RESC's situatedness, immediate commitment, and single-state backtracking conspire in providing RESC its real-time character. RESC is implemented in the context of intelligent pilot agents participating in a real-world synthetic air-combat environment. Experimental results illustrating RESC's effectiveness are presented.
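RESC's commit-and-repair cycle can be caricatured in a few lines. This is a hedged sketch, not the actual RESC implementation: `track`, `interpretations`, and `consistent` are hypothetical names, and single-state backtracking is reduced to reconsidering alternative interpretations of the current action within the one maintained state, rather than searching a tree of world states.

```python
def track(observations, interpretations, consistent):
    """Commit immediately to one interpretation of each observed action.

    interpretations(obs) -> ordered candidate interpretations;
    consistent(history, interp) -> whether interp fits the commitments
    made so far. Returns the single committed interpretation history."""
    history = []
    for obs in observations:
        candidates = interpretations(obs)
        # Immediate commitment: take the first candidate without an
        # extensive examination of the alternatives.
        choice = candidates[0]
        if not consistent(history, choice):
            # Single-state backtracking: repair the inconsistency by
            # trying the remaining candidates, still in the one state.
            for alt in candidates[1:]:
                if consistent(history, alt):
                    choice = alt
                    break
        history.append(choice)
    return history
```

Because only one interpretation history is ever maintained, memory and per-observation work stay bounded, which is the intuition behind RESC's real-time character; the price is that a wrong commitment must be detected and repaired rather than avoided by search.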
What kinds of knowledge can Soar/IFOR agents learn in the combat simulation environment? In our investigations so far, we have found a number of learning opportunities in our systems, which yield several types of learned rules. For example, some rules speed up the agents' decision making, while other rules reorganize the agent's tactical knowledge for the purpose of on-line explanation generation. Yet, it is also important to ask a second question: Can machine learning make a significant difference in Soar/IFOR agent performance? The main issue here is that battlefield simulations are a real-world application of AI technology. The threshold which machine learning must surpass in order to be useful in this environment is therefore quite high. It is not sufficient to show that machine learning is applicable "in principle" via small-scale demonstrations; we must also demonstrate that learning provides significant benefits that outweigh any hidden costs. Thus, the overall objective of this work is to determine how machine learning can provide practical benefits to real-world applications of artificial intelligence. Our results so far have identified instances where machine learning succeeds in meeting these various requirements, and therefore can be an important resource in agent development. We have conducted extensive learning experiments in the laboratory, and have conducted demonstrations employing agents that learn; to date, however, learning has not yet been employed in large-scale exercises. The role of machine learning in Soar/IFOR is expected to broaden as practical impediments to learning are resolved, and the capabilities that agents are expected to exhibit are broadened.
The Soar/IFOR project has been developing intelligent pilot agents (henceforth IPs) for participation in simulated battlefield environments. While previously the project was mainly focused on IPs for fixed-wing aircraft (FWA), more recently, the project has also started developing IPs for rotary-wing aircraft (RWA). This paper presents a preliminary report on the development of IPs for RWA. It focuses on two important issues that arise in this development. The first is a requirement for reasoning about the terrain: when compared to an FWA IP, an RWA IP needs to fly much closer to the terrain and in general take advantage of the terrain for cover and concealment. The second issue relates to code and concept sharing between the FWA and RWA IPs. While sharing promises to cut down the development time for RWA IPs by taking advantage of our previous work for the FWA, it is not straightforward. The paper discusses the two issues in some detail and presents our initial resolutions of these issues.
Figure caption: There are two RWA in the scenario, just behind the ridge, indicated by the contour lines. The other vehicles in the figure are a convoy of "enemy" ground vehicles (tanks and anti-aircraft vehicles) controlled by ModSAF. The RWA are approximately 2.5 miles from the convoy. The IPs have hidden their helicopters behind the ridge (their approximate hiding area is specified to them in advance). They unmask these helicopters by popping out from behind the ridge to launch missiles at the enemy vehicles, and quickly remask (hide) by dipping behind the ridge to survive retaliatory attacks. They subsequently change their hiding position to avoid predictability when they pop out later.
Interactive simulation environments constitute one of today's promising emerging technologies, with applications in areas such as education, manufacturing, entertainment and training. These environments are also rich domains for building and investigating intelligent automated agents, with requirements for the integration of a variety of agent capabilities, but without the costs and demands of low-level perceptual processing or robotic control. Our project is aimed at developing human-like, intelligent agents that can interact with each other, as well as with humans, in such virtual environments. Our current target is intelligent automated pilots for battlefield simulation environments. These are dynamic, interactive, multi-agent environments that pose interesting challenges for research on specialized agent capabilities as well as on the integration of these capabilities in the development of "complete" pilot agents. We are addressing these challenges through development of a pilot agent, called TacAir-Soar, within the Soar architecture. The purpose of this article is to provide an overview of this domain and project by analyzing the challenges that automated pilots face in battlefield simulations, describing how TacAir-Soar is successfully able to address many of them (TacAir-Soar pilots have already successfully participated in constrained air-combat simulations against expert human pilots), and discussing the issues involved in resolving the remaining research challenges.