================================ VOLUME 17 NUMBER 4 DECEMBER 1993 ================================

1. A MULTISTRATEGY LEARNING SCHEME FOR AGENT KNOWLEDGE ACQUISITION
Diana Gordon, Naval Research Laboratory, Code 5514, Washington D.C. 20375, USA, gordon@aic.nrl.navy.mil
AND Devika Subramanian, Department of Computer Science, Cornell University, Ithaca, NY 14853, USA, devika@cs.cornell.edu
pp. 331-346
keywords: multistrategy learning, advice taking, compilation, operationalization, genetic algorithms
abstract: The problem of designing and refining task-level strategies in an embedded multiagent setting is an important open problem. To address it, we have developed a multistrategy system that combines two learning methods: operationalization of high-level advice provided by a human, and incremental refinement by a genetic algorithm. The first method generates seed rules for finer-grained refinement by the genetic algorithm. Our multistrategy learning system is evaluated on two complex simulated domains as well as with a Nomad 200 robot.

-----------------------

2. MULTISTRATEGY LEARNING IN REACTIVE CONTROL SYSTEMS FOR AUTONOMOUS ROBOTIC NAVIGATION
Ashwin Ram and Juan Carlos Santamaría, College of Computing, Georgia Institute of Technology, Atlanta, Georgia 30332-0280, U.S.A.
pp. 347-369
keywords: robot navigation, reactive control, case-based reasoning, reinforcement learning, adaptive control
abstract: This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system.
The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line case learning, resulting in an improved library of cases that captures the environmental regularities needed for on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.

-----------------------

3. COMBINING KNOWLEDGE-BASED AND INSTANCE-BASED LEARNING TO EXPLOIT QUALITATIVE KNOWLEDGE
Gerhard Widmer, Department of Medical Cybernetics and Artificial Intelligence, University of Vienna, and Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010 Vienna, Austria
pp. 371-385
keywords: artificial intelligence, multistrategy learning, instance-based learning, knowledge-based learning, qualitative knowledge, music

-----------------------

4.
EXTENDING THEORY REFINEMENT TO M-of-N RULES
Paul T. Baffes and Raymond J. Mooney, Department of Computer Sciences, University of Texas, Austin, Texas, 78712-1188 USA
pp. 387-397
keywords: artificial intelligence, multistrategy learning, theory refinement
abstract: In recent years, machine learning research has started addressing a problem known as theory refinement. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run-time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present experimental results from two real-world domains.

-----------------------

5. MULTITYPE INFERENCE IN MULTISTRATEGY TASK-ADAPTIVE LEARNING: DYNAMIC INTERLACED HIERARCHIES
Michael R. Hieb and Ryszard S. Michalski, Center for Artificial Intelligence, George Mason University, Fairfax, VA, hieb@gmu.edu and michalski@gmu.edu
pp. 399-412
keywords: multistrategy learning, inferential theory of learning, knowledge transmutation, generalization, abstraction, similization
abstract: Research on multistrategy task-adaptive learning aims at integrating all the basic inferential learning strategies: learning by deduction, induction and analogy. The implementation of such a learning system requires a knowledge representation that facilitates performing multitype inference in a seamlessly integrated fashion.
This paper presents an approach to implementing such multitype inference, based on a novel knowledge representation called Dynamic Interlaced Hierarchies (DIH). DIH integrates ideas from our research on cognitive modeling of human plausible reasoning, the Inferential Theory of Learning, and knowledge visualization. In DIH, knowledge is partitioned into a "static" part that represents relatively stable knowledge and a "dynamic" part that represents knowledge that changes relatively frequently. The static part is organized into type, part, or precedence hierarchies, while the dynamic part consists of traces that link nodes of different hierarchies. By modifying traces in different ways, the system can perform different knowledge transmutations (patterns of inference), such as generalization, abstraction and similization, and their opposites, specialization, concretion and dissimilization, respectively.
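The M-of-N rule format refined by NEITHER (entry 4) is easy to make concrete: such a rule fires when at least M of its N antecedents are satisfied. The sketch below is illustrative only; the function name and rule encoding are assumptions, not NEITHER's actual code.

```python
# Illustrative sketch of the M-of-N rule format (entry 4); the
# encoding here is an assumption, not taken from the paper.

def m_of_n_fires(m, antecedents, facts):
    """An M-of-N rule fires when at least m of its antecedents
    are present in the current set of facts."""
    return sum(1 for a in antecedents if a in facts) >= m

# Hypothetical rule: "2-of-{has_wings, has_feathers, has_beak} => bird"
bird_antecedents = ["has_wings", "has_feathers", "has_beak"]
print(m_of_n_fires(2, bird_antecedents, {"has_wings", "has_beak"}))  # True
print(m_of_n_fires(2, bird_antecedents, {"has_wings"}))              # False
```

Note that an ordinary conjunctive rule is the special case M = N and a disjunction is M = 1, which is why theories in this format can be more compact than pure conjunctive rewrites.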
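The transmutations described in entry 5 can likewise be sketched minimally: take a type hierarchy as the "static" part and statements linking its nodes as "traces"; generalization then moves a trace node up the hierarchy, and specialization moves it down. All names and data structures below are illustrative assumptions, not the DIH implementation.

```python
# Illustrative sketch of DIH-style transmutations (entry 5); the data
# structures are assumptions, not the authors' implementation.

# Type hierarchy, child -> parent (a "static" part of the knowledge base).
PARENT = {
    "robin": "bird",
    "sparrow": "bird",
    "bird": "animal",
    "dog": "animal",
}

def children(node):
    """Immediate children of a node in the type hierarchy."""
    return [c for c, p in PARENT.items() if p == node]

def generalize(trace, slot):
    """Generalization transmutation: replace one node of a trace
    (a dict of slot -> hierarchy node) by its parent, if it has one."""
    node = trace[slot]
    if node not in PARENT:
        return trace  # already at the top of the hierarchy
    return {**trace, slot: PARENT[node]}

def specialize(trace, slot, child):
    """Specialization transmutation: replace a node by one of its children."""
    if child not in children(trace[slot]):
        raise ValueError(f"{child} is not a child of {trace[slot]}")
    return {**trace, slot: child}

# A "dynamic" trace linking hierarchy nodes.
fact = {"subject": "robin", "predicate": "has-feathers"}
general = generalize(fact, "subject")            # robin -> bird
print(general)
print(specialize(general, "subject", "sparrow"))  # bird -> sparrow
```

Abstraction and concretion would work the same way over a part hierarchy, and similization over links between nodes of distinct hierarchies; this toy version only shows the up/down moves on a single type hierarchy.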