HIPAIR: Interactive Mechanism Analysis and Design Using Configuration Spaces

Leo Joskowicz, IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598. E-mail: josko@watson.ibm.com
Elisha Sacks, Computer Science Department, Princeton University, Princeton, NJ 08544. E-mail: eps@cs.princeton.edu

We present an interactive problem solving environment for reasoning about shape and motion in mechanism design. Reasoning about shape and motion plays a central role in mechanism design because mechanisms perform functions by transforming motions via part interactions. The input motion, the part shapes, and the part contacts determine the output motion. Designers must reason about the interplay between shape and motion at every step of the design cycle.

Reasoning about shape and motion is difficult and time consuming even for experienced designers. The designer must determine which features of which parts interact at each stage of the mechanism work cycle, must compute the effects of the interactions, must identify contact transitions, and must infer the overall behavior from this information. The designer must then infer shape modifications that eliminate design flaws, such as part interference and jamming, and that optimize performance. The difficulty in these tasks lies in the large number of potential contacts, in the complexity of the contact relations, and in the discontinuities induced by contact transitions.

Current computer-aided design programs support only a few aspects of reasoning about shape and motion. Drafting programs provide interactive environments for the design of part shapes, but do not support reasoning about motion. Simulation programs, which compute and animate the motions of the parts of mechanisms, reveal only one of many possible behaviors. Commercial simulators only handle linkages: mechanisms whose parts interact through permanent surface contacts, such as hinges and screws. Other packages handle specialized mechanisms, such as cams and gears. They cannot handle mechanisms whose parts interact intermittently or via point or curve contacts. Yet these higher pairs play a central role in mechanism design. Our survey of 2500 mechanisms in an engineering encyclopedia shows that 66% contain higher pairs and that 18% involve intermittent contacts.

We have developed a problem solving environment, called HIPAIR, for reasoning about shape and motion in mechanisms. The core of the environment is a module that automates the kinematic analysis of mechanisms composed of linkages and higher pairs. This module provides the computational engine for a range of tasks, including simulation, behavior description, and parametric design. It is comprehensive, robust, and fast. HIPAIR handles higher pairs with two degrees of freedom, including ones with intermittent and simultaneous contacts. This class contains 90% of 2.5D pairs and 80% of all higher pairs according to our survey.

HIPAIR computes and manipulates configuration spaces. The configuration space of a mechanism is a geometric representation of the configurations (positions and orientations) of its parts. Configuration spaces encode the relations among part shapes, part motions, and overall behavior in a concise, complete and explicit format. They simplify and systematize reasoning about shape and motion by mapping it into a uniform geometrical framework.

The videotape explains configuration spaces and illustrates how HIPAIR supports mechanism design and analysis. HIPAIR has been tested on over 100 parametric variations of 25 kinematic pairs and on a dozen multipart mechanisms, including a Fuji disposable camera with ten moving parts.

References
1. "Mechanism Comparison and Classification for Design", L. Joskowicz, Research in Engineering Design, Vol. 1, No. 2, Springer-Verlag, 1990.
2. "Computational Kinematics", L. Joskowicz and E. Sacks, Artificial Intelligence, Vol. 51, Nos. 1-3, North-Holland, 1991.
3. "Automated Modeling and Kinematic Simulation of Mechanisms", E. Sacks and L. Joskowicz, Computer-Aided Design, Vol. 25, No. 2, 1993.
4. "Configuration Space Computation for Mechanism Design", E. Sacks and L. Joskowicz, Proc. of the IEEE Int. Conference on Robotics and Automation, IEEE Computer Society Press, 1994.
5. "Mechanism Analysis and Design Using Configuration Spaces", E. Sacks and L. Joskowicz, submitted, Communications of the ACM, 1994.

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
Learning Sorting Networks By Grammars

Thomas E. Kammeyer and Richard K. Belew, CS&E Department (0114), University of California, San Diego, La Jolla, CA 92093. {tkammeye,rik}@cs.ucsd.edu

Definitions and Previous Work

A compare-exchange network, or CMPX-net, is a sequence of operations of the form [i : j], each of which operates on an array, D, of length N. The network is said to have width N. The length of the network is the number of CMPX's in the network. For each [i : j], we have i < j and i, j ∈ [0, N-1]. To apply a CMPX-net to an array, swap D[i] and D[j] if D[i] > D[j], for each [i : j] in the sequence. A sorting network (SNet) is a CMPX-net which will sort D's contents into nondecreasing order no matter how D's contents are ordered initially. A merging network (MNet) is a pair containing a CMPX-net of even width, N, and a partition of the indices into two equal-size sets or "sides." If the data on each side of the partition are sorted initially, then the output will be sorted. The space of CMPX-nets with n CMPX's and width N is large, of size C(N,2)^n. With MNets, the space is still bigger, since we must multiply the number of networks by the number of partitions, C(N, N/2).

We use a genetic algorithm (GA) to search for CMPX-nets which are SNets or MNets. The GA repeatedly samples the space of potential solutions in a series of "generations," each using the relative "fitness" of the previous generation's samples to apportion more samples in promising regions. Mutation and especially cross-over operators are applied to generate similar but novel new sample points; this process is iterated until some stopping criterion is achieved. Hillis has had encouraging success using a GA to evolve sorting networks (Hillis 1991).

In our work, we represent CMPX-nets by grammars which describe CMPX-nets. Terminals define particular CMPX sequences and nonterminals specify ways in which larger networks are built from smaller ones.
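The definitions above translate directly into code. The sketch below is ours, not the authors' implementation: it applies a CMPX-net and exhaustively tests candidate SNets and MNets via the standard 0-1 principle, which reduces SNet testing to the 2^N binary inputs and MNet testing to the sorted binary side-inputs.

```python
from itertools import product

def apply_net(net, data):
    """Apply a CMPX-net: for each [i, j] in sequence, swap D[i], D[j] if D[i] > D[j]."""
    D = list(data)
    for i, j in net:
        if D[i] > D[j]:
            D[i], D[j] = D[j], D[i]
    return D

def is_sorting_network(net, N):
    """0-1 principle: a width-N net sorts every input iff it sorts all 2^N 0-1 inputs."""
    return all(apply_net(net, bits) == sorted(bits)
               for bits in product((0, 1), repeat=N))

def is_merging_network(net, N):
    """Test an MNet whose sides are indices [0, N/2) and [N/2, N); by the 0-1
    principle, the (N/2 + 1)**2 sorted 0-1 side-inputs suffice."""
    half = N // 2
    side = lambda k: [0] * (half - k) + [1] * k   # sorted 0-1 side with k ones
    return all(apply_net(net, side(a) + side(b)) == sorted(side(a) + side(b))
               for a in range(half + 1) for b in range(half + 1))

bubble3 = [(0, 1), (1, 2), (0, 1)]    # a classic width-3 sorting network
batcher4 = [(0, 2), (1, 3), (1, 2)]   # Batcher's odd-even merge, width 4
```

For example, `is_sorting_network(bubble3, 3)` holds, while dropping the final exchange fails on the 0-1 input (1, 1, 0); `is_merging_network(batcher4, 4)` checks all nine sorted side-inputs.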
Merging Networks - Recent Results

Our most recent experiments have involved MNets for several reasons. MNets can be fully tested in polynomial time using (N/2+1)^2 input sequences; exhaustively testing SNets is much more expensive, requiring 2^N input sequences. In addition, our random MNet generation experiments have shown that despite this reduction in the cost of exhaustive testing, the problem is still very difficult.

Some MNets are "log-sequential" sorters. That is, if we cycle the output of the network back to its input log₂N times, then the data will be sorted. Recent analytic work has produced two log-sequential MNets (Dowd et al. 1989; Canfield & Williamson 1991). The network due to Canfield and Williamson is particularly interesting because it is "log spectrum" (the number of distinct j - i over all [i : j] in the network is O(log₂N)) and "log delay" (it can be parallelized to execute in time O(log₂N)). Both of these characteristics can provide critical advantages in hardware implementation.

Using our GA and grammar representation, we have found an interesting network similar to but distinct from the Canfield-Williamson network. The grammar which generates our network requires only two rules and generates an entire family of networks, one for each N = 2^i, i ≥ 2. It appears to be a log-sequential sorter, and has log spectrum and log delay. This network embeds the Canfield-Williamson network of half as many inputs several times and in an overlapping fashion.

Acknowledgements

We gratefully acknowledge many useful conversations with S. Gill Williamson.

References

Canfield, E. R., and Williamson, S. G. 1991. A sequential sorting network analogous to the Batcher merge. Linear and Multilinear Algebra 29:43-51. Patent applied for March 1991, University of California as Assignee.
Dowd, M.; Perl, Y.; Rudolph, L.; and Saks, M. 1989. The periodic balanced sorting network. J. Assoc. Computing Machinery 36(4).
Hillis, W. D. 1991. Co-evolving parasites improve simulated evolution as an optimization procedure. In Artificial Life II. Addison-Wesley Publishing Company.
The Formation of Coalitions Among Self-Interested Agents

Steven Ketchpel, Stanford University, Computer Science Department, Stanford, CA 94305. ketchpel@cs.stanford.edu

The Problem

Researchers in the multi-agent systems community of DAI assume that agents will have to interact with other agents that were designed by different designers for different goals. These diverse agents could benefit each other by collaborating, but they will do so only if the resulting deal is beneficial from each agent's point of view. One useful definition of beneficial is that of economic rationality: maximizing the agent's expected payoff in terms of a utility function. An open problem in this area is to design a protocol that allows a large pool of agents to determine which of the subsets among them can profit by working together. A solution to a coalition problem is a partition of the agents into subsets (coalitions), such that each agent in every coalition receives the most utility it can expect.

Objectives

The coalitions that are formed must be stable in the sense that none of the agents would leave their current coalitions to form a new one yielding all of the agents in that new coalition a higher utility than they obtain from their previous coalitions. Formally, the agents a1,...,aN are divided into a partition P containing coalitions C1,...,CN such that every agent is a member of exactly one coalition. The payoff to an agent is a function u(P, a) of both the partition and the agent. For P to be stable, there must not be any other partition P' forming C'1,...,C'N such that ∃C'i ∈ P' with u(P', aj) > u(P, aj) for all aj ∈ C'i. If there were such a C'i, the agents of that coalition would desert their current coalitions and form C'i.

Since the agents have different abilities, they may be making different contributions to the final outcome. Therefore, splitting the joint reward equally among the included agents might not be equitable. Finally, there are computational considerations.
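The stability condition above can be checked directly, if expensively, by enumerating candidate blocking coalitions. The sketch below is ours, not the paper's protocol; in particular, the defection model (deserters leave, and the remnants of their old coalitions stay together) and the toy equal-split utility are assumptions.

```python
from itertools import combinations

def is_stable(agents, partition, u):
    """True iff no coalition C' exists in which every member gets strictly more
    utility than under `partition`.  u(partition, agent) -> payoff.
    Exhaustive over all coalitions, so exponential: a sketch, not a protocol."""
    current = {a: u(partition, a) for a in agents}
    for size in range(1, len(agents) + 1):
        for C in combinations(agents, size):
            # Assumed defection model: members of C desert; the remnants of
            # their old coalitions stay together.
            remnants = [tuple(a for a in S if a not in C) for S in partition]
            P2 = [C] + [S for S in remnants if S]
            if all(u(P2, a) > current[a] for a in C):
                return False   # C is a blocking coalition
    return True

# Toy utility: each coalition has a value, split equally among its members
# (equal split is for illustration only; the abstract notes it may be inequitable).
value = {frozenset("ab"): 4, frozenset("abc"): 3}
def u(partition, agent):
    S = next(frozenset(c) for c in partition if agent in c)
    return value.get(S, len(S)) / len(S)   # default: 1 per member
```

With these numbers, the partition {{a, b}, {c}} is stable, while the all-singletons partition is not, since {a, b} blocks it.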
In addition to efficiency, decentralization is desirable, with more robustness in case of node failures and fewer communication bottlenecks.

Methods and Results

Although the idea of coalition formation is relatively new to the field of artificial intelligence, it has been studied by economists working in game theory. Their solutions cannot be directly applied to problems in computer science, however, since different assumptions are made and different phenomena are modeled. One approach to solving the coalition problem is to integrate work from game theory with traditional computational methods. In fact, this approach is being taken by a number of researchers [(Ketchpel 1993), (Shechory & Kraus 1993), (Zlotkin & Rosenschein 1993)]. One greedy algorithm based on the solution to the stable marriage problem (Gusfield & Irving 1989) meets the criteria of being decentralized and efficient, though it may yield results which are not stable (Ketchpel 1993). Agents pair off in coalitions that improve their utility the most, then re-enter the process as a single "agent". The process continues until no new coalitions are formed. A modification to the algorithm (Ketchpel 1994) proposes a "two agent auction" mechanism for cases where the value of collaboration is uncertain.

Many questions in this area still need to be addressed. For example, some problems have no stable solution, while others have several. Giving the agents more information about each other could introduce strategic behavior that requires more game-theoretic analysis.

References

Gusfield, Dan and Irving, Robert W. 1989. The Stable Marriage Problem: Structure and Algorithms. Cambridge, MA: MIT Press.
Ketchpel, Steven P. 1993. Coalition Formation Among Autonomous Agents. In Pre-Proceedings of the 5th European Workshop on "Modeling Autonomous Agents in a Multi-Agent World" Supplement. Neuchatel, Switzerland: Institut d'Informatique et Intelligence Artificielle, Universite de Neuchatel. [MAAMAW-93].
Ketchpel, Steven. 1994. Forming Coalitions in the Face of Uncertain Rewards. In Proceedings of the Twelfth National Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press.
Shechory, On and Kraus, Sarit. 1993. Coalition Formation Among Autonomous Agents: Strategies and Complexity. In [MAAMAW-93].
Zlotkin, Gilad and Rosenschein, Jeffrey S. 1993. One, Two, Many: Coalitions in Multi-agent Systems. In [MAAMAW-93].
Learning From Ambiguous Examples

Stephen V. Kowalski, University of Southern California, Department of Electrical Engineering-Systems, EEB 232, MC 2562, Los Angeles, California 90089-2562. skowalsk@pollux.usc.edu

Current inductive learning systems are not well suited to learning from ambiguous examples. We say that an example is ambiguous if it has multiple interpretations, only one of which may be valid. Some domains in which ambiguous learning problems can be found are natural language processing (NLP) and computer vision. An example of an ambiguous training instance with two interpretations is shown below, where ⊕ is the Exclusive-OR function and each interpretation is a conjunction of attribute values.

E1 = i1 ⊕ i2
i1 = [(cat = verb) ∧ (agree = n3sg)]
i2 = [(cat = noun) ∧ (num = singular)]

Our first thought is to transform this example into disjunctive normal form (DNF). Each conjunction would then become a new example described in a representation that can be understood by most existing inductive learners. There are several problems with this approach, two of which are described below. First, there would be a combinatorial explosion in the number of training examples and thus in the complexity of the learning algorithm. A second problem arises from the introduction of negated attribute values¹ in the training instances, which some learners (e.g. ID3 (Quinlan 1986)) are ill-equipped to handle. We also note that an ambiguous example may take multiple paths down a decision tree during classification. These paths may terminate at leaf nodes that are labeled with different classes.

A system that could learn directly from ambiguous examples would broaden the use of inductive learning in the previously mentioned domains. One problem in NLP is sentence classification. Each word in a sentence has multiple interpretations corresponding to different dictionary meanings. For example, E1 might be a representation for the word plant. A sentence could then be represented as a list of these expressions, one for each word.

Our approach to learning from ambiguous examples is to represent the training instances and the hypotheses with the same language. This language, at its highest level, employs a form of regular expression to match patterns of text. This expression consists of an ordered list of items which are either Kleene stars or expressions that consist of an exclusive disjunction² of interpretations. Each interpretation is a conjunction of attribute values, and internal disjunctions are permitted (e.g. category = noun ∨ verb). An example hypothesis is shown below.

H1 = {*, t1}
t1 = [(cat = noun) ∧ (num = singular)] ⊕ [(cat = adverb)]

This hypothesis, H1, would match any sentence where the last word can be interpreted as either a singular noun or an adverb.

Our inductive learning algorithm performs a beam search from general to more specific expressions and relies heavily on a formal definition of subsumption. Since we are using the same language for the training examples and the hypotheses that are learned, we say that a hypothesis matches a training example if the expression for the hypothesis subsumes the expression for the example. A subsumption operator is a binary Boolean operator that takes as arguments two expressions, and evaluates true if the expression on the left is more general than or has the same generality as the expression on the right, and false otherwise.

Experiments have been conducted in one of the domains of the Message Understanding Conference (MUC). Text sentences were classified by the type of information that they contained, and a dictionary-based pre-processor was used to generate the ambiguous representation. The learning algorithm was shown to successfully learn concepts that could be used to form information extraction rules.

¹ a ⊕ b = (a ∧ ¬b) ∨ (¬a ∧ b).
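One way to make the matching concrete is sketched below. The representation and the item-level subsumption definition are our assumptions, not the paper's formal one: an interpretation maps attributes to sets of allowed values (internal disjunction), an item is an exclusive disjunction of interpretations, and we take a hypothesis item to subsume an example item when every example interpretation is covered by some hypothesis interpretation.

```python
def interp_subsumes(h, e):
    """Interpretation h is at least as general as e: every attribute h
    constrains must be constrained by e to a subset of h's allowed values."""
    return all(attr in e and e[attr] <= allowed for attr, allowed in h.items())

def item_subsumes(h_item, e_item):
    """Assumed semantics: each interpretation of the (ambiguous) example item
    must be covered by some interpretation of the hypothesis item."""
    return all(any(interp_subsumes(h, e) for h in h_item) for e in e_item)

# t1 from the example hypothesis H1 = {*, t1}
t1 = [{"cat": {"noun"}, "num": {"singular"}},
      {"cat": {"adverb"}}]

# E1: the ambiguous word "plant" (verb reading or singular-noun reading)
plant = [{"cat": {"verb"}, "agree": {"n3sg"}},
         {"cat": {"noun"}, "num": {"singular"}}]

# An unambiguous singular noun
noun = [{"cat": {"noun"}, "num": {"singular"}}]

def h1_matches(sentence):
    """H1 = {*, t1}: a Kleene star followed by t1 matches any sentence
    whose last word's item is subsumed by t1."""
    return bool(sentence) and item_subsumes(t1, sentence[-1])
```

Under these assumed semantics, `h1_matches([plant, noun])` holds but `h1_matches([noun, plant])` does not, because the verb reading of "plant" is not covered by t1.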
Our current work is on integrating the learning system with a performance element, which is a rule-based information extraction system.

References
Quinlan, J. R. 1986. Induction of Decision Trees. Machine Learning 1:81-106.

² An exclusive disjunction is of the form a ⊕ b ⊕ c.
Exploiting the Environment: Urban Navigation as a Case Study

Nicholas Kushmerick, Department of Computer Science & Engineering, University of Washington, Seattle. nick@cs.washington.edu

The Situated Action approach to AI emphasizes the role of the environment in the generation and control of behavior; see (Norman 1993) for an introduction. Work to date has focused mainly on activity within spatially and temporally localized environments such as kitchens and video games (Agre & Chapman 1987; Agre & Horswill 1992). How useful is this perspective when larger-scale activities are considered? I attempt to answer this question by considering some issues related to navigation in urban environments. I identify several constraints on the structure of street grids that make navigation much easier than arbitrary graph search. The ultimate goal is a theory of the relationship between features of an urban environment and the computational complexity of navigation. This work extends the sort of analysis advocated by (Agre & Horswill 1992; Horswill 1993).

Navigation is an attractive task for studying the ways agents might exploit the structure of their environment. There is no doubt that people make and use elaborate mental representations when they navigate. But city street grids are constrained in ways that make navigational problems relatively simple, and these constraints are poorly understood. The constraints take a variety of forms, ranging from the physical structure of space to cultural phenomena such as neighborhoods.

- Street grids are physically stable. New buildings and streets are constructed, but the time scale at which street grids change is several orders of magnitude slower than the scale at which navigation occurs.
- Navigation is much simpler than arbitrary graph search because streets are topologically sensible. It is impossible to drive along a street and suddenly end up on the other side of town; culs-de-sac are relatively rare, so hill-climbing tends to work; one-way streets never completely isolate regions.
- Navigation occurs in a topographically translucent environment: some information is available by virtue of the 3-D nature of the environment, although not all. Often one can see more than just the immediate surroundings. Deciding which highway exit to take can involve simply looking at the buildings in the distance to decide when the correct exit is approaching.
- Some cities are coherently labeled. Streets might be numbered, alphabetized or follow some other regular pattern.
- Most cities are informatively labeled. Downtown Seattle is filled with signs guiding one to Interstate 5; dead-end streets have signs so indicating; freeway exits indicate the places the exit serves; fast-food restaurant billboards guide one to the nearest franchise.
- Most street grids facilitate optimization. Arterials are easily recognizable and uniformly distributed. Near-optimal navigation is thus simplified because a simple policy of using the nearest arterial is easy and effective.
- Finally, cities are composed of neighborhoods, with just a few major streets running through each. It is often sufficient to identify a goal location by neighborhood; navigation within the neighborhood can then be done with a combination of visual and exhaustive physical search.

I take navigation to be physical search over a highly constrained graph. Environmental features map directly to constraints on the graph, thereby illuminating each feature's computational significance. Thus, physical stability means that the graph is static, while topological sensibility requires that the graph be strongly connected and nearly planar. Street signs and topographic constraints are especially interesting; they are treated as node labels informing the agent about remote parts of the graph.
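The claim that hill-climbing tends to work on topologically sensible street grids can be illustrated with a toy sketch. The grid layout and the Manhattan-distance measure are our assumptions: an agent that greedily moves to the neighboring intersection closest to the goal needs no map at all on a well-connected grid, and fails exactly where the structure breaks down, at a cul-de-sac or barrier.

```python
def grid_neighbors(p, W=10, H=10):
    """Intersections adjacent to p on a W x H street grid."""
    x, y = p
    return [(a, b) for a, b in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= a < W and 0 <= b < H]

def greedy_navigate(start, goal, neighbors):
    """Hill-climb on Manhattan distance to the goal, using only locally
    visible intersections; returns the path taken, or None if stuck."""
    dist = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    pos, path = start, [start]
    while pos != goal:
        options = neighbors(pos)
        if not options:
            return None
        step = min(options, key=dist)
        if dist(step) >= dist(pos):
            return None   # a cul-de-sac: pure hill-climbing is defeated
        pos = step
        path.append(pos)
    return path
```

On the full grid the greedy path is exactly as long as the Manhattan distance, so the mapless policy is also optimal there; blocking the streets out of an intersection makes it fail, which is why culs-de-sac being rare matters.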
Further research will clarify the relationship between the environmental and graph constraints. Ultimately, I seek a formal theory of the relationship between the complexity of navigation and graph constraints derived from the environment. The following cases illustrate that the complexity of the agent and the environment are in some sense equivalent: an agent with a complete street map can navigate in an arbitrarily complicated city; an agent with a compass but no map can navigate if all streets are numbered. A theory describing these tradeoffs will help elucidate the general principles governing interactions between agents and environments in a wider variety of circumstances. To validate my analysis I have begun to formally model a complete street map of Seattle.

References
Agre, P., and Chapman, D. 1987. Pengi: An implementation of a theory of activity. In Proc. 6th Nat. Conf. on A.I.
Agre, P., and Horswill, I. 1992. Cultural support for improvisation. In Proc. 10th Nat. Conf. on A.I.
Horswill, I. 1993. Analysis of adaptation and environment. Artificial Intelligence. To appear.
Norman, D., editor. 1993. Special issue on Situated Action. Cognitive Science 17(1).
Quantitative Evaluation of the Exploration Strategies of a Mobile Robot

David Lee* and Michael Recce, Computer Science Department, University College London, Gower Street, London WC1E 6BT, U.K. davidlee@cs.ucl.ac.uk

How should a mobile robot explore its environment in order to build a high-quality world model as efficiently as possible? We address this question through experimentation with a sonar-equipped mobile robot. The robot is taken to be a delivery robot, such as could be used in an office, hospital or home. Its objective is to execute efficient collision-free paths between user-specified locations. A grid-based free-space map is generated for this purpose. This map is derived from a feature-based map, built using techniques similar to those of (Leonard & Durrant-Whyte 1992).

Before starting to evaluate an exploration strategy, it is vital to have a clear definition of map quality. Previous map-building research has typically judged map quality either by visual inspection or by measuring the robot's success in achieving its goals with a completed map. Neither approach provides an objective quality measure during map construction. There is therefore a need for a quantitative measure which can be applied during exploration. We solve this problem by defining a small number of numeric measures which together predict the robot's efficiency if it were to use its current world model to achieve its objectives.

The quality of the robot's map is measured by comparing its performance in a set of tasks using either the robot's map or an ideal map. The set of benchmark tasks is created by selecting pairs of locations such that there is an executable path between them, according to the ideal map. For each pair of locations, an attempt is then made to plan a path between them, using the robot's map. Counts are kept of the numbers of these paths which fall into each of three categories: "Impossible", "Collision" or "Feasible". A path is "Impossible" if the current map shows either that one of the endpoints is occupied or that the path is blocked. A path is a "Collision" if the current map shows the route to be possible, but in fact the planned route would cause a collision with an obstacle. A path is "Feasible" if the planned route is possible without collision. In this case, we compare the cost of executing the planned route with the "ideal" cost from the true map. The categorisation of the paths and the efficiency of the feasible paths measure different aspects of the map quality. The relative significance of these aspects can be assessed in the context of the robot's application.

These measures were used to fine-tune the map construction process. Parameter values were selected and design choices were made to maximise the map quality obtained from given sensor data. Objective quality measures were essential during this process.

We report the results of the evaluation and comparison of a number of exploration strategies by monitoring the quality measures as the robot explores a set of test environments. The first strategy tested was wall-following, a completely reactive navigation strategy in which all decisions are made on the basis of immediately available sensory data. The map is not used to control the exploration. In sparse environments the map quality increases rapidly at the start of the exploration but reaches a plateau when, for example, the robot is following a wall which has already been observed from elsewhere. An immediate challenge in designing an exploration strategy is to use the information gathered so far to eliminate such redundant movements.

* Supported by a SERC grant
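The three-way categorisation can be sketched on an occupancy grid. The grids and the BFS planner below are our toy stand-ins for the robot's free-space map and path planner: plan with the robot's map, then check the plan against the ideal map.

```python
from collections import deque

def bfs_path(blocked, start, goal, W, H):
    """Shortest 4-connected path on a W x H grid avoiding `blocked`, or None."""
    if start in blocked or goal in blocked:
        return None
    prev, queue = {start: None}, deque([start])
    while queue:
        p = queue.popleft()
        if p == goal:
            path = []
            while p is not None:
                path.append(p)
                p = prev[p]
            return path[::-1]
        x, y = p
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < W and 0 <= n[1] < H and n not in blocked and n not in prev:
                prev[n] = p
                queue.append(n)
    return None

def classify(robot_map, true_map, start, goal, W, H):
    """'Impossible' / 'Collision' / 'Feasible' for one benchmark task."""
    plan = bfs_path(robot_map, start, goal, W, H)
    if plan is None:
        return "Impossible"    # robot's map blocks an endpoint or the path
    if any(cell in true_map for cell in plan):
        return "Collision"     # planned route actually hits an obstacle
    return "Feasible"          # its cost can then be compared with the ideal cost
```

With a wall {(1,0), (1,1)} on a 3x3 grid, an empty (unexplored) robot map yields "Collision", a correct map yields "Feasible", and a map that wrongly marks the goal occupied yields "Impossible".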
We also report the results of a "Visit All" behaviour (Zelinsky 1992), which directs the robot systematically towards unknown regions, and a "Seed-Spreader" technique (Lumelsky, Mukhopadhyay, & Sun 1989), which is theoretically guaranteed to find all obstacles in the environment.

References
Leonard, J. J., and Durrant-Whyte, H. F. 1992. Directed Sonar Sensing for Mobile Robot Navigation. Kluwer Academic Publishers.
Lumelsky, V. J.; Mukhopadhyay, S.; and Sun, K. 1989. Sensor-based terrain acquisition: a "seed spreader" strategy. In IEEE/RSJ International Workshop on Intelligent Robots and Systems, 62-67.
Zelinsky, A. 1992. A mobile robot exploration algorithm. IEEE Transactions on Robotics and Automation 8(6):707-717.
Case-Based Reasoning Meets Theorem Proving

Thomas F. McDougal, University of Chicago, Computer Science, 1100 E. 58th St., Chicago, IL 60637. email: mcdougal@cs.uchicago.edu

In an earlier paper [McDougal and Hammond 1993], we reported on POLYA, a computer program which proves high school geometry theorems. POLYA is a memory-based problem-solver in the case-based planning tradition [Hammond 1989, Kolodner 1993]. It uses features of the problem (mostly from the diagram) to index into a library of plans. Some of those plans are solutions to entire problems; others apply a single inference. The plans are indexed in memory by the features which predict the relevance of the plan without necessarily guaranteeing it.

POLYA's domain is the set of problems which occur in a high school textbook (in particular, [Rhoad, Whipple and Milauskas 1988]). This domain is interesting because of the ways in which the textbook authors have structured it to help the student: problems build on earlier problems, common idioms recur, and diagrams are drawn to suggest particular ways of thinking. By using ideas from case-based planning, POLYA recognizes and exploits the regularities in the domain to avoid the NP-complete problem of proving theorems from scratch.

POLYA can reuse proofs for similar problems, where similarity is defined in terms of the diagram and the goal. POLYA does not directly apply the proof, but uses it to focus attention on those objects in the diagram which were important previously. Features in the diagram enable POLYA to determine the specific inferences to apply. Many of the idioms that help POLYA figure out the specific steps in the proof are similar to the diagram configuration schemas described in [Koedinger and Anderson 1990], except that there is an emphasis in POLYA on capturing the details of specific problems in the text. The details make it possible to choose specific inferences without going through a second stage of inference chaining, as Koedinger and Anderson's system does.
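The indexing scheme can be sketched as a feature-keyed library. The feature and plan names below are hypothetical, not POLYA's actual vocabulary: a plan is retrieved when all of its indexing features appear in the problem, which predicts relevance without guaranteeing it.

```python
# Hypothetical plan library: each plan is indexed by the features that
# predict (but do not guarantee) its relevance.
LIBRARY = [
    {"name": "base-angles-equal", "index": frozenset({"triangle", "equal-sides"})},
    {"name": "vertical-angles",   "index": frozenset({"intersecting-lines"})},
    {"name": "midsegment",        "index": frozenset({"triangle", "midpoint"})},
]

def retrieve(problem_features, library=LIBRARY):
    """Return every plan whose indexing features all occur in the problem."""
    return [p["name"] for p in library if p["index"] <= set(problem_features)]
```

For instance, a problem exhibiting {triangle, equal-sides, midpoint} retrieves base-angles-equal and midsegment but not vertical-angles; choosing among the retrieved plans, and adapting them, is where the case-based machinery described above does its work.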
An important part of POLYA's task is to extract features from the problem, and many of its plans (called search plans) serve that purpose alone. For example, if POLYA detects a triangle which appears to be isosceles (based on the numerical coordinates of the vertices), an isosceles-triangle-search-plan computes descriptions of the legs and base angles.

Several issues have come up in this project which are basic to planning in general:

Plan selection. POLYA often has a choice of several plans, some of which will result in finishing the proof more quickly than others. POLYA uses proofs from past problems to determine which plans are most likely to lead to the solution.

Flexibility. The proof for one problem rarely applies exactly to another. POLYA adapts proofs flexibly by establishing a sequence of goals based on the proof, then relying on features of the problem to trigger the appropriate plans for achieving those subgoals.

Interrupts. While in the middle of one plan, POLYA will sometimes trigger a plan that would complete the proof immediately. The current algorithm does not allow plans to be interrupted.

These issues come up in many other planning domains. Plan selection is important when an agent has multiple goals. Flexibility is important whenever the planner has imperfect knowledge about the world. Interrupts are important if disaster strikes, demanding immediate action. As we address these issues in the geometry domain we hope to contribute to a general theory of planning and action in regular domains.

References
Hammond, K. 1989. Case-Based Planning: Viewing Planning as a Memory Task. Academic Press.
Koedinger, K. R. and Anderson, J. R. 1990. Abstract planning and perceptual chunks: Elements of expertise in geometry. Cognitive Science, 14, 511-550.
Kolodner, J. 1993. Case-Based Reasoning. Morgan Kaufmann.
McDougal, T. F. and Hammond, K. J. 1993. Representing and using procedural knowledge to build geometry proofs. In Proceedings of the Eleventh National Conference on Artificial Intelligence. MIT Press.
Rhoad, R., Whipple, R., and Milauskas, G. 1988. Geometry for Enjoyment and Challenge. McDougal, Littell.
Evaluation of Machine Condition using Neural Networks

Condition monitoring is a developing discipline in machinery maintenance. Data such as vibration levels, temperatures, oil analysis values, etc., are acquired from plant, and analyzed to determine the condition of the plant at the time of measurement. Software packages are currently available to allow graphical display of the data, with varying levels of diagnostic tools available to assist engineers in performing data analysis. This abstract outlines the development of a condition monitoring system at Blyth Power Station, owned by National Power, the major electricity generating company in the United Kingdom. The abstract goes on to describe research into the development of a data analysis system employing neural networks trained to recognise machinery defects.

Blyth Power Station is located on the North East coast of England, with a generating capacity of 1,180MW, and is one of the oldest coal-fired sites in the United Kingdom. The Station recognised the requirement to move away from traditional, manpower-intensive strategies of planned or breakdown maintenance, towards a condition-based maintenance policy for critical areas of auxiliary plant [1]. To achieve this, the Station entered into a collaborative agreement with the University of Sunderland to develop and implement a condition monitoring system for use within the Station, and to investigate the use of artificial intelligence in data analysis.

Figure 1 shows a typical frequency spectrum acquired from a Cooper rolling element bearing. The spectrum is obtained by performing a Fast Fourier Transform on the time domain vibration signal, after a band pass filter and envelope filter have been applied to it. In this spectrum, a large peak is visible at the frequency specific to an outer race defect (ORD). This clear and well-defined spectrum shows distinct characteristics relating to the bearing.
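The spectral signature of a bearing defect can be illustrated with a pure-Python toy. All numbers below are made up, and a real system band-pass and envelope filters the measured signal first: we synthesize an impulse train at a hypothetical outer-race defect frequency, let each impact ring down, and locate the dominant spectral peak.

```python
import math, random

FS = 1000                      # sample rate in Hz (assumed)
N = 1000                       # one second of signal -> 1 Hz frequency resolution
ORD_HZ = 91                    # hypothetical outer race defect frequency, Hz

period = round(FS / ORD_HZ)    # samples between impacts
impacts = [1.0 if n % period == 0 else 0.0 for n in range(N)]

# Each impact excites a structural resonance that rings down (toy decay).
decay = [math.exp(-k / 4.0) for k in range(20)]
rng = random.Random(0)
signal = [sum(decay[k] * impacts[n - k] for k in range(len(decay)) if n - k >= 0)
          + 0.05 * rng.gauss(0, 1)          # broadband measurement noise
          for n in range(N)]

def dft_magnitude(x, f, fs):
    """Magnitude of the DFT of x at frequency f (naive sum, for clarity)."""
    re = sum(v * math.cos(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    return math.hypot(re, im)

# Scan integer frequencies well away from DC for the dominant peak.
peak_hz = max(range(10, 301), key=lambda f: dft_magnitude(signal, f, FS))
```

The located `peak_hz` lands within a bin of the defect frequency, mirroring the large ORD peak in Figure 1; mapping such peaks to defect types is the diagnosis step that an engineer, or a trained network, performs.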
The plant is readily accessible, and the transducer can be placed close to the bearing being monitored. In many cases, the picture is far more vague, through background noise, difficulty of access, low frequency applications, etc.; for example, data collected from coal mill gearboxes presents a much more complex problem for analysis [2].

Neural Networks for Data Analysis

Initial work employed the well-documented multi-layer perceptron using back-propagation [2]. This network topology was applied to the analysis of data from rolling element bearings. Versions of the Neural Bearing Analyzer (NBA) were developed; the first took a limited amount of information from the frequency spectrum as its inputs, and was trained to recognise degrees of severity of defect. The networks converged without difficulty, and their classifications agreed with those of the monitoring engineer in 93% of cases. This topology is now being used to analyse more complex data, and other topologies such as Self-Organising Maps are also being investigated [2].

References
1. MacIntyre, J.; Smith, P.; Wiblin, C. 1993. Development of a Condition Monitoring System for Off-Line Monitoring of Auxiliary Plant for National Power. In Proceedings of the 5th International Conference on Condition Monitoring and Diagnostic Engineering Management, University of the West of England, United Kingdom.
2. MacIntyre, J.; Smith, P.; Harris, T.; Brason, A. 1993. Application of Neural Networks to the Analysis and Interpretation of Off-Line Condition Monitoring Data. In Proceedings of the 6th International Symposium on Artificial Intelligence, Monterrey, Mexico.
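A toy version of such a multi-layer perceptron trained with back-propagation is sketched below. The 8-bin "spectra" and the defect rule are my invented stand-ins for the NBA's real inputs; only the training mechanism (one sigmoid hidden layer, batch back-propagation) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: 8-bin "spectra"; class 1 iff the hypothetical
# defect bin (index 2) is high. Not the Station's real data.
X = rng.random((200, 8))
y = (X[:, 2] > 0.5).astype(float).reshape(-1, 1)

# One hidden layer of sigmoid units, trained by plain batch
# back-propagation on a cross-entropy loss.
W1 = rng.normal(0.0, 0.5, (8, 6)); b1 = np.zeros(6)
W2 = rng.normal(0.0, 0.5, (6, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 3.0
for _ in range(4000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    p = sigmoid(h @ W2 + b2)
    g2 = (p - y) / len(X)               # output-layer error signal
    g1 = (g2 @ W2.T) * h * (1.0 - h)    # back-propagated hidden error
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(axis=0)
    W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(axis=0)

accuracy = float(((p > 0.5) == (y > 0.5)).mean())
```

On this separable toy task the network learns the defect rule almost perfectly; the real NBA inputs are of course far noisier.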
Kavi Mahesh
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280 USA
mahesh@cc.gatech.edu

Natural language understanding programs get bogged down by the multiplicity of possible syntactic structures while processing real world texts that human understanders do not have much difficulty with. In this work, I analyze the relationships between parsing strategies, the degree of local ambiguity encountered by them, and semantic feedback to syntax, and propose a parsing algorithm called Head-Signaled Left Corner Parsing (HSLC) that minimizes local ambiguities while supporting interactive syntactic and semantic analysis. Such a parser has been implemented in a sentence understanding program called COMPERE (Eiselt, Mahesh, & Holbrook 1993).

A parser could quickly eliminate many possible syntactic structures for a sentence by using (a) the grammar to generate syntactic expectations, (b) structural preferences such as Minimal Attachment or Right Association, (c) feedback from semantic analysis, (d) statistical preferences based on a corpus, or (e) case-based preferences arising from prior texts about stereotypical situations. None of the above strategies suffices by itself for handling real text.

In this work, I assume that (a) we must strive to design parsing strategies capable of analyzing general, real life text, (b) it is beneficial to produce immediate, incremental interpretations ('meanings') of incoming texts, and (c) semantic (and pragmatic) analysis can provide useful feedback to syntax without requiring unbounded resources. Given these, my objective is to design a parsing strategy that makes the best use of linguistic preferences, both grammatical and structural, as well as semantic and conceptual preferences, while minimizing local ambiguities. Strong cognitive motivations for devising such a solution were presented earlier in (Eiselt, Mahesh, & Holbrook 1993).
The question this leads to is: when should the parser interact with the semantic analyzer? It should interact only when such interaction is beneficial to one or both, that is, when one can provide some information to the other to help reduce the number of choices being considered. Parsing strategies can be distinguished along a dimension of "eagerness" depending on when they make commitments to a syntactic unit and are ready for interaction with semantics. At one end of the spectrum lies pure bottom-up parsing, which is too circumspect and precludes the use of syntactic expectations. Pure top-down parsing, at the other end, is too eager and leads to unwarranted backtracking. Such nondeterminism is a problem for incremental interaction with semantics. A combination strategy called Left Corner (LC) Parsing has been shown to be a good middle ground for using top-down expectations while avoiding unnecessary early commitments (Abney & Johnson 1991).

LC Parsing captures the best of both bottom-up and top-down parsing by processing the leftmost constituent of a phrase bottom-up and predicting subsequent constituents top-down from the parent constituent proposed using the leftmost. LC parsing, however, defines a range of strategies in the spectrum depending on the arc enumeration strategy employed, an important distinction between different LC parsers. In Arc Eager LC (AELC) Parsing, a node in the parse tree is linked to its parent without waiting for all its children. Arc Standard LC (ASLC) Parsing, on the other hand, waits for all the children before making attachments. In this work, I propose an intermediate point in the LC Parsing spectrum between ASLC and AELC strategies and argue that the proposed point, which I call Head-Signaled LC Parsing (HSLC), turns out to be the optimal point for efficient interaction with semantics.
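The difference in eagerness can be made concrete with a small illustration (my sketch, not COMPERE's code) of when an NP becomes available to semantics for the phrase "the report on bearings": ASLC waits for the full NP including the trailing PP adjunct, while HSLC announces the NP as soon as its required unit, the head noun, is seen.

```python
# Toy token stream with pre-assigned categories (a hypothetical example).
TOKENS = [("the", "Det"), ("report", "N"), ("on", "P"), ("bearings", "N")]
REQUIRED = {"NP": "N", "VP": "V", "PP": "NP"}  # required units from the text

def np_announce_position(strategy):
    """Index of the token after which the initial NP is announced
    to the semantic analyzer."""
    if strategy == "HSLC":
        # Announce as soon as the required child (the head noun) is parsed.
        return next(i for i, (_, cat) in enumerate(TOKENS)
                    if cat == REQUIRED["NP"])
    # ASLC: only after the rightmost child, i.e., the whole adjunct.
    return len(TOKENS) - 1

assert np_announce_position("HSLC") == 1   # right after "report"
assert np_announce_position("ASLC") == 3   # only after "bearings"
```

The two-token gap is exactly the window in which HSLC, but not ASLC, can already exploit semantic feedback about the NP.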
In this strategy, a node is linked to its parent as soon as a particular required child of the node is analyzed, without waiting for other children to its right. This required unit is predefined syntactically for each phrase; it is not the same as the standard 'semantic head'. (E.g., N is the required unit for NP, V for VP, and NP for PP.) HSLC makes the parser wait for essential units before interacting with semantics but does not wait for optional adjuncts (such as PP adjuncts to NPs or VPs).

In conclusion, while LC Parsing affords incremental parsing and optimizes memory requirements, pure LC parsing does not generate syntactic units in an order suitable for incremental semantic processing. HSLC, being a hybrid of LC and head-driven parsing strategies, yields the right mix to enable incremental interaction with semantics and reduce the number of interpretations explored. Empirical evaluation of the HSLC algorithm in the COMPERE system is currently in progress.

References
Abney, S. P.; Johnson, M. 1991. Memory Requirements and Local Ambiguities of Parsing Strategies. Journal of Psycholinguistic Research, 20(3):233-250.
Eiselt, K. P.; Mahesh, K.; and Holbrook, J. K. 1993. Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing. In Proceedings of AAAI-93, 380-385.
On Kernel Rules and Prime Implicants
Ron Rymon*
Intelligent Systems Program
University of Pittsburgh
Pittsburgh, PA 15260
rymon@isp.pitt.edu

Abstract
We draw a simple correspondence between kernel rules and prime implicants. Kernel (minimal) rules play an important role in many induction techniques. Prime implicants were previously used to formally model many other problem domains, including Boolean circuit minimization and such classical AI problems as diagnosis, truth maintenance and circumscription.
This correspondence allows computing kernel rules using any of a number of prime implicant generation algorithms. It also leads us to an algorithm in which learning is boosted by an auxiliary domain theory, e.g., a set of rules provided by an expert, or a functional description of a device or system; we discuss this algorithm in the context of SE-tree-based generation of prime implicants.

Introduction
Rules have always played an important role in Artificial Intelligence (AI). In machine learning, while a variety of other representations have also been used, a great deal of research has focused on rule induction. Moreover, many of the other representations (e.g., decision trees) are directly interchangeable with a set of rules.
Prime implicants (PIs) are minimal conjunctions of Boolean literals. Always computed with respect to a given logical theory, a prime implicant has the property that it can be used alone to prove this theory. In the early days of computers, PIs were used in Boolean function minimization procedures, e.g., (Quine 52; Karnaugh 53; McCluskey 56; Tison 67; Slagle, Chang & Lee 70; Hong, Cain & Ostapko 74). In AI, PIs were used to formally model TMSs and ATMSs (Reiter 87; de Kleer 90), circumscription (Ginsberg 89; Raiman & de Kleer 92), and Model-Based Diagnosis (de Kleer, Mackworth & Reiter 90).
A number of new and improved PI generation algorithms have emerged, e.g., (Greiner, Smith & Wilkerson 89; Jackson & Pais 90; Kean & Tsiknis 90; de Kleer 92; Ngair 92; Rymon 94).

*Parts of this work were supported by NLM grant R01-LM-05217; an ARO graduate fellowship when the author was at the University of Pennsylvania; a NASA consulting contract; and self-funding.

In machine learning, it is commonly argued that simpler models are preferable because they are likely to have more predictive power when applied to new instances (a principle often referred to as Occam's razor). One way in which a model can be simpler is if all of its rules are simple, i.e., have fewer conditions in the antecedent. As it turns out, kernel (minimal) rules and prime implicants are closely related. We will show a direct mapping between the two which allows computing kernel rules using PI generation algorithms. This will lead us to an algorithm which combines knowledge induced from examples with knowledge acquired from an expert, or which is otherwise available. This is done by combining the PIs of multiple theories. Given that prime implicants have been actively researched for a few decades now, we believe this correspondence has the potential to benefit the machine learning community in other ways.

Kernel Rules are Prime Implicants
Consider a typical machine learning scenario: we are presented with a training set (TSET) of class-labeled examples. Each example is described by values assigned to a set of attributes (also called features or variables), and is labeled with its correct class. We assume all attributes and the class are Boolean. By partial description we refer to an instantiation of a subset of attributes; an object is a partial description in which all attributes are instantiated. By universe we refer to the collection of all possible objects.
It is common to assume that class labels were assigned based on a set of, as yet unknown, principles; for the purpose of this paper, we assume no noise. It is the role of the induction program to unearth these principles, or at least some approximation thereof. Numerous techniques have been devised over the years for this purpose, ranging from various forms of regression, to neural and Bayesian networks, to decision trees, graphs, rules and more. Rules are one form of representation which has also been heavily used in other branches of AI. One advantage of proving properties for a rule-based representation is that rules are easily mapped into many of the other representations. In decision trees, for example, a rule corresponds to attribute-value assignments labeling a path from the root to a leaf.

Definition 1 (Kernel Rules) A rule is a partial description such that all objects in TSET that agree with its instantiated variables are labeled with the same class, and such that there exists at least one such object. A kernel rule is a rule such that none of its subsets is a rule.

A rule is thus a set of instantiated variables, and a kernel rule is one which is set-wise minimal. (To save notation, we will sometimes refer to this set together with the class variable; the distinction should be clear from the context.) Another way to view a rule is as a conjunctive set of conditions. We call it a rule because if the training data were representative of the universe, we could use it to predict the class of new instances. The more conditions are included in its conjunction, the more specific the rule is; a kernel rule is thus a most general rule. Kernel rules are the essence of our SE-tree-based induction framework (Rymon 93). Each kernel rule corresponds to the attribute-value assignments labeling one path from the root to a leaf.
We have shown that SE-trees generalize, and often outperform, decision trees as classifiers.

Example 2 Consider the following training examples consisting of various test results (a, b, c, and d) of patients suspected of suffering from a disease (x):

Patient   a      b      c      d      Disease (x)
1         true   true   true   true   true
2         false  false  false  false  false
3         true   false  false  false  true

The five kernel rules inferable from these examples are: a → x, ¬a → ¬x, b → x, c → x, and d → x.

Definition 3 (Prime Implicants/Implicates) Let V be a set of Boolean variables. A literal is either a variable v, or its negation ¬v. Let C be a propositional theory. A conjunction of literals r is an implicant of C if r ⊨ C (where ⊨ is the entailment operator). A disjunction of literals r is an implicate of C if C ⊨ r. Such a conjunction (disjunction) can also be thought of as a set of literals. It is a prime implicant (implicate) if none of its subsets is an implicant (implicate). An implicant (implicate) is trivial if it contains a complementary pair of literals.

Prime implicates and prime implicants are duals. In particular, any algorithm which computes prime implicates from a DNF formula can also compute prime implicants from the corresponding CNF formula, and vice versa. Many PI generation algorithms assume the theory is given in one form or the other.

Definition 4 (Training Set Theory) Let e be an object, and let a_i denote the attribute instantiations in e. Let x be an instantiation of the class variable. We define

σ(e, x) ≝ (a_1 ∧ a_2 ∧ ... ∧ a_n) → x

Let TSET be a set of objects {e_j}, j = 1..m, each labeled with a class x_j. The theory given by TSET is defined:

C(TSET) ≝ ⋀_{j=1..m} σ(e_j, x_j)

The purpose of this transformation is to represent logically the information contributed by each example alone and by the collection as a whole.
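Before turning to the correspondence, Definition 1 can be checked directly on Example 2 by brute-force enumeration over partial descriptions, smallest first. This is my own sketch, not the paper's SE-tree method; attributes are indexed 0..3 for a..d.

```python
from itertools import combinations, product

# Example 2's training set: (a, b, c, d) -> disease x.
TSET = [((True, True, True, True), True),
        ((False, False, False, False), False),
        ((True, False, False, False), True)]

def is_rule(rule):
    """Definition 1: the covered objects exist and share one class."""
    covered = [cls for obj, cls in TSET
               if all(obj[i] == v for i, v in rule.items())]
    return len(covered) >= 1 and len(set(covered)) == 1

def kernel_rules():
    found = []
    for size in range(1, 5):                     # smallest rules first
        for idxs in combinations(range(4), size):
            for vals in product([True, False], repeat=size):
                rule = dict(zip(idxs, vals))
                # A rule with a smaller rule inside it is not kernel.
                subsumed = any(f.items() <= rule.items() for f in found)
                if not subsumed and is_rule(rule):
                    found.append(rule)
    return found

rules = kernel_rules()   # a->x, ~a->~x, b->x, c->x, d->x
```

Enumerating by size guarantees that any surviving rule is set-wise minimal, which is exactly the kernel property.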
For the first patient in Example 2 we have:

a ∧ b ∧ c ∧ d → x ≡ ¬a ∨ ¬b ∨ ¬c ∨ ¬d ∨ x

As a conjunction, the training set theory can be used to constrain the classifiers considered to those that produce the same class labels for the given examples.

Theorem 5 (Kernel Rules are Prime Implicants) Let TSET be a training set and x the class variable. Let TSET⁺ be the set of positive examples, and TSET⁻ the set of negative examples. Let C⁺ denote C(TSET⁺) and similarly let C⁻ denote C(TSET⁻). Let KR⁺ be the set of positive kernel rules, i.e., with x in their consequent, and KR⁻ the set of negative kernel rules, i.e., with ¬x in their consequent. Let PI(T) denote the collection of non-trivial PIs for a theory T. Then

KR⁻ = PI(C⁺) − {x}, modulo subsumption¹ with PI(C⁻);
KR⁺ = PI(C⁻) − {¬x}, modulo subsumption with PI(C⁺).

Proof: Let r be a partial description. First, it is clear that x (respectively ¬x) is a PI for C⁺ (respectively C⁻)². We will prove that (1) if r ∈ PI(C⁺) and r ≠ x then either r ∈ KR⁻ or r is subsumed by some r′ ∈ PI(C⁻), and (2) vice versa, i.e., if r ∈ KR⁻ then r ∈ PI(C⁺). The proof for the other part of the theorem is analogous.

(1) Suppose r ∈ PI(C⁺) and that r is not subsumed by any PI of C⁻. As a PI, r ⊨ C⁺ and thus contradicts at least one variable assignment in every positive example, and so covers none of these. We still have to show that there is a negative example that is covered by r and that r is minimal. Suppose that none of the negative examples is covered by r.

¹One rule subsumes another if it is a subset of the other. This operation removes from one set all rules subsumed by any rule from the other set. Note that if a PI appears in both sets, it is removed from both.
²Also note that x does not appear in any other PI for C⁺. In fact, to make things computationally easier, x and ¬x can be omitted from the respective theories; they were only included for pedagogical reasons to emphasize the correspondence between clauses and examples.
Since every example assigns a value to each variable, it must then be the case that r contradicts every negative example in at least one variable assignment. Thus, r is an implicant of C⁻, and therefore there is a prime implicant in PI(C⁻) which subsumes r, in contradiction to the assumption. As a prime implicant, r must be minimal and therefore it is a kernel rule.

(2) Suppose r ∈ KR⁻. Then r does not cover any of the positive examples, and therefore it must contradict at least one variable assignment in each and every positive example. Thus, by definition, r is an implicant of C⁺. As a kernel rule, r is minimal and therefore it is a prime implicant. Q.E.D.

Consider again Example 2:

C⁺ ≝ (¬a ∨ ¬b ∨ ¬c ∨ ¬d ∨ x) ∧ (¬a ∨ b ∨ c ∨ d ∨ x)

and so PI(C⁺) = {x, ¬a, ¬bc, ¬bd, b¬c, ¬cd, b¬d, c¬d}. Computed similarly, PI(C⁻) = {¬x, a, b, c, d}. Six of the former PIs are subsumed by some of the latter, leaving as a negative kernel rule only ¬a. All the PIs for C⁻, except for ¬x which is removed, are positive kernel rules.

The first immediate application of Theorem 5 is that kernel rules can be computed using any of a number of PI generation algorithms. We briefly explore this possibility next. The theorem also leads to an opportunity to combine kernel rules with other available knowledge. As PIs, kernel rules can be combined with the PIs of another theory, e.g., an auxiliary domain theory, to obtain a more refined classifier. We discuss this possibility in the subsequent sections of this paper. Besides these two immediate applications, we believe this correspondence may lead to new insights drawn from one area of research to the other.

Computing Rules as Prime Implicants
Assuming the availability of a PI generation algorithm, Theorem 5 suggests a very simple way to compute kernel rules: transform the training set into positive and negative theories; compute the PIs for each of the theories; then, after removing the trivial x and ¬x, take the union of the two sets while removing subsumed conjuncts.
The consequent of each rule is determined by the set from which it came: x in rules originating from PI(C⁻) and ¬x in those from PI(C⁺).

As previously mentioned, research over the years has produced an abundance of PI generation algorithms. Since there may sometimes be an exponential number of PIs, there are also many algorithms which compute subsets of these, or which compute them according to some prioritization scheme. In machine learning, (Hong 93) used a logic minimization algorithm (Hong, Cain & Ostapko 74) to induce a minimally covering set of minimal rules. Each iteration in the STAR algorithm (Michalski 83) essentially computes the PIs of all negative examples and one positive example. A version space's most general rules (Mitchell 82) correspond to the positive kernel rules, or the PIs of the negative theory. (Ngair 92) shows that both a version space and PIs are modelable as general order-theoretic structures and are thus computable using simple lattice operations. The SE-tree-based learning framework (Rymon 93) and PI generation algorithm (Rymon 94) both support partial exploration of rules, e.g., minimal covers or maximizers of some user-specified priority.

Most PI generation algorithms assume that the input theory is given in either CNF or DNF. For the purpose of computing the PIs of a training set theory, a PI generation algorithm should be able to receive its input in CNF. However, as will soon be discussed, one may wish to combine these with the PIs of another theory which may be given in a different form; hence the flexibility offered by the variety of algorithms. Furthermore, certain algorithms may outperform or underperform others, depending on certain features of the underlying theory and of its PIs.
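The transform-then-union recipe can be sketched concretely for Example 2. This is my own brute-force PI generator (not the paper's SE-tree algorithm): clauses are sets of (variable, polarity) literals, non-trivial PIs are computed as set-wise minimal "hitting" conjunctions of the clause collection, and the trivial x/¬x literals are omitted, as footnote 2 permits.

```python
from itertools import combinations

A, NA, B, NB = ("a", True), ("a", False), ("b", True), ("b", False)
C, NC, D, ND = ("c", True), ("c", False), ("d", True), ("d", False)

# Example 2's theories in clause form (x / ~x omitted).
C_POS = [frozenset({NA, NB, NC, ND}), frozenset({NA, B, C, D})]
C_NEG = [frozenset({A, B, C, D})]

def prime_implicants(clauses):
    """Brute-force non-trivial PIs: minimal sets of literals sharing a
    literal with every clause, enumerated smallest first."""
    lits = sorted({l for cl in clauses for l in cl})
    pis = []
    for size in range(1, len(lits) + 1):
        for cand in map(set, combinations(lits, size)):
            if any((v, True) in cand and (v, False) in cand for v, _ in cand):
                continue                         # trivial implicant
            if all(cand & cl for cl in clauses) and \
               not any(p <= cand for p in pis):  # set-wise minimality
                pis.append(frozenset(cand))
    return pis

pi_pos, pi_neg = prime_implicants(C_POS), prime_implicants(C_NEG)
# Union with subsumption: keep PIs of one theory not subsumed by the other.
neg_rules = [p for p in pi_pos if not any(q <= p for q in pi_neg)]  # -> ~x
pos_rules = [p for p in pi_neg if not any(q <= p for q in pi_pos)]  # -> x
```

Running this reproduces the worked example: the only negative kernel rule is ¬a, and the positive kernel rules are a, b, c and d.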
The flexibility offered by the fact that positive and negative kernel rules can be computed separately and then combined using a simple union-with-subsumption operator may be of practical importance when dealing with large problems. The disadvantage is that many PIs may later be subsumed; a similar consideration applies when combining PIs of the training set theory with those of an auxiliary theory. Some of this duplicated work can be avoided in an SE-tree-based framework, as will be discussed later.

Boosting Learning with an Auxiliary Domain Theory
One major problem in applying machine learning is that examples are often scarce. Even where examples are seemingly abundant, their number is often minuscule relative to the syntactic size of the domain. Learning programs thus face a hard bias selection problem, having to decide between a large number of distinct classifiers that equally fit the training set. We propose that the PI-based approach lends itself to the use of auxiliary domain knowledge, in the form of a logical theory, to leverage learning by restricting the set of hypotheses considered. Computationally, at least if an SE-tree-based algorithm is used, significant parts of the search space may be discarded without even being searched.

Consider Example 2 again. Since the universe size is 16 (2⁴), and since only three examples were given, there are 2¹⁶⁻³ = 2¹³ different classifiers consistent with the training examples. Prime implicants belong to a somewhat stricter class, namely conjunctions which entail the training set theory. While each of the kernel rules is consistent with the examples, they may contradict one another on other objects. Indeed, in the SE-tree-based classification framework, the number of classifiers potentially embodied in a collection of kernel rules depends on the number of objects on which two or more rules contradict. In Example 2, there are 7 such objects (Figure 1a) and thus 2⁷ classifiers.
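The count of conflicting objects can be verified mechanically. The short check below (mine, not the paper's) enumerates the 16 objects of the universe and counts those on which two or more of Example 2's five kernel rules assign different classes:

```python
from itertools import product

# Example 2's kernel rules as (condition, class) pairs.
RULES = [({"a": True}, True), ({"a": False}, False),
         ({"b": True}, True), ({"c": True}, True), ({"d": True}, True)]

conflicts = 0
for vals in product([True, False], repeat=4):
    obj = dict(zip("abcd", vals))
    labels = {cls for cond, cls in RULES
              if all(obj[k] == v for k, v in cond.items())}
    if len(labels) > 1:          # two rules disagree on this object
        conflicts += 1
```

The conflicting objects are exactly those with a false and at least one of b, c, d true, giving 7 objects and hence 2⁷ distinct ways to mediate the conflicts.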
Now suppose that in addition to the training examples, we are also given an auxiliary domain theory (ADT) which we assume holds in the domain and is thus consistent with the examples. It is reasonable to demand that labels assigned by a candidate classifier be consistent with this theory. Furthermore, we will insist that the classifier entails ADT. To achieve this, we will compute rules as PIs of the conjunction of the respective training set theory and ADT.

Theorem 6 (Rules for Examples + ADT) Let TSET be a training set, x a class variable, and C⁺ and C⁻ as before. Let ADT be an auxiliary domain theory such that ADT ≝ ADT⁰ ∪ ADT⁻ ∪ ADT⁺, where ADT⁰ mentions neither x nor ¬x; ADT⁻ is in CNF and does not mention x; and ADT⁺ is in CNF and does not mention ¬x. Let PI⁺ ≝ PI(C⁺ ∪ ADT⁰ ∪ ADT⁺) and PI⁻ ≝ PI(C⁻ ∪ ADT⁰ ∪ ADT⁻). If r is a partial description then:

(1) if r ∈ (PI⁻ modulo subsumption with PI⁺) then r does not cover any negative example and does cover at least one positive example; r is minimal as such.
(2) if r ∈ (PI⁺ modulo subsumption with PI⁻) then r does not cover any positive example and does cover at least one negative example; r is minimal as such.

Proof: We will only prove (1); the proof for (2) is analogous. If r ∈ PI⁻ then r contradicts at least one assignment in each of the negative examples; thus it does not cover any negative example. If r did not cover any positive example, then r ⊨ C⁺ and therefore there exists r′ ∈ PI⁺ such that r′ ⊆ r, in contradiction to the assumption. Q.E.D.

Note that the decomposition of ADT was not used in the proof. The theorem still holds if ADT is taken as a whole and the PIs of C⁺ ∪ ADT, modulo subsumption, are taken as negative rules, and vice versa for positive rules. The problem is that important rules may be lost that way. In particular, consider a situation in which C⁻ was included as part of ADT. Then, PI(C⁺ ∪ ADT) is subsumed by PI(C⁻ ∪ ADT) and we lose all negative rules.
The new ADT-boosted induction algorithm will thus partition ADT as above, and then use the respective components to compute positive and negative rules. Compared to its predecessor, the new algorithm will typically result in rules with a more restricted scope. Note that some new PIs may appear which are independent of the class labeling decision, e.g., a domain rule such as "males can never be pregnant". However, these will appear in both the positive and negative PIs and will thus be removed by subsumption.

Thanks to the diversity of PI generation algorithms, ADT⁰ can be given in a variety of syntactic forms; if it is in DNF, its PIs can be computed separately using an algorithm which accepts DNF input. The PIs of the combined theories can then be computed as the PIs of the combination of the PIs of each of the respective theories, by invoking the same program again. If ADT⁰ is also in CNF, then the PIs of the combined theories can be computed in a single shot. Most notably, a set of rules such as the ones typically gathered from domain experts can easily be transformed into CNF.

Consider Example 2 once again. Suppose that in our domain it is impossible for test (attribute) b to be positive if test a is negative, i.e., ¬a → ¬b. We first transform this statement to a domain theory in CNF: ADT ≝ ADT⁰ ≝ (a ∨ ¬b). Then, we compute PIs for C⁺ ∪ ADT, and similarly for C⁻ ∪ ADT. After removing subsumed PIs, only two rules are left: a → x, and ¬a¬b → ¬x. Figure 1b depicts a class-labeled universe according to these two rules. Notably, there are no contradictions left (although this does not hold in general). Also note that part of the syntactic universe that was covered by the previous set of rules is not covered by the new rules; according to the ADT, these objects are not part of the real universe, as it is impossible for a to be negative without b being negative as well.
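The worked example can be re-derived mechanically by adding the ADT clause to both clause collections before generating PIs. As before, this is my brute-force sketch rather than the paper's SE-tree implementation:

```python
from itertools import combinations

A, NA, B, NB = ("a", True), ("a", False), ("b", True), ("b", False)
C, NC, D, ND = ("c", True), ("c", False), ("d", True), ("d", False)

ADT = frozenset({A, NB})                       # a OR ~b
POS = [frozenset({NA, NB, NC, ND}), frozenset({NA, B, C, D}), ADT]
NEG = [frozenset({A, B, C, D}), ADT]

def min_hitting_sets(clauses):
    """Non-trivial PIs as minimal hitting sets of the clause collection."""
    lits = sorted({l for cl in clauses for l in cl})
    out = []
    for size in range(1, len(lits) + 1):
        for cand in map(set, combinations(lits, size)):
            if any((v, True) in cand and (v, False) in cand for v, _ in cand):
                continue                       # trivial
            if all(cand & cl for cl in clauses) and not any(p <= cand for p in out):
                out.append(frozenset(cand))
    return out

pi_pos, pi_neg = min_hitting_sets(POS), min_hitting_sets(NEG)
neg_rules = [p for p in pi_pos if not any(q <= p for q in pi_neg)]
pos_rules = [p for p in pi_neg if not any(q <= p for q in pi_pos)]
```

After union-with-subsumption only the two rules of the worked example survive: a → x and ¬a¬b → ¬x.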
[Figure 1: Class labelings with and without the ADT; (a) TSET only, (b) with ADT.]

Kernel rules can be computed in various orders:

- Compute PIs separately for each of C⁺ ∪ ADT⁺, C⁻ ∪ ADT⁻, and ADT⁰; then merge while subsuming supersets. In this case, PI(ADT⁰) is only computed once. Using the SE-tree data structure, merging is linear in the size of the trees. This may be wasteful, however, if many PIs for one theory are subsumed by another.
- Compute PIs for the two combined theories directly. This may save time and space if many PIs of ADT⁰ are later subsumed. However, in essence, many of the PIs of ADT⁰ are computed twice.
- If the SE-tree method is used, compute PI(ADT⁰), and then use the resulting SE-tree as the basis for the search for the PIs of the combined theories. In expanding this tree, nodes previously pruned shall remain pruned. However, unexplored branches may have to be "re-opened", and some of the PIs of ADT⁰ may have to be further expanded.

An SE-tree-based Implementation
Set-Enumeration (SE) trees were proposed in (Rymon 92) as a simple way to systematically search a space of sets. It was suggested that they can serve as a uniform model for many problems in which solutions are modeled as unordered sets. Given a set of attributes, a complete SE-tree is a tree representation of all sets of attribute-value pairs. It uses an indexing on the set of attributes to do so systematically, i.e., to uniquely represent all such sets.

The SE-tree's root is always labeled with the empty set. Then, a node's descendants are each labeled with an expansion of the parent's set with a single attribute-value assignment. The key to systematicity is that a node is only expanded with attributes ranked higher in the appropriate indexing scheme than attributes appearing in its own label. For example, assuming alphabetic indexing, a node labeled abd will not be expanded with c nor with ¬c, but only with e, f, etc. Allowed attributes are referred to as that node's View.
Of course, the complete SE-tree is too large to be completely explored, and so an algorithm's search will typically be restricted to its most relevant parts. A simple PI generation algorithm is outlined in (Rymon 92) as an example application of SE-trees.

In (Rymon 93), we presented an SE-tree-based induction framework and argued that it generalizes decision trees in several ways. Like decision trees, an SE-tree is induced via recursive partitioning of the training data. Also like decision trees, classification requires traversing matching paths in the tree. However, an SE-tree embodies many decision trees and thus allows for explicit mediation of conflicts. While here we assume a fixed indexing, attributes in a node's View can be dynamically re-ordered, e.g., by information gain, without infringing on completeness.

An improved version of the SE-tree-based PI generation algorithm is detailed in (Rymon 94). This algorithm accepts input in CNF and works by computing minimal hitting sets for the collection of clauses. It is briefly presented next.

Given a collection of sets, a hitting set is a set which "hits" (shares at least one element with) each set in the collection. Non-trivial PIs correspond to minimal hitting sets (excluding those which include both a variable and its negation).

The algorithm works by exploring an imaginary SE-tree in a best-first fashion, where PIs are explored in an order conforming to some user-specified prioritization; thus, if time constraints are imposed, the most important PIs will be discovered. Exploration starts with the empty set. Then, iteratively, an open node with the highest priority is expanded with all possible one-attribute expansions which (a) are in that node's View, and (b) hit a set not previously hit by that node. Expanded nodes which hit all sets are marked as hitting sets, and the rest remain open for further expansion. The algorithm uses two pruning rules.
First, nodes subsumed by previously discovered hitting sets can be pruned; they cannot lead to minimal hitting sets. Second, a node is pruned if any of the sets it does not hit lies completely outside its View; given the SE-tree structure, such a node cannot lead to a hitting set.

(Rymon 94) also suggests a recursive problem decomposition heuristic in which the collection of sets not hit by a node is partitioned into variable-disjoint sub-collections. If such a partitioning exists, the minimal hitting sets for the union are given by the product of the minimal hitting sets for the sub-collections. These are computed via recursive application of the algorithm to each of the sub-collections.

Consider Example 2 again: C⁺ consists of the sets {¬a, ¬b, ¬c, ¬d, x} and {¬a, b, c, d, x}. Figures 2a,b depict the SE-trees explored for computing PI(C⁺) and PI(C⁻), ignoring PIs with x or ¬x. Note that, in the former, a branch labeled a was never considered because a does not appear in C⁺, and that a branch labeled d was pruned because it cannot lead to a solution. However, except for nodes labeled with d or ¬d, other nodes cannot be pruned for having too narrow a View; this is because examples assign values to all variables. For the same reason, decomposition is also impossible.

[Figure 2: Original SE-trees; (a) PI(C⁺), (b) PI(C⁻), (c) PI(ADT).]

Consider now ADT ≝ (a ∨ ¬b), as discussed before. Figure 2c shows the SE-tree for PI(ADT). Figures 3a,b show the SE-trees for the combined theories. Note that now, branches from the root labeled with c or ¬c can be pruned because they cannot lead to hitting sets for ADT. Also, once all sets in the training set theories are hit, one can take advantage of the decomposition heuristic. Notably, PIs for the combined theories are more complex; they have to hit more sets. This makes the resulting rules less conflicting. Figure 3c shows the kernel rules obtained by merging-with-subsumption the trees in 3a,b.
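A minimal sketch of the SE-tree exploration with both pruning rules is given below. It is my simplification of (Rymon 94)'s algorithm: the best-first priority queue is replaced by plain level-order (size-order) search, which suffices to return only minimal hitting sets, and the decomposition heuristic is omitted.

```python
def se_tree_min_hitting_sets(sets, universe):
    """Minimal hitting sets via SE-tree search, smallest nodes first."""
    universe = sorted(universe)          # the fixed indexing
    results = []
    frontier = [(frozenset(), 0)]        # (node label, View start index)
    while frontier:
        nxt = []
        for node, start in frontier:
            if any(h <= node for h in results):
                continue                 # pruning rule 1: subsumed node
            unhit = [s for s in sets if not (s & node)]
            if not unhit:
                results.append(node)     # hits every set; minimal by level order
                continue
            view = set(universe[start:])
            if any(not (s & view) for s in unhit):
                continue                 # pruning rule 2: unhit set outside View
            for i in range(start, len(universe)):
                e = universe[i]
                if any(e in s for s in unhit):   # must hit a new set
                    nxt.append((node | {e}, i + 1))
        frontier = nxt
    return results

# Tiny usage example: minimal hitting sets of {1,2} and {2,3}.
hs = se_tree_min_hitting_sets([{1, 2}, {2, 3}], {1, 2, 3})
```

In the example, the node {3} is cut by pruning rule 2 (its View is empty while {1, 2} is still unhit), and {1, 2} is cut by pruning rule 1 once {2} has been found, leaving exactly the minimal hitting sets {2} and {1, 3}.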
Summary

We have shown a simple correspondence between kernel rules and prime implicants which

a. Allows computing kernel rules using any of a number of prime implicant generation algorithms; and

b. Leads to a PI-based learning algorithm which can be boosted with an auxiliary domain theory, e.g., a set of rules provided by a domain expert, or a functional description of a device.

Automated Reasoning 185

Figure 3: SE-trees for combined theories: (a) PI(C+ ∪ ADT), (b) PI(C− ∪ ADT), (c) kernel rules

We outline an SE-tree-based algorithm which allows exploring rules according to some user-defined preference criterion. We hope the domain theory enhancement will eventually contribute to the applicability of this machine learning approach to real-world domains. In addition, given the significant research involving prime implicants, we believe the correspondence presented here may lead to new insights as researchers reinterpret these results in the realm of machine learning.

Acknowledgement

Discussions with Dr. John Clarke have motivated this work. I also thank Ron Kohavi, Foster Provost, Bob Schrag and anonymous reviewers for important discussions and comments.

References

de Kleer, J., Exploiting Locality in a TMS. In Proceedings 8th National Conf. on Artificial Intelligence, pp. 254-271, Boston MA, 1990.

de Kleer, J., Mackworth, A. K., and Reiter, R., Characterizing Diagnoses. In Proceedings 8th National Conf. on Artificial Intelligence, pp. 324-330, Boston MA, 1990.

de Kleer, J., An Improved Incremental Algorithm for Generating Prime Implicates. In Proceedings 10th National Conf. on Artificial Intelligence, pp. 780-785, San Jose CA, 1992.

Ginsberg, M., A Circumscriptive Theorem Prover. Artificial Intelligence, 39, pp. 209-230, 1989.

Greiner, R., Smith, B. A., and Wilkerson, R. W., A Correction to the Algorithm in Reiter's Theory of Diagnosis. Artificial Intelligence, 41, pp. 79-88, 1989.

Hong, S. J., Cain, R.
G., and Ostapko, D. L., MINI: A Heuristic Approach for Logic Minimization. IBM Journal of Research and Development, pp. 443-458, 1974.

Hong, S. J., R-MINI: A Heuristic Algorithm for Generating Minimal Rules from Examples. IBM Research Report RC 19145, 1993.

Jackson, P., and Pais, J., Computing Prime Implicants. In Proceedings Conf. on Automated Deduction, pp. 543-557, 1990.

Karnaugh, M., The Map Method for Synthesis of Combinational Logic Circuits. AIEE Trans. Communications and Electronics, vol. 72, pp. 593-599, 1953.

Kean, A., and Tsiknis, G., An Incremental Method for Generating Prime Implicants/Implicates. Journal of Symbolic Computation, 9:185-206, 1990.

McCluskey, E., Minimization of Boolean Functions. Bell System Technical Journal, 35:1417-1444, 1956.

Michalski, R., A Theory and Methodology of Inductive Learning. Artificial Intelligence, 20, 1983, pp. 111-116.

Mitchell, T. M., Generalization as Search. Artificial Intelligence, 18, 1982, pp. 203-226.

Ngair, T., Convex Spaces as an Order-Theoretic Basis for Problem Solving. Ph.D. Thesis, Computer and Information Science, Univ. of Pennsylvania, 1992.

Quine, W. V. O., The Problem of Simplifying Truth Functions. American Math. Monthly, 59:521-531, 1952.

Raiman, O., and de Kleer, J., A Minimality Maintenance System. In Proceedings 3rd Int'l Conf. on Principles of Knowledge Representation and Reasoning, Cambridge MA, pp. 532-538, 1992.

Reiter, R., A Theory of Diagnosis From First Principles. Artificial Intelligence, 32, pp. 57-95, 1987.

Rymon, R., Search through Systematic Set Enumeration. In Proceedings 3rd Int'l Conf. on Principles of Knowledge Representation and Reasoning, Cambridge MA, pp. 539-550, 1992.

Rymon, R., An SE-tree-based Characterization of the Induction Problem. In Proceedings 10th Int'l Conf. on Machine Learning, pp. 268-275, Amherst MA, 1993.

Rymon, R., An SE-tree-based Prime Implicant Generation Algorithm. To appear in Annals of Math.
and A.I., special issue on Model-Based Diagnosis, Console & Friedrich eds., Vol. 11, 1994.

Slagle, J. R., Chang, C. L., and Lee, R. C. T., A New Algorithm for Generating Prime Implicants. IEEE Trans. on Computers, 19(4), 1970.

Tison, P., Generalized Consensus Theory and Application to the Minimization of Boolean Functions. IEEE Trans. on Computers, 16(4):446-456, 1967.
1994
8
1,710
Using Errors to Create Piecewise Learnable Partitions

Oded Maron
M.I.T. Artificial Intelligence Lab
545 Technology Square, #755
Cambridge, MA 02139
oded@ai.mit.edu

After a learning system has been trained, the usual procedure is to average the testing errors in order to obtain an estimate of how well the system has learned. However, that is tossing away a lot of potentially useful information. We present an algorithm which exploits the distribution of errors in order to find where the algorithm performs badly and partition the space into parts which can be learned easily. We will show a simple example which gives the intuition of the algorithm, and then a more complex one which brings forth some of the details of the algorithm. Let us suppose that we are trying to learn the absolute value function. Almost all learning algorithms perform well along the arms of the function, but do badly around the cusp. If we notice the 'hill' of errors around x = 0, then we can partition the space which we are trying to learn into two parts which fall on either side of the hill. Those two partitions have the property of not only being linear, but of being learnable. Each partition can be trained separately, and when tested separately gives a better answer since irrelevant and misleading training points from other partitions have not been included. Now let us take a more complex example of trying to learn the function shown in Figure 1. It is made up of constant parts for simplicity's sake, but in fact the parts can be anything learnable. The discontinuities cannot be learned easily by the particular learning algorithm which we are using (locally weighted regression which looks at 10% of the nearest points). Figure 2 shows the error distribution obtained by cross-validation. The problem of finding 1-d 'hills' of errors for one-dimensional functions now becomes a problem of finding multi-dimensional 'ridges'. This is equivalent to the vision problem of edge detection.
While techniques exist for two- or three-dimensional edge detection, it is a difficult problem for arbitrary dimensions. Therefore, we make the assumption that the ridges occur in axis-parallel hyperplanes. While this is a big assumption, it makes the problem tractable, and this assumption is used in other well-known partitioning algorithms such as CART. The algorithm now proceeds as follows: for each dimension, we scan across that dimension, looking for ridges which are parallel to our scan line. If we find such a ridge, then all points before it are put in one partition, and we continue scanning. After going through all of the dimensions this way, we have partitioned the space into pieces which are independently learnable. The question which remains is: how do we detect these hills? The hill must be high enough so that noisy data will not cause over-partitioning. In addition, the hill must be wide enough so that a few outliers will not cause over-segmentation of the training set. Unfortunately, there is no rigorous answer to this question. We used the heuristics of detecting hills which are higher than the average error and wider than √n if there are n points in the current partition. Once the partitions are created, we need to assign a learner to each one. The choice of how to do this is up to the particular implementation. The same learning architecture can be used for every partition, or we can try to find the best learner for each piece of the domain. Each partition also has a hyper-rectangle associated with it which encompasses the points in that partition. For a new query point, we find the hyper-rectangle in which it falls and use the learner associated with that partition to answer the query.
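In one dimension, the hill-detection heuristic just described might be sketched as follows. This is a reading of the stated heuristic (hills higher than the average error and wider than √n), not the author's code; the function name and the choice of splitting at the hill's centre are assumptions.

```python
import math

def find_hill_splits(xs, errors):
    """Scan a 1-D axis for 'hills' of cross-validation error and return
    split points.  A hill must be higher than the average error and
    wider than sqrt(n) points, where n = len(errors).
    `xs` are sorted query positions, `errors` the error at each."""
    n = len(errors)
    avg = sum(errors) / n
    min_width = math.sqrt(n)
    splits, start = [], None
    for i, e in enumerate(errors):
        if e > avg:
            if start is None:
                start = i                      # entering a candidate hill
        elif start is not None:
            if i - start >= min_width:         # wide enough to be a real hill
                mid = (start + i - 1) // 2
                splits.append(xs[mid])         # split at the hill's centre
            start = None
    if start is not None and n - start >= min_width:
        splits.append(xs[(start + n - 1) // 2])
    return splits
```

Running one such scan per dimension, with only the axis-parallel coordinates varying, gives the axis-parallel partitioning described in the text.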
The main differences between this algorithm and other decision-tree algorithms are that the partitions are not necessarily constant or linear (they are learnable), and that ridges of error are used to decide where to break the space.

Figure 1:
Figure 2:

From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
Development of an Intelligent Forensic System for Hair Analysis and Comparison*

C. Medina, L. Pratt
Department of Mathematical and Computer Sciences
C. Ganesh
Division of Engineering
Colorado School of Mines
Golden, CO 80401

Figure 1: Microscopic images from two different hairs

The Problem: An important forensic task is to analyze and compare hair evidence in criminal cases. A forensic expert compares sets of hair images from a crime scene to a set from a suspect. Under a microscope, hairs are fairly distinctive, as shown in Figure 1. The medulla, which runs along the center of the hair, can take on a variety of different shapes. The cortex material outside the medulla has different textures and colors. The cuticle, which is located on the exterior of the hair, can be either visible or invisible. Forensic analysis is a tedious procedure. The forensic expert must manually compare hundreds of hair samples to determine whether or not two sets of samples came from the same person.

The Solution: Neural networks have been used for several problems involving interpretation of images. In forensic hair analysis, a neural network can be used as a preprocessor to extract important features from a microscopic image of a hair. This can facilitate the comparison process.

*This research is supported under a grant from the Colorado Advanced Software Institute (CASI). CASI is sponsored by the Colorado Advanced Technology Institute (CATI), an agency of the state of Colorado. CATI promotes advanced technology education and research at universities in Colorado for the purpose of economic development.

Project Description: This project explores the automation of hair analysis. We are working with the Colorado Bureau of Investigation to develop a system to aid in the hair comparison process.
Our system will take as input a microscopic image of hair and produce classification decisions about features like visibility of the cuticle, presence or absence of a medulla, and cortical texture and color. The database for this project contains 725 microscopic hair images from 8 different people. To gather the images, a color video camera was connected to a microscope. The video images were sent to a computer-based image viewer. The image was finally captured using a frame-grabber and saved as a TIFF graphic file. 600 of the images were taken from 3 people corresponding to "suspect" hairs. 125 images were taken from 5 people corresponding to "crime scene" hairs. Each image was segmented into a number of pieces appropriate for classification of different features. We used a variety of image processing techniques to enhance this information in advance of neural network classification. Statistical tests will be used to determine the degree of match between the resulting collection of hair feature vectors.

Analysis of Results: An important issue in the automation of any task used in criminal investigations is the reliability and understandability of the resulting system. To address this concern, we are doing rigorous empirical analysis of our networks. In addition, we are developing methods to facilitate explanation of a neural network's behavior. One way to better describe the internal decision process of a neural network is to interpret hidden unit hyperplanes as a decision tree. Each node of the decision tree represents a decision made at a particular point of the classification process. The leaves represent a classification choice. This method provides both a graphical and more readily interpretable description of the means by which the neural network classification is made.
Finding Multivariate Splits in Decision Trees Using Function Optimization

George H. John*
Computer Science Department
Stanford University
Stanford, CA 94305
gjohn@cs.Stanford.EDU

We present a new method for top-down induction of decision trees (TDIDT) with multivariate binary splits at the nodes. The primary contribution of this work is a new splitting criterion called soft entropy, which is continuous and differentiable with respect to the parameters of the splitting function. Using simple gradient descent to find multivariate splits and a novel pruning technique, our TDIDT-SEH (Soft Entropy Hyperplanes) algorithm is able to learn very small trees with better accuracy than competing learning algorithms on most datasets examined. The process of finding a splitting function at a node of a decision tree is a search problem, and we choose to view it as unconstrained parametric function optimization over the space of hyperplane weight vectors w ∈ Rⁿ. Our objective function is soft entropy, a new continuous approximation to the entropy measure (Quinlan 1986). Soft entropy was chosen for two reasons. First, it is well established that entropy is a good splitting criterion (Buntine & Niblett 1992). Second, softness is important to get good generalization in continuous spaces, as shown in Figure 1. Related work is similar overall, but the OC1 algorithm of Murthy et al. (1993) uses entropy as a criterion, and Brodley and Utgoff (1992) describe algorithms using error, also a hard splitting criterion.

Figure 1: All four splits shown on the left have equivalent entropy and error. On the right we show the split found by minimizing soft entropy.

The overall learning algorithm is simply the standard TDIDT method (Quinlan 1986). To choose a split at a node, it uses gradient descent to find the hyperplane with minimal soft entropy. To prune the resulting tree, it uses a new pruning technique which

*Inquiries are welcome.
This material is based upon work supported under a National Science Foundation Graduate Research Fellowship.

we call iterative re-filtering, a general regularization algorithm that we are investigating further.

Table 1 shows results on various datasets. SEH achieves the best test-set accuracy on all datasets except for Monks3, which was difficult because the algorithm pruned away four of the data points, which turned out not to be noise. On the rest of the domains, SEH was able to achieve high accuracy using very small trees.

Table 1: Test-set accuracy and number of nodes (in parentheses) in the induced decision tree for several datasets. "p" indicates use of pruning, "r" indicates re-filtering. "*" indicates several versions were run with different parameter settings, and the best results for each dataset are presented. (Vote and Vote1: see Buntine & Niblett; Monks2 and Monks3: see Thrun et al.)

Dataset  #Train/Test  SEHpr      SEHp       SEH         OC1p*      C4.5p*     C4.5*
Vote     200/135      98.5 (3)   98.5 (3)   96.3 (15)   95.6 (11)  97 (7)     94.8 (12)
Vote1    200/135      94.8 (3)   90.4 (13)  85.9 (29)   92.6 (17)  92.6 (27)  92.6 (33)
Monks2   135/432      100 (9)    100 (9)    100 (9)     97.9 (5)   75.9 (39)  75.5 (135)
Monks3   122/422      90.7 (3)   91.2 (5)   93.8 (9)    92.6 (3)   100 (9)    97.2 (17)

References

Brodley, C. E., and Utgoff, P. E. 1992. Multivariate versus univariate decision trees. Technical Report COINS TR 92-8, Department of Computer Science, University of Massachusetts, Amherst, MA, 01003.

Buntine, W., and Niblett, T. 1992. A further comparison of splitting rules for decision-tree induction. Machine Learning 8:75-85.

Murthy, S.; Kasif, S.; Salzberg, S.; and Beigel, R. 1993. OC1: randomized induction of oblique decision trees. In AAAI-93: Proceedings, Eleventh National Conference on Artificial Intelligence, 322-327. MIT Press.

Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1:81-106.

Thrun, S. B., et al. 1991. The monk's problems - a performance comparison of different learning algorithms.
Technical Report CMU-CS-91-197, CMU School of Computer Science.
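The abstract does not give the soft entropy formula itself. One plausible reconstruction, assuming sigmoid "soft" side-membership of each point relative to the hyperplane, is sketched below; the function name, the bias handling, and the sharpness parameter `beta` are all assumptions, not necessarily the formulation used by TDIDT-SEH.

```python
import math

def soft_entropy(w, X, y, beta=1.0):
    """A differentiable 'soft' version of the entropy splitting
    criterion: each point x sits on the positive side of hyperplane w
    with sigmoid degree p(x), and class counts on each side are sums of
    these soft memberships.  w includes a trailing bias term."""
    n = len(X)
    # soft membership of each point on the positive side of w
    ps = [1.0 / (1.0 + math.exp(-beta * (sum(wi * xi for wi, xi in zip(w, x))
                                         + w[-1])))
          for x in X]
    classes = sorted(set(y))
    total = 0.0
    for side in (ps, [1.0 - p for p in ps]):
        weight = sum(side)
        if weight < 1e-12:
            continue
        h = 0.0
        for c in classes:
            q = sum(s for s, yi in zip(side, y) if yi == c) / weight
            if q > 1e-12:
                h -= q * math.log2(q)        # entropy of soft class proportions
        total += (weight / n) * h            # weighted by the side's soft size
    return total
```

Because the sigmoid is differentiable, gradient descent on `w` can minimize this criterion directly, which is the property the abstract relies on; a hard split is recovered as `beta` grows large.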
Dempster-Shafer and Bayesian Networks for CAD-based Feature Extraction: A Comparative Investigation and Analysis

Qiang Ji, Michael M. Marefat, and Paul J. A. Lever*
Dept. of Electrical and Computer Engineering, and Dept. of Mining and Geological Engineering*
The University of Arizona
Tucson, Arizona 85721
qiangji@ece.arizona.edu

Introduction

Information pertaining to real-world problems often contains noise and uncertainty. This has been a major challenge faced by contemporary AI researchers. Of the various paradigms developed for handling uncertainty, the Dempster-Shafer theory (DS) and Bayesian Belief Networks (BBN) have received considerable attention in the AI community recently. They have been successfully applied to problems in medical diagnosis, decision-making, image understanding, machine vision, etc. Despite their obvious success, blindly using them without understanding their limitations may result in computational difficulty and unsatisfying inference results. The aim of this paper is to analyze and compare the performance of the two paradigms in extracting manufacturing features from the solid model descriptions of objects. Such a comparison will serve to identify their strengths, weaknesses, and appropriate application domains.

Problem Domain and Formulation

A major difficulty faced by the previously proposed methods for feature extraction has been the interaction between features. Feature interaction introduces uncertainties into feature representation, making their recognition very difficult [Ji, 1993]. We propose to recognize interacting features by identifying a set of correct virtual links, based on generating and combining geometric and topological evidences. A DS approach was developed in this research that can correctly identify multiple virtual links simultaneously by overcoming the mutual exclusiveness assumption. The approach constructed a frame of discernment consisting of the subsets of all potential virtual links.
Domain-specific knowledge was used to prune the original frame to a manageable size. A key component of this approach is the principle of association we developed for interpreting evidences and for assigning basic probability assignments (bpas) to proper hypothesis sets. Virtual links were determined through evidence aggregation. An approximate method was developed to reduce the evidence aggregation from exponential to linear time by replacing the newly-generated focal elements with their existing nearest supersets. A hypothesis space consisting of subsets of potential virtual links was used to construct an initial BBN. The causal-consequence relationship between any two connected nodes was represented by the whole-part relationship. Heuristic knowledge was then applied to prune the initial network into a singly-connected BBN for effective belief propagation. To identify multiple virtual links, the belief revision algorithm [Pearl, 1988] was employed for belief propagation, resulting in an optimal state for each node in the network that best explained the observed evidences. Virtual links were subsequently determined from the optimal state of each hypothesis.

Comparison and Conclusion

The measures used for comparison include informational complexity, time complexity, and robustness. The informational complexity study reveals that BBNs require a complete probabilistic model to initiate an inference, while DS can function under an incomplete model. Furthermore, the auxiliary information required by BBNs may sometimes prove to be difficult and expensive to obtain. In time complexity, both mechanisms are NP-hard in general. While linear time may be achieved for special cases, approximate methods are normally employed for general cases. The robustness study indicates that BBNs tolerate large deviations in prior probability and link matrices. DS, on the other hand, is very sensitive to input data change and conflicting evidences.
This research concludes that while both mechanisms can overcome the mutual exclusiveness assumption to identify multiple virtual links, BBNs are well suited to applications where probabilities are known or can be acquired, and where human subjective opinions are important. On the other hand, the DS theory is a good choice for applications where uncertainty is best thought of as being distributed over power sets, and where no prior knowledge is available. One disadvantage with DS is that it remains lifeless number manipulation, while BBNs use intuitively meaningful semantic networks. In certain fields, however, both algorithms can be applied, and the quality of the results often depends on the skill of the users in adapting the basic theories to their particular problems.

References

Ji, Q., 1993. Bayesian Methods for Machine Understanding of CAD Models. MS thesis, Dept. of Electrical and Computer Eng., Univ. of Arizona.

Pearl, J., 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers, Inc.
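Evidence aggregation in a DS approach rests on Dempster's rule of combination, which can be sketched for two basic probability assignments as follows. The representation is assumed here: a bpa as a dict mapping frozenset hypotheses to mass.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments over the same frame of discernment.  Mass assigned to
    empty intersections (conflict) is removed and the remainder
    renormalized."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # evidence that contradicts itself
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict
    return {h: m / k for h, m in combined.items()}
```

The exponential cost noted in the abstract shows up here directly: focal elements can multiply with each combination, which is what the nearest-superset approximation described above is designed to curb.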
Preliminary Studies in Agent Design in Simulated Environments

Scott B. Hunter
Department of Computer Science
Cornell University
Ithaca, NY 14853
E-mail: hunter@cs.cornell.edu

It is known that, in general, the point along the purely-reactive/classical-planning axis of the controller spectrum that is most appropriate for a particular environment/task (E/T) pair will be determined by characteristics of the environment, the agent's perceptual and effectual capabilities, and the task. Instead of proposing another hybrid architecture, we want to determine criteria for deciding which architectural compromise is best suited for a given E/T. Our goal is to understand relationships between E/T pairs and the agent architecture, so that we can predict the performance of the architecture under parametric variations of the environment and/or the architecture. This is a first step toward constructing methods for automatic synthesis of agents as in (Ros89). Our example of a domain where the choice of architectural basis is not so clear is the game of XChomp (programmed by Jerry J. Shekhel), a close relative of the commercial game PacMan. This domain allows for easy change of parameters to simulate a number of discrete combinatorial problem domains. Interesting characteristics of the game that make it different from the E/Ts considered in (AC87), (Bro86), (Cha91), (Sch87), among others, are:

There are non-local tasks. By non-local, we refer not only to spatial and temporal extents, but also to universal quantification of parameters of the task.

Hostile aspects of the environment may be temporarily made not only benign, but positive concrete goals; these conversions are under the control of the player.

Classification of objects changes over time, complicating the decision of how to respond to such objects, as these decisions are based on projections of possible futures.

There is not much flexibility with respect to movement.
Not only is movement within this environment restricted to the four cardinal directions, but it is a maze, so that in most locations only two of those four may be used. When the cost of making mistakes is high, the extra effort to get it right the first time is (possibly) justified.

There are multiple conflicting objectives. While having multiple objectives is not particularly novel, those in this environment have a nasty habit of pulling their acquisitions at cross purposes.

Any designer must answer the following questions: On what informational basis does an agent make its action selection choice? What aspects of the world does it perform forward projection on, what aspects does it sense, and what is the map from external and internal state of the agent to an action (or sequence) that maximizes the objective function of the agent? We need to be able to construct a solution and justify it using methods other than pointing to the constructed solution as an existence proof. This follows in the spirit of work done in (Hor93) for a mobile robot. To enable us to study these questions, we isolate a range of E/T combinations based on the XChomp game. We implement a range of controllers that exploit the information needed for "optimal" play and test their task performance experimentally. We then vary the performance requirements and environmental specifications in a form of perturbation analysis to determine how robust the agents are, and how to modify them to be effective in new situations. In addition, starting from a very simplified version of this E/T, we are developing a theoretical basis upon which to justify the agents we develop. This basis is expected to be used not only in determining how an agent should behave, but also in determining what a designer need not be concerned about.

References

P. Agre and D. Chapman. Pengi: an implementation of a theory of activity. In Proceedings of AAAI-87. Morgan Kaufmann, 1987.

R.A. Brooks.
A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1), 1986.

D. Chapman. Vision, Instruction and Action. PhD thesis, MIT AI Lab, 1991.

I. Horswill. Polly: A vision-based artificial agent. In Proceedings of AAAI-93. Morgan Kaufmann/MIT Press, 1993.

S.J. Rosenschein. Synthesizing information-tracking automata from environment descriptions. In Proceedings of KR-89. Morgan Kaufmann, 1989.

M.J. Schoppers. Universal plans for reactive robots in unpredictable domains. In Proceedings of IJCAI-87. Morgan Kaufmann, 1987.
The Automated Mapping of Plans for Plan Recognition*

Marcus J. Huber, Edmund H. Durfee, Michael P. Wellman
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Avenue
Ann Arbor, Michigan 48109-2110
{marcush, durfee, wellman}@engin.umich.edu

To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following and develop special techniques for discriminating among the possibilities. These structures are neither the direct nor the derived output of a planning system. Prior work has not yet addressed the problem of how the plan recognition structures are (or could be) derived from executable plans as generated by planning systems. Furthermore, concerns about building models of agents' actions in all possible worlds lead to a desire for dynamically constructing belief network models for situation-specific plan recognition activities. As a step in this direction, we have developed and implemented methods that take plans, as generated by a planning system, and create a belief network model in support of the plan recognition task. We start from a language designed for plan specification, PRS (Ingrand, Georgeff, & Rao 1992).¹ From a PRS plan, we generate a belief network model that directly serves plan recognition by relating potential observations to the candidate plans. Our methods handle a large variety of plan structures such as conditional branching, subgoaling, and alternative goals. Furthermore, our application domain is coordinated autonomous robotic teams, where sensor-based observations are inherently uncertain.
The methodology we have developed handles this uncertainty through explicit modeling, something not necessary in other plan recognition domains (Charniak & Goldman 1993; Goodman & Litman 1990) where observations are certain. An example of a belief network generated by the mapping methods from a set of simple plans for performing a "bounding-overwatch" surveillance task can be seen in Figure 1. Results from early experiments have shown the dynamically constructed belief network to be a useful mechanism for inferring the observed agent's plans based solely upon observations of its actions. This research is novel in that the plan recognition model is derived directly from a plan as represented by a planning system, instead of being built from a specially constructed database. Our explicit modeling of the uncertainty associated with observations is also unique. Our future research includes extending the methodology to incorporate iteration and recursion, and more extensive evaluation of the utility of using plan recognition for coordination of multiple robotic agents.

*This research was sponsored in part by NSF grant IRI-9158473, and by DARPA contract DAAE-07-9%C-R012.
¹Although any plan language would serve as well.

Figure 1: Belief network representation.

References

Charniak, E., and Goldman, R. P. 1993. A Bayesian model of plan recognition. Artificial Intelligence 64(1):53-79.

Goodman, B. A., and Litman, D. J. 1990. Plan recognition for intelligent interfaces. In Proceedings of the Sixth Conference on Artificial Intelligence Applications, 297-303.

Ingrand, F.; Georgeff, M.; and Rao, A. 1992. An architecture for real-time reasoning and system control. IEEE Expert 7(6):34-44.
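As a toy illustration of the kind of inference the generated belief network supports, the posterior over candidate plans given uncertain action observations can be sketched with naive Bayesian updating. This stand-in is mine, not the paper's; the actual networks relate observations to plans with richer structure (conditional branching, subgoaling), and all names below are illustrative.

```python
def plan_posterior(priors, obs_likelihood, observations):
    """Posterior over candidate plans given a sequence of uncertain
    action observations, via naive Bayesian updating.
    `obs_likelihood[plan][action]` is P(observe action | plan); unseen
    actions get a small floor probability so no plan is ruled out
    outright by a single noisy observation."""
    post = dict(priors)
    for act in observations:
        for plan in post:
            post[plan] *= obs_likelihood[plan].get(act, 1e-6)
        z = sum(post.values())               # renormalize after each update
        post = {p: v / z for p, v in post.items()}
    return post
```

Repeated observations consistent with one plan quickly concentrate the posterior on it, which is the behavior the early experiments with the dynamically constructed networks report.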
Generating Rhythms with Genetic Algorithms Damon Horowitz MIT Media Laboratory, 20 Ames St. E15-488, Cambridge, MA 02139 damon@media.mit.edu Abstract My system uses an interactive genetic algorithm to learn a user’s criteria for the task of generating musical rhythms. Interactive genetic algorithms (Smith 91) are well suited to solving this problem because they allow for a user to simply execute fitness functions (that is, to choose which rhythms or features of rhythms he likes), without necessarily understanding the details or parameters of these functions. As the system learns (develops an increasingly accurate model of the function which represents the user’s criteria), the quality of the rhythms it produces improves to suit the user’s taste. This approach is largely motivated by Richard Dawkins, who succinctly summarizes the attraction of IGAs for artistic endeavors in stating: “Effective searching procedures become, when the search space is sufficiently large, indistinguishable from true creativity” (Dawkins 86). In the context of this project, rhythms are one measure long sequences of notes and rests occurring on natural pulse subdivisions of a beat; I only deal with a specific subset of the enormous class of rhythms, in order to provide a well-defined domain for the application of the learning algorithm. The benefit of this reduction of the domain is that a rhythm phenotype can now be viewed as a simple vector. Thus, the set of rhythms satisfying the user’s criteria could be represented by a Boolean formula. I actually use a slightly more complex representation for the rhythm genotype, motivated by the benefits of using a diploid genetic structure, consisting of several short array templates; the order of the layering of these templates in creating the phenotype effectively determines the dominance hierarchy between the genes. 
The simplest mode of interaction is for the user to play back each of the rhythms in a randomly generated population, and then subjectively assign them fitness values based upon their satisfaction of his criteria. The system then uses standard GA selection (with fitness scaling), reproduction (with crossover monitors), and mutation operators. In order to deal with the difficulties resultant from the subjectivity and variability of the user's criteria, there are also several objective functions with which the system can automatically evolve generations of rhythms: syncopation, density, downbeat, beat repetition, cross rhythm, and cluster functions are currently included. Each of these functions represents an axis in a feature space which is useful for distinguishing rhythms. While these are only a few of the many possible objective functions that could be implemented, they provide a rich set of possibilities with which to begin exploring. The user can specify the ideal target value for each of these fitness functions, and also their relative importance (weighting of coefficients) in determining the overall fitness of a rhythm. The system then automatically evolves the indicated number of successive generations, using the objective fitness values to determine selection. The system also makes use of a meta-level genetic algorithm designed to evolve populations of parameters (target values and weights) to the objective fitness functions defined above. This is motivated by the research done in the application of genetic algorithms to the k-nearest-neighbor technique of classification (Punch et al. 93); each meta-level individual represents a warping of K-NN space, such that the fitness of each individual is determined by how well its warping of the feature-space helps to discriminate useful features, and thus correctly perform classifications.
Evolving populations of meta-individuals allows a user to quickly reduce the search space by subjective evaluation of the rhythms generated by the meta-individuals, without having to directly specify values for the objective functions. This combination of methods proves to be a powerful hybrid approach to the subjectivity problem, one which allows for greater coverage of the search space than would have been possible ordinarily using a small population (which is necessitated by most IGAs, and is particularly important when dealing with sequential acoustic data), and more efficient convergence on a satisficing solution. The system is able to converge on near-optimal solutions (acceptable to test users) after about fifty user-evaluations of rhythms. While the GA itself is mechanically quite simple, it is important to note that the implementation of appropriate fitness functions is difficult, and largely determines the musicality of the output. The major future improvement will involve adding the capacity for the system to learn to design its own fitness functions to represent features characteristic of rhythms selected by users in past sessions. References Dawkins, R. 1986. “Accumulating Small Change,” in The Blind Watchmaker. New York: W. W. Norton and Co. Punch, W., et al. 1993. Further Research on Feature Selection and Classification Using Genetic Algorithms. In Proceedings of the Fifth International Conference on Genetic Algorithms. Smith, J. 1991. Designing Biomorphs with an Interactive Genetic Algorithm. In Proceedings of the Fourth International Conference on Genetic Algorithms. From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
Processing Pragmatics for Computer-Assisted Language Instruction Keiko Horiguchi Computational Linguistics Program and Center for Machine Translation Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 keiko@cs.cmu.edu Computer-assisted language instruction systems that only perform syntactic processing of input sentences are not able to offer advice on pragmatic aspects of language use, and they cannot handle the variability generally afforded by natural languages to express a given propositional content. This paper describes a solution to this problem implemented as an extension to the existing Japanese tutorial system ALICE-chan (Evans & Levin 1993). We designed the p-structure to represent pragmatic information as well as the propositional content of a sentence. The pragmatic content is encoded in terms of the speech situation, the speaker’s attitude toward the addressee, and the felicity conditions for the intended speech act. Linguistic features that express the speaker’s uncertainty are interpreted as reducing factors that weaken felicity conditions of the intended speech act of the sentence. We implemented a p-structure mapping program that generates p-structures from syntactic structures. As an example, the p-structure for the sentence Tegami-wo kaite itadakenai darou ka to omou n desu ga (“I wonder if you might be able to write a letter for me”) is shown below (Japanese orthography omitted):

  SPEECH-ACT            requesting-action
  FEATURE               receive-favor-potential
  ACTION
    ACTEE               (tegami-wo)   SENSE "letter"
    PRED                (kaite)       SENSE "write"
    BELIEF  DESIRE  EXPECTATION
    FEATURE             receive-favor-potential
  REDUCING-FACTOR
    EXTENDED-PREDICATE  (n desu)
    CONJUNCTION         (ga)
    THINK               (to omou)
    NEGATIVE            +
    TENTATIVE           (darou)
    INTERROGATIVE       (ka)
    FAVOR               (itadakenai)
  PLACEMENT-OF-ADDRESSEE  higher
  SPEECH-SITUATION        formal

The system stores the pragmatic analysis template. The error analysis matcher compares this template with the student input.
It then reports any features that are missing or different, as well as error features that are inserted during the analysis. Based on this, the error matcher then formulates appropriate feedback. For example, if a student used the verb sashiagerarenai instead of itadakenai for the above sentence, the system would respond as follows: You seem to have used the giving verb with the wrong direction. You should have used the verb with inward direction. Our computer-assisted instruction system benefits from adopting p-structure in two ways. First, the system allows students flexibility for expressing propositions in different ways, since the system can accept similar information expressed in different structures. Second, the system is able to detect errors and give finer feedback on pragmatic usage of the language. Students can now express the required proposition more freely without burdening the teacher with the task of typing in all possible correct answers and incorrect answers with appropriate feedback. Acknowledgments: I am grateful to Lori Levin, David Evans, Martin Thurn, and Steve Handerson for their help and guidance. References Allen, J. F. 1983. Recognizing intentions from natural language utterances. In Brady, M., and Berwick, R., eds., Computational Models of Discourse, 107-166. Cambridge, MA: MIT Press. Evans, D. A., and Levin, L. S. 1993. Intelligent computer-assisted language learning theory and practice in ALICE-chan. In Army Research Institute Workshop on Advanced Technologies for Language Learning. Kogure, K.; Iida, H.; Yoshimoto, K.; Maeda, H.; Kume, M.; and Kato, S. 1988. A method of analyzing Japanese speech act types. In Proceedings of the 2nd International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages.
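The feature-comparison step of the error analysis matcher can be sketched as follows. Flat dictionaries stand in for full nested p-structures; the feature names are taken from the example above, but the representation and the matching logic are an illustrative simplification, not the system's actual code.

```python
# Sketch of template matching: the stored pragmatic analysis template
# and the analysis of the student's sentence are compared feature by
# feature, and missing, different, and inserted features are reported.

def match_pstructures(template, student):
    missing = {k: v for k, v in template.items() if k not in student}
    inserted = {k: v for k, v in student.items() if k not in template}
    different = {k: (template[k], student[k])
                 for k in template if k in student and template[k] != student[k]}
    return missing, inserted, different

template = {"SPEECH-ACT": "requesting-action",
            "FAVOR": "itadakenai",          # receiving verb, inward direction
            "SPEECH-SITUATION": "formal"}
student = {"SPEECH-ACT": "requesting-action",
           "FAVOR": "sashiagerarenai",      # giving verb: wrong direction
           "SPEECH-SITUATION": "formal"}

missing, inserted, different = match_pstructures(template, student)
print(different)  # {'FAVOR': ('itadakenai', 'sashiagerarenai')}
```

A feedback formulator would then map each reported difference to a canned diagnostic, as in the giving-verb example above.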
Tractable Anytime Temporal Constraint Propagation Louis J. Hoebel Department of Computer Science University of Rochester Rochester, NY 14627 hoebel@cs.rochester.edu 1 Introduction A major concern when reasoning about time in artificial intelligence problems is computational tractability. We present a method for applying temporal reasoners to large-scale dynamic problems. We present a partitioning of the temporal database and a means of constraint propagation that together provide an efficient approach for producing tractable systems. Our goal is not to enhance underlying reasoners but to develop mechanisms by which reasoning about time can be practically applied to certain problems. Tractable computation is the basic consideration. In reasoning with time, a relation exists between expressibility and tractability. In real and complex problems, increased expressibility requires tractability to be more than a theoretical concern [Allen92]. Approaches to intractability include restricted expressiveness and using fragments of an algebra, or using organizational and heuristic approaches for a particular application. We provide a method for temporal reasoners to be invoked on large sets of dynamic assertions and constraints while providing an adequate inference mechanism within the computational constraints of the class of problems of interest. Our approach to controlling computation comes from the conjecture that time and temporal relations have a structure that can be exploited. The intimate relation of time and space also provides a basis for the heuristic propagation of constraints. Assertions and constraints may have limited, although not necessarily only local, effects. In dynamic problems it is infeasible and perhaps unnecessary to compute a complete minimal network upon every invocation of a temporal reasoner. 2 Approach We provide a framework and mechanisms for controlling computation costs.
The mechanisms are based on the heuristic propagation and compiling of constraints. The framework is based on a reference hierarchy and partitioning of time and space. No restrictive assumption is made on the underlying representation or reasoner. In addition to tractability, the solution developed is anytime [Dean & Boddy 88] and as theoretically complete as the underlying temporal reasoner permits. The trade-off required is a necessary incompleteness at any time and a relaxed or bounded definition of consistency. The approach taken here is to provide a flexible system that can operate: in a trivial manner of only recording constraints; in an efficient manner, considering only those constraints necessary; and as an anytime algorithm. This last mode of operation is required, as efficient (near-optimal) computation may still be intractable in practice. This approach includes developing a structure of reference hierarchy intervals based on a partitioning of time, applicable by analogy to spatial data as well. This approach and its pitfalls have been suggested in [Allen83]. In order to preserve the hierarchy we create non-strict reference intervals, removing the restriction that intervals be strictly during a reference interval. Using the reference hierarchy structure, an anytime algorithm is implemented, partitioning inference and restricting the size of any call to the core inference procedure. This procedure is called on individual reference intervals. The procedure is restricted to operate on the constraints and intervals of a single reference interval at any one time. Limiting the size of reference intervals and the associated inference computation time provides halting and resumption for anytime performance. 3 Results We implement propagation and inference schemes using an expressive temporal reasoner, MATS [MATS91], that provides tractable reasoning and anytime operation.
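The anytime, per-reference-interval invocation described above can be sketched as follows. The bounds-tightening "core inference" and the partition format are toy stand-ins for a full temporal reasoner such as MATS; only the driver structure (a work budget, halting, and resumption over pending partitions) reflects the approach in the text.

```python
# Sketch of the anytime driver: the core reasoner is invoked on one
# reference interval's constraints at a time, under a work budget; when
# the budget runs out the driver halts, and a later call resumes with
# the partitions not yet processed.

def tighten(partition):
    """Toy core inference: for each ordering constraint (x, y), x must
    start no later than y and end no later than y; tighten the [lo, hi]
    bounds of both to a fixpoint."""
    bounds, prec = partition["bounds"], partition["before"]
    changed = True
    while changed:
        changed = False
        for x, y in prec:
            if bounds[y][0] < bounds[x][0]:   # y cannot start before x
                bounds[y] = (bounds[x][0], bounds[y][1]); changed = True
            if bounds[x][1] > bounds[y][1]:   # x cannot end after y
                bounds[x] = (bounds[x][0], bounds[y][1]); changed = True
    return bounds

def anytime_propagate(partitions, pending, budget):
    """Process pending reference intervals until the budget is spent.
    Returns the partitions still pending, enabling later resumption."""
    while pending and budget > 0:
        tighten(partitions[pending[0]])
        pending = pending[1:]
        budget -= 1
    return pending

partitions = {
    "morning": {"bounds": {"a": (2, 12), "b": (0, 10)}, "before": [("a", "b")]},
    "evening": {"bounds": {"c": (12, 24), "d": (12, 20)}, "before": [("c", "d")]},
}
pending = anytime_propagate(partitions, ["morning", "evening"], budget=1)
print(pending, partitions["morning"]["bounds"])
```

With a budget of one partition, "morning" is tightened and "evening" remains pending; a subsequent call with the returned pending list resumes exactly where this one halted.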
We provide methods for control of propagation and inference based on locality and on spatio-temporal relations. Application to a small problem has yielded encouraging results, exhibiting locality of effects. The implementation, with caching and propagation overhead and repeatedly invoking the reasoner on smaller interval sets, is nearly as fast as a single batch application. MATS does not scale well, while our approach has been developed for scaling to large, dynamic problems. 4 References [Allen83] J. F. Allen. “Maintaining Knowledge about Temporal Intervals,” Communications of the ACM, 26(11) [Allen92] J. F. Allen. “Getting Serious about Simultaneous Actions,” 1st International Planning Conference, Silver Springs, MD [Dean & Boddy 88] T. Dean, M. Boddy. “An Analysis of Time-Dependent Planning,” Proc. AAAI-88, Seattle, WA [MATS91] H. Kautz. “MATS Documentation,” AT&T Bell Laboratories, 1991
A Dynamic Organization in Distributed Constraint Satisfaction Katsutoshi Hirayama, Seiji Yamada, and Jun’ichi Toyoda ISIR, Osaka University, 8-1 Mihogaoka, Ibaraki, Osaka 567, JAPAN {hirayama, yamada, toyoda}@ai.sanken.osaka-u.ac.jp Abstract We present a novel dynamic organization to solve DCSP (Distributed Constraint Satisfaction Problem). DCSP provides a formal framework for studying cooperative distributed problem solving [Yokoo 92]. To solve DCSP, we have developed a simple algorithm using iterative improvement. This technique has had great success on certain CSPs (Constraint Satisfaction Problems) [Minton 90][Selman 92]. In our algorithm each agent performs iterative improvement, and plural agents can do so in parallel. However, one drawback of this technique is the possibility of getting caught in local minima (which are defined specifically in our algorithm). LMO is a technique for escaping from local minima. It is summarized as follows: When an agent (A1) gets caught in a local minimum, (step 1) A1 sends its CSP (variables, domains and constraints) to an agent (A2). A1 selects A2 such that it shares violated constraints at that time. Ties are broken randomly. (step 2) A2 puts its CSP and A1’s CSP together and searches for all possible assignments with simple backtracking. After that, A2 performs iterative improvement. Besides escaping from local minima, LMO prevents agents from getting caught in the same local minima as before. Therefore our algorithm for DCSP is complete. LMO is also an algorithm for a dynamic organization, since agents reassign the responsibilities of solving a CSP based on a developing view of the problem. As a dynamic organization, LMO is characterized by grouping in response to the conflicts (i.e., local minima) that arise during problem solving. This produces the effect that the organization with LMO (we call it the LMO organization) makes groups depending on the number of local minima.
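The two LMO steps above can be sketched in a simplified, single-process form. One-variable agents, a caller-supplied neighbour choice, and exhaustive search over the merged subproblem stand in for real message passing, tie-breaking, and backtracking; the sketch is illustrative, not the paper's algorithm.

```python
# Simplified sketch of LMO: each "agent" owns one variable; on a local
# minimum the stuck agent hands its variable and constraints to a
# neighbour sharing a violated constraint, and the neighbour searches
# the merged subproblem (exhaustive search stands in for backtracking).

from itertools import product

def violated(assignment, constraints):
    return [c for c in constraints if not c(assignment)]

def lmo_step(assignment, domains, constraints, stuck_var, neighbour_var):
    """Step 1: the stuck agent sends its CSP to a chosen neighbour.
    Step 2: the neighbour searches all joint assignments of the merged
    variables and keeps the first one that repairs the conflicts."""
    merged = (stuck_var, neighbour_var)
    for values in product(domains[stuck_var], domains[neighbour_var]):
        candidate = dict(assignment)
        candidate.update(zip(merged, values))
        if not violated(candidate, constraints):
            return candidate
    return assignment  # merged subproblem unsolvable at this level

domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
constraints = [lambda a: a["x"] != a["y"], lambda a: a["y"] != a["z"]]
assignment = {"x": 0, "y": 0, "z": 1}   # x != y violated: local minimum
print(lmo_step(assignment, domains, constraints, "x", "y"))
```

Here no single-variable change repairs both constraints, but jointly reassigning x and y does, which is exactly the escape the merge step provides.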
That is, when there are few local minima in a problem, the LMO organization solves it in a distributed manner, and when there are many, it does so in a centralized manner. To evaluate the performance of LMO, we have compared the LMO organization with the following ones. 1. Distributed organization: This organization always solves problems in a distributed manner. In this organization each agent performs iterative improvement. When one agent gets caught in a local minimum, all agents change their assignments randomly and continue to perform iterative improvement. 2. Centralized organization: This organization always solves problems in a centralized manner. In this organization, to begin with, agents have to solve the leader election problem. Then all agents (but the leader) send their CSP to the leader. Finally the leader searches for one solution with simple backtracking (the method used in LMO). Note that agents solve the leader election problem and send their CSP regardless of the possibility of distributed problem solving. In our experiments, for the problems with few local minima the LMO organization solves them faster than the Centralized organization (because the cost of leader election exceeds that of distributed problem solving), and for the problems with many local minima it solves them faster than the Distributed organization. Finally, in LMO, we use backtracking as a method to help iterative improvement. We believe that this approach will be applicable to non-distributed CSP. That will be our future work. References [Minton 90] Minton, S., Johnston, M. D., Philips, A. B., and Laird, P., Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method, AAAI-90, 17-24, 1990. [Selman 92] Selman, B., Levesque, H., and Mitchell, D., A New Method for Solving Hard Satisfiability Problems, AAAI-92, 440-446, 1992. [Yokoo 92] Yokoo, M., Durfee, E. H., Ishida, T.
and Kuwabara, K., Distributed Constraint Satisfaction for Formalizing Distributed Problem Solving, 12th IEEE International Conference on Distributed Computing Systems, 614-621, 1992.
Using Hundreds of Workstations to Solve First-Order Logic Problems Alberto Maria Segre & David B. Sturgill Department of Computer Science Cornell University Ithaca, NY 14853-7501 {segre,sturgill}@cs.cornell.edu Abstract This paper describes a distributed, adaptive, first-order logic engine with exceptional performance characteristics. The system combines serial search reduction techniques such as bounded-overhead subgoal caching and intelligent backtracking with a novel parallelization strategy particularly well-suited to coarse-grained parallel execution on a network of workstations. We present empirical results that demonstrate our system’s performance using 100 workstations on over 1400 first-order logic problems drawn from the “Thousands of Problems for Theorem Provers” collection. Introduction We have developed a distributed, adaptive, first-order logic engine as the core of a planning system intended to solve large logistics and transportation scheduling problems (Calistri-Yeh & Segre, 1993). This underlying inference engine, called DALI (Distributed, Adaptive, Logical Inference), is based on an extended version of the Warren Abstract Machine (WAM) architecture (Aït-Kaci, 1991), which also serves as the basis for many modern Prolog implementations. DALI takes a first-order specification of some application domain (the domain theory) and uses it to satisfy a series of queries via a model elimination inference procedure. Our approach is inspired by PTTP (Stickel, 1988), in that it is based on Prolog technology (i.e., the WAM) but circumvents the inherent limitations thereof to provide an inference procedure that is complete relative to first-order logic. Unlike PTTP, however, DALI employs a number of serial search reduction techniques such as bounded-overhead subgoal caching (Segre & Scharstein, 1993) and intelligent backtracking (Kumar & Lin, 1987) to improve search efficiency.
DALI also exploits a novel parallelization scheme called nagging (Sturgill & Segre, 1994) that supports the effective use of a large number of loosely-coupled processing elements. The message of this paper is that efficient implementation technology, serial search reduction techniques, and parallel nagging can be successfully combined to produce a high-performance first-order logic engine. We support this claim with an extensive empirical performance evaluation. The basis of our implementation is the WAM. The WAM supports efficient serial execution of Prolog: the core idea is that definite clauses may be compiled into a series of primitive instructions which are then interpreted by the underlying abstract machine. The efficiency advantage of the WAM comes from making compile-time decisions (thus reducing the amount of computation that must be repeated at run time), using carefully engineered data structures that provide an efficient scheme for unwinding variable bindings and restoring the search state upon backtracking, and taking several additional efficiency shortcuts, which, while acceptable for Prolog, are inappropriate for theorem proving in general. In our implementation, the basic WAM architecture is extended in three ways. First, we provide completeness with respect to first-order logic. Next, we incorporate serial search reduction techniques to enhance performance. Finally, we employ a novel asynchronous parallelization scheme that effectively distributes the search across a network of loosely-coupled heterogeneous processing elements. First-Order Completeness As described in (Stickel, 1988), Prolog - and the underlying WAM - can be used as the basis for an efficient first-order logic engine by circumventing the following intrinsic limitations: (i) Prolog uses unsound unification, i.e., it permits the construction of cyclic terms, (ii) Prolog’s unbounded depth-first search strategy is incomplete, and (iii) Prolog is restricted to definite clauses.
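Limitation (i) is what the occurs check repairs. A minimal sketch of sound unification with the occurs check follows (in Python rather than as a WAM instruction); the term encoding, with uppercase strings as variables and (functor, arg, ...) tuples as compound terms, is an illustrative assumption.

```python
# Sketch of sound unification: identical to Prolog's unification except
# for the occurs check, which rejects bindings like X = f(X) that would
# create cyclic terms.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur anywhere inside term t under subst?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    """Return an extended substitution, or None on failure."""
    subst = {} if subst is None else dict(subst)
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
print(unify("X", ("f", "X")))                   # None: occurs check fails
```

Prolog skips the `occurs` call for speed; that omission is exactly the unsoundness the text describes.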
PTTP demonstrates how these three limitations can be overcome without sacrificing the high inference rate common to many Prolog implementations. Like Stickel, we repair Prolog’s unsound unification by performing the missing “occurs check.” We also borrow a compile-time technique from (Plaisted, 1988) to “flatten” unification and perform circularity checking only when needed. In our implementation, the circularity check is handled efficiently by a new WAM instruction. We restore search completeness by using a depth-first iterative deepening search strategy (Korf, 1985) in the place of depth-first search.¹ Finally, the definite-clause restriction is lifted by adding the model elimination reduction operation to the familiar Prolog resolution step and by compiling in all contrapositive versions of each domain theory clause. As suggested in (Stickel, 1988), the use of the model elimination reduction operation enables the inclusion of cycle detection with little additional programming effort (cycle detection is a pruning technique that reduces redundant search). Serial Search Reduction Our second set of modifications to the WAM supports a number of adaptive inference techniques, or serial search reduction mechanisms. In (Segre & Scharstein, 1993) we introduce the notion of a bounded-overhead subgoal cache for definite-clause theorem provers. Bounded-overhead caches contain only a fixed number of entries; as new entries are made, old entries are discarded according to some preestablished cache management policy, e.g., least-recently used. Limiting the size of the cache helps to avoid thrashing, a typical consequence of unbounded-size caches operating within bounded physical memory.
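A bounded-overhead subgoal cache with a least-recently-used replacement policy can be sketched as follows. The subgoal strings and outcome labels are illustrative, and DALI's context-sensitive handling of non-definite theories is not modeled here.

```python
# Sketch of a bounded-overhead subgoal cache: at most `capacity`
# entries, storing both proven subgoals and subgoals known to be
# unprovable within the resource limit, with LRU replacement.

from collections import OrderedDict

class SubgoalCache:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.entries = OrderedDict()   # subgoal -> "success" | "failure"

    def lookup(self, subgoal):
        if subgoal not in self.entries:
            return None
        self.entries.move_to_end(subgoal)      # mark as recently used
        return self.entries[subgoal]

    def store(self, subgoal, outcome):
        self.entries[subgoal] = outcome
        self.entries.move_to_end(subgoal)
        if len(self.entries) > self.capacity:  # evict least recently used
            self.entries.popitem(last=False)

cache = SubgoalCache(capacity=2)
cache.store("p(a)", "success")
cache.store("q(b)", "failure")
cache.lookup("p(a)")            # touching p(a) makes q(b) the LRU entry
cache.store("r(c)", "success")  # evicts q(b)
print(sorted(cache.entries))    # ['p(a)', 'r(c)']
```

The fixed capacity is the "bounded overhead": lookups and stores cost constant time, and the cache can never grow past physical memory the way an unbounded one can.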
Cache entries consist of successfully-proven subgoals as well as subgoals which are known to be unprovable within a given resource limit; matching a cache entry reduces search by obviating the need to explore the same search space more than once. As a matter of policy, we do not allow cache hits to bind logical variables. In exchange for a reduction in the number of cache hits, this constraint avoids some situations where taking a cache hit may actually increase the search space. Cache entries are allowed to persist until the domain theory changes; thus, information acquired in the course of solving one query can help reduce search on subsequent queries. In a definite-clause theory, the satisfiability of a subgoal depends only on the form of the subgoal itself. However, when the model elimination reduction operation is used, a subgoal’s satisfiability may also depend on the ancestor goals from which it was derived. Accordingly, a subgoal may fail in one situation while an identical goal may succeed (via the reduction operation) elsewhere in the search. DALI extends the definite-clause caching scheme of (Segre & Scharstein, 1993) to accommodate the context sensitivity of cached successes. The DALI implementation presented here simply disables the caching of failures in theories not composed solely of definite clauses, although this is unnecessarily extreme. In addition to subgoal caching, we employ a form of intelligent backtracking similar to that of (Kumar & Lin, 1987). ¹ Unlike PTTP’s depth metric, which is based on the number of nodes in the proof, our depth-first iterative deepening scheme measures depth as the height of the proof tree. Although each depth measure has its advantages and neither leads to uniformly superior performance, our choice is motivated by concerns for compatibility with both our intelligent backtracking and caching schemes.
Normally, the WAM performs chronological backtracking, resuming search from the most recent OR choicepoint after a failure. Naturally, this new search path may also fail for the same underlying reason as the previous path. Intelligent backtracking attempts to identify the reasons for a failure and backtrack to the most recent choicepoint that is not doomed to repeat it. Our intelligent backtracking scheme requires minimal change to the WAM for the definite-clause case. Briefly, choicepoints along the current search path are marked at failure time depending on the variables they bind; unmarked choicepoints are skipped when backtracking. As with subgoal caching, the marking procedure must also take ancestor goals into account when deciding which choicepoints to mark. In (Sturgill & Segre, 1994) we introduce a parallel asynchronous search pruning strategy called nagging. Nagging employs two types of processes: a master process, which attempts to satisfy the user’s query through a sequential search procedure, and one or more nagging processes, which perform speculative search in an effort to prune the master’s current search branch. When a nagging process becomes idle it requests a snapshot of its master’s state as characterized by the variable bindings and the stack of open goals. The nagging process then attempts to prove a permuted version of this goal stack under these variable bindings using the same resource limit in effect on the master process. If the nagging process fails to find a proof, it guarantees that the master process will be unable to satisfy all goals on its goal stack under the current variable bindings. The master process is then forced to backtrack far enough to retract a goal or variable binding that was rejected by the nagger. If, however, the nagging process does find a proof, then it has satisfied a permuted ordering of all the master’s open goals, thereby solving the original query. This solution is then reported.
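The core of the nagging check can be sketched in miniature. Propositional definite clauses and a fixed reversed goal ordering stand in for full first-order model elimination and arbitrary permutations; the point of the sketch is only the pruning logic, so it is an illustration rather than DALI's protocol.

```python
# Toy sketch of nagging: given a snapshot of the master's open goal
# stack, the nagger tries to prove a permuted ordering of the same
# goals under the same resource limit. If the nagger exhausts its
# search, the master can safely backtrack past the snapshotted state.

def prove(goals, rules, depth):
    """Depth-bounded backward chaining; rules maps head -> list of bodies."""
    if not goals:
        return True
    if depth == 0:
        return False
    head, rest = goals[0], goals[1:]
    return any(prove(list(body) + rest, rules, depth - 1)
               for body in rules.get(head, []))

def nag(snapshot_goals, rules, depth):
    """Speculatively prove the goals in reverse order (one permutation)."""
    permuted = list(reversed(snapshot_goals))
    return prove(permuted, rules, depth)

rules = {"a": [["b"]], "b": [[]]}                # a :- b.   b.
assert prove(["a"], rules, depth=3)              # master can prove a alone
print(nag(["a", "impossible"], rules, depth=5))  # False: prune this branch
```

Because a conjunction of goals is provable under one ordering exactly when it is provable under any other (given the same bindings and bound), the nagger's failure is a sound pruning signal and its success solves the query outright.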
Nagging has many desirable characteristics. In particular, it affords some opportunity to control the granularity of nagged subproblems and is also intrinsically fault tolerant. As a result, nagging is appropriate for loosely-coupled hardware. Additionally, nagging is not restricted to definite-clause theories and requires no extra-logical annotation of the theory to indicate opportunities for parallel execution. Finally, nagging may be cleanly combined with other parallelism schemes such as OR and AND parallelism. Readers interested in a more complete and general treatment of nagging are referred to (Sturgill & Segre, 1994). Evaluation We wish to show that (i) subgoal caching, intelligent backtracking, and nagging combine to produce superior performance, and (ii) our approach scales exceptionally well to a large number of processors. In order for our results to be meaningful, they should be obtained across a broad spectrum of problems from the theorem proving literature. To this end, we use a 1457-element subset of the 2295 problems contained in the Thousands of Problems for Theorem Provers (TPTP) collection, release 1.0.0 (Suttner et al., 1993). The TPTP problems are expressed in first-order logic: 37% are definite-clause domain theories, 5% are propositional (half of these are definite-clause domain theories), and 79% require equality. The largest problem contains 6404 clauses, and the number of logic variables used ranges from 0 to 32000. For our test, we exclude 838 problems either because they contained more than one designated query clause, or, in one instance, due to a minor flaw in the problem specification. Four different configurations of the DALI system are applied to this test suite; three are serial configurations, while the fourth employs nagging. Each configuration operates on identical hardware and differs only in which serial search reduction techniques are applied and in whether or not additional nagging processors are used.
We use a single, dedicated, Sun Sparc 670MP “Cypress” system with 128MB of real memory as the main processor for each tested configuration. Nagging processors, when used, are drawn from a pool of 110 additional Sun Sparc machines, ranging from small Sparc 1 machines with 12MB of memory to additional 128MB 670MP processors running SunOS (versions 4.1.1 through 4.1.3). These machines are physically scattered throughout two campus buildings and are distributed among three TCP/IP subnets interconnected by gateways. Note that none of the additional machines are intrinsically faster than the main processor; indeed, the majority have much slower CPUs and far less memory than does the main processor. Furthermore, unlike the main processor, the nagging processors represent a shared resource and are used to support some number of additional users throughout the experiment. Three serial configurations of DALI are tested. Σ0 is a simple serial system that is essentially equivalent to a WAM-level reconstruction of PTTP modulo the previously cited difference in depth bound calculation.² Σ1 adds intelligent backtracking, while Σ2 incorporates intelligent backtracking, cycle detection, and a 100-element least-recently used subgoal cache. Each configuration performs unit-increment depth-first iterative deepening and is limited to exploring 1,000,000 nodes before abandoning the problem and marking it as unsolved; elapsed CPU time (sum of system time and user time) is recorded for each problem. Note that the size of the cache is quite arbitrarily selected; larger or smaller caches may well result in improved performance. In addition, a unit increment may well be a substantially suboptimal increment for iterative deepening. Depending on the domain, increasing the increment value or changing the cache size may have a significant effect on the system’s performance.
The results reported in this paper are clearly dependent on the values of these parameters, but the conclusions we draw from these results are based only on comparisons between identically-configured systems. The fourth tested configuration, Σ3, adds 99 nagging processors to the configuration of Σ2. Nagging processors are identically configured with intelligent backtracking, cycle detection, and 100-element least-recently used subgoal caches. For each problem, the currently “fastest” 99 machines (as determined by elapsed time for solving a short benchmark problem set) in the processor pool are selected for use as nagging processors. These additional processors are organized hierarchically, with 9 processors nagging the main processor and 10 more processors nagging each of these in turn. Hierarchical nagging, or meta-nagging, reduces the load on the main processor by amortizing nagging overhead costs over many processors. Recursively nagged processors are more effective as naggers in their own right, since nagging these processors prunes their search and helps them to exhaust their own search spaces more quickly. The main processor is subject to the same 1,000,000 node resource constraint as the serial configurations tested. Σ0 solves 384 problems within the allotted resource bound, or 26.35% of the 1457 problems attempted. Σ1, identical to Σ0 save for the use of intelligent backtracking and cycle detection, solves an additional 56 problems, or a total of 440 problems (30.19%). Σ2, which adds a 100-element subgoal cache to the configuration of Σ1, solves an additional 20 problems (76 more than Σ0), for a total of 460 problems solved (31.57%). Finally, Σ3, the 100-processor version of Σ2, solves a total of 514 problems (35.27%). Note that in every case adding a search reduction technique results in the solution of additional problems.³

² As a rough measure of performance, this configuration running on the hardware just described performs at about 10K LIPS on a benchmark definite-clause theory.

³ While Σ3 solves 56 problems not solved by Σ2, 2 problems solved by Σ2 were not solved by Σ3. While no individual technique is likely to cause an increase in the number of nodes explored, interactions between techniques may result in such an increase. For example, changes to search behavior due to nagging will affect the contents of the main processor’s cache; changes in cache contents will in turn affect the main processor’s search behavior with respect to an identical serial system.

Figure 1: Performance of Σ3 (log elapsed CPU time to solution on main processor) vs. performance of Σ0 (log elapsed CPU time to solution or failure). The “cross” datapoints correspond to the 384 problems solved by both systems, while the “diamond” datapoints correspond to the 130 problems solved only by Σ3; x-coordinate values for the “diamond” datapoints represent recorded time-to-failure for Σ0, an optimistic estimate of actual solution time. The two lines represent f(x) = x and f(x) = x/100. Since the granularity of our metering software is only 0.01 seconds, any problem taking less than 0.01 CPU seconds to solution is charged instead for 0.005 seconds.

The additional problems solved by each successively more sophisticated configuration represent one important measure of improved performance. A second metric is the relative speed with which the different configurations solve a given problem; we consider here one such comparison between the most sophisticated system tested, Σ3, and the least sophisticated system tested, Σ0. Figure 1 plots the
logarithm of the CPU time to solution for Σ3 against the logarithm of the CPU time required to either solve or fail to solve the same problem for Σ0. Each point in the plot corresponds to a problem solved by at least one of the systems; the 384 “cross” datapoints correspond to problems solved by both systems, while the 130 “diamond” datapoints correspond to problems solved only by Σ3. Datapoints falling below the line f(x) = x represent problems that are solved faster by Σ3, while datapoints falling below the line f(x) = x/100 represent problems that are solved more than 100 times faster with 100 processors.⁴ ⁴ The fact that some problems demonstrate superlinear speedup may seem somewhat alarming. Intuitively, using N identical processors should result in, at best, N times the performance. Here, the additional N - 1 processors are, on average, much slower than the main processor, and one would therefore initially expect sublinear speedup. However, superlinear speedup can result since the parallel system does not explore the space in the same order as the serial system. In particular, a nagging processor may explore a subgoal ordering that provides a solution with significantly less search than the original ordering, resulting in a net performance improvement much larger than N. In addition, the parallel system has the added advantage of subgoal caching and intelligent backtracking which also make substantial performance contributions on some problems. If we consider only those problems solved by both systems (the 384 “cross” datapoints in Figure 1) and if we informally define “easy” problems to be those problems requiring at most 1 second to solve with the serial system, then we see that the performance of Σ3 on such problems is often worse than that of the more naive serial system Σ0; thus, many of these datapoints lie above the f(x) = x line.
We attribute this poor performance to the initial costs of nagging (e.g., establishing communication and transmitting the domain theory to all processors). For “hard” problems, however, the initial overhead is easily outweighed by the performance advantage of nagging. Furthermore, the performance improvement on just a few “hard” problems dwarfs the loss in performance on all of the “easy” problems, an effect that is visually obscured by the logarithmic scale used for both axes of Figure 1.

A more precise way of convincing ourselves that Σ3 is superior to Σ0 is to use a simple nonparametric test, such as the one-tailed paired sign test (Arbuthnott, 1710) or the one-tailed Wilcoxon matched-pairs signed-ranks test (Wilcoxon, 1945), to test for statistically significant differences between the elapsed CPU times for problems solved by both systems. These tests are nonparametric analogues of the more commonly used Student t-test; nonparametric tests are more appropriate here since we do not know anything about the underlying distribution of the elapsed CPU times. The null hypothesis we are testing is that the recorded elapsed CPU times for Σ0 are at least as fast as the recorded elapsed CPU times for Σ3 on the 384 problems solved by both systems. The Wilcoxon test provides only marginal evidence for the conclusion that Σ3 is faster than Σ0 (N = 384, p = .096). However, if we consider only the “harder” problems, then there is significant evidence to conclude that Σ3 is faster than Σ0 (N = 70, p < 10⁻⁶). Of course, both our informal visual analysis of Figure 1 and the nonparametric analysis just given systematically understate the relative performance of Σ3 by excluding the 130 problems that were solved only by Σ3 (the “diamond” datapoints in Figure 1).
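As an illustration, the one-tailed paired sign test mentioned above can be computed directly from the two time vectors. The following sketch (our own, not part of the original system) counts the pairs on which the second system wins and evaluates the binomial tail under the null hypothesis of a fair coin:

```python
from math import comb

def sign_test_one_tailed(times_a, times_b):
    """One-tailed paired sign test (after Arbuthnott, 1710).

    Tests the alternative hypothesis that system B is faster than
    system A: more pairs have times_b < times_a than chance (p = 1/2)
    would predict. Ties are dropped, as is conventional.
    """
    wins = sum(1 for a, b in zip(times_a, times_b) if b < a)
    losses = sum(1 for a, b in zip(times_a, times_b) if b > a)
    n = wins + losses
    # p-value: P(X >= wins) under Binomial(n, 0.5)
    p = sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n
    return wins, losses, p
```

The Wilcoxon matched-pairs test used in the paper is strictly stronger, since it also ranks the magnitudes of the paired differences; the sign test shown here uses only their signs.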
Since these problems were not solved by Σ0, we take the time required for Σ0 to reach the resource bound as the abscissa for the datapoint in Figure 1; this is an optimistic estimate of the real solution time, since we know Σ0 would require at least this much time to actually solve the problem. Graphically, the effect is to displace each “diamond” datapoint to the left of its true position by some unknown margin. Note that even though their x-coordinate values are understated, 117 (90%) of these datapoints still fall below the f(x) = x line, and 12 (roughly 10%) still demonstrate superlinear speedup. If we could use the actual Σ0 solution time as the abscissa for these 130 problems, the effect would be to shift each “diamond” datapoint to the right to its true position, greatly enhancing Σ3's apparent performance advantage over Σ0.

Is it possible to tease apart the performance contribution due to each individual technique? We can use the Wilcoxon test to compare each pair of successively more sophisticated system configurations. We conclude that Σ1 is significantly faster than Σ0 (N = 384, p < 10⁻⁴), indicating that intelligent backtracking and cycle detection together are effective serial speedup techniques. In similar fashion, Σ2 is in turn significantly faster than Σ1 (N = 440, p < 10⁻⁴), indicating that subgoal caching is also an effective serial speedup technique when used with intelligent backtracking and cycle detection. In contrast, we find only marginal evidence that Σ3 is uniformly faster than Σ2 over the entire problem collection (N = 458, p = .072). However, as with the Σ3 vs. Σ0 comparison, separating “harder” problems, where Σ3 significantly outperforms Σ2 (N = 108, p < 10⁻⁶), from “easier” problems, where Σ2 significantly outperforms Σ3 (N = 350, p < 10⁻⁴), enables us to make statistically valid statements about the relative performance of Σ2 and Σ3.
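The visual classification against the two reference lines of the log-log plot can be made explicit. This illustrative sketch counts how many paired timings fall below f(x) = x and below f(x) = x/100:

```python
def classify_speedups(serial_times, parallel_times, processors=100):
    """Count datapoints falling below f(x) = x (parallel faster) and
    below f(x) = x/processors (superlinear speedup).

    For problems the serial system never solved, serial_times should
    hold the recorded time-to-failure; since that is only a lower bound
    on the true solution time, both counts are then conservative.
    """
    below_x = sum(p < s for s, p in zip(serial_times, parallel_times))
    superlinear = sum(p < s / processors
                      for s, p in zip(serial_times, parallel_times))
    return below_x, superlinear
```

For example, a pair (serial = 100 s, parallel = 0.5 s) counts toward both totals, since 0.5 < 100 and 0.5 < 100/100.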
As before, all of these results, by ignoring problems left unsolved by one of the systems being compared, systematically understate the performance advantage of the more sophisticated system in the comparison. Unlike nagging, where problem size is a good predictor of performance improvement, it is much more difficult to characterize when caching, intelligent backtracking, or cycle detection is advantageous. Some problems are solved more quickly with these techniques, while other problems are not; knowing whether a problem is “hard” or “easy” a priori gives no information about whether or not caching, intelligent backtracking, or cycle detection will help, a conclusion that is supported by our statistical analysis.

Conclusion

We have briefly reviewed the design and implementation of the DALI system. The premise of this paper is that efficient implementation technology, serial search reduction techniques, and nagging can be successfully combined to produce a first-order logic engine that can effectively bring hundreds of workstations to bear on large problems. We have supported our claims empirically over a broad range of problems from the theorem proving literature.

While the results presented here are quite good, we believe we can still do better. We are now in the process of adding an explanation-based learning component that compiles “chains of reasoning” used in successfully solved problems into new macro-operators (Segre & Elkan, 1994). We are also exploring alternative cache-management policies, the use of dynamically-sized caches, and compile-time techniques for determining how caching can be used most effectively in a given domain. Similarly, we are studying compile-time techniques for selecting appropriate opportunities for nagging, and we are also looking at how best to select a topology of recursive nagging processors. Finally, we are exploring additional sources of parallelism.
These efforts contribute to a larger study of practical, effective inference techniques. In the long term, we believe that our distributed, adaptive approach to first-order logical inference, driven by a broad-spectrum philosophy that integrates multiple serial search reduction techniques as well as the use of multiple processing elements, is a promising one that is also ideally suited to large-scale applications of significant practical importance.

Acknowledgements

We wish to acknowledge Maria Paola Bonacina, Randy Calistri-Yeh, Charles Elkan, Don Geddis, Geoff Gordon, Simon Kasif, Drew McDermott, David Plaisted, Mark Stickel, and three anonymous reviewers for their helpful comments on an early draft of this paper. Support for this research was provided by the Office of Naval Research through grant N00014-90-J-1542 (AMS), by the Advanced Research Projects Agency through Rome Laboratory Contract Number F30602-93-C-0018 via Odyssey Research Associates, Incorporated (AMS), and by the Air Force Office for Scientific Research through a Graduate Student Fellowship (DBS).

References

Aït-Kaci, H. (1991). Warren's Abstract Machine. Cambridge, MA: MIT Press.
Arbuthnott, J. (1710). An Argument for Divine Providence, Taken from the Constant Regularity Observed in the Births of Both Sexes. Philosophical Transactions, 27, 186-190.
Calistri-Yeh, R.J. & Segre, A.M. (April 1993). The Design of ALPS: An Adaptive Architecture for Transportation Planning (Technical Report TM-93-0010). Ithaca, NY: Odyssey Research Associates.
Korf, R. (1985). Depth-First Iterative Deepening: An Optimal Admissible Tree Search. Artificial Intelligence, 27(1), 97-109.
Kumar, V. & Lin, Y-J. (August 1987). An Intelligent Backtracking Scheme for Prolog. Proceedings of the IEEE Symposium on Logic Programming, 406-414.
Plaisted, D. (1988). Non-Horn Clause Logic Programming Without Contrapositives. Journal of Automated Reasoning, 4(3), 287-325.
Segre, A.M. & Elkan, C.P. (To appear, 1994).
A High Performance Explanation-Based Learning Algorithm. Artificial Intelligence.
Segre, A.M. & Scharstein, D. (August 1993). Bounded-Overhead Caching for Definite-Clause Theorem Proving. Journal of Automated Reasoning, 11(1), 83-113.
Stickel, M. (1988). A Prolog Technology Theorem Prover: Implementation by an Extended Prolog Compiler. Journal of Automated Reasoning, 4(4), 353-380.
Sturgill, D.B. & Segre, A.M. (To appear, June 1994). A Novel Asynchronous Parallelization Scheme for First-Order Logic. Proceedings of the Twelfth Conference on Automated Deduction.
Suttner, C.B., Sutcliffe, G. & Yemenis, T. (1993). The TPTP Problem Library (TPTP v1.0.0) (Technical Report FKI-184-93). Munich, Germany: Institut für Informatik, Technische Universität München.
Wilcoxon, F. (1945). Individual Comparisons by Ranking Methods. Biometrics, 1, 80-83.
Situated Agents Can Have Plans

Mark Fasciano
Department of Computer Science
University of Chicago
Chicago, IL 60637
fasciano@cs.uchicago.edu

Much of our everyday activity is not made up of solving isolated problems with single clear-cut goals, but is rather dedicated to the ongoing maintenance of many goals or policies, such as eating when hungry and maintaining a comfortable personal space. Furthermore, in many complex, dynamic worlds, an agent must maintain many goals at the same time and be able to act quickly and flexibly, because some decisions are time-critical and the world is not perfectly predictable. The domain of SimCity requires this kind of behavior. In the simulation, you are never finished repairing the roads or fighting crime, because disasters and evolution will always require more work in the future. Survival depends on deciding what you should work on now, and by what you will allow yourself to be interrupted. In short, playing SimCity requires a robust theory of attention.

Situated agents are built to address problems of timely activity in complex, dynamic worlds (Maes 1990). Such systems stress that competent behavior can arise out of continually selecting primitive actions based on information about the environment. Situated activity theories propose that resource conflicts can be avoided by noticing conflicts at compile time. Most situated agents, however, are not required to accomplish long-term activities such as building an industrial complex. Memory-based planning (Hammond 1989), on the other hand, contends that in some interesting worlds an episodic memory of plans can adequately cover the longer-term problems an agent will face, thus providing an alternative to the intractability of exhaustive search. How do we arbitrate between memory-based, long-term behavior and reactive, short-term maintenance?
The Problem of Interruption in SimCity

Imagine that, in response to the problem of unemployment in the city, you decide to build an industrial park on the outskirts of town. You develop a plan for this, and you have the funds to execute it. Such a project could take three simulator months, but during the execution of this plan, an earthquake strikes. If you continue to carry out your plan, half the city may be destroyed by fires. Since you can only do one thing at a time, you must interrupt your industrial development plan and deal with the earthquake.

The MAYOR Project

The goal of the MAYOR project is to build a planner which is sensitive to long-term abstract goals, such as increasing population or generating income, and long-term activity, such as building a suburb outside the city center. At the same time, MAYOR must react to unpredictable phenomena such as earthquakes, and to the continual maintenance of the infrastructure of the city (roads, power lines, etc.). The main point of the planner is to provide a model for how attention may be focused in a complex world like SimCity. Inspired by Minsky (1985), MAYOR consists of a network of agents called advocates, each of which is dedicated to working on specific tasks. Some advocates are designed to monitor certain conditions in the world and develop plans to address their task or purpose. Other advocates are designed not to monitor the world, but to monitor other advocates, settling competing claims to the same resources. Some conflicts between goals are settled at compile time; advocates which address the same goals inhibit each other. For example, while the Urban-Planner advocate is building an industrial complex to increase employment, the Roadworker is blocked from maintaining roads while the complex is being constructed, because the Urban-Planner's plan addresses this goal.
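The compile-time inhibition scheme just described can be sketched as follows; the class, advocate, and goal names are illustrative only, not taken from the MAYOR implementation:

```python
# Hypothetical sketch of MAYOR-style advocates with compile-time
# inhibition between advocates that address the same goals.
class Advocate:
    def __init__(self, name, goals):
        self.name = name
        self.goals = set(goals)       # goals this advocate's plans address
        self.inhibited_by = set()     # filled in at "compile time"

def compile_inhibitions(advocates):
    """Advocates which address the same goals inhibit each other."""
    for a in advocates:
        for b in advocates:
            if a is not b and a.goals & b.goals:
                b.inhibited_by.add(a.name)
    return advocates

urban = Advocate("Urban-Planner", {"maintain-roads", "increase-employment"})
road = Advocate("Roadworker", {"maintain-roads"})
fire = Advocate("Firefighter", {"extinguish-fires"})
compile_inhibitions([urban, road, fire])
# Roadworker and Urban-Planner inhibit each other (shared goal), while
# the Firefighter shares no goal and remains free to interrupt at run time.
```

The point of the sketch is only the separation of concerns: shared-goal conflicts are resolved once, in advance, leaving run-time arbitration for conflicts that cannot be foreseen.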
On the other hand, if a fire breaks out, the Urban-Planner may be interrupted by the Firefighter advocate, since the Urban-Planner does not handle fires. Instead of attempting to foresee all possible conflicts, MAYOR proposes that such activity conflicts should be handled at run time, with a set of task arbitration strategies. An example of an implemented strategy is “seize-cheap-opportunity”, which stops work on a long-term task in favor of an inexpensive, local, short-term plan. Currently MAYOR attempts to service the long-term goal of maintaining a minimum income while handling problems of crime, fire protection and fires, and pollution control. At this stage of development, however, it is clear that success hinges on a robust theory of attention.

References

Hammond, K. 1989. Case-Based Planning: Viewing Planning as a Memory Task. Academic Press.
Maes, P. 1990. Situated Agents Can Have Goals. Robotics and Autonomous Systems 6: 49-70.
Minsky, M. 1985. The Society of Mind. New York, NY: Simon & Schuster.

Student Abstracts 1445
From: AAAI-94 Proceedings. Copyright © 1994, AAAI (www.aaai.org). All rights reserved.
The KM/KnEd System: An Integrated Approach to Building Large-scale Multifunctional Knowledge Bases*

Erik Eilerts
Department of Computer Science
University of Texas
Austin, Texas 78712-1188
(512) 471-9565
eilerts@cs.utexas.edu

1 Background

In 1987, Dr. Bruce Porter began work at the University of Texas at Austin on the Botany Knowledge Base Project. The goal of the project is to develop a large-scale multi-functional knowledge base in the area of Botany. This Botany Knowledge Base (BKB) is used to support research projects in question answering, automated modeling, and intelligent tutoring. Due to the size and complexity of the BKB, a decision was made in 1990 to begin construction of a new knowledge representation language and interface to support the knowledge base. The knowledge representation language was named KM, for Knowledge Manager, and the interface was named KnEd, for Knowledge Editor. The KM/KnEd system is similar to Doug Lenat's CYC project and Doug Skuce's CODE4 system.

2 KM/KnEd

KM is a frame-based knowledge representation language that uses slot-and-filler structures. KM's most important feature is that it allows the annotation of values with extra details. Value annotations are used to represent information contextually. Consider the assertion “Texas Bluebonnets secrete nectar that contains a low concentration of sugar.” This assertion could be represented by creating the frame The-Sugar-contained-in-the-Nectar-secreted-by-a-Texas-Bluebonnet and adding the “concentration Low” attribute to it. But this removes all contextual information about the frame. Instead, KM allows the user to retain contextual information by recursively nesting annotations (KM is one of the few languages with this feature). Figure 1 shows how this assertion is represented using KM's value annotation mechanism. In this figure, the (Texas-Bluebonnet secretes Nectar contains Sugar) address is connected to the “concentration Low” attribute.
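KM's nested value annotations can be approximated with nested structures. The following Python sketch is a loose analogy of our own, not KM's actual syntax or implementation:

```python
# Each slot value carries an "annotations" map of further slot values,
# so details like the sugar's concentration stay attached to the context
# (the nectar of a Texas Bluebonnet) rather than to a monolithic frame.
kb = {
    "Texas-Bluebonnet": {
        "secretes": {
            "value": "Nectar",
            "annotations": {
                "contains": {
                    "value": "Sugar",
                    "annotations": {
                        "concentration": {"value": "Low", "annotations": {}},
                    },
                },
            },
        },
    },
}

def lookup(kb, frame, *path):
    """Follow a chain of annotated slots, e.g.
    lookup(kb, 'Texas-Bluebonnet', 'secretes', 'contains', 'concentration')."""
    node = kb[frame][path[0]]
    for slot in path[1:]:
        node = node["annotations"][slot]
    return node["value"]
```

Note how the concentration attribute is only reachable through the full (Texas-Bluebonnet secretes Nectar contains Sugar) address, mirroring the contextual reading of the assertion.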
*Support for this research was provided by a grant from the National Science Foundation (IRI-9120310), a contract from the Air Force Office of Scientific Research (F49620-93-1-0239), and donations from the Digital Equipment Corporation. This work was conducted at the University of Texas at Austin.

[Figure 1: Value Annotation. A diagram of the Texas-Bluebonnet frame in which the value Nectar of the secretes slot carries the annotation contains: Sugar, which is in turn annotated with concentration: Low.]

Figure 2: KnEd Text Pane

TEXAS-BLUEBONNET (Hierarchy)
  English          The famous Texas Bluebonnet.
  Generalizations  Central-Texas-Flower
  Filled-slots     Secretes Nectar
                     NECTAR Contains Sugar
                       SUGAR Concentration Low

KnEd is a graphical user interface for viewing and editing large-scale knowledge bases. The basic display mechanism used is the text pane, which displays all the relations associated with a frame. Figure 2 is an example of a text pane display of the Texas-Bluebonnet frame. KnEd provides several tools to aid the user in adding, removing, changing, and copying values in the knowledge base. KnEd also supports navigation around the knowledge base and distributed editing.

With the help of KM and KnEd, the BKB has successfully grown to contain more than 150,000 facts. The complexity added by the annotation mechanism of KM made KnEd's viewing and editing capabilities indispensable. As knowledge bases increase in size, tools such as KM's annotation mechanisms and the KnEd interface will become essential.
Exploiting the Ordering of Observed Problem-solving Steps for Knowledge Base Refinement: an Apprenticeship Approach

Steven K. Donoho and David C. Wilkins
Department of Computer Science, University of Illinois
Urbana, IL 61801
donoho@cs.uiuc.edu, wilkins@cs.uiuc.edu

Apprenticeship is a powerful method of learning among humans in which a student refines his knowledge by observing and analyzing the problem-solving steps of an expert. In this paper we focus on knowledge base (KB) refinement for classification problems and examine how the ordering of the intermediate steps of an observed expert can be used to yield leverage in KB refinement. In the classical classification problem, the problem-solver is given an example consisting of a set of attributes and their corresponding values, and it must put the example in one of a pre-enumerated set of classes. Consider a slightly different situation, though, in which the problem-solver is not given all the attribute/value pairs from the outset but rather must request attributes one at a time and make his classification decision once sufficient evidence is gathered. This situation arises when it is too costly or otherwise unreasonable to simply be given all the attribute values. When a mechanic is troubleshooting a malfunctioning car, he does not run every test possible and then stop to examine his data and make his decision. Rather, he checks one thing, and based on the result of that, he decides what to check next. Thus the order in which attributes are requested reflects the internal problem-solving process going on in the mind of the observed expert. By watching the order in which a superior problem-solver requests attributes, we should be able to refine the KB of a weaker problem-solver. The ordering of attribute requests can be used to detect KB shortcomings because it allows the analysis of an attribute request with respect to what was and what was not known at the time of the request.
As an example from the audiology domain, if the expert requests the attribute history-noise after age-gt-60 is known to be true, it can be assumed that knowing history-noise is important even when age-gt-60 is true. If, in the KB we are refining, history-noise is not worth knowing given that age-gt-60 = true, then our KB contradicts the actions of the expert, indicating a KB shortcoming. Using our KB, we cannot explain why the expert would ask history-noise given what he already knew; thus, the expert must have some knowledge which our KB lacks. Once a KB shortcoming has been detected, an attempt is made to repair it by adding a rule of the form:

condition1 ∧ ... ∧ conditionN → class_i

where each condition is an attribute/value pair such as history-dizziness = true. The repair is built by starting with an initial single-condition rule and greedily adding conditions. The condition in the initial rule consists of the unexplained attribute (history-noise in the above example) and one of its possible values. The class of the initial rule is any class that has not been ruled out by the attribute requests preceding the unexplained attribute (with respect to a set of training examples). Since there may be multiple feasible classes and multiple values for the unexplained attribute, multiple initial rules may have to be explored. The conditions which are greedily added each consist of an attribute requested before the unexplained attribute and its known value; knowledge of these attributes gave rise to the request of the unexplained attribute, so they may be related to the unexplained attribute and to the shortcoming. Conditions are added until the purity of the set of training examples covered by the rule no longer improves. The principles explored have been implemented, and experiments were run using the audiology dataset.
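The greedy repair loop just described can be sketched as follows. The data layout and attribute names are hypothetical, and purity is simply the fraction of covered training examples belonging to the rule's class:

```python
def purity(rule_conds, rule_class, examples):
    """Fraction of examples covered by rule_conds that are in rule_class."""
    covered = [e for e in examples
               if all(e["attrs"].get(a) == v for a, v in rule_conds)]
    if not covered:
        return 0.0
    return sum(e["class"] == rule_class for e in covered) / len(covered)

def repair_rule(unexplained, value, rule_class, prior_requests, examples):
    """Start from the single condition (unexplained = value) and greedily
    add conditions drawn from the attributes the expert requested earlier,
    stopping when purity no longer improves."""
    conds = [(unexplained, value)]
    best = purity(conds, rule_class, examples)
    candidates = dict(prior_requests)          # attribute -> known value
    improved = True
    while improved and candidates:
        improved = False
        for attr, val in list(candidates.items()):
            p = purity(conds + [(attr, val)], rule_class, examples)
            if p > best:
                best, conds = p, conds + [(attr, val)]
                del candidates[attr]
                improved = True
                break
    return conds, best
```

Because only attributes the expert actually requested before the unexplained attribute are candidates, the search is tightly focused compared with a purely empirical rule learner.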
In a test with a training set of size 100, an initial KB was created using C4.5 which achieved an accuracy of 67.9%. This initial KB was refined, and the final KB achieved an accuracy of 82.1%, a net improvement of 14.2%. The ordered sequences of requested attributes were generated by C4.5 using all 226 audiology examples. The power of the attribute-ordering method of apprenticeship is that it does not rely solely on empirical calculations to discover attribute/class relationships. Rather, these attribute/class relationships are suggested by attribute ordering and are only verified empirically, requiring less empirical evidence.
Substructure Discovery Using the Minimum Description Length Principle and Background Knowledge

Surnjani Djoko
Department of Computer Science and Engineering
University of Texas at Arlington
Box 19015, Arlington, TX 76019
djoko@cse.uta.edu

Abstract

Discovering conceptually interesting and repetitive substructures in structural data improves the ability to interpret and compress the data. The substructures are evaluated by their ability to describe and compress the original data set using the domain's background knowledge and the minimum description length (MDL) of the data. Once discovered, the substructure concept is used to simplify the data by replacing instances of the substructure with a pointer to the newly discovered concept. The discovered substructure concepts allow abstraction over detailed structure in the original data. Iteration of the substructure discovery and replacement process constructs a hierarchical description of the structural data in terms of the discovered substructures. This hierarchy provides varying levels of interpretation that can be accessed based on the goals of the data analysis.

The structural data is represented as a labeled graph. A substructure is a connected subgraph within the graphical representation. An instance of a substructure in an input graph is a set of vertices and edges from the input graph that match, graph-theoretically, the graphical representation of the substructure. The substructures are evaluated by their ability to describe and compress the original data set using the domain's background knowledge and the minimum description length (MDL) of the data. Once interesting substructures are discovered, they can be replaced by a single representative node in the original graph, and can be used as part of another substructure definition in a hierarchy of discovered structures.
The minimum description length principle states that the best theory to describe a set of data is the theory which minimizes the description length of the entire data set. The minimum description length of a graph is defined to be the number of bits necessary to completely describe the graph. The theory that best accounts for a collection of data is the one that minimizes I(S) + I(G|S), where S is the discovered substructure, G is the input graph, I(S) is the number of bits required to encode the discovered substructure, and I(G|S) is the number of bits required to encode the input graph G with respect to S.

Although the principle of minimum description length is useful for discovering substructures that maximize compression of the data, scientists often employ knowledge or assumptions of a specific domain in the discovery process. To make the discovery process more powerful across a wide variety of domains, background knowledge has been added to guide the discovery process. This background knowledge is entered in the form of rules for evaluating substructures. Because only the most-favored substructures are kept and expanded, these rules control the discovery process of the system. For example, in the CAD circuit domain, circuit components can be classified according to their passivity. A component which is not passive is said to be active. The active components are the main driving components. Identifying the active components is the first step in understanding the main function of the circuit. The component rule assigns relatively higher values to the active components, and lower values to the passive components. Once the active components are selected, attention can be focused on the passive components. Similarly, the loop analysis rule favors subcircuits containing loops, since the components in a closed path are generally part of a subcircuit or constitute the subcircuit itself.
Furthermore, the component complexity rule prefers a minimum number of distinct components in the substructure.

The approach has also been applied to the domains of chemical compound analysis, scene analysis, CAD circuit analysis, and analysis of artificially-generated graphs. The results demonstrate the applicability and significance of the approach in these domains.
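The I(S) + I(G|S) evaluation at the heart of the approach can be illustrated with a deliberately crude encoding of our own (not the actual encoding used by the system), charging log2 of the label-alphabet size per vertex and per edge endpoint:

```python
from math import log2

def bits(n_vertices, n_edges, n_labels):
    """Toy description length: each vertex and each edge endpoint pays
    log2(label-alphabet size) bits."""
    per_item = log2(max(n_labels, 2))
    return (n_vertices + 2 * n_edges) * per_item

def mdl_score(sub_v, sub_e, g_v, g_e, n_instances, n_labels):
    """I(S) + I(G|S): encode the substructure once, then the graph with
    each of its n_instances collapsed to a single pointer vertex."""
    i_s = bits(sub_v, sub_e, n_labels)
    collapsed_v = g_v - n_instances * (sub_v - 1)  # each instance -> 1 vertex
    collapsed_e = g_e - n_instances * sub_e        # internal edges removed
    i_g_given_s = bits(collapsed_v, collapsed_e, n_labels)
    return i_s + i_g_given_s
```

A substructure is worth keeping under this toy score when mdl_score(...) is smaller than bits(g_v, g_e, n_labels), the cost of encoding the input graph with no compression at all; background-knowledge rules would then bias the choice among substructures that compress comparably well.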
Dynamically Adjusting Categories to Accommodate Changing Contexts

Mark Devaney and Ashwin Ram
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
E-mail: {markd,ashwin}@cc.gatech.edu

Context

Concept formation is the process by which generalizations are formed through observation of instances from the environment. These instances are described along a number of attributes, which are selected according to their relevance to the problem or task for which the concepts will be used. The context of a concept learning problem consists of the goals and tasks of the learner, as well as its background knowledge, domain theories, and the external environment in which it operates. Context is essential to inductive concept learning, for it determines which attributes to use for a given problem out of the infinitely many available, providing a bias for the learner (Mitchell, 1980). Furthermore, context is not a static entity, but is constantly changing, especially in the types of learning tasks faced by humans (e.g., Seifert 1989; Barsalou 1991). As concept formation systems are employed in tasks more typical of natural domains and “real-world” problems, the ability to respond to changing contexts becomes increasingly important.

Attribute-incrementation

Toward this end, we introduce the notion of attribute-incrementation, the dynamic incorporation and removal of attributes from existing concepts. This ability allows a concept learner to accommodate changing contexts by altering the set of attributes used to describe instances in a problem domain while retaining its prior knowledge of that domain. This capability has been implemented in a concept formation system called AICC (Attribute Incremental Concept Creator), an extension of an existing concept learner, COBWEB (Fisher, 1987). AICC is capable of both adding new attributes to and removing existing ones from a COBWEB concept hierarchy and restructuring it accordingly.
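The idea of folding a new attribute into an existing hierarchy without reclassifying the stored instances can be sketched as follows. This is our own simplified reconstruction for illustration, not AICC's actual algorithm:

```python
# Each concept node remembers which instance ids it covers, so a new
# attribute's value distribution can be tallied at every node directly,
# leaving the hierarchy's structure (and prior knowledge) untouched.
class Concept:
    def __init__(self, instances, children=()):
        self.instances = list(instances)   # ids of covered instances
        self.children = list(children)
        self.counts = {}                   # attribute -> {value: count}

def add_attribute(root, attr, values):
    """values maps instance id -> observed value for the new attribute."""
    stack = [root]
    while stack:
        node = stack.pop()
        dist = {}
        for i in node.instances:
            v = values[i]
            dist[v] = dist.get(v, 0) + 1
        node.counts[attr] = dist
        stack.extend(node.children)

def remove_attribute(root, attr):
    """Drop an attribute from every node, again without restructuring."""
    stack = [root]
    while stack:
        node = stack.pop()
        node.counts.pop(attr, None)
        stack.extend(node.children)
```

The contrast with rebuilding from scratch is the point: each node is updated in time proportional to the instances it covers, and no instance is ever reclassified.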
Performance

We have performed extensive evaluations of AICC and compared its performance along several dimensions to that of COBWEB. One of the conclusions of this research is that AICC is able to construct concept hierarchies by incrementally incorporating attributes in significantly less time than COBWEB. These hierarchies achieve predictive accuracy and classification efficiency comparable to those produced by COBWEB. In additional experiments, AICC has been used to remove attributes from existing concept hierarchies as well as add new attributes to hierarchies constructed with varying numbers of initial attributes. These experiments have been replicated with a wide variety of data, with similar results.

Conclusions

Current concept learners are referred to as incremental if they incorporate instances one at a time into their concept hierarchies. However, the attribute set used to describe these instances is an integral part of the concept formation problem. The ability to incorporate attributes incrementally allows concept learners to dynamically modify their bias and respond to a wider variety of changes in context without discarding prior domain knowledge. This ability is important given the trend toward creating systems that must face real-world tasks and their corresponding constraints.

References

Barsalou, L. W. 1991. Deriving categories to achieve goals. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 27). New York: Academic Press.
Fisher, D. H. 1987. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, 139-172.
Mitchell, T. M. 1980. The need for biases in learning generalizations (Tech. Rep. CBM-TR-117). New Brunswick, NJ: Rutgers University Department of Computer Science.
Seifert, C. M. 1989. A retrieval model using feature selection. Proceedings of the Sixth International Workshop on Machine Learning, 52-54. Ithaca, NY: Morgan Kaufmann.
Goal-Clobbering Avoidance in Non-Linear Planners

Rujith de Silva
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, Pennsylvania 15213-3891
desilva+@cmu.edu

A central issue in non-linear planning is the ordering of operators so as to avoid undesirable interactions between their effects. The Modal Truth Criterion (Chapman 1987) states the conditions under which these interactions will occur. Non-linear planners use the Criterion, directly or indirectly, to promote or demote operators, or to co-designate variables, so as to avoid interactions.

This abstract describes a method, called Goal-Clobbering Avoidance (GCA), to avoid some interactions in a partially-ordered plan by promoting or demoting a sequence of operators, rather than individual operators. Effectively, it simultaneously applies the Modal Truth Criterion to all operators in the sequence, using pre-compiled information about the domain. GCA will be illustrated in the familiar Blocksworld domain, with the operators

(put-down ?block)        ; puts ?block on the table
(pick-up ?block)         ; picks ?block up from the table
(stack ?blockA ?blockB)
(unstack ?blockA ?blockB)

Consider the following problem, related to Sussman's Anomaly, in which Block C has been unstacked from Block A.

[Figure: the initial Blocksworld configuration and the goal state, with A on B and B on C.]

Furthermore, suppose a partially-ordered plan has been built as shown towards solving the problem.

[Figure: a partial plan with two branches. In one, (stack A B) achieves (on A B), preceded by (pick-up A), which achieves (holding A); in the other, (stack B C) achieves (on B C), preceded by (pick-up B), which achieves (holding B); (put-down C) achieves (arm-empty) for the pick-ups.]

This plan has a large number of interactions involving (arm-empty), (clear B) and (clear C), and it is not immediately clear how to promote or demote the operators to achieve the desired goals. Consider the goals and operators under (on A B) in relation to the sibling goal (on B C). If (on A B) is achieved first, then any plan that achieves (on B C) will dis-achieve (on A B). Similarly, any such plan will also dis-achieve (holding A).
Hence applying (pick-up A) before (stack B C) is pointless, as its effects will be clobbered by the later achievement of (on B C). However, applying (put-down C) is not pointless, as (arm-empty) is not dis-achieved by (on B C). Therefore the planner can split the left-hand branch by constraining (pick-up A) and (stack A B) to occur after the achievement of (on B C). Furthermore, it can do this before even deciding how to achieve (on B C), as it makes use of the following judgements expressing properties of all possible ways of achieving (on B C):

Goal-clobbering by (on ?B ?C) of (on ?A ?B)
Goal-clobbering by (on ?B ?C) of (holding ?A)

The first states that any plan, justified or not, whose final operator achieves (on ?B ?C) must result in a final state in which (on ?A ?B) is not true. Similarly for the second. Conditional goal-clobberings that constrain the initial states in which they are applicable also exist.

GCA has yielded large savings in planning time in numerous domains on the totally-ordered non-linear planner PRODIGY (Etzioni 1991). I am currently evaluating its performance in partially-ordered planners. In addition, I am automating the derivation of goal-clobbering judgements, which explicitly state provable properties of the domain, by extending the work done by (Etzioni 1991).

References

Chapman, D. 1987. Planning for conjunctive goals. Artificial Intelligence 32:333-377.
Etzioni, O. 1991. STATIC: A problem-space compiler for PRODIGY. In Proceedings of the Eighth National Conference on Artificial Intelligence, 533-540.
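As a rough illustration of how pre-compiled goal-clobbering judgements might be applied, the sketch below (a minimal, invented encoding, not de Silva's implementation; the names JUDGEMENTS and clobbered_effects are hypothetical) matches a sibling goal against judgement patterns and collects those effects of a candidate operator sequence that any achievement of the sibling goal must falsify. A non-empty result signals that the whole sequence should be demoted past the sibling goal.

```python
# Hypothetical sketch of applying pre-compiled goal-clobbering
# judgements; literals are encoded as tuples like ("on", "A", "B").

# Each judgement: achieving the first pattern necessarily falsifies
# the second (variables start with "?").
JUDGEMENTS = [
    (("on", "?B", "?C"), ("on", "?A", "?B")),
    (("on", "?B", "?C"), ("holding", "?A")),
]

def unify(pattern, literal, bindings):
    """Match a pattern like ('on','?B','?C') against a ground literal."""
    if pattern[0] != literal[0] or len(pattern) != len(literal):
        return None
    b = dict(bindings)
    for p, v in zip(pattern[1:], literal[1:]):
        if p.startswith("?"):
            if b.get(p, v) != v:
                return None
            b[p] = v
        elif p != v:
            return None
    return b

def clobbered_effects(sibling_goal, effects):
    """Effects that any plan achieving sibling_goal must falsify."""
    out = []
    for achieved, falsified in JUDGEMENTS:
        b = unify(achieved, sibling_goal, {})
        if b is None:
            continue
        for e in effects:
            if unify(falsified, e, b) is not None:
                out.append(e)
    return out

# The branch for (on A B) produces (holding A) and (on A B); both are
# clobbered by any achievement of (on B C), so the whole sequence
# should be demoted after (on B C).
seq_effects = [("holding", "A"), ("on", "A", "B")]
print(clobbered_effects(("on", "B", "C"), seq_effects))
# -> [('on', 'A', 'B'), ('holding', 'A')]
```

Because the judgements quantify over all ways of achieving the sibling goal, this check can be made before deciding how (on B C) will actually be achieved, which is the point of GCA.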
Experiments Towards Robotic Learning By Imitation

John Demiris
Department of Artificial Intelligence
University of Edinburgh
80 South Bridge, Edinburgh, Scotland
email: johnde@aifh.ed.ac.uk

Learning by imitation is a form of learning that, despite having been widely studied by ethologists, is not yet fully understood. In fact, there is considerable disagreement even on the terminology used, despite attempts to clarify it (Davis, 1973; Galef, 1988). We believe that by building robots which instantiate the mechanisms hypothesised to underlie these types of behaviour, those mechanisms will be illuminated with explanatory adequacy. Our investigation into imitative learning begins with the construction of an appropriate experimental testbed and the design of a suitable architecture which would enable one robot (the learner) to imitate another (the teacher) which is performing a task, and learn to perform the task while imitating the teacher. Initially, the idea was to have one robot perform a sequence of moves ("dance"), while the other robot would learn the dance by imitating the first one. Such an experiment, however, does not satisfy our need to evaluate whether the robot has actually learned anything. Instead, we chose as a testbed a maze, where the robot learner imitates the actions that the robot teacher performs during its maze negotiation strategy. After the learning phase, we can ask the second robot to navigate itself through a new (different) maze, which provides a more evident demonstration that learning has taken place.

The architecture that was devised has five distinct modules, largely independent from each other:

Maze negotiation: How does the teacher deal with the maze? This can take various forms, from simple dead reckoning to complex maze-following behaviours.

Teacher following: This can also take various forms of increasing complexity.
In the simplest form, the learner keeps a fixed distance behind the teacher, regardless of the teacher's motion; more sophisticated implementations would have the learner follow only when the teacher is moving purposively and ignore "loitering".

Significant event perception: The learner robot recognises when and where the teacher robot is performing an action it deems significant. Ways of doing so include detecting changes in the direction of movement or the orientation of the teacher, or even having the teacher emit a sound ("watch me, this is important").

Environment perception: This depends on the sensing capabilities of the learner; we use both simulated robots, and mobile robots equipped with a vision system, ultrasonic, and infra-red sensors.

Finally, in order to successfully learn, the robot learner has to associate the perceived environment configuration with the appropriate action. Thus, we also need a mechanism for:

Derivation of appropriate action: We make use of the fact that the learner is performing a teacher-following behaviour all the time. The difference in the states of the robot learner just before and after the significant event (for example, a 90-degree change in direction of movement) is associated with the environment configuration perceived. The form of association can range from simple if-then rules to connectionist solutions.

The architecture described has been successfully implemented in simulation and the results so far are encouraging. The robot learner successfully learns to traverse increasingly complicated mazes, indicating that this architecture is promising in dealing with simple learning situations. Currently, we are implementing the architecture on mobile robots. The next step will be to increase the complexity of each module, and continue the research into more complex tasks (for example, imitative learning of obstacle avoidance behaviours).
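The "derivation of appropriate action" module can be sketched as follows. This is a toy illustration under invented assumptions (a four-heading grid world, wall-sensor percepts, and the names derive_action and ImitationLearner), not the authors' implementation: the learner infers the teacher's action from the change in its own state across a significant event and stores it as a simple if-then rule keyed on the perceived configuration.

```python
# Toy sketch of the action-derivation module: associate the perceived
# environment configuration with the state change across an event.

HEADINGS = ["N", "E", "S", "W"]

def derive_action(heading_before, heading_after):
    """Infer the action from the learner's own heading change while
    following the teacher (e.g. a 90-degree turn)."""
    delta = (HEADINGS.index(heading_after) - HEADINGS.index(heading_before)) % 4
    return {0: "forward", 1: "turn-right", 2: "turn-around", 3: "turn-left"}[delta]

class ImitationLearner:
    def __init__(self):
        self.rules = {}  # environment configuration -> action (if-then rules)

    def observe_event(self, percept, before, after):
        # percept: wall readings around the learner, e.g. (left, front, right)
        self.rules[percept] = derive_action(before, after)

    def act(self, percept):
        # After learning, reuse the associations in a new maze.
        return self.rules.get(percept, "forward")

learner = ImitationLearner()
# The teacher turned right where a wall was ahead and the right was open.
learner.observe_event((True, True, False), "N", "E")
print(learner.act((True, True, False)))  # -> turn-right
```

As the abstract notes, the dictionary of if-then rules could be replaced by a connectionist associator without changing the surrounding architecture.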
We believe that, in addition to the contribution that our findings may have in the field of ethology, this research will result in new versatile forms of robotic learning.

The assistance of my supervisor, Dr. Gillian Hayes, is acknowledged and highly appreciated.

(Davis, 1973) J.M. Davis, "Imitation: a review and critique", in Perspectives in Ethology, P.G. Bateson and P.H. Klopfer (eds.), Plenum Press, 1973.
(Galef, 1988) Bennett G. Galef, Jr., "Imitation in animals: history, definition, and interpretation of data from the psychological laboratory", in Social Learning: Psychological and Biological Perspectives, T.R. Zentall and B.G. Galef, Jr. (eds.), Lawrence Erlbaum Associates, 1988.
GKR: A Generic Model of Knowledge Representation

Angélica de Antonio, Jesús Cardeñosa, Loïc Martínez Normand
Laboratorio de Inteligencia Artificial. Facultad de Informática.
Campus de Montegancedo. 28660 Boadilla del Monte. Madrid (Spain)
E-Mail: lia@fi.upm.es

Extended Abstract

In the 1956 Dartmouth College conference two aspects of the definition of AI were emphasized: a) the separation between the knowledge and the procedures using it, and b) the equivalence of the different knowledge representation (KR) formalisms. Taking the latter concept as a starting point, an idea arose in Knowledge Engineering: building generic KRs that would allow any Knowledge Base (KB), developed using any formalism, to be represented and worked with without worrying about the actual formalism used in the construction of the KB. This objective has not yet been reached, although research in this area continues, as shown in the following examples:

- In the area of Validation and Verification (V&V) of Knowledge Based Systems (KBS) we can mention the VALID project (ESPRIT II project number 2148 [CARD-93]). This project was based on the idea of building a generic model of KR called CCR (Common Conceptual Representation), into which the formalism of any system could be translated in order to apply a set of V&V tools to the translated KB.

- In the Knowledge Acquisition area this idea has been used, for example, in the ACKNOWLEDGE project (ESPRIT II project number 2576 [ACK-88]). The main objective of this project was to develop a Knowledge Engineering Workbench integrating several knowledge acquisition methods, techniques and tools. In order to integrate the knowledge acquired by each of these, it was necessary to use a generic KR called CKR (Core KR).

- Finally, this idea has also been used in Automatic Translation.
This idea is the basis of the INTERLINGUA representation (used in project PIVOT [NEC-86]), which is a representation of natural language knowledge independent of the actual language (Spanish, English, etc.) used.

We show in this paper a proposal for a new generic model of KR called GKR (Generic Knowledge Representation). This model has been developed as a result of the analysis of the models described in the preceding examples. The study of the successes and shortcomings of these models helped us to define GKR with several properties that improve its representation ability:

- We have divided the representation of a KBS into three parts: 1) a static part that represents the knowledge that the system has about its problem domain (that is, the KB), 2) a dynamic part that represents, using traces of the execution, how the KBS works when faced with a problem (or test case), and 3) some information referring to design particularities of the KBS. This third part represents why the static part works as shown by the dynamic part; it represents control information.

- We have chosen frames [MINS-75] and rules [MATE-88] as KR formalisms for the static part. These formalisms are defined in GKR with characteristics that were not implemented in the other models, such as: representation of user-defined facets, explicit representation of inheritance rules, representation of non-hierarchical relations between frames, and the representation of conditions and actions allowing rules to access or modify any part of the KB.

The above properties make it possible to represent in GKR things that could not be represented in the other models. The definition of the GKR design is composed of a set of structures that cannot be described in this abstract. Although GKR can be used in other AI areas, it is being used in the definition of a Validation environment based on this representation model.
This environment will apply several V&V tools to KBS represented in GKR, and it is being developed by the Validation Group of the AI Laboratory of the Universidad Politécnica de Madrid.

References

[ACK-88] ACKnowledge Project. "ACKnowledge Technical Annex." 1988.
[CARD-93] Cardeñosa, J. and Juristo, N. "General Overview of the Valid Project." Proceedings of the European Symposium on the Validation and Verification of KBS, EUROVAV'93. Palma de Mallorca. Spain. 1993.
[MATE-88] Maté, J.L. and Pazos, J. "Ingeniería del Conocimiento: diseño y construcción de sistemas expertos." Ed. SEPA. 1988.
[MINS-75] Minsky, M. "A Framework for Representing Knowledge," in "The Psychology of Computer Vision," P. H. Winston (ed.), McGraw-Hill. 1975.
[NEC-86] NEC. "Overview of PIVOT." C&C Systems Research Laboratory. NEC Corporation. Japan. 1986.
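To make the frame-side properties concrete, the sketch below is an illustrative reading of them (the abstract does not detail the actual GKR structures; the Frame class and slot/facet layout are invented): a frame with a user-defined facet, a non-hierarchical relation, an explicit inheritance lookup, and a rule whose action modifies the KB.

```python
# Illustrative (not the actual GKR) frame structure showing
# user-defined facets, non-hierarchical relations, and rule actions.

class Frame:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)   # hierarchical (is-a) links
        self.relations = {}            # non-hierarchical links
        self.slots = {}                # slot -> {facet: value}

    def set_slot(self, slot, value, **facets):
        # "value" is one facet among possibly user-defined ones.
        self.slots[slot] = {"value": value, **facets}

    def get(self, slot, facet="value"):
        # Explicit inheritance rule: look up the is-a chain.
        if slot in self.slots and facet in self.slots[slot]:
            return self.slots[slot][facet]
        for p in self.parents:
            v = p.get(slot, facet)
            if v is not None:
                return v
        return None

vehicle = Frame("vehicle")
vehicle.set_slot("wheels", 4, certainty=0.9)   # user-defined facet
car = Frame("car", parents=[vehicle])
car.relations["made-by"] = "factory"           # non-hierarchical relation

# A rule whose condition reads the KB and whose action modifies it:
if car.get("wheels") == 4:
    car.set_slot("category", "standard")

print(car.get("wheels"), car.get("wheels", "certainty"), car.get("category"))
```

The point of the example is only the representational surface: any generic model must be able to carry facets, relations and rule actions like these through a translation from an arbitrary source formalism.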
Local Search in the Coordination of Intelligent Agents*

Daniel E. Damouth and Edmund H. Durfee
Artificial Intelligence Laboratory
1101 Beal Avenue
The University of Michigan
Ann Arbor, MI 48109-2110
{damouth, durfee}@engin.umich.edu

In a world inhabited by numerous agents pursuing distinct goals, conflicts are inevitable. To succeed in the environment, an agent must explicitly reason about the behaviors of other agents as well as itself, and be prepared to find new behaviors that are more coordinated. Because traditional AI has had great success viewing problem solving as a search in a problem space, we have chosen to represent the process of coordination as a distributed search (Durfee et al. 1994). In searching through a joint behavior space for coherent coordination patterns, an agent must observe three kinds of constraints: its abilities, its goals, and the activities of other agents in the environment.

The nature of the third constraint depends on the abilities and goals of the other agents in the environment. Knowledge of other agents' planned actions is often sufficient for conflict avoidance; however, the ability to reason about alternative activities not only for oneself but for other agents requires deeper modeling of them. Our concept of the behavior as a modeling structure contains not only spatial and temporal information about agents' actions but also represents their goals and capabilities. With this modeling information an agent can reason from other agents' goals and capabilities to arrive at likely alternative behaviors for them and itself. We call this local search. Depending on the distribution of knowledge among the agents, local search might occur at any number of agents. Our approach complements the distributed search process of (Durfee & Montgomery 1991), which emphasized the efficient propagation of information among agents rather than the local search of an individual agent.
We are investigating local search in the producer-consumer-transporter (PCT) domain, by implementing a search for coordination patterns for solving package delivery problems. In a PCT problem, "producers" create objects that must be delivered by "transporters" to "consumer" agents, who cause the objects to disappear. Representing coordination schemes as a hierarchy of behaviors, we have been able to generate many different agent organizations by decomposing according to agent goals and agent capabilities, respectively. Using the former decomposition we arrive at an analog to "product hierarchies," in which agents are grouped according to the products they help make. Using the latter decomposition gives rise to "functional hierarchies," in which agents are grouped according to their capabilities. Our use of taxonomic knowledge of capabilities and goal-subgoal relationships also allows us to represent hybrid organizations that incorporate features of both kinds of hierarchies. We have identified instances in which a hybrid organization outperforms any "pure" form. Our ongoing analysis is focusing on the evaluation of organizational forms in terms of coordination costs (the amount of run-time communicating and thinking), production costs (overall throughput), and vulnerability costs (the effect on performance if some agent breaks down). We are working to characterize these factors with the aim of automating the generation and search of a behavior space for this coordination task. In a broader context, we hope to shed light on issues such as the effect of different task decompositions on the complexity of local processing, and the effects that different coordination costs have on effective agent organization.

*Supported, in part, by NSF grant IRI-9158473.

References

Durfee, E. H., and Montgomery, T. A. 1991. Coordination as distributed search. IEEE Transactions on Systems, Man, and Cybernetics 21(6).
Durfee, E.; Damouth, D.; Huber, M.; Montgomery, T.; and Sen, S. 1994. The search for coordination. In Decentralized AI. Springer-Verlag, Lecture Notes in Artificial Intelligence.
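The search through a joint behavior space described above can be sketched with a small greedy repair loop. This is a toy model under invented assumptions (behaviors as sets of resource-time commitments, a conflict count as the objective, and the names conflicts and local_search), not the authors' PCT implementation: each agent reconsiders its own alternatives using its model of the others' current behaviors and keeps any swap that reduces conflicts.

```python
# Toy sketch of local search over a joint behavior space.
import itertools

def conflicts(joint):
    """Count pairwise conflicts: here, two agents claiming the same
    (resource, time) commitment."""
    n = 0
    for a, b in itertools.combinations(joint.values(), 2):
        n += len(set(a) & set(b))
    return n

def local_search(candidates, joint):
    """Greedy repair: each agent reconsiders its own behavior against
    models of the others' current behaviors."""
    improved = True
    while improved:
        improved = False
        for agent, options in candidates.items():
            best = min(options, key=lambda b: conflicts({**joint, agent: b}))
            if conflicts({**joint, agent: best}) < conflicts(joint):
                joint[agent] = best
                improved = True
    return joint

# Behaviors as sets of (resource, time) commitments.
candidates = {
    "producer":    [{("dock", 1)}, {("dock", 2)}],
    "transporter": [{("dock", 1), ("road", 2)}, {("dock", 3), ("road", 4)}],
}
joint = {a: opts[0] for a, opts in candidates.items()}
result = local_search(candidates, joint)
print(conflicts(result))  # -> 0
```

In the paper's terms, the options list for each agent would be generated from its modeled goals and capabilities rather than enumerated by hand, and the repair could run at any number of agents depending on how knowledge is distributed.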
Time Units and Calendars

Diana Cukierman and James Delgrande
School of Computing Science
Simon Fraser University
Burnaby, BC, Canada V5A 1S6
{diana,jim}@cs.sfu.ca

Abstract

We are investigating a formal representation of time units and calendars, as restricted temporal entities for reasoning about activities. We examine characteristics of time units, and provide a categorization of the hierarchical relations among them. Hence we define an abstract hierarchical unit structure (a calendar structure) that expresses specific relations and properties among the units that compose it. Calendar structures subsume systems that can be based on discrete units together with a repetitive containment relation.

The motivation for this work is to (ultimately) be able to reason about schedulable, repeated activities, specified using calendars. Examples of such activities include going to a specific class every Tuesday and Thursday during a semester, attending a seminar every first day of a month, and going to swim every other day. Defining a precise representation and developing or adapting known efficient algorithms for this domain would provide a valuable framework for scheduling systems.

In a representation scheme for such activities, one would ideally be able to determine consistency among several repeated activities, find a minimal set of potential ways repeated activities may interact, convert between time units, etc. Such a structure would be a restriction of, for example, the general algebra of intervals (Allen 1983); hence one might hope that the resulting restricted structure would have good computational properties. Work has been done on repeated activities, for example (Poesio & Brachman 1991), where a main concern is the implementation of variants of constraint propagation algorithms to detect overlapping repeated activities. We search for a different (more general and formalized) representation of the temporal entities.
We explore further the date concept, used in their work as a reference interval, building a structure that formalizes dates in calendars.

Human, schedulable activities are based on conventional systems called calendars. Examples of calendars include the Gregorian calendar as well as university calendars and business calendars, where the latter are defined in terms of the Gregorian. Calendars are repetitive, cyclic temporal objects. We define an abstract structure that formalizes calendars as composed of time units which are related by a decomposition relation: a containment relation involving repetition and other specific characteristics. Time units decompose into contiguous sequences of other time units in various ways. For example, the expressions Cons(year, month, 12) and Alig(year, month) indicate that the time unit year decomposes into month in a constant and aligned way, with a repetition factor of 12.

We have studied transitivity properties of constancy and alignment in the decomposition relation. The decomposition relation is defined as a partial order on the set of time units. A calendar structure is a hierarchical structure based on the decomposition of time units and expresses relationships that hold between them in several calendars.

At this point in the research, the formalism is under systematic study, particularly regarding its mathematical properties. Representations of specific temporal objects based on the formalism are also under study. Still to be addressed are considerations about algorithms that would best fit this formalism, so that we may obtain efficient inferences when reasoning about single and repeated activities. Algorithms developed for temporal constraint satisfaction problems, or variations, will be considered in this matter.
We also believe that the hierarchy defined represents a generic approach, appropriate for representing any measurement system based on discrete units related by a repetitive containment relation, such as the Metric or Imperial measurement systems.

References

Allen, J. F. 1983. Maintaining knowledge about temporal intervals. Communications of the ACM 26(11):832-843.
Poesio, M., and Brachman, R. J. 1991. Metric constraints for maintaining appointments: Dates and repeated activities. In Proc. of AAAI-91, 253-259.
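A constant, aligned decomposition relation like Cons(year, month, 12) supports unit conversion by composing factors transitively. The sketch below is an invented, minimal data structure (the CalendarStructure class and its methods are not from the paper) illustrating that idea; note that it deliberately returns no answer for year-to-day, since month does not decompose into day with a constant factor.

```python
# Minimal sketch of a calendar structure: time units related by a
# decomposition relation with a constant repetition factor.

class CalendarStructure:
    def __init__(self):
        self.decomp = {}  # (bigger, smaller) -> repetition factor

    def add(self, bigger, smaller, factor):
        # Records a constant, aligned decomposition, e.g. Cons(year, month, 12).
        self.decomp[(bigger, smaller)] = factor

    def factor(self, bigger, smaller):
        """Convert between units by composing constant decompositions
        transitively (constancy composes along the partial order)."""
        if bigger == smaller:
            return 1
        if (bigger, smaller) in self.decomp:
            return self.decomp[(bigger, smaller)]
        for (b, s), f in self.decomp.items():
            if b == bigger:
                rest = self.factor(s, smaller)
                if rest is not None:
                    return f * rest
        return None  # not comparable via constant decompositions

gregorian = CalendarStructure()
gregorian.add("year", "month", 12)
gregorian.add("week", "day", 7)
gregorian.add("day", "hour", 24)

print(gregorian.factor("year", "month"))  # -> 12
print(gregorian.factor("week", "hour"))   # -> 168
print(gregorian.factor("year", "day"))    # -> None: month->day is not constant
```

Handling the non-constant cases (months of 28 to 31 days) is exactly where the paper's finer classification of decomposition relations, beyond the constant and aligned case, would be needed.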
Agent Amplified Communication

Henry Kautz, Bart Selman, and Al Milewski
AT&T Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
{kautz, selman}@research.att.com
http://www.research.att.com/~{kautz, selman}

Abstract

We propose an agent-based framework for assisting and simplifying person-to-person communication for information gathering tasks. As an example, we focus on locating experts for any specified topic. In our approach, the informal person-to-person networks that exist within an organization are used to "referral chain" requests for expertise. User-agents help automate this process. The agents generate referrals by analyzing records of email communication patterns. Simulation results show that the higher responsiveness of an agent-based system can be effectively traded for the higher accuracy of a completely manual approach. Furthermore, preliminary experience with a group of users on a prototype system has shown that useful automatic referrals can be found in practice. Our experience with actual users has also shown that privacy concerns are central to the successful deployment of personal agents: an advanced agent-based system will therefore need to reason about issues involving trust and authority.

Introduction

There are basically two ways of finding something out by using a computer: "ask a program" and "ask a person". The first covers all ways of accessing information stored online, including the use of the World Wide Web, traditional database programs, file indexing and retrieval programs such as glimpse (Manber and Wu 1994) or netnews readers, and even more simply, the use of tools such as ftp.

The second, "ask a person", covers ways that a computer can be used as a communication medium between people. Currently the prime examples are electronic mail, including both personal e-mail and mailing lists, newsgroups, and chat rooms.
The growing integration of computers and telephones allows us to also view telephony as a computer-based communication medium. Simple examples of such integration are telephone address book programs that run on a personal or pocket computer and dial numbers for you; more sophisticated is the explosion in the use of computer-based FAX. Today it is hard to even buy a modem that does not have FAX capability, and by far the heaviest use of FAX is for person-to-person communication.

There are inherent problems with both general approaches to obtaining information. It has often been noted that as the world of online information sources expands, the "ask a program" approach suffers from the problem of knowing where to look. Of course, WWW browsers, such as Mosaic and Netscape, overcome many of the technical problems in accessing a wide variety of information on the Internet, by automatically handling the low-level details of different communication protocols. Such browsers make it easy and entertaining to browse through an enormous hypermedia space. However, finding an answer to a specific question using a browser tends to be slow and frustrating, despite the various indexing services. One response to this problem has been the creation of various internet indexing services that incorporate more sophisticated knowledge about the location of information (Etzioni and Weld 1994; Kirk et al. 1995; Knoblock et al. 1994). However, a deeper problem remains, one that no solution based solely on building a better search-engine can address. This is the fact that much valuable information is simply not online, but only exists in people's heads. Furthermore, there are economic, social, and political reasons that much valuable information will never be made publicly accessible on the Internet or any other network. Indeed, part of the value of a piece of information resides in the degree to which it is not easily accessible.
This is perhaps most obvious in relation to proprietary corporate information. For instance, if I am involved in trading General Motors stock, I may be vitally interested in knowing the specifications of the cars to be introduced next year. That such information exists is certain - indeed, factories are already being set up to produce the vehicles - but I am certainly not going to be able to find this information in any database to which I have access.

For a more mundane example, suppose I need to have my house painted, and want to know if Acme Painters Inc. does good work. It is highly unlikely that I am going to be able to access a database of "reliable house painters". Conceivably the Better Business Bureau might offer a service that could tell me whether many people have actually taken Acme to court, but that would hardly be all that I would want to know.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Any recommendations offered by such a public service would have to be either advertisements or sufficiently innocuous to avoid legal entanglements. An even more telling case would be where I am trying to decide whether or not to hire a certain professor John Smith to do some consulting for me, and I want to know whether or not Smith knows what he is talking about. It is certainly not the case that I will be able to access a database of academics, with entries such as "solid, careful thinker" or "intellectually dishonest".

Thus, many important kinds of information can only be obtained by what we called "ask a person". As with the "ask a program" paradigm, there is an initial technical problem of establishing a communication link that is yielding to current technology.
Bridges exist between most major e-mail systems, the Internet domain naming standard is becoming universally adopted, and eventually global directory services (very roughly approximated by the current "netfind" program) will make it easy to determine the electronic address for anyone in the world. Looking further into the future we see wireless personal communicators replacing stationary telephone sets and computer terminals, so that all of electronic mail, voice mail, and ordinary telephone calls can really be routed directly to a person, rather than merely to a location that the person is likely to frequent.

But in this scenario the problem of determining how to ask the right person remains. Quite often someone I know (or someone with whom I have a potential professional or social relationship) has the information that I need, but I am not sure who they are. At first e-mail seems to offer a solution to this problem: just mail the question to everyone who I think might know the answer. It is easy to add recipients to a message, and easier still to send mail to an alias that expands to a large number of potential candidates. But in this context the phrase "potential candidate" might better be replaced by "potential victim", because such wide-area broadcasting of electronic mail quickly becomes obnoxious. In getting one person to help me, I annoy hundreds or even thousands of others. For example, using one alias an employee at AT&T can easily send mail to everyone who works at the company - that is, more than 60,000 people (and from time to time someone does just that, to his or her everlasting regret). If the employee persists in this kind of activity, people will soon ignore his requests. Ultimately mail he sends may be electronically filtered out by his intended recipients, or he may even face disciplinary action.

What about netnews?
Posting to netnews does eliminate the annoyance factor, because only people who actively want to read the messages do so. Unfortunately the people I truly wish to reach, those who have the valuable information that I need, are the least likely to read netnews. Many would agree that as access to and the volume of netnews has increased over the past years, the "quality level" of most newsgroups (never that high to begin with) has declined. Informed, busy people simply drop out of the electronic community, so that most groups become forums for ill-informed opinions, unanswered pleas, downright misinformation, and various exhibitions of social pathology.

The current tools for "ask a program" and "ask a person" are largely disconnected at the top level, despite the fact that they rely on a shared electronic infrastructure. We believe that systems that integrate the two paradigms can provide solutions to the problems inherent in each. We are designing and building systems that use software agents to assist and simplify person-to-person communication for information gathering tasks; we call this approach agent amplified communication.

Expertise Location

In a previous paper (Kautz et al. 1994), we described our view of software agents and the particular agent platform we had built. We take agents to be programs that assist users in routine, communication-intensive activities, such as locating information, scheduling meetings, coordinating paper reviews, and so on (Dent et al. 1992; Maes 1993; Maes and Kozierok 1993; Coen 1994). A user delegates a task to an agent, which can then engage in many transactions with other agents and other people. Delegation thus reduces the total communication load on the user.
As a philosophical point, we believe that agent systems should blend transparently into normal work environments and existing infrastructure, and take advantage of (rather than trying to replace) the networks of formal and informal systems and social relations that exist in an organization.

The system we built used two basic classes of agents: task-specific agents, called "taskbots", and personal agents for each user, called "userbots". Our initial task-specific agent was used for scheduling meetings with visitors to our lab, and was thus named the "visitorbot". A host would merely need to provide the visitorbot with a talk abstract and the times that a visitor was available for meetings, and the visitorbot would carry out all the steps necessary to set up a series of meetings (i.e., sending out a talk announcement, obtaining preferred meeting times from interested parties, and generating and distributing a schedule for the day). The userbots provide a graphical, customizable interface to the taskbots, as well as a repository for information that is private to the user. For example, a user could tell his userbot his preferred meeting times, and the userbot would then transmit this information to the visitorbot.

We are now designing a new generation of taskbots and userbots for information-gathering tasks of the kind described above. The specific task we are working on is expertise location. In any large organization, determining who is an expert on a particular topic is a crucial problem. The need for expertise location ranges from informal situations, such as where I might need to find an expert on LaTeX macros to help fix a typesetting problem in a paper I'm writing, to formal construction of project teams to meet business needs.
The range of expertise specifications may run from the generic ("who knows about logic programming?") to the highly specific ("who knows how to modify the interrupt vector handling microcode in the reboot module of the XZY999 processor?").

Online directories of expertise rarely exist, and when they do, the information that they contain is certain to be out of date and incomplete. In fact, expertise needs are potentially so specific that it is simply impossible to determine a comprehensive set of categories in advance. Expertise location is therefore generally an "ask a person" task, with all the problems associated with that approach outlined above.

Let us consider for a moment how expertise location actually works when it is successful. In a typical case I contact a small set of colleagues whom I think might be familiar with the topic. Because each person knows me personally, they are quite likely to respond. Usually none of them is exactly the person I want; however, they can refer me to someone they know who might be. After following a chain of referrals a few layers deep I finally find the person I want.

Note that in this successful scenario I needed to walk a fine line between contacting too few people (and thus not finding the true expert) and contacting too many (and eventually making a pest of myself). Even in the end I might wonder if I might not have found an even better expert if only I could have cast the net a bit wider. I may have had difficulty bringing to mind those people I do know personally who have some expertise in the desired area. If only all of my colleagues employed endlessly patient assistants that I could have contacted initially, who would have known something about their bosses' areas of expertise, and who could have answered my initial queries without disturbing everyone...

Now let us consider how software agents could be used to augment the expert location process.
Each person's userbot would create a model of that person's areas of interest. This model would be created automatically by using information retrieval techniques (such as inverted indexes) on all the documents created and received by the user. The user model could be quite large and detailed, and would be private to the user, that is, not stored in a central database. The userbot would also create a much more coarse-grained model of my contacts by applying similar techniques to all the electronic mail that I exchange with each person.

When I have an expertise location need, I present the problem to my userbot as an unstructured text description. Again using IR techniques, my userbot selects a medium-to-large set of my contacts to whom the query may be relevant. It then broadcasts the query, not to the people themselves, but to their userbots. Upon receipt of the question, each userbot checks whether its owner's user model does indeed provide a good match. If there is a good match, the userbot presents my request to its owner. If the owner's model does not match, but the model of one of the owner's contacts does, then the userbot can ask the owner if it can provide a referral. Finally, if there is no match at all, the query is silently logged and deleted. A great deal of flexibility can be built into each userbot, depending upon its owner's preferences. For example, I might allow automatic referrals to be given to requests that come from my closest colleagues.

This system provides several benefits over the netnews and "send personal e-mail to everyone" approaches described above. First, it is largely passive on the part of the recipients: they do not need to be reading netnews and wading through dozens of articles. Second, queries are broadcast in a focused manner to those who are at least somewhat likely to find them of interest.
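The query-handling policy just described (present to the owner on a good match, otherwise offer a referral, otherwise stay silent) can be sketched as follows. This is an illustrative sketch, not the deployed system; the function names, the keyword-overlap scoring, and the 0.5 threshold are all assumptions.

```python
# Illustrative sketch of how a userbot might dispatch an incoming
# expertise query: answer it, refer it, or silently drop it.

def match_score(query_terms, model):
    """Fraction of query terms present in a keyword model."""
    terms = set(query_terms)
    return len(terms & set(model)) / len(terms) if terms else 0.0

def handle_query(query_terms, owner_model, contact_models,
                 from_close_colleague, threshold=0.5):
    """Return an (action, referrals) pair for an incoming query.

    owner_model: keywords describing the owner's expertise.
    contact_models: dict mapping contact name -> keyword list.
    """
    if match_score(query_terms, owner_model) >= threshold:
        return ("present_to_owner", None)        # owner is likely an expert
    referrals = [name for name, model in contact_models.items()
                 if match_score(query_terms, model) >= threshold]
    if referrals:
        # automatic referral only for close colleagues; otherwise ask first
        action = "auto_refer" if from_close_colleague else "ask_owner_to_refer"
        return (action, referrals)
    return ("log_and_delete", None)              # no match: stay silent

action, refs = handle_query(
    ["latex", "macros"],
    owner_model=["unix", "networking"],
    contact_models={"alice": ["latex", "typesetting"], "bob": ["databases"]},
    from_close_colleague=True)
```

Here the query matches a contact rather than the owner, and since the requester is a close colleague the referral to "alice" would go back automatically.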
Third, users are shielded from seeing a large number of completely irrelevant messages; each userbot may process dozens of messages for every one the user sees. Finally, messages that a user does see do not come from "out of the blue", but rather are tagged with a chain of referrals from colleague to colleague, thus increasing the likelihood of response.[1]

One reason to believe that the system just described would be useful in practice is that it basically models the manner in which expertise location actually works now, while allowing more people to be contacted without causing disruption and disturbance (Granovetter 1973; Grosser 1990; Kraut 1990; Krackhardt and Hanson 1993). In fact, some sociologists and economists have argued that more accurate economic models of society can be developed by viewing communities as networks of individuals, where the key ingredient for economic progress is the identification of individuals who can link people with complementary needs and resources (Burt 1995). For a description of another Internet-based system currently being built that attempts to take advantage of such personal ties between colleagues, see Foner (1996).

Implementation of an Expertise Locator

An initial version of the expertise locator has been implemented by extending the user-agents developed in the "visitorbot" project (Kautz et al. 1994). Each user-agent has access to the following kinds of database files, each of which is specific to and owned by the individual user. It is important to note that we do not assume that these files can be directly accessed by anyone other than the user and the user's agent.

- A user-contacts file containing a list of some of the user's colleagues and, for each, a list of keywords describing their areas of expertise.
- An indexed-email file that stores, for each word that appears in any email message, a list of the messages that contain that word. It also contains the names of those who sent the messages. The index is built from email records collected over a substantial period of time. This kind of index, called an "inverted index", can be generated using standard information retrieval algorithms and indexing programs (Salton 1989; Manber and Wu 1994).

- A user-profile file containing a list of keywords that describe some of the user's own areas of expertise.

- A file with names of "close colleagues".

[1] While we are assuming that friendship and professional ties are enough to encourage responses to requests, the "Six Degrees of Separation" project led by Merrick Furst at CMU has proposed the use of monetary rewards. Their prototype Internet Information Exchange (INEX) rewards users who provide helpful responses to posted questions with vouchers for free soda.

We use a Netscape browser as the user interface. The user begins the process by accessing the home page of their user-agent (password protected). He or she can then formulate a query by simply giving one or more relevant keywords. The user-agent then scans the user-contacts file and the indexed-email files to look for names of people who might be able to help with this request. The user is presented with a list of such names, ordered according to how frequently the keywords were mentioned in the email correspondence. By simply clicking on some of the names on the list, the user can initiate the referral chaining process. For the selected names, the query is passed on to the corresponding user agents. As we discussed earlier in general terms, when a user agent receives a request for expertise, it tries to match the request against its owner's data files.
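A minimal version of the indexed-email file and the frequency-based ranking described above might look like the sketch below; the message data and helper names are hypothetical, not taken from the implemented system.

```python
# Minimal sketch of the inverted index: for each word, record which
# messages contain it and who sent them, then rank contacts by how often
# the query keywords appear in correspondence with them.
from collections import defaultdict

def build_index(messages):
    """messages: list of (sender, text) pairs -> word -> [(msg_id, sender)]."""
    index = defaultdict(list)
    for msg_id, (sender, text) in enumerate(messages):
        for word in set(text.lower().split()):
            index[word].append((msg_id, sender))
    return index

def rank_contacts(index, keywords):
    """Order senders by how frequently the keywords occur in their messages."""
    counts = defaultdict(int)
    for kw in keywords:
        for _msg_id, sender in index.get(kw.lower(), []):
            counts[sender] += 1
    return sorted(counts, key=counts.get, reverse=True)

mail = [("alice", "latex macros for tables"),
        ("bob", "database schema question"),
        ("alice", "more latex tricks")]
index = build_index(mail)
ranked = rank_contacts(index, ["latex"])   # alice appears first
```

A real index would of course add stemming, stop-word removal, and weighting, as the cited IR tools do.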
If there is a good match with the user-profile information, the request is passed on to its owner directly, since there is a good chance that he or she can answer the query. If there is no good match with the user-profile, the user agent generates a list of possible referrals using the email records and the user-contacts file. This list is passed back automatically to the requesting user agent if the request originates from someone on the list of close colleagues. If the originator of the request is not a close colleague, the user agent will contact its owner before passing back any information. Finally, if no good match is found with any of the stored records, the query is simply ignored. The user agent of the person who originally requested the information collects all possible referrals, and can continue the process by contacting some of the suggested referrals.

We are currently experimenting with the system using a group of volunteers from our center. Our preliminary experiments show that we can often follow a referral chain leading to a local expert in our group, starting from any arbitrary person in the test group (about 25 people). This indicates to us that email records do provide a surprisingly rich source of information concerning human expertise, and that simple keyword matches on those records can be used to locate the right person. Our experience with actual users has also shown that privacy concerns are central to the successful deployment of the system. It appears that many of the privacy concerns can be addressed by giving the users final control over what kinds of information are passed around. However, it appears that for more complex scenarios, an agent-based system will probably need to be able to reason about issues involving trust and authority. Finally, the automatically generated referrals are, in general, less accurate than those that people can make themselves. In the next section, we consider the effect of this loss of accuracy on the referral chaining process.

  R     A     av. final  av. number  success
              dist.      messages    rate
  1.00  1.00  0.0        12.1        100.0
  1.00  0.70  0.0        42.6        99.7
  1.00  0.50  0.0        109.8       99.1
  1.00  0.30  0.1        377.2       94.6
  1.00  0.10  0.6        1225.2      52.5
  1.00  0.00  1.2        1974.6      8.5
  0.90  0.90  0.1        21.3        95.2
  0.90  0.70  0.2        43.8        92.0
  0.90  0.50  0.3        92.1        83.1
  0.90  0.30  0.7        204.9       53.2
  0.90  0.10  1.5        316.3       13.3
  0.70  0.90  0.7        16.0        60.3
  0.70  0.70  1.1        22.3        39.0
  0.70  0.50  1.7        27.5        19.6
  0.70  0.30  2.2        28.3        6.8
  0.70  0.10  2.5        31.9        1.2
  0.50  0.90  1.4        9.5         26.6
  0.50  0.70  1.9        10.5        11.1
  0.50  0.50  2.3        10.8        5.2
  0.50  0.30  2.6        10.7        3.1
  0.50  0.10  2.9        11.1        0.2
  0.10  1.00  2.2        3.3         1.3
  0.10  0.70  2.7        3.5         0.3
  0.10  0.50  2.9        3.5         1.1
  0.10  0.10  3.2        3.5         0.1

Table 1: Simulation results for a 100,000 node network.

Measuring Effectiveness of Amplified Communication

A key question concerning agent-amplified communication is whether it will actually improve upon the traditional person-to-person referral chaining process. It is clear that agents can handle larger numbers of messages, which may increase the chance of finding the desired information. On the other hand, as noted above, referrals suggested by user agents will generally be less accurate than those provided by people. This decreases the effectiveness of the referral process, and could in fact render the process ineffective. In order to study this issue, we ran a series of simulations of agent-amplified communication.

In our simulation, we consider a communication network consisting of a graph, where each node represents a person with his or her user agent, and each link models a personal contact. In the referral chaining process, messages are sent along the links from node to node. A subset of nodes are designated "expert nodes," which means that the sought-after information resides at those nodes.
In each simulation cycle, we randomly designate a node to be the "requester", and the referral chaining process starts from that node. The goal is to find a series of links that leads from the requester node to an expert node.

  R     A     av. final  av. number  success
              dist.      messages    rate
  0.90  0.70  0.1        90.1        (illegible)
  0.90  0.50  0.1        200.4       96.4
  0.90  0.40  0.1        307.4       93.3
  0.90  0.30  0.2        558.3       86.7
  0.90  0.20  0.4        955.8       68.9
  0.90  0.10  0.7        1466.4      43.9
  0.70  0.90  0.4        32.4        74.5
  0.70  0.70  0.7        52.4        57.1
  0.70  0.50  1.2        72.2        32.3
  0.70  0.30  1.7        87.0        14.1
  0.50  0.90  1.2        17.6        29.4
  0.50  0.70  1.7        20.0        16.8
  0.50  0.50  2.1        21.4        6.6

Table 2: Simulation results for the network from Table 1, but with B = 4.

First, we model the fact that the further removed a node is from an expert node, the less accurate the referral will be. Why we expect the accuracy of referral to decrease is best illustrated with an example. Assume that one of your colleagues has the requested expertise. In that case, there is a good chance that you know this and can provide the correct referral. However, if you don't know the expert personally, but know of someone who knows him or her, then you may still be able to refer to the correct person (in this case, your colleague who knows the expert), but you'll probably be less likely to give the correct referral than if you had known the expert personally. In general, the more steps away from the expert, the less likely it is that you will provide a referral in the right direction.

To model this effect, we introduce an accuracy-of-referral factor A (a number between 0.0 and 1.0). If a node is d steps away from the nearest expert node, it will refer in the direction of that node with probability p(A,d) = A^(αd), where α is a fixed scaling factor. With probability 1 - p(A,d), the request will be referred to a random neighbor (i.e., an inaccurate referral).

Similarly, we model the fact that the further removed a node is from the requester node, the less likely the node will respond.
This aspect is modeled using a responsiveness factor R (again between 0.0 and 1.0). If a node is d links removed from the originator of the request, its probability of responding will be p(R,d) = R^(βd), where β is a scaling factor. Finally, we use a branching factor B to represent the average number of neighbors that will be contacted or referred to by a node during the referral chaining process.

Table 1 gives the results of our first simulation experiment. The network consists of a randomly generated graph with 100,000 nodes. Each node is linked on average to 20 other nodes. Five randomly chosen nodes were marked as expert nodes. The average branching factor B during referral chaining is fixed at 3. The scaling factors α and β are fixed at 0.5. The table shows results for a range of values of R and A. The values are based on an average over 1,000 information requests for each parameter setting.

  R     A     av. final  av. number  success
              dist.      messages    rate
  0.90  0.90  0.1        10.1        95.8
  0.90  0.50  0.1        39.6        93.4
  0.90  0.30  0.2        99.2        83.3
  0.90  0.10  0.8        246.9       37.8
  0.70  0.90  0.3        9.7         78.4
  0.70  0.50  0.9        20.3        42.3
  0.70  0.30  1.4        26.3        20.1
  0.70  0.10  1.9        30.7        4.6
  0.50  0.90  0.8        7.2         45.7
  0.50  0.70  1.1        9.0         30.4
  0.50  0.50  1.5        9.7         17.2
  0.50  0.30  1.9        10.7        8.8
  0.50  0.10  2.2        10.9        1.3
  0.30  0.90  1.2        5.0         21.9
  0.30  0.70  1.6        5.6         11.5
  0.30  0.50  1.8        5.8         7.9
  0.30  0.10  2.4        5.9         0.4

Table 3: Simulation results for a network as in Table 1, but with an average of 50 neighbors per node.

  R     A     av. final  av. number  success
              dist.      messages    rate
  0.90  0.90  0.1        30.1        94.7
  0.90  0.70  0.2        77.3        89.1
  0.90  0.50  0.7        170.8       60.7
  0.90  0.30  1.4        281.9       22.9
  0.90  0.10  2.2        329.3       3.2
  0.70  0.90  1.0        21.5        46.7
  0.70  0.70  1.6        26.1        23.4
  0.70  0.50  2.3        29.6        7.5
  0.70  0.30  2.8        31.4        1.8
  0.70  0.10  3.1        33.3        0.3
  0.50  0.90  1.9        9.9         11.9
  0.50  0.70  2.5        10.8        5.5
  0.50  0.50  3.0        10.6        1.8
  0.50  0.30  3.3        10.9        0.2
  0.30  0.90  2.5        6.0         2.9

Table 4: Simulation results for the network from Table 1, but with only 1 expert node in the net.
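Under the stated model, with p(A,d) = A^(αd) for referral accuracy and p(R,d) = R^(βd) for responsiveness, a single referral-chaining episode can be sketched as below. This is a simplified reading of the model, not the authors' simulator; the breadth-first bookkeeping, the visited-set handling, and the tie-breaking are our own assumptions.

```python
# Simplified sketch of one referral-chaining episode on a graph, under the
# responsiveness/accuracy decay model described above.
import random

def simulate_request(neighbors, dist_to_expert, experts, start,
                     R=0.9, A=0.7, B=3, alpha=0.5, beta=0.5, max_depth=10):
    """Return True if a referral chain starting at `start` reaches an expert."""
    frontier = [(start, 0)]            # (node, links from the requester)
    seen = {start}
    while frontier:
        node, d = frontier.pop(0)
        if node in experts:
            return True
        if d >= max_depth or random.random() > R ** (beta * d):
            continue                    # node does not respond; chain dies here
        de = dist_to_expert[node]       # distance to the nearest expert
        for _ in range(B):
            if random.random() < A ** (alpha * de):
                # accurate referral: step toward the nearest expert
                nxt = min(neighbors[node], key=dist_to_expert.get)
            else:
                nxt = random.choice(neighbors[node])   # inaccurate referral
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return False
```

On a line graph with the expert at one end, A = R = 1.0 makes the chain walk straight to the expert, while low values make it die out early, mirroring the trends in the tables.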
The final column in Table 1 gives the success rate of the referral chaining process, i.e., the chance that an expert node is found for a given request. We also included the average number of messages sent when processing a request. Note that because of the decay in responsiveness, at some distance from the requester node the referral process will die out by itself. So, the processing of a request ends when either the chain dies out or an expert node is reached. The third column shows how close (on average) a request came to an expert node. The distance is in terms of number of links. (This average includes the successful referral runs, where the final distance is zero.) On average, in the network under consideration here, the length of the shortest chain between a random node and the nearest expert node is about four links.

Fig. 1: Success rate as a function of responsiveness and referral accuracy.

Our simulation results reveal several surprising aspects of the referral chaining process. Let us first consider some of the extreme cases. With R = 1.0 and A = 1.0, i.e., total responsiveness and perfect referral, we have a 100% success rate, and, on average, it takes about 12 messages to process a request. Now, if we reduce the accuracy of referral to 0.0 (i.e., nodes simply refer to random neighbors), our success rate drops to 8.5%,[2] and it takes almost 2,000 messages per request. There is a reasonably intuitive explanation for the large number of messages and the low success rate. Without any referral information, the search basically proceeds in a random manner. With 100,000 nodes and 5 expert nodes, we have 1 expert per 20,000 nodes. In a random search, a request would visit on average about 20,000 nodes to locate an expert. (For simplicity, we assume a completely connected graph where the request travels randomly from node to node, and may visit a node more than once.)
Given that the search only involves 2,000 messages, the measured value of 8.5% is intuitively plausible.[3] Comparing these numbers with perfect-information referral chaining, where we get a 100% success rate using only 12 messages, reveals the power of referral chaining.

[2] With maximum responsiveness, the referral process would only halt when an expert node is found, giving a success rate of 100%. However, in our simulation we used a maximum referral chain length of 10. This limit was only reached when R was set to 1.0.

[3] Note that our argument is only meant to provide some intuition behind the numbers. A rigorous analysis is much more involved. See, for example, Liestmann (1994).

Of course, in practice we have neither perfect referral nor complete responsiveness. Also, in our agent-amplified communication approach, we are specifically interested in the question of how the referral chaining process degrades with a decrease in referral accuracy. Let's consider this issue by looking at the results for R = 0.9 and A = 0.3, i.e., a slightly lower responsiveness but drastically reduced referral accuracy. Table 1 shows that we still have a success rate of over 50%, and it takes about 200 messages per request. These numbers suggest that with a reduced referral accuracy, one can still obtain effective referral chaining, albeit at the cost of an increase in the number of messages. This should not pose a problem in our agent-amplified communication model, where many messages will be processed automatically. Also, the 200 messages should be compared to the approximately 10,000 messages that would be required in a search without any referral information to reach the same success rate (see argument above). So, even with limited referral information, the referral chaining process can be surprisingly good, which suggests that an agent-amplified approach can be very effective.

Let us now briefly consider a setting of the parameters that would model more closely direct person-to-person referral chaining. In that case, we would expect a lower responsiveness, but a much higher referral accuracy. For example, consider R = 0.5 and A = 0.9: we now have a success rate of around 27%, with almost 10 messages per request. So, with a high referral accuracy, we only need a few messages to get reasonably successful referral chaining.

In general, our simulation experiments suggest that a decrease in referral accuracy can be compensated for by a somewhat higher responsiveness. The plot in Fig. 1 gives the success rate as a function of the responsiveness and referral accuracy. The figure nicely confirms a fairly smooth tradeoff between the two parameters. In fact, we observe a somewhat asymmetric tradeoff, in favor of reduced accuracy. That is, for example, with an accuracy of 0.4, we can still reach a success rate of around 80% by having a fairly high responsiveness of 0.9. However, given a responsiveness of 0.4, we cannot reach more than approximately a 20% success rate even with perfect referrals.

One might wonder about the effect of the particular settings of the various network parameters in our simulation. Tables 2, 3, and 4 give the results of some of our other simulations. We only included the most informative settings of R and A. Each experiment was based on a modification of a single parameter used in the simulation experiment for Table 1. (See the caption of each table.) The results in these tables show again that one can trade referral accuracy for responsiveness, and vice versa.

As we noted above, the simulation experiments were run on random graphs. Schwartz and Wood (1993) have created and analyzed graphs of email communication between groups of people, based on email logs they obtained in the late 1980s from system administrators around the world.
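The back-of-the-envelope random-search argument given earlier (1 expert per 20,000 nodes, roughly 2,000 messages per request) can be checked by treating each message as an independent random probe:

```python
# Probability that ~2,000 independent random probes hit one of the expert
# nodes, at a density of 1 expert per 20,000 nodes.
p_miss_one = 1 - 1 / 20_000        # a single probe misses all experts
success = 1 - p_miss_one ** 2_000  # at least one of 2,000 probes succeeds
print(round(success, 3))           # about 0.095, close to the measured
                                   # 8.5% success rate in Table 1
```

The independent-probe assumption is ours; it matches the paper's simplification of a completely connected graph with revisits allowed.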
An interesting direction for future work would be to run our simulations on graphs patterned after such actual communication patterns (although the heightened awareness of security and privacy issues concerning email would make it much more difficult to obtain the raw data today).

Conclusions

We studied the use of agents in assisting and simplifying person-to-person communication for information-gathering tasks. As an example, we considered the task of expertise location in a large organization. We discussed how user-agents can enhance the referral chaining process for expertise location. In our approach, agents gather information about the expertise of their owner and their owner's contacts. This information is then used in the referral chaining process to filter messages and to automate request referrals. We also presented simulation results of the referral chaining process. Our experiments suggest that the use of software agents for referral chaining can be surprisingly effective.

References

Coen, M. (1994). SodaBottle. M.Sc. Thesis, MIT AI Lab, Cambridge, MA, 1994.

Dent, L.; Boticario, J.; McDermott, J.; Mitchell, T.; and Zabowski, D. (1992). A personal learning apprentice. In Proc. AAAI-92, AAAI Press/The MIT Press, 1992, 96-103.

Burt, R. (1995). Structural Holes: The Social Structure of Competition. Harvard University Press, Cambridge, MA, 1995.

Etzioni, O., and Weld, D. S. (1994). A softbot-based interface to the internet. Comm. ACM, 37(7), 1994, 72-79.

Foner, L. (1996). A multi-agent referral system for matchmaking. In Proc. First International Conference and Exhibition on the Practical Application of Intelligent Agents and Multi-Agent Technology (PAAM-96), London, UK, 1996.

Granovetter, M. (1973). The Strength of Weak Ties. American Journal of Sociology, vol. 78, 1973, 1360-1380.

Grosser, D. (1990). Human communication. Annual Review of Info. Sci. and Techn., 1990.

Kautz, H.; Selman, B.; and Coen, M. (1994).
Bottom-up design of software agents. Comm. ACM, 37(7), 1994, 143-146.

Kirk, T.; Levy, A.; and Srivastava, D. (1995). The Information Manifold project. In Proc. AAAI-95 Spring Symposium on Information Gathering in Distributed Environments, Palo Alto, CA, April 1995.

Knoblock, C.; Arens, Y.; and Hsu, C.-N. (1994). Cooperating agents for information retrieval. In Proc. Second International Conference on Cooperative Information Systems, 1994.

Krackhardt, D., and Hanson, J. (1993). Informal networks: The company behind the chart. Harvard Business Review, 1993.

Kraut, R.; Galegher, J.; and Egido, C. (1990). Intellectual Teamwork: Social and Technological Bases for Cooperative Work. Lawrence Erlbaum Associates, Hillsdale, NJ, 1990.

Liestmann (1990). Broadcasting and gossiping. Discrete Applied Mathematics, 1990.

Maes, P., ed. (1993). Designing Autonomous Agents. MIT/Elsevier, 1993.

Maes, P., and Kozierok, R. (1993). Learning interface agents. In Proc. AAAI-93, AAAI Press/The MIT Press, 1993, 459-464.

Manber, U., and Wu, S. (1994). Glimpse: A tool to search through entire file systems. In Usenix Winter 1994 Technical Conference, 1994, 23-32.

Salton, G. (1989). Automatic Text Processing. Addison-Wesley, 1989.

Schwartz, M. F., and Wood, D. C. M. (1993). Discovering shared interests using graph analysis. Comm. ACM, 36(8), 1993, 78-89.
Cooperative Learning over Composite Search Spaces: Experiences with a Multi-agent Design System

M V Nagendra Prasad(1), Susan E. Lander(2), and Victor Lesser(1)

(1) Department of Computer Science, University of Massachusetts, Amherst, MA 01003. {nagendra,lesser}@cs.umass.edu

Abstract

We suggest the use of two learning techniques, short-term and long-term, to enhance search efficiency in a multi-agent design system by letting the agents learn about non-local requirements on the local search process. The first technique allows an agent to accumulate and apply constraining information about global problem solving, gathered as a result of agent communication, to further problem solving within the same problem instance. The second technique is used to classify problem instances and appropriately index and retrieve constraining information to apply to new problem instances. These techniques will be presented within the context of a multi-agent parametric-design application called STEAM. We show that learning conclusively improves solution quality and processing-time results.

Introduction

In this article, we study machine-learning techniques that can be applied within multi-agent systems (MAS) to improve solution quality and processing-time results. A ubiquitous problem with multi-agent systems that use cooperative search techniques is the "local perspective" problem. Constraining information is distributed across the agent set, but each individual agent perceives a search space bounded only by its local constraints rather than by the constraints of all agents in the system. This problem could be easily addressed if all expertise could be represented in the form of explicit constraints: the constraints could be collected and processed by a centralized constraint-satisfaction algorithm. However, the most compelling reasons for building multi-agent systems make it unlikely that the agents are that simple.
(2) Blackboard Technology Group, Inc., 401 Main Street, Amherst, MA 01002. lander@bbtech.com

*This material is based upon work supported by the National Science Foundation under Grant Nos. IRI-9523419 and EEC-9209623. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

More commonly, agents are complex systems in their own right, and their expertise is represented by a combination of declarative and procedural knowledge that cannot be captured as a set of explicit constraints. Therefore, each agent must operate independently to solve some subproblem associated with an overall task, and the individual solutions must be integrated into a globally consistent solution. An agent with only a local view of the search space cannot avoid producing subproblem solutions that conflict with other agents' solutions, and cannot always make intelligent decisions about managing conflicts that do occur.

We suggest the use of two learning techniques, short-term and long-term, to address the local perspective problem. The first technique allows an agent to accumulate and apply constraining information about global problem solving, gathered as a result of agent communication, to further problem solving within the same problem instance. The second technique is used to classify problem instances and appropriately index and retrieve constraining information to apply to new problem instances. These techniques will be presented within the context of a multi-agent parametric-design application called STEAM.

The remainder of this article is organized as follows: We first formalize our view of distributed search spaces for multi-agent systems and briefly present the STEAM prototype application (Lander 1994; Lander & Lesser 1996). The subsequent section introduces two learning mechanisms used to enhance search efficiency.
Next, we present experimental results that demonstrate the effectiveness of the learning mechanisms. In the last two sections, we discuss related work and present our conclusions.

Distributed Search Spaces

Search has been one of the fundamental concerns of Artificial Intelligence. When the entire search space is confined to a single logical entity, the search is centralized. In contrast, distributed search involves a state space, along with its associated search operators and control regime, partitioned across multiple agents. Lesser (1990; 1991) recognizes distributed search as a framework for understanding a variety of issues in Distributed Artificial Intelligence (DAI) and Multi-agent Systems (MAS).

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

In distributed search, multiple agents are required to synergistically search for a solution that is mutually acceptable to all of them and satisfies the constraints on global solutions. However, the constraining information is distributed across the agent set, and each individual agent perceives a search space bounded only by its local constraints rather than by the constraints of all the agents in the system. Thus we need to distinguish between a local search space and the composite search space.

A local search space is private to an agent and is defined by the domain values of the parameters used to constrain the local search. A set of local constraints, along with the problem specification, defines the local solution space as a subset of the local search space. More formally, for each agent A_i the following can be defined:

The parameter set P_i = {p_ij, 1 ≤ j ≤ x_i} with respective domains D_ij, 1 ≤ j ≤ x_i, from which the parameters take their values. D_i1 × D_i2 × ... × D_ix_i defines the local search space for agent A_i. A domain can be a set of discrete values, real values, labels, or intervals.
A parameter is a shared parameter if it belongs to the parameter sets of more than one agent. More formally, let AS(p) represent the set of agents that have parameter p as a part of their parameter set:

  AS(p) = {A_i | p ∈ {p_ij, 1 ≤ j ≤ x_i}}

p is a shared parameter iff |AS(p)| > 1.

Hard constraints HC_i^t = {hc_ij, 1 ≤ j ≤ y_i} represent the solution requirements that have to be satisfied for any local solution that agent A_i produces at time t.

Soft constraints SC_i^t = {sc_ij, 1 ≤ j ≤ z_i} represent the solution preferences of agent A_i at time t. Soft constraints can be violated or modified without affecting the ability of the agents to produce globally acceptable designs.

The set of hard and soft constraints, C_i^t = HC_i^t ∪ SC_i^t, defines a local solution space S_i^t ⊆ D_i1 × D_i2 × ... × D_ix_i:

  S_i^t = Space(C_i^t)

In constraint-optimization problems, not all constraints need to be satisfied. Hard constraints are necessarily satisfied, and soft constraints are satisfied to the extent possible. Soft constraints may have a varying degree of flexibility, with some being softer than others. Some of them may be relaxed when an agent is not able to satisfy all of them together. When S_i^t = ∅, the problem is over-constrained with respect to C_i^t. In this situation, domain-specific strategies are used to relax one of the soft constraints c_ik:

  S_i^t = Space(C_i^t - c_ik)

How soft constraints are relaxed can strongly affect system performance. Lander (1994) discusses some studies related to this issue. We will not attempt to discuss this further in the present paper.

A local solution s_k ∈ S_i^t for agent A_i consists of an assignment of values v_ij ∈ D_ij to parameters p_ij, 1 ≤ j ≤ x_i. Each of the local solutions may have utility measures attached to it to facilitate selection of a preferred solution from S_i^t. Agent A_i prefers solution s_m ∈ S_i^t over solution s_n ∈ S_i^t if the utility of s_m is more than the utility of s_n.
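The definitions above can be made concrete with a small sketch. The encoding below (dictionaries for assignments, predicates for hard and soft constraints) is purely illustrative and is not STEAM's actual representation.

```python
# Illustrative encoding of an agent's local search space: parameters with
# finite domains, hard constraints that must hold (Space(C)), and soft
# constraints used only to rank the surviving solutions.
from itertools import product

def local_solution_space(domains, hard_constraints):
    """Enumerate assignments satisfying every hard constraint.

    domains: dict parameter -> list of candidate values.
    hard_constraints: list of predicates over an assignment dict.
    """
    names = sorted(domains)
    space = []
    for values in product(*(domains[n] for n in names)):
        s = dict(zip(names, values))
        if all(c(s) for c in hard_constraints):
            space.append(s)
    return space

def best_solution(space, soft_constraints):
    """Prefer solutions satisfying the most soft constraints (utility)."""
    return max(space, key=lambda s: sum(c(s) for c in soft_constraints))

domains = {"run_speed": [3000, 3600, 4200], "flow_rate": [40, 50, 60]}
hard = [lambda s: s["run_speed"] <= 3600]   # requirement
soft = [lambda s: s["flow_rate"] >= 50]     # preference
space = local_solution_space(domains, hard)
```

Enumerating the cross product only works for tiny domains, of course; it is meant to mirror the S_i^t = Space(C_i^t) definition, not to be an efficient solver.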
We also define the notion of projection, which will be used in the later sections to formalize the description of the learning algorithms. For each local solution s_i = (p_i1 = v_i1, p_i2 = v_i2, ..., p_ix_i = v_ix_i) ∈ S_i^t and a set of parameters X, we can define a projection (s_i ↓ X) as follows:

  (s_i ↓ X) = {p_ij = v_ij | p_ij ∈ X}

For a solution space S_i^t, the projection (S_i^t ↓ X) is defined as

  (S_i^t ↓ X) = {(s_i ↓ X) | s_i ∈ S_i^t}

Constraints can be explicitly represented, as in, for example, (run-speed ≤ 3600), or they may be implicit, as in a procedural representation. An example of a procedurally embedded constraint may look as follows:

  if (run-speed <= 3000) then
      water-flow-rate = max(50, water-flow-rate)
  end if

In this constraint, the run-speed parameter implicitly constrains the water-flow-rate. In complex systems like STEAM, it is often the case that such implicit constraints are not easily discernible and, even when they are, it may not be possible to represent them in a declarative form that is suitable for sharing with other agents.

Soft constraints in STEAM are associated with four levels of flexibility, 1 to 4, with constraints at level 4 representing the highest preference. An agent tries to satisfy as many of the constraints at the highest level as possible. Solutions are evaluated based on the constraint level, the number of constraints satisfied at that level, and, finally, the cost of the design. Solutions satisfying more constraints at a higher level are preferred. If there are multiple design components satisfying the same number of constraints at the highest level, a component with least cost is chosen.

The composite search space CS is a shared search space derived from the composition of the local search spaces of the agents. The desired solution space for the multi-agent system is a subset of the composite search space. The parameters of the composite search space consist of:

  pg_k ∈ ⋃_{i=1..n} ⋃_{j=1..x_i} {p_ij},  where 1 ≤ k ≤ g
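The projection operator defined above translates directly into code; in this sketch, `project` stands in for (s_i ↓ X) and `project_space` for (S_i^t ↓ X), with solutions encoded as plain dictionaries.

```python
# The projection operators written directly: restrict a solution (or a
# whole solution space) to a set of parameters X.

def project(solution, X):
    """(s_i | X): keep only the assignments to parameters in X."""
    return {p: v for p, v in solution.items() if p in X}

def project_space(space, X):
    """(S_i | X): projection applied to every solution; duplicates merge."""
    return [dict(t) for t in {tuple(sorted(project(s, X).items())) for s in space}]

s = {"run_speed": 3000, "flow_rate": 50, "pipe_diameter": 2}
assert project(s, {"run_speed", "flow_rate"}) == {"run_speed": 3000,
                                                  "flow_rate": 50}
```

Note that projecting a space is a set operation: two local solutions that agree on X collapse to a single projected element, which is exactly what matters when comparing agents over shared parameters.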
The parameter set for the composite space represents the features of the overall design that are constrained by the composite search space. The domain for a parameter in this set is derived as the intersection of all the domains of the corresponding parameters in the agents that have this parameter as a shared parameter.

STEAM: A Multi-agent Design System

STEAM is a parametric design system that produces steam condensers in which a motor powers a pump that injects water into a heat-exchanger chamber. High-temperature steam input into this chamber is output as condensed steam. STEAM consists of seven agents for designing the components of such steam condensers: pump-agent, heat-exchanger-agent, motor-agent, platform-agent, vbelt-agent, shaft-agent, and system-frequency-critic. Each agent takes the responsibility either to design a component of a steam condenser for which it possesses expertise or to critique an evolving partial design. STEAM is a globally cooperative system, which implies that local performance measures may be sacrificed in the pursuit of better global performance. Thus an agent is willing to produce a component that is poorly rated locally but may participate in a design of high overall quality.

STEAM is a distributed search system. Each agent performs local search based upon the implicit and explicit constraints known within its local context. It has its own local state information, a local database with static and dynamic constraints on its design components, and a local agenda of potential actions. The search in STEAM is performed over a space of partial designs. A partial design represents a partial solution in the composite search space. Search is initiated by placing a problem specification in a centralized shared memory that also acts as a repository for the emerging composite solutions (i.e., partial solutions) and is visible to all the agents. Any design component produced by an agent is placed in the centralized repository.
Some of the agents initiate base proposals based on the problem specifications and their own internal constraints and local state. Other agents in turn extend or critique these proposals to form complete designs. The evolution of a composite solution in STEAM can be viewed as a series of state transitions. For a composite solution in a given state, an agent can play a role such as initiate-design, extend-design, or critique-design. An agent can be working on several composite solutions concurrently.

Learning Efficient Search

Problem solving in STEAM starts with agents possessing only local views of the search and solution spaces. Given such a limited perspective of the search space, an agent cannot avoid producing components that conflict with other agents' components. This section introduces two machine learning techniques that exploit the situations leading to conflicts so as to avoid similar conflicts in the future.

Conflict-Driven Learning (CDL)

CDL has been presented as negotiated search in Lander (Lander 1994). Below, we reinterpret this process as a form of learning and provide a formal basis for the learning mechanisms. As discussed previously, the purely local views that agents start out with are unlikely to lead to composite solutions that are mutually acceptable to all of them. When an agent attempts to extend or critique a partial design, it may detect that the design violates some of its local constraints. Let the set of parameters shared by agents A_i and A_j be X_ij. Then agent A_i, trying to extend or critique a solution s_j^t ∈ S_j^t, detects a conflict iff

  (s_j^t ↓ X_ij) ∉ (S_i^t ↓ X_ij)

The violated constraints can be either explicit or implicit. Explicit constraints can be shared, and an agent detecting violations of explicit constraints generates feedback to the agents that proposed the partial design involved in the conflict.
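Conflict detection via shared-parameter projection can be sketched as follows. This is a minimal illustration with hypothetical parameter names; it assumes local solution spaces are small enough to enumerate:

```python
def project(s, X):
    """(s ↓ X): restrict a solution (dict) to the parameter set X."""
    return {p: v for p, v in s.items() if p in X}

def detects_conflict(s_j, S_i, X_ij):
    """Agent A_i flags the proposal s_j iff its projection onto the
    shared parameters X_ij matches no solution A_i itself accepts."""
    proposed = project(s_j, X_ij)
    return all(project(s_i, X_ij) != proposed for s_i in S_i)

# A_i's locally acceptable solutions and the shared parameter set
S_i = [{"run-speed": 3000, "cost": 4000},
       {"run-speed": 3600, "cost": 4200}]
X_ij = {"run-speed"}

print(detects_conflict({"run-speed": 4400, "power": 60}, S_i, X_ij))  # True
print(detects_conflict({"run-speed": 3600, "power": 60}, S_i, X_ij))  # False
```

The first call conflicts because no solution acceptable to A_i has run-speed 4400; the second does not, since A_i can live with run-speed 3600 regardless of how the non-shared parameters are set.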
In STEAM, explicit constraints are limited to simple boundary constraints of the form (x < n), (x ≤ n), (x > n), or (x ≥ n) that specify maximum or minimum values for a parameter. If x is a shared parameter, then an explicit constraint on x can be shared with other agents. Such a conflict-driven exchange of feedback on non-local requirements allows each agent to develop an approximation of the composite (global) search space that includes both its local perspective and the explicit constraining information that it assimilates from other agents. This type of "negotiated search" can be viewed as learning by being told and is short-term in nature: the exchanged information is applied only to the current problem instance.

The following lemma shows that each exchange in CDL progressively improves an agent's view of the composite search space. Let the set of constraints communicated as feedback by agent A_j to A_i at time t upon detecting a conflict be FC_j^t.

Lemma: (CS ↓ P_i) ⊆ S_i^{t+1} ⊆ S_i^t, where S_i^{t+1} = Space(C_i^t ∪ FC_j^t)

The lemma says that A_i's view of the composite search space with the new conflict information assimilated is a refinement over the previous one with respect to the relevant portion of the actual composite search space. The design leading to a conflict is abandoned and the agents pursue other designs, but with an enhanced knowledge of composite solution requirements from there on.

The exchange of explicit constraints does not guarantee that the agents will find a mutually acceptable solution, because of the presence of implicit constraints that cannot be shared. Thus, even if all the explicit constraints that lead to conflicts are exchanged by time t_f, the resulting view of agent A_i of the composite search space is still an approximation of the true composite search space:

  (CS ↓ P_i) ⊆ S_i^{t_f}

However, to the extent that an agent's local view approaches the global view, the agent is likely to be more effective at proposing conflict-free proposals.
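The assimilation of boundary-constraint feedback can be sketched as follows. The (parameter, operator, bound) triple representation is an assumption made for illustration, not STEAM's actual encoding:

```python
# Boundary constraints as (param, op, n) triples; an agent refines
# its view of the composite space by conjoining feedback constraints.
import operator

OPS = {"<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def space(solutions, constraints):
    """Space(C): the candidate solutions satisfying every constraint."""
    def ok(s):
        return all(OPS[op](s[p], n) for p, op, n in constraints if p in s)
    return [s for s in solutions if ok(s)]

candidates = [{"run-speed": r} for r in (2400, 3000, 3600, 4200)]
local = [("run-speed", "<=", 4200)]          # C_i: local constraints
feedback = [("run-speed", "<", 3600)]        # FC_j: learned on conflict

before = space(candidates, local)
after = space(candidates, local + feedback)  # refined view after CDL
print(len(before), len(after))  # 4 2
```

Conjoining FC_j can only shrink the space, which is the content of the lemma: the refined view still contains the relevant portion of the composite space but excludes proposals that would be rejected again.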
Case-Based Learning (CBL)

Agents can also learn to predict composite solution requirements based on their past problem-solving experience. We endow agents with capabilities for case-based learning (CBL) to accumulate local views of the composite search-space requirements across many design runs. This can be viewed as long-term learning: the learned information is available for retrieval with future problem instances.

During the learning phase, the agents perform their search with conflict-driven learning as discussed above. However, at the end of each search, an agent stores the problem specification and the non-local constraints it received as feedback from the other agents as an approximation of the non-local requirements on the composite solution space for that problem specification. After the agents learn over a sufficiently large training set, they can replace the process of assimilating feedback with learned knowledge. When a new problem instance is presented to the agent set, each agent retrieves the set of non-local constraints stored under a past problem specification that is similar to the present one and adds them to its set of local requirements at the start of the search. Thus agents can avoid communication to achieve approximations of the composite search space:

  (CS ↓ P_i) ⊆ S_i^{t=0}

where S_i^{t=0} is defined by the local domain constraints and the constraints of the similar past case from the case base:

  S_i^{t=0} = Space(C_i^0 ∪ FC_1NN)

We use the 1-Nearest-Neighbor (1NN) algorithm based on the Euclidean metric to obtain the most similar past case.

Experimental Results

In this section, we empirically demonstrate the merits of learning composite search-space requirements, both short-term and long-term. We experimented with the 3 search strategies described below:

- Blind Search (BS): No learning is applied.
  When an agent detects a conflict in a particular design, it chooses another design to pursue. Agents do not communicate any information.

- Conflict-Driven Learning (CDL): An agent that detects a conflict generates feedback that is assimilated by the recipient agents. The recipients use the conflict information to constrain future searches within a single design run: for example, when proposing or extending alternative designs.

- Case-Based Learning (CBL): The agents use previously accumulated cases to start their problem solving with an awareness of the non-local constraints. Agents do not communicate during problem solving. Each agent uses a 1-NN algorithm to find the case most similar to the present problem-solving instance and initializes its non-local requirements with the constraints in the case. We ran the algorithm at different case-base sizes: 50, 100, 150, 200.

The STEAM system used in these experiments had seven agents, as described previously. Four of them - pump-agent, heat-exchanger-agent, motor-agent, and vbelt-agent - can either initiate a design or extend an existing partial design. Platform-agent and shaft-agent can only extend a design, and frequency-critic always critiques a partial design. Each agent in turn gets a chance to perform a role during a cycle. The number of cycles is a good approximation to the amount of search performed by the entire system. The problem specification consisted of three parameters: required-capacity, platform-side-length, and maximum-platform-deflection. Problem solving terminates when the agents produce a single mutually acceptable design. We trained the CBL system with randomly chosen instances and then tested all three search strategies on the same set of 100 instances different from the training instances. Table 1 shows the average cost of designs produced by each of the algorithms. Table 2 shows the average number of cycles per design.
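The 1-NN retrieval over three-parameter problem specifications can be sketched as follows. The stored-case format and the numeric values are assumptions for illustration:

```python
import math

def nearest_case(case_base, spec):
    """Return the stored constraints of the case whose problem
    specification is closest to `spec` under the Euclidean metric."""
    def dist(past_spec):
        return math.sqrt(sum((past_spec[k] - spec[k]) ** 2 for k in spec))
    best = min(case_base, key=lambda case: dist(case["spec"]))
    return best["constraints"]

# Each case pairs a problem specification with the non-local
# constraints received as feedback during that run (hypothetical data).
case_base = [
    {"spec": {"required-capacity": 100, "platform-side-length": 4,
              "maximum-platform-deflection": 0.02},
     "constraints": [("run-speed", "<", 3600)]},
    {"spec": {"required-capacity": 250, "platform-side-length": 6,
              "maximum-platform-deflection": 0.05},
     "constraints": [("run-speed", "<=", 3000)]},
]

new_spec = {"required-capacity": 110, "platform-side-length": 4,
            "maximum-platform-deflection": 0.02}
print(nearest_case(case_base, new_spec))  # [('run-speed', '<', 3600)]
```

An agent would conjoin the returned constraints with its local ones at t = 0, replacing the run-time feedback exchange of CDL.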
The Wilcoxon matched-pair signed-ranks test revealed that the costs of designs produced by STEAM with CBL and CDL were lower than those produced by STEAM with blind search at significance level 0.05. The same test, however, revealed no significant difference between the costs of designs produced by STEAM with CDL and those produced by STEAM with CBL. CBL was able to produce slightly better designs than CDL because CDL performs blind search initially, until it runs into conflicts and gains a better view of the composite search space through the exchange of feedback on those conflicts. CBL, on the other hand, starts the problem solving with a good approximation of the global solution-space requirements and hence manages to do better than CDL. Even though CDL gains a more accurate view of the non-local requirements after all the exchange is done, the fact that the past cases are only an approximation of the present requirements in CBL seems to be offset by the more informed search done in the initial stages.

  Blind    CDL     CBL-50   CBL-100  CBL-150  CBL-200
  7227.2   6598.0  6572.96  6571.54  6526.03  6514.76

Table 1: Average cost of a design

  Blind    CDL     CBL-50   CBL-100  CBL-150  CBL-200
  15.54    12.98   13.26    13.36    13.03    12.94

Table 2: Average number of cycles per design

Our results conclusively demonstrate that conflict-driven learning (CDL) and case-based learning (CBL) improve both solution quality and processing time compared to blind search. In addition, once the learning is completed, CBL requires no run-time communication. Note, however, that CDL is required during the learning phase.

Related Work

We classify the work relevant to the topic at hand into three categories: distributed search, conflict management, and multi-agent learning.

Distributed search has been the explicit focus of research among a small group of DAI researchers for the past few years. Yokoo et al. (Yokoo, Durfee, & Ishida 1992), Conry et al. (Conry et al. 1991), and Sycara et al.
(Sycara et al. 1991) have investigated various issues in distributed search. However, implicit in all of these pieces of work are the assumptions that the agents have homogeneous local knowledge and representations and tightly integrated system-wide problem-solving strategies across all agents.

Conflict-management approaches are very similar to the conflict-driven learning mechanism presented here. Klein (Klein 1991) develops a computational model of the resolution of conflicts among groups of expert agents. Associated with each possible conflict is advice for resolving the conflict. A piece of advice is chosen from the set of conflict-resolution advice of the active conflict classes to deal with the encountered conflict. We believe that Klein's work provides a general foundation for handling conflicts in design application systems. However, it falls short of embedding such conflict-resolution mechanisms into the larger problem-solving context, which can involve studying issues like solution evaluation, information exchange, and learning. Khedro and Genesereth (Khedro & Genesereth 1993) present a strategy called progressive negotiation for resolving conflicts among multi-agent systems. Using this strategy, the agents can provably converge to a mutually acceptable solution if one exists. However, the guarantee of convergence relies crucially on an explicit declarative representation and the exchange of all constraining information. More commonly, STEAM-like systems are aggregations of complex agents whose expertise is represented by a combination of declarative and procedural knowledge that cannot be captured as a set of explicit constraints.

Previous work related to learning in multi-agent systems is limited. Tan (Tan 1993) and Sen and Sekaran (Sen & Sekaran 1994) represent work in multi-agent reinforcement learning systems.
While these works highlight interesting aspects of multi-agent learning systems, they are primarily centered around toy problems in a grid world. STEAM is one of the few complex multi-agent systems demonstrating the viability of such methods for interesting learning tasks in the domain of problem-solving control, a notion that is not explicit in the above systems. Nagendra Prasad et al. (Nagendra Prasad, Lesser, & Lander 1995) discuss organizational role learning in STEAM for organizing the control of the distributed search process among the agents. This work uses reinforcement learning to let the agents organize themselves to play appropriate roles in distributed search. Shoham and Tennenholtz (Shoham & Tennenholtz 1992) discuss co-learning and the emergence of conventions in multi-agent systems with simple interactions. Shaw and Whinston (Shaw & Whinston 1989) discuss a classifier-system-based multi-agent learning system for resource and task allocation in flexible manufacturing systems. However, genetic algorithms and classifier systems have specific representational requirements to achieve learning. In many complex real-world expert systems, it may be difficult to meet such requirements. A related work presented in Weiss (Weiss 1994) uses classifier systems for learning an aspect of multi-agent systems that is different from that presented here. Multiple agents use a variant of Holland's (Holland 1985) bucket-brigade algorithm to learn appropriate instantiations of hierarchical organizations for efficiently solving blocks-world problems.

Conclusion

Our paper investigates the role of learning in improving the efficiency of cooperative, distributed search among a set of heterogeneous agents for parametric design. Our experiments suggest that conflict-driven short-term learning can drastically improve the search results.
However, even more interestingly, these experiments also show that the agents can rely on their past problem-solving experience across many problem instances to predict the kinds of conflicts that will be encountered, and thus avoid the need for communicating feedback on conflicts as in the case of short-term learning (communication is still needed during the learning phase).

The methods presented here address the acute need for well-tailored learning mechanisms in open, reusable-agent systems like STEAM. A reusable-agent system is an open system assembled by minimal customized integration of a dynamically selected subset from a catalogue of existing agents. Reusable agents may be involved in systems and situations that may not have been explicitly anticipated at the time of their design. Learning can alleviate the huge knowledge-engineering effort involved in understanding the agent mechanisms and making them work together.

References

Conry, S. E.; Kuwabara, K.; Lesser, V. R.; and Meyer, R. A. 1991. Multistage negotiation for distributed constraint satisfaction. IEEE Transactions on Systems, Man, and Cybernetics 21(6).

Holland, J. H. 1985. Properties of the bucket brigade algorithm. In First International Conference on Genetic Algorithms and their Applications, 1-7.

Khedro, T., and Genesereth, M. 1993. Progressive negotiation: A strategy for resolving conflicts in cooperative distributed multi-disciplinary design. In Proceedings of the Conflict Resolution Workshop, IJCAI-93.

Klein, M. 1991. Supporting conflict resolution in cooperative design systems. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1379-1390.

Lander, S. E., and Lesser, V. R. 1996. Sharing meta-information to guide cooperative search among heterogeneous reusable agents. To appear in IEEE Transactions on Knowledge and Data Engineering.

Lander, S. E. 1994. Distributed Search in Heterogeneous and Reusable Multi-Agent Systems. Ph.D.
Dissertation, University of Massachusetts.

Lesser, V. R. 1990. An overview of DAI: Distributed AI as distributed search. Journal of the Japanese Society for Artificial Intelligence 5(4):392-400.

Lesser, V. R. 1991. A retrospective view of FA/C distributed problem solving. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1347-1362.

Nagendra Prasad, M. V.; Lesser, V. R.; and Lander, S. E. 1995. Learning organizational roles in a heterogeneous multi-agent system. Computer Science Technical Report 95-35, University of Massachusetts.

Sen, S., and Sekaran, M. 1994. Learning to coordinate without sharing information. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 426-431. Seattle, WA: AAAI.

Shaw, M. J., and Whinston, A. B. 1989. Learning and adaptation in DAI systems. In Gasser, L., and Huhns, M., eds., Distributed Artificial Intelligence, volume 2, 413-429. Pitman Publishing/Morgan Kaufmann.

Shoham, Y., and Tennenholtz, M. 1992. Emergent conventions in multi-agent systems: Initial experimental results and observations. In Proceedings of KR-92.

Sycara, K.; Roth, S.; Sadeh, N.; and Fox, M. 1991. Distributed constrained heuristic search. IEEE Transactions on Systems, Man, and Cybernetics 21(6):1446-1461.

Tan, M. 1993. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning, 330-337.

Weiss, G. 1994. Some studies in distributed machine learning and organizational design. Technical Report FKI-189-94, Institut für Informatik, TU München.

Yokoo, M.; Durfee, E. H.; and Ishida, T. 1992. Distributed constraint satisfaction for formalizing distributed problem solving. In Proceedings of the Twelfth International Conference on Distributed Computing Systems.
Embracing Causality in Specifying the Indeterminate Effects of Actions

Fangzhen Lin
Department of Computer Science
University of Toronto
Toronto, Canada M5S 3H5
email: fl@ai.toronto.edu

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Abstract

This paper makes the following two contributions to formal theories of actions: showing that a causal minimization framework can be used effectively to specify the effects of indeterminate actions; and showing that for certain classes of such actions, regression, an effective computational mechanism, can be used to reason about them.

Introduction

Much recent work on theories of actions has concentrated on primitive, determinate actions. In this paper, we pose ourselves the problem of specifying directly the effects of indeterminate actions,¹ as we do for the primitive, determinate ones.

There are several reasons why we think this is an important problem. First of all, there are actions whose effects, when described at a natural level, are indeterminate. Secondly, one can argue that there is no absolute defining line between determinate and indeterminate actions. The differences have a lot to do with the levels of description. The effects of an action may be determinate at one level of description, but indeterminate at another. So a theory that treats determinate and indeterminate actions in fundamentally different ways will have difficulties coping with language changes. Finally, even if all the primitive actions have determinate effects, there are still needs for specifying directly the effects of complex actions, which are often indeterminate. For instance, these specifications may be part of the inputs to a program synthesizer.

Our contributions in this paper are twofold. We first show that the causal minimization framework of (Lin [5]) can be used effectively to specify the effects of indeterminate actions. We then show that for certain classes of such actions, regression, an effective computational mechanism, can be used to reason about them.

¹For the purpose of this paper, the phrases "the effects of indeterminate actions" and "the indeterminate effects of actions" are considered to be synonyms.

Logical Preliminaries

We shall investigate the problem in the framework of the situation calculus [8]. Our version of it employs a many-sorted second-order language. We assume the following sorts: situation for situations, action for actions, fluent for propositional fluents, truth-value for the truth values true and false, and object for everything else.

We use the following domain-independent predicates and functions:

- The binary function do: for any action a and any situation s, do(a, s) is the situation resulting from performing a in s.
- The binary predicate H: for any propositional fluent p and any situation s, H(p, s) is true if p holds in s.
- The binary predicate Poss: for any action a and any situation s, Poss(a, s) is true if a is possible (executable) in s.
- The ternary predicate Caused: for any fluent p, any truth value v, and any situation s, Caused(p, v, s) is true if the fluent p is caused to have the truth value v in the situation s. For instance, Caused(loaded, true, do(load, s)) means that the action load causes loaded to be true in the resulting situation.

We shall make use of some additional special predicates and functions, and will introduce them when they are needed. We assume that all theories in this paper include the following basic axioms:

- For the predicate Caused, the following two basic axioms:²

  Caused(p, true, s) ⊃ H(p, s),   (1)
  Caused(p, false, s) ⊃ ¬H(p, s).   (2)

- For the truth values, the following unique names and domain closure axiom:

  true ≠ false ∧ (∀v)(v = true ∨ v = false).   (3)

²We use the convention that in displayed formulas, free variables are implicitly universally quantified from the outside.

670 Knowledge Representation
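The situation-calculus vocabulary above can be given a small programmatic reading: situations are nested do(...) terms rooted at an initial situation. This is an illustrative encoding, not the paper's formal language; the constant name "S0" and the fluent/action names are assumptions:

```python
# Situations as nested do(a, s) terms. A frozen dataclass gives
# structural equality, which mirrors the unique-names axioms for
# situations: two do-terms are equal iff actions and situations match.
from dataclasses import dataclass

@dataclass(frozen=True)
class Do:
    action: str
    situation: object  # "S0" or another Do term

s0 = "S0"
s1 = Do("load", s0)
s2 = Do("shoot", s1)
print(s2)  # Do(action='shoot', situation=Do(action='load', situation='S0'))
```

Structural equality of `Do` terms corresponds to the axiom do(a, s) = do(a', s') ⊃ (a = a' ∧ s = s'), and no `Do` term equals the initial situation, mirroring S0 ≠ do(a, s).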
- The unique names assumptions for fluent and action names (we assume there are only finitely many of them). Specifically, if {F₁, ..., Fₙ} is the set of fluent names, then we have:

  F_i(x⃗) ≠ F_j(y⃗), for different i and j,
  F_i(x⃗) = F_i(y⃗) ⊃ x⃗ = y⃗.

Similarly for action names. In the following, we shall denote this set of unique names axioms by D_una.

- The set C of foundational axioms in [6] for the discrete situation calculus. These axioms characterize the structure of the space of situations. For the purpose of this paper, it is enough to mention that they include the following unique names axioms for situations:

  S₀ ≠ do(a, s),
  do(a, s) = do(a', s') ⊃ (a = a' ∧ s = s').

In the rest of this paper, we shall frequently make use of the following shorthand notation: if F is a fluent name of arity objectⁿ → fluent, then we define the expression F(t₁, ..., tₙ, t_s) to be a shorthand for the formula H(F(t₁, ..., tₙ), t_s), where t₁, ..., tₙ are terms of sort object and t_s is a term of sort situation. So if white is a fluent, then white(s) is a shorthand for H(white, s).

Minimizing Causation

The approach of (Lin [5]) to specifying the effects of actions can be summarized as follows:

1. Formalize the causal laws and constraints of the domain by a set T of axioms.

2. Circumscribe (minimize) Caused in T ∪ C ∪ D_una ∪ {1, 2, 3} with all other predicates fixed.

3. The resulting theory, T', together with the following generic frame axiom (unless caused otherwise, a fluent's truth value will persist):

  Poss(a, s) ⊃ {¬(∃v)Caused(p, v, do(a, s)) ⊃ [H(p, do(a, s)) ≡ H(p, s)]},   (4)

will generate the appropriate frame axioms.

Lin [5] also discusses how the action preconditions can be generated. However, in this paper we shall not concern ourselves with this problem, but assume, following (Reiter [10]), that for each action A(x⃗) we are given an action precondition axiom of the form Poss(A(x⃗), s) ≡ Ψ_A(x⃗, s), where Ψ_A is a formula that does not quantify over situation variables, and does not mention any situation-dependent atomic formulas except those of the form H(t, s), where t is a propositional fluent term.

We shall be using the following lemma for computing the circumscription of Caused:

Lemma 1 Let W = T ∪ C ∪ D_una ∪ {1, 2, 3}. Then Circum(W, Caused), the result of circumscribing Caused in W with all other predicates fixed, is equivalent to {Circum(T, Caused)} ∪ C ∪ D_una ∪ {1, 2, 3}.

Proof: This is because the predicate Caused occurs only negatively in C ∪ D_una ∪ {1, 2, 3}.

To illustrate how to use this approach to specify the effects of indeterminate actions, consider Reiter's example of dropping a pin on a checkerboard: the pin may land inside a white square, inside a black square, or touching both. We introduce three fluents: white (all or part of the pin is in a white square), black (all or part of the pin is in a black square), and holding (the agent is holding the pin); and two actions: drop (the agent drops the pin on the checkerboard) and pickup (the agent picks up the pin). We have the following action precondition axioms:³

  Poss(drop, s) ≡ holding(s) ∧ ¬white(s) ∧ ¬black(s),
  Poss(pickup, s) ≡ ¬holding(s) ∧ (white(s) ∨ black(s)).

We have the following effect axioms:

  Poss(pickup, s) ⊃ Caused(holding, true, do(pickup, s)),   (5)
  Poss(pickup, s) ⊃ Caused(white, false, do(pickup, s)),   (6)
  Poss(pickup, s) ⊃ Caused(black, false, do(pickup, s)),   (7)
  Poss(drop, s) ⊃ Caused(holding, false, do(drop, s)),   (8)
  Poss(drop, s) ⊃
    [Caused(white, true, do(drop, s)) ∧ Caused(black, false, do(drop, s))] ∨
    [Caused(white, false, do(drop, s)) ∧ Caused(black, true, do(drop, s))] ∨
    [Caused(white, true, do(drop, s)) ∧ Caused(black, true, do(drop, s))].
  (9)

Suppose these are the only effect axioms, and there are no causal rules and state constraints.⁴ By Lemma 1, it is easy to see that circumscribing Caused in {(5)-(9)} ∪ C ∪ D_una ∪ {1, 2, 3} yields:

  Poss(a, s) ∧ Caused(p, v, do(a, s)) ⊃
    a = pickup ∧ [(p = holding ∧ v = true) ∨ (p = white ∧ v = false) ∨ (p = black ∧ v = false)] ∨
    a = drop ∧ [(p = holding ∧ v = false) ∨ p = white ∨ p = black].

From this and the generic frame axiom (4), we can deduce the following successor state axiom (Reiter [10]) for holding:

  Poss(a, s) ⊃ holding(do(a, s)) ≡ a = pickup ∨ (holding(s) ∧ a ≠ drop).

We do not get successor state axioms for white and black, but we have the following explanation closure axioms:

  Poss(a, s) ∧ ¬[white(s) ≡ white(do(a, s))] ⊃ (a = pickup ∨ a = drop),
  Poss(a, s) ∧ ¬[black(s) ≡ black(do(a, s))] ⊃ (a = pickup ∨ a = drop).

These axioms, together with the effect axioms, yield the following disjunction of successor state axioms:

  Poss(a, s) ⊃
    {[white(do(a, s)) ≡ (a = drop ∨ (white(s) ∧ a ≠ pickup))] ∧
     [black(do(a, s)) ≡ (black(s) ∧ a ≠ pickup ∧ a ≠ drop)]} ∨
    {[white(do(a, s)) ≡ (white(s) ∧ a ≠ pickup ∧ a ≠ drop)] ∧
     [black(do(a, s)) ≡ (a = drop ∨ (black(s) ∧ a ≠ pickup))]} ∨
    {[white(do(a, s)) ≡ (a = drop ∨ (white(s) ∧ a ≠ pickup))] ∧
     [black(do(a, s)) ≡ (a = drop ∨ (black(s) ∧ a ≠ pickup))]}.

Notice the correspondence between the three cases and those in drop's effect axiom for white and black.

³Recall that, as defined earlier, holding(s), for instance, is a shorthand for H(holding, s).
⁴Notice that the state constraint (∀s)¬[holding(s) ∧ (white(s) ∨ black(s))] has been built into the action effect and precondition axioms.

Reasoning about Action 671

Classes of Indeterminate Actions

The indeterminate effects of drop are inclusive in that the pin may land on a white square, a black square, or both.
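The three-way disjunction above can be checked mechanically. The following is a minimal sketch, not the paper's logical machinery, that enumerates the possible successor states of the checkerboard example under each action; state dictionaries and the outcome listing are illustrative assumptions:

```python
# Enumerate possible successor states for the pin-and-checkerboard
# example. A state assigns truth values to the fluents; drop has three
# possible outcomes (white, black, or both) and pickup has one.

def poss(action, s):
    if action == "drop":
        return s["holding"] and not s["white"] and not s["black"]
    if action == "pickup":
        return not s["holding"] and (s["white"] or s["black"])
    return False

def successors(action, s):
    """All states consistent with the (indeterminate) effect axioms."""
    if not poss(action, s):
        return []
    if action == "pickup":
        return [{"holding": True, "white": False, "black": False}]
    # drop: holding becomes false; at least one of white/black true
    return [{"holding": False, "white": w, "black": b}
            for w, b in [(True, False), (False, True), (True, True)]]

s0 = {"holding": True, "white": False, "black": False}
for s1 in successors("drop", s0):
    print(s1)
# three successor states; in each, holding is false and
# at least one of white and black is true
```

Each of the three enumerated outcomes corresponds to one disjunct of the successor state axiom and to one case of drop's effect axiom.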
To see how such inclusive indeterminate effects can be represented succinctly, notice first that under the two general axioms (1) and (2) about Caused, the effect axiom (9) is equivalent to the following three axioms:

  Poss(drop, s) ⊃ {Caused(white, true, do(drop, s)) ∨ Caused(black, true, do(drop, s))},
  Poss(drop, s) ⊃ {Caused(white, true, do(drop, s)) ∨ Caused(white, false, do(drop, s))},
  Poss(drop, s) ⊃ {Caused(black, true, do(drop, s)) ∨ Caused(black, false, do(drop, s))}.

Notice that under the domain closure and unique names axiom (3) for truth values, the last axiom is equivalent to

  Poss(drop, s) ⊃ (∃v)Caused(black, v, do(drop, s)).

This axiom is like the releases propositions in the action description language of [3]. Notice here the necessity of something like the predicate Caused. The corresponding sentence in terms of H:

  Poss(drop, s) ⊃ {H(black, do(drop, s)) ∨ ¬H(black, do(drop, s))}

is just a tautology.

In general, if the action α has inclusive indeterminate effects on the fluent terms P₁, ..., Pₙ, i.e. causes at least one of them to be true and the rest of them to be false, under the context γ, then we have the following causal laws:

  Poss(α, s) ∧ γ ⊃ {Caused(P₁, true, do(α, s)) ∨ ... ∨ Caused(Pₙ, true, do(α, s))},
  Poss(α, s) ∧ γ ⊃ {Caused(P_i, true, do(α, s)) ∨ Caused(P_i, false, do(α, s))},

where 1 ≤ i ≤ n.

The number of indeterminate effects need not be finite. If, under the context γ, the action α has inclusive indeterminate effects on F(x) for those x that satisfy φ, then we have the following causal laws:

  Poss(α, s) ∧ γ ∧ (∃x)φ(x) ⊃ (∃x)[φ(x) ∧ Caused(F(x), true, do(α, s))],
  Poss(α, s) ∧ γ ⊃ (∀x){φ(x) ⊃ [Caused(F(x), true, do(α, s)) ∨ Caused(F(x), false, do(α, s))]}.

For instance, playing loud rock and roll music will make some of the nearby people (including the person who plays it) happy, and the rest of them unhappy: let γ be true, φ(x) be nearby(x, s), and F(x) be happy(x).
In contrast to the inclusive indeterminate effects, we have the exclusive ones. For instance, flipping a coin causes exactly one of {head, tail} to be true. Generally, if the action α has exclusive indeterminate effects on the fluent terms P₁, ..., Pₙ, i.e. causes exactly one of them to be true and the rest of them to be false, under the context γ, then we have the following causal laws:

  Poss(α, s) ∧ γ ⊃ {Caused(P₁, true, do(α, s)) ⊕ ... ⊕ Caused(Pₙ, true, do(α, s))},
  Poss(α, s) ∧ γ ⊃ {Caused(P_i, true, do(α, s)) ∨ Caused(P_i, false, do(α, s))},

where 1 ≤ i ≤ n, and ⊕ is the exclusive-or operator. Again, the number of indeterminate effects need not be finite.

There are, of course, actions with indeterminate effects that are neither inclusive nor exclusive. In general, if the number of the indeterminate effects of an action A(x⃗) is finite, then its effect axioms can be written in the following forms:

  Poss(A(x⃗), s) ⊃ (∀p, v)[φ(x⃗, p, v, s) ⊃ Caused(p, v, do(A(x⃗), s))],   (10)

  Poss(A(x⃗), s) ⊃ {(∀p, v)[φ₁(x⃗, p, v, s) ⊃ Caused(p, v, do(A(x⃗), s))] ∨ ... ∨ (∀p, v)[φₙ(x⃗, p, v, s) ⊃ Caused(p, v, do(A(x⃗), s))]},   (11)

where φ and the φᵢ's are formulas that do not quantify over situation variables, and do not mention any other situation-dependent atomic formulas except those of the form H(t, s). For instance, the two effect axioms about drop can be rewritten as:

  Poss(drop, s) ⊃ (∀p, v){p = holding ∧ v = false ⊃ Caused(p, v, do(drop, s))},   (12)

  Poss(drop, s) ⊃
    (∀p, v){[p = white ∧ v = true ∨ p = black ∧ v = false] ⊃ Caused(p, v, do(drop, s))} ∨
    (∀p, v){[p = white ∧ v = false ∨ p = black ∧ v = true] ⊃ Caused(p, v, do(drop, s))} ∨
    (∀p, v){[p = white ∧ v = true ∨ p = black ∧ v = true] ⊃ Caused(p, v, do(drop, s))}.   (13)

Notice that (10) and (11) can be combined into a single axiom of the latter form. But as we shall see later, it is beneficial to have a separate axiom for determinate effects.
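The difference between inclusive and exclusive indeterminate effects can be made concrete by enumerating the admissible outcome sets over the affected fluents. This is an illustrative sketch; the fluent names come from the paper's examples:

```python
from itertools import product

# Outcome sets for indeterminate effects on fluents P1..Pn:
# inclusive = at least one becomes true; exclusive = exactly one.

def inclusive_outcomes(fluents):
    return [dict(zip(fluents, vals))
            for vals in product([True, False], repeat=len(fluents))
            if any(vals)]

def exclusive_outcomes(fluents):
    return [dict(zip(fluents, vals))
            for vals in product([True, False], repeat=len(fluents))
            if sum(vals) == 1]

print(len(inclusive_outcomes(["white", "black"])))  # 3 (drop)
print(len(exclusive_outcomes(["head", "tail"])))    # 2 (coin flip)
```

With two fluents, the inclusive case admits three outcomes (matching the three disjuncts of drop's effect axiom) while the exclusive case admits two (matching the coin flip).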
Computing Successor State Axioms

We now consider how to reason with the theories of the actions whose effects are specified by axioms of the forms (10) and (11). Let T_ea be a given set of effect axioms of the forms (10) and (11). Then the conjunction of the sentences in T_ea is separable (Lifschitz [4]) w.r.t. Caused. Therefore, according to a result in [4], Circum(T_ea, Caused), the circumscription of Caused in T_ea, is computable by a first-order sentence. In general, this sentence, together with D_una, will yield a disjunction of successor state axioms, which is often large and cumbersome to reason with. In particular, it is not clear how to compute regression, a computationally effective mechanism for tasks such as planning and temporal projection [11, 9, 10], w.r.t. such disjunctions.

A Transformation

To overcome this, we introduce a new ternary predicate Case of the arity object × action × situation, and a distinguished constant 0 and a unary function succ over sort object. We use the convention that if a natural number n occurs as an object term in a formula, then it is considered to be a shorthand for the term obtained from 0 by applying n times the function succ. For instance, in Case(2, a, s), the number 2 is a shorthand for succ(succ(0)).

For now we shall consider Case to be an auxiliary predicate introduced for computational purposes. Later, we shall consider some possible interpretations of this predicate. Using Case, we transform the indeterminate effect axiom (11) into the following sentences that have the form of a determinate effect axiom:

  Poss(A(x̄), s) ∧ Case(1, A(x̄), s) ⊃ (∀p, v)[φ1(x̄, p, v, s) ⊃ Caused(p, v, do(A(x̄), s))],   (14)
  ...
  Poss(A(x̄), s) ∧ Case(n, A(x̄), s) ⊃ (∀p, v)[φn(x̄, p, v, s) ⊃ Caused(p, v, do(A(x̄), s))],   (15)

together with the following constraints on Case:

  Case(1, A(x̄), s) ⊕ ···
  ⊕ Case(n, A(x̄), s),   (16)

  {(∀p, v)[φi(x̄, p, v, s) ⊃ (φ(x̄, p, v, s) ∨ φj(x̄, p, v, s))] ∧
   ¬(∀p, v)[φj(x̄, p, v, s) ⊃ (φ(x̄, p, v, s) ∨ φi(x̄, p, v, s))]} ⊃ ¬Case(i, A(x̄), s),   (17)

for any 1 ≤ i ≠ j ≤ n. Notice the exclusive or in (16). This is because when Caused is circumscribed, the logical or in (11) will become exclusive. The intuitive meaning of (17) is that if the extension of (λp, v)(φi ∨ φ) strictly contains that of (λp, v)(φj ∨ φ), then the conjunct corresponding to Case(i, A(x̄), s) cannot be minimal, so Case(i, A(x̄), s) must not hold. These constraints are best understood in light of the following Theorem 1, which establishes the correctness of the above transformation.

Notice also that this transformation applies only to the indeterminate effect axiom (11). This is why it is beneficial to put as much information as possible into (10).

In the following, we shall denote by T'_ea the set of axioms obtained from T_ea by replacing every indeterminate effect axiom in it of the form (11) by the axioms (14)-(15). We shall denote by D_case the set of constraints (16) and (17). Notice that this set also depends on T_ea.

Reasoning about Action 673

Given two theories T1 and T2 such that T1's language is T2's augmented by a new predicate P, we say that these two theories are equivalent with respect to T2's language if T1 is a conservative extension of T2: a structure is a model of T2 iff it can be extended into a model of T1. As it turns out, this is the same as saying that T2 is the result of forgetting P in T1 according to (Lin and Reiter [7]), and according to a result there, when T1 is finite, this is the same as saying that T2 is logically equivalent to the sentence (∃P) ⋀ T1, where ⋀ T1 is the conjunction of the sentences in T1.⁵
We have:

Theorem 1 Under the unique names axioms D_una, the result of circumscribing Caused in T'_ea ∪ D_case is a conservative extension of the result of circumscribing Caused in T_ea:

  D_una ⊨ Circum(T_ea, Caused) ≡ (∃Case)Circum(T'_ea ∪ D_case, Caused).

Corollary 1.1 Under the unique names assumptions, for any formula φ that does not mention Case,

  Circum(T_ea, Caused) ⊨ φ  iff  Circum(T'_ea ∪ D_case, Caused) ⊨ φ.

Computing Successor State Axioms

Having established the correctness of the above transformation, we now proceed to show how to generate successor state axioms from the resulting axioms. Notice first that the sentence (10) can be rewritten into an axiom of the following form:

  Poss(A(x̄), s) ∧ φ(x̄, p, v, s) ⊃ Caused(p, v, do(A(x̄), s)).

Similarly, we can do the same for axioms of the form (14)-(15). Now from these axioms in T'_ea, we can generate, for each fluent F, two axioms of the following forms:

  Poss(a, s) ∧ γ⁺_F(x̄, a, s) ⊃ Caused(F(x̄), true, do(a, s)),   (18)
  Poss(a, s) ∧ γ⁻_F(x̄, a, s) ⊃ Caused(F(x̄), false, do(a, s)),   (19)

where γ⁺_F and γ⁻_F do not quantify over situation variables, and the only situation dependent atomic formulas in them are either of the form H(t, s) or of the form Case(t1, t2, s). Given these two effect axioms, we generate the following successor state axiom for F:

  Poss(a, s) ⊃ [F(x̄, do(a, s)) ≡ γ⁺_F(x̄, a, s) ∨ (F(x̄, s) ∧ ¬γ⁻_F(x̄, a, s))].   (20)

Now let D_ss be the set of successor state axioms, one for each fluent, so generated. Our claim is that, under some reasonable conditions, D_ss captures all the information about the truth values of the fluents in Circum(T'_ea, Caused) ∪ {1, 2, 4}.

⁵Since P is a predicate constant, strictly speaking, we cannot quantify over it in a formula. However, we can consider (∃P)φ as a shorthand for (∃p)φ', where p is a predicate variable of the same arity as P, and φ' is the result of substituting P in φ by p.
More precisely, we have:

Theorem 2 Under the assumption that the following consistency condition [10] is satisfied for each fluent F:

  D_una ∪ D_ap ∪ D_case ⊨ (∀x̄, a, s).Poss(a, s) ⊃ ¬(γ⁺_F(x̄, a, s) ∧ γ⁻_F(x̄, a, s)),

the theory Σ ∪ D_una ∪ D_ap ∪ {Circum(T'_ea, Caused)} ∪ D_case ∪ {1, 2, 3, 4} is a conservative extension of the theory Σ ∪ D_una ∪ D_ap ∪ D_ss ∪ D_case ∪ {3}.

Corollary 2.1 Under the assumptions in the theorem, for any formula φ that does not mention Caused,

  Σ ∪ D_una ∪ D_ap ∪ {Circum(T'_ea, Caused)} ∪ D_case ∪ {1, 2, 3, 4} ⊨ φ
    iff  Σ ∪ D_una ∪ D_ap ∪ D_ss ∪ D_case ∪ {3} ⊨ φ.

Corollary 2.2 Under the assumptions in the theorem, for any formula φ that does not mention Caused and Case,

  Σ ∪ D_una ∪ D_ap ∪ {Circum(T_ea, Caused)} ∪ {1, 2, 3, 4} ⊨ φ
    iff  Σ ∪ D_una ∪ D_ap ∪ D_ss ∪ D_case ∪ {3} ⊨ φ.

Proof: Apply Theorem 1 and Theorem 2.

Theorem 2 informs us that if we are only concerned with the truth values of fluents, then the original effect axioms as well as the basic axioms about Caused can all be discarded. In particular, this is the case with the projection problem. Technically, the consistency conditions are needed because without these conditions, the successor state axiom (20) may not entail the formula

  Poss(a, s) ⊃ [γ⁻_F(x̄, a, s) ⊃ ¬F(x̄, do(a, s))],

which is a consequence of the effect axiom (19) and the two basic axioms (1) and (2) about causality.

Example 1 Consider again our checkerboard example. We shall consider only the successor state axioms for white and black. The indeterminate effect axiom (13) of drop is translated into:

  Poss(drop, s) ∧ Case(1, drop, s) ⊃ [Caused(white, true, do(drop, s)) ∧ Caused(black, false, do(drop, s))],
  Poss(drop, s) ∧ Case(2, drop, s) ⊃ [Caused(white, false, do(drop, s)) ∧ Caused(black, true, do(drop, s))],
  Poss(drop, s) ∧ Case(3, drop, s) ⊃ [Caused(white, true, do(drop, s)) ∧ Caused(black, true, do(drop, s))].
Together with the original determinate effect axioms, we have:

  Poss(a, s) ∧ [a = drop ∧ (Case(1, drop, s) ∨ Case(3, drop, s))] ⊃ Caused(white, true, do(a, s)),
  Poss(a, s) ∧ [a = pickup ∨ (a = drop ∧ Case(2, drop, s))] ⊃ Caused(white, false, do(a, s)).

Thus we have the following successor state axiom for white:

  Poss(a, s) ⊃ {white(do(a, s)) ≡ [a = drop ∧ (Case(1, drop, s) ∨ Case(3, drop, s))] ∨
                white(s) ∧ ¬[a = pickup ∨ (a = drop ∧ Case(2, drop, s))]}.

A similar successor state axiom can be obtained for black. It can be seen that the consistency conditions are satisfied for both white and black. We shall not go into details regarding the accompanying constraints about Case, but note that for this example, all constraints of the form (17) are logical consequences of the unique names assumptions. So the following is the only nontrivial constraint about Case:

  Case(1, drop, s) ⊕ Case(2, drop, s) ⊕ Case(3, drop, s).

Regression and Some of Its Properties

Once we have a successor state axiom for each fluent, regression becomes syntactic substitution [10]: for any formula φ(s) that does not quantify over situations, and action α, the regression of φ(s) over α, written R(φ(s), α), is the result of replacing in φ(s) every atomic formula of the form F(t̄, s) by Φ_F(t̄, α, s), where

  Poss(a, s) ⊃ [F(x̄, do(a, s)) ≡ Φ_F(x̄, a, s)]

is the successor state axiom for F. The following result is immediate:

Lemma 2 Let D_ss be a set of successor state axioms, one for each fluent. We have:

  D_ss ⊨ (∀s).Poss(α, s) ⊃ [φ(do(α, s)) ≡ R(φ, α)].

In the rest of this section, we assume that we are given an action theory of the form:

  D = Σ ∪ D_una ∪ D_ap ∪ D_ss ∪ D_case ∪ D_S0,

where D_case is a set of Case constraints of the form (16) or of the form (17), and D_S0 is a set of sentences that do not mention any other situation term except S0, and do not mention Poss, Caused, and Case. The other components of D have the usual meaning.
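The successor state axiom for white derived in Example 1 can be checked mechanically. The following Python sketch (our own illustration, not part of the paper; "wait" stands in for any unrelated action) evaluates the right-hand side of the axiom for a given action and Case choice:

```python
def white_after(action, white_now, case=None):
    """Successor value of the fluent white, per the axiom in Example 1:
    white(do(a,s)) iff [a = drop and Case 1 or 3 holds]
                    or white(s) and not [a = pickup or (a = drop and Case 2 holds)]."""
    gamma_plus = action == "drop" and case in (1, 3)      # gamma+ for white
    gamma_minus = action == "pickup" or (action == "drop" and case == 2)  # gamma-
    return gamma_plus or (white_now and not gamma_minus)

# drop under Case 1 or 3 makes white true; Case 2 makes it false;
# pickup always makes it false; any other action leaves white unchanged.
assert white_after("drop", False, case=3)
assert not white_after("drop", True, case=2)
assert not white_after("pickup", True)
assert white_after("wait", True)
```

Note that gamma_plus and gamma_minus are never simultaneously true here, which is exactly the consistency condition of Theorem 2.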
Our main concern is the soundness and completeness of regression for doing temporal projection with respect to the initial database. Our first positive result is about Case independent temporal projections:

Theorem 3 Let φ(s) be a formula that does not quantify over situation variables, does not mention any other situation term except s, and does not mention Poss, Caused, and Case. Let α be an action term. If, under D_una, R(φ, α) is equivalent to a formula that does not mention Case, then

  D ⊨ φ(do(α, S0))  iff  D_S0 ∪ D_una ⊨ Ψ(S0) ∧ R(φ, α)(S0),

where D_ap ⊨ Poss(α, S0) ≡ Ψ(S0), R(φ, α)(S0) is obtained from R(φ, α) by replacing s by S0, and φ(do(α, S0)) is obtained from φ(s) by replacing s by do(α, S0).

Notice that this theorem depends on the particular form the constraints in D_case have: they are about Case only, that is, the result of forgetting it will yield a tautology: (∃Case)D_case ≡ true.

One of the conditions in Theorem 3 is that under the unique names assumptions, R(φ, α) be equivalent to a formula that does not mention Case. This condition holds if the action α's effects on the fluents in φ are definite. Thus Theorem 3 informs us that for reasoning about the determinate effects of actions, the auxiliary predicate Case can be rightly ignored.

When either φ(s) or its regression mentions Case, we need to include the constraints on Case:

Theorem 4 Let φ(s) be a formula that does not quantify over situation variables, and does not mention Poss and Caused. Let α be an action term. If D_case does not mention H, then

  D ⊨ φ(do(α, S0))  iff  D_S0 ∪ D_una ∪ D_case ⊨ Ψ(S0) ∧ R(φ, α)(S0).

Given the forms (16) and (17) the constraints in D_case must take, D_case does not mention H if all the indeterminate effects of actions are context free. This condition is needed because although D_case itself contains no information about H, it can yield such information when used together with some assumptions about Case, which can be easily incorporated into the query φ(s).
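Regression as syntactic substitution can be sketched in a few lines of Python (an illustration of the idea only, not the paper's implementation): each fluent atom in the query is replaced by the right-hand side Φ_F of its successor state axiom, and Lemma 2 says the regressed formula evaluated in s agrees with the query evaluated in do(a, s).

```python
# Formulas are nested tuples ("and"/"or"/"not", ...) over fluent-name
# strings; "TRUE"/"FALSE" are constants.

def regress(formula, ssa):
    """Purely syntactic substitution: replace each fluent atom by Phi_F
    from its successor state axiom (ssa: fluent name -> formula)."""
    if isinstance(formula, str):
        return ssa.get(formula, formula)   # constants pass through
    op, *args = formula
    return (op, *[regress(a, ssa) for a in args])

def holds(formula, state):
    """Evaluate a formula in a state (dict fluent -> bool)."""
    if formula == "TRUE":
        return True
    if formula == "FALSE":
        return False
    if isinstance(formula, str):
        return state[formula]
    op, *args = formula
    if op == "not":
        return not holds(args[0], state)
    vals = [holds(a, state) for a in args]
    return all(vals) if op == "and" else any(vals)

# drop when Case 3 holds: both white and black are caused true,
# so Phi_white = Phi_black = TRUE for this action instance.
ssa_drop_case3 = {"white": "TRUE", "black": "TRUE"}
query = ("and", "white", "black")                 # phi, to hold in do(drop, s)
regressed = regress(query, ssa_drop_case3)        # R(phi, drop)
print(holds(regressed, {"white": False, "black": False}))
```

The regressed query holds in every initial state, matching the fact that drop under Case 3 makes both fluents true regardless of the starting situation.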
Finally, notice that Theorem 3 and Theorem 4 can be generalized to temporal projections with sequences of actions.

The Ramification Problem

This section discusses how to represent indirect effects of actions in our framework. Example: whenever white and black are both true, happy will be true as well:

  white(s) ∧ black(s) ⊃ Caused(happy, true, s).   (21)

Adding this axiom will make happy a possible indirect effect of the action drop. Due to space limitations, we omit the details, which can be found in the online version of this paper at http://www.cs.toronto.edu/~cogrobo.

Related Work and Discussions

Epistemologically, we have shown how the causal minimization framework of [5] can be used to specify the indeterminate effects of actions. Computationally, we have shown how goal regression can be used to reason about them.

There have been other proposals in the literature (e.g. [1, 2, 3, 12]) for specifying the effects of indeterminate actions. To the best of our knowledge, the computational contribution of this work is novel. Among the extant approaches, the ones in [3] and [1] seem closest to ours. As we mentioned earlier, the releases propositions of [3]: A releases F corresponds to the following axiom in our language:

  Poss(A, s) ⊃ Caused(F, true, do(A, s)) ∨ Caused(F, false, do(A, s)).

Regarding the work of [1], the In(F) and Out(F) predicates there correspond to Caused(F, true, do(a, s)) and Caused(F, false, do(a, s)), respectively, in our language. However, the formalism of [3] is limited because no complex releases propositions are allowed. For instance, one cannot write expressions like

  (∀a).(a releases F ↔ a releases F').

The formalism of [1] is also limited because the action parameters of its In and Out predicates are not made explicit, and thus cannot be quantified over.

Finally, we want to remark on the auxiliary predicate Case. In this paper, we have used it entirely for computational purposes. However, there are some interesting possible interpretations of this predicate. There is a sense in which Case can be interpreted in probabilistic terms. For instance, if

  Poss(drop, s) ∧ Case(1, drop, s) ⊃ Caused(white, true, do(drop, s)) ∧ Caused(black, false, do(drop, s)),

then Case(1, drop, s) may stand for the probability of the pin lying entirely within a white square after it has been dropped. Under this interpretation, the first constraint (16) on Case, in this example the following one:

  Case(1, drop, s) ⊕ Case(2, drop, s) ⊕ Case(3, drop, s),

says that the explicitly enumerated possible outcomes are both exclusive and exhaustive, and the constraints (17) simply eliminate redundant outcomes. In this regard, it would be interesting to formally connect our approach to probabilistic ones. This is future research that we are pursuing.

Another possible interpretation of Case is based on the view that, in principle, it is always possible to reduce indeterminate actions to determinate ones, and one way of doing this is to introduce new fluents to name those low level contexts under which the effects of actions will be determinate. According to this view, Case can be seen as playing the role of such new fluents. For instance, Case(1, drop, s) may name the context under which drop has the effect of causing the pin to lie entirely within a white square. We are currently exploring the possible impact of this interpretation as well.

Acknowledgements

Thanks to the other members of the University of Toronto Cognitive Robotics group (Yves Lesperance, Hector Levesque, Daniel Marcu, and Ray Reiter), to Vladimir Lifschitz, and to Yan Zhang for helpful discussions and comments. This research was supported by grants from the Government of Canada Institute for Robotics and Intelligent Systems, and from the Natural Sciences and Engineering Research Council of Canada.

References

[1] C. Baral. Reasoning about actions: nondeterministic effects, constraints, and qualification. In Proc. of IJCAI-95, pp. 2017-2023.

[2] C. Boutilier and N. Friedman. Nondeterministic actions and the frame problem. In Working Notes of the AAAI Spring Symposium on Extending Theories of Action, pp. 39-44, 1995.

[3] G. N. Kartha and V. Lifschitz. Actions with indirect effects (preliminary report). In Proc. of KR'94, pp. 341-350.

[4] V. Lifschitz. Computing circumscription. In Proc. of IJCAI-85, pp. 121-127.

[5] F. Lin. Embracing causality in specifying the indirect effects of actions. In Proc. of IJCAI-95, pp. 1985-1993.

[6] F. Lin and R. Reiter. State constraints revisited. J. of Logic and Computation, 4(5):655-678, 1994.

[7] F. Lin and R. Reiter. Forget it! In R. Greiner and D. Subramanian, editors, Working Notes of the AAAI Fall Symposium on Relevance, pp. 154-159, 1994.

[8] J. McCarthy and P. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pp. 463-502. Edinburgh University Press, Edinburgh, 1969.

[9] E. P. Pednault. Synthesizing plans that contain actions with context-dependent effects. Computational Intelligence, 4:356-372, 1988.

[10] R. Reiter. The frame problem in the situation calculus: a simple solution (sometimes) and a completeness result for goal regression. In V. Lifschitz, editor, Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy, pp. 418-420. Academic Press, San Diego, CA, 1991.

[11] R. Waldinger. Achieving several goals simultaneously. In E. Elcock and D. Michie, editors, Machine Intelligence, pp. 94-136. Ellis Horwood, Edinburgh, Scotland, 1977.

[12] Y. Zhang. Reasoning About Persistence: A Unified Principle for State Change. PhD thesis, Department of Computer Science, Sydney University, Sydney, Australia, 1994.
Improving Case Retrieval by Remembering Questions*

Richard Alterman and Daniel Griffin
Computer Science Department
Brandeis University
Waltham, MA 02254
{alterman,dang}@cs.brandeis.edu

Abstract

This paper discusses techniques that improve the performance of a case retrieval system, after it is deployed, as a result of the continued usage of the system, by remembering previous episodes of question answering. The user generates a request for information and the system responds with the retrieval of relevant case(s). A history of such transactional behavior over a given set of data is maintained by the system and used as a foundation for adapting its future retrieval behavior. With each transaction, the system acquires information about the usage of the system that is subsequently used to adjust the behavior of the system. This notion of a case retrieval system draws on a distinction between the system in isolation and the system as it is used for a particular set of cases. It also draws on distinctions between the designed system, the deployed system, and the system that emerges as it is used.

Introduction

In this paper we will develop the notion of keeping a history of question/answer transactions as a basis for system adaptation as it applies to the construction and usage of a case-based reasoning (CBR) system. Given an existing CBR retriever, the approach we describe is to augment the system with a module that remembers all the questions that were previously asked and the answers/cases that were generated in response, and uses that information to improve the performance of the retriever -- we will refer to such a retriever as a use-adapted case retriever. Through continued usage, the use-adapted system gradually acquires skill at answering (with cases) the questions that are most frequently asked. Part of the skill is recognizing that a question has been asked and answered before.
Another part is knowing how to fill in missing details for a given question in order to make sense of it. A third part is knowing which answers are inappropriate for a given question. We will show that remembering questions enhances the potential of system performance by allowing the system to adapt its behavior. Over time the system gradually begins to reason about future questions not only based on the content of the cases, but also based on the previous questions and their associations to cases or other questions.

*This work was supported in part by DEC (Contract 1717). Additional support for this work came from NSF (ISI-9634102).

678 Learning

Why A Use-Adapted Case Retriever?

In order to build the retrieval system several decisions must be made. Each of these decisions affects the overall performance of the retriever. Data must be collected. The system developer must confront the following issues: 1) deciding what constitutes a case, 2) organizing each 'case' using some uniform syntactic structure, 3) identifying a vocabulary composed of keywords, terms, and expressions, and 4) indexing and clustering the cases. Both automatic and by-hand approaches can be taken to selecting an indexing vocabulary for a given case base. Several vocabularies have been touted for specific kinds of domains (see, e.g., Hammond, 1990; Leake, 1991); others have proposed a standard indexing vocabulary that could be used to index cases across a wide selection of domains (Schank et al., 1990); and still others suggest that features can be ranked or refined dynamically (Rissland & Ashley, 1986; Fox & Leake, 1995). After an indexing vocabulary has been established, additional work must be done in identifying synonyms. Next, each of the cases must be indexed; indexing can be done either by hand or automatically using some clustering algorithm. Finally, an interface for retrieval must be developed.
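As a toy illustration of steps 3 and 4 above (a sketch of our own, not the authors' system), a keyword vocabulary can be used to index cases via an inverted file from keywords to case ids:

```python
def build_index(cases, vocabulary):
    """Map each vocabulary keyword to the ids of the cases mentioning it.
    cases: dict id -> free-text description; vocabulary: set of keywords."""
    index = {}
    for cid, text in cases.items():
        words = set(text.lower().split())
        for kw in vocabulary:
            if kw in words:
                index.setdefault(kw, set()).add(cid)
    return index

# Hypothetical help-desk-style cases, in the spirit of the bug reports
# described later in the paper.
cases = {1: "printer driver crash on startup",
         2: "network timeout during login",
         3: "printer queue stalls after timeout"}
vocabulary = {"printer", "crash", "timeout", "login"}
index = build_index(cases, vocabulary)
print(sorted(index["printer"]))
```

A clustering algorithm such as ID3 (used by the authors below) would then organize the cases over these keyword features.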
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

An underlying assumption of this standard process is that there exists a 'right' index for a given case; an alternate possibility, and one implicit in the model we present, is that the index for a given case is dependent on the interaction of the end-user and the retrieval system. Finding a relevant case depends not only on the data but also on issues like how the interface presents access to the data, how the question was asked, who asked the question, and for what application. These kinds of issues are contingent on the ongoing interaction between the end-user and the system, and it takes experience and skill to be able to identify the nature of the question and its application. These are things that can only be learned after the system has begun to be used.

In general, in order to build a useful retrieval system, the system-builder must cycle through the process of building a retriever, deploying it, and then trying again. The problem is that it is not at all clear how each change to an indexing vocabulary and resulting classification affects the overall performance of the system. Thus, in many cases, if things are not working, it is very difficult to decide what needs to be fixed. If you change the indexing vocabulary and then reclassify the cases, it is difficult to tell whether you have fixed one problem only by introducing another. (These problems are compounded by the fact that the initial retriever might not be effective at all.) Although this approach allows for some field testing, the vocabulary for indexing and the actual indexing of cases is done mostly independent of any extensive usage of the retriever. In any case, by the time the retriever is deployed, and in use by the end-user, the retriever has pretty much settled into a level of performance.
The goal of the use-adapted retriever, seeded by the initial efforts of the system builder, is to continue to improve the performance of the retriever by building a useful set of correspondences between external user descriptions and the cases in the case base. Through the usage of a case retriever, the system refines and improves on earlier versions of the retriever. It replaces a process of development that can tend to lack direction by one that is automatically guided by the successes and failures of the retrieval system. Finding a better way of indexing a given case, learning synonymous expressions, and identifying the questions most frequently asked are all things that the retrieval system can learn after it has begun to be used.

Outside the research lab

Outside of the research lab, many of the issues involved in building a case retriever become even more complicated.

o Data may originate from existing data collections; a problem is that the structure and/or meaning of the data may not be clear or easily accessible. The system developers must either work with the data in the form it was collected, or spend time converting the data to some other format which makes it easier to use.

o Even when the data is collected with CBR in mind, cases may have been gathered by different people, resulting in some non-uniformity in the vocabulary used to describe each case. Although it would be nice to establish a uniform vocabulary in advance, for many reasons, including lack of knowledge about the cases, it is often not possible to do so. Thus a fair amount of time can be spent identifying a useful vocabulary and re-encoding cases for the purposes of establishing some uniformity, which is a necessary condition for a successful clustering and retrieval.

Figure 1: A use-adapted case-based reasoner.
o The same data might be used for two different applications, with each application emphasizing different vocabularies and clusterings. Similar problems can exist for situations where you have the same application, but two different end-users of the system.

Many of these problems derive from the fact that the data collector, the system builder, and the end-user may all be different groups of people, with different skills, time constraints, and levels of expertise. By at least in part automating the process of deciding and re-deciding how to get the case retriever to work, a use-adapted case retriever allows some of the decisions regarding working out the details of the translation to be down-loaded to the period after the system is deployed and in use. At some point one can get out of the cycle of building the retriever and start using it. It is the task of the use-adapted retriever to work out correspondences between queries and cases. Over time, difficulties presented by issues like the non-uniformity of vocabulary, or the biases of different applications, can be gradually improved on by the strategies of the evolving use-adapted system.

Use-Adapted Case-Base Retriever

Our interest is in the pragmatics of how the user accomplishes the task of case retrieval given a particular system and a particular set of cases. Our idea is to look at the particulars of usage of the system as they are represented as episodes of user-system retrieval. The basic idea is depicted in Figure 1. There exists as data a set of cases. A system/theory is built for retrieving cases that are relevant to a given request for information. The usage of the system is characterized as a history of transactions between users of the system and the system over a given set of cases, over an extended period of time. In this paper, we consider techniques that exploit the history of one sort of transaction between end-user and system: question and answer.
The user generates a request for information and the system responds with the retrieval of relevant case(s). Given an existing CBR retriever, the approach we describe is to augment the system with a module that remembers all the questions that were previously asked and the answers/cases that were generated in response, and uses that information to improve the performance of the retriever. Each time the end-user asks a question of the system and sifts through the list of cases that are recalled until an appropriate case(s) is found, the system retains a record in the form of a set of transactions between the end-user and the retrieval system, marked for the relevance of the retrieved item to the question, i.e.,

(associate <question> <ID of case> <relevance-of-case-to-question>)

Sometimes the association that is retained is not only between a question and a case, but also between two questions. Attaching relevance markers to a given transaction has a relatively low cost for the end-user. During the normal business of retrieving cases, the user must step through the list of retrieved cases and evaluate them. Marking a case that is being looked at as relevant (+), not relevant (-), or neutral (0) adds little additional cost for the end-user, yet it is likely to provide improvement in the retrieval system. Even in the case that the end-user is unwilling to evaluate all the cases on the retrieval list for a given question, the system will continue to improve, but at a more gradual pace.

Case-Based Reasoning 679

Figure 2: Architecture of use-adapted retriever.

Figure 2 shows the basic architecture of a use-adapted case retriever. Interposed between the end-user and the system is a retrieval agent that keeps a history of the interaction between the end-user(s) and the system and uses that information to reason about the retrieval process.
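The associate records above can be sketched in a few lines of Python (our own illustration; the class and method names are ours, and exact-match recognition stands in for whatever similarity measure a real question-base would use):

```python
class QuestionBase:
    """Remembers (question, case-id, relevance) transactions, where
    relevance is '+', '-', or '0', mirroring the associate records."""
    def __init__(self):
        self.transactions = []   # (frozenset of keywords, case id, mark)

    def associate(self, question_keywords, case_id, mark):
        assert mark in ("+", "-", "0")
        self.transactions.append((frozenset(question_keywords), case_id, mark))

    def recognize(self, question_keywords):
        """Return the positively marked cases of matching previous
        questions (here simplified to exact keyword-set match)."""
        q = frozenset(question_keywords)
        return {cid for keys, cid, mark in self.transactions
                if keys == q and mark == "+"}

qb = QuestionBase()
qb.associate({"printer", "crash"}, case_id=17, mark="+")
qb.associate({"printer", "crash"}, case_id=23, mark="-")
print(qb.recognize({"crash", "printer"}))   # the '+' case only
```

The negatively marked transactions are not wasted: the pruning technique described later uses them to filter retrieval sets.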
Thus the system has two case-bases: one for domain knowledge and a second for the interaction between the user and the system. Relevance links represent the positive and negative associations between pairs of questions and cases.

Basic Case Retrieval System

The first step in our study was to build a base help-desk system that retrieved relevant bug reports. In these experiments, we used a case-base of 293 cases. Each case was a diagnosis of a bug in a release of software, consisting of mostly unstructured pieces of text, and was entered by one of a number of different people. We identified a list of about 348 keywords used in the description of cases. We used the ID3 algorithm (Quinlan 1986) to build a clustering of the cases over the keywords.

The usage of the base retrieval system can be described as follows. A user specifies a question or information need from which a list of keywords is extracted by the system. These keywords are collected in a query that will be referred to as the user-query. The case-base is organized as a binary ID3 tree with nodes separating cases by features (does the case have feature X? yes or no). The retriever traverses the tree using the user-supplied keywords to decide which branch to take at a node. In the situation when an impasse occurs, i.e., when a node in the tree specifies a feature Y not supplied by the user, the system moves into an interactive mode where the user is allowed to say:

1. Take the branch with cases that have feature Y
2. Take the branch with cases that don't have feature Y
3. Take both branches ("I don't know" option)

The user's decisions are recorded in the interactive portion of the retrieval, and along with the initial user-query will be referred to as the user-expanded query. This process is repeated until cases are reached at the leaf nodes.
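The tree traversal just described, including the "take both branches" option at an impasse, might look like the following Python sketch (our own; the tree and case names are invented for illustration, not taken from the authors' 293-case help-desk data):

```python
# A node is either a list of cases (leaf) or a tuple
# (feature, yes_subtree, no_subtree).

def retrieve(node, keywords, ask_user):
    """Collect cases by walking the ID3 tree with the user's keywords.
    ask_user(feature) returns 'yes', 'no', or 'both' at an impasse."""
    if isinstance(node, list):                 # leaf: a bucket of cases
        return list(node)
    feature, yes_br, no_br = node
    if feature in keywords:                    # keyword supplied: take yes-branch
        return retrieve(yes_br, keywords, ask_user)
    answer = ask_user(feature)                 # impasse: feature unspecified
    if answer == "yes":
        return retrieve(yes_br, keywords, ask_user)
    if answer == "no":
        return retrieve(no_br, keywords, ask_user)
    # "I don't know": take both branches
    return (retrieve(yes_br, keywords, ask_user) +
            retrieve(no_br, keywords, ask_user))

tree = ("crash", ("printer", ["case-1"], ["case-2"]), ["case-3"])
# The user mentioned "crash" but not "printer" and answers "both"
# at the impasse, so both leaves under the crash branch are returned.
print(retrieve(tree, {"crash"}, lambda feature: "both"))
```

Note how the "both" option trades precision for recall: the retrieval list grows, which is exactly the cost the use-adapted techniques below try to reduce.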
After retrieval is complete, the user steps through the cases marking each case as relevant (+), not relevant (-), or neutral (0) to the user's question.

Use-Adapted Retrieval Techniques

Recognizing Questions

Our initial idea was that over time the system would become more effective at answering questions, because it would begin to recognize the most frequently asked questions and gradually develop expertise at responding to those questions. Keeping a question-base allows the retrieval system to build up expertise about the cases most regularly asked about. Questions are 'recognized' by retrieving from the question-base using the user-query (see Figure 3). If the question is not 'recognized' as being similar to any previous questions, or if the similar previous questions had no positive associations, then the clustering based on the content of the cases is used to retrieve cases. Notice that our use of the question-base does not preclude the system from continuing to use the content of the cases as a basis for retrieval. This is especially important early on, when there are not many previous question-answering episodes available; it is also important on the occasions when the system is evolving due to the development of new applications, interfaces, or kinds of users.

Given a new user query q_new, let QUERY-RS be the set of queries retrieved from the query base, and let CASE-RS be the empty set.
1. For each query q_j in QUERY-RS:
   (a) If q_j produced positively evaluated cases, then include them in CASE-RS.
2. If CASE-RS is empty, do normal case-base retrieval using q_new.
3. Otherwise return CASE-RS.

Figure 3: TECHNIQUE 1: Recognize a question.

Automatic Question Expansion

Another way that the system can exploit positive associations between previously asked questions and cases
The basic idea is to retrieve from the question-base a selection of similar questions and poll the positive cases attached to each retrieved question for values to expand the query (see Figure 4).

Let qnew be the user's initial query. When guessing at a decision point for feature X:
1. Retrieve queries from the question base using qnew.
2. From these queries, collect the positively associated cases into CASE-RS.
3. Use each case c ∈ CASE-RS to vote on the value for feature X.
4. If over a certain threshold of the cases vote for a certain value, then guess that value. Otherwise, default to the choice made by the user.

Figure 4: TECHNIQUE 2: Expand a question.

Pruning

Suppose that a case retriever selects a set of cases as relevant to a given question. Figure 5 shows a technique that takes advantage of negative associations between questions and answers. After the initial list of cases is retrieved, each case is evaluated using the list of previous queries associated with it. If the current query matches only the negatively associated queries attached to a retrieved case, then the case is deleted from the retrieval set. Using previous questions to delete cases can potentially improve the performance of the system by reducing the number of non-relevant cases retrieved; the system learns to recognize incorrect responses to a question.

Given a new query qnew and a set CASE-RS of retrieved cases:
1. For each case ci ∈ CASE-RS, let Qi be the set of queries that have retrieved ci in the past.
   (a) If Qi is the empty set, do nothing.
   (b) If only negative previous queries match qnew, delete ci from CASE-RS.
   (c) Else, do nothing.

Figure 5: TECHNIQUE 3: Pruning cases.

Evaluation

Seven subjects were enlisted to write queries. All of the subjects wrote queries for seven of the same target cases. Each of the subjects also wrote queries for seven unique cases. Finally, the subjects were divided into three groups (two of the groups were of size 2 and the
third group was of size 3), and each group wrote queries targeted for 7 cases unique to that group. Thus, in total, each subject wrote queries for 21 cases. After the test subjects finished entering their queries, we had 239 queries. This number is larger than the expected 21 per user because the users were encouraged to try several times if a retrieval was not successful. Of the 239 queries, 132 were used as a training set and the remaining 107 as the test set. The test set was divided into 2 sub-groups: those questions that had no relevant history (numbering 64) and those that did (numbering 43).

We used two standard metrics for evaluating the effectiveness of the techniques: precision and recall. For a given retrieval, let

a = the number of relevant cases retrieved
b = the number of relevant cases in the case-base
c = the number of cases in the retrieval set

then precision = a/c and recall = a/b. For the test data we present, each query has a single target case. Thus, a high percentage for precision directly translates into a short retrieval list (and less work for the user in evaluating the retrieved cases). Also, since the recall for a given retrieval is either 0 or 1, the average recall number tells you for what percentage of the test queries the target case was retrieved. We discuss tests on queries with and without history separately.

The numbers we present for precision and recall are averages. A precision score of 66.42 should be read as "on average, 66.42% of the list of retrieved cases was relevant to the question." A recall score of 72.09 should be read as "on average, 72.09% of the test queries retrieved the targeted case." We used paired comparison tests with a 95% confidence interval to verify the significance of our results.

Testing Technique 1: Recognizing Questions

We looked at two versions of the recognition technique.
We use the term short form to indicate the question as it was initially posed by the user; the term long form refers to the query as it was interactively expanded by the user.

Comparing the basic retrieval mechanism (with interactive expansion) to recognition retrieval (short form) gives the following average results for precision and recall on the questions with a history:

Technique                            Precision   Recall
Case-Base (interactive expansion)      15.59      58.14
Recognition (short form)               66.42      72.09

The results show that a dramatic improvement in precision was achieved, as well as a significant increase in recall. Moreover, these numbers are achieved without the user having to do the extra work of interactively expanding the query. When the long form of the query is stored in the question base, the results of the comparison are:

Technique                            Precision   Recall
Case-Base (interactive expansion)      15.59      58.14
Recognition (long form)                49.96      69.77

This is still an improvement over case retrieval with interactive expansion, but the results are not as impressive as those for recognition using the short form of the query. A possible explanation is that the user is either making "bad guesses" or "guessing both branches" (or both), and these decisions are recorded in the long form of the query. The variability introduced by user guesswork would thus make initially similar questions dissimilar.

Testing Technique 2: Automatic Expansion

For basic retrieval with interactive expansion, we found that on average the user interactively added 18.7 features per question. This translates into extra work for the end user. In this part of the analysis we tested the automatic expansion technique as a possible substitute for some interactive expansion. With automatic expansion, we found that the user was on average interactively adding only 4.14 features for the questions that had a history, a clear reduction in work for the user.
When we compared automatic query expansion to interactive user expansion, we obtained the following results:

Technique                            Precision   Recall
Case-Base (interactive expansion)      15.59      58.14
Automatic Expansion                    40.61      69.77

Automatic expansion does significantly better than user expansion on both precision and recall. These results suggest that over time the user can forgo some of the effort of interactively expanding the query, since the system will be able to guess many of the features for those questions that have a history.

Testing Technique 3: Pruning

Whereas both recognition and automatic expansion take advantage of the positive associations between old questions and cases, pruning is a technique that exploits the negative associations between questions and cases. The results of pruning the retrievals produced by each of the techniques discussed are shown below. Since no significant changes were seen in the recall scores, we show only the precision scores:

Technique                            Precision   Pruned Precision
Case-Base (interactive expansion)      15.59        37.33
Recognition (short form)               66.42        66.80
Recognition (long form)                49.96        53.23
Automatic Expansion                    40.61        64.48

Our results show that pruning improves precision by removing irrelevant cases from the retrieval list.

Tests on Queries with No Relevant History

We also tested whether performance on the questions that had no relevant history would degrade as a result of using the history-based techniques. For the 64 test questions that had no relevant history, there was no significant decrease in performance for any of the techniques that used the history of the user's interaction with the system. Initially we had hoped that either automatic expansion or pruning would improve system performance for questions with no previous history; in neither case was there significant improvement.
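The expansion and pruning procedures evaluated above (Figures 4 and 5), together with the precision and recall metrics, might be sketched as follows. All function names and data shapes here are illustrative assumptions, not the system's actual code:

```python
def expand_question(q_new, query_base, retrieve_queries, feature,
                    user_choice, threshold=0.5):
    """TECHNIQUE 2 (Figure 4): guess a value for an unspecified feature.

    Positively associated cases of similar past queries vote on the value
    of `feature`; below `threshold` agreement, defer to the user's choice.
    """
    votes = {}
    for q in retrieve_queries(q_new, query_base):
        for case in query_base.get(q, ()):            # positive cases only
            value = case.get(feature)
            if value is not None:
                votes[value] = votes.get(value, 0) + 1
    total = sum(votes.values())
    if total:
        value, count = max(votes.items(), key=lambda kv: kv[1])
        if count / total >= threshold:
            return value
    return user_choice()                              # fall back to the user

def prune_cases(q_new, case_rs, history, matches):
    """TECHNIQUE 3 (Figure 5): drop cases whose only matching history with
    q_new is negative. history[c] holds (query, was_positive) pairs."""
    kept = []
    for c in case_rs:
        matching = [pos for (q, pos) in history.get(c, ()) if matches(q, q_new)]
        if matching and not any(matching):    # only negative queries match
            continue                          # delete c from the retrieval set
        kept.append(c)                        # no history, or a positive match
    return kept

def precision_recall(retrieved, relevant):
    """precision = a/c, recall = a/b, where a = relevant cases retrieved."""
    a = len(set(retrieved) & set(relevant))
    return a / len(retrieved), a / len(relevant)
```

With a single target case per query, as in the evaluation above, `precision_recall` returns a recall of either 0 or 1 for each retrieval.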
Related Work

Relation of Model to CBR Literature

Others have explored the role of 'unanswered' questions in directing learning (Ram & Hunter, 1992) or reading (Ram, 1991; Carpenter & Alterman, 1994). Our interest here is in retaining and using a memory of previously 'answered' questions. The notion of remembering answered questions is different from other approaches to learning that have been applied to case retrieval. Whereas incremental clusterers, e.g. CYRUS (Kolodner, 1983), UNIMEM (Lebowitz, 1987), and COBWEB (Fisher, 1987), update and re-organize a case-base with the addition of each new case by comparing cases, the model we propose attempts to improve the retrieval performance of the system on cases already existing in the case-base by acquiring information about the usage of those cases. There is a difference between updating a system by adding new cases and updating a system by using it on the cases that already exist in the case-base. The former approach is geared towards updating the clustering of cases by comparing existing cases to new cases as they are acquired. The latter, our approach, improves the quality of retrieval by accumulating information about the usage of a given (existing) case. With our approach the system will continue to improve retrieval performance even after the set of cases in the case-base has stabilized.

In contrast to systems that attempt to learn a best index for a case by refining the index of a case based on retrieval performance (e.g. Rissland and Ashley, 1986; Veloso and Carbonell, 1993; Fox and Leake, 1995), our approach is to learn the multiple kinds and instances of questions that positively or negatively retrieve a case.

Yet another view of case retrieval has been that cases have "right indices". The view that cases have "right indices" is inherent to the various vocabularies that have been touted for specific kinds of domains (see e.g. Hammond, 1990; Leake, 1991) or specific indexing schemes that are claimed to apply to a wide selection of domains (Schank et al., 1990). Systems that do indexing based on a clustering of the cases can also end up with a single index for a given case. By separating questions from cases, the case retriever can view cases from the perspective of their usage: in principle, each question for which the case is relevant is potentially another index for the case.

Other Work

Others have also considered the idea of building retrieval mechanisms that self-customize to a given data set. In Carpenter & Alterman (1994) a reading agent, SPRITe, is described that automatically builds customized reading plans for returning information from specific sets of instructions. In Armstrong et al. (1995), WebWatcher is described. WebWatcher is an agent for assisting users in navigating the World Wide Web, and it can be "attached to any web page for which a specialized search assistant would be useful." Both of these efforts share our intuition that retrieval performance can be tied to the characteristics of particular data sets: for SPRITe the data sets are texts, and for WebWatcher, a web page.

In IR, there has also been work on query expansion using relevance feedback provided by the user (Salton & Buckley, 1990). Our work on query expansion offers a history-based approach to expanding a query. A major advantage of our approach is that the work of the end user is greatly reduced because expansion occurs automatically. Another important issue in IR is deciphering the intended meaning of the query (Sparck Jones, 1992). This research explicitly addresses that issue, as automatic expansion is an example of a technique that can decipher the meaning of a newly posed question.

Although there is room for technical transfer between IR and CBR, it is also important to differentiate them (see Rissland & Daniels, 1995). One obvious difference is that CBR is concerned not only with retrieval but also with questions of adaptation. Thus in CBR planning any relevant case may suffice (because it can be adapted), but for IR and document retrieval all relevant cases may need to be retrieved. Another important, and more pertinent, difference concerns the size of the data set. IR technology was developed to work with document files numbering in the tens of thousands, but many CBR applications need work with only a few hundred cases. Technology that is developed for working with huge numbers of documents will not necessarily scale down as the best approach to working with case libraries numbering in the hundreds, where, over time, effective retrieval may exhibit the characteristics of a skill-acquisition problem. It is not clear that techniques developed in IR that work for constantly changing and growing data sets (e.g., the Associated Press wire service) are likely to be the ideal method for smaller and more stable data sets.

Summary Remark

In this paper we have shown the efficacy of a use-adapted case retriever. We have presented several techniques and experimentally demonstrated their effectiveness in improving system performance. In each case, these techniques exploited the historic interaction between user and system over a given set of data (not an independent analysis of either the user or the data).

References

Armstrong, R.; Freitag, D.; Joachims, T.; and Mitchell, T. 1995. WebWatcher: A learning apprentice for the World Wide Web. In Proceedings of the 1995 AAAI Spring Symposium on Information Gathering from Heterogeneous, Distributed Environments.

Carpenter, T., and Alterman, R. 1994. A reading agent. In Proceedings of the Twelfth National Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press.

Fisher, D. H. 1987. Knowledge acquisition via incremental conceptual clustering. Machine Learning 2:139-172.

Fox, S., and Leake, D. B. 1995. Using introspective reasoning to refine indexing. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 391-397.

Hammond, K. J. 1990. Case-based planning: A framework for planning from experience. Cognitive Science 14:385-443.

Kolodner, J. L. 1983. Reconstructive memory: A computer model. Cognitive Science 7:281-328.

Leake, D. B. 1991. An indexing vocabulary for case-based explanation. In Dean, T., and McKeown, K., eds., Proceedings of the Ninth National Conference on Artificial Intelligence. AAAI.

Lebowitz, M. 1987. Experiments with incremental concept formation: UNIMEM. Machine Learning 2:103-138.

Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1:81-106.

Ram, A., and Hunter, L. 1992. The use of explicit goals for knowledge to guide inference and learning. Applied Intelligence 2:47-73.

Ram, A. 1991. A theory of questions and question asking. Journal of the Learning Sciences 1:273-318.

Rissland, E., and Ashley, K. 1986. Hypotheticals as heuristic device. In Proceedings of the Fifth National Conference on Artificial Intelligence.

Rissland, E. L., and Daniels, J. J. 1995. Using CBR to drive IR. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 400-407.

Salton, G., and Buckley, C. 1990. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science 41(4):288-297.

Schank, R. 1990. Toward a general content theory of indices. In Proceedings, AAAI Spring Symposium, 36-40.

Sparck Jones, K. 1992. Assumptions and issues in text-based retrieval. In Jacobs, P., ed., Text-Based Intelligent Systems. Hillsdale, NJ: Lawrence Erlbaum Associates. 157-177.

Veloso, M., and Carbonell, J. 1993. Derivational analogy in PRODIGY: Automating case acquisition, storage, and utilization. Machine Learning 10:249-278.
Acquiring Case Adaptation Knowledge: A Hybrid Approach

David B. Leake, Andrew Kinley, and
Computer Science Department
Lindley Hall 215, Indiana University
Bloomington, IN 47405

Abstract

The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation, and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance.

Introduction

Case adaptation plays a fundamental role in the flexibility of case-based reasoning systems. The ability of CBR systems to solve novel problems depends on retrieving relevant prior solutions and adapting them to fit new circumstances. Considerable domain knowledge may be needed to guide this adaptation process (e.g., Kolodner, 1993), and the need for such knowledge in turn raises the difficult question of how that knowledge should be acquired and applied. Most CBR systems depend on a static library of built-in adaptation rules that are applied by rule-based production systems. Unfortunately, because CBR is often applied to domains that are poorly understood or difficult to codify, developing adaptation rules is particularly difficult. The problem is so acute that experts in CBR research and applications agree that it is not currently practical to deploy CBR applications with automatic adaptation.
Consequently, new methods are needed for acquiring case adaptation knowledge.

We describe research on an approach for facilitating acquisition of useful adaptation knowledge. In our approach, a CBR system begins with a small set of abstract transformation rules and memory search methods. When presented with a new adaptation problem, it first selects a transformation to apply. It then performs memory search to find the information needed to operationalize the transformation rule and apply it to the problem at hand (e.g., given a substitution transformation, finding what to substitute). The system improves its adaptation capabilities by case-based reasoning applied to the case adaptation process itself: a trace of the steps in solving an adaptation problem is saved to be reused when similar adaptation problems arise in the future (Leake 1995). In this way, a CBR system doing adaptation can acquire specific adaptation knowledge by using "weak methods" for adaptation when no specific knowledge is available.

When autonomous adaptation is unsuccessful, this framework can also be used as a basis for interactive acquisition of adaptation cases from a human user. We are developing an interface that allows a user to guide transformation selection and memory search for a particular adaptation problem. A trace of the user's adaptation process is then added to the adaptation case library for future use.

We begin by sketching our testbed system's architecture. We next summarize the adaptation process and the knowledge sources it uses. We discuss preliminary results concerning the relationship of different types of memory search strategies, adaptation learning, and case learning, as well as the relationship of our method to previous approaches. We conclude by highlighting strengths of our approach and questions for further study.
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

DIAL System Overview

We are developing our model of adaptation learning in the context of a case-based planner in the domain of disaster response planning. Disaster response planning is the initial strategic planning used to determine how to assess damage, evacuate victims, etc., in response to natural and man-made disasters such as earthquakes and chemical spills. Accounts of human disaster response planning suggest that case-based reasoning is important in response planning by human disaster planners (Rosenthal, Charles, & Hart 1989).

Our testbed system, DIAL, processes conceptual representations of a news story describing the initial events in a disaster, and proposes a response plan by retrieving and adapting the response plan for a similar prior disaster. DIAL includes a schema-based story understander, a response plan retriever and instantiator, a simple evaluator for candidate response plans, and an adaptation component to adapt plans when problems are found. The system's case-based planning framework is based in a straightforward way on previous case-based planners such as CHEF (Hammond 1989). Consequently, we will not discuss DIAL's planning process per se, but instead will focus entirely on the system's case adaptation and adaptation learning.

Summary of DIAL's Adaptation Process

DIAL's adaptation component starts with a library of domain cases (disaster response plans from previous disasters) and general, domain-independent rules about case adaptation and memory search. DIAL's case-based planner provides the adaptation component with two inputs: an instantiated disaster response plan and a description of the problems in the response plan that must be repaired.
When the response plan has been successfully adapted, DIAL stores both the new response plan and two types of adaptation knowledge for use in similar future adaptation problems: memory search cases, encapsulating information about the steps in the memory searches performed during adaptation, and adaptation cases, encapsulating information about the adaptation problem as a whole, the transformations and memory search cases used when solving it, and the solution to the adaptation problem. Thus, the system learns not only new response plan cases but also new ways of adapting existing cases to new situations. To adapt a case, DIAL's adaptation component performs the following steps:

1. Case-based adaptation: DIAL first attempts to retrieve an adaptation case describing the successful adaptation of a similar previous adaptation problem. If retrieval is successful, the adaptation process traced by that case is re-applied and processing continues with step 3.

2. Rule-based adaptation: When no relevant prior case is retrieved, DIAL selects a transformation associated with the type of problem that is being adapted. For example, it may decide to substitute a new plan step for one that does not apply. Given the transformation, the program generates a knowledge goal (Hunter 1990; Ram 1987) for the information needed to apply the transformation. (E.g., when performing a substitution, the knowledge goal is to find an object that satisfies all the case's constraints on the object being replaced.) The knowledge goal is then passed to a planning component that uses introspective reasoning about possible memory search strategies (Leake 1994) to guide the search for the needed information. If the needed information is found, the transformation is applied. If it is not found, the process continues with step 4, manual adaptation.

3.
Plan evaluation: The adapted response plan is evaluated by a simple evaluator that checks the compatibility of the current plan with explicit constraints from the response plan. A human user performs backup evaluation. If the new response plan is not acceptable, other adaptations are tried.

4. Manual adaptation: If autonomous case adaptation fails to generate an acceptable solution, an interface allows the user to guide the adaptation process, selecting a transformation and suggesting features to consider. During the adaptation, the system records a trace of the adaptation process in the same form as the traces of system-generated adaptations. This trace is added to the adaptation case library for future use.

5. Storage: When adaptation is successful, the resulting response plan, adaptation case, and memory search plan are stored for future use.

The basic principles of the adaptation process are shown by an example of developing a response plan for the story of a 1994 flood in Allakaket, Alaska. When DIAL processes that story, it retrieves the closest disaster case in memory, a flood in Bainbridge, Georgia. Part of the Bainbridge response was to build walls of sand bags to protect the area from water damage as the flood waters rose. In Bainbridge, volunteers helped to build the sand walls, and DIAL generates a knowledge goal to find people who could fill the same role in an Allakaket response plan. However, most of the able-bodied people in Allakaket are unavailable because they are helping to fight fires in the northwest. This prompts a new problem: the desired role-fillers are unavailable. DIAL has no similar adaptation cases, so it falls back on rule-based memory search to attempt to find a substitution. It checks constraints on possible role-fillers and finds that the previous volunteers were under the authority of the police. Searching for others who are under the authority of the police, it finds prisoners as a possible substitution.
Prisoners are suggested to build the flood walls, and, when they are judged a reasonable substitution by a human user, the replacement of able-bodied volunteers with prisoners is saved for future use.

Guiding Adaptation

In order to reason about adaptation problems, a uniform framework is needed for characterizing the case adaptation problem. DIAL's rule-based case adaptation treats the case adaptation process as involving two parts: applying structural transformations (e.g., additions, substitutions, and deletions) and performing memory search to find the information needed to apply the transformations (Kass 1990). Accordingly, two types of case adaptation knowledge are needed: abstract transformations and memory search strategies. A small set of transformations is sufficient to characterize a wide range of adaptations (Kolodner 1993), but much domain-specific reasoning may be required to find the information needed to apply those transformations. Consequently, a key issue is learning how to find the needed domain-specific knowledge. This involves a process of generating knowledge goals, using them to focus the search for information, and packaging the reasoning trace to guide future adaptations.

Knowledge Goals: DIAL generates knowledge goals to obtain information necessary for a specific adaptation. For example, the national guard was called out to prevent riots after the Los Angeles earthquake. When using the response to that earthquake as the basis for the response to an earthquake in Liwa, Indonesia, a problem arises: Indonesia has no national guard. Consequently, the Los Angeles response plan must be adapted. In response to the inappropriate-value problem, a knowledge goal is generated to find a substitute for the national guard in the Liwa earthquake context.
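The two-part decomposition described above (choose a transformation for the problem type, then generate a knowledge goal to operationalize it) might be sketched as follows. The problem-type vocabulary mirrors the paper's examples, but the function names and data shapes are hypothetical, not DIAL's actual code:

```python
# Hypothetical sketch: transformation selection plus knowledge-goal
# generation for rule-based adaptation.

TRANSFORMATIONS = {
    "unavailable-filler": "substitute",   # e.g. volunteers busy elsewhere
    "filler-mismatch":    "substitute",   # e.g. children have no unions
    "unspecified-filler": "add",          # e.g. no rescuer named at all
}

def knowledge_goal(problem):
    """Describe the information needed to apply the chosen transformation
    (e.g., for a substitution: an object satisfying the old filler's
    constraints)."""
    return {"transform": TRANSFORMATIONS[problem["type"]],
            "role": problem["role"],
            "constraints": problem.get("constraints", [])}

def rule_based_adapt(problem, search_memory):
    """Select a transformation, post its knowledge goal, and search memory
    for the needed information; None signals fall-through to manual
    adaptation."""
    goal = knowledge_goal(problem)
    filler = search_memory(goal)          # memory search planning step
    if filler is None:
        return None
    return {"apply": goal["transform"], "role": goal["role"], "new": filler}
```

In the Allakaket example, the problem would be an unavailable filler for the wall-builders role, constrained to actors under the authority of the police, and memory search would return prisoners as the substitute.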
Memory Search Plans: DIAL's memory search process starts from an input knowledge goal and uses a combination of rule-based and case-based reasoning to guide the traversal of memory to find the needed information. DIAL's memory is frame-based, hierarchically organized by memory organization packages (MOPs) (Schank 1982). MOPs representing event sequences include roles such as actor and object (e.g., the MOP for a flood disaster includes a role for the rescuers); constituent sub-events, called scenes (e.g., rescuers traveling to the victims, rescuers evacuating the victims); and constraining relationships between the roles in a main MOP and roles in its scenes (e.g., the fillers of the victims role of the flood MOP are the evacuees in its evacuation scene). DIAL's MOPs may also include explicit relationships in which role-fillers of a MOP are involved (e.g., because the police responding to a particular flood are directed by the mayor, the MOP represents that they participate in an authority relationship with the mayor). All of these relationships may suggest pathways to be pursued during memory search. Corresponding memory search operations exist to examine roles, scenes, explicit relationships between role-fillers, or the meanings of those relationships; to examine MOPs or response plans that are nearby in the abstraction hierarchy; and to examine a representation of the problem prompting memory search.

Memory search cases: When memory search is successful, a trace of the search process is packaged as a memory search case. A memory search case consists of a sequence of primitive memory search operations that was previously used to find some information in memory. Initial memory search cases are built up interactively by recording traces of manual adaptation. When similar knowledge is next needed in a similar context, a retrieved search case provides an initial strategy for finding the needed information.
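One way to picture the frame-based MOP memory and the primitive search operations just described (examining roles, scenes, and abstraction-hierarchy neighbors) is the following sketch. The class layout and the breadth-first neighborhood traversal are assumptions for illustration, not DIAL's memory code:

```python
# Minimal sketch of a frame-based MOP memory (hypothetical structure:
# each MOP has roles, scenes, and an abstraction-hierarchy parent).

class MOP:
    def __init__(self, name, parent=None, roles=None, scenes=None):
        self.name = name
        self.parent = parent          # link up the abstraction hierarchy
        self.roles = roles or {}      # e.g. {"rescuers": police_mop}
        self.scenes = scenes or []    # constituent sub-events
        self.children = []            # specializations of this MOP
        if parent:
            parent.children.append(self)

def local_search(mop, accept, max_depth=2):
    """Breadth-first 'local search' over a MOP's neighborhood: examine
    roles, scenes, specializations, and the abstraction parent, returning
    the first node that the `accept` predicate approves."""
    frontier, seen = [(mop, 0)], set()
    while frontier:
        node, depth = frontier.pop(0)
        if node.name in seen or depth > max_depth:
            continue
        seen.add(node.name)
        if accept(node):
            return node
        neighbors = list(node.roles.values()) + node.scenes + node.children
        if node.parent:
            neighbors.append(node.parent)
        frontier.extend((n, depth + 1) for n in neighbors)
    return None
```

Under this picture, "nearby" substitutions such as sibling nodes (discussed in the experiments below) are reached in two hops: up to the shared abstraction, then down to the sibling.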
Memory search cases are indexed under the adaptation cases that have successfully used them and the knowledge goals they satisfy, so that they can be used as operators when building up future memory search plans.

Adaptation Cases: Adaptation cases package the results of a successful adaptation. They package both a transformation type (e.g., substitute, add, delete) and the memory search steps used to find the information needed to apply the transformation. An adaptation case consists of three parts: indexing information, adaptation information, and evaluation information. The indexing information includes a representation of the type of problem to adapt as the primary index, along with information about the response plan context in which the adaptation case was generated. Thus, appropriate adaptations can be retrieved to deal with new adaptation problems.

The type of problem to be repaired by adaptation is described in terms of a vocabulary similar in spirit to the problem vocabularies used to guide adaptation in other CBR systems (e.g., Hammond, 1989; Leake, 1992). For example, DIAL's problem types include the following problems involving role-fillers in a candidate plan: UNAVAILABLE-FILLER (e.g., a police commissioner may be out of town and unable to be reached in an emergency situation), FILLER-MISMATCH (e.g., if a workplace response plan involving notifying the victims' union is applied to a school disaster, whose victims, being children, do not have unions), and UNSPECIFIED-FILLER (e.g., if a plan calls for a rescue without specifying who will carry it out). These categories are used to index adaptation cases.

How Adaptation Knowledge is Acquired

DIAL acquires adaptation cases in two ways. First, it generates them autonomously, based on the adaptations it performs.
Second, they can be entered into the system with a manual adapter, in which a human user interactively builds an adaptation case in response to a problem, selecting the type of transformation to apply and the types of memory links to follow. The manual adapter lets a user input cases which implicitly contain the user's knowledge of important features to consider when performing a particular type of adaptation. The specific adaptation and a trace of how it was derived are saved for future use. At present, use of the manual adapter requires considerable knowledge of the system's memory organization, but a future research direction is to allow the user to provide general suggestions to be operationalized by the system's own memory search mechanisms.

Effects of Adaptation Learning

To obtain initial indications of the potential value of adaptation learning, we compared the system's adaptation efficiency under six different conditions. In the first four conditions, all memory search during case adaptation was done by "local search," with different combinations of learning methods: (1) no learning of either cases or adaptations; (2) learning of response plan cases only (the standard learning of CBR systems); (3) no learning of response plan cases, but learning of adaptation cases; and (4) learning of both response plan cases and adaptation cases. Conditions (5) and (6) replaced "local search" with memory search planning to find needed information. In (5), DIAL performed response plan learning only, and in (6), it learned both response plans and adaptation cases.

The memory for the trials included nodes for 870 concepts. The initial case library included 6 response plans, for the following disasters: an earthquake in Los Angeles, an air quality disaster at a manufacturing plant, a flood in Bainbridge, Georgia, a chemical disaster at a factory, a flood in Izmir, Turkey, and an air quality disaster in a rural elementary school.
The system processed conceptual representations of 5 stories taken from the Clarinet News Service newswire and the INvironment newsletter for air quality consultants: an indoor air quality disaster at Brookview School; a chemical disaster at Johnson School; an air quality disaster at the Kirtland military base; a flood at Allakaket, Alaska; and an earthquake in Liwa, Indonesia. Appropriate response plans for each of these can be generated by adapting one of the prestored plans. For example, one change to adapt the plan for the Bainbridge flood to apply to Allakaket is that the Salvation Army, which provided shelter during the Bainbridge flood but does not exist in Allakaket, is replaced by the Red Cross.

Each of the input problems required multiple adaptations. In the trials including adaptation learning, the system built 30 adaptation cases. Efficiency of adaptation was compared across the six conditions by counting both the number of primitive memory operations performed and the number of memory nodes visited; each gives an indication of memory search effort. Table 1 shows the results for a single problem order, but changes in problem order did not appear to have a significant effect.

Multiple search strategies combined with adaptation learning performed best overall in terms of memory operations performed. This contrasts with the relatively poor behavior of using multiple search strategies without adaptation learning, which will be discussed below. The same general pattern follows in results based on the number of memory nodes visited.

The poor performance of multiple strategies without learning, compared to local search without learning, was initially surprising. However, it can be explained by the types of adaptations that are most common to these problems and the contents of memory: nearby substitutions, such as sibling nodes, were often appropriate fillers, and these fillers could be found directly by local search.
Conversely, a problem that is difficult for local search is often easier for the other strategies, which perform operations such as attempting to find substitute fillers based on explicit constraints (e.g., as in the case of having to notify children's parents instead of their unions). Another illustration involves adapting the Los Angeles earthquake response plan into a plan for Liwa, Peru. The Los Angeles plan involved the Red Cross sending supplies in by truck, but the roads to Liwa are impassable so the Red Cross cannot deliver the supplies. (In the real episode, the solution was a military airlift.) Using local search to replace the Red Cross with an agency that can deliver the supplies is costly, because the Red Cross and the military are distant in the system's memory. A search based on other strategies identifies an old case where "lack of access" was an impediment, uses this case to identify vehicles that can make the trip, and then looks for actors who control these vehicles. This leads to the suggestion of the Liwa military after minimal search. Such problems did not arise often, however. When adaptation cases based on both local search and other strategies are saved and reused, average performance is better than for either method individually.

                                               Memory Ops          Nodes visited
                                             Avg   Max   Min      Avg   Max   Min
Using "local search" to find needed information
1. No learning                               103   226     7       53   114     4
2. Plan learning only                         80   214     7       41   108     4
3. Adaptation learning only                   68   181     4       40    92     3
4. Plan + Adaptation learning                 66   181     4       39    92     3
Using multiple strategies to find needed information
5. Plan learning                             548   812    42       56    83     5
6. Plan + Adaptation learning                 59   312     1       26    50     1

Table 1: Average, maximum and minimum effort expended adapting the five sample cases.

As was expected, when no adaptation cases are learned, learning additional response plan cases makes the system able to solve new problems with less adaptation effort, because more similar cases are available.
This is the foundation for the benefits of learning found in most CBR systems.

The table also shows that for this small test set, adding response plan learning to adaptation case learning produced a very small (and probably insignificant) benefit. This requires further investigation, but suggests that for achieving good performance from CBR systems, it is not sufficient to consider only the effects of learning domain cases: serious attention must be paid to the interaction of retrieval, similarity, and adaptation criteria. In our trials, the best performance came from simultaneously learning both response plans and adaptations.

We plan to follow up on these initial tests with a more controlled analysis of the effects of learning for a larger set of problem examples. We also intend to study the tradeoff between adaptation effort and adaptation quality, and the effects of adaptation learning on the quality of solutions generated.

Relationship to Other Approaches

Some early case-based reasoning systems included components for learning adaptation knowledge. For example, CHEF (Hammond 1989) bases its adaptations on both a static library of domain-independent plan repair strategies and a library of special-purpose ingredient critics, which suggest steps that must be added to any recipe using particular ingredients (e.g., that shrimp should be shelled before being added to a recipe). CHEF uses special-purpose procedures to learn new ingredient critics. PERSUADER (Sycara 1988) uses a combination of adaptation heuristics and previously-stored adaptation episodes to suggest adaptations. In both these systems, learned adaptations can only be reused in very similar situations; the adaptation cases learned by DIAL can be reused more flexibly. In addition, DIAL can perform adaptations from scratch when necessary to augment its library of cases.
DIAL's flexible approach to memory search is inspired by the memory search process of CYRUS (Kolodner 1984), and also relates to Oehlmann's (1993) question-based approach to introspective reasoning for guiding adaptation.

Our use of both case-based planning and case-based case adaptation provides advantages of both derivational (Veloso & Carbonell 1994) and standard transformational approaches to CBR. Transformational CBR approaches store and adapt a solution to a problem, while derivational approaches store and replay a derivational trace of the problem-solving steps used to generate a previous solution. For CBR tasks such as disaster response planning, derivations of solutions are not generally available, and planning from scratch is not satisfactory because domain theories are inaccurate and intractable. However, examples of prior solutions are readily available in news stories and casebooks used to train disaster response planners (e.g., Rosenthal et al., 1989). This favors a transformational approach to reusing disaster response plans. However, derivational approaches can simplify the reapplication of a case to a new situation, and the rationale for the system's choice of particular steps during adaptation of prior cases is available.

There is growing interest in alleviating case adaptation problems through interactive user adaptation (e.g., Bell, Kedar & Bareiss, 1994; Goel et al. 1991), including presenting the user with derivational traces (Goel et al., 1996). However, because those systems do not capture the results of the user's adaptations for future use, DIAL contributes a new approach to acquiring adaptation knowledge.

Conclusion

We have described ongoing research on a method for facilitating the acquisition of case adaptation knowledge. The method depends on representing adaptations as combinations of abstract transformations with memory search plans for finding the information needed to apply them.
When no specific adaptation knowledge is relevant, reasoning from scratch is used to search memory for the information needed to perform an adaptation. When a similar adaptation has been performed in the past, case-based reasoning is used. This hybrid method makes it possible for a CBR system to acquire adaptation expertise through case-based reasoning about adaptation, and also to reason from scratch when needed to solve novel adaptation problems that would be beyond the scope of previous case-based adaptation methods. The view of adaptations as involving transformations plus memory search has also been used as the basis for interactive acquisition of adaptation knowledge.

Preliminary studies show speedup benefits from adaptation learning for an initial sampling of problems. More extensive studies are needed to corroborate these benefits, to investigate the effects of adaptation learning on the quality of adaptations suggested, and to examine whether the indexing scheme for adaptation cases is sufficient to make the approach resistant to the utility problem as large numbers of cases are learned. Another issue requiring study is how similarity assessment criteria should change as adaptations are learned (Leake, Kinley, & Wilson 1996). Nevertheless, we believe that combining transformational CBR for planning with derivational CBR for performing adaptation is a promising approach.

Acknowledgments

This work was supported in part by the National Science Foundation under Grant No. IRI-9409348. We thank the AAAI reviewers for their helpful comments.

References

Bell, B.; Kedar, S.; and Bareiss, R. 1994. Interactive model-driven case adaptation for instructional software design. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 33-38.

Goel, A.; Kolodner, J.; Pearce, M.; and Billington, R. 1991. Towards a case-based tool for aiding conceptual design problem solving.
In Bareiss, R., ed., Proceedings of the DARPA Case-Based Reasoning Workshop, 109-120. San Mateo: Morgan Kaufmann.

Goel, A.; Garza, A.; Grue, N.; Murdock, J.; Reeker, M.; and Govindaraj, T. 1996. Explanatory interface in interactive design environments. In Fourth International Conference on AI in Design. In press.

Hammond, K. 1989. Case-Based Planning: Viewing Planning as a Memory Task. San Diego: Academic Press.

Hunter, L. 1990. Planning to learn. In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, 261-268.

Kass, A. 1990. Developing Creative Hypotheses by Adapting Explanations. Ph.D. Dissertation, Yale University. Northwestern University Institute for the Learning Sciences, Technical Report 6.

Kolodner, J. 1984. Retrieval and Organizational Strategies in Conceptual Memory. Hillsdale, NJ: Lawrence Erlbaum Associates.

Kolodner, J. 1993. Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.

Leake, D. 1992. Evaluating Explanations: A Content Theory. Hillsdale, NJ: Lawrence Erlbaum Associates.

Leake, D. 1994. Towards a computer model of memory search strategy learning. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 549-554.

Leake, D. 1995. Combining rules and cases to learn case adaptation. In Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society, 84-89.

Leake, D.; Kinley, A.; and Wilson, D. 1996. Linking adaptation and similarity learning. In Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society. In press.

Oehlmann, R.; Sleeman, D.; and Edwards, P. 1993. Learning plan transformations from self-questions: A memory-based approach. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 520-525.

Ram, A. 1987. AQUA: Asking questions and understanding answers. In Proceedings of the Sixth Annual National Conference on Artificial Intelligence, 312-316.

Rosenthal, U.; Charles, M.; and Hart, P., eds. 1989.
Coping with Crises: The Management of Disasters, Riots, and Terrorism. Springfield, IL: C.C. Thomas.

Schank, R. 1982. Dynamic Memory: A Theory of Learning in Computers and People. Cambridge, England: Cambridge University Press.

Sycara, K. 1988. Using case-based reasoning for plan adaptation and repair. In Kolodner, J., ed., Proceedings of the DARPA Case-Based Reasoning Workshop, 425-434.

Veloso, M., and Carbonell, J. 1994. Case-based reasoning in PRODIGY. In Michalski, R., and Tecuci, G., eds., Machine Learning: A Multistrategy Approach. Morgan Kaufmann. Chapter 20, 523-548.
Detecting Discontinuities in Case-Bases

Hideo Shimazu and Yosuke
Information Technology Research Laboratories
NEC Corporation
41-1 Miyazaki, Miyamae, Kawasaki, 216 Japan
{shimazu, yosuke}@joke.cl.nec.co.jp

Abstract

This paper describes a discontinuity detection method for case-bases and data bases. A discontinuous case or data record is defined as a case or data record whose specific attribute values are very different from those of other records retrieved with identical or similar input specifications. Using the proposed method, when a user gives an input specification, he/she can retrieve not only exactly-matched cases, but also similar cases and discontinuous cases. The proposed method has three steps: (1) retrieving case records with input specifications which are the same as or similar to a user's input specification (Maybe Similar Cases, MSCs), (2) selecting the case record which most closely matches the user's input specification among MSCs (Base Case, BC), and (3) detecting cases among MSCs whose output specifications are very different from those of BC. The proposed method has been implemented in the CARET case-based retrieval tool operating on a commercial RDBMS. Because case-based reasoning systems rely on the underlying assumption that similar input specifications retrieve similar case records, discontinuity detection in case-bases is indispensable, and our proposed method is especially useful.

[Figure 1: Discontinuity in package tour data records. Scattergram of departure date versus price.]

Introduction

This paper analyzes discontinuities in case-bases and the architecture of automatic discontinuity detection in case-bases and data bases. Specifically, we implement a discontinuity detection mechanism on an enhanced version of CARET, a case-based retrieval tool that operates on a relational data base management system (RDBMS).
When a user gives an input specification, CARET automatically retrieves not only exactly-matched cases, but also similar cases and discontinuous cases.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Discontinuities in a case-base or a data base exist where exceptional records exist in a cluster. Some (one to all) of the attribute values of such exceptional records are very different from those of the majority of records retrieved with identical or similar input specifications. Figure 1 shows a simple and common example of discontinuities in a data base. The data base contains tour data records for a set of package tours from Japan to Hawaii in December and January. Here we plot the departure date of a tour versus the tour price in the record. The ordinary tour price is generally about $1500. However, since package tours departing between December 23 and January 3 are $1000 more expensive than ordinary ones, they are exceptional records from the viewpoint of price. There are discontinuities from Dec 23 to Jan 3. Such exceptional records often contain useful information. For example, a user planning to depart on Dec 23 would appreciate hearing that he can save money if he departs on Dec 22.

The motivation behind this research is that detecting discontinuities in a case-base is necessary for installing case-based reasoning (CBR) systems in actual applications. CBR is a problem solving method which retrieves similar precedent cases and adapts old solutions in them to solve new problems (Schank 1982; Kolodner 1993). After many failures in expert system development projects, CBR has been widely used in actual applications, because cases are much easier than rules to formulate and maintain. CBR relies on the underlying assumption that similar input specifications retrieve similar case records.
However, if there are discontinuous cases in a case-base, a CBR system may unfortunately retrieve such discontinuous cases as precedent cases and generate incorrect solutions by adapting them. Therefore, discontinuities in case-bases should be detected in CBR systems.

For example, a previously reported help desk system (Shimazu, Shibata and Nihei 1994) is a case-based retrieval system which stores previous customer inquiries. While the help desk operator answers a user's inquiry, it shows the operator previous cases similar and related to the inquiry. The following are two discontinuous cases found in the case-base of the help desk system:

Discontinuous case 1: Only a machine type produced during a specific period of time has a very high percentage of trouble claims regarding its switching equipment. Other than that, the machine type is the same as those produced in different time periods.

Discontinuous case 2: Only a specific version of a software product does not run with a specific peripheral I/O board. Other than that, the version is the same as other versions.

When a help desk operator retrieves previous cases while answering a customer's inquiry regarding these symptoms, these cases should not be retrieved if the produced time period is different (Discontinuous case 1) or the software version is different (Discontinuous case 2).

Cases are defined and stored by help desk operators after answering each customer's inquiry. Because they are stored in a case-base without special indices, discontinuities are naturally generated in the case-base. However, no CBR study has yet appeared on a discontinuity detection method for case-bases. Even commercial CBR tools, such as CBR Express (Inference 1993) and ReMind, do not incorporate one. Thus, we had to develop our own mechanism, optimized for our CBR systems.
Design Decision

Analysis of Discontinuities

There are three factors which affect the existence of discontinuities in a case-base.

1. Observed attributes in cases: Whether a case is discontinuous or not depends on the observed attributes in the case. Figure 2 shows two scattergrams generated from the same data base as in Figure 1. The new scattergram shows hotel names versus customers' subjective evaluations of the degree of comfort. Depending on the observed attributes in case records, an identical record (marked in the scattergrams) becomes a discontinuous case in one scattergram, while it is a normal case in another scattergram. In general, the number of observed attributes is more than one, and each attribute is given a different weight of importance.

[Figure 2: Two scattergrams generated from identical package tour data records.]

2. Compared neighboring cases: Whether a case is discontinuous or not depends on the difference from compared neighboring cases. For a person who can depart at any time in December, package tours departing after Dec. 23 seem exceptionally expensive. However, for a person who must depart on Jan. 1, since any package tours departing around Jan. 1 are as expensive as that on Jan. 1, he does not think a tour departing on Jan. 1 is exceptionally expensive.

3. Hidden attributes: Discontinuities often exist because of hidden attributes which are calculated from a set of existing attribute values in cases. For a person who travels alone, a $1,000 difference in the price attribute may not be so important. However, for a person who takes his family (for example, 5 persons), a $5,000 ($1,000 x 5) difference is a big issue. Here, a hidden attribute, total price, is a salient attribute when detecting discontinuous cases.
Based on the analysis, a discontinuous case or data record can be defined as a case or data record whose output specifications are different from those of other records retrieved with the same or similar input specifications. Here, the output specifications are defined as a set of weighted specific attributes including hidden attributes, and the input specifications are another set of weighted attributes.

[Figure 3: General scattergram visualizing the discontinuous case area.]

Three Steps to Detect Discontinuities

Based on the above definition of discontinuities, there are three steps in detecting discontinuities.

Step 1: Retrieving cases with input specifications which are the same as or similar to a user's input specification (Maybe Similar Cases, MSCs)

Step 2: Selecting the case record which most closely matches the user's input specification among MSCs (Base Case, BC)

Step 3: Detecting discontinuous cases among MSCs whose output specifications are very different from that of BC

Figure 3 is a general scattergram indicating input specifications (X axis) versus output specifications (Y axis) of case records in a case-base. A user's input specification is indicated as I_base. MSCs are retrieved with input specifications around I_base within the pre-defined input threshold. Based on the assumption that similar input specifications retrieve similar case records, all MSCs must be similar. However, there is a possibility that discontinuous cases are included in the MSCs. Because the retrieved cases are either really similar cases or discontinuous cases, they are located in either the similar case area or the discontinuous case area in Figure 3.

[Figure 4: Automatic SQL generation on a commercial RDBMS.]

BC is the case which most closely matches the user's input specification among MSCs. In Figure 3, the circled point indicates BC.
BC's output specification is indicated as O_base. CARET carries out the nearest neighbor retrieval efficiently using the SQL data base language. It generates SQL specifications in varying degrees of similarity, and dispatches the generated SQL specifications to the RDBMS to retrieve cases from the RDB (Figure 4).

The output specification of each MSC is compared with that of O_base. If the difference is smaller than the pre-defined output threshold, the case is regarded as a similar case (located in the similar case area). Otherwise, the case is regarded as a discontinuous case (located in the discontinuous case area).

The CARET algorithm can be enhanced to retrieve not only similar cases but also discontinuous cases. The essence of this work, therefore, is to extend the CARET architecture to be able to retrieve not only similar cases but also discontinuous cases.

Case Retrieval using the Nearest Neighbor

CARET is a case-based retrieval tool that operates on a commercial relational data base management system (RDBMS) (Shimazu, Kitano and Shibata 1993). It carries out similar-case retrieval using the nearest neighbor retrieval method. In the nearest neighbor retrieval method, the similarity between a user's input query (Q) and a case (C) in the case-base, S(Q, C), is defined as the weighted sum of similarities for individual attributes:

    S(Q, C) = ( Σ_i W_i × s(Q_i, C_i) ) / ( Σ_i W_i )    (1)

where W_i is the i-th attribute weight, and s(Q_i, C_i) is the similarity between the i-th attribute value for a query (Q) and that for a case (C) in the RDB.

Traditional implementations compute the similarity value for all records, and sort records based on their similarity. However, this is a time consuming task, as computing time increases linearly with the number of records in the case-base (C: cardinality of the data base) and with the number of defined attributes (D: degree of the data base). This results in time complexity of O(C × D).
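The traditional linear-scan computation of this weighted measure can be sketched as follows. This is a minimal illustration, not CARET's implementation; the attribute names, similarity tables, and values are assumptions modeled on the paper's C++/BSD4.2 example.

```python
def weighted_similarity(query, case, weights, sim):
    """S(Q, C) = sum_i W_i * s(Q_i, C_i) / sum_i W_i."""
    total = sum(weights.values())
    return sum(w * sim[a](query[a], case[a]) for a, w in weights.items()) / total

def rank_all(query, case_base, weights, sim):
    """Traditional O(C x D) implementation: score every record, then sort."""
    scored = [(weighted_similarity(query, c, weights, sim), c) for c in case_base]
    return sorted(scored, key=lambda t: -t[0])

# Illustrative pairwise similarity tables in the spirit of Figure 5.
lang_sim = {("C++", "C++"): 1.0, ("C++", "C"): 0.7, ("C", "C++"): 0.7, ("C", "C"): 1.0}
os_sim = {("BSD4.2", "BSD4.2"): 1.0, ("BSD4.2", "BSD4.3"): 0.6, ("BSD4.3", "BSD4.2"): 0.6}
sim = {"language": lambda a, b: lang_sim.get((a, b), 0.0),
       "os": lambda a, b: os_sim.get((a, b), 0.0)}
weights = {"language": 0.3, "os": 0.4}

query = {"language": "C++", "os": "BSD4.2"}
case_base = [{"language": "C", "os": "BSD4.3"},
             {"language": "C++", "os": "BSD4.2"}]
for score, case in rank_all(query, case_base, weights, sim):
    print(round(score, 2), case)
```

With these values, the exact match scores 1.0 and the (C, BSD4.3) case scores (0.3 × 0.7 + 0.4 × 0.6) / 0.7 = 0.64, matching the worked example in equation (3) below. The scan over every record is exactly the cost the next paragraph objects to.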
This implementation strategy for an RDBMS would be a foolish decision, as individual records must be retrieved to compute similarity. Thus, the total transaction number would be intolerable. The following sections describe the architecture and report on its performance.

The Extended CARET System

Cases are represented as flat records of n-ary relations, and stored in tables in a commercial RDBMS. For each case attribute, its similarity measure and weight are defined. The similarities between values of individual attributes in case records are defined by domain experts (Figure 5). In this example, the similarity between "C" and "C++" is 0.7. The input threshold and output threshold are also defined by domain experts.

[Figure 5: Attribute hierarchy example.]

Step 1: Retrieving MSCs

Step 1-(1): Creating Neighbor Value Sets. A user gives attributes and their values as input specifications. For each user specified attribute, CARET refers to a similarity definition (Figure 5) to generate a set of values neighboring the user specified value. For example, if the user specifies "C++" as a value for the Language attribute, "C++" is an element in the first-order neighbor value set (1st-NVS). "C" is an element in the second-order neighbor value set (2nd-NVS). "ADA, COBOL and COBOL/S" are elements in the third-order neighbor value set (3rd-NVS).

Step 1-(2): Enumerating Combinations of NVSs. Next, all possible neighbor value combinations are created from the n-th order neighbor value sets. Figure 6 illustrates how such combinations are created. This example assumes that the user described "C++" as the value for the Language attribute and "BSD4.2" for the OS attribute as input specifications. All value combinations under attributes Language and OS are created. In this example, 9 combinations (3 x 3) are generated.
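The enumeration of NVS combinations, together with the per-combination scoring, thresholding, and SQL translation that CARET performs, can be sketched roughly as below. The neighbor value sets, similarity scores, weights, and threshold here are illustrative stand-ins for the Figure 5/6 definitions, not CARET's actual tables.

```python
from itertools import product

# Hypothetical NVS tables: value -> similarity to the user-specified value.
nvs = {
    "language": {"C++": 1.0, "C": 0.7, "ADA": 0.2},
    "os": {"BSD4.2": 1.0, "BSD4.3": 0.6, "SVR4": 0.2},
}
weights = {"language": 0.3, "os": 0.4}
INPUT_THRESHOLD = 0.6  # pre-defined by domain experts (assumed value)

def combinations_with_scores():
    """Enumerate all NVS combinations with their weighted similarity scores."""
    attrs = list(nvs)
    total_w = sum(weights[a] for a in attrs)
    for values in product(*(nvs[a].items() for a in attrs)):
        combo = dict(zip(attrs, (v for v, _ in values)))
        score = sum(weights[a] * s for a, (_, s) in zip(attrs, values)) / total_w
        yield score, combo

def generate_sql(table="case_table"):
    """Keep combinations above the input threshold; emit one SELECT each,
    ordered so the highest-similarity query is dispatched first."""
    selected = [(s, c) for s, c in combinations_with_scores() if s >= INPUT_THRESHOLD]
    selected.sort(key=lambda t: -t[0])
    return [(s, "SELECT * FROM %s WHERE %s;" %
             (table, " and ".join("%s = '%s'" % (a, v) for a, v in c.items())))
            for s, c in selected]

for score, sql in generate_sql():
    print(round(score, 2), sql)
```

With these assumed values, 9 combinations are enumerated and 5 survive the threshold, headed by the exact-match query at similarity 1.0.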
Each combination of NVSs becomes the seed for SQL specifications.

Step 1-(3): Calculating Similarity Values for Each Combination. For each combination, a similarity value is calculated using the similarity between the user specified values and those in each combination created in the previous step. The calculation is similar to that for the weighted nearest neighbor, except that not all attributes are involved. Whether an attribute is part of the input attributes or not is shown in a mask matrix (M), which is a one-dimensional matrix whose size is equivalent to the case-base degree. The matrix element M_i will be 1 if attribute i is part of the input specifications. Otherwise, M_i will be 0. The similarity between an input query (Q) and an NVS combination (F) is S(Q, F), and is calculated as

    S(Q, F) = ( Σ_i M_i × W_i × s(Q_i, F_i) ) / ( Σ_i M_i × W_i )    (2)

where F is an NVS combination, F_i is the i-th attribute value for the combination, and W_i is the i-th attribute weight. For example, the similarity between the combination (["C"], ["BSD4.3"]) and the user's input specifications is 0.64 by the following calculation:

    S(Q, F) = (0.3 × 0.7 + 0.4 × 0.6) / (0.3 + 0.4) = 0.64    (3)

[Figure 6: An example showing possible NVS combinations.]

Step 1-(4): Generating SQL Specifications. Only combinations whose similarity value is higher than the pre-defined input threshold are selected, and each selected combination is translated into a corresponding SQL specification. The only SQL expression type used here is the SELECT-FROM-WHERE type. For example, the NVS combination example shown above is translated into the following SQL expression:

SELECT * FROM case-table
WHERE language = 'C' and OS = 'BSD4.3';

Then, CARET dispatches the SQL specifications from the highest similarity score. Since SQL does not
Since SQL does not Case-Based Reasoning 693 involve a similarity measure, the similarity values pre- computed in this process are stored in CARET, and are referred to when the corresponding query results are returned. Step 2: Selecting a Base Case Because CARET dispatches SQL specifications from the highest similarity score, the first returned case be- comes a Bnse case, because it matches the user’s input specification exactly or most closely. ’ The output specification of the base case, (&se is extracted from the base case. All other retrieved cases are stored with their similarity values in CARET as pot(entially similar cases. Step 3: Detecting Discontinuous Cases among MSCs CARET calculates the similarity between the output specification of each retrieved case and that of the base case, Obase. Because the total number of the retrieved cases is much smaller than the total number of cases in the case-base, tra.ditional implementation for the weighted nearest neighbor is carried out and is suffi- ciently effective here. The calculation method is the same as that for tra- ditional implementation, except that not all attributes are involved. Whether an attribute is part of the out- put specification or not is shown in a mask matrix (N), which is a one-dimensional matrix whose size is equiv- alent to the case-base degree. The matrix element IVi will be 1, if the attribute i is part of the output spec- ification. Otherwise, ~l;‘a will be 0. The similarity be- tween the output specification of a base case (B) and that of a retrieved case (C) is S(B, C) and is calculated a.S S(B,C) = Cyzl Ni X bVi X S(Bi,Ci) zif, IVi X Wi (4) where & is the i-th attribute for the base case, Ca is the i-th attribute for the case (C), and PVi is the i-th attribute weight. For each retrieved case, its similarity value is com- pared with the output threshold. If the similarity value is higher than the output threshold, it is a similar case. Otherwise, it is regarded as a discontinuous case. 
For example, suppose that an output attribute is Work load (Man-Month), its O_base value is 10 Man-Months (MM), its similarity definition is as shown in Table 1, and the output threshold is 0.5. If there are cases among MSCs whose Work-load attribute value is less than 1 MM or more than 36 MM, they are regarded as discontinuous cases.

Work Load   0-1   1-6   6-12  12-36  36-
0-1         1.0
1-6         0.7   1.0
6-12        0     0.7   1.0
12-36       0     0.2   0.7   1.0
36-         0     0     0     0.7   1.0

Table 1: Similarity definition of Work-load (Man-Month)

Footnote 1: If several base case candidates exist, there are selection methods, such as calculating average cases and asking the user to choose one.

Empirical Results

The extended CARET performance has attained an acceptable speed. The experiment was evaluated using a SUN Sparc Station 2, with ORACLE version 6 installed on SunOS version 4.1.2. Figure 7 shows the response times measured for typical user queries. The algorithm's scalability was tested by increasing the number of cases in the case-base. The number of cases was increased to 1,500. The average response time for a query was about 2.0 seconds. The queries in Figure 7 are:

Query-1: (LANGUAGE = C++) and (MACHINE = SUN)
Query-2: (TROUBLE-TIME = SYSTEM-GENERATION-TIME) and (TROUBLE-TYPE = VERSION-MISMATCHED) and (CHOSEN-METHOD = CHANGE-LIBRARY-VERSION)

Factors                 Query-1  Query-2
Query length               2        3
Tree depth                 6        8
Tree width                28       25
Generated SQL number       6        4
Retrieved cases          105        3
Similar cases            102        3
Discontinuous cases        3        0
Input threshold          0.8      0.8
Output threshold         0.8      0.8

Table 2: Query characteristics

[Figure 7: Case-base retrieval and discontinuity detection time. Response time of steps 1 + 2 + 3 versus step 1 alone, for case-bases of up to 1,500 cases.]

Characteristics for each query are shown in Table 2. Query length refers to the number of conjunctive clauses in the WHERE clause. Tree depth shows the abstraction hierarchy depth for each attribute.
Tree width is the number of terminal nodes of the abstraction hierarchy of each attribute. The generated SQL number is the number of SQL specifications generated and actually sent to the RDBMS. Retrieved cases shows the number of cases retrieved by each query. Similar cases shows the number of similar cases retrieved. Discontinuous cases shows the number of discontinuous cases retrieved. These numbers are measured with a case-base containing 1500 cases. The input threshold and output threshold are pre-defined as fixed numbers. The speed of retrieving MSCs is affected by the query length, tree depth, tree width, generated SQL number, and retrieved cases. The speed of detecting discontinuous cases among MSCs is affected by the input threshold.

Based on a field test in our help desk operation, the capability of showing both similar cases and discontinuous cases was welcomed by help desk operators. They used the information to ignore discontinuous cases as useless cases. However, we found two major problems. First, discontinuous cases should indicate more information. For example, if many individual cases of the same type of problem inquiry are retrieved as discontinuous cases, the CBR system should generalize them and warn about the potential existence of a common problem not yet recognized by domain experts. Combining the proposed method with statistical approaches will be a subject for further research. Second, it is difficult to select appropriate attributes in cases for detecting exceptional cases. As we showed in this paper, whether a case is discontinuous or not depends on the observed attributes in the case. In the package tour example, if we had not observed the relation between the departure date attribute and the price attribute, we would not have known about the existence of such discontinuities in the data base.
Similarly, if we had not noticed the production time period of a machine or the version of a software product, we would not have recognized exceptions like the cases in the help desk systems described above. Monitoring all combinations of attributes in a case-base is inefficient and wasteful because there are too many combinations. Therefore, determining which attributes should be monitored to detect important discontinuities is another subject for further research.

Related Work

Among intelligent data base researchers, Siklossy (Siklossy 1977) indicated the existence of discontinuities in data bases. Parsaye (Parsaye 1993) proposed a discontinuity detection mechanism using constraint rules, like "if Department = "Sales" then Salary > 30,000". System designers pre-define rules which check for the existence of exceptions likely to occur in the future. This mechanism detects and prevents many errors during data entry. However, it cannot deal with discontinuities which have not yet been recognized by system designers, such as the discontinuities described in this paper.

Conclusion

This paper described a discontinuity detection method for case-bases and data bases. A discontinuous case or data record is defined as a case or data record whose specific attribute values are very different from those of other records retrieved with identical or similar input specifications.

Using the proposed method, when a user gives an input specification, he/she can retrieve not only exactly-matched cases, but also similar cases and discontinuous cases.

The proposed method has three steps: (1) Retrieving case records with input specifications which are the same as or similar to a user's input specification (Maybe Similar Cases, MSCs), (2) Selecting the case record which most closely matches the user's input specification among MSCs (Base Case, BC), and (3) Detecting cases among MSCs whose output specifications are very different from that of BC.
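The discontinuity test of step (3) can be illustrated with a short Python sketch that uses the Table 1 similarity definition for the Work-load output attribute. This is a hedged sketch, not CARET's actual implementation: the bin boundaries and similarity values come from Table 1, but all function names are our own, and for brevity a case is represented here just by its Work-load value.

```python
# Binned similarity lookup per Table 1 (assumed names; illustrative only).
BINS = [(0, 1), (1, 6), (6, 12), (12, 36), (36, float("inf"))]
SIM = [  # lower-triangular similarity matrix from Table 1 (symmetric)
    [1.0],
    [0.7, 1.0],
    [0.0, 0.7, 1.0],
    [0.0, 0.2, 0.7, 1.0],
    [0.0, 0.0, 0.0, 0.7, 1.0],
]

def bin_of(mm):
    """Index of the Work-load bin containing mm Man-Months."""
    return next(i for i, (lo, hi) in enumerate(BINS) if lo <= mm < hi)

def work_load_similarity(a, b):
    i, j = sorted((bin_of(a), bin_of(b)), reverse=True)
    return SIM[i][j]

def detect_discontinuous(mscs, base_case, output_threshold=0.5):
    # Step 3: flag MSCs whose output value is far from the base case's.
    return [c for c in mscs
            if work_load_similarity(c, base_case) < output_threshold]

# With a base value of 10 MM and an output threshold of 0.5, cases below
# 1 MM or above 36 MM are flagged, matching the example in the text.
print(detect_discontinuous([0.5, 5, 20, 40], base_case=10))  # [0.5, 40]
```

Note that with this similarity definition, cases in the 1-6 and 12-36 bins score 0.7 against a 10 MM base case and so survive the 0.5 threshold, exactly as the worked example requires.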
The proposed method has been implemented in the CARET case-based retrieval tool operating on a commercial RDBMS. Because case-based reasoning systems rely on the underlying assumption that similar input specifications retrieve similar case records, discontinuity detection in case-bases is indispensable, and our proposed method is especially useful for this. However, because discontinuous data records often contain useful information, discontinuity detection is also useful for ordinary data base applications, as long as the above assumption is applicable to the data bases.

References

Inference Corporation, 1993. CBR Express and CasePoint: Product Introduction, Presentation slides for NDS Customers, Tokyo.

Kolodner, J., 1993. Case-Based Reasoning, Morgan Kaufmann.

Parsaye, K., 1993. Intelligent Database Tools and Applications, John Wiley & Sons, Inc.

Schank, R. C., 1982. Dynamic Memory: A Theory of Learning in Computers and People, Cambridge Univ. Press.

Shimazu, H., Kitano, H., and Shibata, A., 1993. Retrieving Cases from Relational Databases: Another Stride Towards Corporate-Wide Case-Based Systems, In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-93).

Shimazu, H., Shibata, A., and Nihei, K., 1994. Case-Based Retrieval Interface Adapted to Customer-Initiated Dialogues in Help Desk Operations, In Proceedings of the National Conference on Artificial Intelligence (AAAI-94).

Siklossy, L., 1977. Question-Asking Question-Answering, Report TR-71, Austin: University of Texas, Computer Science.
Source Selection for Analogical Reasoning: An Empirical Approach

William A. Stubblefield
Sandia National Laboratories
P.O. Box 5800
Albuquerque, New Mexico 87185
wastub@sandia.gov

George F. Luger
Department of Computer Science
University of New Mexico
Albuquerque, New Mexico 87131
luger@cs.unm.edu

Abstract

The effectiveness of an analogical reasoner depends upon its ability to select a relevant analogical source. In many problem domains, however, too little is known about target problems to support effective source selection. This paper describes the design and evaluation of SCAVENGER, an analogical reasoner that applies two techniques to this problem: (1) An assumption-based approach to matching that allows properties of candidate sources to match unknown target properties in the absence of evidence to the contrary. (2) The use of empirical learning to improve memory organization based on problem solving experience.

Introduction

There are infinite things on earth; any one of them may be likened to any other. Likening stars to leaves is no less arbitrary than likening them to fish or birds.
Jorge Luis Borges, Labyrinths

Analogical reasoning (Gentner 1983; Hall 1989) makes inferences about novel target problems by transferring knowledge to them from a better understood source domain. Analogical reasoners generally choose a source that is known to have properties in common with the target under the assumption that this implies further commonalities and the potential for valid inferences. While this is a reasonable heuristic, many current implementations of it suffer from a number of limitations, including:

1. The requirement that enough be known about the target to select a relevant source. If the target domain is poorly understood, this assumption may not be justified.

2. Reliance on a restricted retrieval vocabulary (Kolodner 1993) to specify properties the reasoner may consider in choosing useful sources.
Although efficiency requires such restrictions, many retrieval methods ignore the problem of their selection by assuming an a priori definition of a retrieval vocabulary.

3. The use of context independent measures of similarity. For example, spreading activation techniques (Mitchell 1993; Thagard 1988) measure the similarity of two concepts as a function of their closeness in a semantic network. By fixing similarity measures in the structure of the network, such methods have difficulty in responding to changing problem situations.

4. Failure to consider problem solving context when organizing source memory. Many systems use clustering algorithms like UNIMEM (Lebowitz 1980) and COBWEB (Fisher 1987) to construct a hierarchical source organization. These and similar methods ignore the structure of target problems when defining hierarchies. If such approaches are to take the problem solving context into account at all, they must do so implicitly, through biases in the retrieval vocabulary.

Hoffman (1995) has demonstrated that reliance upon a single "form, format or ontology" for background knowledge will obscure many analogies that would seem reasonable and useful to humans.

These limitations restrict both the theoretical depth and practical usefulness of many current models of analogy. This paper discusses the design and evaluation of SCAVENGER, an analogical reasoner that attempts to correct these problems by (1) integrating source selection and inference in an assumption-based retrieval mechanism, and (2) using empirical learning to improve memory organization through problem solving experience.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Like most analogical reasoners, SCAVENGER organizes its source base under a hierarchical index. Each index node describes properties that are common to a set of similar sources.
Each node adds information to its parent, and indexes a subset of its parent's sources. SCAVENGER searches its hierarchy for nodes whose descriptions match components of the target. However, SCAVENGER takes an assumption-based approach to matching target problems with the patterns stored at index nodes: It requires properties known for the target to match exactly, but allows properties unknown in the target to assume any values required for a match. In a sense, rather than using the target problem description to select a source, it uses it to eliminate sources known to be irrelevant. Unlike approaches that treat retrieval and inference as separate steps, assumption-based retrieval integrates analogical inference with source selection.

On constructing a successful analogy, SCAVENGER updates its hierarchy by specializing the node that led to retrieval of the relevant analogical sources. Empirical memory management selects properties that best distinguish successful analogies from their competitors by using a variation of the ID3 learning algorithm (Quinlan 1986). It then uses these properties to refine the index hierarchy. By limiting the properties it considers to those that were effective in solving target problems, SCAVENGER both eliminates the need for a priori restrictions on its retrieval vocabulary, and considers problem solving experience in organizing source memory.

Although promising, this approach raises a number of questions and potential problems:

1. If little is known about a target problem, assumption-based retrieval will allow many potentially contradictory matches. It is possible that the overhead of maintaining these alternative hypotheses may cancel any advantage it provides over exhaustive search of all sources.

2. Because analogy is an unsound inference method, there is no guarantee that properties that characterized a useful source in one situation will be useful for other target problems.
In using the ID3 learning algorithm with such potentially unsound data, we are violating many of the assumptions underlying its design. This may undermine its effectiveness.

The rest of this paper examines SCAVENGER's design and its ability to overcome these problems.

The SCAVENGER Architecture

Although we have tested SCAVENGER on several domains (Stubblefield 1995), this paper discusses its application to the problem of diagnosing children's errors in simple subtraction problems (Brown and VanLehn 1980; Burton 1982; VanLehn 1990). Where their DEBUGGY system used an analytic approach that considered all likely bugs in forming a diagnosis, we have used their diagnostic models and problem data to ask a different question: Can SCAVENGER learn enough to make effective first guesses about the cause of an error without performing an extensive analysis? There are several reasons this is an interesting test for SCAVENGER's retrieval algorithm:

1. Although human teachers are generally good at inferring the cause of a student's error, it seems unlikely that they use DEBUGGY's analytic methods. It is more likely that they recognize similarities between current problems and those they have seen in the past. This suggests the use of some form of analogical reasoning.

2. Subtraction problems offer few surface cues to the underlying causes of mistakes. This makes them a difficult challenge for an analogical reasoner.

3. A single error may result from multiple interacting bugs, further complicating diagnosis.

SCAVENGER represents analogical sources as class and method definitions using the Common LISP Object System. The source base used in these tests is a set of LISP functions that reproduce the bugs described in (Brown and VanLehn 1980). SCAVENGER decomposes each subtraction problem into a series of unknown operations on pairs of digits. This takes the form of a LISP program containing unknown functions.
For example, in diagnosing the cause of the erroneous subtraction:

  634 - 468 = 276

SCAVENGER reformulates the problem as a sequence of LISP function evaluations:

  (#:G873 4 8 w) -> ?
  (#:G874 3 6 w) -> ?
  (#:G875 6 4 w) -> ?
  (show-result w) -> 276

In this target, w is an instance of the class working-memory. w contains two slots: a borrow slot that records the value to be decremented from the next column, and a result slot that accumulates the column results as the program proceeds. Show-result returns the result accumulated in working memory. The methods #:G873, #:G874 and #:G875 indicate unknown target operations. SCAVENGER diagnoses the error by finding an analogical mapping of target methods onto source operators that reproduces the erroneous behavior.

The source library contains methods for subtracting digits. Each of these takes two digits and an instance of working-memory, and returns an integer. Sources include the normal-subtract method and such error methods as borrow-no-decrement, borrow-from-zero, always-borrow, etc. For example, borrow-no-decrement borrows under the appropriate circumstances, but fails to decrement the column borrowed from. Trying alternative mappings onto the target operators, SCAVENGER eventually reproduces the behavior seen in the target problem. In the example, it diagnoses the error's cause as a borrow-no-decrement bug:

  (normal-subtract 4 8 w) -> 6
  (borrow-no-decrement 3 6 w) -> 7
  (borrow-no-decrement 6 4 w) -> 2
  (show-result w) -> 276

Assumption-based Retrieval

Each operator in SCAVENGER's source base is stored along with:

1. Its signature, specifying the types of its arguments. Types used in this problem include a digit type, and sub-types for each individual digit (0, 1, ...).
For example, the signature of the borrow-from-zero operator is:

  digit-0 x digit x working-memory -> digit

The numbers and types of arguments are the only information SCAVENGER uses to restrict matches between targets and sources.

2. A description of the bug. The original DEBUGGY research derived bugs from a procedural model of subtraction skills (Brown and VanLehn 1980; Burton 1982; VanLehn 1990). Each bug represents a different failure of one step in this model. In order to avoid representational biases that might distort our evaluation of SCAVENGER, we described each bug according to the stage of this model it affects. This yielded 6 bug descriptors:

1. normal-op. This describes normal subtraction.
2. transpose-error. An error in which the student subtracts the top digit from the bottom digit.
3. borrow-error. Any failure in borrowing, such as failing to borrow, or always borrowing.
4. decrement-error. Any failure in decrementing the digit borrowed from.
5. subtract-error. A failure in subtracting digits, such as assuming that n - 0 = 0.
6. add-error. Adding instead of subtracting.

For example, borrow-no-decrement is represented by:

  Operator: borrow-no-decrement
  Signature: digit x digit x working-memory -> digit
  Description: (decrement-error)
  Definition: (defmethod . . . ) ; the LISP definition

SCAVENGER stores sources under a hierarchical index, where each node contains the signature and description shared by a class of similar operators. In the example hierarchy of Figure 1, the root contains no description pattern, and the child node describes borrow-no-decrement and 10 similar operators.
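The borrow-no-decrement diagnosis of 634 - 468 = 276 shown above can be reproduced with Python stand-ins for the LISP column operators. This is a hedged sketch, not SCAVENGER's source code: the working memory is modeled, following the paper's description, as a dict with borrow and result slots, and the bug is interpreted as failing to apply the pending decrement on the column borrowed from.

```python
# Hypothetical Python versions of the paper's LISP source operators.
def normal_subtract(top, bottom, w):
    top -= w['borrow']          # apply any pending decrement to this column
    w['borrow'] = 0
    if top < bottom:
        top += 10
        w['borrow'] = 1         # schedule a decrement of the next column
    digit = top - bottom
    w['result'].insert(0, digit)
    return digit

def borrow_no_decrement(top, bottom, w):
    # Bug: the pending decrement of this column is never applied.
    w['borrow'] = 0
    if top < bottom:
        top += 10
        w['borrow'] = 1
    digit = top - bottom
    w['result'].insert(0, digit)
    return digit

def show_result(w):
    return int("".join(map(str, w['result'])))

w = {'borrow': 0, 'result': []}
normal_subtract(4, 8, w)        # rightmost column: 14 - 8 = 6, borrow set
borrow_no_decrement(3, 6, w)    # borrow ignored: 13 - 6 = 7
borrow_no_decrement(6, 4, w)    # borrow ignored: 6 - 4 = 2
print(show_result(w))           # reproduces the erroneous 276
```

Replacing both buggy operators with normal_subtract yields the correct answer, 166, which is why the "failed analogy" discussed later cannot reproduce the observed behavior.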
[Figure 1: An example index hierarchy. The root node references all sources in the source base; its child index node (operator: borrow-no-decrement, signature: digit x digit x w -> digit, description: decrement-error) references borrow-no-decrement and 10 additional matching operators.]

In searching this hierarchy, SCAVENGER forms and evaluates alternative hypotheses about the target. Each hypothesis results from a different sequence of index matches, and reflects different assumptions about the target. SCAVENGER constructs and evaluates hypotheses using the following algorithm:

1. Create an initial hypothesis based on the root node, in which no targets are matched. Move down the hierarchy, creating a new hypothesis for every matching index by:
   1.1. For each hypothesis, examine the children of the most specific index node that it has already matched.
   1.2. For each match between the description stored at a child index and an unmatched target operator in the hypothesis, create a new hypothesis. In it, record the matching index, the target that is matched, and transfer the index signature and description to the target operator.
   1.3. Repeat 1.1 and 1.2 until no more hypotheses are produced.
2. Sort all hypotheses according to heuristic merit.
3. Trying each hypothesis in order, retrieve all sources stored under its matching indices, and construct a partial analogy for each consistent combination of retrieved sources.
4. Complete each of these partial analogies using all sources that match the remaining target methods. Note that a single partial analogy may have multiple completions.
5. Test each candidate analogy by attempting to reproduce the target behavior. Repeat steps 3-5 until finding an analogy that duplicates the target behavior.

Matching (step 1) uses type information stored at an index to prevent invalid matches. For example, the digit 5 cannot match the type digit-0. On matching, the index signature and source description transfer to the target.
This can restrict further extensions to the hypothesis as described in (Stubblefield 1995).

The heuristics of step 2 use the information transferred to the target under step 1 to rank hypotheses. The heuristics used in this problem were (1) a specificity criterion that preferred matches deeper in the hierarchy, and (2) a simplicity criterion that favored matches that transferred the fewest different source descriptions to the target. (Stubblefield 1995) discusses the use of assumptions made in retrieval to evaluate competing hypotheses.

Because SCAVENGER uses assumed information to match index nodes, it must evaluate all hypotheses, whether produced by internal or leaf nodes. If all other indices fail to produce a viable analogy, the algorithm will eventually "fail back" to the root, where it effects an exhaustive search of the source base. Since this process may generate the same analogies several times, SCAVENGER keeps a list of previously tried analogies, and checks it to avoid testing the same analogy twice. Although this adds to the algorithm's overhead, the results of section 3 show that it does not outweigh its benefits.

In step 4, SCAVENGER completes a hypothesized analogy by matching each of its unmatched targets with all matching sources. Consequently, a single partial analogy can have many completions.

Continuing with the example problem, assumption-based retrieval produces four hypotheses, based on the initial match with the root, and a match between the child node and each target operator (#:G873, #:G874 and #:G875). SCAVENGER evaluates each hypothesis in turn by retrieving all source operators referenced under the matching index. For example, evaluating the match between target #:G873 and the child node of Figure 1 produced 11 partial analogies, including one that eventually led to a correct diagnosis:

  #:G873 -----> borrow-no-decrement
  #:G874 -----> ?
  #:G875 -----> ?
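The two core mechanisms just described — assumption-based matching of index patterns against targets, and the generate-and-test completion of partial analogies (steps 4 and 5) — can be sketched as follows. The data structures and names are our illustrative assumptions, not SCAVENGER's actual code: unknown target properties are represented as None, and a partial analogy as a dict from target names to source operators.

```python
import itertools

def assumption_match(pattern, target):
    """Known target properties must match the index pattern exactly;
    unknown ones (None) assume whatever value the pattern requires.
    Returns the dict of assumed values, or None on a contradiction."""
    assumed = {}
    for prop, required in pattern.items():
        known = target.get(prop)
        if known is None:
            assumed[prop] = required
        elif known != required:
            return None
    return assumed

def complete_and_test(partial, targets, candidates, reproduces):
    """Complete a partial analogy by trying every combination of
    candidate sources for the unmatched targets; stop at the first
    completion that reproduces the target behavior."""
    pools = [candidates[t] for t in targets]
    for combo in itertools.product(*pools):
        analogy = dict(partial, **dict(zip(targets, combo)))
        if reproduces(analogy):
            return analogy
    return None
```

For instance, a target with an unknown description matches a decrement-error index pattern under the assumption description=decrement-error, while a target already known to be normal-op is rejected, eliminating the sources under that index from consideration.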
SCAVENGER generates and tests all possible extensions to each partial analogy. Although a given error may have different possible diagnoses, SCAVENGER stops after finding the first. This is not as thorough as the approach taken by DEBUGGY, but it fits our stated goal of testing SCAVENGER's ability to efficiently find relevant analogies.

Successful analogy:
  #:G873  operator: borrow-no-decrement
          signature: digit x digit x w -> digit
          description: decrement-error
  #:G874  operator: normal-subtract
          signature: digit x digit x w -> digit
          description: normal-op
  #:G875  operator: borrow-no-decrement
          signature: digit x digit x w -> digit
          description: decrement-error

Failed analogy:
  #:G873  operator: borrow-no-decrement
          signature: digit x digit x w -> digit
          description: decrement-error
  #:G874  operator: normal-subtract
          signature: digit x digit x w -> digit
          description: normal-op
  #:G875  operator: normal-subtract
          signature: digit x digit x w -> digit
          description: normal-op

Figure 2

Empirical Memory Management

When an analogy correctly reproduces the target behavior, SCAVENGER specializes the index that led to its construction according to the algorithm:

1. Consider the index whose match led to the successful analogy. This may be either a leaf or an internal index node.
2. For each target function in the successful analogy that was not matched to an index pattern but matched a source when the partial analogy was extended:
   2.1. Use the signature and description transferred to that function in the successful analogy to partition all analogies produced at the index into matching and non-matching sets.
   2.2. Using ID3's information theoretic evaluation function (Quinlan 1986), rank each partition according to its ability to distinguish successful and failed analogies.
3. Create a new child node using the function description that best partitioned the candidate analogies.
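The ranking in step 2.2 relies on ID3's information-theoretic evaluation function (Quinlan 1986). A standard information-gain computation over successful/failed analogy labels might look like the following sketch; the function names are hypothetical, and the partition is represented as a boolean flag per analogy (True if it falls in the matching set).

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    total = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        total -= p * math.log2(p)
    return total

def information_gain(outcomes, partition_flags):
    """outcomes: True for successful analogies, False for failed ones.
    partition_flags: True if the analogy is in the matching set.
    Returns the reduction in entropy achieved by the partition."""
    gain = entropy(outcomes)
    for flag in (True, False):
        subset = [o for o, f in zip(outcomes, partition_flags) if f == flag]
        if subset:
            gain -= len(subset) / len(outcomes) * entropy(subset)
    return gain
```

A partition that exactly separates the successful analogies from the failed ones achieves the maximal gain (the full entropy of the outcome labels), while a partition uncorrelated with success scores near zero; the candidate with the highest gain becomes the new child node.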
For example, Figure 2 shows a successful and a failed analogy that resulted from the match of target #:G873 with the index node of Figure 1. Figure 3 shows two potential extensions to the index of Figure 1, resulting from the respective mappings of #:G874 onto normal-op, and #:G875 onto borrow-no-decrement in the successful analogy. Candidate child node #1 partitions the analogies considered into those that mapped #:G875 onto a source operator from the set of decrement-errors and those that gave this target function a different interpretation. Candidate child node #2 partitions them into those that mapped #:G874 onto normal-op and those that gave it a different interpretation.

[Figure 3: Candidate child node #1 and candidate child node #2 (the latter was later eliminated).]

SCAVENGER rates each candidate child node according to its ability to distinguish the successful and failed analogies using the information theoretic evaluation function from the ID3 learning algorithm. For details, see Quinlan (1986). In our example, it determined that mapping target operator #:G875 onto the borrow-no-decrement source made the greatest contribution to the successful analogy, and selected candidate specialization #1 from Figure 3. This new node indexes all matching sources.

It is important to note that SCAVENGER is using ID3's evaluation function differently than was originally intended. Each specialization of the index hierarchy results from a different target problem. Since an analogy that proved useful for solving one problem will not necessarily be correct for another, the examples used to construct the indices lack the global consistency usually assumed by ID3. In addition, since SCAVENGER stops evaluating analogies on finding one that solves the target problem, many of the analogies evaluated in step 2.2 may not have been tested. SCAVENGER assumes these to be failures.
One of the questions we consider in the next section concerns the algorithm's ability to produce a useful hierarchy under these circumstances.

Evaluating SCAVENGER

We evaluated SCAVENGER on a Power Macintosh 6100 computer. The source base consisted of 67 operators. Although this is less than the 97 bugs that were considered in the original DEBUGGY work, many of those were either combinations of simpler bugs or restrictions of bugs to a specific context (e.g., the leftmost column). Consequently, SCAVENGER was able to duplicate all the original bugs when constructing analogies. We randomly chose 500 buggy subtractions from VanLehn's (1990) study of children's subtraction; to these, we added the correct solutions to the problems, producing a set of 575 potential test problems. Our test procedure randomly chose 161 problems from this set, dividing them into a training set containing 75 problems, and a test set containing 86 problems.

[Figure 4: Time elapsed (seconds, 0 to 1600) per problem versus run number, across 5 trials of the training data.]

Figure 4 shows SCAVENGER's improvement across 5 trials of the training data. Due to the many possible combinations of bugs it had to test, the untrained algorithm took approximately 1600 seconds per problem. The trained version took an average of 11 seconds per problem. The longest solution time for the trained algorithm was 30.5 seconds, the shortest was just over 2 seconds. The untrained version of the algorithm generated an average of 3096 candidate analogies per problem. After training, it generated an average of 78 per problem.

In applying SCAVENGER to the test set (problems that the learning algorithm had not seen), the untrained version took about 1500 seconds per problem. This improved to 76 seconds after training. Although this is a strong result, it is worth noting that it was skewed by the presence of a small number of completely novel problems that, consequently, took a very long time (2220, 1556, 593, 569, 568 and 116 seconds) to solve.
Excluding these, the average time on the test problems drops to under 11 seconds.

[Figure 5: Solution time for the trained algorithm as the number of sources grows from 7 to 66.]

The algorithm scales well as the source base grows. To examine its scaling behavior, we divided the source base into 10 sets of 6 or 7 buggy operators. Each of these "test units" included approximately 16 problems that could be solved with those operators. Repeated trials "grew" the source base by adding test units and problems. At each stage, we randomly divided the problems into test and training sets and repeated the test described above. As the source base grows, the untrained algorithm shows the exponential rise in complexity we would expect of exhaustive search (Figure 5). However, times for the trained algorithm remain nearly flat when applied to problems it has trained on. The algorithm also remains efficient on the test problems, although the results show some fluctuation due to the existence of novel problems that caused it to perform badly.

The effectiveness of SCAVENGER's retrieval algorithm rests on two factors: the first is the repetition of useful patterns of analogy in the problem domain. These tests confirmed the existence of such common patterns in our test domain. The second is the ability of the learning algorithm to construct effective hierarchies. Although SCAVENGER uses a variation of the well tested ID3 learning algorithm, it did so in a very different way than was originally intended. SCAVENGER used a different analogy for each extension of its index hierarchy. In addition, it stopped evaluating analogies on success, and assumed unevaluated analogies to be failures. This contrasts with ID3's usual global analysis of entire sets of training data, and it was not clear that it would support construction of a useful index hierarchy.
The results of our evaluation strongly suggest that it does, providing another example of the robustness of the ID3 algorithm. The positive results on scaling tests indicate that in spite of the algorithm's complexity and the large numbers of hypotheses it typically evaluates, the algorithm behaves well as its source base grows.

SCAVENGER provides an analogical retrieval mechanism that overcomes the four limitations of traditional approaches that were listed in the introduction:

1. By projecting commonly useful patterns of analogy onto poorly defined target problems, assumption-based retrieval enables reasonable analogies in poorly defined problem domains.
2. The use of empirical learning to select properties that are effective predictors of source utility eliminates the need for a priori restrictions on the retrieval vocabulary.
3. SCAVENGER determines the similarity of sources and targets by heuristically ranking hypothesized matches between the target and different index nodes. This is more flexible and expressive than context independent approaches.
4. Empirical memory management takes the structure of target problems into account in organizing the retrieval system.

It is interesting to contrast SCAVENGER with the more analytic approach taken by DEBUGGY and most expert systems. SCAVENGER does not reason about problems: it simply remembers useful patterns of analogy. This "analogize-test-remember" approach could be useful in poorly understood problem domains. For example, if we are diagnosing problems in systems where failure of one component could cause or interact with failures of another in poorly understood ways, SCAVENGER could be a useful tool for discovering and recording patterns of interacting failures.

The SCAVENGER experiments corroborate the viability of assumption-based retrieval and empirical memory management for analogical source retrieval.
In particular, they show that the experience gained in constructing analogies, although limited and contingent in nature, will support construction of effective hierarchical indices for future source selection. Finally, SCAVENGER provides an alternative to the separation of retrieval and analogical inference common to many models of analogy. In doing so, it offers what we believe to be an elegant, flexible and theoretically interesting model of analogical reasoning.

Acknowledgements

We wish to thank Kurt VanLehn for graciously providing the original data from his Southbay study of children's performance on subtraction problems. We also thank the Computer Science Departments at Dartmouth College and The University of New Mexico for providing the intellectual communities that made this research possible.

References

Brown, J. S. and K. VanLehn. 1980. Repair theory: a generative theory of bugs in procedural skills. Cognitive Science 4: 379-426.

Burton, R. R. 1982. Diagnosing bugs in a simple procedural skill. In Sleeman and Brown (1982).

Gentner, Dedre. 1983. Structure-Mapping: A Theoretical Framework for Analogy. Cognitive Science 7: 155-170.

Hall, R. P. 1989. Computational Approaches to Analogical Reasoning: A Comparative Analysis. Artificial Intelligence 39(1): 39-120.

Hoffman, R. R. 1995. Monster Analogies. AI Magazine 16(3): 11-35.

Kolodner, J. L. 1993. Case-Based Reasoning. San Mateo, Cal.: Morgan Kaufmann.

Mitchell, M. 1993. Analogy-making as Perception. Cambridge, Mass.: MIT Press.

Quinlan, J. R. 1986. Induction of Decision Trees. Machine Learning 1(1): 81-106.

Sleeman, D. and Brown, J. S. 1982. Intelligent Tutoring Systems. New York: Academic Press.

Stubblefield, W. A. 1995. Source Selection for Analogical Reasoning: An Interactionist Approach. PhD Dissertation, Department of Computer Science, University of New Mexico.

Thagard, P. 1988. Computational Philosophy of Science. Cambridge, Mass.: MIT Press.

VanLehn, K. 1990. Mind Bugs: The Origins of Procedural Misconceptions. Cambridge, Mass.: MIT Press.
Learning Trees and Rules with Set-valued Features

William W. Cohen
AT&T Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
wcohen@research.att.com

Abstract

In most learning systems examples are represented as fixed-length "feature vectors", the components of which are either real numbers or nominal values. We propose an extension of the feature-vector representation that allows the value of a feature to be a set of strings; for instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a feature vector with size=small, species=canis-familiaris and color={white, black}. Since we make no assumptions about the number of possible set elements, this extension of the traditional feature-vector representation is closely connected to Blum's "infinite attribute" representation. We argue that many decision tree and rule learning algorithms can be easily extended to set-valued features. We also show by example that many real-world learning problems can be efficiently and naturally represented with set-valued features; in particular, text categorization problems and problems that arise in propositionalizing first-order representations lend themselves to set-valued features.

Introduction

The way in which training examples are represented is of critical importance to a concept learning system. In most implemented concept learning systems an example is represented by a fixed-length vector, the components of which are called attributes or features. Typically, each feature is either a real number or a member of a pre-enumerated set; the latter is sometimes called a nominal feature.

Clearly, fixed-length feature vectors are of limited expressive power. Because of this, numerous previous researchers have proposed learning methods that employ more expressive representations for examples.
For instance, many "inductive logic programming" systems (Quinlan 1990b; Muggleton 1992) can be viewed as learning from examples that are represented as saturated Horn clauses (Buntine 1988); in a similar vein, KL-ONE type languages (Cohen and Hirsh 1994; Morik 1989) and conceptual dependency structures (Pazzani 1990) have also been used in learning.

While some successes have been recorded, these alternative representations have seen quite limited use. In light of this, it is perhaps worthwhile to review some of the practical advantages of the traditional feature-vector representation over more expressive representations. One advantage is efficiency. The rapid training time of many feature-vector systems greatly facilitates systematic and extensive experimentation; feature-vector learners have also been used on very large sets of training examples (Catlett 1991; Cohen 1995a). A second important advantage is simplicity. The simplicity of the representation makes it easier to implement learning algorithms, and hence to refine and improve them. A simple representation also makes it easier for non-experts to prepare a set of training examples; for instance, a dataset can be prepared by someone with no background in logic programming or knowledge representation. The efficiency and simplicity of the feature-vector representation have doubtless contributed to the steady improvement in learning algorithms and methodology that has taken place over the last several years.

In this paper, we propose an alternative approach to generalizing the feature-vector representation that largely preserves these practically important advantages. We propose that the feature-vector representation be extended to allow set-valued features, in addition to the usual nominal and continuous features. A set-valued feature is simply a feature whose value is a set of strings.
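The dog example from the abstract can be written down directly in code. The sketch below is our own illustration, not the paper's: the weight feature is a hypothetical addition, included only to show a continuous feature alongside the others.

```python
# One example mixing nominal, continuous, and set-valued features.
# "weight_kg" is a hypothetical extra feature, added for illustration.
example = {
    "size": "small",                # nominal
    "weight_kg": 6.5,               # continuous
    "species": "canis-familiaris",  # nominal
    "color": {"white", "black"},    # set-valued: a set of strings
}

def element_of(instance, feature, s):
    """Primitive test: is string s an element of the set-valued feature?"""
    return s in instance[feature]
```

An element-of test then reads naturally: `element_of(example, "color", "white")` holds, while `element_of(example, "color", "puce")` does not.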
For instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a vector with size=small, species=canis-familiaris and color={white, black}. Importantly, we will not assume that set elements (the colors in the example above) are taken from some small, pre-enumerated set, as is typically the case with nominal values. We will show that a connection can be established between set-valued features in this setting and the "infinite attribute model" (Blum 1992).

In the remainder of the paper, we first define set-valued features precisely. We then argue that many decision tree and rule learning algorithms can be easily extended to set-valued features, and that including set-valued features should not greatly increase the number of examples required to learn accurate concepts, relative to the traditional feature-vector representation. We next present some examples of problems that naturally lend themselves to a set-valued representation, and argue further that for some of these problems, the use of set-valued representations is essential for reasons of efficiency. Finally we summarize our results and conclude.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Set-valued features

In the interests of clarity we will now define our proposed extension of the feature-vector representation more precisely. A domain D = (n, t, u, V, Y) consists of a dimension n, a type vector t = (t_1, ..., t_n), a name vector u = (u_1, ..., u_n), a value vector V = (V_1, ..., V_n), and a class set Y = {y_1, ..., y_k}. Each component t_i of the type vector t must be one of the symbols continuous, nominal, or set. Each component u_i of the name vector u must be a string over the alphabet Σ. (Here Σ is a fixed alphabet, such as {0, 1} or {a, ..., z}.) The class set Y and the components V_i of the value vector V are sets of strings over Σ. Intuitively, a "domain" formalizes the sort of information about a learning problem that is recorded by typical feature-vector learners such as C4.5 (Quinlan 1994), with the addition of the new feature type set.

Using domains we can define the notion of a "legal example". A legal example of the domain D = (n, t, u, V, Y) is a pair (x, y) where x is a legal instance and y ∈ Y. A legal instance of the domain D is a vector x = (x_1, ..., x_n) where for each component x_i of x: (a) if t_i = continuous then x_i is a real number; (b) if t_i = nominal then x_i ∈ V_i (otherwise, if t_i ≠ nominal, the value of V_i is irrelevant); (c) if t_i = set then x_i is a set of strings over Σ, i.e., x_i = {s_1, ..., s_l}.

function MaxGainTest(S, i)
 1    Visited := ∅;
 2    TotalCount[+] := 0; TotalCount[-] := 0;
 3    for each example (x, y) in the sample S do
 4        for each string s ∈ x_i do
 5            Visited := Visited ∪ {s};
 6            ElemCount[s, y] := ElemCount[s, y] + 1;
 7        endfor
 8        TotalCount[y] := TotalCount[y] + 1;
 9    endfor
10    BestEntropy := -1;
11    for each s ∈ Visited do
12        p := ElemCount[s, +]; n := ElemCount[s, -];
13        if (Entropy(p, n) > BestEntropy) then
14            BestTest := "s ∈ u_i"; BestEntropy := Entropy(p, n); endif
15        p' := TotalCount[+] - ElemCount[s, +]; n' := TotalCount[-] - ElemCount[s, -];
16        if (Entropy(p', n') > BestEntropy) then
17            BestTest := "s ∉ u_i";
18            BestEntropy := Entropy(p', n');
19        endif
20        ElemCount[s, +] := 0; ElemCount[s, -] := 0;
21    endfor
22    return BestTest

Figure 1: Finding the best element-of test

Finally, we will define certain primitive tests on these instances. If u_i is a component of the name vector u, r is a real number and s is a string, then the following are all primitive tests for the domain D: u_i = s and u_i ≠ s for a nominal feature u_i; u_i ≤ r and u_i ≥ r for a continuous feature u_i; and s ∈ u_i and s ∉ u_i for a set-valued feature u_i.
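The procedure of Figure 1 translates almost line for line into executable form. The sketch below is our own rendering rather than the paper's code; in particular, the paper leaves Entropy(p, n) abstract, so a simple coverage-weighted purity score stands in for it here.

```python
from collections import defaultdict
from math import log2

def purity_score(p, n):
    # Stand-in for the paper's abstract Entropy(p, n): coverage-weighted
    # purity, which favours tests covering many examples of one class.
    if p + n == 0:
        return float("-inf")
    q = p / (p + n)
    h = sum(-x * log2(x) for x in (q, 1 - q) if x > 0)
    return (p + n) * (1 - h)

def max_gain_test(sample, i, score=purity_score):
    """Best test "s in u_i" or "s not in u_i" for set-valued feature i.

    `sample` is a list of (instance, label) pairs; instance[i] is a set
    of strings and labels are '+' or '-'.  Mirrors Figure 1.
    """
    elem = defaultdict(lambda: {"+": 0, "-": 0})
    total = {"+": 0, "-": 0}
    for instance, y in sample:          # counting pass over the sample
        for s in instance[i]:
            elem[s][y] += 1
        total[y] += 1

    best, best_score = None, float("-inf")
    for s, c in elem.items():           # pick the best element-of test
        p, n = c["+"], c["-"]
        if score(p, n) > best_score:            # test "s in u_i"
            best, best_score = (s, "in"), score(p, n)
        p2, n2 = total["+"] - p, total["-"] - n
        if score(p2, n2) > best_score:          # test "s not in u_i"
            best, best_score = (s, "not in"), score(p2, n2)
    return best
```

The counting pass touches each set element once, so the cost is linear in the total size of the sample, independent of how many distinct strings occur.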
The semantics of these primitive tests are all defined in the obvious way; for instance, if u_3 = color, then "puce ∈ color" denotes the set of legal instances x = (x_1, ..., x_n) such that puce ∈ x_3. Boolean combinations of primitive tests are also defined in the obvious way. We can now speak precisely of representations such as DNF, decision trees, or decision lists over set-valued representations.

Implementing Set-Valued Features

Let us now consider the problem of learning decision trees for the representation described above: that is, for feature-vector examples that contain a mix of set-valued, nominal and continuous features. As a concrete case, we will consider how the ID3 algorithm (Quinlan 1990a) can be extended to allow internal nodes to be labeled with element-of tests of the form s ∈ u_i and s ∉ u_i on set-valued features u_i, in addition to the usual tests on continuous or nominal features.

To implement this extension of ID3, it is clearly necessary and sufficient to be able to find, for a given set-valued feature u_i and a given set of examples S, the element-of test s ∈ u_i or s ∉ u_i that maximizes entropy on S. Figure 1 presents a function MaxGainTest that finds such a maximizing test. For simplicity, we have assumed that there are only two classes.

Lines 1-9 of the function loop over the examples in the sample and record, for each string s that appears as an element of the i-th feature of an example, the number of times that s appears in a positive example and the number of times that s appears in a negative example. These counts are stored in ElemCount[s, +] and ElemCount[s, -]. (These counters are assumed to be initialized to zero before the routine is called; they are reset to zero at line 20.) Additionally, a set Visited of all the elements s that appear in feature i of the sample is maintained, and the total number of positive and negative examples is recorded in TotalCount[+] and TotalCount[-].

Lines 10-22 make use of these counts to find the best test. For a given set element s, the number of examples of class y covered by the test s ∈ u_i is simply ElemCount[s, y]; similarly, the number of examples of class y covered by the test s ∉ u_i is given by TotalCount[y] - ElemCount[s, y]. The maximal-entropy test can thus be found by looping over the elements s in the set Visited and computing the entropy of each test based on these formulae.

A few points regarding this procedure bear mention.

Efficiency. Defining the size of a sample in the natural way, it is straightforward to argue that if accessing ElemCount and the Visited set requires constant time,[1] then invoking MaxGainTest for all set-valued features of a sample only requires time linear in the total size of the sample.[2] Hence finding maximal-entropy element-of tests can be done extremely efficiently. Notice that this time bound is independent of the number of different strings appearing as set elements.

"Monotone" element-of tests. One could restrict this procedure to generate only set-valued tests of the form "s ∈ u_i" by simply removing lines 15-19. Henceforth, we will refer to these tests as monotone element-of tests. Theory, as well as experience on practical problems, indicates that this restriction may be useful.

Generality. This routine can be easily adapted to maximize a metric other than entropy, such as the GINI criterion (Breiman et al. 1984), information gain (Quinlan 1990b), predictive value (Apté et al. 1994), Bayes-Laplace corrected error (Clark and Niblett 1989), or LS-content (Ali and Pazzani 1993).

[1] In a batch setting, when all the string constants are known in advance, it is trivial to implement constant-time access procedures for ElemCount and Visited. One simple technique is to replace every occurrence of a string s in the
dataset with a unique small integer, called the index of s. Then ElemCount can be an r × k matrix of integers, where r is the largest index and k is the number of classes. Similarly, Visited can be a single length-r array of flags (to record what has been previously stored in Visited) and another length-r array of indices.

[2] A brief argument: in any invocation of MaxGainTest, the number of times line 5 is repeated is bounded by the total size of the i-th features of examples in the sample, and the number of iterations of the for loop at lines 11-21 is bounded by the size of Visited, which in turn is bounded by the number of repetitions of line 5.

In fact, any metric that depends only on the empirical performance of a condition on a sample can be used. Hence it is possible to extend to set-valued features virtually any top-down algorithm for building decision trees, decision lists, or rule sets.

A Theory of Set-Valued Features

Given that it is possible to extend a learning system to set-valued features, the question remains: is it useful? It might be that few real-world problems can be naturally expressed with set-valued features. More subtly, it might be that learning systems that use set-valued features tend to produce hypotheses that generalize relatively poorly.

The former question will be addressed later. In this section we will present some formal results suggesting that extending a boolean hypothesis space to include element-of tests on set-valued features should not substantially increase the number of examples needed to learn accurate concepts. In particular, we will relate the set-valued attribute model to Blum's (1992) "infinite attribute space" model, thus obtaining bounds on the sample complexity required to learn certain boolean combinations of element-of tests.

In the infinite attribute space model of learning, a large (possibly infinite) space of boolean attributes A is assumed. This means that an instance can no longer be represented as a vector of assignments to the attributes; instead, an instance is represented by a list of all the attributes in A that are true for that instance. The size of an instance is defined to be the number of attributes in this list.

One can represent an instance I in the infinite attribute model with a single set-valued feature true-attribs, whose value is the set of attributes true for I. If I' is the set-valued representation of I, then the element-of test "a_j ∈ true-attribs" succeeds for I' exactly when the boolean attribute a_j is true for I, and the test "a_j ∉ true-attribs" succeeds for I' exactly when a_j is false for I.

Conversely, given an instance I represented by the n set-valued features u_1, ..., u_n, one can easily construct an equivalent instance in the infinite attribute model: for each set-valued feature u_i and each possible string s ∈ Σ*, let the attribute s-in-u_i be true precisely when the element-of test "s ∈ u_i" would succeed. This leads to the following observation.

Observation 1. Let D = (n, t, u, V, Y = {+, -}) be a domain containing only nominal and set-valued features, and let L be any language of boolean combinations of primitive tests on the features in u. Then there exists a boolean language L' in the infinite attribute model, a one-to-one mapping f_I from legal instances of D to instances in the infinite attribute model, and a one-to-one mapping f_C from concepts in L to concepts in L' such that, for every concept C ∈ L and every legal instance I,

    (I ∈ C) ⟺ (f_I(I) ∈ f_C(C)).

In other words, if one assumes there are only two classes and no continuous features, then every set-valued feature domain D and set-valued feature language L has an isomorphic formulation in the infinite attribute model.
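The two mappings used in this construction are tiny to implement. The sketch below is our own illustration; the identifier true_attribs follows the text's feature name.

```python
def infinite_to_setvalued(true_attributes):
    """An infinite-attribute instance, given as the collection of
    attributes that are true for it, becomes one set-valued feature."""
    return {"true_attribs": set(true_attributes)}

def setvalued_to_infinite(instance):
    """Each element s of each set-valued feature u_i becomes a boolean
    attribute named "s-in-u_i" that is true for this instance."""
    return {f"{s}-in-{name}" for name, xs in instance.items() for s in xs}

inst = infinite_to_setvalued(["a3", "a7"])
attrs = setvalued_to_infinite({"color": {"white"}})
```

The element-of test "a3 in true_attribs" now succeeds exactly when the boolean attribute a3 was true in the original instance.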
This observation allows one to immediately map over results from the theory of infinite attributes, such as the following:

Corollary 2. Let D be a two-class domain containing only set-valued features, but containing any number of these. Let n be an upper bound on the size of legal instances of D. Let L_k be the language of conjunctions of at most k element-of tests, let M_k be the language of conjunctions of at most k monotone element-of tests, and let VCdim(·) denote the Vapnik-Chervonenkis (VC) dimension of a language. Then

• VCdim(L_k) ≤ (n + 1)(k + 1);
• VCdim(M_k) ≤ n + 1, irrespective of k.

Proof: Immediate consequence of the relationships between mistake bounds and VC-dimension established by Littlestone (1988) and Theorems 1 and 2 of Blum (1992).

Together with the known relationship between VC-dimension and sample complexity, these results give some insight into how many examples should be needed to learn using set-valued features. In the monotone case, conjunctions of set-valued element-of tests for instances of size n have the same VC-dimension as ordinary boolean conjunctions for instances of size n. In the non-monotone case, set-valued features are somewhat more expressive than non-monotone boolean features. This suggests that negative element-of tests should probably be used with some care; although they are computationally no harder to find than monotone element-of tests, they are an intrinsically more expressive representation (at least when large conjunctions are possible), and hence they may require more examples to learn accurately.

Using Corollary 2 and other general results, bounds on the VC-dimension of related languages can also easily be established. For example, it is known that if L is a language with VC-dimension d, then the language of l-fold unions of concepts in L has VC-dimension of at most 2ld log(el) (Kearns and Vazirani 1994, p. 65).
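Spelling out how the two bounds combine (our own working, under the bound as quoted above):

```latex
% Combining Corollary 2 with the union bound quoted above: if
% VCdim(L_k) \le (n+1)(k+1) = d, then an l-fold union of concepts in
% L_k (e.g. a DNF with l terms, each a conjunction of at most k
% element-of tests) has
\mathrm{VCdim} \;\le\; 2\,l\,d\,\log(e\,l)
             \;=\; 2\,l\,(n+1)(k+1)\,\log(e\,l),
% which is polynomial in n, k, and l.
```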
Applying this result to L_k immediately yields a polynomial upper bound on DNF over set-valued features, which includes as a subset decision trees over set-valued features.

Alternatives to set-valued features

In the preceding section we showed that if continuous attributes are disallowed then the set-valued feature model is equivalent to the infinite attribute model. Another consequence of this observation is that in a batch setting, in which all examples are known in advance, set-valued features can be replaced by ordinary boolean features: one simply constructs a boolean feature of the form s-in-u_i for every string s and every set-valued feature u_i such that s appears in the i-th component of some example. Henceforth, we will call this the characteristic vector representation of a set-valued instance.

One drawback of the characteristic vector representation is that if there are m examples, and d is a bound on the total size of each set-valued instance, then the construction can generate md boolean features. This means that the size of the representation can grow in the worst case from O(md) to O(m²d), i.e., quadratically in the number of examples m.

We will see later that some natural applications do indeed show this quadratic growth. For even moderately large problems of this sort, it is impractical to use the characteristic vector representation if vectors are implemented naively (i.e., as arrays of length n, where n ≤ md is the number of features). However, it may still be possible to use the characteristic vector representation in a learning system that implements vectors in some other fashion, perhaps by using a "sparse matrix" to encode a set of example vectors.
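In code, the characteristic vector construction discussed above looks as follows (our own sketch). Note how the vector length is determined by the whole dataset, which is the source of the quadratic blowup.

```python
def characteristic_vectors(samples, i):
    """Replace set-valued feature i by one boolean feature per string
    seen anywhere in the dataset (batch setting only)."""
    vocab = sorted({s for instance, _ in samples for s in instance[i]})
    return vocab, [([s in instance[i] for s in vocab], label)
                   for instance, label in samples]

vocab, vecs = characteristic_vectors(
    [(({"white", "black"},), "+"), (({"brown"},), "-")], 0)
# Each example is now a dense boolean vector of length len(vocab),
# and len(vocab) can grow linearly with the number of examples.
```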
Hence, it is clear that there are (at least) two other ways in which we could have described the technical contributions of this paper: as a scheme for extending top-down decision tree and rule learning algorithms to the infinite attribute model; or as a specific data structure for top-down decision tree and rule learning algorithms to be used in domains in which the feature vectors are sparse.

We elected to present our technical results in the model of set-valued features because this model enjoys, in our view, a number of conceptual and pedagogical advantages over the other models. Relative to the infinite-attribute model, set-valued features have an advantage in that they are a strict generalization of the traditional feature-vector representation; in particular, they allow ordinary continuous and nominal features to co-exist with "infinite attributes" in a natural way. Additionally, the nature of the generalization (adding a new kind of feature) makes it relatively easy to extend existing learning algorithms to set-valued features. We note that to our knowledge, the infinite attribute model has seldom been used in practice.

There are also certain conceptual advantages of the set-valued feature model over using a sparse implementation of the characteristic vector representation. For instance, the set-valued feature model lends itself naturally to cases in which some features require a dense encoding and others require a sparse encoding. Also, the same learning system can be used without significant overhead on problems with either sparse or non-sparse feature vectors.

A more subtle advantage is that for set-valued features, the representation as perceived by the users and designers of a learning system closely parallels the actual implementation. This has certain advantages when selecting, designing, and implementing learning algorithms.
For example, set-valued features share with traditional (non-sparse) feature vectors the property that the size of an example is closely related to the VC-dimension of the learning problem. This is not the case for a sparse feature vector, where the number of components in the vector that represents an example depends both on the example's size and on the size of the dataset. One can easily imagine a user naively associating the length of a feature vector with the difficulty of a learning problem, even though long feature vectors may be caused either by large amounts of data (which is of course helpful in learning) or by long documents (which is presumably not helpful in learning).

Applications

In this section we will present some results obtained by using set-valued features to represent real-world problems. The learning system used in each case is a set-valued extension of the rule learning system RIPPER (Cohen 1995a), extended as suggested above.

To date we have discovered two broad classes of problems which appear to benefit from using a set-valued representation. The first class is learning problems derived by propositionalizing first-order learning problems. The second is the class of text categorization problems, i.e., learning problems in which the instances to be classified are English documents.

First-order learning

A number of theoretical results have been presented which show that certain first-order languages can be converted to propositional form (Lavrač and Džeroski 1992; Džeroski et al. 1992; Cohen 1994). Further, at
There are several reasons why a LINUS-like system might be preferred to one that learns first-order con- cepts in a more direct fashion. One advantage is that it allows one to immediately make use of advances in propositional learning methods, without having to design and implement first-order versions of the new propositional algorithms. Another potential advantage is improved efficiency, since the possibly expensive pro- cess of first-order theorem-proving is used only in trans- lation. A disadvantage of LINUS-like learning systems is that some first-order languages, when propositional- ized, generate an impractically large number of fea- tures. However, often only a few of these features are relevant to any particular example. In this case, using set-valued features to encode propositions can dramat- ically reduce storage space and CPU time. We will illustrate this with the problem of predicting when payment on a student loan is due (Pazzani and Brunk 1991). In Pazzani and Brunk’s formulation of this problem, the examples are 1000 labeled facts of the form no-payment-due(p), where p is a constant sym- bol denoting a student. A set of background predicates such as disabled(p) and enrolled(p, school, units) are also provided. The goal of learning is to find a logic program using these background predicates that is true only for the instances labeled “+“. Previous experiments (Cohen 1993) have shown that a first-order learning system that hypothesizes “k- local” programs performs quite well on this dataset. It is also a fact that any non-recursive logic pro- gram that is “k-local” can be emulated by a mono- tone DNF over a certain set of propositions (Cohen 1994). 
The set of propositions is typically large but polynomial in many parameters of the problem, including the number of background predicates and the number of examples.[3] For the student loan problem with k = 2, for instance, some examples of the propositions generated would be p_i(A) ≡ true iff ∃B: enlist(A,B) ∧ peace_corps(B) and p_j(A) ≡ true iff ∃B: longest_absence_from_school(A,B) ∧ lt(B,4). Often, however, relatively few of these propositions are true for any given example. This suggests using a set-valued feature to encode, for a given example, the set of all constructed propositions which are true of that example.

We propositionalized the student loan data in this way, using set-valued features to encode the propositions generated by the k-local conversion process, for various values of k. Propositions were limited to those that satisfied plausible typing and mode constraints.

[3] It is exponential only in k (the "locality" of clauses) and the arity of the background predicates.

                     RIPPER             Grendel2
Bias      m      Time   Error(%)     Time   Error(%)
2-local   100     0.4      ...       10.3      2.8
          500     2.0      ...       41.0      0.0
4-local   100     1.9      3.5       88.8      2.7
          500     9.1      0.0      376.0      0.0

Table 1: k-local bias: direct vs. set-valued feature implementations on Pazzani and Brunk's student loan prediction. The column labeled m lists the number of training examples. CPU times are on a Sun SPARCstation 20/60 with 96Mb of memory.

We then ran the set-valued version of RIPPER on this data, and compared to Grendel2 (Cohen 1993) configured so as to directly implement the k-local bias. Since there is no noise in the data, RIPPER's pruning algorithm was disabled; hence the learning system being investigated here is really a set-valued extension of propositional FOIL. Also, only monotone set-valued tests were allowed, since monotone DNF is enough to emulate the k-local bias. For each number of training examples m given, we report the average of 20 trials.
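The set-valued encoding used in this experiment can be sketched as follows. This is our own toy illustration: the background facts and the generated proposition are invented stand-ins for the k-local propositions, not the actual student-loan data.

```python
# Invented background facts, loosely echoing the text's example.
ENLIST = {("ann", "navy"), ("bob", "pcorp")}
PEACE_CORPS = {"pcorp"}

def true_propositions(person):
    """Return the set of generated propositions true of one example."""
    props = set()
    for a, b in ENLIST:
        if a == person and b in PEACE_CORPS:
            props.add("exists B: enlist(A,B) and peace_corps(B)")
    return props

# Each student becomes a vector with one set-valued feature holding
# exactly the propositions true of that student.
examples = {p: {"props": true_propositions(p)} for p in ("ann", "bob")}
```

Because only the propositions true of an example are stored, storage stays proportional to the data rather than to the full proposition space.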
(In each trial a randomly selected m examples were used for training, and the remainder were used for testing.)

The results are shown in Table 1. None of the differences in error rates are statistically significant; this is expected, since the learning algorithms are virtually identical. However, the set-valued RIPPER is substantially faster than the first-order system Grendel2. The speedup in learning time would more than justify the cost of converting to propositional form, if any moderately substantial cross-validation experiment were to be carried out;[4] for the larger problems even a single learning run is enough to justify the use of set-valued RIPPER. (Additionally, one would expect that RIPPER would show an improvement in error rate on a noisy dataset, since Grendel2 does not include any pruning mechanisms.)

In this case the number of propositional features can be bounded independently of the number of examples. However, other first-order learning systems such as FOIL (Quinlan 1990b) and Progol (Muggleton 1995) allow constant values to appear in learned clauses, where the constant values are derived from the actual training data. If such a first-order language were propositionalized, then this would certainly lead to a number of features linear in the number of examples, causing quadratic growth in the size of the propositionalized dataset.

Text categorization

Many tasks, such as e-mail filtering and document routing, require the ability to classify text into predefined

[4] The time required to convert to propositional form is 35 seconds for k = 2 and 231 seconds for k = 4. A total of 139 propositions are generated for k = 2 and 880 for k = 4.
categories. Because of this, learning how to classify documents is an important problem.

             Rocchio                     RIPPER
Domain      #errors  recall  precis   #errors  recall  precis    time
bonds         31.00   50.00   96.77     34.00   46.67   93.33    1582
boxoffice     26.00   52.38   78.57     20.00   64.29   84.38    2249
budget       170.00   35.53   61.95    159.00   32.99   70.65    2491
burma         46.00   55.91   91.23     33.00   69.89   92.86    2177
dukakis      107.00    0.00  100.00    112.00   17.76   44.19    3593
hostages     212.00   37.72   55.13    206.00   44.30   56.11    4795
ireland      106.00   32.48   58.46     97.00   27.35   72.73    1820
nielsens      49.00   52.87   85.19     35.00   72.41   85.14   10513
quayle        73.00   81.20   69.23     65.00   87.22   70.73    2416
average       91.11   44.23   77.39     84.56   51.43   74.46  3652.50

Table 2: RIPPER and Rocchio's algorithm on AP titles with full sample

In most text categorization methods used in the information retrieval community, a document is treated as an unordered "bag of words"; typically a special-purpose representation is adopted to make this efficient. For shorter documents a "set of words" is a good approximation of this representation. This suggests representing documents with a single set-valued feature, the value of which is the set of all words appearing in the document.

Traditional feature-vector based symbolic learning methods such as decision tree and rule induction can be and have been applied to text categorization (Lewis and Ringuette 1994; Lewis and Catlett 1994; Apté et al. 1994; Cohen 1995b). A number of representations for symbolic learning methods have been explored, but generally speaking, features correspond to words or phrases. Since the number of distinct words that appear in a natural corpus is usually large, it is usually necessary for efficiency reasons to select a relatively small set of words to use in learning.

An advantage of the set-valued representation is that it allows learning methods to be applied without worrying about feature selection (at least for relatively short documents).
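The set-of-words encoding just described is nearly a one-liner. The sketch below is ours, but the preprocessing (lowercasing and punctuation removal) matches what the text reports for the AP corpus.

```python
import string

def set_of_words(text):
    """Encode a document as a single set-valued feature: its words,
    lowercased, with punctuation stripped."""
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

words = set_of_words("Senate Approves Federal Budget, Again")
# A rule condition such as "budget in words" can now be tested directly.
```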
We note that the feature selection process can be complex; for instance one set of authors (Apt6 et al. 1994) d evoted four pages of a paper to explaining the feature selection process, as compared to five pages to explaining their rule induction program. It is also sometimes the case that the number of features must be limited for efficiency reasons to fewer than would be optimal. For instance, Lewis and Ringuette (1994) report a case in which the performance of a decision tree learning method continued to improve as the num- ber of features was increased from 1 to 90; presumably on this problem still more features would lead to still better performance. The following section describes an evaluation of the set-valued version of RIPPER on text categorization problems. The text categorization problems The bench- mark we will use is a corpus of AP newswire head- lines, tagged as being relevant or irrelevant to topics like “federal budget” and “Neilsens ratings” (Lewis and 714 Learning Learner #errors recall precision FP=I Rocchio 91.11 44.23 77.39 0.52 Prob. class. 0.41 RIPPER w/ negation 86.00 60.12 72.26 0.64 RIPPER all words 84.56 51.43 74.46 0.59 10,000 words 85.11 51.61 73.62 0.59 5,000 words 85.22 50.95 73.84 0.59 1,000 words 85.56 49.64 74.17 0.58 500 words 86.67 50.72 72.51 0.58 10 words 87.78 52.80 72.07 0.59 50 words 91.78 44.39 73.17 0.52 10 words 98.56 35.12 72.06 0.41 5 words 109.33 17.94 85.61 0.23 1 word 118.22 0 100.00 0.00 Table 3: Effect of entropy-driven feature selection. Gale 1994; Lewis and Catlett 1994). The corpus con- tains 319,463 documents in the training set and 51,991 documents in the test set. The headlines are an av- erage of nine words long, with a total vocabulary is 67,331 words. No preprocessing of the text was done, other than to convert all words to lower case and re- move punctuation. In applying symbolic learning system to this problem, it is natural to adopt a characteristic vector version of the set-of-words representation--i. 
e., to construct for each word w one boolean feature which is true for a document d iff w appears in d. This representation is not practical, however, because of the size of the dataset: Lewis and Catlett (1994) estimated that storing all 319,463 training instances and all 67,331 possible word-features would require 40 gigabytes of storage. However, the set-valued extension of RIPPER can be easily run on samples of this size. Table 2 summarizes monotone RIPPER's performance, averaged across nine of the ten categories, and compares this to a learning algorithm that uses a representation optimized for text: Rocchio's algorithm, which represents a document with term frequency/inverse document frequency weights (TF-IDF). The implementation used here follows Ittner et al. (1995).[5] Although both algorithms are attempting to minimize errors on the test set, we also record the widely used measurements of recall and precision.[6] RIPPER achieves fewer errors than Rocchio on 7 of the 9 categories, and requires a reasonable amount of time (given the size of the training set).

Table 3 gives some additional points of reference on this benchmark. All entries in the table are averages over all nine problems (equally weighted). So that we can compare with earlier work, we also record the value of the F-measure (Van Rijsbergen 1979, pages 168-176) at \beta = 1. The F-measure is defined as

F_\beta = \frac{(\beta^2 + 1) \cdot \mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}

where \beta controls the importance given to precision relative to recall. A value of \beta = 1 corresponds to equal weighting of precision and recall, with higher scores indicating better performance. The first few rows of the table show the average performance of Rocchio's algorithm, a probabilistic classifier used by Lewis and Gale (1994), and non-monotone RIPPER (i.e., RIPPER when tests of the form e ∉ S are allowed).

So far, we have demonstrated that good performance can be obtained without using feature selection by using set-valued features. We will now make a stronger claim: that feature selection is actually harmful in this domain. The final rows of Table 3 show the performance of monotone RIPPER when feature selection is applied. We used the strategy employed by Lewis and Ringuette (1994) and also Apté et al. (1994) in a similar context: in each learning problem the mutual information of each word and the class was computed, and the k words that scored highest were retained as features. In our experiments, we implemented this by removing the low-information words from the sets that represent examples. Aside from efficiency issues, this is equivalent to using the k retained words as binary features; however, by using set-valued features we were able to explore a much wider range of values of k than would otherwise be possible.

To summarize the results, although around 100 features does give reasonably good performance, more features always lead to better average performance (as measured by error rate). This result might be unexpected if one were to think in terms of the 66,197-component characteristic vector that is used for these problems: one would think that feature selection would surely be beneficial in such a situation. However, the result is unsurprising in light of the formal results. Because the documents to be classified are short (averaging only nine words long), the VC-dimension of the hypothesis space is already quite small. Put another way, a powerful type of "feature selection" has already been performed, simply by restricting the classification problem from complete documents to the much shorter headlines of documents, as a headline is by design a concise and informative description of the contents of the document.

Other results. Although space limitations preclude a detailed discussion, experiments have also been performed (Cohen and Singer 1996) with another widely-used benchmark, the Reuters-22173 dataset (Lewis 1992). Compared to the AP titles corpus, this corpus has fewer examples, more categories, and longer documents. The stories in the Reuters-22173 corpus average some 78 words in length, not including stopwords. The vocabulary size is roughly comparable, with 28,559 words appearing in the training corpus. Although the longer documents have a larger effective dimensionality, set-valued RIPPER without feature selection also seems to achieve good performance on this dataset.

[5] Very briefly, each document is represented as a (sparse) vector, the components of which correspond to the words that appear in the training corpus. For a document d, the value of the component for the word wi depends on the frequency of wi in d, the inverse frequency of wi in the corpus, and the length of d. Learning is done by adding up the vectors corresponding to the positive examples of a class C and subtracting the vectors corresponding to the negative examples of C, yielding a "prototypical vector" for class C. Document vectors can then be ranked according to their distance to the prototype. A novel document will be classified as positive if this distance is less than some threshold tC. In the experiments, tC was chosen to minimize error on the training set.

[6] Recall is the fraction of the time that an actual positive example is predicted to be positive by the classifier, and precision is the fraction of the time that an example predicted to be positive is actually positive. We define the precision of a classifier that never predicts positive to be 100%.
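For concreteness, the recall, precision, and F-measure definitions above can be written as a small sketch. The function names are ours, not the paper's; the never-predicts-positive convention follows footnote 6:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from true-positive, false-positive,
    and false-negative counts.  A classifier that never predicts positive
    (tp + fp == 0) is assigned precision 100% by convention."""
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

def f_measure(precision, recall, beta=1.0):
    """Van Rijsbergen's F-measure:
    F_beta = ((beta^2 + 1) * P * R) / (beta^2 * P + R).
    beta = 1 weights precision and recall equally; large beta approaches
    recall alone."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return ((b2 + 1.0) * precision * recall) / (b2 * precision + recall)
```

With beta = 1 this reduces to the harmonic mean of precision and recall, matching the equal-weighting case described in the text.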
For instance, following the methodology of Apté et al., RIPPER's "micro-averaged breakeven point" for this benchmark is 80.9%, slightly better than the best reported value of 80.5% for SWAP-1; following the methodology of Lewis and Ringuette (1994), a micro-averaged breakeven point of 71.9% was obtained, again bettering the best previously reported value of 67%. Set-valued RIPPER averages a little over 5 minutes of CPU time to learn from the 15,674-example training sets used by Lewis.

Conclusions

The feature vector representation traditionally used by machine learning systems enjoys the practically important advantages of efficiency and simplicity. In this paper we have explored several properties of set-valued features, an extension to the feature-vector representation that largely preserves these two advantages.

We showed that virtually all top-down algorithms for learning decision trees and rules can be easily extended to set-valued features. We also showed that set-valued features are closely related to a formal model that allows an unbounded number of boolean attributes. Using this connection and existing formal results, we argued that the sample complexity of set-valued feature learners should be comparable to that of traditional learners with comparably sized examples.

Finally, we demonstrated that two important classes of problems lend themselves naturally to set-valued features: problems derived by propositionalizing first-order representations, and text categorization problems. In each case the use of set-valued features leads to a reduction in memory usage that can be as great as quadratic. This dramatic reduction in memory makes it possible to apply set-valued symbolic learners to large datasets, ones that would require tens of thousands of features if traditional representations were used, without having to perform feature selection.

References

Kamal Ali and Michael Pazzani.
HYDRA: A noise-tolerant relational concept learning algorithm. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, Chambéry, France, 1993.

Chidanand Apté, Fred Damerau, and Sholom M. Weiss. Automated learning of decision rules for text categorization. ACM Transactions on Information Systems, 12(3):233-251, 1994.

Avrim Blum. Learning boolean functions in an infinite attribute space. Machine Learning, 9(4):373-386, 1992.

L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, Belmont, CA, 1984.

Wray Buntine. Generalized subsumption and its application to induction and redundancy. Artificial Intelligence, 36(2):149-176, 1988.

Jason Catlett. Megainduction: a test flight. In Proceedings of the Eighth International Workshop on Machine Learning, Ithaca, New York, 1991. Morgan Kaufmann.

P. Clark and T. Niblett. The CN2 induction algorithm. Machine Learning, 3(1), 1989.

William W. Cohen and Haym Hirsh. Learning the CLASSIC description logic: Theoretical and experimental results. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference (KR94). Morgan Kaufmann, 1994.

William W. Cohen and Yoram Singer. Context-sensitive learning methods for text categorization. To appear in SIGIR-96, 1996.

William W. Cohen. Rapid prototyping of ILP systems using explicit bias. In Proceedings of the 1993 IJCAI Workshop on Inductive Logic Programming, Chambéry, France, 1993.

William W. Cohen. Pac-learning nondeterminate clauses. In Proceedings of the Eleventh National Conference on Artificial Intelligence, Seattle, WA, 1994.

William W. Cohen. Fast effective rule induction. In Machine Learning: Proceedings of the Twelfth International Conference, Lake Tahoe, California, 1995. Morgan Kaufmann.

William W. Cohen. Text categorization and relational learning. In Machine Learning: Proceedings of the Twelfth International Conference, Lake Tahoe, California, 1995. Morgan Kaufmann.

Sašo Džeroski, Stephen Muggleton, and Stuart Russell. Pac-learnability of determinate logic programs. In Proceedings of the 1992 Workshop on Computational Learning Theory, Pittsburgh, Pennsylvania, 1992.

David J. Ittner, David D. Lewis, and David D. Ahn. Text categorization of low quality images. In Symposium on Document Analysis and Information Retrieval, pages 301-315, Las Vegas, NV, 1995. ISRI, Univ. of Nevada, Las Vegas.

Michael Kearns and Umesh Vazirani. An Introduction to Computational Learning Theory. The MIT Press, Cambridge, Massachusetts, 1994.

Nada Lavrač and Sašo Džeroski. Background knowledge and declarative bias in inductive concept learning. In K. P. Jantke, editor, Analogical and Inductive Inference: International Workshop AII'92. Springer Verlag, Dagstuhl Castle, Germany, 1992. Lecture Notes in Artificial Intelligence #642.

Nada Lavrač and Sašo Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, Chichester, England, 1994.

David Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning: Proceedings of the Eleventh Annual Conference, New Brunswick, New Jersey, 1994. Morgan Kaufmann.

David Lewis and William Gale. Training text classifiers by uncertainty sampling. In Seventeenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1994.

David Lewis and Mark Ringuette. A comparison of two learning algorithms for text categorization. In Symposium on Document Analysis and Information Retrieval, Las Vegas, Nevada, 1994.

David Lewis. Representation and learning in information retrieval. Technical Report 91-93, Computer Science Dept., University of Massachusetts at Amherst, 1992. PhD Thesis.

Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4), 1988.

Katharina Morik. A bootstrapping approach to conceptual clustering. In Proceedings of the Sixth International Workshop on Machine Learning, Ithaca, New York, 1989. Morgan Kaufmann.

Stephen H. Muggleton, editor. Inductive Logic Programming. Academic Press, 1992.

Stephen Muggleton. Inverse entailment and Progol. New Generation Computing, 13(3,4):245-286, 1995.

Michael Pazzani and Clifford Brunk. Detecting and correcting errors in rule-based expert systems: an integration of empirical and explanation-based learning. Knowledge Acquisition, 3:157-173, 1991.

Michael Pazzani. Creating a Memory of Causal Relationships. Lawrence Erlbaum, 1990.

J. Ross Quinlan. Induction of decision trees. Machine Learning, 1(1), 1986.

J. Ross Quinlan. Learning logical definitions from relations. Machine Learning, 5(3), 1990.

J. Ross Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

C. J. Van Rijsbergen. Information Retrieval. Butterworth, London, second edition, 1979.
Lazy Decision Trees

Jerome H. Friedman
Statistics Department and Stanford Linear Accelerator Center
Stanford University, Stanford, CA 94305
jhf@playfair.stanford.edu

Ron Kohavi
Data Mining and Visualization, Silicon Graphics, Inc.
2011 N. Shoreline Blvd, Mountain View, CA 94043-1389
ronnyk@sgi.com

Yeogirl Yun
Electrical Engineering Department
Stanford University, Stanford, CA 94305
yygirl@cs.stanford.edu

Abstract

Lazy learning algorithms, exemplified by nearest-neighbor algorithms, do not induce a concise hypothesis from a given training set; the inductive process is delayed until a test instance is given. Algorithms for constructing decision trees, such as C4.5, ID3, and CART, create a single "best" decision tree during the training phase, and this tree is then used to classify test instances. The tests at the nodes of the constructed tree are good on average, but there may be better tests for classifying a specific instance. We propose a lazy decision tree algorithm, LazyDT, that conceptually constructs the "best" decision tree for each test instance. In practice, only a path needs to be constructed, and a caching scheme makes the algorithm fast. The algorithm is robust with respect to missing values without resorting to the complicated methods usually seen in induction of decision trees. Experiments on real and artificial problems are presented.

Introduction

Delay is preferable to error. - Thomas Jefferson (1743-1826)

The task of a supervised learning algorithm is to build a classifier that can be used to classify unlabelled instances accurately. Eager (non-lazy) algorithms construct classifiers that contain an explicit hypothesis mapping unlabelled instances to their predicted labels. A decision tree classifier, for example, uses a stored decision tree to classify instances by tracing the instance through the tests at the interior nodes until a leaf containing the label is reached.
In eager algorithms, the inductive process is attributed to the phase that builds the classifier. Lazy algorithms (Aha, to appear), however, do not construct an explicit hypothesis, and the inductive process can be attributed to the classifier, which is given access to the training set, possibly preprocessed (e.g., data may be normalized). No explicit mapping is generated, and the classifier must use the training set to map each given instance to its label. (A longer version of this paper is available at http://robotics.stanford.edu/~ronnyk.)

Building a single classifier that is good for all predictions may not take advantage of special characteristics of the given test instance that may give rise to an extremely short explanation tailored to the specific instance at hand (see Example 1).

In this paper, we introduce a new lazy algorithm, LazyDT, that conceptually constructs the "best" decision tree for each test instance. In practice, only a path needs to be constructed, and a caching scheme makes the algorithm fast. Practical algorithms need to deal with missing values, and LazyDT naturally handles them without resorting to the complicated methods usually seen in induction of decision trees (e.g., sending portions of instances down different branches or using surrogate features).

Decision Trees and Their Limitations

Top-down algorithms for inducing decision trees usually follow the divide and conquer strategy (Quinlan 1993; Breiman et al. 1984). The heart of these algorithms is the test selection, i.e., which test to conduct at a given node. Numerous selection measures exist in the literature, with entropy measures and the Gini index being the most common.

We now detail the entropy-based selection measure commonly used in ID3 and its descendants (e.g., C4.5) because the LazyDT algorithm uses a related measure. We will then discuss some of the limitations of eager decision tree algorithms and motivate our lazy approach.
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Test Selection in Decision Trees

To describe the entropy-based selection measure, we follow the notation of Cover & Thomas (1991). Let Y be a discrete random variable with range \mathcal{Y}; the entropy of Y, sometimes called the information of Y, is defined as

H(Y) = -\sum_{y \in \mathcal{Y}} p(y) \log p(y)    (1)

where 0 log 0 = 0 and the base of the log is usually two, so that entropy is expressed in bits. The entropy is always non-negative and measures the amount of uncertainty of the random variable Y. It is bounded by \log |\mathcal{Y}|, with equality only if Y is uniformly distributed over \mathcal{Y}.

The conditional entropy of a variable Y given another variable X is the expected value of the entropies of the conditional distributions, averaged over the conditioning random variable:

H(Y \mid X) = \sum_{x \in \mathcal{X}} p(x) \, H(Y \mid X = x)    (2)
            = -\sum_{x \in \mathcal{X}} p(x) \sum_{y \in \mathcal{Y}} p(y \mid x) \log p(y \mid x)    (3)
            = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(y \mid x).    (4)

Note that H(Y | X) ≠ H(X | Y).

The mutual information of two random variables Y and X, sometimes called the information gain of Y given X, measures the relative entropy between the joint distribution and the product distribution:

I(Y; X) = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p(x, y) \log \frac{p(x, y)}{p(x) \, p(y)}    (5)
        = H(Y) - H(Y \mid X).    (6)

The mutual information is symmetric, i.e., I(Y; X) = I(X; Y), and non-negative (Cover & Thomas 1991). As can be seen from Equation 6, the mutual information measures the reduction in uncertainty in Y after observing X. Given a set of instances, the above quantities can be computed by using the empirical probabilities, with the variable Y representing the class labels and X a given feature variable.

The test selection step of common decision tree algorithms is implemented by testing the mutual information (or a similar measure) for each feature X with the class label Y and picking the one with the highest value (highest information gain).

Many eager decision tree algorithms, such as C4.5 and CART, have a post-processing step that prunes the tree to avoid overfitting. The reader is referred to Quinlan (1993) and Breiman et al. (1984) for the two most common pruning mechanisms.

Problems with Decision Trees

The problems with decision trees can be divided into two categories: algorithmic problems that complicate the algorithm's goal of finding a small tree, and inherent problems with the representation.

Top-down decision-tree induction algorithms implement a greedy approach that attempts to find a small tree. All the common selection measures are based on one level of lookahead.

Two related problems inherent to the representation structure are replication and fragmentation (Pagallo & Haussler 1990). The replication problem forces duplication of subtrees in disjunctive concepts, such as (A ∧ B) ∨ (C ∧ D) (one subtree, either A ∧ B or C ∧ D, must be duplicated in the smallest possible decision tree); the fragmentation problem causes partitioning of the data into smaller fragments. Replication always implies fragmentation, but fragmentation may happen without any replication if many features need to be tested. For example, if the data splits approximately equally on every split, then a univariate decision tree cannot test more than O(log n) features. This puts decision trees at a disadvantage for tasks with many relevant features.

A third problem inherent to the representation is the ability to deal with missing values (unknown values). The correct branch to take is unknown if a tested feature is missing, and algorithms must employ special mechanisms to handle missing values. In order to reduce the occurrences of tests on missing values, C4.5 penalizes the information gain by the proportion of unknown instances and then splits these instances to both subtrees. CART uses a much more complex scheme of surrogate features. Friedman estimated that about half the code in CART and about 80% of the programming effort went into missing values!
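As a minimal illustrative sketch (ours, not the paper's code), the quantities of Equations 1-6 can be computed from empirical frequencies as follows:

```python
import math
from collections import Counter

def entropy(labels):
    """H(Y) = -sum_y p(y) log2 p(y), in bits (Equation 1),
    using empirical frequencies."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(labels, feature):
    """H(Y|X): entropy of Y within each X-group, weighted by the
    empirical p(x) (Equations 2-4)."""
    n = len(labels)
    groups = {}
    for y, x in zip(labels, feature):
        groups.setdefault(x, []).append(y)
    return sum((len(g) / n) * entropy(g) for g in groups.values())

def information_gain(labels, feature):
    """Mutual information I(Y;X) = H(Y) - H(Y|X) (Equation 6)."""
    return entropy(labels) - conditional_entropy(labels, feature)
```

A perfectly predictive feature yields a gain equal to H(Y), while a feature independent of the class yields zero gain, matching the non-negativity and reduction-in-uncertainty interpretation above.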
The current implementation of our LazyDT algorithm does no pruning because there is no simple analog between pruning in lazy decision trees and pruning in ordinary decision trees.

Lazy Decision Trees

We now introduce LazyDT, a lazy algorithm for inducing decision trees. We begin with general motivation and compare the advantages and disadvantages of the lazy construction of decision trees to that of the eager approach. We then describe the specific algorithmic details and the caching scheme that is used to speed up classification.

Motivation

A single decision tree built from the training set is making a compromise: the test at the root of each subtree is chosen to be the best split on average. Common feature selection criteria, such as mutual information and the Gini index, average the purity of the children by the proportions of instances in those children. Entropy measures used in C4.5 and ID3 are guaranteed to decrease on average (i.e., the information gain is non-negative), but the entropy of a specific child may not change or may increase. A single tree built in advance can lead to many irrelevant splits for a given test instance, thus fragmenting the data unnecessarily. Such fragmentation reduces the significance of tests at lower levels since they are based on fewer instances. A decision tree built for the given instance can avoid splits on features that are irrelevant for the specific instance.

Example 1. Suppose a domain requires one to classify patients as sick or healthy. A Boolean feature denoting whether a person is HIV positive is extremely relevant. (For this example we will assume that such persons should be classified as sick.)
Even though all instances having HIV-positive set to true have the same class, a decision tree is unlikely to make the root test based on this feature because the proportion of these instances is so small; the conditional (or average) entropy of the two children of a test on the HIV-positive feature will not be much different from the parent, and hence the information gain will be small. It is therefore likely that the HIV-positive instances will be fragmented throughout the nodes in the tree. Moreover, many branches that contain such instances will need to branch on the HIV-positive feature lower down the tree, resulting in the replication of tests.

The example leads to an interesting observation: trees, or rather classification paths, built for a specific test instance may be much shorter and hence may provide a short explanation for the classification. A person that is healthy might be explained by a path testing fever, blood-cell counts, and a few other features that fall within the normal ranges. A person might be classified as sick with the simple explanation that he or she is HIV positive.

Another advantage of lazy decision trees is the natural way in which missing values are handled. Missing feature values require special handling by decision tree classifiers, but a decision tree built for the given instance simply need never branch on a value missing in that instance, thus avoiding unnecessary fragmentation of the data.

The Framework for Lazy Decision Trees

We now describe the general framework for a lazy decision tree classifier and some pitfalls associated with using common selection measures, such as mutual information or the Gini index. We assume the data has been discretized and that all features are nominal.

Input: A training set T of labelled instances and an unlabelled instance I to classify.
Output: A label for instance I.

1. If T is pure, i.e., all instances in T have label ℓ, return ℓ.
2. Otherwise, if all instances in T have the same feature values, return the majority class in T.
3. Otherwise, select a test X and let x be the value of the test on the instance I. Assign the set of instances with X = x to T and apply the algorithm to T.

Figure 1: The generic lazy decision tree algorithm.

As with many lazy algorithms, the first part of the induction process (i.e., building a classifier) is non-existent; all the work is done during the classification of a given instance.

The lazy decision tree algorithm, which gets the test instance as part of the input, follows a separate-and-classify methodology: a test is selected, and the subproblem containing the instances with the same test outcome as the given instance is then solved recursively. The overall effect is that of tracing a path in an imaginary tree made specifically for the test instance. Figure 1 shows a generic lazy decision tree algorithm.

The heart of the algorithm is the selection of a test to conduct at each recursive call. Common measures used in decision tree algorithms usually indicate the average gain in information after the outcome of the chosen test is taken into account. Because the lazy decision tree algorithm is given extra information, namely the (unlabelled) test instance, one would like to use that information to choose the appropriate test.

The simplest approach that comes to mind is to find the test that maximally decreases the entropy for the node our test instance would branch to, and define the information gain to be the difference between the two entropies. There are two problems with this approach. The first is that the information gain can be negative, in which case it is not clear what to do with negative gains. If class A is dominant but class B is the correct class, then it may be necessary for the created path to go through a node with equal frequencies before class B becomes the majority class.
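The generic algorithm of Figure 1 can be sketched as below. For simplicity, this sketch selects tests by plain information gain over multi-way branches; the actual LazyDT uses the class-normalized measure and binary splits described later in the paper. All names here are illustrative, not the authors' code:

```python
import math
from collections import Counter

def _entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def _gain(labels, feature_column):
    n = len(labels)
    groups = {}
    for y, x in zip(labels, feature_column):
        groups.setdefault(x, []).append(y)
    return _entropy(labels) - sum(len(g) / n * _entropy(g) for g in groups.values())

def lazy_classify(train, labels, instance, used=frozenset()):
    """Figure 1: grow only the path that the given test instance follows."""
    counts = Counter(labels)
    if len(counts) == 1:                       # step 1: T is pure
        return labels[0]
    candidates = [f for f in range(len(instance))
                  if f not in used and len({x[f] for x in train}) > 1]
    if not candidates:                         # step 2: identical feature values
        return counts.most_common(1)[0][0]
    # step 3: select a test, branch the way the instance would, and recurse
    best = max(candidates, key=lambda f: _gain(labels, [x[f] for x in train]))
    keep = [i for i, x in enumerate(train) if x[best] == instance[best]]
    if not keep:                               # no matching branch; fall back to majority
        return counts.most_common(1)[0][0]
    return lazy_classify([train[i] for i in keep], [labels[i] for i in keep],
                         instance, used | {best})
```

On an XOR-style dataset, which a single greedy eager tree handles poorly, the path built per instance still reaches the correct label because the second test is chosen after the first branch has been taken.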
This means that avoiding splits on features that have negative gain is a mistake. A second, related problem is that only the frequencies are taken into account, not the actual classes. If the parent node has 80% class A and 20% class B, and the child node has 80% class B and 20% class A, then there will be no information gain (the entropy will be the same), but the feature tested at the parent is clearly relevant.

In light of these problems, we normalize the class probabilities at every node by re-weighting the instances such that each class has equal weight. The normalization scheme solves both problems. The entropy of the re-weighted parent node will be log k, where k is the number of classes (see Equation 1 and the text following it). The normalization implies that the information gain will always be positive and that the 80%/20% split described above will have large information gain.

The LazyDT Algorithm

We now describe the exact details of the LazyDT algorithm, including the way it handles continuous features and the caching scheme used to speed the classification.

Since the LazyDT algorithm described is only capable of processing nominal features, the training set is first discretized (Dougherty, Kohavi, & Sahami 1995). We chose to discretize the instances using recursive minimization of entropy as proposed by Fayyad & Irani (1993) and as implemented in MLC++ (Kohavi et al. 1994), which is publicly available and thus allows replication of this discretization step. The exact details are unimportant for this paper.

We considered two univariate test criteria. The first is similar to that of C4.5 (i.e., a multi-way split). The second is a binary split on a single value. To avoid fragmentation as much as possible, we chose the second method and have allowed splitting on any feature value that is not equal to the instance's value.
For example, if the instance has feature A with value a and the domain of A is {a, b, c}, then we allow a split on A = b (two branches, one for equality, one for non-equality) and a split on A = c. For non-binary features, this splitting method makes more splits, but the number of instances that are split off each time is smaller.

Missing feature values are naturally handled by considering only splits on feature values that are known in the test instance. Training instances with unknowns filter down and are excluded only when their value is unknown for a given test in a path. Avoiding any tests on unknown values is the correct thing to do probabilistically, assuming the values are truly unknown (as opposed to unknown because there was a reason for not measuring them).

The LazyDT algorithm proceeds by splitting the instances on tests at nodes as described in the previous section. Because we found that there are many ties between features with very similar information gains, we call the algorithm recursively for all features with information gains higher than 90% of the highest gain achievable. The recursive call that returns with the highest number of instances in the majority class of a leaf node that was reached makes the final prediction (ties from the recursive calls are broken arbitrarily).

As defined, the algorithm is rather slow. For each instance, all splits must be considered (a reasonably fast process if the appropriate counters are kept), but each split then takes time proportional to the number of training instances that filtered to the given node. This implies that the time complexity of classifying a given instance is O(m·n·d) for m instances, n features, and a path of depth d. If we make the reasonable assumption that at least some fixed fraction of the instances are removed at each split (say 10%), then the time complexity is O(m·n).
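Two of the details above lend themselves to short sketches (hypothetical helper names, assuming instances are tuples of nominal values): the class normalization that fixes the re-weighted parent entropy at log2 k, and the enumeration of binary split candidates that skips the instance's own value and any missing values:

```python
import math
from collections import Counter

def weighted_entropy(labels, weights):
    """Entropy (in bits) of a label multiset with per-instance weights."""
    total = sum(weights)
    mass = Counter()
    for y, w in zip(labels, weights):
        mass[y] += w
    return -sum((m / total) * math.log2(m / total) for m in mass.values() if m > 0)

def class_normalized_weights(labels):
    """Re-weight instances so every class carries equal total weight; the
    re-weighted entropy of the node is then exactly log2(k) for k classes,
    which keeps the gain of any split positive."""
    counts = Counter(labels)
    k = len(counts)
    return [1.0 / (k * counts[y]) for y in labels]

def candidate_splits(train, instance, missing=None):
    """Binary split candidates 'feature f = value v' for every training
    value v that differs from the instance's own value; features whose
    value is missing in the test instance are never tested."""
    splits = []
    for f, own in enumerate(instance):
        if own == missing:
            continue
        for v in sorted(set(x[f] for x in train)):
            if v != own:
                splits.append((f, v))  # branches: X_f == v  vs.  X_f != v
    return splits
```

For an 80%/20% two-class node, the normalized weights give each class total weight 1/2, so the weighted entropy is exactly 1 bit (log2 of the number of classes), as the text states.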
In order to speed up the process in practice, we cache the information gains and create lists of pointers to instances, representing the sets of instances that filter to each node. After a few instances have been classified, commonly used paths already exist, and the calculations need not be repeated, especially at higher levels. The caching scheme was found to be very efficient time-wise, but it consumes a lot of memory.

Experiments

We now describe experiments that compare LazyDT with other algorithms for inducing decision trees.

The Algorithms and Datasets

We compare LazyDT to three algorithms: simple ID3, C4.5, and C4.5-NP. Simple ID3 is a basic top-down induction of decision trees algorithm. It selects the features based on information gain and considers unknowns to be a separate value. C4.5 (Quinlan 1993) is a state-of-the-art algorithm that penalizes multi-way splits using the gain ratio, prunes the tree, and splits every instance into multiple branches when hitting unknown values. We used the default parameter settings. C4.5-NP is C4.5 without pruning, and it is compared in order to estimate the effect of pruning. Because LazyDT does not prune, the difference between C4.5 and C4.5-NP might indicate that there is similar room for improvement for LazyDT if a pruning algorithm were added.

The datasets we use are common ones used in the literature and stored in the U.C. Irvine repository (Murphy & Aha 1996). The estimated prediction accuracy was computed by doing five-fold cross-validation for all domains except the artificial domains, where a standard training set was used and the test set was the complete instance space.

Results and Discussion

Characteristics of the datasets and accuracy results are shown in Table 1, and a graph presenting the difference in accuracies and standard deviations is shown in Figure 2.

The LazyDT algorithm is a reasonably fast algorithm.
The largest running time by far was for mushroom with 8.4 Sparc-10 CPU minutes per cross-validation fold (equivalent to a run), followed by chess with 1.59 CPU minutes. These datasets have 8124 instances and 3196 instances, respectively.

From the table we can see that simple ID3 is generally inferior, as is C4.5 without pruning. Pruning improves C4.5-NP's performance, except for a few cases. The LazyDT algorithm and C4.5 (with pruning) behave somewhat similarly, but there are some datasets that have large differences. LazyDT's average error rate is 1.9% lower, which is a relative improvement in error of 10.6% over C4.5's 17.9% average error rate. Three datasets deserve special discussion: anneal, audiology, and the monk2 problem.

Anneal is interesting because ID3 manages so well. An investigation of the problem shows that the main difference stems from the dissimilar ways in which unknown values are handled. Simple ID3 considers unknowns as a separate value, whereas C4.5 has a special mechanism for handling unknowns. In this dataset, changing the unknown values into a separate feature value improves the performance of C4.5 to 98.7%. Schaffer (1993) showed that neural nets considerably outperformed C4.5 on the anneal dataset, but we can now attribute this difference to the fact that for backpropagation Schaffer converted the unknown values to an additional discrete value.

The second file we discuss is audiology. The performance of LazyDT on this dataset is significantly lower than that of C4.5. This dataset has 69 features, 24 classes, and only 226 instances. LazyDT is likely to find a pure class on one of the features because of the small number of instances. Thus the extra flexibility to branch differently depending on the test instance hurts LazyDT in cases such as audiology.
This is a bias-variance tradeoff (Kohavi & Wolpert 1996; Geman, Bienenstock, & Doursat 1992), and to overcome such cases we would have to bias the algorithm to avoid early splits that leave only a few instances to classify the test instance.

The final file to discuss is monk2, where the performance of LazyDT is superior to that of C4.5. The monk2 problem is artificial, and the concept is that any two features (and only two) have to have their first value. Quinlan (1993) writes that "[The monk2 problem] is just plain difficult to express either as trees or as rules... all classifiers generated by the programs are very poor." While the problem is hard to represent in a univariate decision tree, the flexibility of LazyDT (which is still restricted to univariate splits) is helpful here. The root test in the examined runs indeed tends to pick a feature whose value is not equal to the first value and thus separate those instances from the rest.

Missing Values

To test the robustness of LazyDT to missing values, we added noise to the datasets. The "noise process" changes each feature value to unknown with 20% probability. The average accuracy over all the datasets changed as follows: ID3's accuracy decreased to 68.22%; C4.5's accuracy decreased to 77.10%; and LazyDT's accuracy decreased to 77.81%.

Some of the biggest differences between the accuracy on the original datasets and the corrupted datasets occur on the artificial datasets monk1, monk2, and monk3, and the pseudo-artificial datasets tic-tac-toe and chess. Hayes-roth and glass2 also have large differences, probably because they have many strongly relevant features and few weakly relevant features (John, Kohavi, & Pfleger 1994). If we ignore the artificial problems, the average accuracy for LazyDT on the datasets without missing values is 82.15%, and the accuracy on the datasets with 20% missing values is 78.40%.
Thus there is less than a 4% reduction in performance when 20% of the feature values are missing.

With many missing values pruning may be important, but our current implementation of LazyDT does no pruning. For example, the worst difference in accuracy on the corrupted datasets is on the breast (L) dataset: LazyDT overfits the data and has accuracy of 61.54%, while majority is 70.28%.

Related Work

Most work on lazy learning was motivated by nearest-neighbor algorithms (Dasarathy 1990; Wettschereck 1994; Aha, to appear). LazyDT was motivated by Friedman (1994), who defined a (separate) distance metric for a nearest-neighbor classifier based on the respective relevance of each feature for classifying each particular test instance. Subsequently, Hastie & Tibshirani (1995) proposed using local linear discriminant methods to define nearest-neighbor metrics. Both these methods are intended for continuous features, and thus they can control the number of instances removed at each step. In contrast, the caching scheme used by LazyDT cannot be applied with these methods, and hence they are much slower.

Smyth & Goodman (1992) described the use of the J-measure, which is the inner sum of Equation 4.

Table 1: Comparison of the accuracy of simple ID3, C4.5 with no pruning, C4.5 with pruning, and LazyDT. The number after the ± indicates one standard error of the cross-validation folds. The table is sorted by difference between LazyDT and C4.5.
No.  Dataset         Features  Train  Test   Simple ID3    C4.5-NP       C4.5          LAZYDT
                               size           accuracy      accuracy      accuracy      accuracy
 1   monk-2              6      169   432     69.91±2.21    65.30±2.29    65.00±2.30    82.18±1.84
 2   monk-1              6      124   432     81.25±1.89    76.60±2.04    75.70±2.07    91.90±1.31
 3   tic-tac-toe         9      958   cv-5    84.38±2.62    85.59±1.35    84.02±1.56    93.63±0.83
 4   cleve              13      303   cv-5    63.93±6.20    72.97±2.50    73.62±2.25    81.21±3.55
 5   glass               9      214   cv-5    62.79±7.46    64.47±2.73    65.89±2.38    72.92±1.81
 6   hayes-roth          4      160   cv-5    68.75±8.33    71.25±3.03    74.38±4.24    78.75±1.53
 7   glass2              9      163   cv-5    81.82±6.82    72.97±4.05    73.60±4.06    77.92±1.11
 8   anneal             24      898   cv-5   100.00±0.00    94.10±0.45    91.65±1.60    95.77±0.87
 9   heart              13      270   cv-5    77.78±5.71    75.93±3.75    77.04±2.84    81.11±2.89
10   diabetes            8      768   cv-5    66.23±3.82    67.72±2.76    70.84±1.67    74.48±1.27
11   soybean-small      35       47   cv-5   100.00±0.00    95.56±2.72    95.56±2.72    97.78±2.22
12   labor-neg          16       57   cv-5    83.33±11.2    79.09±5.25    77.42±6.48    79.09±4.24
13   lymphography       18      148   cv-5    73.33±8.21    74.97±2.98    77.01±0.77    78.41±2.19
14   hepatitis          19      155   cv-5    67.74±8.54    76.77±4.83    78.06±2.77    79.35±2.41
15   german             24     1000   cv-5    63.00±3.43    70.20±1.70    72.30±1.37    73.50±1.57
16   pima                8      768   cv-5    67.53±3.79    69.79±1.68    72.65±1.78    73.70±1.58
17   mushroom           22     8124   cv-5   100.00±0.00   100.00±0.00   100.00±0.00   100.00±0.00
18   iris                4      150   cv-5    96.67±3.33    95.33±0.82    94.67±1.33    94.67±0.82
19   vote               16      435   cv-5    93.10±2.73    94.71±0.59    95.63±0.43    95.17±0.76
20   monk-3              6      122   432     90.28±1.43    92.60±1.26    97.20±0.80    96.53±0.88
21   chess              36     3196   cv-5    99.69±0.22    99.31±0.15    99.34±0.12    98.22±0.21
22   breast-(W)         10      699   cv-5    95.71±1.72    93.99±1.05    94.71±0.37    92.99±0.69
23   breast-(L)          9      286   cv-5    62.07±6.43    64.34±1.67    71.00±2.25    68.55±2.86
24   horse-colic        22      368   cv-5    75.68±5.02    82.88±2.15    84.78±1.31    82.07±1.79
25   australian         14      690   cv-5    78.26±3.52    82.90±1.14    85.36±0.74    81.74±1.56
26   crx                15      690   cv-5    79.71±3.44    83.62±1.35    85.80±0.99    82.03±0.87
27   vote1              15      435   cv-5    85.06±3.84    86.44±2.00    86.67±1.13    81.84±1.56
28   audiology          69      226   cv-5    80.43±5.91    76.52±3.30    78.74±3.05    66.36±1.69
     Average                                  80.30         80.93         82.09         84.00

Figure 2: The difference between the accuracy of LAZYDT and C4.5 (accuracy difference, LAZYDT - C4.5, plotted per dataset). Positive values indicate LAZYDT outperforms C4.5. Error bars indicate one standard deviation.

The J-measure can be used in LAZYDT because it was shown to be non-negative. However, initial experiments showed it was slightly inferior on the tested datasets. Perhaps our measure would be useful in systems where the J-measure is currently used (e.g., ITrule).

Holte, Acker, & Porter (1989) noted that existing inductive systems create definitions that are good for large disjuncts but are far from ideal for small disjuncts, where a disjunct is a conjunction that correctly classifies few training examples. It is hard to assess the accuracy of small disjuncts because they cover few examples, yet removing all of them without significance tests is unjustified since many of them are significant and the overall accuracy would degrade. The authors propose a selective specificity bias and present mixed results; Quinlan (1991) suggests an improved estimate that also takes into account the proportion of the classes in the context of the small disjunct. We believe that LAZYDT suffers less from the problem of small disjuncts because the training set is being "fitted" to the specific instance and hence is likely to be less fragmented. The normalization of class probabilities in LAZYDT is in line with Quinlan's suggestions (Quinlan 1991) of taking the context (the parent node in our case) into account.

Quinlan (1994) characterizes classification problems as sequential or parallel. In parallel tasks, all input features are relevant to the classification; in sequential type tasks, the relevance of features depends on the values of other features.
Quinlan conjectures that parallel type tasks are unsuitable for current univariate decision-tree methods because it is rare that there are enough instances for doing splits on all the n relevant features; similarly, he claims that sequential type tasks require inordinate amounts of learning time for backpropagation-based methods because if a feature i is irrelevant, inopportune adjustment to a weight w_ij will tend to obscure the sensible adjustments made when the feature is relevant. LAZYDT might be inferior to backpropagation and nearest-neighbor methods on some parallel tasks with many relevant features, but it should fare better than decision trees. Good examples are the monk2 and tic-tac-toe domains: all features are relevant, but if a split is to be made on all features, there will not be enough instances. LAZYDT makes the relevant splits based on the feature values in the test instance and thus fragments the data less.

Future Work

LAZYDT is a new algorithm in the arena of machine learning. The weakest point of our algorithm is the fact that it does no regularization (pruning). The australian dataset has 14 features, but the background knowledge file describes five features as the important ones. If we allow LAZYDT to use only those five features, its accuracy increases from 81.7% to 85.6%. An even more extreme case is breast-cancer (L), where removal of all features improves performance (i.e., majority is a better classifier).

Data is currently discretized in advance. The discretization algorithm seems to be doing a good job, but since it is a pre-processing algorithm, it does not take advantage of the test instance. It is possible to extend LAZYDT to decide on the threshold during classification, as in common decision tree algorithms, but the caching scheme would need to be modified.

The caching algorithm currently remembers all tree paths created, thus consuming a lot of memory for files with many features and many instances.
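A rough sketch of this kind of path caching follows (our own illustration with hypothetical names, not the MLC++ implementation; the split choice here is naive rather than entropy-based). The key idea is that many test instances follow the same sequence of splits, so the split chosen at the end of each path prefix can be memoized and shared across classifications:

```python
# Illustrative sketch of LazyDT-style path caching (hypothetical names,
# not the authors' code). A "path" is the sequence of (feature, value)
# tests taken for a test instance; the cache memoizes the split chosen
# at the end of each path prefix.
from collections import Counter

def classify(instance, data, labels, path_cache):
    """data: list of dicts mapping feature name -> value; labels: classes."""
    idx = list(range(len(data)))      # training instances still matching the path
    path = []                         # (feature, value) tests taken so far
    while idx:
        key = tuple(path)
        if key not in path_cache:     # split decision not seen before: compute it
            used = {f for f, _ in path}
            # naive split choice: first unused feature that still varies
            path_cache[key] = next((f for f in sorted(data[0])
                                    if f not in used
                                    and len({data[i][f] for i in idx}) > 1), None)
        feat = path_cache[key]
        if feat is None:
            break
        # keep only training instances that agree with the test instance
        idx = [i for i in idx if data[i][feat] == instance[feat]]
        path.append((feat, instance[feat]))
    counts = Counter(labels[i] for i in idx) or Counter(labels)
    return counts.most_common(1)[0][0]
```

Because `path_cache` is shared across calls, a second test instance that takes an already-explored path pays no split-selection cost, which is the source of both the speed and the memory consumption discussed above.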
An enhancement might be made to allow for some space-time tradeoff. In practice, of course, the caching scheme might be avoided altogether; a doctor, for example, can wait a few seconds for classification. Our experiments required hundreds of test instances to be classified for twenty-eight datasets, so caching was a necessity.

The dynamic complexity of an algorithm (Holte 1993) is the number of features used on average. An interesting experiment would be to compare the dynamic complexity of C4.5 with that of LAZYDT.

Summary

We introduced a novel lazy algorithm, LAZYDT, that can be used in supervised classification. This algorithm differs from common lazy algorithms, which are usually based on a global nearest-neighbor metric. LAZYDT creates a path in a tree that would be "best" for a given test instance, thus mitigating the fragmentation problem.

Empirical comparisons with C4.5, the state-of-the-art decision tree algorithm, show that the performance is slightly higher on the tested datasets from the U.C. Irvine repository. However, since no algorithm can outperform others in all settings (Wolpert 1994; Schaffer 1994), the fact that they exhibit different behavior on many datasets is even more important. For some datasets LAZYDT significantly outperforms C4.5 and vice-versa.

Missing feature values are naturally handled by LAZYDT with no special handling mechanisms required. Performance on corrupted data is comparable to that of C4.5, which has an extremely good algorithm for dealing with unknown values.

The algorithm is relatively fast due to the caching scheme employed, but requires a lot of memory. We believe that a space-time tradeoff should be investigated, and we hope to pursue the regularization (pruning) issue in the future.

Acknowledgments

We thank George John, Rob Holte, and Pat Langley for their suggestions. The LAZYDT algorithm was implemented using the MLC++ library, partly funded by ONR grant N00014-95-1-0669. Jerome H.
Friedman's work was supported in part by the Department of Energy under contract number DE-AC03-76SF00515 and by the National Science Foundation under grant number DMS-9403804.

References

Aha, D. W. to appear. AI Review Journal: special issue on lazy learning.

Breiman, L.; Friedman, J. H.; Olshen, R. A.; and Stone, C. J. 1984. Classification and Regression Trees. Wadsworth International Group.

Cover, T. M., and Thomas, J. A. 1991. Elements of Information Theory. John Wiley & Sons, Inc.

Dasarathy, B. V. 1990. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press, Los Alamitos, California.

Dougherty, J.; Kohavi, R.; and Sahami, M. 1995. Supervised and unsupervised discretization of continuous features. In Prieditis, A., and Russell, S., eds., Machine Learning: Proceedings of the Twelfth International Conference, 194-202. Morgan Kaufmann.

Fayyad, U. M., and Irani, K. B. 1993. Multi-interval discretization of continuous-valued attributes for classification learning. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, 1022-1027. Morgan Kaufmann Publishers, Inc.

Friedman, J. H. 1994. Flexible metric nearest neighbor classification. Technical Report 113, Stanford University Statistics Department.

Geman, S.; Bienenstock, E.; and Doursat, R. 1992. Neural networks and the bias/variance dilemma. Neural Computation 4:1-58.

Hastie, T., and Tibshirani, R. 1995. Discriminant adaptive nearest neighbor classification. Technical report, Stanford University Statistics Department.

Holte, R. C.; Acker, L. E.; and Porter, B. W. 1989. Concept learning and the problem of small disjuncts. In Proceedings of the 11th International Joint Conference on Artificial Intelligence, 813-818.

Holte, R. C. 1993. Very simple classification rules perform well on most commonly used datasets. Machine Learning 11:63-90.

John, G.; Kohavi, R.; and Pfleger, K. 1994.
Irrelevant features and the subset selection problem. In Machine Learning: Proceedings of the Eleventh International Conference, 121-129. Morgan Kaufmann.

Kohavi, R., and Wolpert, D. H. 1996. Bias plus variance decomposition for zero-one loss functions. In Saitta, L., ed., Machine Learning: Proceedings of the Thirteenth International Conference. Morgan Kaufmann Publishers, Inc. Available at http://robotics.stanford.edu/users/ronnyk.

Kohavi, R.; John, G.; Long, R.; Manley, D.; and Pfleger, K. 1994. MLC++: A machine learning library in C++. In Tools with Artificial Intelligence, 740-743. IEEE Computer Society Press.

Murphy, P. M., and Aha, D. W. 1996. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn.

Pagallo, G., and Haussler, D. 1990. Boolean feature discovery in empirical learning. Machine Learning 5:71-99.

Quinlan, J. R. 1991. Improved estimates for the accuracy of small disjuncts. Machine Learning 6:93-98.

Quinlan, J. R. 1993. C4.5: Programs for Machine Learning. Los Altos, California: Morgan Kaufmann Publishers, Inc.

Quinlan, J. R. 1994. Comparing connectionist and symbolic learning methods. In Hanson, S. J.; Drastal, G. A.; and Rivest, R. L., eds., Computational Learning Theory and Natural Learning Systems, volume I: Constraints and Prospects. MIT Press. Chapter 15, 445-456.

Schaffer, C. 1993. Selecting a classification method by cross-validation. Machine Learning 13(1):135-143.

Schaffer, C. 1994. A conservation law for generalization performance. In Machine Learning: Proceedings of the Eleventh International Conference, 259-265. Morgan Kaufmann Publishers, Inc.

Smyth, P., and Goodman, R. 1992. An information theoretic approach to rule induction from databases. IEEE Transactions on Knowledge and Data Engineering 4(4):301-316.

Wettschereck, D. 1994. A Study of Distance-Based Machine Learning Algorithms. Ph.D. Dissertation, Oregon State University.

Wolpert, D. H. 1994.
The relationship between PAC, the statistical physics framework, the Bayesian framework, and the VC framework. In Wolpert, D. H., ed., The Mathematics of Generalization. Addison Wesley.
Bagging, Boosting, and C4.5

J. R. Quinlan
University of Sydney
Sydney, Australia 2006
quinlan@cs.su.oz.au

Abstract

Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are combined by voting, bagging by generating replicated bootstrap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater benefit. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classifiers reduces this downside and also leads to slightly better results on most of the datasets considered.

Introduction

Designers of empirical machine learning systems are concerned with such issues as the computational cost of the learning method and the accuracy and intelligibility of the theories that it constructs. Much of the research in learning has tended to focus on improved predictive accuracy, so that the performance of new systems is often reported from this perspective.
It is easy to understand why this is so - accuracy is a primary concern in all applications of learning and is easily measured (as opposed to intelligibility, which is more subjective), while the rapid increase in computers' performance/cost ratio has de-emphasized computational issues in most applications.1

In the active subarea of learning decision tree classifiers, examples of methods that improve accuracy are:

- Construction of multi-attribute tests using logical combinations (Ragavan and Rendell 1993), arithmetic combinations (Utgoff and Brodley 1990; Heath, Kasif, and Salzberg 1993), and counting operations (Murphy and Pazzani 1991; Zheng 1995).
- Use of error-correcting codes when there are more than two classes (Dietterich and Bakiri 1995).
- Decision trees that incorporate classifiers of other kinds (Brodley 1993; Ting 1994).
- Automatic methods for setting learning system parameters (Kohavi and John 1995).

On typical datasets, all have been shown to lead to more accurate classifiers at the cost of additional computation that ranges from modest to substantial.

There has recently been renewed interest in increasing accuracy by generating and aggregating multiple classifiers. Although the idea of growing multiple trees is not new (see, for instance, (Quinlan 1987; Buntine 1991)), the justification for such methods is often empirical. In contrast, two new approaches for producing and using several classifiers are applicable to a wide variety of learning systems and are based on theoretical analyses of the behavior of the composite classifier.

The data for classifier learning systems consists of attribute-value vectors or instances. Both bootstrap aggregating or bagging (Breiman 1996) and boosting (Freund and Schapire 1996a) manipulate the training data in order to generate different classifiers.

1 For extremely large datasets, however, learning time can remain the dominant issue (Catlett 1991; Chan and Stolfo 1995).
Bagging produces replicate training sets by sampling with replacement from the training instances. Boosting uses all instances at each repetition, but maintains a weight for each instance in the training set that reflects its importance; adjusting the weights causes the learner to focus on different instances and so leads to different classifiers. In either case, the multiple classifiers are then combined by voting to form a composite classifier. In bagging, each component classifier has the same vote, while boosting assigns different voting strengths to component classifiers on the basis of their accuracy.

This paper examines the application of bagging and boosting to C4.5 (Quinlan 1993), a system that learns decision tree classifiers. After a brief summary of both methods, comparative results on a substantial number of datasets are reported. Although boosting generally increases accuracy, it leads to a deterioration on some datasets; further experiments probe the reason for this. A small change to boosting in which the voting strengths of component classifiers are allowed to vary from instance to instance shows still further improvement. The final section summarizes the (sometimes tentative) conclusions reached in this work and outlines directions for further research.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Bagging and Boosting

We assume a given set of N instances, each belonging to one of K classes, and a learning system that constructs a classifier from a training set of instances. Bagging and boosting both construct multiple classifiers from the instances; the number T of repetitions or trials will be treated as fixed, although Freund and Schapire (1996a) note that this parameter could be determined automatically by cross-validation. The classifier learned on trial t will be denoted as Ct while C* is the composite (bagged or boosted) classifier.
For any instance x, Ct(x) and C*(x) are the classes predicted by Ct and C* respectively.

Bagging

For each trial t = 1, 2, ..., T, a training set of size N is sampled (with replacement) from the original instances. This training set is the same size as the original data, but some instances may not appear in it while others appear more than once. The learning system generates a classifier Ct from the sample, and the final classifier C* is formed by aggregating the T classifiers from these trials. To classify an instance x, a vote for class k is recorded by every classifier for which Ct(x) = k, and C*(x) is then the class with the most votes (ties being resolved arbitrarily).

Using CART (Breiman, Friedman, Olshen, and Stone 1984) as the learning system, Breiman (1996) reports results of bagging on seven moderate-sized datasets. With the number of replicates T set at 50, the average error of the bagged classifier C* ranges from 0.57 to 0.94 of the corresponding error when a single classifier is learned. Breiman introduces the concept of an order-correct classifier-learning system as one that, over many training sets, tends to predict the correct class of a test instance more frequently than any other class. An order-correct learner may not produce optimal classifiers, but Breiman shows that aggregating classifiers produced by an order-correct learner results in an optimal classifier. Breiman notes:

"The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy."

Boosting

The version of boosting investigated in this paper is AdaBoost.M1 (Freund and Schapire 1996a). Instead of drawing a succession of independent bootstrap samples from the original instances, boosting maintains a weight for each instance - the higher the weight, the more the instance influences the classifier learned.
At each trial, the vector of weights is adjusted to reflect the performance of the corresponding classifier, with the result that the weight of misclassified instances is increased. The final classifier also aggregates the learned classifiers by voting, but each classifier’s vote is a function of its accuracy. Let ZV: denote the weight of instance x at trial t where, for every x, wi = l/N. At each trial t = 1,2,. . . ,T, a classifier Ct is constructed from the given in- stances under the distribution wt (i.e., as if the weight wz of instance x reflects its probability of occurrence). The error et of this classifier is also measured with re- spect to the weights, and consists of the sum of the weights of the instances that it misclassifies. If et is greater than 0.5, the trials are terminated and T is altered to t-l. Conversely, if Ct correctly classi- fies all instances so that et is zero, the trials termi- nate and T becomes t. Otherwise, the weight vec- tor wt+’ for the next trial is generated by multiply- ing the weights of instances that Ct classifies correctly by the factor ,# = et/(1 - et) and then renormaliz- ing so that C, wg+r equals 1. The boosted classifier C* is obtained by summing the votes of the classifiers c1,c2,..., CT, where the vote for classifier Ct is worth log(l/pL) units. Provided that et is always less than 0.5, Freund and Schapire prove that the error rate of C* on the given examples under the original (uniform) distribution w1 approaches zero exponentially quickly as T increases. A succession of “weak” classifiers {C”} can thus be boosted to a “strong” classifier C* that is at least as accurate as, and usually much more accurate than, the best weak classifier on the training data, Of course, this gives no guarantee of C*‘s generalization performance on unseen instances; Freund and Schapire suggest the use of mechanisms such as Vapnik’s (1983) structural risk minimization to maximize accuracy on new data. 
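The AdaBoost.M1 procedure just described can be sketched as follows (a minimal illustration; `learner` is a stand-in for any weight-aware weak learner such as C4.5, which is not included here):

```python
# Sketch of AdaBoost.M1 as specified above (illustrative only; `learner`
# is a stand-in for a weight-aware weak learner such as C4.5).
import math
from collections import defaultdict

def adaboost_m1(X, y, learner, T):
    """Return a list of (classifier, voting_weight) pairs."""
    N = len(X)
    w = [1.0 / N] * N                        # w_x^1 = 1/N for every instance
    ensemble = []
    for t in range(T):
        C_t = learner(X, y, w)               # train under distribution w^t
        e_t = sum(w[i] for i in range(N) if C_t(X[i]) != y[i])
        if e_t > 0.5:                        # trials terminated, T becomes t-1
            break
        if e_t == 0:                         # perfect classifier: keep it, stop
            ensemble.append((C_t, math.inf))
            break
        beta = e_t / (1.0 - e_t)
        # multiply weights of correctly classified instances by beta ...
        w = [wi * (beta if C_t(xi) == yi else 1.0)
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)                       # ... then renormalize to sum to 1
        w = [wi / total for wi in w]
        ensemble.append((C_t, math.log(1.0 / beta)))  # vote worth log(1/beta)
    return ensemble

def predict(ensemble, x):
    """C*(x): the class with the largest total vote."""
    votes = defaultdict(float)
    for C_t, vote in ensemble:
        votes[C_t(x)] += vote
    return max(votes, key=votes.get)
```

Because β_t < 1 whenever e_t < 0.5, correctly classified instances shrink in weight while the misclassified ones (untouched before renormalization) grow, which is exactly the refocusing effect described above.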
Requirements for Boosting and Bagging

These two methods for utilizing multiple classifiers make different assumptions about the learning system. As above, bagging requires that the learning system should not be "stable", so that small changes to the training set should lead to different classifiers. Breiman also notes that "poor predictors can be transformed into worse ones" by bagging.

Boosting, on the other hand, does not preclude the use of learning systems that produce poor predictors, provided that their error on the given distribution can be kept below 50%. However, boosting implicitly requires the same instability as bagging; if Ct is the same as Ct-1, the weight adjustment scheme has the property that e_t = 0.5. Although Freund and Schapire's specification of AdaBoost.M1 does not force termination when e_t = 0.5, β_t = 1 in this case so that w^{t+1} = w^t and all classifiers from Ct on have votes with zero weight in the final classification. Similarly, an overfitting learner that produces classifiers in total agreement with the training data would cause boosting to terminate at the first trial.

Experiments

C4.5 was modified to produce new versions incorporating bagging and boosting as above. (C4.5's facility to deal with fractional instances, required when some attributes have missing values, is easily adapted to handle the instance weights w_x^t used by boosting.) These versions, referred to below as bagged C4.5 and boosted C4.5, have been evaluated on a representative collection of datasets from the UCI Machine Learning Repository. The 27 datasets, summarized in the Appendix, show considerable diversity in size, number of classes, and number and type of attributes.

The parameter T governing the number of classifiers generated was set at 10 for these experiments. Breiman (1996) notes that most of the improvement from bagging is evident within ten replications, and it is interesting to see the performance improvement that can be bought by a single order of magnitude increase in computation. All C4.5 parameters had their default values, and pruned rather than unpruned trees were used to reduce the chance that boosting would terminate prematurely with e_t equal to zero. Ten complete 10-fold cross-validations2 were carried out with each dataset.

Table 1: Comparison of C4.5 and its bagged and boosted versions.

Dataset        C4.5     | Bagged C4.5 vs C4.5    | Boosted C4.5 vs C4.5   | Boosted vs Bagged
               err (%)  | err (%)   w-l   ratio  | err (%)   w-l   ratio  | w-l    ratio
anneal          7.67    |  6.25    10-0   .814   |  4.73    10-0   .617   | 10-0   .758
audiology      22.12    | 19.29     9-0   .872   | 15.71    10-0   .710   | 10-0   .814
auto           17.66    | 19.66     2-8  1.113   | 15.22     9-1   .862   |  9-1   .774
breast-w        5.28    |  4.23     9-0   .802   |  4.09     9-0   .775   |  7-2   .966
chess           8.55    |  8.33     6-2   .975   |  4.59    10-0   .537   | 10-0   .551
colic          14.92    | 15.19     0-6  1.018   | 18.83     0-10 1.262   |  0-10 1.240
credit-a       14.70    | 14.13     8-2   .962   | 15.64     1-9  1.064   |  0-10 1.107
credit-g       28.44    | 25.81    10-0   .908   | 29.14     2-8  1.025   |  0-10 1.129
diabetes       25.39    | 23.63     9-1   .931   | 28.18     0-10 1.110   |  0-10 1.192
glass          32.48    | 27.01    10-0   .832   | 23.55    10-0   .725   |  9-1   .872
heart-c        22.94    | 21.52     7-2   .938   | 21.39     8-0   .932   |  5-4   .994
heart-h        21.53    | 20.31     8-1   .943   | 21.05     5-4   .978   |  3-6  1.037
hepatitis      20.39    | 18.52     9-0   .908   | 17.68    10-0   .867   |  6-1   .955
hypo             .48    |   .45     7-2   .928   |   .36     9-1   .746   |  9-1   .804
iris            4.80    |  5.13     2-6  1.069   |  6.53     0-10 1.361   |  0-8  1.273
labor          19.12    | 14.39    10-0   .752   | 13.86     9-1   .725   |  5-3   .963
letter         11.99    |  7.51    10-0   .626   |  4.66    10-0   .389   | 10-0   .621
lymphography   21.69    | 20.41     8-2   .941   | 17.43    10-0   .804   | 10-0   .854
phoneme        19.44    | 18.73    10-0   .964   | 16.36    10-0   .842   | 10-0   .873
segment         3.21    |  2.74     9-1   .853   |  1.87    10-0   .583   | 10-0   .684
sick            1.34    |  1.22     7-1   .907   |  1.05    10-0   .781   |  9-1   .861
sonar          25.62    | 23.80     7-1   .929   | 19.62    10-0   .766   | 10-0   .824
soybean         7.73    |  7.58     6-3   .981   |  7.16     8-2   .926   |  8-1   .944
splice          5.91    |  5.58     9-1   .943   |  5.43     9-0   .919   |  6-4   .974
vehicle        27.09    | 25.54    10-0   .943   | 22.72    10-0   .839   | 10-0   .889
vote            5.06    |  4.37     9-0   .864   |  5.29     3-6  1.046   |  1-9  1.211
waveform       27.33    | 19.77    10-0   .723   | 18.53    10-0   .678   |  8-2   .938
average        15.66    | 14.11           .905   | 13.36           .847   |        .930

2 In a 10-fold (stratified) cross-validation, the training instances are partitioned into 10 equal-sized blocks with similar class distributions. Each block in turn is then used as test data for the classifier generated from the remaining nine blocks.

The results of these trials appear in Table 1. For each dataset, the first column shows C4.5's mean error rate over the ten cross-validations. The second section contains similar results for bagging, i.e., the class of a test instance is determined by voting multiple C4.5 trees, each obtained from a bootstrap sample as above. The next figures are the number of complete cross-validations in which bagging gives better or worse results respectively than C4.5, ties being omitted. This section also shows the ratio of the error rate using bagging to the error rate using C4.5 - a value less than 1 represents an improvement due to bagging. Similar results for boosting are compared to C4.5 in the third section and to bagging in the fourth.

Figure 1: Comparison of bagging and boosting on two datasets (error rate as a function of the number of trials T, for the chess and colic datasets).

It is clear that, over these 27 datasets, both bagging and boosting lead to markedly more accurate classifiers.
Bagging reduces C4.5's classification error by approximately 10% on average and is superior to C4.5 on 24 of the 27 datasets. Boosting reduces error by 15%, but improves performance on 21 datasets and degrades performance on six. Using a two-tailed sign test, both bagging and boosting are superior to C4.5 at a significance level better than 1%. When bagging and boosting are compared head to head, boosting leads to greater reduction in error and is superior to bagging on 20 of the 27 datasets (significant at the 2% level).

The effect of boosting is more erratic, however, and leads to a 36% increase in error on the iris dataset and 26% on colic. Bagging is less risky: its worst performance is on the auto dataset, where the error rate of the bagged classifier is 11% higher than that of C4.5.

The difference is highlighted in Figure 1, which compares bagging and boosting on two datasets, chess and colic, as a function of the number of trials T. For T=1, boosting is identical to C4.5 and both are almost always better than bagging - they use all the given instances while bagging employs a sample of them with some omissions and some repetitions. As T increases, the performance of bagging usually improves, but boosting can lead to a rapid degradation (as in the colic dataset).

Why Does Boosting Sometimes Fail?

A further experiment was carried out in order to better understand why boosting sometimes leads to a deterioration in generalization performance. Freund and Schapire (1996a) put this down to overfitting - a large number of trials T allows the composite classifier C* to become very complex.

As discussed earlier, the objective of boosting is to construct a classifier C* that performs well on the training data even when its constituent classifiers Ct are weak. A simple alteration attempts to avoid overfitting by keeping T as small as possible without impacting this objective.
AdaBoost.M1 stops when the error of any Ct drops to zero, but does not address the possibility that C* might correctly classify all the training data even though no Ct does. Further trials in this situation would seem to offer no gain - they will increase the complexity of C* but cannot improve its performance on the training data.

The experiments of the previous section were repeated with T=10 as before, but adding this further condition for stopping before all trials are complete. In many cases, C4.5 requires only three boosted trials to produce a classifier C* that performs perfectly on the training data; the average number of trials over all datasets is now 4.9. Despite using fewer trials, and thus being less prone to overfitting, C4.5's generalization performance is worse. The overfitting avoidance strategy results in lower cross-validation accuracy on 17 of the datasets, higher on six, and unchanged on four, a degradation significant at better than the 5% level. Average error over the 27 datasets is 13% higher than that reported for boosting in Table 1.

These results suggest that the undeniable benefits of boosting are not attributable just to producing a composite classifier C* that performs well on the training data. It also calls into question the hypothesis that overfitting is sufficient to explain boosting's failure on some datasets, since much of the benefit realized by boosting seems to be caused by overfitting.

Changing the Voting Weights

Freund and Schapire (1996a) explicitly consider the use by AdaBoost.M1 of confidence estimates provided by some learning systems. When instance x is classified by Ct, let Ht(x) be a number between 0 and 1 that represents some informal measure of the reliability of the prediction Ct(x). Freund and Schapire suggest using this estimate
to give a more flexible measure of classifier error.

An alternative use of the confidence estimate Ht is in combining the predictions of the classifiers {Ct} to give the final prediction C*(x) of the class of instance x. Instead of using the fixed weight log(1/β_t) for the vote of classifier Ct, it seems plausible to allow the voting weight of Ct to vary in response to the confidence with which x is classified.

C4.5 can be "tweaked" to yield such a confidence estimate. If a single leaf is used by Ct to classify an instance x as belonging to class k = Ct(x), let S denote the set of training instances that are mapped to the leaf, and S_k the subset of them that belong to class k. The confidence of the prediction that instance x belongs to class k can then be estimated by the Laplace ratio

    Ht(x) = (N × Σ_{i in S_k} w_i^t + 1) / (N × Σ_{i in S} w_i^t + 2).

(When x has unknown values for some attributes, C4.5 can use several leaves in making a prediction. A similar confidence estimate can be constructed for such situations.) Note that the confidence measure Ht(x) is still determined relative to the boosted distribution w^t, not to the original uniform distribution of the instances.

The above experiments were repeated with a modified form of boosting, the only change being the use of Ht(x) rather than log(1/β_t) as the voting weight of Ct when classifying instance x. Results show improvement on 25 of the 27 datasets, the same error rate on one dataset, and a higher error rate on only one of the 27 datasets (chess). Average error rate is approximately 3% less than that obtained with the original voting weights.

This modification is necessarily ad-hoc, since the confidence estimate Ht has only an intuitive meaning. However, it will be interesting to experiment with other voting schemes, and to see whether any of them can be used to give error bounds similar to those proved for the original boosting method.
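As a quick illustration, the Laplace ratio above can be computed directly from the weighted class counts at a leaf (our own sketch with hypothetical names, not C4.5 source code):

```python
# Illustrative sketch of the Laplace-ratio confidence H_t(x) defined above
# (hypothetical helper, not C4.5 code). `leaf_weights` maps each class to
# the summed boosted weights w_i^t of the training instances reaching the
# leaf; N is the total number of training instances.

def laplace_confidence(leaf_weights, predicted_class, N):
    w_class = leaf_weights.get(predicted_class, 0.0)  # sum of w_i^t over S_k
    w_leaf = sum(leaf_weights.values())               # sum of w_i^t over S
    return (N * w_class + 1.0) / (N * w_leaf + 2.0)
```

Under the uniform distribution w_i^t = 1/N, a leaf holding 3 instances of the predicted class and 1 of another yields (3+1)/(4+2) ≈ 0.67; the +1/+2 correction keeps the confidence of even a perfectly pure leaf strictly below 1.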
Conclusion

Trials over a diverse collection of datasets have confirmed that boosted and bagged versions of C4.5 produce noticeably more accurate classifiers than the standard version. Boosting and bagging both have a sound theoretical base and also have the advantage that the extra computation they require is known in advance: if T classifiers are generated, then both require T times the computational effort of C4.5. In these experiments, a 10-fold increase in computation buys an average reduction of between 10% and 19% of the classification error. In many applications, improvements of this magnitude would be well worth the computational cost. In some cases the improvement is dramatic: for the largest dataset (letter) with 20,000 instances, modified boosting reduces C4.5's classification error from 12% to 4.5%.

Boosting seems to be more effective than bagging when applied to C4.5, although the performance of the bagged C4.5 is less variable than its boosted counterpart. If the voting weights used to aggregate component classifiers into a boosted classifier are altered to reflect the confidence with which individual instances are classified, better results are obtained on almost all the datasets investigated. This adjustment is decidedly ad hoc, however, and undermines the theoretical foundations of boosting to some extent.

A better understanding of why boosting sometimes fails is a clear desideratum at this point. Freund and Schapire put this down to overfitting, although the degradation can occur at very low values of T, as shown in Figure 1. In some cases in which boosting increases error, I have noticed that the class distributions across the weight vectors wt become very skewed. With the iris dataset, for example, the initial weights of the three classes are equal, but the weight vector w5 of the fifth trial has them as setosa=2%, versicolor=75%, and virginica=23%.
Such skewed weights seem likely to lead to an undesirable bias towards or against predicting some classes, with a concomitant increase in error on unseen instances. This is especially damaging when, as in this case, the classifier derived from the skewed distribution has a high voting weight. It may be possible to modify the boosting approach and its associated proofs so that weights are adjusted separately within each class without changing overall class weights.

Since this paper was written, Freund and Schapire (1996b) have also applied AdaBoost.M1 and bagging to C4.5 on 27 datasets, 18 of which are used in this paper. Their results confirm that the error rates of boosted and bagged classifiers are significantly lower than those of single classifiers. However, they find bagging much more competitive with boosting, being superior on 11 datasets, equal on four, and inferior on 12. Two important differences between their experiments and those reported here might account for this discrepancy. First, Freund and Schapire use a much higher number T=100 of boosting and bagging trials than the T=10 of this paper. Second, they did not modify C4.5 to use weighted instances, instead resampling the training data in a manner analogous to bagging, but using the weight of instance x under wt as the probability of selecting x at each draw on trial t. This resampling negates a major advantage enjoyed by boosting over bagging, viz. that all training instances are used to produce each constituent classifier.

Acknowledgements

Thanks to Manfred Warmuth and Rob Schapire for a stimulating tutorial on Winnow and boosting. This research has been supported by a grant from the Australian Research Council.
Decision Trees 729

Appendix: Description of Datasets

Name        Cases      Name        Cases
anneal        898      audiology     226
auto          205      breast-w      699
chess         551      colic         368
credit-a      690      credit-g    1,000
diabetes      768      glass         214
heart-c       303      heart-h       294
hepatitis     155      hypo        3,772
iris          150      labor          57
letter     20,000      lymph         148
phoneme     5,438      segment     2,310
sick        3,772      sonar         208
soybean       683      splice      3,190
vehicle       846      vote          435
waveform      300

[The original table also lists, for each dataset, the number of classes and of continuous and discrete attributes; those columns were garbled in extraction and are omitted here.]

References

Breiman, L. 1996. Bagging predictors. Machine Learning, forthcoming.

Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. 1984. Classification and Regression Trees. Belmont, CA: Wadsworth.

Brodley, C. E. 1993. Addressing the selective superiority problem: automatic algorithm/model class selection. In Proceedings 10th International Conference on Machine Learning, 17-24. San Francisco: Morgan Kaufmann.

Buntine, W. L. 1991. Learning classification trees. In Hand, D. J. (ed), Artificial Intelligence Frontiers in Statistics, 182-201. London: Chapman & Hall.

Catlett, J. 1991. Megainduction: a test flight. In Proceedings 8th International Workshop on Machine Learning, 596-599. San Francisco: Morgan Kaufmann.

Chan, P. K. and Stolfo, S. J. 1995. A comparative evaluation of voting and meta-learning on partitioned data. In Proceedings 12th International Conference on Machine Learning, 90-98. San Francisco: Morgan Kaufmann.

Dietterich, T. G., and Bakiri, G. 1995. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research 2: 263-286.

Freund, Y., and Schapire, R. E. 1996a. A decision-theoretic generalization of on-line learning and an application to boosting. Unpublished manuscript, available from the authors' home pages ("http://www.research.att.com/orgs/ssr/people/{yoav,schapire}").
An extended abstract appears in Computational Learning Theory: Second European Conference, EuroCOLT '95, 23-27, Springer-Verlag, 1995.

Freund, Y., and Schapire, R. E. 1996b. Experiments with a new boosting algorithm. Unpublished manuscript.

Heath, D., Kasif, S., and Salzberg, S. 1993. Learning oblique decision trees. In Proceedings 13th International Joint Conference on Artificial Intelligence, 1002-1007. San Francisco: Morgan Kaufmann.

Kohavi, R., and John, G. H. 1995. Automatic parameter selection by minimizing estimated error. In Proceedings 12th International Conference on Machine Learning, 304-311. San Francisco: Morgan Kaufmann.

Murphy, P. M., and Pazzani, M. J. 1991. ID2-of-3: constructive induction of M-of-N concepts for discriminators in decision trees. In Proceedings 8th International Workshop on Machine Learning, 183-187. San Francisco: Morgan Kaufmann.

Quinlan, J. R. 1987. Inductive knowledge acquisition: a case study. In Quinlan, J. R. (ed), Applications of Expert Systems. Wokingham, UK: Addison Wesley.

Quinlan, J. R. 1993. C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann.

Ragavan, H., and Rendell, L. 1993. Lookahead feature construction for learning hard concepts. In Proceedings 10th International Conference on Machine Learning, 252-259. San Francisco: Morgan Kaufmann.

Ting, K. M. 1994. The problem of small disjuncts: its remedy in decision trees. In Proceedings 10th Canadian Conference on Artificial Intelligence, 91-97.

Utgoff, P. E., and Brodley, C. E. 1990. An incremental method for finding multivariate splits for decision trees. In Proceedings 7th International Conference on Machine Learning, 58-65. San Francisco: Morgan Kaufmann.

Vapnik, V. 1983. Estimation of Dependences Based on Empirical Data. New York: Springer-Verlag.

Zheng, Z. 1995. Constructing nominal X-of-N attributes. In Proceedings 14th International Joint Conference on Artificial Intelligence, 1064-1070. San Francisco: Morgan Kaufmann.
730 Learning
Hussein Almuallim
Information and Computer Science Department
King Fahd University of Petroleum & Minerals
Dhahran 31261, Saudi Arabia
hussein@ccse.kfupm.edu.sa

Yasuhiro Akiba and Shigeo Kaneda
NTT Communication Science Labs
1-2356 Take, Yokosuka-shi
Kanagawa 238-03, Japan
{akiba,kaneda}@nttkb.ntt.jp

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Abstract

Given a set of training examples S and a tree-structured attribute x, the goal in this work is to find a multiple-split test defined on x that maximizes Quinlan's gain-ratio measure. The number of possible such multiple-split tests grows exponentially in the size of the hierarchy associated with the attribute. It is, therefore, impractical to enumerate and evaluate all these tests in order to choose the best one. We introduce an efficient algorithm for solving this problem that guarantees maximizing the gain-ratio over all possible tests. For a training set of m examples and an attribute hierarchy of height d, our algorithm runs in time proportional to dm, which makes it efficient enough for practical use.

Motivation

Current algorithms for learning decision trees from examples (e.g., CART (Breiman et al. 1984), C4.5 (Quinlan 1986; Quinlan 1993), GID3 (Fayyad & Irani 1992)) assume attributes that are ordered (which may be continuous or discrete) or nominal. Many domains, however, involve attributes that have a hierarchy of values, rather than a list of values (Almuallim, Akiba & Kaneda 1995). Figure 1 shows two examples of such tree-structured attributes. Each node in the tree associated with a tree-structured attribute is called a category and represents one of the values which the attribute may take. Figure 2 shows examples of the concept "colored polygons" described in terms of the Color and Shape attributes of Figure 1. In this case, the examples are described using leaf categories, while the concept itself is best described using the higher level categories Chromatic and Polygon.

[Figure 1: The Shape and Color hierarchies]

([Yellow, Square], +)
([Green, Hexagon], +)
([White, Cross], -)
([Red, Circle], -)
([Black, Circle], -)

Figure 2: Examples of the concept "colored polygons"

Tests on tree-structured attributes may be binary, or can have multiple outcomes as shown in Figure 3. Searching for the category that gives the best binary split is not difficult and is discussed in (Almuallim, Akiba & Kaneda 1995). This is not the case, however, for multiple-split tests. It can be shown that the number of possible multiple-split tests for a given hierarchy grows exponentially in the number of leaves of the hierarchy. This makes it impractical to enumerate and evaluate all these tests in order to choose the best one. Nunez discusses a "hierarchy-climbing" heuristic within his EG2 algorithm (Nunez 1991) for handling this problem. Quinlan lists the support of tree-structured attributes as a "desirable extension" to his C4.5 package. In order to allow the current C4.5 code to handle such attributes, Quinlan suggests introducing a new nominal attribute for each level of the hierarchy, and encoding the examples using these newly introduced attributes (Quinlan 1993). This essentially means considering only those tests whose outcomes all lie on one level in the hierarchy. For example, for the attribute Shape, only the three tests shown in Figure 4 are considered. This approach, however, has the following problems:

- Although it is true that any multiple-split test can be simulated using Quinlan's "one-level" tests, this comes at the expense of extra unnecessary splitting, and hence, extra complexity of the final decision tree. Moreover, the replication problem (as discussed by Pagallo and Haussler (Pagallo & Haussler 1990)) often arises as a consequence of such simulation.

- In most cases, one-level tests are not well-defined. For the Color attribute, for example, unless further background knowledge is available, we do not know whether to associate the category Achromatic with the category Chromatic or with the categories Primary and Non-primary, and so on.

In general, attempting to reduce the computational costs by restricting the attention to only a subset of the possible multiple-split tests is obviously associated with the risk of missing favorable tests.

This paper addresses the problem of how one can efficiently optimize over the whole set of possible multiple-split tests. We assume that the gain-ratio criterion (Quinlan 1986) is used to evaluate tests. It is well-known that tests with too many outcomes (those defined on low level categories) have more "splitting power" than those with few outcomes (defined on higher level categories). This necessitates the use of a criterion such as the gain-ratio that involves some penalty for excessive splitting.

We introduce an algorithm that, given a set of examples and a tree-structured attribute x, finds a multiple-split test defined on x with maximized gain-ratio over all possible multiple-split tests defined on x. Our algorithm employs a computational technique introduced by Breiman et al. in the context of decision tree pruning (Breiman et al. 1984). The proposed algorithm can be called from any top-down decision tree learning algorithm to handle tree-structured attributes. We show that when the number of examples is m and the depth of the hierarchy is d, our algorithm runs in time proportional to dm in the worst case, which is efficient enough from a practical point of view.
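The exponential growth of the number of possible tests is easy to check on small hierarchies. A minimal sketch (my own illustration, not from the paper): identify each multiple-split test with a set of categories under which every leaf value falls exactly once, and count recursively.

```python
# Count the possible multiple-split tests on a hierarchy. For a node g,
# either g itself is one of the chosen categories (covering all its leaves),
# or we combine independent choices made for each of its children. The
# count includes the trivial single-outcome test {root}.
def count_tests(tree, g):
    """tree maps each category to its list of children ([] for leaves)."""
    if not tree[g]:
        return 1                 # a leaf can only stand for itself
    prod = 1
    for child in tree[g]:
        prod *= count_tests(tree, child)
    return 1 + prod              # use g itself, or split below g

def complete_binary(h):
    """Build a complete binary hierarchy of height h (helper for the demo)."""
    tree, next_id = {}, [0]
    def build(depth):
        node = next_id[0]; next_id[0] += 1
        tree[node] = [] if depth == 0 else [build(depth - 1), build(depth - 1)]
        return node
    return tree, build(h)
```

For a complete binary hierarchy the count c(h) satisfies c(0) = 1 and c(h) = 1 + c(h-1)^2, so it grows doubly exponentially in the height, i.e., exponentially in the number of leaves, matching the claim above.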
In the next section, we start by giving a precise definition of the problem studied in this paper. An overview of the pruning algorithm of Breiman et al. is then given, followed by an outline of our algorithm and a discussion of its time complexity. Finally, we conclude with a discussion of future research directions.

[Figure 3: Binary and multiple outcome tests on attribute Shape]

[Figure 4: The three "one-level" tests for attribute Shape]

Problem Definition

Let S be a set of training examples each having the form [(a1, a2, ..., an), c], where a1, a2, ..., an are the values of attributes x1, x2, ..., xn, and c denotes a class. Given such a set S, the basic operation in top-down induction of decision trees is to compute a score for each xi, 1 ≤ i ≤ n, that measures how good it is to use xi for the test at the root of the decision tree being learned for S. The attributes xi may be continuous, discrete, and/or nominal. The objective of this work is to extend this to tree-structured attributes such as the Shape and Color attributes shown in Figure 1. A tree-structured attribute x ∈ {x1, x2, ..., xn} is associated with a hierarchy which we will denote by x-tree. Each node in x-tree is called a category of x. For simplicity, we assume that only the categories at the leaves of x-tree appear in the examples as values of x. (See Figure 2.)

Following (Haussler 1988), we define a cut, C, of x-tree as a subset of the categories of x satisfying the following two properties: (i) for any leaf ℓ of x-tree, either ℓ ∈ C or ℓ is a descendant of some category g ∈ C; (ii) for any two categories i, j ∈ C, i is not a descendant (nor an ancestor) of j. Figure 5 shows some cuts for the Shape hierarchy. Each cut C of x-tree can be turned into a multiple-split test defined on x in a natural way.
[Figure 5: Examples of cuts for the Shape hierarchy]

Namely, the test would have |C| outcomes, one for each g ∈ C, where an example in which x = v is of outcome g iff v is a descendant of g, or v is g itself. For example, Cut 3 in Figure 5 represents a test with the four outcomes {Convex, Straight-lines non-convex, Kidney shape, Crescent}. Clearly, the value of Shape in any example would belong to exactly one of these categories.

In this work, a test (cut) is evaluated using Quinlan's gain-ratio criterion. For a given set S of examples with q classes, let

  Ent(S) = -Σ_{j=1}^{q} (Freq(j, S)/|S|) × log2(Freq(j, S)/|S|),

where Freq(j, S) denotes the number of examples of class j in S. The mutual information of a cut C for attribute x, denoted MI(C), is defined as

  MI(C) = Σ_{g∈C} (|Sg|/|S|) × Ent(Sg),

where Sg is the subset of S having the outcome g of the cut C for the attribute x. The split information for the cut is defined as

  Sp(C) = -Σ_{g∈C} (|Sg|/|S|) × log2(|Sg|/|S|).

Finally, the gain-ratio score of the cut C with respect to a training set S is given by

  GR(C) = (Ent(S) - MI(C)) / Sp(C).

With the above definitions, our problem can now be stated as follows: given a set of examples S and a tree-structured attribute x with hierarchy x-tree, find a cut C of x-tree such that GR(C) is maximum over all possible cuts of x-tree.

Note that cuts consisting of "specific" categories (those that appear at low levels of x-tree) give tests with too many outcomes and consequently yield good MI scores compared to cuts with more "general" categories, which naturally have fewer outcomes. The use of gain-ratio (rather than the pure gain or other similar measures) helps in avoiding the tendency towards those tests with too many outcomes (Quinlan 1986).
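The definitions above translate directly into code. A minimal sketch (my own, assuming uniform example weights; `outcome_of`, which maps an attribute value to its outcome g ∈ C, is a hypothetical helper):

```python
# Compute GR(C) = (Ent(S) - MI(C)) / Sp(C) for a cut, following the
# definitions in the text. Assumes the cut induces at least two
# non-empty outcomes, so Sp(C) > 0.
from math import log2
from collections import Counter, defaultdict

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(examples, outcome_of):
    """examples: list of (value, class); outcome_of: value -> category g in C."""
    labels = [c for _, c in examples]
    groups = defaultdict(list)
    for v, c in examples:
        groups[outcome_of(v)].append(c)
    n = len(examples)
    mi = sum(len(g) / n * entropy(g) for g in groups.values())       # MI(C)
    sp = -sum(len(g) / n * log2(len(g) / n) for g in groups.values())  # Sp(C)
    return (entropy(labels) - mi) / sp
```

A perfectly class-separating, balanced two-way cut gives Ent(S) = 1, MI(C) = 0, and Sp(C) = 1, hence a gain-ratio of 1.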
It can be shown that the number of possible cuts for a given hierarchy grows exponentially in the number of leaves of the hierarchy. Thus, the challenge here is to solve the above optimization problem within affordable computational costs. It turns out that this task is very similar to the task of decision tree pruning. A natural one-to-one correspondence exists between cuts and trees obtained by pruning x-tree. Namely, a cut C is mapped to the pruned tree in which the subtrees rooted at each g ∈ C are removed (substituted by leaves). Conversely, a pruned tree is mapped to the cut C = {g | g is a leaf in the pruned tree}. This view allows employing a decision tree pruning technique introduced by Breiman et al. (Breiman et al. 1984) in solving our problem.

Breiman et al.'s Pruning Algorithm

Breiman et al. present an efficient optimal pruning algorithm in which the goal is to minimize what they call the cost-complexity of a decision tree. They assume that a decision tree T is given in which each test node t is associated with an error estimate e(t) which measures the error introduced if the subtree below t is substituted by a leaf. The error of a tree T' obtained by pruning subtrees from T is then defined as error(T') = Σ_{ℓ leaf of T'} e(ℓ). They also define size(T') as the number of leaves of T'. The quality of a pruned decision tree T' is measured as a linear combination of its size and error:

  Score_α(T') = error(T') + α × size(T'),

for a constant α ≥ 0. The goal in Breiman et al.'s work is to find for each α ≥ 0 a pruned tree that minimizes Score_α. Such a tree is said to be optimally pruned with respect to α. Although α runs through a continuum of values, only a finite sequence of optimally pruned decision trees exists, where each tree minimizes Score_α over a range of α. Breiman et al.
show that such a sequence can be generated by repeatedly pruning at the node t for which the quantity

  (e(t) - Σ_{ℓ∈L(t)} e(ℓ)) / (|L(t)| - 1)

is minimum, where L(t) denotes the set of leaves of the subtree rooted at node t in the current tree. They call this approach weakest link cutting.

Although Breiman et al. consider binary decision trees only, extending their algorithm to decision trees with multiple branches is straightforward (Bohanec & Bratko 1994). Moreover, in the setting of Breiman et al., each leaf in a pruned tree contributes a uniform amount (exactly 1) to the size of the tree. Nevertheless, the same algorithm can be easily generalized by associating a weight, w(t), with each node t in the given tree T, and then letting size(T') = Σ_{ℓ leaf of T'} w(ℓ), for a pruned tree T'. In this generalized setting, the node t at which pruning occurs is the node which minimizes the quantity

  (e(t) - Σ_{ℓ∈L(t)} e(ℓ)) / (Σ_{ℓ∈L(t)} w(ℓ) - w(t)).

Thus, generalized as above, the algorithm of Breiman et al. can be characterized as follows:

- Input: a decision tree T, with an error estimate e(t) and a weight w(t) at each node t in T.

- Output: a sequence {(T1, α1), (T2, α2), (T3, α3), ..., (Tr, αr)}, such that each Ti is a pruned tree that minimizes Score_α in the range α_{i-1} ≤ α < α_i, where α_0 = 0 and α_r = ∞.

Although Breiman et al. only address the case of binary trees and uniform weights, their arguments can be extended to our generalized case of multiple branch trees and non-uniform weights. Details are omitted here, however, for lack of space.

Finding a Multiple-Split Test with Optimal Gain-Ratio

We now outline our algorithm for finding a test with maximum gain-ratio for a given set of examples S, and a given attribute x with hierarchy x-tree. The first step of the algorithm is an initialization step:

Step I: For each category g in x-tree, attach an array CDg to be used to store the class distribution at that category. This array has an entry for each class, which is initially set to 0.
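One step of this generalized weakest-link selection can be sketched as follows (my own illustration; `tree` maps each node to its children, and any node placed in `pruned` is treated as a leaf; the denominator is assumed positive, as it is for the split-information weights used later):

```python
# Find the internal, unpruned node t minimizing
#   alpha(t) = (e(t) - sum_{l in L(t)} e(l)) / (sum_{l in L(t)} w(l) - w(t)),
# where L(t) is the leaf set of the current (partially pruned) subtree at t.
def leaves(tree, t, pruned):
    if t in pruned or not tree[t]:
        return [t]
    return [l for c in tree[t] for l in leaves(tree, c, pruned)]

def weakest_link(tree, e, w, pruned):
    best, best_alpha = None, float("inf")
    for t in tree:
        if tree[t] and t not in pruned:
            L = leaves(tree, t, pruned)
            alpha = (e[t] - sum(e[l] for l in L)) / (sum(w[l] for l in L) - w[t])
            if alpha < best_alpha:
                best, best_alpha = t, alpha
    return best, best_alpha
```

Repeatedly pruning at the returned node and recording the alpha at which each pruning occurs yields the finite sequence {(Ti, αi)} described above.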
We repeat the following steps for each example e ∈ S:

1. Let v be the value of attribute x in e.
2. Let cl be the class of e.
3. Increment CDv[cl].
4. Climbing from v towards the root, increment CDg[cl] for every ancestor g of v in x-tree.

At the end of Step I, each array CDg will be storing the class distribution for those examples in S in which the value of the attribute x is a descendant of the category g. The next step computes the amounts each category g would contribute to MI(C) and Sp(C) if g were a member of a cut C.

Step II: For each category g in x-tree, let |Sg| = Σ_{cl} CDg[cl] (that is, the number of examples in which the value of attribute x is a descendant of g), and compute the following two quantities:

  i(g) = -Σ_{cl} (CDg[cl]/|S|) × log2(CDg[cl]/|Sg|),

  s(g) = (|Sg|/|S|) × log2(|S|/|Sg|).

Now for any cut C, it is obvious that MI(C) and Sp(C) (as defined in Section 2) can be computed as Σ_{g∈C} i(g) and Σ_{g∈C} s(g), respectively. The next step is a call to the generalized algorithm of Breiman et al.:

Step III: Pass the tree x-tree to the generalized Breiman et al. algorithm, viewing each i(g) and s(g) as the error estimate and the weight of node g, respectively.

As explained previously, there is a one-to-one correspondence between the set of all possible cuts and the set of all possible pruned decision trees. Since we are passing i(g) and s(g) to Breiman et al.'s algorithm as the error estimates and the weights at the nodes, error(T') and size(T') are respectively equivalent to MI(C) and Sp(C), for the cut C corresponding to T'. This then justifies the following view:

Step IV: View the tree sequence returned by Breiman et al.'s algorithm as a sequence {(C1, α1), (C2, α2), (C3, α3), ..., (Cr, αr)}, in which each Ci minimizes Score_α(C) = MI(C) + α Sp(C) over all cuts C, within the range α_{i-1} ≤ α < α_i, where α_0 = 0 and α_r = ∞.
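Steps I and II can be sketched directly (my own illustration, assuming uniform example weights; `parent` is a hypothetical map from each category to its parent, None at the root):

```python
# Step I: accumulate per-category class distributions CD[g][cl] by climbing
# from each example's leaf value to the root.
# Step II: derive i(g) and s(g), the contributions of category g to MI(C)
# and Sp(C) respectively.
from math import log2
from collections import defaultdict

def distributions(examples, parent):
    """examples: list of (leaf_value, class); parent: node -> parent or None."""
    CD = defaultdict(lambda: defaultdict(int))
    for v, cl in examples:
        g = v
        while g is not None:       # climb from the leaf to the root
            CD[g][cl] += 1
            g = parent.get(g)
    return CD

def i_and_s(CD, g, n_total):
    sg = sum(CD[g].values())       # |S_g|
    i = -sum(cnt / n_total * log2(cnt / sg) for cnt in CD[g].values())
    s = sg / n_total * log2(n_total / sg)
    return i, s
```

Note that at the root, |Sg| = |S|, so s(root) = 0 and i(root) = Ent(S), which is the form the pseudocode in the appendix relies on.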
The cut sequence we now have at hand is not directly maximizing the gain-ratio, but rather optimizing under a different criterion (MI(C) + α Sp(C)) which involves the unspecified parameter α. However, the following theorem puts things in perspective:

Theorem: In the sequence {C1, C2, ..., C_{r-1}} of cuts produced by employing the algorithm of Breiman et al., there exists a cut with maximum gain-ratio.

Proof: This is shown by contradiction. Suppose none of the produced cuts maximizes the gain-ratio. Then, there exists some cut C* such that, for all 1 ≤ i ≤ r-1, we have GR(C*) > GR(Ci), that is

  (Ent(S) - MI(C*)) / Sp(C*) > (Ent(S) - MI(Ci)) / Sp(Ci),  1 ≤ i ≤ r-1.   (1)

Consider now the following value of α:

  α = α1 = (Ent(S) - MI(C*)) / Sp(C*).

For any cut C, it is true that Ent(S) ≥ MI(C). Therefore, α1 is a legitimate value for α since it is greater than or equal to 0. At this particular value of α,

  Score_{α1}(C*) = MI(C*) + ((Ent(S) - MI(C*)) / Sp(C*)) × Sp(C*) = Ent(S).

On the other hand, for any i, 1 ≤ i ≤ r-1,

  Score_{α1}(Ci) = MI(Ci) + ((Ent(S) - MI(C*)) / Sp(C*)) × Sp(Ci)
                 > MI(Ci) + ((Ent(S) - MI(Ci)) / Sp(Ci)) × Sp(Ci)   (from (1))
                 = Ent(S).

The above means that at α1 we have Score_{α1}(C*) < Score_{α1}(Ci) for all Ci in {C1, C2, C3, ..., C_{r-1}}. This is a contradiction since for any value of α ≥ 0, one of the cuts in {C1, C2, C3, ..., C_{r-1}} must minimize Score_α. □

The above theorem leads to the following final step:

Step V: Compute the gain-ratio for each Ci, 1 ≤ i ≤ r-1, and return the cut with maximum gain-ratio among these.

Various details have been omitted in the above outline of our algorithm in order to simplify the discussion. In an actual implementation, Steps III and IV (finding the sequence of cuts) and Step V (computing the gain-ratio scores) can be run concurrently: each time a cut is generated, its gain-ratio is computed, and the cut is kept if its gain-ratio is the highest so far.
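Run concurrently in this way, Steps III-V amount to the following sketch (my own illustration, reusing the quantities i(g) and s(g) of Step II and the fact that i(root) = Ent(S)):

```python
# Generate the pruning sequence of cuts and track the best gain-ratio.
# tree: node -> children; i[g], s[g]: per-category quantities from Step II.
def best_cut(tree, i, s, root):
    pruned = set()

    def cut_below(g):              # leaves of the current pruned tree under g
        if g in pruned or not tree[g]:
            return [g]
        return [x for c in tree[g] for x in cut_below(c)]

    def alpha(t):                  # weakest-link score of internal node t
        C = cut_below(t)
        return (i[t] - sum(i[g] for g in C)) / (sum(s[g] for g in C) - s[t])

    best_gr, best_C = -1.0, None
    while cut_below(root) != [root]:       # stop before the trivial cut
        C = cut_below(root)                # current cut (Step IV)
        gr = (i[root] - sum(i[g] for g in C)) / sum(s[g] for g in C)
        if gr > best_gr:                   # Step V, done on the fly
            best_gr, best_C = gr, C
        internal = [t for t in tree if tree[t] and t not in pruned]
        pruned.add(min(internal, key=alpha))
    return best_gr, best_C
```

The loop ends when only the single-outcome cut {root} remains, which is skipped as uninteresting, in line with the remark following Step V.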
In the appendix, we give a full pseudo-code description of the algorithm in which the weakest-link cutting algorithm of Breiman et al. is embedded.

(Footnote to Step V: the test corresponding to Cr is not interesting, since it has only a single outcome and does no splitting.)

Time Complexity Analysis

Let m be the number of examples and q the number of classes. Let the number of leaves of x-tree be s and let its height be d. Assume that each node in x-tree has at most k children. Then, it can be shown that the implementation given in the appendix runs in time O(dm + (q + kd)s). We can, however, assume that s ≤ m, since if this is not the case, then some of the leaf categories in x-tree never show up in any example in S. In such a case, one can reduce the hierarchy by just ignoring these. More precisely, a category in x-tree is considered if and only if it is an ancestor of some leaf category that appears in at least one example in S. Reducing x-tree in this manner results in a hierarchy of at most m leaves. Thus, the time complexity of our algorithm is in fact O((q + kd)m) in the worst case.

It is interesting to note that the above bound is only linear in the height of the hierarchy. Therefore, when dealing with somewhat balanced hierarchies, d becomes in the order of log s, which is in turn in the order of log m. This then gives a time complexity of O((q + k log m)m). Since the number of classes q is usually small compared to k log m, this can be viewed as O(km log m). Interestingly enough, this is similar to the time complexity of O(m log m) for the task of handling continuous attributes (Quinlan 1986; Fayyad & Irani 1992).

Conclusion and Future Work

For a given tree-structured attribute, the goal of this work is to find a multiple-split test that maximizes Quinlan's gain-ratio measure with respect to a given set of training examples.
We presented an algorithm that achieves this goal and runs in time linear in the number of training examples times the depth of the hierarchy associated with the tree-structured attribute.

One cannot claim any superiority of multiple-split tests in generalization performance over other kinds of tests, such as binary tests that are based on a single value of the attribute (see Figure 3). In fact, multiple-split tests and binary tests should not be viewed as mutually exclusive choices. One can indeed find the best multiple-split test using our method, and in parallel, find the best binary split test using the method of (Almuallim, Akiba & Kaneda 1995), and finally choose from these the test with the higher score.

The gain-ratio criterion of Quinlan is "hard-wired" in our algorithm. It would be interesting to generalize the algorithm to cover other similar measures as well. It is also interesting to consider tests that group different values of a tree-structured attribute in a single outcome. This kind of test is studied in (Fayyad 1994; Fayyad & Irani 1993; Quinlan 1993) for other attribute types. Finally, in certain applications, attributes may be associated with directed acyclic graphs (DAGs) rather than trees as assumed in our work. Studying this generalized problem is an important future research direction.

Acknowledgment

Hussein Almuallim thanks King Fahd University of Petroleum & Minerals for their support. This work was partially conducted during his visit to Prof. Shimura's Lab. at the Tokyo Institute of Technology, sponsored by Japan's Petroleum Energy Center. Thanks also to Hideki Tanaka of NHK, Japan for a useful discussion.

Appendix

Our algorithm is described below in pseudo code. All the variables are assumed global. i[g] and s[g] are computed for each node g in line 2.1 of FindBestCut. At each node g, α[g] stores the value of α above which the subtree rooted at g is pruned.
This is initialized at lines 2.2.4 and 2.3.4 of FindBestCut. Each call to PruneOnce results in pruning the subtree rooted at the g for which α[g] is minimum over all (unpruned) nodes of T. At that time, the flag Pruned[g] becomes True and α[s] is updated for all ancestors s of g. The variable Smallest.α.Below[g] stores the smallest α value over all descendants of g. This variable is stored in order to efficiently locate the node with minimum α in the current tree in each pruning iteration. SubTreeMI[g] and SubTreeSp[g] store the sum of i[ℓ] and s[ℓ], respectively, for all leaves ℓ of the current subtree rooted at g. These are initialized in step 2 of FindBestCut, and then updated in step 10 of PruneOnce each time pruning occurs at a descendant of g. The current best cut is kept track of by the flag InBestCut[g]. A node g is in the current best cut if this flag is True for g and False for all its ancestors.

Algorithm FindBestCut
Input: A sample S, an attribute x, its hierarchy T

1. Initialize the arrays CD as in Step I.
2. Traverse T in post-order. For each g in T:
   2.1. Compute i[g] and s[g] as in Step II.
   2.2. If g is a leaf then
        2.2.1. Pruned[g] = True
        2.2.2. SubTreeMI[g] = i[g]
        2.2.3. SubTreeSp[g] = s[g]
        2.2.4. α[g] = ∞
        2.2.5. Smallest.α.Below[g] = ∞
        2.2.6. InBestCut[g] = True
   2.3. else
        2.3.1. Pruned[g] = False
        2.3.2. SubTreeMI[g] = Σ_{y: child of g} SubTreeMI[y]
        2.3.3. SubTreeSp[g] = Σ_{y: child of g} SubTreeSp[y]
        2.3.4. α[g] = (i[g] - SubTreeMI[g]) / (SubTreeSp[g] - s[g])
        2.3.5. Smallest.α.Below[g] = min{α[g], min{Smallest.α.Below[y] | y is a child of g}}
        2.3.6. InBestCut[g] = False
3. BestGR = (i[root of T] - SubTreeMI[root of T]) / SubTreeSp[root of T]
4. p = PruneOnce
5. While p ≠ root of T:
   5.1. ThisGR = (i[root of T] - SubTreeMI[root of T]) / SubTreeSp[root of T]
   5.2. If ThisGR ≥ BestGR then
        5.2.1. BestGR = ThisGR
        5.2.2. InBestCut[p] = True
   5.3. p = PruneOnce
6. Report BestGR as the best gain-ratio over all cuts of T.
7. Report the set {g | InBestCut[g] = True and for every ancestor g' of g, InBestCut[g'] = False} as the best cut.
Procedure PruneOnce

1. Let g = root of T
2. While α[g] > Smallest.α.Below[g]:
   2.1. g' = child of g such that Smallest.α.Below[g'] is minimum
   2.2. g = g'
3. Pruned[g] = True
4. α[g] = ∞
5. Smallest.α.Below[g] = ∞
6. SubTreeMI[g] = i[g]
7. SubTreeSp[g] = s[g]
8. p = g
9. If p = root of T, return p
10. Repeat
    10.1. g = parent of g
    10.2. SubTreeMI[g] = Σ_{y: child of g} SubTreeMI[y]
    10.3. SubTreeSp[g] = Σ_{y: child of g} SubTreeSp[y]
    10.4. α[g] = (i[g] - SubTreeMI[g]) / (SubTreeSp[g] - s[g])
    10.5. Smallest.α.Below[g] = min{α[g], min{Smallest.α.Below[g'] | g' is a child of g}}
    10.6. until g = the root of T
11. Return p

References

Almuallim, H.; Akiba, Y.; and Kaneda, S. 1995. On Handling Tree-Structured Attributes in Decision Tree Learning. In Proceedings of the 12th International Conference on Machine Learning, 12-20. San Francisco, California: Morgan Kaufmann.

Bohanec, M.; and Bratko, I. 1994. Trading Accuracy for Simplicity in Decision Trees. Machine Learning, 15:223-250.

Breiman, L.; Friedman, J. H.; Olshen, R. A.; and Stone, C. J. 1984. Classification and Regression Trees. Belmont: Wadsworth.

Fayyad, U. M.; and Irani, K. B. 1992. On the Handling of Continuous Valued Attributes in Decision Tree Generation. Machine Learning, 8:87-102.

Fayyad, U. M.; and Irani, K. B. 1993. Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, 1022-1027.

Fayyad, U. M. 1994. Branching on Attribute Values in Decision Tree Generation. In Proceedings of the 12th National Conference on Artificial Intelligence, 601-606.

Haussler, D. 1988. Quantifying Inductive Bias: AI Learning Algorithms and Valiant's Learning Framework. Artificial Intelligence, 36:177-221.

Quinlan, J. R. 1986. Induction of Decision Trees. Machine Learning, 1(1):81-106.

Quinlan, J. R. 1993. C4.5: Programs for Machine Learning, p. 104. San Mateo, CA: Morgan Kaufmann.

Nunez, M. 1991.
ses

Vincent Corruble, Jean-Gabriel Ganascia
LAFORIA-IBP, Université Paris VI
4, Place Jussieu - Boîte 169
75252 PARIS Cedex 05, FRANCE
{corruble,ganascia}@laforia.ibp.fr

Abstract
The role played by inductive inference has been studied extensively in the field of Scientific Discovery. The work presented here tackles the problem of induction in medical research. The discovery of the causes of leprosy is analyzed and simulated using computational means. An inductive algorithm is proposed, which is successful in simulating some essential steps in the progress of the understanding of the disease. It also allows us to simulate the false reasoning of previous centuries through the introduction of some medical a priori inherited from archaic medicine. Corroborating previous research, this problem illustrates the importance of the social and cultural environment on the way inductive inference is performed in medicine.

Introduction
In some previous work (Corruble & Ganascia 1993, 1994) we investigated the role of induction in an important medical discovery. It appeared that an algorithm could, through simple pure induction, discover the cause of scurvy using a number of cases from 19th century medical literature. In that respect, our work was in the direct line of the data-driven approach to the computational study of scientific discovery. Pat Langley and Jan Zytkow have summarized in (Langley & Zytkow 1989) some of the key systems based on this approach. They defined the commonality of these systems as « the use of data-driven heuristics to direct their searches through the space of theoretical terms and numeric laws ». On the other hand, we also showed in this study that, in order to reconstruct rationally some of the false reasoning of the 18th and 19th centuries about scurvy, it was necessary to introduce some implicit background knowledge inherited from pre-clinical medicine that influenced the inductive reasoning of those physicians.
Thus we questioned the validity of a purely data-driven induction for the rational reconstruction of medical discoveries, and we introduced the concept of a medical system, a body of knowledge that influences, in some cases implicitly, the inductive reasoning of physicians. In the study presented here, our original aim was to investigate whether the results obtained on the discovery of the causes of scurvy would also apply to the discovery of the causes of leprosy. The central role played by the cultural environment in the inductive process is illustrated with this example: it has been necessary to formalize some implicit background knowledge concerning the nature of the concept of disease to reach a plausible computational account of the Nineteenth century reasoning on leprosy cases. In addition, we show that the rational reconstruction of the reasoning on the available leprosy cases requires the use of a type of induction which allows for the representation of exceptions. We consider these exceptions as radically different from the noise traditionally studied in the fields of statistics and machine learning. Although medical science was not advanced enough in the Nineteenth century to elaborate a fully satisfactory etiology for the disease, we show that some crucial improvement in the understanding of leprosy could have been reached based on the data then available. The primary lesson drawn from our experiment is that induction in science needs to be considered as an inference taking place within a dynamic context influenced by the previous stages of the domain, and aiming at overcoming its limitations.

We begin this paper by giving some perspective on the history of leprosy and of the research regarding, chiefly, its etiology. Then we analyze some specific issues concerning induction in this discovery. More specifically, we show that useful inductions were produced in history despite the presence of obvious counter-examples which were not caused by noise.
We then use PASTEUR, a new inductive algorithm, in two simulations on leprosy data collected in Nineteenth century medical literature. First we get some surprising results which can be understood in the light of modern leprosy research. Second, we reproduce the false reasoning of the Nineteenth century by introducing a priori knowledge on the concept of disease. We then sketch the basics of PASTEUR, and highlight its advantages over other classical learning algorithms for the task considered.

A Brief History of Leprosy
The history of leprosy dates back to ancient China and India. We will not dwell here upon its origins (interested readers can refer for example to (Skinsnes 1973)), but an important fact to notice is that the concept of leprosy was ill-defined for many centuries because it seems to have been confused with some other diseases until more recent times.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Research on leprosy has followed a particularly interesting path. Modern western medicine was increasingly confronted with the disease during the development of colonialism in the Nineteenth century. In the past century, the main theory on the etiology of leprosy referred to heredity as the main and often only explanation of the disease (see for example (Royal College of Physicians 1867)). It made a lot of sense since the disease would often affect many members of the same families. The other hypothesis, the one of contagion, was also proposed early (Drognat-Landré 1869), but was contradicted by the fact that many people (for example some nurses) in close contact with lepers were not affected, so that, for a long time, this hypothesis was considered unscientific. However insightful these theories were, they were not confirmed by laboratory experiments and were thus merely hypotheses.
It was not until 1872, with Hansen's discovery of the infectious agent causing leprosy (Hansen 1875), that the theory of contagion gained significant ground. The question was not solved, though, because this discovery did not explain how the agent was transmitted. Heredity and contagion were still opposed as two distinct potential explanations. One reason for this debate was the impossibility of in vivo experimentation on animals: the only environment in which Hansen's bacillus could survive was the human body. Hansen went as far as trying to contaminate some healthy patients in his hospital by pricking their eyes with a contaminated object, but he was not successful in obtaining any result (except that of being found guilty of unethical medical practice by a Danish court). As shown in (Waters 1993), the most radical advances in our century in the understanding of leprosy were produced in the early 1960s by, on the one hand, the discovery of the possibility of in vivo experimentation in the mouse footpad, and on the other hand, the progress of immunology and the new classification of leprosy proposed in (Ridley & Jopling 1966), which focuses on individuals' immune reaction. The new main axis of research on leprosy then became the study of the human immune system's reaction to the leprosy bacillus.

Induction and Leprosy Research
To summarize the history of leprosy and focus on the development of hypotheses, we can isolate three major theories: the theory of heredity, the theory of contagion, and the theory of immunity. What was the role of induction in the formation of these theories? It is clear from reading the works of physicians that these hypotheses result from the observation of patients. However, the underlying inductive reasoning was not described precisely enough that it could be directly formalized.
Nevertheless, the previous account of the history of the disease tells us that the induction performed in the 19th century did not abide by the rules of the type of logical inference which proved useful in simulating other medical discoveries (Corruble & Ganascia 1993, 1994). Both theories, the one of heredity and the one of contagion, obviously had some counter-examples known to most physicians. Some people got ill even though none of their ascendants had been diagnosed as a leper, so that, in fact, the hypothesis of heredity was directly invalidated. Some health workers or close relatives had been in contact with lepers for the major part of their lives and were still in perfect health. This « fact » invalidated the contagion theory. Despite these limitations, the hypotheses were produced, and proved useful, because they constituted significant steps towards the next breakthroughs. We have designed a new algorithm in order to permit a computational account of the reasoning performed last century. Also, the same algorithm is used to study whether the same computational techniques could have been of significant help to the physicians.

The Need for Indulgent Induction
Here we introduce a new inductive inference, indulgent induction, which departs significantly from classical generalization-based induction in its ability to model exceptions explicitly. We will then present an algorithm which performs this inference, and which has been used in our experiments on leprosy. One of the most widely recognized frameworks for the study of induction, in the field of Machine Learning, is the Version Space approach (Mitchell 1982). In this framework, induction is seen as a search for a hypothesis which is consistent with the pre-classified set of examples and counter-examples. This constraint is generally accepted in the community as a primary requirement.
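The consistency requirement can be stated in a few lines. The patient records and the heredity hypothesis below are invented illustrations (not drawn from the historical data), showing how a single counter-example, such as a leper with no affected ascendants, makes a hypothesis inconsistent:

```python
def consistent(hypothesis, examples):
    # Version-space style check: a hypothesis must agree with every labeled example.
    return all(hypothesis(case) == is_leper for case, is_leper in examples)

# Hypothetical cases: (patient description, observed leprosy status).
cases = [
    ({"ascendant-leper": True},  True),
    ({"ascendant-leper": False}, False),
    ({"ascendant-leper": False}, True),   # counter-example to heredity
]

# The heredity theory as a predicate: leprosy iff an ascendant was a leper.
heredity = lambda case: case["ascendant-leper"]

print(consistent(heredity, cases))  # → False
```

Under strict consistency the heredity hypothesis must be discarded outright; the point of the next paragraphs is that the physicians, reasonably, kept it anyway.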
However, (Mitchell 1982) recognizes the limitations of the approach in the case of inconsistency, which can be of two kinds: inconsistency can result from (1) an insufficient description language, or (2) error in the training instances. The second case has led to a huge amount of work, from statistics to machine learning, on induction from noisy data. However, in medical research, it is common to reason within a framework characterized by an insufficient description language. Furthermore, even though Mitchell suggests that other approaches are needed when no hypothesis consistent with all the examples exists, it seems that they should also be considered when consistent hypotheses are available. This opinion is directly linked to the theory of satisficing developed in (Simon 1980): confronted with a complex phenomenon, a scientist has to come up with a hypothesis which favors simplicity over pure logic or optimality. The theories of heredity and of contagion are two examples of « satisficing theories », both wrong, yet simple enough to enable the researcher to structure his reasoning toward more elaborate and accurate theories. Therefore, we need to define another type of induction for which strict consistency with the data is not considered a prerequisite. We have named this new inference indulgent induction, because, like an indulgent father in real life, an indulgent hypothesis will make exceptions for its children. Also, it is through the violation of the consistency constraint that innovative hypotheses which depart radically from the current theory (implicitly encoded in the data) can be suggested. The interest and the validity of these hypotheses are of course not guaranteed, but it is the role of the search heuristics to maximize the chances that they be so.
In the next section, we present our experiments on the discovery of leprosy with PASTEUR, an algorithm that implements the principle of indulgent induction. PASTEUR is presented in some detail in the last section of this article.

Experimentation on leprosy
In this section, we present our experiments with PASTEUR on inductive reasoning on the causes of leprosy. The study is based on a compilation of cases carried out in an Indian leprosy asylum in the 1880s, reported and analyzed in (Phineas 1889). This study came at a time when the theory of contagion was beginning to challenge the prevalent theory of heredity, but it is worth noticing that, in this specific study, as Phineas mentions, the investigator seems to favor the theory of heredity. Evidence of this bias lies in the care taken in researching the list of relatives affected by the disease. The question that we ask ourselves in these experiments is: can we, given observational data on the disease acquired and reported with a pro-heredity bias, obtain through automated induction the suggestion of other interesting hypotheses? The first « other hypothesis » that could be expected is the contagion theory. We will however see in the following that our experimental results are quite surprising in that respect.

The material used for the experiments
Most of the patients in the available cases are presumed to have leprosy even though, for some, the diagnosis seems more than doubtful (for example, one patient is said to have no symptom of the disease, but is convinced of having it). However, all these cases are of great interest because of the care taken to research a number of features potentially relevant to the etiology of the disease.
Among them, we have available the sex, the caste, the age of the patient, the variety and duration of the disease, the relatives affected, some information on the children and spouse, a description of how the disease started, and also the fish diet (considered by some as a key factor in the Nineteenth century). These features are summarized in Figure 1. The more specific question asked in this experiment is: can the system propose an exploratory model linking the description of the patient and of his/her environment to the health of his/her children? In other words, can it predict (in an exploratory mode) whether some of the patient's children will be sick or whether all of them will be healthy. The 61 cases selected have been used in two experiments using PASTEUR. The first experiment aims at testing which hypotheses can be induced on the cause of the disease. The second one is concerned with the reconstruction of Nineteenth century reasoning as it happened in history.

Attribute          Type           Domain
name               string         NA
sex                unordered set  m, f
caste              unordered set  Mussulman, Sweeper, Jheur, Kohle, Jat, Rajpoot, Musician, Do-potter, Doteli, Bahte
age                integer        NA
disease-type       unordered set  mixed, do., anaesthetic, tuberc.
duration           integer        NA
father-affected    unordered set  yes, no
mother-affected    unordered set  yes, no
father-side        unordered set  yes, no
mother-side        unordered set  yes, no
spouse             unordered set  no, yes
children           hierarchy      (healthy, sick): some-sick, all-healthy
fish-diet          ordered set    never, rarely, sometimes, often, very-often, plenty, in-excess
initial-location   unordered set  body, arm, leg, hand, foot, hints., face

Figure 1. List of attributes with their characteristics

Indulgent induction on leprosy
In this experiment, we give to PASTEUR the description of the 61 patients available.
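Under the attribute scheme of Figure 1, a single case might be encoded as a record like the following. The values here are invented for illustration, not taken from the asylum data:

```python
# Hypothetical patient record over the Figure 1 attributes (all values invented).
patient = {
    "sex": "m",
    "caste": "Jat",
    "age": 41,
    "disease-type": "anaesthetic",
    "duration": 6,                # years since onset
    "father-affected": "no",
    "mother-affected": "no",
    "father-side": "no",
    "mother-side": "yes",
    "spouse": "no",
    "children": "some-sick",      # the target attribute in the experiments
    "fish-diet": "rarely",
    "initial-location": "foot",
}

# The learning task: predict "children" from the remaining attributes.
target = patient.pop("children")
print(target, len(patient))
```

Separating the target attribute from the observable features, as above, is the framing used in both experiments that follow.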
The question posed is formulated as such: given the description of a patient and his/her family, the system is asked to build an exploratory model predicting the health of the children on the basis of other observable features. In this experiment, the results which we would find interesting are of two kinds, as for our experiments on scurvy:
- from a descriptive point of view, do the hypotheses produced account for the theories proposed at the time of the observations?
- from a normative point of view, do the hypotheses produced suggest a more advanced theory than the ones from the Nineteenth century?
In this latter case, the kind of theory that we would expect to be suggested is, considering the medical debate of the Nineteenth century, the theory of contagion. It would be interesting if the simulation suggested contagion from the material available, since these observations were acquired with a pro-heredity bias. The results are shown in Figure 2 (the decision graphs proposed by the algorithm have been translated into an equivalent set of rules). R1 and R2, the two main rules proposed (out of 5), are given. R2 solves some exceptions resulting from the general rule R1.

R1: IF father-affected = no
    THEN children = all-healthy

R2: IF father-affected = no
       mother-side = yes
       age > 35
       disease-type = anaesthetic
    THEN children = some-sick

Figure 2. Model induced by PASTEUR

Analysis. The model tells us that in the general case, it is sufficient to know that the father of the patient is NOT ill to conclude that his children are healthy (rule R1). This general rule has exceptions however, and these are partly resolved by the more specific rule (R2), which says that among these patients with a healthy father, the ones having ill relatives on the mother's side, being ill with the anaesthetic type of disease, and being over 35 years old, have ill children. The theory of heredity appears in the model induced, since the first feature of interest appears to be the health of a direct ascendant.
In that respect, our simulation is close to the models proposed in the Nineteenth century. There are however two major differences. The first one is that the theory of contagion does not appear directly in the hypothesis. The second difference appears critical in the light of modern research on leprosy: what the model proposes is to reason on the absence of the disease. The first rule, R1, tells us that it is relevant to consider that the fact that somebody is healthy tells us something about the disease. We have to put ourselves back in the context of the Nineteenth century to realize how revolutionary this idea would have been then, and we have to use our knowledge of contemporary leprosy research to understand how relevant this revolution could have been. Before studying why this hypothesis was not proposed at the time, let us examine why it would have been particularly relevant. Modern leprosy research, as we suggested earlier, aims at understanding the immune system's reaction to the bacillus, so that the currently accepted classification of the disease is based mainly on the characteristics of this reaction, even though it is initiated by an infectious agent. The absence of the disease is, in that context, an active process in which the human metabolism is fully involved. This point of view could not be articulated in the Nineteenth century for two main reasons. The first, obvious one, is that the domain language did not include the vocabulary needed to describe adequately the phenomenon of immunity. The second one requires the help of the history of medicine.

Historical reconstruction
A study in the history of medicine helps us to understand this phenomenon. In (Grmek 1995), a history of the concept of disease is attempted. In archaic western medicine (before Hippocrates, 6th cent. BC), Grmek isolates the primitive ontological conceptualization. In this framework, a disease is identified as one entity which penetrates the organism.
This "thing" can be inanimate (corpuscular theory), a material living being (parasite theory) or an immaterial being (demon theory). The disease and its cause are thus naturally confounded. Hippocratic medicine introduced a new concept of disease. Being brought back into the field of nature, diseases are then considered as resulting from a bad mix of some essential humors. Indeed the passing from fitness to illness takes place through the change from a fair mix¹ (symmetria) to a bad one (dyskrasia). What is important to notice is that, even after this new dominant framework was introduced, the original view on the nature of diseases remained in the background, and was sometimes particularly vivid.

If we hypothesize that these two views on the nature of diseases coexisted and could have influenced the reasoning on leprosy, we need to find a formalization for each of them in order to carry out an experimentation. The archaic concept of disease is best represented by a predicate. In the case of leprosy, "patient X has the disease" is considered as a property that can be expressed as the predicate leper(x). On the other hand, if the disease is defined by a bad mix (Hippocratic concept), a good representation is an attribute-value one expressing that mix-X = bad if X is ill, and mix-X = good if X is healthy. The choice of one formalism over the other can be considered as an a priori on the nature of the disease. As such, we can formalize it by constructing the features describing whether a relative is affected instead of considering them as given in the description. What the description tells us concerns only symptoms (or syndromes). The disease itself is defined by the axioms of Figure 3 (archaic ontological concept) or of Figure 4 (Hippocratic dynamic concept) for father-affected. The same axioms apply also for the other attributes (mother-affected, father-side, mother-side, and spouse).

Figure 3. Axioms constructing the concept of leprosy (1)

IF leprosy-symptom-father = yes THEN mix-father = bad
IF leprosy-symptom-father = no THEN mix-father = good
IF mix-father = bad THEN father-affected = yes
IF mix-father = good THEN father-affected = no

Figure 4. Axioms constructing the concept of leprosy (2)

In our first experiment, through a naive representation of our examples, we sidestepped a major a priori inherited from archaic medicine. By representing the health status of a family member as a Boolean feature (yes or no, for healthy or ill), we gave as much importance to the presence and to the absence of the disease, implicitly using the Hippocratic dynamic concept of disease. This hypothesis of the importance of the representation chosen is tested in our next experiment.

¹ What the elements of this mix are remained ill-defined, but took a more precise shape within the humor theory, which defined two pairs of humors as its four basic elements.

Induction with a priori medical knowledge
This experiment aims at testing our hypothesis about the role of the conceptualization of disease. We are interested in testing the impact on induction of the use of the archaic concept of disease, and its correspondence with history. This is simulated in this experiment by constructing the attributes describing the health of relatives according to the axioms of Figure 3.

Results. With this description language, PASTEUR induces the following model:

R2: IF disease-type = do.
    THEN children = all-healthy

R4: IF disease-type = do.
       mother-affected = yes
    THEN children = some-sick

R5: IF disease-type = do.
       father-affected = yes
    THEN children = some-sick

Figure 5. Model induced by PASTEUR given the archaic concept of disease

Analysis. These results are interesting because they account for the theory proposed in the Nineteenth century on the etiology of leprosy.
The two main characteristics are, on the one hand, the importance of the disease type, which appears here as a key feature to explain the disease (so that this classification into types of leprosy finds some kind of a posteriori justification in the context of Nineteenth century medicine). On the other hand, the heredity theory appears in full force to claim that the ill children are, among the children of those patients having the "do." type of disease, the children of patients also having a parent affected by the disease. In this experiment, we see that by introducing some appropriate bias concerning the nature of the concept of disease, we can induce a model which is very similar to the one proposed in the Nineteenth century.

PASTEUR, an algorithm for indulgent induction
Indulgent induction has been recognized as important to overcome the limitations of a domain language within the dynamic context of a science in the making, such as medicine confronted with the etiology of leprosy. PASTEUR is a new inductive algorithm based on CHARADE (Ganascia 1991), which implements the principles of indulgent induction. It introduces a new hypothesis space and new heuristics for searching the description space. Here, we will only give an idea of the functioning of the algorithm, focusing on the way the hypothesis space and search heuristics are both geared toward the design of exploratory satisficing theories, and toward an explicit modeling of exceptions. A basic idea behind the new hypothesis space is to use the properties of the description space. All the nodes of the description space are connected according to a general-to-specific relation, and this relation is used to ensure a top-down search. All the rules induced by the algorithm can therefore be represented as a set of directed graphs of rules connected by a specialization relation. We call these graphs decision graphs by reference to the more constrained decision lists and decision trees.
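A minimal executable reading of such a decision graph, using rules R1 and R2 of the Figure 2 model (with R2's age condition omitted so that an equality-only matcher suffices; the dict-based encoding is our own illustration, not PASTEUR's internal representation): among the rules whose premises match a case, only the most specific ones fire, and their conclusions are combined by vote.

```python
# Rules from Figure 2; R2 is abridged (age > 35 dropped for this equality-only matcher).
R1 = {"if": {"father-affected": "no"}, "then": "all-healthy"}
R2 = {"if": {"father-affected": "no", "mother-side": "yes",
             "disease-type": "anaesthetic"}, "then": "some-sick"}

def matches(premise, case):
    return all(case.get(attr) == val for attr, val in premise.items())

def classify(rules, case):
    fired = [r for r in rules if matches(r["if"], case)]
    # Keep only the most specific fired rules: drop any rule whose premise is a
    # proper subset of another fired rule's premise (dict views are set-like).
    keep = [r for r in fired
            if not any(r is not other and r["if"].items() < other["if"].items()
                       for other in fired)]
    votes = {}
    for r in keep:
        votes[r["then"]] = votes.get(r["then"], 0) + 1
    return max(votes, key=votes.get) if votes else None

print(classify([R1, R2], {"father-affected": "no", "mother-side": "no"}))   # → all-healthy
print(classify([R1, R2], {"father-affected": "no", "mother-side": "yes",
                          "disease-type": "anaesthetic"}))                  # → some-sick
```

The second call shows the indulgent behavior: R1 still covers the case, but the more specific R2, which encodes its exception, overrides it instead of forcing R1 to be discarded.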
Decision graphs are acyclic graphs made of a rule at each one of their nodes. Two rules are connected if the premise of one rule is more specific than the premise of the other rule. Among all the rules whose premises are satisfied by an example, only the most specific ones are activated. A voting scheme among these rules is then used to assign a class to the example. Structures similar to decision graphs have been proposed recently in (Gaines 1995) to address the same kind of problem through a post-processing of existing rule bases induced by various algorithms. The decision graphs used here are however in spirit closer to the "ripple-down rules" proposed in (Compton 1991).

Procedure PASTEUR(E)
  Set nodeset to D, initialize fuel to fuelmax
  PASTEUR-aux(E, nodeset, ∅)

Procedure PASTEUR-aux(E, nodeset, Rset)
  If nodeset is empty or if fuel = 0 then return Rset
  Decrement fuel
  Select node N in Inf(nodeset) that maximizes B(fuel, N, Rset) · H1(N, Rset, E)
  Construct R from N
  If H2(E, Rset, Rset ∪ R) > X, then insert R in Rset and reset fuel to fuelmax
  Specialize N
  Set nodeset to nodeset \ {N}

Figure 6. PASTEUR algorithm (2-class case)

Some simple heuristics have been developed to explore the space of decision graphs. The description space is explored according to a best-first top-down search. Two elements are taken into account when selecting a new node for evaluation. The first element (H1) measures the interest of a node taking into account the examples it covers from a global perspective, selecting the one that maximizes the number of cases that will be correctly classified, minus the number of cases that will be misclassified. The second element (B) is a bias toward graph construction: the exceptions created by a rule are corrected in priority by refinement of the decision graph. This bias decreases linearly so that after repeated failures, the algorithm is given more flexibility so that it can explore other parts of the description space to initiate new graphs.
This flexibility is a distinctive feature of PASTEUR, whose search is directed by exceptions and coverage. Each candidate rule is evaluated, and inserted in the current decision graph if it satisfies a criterion (H2) based on the global improvement of the coverage of the learning cases. This criterion takes into account that what seems at first to be an approximation can be refined by the insertion of more specific rules.

PASTEUR in our experiments
Here, we briefly justify the use of PASTEUR in our experiments by comparison with other standard inductive algorithms. We do this by showing that the hypotheses proposed by PASTEUR are not in the search spaces of other classical algorithms. This appears best in the model proposed in Figure 5. This model uses a combination of properties which is not shared by other approaches. Figure 7 summarizes three properties needed to induce this model, which are characteristic of PASTEUR but not shared by other classical learning algorithms. The properties reviewed are the "Separate and Conquer" feature (SC) characteristic of algorithms learning Disjunctive Normal Form (DNF) hypotheses, the ability to learn default hypotheses (a characteristic marginally shown by algorithms designed to handle noisy data), and the ability to model explicitly exceptions to default rules. PASTEUR is compared to approaches learning Decision Trees (e.g. C4.5 (Quinlan 1992)), Decision Lists (e.g. CN2 (Clark & Niblett 1989)), and DNF (e.g. CHARADE) hypotheses.

Figure 7. Properties needed to induce the model of Figure 5 and comparison with other algorithms

Conclusion
Our experiments on the discovery of the causes of leprosy have shown that a general inductive algorithm, given patients' descriptions collected in the 19th century, could produce hypotheses corroborated by 20th century medicine on the etiology of the disease.
Even though last century's descriptions were incomplete and collected with a pro-heredity bias, PASTEUR detects the importance of the absence of the disease, and hence of immunity, for predicting the children's health. Our second experiment, confirming previous research, shows that computational simulations can be used to give a dynamic account of Nineteenth century medical reasoning by taking into account some knowledge on the nature of the concept of disease.

Acknowledgments
The initial stages of this research were made possible thanks to a grant from the Fondation de France. We are grateful to Jacques Lebbe and the reviewers for their helpful comments on earlier drafts of this article.

References
Clark, P. & Niblett, T. 1989. The CN2 Induction Algorithm. Machine Learning, 3: 261-283.
Compton, P. 1991. Ripple-down rules: possibilities and limitations. In Proceedings of the Sixth Knowledge Acquisition for KBS Workshop. Banff.
Corruble, V. & Ganascia, J.-G. 1993. Discovery of the Causes of Scurvy: Could Artificial Intelligence Have Saved a Few Centuries? In Proceedings of the AAAI 1993 Workshop on Knowledge Discovery in Databases.
Corruble, V. & Ganascia, J.-G. 1994. Aid to discovery in medicine using formal induction techniques. Blood Cells, 19: 649-659.
Drognat-Landré, C.-L. 1869. De la contagion, seule cause de la propagation de la lèpre. Paris: G. Baillière.
Gaines, B. R. 1995. Inducing knowledge. In Proceedings of the 9th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada.
Ganascia, J.-G. 1991. Deriving the learning bias from rule properties. In Machine Intelligence 12. Oxford: Oxford University Press.
Grmek, M. D. 1995. Le concept de maladie. In Histoire de la pensée médicale en Occident, ed. M. Grmek, v. 1. Seuil.
Hansen, G. Armauer. 1875. On the etiology of leprosy. British and Foreign Medico-Chirurgical Review, 55.
Langley, P. & Zytkow, J. 1989. Data-Driven Approaches to Empirical Discovery. Artificial Intelligence, 40: 283-312.
Mitchell, T. 1982. Generalization as Search. Artificial Intelligence, 18: 203-226.
Phineas, S. A. 1889. Analysis of 118 cases of leprosy in the Tarntaran Asylum (Punjab) reported by Gulam Mustafa, Assistant Surgeon. Transactions of the Epidemiological Society of London, v. 9 (1889-1890).
Quinlan, J. R. 1992. C4.5: Programs for Machine Learning. Los Altos, CA: Morgan Kaufmann.
Ridley, D. S. & Jopling, W. H. 1966. Classification of Leprosy According to Immunity, A Five-Group System. International Journal of Leprosy, v. 54, n. 3: 255-273.
Royal College of Physicians. 1867. Report on leprosy. London.
Simon, H. A. 1980. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Skinsnes, O. K. 1973. Immuno-Pathology of Leprosy: The Century in Review. International Journal of Leprosy, v. 41, n. 3.
Waters, M. F. R. 1993. Leprosy 1962-1992, Introduction. Transactions of the Royal Society of Tropical Medicine and Hygiene, 87: 499.
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Scaling Up: Distributed Machine Learning with Cooperation

Foster John Provost
NYNEX Science & Technology
400 Westchester Avenue
White Plains, NY 10604
foster@nynexst.com

Daniel N. Hennessy
Computer Science Department
University of Pittsburgh
Pittsburgh, PA 15260
hennessy@cs.pitt.edu

Abstract

Machine-learning methods are becoming increasingly popular for automated data analysis. However, standard methods do not scale up to massive scientific and business data sets without expensive hardware. This paper investigates a practical alternative for scaling up: the use of distributed processing to take advantage of the often dormant PCs and workstations available on local networks. Each workstation runs a common rule-learning program on a subset of the data. We first show that for commonly used rule-evaluation criteria, a simple form of cooperation can guarantee that a rule will look good to the set of cooperating learners if and only if it would look good to a single learner operating with the entire data set. We then show how such a system can further capitalize on different perspectives by sharing learned knowledge for significant reduction in search effort. We demonstrate the power of the method by learning from a massive data set taken from the domain of cellular fraud detection. Finally, we provide an overview of other methods for scaling up machine learning.

Introduction

Machine-learning techniques are prime candidates for automated analysis of large business and scientific data sets. Large data sets are necessary for higher accuracy (Catlett, 1991b), for learning small disjuncts with confidence, and to avoid over-fitting with large feature sets. However, the standard tools of the machine-learning researcher, such as off-the-shelf learning programs on workstation platforms, do not scale up to massive data sets. For example, Catlett estimates that ID3 (Quinlan, 1986) would take several months to learn from a million records in the flight data set from NASA (Catlett, 1991a). We present a practical method for scaling up to very large data sets that can be guaranteed to learn rules equivalent to those learned by a monolithic learner, a learner operating on the entire data set. In the next section we show how to guarantee that every rule that a monolithic learner would judge to be satisfactory would appear to be satisfactory to at least one of our distributed learners. Next we discuss how distributed rule learners can take advantage of this property by cooperating to ensure that the ensemble learns only rules that are satisfactory over the entire data set. We present results demonstrating the distributed system's ability to scale up when learning from a massive set of cellular fraud data. Later we show how further cooperation can increase the scaling substantially. Finally we discuss other approaches to scaling up.

One solution to this scaling problem is to invest in or to gain access to very powerful hardware. Another is to design alternative methods that can deal better with massive data sets. In this paper, we investigate a third solution, namely, to take advantage of existing processing power distributed across a local network and often under-utilized. In particular, we focus on partitioning the set of examples and distributing the subsets across a network of heterogeneous workstations. We use a standard rule-learning algorithm, modified slightly to allow cooperation between learners. At a high level, our metaphor for distributed learning is one of cooperating experts, each of which has a slightly different perspective on the concept to be learned. We define cooperation as the learning-time sharing of information to increase the quality of the learned knowledge or to reduce or redirect the search. The learners communicate with each other by passing messages. The group can take advantage of the communication by asking questions or by sharing learned knowledge.

Partitioning and Accuracy Estimation

If learning algorithms had access to the probability distribution over the example space, then a useful definition of the quality of a learned rule would be the probability that the class indicated by the rule is correct when its conditions apply to an example. Unfortunately, the probability distribution is not usually available. Thus, statistics from the training set typically are used to estimate the probability that a rule is correct. The positive predictive value discussed by Weiss, et al. (1990), is a frequency-based accuracy estimate; the rule certainty factor used by Quinlan (1987) is a frequency-based accuracy estimate adjusted for small samples; and several rule-learning programs use the Laplace accuracy estimate (Clark & Boswell, 1991; Segal & Etzioni, 1994; Webb, 1995; Quinlan & Cameron-Jones, 1995). We show that a distributed learner can make performance guarantees with respect to each of these rule quality metrics.

It is useful to begin by defining some terms. A rule, r, is a class description that will either cover or not cover each example in a data set, E. Thus, coverage statistics can be determined for r and E. Let P and N be the numbers of positive and negative examples in E. The number of true positives, TP, and the number of false positives, FP, count the positive and negative examples covered by r. For a subset, Ei, of E, Pi, Ni, TPi, and FPi are defined analogously. Let us define a rule evaluation criterion to be the combination of a rule evaluation function, f(r,E), which takes a rule and an example set and produces a scalar evaluation score, and a threshold, c. With respect to the rule evaluation criterion, a rule, r, is satisfactory over an example set, E, if f(r,E) >= c.
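The coverage statistics defined above can be sketched in a few lines of Python. This is an illustrative sketch, not DRL code: the rule is modeled as a simple predicate over examples, and the example data are invented.

```python
# Sketch of the coverage statistics P, N, TP, FP for a rule over a data
# set. A rule is a predicate; each example carries a boolean class label.
def coverage_stats(rule, examples):
    P = sum(1 for x, positive in examples if positive)
    N = len(examples) - P
    TP = sum(1 for x, positive in examples if positive and rule(x))
    FP = sum(1 for x, positive in examples if not positive and rule(x))
    return P, N, TP, FP

# Hypothetical fraud-style examples: (attributes, is_positive).
examples = [({"calls": 40}, True), ({"calls": 5}, False),
            ({"calls": 60}, True), ({"calls": 55}, False)]
rule = lambda x: x["calls"] > 30       # a hypothetical rule

print(coverage_stats(rule, examples))  # -> (2, 2, 2, 1)
```

The same counting applies unchanged to a subset Ei, yielding Pi, Ni, TPi, and FPi.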
Rule evaluation criteria can be defined for each of the three rule quality metrics referenced above by defining the appropriate rule evaluation function. For positive predictive value, f(r,E) = ppv(r,E) = TP/(TP+FP); for the certainty factor used by Quinlan, f(r,E) = cf(r,E) = (TP-0.5)/(TP+FP); for the Laplace accuracy estimate, f(r,E) = le(r,E) = (TP+1)/(TP+FP+k), where k is the number of classes in the data. Given a set of examples, E, and a partition of E into N disjoint subsets, Ei, i = 1..N, the invariant-partitioning property (introduced by Provost & Hennessy (1994)) is the phenomenon that for some rule evaluation criteria the following holds for all partitions of E: if a rule r is satisfactory over E, then there exists an i such that r is satisfactory over Ei. The implication of the invariant-partitioning property is that distributed learning algorithms can be designed such that each processor has only a subset of the entire set of examples, but every rule that would appear satisfactory to a monolithic learner will appear satisfactory to at least one distributed learner. It is straightforward to show that the invariant-partitioning property holds for positive predictive value. Unfortunately, it is also straightforward to show that the property does not hold for the rule certainty factor used by Quinlan or for the Laplace accuracy estimate. However, by extending the property to allow weakened criteria on the subsets we can provide the same performance guarantees for these rule evaluation functions. In particular, given a set of examples, E, a partition of E into N disjoint subsets, Ei, i = 1..N, and a secondary function f'(r,E,N), define a rule to be acceptable over an example subset, Ei, if f'(r,Ei,N) >= c, i.e., the rule is satisfactory with respect to f'.
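The three evaluation functions, and the invariant-partitioning property for positive predictive value, can be checked with a short sketch. The per-subset counts here are simulated, not taken from the paper's data; the assertion holds for any partition because the global ppv is a mediant (weighted average) of the subset values, so at least one subset scores at least as high as the whole set.

```python
import random

def ppv(tp, fp):           # positive predictive value
    return tp / (tp + fp) if tp + fp else 0.0

def cf(tp, fp):            # Quinlan's certainty factor
    return (tp - 0.5) / (tp + fp) if tp + fp else 0.0

def laplace(tp, fp, k=2):  # Laplace accuracy estimate, k classes
    return (tp + 1) / (tp + fp + k)

# Simulated per-subset (TPi, FPi) counts for one rule over N subsets.
random.seed(0)
N = 4
counts = [(random.randint(1, 50), random.randint(1, 50)) for _ in range(N)]
TP = sum(tp for tp, _ in counts)
FP = sum(fp for _, fp in counts)

# Invariant-partitioning for ppv: some subset does at least as well as
# the whole set, so a globally satisfactory rule is locally satisfactory
# on at least one subset.
assert max(ppv(tp, fp) for tp, fp in counts) >= ppv(TP, FP)
```

The analogous check fails for cf and laplace with the unweakened threshold, which is what motivates the extended property below.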
The extended invariant-partitioning property is the phenomenon that for some rule evaluation criteria the following holds for all partitions of E: if a rule r is satisfactory over E, then there exists an i such that r is acceptable over Ei. The usefulness of the extended property hinges on the definition of f'. The global performance guarantee with the extended property is the same as with the original, namely, every rule that is satisfactory to a monolithic learner will be acceptable to at least one distributed learner. With the original property, a rule was acceptable only if it was satisfactory. A weaker definition of acceptability will allow more rules to be found by the distributed learners. Below we utilize cooperation to ensure that spurious rules are eliminated.

We now show that for a non-trivial f' the extended property holds for the Laplace accuracy estimate. Define the Laplace estimate criterion as f(r,E) = le(r,E), and define f'(r,E,N) = le'(r,E,N) = (TP + 1/N)/(TP + FP + k/N). As expected, le'(r,E,N) ~ le(r,E), which means that the criterion used on the subsets is approximately the same as that used on the entire data set. In fact, it is easy to verify that for N = 1, le'(r,E,N) = le(r,E); as N -> infinity, le'(r,E,N) -> ppv(r,E); and for N > 1, le'(r,E,N) is between le(r,E) and ppv(r,E). Assume that for a rule, r, (TP+1)/(TP+FP+k) >= L but, given a partition of N subsets of E, for all i, (TPi + 1/N)/(TPi + FPi + k/N) < L (i.e., r is not acceptable over any Ei). Then:

1) for all i: TPi + 1/N < L * (TPi + FPi + k/N)
2) sum_i (TPi + 1/N) < sum_i (L * (TPi + FPi + k/N))
3) sum_i (TPi + 1/N) < L * sum_i (TPi + FPi + k/N)
4) TP + 1 < L * (TP + FP + k)
5) (TP+1)/(TP+FP+k) < L ==> Contradiction

Furthermore, it can be shown that le' is tight; it is the strongest function for which the extended invariant-partitioning property will hold. By using a similar derivation, it is easy to show that the extended property applies to the certainty factor used by Quinlan. It also applies to the certainty factor normalized for skewed distributions.
Specifically, f(r,E) = cf-normalized(r,E) = (TP-0.5)/(TP+pFP), where p is the ratio of positive examples to negative examples in the training set.

Cooperating Distributed Learners

We have designed and implemented DRL (Distributed Rule Learner) taking advantage of the invariant-partitioning property. DRL partitions and distributes the examples across a network of conventional workstations, each running an instance of a rule learning program. In DRL the learners cooperate based on the communication of partial results to each other. The invariant-partitioning property guarantees that any rule that is satisfactory on the entire data set will be found by one of the sub-learners. Simple cooperation assures that only rules that are satisfactory on the entire data set will be found. Later we will discuss more elaborate cooperation.

RL

DRL is based upon RL (Clearwater & Provost, 1990). RL performs a general-to-specific beam search of a syntactically defined space of rules, similar to that of other MetaDENDRAL-style rule learners (Buchanan & Mitchell, 1978; Segal & Etzioni, 1994; Webb, 1995), for rules that satisfy a user-defined rule evaluation criterion. For this work, we use cf-normalized (defined above). DRL first partitions the training data into N disjoint subsets, assigns each subset to a machine, and provides the infrastructure for communication when individual learners detect an acceptable rule. When a rule meets the evaluation criterion for a subset of the data, it becomes a candidate for meeting the evaluation criterion globally; the extended invariant-partitioning property guarantees that each rule that is satisfactory over the entire data set will be acceptable over at least one subset. As a local copy of RL discovers an acceptable rule, it broadcasts the rule to the other machines to review its statistics over the rest of the examples. If the rule meets the evaluation criterion globally, it is posted as a satisfactory rule.
Otherwise, its local statistics are replaced with the global statistics and the rule is made available to be further specialized. Initially, the review of acceptable rules has been implemented as an additional process that examines the entire data set.

Empirical Demonstration

We have been using a rule-learning program to discover potential indicators of fraudulent cellular telephone calling behavior. The training data are examples of cellular telephone calls, each described by 31 attributes, some numeric, some discrete with hundreds or thousands of possible values. The data set used for the experiments reported here comprises over 1,000,000 examples. High-probability indicators are used to generate subscriber behavior profilers for fraud detection. We chose a set of parameters that had been used in previous learning work on the fraud data for monolithic RL as well as for DRL.

The invariant-partitioning property is observed. In order to examine whether the invariant-partitioning property is indeed observed (as the above theory predicts), we examined the rules learned by the multiple processors in runs of DRL using multiple processes on multiple workstations (as described below) and compared them to the rules learned by a monolithic RL using the union of the DRL processors' data sets. As expected, the individual DRL processes learned different rule sets: some did not find all the rules found by the monolithic RL; some produced spurious rules that were not validated by the global review. However, as predicted by the invariant-partitioning property, the final rule set produced by DRL was essentially the same as the final rule set produced by the monolithic RL. The only difference in the rule sets was that DRL found some extra, globally satisfactory rules not found by RL. This is due to the fact that RL conducts a heuristic (beam) search.
Because of the distribution of examples across the subsets of the partition, some processors found rules that had fallen off the beam in the monolithic search. Thus, the distributed version actually learned more satisfactory rules than the monolithic version, in addition to learning substantially faster.

Scaling up. Figure 1 shows the run times of several different versions of the rule learner as the number of examples increases: monolithic RL (RL), a semi-serial version of DRL, and DRL running on four processors plus a fifth for the rule review. RL's run time increases linearly in the number of examples, until the example set no longer fits in main memory, at which point the learner thrashes, constantly paging the example set during matching. It is possible to create a serial version of DRL that operates on the subsets one after the other on a single machine, in order to avoid memory-management problems when the example sets become large. However, because it does not exhibit true (learning-time) cooperation, there is a significant overhead involved with the further specialization of locally acceptable rules that are not globally satisfactory, which is necessary to guarantee performance equivalent to monolithic RL. Semi-serial-DRL uses a second processor for the rule review, thus avoiding much of the aforementioned overhead. Figure 1 also includes a line corresponding to five times the run time of DRL (DRL*5) to illustrate the efficiency of the distribution. For a fixed number of examples, the run time for each DRL processor does not change significantly as the number of processors increases, suggesting that communication overhead is negligible. For the DRL system used in this demonstration, thrashing set in at just over 300,000 examples (as expected).

Figure 1. Run time vs. number of examples for the fraud data (averages over 3 runs). DRL uses 4 workstations + 1 for rule review.

We are interested in the real-time expense of using such systems, so these are real-time results, generated on a university laboratory network of DECstation 5000's with 32M of main memory. Since the major non-linearity hinges on the amount of main memory, we also experimented with dedicated SPARC 10's with 64M of main memory. For RL's run time, the shape of the graph is the same. Runs with 100,000 examples per processor take approximately 20 minutes on the SPARC 10's; thrashing sets in just under 300,000 examples. This implies that with 5 machines available, DRL can process a million examples while you go get lunch. The semi-serial version of DRL provides a practical method for dealing with very large example sets even when many processors are not available. The invariant-partitioning property allows it to make the same performance guarantees as DRL; the partitioning makes it very efficient by avoiding the scaling problems associated with memory management.

Further cooperation

The study described above uses a simple form of cooperation to provide a guarantee of an output equivalent to that of a monolithic learner operating with the entire data set. In this section we discuss three further ways in which cooperation can be used to benefit a set of distributed rule learners. Specifically, we discuss sharing learned knowledge for (i) maximizing an accuracy estimate, (ii) pruning portions of the rule space that are guaranteed not to contain satisfactory rules, and (iii) pruning portions of the rule space heuristically. The invariant-partitioning property is based on learning rules whose evaluation function is greater than a threshold. Some existing rule learners search for rules that maximize the positive predictive value (Weiss, et al., 1990) or the Laplace estimate (Webb, 1995; Segal & Etzioni, 1994).
DRL can approximate the maximization process by starting with a high threshold and iteratively decreasing the threshold if no rules are found. However, a system of distributed learners can take advantage of cooperation to maximize the rule evaluation function directly. Specifically, each learner keeps track of the score of the globally best rule so far (initially zero). When a learner finds a rule whose local evaluation exceeds the threshold defined by the global best, it sends the rule out for global review. The invariant-partitioning property guarantees that the rule with the maximum global evaluation will exceed the global best-so-far on some processor. Initially there will be a flurry of communication, until a rule is found with a large global evaluation. Communication will then occur only if a learner finds a rule that exceeds this threshold. Another benefit of cooperation is that one learner can reduce its search based on knowledge learned by another learner. A thorough treatment of pruning for rule-learning search is beyond the scope of this paper, but Webb (1995) discusses how massive portions of the search space can be pruned in the admissible search for the rule that maximizes the Laplace accuracy estimate. In a distributed setting, if a learner discovers that a portion of the space is guaranteed not to contain satisfactory rules, it can share this knowledge with the other learners. Consider a simple, intuitive example: we are not interested in rules whose coverage is below a certain level. When a learner finds a rule whose coverage is below threshold, it sends the rule out for review. If the review verifies that the rule is indeed below threshold globally, then the learner shares the rule with the group. It is guaranteed that every specialization of this rule will also be below threshold, so the portion of the rule space below this rule can be pruned. Webb shows how the search space can be rearranged dynamically to maximize the effect of pruning. 
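The guaranteed pruning just described can be sketched as follows. The rule representation (a frozenset of conditions, where specialization adds conditions), the threshold, and the data are illustrative assumptions, not DRL's actual structures; the key point is that coverage is anti-monotone under specialization, so a globally low-coverage rule dooms its entire subtree.

```python
# Sketch of cooperative coverage-based pruning. A rule is a frozenset of
# conditions; a specialization adds conditions, so it covers a subset of
# the examples its parent covers (coverage is anti-monotone).
examples = [frozenset(c) for c in
            ({"a", "b"}, {"a"}, {"a", "b", "c"}, {"b"}, {"c"})]
partitions = [examples[:3], examples[3:]]  # two "learners"
MIN_COVERAGE = 2                           # illustrative threshold

def coverage(rule, subset):
    return sum(1 for ex in subset if rule <= ex)

pruned = set()  # rules whose entire specialization subtree is dead

def global_review(rule):
    # Pool the subset counts; if global coverage is below threshold,
    # share the rule so every learner prunes its specializations.
    if sum(coverage(rule, part) for part in partitions) < MIN_COVERAGE:
        pruned.add(rule)

def should_search(candidate):
    return not any(dead <= candidate for dead in pruned)

global_review(frozenset({"b", "c"}))             # covers only 1 example
print(should_search(frozenset({"b", "c", "d"}))) # -> False (pruned)
print(should_search(frozenset({"a"})))           # -> True
```

Sharing the pruned rule costs one broadcast but saves every learner the entire subtree below it, which is where the leverage of this form of cooperation comes from.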
Cooperation can also be used to reduce search heuristically. Rule-learning programs are used primarily for two types of learning: (i) discovery of rules that individually are interesting to domain experts, e.g., in the fraud domain, and (ii) learning a disjunctive set of rules that are used to build a classifier, e.g., a decision list (Clark & Niblett, 1989). Often the basis for building classifiers is the common "covering" heuristic: iteratively learn rules that cover at least one example not covered by the current rule set (Michalski, et al., 1986; Clark & Niblett, 1989; Segal & Etzioni, 1994). Distributed learning systems can get much leverage from cooperation based on the covering heuristic. Specifically, as individual learners find good rules they can share them with the group. Allowing different learners to search the space in different orders will increase the effect of the cooperation. Consider the following extreme example: a large search space contains 10 rules that together cover the example set, and there are 10 distributed learners each of which starts its search with a different one of these rules. In this case, after each learner searches 1 rule (plus the subsequent review and sharing), the learning is complete. We hypothesize that such cooperation can lead to super-linear speedups over a monolithic learner (cf. work on superlinear speedups for constraint satisfaction problems (Kornfeld, 1982; Clearwater, et al., 1991)).

Figure 2. The effect of the covering heuristic, using 2 workstations + 1 for rule review. /sc denotes simple covering; /cc denotes cooperative covering.

Figure 2 shows the effects of the covering heuristic on the run time of DRL (averages over 3 runs). Let us distinguish between simple covering (/sc), using the covering heuristic within a run of RL to cover an example (sub)set, and cooperative covering (/cc), sharing learned rules to cover the example set by the ensemble of learners. As shown in the figure, for these runs simple covering provided approximately a factor of two speedup over DRL without covering. Cooperative covering provided another factor of two speedup, on the average.

Related Work: Scaling Up Machine Learning

There are several approaches one might take to apply symbolic machine learning to very large problems. A straightforward, albeit limited, strategy for scaling up is to use a fast, simple method. Holte (1993) showed that degenerate decision trees, decision stumps, performed well for many commonly used databases. While the algorithm for learning decision stumps is fast, the method prohibits the learning of complex concepts. A second strategy is to optimize a learning program's search and representation as much as possible, which may involve the identification of constraints that can be exploited to reduce algorithm complexity, or the use of more efficient data structures (Segal & Etzioni, 1994; Webb, 1995). These techniques are complementary to the scaling obtained by distributed processing.

The most common method for coping with the infeasibility of learning from very large data sets is to select a smaller sample from the initial data set. Catlett (1991b) studied a variety of strategies for sampling from a large data set. Despite the advantages of certain sampling strategies, Catlett concluded that they are not a solution to the problem of scaling up to very large data sets. Fayyad, et al. (1993), use sampling techniques, inter alia, to reduce a huge data set (over 3 terabytes of raw data). One method they use is to partition the data set, learn rules from subsamples, and use a covering algorithm to combine the rules. This method is similar to incremental batch learning and coarse-grained parallel methods (both described below). Catlett (1991b; 1992) also found that by looking at subsets when searching for good split values for numeric attributes, the run time of decision-tree learners can be reduced, without a corresponding loss in accuracy. Incremental batch learning (Clearwater, et al., 1989; Provost & Buchanan, 1995), a cross between sampling and incremental learning, processes subsamples of examples in sequence to learn from large training sets. Such an approach is effective for scaling up because even for learners that scale up linearly in the number of examples, if the example set does not fit in main memory, memory-management thrashing can render the learner useless. Such methods can take advantage of the invariant-partitioning property and the covering heuristic to approximate the effects of cooperation, as in a serial version of DRL.

Gaines (1989) analyzed the extent that prior knowledge reduces the amount of data needed for effective learning. Unfortunately, pinpointing a small set of relevant domain knowledge begs the very question of machine learning. Aronis and Provost (1994) use parallelism to enable the use of massive networks of domain knowledge to aid in constructing new terms for inductive learning.

Finally, three approaches to decomposition and parallelization can be identified. First, in rule-space parallelization, the search of the rule space is decomposed such that different processors search different portions of the rule space in parallel (Cook & Holder, 1990). However, this type of parallelization does not address the problem of scaling up to very large data sets. The second parallelization approach, taken by Lathrop, et al. (1990), and by Provost and Aronis (1996), utilizes parallel matching, in which the example set is distributed to the processors of a massively parallel machine. Provost and Aronis show that the parallel-matching approach can scale a rule-learner up to millions of training data.
Our work differs from the massively parallel approaches in that our goal is to take advantage of existing (and often under-utilized) networked workstations, rather than expensive parallel machines. Finally, our work is best categorized by the third approach to parallel learning, the coarse-grained approach, in which the data are divided among a set of powerful processors. Each processor (in parallel) learns a concept description from its set of examples, and the concept descriptions are combined. Brazdil and Torgo (1990) take an approach similar to a distributed version of the approach of Fayyad, et al. (described above), in which a covering algorithm is used to combine rules learned from the subsets, but they do not experiment with very large data sets. Chan and Stolfo (1993) also take a coarse-grained approach and allow different learning programs to run on different processors. Not unexpectedly, as with sampling, such techniques may degrade classification accuracy compared to learning with the entire data set. This degradation has been addressed by learning to combine evidence from the several learned concept descriptions (Chan & Stolfo, 1994). Our method differs from other coarse-grained parallel learners (and from incremental batch learning, and the method of Fayyad, et al.), because it utilizes cooperation between the distributed learners. Cooperation allows guarantees to be made about performance of learned rules relative to the entire data set, and can yield substantial speedups due to sharing of learned knowledge.

Conclusion

We demonstrate a powerful yet practical approach to the use of parallel processing for addressing the problem of machine learning on very large data sets. DRL does not require the use of expensive, highly specialized, massively parallel hardware. Rather, it takes advantage of more readily available, conventional hardware, making it more broadly applicable. Furthermore, DRL provides a performance guarantee.
Preliminary results indicate that we can scale up by another order of magnitude by further optimizing the search of the underlying learning system. For the fraud data, a prototype system that uses spreading activation instead of matching as the basic learning operation learns from 100,000 examples plus hierarchical background knowledge in under 5 minutes, and 1,000,000 examples in five hours (Aronis & Provost, 1996). This suggests that the DRL system (as configured above) using spreading activation instead of matching will learn from 1,000,000 examples in about an hour.

Acknowledgements

John Aronis has been involved in many stimulating discussions on scaling up machine learning. The NYNEX S & T Machine Learning Project and the University of Pittsburgh Dept. of Medical Informatics provided support.

References

Aronis, J. M., & Provost, F. J. (1994). Efficiently Constructing Relational Features from Background Knowledge for Inductive Machine Learning. In Proceedings of the AAAI-94 Workshop on KDD.
Aronis, J., & Provost, F. (1996). Using Spreading Activation for Increased Efficiency in Inductive Learning. Intelligent Systems Lab, Univ. of Pittsburgh, Tech Report ISL-96-7.
Brazdil, P., & Torgo, L. (1990). Knowledge Acquisition via Knowledge Integration. In Wielinga (ed.), Current Trends in Knowledge Acquisition, 90-104. Amsterdam: IOS Press.
Buchanan, B., & Mitchell, T. (1978). Model-directed Learning of Production Rules. In Waterman & Hayes-Roth (eds.), Pattern-Directed Inference Systems. Academic Press.
Catlett, J. (1991a). Megainduction: a Test Flight. In Proceedings of the Eighth International Workshop on Machine Learning, p. 596-599. Morgan Kaufmann.
Catlett, J. (1991b). Megainduction: machine learning on very large databases. Ph.D. Thesis, University of Technology, Sydney.
Catlett, J. (1992). Peepholing: choosing attributes efficiently for megainduction. In Proceedings of the Ninth Int. Conf. on Machine Learning, 49-54. Morgan Kaufmann.
Chan, P., & Stolfo, S. (1993). Toward Parallel and Distributed Learning by Meta-Learning. In Proceedings of the AAAI-93 Workshop on KDD.
Chan, P., & Stolfo, S. (1994). Toward Scalable and Parallel Inductive Learning: A Case Study in Splice Junction Prediction. In the working notes of the ML-94 Workshop on Machine Learning and Molecular Biology.
Clark, P., & Boswell, R. (1991). Rule Induction with CN2: Some Recent Improvements. In Proceedings of the Fifth European Working Session on Learning, p. 151-163.
Clark, P., & Niblett, T. (1989). The CN2 Induction Algorithm. Machine Learning, 3, p. 261-283.
Clearwater, S., Cheng, T., Hirsh, H., & Buchanan, B. (1989). Incremental batch learning. In Proc. of the 6th Int. Wkshp on Machine Learning, 366-370. Morgan Kaufmann.
Clearwater, S., Huberman, B., & Hogg, T. (1991). Cooperative Solution of Constraint Satisfaction Problems. Science, 254, p. 1181-1183.
Clearwater, S., & Provost, F. (1990). RL4: A Tool for Knowledge-Based Induction. In Proc. of the 2nd Int. IEEE Conf. on Tools for AI, p. 24-30. IEEE C.S. Press.
Cook, D., & Holder, L. (1990). Accelerated Learning on the Connection Machine. In Proc. of the 2nd IEEE Symp. on Parallel and Distributed Processing, p. 448-454.
Fayyad, U., Weir, N., & Djorgovski, S. (1993). SKICAT: A Machine Learning System for Automated Cataloging of Large Scale Sky Surveys. In Proc. of the Tenth Int. Conf. on Machine Learning, p. 112-119. Morgan Kaufmann.
Gaines, B. R. (1989). An Ounce of Knowledge is Worth a Ton of Data. In Proc. of the Sixth Int. Workshop on Machine Learning, p. 156-159. Morgan Kaufmann.
Holte, R. C. (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning, 11(1), p. 63-90.
Kornfeld, W. A. (1982). Combinatorially Implosive Algorithms. Comm. of the ACM, 25(10), p. 734-738.
Lathrop, R. H., Webster, T. A., Smith, T. F., & Winston, P. H. (1990). ARIEL: A Massively Parallel Symbolic Learning Assistant for Protein Structure/Function. In AI at MIT: Expanding Frontiers. Cambridge, MA: MIT Press.
Michalski, R., Mozetic, I., Hong, J., & Lavrac, N. (1986). The Multi-purpose Incremental Learning System AQ15 and its Testing Application to Three Medical Domains. In Proceedings of AAAI-86, p. 1041-1045. AAAI Press.
Provost, F. J., & Aronis, J. (1996). Scaling Up Inductive Learning with Massive Parallelism. Machine Learning, 23.
Provost, F. J., & Buchanan, B. G. (1995). Inductive Policy: The Pragmatics of Bias Selection. Machine Learning, 20(1/2), p. 35-61.
Provost, F., & Hennessy, D. (1994). Distributed machine learning: scaling up with coarse-grained parallelism. In Proceedings of the Second International Conference on Intelligent Systems for Molecular Biology.
Quinlan, J. (1986). Induction of Decision Trees. Machine Learning, 1, p. 81-106.
Quinlan, J. (1987). Generating production rules from decision trees. In Proceedings of IJCAI-87, p. 304-307. Morgan Kaufmann.
Quinlan, J. R., & Cameron-Jones, R. M. (1995). Oversearching and Layered Search in Empirical Learning. In Proc. of IJCAI-95, 1019-1024. Morgan Kaufmann.
Segal, R., & Etzioni, O. (1994). Learning Decision Lists using Homogeneous Rules. In Proceedings of AAAI-94, p. 619-625. AAAI Press.
Webb, G. I. (1995). OPUS: An Efficient Admissible Algorithm for Unordered Search. Journal of Artificial Intelligence Research, 3, p. 431-465.
Weiss, S. M., Galen, R. S., & Tadepalli, P. V. (1990). Maximizing the Predictive Value of Production Rules. Artificial Intelligence, 45, p. 47-71.
Machine Discovery Based on Numerical Data Generated in Computer Experiments

Tsuyoshi Murata and Masamichi Shimura
Department of Computer Science
Graduate School of Information Science and Engineering
Tokyo Institute of Technology
2-12-1 Oh-okayama, Meguro, Tokyo 152, JAPAN
{murata, shimura}@cs.titech.ac.jp

Abstract

In the discovery of useful theorems or formulas, experimental data acquisition plays a fundamental role. Most of the previous discovery systems which have the abilities for experimentation, however, require much knowledge for evaluating experimental results, or require plans of common experiments which are given to the systems in advance. Only a few systems have attempted to make experiments which enable discovery based on acquired experimental data without depending on given initial knowledge. This paper proposes a new approach for discovering useful theorems in the domain of plane geometry by employing experimentation. In this domain, drawing a figure and observing it correspond to making experiments, since these two processes are preparations for acquiring geometrical data. EXPEDITION, a discovery system based on experimental data acquisition, generates figures by itself and acquires expressions describing relations among line segments and angles in the figures. Such expressions can be extracted from the numerical data obtained in the computer experiments. By using simple heuristics for drawing and observing figures, the system succeeds in discovering many new useful theorems and formulas as well as rediscovering well-known theorems, such as the power theorems and Thales' theorem.

Introduction

Since the beginning of human history, scientists have discovered many useful theorems and formulas from the data acquired by experimentation.
Zytkow and Baker (Zytkow & Baker 1991) pointed out the advantages of experimentation for discovery: 1) experimentation provides an abundance of data, 2) extremely accurate data can be acquired by experimentation, 3) an experimenter can create special situations that are otherwise not available, and 4) an experimenter can create simple experimental situations so that empirical regularities are easy to discover. The abilities of experimentation are expected to carry these advantages over to a discovery system just as they do for a human scientist.

Although many discovery systems focus on experimentation as a method of interaction with the external world, most of these systems require a considerable amount of given knowledge. KEKADA (Kulkarni & Simon 1988) focuses its attention on surprising phenomena to constrain the search space of experimentation. In order to detect surprising phenomena, however, the system needs to have knowledge about ordinary experimental results. DEED (Rajamoney 1993) designs experiments which discriminate between two competing theories. Since the system is based on the difference between the causal explanations of the competing theories, it is not applicable to situations where no such theories exist.

In order to make experiments, the ability to plan experimental procedures is important. Such abilities have been incorporated in some discovery systems which employ experimentation, including MOLGEN (Friedland 1979) and STERN (Cheng 1992). Most of these systems, however, need prescribed domain-dependent experimental plans which are given to the systems in advance.

For discovering new theorems by a deductive process, domain knowledge is very important for generating a set of theorematic candidates. Furthermore, many heuristics are needed to plan the experiments for obtaining appropriate data and to avoid combinatorial explosion in the search space.
In knowledge-intensive systems such as AM (Lenat 1983), one of the well-known discovery systems, given knowledge is combined or mutated to generate new theorems for discovery. Since the generation of the desired theorems depends heavily on the given knowledge, the system may be unable to discover useful theorems owing to a lack of given knowledge. The theorems and formulas discovered are also sometimes restricted in domain, since the possible methods of experimentation depend entirely on such knowledge.

As described before, experimentation generally provides an abundance of data from the external environment. Discovery based on experimental data acquisition, therefore, is more desirable for a discovery system, because this method is expected to make up for missing initial knowledge by discovering knowledge from observed experimental data. Especially in the domain of plane geometry, much data from which useful theorems can be extracted is obtained by drawing figures and finding their geometrical relations.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

As Shrager and Langley (Shrager & Langley 1990) pointed out, a discovery system for mathematics is unusual compared with a system for physics or chemistry, in that it can generate data internally rather than observing them in a real or simulated environment. This means that a discovery system for mathematics has the property of making internal experiments with less knowledge of experimentation than systems for other domains. A discovery system for plane geometry, which is our target, is also able to take advantage of this property by generating figures and observing them within the system.

This paper proposes a new approach for discovering useful theorems in the domain of plane geometry by employing experimentation.
EXPEDITION, a discovery system based on EXPErimental Data acquisITION, generates figures automatically by drawing lines one by one, and observes the figures in order to extract numerical data. From the numerical data, expressions about line segments and angles are acquired. Although many expressions are acquired from a figure, the expressions about line segments and angles which are newly generated by the last additional line are regarded as useful by the system, since these expressions cannot be acquired from the figure before drawing the line. With only two simple heuristics for drawing and observing figures, EXPEDITION succeeds in discovering many useful theorems as well as rediscovering well-known theorems such as the power theorems and Thales' theorem.

Discovery based on the comparison of experimental results

In order to clarify the role of experiments in discovering knowledge, the processes of actual discovery have been investigated. Records of actual discovery processes, such as laboratory notes and the recollections of a discoverer, have often been used as bases for developing discovery systems. There are two approaches to the study of actual human discovery. One involves the analysis of the historical records of real scientists, and the other involves the analysis of the behavior of subjects who are working on a discovery task, such as discovering the mechanism of a device or of a chemical reaction.

Dunbar (Dunbar 1993) analyzed the experimental processes of subjects who were asked to discover how genes were controlled by using a simulator of genes. Klahr et al. (Klahr, Dunbar, & Fay 1990) used a computer-controlled robot tank, which can be programmed with a sequence of commands, as a device for a discovery task. Subjects were asked to discover the operation of an unknown command.
When they observed the behavior of the robot tank whose program included the unknown command, most of them realized that a part of the commands in the program was executed repeatedly. They then executed similar programs whose numerical parameters differed from the previous program, and compared the results to clarify the range of the repetition. This conservative strategy is called the VOTAT (vary one thing at a time) experimental strategy. Schunn and Klahr (Schunn & Klahr 1995) obtained experimental data using a simulator called MilkTruck. The subjects of this research also conducted sequences of similar programs to discover the operation of unknown commands.

As seen above, analyzing and comparing the results obtained from similar experiments are very important in evaluating the results and in discovering new knowledge or theorems, even if initial background knowledge is not fully available. Such a mechanism of comparison, therefore, is essential and desirable in a discovery system employing experimentation, in order to detect regularity and peculiarity in the results obtained. In the domain of plane geometry, a system which operates based on comparison of its experimental results is therefore expected to be able to discover useful theorems from the various data of figures generated in its experiments.

Discovery based on experimental data acquisition

Drawing figures

Figures often enable the detection of visual information such as neighborhood relations and relative size. This property is called the emergent property (Koedinger 1992), and it is one of the reasons humans use figures for solving problems. By drawing figures and observing them, a discovery system for plane geometry is also able to acquire much geometrical data.

In order to draw various figures for the acquisition of data, lines are added one by one on a given base figure.
In this paper, a circle is chosen as the base figure since many interesting figures can be drawn from a circle. To guide line drawing, focus points are introduced, such as the center of a circle, a point on the circumference, contact points, and intersection points. Lines are drawn in the following way according to the focus points:

From a focus point outside the circle:
- draw a tangential line to the circle
- draw a line which passes through the center of the circle
- draw an arbitrary line which has common points with the circle

From a focus point on the circumference of the circle:
- draw a tangential line which touches the circle at the focus point
- draw a line which passes through the center of the circle
- draw an arbitrary line to a point on the circumference

From a focus point inside the circle:
- draw a line which passes through the center of the circle
- draw an arbitrary line which has common points with the circle

Figure 1 shows some of the figures drawn in the above way. Dots in the figures indicate focus points. [Figure 1: Drawing figures by adding lines on a circle]

Acquisition of theorematic candidates

By observing a figure drawn by the above procedure, numerical data are acquired, such as the lengths of line segments and the measures of angles. The lengths of line segments, and the sums and products of the lengths of two arbitrary line segments, are listed from the data. An expression, which we call a theorematic candidate, is acquired from two entries with approximately equal numerical values in the list. For example, in the figure shown in Figure 2, the theorematic candidate AB² = AD · AE is acquired because the observed values AB · AB = 121.0 and AD · AE = 120.4 are approximately equal (with AB = 11.0, AD = 8.3, AE = 14.5). [Figure 2: Acquisition of a theorematic candidate]

From the data of angles, theorematic candidates are acquired in the same manner.
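The candidate-acquisition step above can be sketched in a few lines of Python. The function name, the segment names, and the tolerance are our own illustrative assumptions, not EXPEDITION's actual code, and only products are listed here (the system also lists the lengths themselves and their sums):

```python
from itertools import combinations

def theorematic_candidates(segments, tol=0.01):
    """Hypothesize equalities among products of measured segment lengths.

    `segments` maps a name like "AB" to its measured length; any two list
    entries with approximately equal values yield one candidate expression.
    """
    names = sorted(segments)
    # List every pairwise product (AB*AB, AB*AD, ...), as in Figure 2.
    products = {}
    for a in names:
        for b in names:
            if a <= b:
                products[f"{a}*{b}"] = segments[a] * segments[b]
    candidates = []
    for (e1, v1), (e2, v2) in combinations(products.items(), 2):
        if abs(v1 - v2) <= tol * max(v1, v2):   # approximately equal entries
            candidates.append(f"{e1} = {e2}")
    return candidates

# Measured lengths from Figure 2: AB*AB (121.0) is close to AD*AE (120.35).
print(theorematic_candidates({"AB": 11.0, "AD": 8.3, "AE": 14.5}))
```

On the Figure 2 data this recovers exactly the candidate AB·AB = AD·AE discussed above.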
The following obvious relations are included among the acquired theorematic candidates, which means our approach succeeds in discovering them:
- Radii (diameters) of a circle are equal.
- A diameter is twice a radius.
- The sum of divided line segments is equal to the original line segment.
- If A, B and C are three collinear points, the measure of the angle ∠ABC is 180°.
- The sum of divided angles is equal to the original angle.
- The sum of the measures of the three angles of a triangle is 180°.

Selection of useful theorematic candidates

Many theorematic candidates are acquired from a figure. As additional lines are drawn on a figure, the number of line segments and angles increases, and the number of their combinations increases accordingly. As a result, numerous theorematic candidates can be acquired from a complicated figure composed of many lines. To obtain only useful theorems from the many acquired theorematic candidates, it is important to select useful ones.

Let us focus on the relations about line segments and angles which are newly generated by drawing an additional line on a figure. Since theorematic candidates about the newly generated line segments and angles cannot be acquired from the figure before drawing the additional line, such candidates can be considered useful. [Figure 3: Selection of useful theorematic candidates]

Figure 3 shows a sequence of figures and the corresponding useful theorematic candidates. In the middle figure of Figure 3, the expression AB = AC is regarded as a useful theorematic candidate since it shows a relation about the newly generated line segment AC. In the right figure, the expression ∠OAB = ∠OAC is regarded as useful for the same reason. By focusing on the relations about newly generated line segments and angles, combinatorial explosion is avoided and discovery from complicated figures is also enabled.
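The selection step amounts to a simple filter over the candidate expressions; the helper below and its argument names are hypothetical, not taken from the paper:

```python
def useful_candidates(candidates, new_elements):
    """Keep only candidates that mention a line segment or angle
    created by the last additional line (names are illustrative)."""
    return [c for c in candidates
            if any(name in c.split() for name in new_elements)]

# Middle figure of Figure 3: segment AC is newly generated by the last line,
# so only the candidate mentioning AC survives.
print(useful_candidates(["AB = AC", "OA = OB"], {"AC"}))
```

This pruning is what keeps the number of retained candidates manageable as figures grow more complicated.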
DST (Murata, Mizutani, & Shimura 1994) is one of the discovery systems in the domain of plane geometry. It discriminates line segments and angles which are generated by auxiliary lines. Such line segments and angles, called subproducts, are eliminated from acquired expressions by transformation in order to discover theorems which include no subproduct. DST draws auxiliary lines only for the purpose of extracting the data of line segments and angles which already exist before drawing the lines. In contrast, the approach proposed here draws additional lines for the purpose of extracting the data of newly generated line segments and angles. Since additional lines are regarded as constituents of a figure, our new approach enables discovery from a wider variety of figures.

Verification of theorematic candidates

Theorematic candidates which hold only for the original figure, the figure from which they were acquired, are not true theorems. To remove such candidates, every candidate from the original figure is tested on other figures which topologically resemble the original figure. Such figures are re-drawn by adding lines in the same order as in the original figure, because they are used for making further experiments resembling the one based on the original figure. Since an additional line is drawn at random in length and in direction, re-drawn figures are, in general, partly different from the original figure. As a result of these experiments, a theorematic candidate which holds for all the figures is regarded as a useful theorem of great generality.

Repetitive experiments are often carried out by human scientists as well, in order to test whether a surprising phenomenon observed for one substance is exhibited generally by other substances of the same class. Such experiments are necessary for assessing the scope of the phenomenon.
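The verification step can be mimicked numerically. The sketch below re-draws a hypothetical tangent/secant figure (a unit circle at the origin, an external point A, tangent point B, and a random secant through A meeting the circle at D and E) and keeps a candidate only if it holds on every re-drawing. The construction, parameter ranges, and thresholds are our own assumptions, not EXPEDITION's:

```python
import math
import random

def redraw_and_measure(rng):
    """One random re-drawing of the tangent/secant figure; returns the
    observed lengths AB (tangent), AD (near intersection), AE (far)."""
    a = rng.uniform(1.5, 4.0)                    # A = (a, 0) outside the circle
    ab = math.sqrt(a * a - 1.0)                  # tangent length |AB|
    theta = rng.uniform(-0.9, 0.9) / a           # secant direction; hits circle
    disc = math.sqrt(a * a * math.cos(theta) ** 2 - a * a + 1.0)
    ad = a * math.cos(theta) - disc              # |AD|
    ae = a * math.cos(theta) + disc              # |AE|
    return {"AB": ab, "AD": ad, "AE": ae}

def holds_for_all_redrawings(candidate, trials=5, seed=0):
    """A candidate survives only if it holds on every re-drawn figure."""
    rng = random.Random(seed)
    return all(candidate(redraw_and_measure(rng)) for _ in range(trials))

power = lambda m: abs(m["AB"] ** 2 - m["AD"] * m["AE"]) < 1e-9 * m["AB"] ** 2
bogus = lambda m: abs(m["AB"] - m["AD"]) < 1e-9   # holds only by coincidence

print(holds_for_all_redrawings(power), holds_for_all_redrawings(bogus))
```

The power-of-a-point candidate AB² = AD·AE survives every re-drawing, while a coincidental equality is invalidated by the first re-drawn figure, which mirrors how a few re-drawings suffice in practice.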
Drawing figures which resemble the original figure can be considered as making supplementary experiments for verifying the generality of discovered theorems. However, unlike the repetitive experiments of previous discovery systems, re-drawing figures is very simple and requires little domain knowledge.

Experimental results

We have developed EXPEDITION, a discovery system based on experimental data acquisition, using the approach proposed above. EXPEDITION succeeds in discovering many useful theorems as well as rediscovering well-known theorems about figures which include a circle. Figure 4 shows some of the figures generated by our system. [Figure 4: Figures for rediscovering theorems] From these figures, the following well-known theorems are rediscovered by interpreting the acquired expressions:
- A tangential line to a circle is perpendicular to the radius (diameter) drawn to the contact point. (∠AHO = 90°)
- Two line segments from a point outside a circle to its contact points are equal. (AB = AC)
- A line from the vertex of an angle to the center of the inscribed circle is a bisector of the angle. (∠OAB = ∠OAC)
- An angle of a triangle inscribed in a circle is equal to the angle between the chord opposite to the angle and the tangential line which touches the circle at the end point of the chord. (∠DEB = ∠DBA, ∠EDB = ∠EBC)
- Power theorems. (AB · AC = AH², BE · EC = DE · EH)
- Thales' theorem. (∠ACB = 90°)
- The sum of the measures of two opposite angles of an inscribed quadrilateral is 180°. (∠ABC + ∠CDA = 180°, ∠BCD + ∠DAB = 180°)
- Inscribed angles in a circle are equal when the end points of their sides, excluding their vertices, are the same. (∠BAC = ∠BDC, ∠ABD = ∠ACD)

Moreover, EXPEDITION discovers many other theorems which are not found in a conventional book of geometry. From the figures shown in Figures 5 and 6, the following theorems (1) and (2) are discovered respectively:
[Figure 5: A figure for discovering theorem (1)]
[Figure 6: A figure for discovering theorem (2)]

∠ABD + ∠BDC = ∠DCE  (1)
∠OAC + ∠ABC = 90°  (2)

Although these theorems can be proved easily, it is quite interesting that EXPEDITION draws the figures by itself and finds these expressions as useful ones. Many theorems about line segments are also discovered by the system. From the figure shown in Figure 7, the following simple and elegant theorem is discovered: [Figure 7: A figure for discovering theorem (3)]

AD · BE = AB · DE  (3)

In order to deduce this theorem using geometrical relations such as similarity and congruence, the addition of auxiliary lines and complicated transformations of expressions are required. The fact that EXPEDITION discovers such a theorem only from observed data shows that our approach is quite useful and that the system has advanced abilities for discovery.

From the figures shown in Figures 8 and 9, the theorems {(4), (5)} and {(6), (7), (8), (9), (10), (11), (12)} are discovered respectively: [Figure 8: A figure for discovering theorems (4) and (5)] [Figure 9: A figure for discovering theorems (6) to (12)]

CD · ED = OB² + ED²  (4)
CE · ED = OB²  (5)
AE · AF = AC² + AD · EF  (6)
AD · AF = AE² + CE · EB  (7)
AD · AF = AE² + DE · EF  (8)
EF · AF = AD · DE + DF²  (9)
AE · EF = DE · AF + CE · EB  (10)
AD · EF = AE · DE + CE · EB  (11)
AD · EF = DE · AF  (12)

In the domains of physics and chemistry, an expression which holds for all similar experiments is considered a true law. Similarly, an expression which holds for all re-drawn figures is regarded as a true theorem in our system. Practically, there is no need to re-draw figures many times; re-drawing only a few times is enough for verifying theorematic candidates. From the figure shown in Figure 9, for example, 23 nontrivial theorematic candidates were hypothesized at first.
By observing only one re-drawn figure, 10 candidates were invalidated, and all the remaining ones, including the theorems described above, were actually true. It must be noted that discovery from figures which include no similar or congruent triangles, such as Figures 8 and 9, is also realized in our system.

Discussion

Most of the previous discovery systems employ heuristics for controlling their search in order to avoid combinatorial explosion. Such heuristics, however, often require a considerable amount of knowledge. In EXPEDITION, only the following two heuristics are used:
- drawing figures by adding lines one by one
- focusing on the expressions about line segments and angles which are newly generated by the last additional line

The former enables the system to draw various figures automatically and to acquire data by observing the figures. The latter avoids combinatorial explosion without using knowledge for search. Although neither heuristic requires domain knowledge, both contribute greatly to the discovery of many useful theorems.

Figures are generated from simple ones to complicated ones by drawing lines one by one. Generating figures in this way enables the system to discover theorems about various figures efficiently without using much knowledge. The system sets up hypotheses about the relations among line segments and angles, what we call theorematic candidates in this paper, using numerical data acquired from a figure. In order to verify the theorematic candidates, figures which resemble the original figure are re-drawn in the same order as the original figure. Since these experiments are made internally, the system does not need much knowledge for experimentation. This approach is based on a generate-and-test procedure and is suitable for machine discovery.

In general, geometrical theorems are often deduced using expressions acquired from geometrical relations such as similarity and congruence. In order to deduce theorems from a figure which has no such geometrical relations, auxiliary lines which generate the relations have to be drawn on the figure. However, drawing appropriate auxiliary lines is a very difficult and tricky task. By using numerical data, EXPEDITION discovers theorems which are difficult to deduce from the expressions of such geometrical relations alone.

Some previous work on theorem proving, such as Gelernter's geometry-theorem proving machine (Gelernter 1963) and the DC model (Koedinger & Anderson 1990), also uses geometrical data observed from figures in order to deduce geometrical theorems. Gelernter's system uses figures to prune invalid geometrical relations that are generated by the backward search. In the DC model, figures are used to generate hypotheses which are pruned by using domain knowledge. Our approach differs from both of these in that figures are used both for generating hypotheses and for validating them. Therefore, EXPEDITION is able to acquire theorems without depending on given domain knowledge.

Conclusion

We have described an approach for discovering useful theorems in the domain of plane geometry by employing experimentation. EXPEDITION, which we developed, succeeds in discovering many useful theorems as well as rediscovering well-known theorems such as the power theorems and Thales' theorem.

The success of EXPEDITION shows that experimentation plays an important role in discovering theorems. In general, an empirical method of scientific discovery requires several processes, such as making experimental plans, acquiring data by experimentation, setting up appropriate hypotheses, and verifying the hypotheses.
Since our system draws figures, which corresponds to making experiments, it does not need knowledge for making experimental plans; it discovers theorems using nothing but the heuristic of drawing figures and the heuristic of focusing on newly generated line segments and angles.

In the domains of physics and chemistry, numerous experimental data acquired on the basis of domain knowledge are used for discovering useful laws. A discovery system which simulates human discovery processes in such domains requires much knowledge and many heuristics. On the other hand, in the domain of mathematics, especially in plane geometry, expressions acquired from domain axioms or from observed figures are used with insight for discovering theorems and formulas. In other words, laws in physics and chemistry are discovered inductively, while theorems in mathematics are discovered deductively. Although EXPEDITION acquires expressions from numerical data in an inductive way, the system actually discovers novel theorems in the domain of plane geometry without using much knowledge. Such inductive discovery is desirable for the various domains in which computer-controlled experimentation is available.

References

Cheng, P. C.-H. 1992. Diagrammatic Reasoning in Scientific Discovery: Modelling Galileo's Kinematic Diagrams. Technical Report SS-92-02, 1992 AAAI Spring Symposium, Reasoning with Diagrammatic Representations, 33-38.

Dunbar, K. 1993. Concept Discovery in a Scientific Domain. Cognitive Science 17(3):397-434.

Friedland, P. 1979. Knowledge-based Experiment Design in Molecular Genetics. In Sixth International Joint Conference on Artificial Intelligence, 285-287.

Gelernter, H. 1963. Realization of a geometry-theorem proving machine. In Feigenbaum, E. A., and Feldman, J., eds., Computers and Thought. McGraw-Hill. 134-152.

Klahr, D.; Dunbar, K.; and Fay, A. L. 1990. Designing Good Experiments To Test Bad Hypotheses.
In Shrager, J., and Langley, P., eds., Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann. Chapter 12, 355-402.

Koedinger, K. R., and Anderson, J. R. 1990. Abstract planning and perceptual chunks: Elements of expertise in geometry. Cognitive Science 14(4):511-550.

Koedinger, K. R. 1992. Emergent Properties and Structural Constraints: Advantages of Diagrammatic Representations for Reasoning and Learning. Technical Report SS-92-02, 1992 AAAI Spring Symposium, Reasoning with Diagrammatic Representations, 151-156.

Kulkarni, D., and Simon, H. A. 1988. The Processes of Scientific Discovery: The Strategy of Experimentation. Cognitive Science 12(2):139-175.

Lenat, D. B. 1983. The Role of Heuristics in Learning by Discovery: Three Case Studies. In Michalski, R. S.; Carbonell, J. G.; and Mitchell, T. M., eds., Machine Learning: An Artificial Intelligence Approach. Tioga. 243-306.

Murata, T.; Mizutani, M.; and Shimura, M. 1994. A Discovery System for Trigonometric Functions. In Proceedings, Twelfth National Conference on Artificial Intelligence, 645-650. The AAAI Press.

Rajamoney, S. A. 1993. The Design of Discrimination Experiments. Machine Learning 12:185-203.

Schunn, C., and Klahr, D. 1995. A 4-Space Model of Scientific Discovery. Technical Report SS-95-03, 1995 AAAI Spring Symposium, Systematic Methods of Scientific Discovery, 40-45.

Shrager, J., and Langley, P. 1990. Computational Approaches To Scientific Discovery. In Shrager, J., and Langley, P., eds., Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann. Chapter 1, 1-25.

Zytkow, J. M., and Baker, J. 1991. Interactive Mining of Regularities in Databases. In Piatetsky-Shapiro, G., and Frawley, W. J., eds., Knowledge Discovery in Databases. The AAAI Press. 31-53.
Using a Genetic Algorithm for Metabolic Modeling

John Yen and Bogju Lee
Center for Fuzzy Logic, Robotics, and Intelligent Systems
Department of Computer Science
Texas A&M University
College Station, TX 77843-3112
yen@cs.tamu.edu

James C. Liao
Department of Chemical Engineering
Texas A&M University
College Station, TX 77843-3122

Abstract

The identification of metabolic systems is a complex task due to the complexity of the system and limited knowledge about the model. Mathematical equations and ODEs have been used to capture the structure of the model, and conventional optimization techniques have been used to identify the parameters of the model. In general, however, a pure mathematical formulation of the model is difficult due to parametric uncertainty and incomplete knowledge of mechanisms. In this paper, we propose a modeling approach that (1) uses a fuzzy rule-based model to augment algebraic enzyme models that are incomplete, and (2) uses a hybrid genetic algorithm to identify uncertain parameters in the model. The hybrid genetic algorithm (GA) integrates a GA with the simplex method in functional optimization to improve the GA's convergence rate. We have applied this approach to modeling the rates of three enzyme reactions in E. coli central metabolism. The proposed modeling strategy allows (1) easy incorporation of qualitative insights into a pure mathematical model and (2) adaptive identification and optimization of key parameters to fit system behaviors observed in biochemical experiments.

Introduction

Very often, chemical reactions happen as a series of steps instead of as a single basic action. Therefore, a research problem in chemistry has been to capture and describe the series of steps, called the pathway, of a chemical reaction. To do this, chemical engineers perform experiments with the reaction: they measure the overall stoichiometry, detect reaction intermediates, hypothesize relations among the products, plot concentrations over time, and so on.
A classic example of this in biomodeling is the pathway of the glucose metabolic model, which is shown in Figure 1. Each node describes a metabolite participating in the pathway, while each reaction is shown in the pathway as an arrow, labeled by the variable v denoting the rate of the reaction. [Figure 1: Pathway of the glucose metabolic model]

Extensive studies have unveiled numerous functions crucial to living cells, such as metabolic pathways, enzyme actions, gene regulation, and global physiological controls. Several attempts have been reported to simulate or predict system behavior based on individual component models. For example, enzyme kinetic equations have been derived and assembled to model metabolic pathways (Achs & Garfinkel 1977; Heinrich & Rapoport 1974; Liao et al. 1988); components of DNA replication and gene expression have been modeled to simulate the replication of plasmids (Lee & Baily 1984; Straus, Walter, & Gross 1988); and key aspects of cellular functions have been represented mathematically to describe overall cellular behavior (Schuler & Domach 1983). On the other hand, descriptive models, whether unstructured, structured, or based on optimization principles, have been developed (Fredrickson 1976; Kompala, Ramkrishna, & Tsao 1984; Ramkrishna 1983). As a consequence of the reductionist approach and the fast progress of molecular biology, mechanisms at the molecular level are reasonably well established. These molecular mechanisms are combined to explain system behavior, most often in an intuitive manner. For example, regulation of enzyme activity has been used to explain the regulation of metabolic pathways, and the action of each component in a gene regulation network, or regulon, is used to explain the overall response of the network.

This intuitive approach has been successful to the extent of a first approximation, but it rapidly becomes unsatisfactory as one demands a detailed explanation of system behavior. Furthermore, when an explanation based on an intuitive synthesis of molecular mechanisms fails, it is difficult to determine whether the observation is a manifestation of a novel molecular mechanism or a complex interaction of known mechanisms. Some observations cannot be explained simply by an intuitive synthesis of existing mechanisms. In general, complete mechanistic models are rare because of parametric uncertainty and incomplete knowledge of mechanisms, whereas descriptive models lack the ability to link component properties to system behavior. Moreover, when a model prediction does not agree with experimental observations, it is difficult to distinguish between errors in parameters and errors in model structure.

In this paper, we propose a modeling approach that (1) uses a fuzzy rule-based model to augment algebraic enzyme models that are incomplete, and (2) uses a hybrid genetic algorithm to identify uncertain parameters in the model. We have applied this approach to modeling the rates of enzyme reactions in E. coli central metabolism. The proposed modeling strategy allows (1) easy incorporation of qualitative insights into a pure mathematical model and (2) adaptive identification and optimization of key parameters to fit system behaviors observed in biochemical experiments.

The next section describes the basics of fuzzy logic-based modeling and the hybrid genetic algorithm (GA), which integrates a GA with the simplex method in functional optimization to improve the GA's convergence rate.
We then describe the proposed modeling strategy and its application to modeling E. coli central metabolism. Finally, we discuss issues to be addressed in our future research and make some concluding remarks.

Background

Fuzzy Logic-based Modeling

It has been demonstrated that fuzzy modeling can be used to model complex systems that are not well understood (Takagi & Sugeno 1985; Sugeno & Kang 1988). The main contribution of fuzzy logic to system modeling is to introduce a new paradigm of modeling through three closely related fundamental concepts: fuzzy partitions, fuzzy rules, and interpolative reasoning. A fuzzy partition divides an input space into partially overlapping regions using fuzzy sets. Each subregion is associated with a local model for the region through a fuzzy rule. In areas where subregions partially overlap, the corresponding local models are combined to form a global model through a process (called interpolative reasoning or fuzzy inference) that is analogous to linear interpolation.

A fuzzy partition generalizes a classical partition, which divides a space into a collection of disjoint subspaces, by allowing smooth transitions from one subspace into a neighboring one. This is accomplished using fuzzy sets, which were developed by Lotfi A. Zadeh to allow objects to take partial membership in a vague concept (i.e., a concept without sharp boundaries) (Zadeh 1965). The degree to which an object belongs to a fuzzy set, which is a real number between 0 and 1, is called the membership value in the set. The meaning of a fuzzy set is thus characterized by a membership function that maps elements in a universe of discourse (i.e., the domain of interest) to their corresponding membership values. Figure 2 shows the membership functions of the fuzzy sets for modeling the enzyme PPC, where CoA represents acetyl-CoA and μ denotes the membership value in the fuzzy sets. [Figure 2: Fuzzy sets for vPPC modeling]
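A membership function of this kind can be sketched in a few lines; the triangular shape and the breakpoints below are illustrative assumptions, not the actual parameters used for vPPC:

```python
def trimf(a, b, c):
    """Triangular membership function with feet at a and c and apex at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# A fuzzy partition of a concentration axis into three overlapping regions.
low = trimf(-1.0, 0.0, 1.0)
medium = trimf(0.0, 1.0, 2.0)
high = trimf(1.0, 2.0, 3.0)

x = 0.5
print(low(x), medium(x), high(x))   # x belongs partly to LOW and partly to MEDIUM
```

Because adjacent triangles overlap, every input belongs to at least one region with nonzero degree, which is what makes the interpolation between local models smooth.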
Based on fuzzy set theory, fuzzy logic generalizes modus ponens in classical logic to allow a conclusion to be drawn from a fuzzy if-then rule even when the rule's condition is partially satisfied (Zadeh 1973). The strength of the conclusion is calculated based on the degree to which the antecedent is satisfied by the input data. Conclusions from multiple fuzzy rules are then combined to form a global conclusion. This is the essence of interpolative reasoning.

There are two kinds of fuzzy rule. The first kind of fuzzy model, referred to as the Sugeno-Takagi-Kang model in the literature, uses a linear equation to describe a rule's local model. An example of this type of rule is shown below for a system with two input variables (x, y) and one output variable (z):

If x is A and y is B then z = a_0 + a_1 x + a_2 y

where A and B denote fuzzy sets and a_0, a_1 and a_2 denote constants. Let w_i denote the degree to which the input to the model matches the condition of the i-th rule, and y_i denote the conclusion of the i-th rule. The formula below combines the conclusions of all rules in a Sugeno-Takagi-Kang model through interpolative reasoning:

z = (Σ_i w_i y_i) / (Σ_i w_i)

The second type of fuzzy rule maps a fuzzy subregion to a fuzzy conclusion, as shown below:

If x is A and y is B then z is C

The interpolative reasoning process for this kind of rule is analogous to that of the Sugeno-Takagi-Kang fuzzy model. The degree of matching in the premise of a rule is propagated to the consequent to form an inferred fuzzy subset. These fuzzy subsets are combined and defuzzified if necessary. Both types of fuzzy rule are used in the proposed modeling approach. Compared to other approximation techniques (e.g., piecewise linear approximation, splines, etc.), a fuzzy model is simpler to develop, easier to understand, and more flexible in providing a smooth approximation to a complex nonlinear relationship.
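The Sugeno-Takagi-Kang combination can be sketched in a few lines. This is a generic illustration, not the paper's implementation: min is assumed as the AND operator, and the example rules and membership functions are invented for the demonstration:

```python
# Minimal Sugeno-Takagi-Kang inference for two inputs: each rule pairs
# membership functions for x and y with linear-model coefficients (a0, a1, a2).
# AND is taken as min; conclusions combine by the weighted average
# z = sum_i(w_i * z_i) / sum_i(w_i), i.e. interpolative reasoning.
def tsk_infer(rules, x, y):
    num = den = 0.0
    for mu_a, mu_b, (a0, a1, a2) in rules:
        w = min(mu_a(x), mu_b(y))              # degree of match w_i
        num += w * (a0 + a1 * x + a2 * y)      # local linear model z_i
        den += w
    return num / den if den > 0 else 0.0

# Two illustrative rules on [0, 1] inputs (LOW/HIGH sets are assumptions):
low  = lambda v: max(0.0, min(1.0, 1.0 - v))
high = lambda v: max(0.0, min(1.0, v))
rules = [
    (low,  low,  (0.0, 0.0, 0.0)),   # If x is LOW and y is LOW then z = 0
    (high, high, (1.0, 1.0, 1.0)),   # If x is HIGH and y is HIGH then z = 1 + x + y
]
```

Halfway between the two subregions (x = y = 0.5) both rules fire with weight 0.5 and the output blends the two local models, which is the smooth transition the fuzzy partition is designed to provide.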
Genetic Algorithms

Genetic algorithms are global search and optimization techniques modeled on natural genetics, exploring the search space by maintaining a set of candidate solutions in parallel (Holland 1975). A genetic algorithm (GA) maintains a population of candidate solutions where each solution is usually coded as a binary string called a chromosome. A chromosome, also referred to as a genotype, encodes a parameter set (i.e., a candidate solution) for a set of variables being optimized. Each encoded parameter in a chromosome is called a gene. A decoded parameter set is called a phenotype. A set of chromosomes forms a population, which is evaluated and ranked by a fitness evaluation function. The initial population is usually generated at random.

The evolution from one generation to the next involves three main steps. First, the current population is evaluated using the fitness evaluation function and ranked based on the resulting fitness values. Second, the GA stochastically selects "parents" from the current population with a bias so that better chromosomes are more likely to be selected. This is accomplished using a selection probability determined by the fitness value or the ranking of a chromosome. Third, the GA reproduces "children" from the selected "parents" using two genetic operations: crossover and mutation. This cycle of evaluation, selection, and reproduction terminates when an acceptable solution is found, when a convergence criterion is met, or when a predetermined limit on the number of iterations is reached. The GA has been shown to be an effective search technique on a wide range of difficult optimization problems (Dejong 1975; Holland 1975). The randomness and parallelism of the GA often enable it to find a global optimum without being trapped in a local optimum.
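The evaluate-select-reproduce cycle can be sketched as below. This is a generic illustration, not the paper's GENESIS-based implementation: tournament selection, one-point crossover, elitism, and all numeric settings are assumptions chosen for brevity:

```python
import random

# Minimal generational GA over fixed-length bit strings.  Selection here is
# tournament-style for brevity; the paper uses fitness- or rank-based selection.
def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02, rng=random.Random(0)):
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)   # evaluate and rank

        def pick():                                       # biased parent selection
            a, b = rng.sample(scored, 2)
            return a if fitness(a) >= fitness(b) else b

        children = [scored[0]]                            # elitism: keep the best
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < p_cross:                    # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            # bit-flip mutation on each gene
            children.append([b ^ (rng.random() < p_mut) for b in p1])
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)   # toy objective: maximize number of ones
```

Because the elite chromosome is copied unchanged, the best fitness in the population never decreases from one generation to the next.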
The GA has been shown to outperform conventional gradient search techniques on difficult problems involving discontinuous, noisy, high-dimensional, and multimodal objective functions (Goldberg 1989).

However, the computational cost for a GA to find a global optimum is typically very high. That is, it usually requires a large number of generations before it converges to an acceptable solution. This issue is especially important for applying a GA to the parameter identification of metabolic and physiological systems, due to the high computational cost of the fitness evaluation function. To evaluate a particular guess for a set of parameters in a model of such systems, one needs to (1) simulate the model based on the guessed parameters, and (2) calculate the error between the simulation result and the experimental data. Even though efficient simulation packages are available, the computational cost of simulating many (e.g., a hundred) complex models for thousands of generations is extremely high.

To reduce the computational cost of GA-based approaches to the identification of parameters for metabolic systems, we have developed a hybrid approach that integrates the GA and the simplex method to speed up the rate of convergence while avoiding being easily entrapped at a local optimum (Yen et al. 1995). This is described in the next two sections in detail.

Simplex Method

The simplex method is a local search technique that uses the evaluation of the current data set to determine the promising direction of search. The simplex method was first introduced by Spendley et al. (Spendley, Hext, & Himsworth 1962) and later modified by Nelder and Mead (Nelder & Mead 1965). A simplex is defined by N + 1 points, where N is the dimension of the search space. The method continuously forms new simplices by replacing the worst point in the simplex with a new point generated by reflecting the worst point over the centroid of the remaining points.
This cycle of evaluation and reflection iterates until the simplex converges to an optimum.

We chose the simplex method rather than a gradient-based method (e.g., steepest descent, Newton strategies) as the local search technique for our hybrid GA because the relationships between the modeling parameters and the modeling objectives (i.e., a close fit between the model prediction and the experimental data) are too complex to be formulated. Consequently, it is difficult to compute the derivatives needed by gradient-based methods.

A Hybrid Genetic Algorithm Using the Simplex Method

We developed a hybrid GA method by introducing the simplex method as an additional local search operator in the genetic algorithm (Yen et al. 1995). The hybrid of the simplex method and the genetic algorithm applies the simplex method to the top S chromosomes in the GA population to produce S - N children. The top N chromosomes are copied to the next generation. The remaining P - S chromosomes are generated using the GA's reproduction scheme (i.e., selection, crossover, and mutation), where P is the population size of the GA. Figure 3 depicts the reproduction stage of the hybrid approach.

Figure 3: Reproduction in the simplex-GA hybrid

Empirical results obtained by applying the hybrid method to a subset of the biomodeling problem showed that the hybrid method outperformed the GA in terms of the speed of convergence and the quality of the solution (Yen et al. 1995).

Metabolic Modeling

The Proposed Modeling Strategy

The proposed modeling strategy treats the system at two levels: a component (molecular) level, such as enzyme reactions, protein-protein interactions and protein-DNA binding, and a system level, such as metabolic networks, signal transduction pathways, genetic regulation systems, and global responses.
At the component level, the behavior of the components is described by algebraic equations derived from known molecular mechanisms and/or fuzzy logic models based on descriptive and/or incomplete information. Model parameters at this level are typically estimated from component data using the hybrid GA.

A component level model describes the rate of a reaction as an algebraic equation whose structure reflects a known molecular mechanism. Several basic types of component level models are described in the next section. Parameters in these nonlinear models can be identified using a nonlinear optimization technique from system identification (e.g., extended Kalman filter, GA). We used the hybrid simplex-GA to identify these parameters.

Although extensively investigated, the mechanisms of many enzymes of interest are still only partially understood. Complete enzyme regulation mechanisms, such as inhibition or activation, are often undetermined, except for a few enzymes with known crystal structure. Therefore, mechanisms describing substrate binding (e.g., random/ordered BiBi or Ping-Pong BiBi) may be available, but mechanisms for inhibitor or activator actions are often incompletely known. We use fuzzy logic-based modeling to model the aspects of enzyme reactions that are not characterized mechanistically. More specifically, when experimental data suggest inhibition or activation factors not accounted for by a component model, we use fuzzy logic rules to augment the model by describing a mapping from the inhibiting or activating factors to their effects on the model. These rules are designed by first analyzing the inhibition and/or activation effects to identify parameters in the original model whose values seem to change when inhibition or activation occurs.
A set of fuzzy rules is then designed, each of which maps a specific inhibition or activation situation to a linear equation or a fuzzy set that characterizes the desired value of the parameter for the situation. Parameters in the algebraic equation, as well as parameters in the fuzzy logic rules, are identified using the hybrid GA.

The component models are then synthesized into system models based on known or hypothesized pathways. Models at this level consist of ordinary differential equations (ODEs). The system level introduces additional parameters into the model, which are estimated from system behavior. The hypothesized mechanism usually defines the structure of the model while leaving model parameters unspecified. We use the hybrid GA to identify the unspecified system parameters. The hybrid GA's fitness evaluation involves simulating the system-level model for each candidate system parameter set in the population. To simulate the behavior of the system, we used an existing simulation package, DDASAC (Caracotsios & Stewart 1984), for solving nonlinear ODEs numerically. In the two remaining sections, we focus our discussion on component level modeling. A more detailed discussion of our system level modeling approach can be found in (Yen et al. 1995).

Mechanistic Modeling of Enzyme Kinetics

In this section, we introduce four types of mechanisms and the structure of the corresponding mathematical models derived from them.

Since enzymes form complexes with their substrates, the rate of a reaction is limited by the concentration of the enzyme-substrate complex. When the level of a substrate is varied, the initial velocity with which the reaction begins is generally given by a Michaelis-Menten-type equation. If the reversibility of the reaction is considered, the equation has the following form:

v = (V_S S / K_S - V_P P / K_P) / (1 + S / K_S + P / K_P)

where S and P are the substrate and product concentrations, respectively, and K_S, K_P, V_S, and V_P are kinetic parameters.
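As a quick numeric check of the reversible rate law, a minimal sketch assuming the standard reversible Michaelis-Menten form (parameter names follow the text; the specific numbers below are illustrative):

```python
def reversible_mm(S, P, V_S, K_S, V_P, K_P):
    """Reversible Michaelis-Menten rate: forward and reverse terms share one
    saturation denominator; v > 0 means net conversion of S into P."""
    return (V_S * S / K_S - V_P * P / K_P) / (1.0 + S / K_S + P / K_P)

# With no product present the rate reduces to the familiar V_S*S/(K_S + S):
v0 = reversible_mm(S=2.0, P=0.0, V_S=1.0, K_S=2.0, V_P=0.5, K_P=1.0)  # = 0.5
```

Note that when the forward and reverse terms balance, the net rate is zero, which is how the equation captures chemical equilibrium.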
If the reaction involves two substrates and two products (BiBi reaction), as in many metabolic systems, the kinetic mechanism may involve ternary complexes or binary complexes. In the former case, the binding may be either random or ordered, and the reaction rate can be expressed as:

v = V_max (A)(B) / (K_1 + K_A(B) + K_B(A) + (A)(B))

where A and B are the concentrations of the two substrates, and V_max, K_1, K_A, and K_B are parameters determined from initial reaction rate experiments.

The binary-complex mechanism involves a covalent intermediate as the enzyme goes to a modified form. In this case, substrate A first reacts and modifies the enzyme, producing the first product P. Then the second substrate B reacts with the modified enzyme, producing the second product Q. This mechanism is termed a Ping-Pong BiBi reaction, and the reaction rate can be expressed as:

v = V_max (A)(B) / (K_A(B) + K_B(A) + (A)(B))

Very often, the reaction rates are inhibited or activated by products, substrates, or other metabolites not participating in the reaction. When such allosteric effects exist, the Monod-Wyman-Changeux (MWC) model or its variations can be used. The reaction rates involving inhibition and activation are described by:

v = V_max [A(1+A)^(n-1) + L c A(1+cA)^(n-1)] / [(1+A)^n + L(1+cA)^n],   L = L_0 (1+β)^n / (1+γ)^n

where β and γ are the scaled concentrations of the inhibitor and the activator. We will refer to some of these models in the next section.

Integrating Fuzzy Logic with Mechanistic Modeling

For enzymes with incomplete mechanisms, fuzzy models are incorporated to mend the deficiency of the incomplete mechanistic model. An example is the PPC reaction. The dots in Figure 4 summarize the experimental data in the literature on the reaction rate of PPC at different PEP concentrations with different activators (Izui et al. 1981). The following observations can be made from the figure. (1) Without any activator, the reaction proceeds at a very low rate. (2) Acetyl-CoA is a very powerful activator. (3) FDP exhibits no activation alone.
(4) FDP produces a strong synergistic activation with acetyl-CoA.

Because these activations change the saturation reaction rate (V_max), we modify the original mechanistic equation into the following component model with a fuzzy logic factor α:

V_ppc = α V_max PEP / (K_m + PEP)

The fuzzy factor α is modeled by the following four fuzzy rules:

If CoA is LOW and FDP is LOW then α is VERY-LOW
If CoA is LOW and FDP is HIGH then α is LOW
If CoA is HIGH and FDP is LOW then α is MEDIUM
If CoA is HIGH and FDP is HIGH then α is HIGH

where VERY-LOW, LOW, MEDIUM, and HIGH are fuzzy sets. The membership functions of these fuzzy sets are shown in Figure 2.

The second example is the PYK reaction. It is activated by FDP and inhibited by CoA and ATP (dot data in Figures 6, 7, and 8). This reaction is again modeled with the following mechanistic equation with fuzzy numbers F, L, and c, which are determined by fuzzy if-then rules:

V_pyk = F [PEP(1+PEP)^3 + L c PEP(1+c PEP)^3] / [(1+PEP)^4 + L(1+c PEP)^4]

Modeling of ATP and CoA inhibition is described by the fuzzy factor F, which is determined by the following fuzzy rules:

If FDP is LOW and ATP is LOW and CoA is LOW then F = F_1
If FDP is LOW and ATP is LOW and CoA is HIGH then F = F_2
If FDP is LOW and ATP is HIGH and CoA is LOW then F = F_3
If FDP is LOW and ATP is HIGH and CoA is HIGH then F = F_4
If FDP is HIGH and ATP is LOW and CoA is LOW then F = F_5
If FDP is HIGH and ATP is LOW and CoA is HIGH then F = F_6
If FDP is HIGH and ATP is HIGH and CoA is LOW then F = F_7
If FDP is HIGH and ATP is HIGH and CoA is HIGH then F = F_8

where F_1, ..., F_8 are parameters in the fuzzy model. Modeling of FDP activation is achieved by introducing two fuzzy numbers, c and L, in the MWC equation. L and c are used for changing the shape of the curve in a MWC model (Cantor & Schimmel 1980):

If FDP is LOW then c = c_1, L = L_1
If FDP is HIGH then c = c_2, L = L_2

where c_1, c_2, L_1, and L_2 are constants.
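The fuzzy-augmented PPC model can be sketched as follows. The four rules are from the text, but the membership functions and the numeric centroids assigned to VERY-LOW through HIGH are illustrative assumptions; in the paper they are given by Figure 2 and tuned by the hybrid GA:

```python
# Sketch of the fuzzy-augmented PPC rate: the four rules map CoA and FDP
# levels to the activation factor alpha that scales V_max.  All numeric
# values below (memberships, centroids, V_max, K_m) are hypothetical.
def mu_low(x):  return max(0.0, min(1.0, 1.0 - x))
def mu_high(x): return max(0.0, min(1.0, x))

CENTROID = {"VERY-LOW": 0.05, "LOW": 0.3, "MEDIUM": 0.6, "HIGH": 1.0}

def alpha(coa, fdp):
    rules = [
        (min(mu_low(coa),  mu_low(fdp)),  CENTROID["VERY-LOW"]),
        (min(mu_low(coa),  mu_high(fdp)), CENTROID["LOW"]),
        (min(mu_high(coa), mu_low(fdp)),  CENTROID["MEDIUM"]),
        (min(mu_high(coa), mu_high(fdp)), CENTROID["HIGH"]),
    ]
    num = sum(w * a for w, a in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

def v_ppc(pep, coa, fdp, v_max=1.0, k_m=0.5):
    # alpha * V_max * PEP / (K_m + PEP)
    return alpha(coa, fdp) * v_max * pep / (k_m + pep)
```

Even with these made-up numbers the sketch reproduces the qualitative observations: without activators the rate stays very low, and with both CoA and FDP high the full saturation rate is approached.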
Results

We applied the hybrid genetic algorithm to identify the parameters in the proposed model. The fitness of a candidate parameter set is the root mean square error between the real experimental data reported in the literature and the candidate model proposed by the GA. Figure 5 plots the fitness versus trials for modeling the reaction rate V_ppc. The behavior of the identified model is shown in Figure 4. The figure shows a good fit between the dots representing real experimental data and the lines representing the prediction of the identified model. Similarly, the behavior of the identified model for V_pyk and the corresponding experimental data is shown in Figures 6, 7, and 8.

Figure 4: Data and model prediction for the reaction rate of PPC with activators

Figure 5: Performance of the hybrid GA on modeling V_PPC

Summary

In this paper, we have proposed a novel methodology to integrate fuzzy logic techniques with mechanistic modeling methods to model the component level and system level structures of metabolic systems. We also use a hybrid genetic algorithm to identify the key parameters of the model. The strategy here allows one
to easily incorporate incomplete information and qualitative description into a mathematical formulation of the model. The modeling approach is promising for the elucidation of the unknown interactions between central metabolism and global regulation, which is essential for understanding biological signal transduction and the rational design of metabolic systems for a desired purpose.

One of the most important issues remaining to be addressed in our future research is to develop a scalable approach for dealing with the large search space at the system level, because the number of system parameters that may need to be adjusted to fit experimental data is typically very large. We are currently developing a supervisory architecture for dynamically selecting parameters to be optimized based on heuristics, insights about the model, and sensitivity analysis.

Figure 6: Data and model prediction for FDP activation in PYK reaction

Figure 7: Data and model prediction for PEP activation in PYK reaction

Figure 8: Data and model prediction for ATP and CoA inhibition in PYK reaction

Acknowledgements

This research is currently supported by NSF Award BES-9511737 and was partially supported by NSF Young Investigator Awards IRI-9257293 and BCS-9257351. The software package for model simulation, DDASAC, was originated by M. Caracotsios and W. E. Stewart. The GENESIS implementation of a GA was developed by John J. Grefenstette.

References

Achs, M. J., and Garfinkel, D. 1977. Computer simulation of rat heart metabolism after adding glucose to the perfusate. Am. J. Physiol. 232:175-184.

Cantor, C. R., and Schimmel, P. R. 1980.
Biophysical Chemistry - Part III: The Behavior of Biological Macromolecules. W. H. Freeman and Company.

Caracotsios, M., and Stewart, W. E. 1984. DDASAC - Double Precision Differential Algebraic Sensitivity Analysis Code.

Dejong, K. A. 1975. Analysis of the behavior of a class of genetic adaptive systems. Ph.D. Dissertation, Department of Computer and Communication Sciences, University of Michigan.

Fredrickson, A. G. 1976. Formulation of structured growth models. Biotechnol. Bioeng. 18:1481-.

Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. MA: Addison-Wesley.

Heinrich, R., and Rapoport, T. A. 1974. A linear steady state treatment of enzymatic chains, general properties, control and effect strength. Eur. J. Biochem. 42:89-95.

Holland, J. H. 1975. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press.

Izui, K.; Taguchi, M.; Morikawa, M.; and Katsuki, H. 1981. Regulation of Escherichia coli phosphoenolpyruvate carboxylase by multiple effectors in vivo. Journal of Biochemistry 90:1321-1331.

Kompala, D. S.; Ramkrishna, D.; and Tsao, G. T. 1984. Cybernetic modeling of microbial growth on multiple substrates. Biotechnol. Bioeng. 26:1272-.

Lee, S. B., and Baily, J. E. 1984. Plasmid 11:166-.

Lee, M., and Takagi, H. 1993. Integrating design stages of fuzzy systems using genetic algorithm. In Proceedings of 2nd International Conference on Fuzzy Systems.

Liao, J. C.; Lightfoot, E. N.; Jolly, S. O.; and Jacobson, G. K. 1988. Application of characteristic reaction paths: Rate-limiting capacity of phosphofructokinase in yeast fermentation. Biotech. Bioeng. 31:855-868.

Nelder, J. A., and Mead, R. 1965. A simplex method for function minimization. Computer Journal 7:308-313.

Ramkrishna, D. 1983. A cybernetic perspective of microbial growth. In Blanch, H. W.; Papoutsakis, E. T.; and Stephanopoulos, G., eds., Foundations of Biochemical Engineering.
Washington, DC: American Chemical Society. 161.

Schuler, M. L., and Domach, M. M. 1983. Mathematical models of the growth of the individual cells. In Blanch, H. W.; Papoutsakis, E. T.; and Stephanopoulos, G., eds., Foundations of Biochemical Engineering. Washington, DC: American Chemical Society. 101.

Spendley, W.; Hext, G. R.; and Himsworth, F. R. 1962. Sequential application of simplex designs in optimization and evolutionary operation. Technometrics 4:441-461.

Straus, D. B.; Walter, W. A.; and Gross, C. A. 1988. Escherichia coli heat shock gene mutants are deficient in proteolysis. Genes Dev. 2:1851-1858.

Sugeno, M., and Kang, G. T. 1988. Structure identification of fuzzy model. Fuzzy Sets and Systems 28:315-334.

Takagi, T., and Sugeno, M. 1985. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics 15:116-132.

Yen, J.; Liao, J. C.; Randolph, D.; and Lee, B. 1995. A hybrid approach to modeling metabolic systems using genetic algorithm and simplex method. In Proceedings of the 11th IEEE Conference on Artificial Intelligence for Applications (CAIA95), 277-283.

Zadeh, L. A. 1965. Fuzzy sets. Information and Control 8:338-353.

Zadeh, L. A. 1973. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics 3:28-44.
Incremental Discovery of Hidden Structure: Applications in Theory of Elementary Particles

Jan M. Żytkow† and P. Fischer‡

†Computer Science Department, Wichita State University, Wichita, Kansas 67260-0083, and Institute of Computer Science, Polish Academy of Sciences, Warsaw; ‡Sterling Commerce, 15301 Dallas Parkway, Suite 400, Dallas, TX 75248; zytkow@cs.twsu.edu, pfischer@gte.net

Abstract. Discovering hidden structure is a challenging, universal research task in physics, chemistry, biology, and other disciplines. Not only must the elements of hidden structure be postulated by the discoverer, but they can only be verified by indirect evidence, at the level of observable objects. In this paper we describe a framework for hidden structure discovery, built on a constructive definition of hidden structure. This definition leads to operators that build models of hidden structure step by step, postulating hidden objects, their combinations and properties, reactions described in terms of hidden objects, and a mapping between the hidden and the observed structure. We introduce the operator dependency diagram, which shows the order of operator application and model evaluation. Different observational knowledge supports different evaluation criteria, which lead to different search systems with verifiable sequences of operator applications. Isomorph-free structure generation is another issue critical for the efficiency of search. We apply our framework in the system GELL-MANN, which hypothesizes hidden structure for elementary particles, and we present the results of a large-scale search for quark models.

Introduction

Intense research in physics during the 1950s and early 1960s centered on the discovery of elementary particles.
After more than one hundred elementary particles were known, many arranged into groups with internal symmetries (e.g., the hadron octet shown in Figure 1a and the meson octet in Figure 3), physicists in the 1960s started to postulate theories of smaller particles, called quarks, proposing their number, properties, and the structures they form. Eventually, the standard quark model became one of the foundations of physics.

The discovery problem has been: "Given a set of observed objects and observational knowledge about them, postulate a hidden layer of objects and their structure that explains the observed objects". This problem has been considered many times in the history of science. Examples of successful discoveries include elements, atoms, ions, genes, and quarks. Today the same problem is being re-visited in particle physics, where scientists search for the next layer of structure beneath quarks.

Discovery of hidden structure has been the subject of various case studies that led to a number of discovery systems. STAHL (Langley, Simon, Bradshaw, & Zytkow 1987) and STAHLp (Rose & Langley 1986) discover componential models, while DALTON (Langley et al. 1987) discovers atomic models. REVOLVER (Rose 1989) deals with revision of beliefs about hidden structure, MECHEM (Valdes-Perez 1992) infers plausible intermediate structure in chemical reactions, and BR-3 (Kocabas 1991) demonstrates how hidden properties can be postulated for observable objects. Sleeman, Stacey, Edwards, and Gray (1989) have suggested a search space for hidden qualitative models of chemical structure. Valdes-Perez, Simon, and Zytkow (1993) introduced a matrix representation of structure that works for many discovery systems.

This paper presents a general framework which can be used to design various systems that search for hidden structure in different domains.
We discuss representation of hidden structure, operators which construct tentative solutions, and solution evaluation. All these elements are combined in the operator dependency chart, which is instrumental in the construction of discovery systems. We then discuss GELL-MANN, a system which can postulate hidden structure of elementary particles. It has produced the standard quark model, various alternatives to that model, and many other models of hidden structure.

GELL-MANN is a case study in automated discovery. Instead of speculating on the nature of general purpose automated discoverers, we follow the program of scientific research which relies on an accumulation of case studies that can be used as facts of high order. Experience of empirical science shows that accumulation of many such cases eventually leads to striking theories.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Hidden structure

Hidden structure can be described in the same way as visible structure. However, since hidden objects are not observed, a description of hidden structure must also include a mapping to the level of observation. We define hidden structure by the following components:

1. A set T = {t_1, ..., t_N} of different types of hidden objects (elements). The number of objects within each type is not limited.

2. A set C = {c | c = (t_j1, ..., t_jM) & φ(c)} of admissible microstructures (microcompounds), each defined as a bag (multiset) of objects t_ji from T. φ(c) is a constraint on admissible structures. For instance, the constraint used by GELL-MANN requires that each bag contain the same number M of hidden objects.

3. A set of attributes P = {P_1, ..., P_k} for objects in T and in C.

4. A set V_i of admissible values for each attribute in P.

5. Specific attribute values for each object type and each attribute, P_i : T → V_i, i = 1, ..., k.
The properties of admissible combinations in C are related to the attribute values of the components by additivity: for each property P_i, each object c in C, and all components c_1, ..., c_M of c: P_i(c) = Σ_{j=1}^{M} P_i(c_j).

6. A set R of reaction schemes (c_i1, ..., c_in) → (c_o1, ..., c_or), in terms of inputs and outputs on the microlevel.

7. Partial mapping ψ : C → Q between microstructures and the set Q of observable objects.

Not all components 1-7 must be present in each model. On the other hand, this definition can be augmented by further relationships between hidden objects, such as chemical bonds and spatial proximity. A computer model of hidden structure is a data structure that fits our definition.

Discovery of hidden structure

We will concentrate on the architecture of GELL-MANN, but to illustrate the generality of our framework, we will also use examples from DALTON, a system that simulates the discovery of atoms and molecules (Langley et al. 1987). DALTON uses components 1-2 and 6-7 of our definition; 1 and 7 are trivial: one type of atom corresponds to each chemical element, and one type of molecule to each substance. GELL-MANN has been developed to explore quark models in the domain of elementary particles. GELL-MANN uses components 1-5 and 7 of our definition.

The search for hidden structure should propose as much of the data structure as fits our definition and can be tested by the evidence at the observable level.

Input and output of model construction. The input to GELL-MANN is a family of elementary particles, their properties, and values for each property of each particle. Figure 1b gives an example of input (the particle family of hadrons), from which GELL-MANN infers, as output, the two models of underlying structure shown in Figure 2. These two quark models postulate three types of hidden objects (a, b, c) occurring in triplets, which are mapped to the input particles. For instance,
For instance, in Model 1 the quark cs has a charge of 2/3, a third com- ponent of isospin of l/2 and a strangeness of 0. In this model, proton p consists of two quarks a and one quark b. Figure 3 provides another example of GELL-MANN’s input and output: the meson family. -~ particle charpepr~~v~ssm~~’ P 1 l/2 0 n 0 -l/2 0 5 :, () 1 -1 -1 5 -1 -1 0 l/2 -2 -1 we -1 ( -l/2 -2 -1 0 1 Isospin Symmetry in the hadron octet Figure 1: (a) symmetry in the hadron octet family; (b) hadron octet as ir,put given to GELL-MANN quark iTi C aaa aab sac abb abc act bbb bbc bCC properties ccc i-1 1 0 I-3 mapping to particles lModel 2 a b C bag 01 quark aaa aab sac abb abc act bbb bbc bCC ccc properties ch -i 1 0 1 0 -1 1 0 -1 -2 Figure 2: Output of CELL-MANN for the hadmn octet. Both models are equally simple. Model 1 LS the standard quark model m physvs. Quarks u.d.s are CELL-MANN’s quarks a.b.c. ode1 evaluation. Philosophers since IDemocritus have speculated about the makeup of atomic structure, but they could justify neither concrete properties of atoms nor concrete atom combinations, because it was difficult to find observational consequences of specific claims about hidden structure. At certain times, how- ever, the knowledge about hidden structure has pro- gressed remarkably. Historically, such progress has oc- curred when simple symmetries or combination laws expressed in terms of small integers have been de- tected at the observable level. At the beginning of the 19th century, the law of constant proportions in chemical reactions and Gay-Lussac’s law of combin- ing volumes created such an opportunity. Later, the Prout’s hypothesis on atomic numbers of elements and the periodic table stimulated models of the atom and its nucleus. After Mendel discovered simple combina- tion rules for properties of the pea, he postulated the Discovery 751 gene model. Similarly, Murray Gell-Mann proposed the quark model after grouping elementary particles into small families. 
Figure 3: When given the meson octet as input, GELL-MANN finds one model, which consists of four quarks in combinations of two.

Facts useful for evaluation come from two basic sources: attributes of the observed objects, and descriptions of observed reactions. In GELL-MANN, evaluation is based on properties of elementary particles, whereas DALTON tests its models against knowledge of macro-reactions. GELL-MANN uses each property P of each observed object O in the input to verify a model, after it proposes a mapping between O and a microstructure c, and applies the additivity principle (cf. component 5 of the hidden structure definition) to the hidden components c_1, ..., c_M of c. The model is confirmed if P(c) is equal to the observed value P(O).

Reactions can be used in a similar manner. DALTON uses knowledge of combining volumes in a reaction on the observable level and the postulated microstructure of each substance in the input of the reaction. Then the micro-output is computed based on the conservation of elementary parts, so that the number of atoms of each type is equal before and after the reaction. Finally, the output is interpreted in terms of volumes on the observable level and compared with the observed output.

It is not sufficient to confirm a model by observational consequences. If there are many models, all justified by their observational consequences, what are the reasons to claim that one of them is true? Each model is questionable because the models make mutually inconsistent claims about the hidden level. We cannot require absolute uniqueness, because for each model there are many models which are more complex and observationally equivalent.
We can accept model M when all other models are more complex, that is, when M is unique in the simplest class in which a model exists. When the search is arranged in the order of growing complexity, if it is successful, it finds the simplest model. Complexity is measured by model parameters, such as the number of elements postulated and the number of elements in each microcompound.

Operator Dependency Chart

Each model can be constructed gradually in steps that correspond to items 1-7 in our hidden structure definition in Section . Each item in the definition can be represented by operators that build the corresponding parts of the model: add objects to T, postulate their properties, their structures, and so forth.

Operator selection. Not all components of hidden structure are discovered by every system. It does not make sense to propose components which cannot be verified. For instance, the observational data for DALTON do not include properties of molecules and therefore do not permit verification of hypotheses about properties of atoms. No data on reactions can be used by GELL-MANN, so the inference of hidden structure of reactions is not feasible.

Dependencies among operator application. The order of model construction must satisfy the preconditions at each step. The preconditions can be inferred from the definition. For instance, assigning an attribute value to an object requires an object, an attribute, and a candidate value. Similarly, one cannot create structures without having postulated objects. Activities which lead to model generation and their preconditions can be arranged in a chart, depicted in figure 4a.

Operator dependency chart and search. Different components of hidden structure are postulated by operators. Alternative models are constructed by following alternative paths; that is, by different operator instantiations.
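The idea of searching model classes in order of growing complexity, so that the first solution found is also the simplest, can be sketched as a simple enumeration of simplicity classes. Measuring complexity as the product N*M and the tie-breaking order are our assumptions, not stated in the text.

```python
# Sketch: enumerate model classes (N element types, M elements per bag) in
# order of growing complexity. Complexity = N * M is our assumption; ties are
# broken by preferring fewer element types.
def simplicity_classes(max_types, max_bag_size):
    classes = [(n, m)
               for n in range(1, max_types + 1)
               for m in range(2, max_bag_size + 1)]  # one-element bags are degenerate
    return sorted(classes, key=lambda nm: (nm[0] * nm[1], nm[0]))

# The order in which classes would be tried for up to 3 types and bags up to 3:
search_order = simplicity_classes(3, 3)
```

Within each class, a depth-first search can then enumerate candidate models; a model accepted in the first class that contains any model is, by construction, in the simplest class.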
Figure 4b shows the subset of all operators used in GELL-MANN, while Figure 4c shows the subset of operators used by DALTON. Only the operators that lead to observational evaluation have been left in Figures 4b and 4c.

The constraints imposed by preconditions leave a great deal of freedom in arranging the search control, so that additional requirements of efficiency can be satisfied. To construct an efficient search for hidden structure in a given domain we can use the operator dependency chart and several principles:
1. All operators which do not contribute components of structure which can be evaluated by the existing evidence should be removed.
2. The evaluators must be used as early as possible.
3. Consider each possible model exactly once. The search should be systematic, so that no model is overlooked, but also isomorph-free (non-redundant).
4. Try models in the order of increasing complexity.
5. Use depth-first search within each simplicity class.
6. The operators should be used in the most efficient order that satisfies all other principles.

[Figure 4 depicts the operator dependency charts; panels: (a) general case, (b) GELL-MANN, (c) DALTON.]
Figure 4: Operator dependency chart for discovery of hidden structure. It shows preconditions for each operator. Labeled, solid arcs represent operators. Unlabeled, dashed arcs show preconditions which must be satisfied before the subsequent operator can apply. (a) general case, (b) GELL-MANN, (c) DALTON. Only the operators that lead to available tests have been retained in (b) and (c).

Non-redundancy and exhaustiveness (cf. 3, above) have been used by the DENDRAL developers, Lindsay, Buchanan, Feigenbaum, and Lederberg (1980), as requirements for their structure generator.

GELL-MANN's search

According to these principles, GELL-MANN has been arranged in a three-phase search (Figure 5a-c), implemented in Common Lisp. Each phase generates a part of hidden structure and verifies it by specific evaluators.
In Phase I (Figure 5a) GELL-MANN searches for admissible classes of microcompounds. It postulates hidden object types in T (operator O1, "Create Micro-Elements"), then the number of objects in a microcompound (operator O2A). For a given set of hidden types and a given number of elements in a microcombination, GELL-MANN creates all their combinations (operator O2B, "Create Micro-Compounds"). Those combinations are often called bags. The same object can occur several times in a bag, but the order in a bag does not matter. The isomorph-free bag generator uses the order of elements in T and creates each bag in that order, with possible repetition or omission of some elements.

GELL-MANN starts its search from one hidden object and keeps adding objects until a solution is found, or the number N of objects in T reaches the number of objects in the input family, so that the model fails to simplify the input. Operator O2A starts from two elements per bag, as one element would make the structure identical with a single part. O2A increments the number M of elements in a bag by one. For a given N and M, a set of bags is admitted to the next phase when the number of bags is not smaller than the number of particles in the input family, but no greater than three times the number of observed objects. Indeed, we want different observable particles explained by different quark combinations. We also do not want to postulate a much larger number of hidden structures than is supported by the number of observed objects. The limits on quark combinations will cause the search to stop even if no model has been found. This makes sense, not because more complex models are impossible, but because the available data do not support speculations about them.

The next tasks, according to the dependency chart, are to determine: (1) what attributes will occur in the model, (2) what attribute values are admissible.
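The isomorph-free bag generator described above (each bag created in the order of elements in T, with possible repetition or omission of some elements) can be sketched with standard multiset combinations; the names are ours.

```python
# Sketch of the isomorph-free bag generator: bags are multisets drawn from the
# ordered element list T, so each bag is generated exactly once.
from itertools import combinations_with_replacement

def bags(elements, size):
    """All bags (multisets) of `size` elements drawn from the ordered list."""
    return list(combinations_with_replacement(elements, size))

# For N = 3 hidden types and M = 3 elements per bag this yields the ten
# combinations aaa, aab, aac, abb, abc, acc, bbb, bbc, bcc, ccc of Figure 2.
three_quark_bags = bags(["a", "b", "c"], 3)
```

The count C(N+M-1, M) of such bags, here 10 for N = 3 and M = 3, is what the Phase I evaluator compares against the number of observed particles.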
GELL-MANN uses all attributes provided in the input (operator O3, "Create Properties"). The more attributes used, the more demanding is the evaluation. GELL-MANN starts Phase II by postulating candidate values for each attribute (O4, "Determine Possible Values"). Too many values would result in huge search spaces. Too few values may exclude valid solutions. We turned to the observed objects for guidance. Let vi be the largest absolute value exhibited at the observed layer for attribute Pi. GELL-MANN takes as admissible values Vi of Pi all positive and negative integers between -vi and vi. In addition, GELL-MANN postulates rationals compatible with those in the input, and rationals with denominators equal to the number of hidden objects currently postulated per bag. For example, if three hidden objects are postulated per bag, values down to thirds are used.

The exhaustive search must try all assignments of values to hidden objects. However, such a search is typically too complex. For 3 quarks, 3 attributes, and 10 admissible values per attribute, a straightforward search would try 10^9 models. Can we eliminate invalid partial solutions? GELL-MANN's Phase II of search is an answer. GELL-MANN considers each attribute Pi separately, trying all assignments of values in Vi to N elements. This is another application of the isomorph-free bag generator, this time generating bags of size N. For each assignment of candidate values, GELL-MANN uses the additivity principle to compute the value of Pi for each combination generated by the first phase of the search. An assignment is admissible if a mapping exists from each observed particle to a microcompound with the same Pi value. Figure 5b depicts that phase. The output is typically a small set of admissible N-tuples of values for each attribute.

In Phase III, these separate solutions for each attribute are combined to form a solution which works across all attributes and all objects in T.
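Phase II's per-attribute admissibility test, described above, can be sketched as follows; the encoding and names are ours, and the example values are the standard quark charges.

```python
# Sketch of Phase II's admissibility test for a single attribute (names ours):
# an assignment of values to the hidden types is admissible if every observed
# value equals the component-sum of at least one bag of the given size.
from fractions import Fraction
from itertools import combinations_with_replacement

def admissible(values_by_type, bag_size, observed_values):
    types = sorted(values_by_type)
    bag_sums = {sum(values_by_type[t] for t in bag)
                for bag in combinations_with_replacement(types, bag_size)}
    return all(v in bag_sums for v in observed_values)

# Charges 2/3 and -1/3 for types a and b explain observed charges +1 (proton)
# and 0 (neutron) with bags of three elements:
ok = admissible({"a": Fraction(2, 3), "b": Fraction(-1, 3)}, 3, [1, 0])
```

Because each attribute is filtered independently before Phase III combines them, most of the 10^9 straightforward combinations are never generated.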
At the same time, to enable evaluation, concrete mappings are tried between particles and quark combinations (operator O7, "Map Micro Objects to Macro Level"). Phase III is depicted in Figure 5c. If GELL-MANN finds a solution valid for the given N and M, it continues, searching for all solutions for the same N and M, and halts. If it cannot find a solution, the search returns to phase one and increments N or M.

In Phase III, the isomorph-free generation faces the biggest challenge. Consider different mappings for proton p, the particle which opens the search depicted in Figure 5c. It could be assigned ten different bags for N = 3 (let us call the three quarks a, b, c) and M = 3. But, for instance, the bags (a a a), (b b b), and (c c c) lead to isomorphic solutions. Only one of them should be considered. The situation becomes more complicated after partial solutions have been proposed, but some quarks are still indistinguishable from others by their attribute values. Here GELL-MANN's generator uses three principles: (1) use the order in which the elements in T are listed; (2) do not skip any elements; (3) each next element can be listed no more times than the previous one. For N = 3 and M = 3, only the bags (a a a), (a a b), (a b c) will be generated.

GELL-MANN can search for a model of a single input family. However, many families of particles exist.

[Figure 5, layout reconstructed: (a) Phase I: O1 element types; O2A group size; O2B combinations. Evaluation: # of particles <= # of combinations <= 3 times # of particles. (b) Phase II: O3 properties; O4 admissible values, e.g. (-1, -2/3, -1/3, 0, 1/3, 2/3, 1); O5 value N-tuples. Evaluation: frequency of quark combination values >= observed particle family values. (c) Phase III, repeated for all models until finished for one model. Evaluation: predicted quark combination value = observed particle value for all properties.]
Figure 5: The three phases of search in GELL-MANN. Phase I generates quark types, bag sizes and quark combinations.
Phase II generates attributes of quarks and values for each attribute. Phase III maps attribute values to quarks and quark combinations to particles.

Is a joint solution possible? Can it be reached incrementally? GELL-MANN handles multiple families by working with each in succession. For each next family it tries a solution based on the quarks used in the models that worked for all previous families, adding new quarks only if necessary. Although the same set of quarks is used for all families, the bag size for each family can be different.

Incremental search. GELL-MANN proceeds to the second family, using the same three search phases. It first tries the known quarks, adding new ones only if no solution has been reached for the existing ones. The order of input families can be historical, but we can also try different orders of processing the same set of input families. In that case, we want GELL-MANN to seek the simplest global solution for all families.

Generalizations. Attributes such as spin can be combined by vector addition of quantum numbers. The rule of vector addition and other combinatorial rules can be plugged into GELL-MANN. Phase II can be eliminated altogether when attribute values are found by solving matrix equations (Valdes-Perez, Zytkow & Simon, 1993). This approach has been implemented by Valdes-Perez in YUVAL (Valdes-Perez & Zytkow, 1996). For small sets of values it turns out that GELL-MANN's generating and testing of value combinations works faster than solving equations.

Results of Experiments

Early historical data. Initially, three families of elementary particles, postulated by Murray Gell-Mann, formed the theoretical basis for the quark model: the hadron octet, the meson octet, and the baryon decuplet. Figures 2 and 3 present results of GELL-MANN's non-incremental search on the inputs of hadrons and mesons. Later, we applied our incremental search for the joint quark model to these three families, in all six permutations.
Figure 6 depicts our experiment. Each solid arc is labeled with the particle family given as input to GELL-MANN. Each node in the tree, except for START, represents the output of GELL-MANN: the number of quark models found and the complexity of the quark models (N*M) in terms of the number N of quarks postulated and the number M of quarks per bag. GELL-MANN incrementally builds on the previous quark models on the direct path from the root.

The initial hadron octet run produced two models of complexity 3*3 (three quarks in groups of three; Figure 2). Of these two models, one was postulated by the physicist Gell-Mann. The other was a new model. The meson octet run yielded one model of complexity 4*2 (Figure 3), much simpler than the 6*2 model accepted in physics. The baryon decuplet family yielded one 3*3 model, the model postulated by Gell-Mann. All these models are shown in Figure 6 as direct descendants of the START node.

The remaining two families of particles were added, one at a time, to the initial models. Building on the meson octet's unconventional 4*2 model, 5 models were found for the baryon decuplet in the 7*3 category, which is more complex than the standard 6*3 model. The hadron octet family led to a 6*2 model, but adding the baryon decuplet produced no solution, so this path was discarded.

Building on the unconventional hadron octet model (Model 2 in Figure 2), the baryon decuplet family produced no solution. For the standard model (Model 1 in Figure 2), a 3*3 solution was found. When the meson octet family was added to the remaining model, two solutions of 6*2 were produced.

Building on the single baryon decuplet model depicted as the rightmost child of START in Figure 6, no new quarks were needed to explain the hadron octet. Then adding the meson octet produced the same two 6*2 solutions found following the hadron octet path from the root.

Additional results.
Expanding the models built for each of the three particle families, we added two additional families, the charmed mesons and charmed hadrons, also depicted in Figure 6. Each 6*2 model has been expanded to an 8*2 model for charmed mesons. When the charmed hadrons were added, only one 8*2 model could be adapted to handle the new class. The output was one 8*3 model, known by physicists as the standard model. We could not further expand our search because too few particles are known to contain the bottom quark, so the search is not constrained enough.

The other paths in the tree end with either no solution, or solutions of greater complexity than the standard model. This implies that in the space examined by our incremental search without backtracking, the standard model is indeed unique in its simplicity class.

As the complexity of the quark model grows, so does the size of the search space and the program execution time. For instance, the initial run on the hadron octet found two 3*3 solutions in approximately 4 sec of CPU time at GMIPS. But more complex searches took days and even weeks, as indicated in Figure 6. Many other results have been reported by Fischer & Zytkow (1990) and by Valdes-Perez & Zytkow (1996).

Conclusions and future work

We have presented a theoretical framework for the discovery of hidden structure and have demonstrated how the discovery system GELL-MANN fits that framework. We presented results of a large-scale exploration in the space of quark models. The same operators, similar knowledge representation, and the same elements of search are used by many systems that discover hidden structure. This unification suggests a unified computer system which would be able to discover hidden structure in different domains.

Two problems must be solved before we can build an autonomous system capable of discovering hidden structure in various domains.
First, we lack a general algorithm that would use domain knowledge on the observational level to set up the search for hidden structure. Using the experience accumulated by construction of several systems, it is relatively easy to manually generate all elements of search: to select operators, to define operator instantiation and evaluation criteria, and to organize them in a simple, "bottom-line" system: simple in structure and able to search exhaustively, yet entirely inefficient. Thus, the second problem is concerned with the automated generation of an efficient search control. It is possible to increase efficiency by changing the order of operators, by placing the evaluators as early as possible, and by designing isomorph-free generators for different sub-tasks. We believe that the whole task can be eventually automated. Automation may become more and more obvious as new case studies are completed.

[Figure 6 shows the tree of incremental runs; several paths end with no solution.]
Figure 6: This tree summarizes our experiment in incremental construction of quark models by GELL-MANN. Each solid arc represents an input family of elementary particles. Each node represents the solutions to the family indicated on the arc below the node (they are also solutions for all families on the path from the START node). The dashed arcs are used to trace individual models if more than one model exists at a given node. The left half of the tree depicts the search for non-standard models. The right half depicts the search for the standard model.

Hidden structure is very similar to visible structure, so that our definition can be expanded and applied to other machine discovery work on structure (Karp 1990, Sleeman et al. 1989). A search similar to ours can be useful for visible structures, for instance, on experiment design problems (Rajamoney 1990).

Acknowledgments: Many thanks to Malcolm Perry and Mary Edgington for their suggestions.

References

Fischer, P. & Zytkow, J.M. 1990.
Discovering Quarks and Hidden Structure, in: Ras, Z., Zemankova, M. & Emrich, M.L. eds. Methodologies for Intelligent Systems 5, Elsevier Science Publishing Co., New York, 1990, 362-370.

Karp, P. 1990. Hypothesis Formation as Design. in: J. Shrager & P. Langley, eds. Computational Models of Scientific Discovery and Theory Formation, Morgan Kaufmann Publ., San Mateo, CA, 275-317.

Kocabas, S. 1991. Conflict Resolution as Discovery in Particle Physics. Machine Learning 6, 277-309.

Langley, P., Simon, H.A., Bradshaw, G.L. & Zytkow, J.M. 1987. Scientific Discovery: Computational Explorations of the Creative Processes. Cambridge, MA: MIT Press.

Lindsay, R., Buchanan, B.G., Feigenbaum, E.A. & Lederberg, J. 1980. Applications of Artificial Intelligence for Organic Chemistry: The DENDRAL Project. New York: McGraw-Hill.

Rajamoney, S. 1990. A Computational Approach to Theory Revision. in: J. Shrager & P. Langley, eds. Computational Models of Scientific Discovery and Theory Formation, Morgan Kaufmann Publ., San Mateo, CA, 225-253.

Rose, D. 1989. Using Domain Knowledge to Aid Scientific Theory Revision. Proc. 6th Int. Conf. on Machine Learning, Morgan Kaufmann Publ., San Mateo, CA, 272-277.

Rose, D. & Langley, P. 1986. Chemical discovery as belief revision. Machine Learning, 1, 423-452.

Sleeman, D.H., Stacey, M.K., Edwards, P. & Gray, N.A.B. 1989. An Architecture for Theory-Driven Scientific Discovery, in: K. Morik ed. Proc. of EWSL-89, 11-23.

Valdes-Perez, R.E. 1992. Theory-driven discovery of reaction pathways in the MECHEM system. Proc. 10th National Conf. on AI, AAAI Press, 63-69.

Valdes-Perez, R.E. & Zytkow, J.M. 1996. Systematic Generation of Constituent Models of Particle Families. Submitted for publication (revised) to Physical Review E.

Valdes-Perez, R.E., Zytkow, J.M. & Simon, H.A. 1993. Scientific Model-Building as Search in Matrix Spaces, in Proc. 11th National Conf. on AI, AAAI Press, 472-478.
Subbarao Kambhampati*
Department of Computer Science and Engineering
Arizona State University, Tempe, AZ 85287, rao@asu.edu

Abstract

The ideas of dependency directed backtracking (DDB) and explanation based learning (EBL) have developed independently in constraint satisfaction, planning and problem solving communities. In this paper, I formalize and unify these ideas under the task-independent framework of refinement search, which can model the search strategies used in both planning and constraint satisfaction. I show that both DDB and EBL depend upon the common theory of explaining search failures and regressing them to higher levels of the search tree. The relevant issues of importance include (a) how the failures are explained and (b) how many failure explanations are remembered. This task-independent understanding of DDB and EBL helps support cross-fertilization of ideas among the Constraint Satisfaction, Planning and Explanation-Based Learning communities.

1 Introduction

One of the mainstays of AI literature is the idea of "dependency directed backtracking" as an antidote for the inefficiencies of chronological backtracking [16]. However, there is considerable confusion and variation regarding the various implementations of dependency directed backtracking. Complicating the picture further is the fact that many "speedup learning" algorithms that learn from failure (cf. [10; 1; 9]) do analyses that are quite close to the type of analysis done in the dependency directed backtracking algorithms. It is no wonder then that despite the long acknowledged utility of DDB, even the more comprehensive AI textbooks such as [15] fail to provide a coherent account of dependency directed backtracking. Lack of a coherent framework has had ramifications on the research efforts on DDB and EBL.
For example, the DDB and speedup learning techniques employed in planning and problem solving on one hand [10], and CSP on the other [3; 17], have hitherto been incomparable. My motivation in this paper is to put the different ideas and approaches related to DDB and EBL in a common perspective, and thereby delineate the underlying commonalities between research efforts that have so far been seen as distinct.

*This research is supported in part by NSF research initiation award (RIA) IRI-9210997, NSF young investigator award (NYI) IRI-9457634 and ARPA/Rome Laboratory planning initiative grants F30602-93-C-0039 and F30602-95-C-0247. The ideas described here developed over the course of my interactions with Suresh Katukam, Gopi Bulusu and Yong Qu. I also thank Suresh Katukam and Terry Zimmerman for their critical comments on a previous draft, and Steve Minton for his encouragement on this line of work.

To this end, I consider all backtracking and learning algorithms within the context of general refinement search [7]. Refinement search involves starting with the set of all potential solutions for the problem, and repeatedly splitting the set until a solution for the problem can be extracted from one of the sets. The common algorithms used in both planning and CSP can be modeled in terms of refinement search. I show that within refinement search, both DDB and EBL depend upon the common theory of explaining search failures, and regressing them to higher levels of the search tree to compute explanations of failures of the interior nodes. DDB occurs any time the explanation of failure regresses unchanged over a refinement decision. EBL involves remembering the interior node failure explanations and using them in the future. The relevant issues of importance include how the failures are explained, and how many of them are stored for future use. I will show how the existing methods for DDB and EBL vary along these dimensions.
I believe that this unified task-independent understanding of DDB and EBL helps support cross-fertilization of ideas among the CSP, planning and EBL communities.

The rest of this paper is organized as follows. In Section 2 I review refinement search and show how planning and constraint satisfaction can be modeled in terms of refinement search. In Section 3, I provide a method for doing dependency directed backtracking and explanation based learning in refinement search. In Section 4, I discuss several variations of the basic DDB/EBL techniques. In Section 5, I relate this method to existing notions of dependency directed backtracking and explanation based learning in CSP and planning. Section 6 summarizes our conclusions.

2 Refinement Search

Refinement search can be visualized as a process of starting with the set of all potential solutions for the problem, and splitting the set repeatedly until a solution can be picked up from one of the sets in bounded time. Each search node N in the refinement search thus corresponds to a set of candidates. Syntactically, each search node is represented as a collection of task specific constraints. The candidate set of the node is implicitly defined as the set of candidates that satisfy the constraints on the node. Figure 1 provides a generalized template for refinement search. A refinement search is specified by providing a set of refinement operators (strategies) R, and a solution constructor function sol. The search process starts

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Algorithm Refine-Node(N)
Parameters: (i) sol: solution constructor function. (ii) R: refinement operators. (iii) a function for computing the explanation of failure.
0. Termination Check: If sol(N) returns a solution, return it, and terminate. If it returns *fail*, fail. Otherwise, select a flaw F in the node N.
1. Refinements: Pick a refinement operator R ∈ R that can resolve F.
(Not a backtrack point.) Let R correspond to the n refinement decisions d1, d2, ..., dn. For each refinement decision di in d1, d2, ..., dn do:
  N' ← di(N)
  If N' is inconsistent, fail; compute E(N'), the explanation of failure for N', and Propagate(N').
  Else, Refine-Node(N').
Figure 1: General template for refinement search. The underlined portion provides DDB and EBL capabilities.

with the initial node N0, which corresponds to the set of all candidates. The search process involves splitting, and thereby narrowing, the set of potential solutions until we are able to pick up a solution for the problem. The splitting process is formalized in terms of refinement operators. A refinement operator R takes a search node N, and returns a set of search nodes {N1, N2, ..., Nn}, called refinements of N, such that the candidate set of each of the refinements is a subset of the candidate set of N. Each complete refinement operator can be thought of as corresponding to a set of decisions d1, d2, ..., dn such that di(N) = Ni. Each of these decisions can be seen as an operator which derives a new search node by adding some additional constraints to the current search node.

To give a goal-directed flavor to the refinement search, we typically use the notion of "flaws" in a search node and think of individual refinements as resolving the flaws. Specifically, any node N from which we cannot extract a solution directly is said to have a set of flaws. Flaws can be seen as the absence of certain constraints in the node N. The search process thus involves picking a flaw, and using an appropriate refinement that will "resolve" that flaw by adding the missing constraints. Figure 2 shows how planning and CSP problems can be modeled in terms of refinement search. The next two subsections elaborate this formulation.

2.1 Constraint Satisfaction as Refinement Search

A constraint satisfaction problem (CSP) [17] is specified by a set of n variables, X1, X2,
..., Xn, their respective value domains, D1, D2, ..., Dn, and a set of constraints. A constraint Cj(Xi1, ..., Xij) is a subset of the Cartesian product Di1 × ... × Dij, consisting of all tuples of values for a subset (Xi1, ..., Xij) of the variables which are compatible with each other. A solution is an assignment of values to all the variables such that all the constraints are satisfied.

Seen as a refinement search problem, each search node in CSP contains constraints of the form Xi = v, which together provide a partial assignment of values to variables. The candidate set of each such node can be seen as representing all complete assignments consistent with that partial assignment. A solution is a complete assignment that is consistent with all the variable/value constraints of the CSP problem. Each unassigned variable in the current partial assignment is seen as a "flaw" to be resolved. There is a refinement operator R_Xi corresponding to each variable Xi, which generates refinements of a node N (that does not assign a value to Xi) by assigning a value from Di to Xi. R_Xi thus corresponds to an "OR" branch in the search space corresponding to decisions d1, d2, ..., d|Di|. Each decision dj corresponds to adding the constraint Xi = Di[j] (where Di[j] is the jth value in the domain of the variable Xi). We can encode this as an operator with preconditions and effects as follows:

assign(Xi, vij)
  Preconditions: Xi is unassigned in A.
  Effects: A ← A + (Xi = vij)

2.2 Planning as Refinement Search

A planning problem is specified by an initial state description I, a goal state description G, and a set of actions A. The actions are described in terms of preconditions and effects. The solution is any sequence of actions such that executing those actions from the initial state, in that sequence, will lead us to a goal state.
Search nodes in planning can be represented (see [7]) as 6-tuples (S, O, B, L, E, C), consisting of a set of steps, orderings, bindings, auxiliary constraints, step effects and step preconditions. These constraint sets, called partial plans, are shorthand notations for the set of ground operator sequences that are consistent with the constraints of the partial plan. There are several types of complete refinement operators in planning [8], including plan-space, state-space, and task reduction refinements. As an example, plan-space refinement proceeds by picking a goal condition and considering different ways of making that condition true in different branches. As in the case of CSP, each refinement operator can again be seen as consisting of a set of decisions, such that each decision produces a single refinement of the parent plan (by adding constraints). As an example, the establishment refinement or plan-space refinement corresponds to picking an unsatisfied goal/subgoal condition C that needs to be true at a step s in a partial plan P, and making a set of children plans P1, ..., Pn such that in each Pi there exists a step s' that precedes s, which adds the condition C. Pi also contains (optionally) a "causal link" constraint s' →C s to protect C between s' and s. Each refinement Pi corresponds to an establishment decision di, such that di adds the requisite steps, orderings, bindings and causal link constraints to the parent plan to produce Pi. Once again, we can represent this decision as an operator with preconditions and effects.

3 Task-Independent Formulation of DDB and EBL

In this section, we will look at the formulation of DDB and EBL in refinement search. The refinement search template provided in Figure 1 implements chronological backtracking by default. There are two independent problems with chronological backtracking.
The first problem is that once a failure is encountered, the chronological approach backtracks to the immediate parent and tries its unexplored children -- even if it is the case that the actual error was made much higher up in the search tree. The second is that the search process does not learn from its failures, and can thus repeat the same failures in other branches. DDB is seen as a solution for the first problem, while EBL is seen as the solution for the second problem. As we shall see below, both of them can be formalized in terms of failure explanations.

Problem | Nodes | Candidate Set | Refinements | Flaws | Soln. Constructor
CSP | Partial assignment A | Complete assignments consistent with A | Assigning values to variables | Unassigned variables in A | Checking if all variables are assigned in A
Planning | Partial plan P | Ground operator sequences consistent with P | Establishment, conflict resolution | Open conditions, conflicts in P | Checking if any ground linearization of P is a solution
Figure 2: CSP and Planning Problems as instances of Refinement Search

Procedure Propagate(Ni)
parent(Ni): the node that was refined to get Ni. d(Ni): the decision leading to Ni from its parent. E(Ni): the explanation of failure at Ni. F(N): the flaw that was resolved at node N.
1. E' ← Regress(E(Ni), d(Ni))
2. If E' = E(Ni), then (dependency directed backtracking) E(parent(Ni)) ← E'; Propagate(parent(Ni))
3. If E' ≠ E(Ni), then
3.1. If there are unexplored siblings of Ni:
3.1.1. Make a rejection rule R rejecting the decision d(Ni), with E' as the rule antecedent. Store R in the rule set.
3.1.2. E(parent(Ni)) ← E(parent(Ni)) ∧ E'
3.1.3. Let Ni+1 be the first unexplored sibling of node Ni. Refine-Node(Ni+1)
3.2. If there are no unexplored siblings of Ni:
3.2.1. Set E(parent(Ni)) to E(parent(Ni)) ∧ E' ∧ F(parent(Ni))
3.2.2. Propagate(parent(Ni))
Figure 3: The complete procedure for propagating failure explanations and doing dependency directed backtracking
The procedure Propagate in Figure 3 shows how this is done. In the following we explain this procedure. Section 3.1 illustrates this procedure with an example from CSP. Suppose a search node N is found to be failing by the refinement search template in Figure 1. To avoid pursuing refinements that are doomed to fail, we would like to backtrack not to the immediate parent of the failing node, but rather to an ancestor node N' of N such that the decision taken under N' has had some consequence on the detected failure. To implement this approach, we need to sort out the relation between the failure at N and the refinement decisions leading to it. We can do this by declaratively characterizing the failure at N. Explaining Failures: From the refinement search point of view, a search node N is said to be failing if its candidate set provably does not contain any solution. This can happen in two ways: the more obvious way is when the candidate set of N is empty (because of an inconsistency among the constraints of N); the other is when the constraints of N, together with the global constraints of the problem and the requirements of the solution, are inconsistent. For example, in CSP, a partial assignment A may be failing because A assigns two values to the same variable, or because the values that A assigns to its variables are inconsistent with some of the specific constraints. Similarly, in the case of planning, a partial plan P may be inconsistent either because the ordering and binding constraints comprising it are inconsistent by themselves, or because they violate the domain axioms. In either case, we can associate the failure at N with a subset of constraints in N, say E, which, possibly together with some domain constraints δ, causes the inconsistency (i.e., δ ∧ E ⊢ False). E is called the explanation of failure of N. Suppose N is the search node at which backtracking was necessitated.
Suppose further that the explanation for the failure at N is given by the set of constraints E (where E is a subset of the constraints in N). Let Np be the parent of search node N, and let d be the search decision taken at Np that led to N. We want to know whether d played any part in the failure of N, and what part of Np was responsible for the failure of N (remember that the constraints in Np are a subset of the constraints in N). We can answer these questions through the process of regression. Regression: Formally, the regression of a constraint c over a decision d is the set of constraints that must be present in the partial plan before the decision d, such that c is present after taking the decision.¹ Regression of this type is typically studied in planning in conjunction with backward application of STRIPS-type operators (with add, delete, and precondition lists), and is quite well understood (see [12]). Here I adapt the same notion to refinement decisions as follows:

Regress(c, d) = True  if c ∈ effects(d)
              = c'    if c'' ∈ effects(d) and (c'' ∧ c') ⊢ c
              = c     otherwise

Regress(c1 ∧ c2 ∧ ... ∧ cn, d) = Regress(c1, d) ∧ Regress(c2, d) ∧ ... ∧ Regress(cn, d)

Dependency Directed Backtracking: Returning to our earlier discussion, let the result of regressing the explanation of failure E of node N over the decision d leading to N, written d⁻¹(E), be E'. Suppose E' = E. In such a case, we know that the decision d did not play any role in causing this failure.² Thus, there is no point in backtracking and trying another alternative at Np.

¹Note that in regressing a constraint c over a decision d, we are interested in the weakest constraints that need to be true before the decision so that c will be true after the decision is taken. The preconditions of the decision must hold in order for the decision to have been taken anyway, and thus do not play a part in regression.
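The regression equations above can be transcribed almost directly, under the simplifying assumption that constraints are atomic (so the middle case, where the effects only partially entail c, never arises); the function names are illustrative.

```python
# Regression of atomic constraints over a refinement decision, modeled as
# the set of constraints the decision adds (its effects). A sketch, not the
# paper's implementation.

def regress_constraint(c, effects):
    if c in effects:
        return True   # the decision itself guarantees c
    return c          # c must already have held before the decision

def regress(conjunction, effects):
    """Regress a conjunction (a set of constraints) over one decision."""
    out = {regress_constraint(c, effects) for c in conjunction}
    out.discard(True)  # True is the identity element of conjunction
    return out
```

For example, regressing the explanation {x=A, w=E} over a decision whose only effect is w=E leaves {x=A}: the decision accounts for w=E, while x=A must have held before the decision was taken.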
This is because the constraints comprising the failure explanation E are present in Np also, and since by definition E is a set of inconsistent constraints, Np is also a failing node. This reasoning forms the basis of dependency directed backtracking. Specifically, in such cases, we can consider Np as failing and continue backtracking upward using the Propagate procedure, with E as the failure explanation of Np.

²Equivalently, DDB can also be done by backtracking to the highest ancestor node of N which still contains all the constraints in E. I use the regression based model since it foregrounds the similarity between DDB and EBL.

Enhancing Efficiency 759

[Figure 4: Computing Failure Explanations of Interior Nodes. The children of a node Np fail with explanations E1, E2, ..., En; Np's explanation conjoins the regressed child explanations with the flaw resolved at Np.]

Computing Explanations of Failures of Interior Nodes: If the explanation of failure changes after regression, i.e., E' = d⁻¹(E) ≠ E, then we know that the decision leading to N did have an effect on the failure at N. At this point, we need to consider the sibling decisions of d under Np. If there are no unexplored sibling decisions, this again means that all the refinements of Np have failed. The failure explanation for Np can then be computed in terms of the failure explanations of its children and the flaw that was resolved at Np, as shown in Figure 4. Intuitively, this says that as long as the flaw exists in the node, we will consider the refinement operator again to resolve the flaw, and will fail in all branches. The failure explanation thus computed can be used to continue propagation and backtracking further up the search tree. Of course, if the explanation of any child node of Np regresses unchanged over the corresponding decision, then the explanation of failure of Np will be set by DDB to that child's failure explanation. Explanation Based Learning: Until now, we talked about the idea of using failure explanations to assist in dependency directed backtracking.
The same mechanism can, however, also be used to facilitate what has traditionally been called EBL. Specifically, suppose we found that an interior node Np is failing (possibly because all its children are failing), and we have computed its explanation of failure Ep. Suppose we remember Ep as a "learned failure explanation." Later on, if we find a search node N' in another search branch such that Ep is a subset of the constraints of N', then we can consider N' to be failing, with Ep as its failure explanation. A variation of this approach involves learning search control rules [10] which recommend rejection of individual decisions of a refinement operator if they will lead to failure. When the child N1 of the search node Np failed with failure explanation E1, and E' = d⁻¹(E1), we can learn a rule which recommends rejection of the decision d whenever E' is present in the current node.³

[Figure 5 shows the problem specification (the domains of the variables x, y, u, v and w, and constraints including x=A => w≠E and y=B => w≠D) and a search tree: N1: x=A; N2: x=A, y=B; N3 adds u=C; N4 adds v=D; N5 adds w=E, failing with explanation x=A ∧ w=E; N6 adds w=D, failing with explanation y=B ∧ w=D. DDB restarts search after N6.]

Figure 5: A CSP example to illustrate DDB and EBL

Unlike DDB, whose overheads are generally negligible compared to chronological backtracking, learning failure explanations through EBL has two types of hidden costs. First, there is the storage cost. If we were to remember every learned failure explanation, the storage requirements can be exponential. Next, there is the cost of using the learned failure explanations. Since, in general, using failure explanations will involve matching the failure explanations (or the antecedents of the search control rules) against the current node, the match cost increases as the number of stored explanations increases.
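The subset test that applies a remembered explanation can be sketched as follows; the set encoding of nodes and nogoods is an illustrative assumption, and the flaw condition is omitted for brevity.

```python
# Applying learned failure explanations (nogoods): a new node fails if some
# stored explanation is a subset of its constraints. Each stored nogood must
# be checked, which is the source of the match cost discussed in the text.

def prune(node_constraints, nogoods):
    for nogood in nogoods:
        if nogood <= node_constraints:   # subset test
            return nogood  # node fails, with this nogood as its explanation
    return None

# A nogood learned at some interior node:
nogoods = [frozenset({"x=A", "y=B"})]

# A node in another branch containing x=A and y=B is pruned immediately;
# a node that avoids one of the two assignments is not.
pruned = prune(frozenset({"x=A", "y=B", "u=D"}), nogoods)
kept = prune(frozenset({"x=B", "y=B"}), nogoods)
```

Every node must be matched against every stored nogood, so the per-node cost grows linearly with the size of the store, which is exactly the matching-cost tradeoff noted above.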
This problem has been termed the EBL Utility Problem in the machine learning community [11; 6]. We shall review various approaches to it later.

3.1 Example

Let me illustrate the DDB and EBL process above with the simple CSP example shown in Figure 5 (for a planning example that follows the same formalism, see the discussion of UCPOP-EBL in [9]). The problem contains five variables, x, y, u, v and w. The domains of the variables and the constraints on the variable values are shown in the figure. The figure shows a series of refinements culminating in node N5, which is a failure node. An explanation of failure of N5 is x = A ∧ w = E (since this winds up violating the first constraint). This explanation, when regressed over the decision w ← E that leads to N5, becomes x = A (since w = E is the only constraint that is added by the decision). Since the explanation changed after regression, we restart search under N4, and generate N6. N6 is also a failing node, and its explanation of failure is y = B ∧ w = D. When this explanation is regressed over the corresponding decision, we get y = B. This is then conjoined with the regressed explanation from N5, and the flaw description at N4, to give the explanation of failure of N4 as E(N4): x = A ∧ y = B ∧ unassigned(w). At this point E(N4) can be remembered as a learned failure explanation (aka nogood [16]), and used to prune nodes in other parts of the search tree. Propagation progresses upwards. The decision v ← D does not affect the explanation of N4, and thus we backtrack over the node N3 without refining it further. Similarly, we also backtrack over N2. E(N4) does change when regressed over y ← B, and thus we restart search under N1.

4 Variations on the basic approach

The basic approach to DDB and EBL that we described in the previous section admits several variations, based on how the explanations are represented, selected and remembered. I discuss these variations below.
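Before turning to the variations, the walk-through of Section 3.1 can be replayed concretely. The sketch below assumes the two constraints of Figure 5 (x=A => w≠E and y=B => w≠D) and the simplified regression that drops only what a decision itself added.

```python
# Replaying the Section 3.1 example (illustrative set encoding).

def regress(E, added):
    # Simplified regression: drop the constraints this decision added.
    return E - added

E_N5 = frozenset({"x=A", "w=E"})        # N5 violates the first constraint
E1 = regress(E_N5, frozenset({"w=E"}))  # regressed over w <- E: leaves x=A
E_N6 = frozenset({"y=B", "w=D"})        # N6 violates the second constraint
E2 = regress(E_N6, frozenset({"w=D"}))  # regressed over w <- D: leaves y=B

# Both values of w failed, so N4's explanation conjoins the regressed child
# explanations with the flaw being resolved there (w unassigned):
E_N4 = E1 | E2 | frozenset({"unassigned(w)"})

# Neither v <- D nor u <- C added any constraint appearing in E_N4, so it
# regresses unchanged and DDB jumps past N3 and N2 without refining them:
assert regress(E_N4, frozenset({"v=D"})) == E_N4
assert regress(E_N4, frozenset({"u=C"})) == E_N4

# The decision y <- B did add y=B, so regression changes the explanation
# and search restarts under N1:
assert regress(E_N4, frozenset({"y=B"})) != E_N4
```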
4.1 Selecting a Failure Explanation

In our discussion of DDB and EBL in the previous section, we did not go into the details of how a failure explanation is selected for a dead-end leaf node. Often, there are multiple explanations of failure for a dead-end node, and the explanation that is selected can have an impact on the extent of DDB, and on the utility of the EBL rules learned. The most obvious explanation of failure of a dead-end node N is the set of constraints comprising N itself. In the example in Figure 5, E(N5) can thus be x = A ∧ y = B ∧ u = C ∧ v = D ∧ w = E. It is not hard to see that using N as the explanation of its own failure makes DDB degenerate into chronological backtracking (since the node N must have been affected by every decision that led to it⁴). Furthermore, given the way the explanations of failure of the interior nodes are computed (see Figure 4), no ancestor N' of N can ever have an explanation of failure simpler than N' itself. Thus, no useful learning can take place. A better approach is thus to select a smaller subset of the constraints comprising the node which is by itself inconsistent. For example, in CSP, if a domain constraint is violated by a part of the current assignment, then that part of the assignment can be taken as an explanation of failure. Similarly, ordering and binding inconsistencies can be used as starting failure explanations in planning. Often, there may be multiple possible explanations of failure for a given node. For example, in the example in Figure 5, suppose we had another constraint saying that u = C ⇒ w ≠ E. In such a case, the node N5 would have violated two different constraints, and would have had two failure explanations: E1: x = A ∧ w = E and E2: u = C ∧ w = E.
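Deriving such constraint-sized explanations can be sketched as follows; the encoding of a constraint as a scope plus a predicate over the sub-assignment is an illustrative assumption.

```python
# Candidate failure explanations: for each violated constraint, the part of
# the assignment that violates it (rather than the whole assignment).

def failure_explanations(assignment, constraints):
    explanations = []
    for scope, satisfied in constraints:
        if all(var in assignment for var in scope):
            sub = {var: assignment[var] for var in scope}
            if not satisfied(sub):
                explanations.append(sub)
    return explanations

# N5's assignment, with both x=A => w!=E and the extra u=C => w!=E in force:
A = {"x": "A", "y": "B", "u": "C", "v": "D", "w": "E"}
constraints = [
    (("x", "w"), lambda s: not (s["x"] == "A" and s["w"] == "E")),
    (("u", "w"), lambda s: not (s["u"] == "C" and s["w"] == "E")),
]
exps = failure_explanations(A, constraints)
```

Here both constraints are violated, so two candidate explanations come back (corresponding to E1 and E2); the heuristics discussed next decide which one drives backtracking.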
In general, it is useful to prefer explanations that are smaller in size, or explanations that refer to constraints that have been introduced into the node by earlier refinements (since this will allow us to backtrack farther up the tree). By this argument E1 above is preferable to E2, since E2 would have made us backtrack only to N2, while E1 allows us to backtrack up to N1. These are, however, only heuristics. It is possible to come up with scenarios where picking the lower-level explanation would have helped more.

4.2 Remembering (and Using) Learned Failure Explanations

Another issue that is left open by our DDB/EBL algorithm is exactly how many learned failure explanations should be stored. Although this decision does not affect the soundness and completeness of the search, it can affect its efficiency. Specifically, there is a tradeoff between storage and matching costs on one hand and search reductions on the other. Storing the failure explanations and/or search control rules learned at all interior nodes could be very expensive from the storage and matching cost points of view. The CSP and machine learning literatures took differing approaches to this problem. Researchers in CSP (e.g., [3; 17]) concentrated on the syntactic characteristics of the nogoods, such as their size and minimality, to decide whether or not they should be stored. Researchers in machine learning concentrated instead on approaches that use the distribution of the encountered problems to dynamically modify the stored rules (e.g., by forgetting ineffective rules) [11; 6]. These differences are to some extent caused by the differences in CSP and planning problems. The nogoods learned in CSP problems have traditionally only been used in intra-problem learning, to cut down search in the other branches of the same problem.

⁴We are assuming that none of the refinement decisions are degenerate; each of them adds at least one new constraint to the node.
In contrast, work in machine learning concentrated more on inter-problem learning. (There is no reason for this practice to continue, however, and it is hoped that the comparative analysis here may in fact catalyze inter-problem learning efforts in CSP.)

5 Relations to existing work

Figure 6 provides a rough conceptual flow chart of the existing approaches to DDB and EBL in the context of our formalization. In the following we will discuss differences between our formalization and some of the implemented approaches. Most CSP techniques do not explicitly talk about regression as a part of either the backtracking or the learning. This is because in CSP there is a direct one-to-one correspondence between the current partial assignment in a search node and the decisions responsible for each component of the partial assignment. For example, a constraint x = a must have been added by the decision x ← a. Thus, in the example in Figure 5, it would have been easy enough to see that we can "jump back" to N1 as soon as we computed the failure explanation at N4. This sort of direct correspondence has facilitated specialized versions of the DDB algorithm that use "constraint graphs" and other syntactic characterizations of a CSP problem to help in deciding which decision to backtrack to [17]. Regression is, however, important in other refinement search scenarios, including planning, where there is no one-to-one correspondence between decisions and the constraints in the node. Most CSP systems do not add the flaw description to the interior node explanations. This makes sense given that most CSP systems use learned explanations only within the same problem, and the same flaws have to be resolved in every branch. The flaw description needs to be added to preserve soundness of the learned nogoods if these were to be used across problems. The flaw description is also important in planning problems, even in the case of intra-problem learning,
where different search branches may involve different subgoaling structures and thus different flaws. Traditionally, learning of nogoods in CSP is done by simply analyzing the dead-end node and enumerating all small subsets of the node assignment that are by themselves inconsistent. The resultant explanations may not correspond to any single explicit violated constraint, but may correspond to the violation of an entailed constraint. For example, in the example in Figure 5, it is possible to compute u = C ∧ v = D as an explanation of failure of N5, since with those values in place, w cannot be given a value (even though w had not been considered up to that point). Dechter [3] shows that computing the minimal explanations does not necessarily pay off in terms of improved performance. The approach that we described in this paper allows us to start with any reasonable explanation of failure of the node (e.g., a learned nogood or a domain constraint that is violated by the node) and learn similar minimal explanations through propagation. It seems plausible that the interior node failure explanations learned in this way are more likely to be applicable in other branches and problems, since they resulted from the default behavior of the underlying search engine.

[Figure 6 is a flow chart relating the implemented approaches: backjumping (jump to the nearest ancestor decision that played a part in the failure, using regression or explicit decision dependencies); remembering failure explanations at each dead end, which raises the questions of how many and which failure reasons to remember; the EBL approach (process failures lazily, regressing them to higher levels as failures occur in other branches, managed with utility analysis [Minton]); and the Dechter/CSP approach (eagerly process the failing node to find all small mutually inconsistent subsets of its constraints, using size or static minimality criteria [Dechter], or the weaker notion of relevant explanations).]

Figure 6: A schematic flow chart tracing the connections between implemented approaches to DDB and EBL

Intelligent backtracking techniques in planning include the "context"-based backtracking search used in Wilkins's SIPE [18], and the decision graphs used by Daniel et al. to support intelligent backtracking in Nonlin [2]. The decision graphs and contexts explicitly keep track of the dependencies between the constraints in the plan and the decisions that were taken on the plan. These structures are then used to facilitate DDB. In a way, decision graphs attempt to solve the same problem that is solved by regression. However, the semantics of decision graphs are often problem dependent, and storing and maintaining them can be quite complex [14]. In contrast, the notion of regression and propagation is problem independent, and explicates the dependencies between decisions on an as-needed basis. On the flip side, regression and propagation work only when we have a declarative representation of decisions and failure explanations, while dependency graphs may be constructed through procedural or semi-automatic means.

6 Summary

In this paper, we characterized two long-standing ideas, dependency directed backtracking and explanation based learning, in the general task-independent framework of refinement search. I showed that at the heart of both DDB and EBL is a process of explaining failures at leaf nodes of a search tree, and regressing them through the refinement decisions to compute failure explanations at interior nodes.
DDB occurs when the explanation of failure regresses unchanged over a refinement decision, while EBL involves storing and applying failure explanations of the interior nodes in other branches of the search tree, or in other problems. I showed that the way in which the initial failure explanation is selected can have a significant impact on the extent and utility of DDB and EBL. The utility of EBL is also dependent on the strategies used to manage the stored failure explanations. I have also explained the relations between our formalization of DDB and EBL and the existing work in the planning and CSP areas. It is hoped that this task-independent formalization of DDB/EBL approaches will clarify the deep connections between the two ideas, and also facilitate a greater cross-fertilization of approaches from the CSP, planning and problem solving communities. For example, CSP approaches could benefit from the results of research on the utility of EBL, and planning research could benefit from the improved backtracking algorithms being developed for CSP [5].

References

[1] N. Bhatnagar and J. Mostow. On-line Learning From Search Failures. Machine Learning, Vol. 15, pp. 69-117, 1994.
[2] L. Daniel. Planning: Modifying non-linear plans. University of Edinburgh, DAI Working Paper 24.
[3] R. Dechter. Enhancement schemes for constraint processing: Backjumping, learning, and cutset decomposition. Artificial Intelligence, Vol. 41, pp. 273-312, 1990.
[4] D. Frost and R. Dechter. Dead-end driven learning. In Proc. AAAI-94, 1994.
[5] M. Ginsberg and D. McAllester. GSAT and Dynamic Backtracking. In Proc. KR-94, 1994.
[6] J. Gratch and G. DeJong. COMPOSER: A Probabilistic Solution to the Utility Problem in Speed-up Learning. In Proc. AAAI-92, pp. 235-240, 1992.
[7] S. Kambhampati, C. Knoblock and Q. Yang. Planning as Refinement Search: A Unified Framework for Evaluating Design Tradeoffs in Partial Order Planning.
Artificial Intelligence special issue on Planning and Scheduling, Vol. 76, pp. 167-238, 1995.
[8] S. Kambhampati and B. Srivastava. Universal Classical Planner: An Algorithm for Unifying State-space and Plan-space Planning. In Proc. 3rd European Workshop on Planning Systems, 1995.
[9] S. Kambhampati, S. Katukam and Y. Qu. Failure Driven Dynamic Search Control for Partial Order Planners: An Explanation-based Approach. Artificial Intelligence, Fall 1996 (to appear).
[10] S. Minton, J.G. Carbonell, C.A. Knoblock, D.R. Kuokka, O. Etzioni and Y. Gil. Explanation-Based Learning: A Problem Solving Perspective. Artificial Intelligence, 40:63-118, 1989.
[11] S. Minton. Quantitative Results Concerning the Utility of Explanation-Based Learning. Artificial Intelligence, 42:363-391, 1990.
[12] N.J. Nilsson. Principles of Artificial Intelligence. Tioga, Palo Alto, 1980.
[13] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984.
[14] C. Petrie. Constrained Decision Revision. In Proc. 10th AAAI, 1992.
[15] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.
[16] R. Stallman and G. Sussman. Forward Reasoning and Dependency-directed Backtracking in a System for Computer-aided Circuit Analysis. Artificial Intelligence, Vol. 9, pp. 135-196, 1977.
[17] E. Tsang. Foundations of Constraint Satisfaction. Academic Press, San Diego, California, 1993.
[18] D. Wilkins. Practical Planning. Morgan Kaufmann, 1988.
Compilation of Non-Contemporaneous Constraints

Robert E. Wray, III and John E. Laird and Randolph M. Jones
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109-2110
{wrayre,laird,rjones}@umich.edu

Abstract

Hierarchical execution of domain knowledge is a useful approach for intelligent, real-time systems in complex domains. In addition, well-known techniques for knowledge compilation allow the reorganization of knowledge hierarchies into more efficient forms. However, these techniques have been developed in the context of systems that work in static domains. Our investigations indicate that it is not straightforward to apply knowledge compilation methods for hierarchical knowledge to systems that generate behavior in dynamic environments. One particular problem involves the compilation of non-contemporaneous constraints. This problem arises when a training instance dynamically changes during execution. After defining the problem, we analyze several theoretical approaches that address non-contemporaneous constraints. We have implemented the most promising of these alternatives within Soar, a software architecture for performance and learning. Our results demonstrate that the proposed solutions eliminate the problem in some situations and suggest that knowledge compilation methods are appropriate for interactive environments.

Introduction

Complex domains requiring real-time performance remain a significant challenge to researchers in artificial intelligence. One successful approach has been to build intelligent systems that dynamically decompose goals into subgoals based on the current situation (Georgeff & Lansky 1987; Laird & Rosenbloom 1990). Such systems structure procedural knowledge hierarchically according to a task decomposition. A hierarchical representation allows the performance system to react within the context of intermediate goals, using domain theories appropriate to each goal.
Sub-tasks are dynamically combined and decomposed in response to the current situation and higher-level tasks, until the system generates the desired level of behavior. A hierarchical decomposition can thus generate appropriate complex behavior while maintaining an economical knowledge representation. In contrast, a flat, fully reactive knowledge base implementing the same behavior would explicitly represent all the possible combinations of subtasks that arise through dynamic composition within a hierarchy. Unfortunately, performing step-by-step decomposition and processing at each level of a hierarchy takes time. This problem is exacerbated when the world is changing while execution takes place. For time-critical behavior, such decomposition may not be feasible. One possible solution is to have a system that supports both hierarchical knowledge and flat reactive rules. Reactive rules apply when possible; otherwise, reasoning reverts to goals and subgoals. Many multi-level real-time systems approximate this general framework. Critical questions concern which reactive knowledge should be included and where such knowledge would come from. A promising answer is the dynamic compilation of reactive knowledge from the system's hierarchical knowledge. To date, knowledge compilation algorithms such as explanation-based learning (EBL) (DeJong & Mooney 1986) have been used to compile the results of off-line planning activities into control knowledge for faster execution at runtime. However, except in limited cases (Bresina, Drummond, & Kedar 1993; Mitchell 1990; Pearson et al. 1993), EBL and knowledge compilation techniques have not been used to compile hierarchical execution systems into reactive systems concurrent with execution. Such an approach would provide an additional form of speedup for hierarchical execution systems. In this paper, we study the issues that arise when hierarchical execution systems are dynamically compiled into reactive systems.
We assume the hierarchical system represents its knowledge as goals and operators in domain theories, while the reactive system uses rules. Rule-based systems are an attractive representation for reactive knowledge because it is possible to build new rules incrementally and add them to very large rule bases as the system is running (Doorenbos 1994).

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Given this representation, we consider how a knowledge compilation process could work using EBL-like techniques. What we find is that a direct application of EBL to a straightforward domain theory can lead to complications in a dynamic domain. Specifically, the learning mechanism may create rules with conditions that test for the existence of features that can never co-occur in the environment. We call such conditions non-contemporaneous constraints. This problem represents a severe stumbling block in applying knowledge compilation methods not only in interactive environments, but for hierarchically decomposed tasks in general. The aim of this paper is to convey an understanding of the problem and to investigate the space of potential solutions. Results from implementations of the most promising solutions demonstrate how the problem can be avoided. These results suggest that knowledge compilation mechanisms may apply to a much larger population of environments than previous work has suggested.

An Example Domain

To illustrate our discussion, we will present examples for a system that controls an airplane in a realistic flight simulator. This task can naturally be decomposed into a hierarchy of domain theories. At the highest level, the agent may generate decisions to fly particular flight plans. Execution of flight plans can be achieved by creating subgoals to arrive at certain locations at certain times.
Further decomposition is possible until execution reaches the level of primitive commands. At this level, the flight controller issues commands that can be directly executed by the flight simulator, such as "push the throttle forward 4 units" or "move the stick left 10 units". This decomposition is not a pure abstraction hierarchy as employed by abstraction planners (Sacerdoti 1974; Knoblock 1994) but a hierarchy of domain theories over features that can be aggregates, and not just abstractions, of features at lower levels. Higher levels can also include intermediate data structures that are not used at the primitive levels. This representation is very similar to that used in HTN planners (Erol, Hendler, & Nau 1994). The difference is that in this approach all the necessary control knowledge is part of the domain theory. The hierarchy of goals develops dynamically, in response to goals higher in the hierarchy as well as to changes in the environment, but the task is executed without search. This style of knowledge representation has been used for both stick-level control of a simulated aircraft (Pearson et al. 1993) and higher-level control of a tactical flight simulator (Tambe et al. 1995), using 400 to 3500 rules and from five to ten levels of goals.

1. IF NotEqual(Heading, GoalHeading)
   THEN CreateGoal(ACHIEVE-HEADING)
2. IF Goal(ACHIEVE-HEADING) AND LeftOf(Heading, GoalHeading)
   THEN Execute(ROLL(right))
3. IF Goal(ACHIEVE-HEADING) AND RightOf(Heading, GoalHeading)
   THEN Execute(ROLL(left))
4. IF Goal(ACHIEVE-HEADING) AND NotEqual(Roll, level) AND Equal(Heading, GoalHeading)
   THEN Execute(ROLL(level))
5. IF Goal(ACHIEVE-HEADING) AND Equal(Roll, level) AND Equal(Heading, GoalHeading)
   THEN DeleteGoal(ACHIEVE-HEADING)

Table 1: Simplified domain theory for achieving a goal heading.

We will
This task uses commands that control the roll, pitch, yaw, and speed of the aircraft, Table 1 contains a subset of a domain theory for this simplified flight, controller. Assume that the controller is flying due north and has just made a decision to turn due east,, represented as a new goal heading. The difference between the current and goal headings leads to the creation of an ACHIEVE-HEADING goal by Rule 1. The execution of the ROLL(right) command then follows from Rule 2. With the plane in a roll, it begins turning, re- flected by changing values for the current heading ar- riving through the input system. The turn continues until the current heading and goal heading are the same. Because the plane is in a roll when the goal heading is achieved, the agent, cannot simply delete the ACHIEVE-HEADING goal. Instead, Rule 4 gener- ates a command to return the aircraft to level flight. Once the aircraft has leveled off, Rule 5 terminates the ACHIEVE-HEADING goal. In this example, the knowl- edge for attaining a new heading under different condi- tions has been distributed across a number of subtasks in the hierarchy. Individual rules consist of conditions that are simple, easy to understand, and may com- bine in different, ways to get, different behaviors. For instance, ROLL commands can be invoked from other contexts, and the ACHIEVE-HEADING goal can be cre- ated for conditions other than the simple one here. Compilation of Hierarchical Knowledge Now we consider one knowledge compilation technique, EBL, for compiling hierarchical knowledge. EBL uses a domain theory to generate an explanation of why a truaning instance is an example of a goal concept (Mitchell, Kellar, & Kedar-Cabelli 1986). It then com- 772 Learning piles the explanation into a representation that satisfies appropriate operutionality criteria. In our case, an ex- planation is a trace of the reasoning generated using the domain theory for a given level in the hierarchy. 
The training instance is a set of features describing the state of the world; however, these features may change because the domain is dynamic. The goal concepts are any features or actions that are created for use in solving a higher-level problem. In our example, this includes just the primitive ROLL actions sent to the external environment (assumed to represent the highest level of the hierarchy). Finally, the operationality criterion specifies that the features included in a compiled rule should come from the representation used in the higher-level problem. In the example, everything but the ACHIEVE-HEADING goal is operational. EBL compiles the reasoning within a goal by regressing from the generated action through the trace of reasoning, collecting operational tests. The collected tests and the action combine into a rule that can be used in place of a similar chain of reasoning in the future. Because a given goal can incrementally generate results for higher levels (such as multiple primitive control actions), multiple explanations and multiple rules can be constructed from a single goal.

Non-Contemporaneous Constraints

The rules in Table 1 represent a natural decomposition of the turning task. However, compiling this reasoning proves to be problematic. Difficulty arises because the current heading changes while the ACHIEVE-HEADING goal is active, leading the heading to be used for two different purposes over the course of a turn. The heading is used to initiate the turn, when it is not equal to the goal heading. Then, when the heading and goal heading are equal, the system ends the turn by rolling the aircraft back to level flight.
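The two temporally distinct tests of the heading can be made concrete with a toy regression over the trace (an illustrative sketch of our own; the tuple encoding and helper function are not the actual EBL implementation):

```python
# Illustrative sketch only: naive regression collects every operational
# test along the trace of Rules 1 and 4. Because Heading changed while
# the goal persisted, the collected tests can never hold at one time.

trace = [
    # (rule, operational tests it made, heading at that moment)
    ("Rule 1", [("Heading", "!=", "GoalHeading")], 0),
    ("Rule 4", [("Roll", "!=", "level"),
                ("Heading", "==", "GoalHeading")], 90),
]

collected = [test for _, tests, _ in trace for test in tests]

def contradictory(tests):
    """True if one feature/value pair is tested with opposite operators."""
    seen = {}
    for feature, op, value in tests:
        if seen.get((feature, value), op) != op:
            return True
        seen[(feature, value)] = op
    return False

print(contradictory(collected))  # True: a non-contemporaneous constraint
```

The collected tests require Heading both to differ from and to equal GoalHeading, which no single state can satisfy.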
Thus, the direct approach to generating an explanation for the final ROLL(level) command tests two different values of the Heading, one tested by Rule 1 to initiate the goal, and one by Rule 4 to level the plane:

IF NotEqual(Heading, GoalHeading) AND NotEqual(Roll, level) AND Equal(Heading, GoalHeading) THEN Execute(ROLL(level))

These conditions are clearly inconsistent with each other. This is not a problem just because Heading changed. The value of Roll also changed (from being level to banking right), but because it was only tested at one point (by Rule 4), it did not cause a problem. One way to describe the problem is that a persistent goal has been created based on transitory features in the environment. The ACHIEVE-HEADING goal persists across changes in heading. If a feature that leads to a goal changes over the course of the goal, and a new feature is later tested, then those features might enter into the compiled rule as non-contemporaneous constraints. This problem can occur whenever the performance system creates a persistent data structure based on features that change during the subgoal.

When EBL has been used to compile control knowledge within planning systems, even for dynamic domains, non-contemporaneous constraints have not arisen because the training instance is static during plan generation. For instance, both Chien et al. (1991) and DeJong and Bennett (1995) describe approaches to planning and execution in which there is no interaction with the environment during planning and learning. However, when plan execution and EBL both occur while interacting with the environment, non-contemporaneous constraints can result.

Possible Approaches

Rather than simply dismiss explanation-based learning as inadequate, we now investigate possible ways to address non-contemporaneous constraints.

A-1: Include All Tested Features in the Training Instance.
The simplest approach is to include in the training instance every feature that was tested during reasoning, regardless of whether it exists at learning time. In some cases, a useful rule may be learned. If the features are non-contemporaneous, however, the system will learn a rule containing non-contemporaneous constraints. This rule will not cause incorrect behavior, but it will never match. Additionally, a particular rule may be created repeatedly, wasting more time. Perhaps most importantly, an opportunity to learn something useful is lost.

A-2: Prune Rules with Non-contemporaneous Constraints. Another obvious approach is to forgo learning when a training instance includes features that are not present when the primitive is generated. This will avoid the detrimental effects of learning rules that contain non-contemporaneous constraints. However, it will also fail to learn useful rules when the missing features are not actually non-contemporaneous. A refinement of this approach is to delete rules with non-contemporaneous constraints. However, the recognition of non-contemporaneous constraints requires domain-dependent knowledge. For example, it may be physically impossible for a given plane to fly at a specific climb rate and pitch, although it can achieve both of them independently. This knowledge may not even be implicit in the system’s knowledge base, so that additional knowledge is required for correct learning, but not for correct performance.[1]

A-3: Restrict Training Instances to Contemporaneous Features. A similar approach is to base explanations only on items that exist at some single point in time. For instance, a system could be designed to include in the training instance only those features that are true in the environment when the primitive is issued (even if other contributing features, no longer present, were tested as well).
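One way to picture such a filter (purely illustrative names and tuple encoding, not the actual system):

```python
# Illustrative sketch only: keep just the tests whose features still hold
# in the environment at the moment the primitive is issued.

env_at_primitive = {("Roll", "!=", "level"),
                    ("Heading", "==", "GoalHeading")}  # true when ROLL(level) fires

all_tests = [("Heading", "!=", "GoalHeading"),  # from Rule 1, now stale
             ("Roll", "!=", "level"),           # from Rule 4
             ("Heading", "==", "GoalHeading")]  # from Rule 4

contemporaneous = [t for t in all_tests if t in env_at_primitive]
print(contemporaneous)
# [('Roll', '!=', 'level'), ('Heading', '==', 'GoalHeading')]
```

The surviving tests are exactly the conditions of the over-general rule discussed next: nothing records that the heading changed, so the compiled rule could fire when the plane should stay banked.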
This guarantees that non-contemporaneous constraints will not appear in the final rule. However, there is no guarantee that the resulting rule will be correct. If the final result is dependent upon the change of a value from an earlier one, then including only the final value in the learned rule will make it over-general. For example, consider the flight controller. If the system used only contemporaneous features at the time it generated ROLL(level), the rule would look like this:

IF NotEqual(Roll, level) AND Equal(Heading, GoalHeading) THEN Execute(ROLL(level))

This rule could be over-general, because there may be times when the system should not level off. Bresina et al. (1993) describe an approach to avoiding over-general rules when the training instance is based upon features that existed at the time a chain of reasoning was initiated. Their approach is based upon a specific representational scheme that requires knowledge of temporal relationships in the domain.

A-4: Freeze Input During Execution. Another approach is to not allow the training instance to change over the course of generating a primitive, thus avoiding reasoning that would incorporate non-contemporaneous constraints. Because the training instance no longer changes with time, any learned rules will be guaranteed to be contemporaneous. However, this approach forces a loss in reactivity because the system will be unable to respond to changes during its reasoning, even if those changes are very important (for example, if a wind shear causes the aircraft to experience a sudden loss of altitude).

A-5: Deactivate Goals Following Any Change. A seemingly drastic approach is to force the system to start with a new training instance by removing all persistent goals every time there is a change. This guarantees that the execution will not use any goals that depend on outdated features of the environment.
However, this means that reasoning must be restarted every time the external environment changes. This can be time consuming, although as more reactive rules are learned, the need to generate goals decreases. It is an empirical question whether costs of regeneration will be balanced by improvements from learning.

[1] Some systems make such relationships explicit. For example, the ERE architecture (Bresina, Drummond, & Kedar 1993) includes domain constraints, which specify all possible co-occurrences.

This approach may also require some extensions to the original domain theory to work properly. For instance, in the air controller example, the original representation contains a single rule for establishing the ACHIEVE-HEADING goal. After the system has begun the turn, the aircraft’s changing heading will cause the goal to disappear. However, the goal will continue to regenerate through the application of Rule 1 until the plane’s heading matches the goal heading. At this point, Rule 1 will no longer apply, the ACHIEVE-HEADING goal will not be created, and Rule 4 cannot fire to level the airplane. This exposes a gap in the system’s domain theory, which can be patched with the following rule:

IF Equal(Heading, GoalHeading) AND NotEqual(Roll, level) THEN CreateGoal(ACHIEVE-HEADING)

This rule is not just a special case to eliminate non-contemporaneous constraints. Rather, it is real flight domain knowledge that was missing from the original formulation of the task. The original representation assumed one would always roll back to level flight at the end of an ACHIEVE-HEADING goal, so this knowledge was not needed explicitly. However, this fails to take into account the possibility of a new goal arising while the old goal is being achieved. Suppose the plane is in a roll at a particular heading and higher-level knowledge determines that the aircraft should now maintain that current heading (e.g., due to some emergency condition or a change in overall plan).
It then becomes necessary to level off from the turn. In the new hierarchy, this is accomplished as an implementation of the ACHIEVE-HEADING goal. Thus, the knowledge that must be added is a beneficial refinement of the system’s domain theory.

A-6: Deactivate Dependent Goal Structure Following Any Change. A refinement of the previous approach is to deactivate the structures in a goal selectively, based on knowledge dependencies between intermediate results and the current external situation. In this approach, the goals are continually adjusted so they are consistent with the current training instance. As the training instance changes, intermediate results are deactivated, and new ones, consistent with the current training instance and domain theory, are generated. Referring to our flight example, ACHIEVE-HEADING would only be deleted once, when the conditions of Rule 1 no longer hold in the environment. As with A-5, this approach may require additions to the domain theory.

This alternative presents a subtle complication. A domain theory may create persistent internal features as well as persistent goals. The rules that create goals will, by definition, test features that are higher in the task hierarchy than the goals they create. Goals will always be compiled at learning time into their constituent conditions. Non-goal persistent features, however, will be generated by rules that may contain goal tests (thus requiring further compilation), tests of other persistent features (possibly leading to non-contemporaneous constraints), and tests of non-persistent features (which should not be problematic). Thus, a complete knowledge-dependence mechanism must keep track of dependencies for both goals and other persistent features.
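A minimal sketch of selective retraction, assuming each goal records the justifying condition of the rule that created it (our own illustration, not the Soar mechanism; it tracks only goal justifications, whereas persistent non-goal features would need similar bookkeeping):

```python
# Illustrative sketch only: each persistent goal keeps the justifying
# condition of the rule that created it; after any input change, only
# goals whose justifications no longer hold are retracted.

def retract_stale(goals, env):
    """Keep a goal only while its justifying condition still holds."""
    return {g: cond for g, cond in goals.items() if cond(env)}

env = {"Heading": 0, "GoalHeading": 90}
goals = {"ACHIEVE-HEADING":
         lambda e: e["Heading"] != e["GoalHeading"]}   # Rule 1's test

env["Heading"] = 45                    # mid-turn: justification still holds
goals = retract_stale(goals, env)
assert "ACHIEVE-HEADING" in goals

env["Heading"] = 90                    # goal heading reached
goals = retract_stale(goals, env)
assert "ACHIEVE-HEADING" not in goals  # deleted once, as described above
print("ok")
```

Unlike wholesale deactivation, intermediate changes to tracked features leave unrelated goals in place; only the goal whose justification fails is removed.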
The expectation is that this alternative will most intelligently use the architecture to track dependencies, but may also incur a large overhead in keeping track of all the relevant (and only the relevant) dependencies for goals and persistent features. This overhead may be severe enough to impact the reactivity of the system significantly.

A-7: Eliminate the Persistence of Goals and Features. Another approach is to eliminate persistence in the performance system altogether. Goals (and other features) will remain active only as long as the rules that generate them match. This makes non-contemporaneous constraints impossible, because there will never be any goals in the current chain of reasoning that depend on features that are no longer true. This requires no further overhead for tracking knowledge dependencies than an execution system would already use for matching rules. This approach has been demonstrated in Theo-Agent (Mitchell 1990). However, eliminating all persistence means that the intelligent agent can have no memory. Suppose, for example, that the flight controller received a radio message to come to a particular heading. The system would forget the new heading as soon as the radio message disappeared from the input system because there would be no way to store the information persistently.

A-8: Extend Training Instances (and Domain Theories) To Include Temporal Dependencies. A final approach is to supplement our domain theory representation with causal knowledge or knowledge of temporal dependencies. Explanation-based learning algorithms assume that a training instance includes all the features that are potentially relevant to generating the goal concept. In dynamic environments, this may include the temporal dependencies among the changes to the state used to generate the goal.
If true, domain theories need to be extended to create and test temporal relationships and dependencies (such as Rule 4 testing that the Heading is now equal to the Desired-Heading, but it previously had some other value). This requires a history of previous events and would lead to explanations that explicitly test temporal dependencies. The result of knowledge compilation would be rules that are non-contemporaneous, but could still match because a memory of earlier behavior would always be available. A number of planning and learning approaches use representations that include temporal constructs (Bresina, Drummond, & Kedar 1993; DeJong & Bennett 1995; Gervasio 1994). However, because these relations must be represented explicitly (even when behavior could be generated without the extensions), the type of domains in which such approaches are applicable may be limited, as in A-7.

Approaches in Task Knowledge. Each of the proposed changes to the performance system imposes some constraints on how knowledge must be represented. In addition, given a general enough performance system, many of the proposed alternatives can be realized by adding general rules to the domain theory rather than actually changing the performance system. Thus, for comparison purposes, we propose alternatives TK-4, TK-5, TK-6, and TK-7 as knowledge-based implementations of alternatives A-4, A-5, A-6, and A-7, respectively. Besides the possible requirement for some new domain-general rules, the TK alternatives are conventions for representing the domain theory, whereas the A alternatives require similar changes to the domain theory. However, it is also possible that some of the TK alternatives will incur less overhead than the corresponding A alternatives because they can be tailored to particular domains and do not have to provide general solutions. A-8 is already an approach that is dependent on changing the domain theory in addition to the execution architecture.
Evaluating the Alternatives

We could explore all of these alternatives in depth, but a qualitative analysis indicates that some of the approaches are more promising than others. Thus, we have identified a few alternatives to implement based upon a number of priorities, presented in Table 2. For instance, because A-1, A-2, and A-3 do not lead to unproblematic learning, they have least priority for implementation. On the other hand, because A-5 and TK-5 (the “deactivation approaches”) meet all our priorities, they were implemented first. Space prevents us from justifying each of these priorities in this paper. However, our analysis has shown they successfully identify the more promising alternatives.

Comparing A-5 and TK-5 requires quantitative analysis. This analysis requires both a performance system amenable to the implementation of the solutions and an appropriate suite of tasks. Our execution strategy demands a performance system with these features: 1) it interacts responsively with an external environment; 2) it represents and executes hierarchical knowledge; 3) it operationalizes experience using a knowledge compilation mechanism. The Soar architecture (Laird, Newell, & Rosenbloom 1987) meets these demands. Specifically, Soar has been applied to a number of different external domains using hierarchical, procedural knowledge (Laird & Rosenbloom 1990; Pearson et al. 1993; Tambe et al. 1995), and Soar’s learning mechanism, chunking, has been described as a form of explanation-based generalization (Rosenbloom & Laird 1986).

For a task environment, we have developed the dynamic blocks world, a test bed (in the sense of Hanks, Pollack, & Cohen 1993) that facilitates controlled experimentation while posing tasks that distill important properties of domains like flight control. Tasks in the test bed are similar to blocks world problems familiar in the planning literature.
However, there are two key differences. First, actions are not internal. The agent generates primitive commands (e.g., “open the gripper”, “move-up 2 steps”), which are then executed by a simulator. The agent’s knowledge of a primitive action is thus separate from the actual implementation of the action. Second, the domain is dynamic. Actions take time to execute, there is simulated gravity, and exogenous events can be scheduled to move blocks, knock towers over, etc. Our goals in developing this test bed are both to compare solution alternatives under controlled, experimental conditions and to further understanding of the capabilities necessary for interaction in complex and dynamic environments.

Our quantitative results thus far are based on a simple tower-building task. When executing this task, the simplest approach supported by the Soar architecture learns rules with non-contemporaneous constraints (A-1). We use the performance of this system as a baseline to compare the other approaches. Both deactivation approaches have been implemented and applied to this task. Evaluation of the alternative approaches relies on the following three criteria.

Executes Task and Learns Correctly: Each of the deactivation alternatives successfully executes the tower-building task and learns rules without non-contemporaneous constraints. Our formulation of the domain theory for these problems was intended to cause the non-contemporaneous problem whenever possible by creating relatively deep hierarchies. Thus, although a relatively simple task, this result is a significant validation of the approaches.

Prefer Less Knowledge: Less knowledge is preferred because it reduces knowledge engineering demands. Acquiring this knowledge, regardless of the technique chosen, will take longer for greater knowledge requirements. Three different types of knowledge are required in the two approaches. First is the domain theory itself.
All approaches, including the baseline approach, require this knowledge. However, domain theories may need further refinement under the deactivation approaches, pointing out incompleteness in the domain theory. This proved true for the tower-building task, for which 10 additional rules were added to the baseline domain theory of 124 rules. Domain-independent knowledge of the approach is also required. In A-5, this knowledge is incorporated in the architecture itself. In TK-5, this knowledge must be added to the knowledge base. This task required the addition of 2 task-independent rules. The third type of necessary knowledge is domain-dependent knowledge of the approach. Once again, in A-5 the solution is implemented in the architecture. Thus, goal deactivation can be performed independent of the domain. TK-5, on the other hand, required 7 domain-specific rules for deactivating goals. These rules are significant because they are the most difficult to engineer and/or acquire.

Based on this analysis, A-5 clearly requires less knowledge in comparison to TK-5. Questions for future work are to determine if the additional knowledge required by TK-5 is constant over the domain and how the additional knowledge requirements scale with an increasingly complex domain theory.

Figure 1: Execution steps of alternatives before (1), during (2), and after (3) learning.
Figure 2: Total CPU time of alternatives before (1), during (2), and after (3) learning.

Task Performance: We measure performance according to three key characteristics: the number of reasoning steps required for executing a task, the performance speedup with learning, and CPU time. Figure 1 plots the number of execution steps for the baseline, A-5, and TK-5 approaches as they change with learning. The deactivation approaches initially required about twice the number of execution steps as the baseline.
However, after learning the solutions performed much better than the baseline, with an average speedup factor of more than three. Additionally, the architectural approach was consistently better than the knowledge-based approach, although this was not unexpected. For the same type of approach, the architecture must bring the knowledge of the technique to bear in a TK variant, while in the architectural variant this knowledge is embedded within the architecture and does not require additional execution steps.

One advantage of implementing the different alternatives in the same basic architecture is that CPU times can be readily compared. Figure 2 shows the total CPU times for the block stacking task. Although the effects of learning in this diagram and Figure 1 are similar, the increase in CPU time is proportionally less than the increase in execution steps, meaning that average cycle time (time per execution step) is reduced in the deactivation approaches. We are still investigating this effect and do not have the space to consider the issues here. However, this result does suggest that the total performance cost of the deactivation approaches may be less than that indicated by the increase in the number of execution steps.

Final Discussion

These results demonstrate that the implemented approaches are sufficient for overcoming the non-contemporaneous constraints problem for at least one simple task. Furthermore, A-5 executed the task and learned correctly while requiring only minimal additions to the domain theory and no domain-independent rules. Although performance before learning required more execution steps, A-5’s performance improved significantly with learning. TK-5 required more execution steps than A-5 both before and after learning. These results have led us to consider A-5 as the primary solution approach.

In addition to our studies with A-5 and TK-5, a version of A-6 for a flight simulation domain has been implemented.
We have not implemented a version of A-6 in the dynamic blocks world because the original architectural modifications were made to a now-defunct version of Soar. The A-6 approach turned out to be very difficult to implement, and led to a dramatically increased cycle time, due to the overhead of maintaining many knowledge dependencies. This experience combined with the implementation results for A-5 have led us to generate one additional alternative, which we plan to explore in the near future, as we briefly discuss here.

A-5 requires increased execution steps before learning has a chance to operationalize the domain theory, because it deactivates the entire goal structure any time there is a change in input. A-6 is the “smart” alternative, which retracts only the goals and persistent features that depend on the changing input, but this alternative is computationally expensive. A compromise approach is to track the dependencies of goals on changing input, but not expend the extensive effort required to track the dependencies of persistent features that are not goals. This approach allows the architecture to eliminate non-contemporaneous constraints that arise from goals, but requires a domain-knowledge convention (as in TK-5) for other persistent features. The simplified tracking of dependencies through goals should decrease the computational overhead incurred by A-6, but should be less subject to environmental change than A-5. The final evaluation awaits implementation and empirical tests, but our experiences with implementations of A-5, A-6, and TK-5 suggest that this might be a worthwhile pursuit.

Although the alternative approaches have so far been applied mostly to simple tasks, the tasks have been designed within the test bed to capture the important properties of interaction in external domains.
The results of our evaluation suggest that the problem of non-contemporaneous constraints, arising from the use of explanation-based knowledge compilation in external environments, while serious, is not debilitating. Further, several of the approaches presented appear to be appropriate strategies for performance and speed-up learning in external environments.

Acknowledgements

This research was supported under contract N00014-92-K-2015 from the Advanced Systems Technology Office of the Advanced Research Projects Agency and the Naval Research Laboratory, and contract N66001-95-C-6013 from the Advanced Systems Technology Office of ARPA and the Naval Command and Ocean Surveillance Center, RDT&E division.

References

Bresina, J.; Drummond, M.; and Kedar, S. 1993. Reactive, integrated systems pose new problems for machine learning. In Minton, S., ed., Machine Learning Methods for Planning. Morgan Kaufmann. Chapter 6, 159-195.

Chien, S. A.; Gervasio, M. T.; and DeJong, G. F. 1991. On becoming decreasingly reactive: Learning to deliberate minimally. In Proceedings of the Eighth International Workshop on Machine Learning, 288-292.

DeJong, G., and Bennett, S. 1995. Extending classical planning to real-world execution with machine learning. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1153-1159.

DeJong, G., and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning 1(2):145-176.

Doorenbos, R. 1994. Combining left and right unlinking for matching a large number of learned rules. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94).

Erol, K.; Hendler, J.; and Nau, D. S. 1994. HTN planning: Complexity and expressivity. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1123-1128.

Georgeff, M., and Lansky, A. L. 1987. Reactive reasoning and planning.
In Proceedings of the National Conference on Artificial Intelligence, 677-682.

Gervasio, M. T. 1994. An incremental learning approach for completable planning. In Proceedings of the Eleventh International Conference on Machine Learning, 78-86.

Hanks, S.; Pollack, M.; and Cohen, P. R. 1993. Benchmarks, test beds, controlled experimentation and the design of agent architectures. AI Magazine 14:17-42.

Knoblock, C. A. 1994. Automatically generating abstractions for planning. Artificial Intelligence 68:243-302.

Laird, J. E., and Rosenbloom, P. S. 1990. Integrating execution, planning, and learning in Soar for external environments. In Proceedings of the Eighth National Conference on Artificial Intelligence, 1022-1029.

Laird, J. E.; Newell, A.; and Rosenbloom, P. S. 1987. Soar: An architecture for general intelligence. Artificial Intelligence 33:1-64.

Mitchell, T. M.; Keller, R. M.; and Kedar-Cabelli, S. T. 1986. Explanation-based generalization: A unifying view. Machine Learning 1(1):47-80.

Mitchell, T. M. 1990. Becoming increasingly reactive. In Proceedings of the Eighth National Conference on Artificial Intelligence, 1051-1058.

Pearson, D. J.; Huffman, S. B.; Willis, M. B.; Laird, J. E.; and Jones, R. M. 1993. A symbolic solution to intelligent real-time control. Robotics and Autonomous Systems 11:279-291.

Rosenbloom, P., and Laird, J. 1986. Mapping explanation-based generalization onto Soar. In Proceedings of the National Conference on Artificial Intelligence, 561-567.

Sacerdoti, E. D. 1974. Planning in a hierarchy of abstraction spaces. Artificial Intelligence 5:115-135.

Tambe, M.; Johnson, W. L.; Jones, R. M.; Koss, F.; Laird, J. E.; Rosenbloom, P. S.; and Schwamb, K. 1995. Intelligent agents for interactive simulation environments. AI Magazine 16(1):15-39.
Compilation of Non-Contemporaneous Constraints

Robert E. Wray, III, John E. Laird, and Randolph M. Jones
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109-2110
{wrayre,laird,rjones}@umich.edu

Abstract

Hierarchical execution of domain knowledge is a useful approach for intelligent, real-time systems in complex domains. In addition, well-known techniques for knowledge compilation allow the reorganization of knowledge hierarchies into more efficient forms. However, these techniques have been developed in the context of systems that work in static domains. Our investigations indicate that it is not straightforward to apply knowledge compilation methods for hierarchical knowledge to systems that generate behavior in dynamic environments. One particular problem involves the compilation of non-contemporaneous constraints. This problem arises when a training instance dynamically changes during execution. After defining the problem, we analyze several theoretical approaches that address non-contemporaneous constraints. We have implemented the most promising of these alternatives within Soar, a software architecture for performance and learning. Our results demonstrate that the proposed solutions eliminate the problem in some situations and suggest that knowledge compilation methods are appropriate for interactive environments.

Introduction

Complex domains requiring real-time performance remain a significant challenge to researchers in artificial intelligence. One successful approach has been to build intelligent systems that dynamically decompose goals into subgoals based on the current situation (Georgeff & Lansky 1987; Laird & Rosenbloom 1990). Such systems structure procedural knowledge hierarchically according to a task decomposition. A hierarchical representation allows the performance system to react within the context of intermediate goals, using domain theories appropriate to each goal.
Sub-tasks are dynamically combined and decomposed in response to the current situation and higher-level tasks, until the system generates the desired level of behavior. A hierarchical decomposition can thus generate appropriate complex behavior while maintaining an economical knowledge representation. In contrast, a flat, fully reactive knowledge base implementing the same behavior would explicitly represent all the possible combinations of subtasks that arise through dynamic composition within a hierarchy.

Unfortunately, performing step by step decomposition and processing at each level of a hierarchy takes time. This problem is exacerbated when the world is changing while execution takes place. For time-critical behavior, such decomposition may not be feasible. One possible solution is to have a system that supports both hierarchical knowledge and flat reactive rules. Reactive rules apply when possible. Otherwise, reasoning reverts to goals and subgoals. Many multi-level real-time systems approximate this general framework.

Critical questions concern which reactive knowledge should be included and where such knowledge would come from. A promising answer is the dynamic compilation of reactive knowledge from the system’s hierarchical knowledge. To date, knowledge compilation algorithms such as explanation-based learning (EBL) (DeJong & Mooney 1986) have been used to compile the results of off-line planning activities into control knowledge for faster execution at runtime. However, except in limited cases (Bresina, Drummond, & Kedar 1993; Mitchell 1990; Pearson et al. 1993), EBL and knowledge compilation techniques have not been used to compile hierarchical execution systems into reactive systems concurrent with execution. Such an approach would provide an additional form of speedup for hierarchical execution systems. In this paper, we study the issues that arise when hierarchical execution systems are dynamically compiled into reactive systems.
We assume the hierarchical system represents its knowledge as goals and operators in domain theories, while the reactive system uses rules. Rule-based systems are an attractive representation for reactive knowledge because it is possible to build new rules incrementally and add them to very large rule bases as the system is running (Doorenbos 1994).

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Given this representation, we consider how a knowledge compilation process could work using EBL-like techniques. What we find is that a direct application of EBL to a straightforward domain theory can lead to complications in a dynamic domain. Specifically, the learning mechanism may create rules with conditions that test for the existence of features that can never co-occur in the environment. We call such conditions non-contemporaneous constraints.

This problem represents a severe stumbling block in applying knowledge compilation methods not only in interactive environments, but for hierarchically decomposed tasks in general. The aim of this paper is to convey an understanding of the problem and to investigate the space of potential solutions. Results from implementations of the most promising solutions demonstrate how the problem can be avoided. These results suggest that knowledge compilation mechanisms may apply to a much larger population of environments than previous work has suggested.

An Example Domain

To illustrate our discussion, we will present examples for a system that controls an airplane in a realistic flight simulator. This task can naturally be decomposed into a hierarchy of domain theories. At the highest level, the agent may generate decisions to fly particular flight plans. Execution of flight plans can be achieved by creating subgoals to arrive at certain locations at certain times.
Further decomposition is possible until execution reaches the level of primitive commands. At this level, the flight controller issues commands that can be directly executed by the flight simulator, such as “push the throttle forward 4 units” or “move the stick left 10 units”.

This decomposition is not a pure abstraction hierarchy as employed by abstraction planners (Sacerdoti 1974; Knoblock 1994) but a hierarchy of domain theories over features that can be aggregates and not just abstractions of features at lower levels. Higher levels can also include intermediate data structures that are not used at the primitive levels. This representation is very similar to that used in HTN planners (Erol, Hendler, & Nau 1994). The difference is that in this approach all the necessary control knowledge is part of the domain theory. The hierarchy of goals develops dynamically, in response to goals higher in the hierarchy as well as to changes in the environment, but the task is executed without search.

This style of knowledge representation has been used for both stick-level control of a simulated aircraft (Pearson et al. 1993) and higher-level control of a tactical flight simulator (Tambe et al. 1995) using 400 to 3500 rules and from five to ten levels of goals. We will focus on a single task from the flight domain: achieving a new heading.

1. IF NotEqual(Heading, GoalHeading)
   THEN CreateGoal(ACHIEVE-HEADING)
2. IF Goal(ACHIEVE-HEADING)
   AND LeftOf(Heading, GoalHeading)
   THEN Execute(ROLL(right))
3. IF Goal(ACHIEVE-HEADING)
   AND RightOf(Heading, GoalHeading)
   THEN Execute(ROLL(left))
4. IF Goal(ACHIEVE-HEADING)
   AND NotEqual(Roll, level)
   AND Equal(Heading, GoalHeading)
   THEN Execute(ROLL(level))
5. IF Goal(ACHIEVE-HEADING)
   AND Equal(Roll, level)
   AND Equal(Heading, GoalHeading)
   THEN DeleteGoal(ACHIEVE-HEADING)

Table 1: Simplified domain theory for achieving a goal heading.
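To make the behavior of this domain theory concrete, the following sketch runs the five rules of Table 1 as a simple forward-chaining loop. The environment dynamics (a banked aircraft drifts toward the goal heading in 30-degree steps) are invented for illustration and are not part of the paper's simulator; only the rule structure follows Table 1.

```python
# Minimal sketch of the Table 1 domain theory as a rule-firing loop.
# The "aircraft dynamics" below are stand-ins invented for illustration.

def run_turn(heading, goal_heading, max_ticks=50):
    roll = "level"
    goals = set()
    trace = []
    for _ in range(max_ticks):
        # Rule 1: create the goal when headings differ
        if heading != goal_heading:
            goals.add("ACHIEVE-HEADING")
        if "ACHIEVE-HEADING" in goals:
            if heading < goal_heading:        # Rule 2: LeftOf -> roll right
                roll = "right"
                trace.append("ROLL(right)")
            elif heading > goal_heading:      # Rule 3: RightOf -> roll left
                roll = "left"
                trace.append("ROLL(left)")
            elif roll != "level":             # Rule 4: on heading, still banked
                roll = "level"
                trace.append("ROLL(level)")
            else:                             # Rule 5: on heading and level
                goals.discard("ACHIEVE-HEADING")
                break
        # Stand-in dynamics: a banked aircraft changes heading each tick
        if roll == "right":
            heading = min(heading + 30, goal_heading)
        elif roll == "left":
            heading = max(heading - 30, goal_heading)
    return trace

print(run_turn(heading=0, goal_heading=90))
```

Running the sketch from heading 0 toward 90 produces three ROLL(right) commands followed by a single ROLL(level), matching the turn scenario described in the text.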
This task uses commands that control the roll, pitch, yaw, and speed of the aircraft. Table 1 contains a subset of a domain theory for this simplified flight controller.

Assume that the controller is flying due north and has just made a decision to turn due east, represented as a new goal heading. The difference between the current and goal headings leads to the creation of an ACHIEVE-HEADING goal by Rule 1. The execution of the ROLL(right) command then follows from Rule 2. With the plane in a roll, it begins turning, reflected by changing values for the current heading arriving through the input system. The turn continues until the current heading and goal heading are the same. Because the plane is in a roll when the goal heading is achieved, the agent cannot simply delete the ACHIEVE-HEADING goal. Instead, Rule 4 generates a command to return the aircraft to level flight. Once the aircraft has leveled off, Rule 5 terminates the ACHIEVE-HEADING goal. In this example, the knowledge for attaining a new heading under different conditions has been distributed across a number of subtasks in the hierarchy. Individual rules consist of conditions that are simple, easy to understand, and may combine in different ways to get different behaviors. For instance, ROLL commands can be invoked from other contexts, and the ACHIEVE-HEADING goal can be created for conditions other than the simple one here.

Compilation of Hierarchical Knowledge

Now we consider one knowledge compilation technique, EBL, for compiling hierarchical knowledge. EBL uses a domain theory to generate an explanation of why a training instance is an example of a goal concept (Mitchell, Keller, & Kedar-Cabelli 1986). It then compiles the explanation into a representation that satisfies appropriate operationality criteria. In our case, an explanation is a trace of the reasoning generated using the domain theory for a given level in the hierarchy.
The training instance is a set of features describing the state of the world; however, these features may change because the domain is dynamic. The goal concepts are any features or actions that are created for use in solving a higher-level problem. In our example, this includes just the primitive ROLL actions sent to the external environment (assumed to represent the highest level of the hierarchy). Finally, the operationality criterion specifies that the features included in a compiled rule should come from the representation used in the higher-level problem. In the example, everything but the ACHIEVE-HEADING goal is operational.

EBL compiles the reasoning within a goal by regressing from the generated action through the trace of reasoning, collecting operational tests. The collected tests and the action combine into a rule that can be used in place of a similar chain of reasoning in the future. Because a given goal can incrementally generate results for higher levels (such as multiple primitive control actions), multiple explanations and multiple rules can be constructed from a single goal.

Non-Contemporaneous Constraints

The rules in Table 1 represent a natural decomposition of the turning task. However, compiling this reasoning proves to be problematic. Difficulty arises because the current heading changes while the ACHIEVE-HEADING goal is active, leading it to be used for two different purposes over the course of a turn. The heading is used to initiate the turn, when it is not equal to the goal heading. Then, when the heading and goal heading are equal, the system ends the turn by rolling the aircraft back to level flight.
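The way regression collects tests of the same feature at two different times can be sketched mechanically. The encoding below is a toy invented for illustration: each trace entry pairs a result with the operational tests that held when its rule fired; the non-operational ACHIEVE-HEADING goal is regressed through, so the compiled conditions mix Rule 1's tests (from the start of the turn) with Rule 4's (from the end).

```python
# Toy regression over a reasoning trace. Each entry maps a result to the
# tests that held when the producing rule fired, as (feature-test, value)
# pairs; ("goal", name) marks a non-operational goal to regress through.
trace = {
    # Rule 1 fired at turn start: Heading==GoalHeading was False
    "ACHIEVE-HEADING": [("Heading==GoalHeading", False)],
    # Rule 4 fired at turn end: Heading==GoalHeading was True
    "ROLL(level)": [("Roll==level", False),
                    ("Heading==GoalHeading", True),
                    ("goal", "ACHIEVE-HEADING")],
}

def compile_rule(result):
    conditions = []
    for test in trace[result]:
        if test[0] == "goal":          # non-operational: regress further
            conditions.extend(compile_rule(test[1]))
        else:
            conditions.append(test)
    return conditions

compiled = compile_rule("ROLL(level)")
# The compiled rule tests Heading==GoalHeading as both False and True,
# i.e., it contains a non-contemporaneous constraint and can never match.
values = {v for t, v in compiled if t == "Heading==GoalHeading"}
print(values)
```

The Roll feature also changed during the turn, but since it was tested at only one point (by Rule 4), only one value of it enters the compiled conditions, mirroring the observation in the text.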
Thus, the direct approach to generating an explanation for the final ROLL(level) command tests two different values of the Heading, one tested by Rule 1 to initiate the goal, and one by Rule 4 to level the plane:

IF NotEqual(Heading, GoalHeading)
AND NotEqual(Roll, level)
AND Equal(Heading, GoalHeading)
THEN Execute(ROLL(level))

These conditions are clearly inconsistent with each other. This is not a problem just because Heading changed. The value of Roll also changed (from being level to banking right), but because it was only tested at one point (by Rule 4), it did not cause a problem.

One way to describe the problem is that a persistent goal has been created based on transitory features in the environment. The ACHIEVE-HEADING goal persists across changes in heading. If a feature that leads to a goal changes over the course of the goal, and a new feature is later tested, then those features might enter into the compiled rule as non-contemporaneous constraints. This problem can occur whenever the performance system creates a persistent data structure based on features that change during the subgoal.

When EBL has been used to compile control knowledge within planning systems, even for dynamic domains, non-contemporaneous constraints have not arisen because the training instance is static during plan generation. For instance, both Chien et al. (1991) and DeJong and Bennett (1995) describe approaches to planning and execution in which there is no interaction with the environment during planning and learning. However, when plan execution and EBL both occur while interacting with the environment, non-contemporaneous constraints can result.

Possible Approaches

Rather than simply dismiss explanation-based learning as inadequate, we now investigate possible ways to address non-contemporaneous constraints.

A-1: Include All Tested Features in the Training Instance.
The simplest approach is to include in the training instance every feature that was tested during reasoning, regardless of whether it exists at learning time. In some cases, a useful rule may be learned. If the features are non-contemporaneous, however, the system will learn a rule containing non-contemporaneous constraints. This rule will not cause incorrect behavior, but it will never match. Additionally, a particular rule may be created repeatedly, wasting more time. Perhaps most importantly, an opportunity to learn something useful is lost.

A-2: Prune Rules with Non-Contemporaneous Constraints. Another obvious approach is to forgo learning when a training instance includes features that are not present when the primitive is generated. This will avoid the detrimental effects of learning rules that contain non-contemporaneous constraints. However, it will also fail to learn useful rules when the missing features are not actually non-contemporaneous. A refinement of this approach is to delete rules with non-contemporaneous constraints. However, the recognition of non-contemporaneous constraints requires domain-dependent knowledge. For example, it may be physically impossible for a given plane to fly at a specific climb rate and pitch, although it can achieve both of them independently. This knowledge may not even be implicit in the system's knowledge base, so that additional knowledge is required for correct learning, but not for correct performance.¹

A-3: Restrict Training Instances to Contemporaneous Features. A similar approach is to base explanations only on items that exist at some single point in time. For instance, a system could be designed to include in the training instance only those features that are true in the environment when the primitive is issued (even if other contributing features, no longer present, were tested as well).
This guarantees that non-contemporaneous constraints will not appear in the final rule. However, there is no guarantee that the resulting rule will be correct. If the final result is dependent upon the change of a value from an earlier one, then including only the final value in the learned rule will make it over-general. For example, consider the flight controller. If the system used only contemporaneous features at the time it generated ROLL(level), the rule would look like this:

IF NotEqual(Roll, level)
AND Equal(Heading, GoalHeading)
THEN Execute(ROLL(level))

This rule could be over-general, because there may be times when the system should not level off. Bresina et al. (1993) describe an approach to avoiding over-general rules when the training instance is based upon features that existed at the time a chain of reasoning was initiated. Their approach is based upon a specific representational scheme that requires knowledge of temporal relationships in the domain.

A-4: Freeze Input During Execution. Another approach is to not allow the training instance to change over the course of generating a primitive, thus avoiding reasoning that would incorporate non-contemporaneous constraints. Because the training instance no longer changes with time, any learned rules will be guaranteed to be contemporaneous. However, this approach forces a loss in reactivity because the system will be unable to respond to changes during its reasoning, even if those changes are very important (for example, if a wind shear causes the aircraft to experience a sudden loss of altitude).

A-5: Deactivate Goals Following Any Change. A seemingly drastic approach is to force the system to start with a new training instance by removing all persistent goals every time there is a change. This guarantees that the execution will not use any goals that depend on outdated features of the environment.
However, this means that reasoning must be restarted every time the external environment changes. This can be time consuming, although as more reactive rules are learned, the need to generate goals decreases. It is an empirical question whether costs of regeneration will be balanced by improvements from learning.

¹Some systems make such relationships explicit. For example, the ERE architecture (Bresina, Drummond, & Kedar 1993) includes domain constraints, which specify all possible co-occurrences.

This approach may also require some extensions to the original domain theory to work properly. For instance, in the air controller example, the original representation contains a single rule for establishing the ACHIEVE-HEADING goal. After the system has begun the turn, the aircraft's changing heading will cause the goal to disappear. However, the goal will continue to regenerate through the application of Rule 1 until the plane's heading matches the goal heading. At this point, Rule 1 will no longer apply, the ACHIEVE-HEADING goal will not be created, and Rule 4 cannot fire to level the airplane. This exposes a gap in the system's domain theory, which can be patched with the following rule:

IF Equal(Heading, GoalHeading)
AND NotEqual(Roll, level)
THEN CreateGoal(ACHIEVE-HEADING)

This rule is not just a special case to eliminate non-contemporaneous constraints. Rather, it is real flight domain knowledge that was missing from the original formulation of the task. The original representation assumed one would always roll back to level flight at the end of an ACHIEVE-HEADING goal, so this knowledge was not needed explicitly. However, this fails to take into account the possibility of a new goal arising while the old goal is being achieved. Suppose the plane is in a roll at a particular heading and higher level knowledge determines that the aircraft should now maintain that current heading (e.g., due to some emergency condition or a change in overall plan).
It then becomes necessary to level off from the turn. In the new hierarchy, this is accomplished as an implementation of the ACHIEVE-HEADING goal. Thus, the knowledge that must be added is a beneficial refinement of the system's domain theory.

A-6: Deactivate Dependent Goal Structure Following Any Change. A refinement of the previous approach is to deactivate the structures in a goal selectively, based on knowledge dependencies between intermediate results and the current external situation. In this approach, the goals are continually adjusted so they are consistent with the current training instance. As the training instance changes, intermediate results are deactivated, and new ones, consistent with the current training instance and domain theory, are generated. Referring to our flight example, ACHIEVE-HEADING would only be deleted once, when the conditions of Rule 1 no longer hold in the environment. As with A-5, this approach may require additions to the domain theory.

This alternative presents a subtle complication. A domain theory may create persistent internal features as well as persistent goals. The rules that create goals will, by definition, test features that are higher in the task hierarchy than the goals they create. Goals will always be compiled at learning time into their constituent conditions. Non-goal persistent features, however, will be generated by rules that may contain goal tests (thus requiring further compilation), tests of other persistent features (possibly leading to non-contemporaneous constraints), and tests of non-persistent features (which should not be problematic). Thus, a complete knowledge-dependence mechanism must keep track of dependencies for both goals and other persistent features.
The expectation is that this alternative will most intelligently use the architecture to track dependencies, but may also incur a large overhead in keeping track of all the relevant (and only the relevant) dependencies for goals and persistent features. This overhead may be severe enough to impact the reactivity of the system significantly.

A-7: Eliminate the Persistence of Goals and Features. Another approach is to eliminate persistence in the performance system altogether. Goals (and other features) will remain active only as long as the rules that generate them match. This makes non-contemporaneous constraints impossible, because there will never be any goals in the current chain of reasoning that depend on features that are no longer true. This requires no further overhead for tracking knowledge dependencies than an execution system would already use for matching rules. This approach has been demonstrated in Theo-Agent (Mitchell 1990). However, eliminating all persistence means that the intelligent agent can have no memory. Suppose, for example, that the flight controller received a radio message to come to a particular heading. The system would forget the new heading as soon as the radio message disappeared from the input system because there would be no way to store the information persistently.

A-8: Extend Training Instances (and Domain Theories) To Include Temporal Dependencies. A final approach is to supplement our domain theory representation with causal knowledge or knowledge of temporal dependencies. Explanation-based learning algorithms assume that a training instance includes all the features that are potentially relevant to generating the goal concept. In dynamic environments, this may include the temporal dependencies among the changes to the state used to generate the goal.
If true, domain theories need to be extended to create and test temporal relationships and dependencies (such as Rule 4 testing that the Heading is now equal to the GoalHeading, but it previously had some other value). This requires a history of previous events and would lead to explanations that explicitly test temporal dependencies. The result of knowledge compilation would be rules that are non-contemporaneous, but could still match because a memory of earlier behavior would always be available. A number of planning and learning approaches use representations that include temporal constructs (Bresina, Drummond, & Kedar 1993; DeJong & Bennett 1995; Gervasio 1994). However, because these relations must be represented explicitly (even when behavior could be generated without the extensions), the type of domains in which such approaches are applicable may be limited, as in A-7.

Approaches in Task Knowledge. Each of the proposed changes to the performance system imposes some constraints on how knowledge must be represented. In addition, given a general enough performance system, many of the proposed alternatives can be realized by adding general rules to the domain theory rather than actually changing the performance system. Thus, for comparison purposes, we propose alternatives TK-4, TK-5, TK-6, and TK-7 as knowledge-based implementations of alternatives A-4, A-5, A-6, and A-7, respectively. Besides the possible requirement for some new domain-general rules, the TK alternatives are conventions for representing the domain theory, whereas the A alternatives require similar changes to the performance system. However, it is also possible that some of the TK alternatives will incur less overhead than the corresponding A alternatives because they can be tailored to particular domains and do not have to provide general solutions. A-8 is already an approach that is dependent on changing the domain theory in addition to the execution architecture.
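The core of the A-5 deactivation strategy can be sketched in a few lines. The semantics assumed here follow the A-5 discussion: whenever any input feature changes, every persistent goal is retracted, and the goal-creation rules re-derive whichever goals the current state supports. The rule encoding is invented for illustration; the gap rule discussed under A-5 is included so that leveling off still works after regeneration.

```python
# Sketch of A-5: goals never survive an input change; they are rebuilt
# from the current training instance. State encoding is illustrative only.

def derive_goals(state):
    goals = set()
    if state["heading"] != state["goal_heading"]:        # Rule 1
        goals.add("ACHIEVE-HEADING")
    if (state["heading"] == state["goal_heading"]
            and state["roll"] != "level"):               # gap rule from A-5
        goals.add("ACHIEVE-HEADING")
    return goals

def on_input_change(old_goals, new_state):
    # A-5: discard everything, then re-derive from the new state
    del old_goals  # deliberately unused: no goal outlives a change
    return derive_goals(new_state)

# Turn begins: heading differs, so the goal is derived.
goals = derive_goals({"heading": 0, "goal_heading": 90, "roll": "level"})
# Heading reaches the goal mid-turn: the old goal is dropped, and the
# gap rule regenerates ACHIEVE-HEADING because the plane is still banked.
goals = on_input_change(goals, {"heading": 90, "goal_heading": 90, "roll": "right"})
print(goals)
```

Because every goal is re-derived from the current state, any rule later compiled from this reasoning can only test contemporaneous features, which is exactly why A-5 avoids the problem at the cost of extra regeneration steps.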
Evaluating the Alternatives

We could explore all of these alternatives in depth, but a qualitative analysis indicates that some of the approaches are more promising than others. Thus, we have identified a few alternatives to implement based upon a number of priorities, presented in Table 2. For instance, because A-1, A-2 and A-3 do not lead to unproblematic learning, they have least priority for implementation. On the other hand, because A-5 and TK-5 (the “deactivation approaches”) meet all our priorities, they were implemented first. Space prevents us from justifying each of these priorities in this paper. However, our analysis has shown they successfully identify the more promising alternatives.

Comparing A-5 and TK-5 requires quantitative analysis. This analysis requires both a performance system amenable to the implementation of the solutions and an appropriate suite of tasks. Our execution strategy demands a performance system with these features: 1) interacts responsively to an external environment; 2) represents and executes hierarchical knowledge; 3) operationalizes experience using a knowledge compilation mechanism. The Soar architecture (Laird, Newell, & Rosenbloom 1987) meets these demands. Specifically, Soar has been applied to a number of different external domains using hierarchical, procedural knowledge (Laird & Rosenbloom 1990; Pearson et al. 1993; Tambe et al. 1995) and Soar's learning mechanism, chunking, has been described as a form of explanation-based generalization (Rosenbloom & Laird 1986).

For a task environment, we have developed the dynamic blocks world, a test bed (in the sense of Hanks, Pollack & Cohen 1993) that facilitates controlled experimentation while posing tasks that distill important properties of domains like flight control. Tasks in the test bed are similar to blocks world problems familiar in the planning literature.
However, there are two key differences. First, actions are not internal. The agent generates primitive commands (e.g., “open the gripper”, “move-up 2 steps”), which are then executed by a simulator. The agent's knowledge of a primitive action is thus separate from the actual implementation of the action. Second, the domain is dynamic. Actions take time to execute, there is simulated gravity, and exogenous events can be scheduled to move blocks, knock towers over, etc. Our goals in developing this test bed are both to compare solution alternatives under controlled, experimental conditions and to further understanding of the capabilities necessary for interaction in complex and dynamic environments.

Our quantitative results thus far are based on a simple tower-building task. When executing this task, the simplest approach supported by the Soar architecture learns rules with non-contemporaneous constraints (A-1). We use the performance of this system as a baseline to compare the other approaches. Both deactivation approaches have been implemented and applied to this task. Evaluation of the alternative approaches relies on the following three criteria.

Executes Task and Learns Correctly: Each of the deactivation alternatives successfully executes the tower-building task and learns rules without non-contemporaneous constraints. Our formulation of the domain theory for these problems was intended to cause the non-contemporaneous problem whenever possible by creating relatively deep hierarchies. Thus, although a relatively simple task, this result is a significant validation of the approaches.

Prefer Less Knowledge: Less knowledge is preferred because it reduces knowledge engineering demands. Acquiring this knowledge, regardless of the technique chosen, will take longer for greater knowledge requirements. Three different types of knowledge are required in the two approaches. First is the domain theory itself.
All approaches, including the baseline approach, require this knowledge. However, domain theories may need further refinement under the deactivation approaches, pointing out incompleteness in the domain theory. This proved true for the tower-building task, for which 10 additional rules were added to the baseline domain theory of 124 rules. Domain-independent knowledge of the approach is also required. In A-5, this knowledge is incorporated in the architecture itself. In TK-5, this knowledge must be added to the knowledge base. This task required the addition of 2 task-independent rules.

The third type of necessary knowledge is domain-dependent knowledge of the approach. Once again, in A-5 the solution is implemented in the architecture. Thus, goal deactivation can be performed independent of the domain. TK-5, on the other hand, required 7 domain-specific rules for deactivating goals. These rules are significant because they are the most difficult to engineer and/or acquire.

Based on this analysis, A-5 clearly requires less knowledge in comparison to TK-5.

Figure 1: Execution steps of alternatives before (1), during (2), and after (3) learning.

Figure 2: Total CPU time of alternatives before (1), during (2), and after (3) learning.

Questions for future work are to determine if the additional knowledge
However, after learning the solutions performed much better than the baseline with an av- erage speedup factor of more than three. Additionally, the architectural approach was consistently better than the knowledge-based approach, although this was not unexpected. For the same type of approach, the ar- chitecture must bring the knowledge of the technique to bear in a TK variant while in the architectural vari- ant this knowledge is embedded within the architecture and does not require additional execution steps. One advantage of implementing the different alter- natives in the same basic architecture is that CPU times can be readily compared. Figure 2 shows the total CPU times for the block stacking task. Although the effects of learning in this diagram and Figure 1 are similar, the increase in CPU time is proportionally less than the increase in execution steps, meaning that av- erage cycle time (time per execution step) is reduced in the deactivation approaches. We are still investigating this effect and do not have the space to consider the issues here. However, this result does suggest that the total performance cost of the deactivation approaches may be less than that indicated by the increase in the number of execution steps. Final Discussion These results demonstrate that the implemented approaches are sufficient for overcoming the non- contemporaneous constraints problem for at least one simple task. Furthermore, A-5 executed the task and learned correctly while requiring only minimal addi- tions to the domain theory and no domain-independent rules. Although performance before learning required more execution steps, A-5’s performance improved sig- nificantly with learning. TK-5 required more execution steps than A-5 both before and after learning. These results have led us to consider A-5 as the primary so- lution approach. In addition to our studies with A-5 and TK-5, a ver- sion of A-6 for a flight simulation domain has been im- plemented. 
We have not implemented a version of A-6 in the dynamic blocks world because the original architectural modifications were made to a now-defunct version of Soar. The A-6 approach turned out to be very difficult to implement, and led to a dramatically increased cycle time, due to the overhead of maintaining many knowledge dependencies. This experience combined with the implementation results for A-5 have led us to generate one additional alternative, which we plan to explore in the near future, as we briefly discuss here.

A-5 requires increased execution steps before learning has a chance to operationalize the domain theory, because it deactivates the entire goal structure any time there is a change in input. A-6 is the “smart” alternative, which retracts only the goals and persistent features that depend on the changing input, but this alternative is computationally expensive. A compromise approach is to track the dependencies of goals on changing input, but not expend the extensive effort required to track the dependencies of persistent features that are not goals. This approach allows the architecture to eliminate non-contemporaneous constraints that arise from goals, but requires a domain-knowledge convention (as in TK-5) for other persistent features. The simplified tracking of dependencies through goals should decrease the computational overhead incurred by A-6, but should be less subject to environmental change than A-5. The final evaluation awaits implementation and empirical tests, but our experiences with implementations of A-5, A-6, and TK-5 suggest that this might be a worthwhile pursuit.

Although the alternative approaches have so far been applied mostly to simple tasks, the tasks have been designed within the test bed to capture the important properties of interaction in external domains.
The results of our evaluation suggest that the problem of non-contemporaneous constraints, arising from the use of explanation-based knowledge compilation in external environments, while serious, is not debilitating. Further, several of the approaches presented appear to be appropriate strategies for performance and speed-up learning in external environments.

Acknowledgements

This research was supported under contract N00014-92-K-2015 from the Advanced Systems Technology Office of the Advanced Research Projects Agency and the Naval Research Laboratory, and contract N66001-95-C-6013 from the Advanced Systems Technology Office of ARPA and the Naval Command and Ocean Surveillance Center, RDT&E division.

References

Bresina, J.; Drummond, M.; and Kedar, S. 1993. Reactive, integrated systems pose new problems for machine learning. In Minton, S., ed., Machine Learning Methods for Planning. Morgan Kaufmann. Chapter 6, 159-195.

Chien, S. A.; Gervasio, M. T.; and DeJong, G. F. 1991. On becoming decreasingly reactive: Learning to deliberate minimally. In Proceedings of the Eighth International Workshop on Machine Learning, 288-292.

DeJong, G., and Bennett, S. 1995. Extending classical planning to real-world execution with machine learning. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1153-1159.

DeJong, G., and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning 1(2):145-176.

Doorenbos, R. 1994. Combining left and right unlinking for matching a large number of learned rules. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94).

Erol, K.; Hendler, J.; and Nau, D. S. 1994. HTN planning: Complexity and expressivity. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1123-1128.

Georgeff, M., and Lansky, A. L. 1987. Reactive reasoning and planning.
In Proceedings of the National Conference on Artificial Intelligence, 677-682.
Gervasio, M. T. 1994. An incremental learning approach for completable planning. In Proceedings of the Eleventh International Conference on Machine Learning, 78-86.
Hanks, S.; Pollack, M.; and Cohen, P. R. 1993. Benchmarks, test beds, controlled experimentation and the design of agent architectures. AI Magazine 14:17-42.
Knoblock, C. A. 1994. Automatically generating abstractions for planning. Artificial Intelligence 68:243-302.
Laird, J. E., and Rosenbloom, P. S. 1990. Integrating execution, planning, and learning in Soar for external environments. In Proceedings of the Eighth National Conference on Artificial Intelligence, 1022-1029.
Laird, J. E.; Newell, A.; and Rosenbloom, P. S. 1987. Soar: An architecture for general intelligence. Artificial Intelligence 33:1-64.
Mitchell, T. M.; Keller, R. M.; and Kedar-Cabelli, S. T. 1986. Explanation-based generalization: A unifying view. Machine Learning 1(1):47-80.
Mitchell, T. M. 1990. Becoming increasingly reactive. In Proceedings of the Eighth National Conference on Artificial Intelligence, 1051-1058.
Pearson, D. J.; Huffman, S. B.; Willis, M. B.; Laird, J. E.; and Jones, R. M. 1993. A symbolic solution to intelligent real-time control. Robotics and Autonomous Systems 11:279-291.
Rosenbloom, P., and Laird, J. 1986. Mapping explanation-based generalization onto Soar. In Proceedings of the National Conference on Artificial Intelligence, 561-567.
Sacerdoti, E. D. 1974. Planning in a hierarchy of abstraction spaces. Artificial Intelligence 5:115-135.
Tambe, M.; Johnson, W. L.; Jones, R. M.; Koss, F.; Laird, J. E.; Rosenbloom, P. S.; and Schwamb, K. 1995. Intelligent agents for interactive simulation environments. AI Magazine 16(1):15-39.
Sequential Inductive Learning

Jonathan Gratch
University of Southern California, Information Sciences Institute
4676 Admiralty Way, Marina del Rey, CA 90292
gratch@isi.edu

Abstract

This article advocates a new model for inductive learning. Called sequential induction, it helps bridge classical fixed-sample learning techniques (which are efficient but difficult to formally characterize) and worst-case approaches (which provide strong statistical guarantees but are too inefficient for practical use). Learning proceeds as a sequence of decisions which are informed by training data. By analyzing induction at the level of these decisions, and by utilizing only enough data to make each decision, sequential induction provides statistical guarantees but with substantially less data than worst-case methods require. The sequential inductive model is also useful as a method for determining a sufficient sample size for inductive learning and, as such, is relevant to learning problems where the preponderance of data or the cost of gathering data precludes the use of traditional methods.

Introduction

Though inductive learning techniques have enjoyed remarkable success, most past work has focused on "small" tasks where it is reasonable to use all available information when learning concept descriptions. Increased access to information, however, raises the following question: how little data can be used without compromising the results of learning? This question is especially relevant for software agents (or softbots) that have access to endless supplies of data on the Internet but must pay a cost, both in terms of the raw number of examples and in terms of the number of attributes that can be observed [Etzioni93]. Techniques in active learning [Cohn95] and megainduction [Catlett91] attempt to manage this access to information. In other words, we must determine how much data is sufficient to learn, and how to limit the amount of data to that which is sufficient.
Theoretical machine learning provides some guidance. Unfortunately, these results are generally inappropriate for guiding practical learning. These methods are viewed as too costly (though see [Schuurmans95]). More problematic is the fact that these techniques assume the target concept is a member of some predefined class (such as k-DNF). Recent work in agnostic PAC learning [Haussler92, Kearns92] relaxes this latter complaint, but the results of these studies are discouraging from the standpoint of learning efficiency.¹

1. Although Auer et al. successfully applied these methods to the learning of two-level decision trees [Auer95].

In this paper, I introduce an alternative inductive model that bridges the gap between practical and theoretical models of learning. After discussing a definition of sample sufficiency which more readily applies to the learning algorithms used in practice, I then describe Sequential ID3, a decision-tree algorithm based on this definition. The algorithm extends and generalizes the decision-theoretic subsampling of Musick, Catlett, and Russell [Musick93]. It can also be seen as an analytic model that illuminates the statistical properties of decision-tree algorithms. Finally, it embodies statistical methods that can be applied to other active learning tasks. I conclude this paper with a derivation of the algorithm's theoretical properties and an empirical evaluation over several learning problems drawn from the Irvine repository.

Sufficiency

Practical learning algorithms are quite general, making few assumptions about the concept to be learned: data may contain noise; the concept need not be drawn from some pre-specified class; the attributes may even be insufficient to describe the concept. To be of practical interest, a definition of data sufficiency must be of equal generality. The standard definition of sufficiency from theoretical machine learning is what I call accuracy-based.
According to this definition, a learning algorithm must output a concept description with minimal classification error [Kearns92]. Unfortunately, current results suggest that learning in accordance with this definition is intractable, except for extremely simple concept description languages (e.g., even when the concept description is restricted to a simple conjunction of binary attributes, Kearns shows that minimizing classification error is NP-hard). In response, practical learning algorithms use what may be called a decision-based definition of sufficiency. According to this definition, the process of learning is treated as a sequence of inductive decisions, or an inductive decision process. A sample is deemed sufficient (typically informally) if it ensures some minimum quality constraints on these decisions. As this article focuses on decision-tree learning algorithms, it is important to distinguish between decisions made while learning a decision tree and decisions made while using a learned decision tree. Only the former are discussed in this article, and I refer to them as inductive decisions.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Note that ensuring high inductive-decision quality does not necessarily ensure high accuracy for the induced concept; this is the chief criticism of the decision-based view. Nonetheless, there are reasons for formalizing this perspective. First, decision criteria can serve as useful heuristics for achieving high classification accuracy, as is well-documented for the entropy function. Such criteria are of little use, however, if they utilize an insufficient sample (as seen in over-fitting). Second, decision criteria have been proposed to account for factors beyond classification error, such as conciseness of the concept description [deMantaras92]. Unlike an accuracy-based definition, a decision-based definition applies to these criteria as well.
Finally, the decision-based view can be of use for active learning approaches (see [Cohn95]).

In top-down decision-tree induction, learning is an inductive decision process consisting of two types of inductive decisions: stopping decisions determine if a node in the current decision tree should be further partitioned, and selection decisions identify attributes with which to partition nodes. Specific algorithms differ in the particular criteria used to guide these inductive decisions. For example, ID3 uses information gain as a selection decision criterion and class purity as a stopping decision criterion [Quinlan86]. These inductive decisions are statistical in that the criteria are defined in terms of unknown characteristics of the example distribution. Thus, when ID3 selects an attribute with highest information gain, the attribute is not necessarily the best (in this local sense), but only estimated to be best. Asymptotically (as the sample size goes to infinity), these estimates converge to the true score; when little data is available, however, the estimates and the resulting inductive decisions are essentially random.²

I declare a sample to be sufficient if it ensures "probably approximately correct" inductive decisions in the sense formalized below. By utilizing statistical theory, one can compute how much data is necessary to achieve this criterion.

Sequential Induction

Traditionally, one provides learning algorithms with a fixed-size sample of training data, all of which is used to induce a concept description. Consistent with this, a simple approach to learning with a sufficient sample is to determine a sufficiently large sample prior to learning, and then use all of this data to induce a concept description. Following Musick et al., I call this one-shot induction.
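To make the statistical nature of a selection decision concrete, the following sketch (my own illustration in Python, not the paper's implementation) shows the sample-based information-gain estimate that drives an ID3-style selection decision; the function and variable names are my own:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Sample estimate of the information gain of a binary attribute.

    `examples` is a list of (feature_dict, label) pairs.  The true gain is a
    property of the unknown example distribution; this estimate is only as
    reliable as the sample is large, which is exactly the point of the text.
    """
    labels = [y for _, y in examples]
    gain = entropy(labels)
    for value in (True, False):
        subset = [y for x, y in examples if x[attr] == value]
        if subset:
            gain -= (len(subset) / len(labels)) * entropy(subset)
    return gain
```

With a perfectly predictive attribute the estimated gain is 1 bit, and with an irrelevant constant attribute it is 0; between those extremes, the estimate fluctuates with the sample.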
Because inductive decisions are conditional on the data and the outcome of earlier decisions, the sample must be sufficient to learn under the worst possible configuration of inductive decisions.

Prior Work. In statistics, it is well known that sequential sampling methods often require far less data than one-shot methods [Govindarajulu81]. More recently, such techniques have found their way into machine learning [Moore94, Schuurmans95]. What distinguishes these approaches from what I am proposing here is that they are best characterized as "post-processors": some learning algorithm such as ID3 conjectures hypotheses, which are then validated by sequential methods. Here I push the sequential testing "deeper" into the algorithm, to the level of inductive decisions, which allows greater control over the learning process. This is especially important, for example, if there is a cost to obtain the values of certain attributes of the data (e.g., attributes might correspond to expensive sensing actions of a robot or softbot). Managing this cost requires reasoning at the level of inductive decisions. Additionally, the above approaches are restricted to estimating the classification accuracy of hypotheses, while the techniques I propose allow decision criteria to be expressed as arbitrary functions of a set of random variables. The work of Schuurmans is also restricted to cases where the concept comes from a known class.

2. This is indirectly addressed by pruning trees after learning [Breiman84]. With a sufficient sample, one can dispense with pruning and "grow the tree correctly the first time."

Alternative Approach. Sequential induction is the method I propose for sequential learning at the level of inductive decisions, in particular for top-down decision-tree induction. The approach applies to a wide range of attribute selection criteria, including such measures as entropy [Quinlan86], the gini index [Breiman84], and orthogonality [Fayyad92].
For simplicity, I restrict the discussion to learning problems with binary attributes and involving only two classes, although the approach easily generalizes to attributes and classes of arbitrary cardinality. It is not, however, immediately obvious how to extend the approach to problems with continuous attributes.

Before describing the technique, I must introduce the statistical machinery needed for determining sufficient samples. First, I discuss how to model the statistical error in selection decisions and how to determine a sufficient sample for them. I then present a stopping criterion that is well suited to the sequential inductive model. Next, I discuss how to ensure that the overall decision quality of the inductive decision process is above some threshold. Finally, I describe Sequential ID3, a sequential inductive technique for decision-tree learning, and present its formal properties.

Selection decisions choose the most promising attribute with which to partition a decision-tree node. To make this decision, the learning algorithm must estimate the merit of a set of attributes and choose the one that is most likely to be the best. Because perfect selection cannot be achieved with a finite sample of data, I follow the learning theory terminology and propose that each selection decision identify the "probably approximately" best attribute. This means that, given some pre-specified constants α and ε, the algorithm must select an attribute that is within ε of the best with probability 1−α, taking as many examples as are sufficient to ensure a decision of this quality. Although this goal is conceptually simple, its satisfaction requires extensive statistical machinery.

A selection criterion is a measure of the quality of each attribute. Because the techniques I propose apply to a wide class of such criteria, I first define this class. In Figure 1, each node in the decision tree denotes some subset of the space of possible examples.
This subset can be described by a probability vector that specifies the probability that an example belongs to a particular class. For example, the probability vector associated with the root node in the decision tree summarizes the base-line class distribution. For a given node N in the decision tree, let P_c denote the probability that an example described by the node falls in class c: P(class(x)=c | x reaches N). Then for an attribute A, let P_{v,c} denote the probability that an example belongs to node N, has A=v, and belongs to class c: P(A(x)=v, class(x)=c | x reaches N). The effect of an attribute can be summarized by the attribute probabilities: P_{T,1}, P_{T,2}, P_{F,1}, and P_{F,2}. In fact, three probabilities are sufficient, as the fourth is determined by the other three: P_{F,2} = 1 − P_{T,1} − P_{T,2} − P_{F,1}.

Figure 1. Examples are partitioned by attribute values and class, described by four probabilities.

The techniques I develop apply to selection criteria that are arbitrary differentiable functions of these three probabilities. For example, expected entropy is an acceptable criterion (writing PlogP(x) for x log x):

e(P_{T,1}, P_{T,2}, P_{F,1}) = − PlogP(P_{T,1}) − PlogP(P_{T,2}) − PlogP(P_{F,1}) − PlogP(P_{F,2}) + PlogP(P_{T,1} + P_{T,2}) + PlogP(P_{F,1} + P_{F,2})

Given a selection criterion, the merit of each attribute can be estimated by estimating the attribute probabilities from the data and substituting these estimates into the selection criterion. To determine a sufficient sample and select the best estimate, the learning system must be able to bound the uncertainty in the estimated merit of each attribute. Collectively, the merit estimates form a complex multivariate statistic. Fortunately, a generalization of the central limit theorem, the delta method, shows that the distribution of the multivariate merit statistic is approximately a multivariate normal distribution, regardless of the selection criterion's form (assuming the constraints listed above) [Bishop75, p. 487].
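The expected-entropy criterion above, viewed as a function of the three free probabilities, can be sketched directly (my own rendering; the fourth probability is computed from the other three as in the text):

```python
import math

def plogp(p):
    """x log2 x, with the usual convention 0 log 0 = 0."""
    return p * math.log2(p) if p > 0 else 0.0

def expected_entropy(p_t1, p_t2, p_f1):
    """Expected class entropy after splitting on a binary attribute.

    Arguments are the joint probabilities P(A=T, class=1), P(A=T, class=2),
    and P(A=F, class=1); the fourth probability is determined by the others.
    """
    p_f2 = 1.0 - p_t1 - p_t2 - p_f1
    return (-plogp(p_t1) - plogp(p_t2) - plogp(p_f1) - plogp(p_f2)
            + plogp(p_t1 + p_t2) + plogp(p_f1 + p_f2))
```

A perfectly informative attribute (e.g., P_{T,1} = P_{F,2} = 0.5) yields expected entropy 0, and an attribute independent of the class (all four probabilities 0.25) yields 1 bit, as the formula requires.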
The selection decision thus simplifies to the problem of selecting the ε-best component of a multivariate normal distribution. In the statistics literature this type of problem is referred to as a correlated selection problem, and there are known methods for solving it (see [Gratch94b] for a survey of methods). I use a procedure called McSPRT, proposed in [Gratch94b]. BRACE [Moore94] and the work of Nelson [Nelson95] are similar to McSPRT and could be used in its place. One could also use a "rational" selection procedure to provide more flexible control over the cost of learning [Fong95, Gratch94c].

McSPRT takes examples one at a time until a sufficient number have been taken, whence it selects the attribute with the highest estimated merit. The procedure reduces the problem of finding the best attribute to a number of pairwise comparisons between attributes. An attribute is selected when, for each pairwise comparison, its merit is significantly greater than or indifferent to the alternative. To use the procedure (or the others mentioned) one must assess the variance in the estimated difference-in-merit between two attributes. Figure 2 helps illustrate how this estimate can be computed. The symbol P_{a,b,c} denotes P(A_i(x)=a, A_j(x)=b, class(x)=c | x reaches N). For a given pair of attributes, the difference-in-merit between the two is a function of seven probabilities (the eighth is determined by the other seven). For example, if the selection criterion is entropy, the difference in entropy between two attributes A_i and A_j is

Δe(P_{T,T,1}, P_{T,T,2}, P_{T,F,1}, P_{T,F,2}, P_{F,T,1}, P_{F,T,2}, P_{F,F,1}, P_{F,F,2}) = Δe(P_1, P_2, P_3, P_4, P_5, P_6, P_7) = e(P_1 + P_3, P_2 + P_4, P_5 + P_7) − e(P_1 + P_5, P_2 + P_6, P_3 + P_7)

This difference is estimated by substituting estimates for the seven probabilities into the difference equation. The variance of this difference estimate follows from the generalized version of the central limit theorem.
Using the delta method, the variance of a difference estimate f is approximately

Var(f) ≈ (1/n) [ Σ_{i=1..7} P_i (∂f/∂P_i)² − ( Σ_{i=1..7} P_i (∂f/∂P_i) )² ]

where ∂f/∂P_i is the partial derivative of the difference equation with respect to the i-th probability, and where the seven probabilities, P_i, are estimated from training data. This can be computed automatically by any standard symbolic mathematical system. (I use Maple™, a system which generates C code to compute the estimate.) In the experimental evaluations this estimate appeared quite close to the true value.

Stopping decisions determine when the growth of the decision tree should be stopped. In the standard fixed-sample learning paradigm, the stopping criterion serves an almost incidental role because the size of the data set is the true determinant of the number of possible inductive decisions and, therefore, of the maximum size of the decision tree. In PAC approaches one typically bounds the size of the tree. Here I consider an alternative approach: I introduce a stopping criterion to bound the number of possible inductive decisions, and thus indirectly determine the size of the learned concept.

Figure 2. A comparison of two attributes is characterized by eight probabilities.

Two sources of complexity motivate the need for a stopping criterion. First, as the size of the largest possible decision tree grows exponentially with the number of attributes, a tractable algorithm cannot hope to construct a complete tree, nor does one always have the flexibility of allowing the algorithm to take data polynomial in the size of the best decision tree (as in PAC approaches). Second, as the depth of the tree grows, the algorithm must draw increasingly more data to get reasonable numbers of examples at each decision-tree leaf: if p is the probability of an example reaching a node, the algorithm must on average draw 1/p examples for each example that reaches the node.
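The delta-method variance computation can also be sketched numerically rather than symbolically; this toy version (my own illustration, using central finite differences in place of Maple's symbolic derivatives) shows the structure of the estimate for multinomial proportions:

```python
def delta_method_variance(f, probs, n, h=1e-6):
    """Approximate Var[f(P_hat)] when P_hat are multinomial proportion
    estimates from n examples.

    Combines a finite-difference gradient of f with the multinomial
    covariance Cov(P_i, P_j) = (P_i * delta_ij - P_i * P_j) / n.
    """
    grad = []
    for i in range(len(probs)):
        up = list(probs); up[i] += h
        dn = list(probs); dn[i] -= h
        grad.append((f(up) - f(dn)) / (2 * h))
    mean_g = sum(p * g for p, g in zip(probs, grad))
    return (sum(p * g * g for p, g in zip(probs, grad)) - mean_g ** 2) / n
```

As a sanity check, for the identity function f(P) = P_1 the formula reduces to the familiar binomial variance P_1(1 − P_1)/n.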
Because the probability of a node can be arbitrarily small, the amount of data needed to obtain a sufficient sample at the node can be arbitrarily large.

I advocate a novel stopping criterion that addresses both of these sources of complexity. The sequential algorithm should not partition a node if the probability of an example reaching it is less than some threshold parameter γ. This probability can be estimated from the data and, as in selection decisions, the sequential algorithm need only be probably close to the right stopping decisions. In particular, with probability 1−α, the algorithm should expand nodes with probability greater than γ, refuse to expand nodes of probability less than γ/2, and perform arbitrarily for nodes of intermediate probability. A sufficient sample to make this decision can be determined with a statistical procedure called the sequential probability ratio test (SPRT) [Berger80]. Each leaf node of a tree can be assigned a probability equal to the probability of an example reaching that node, and the probabilities of all the leaves must sum to one. The stopping criterion implies that the number of leaves in the learned decision tree will be roughly on the order of 2/γ; therefore, this stopping criterion determines an upper bound on the complexity of the learned concept.

Multiplicity Effect

Together, stopping and selection decisions determine the behavior of the inductive process, and I have proposed methods for taking sufficient data to approximate each of these inductive decisions. This is not, however, enough to bound the overall decision quality. When making multiple inductive decisions, the overall probability of making a mistake is greater than the probability of making a mistake on any individual decision (e.g., on a given roll of a die there is only a 1/6 chance of rolling a five, but over six rolls the chance of rolling at least one five is 1 − (5/6)⁶ ≈ 0.67).
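The stopping test above can be sketched as a Wald sequential probability ratio test on the node's reach-probability (a minimal illustration of my own; it omits refinements such as the minimum-sample-size requirement mentioned later):

```python
import math

def sprt_node_probability(reaches, gamma, alpha):
    """Wald's SPRT deciding whether a node's reach-probability exceeds gamma.

    `reaches` is an iterable of booleans (did each drawn example reach the
    node?).  Tests p >= gamma against p <= gamma/2 with error level alpha.
    Returns 'expand', 'prune', or 'undecided' if the data runs out first.
    """
    p1, p0 = gamma, gamma / 2.0
    upper = math.log((1 - alpha) / alpha)   # accept p >= gamma
    lower = math.log(alpha / (1 - alpha))   # accept p <= gamma/2
    llr = 0.0                               # running log-likelihood ratio
    for hit in reaches:
        llr += math.log(p1 / p0) if hit else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'expand'
        if llr <= lower:
            return 'prune'
    return 'undecided'
```

The test stops as soon as the evidence is decisive, which is the source of the data savings over a fixed-size test of the same error level.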
Called the multiplicity effect [Hochberg87], this factor must be addressed in order to ensure the overall quality of the inductive decision process. Using a statistical result known as Bonferroni's inequality [Hochberg87, p. 363], the overall decision error is bounded by dividing the acceptable error at each inductive decision by the number of decisions taken. As mentioned previously, I assigned each decision an error level of α. Therefore, if one wishes to bound the overall decision error to below some constant, δ, it suffices to assign α = δ/D, where D is the expected number of inductive decisions. Although I do not know how to compute D directly, it is possible to bound the maximum possible number of decisions, which will suffice. Furthermore, as will be shown, the expected sample complexity of sequential induction depends only on the log of the number of inductive decisions; consequently, the conservative nature of this bound does not unduly increase the sample size.

Figure 3. A fringe tree that results from setting γ to 0.40.

Space precludes a complete derivation of the maximum possible number of inductive decisions, but this can be found in [Gratch94a]. To derive this number I first find the largest possible decision tree that satisfies the stopping criterion. This tree has a particular form I refer to as a fringe tree, which is a complete binary tree of ⌊2/γ⌋ leaves that has been augmented with a "fringe" under each leaf that consumes the remaining attributes. A fringe is a degenerate binary tree with each right-hand branch being a leaf with near-zero probability. A fringe tree is illustrated in Figure 3.

The size of a fringe tree, T, depends on the number of attributes, A, and the stopping parameter γ:

T = F − 1 + (A − d₂)(2^d₁ − F) + (A − d₁)(2F − 2^d₁) ≤ (A − d₁ + 1)F − 1 = O(A/γ)

where F = ⌊2/γ⌋, d₁ = ⌈log₂(2/γ)⌉, and d₂ = ⌊log₂(2/γ)⌋. Each selection decision consists of a set of pairwise comparisons, one for each attribute.
To properly bound the error, we must count the number of these pairwise comparisons across every selection decision in the fringe tree, which is

S = (2A − 2d₁ + 1)2^d₁ + ½(d₁ − A)² + ½(d₁ − A) − A − 2 = O(A/γ + A² + log²(1/γ))

The total number of inductive decisions is T + S (S dominates). The Bonferroni inequality is a straightforward but somewhat inefficient approach to bounding the overall decision error. For example, at the root of a decision tree one has the most data available; a more efficient method would allocate a smaller error level to the root decision than to later decisions. More sophisticated methods could be incorporated into this scheme, although they will not be considered in this article.

Sequential ID3 embodies the notions described above. To learn a concept description, one must specify a selection criterion (such as entropy) and three parameters: a confidence, δ; an indifference interval, ε; and a stopping probability, γ. Given a set of binary attributes of size A and access to classified training data, the algorithm constructs, with probability 1−δ, a decision tree of size O(A/γ) in which each partition is made with the ε-best attribute. The algorithm does a breadth-first expansion of the decision tree, using McSPRT to choose the ε-best attribute at each node while SPRT tests the probability of an example reaching a decision node.³ If a node is shown to have probability less than γ, its descendants are not expanded. McSPRT and SPRT require the specification of a minimum sample size on which to base inductive decisions; by default, this size is set to fifteen. Given a selection criterion, Maple™ generates C code to compute the variance estimate. To date, Sequential ID3 has been tested with entropy and orthogonality [Fayyad92] as the selection criterion.

Sequential ID3 extends the decision-theoretic subsampling approach of Musick et al.
: it applies to arbitrary selection criteria and relaxes the untenable assumption that attributes are independent. Furthermore, the subsampling approach was only applied to inductive decisions at the root node, and does not account for the multiplicity effect. Sequential ID3 addresses both of these limitations. The subsampling approach handles one issue that is not addressed by Sequential ID3: the balance between the size of a sufficient sample and the time needed to determine this size. Sequential ID3 attempts only to minimize the sample size, without regard to the time cost (except to ensure that this cost is polynomial), whereas the subsampling approach strikes a balance between these factors.

I have determined a worst-case upper bound on the complexity of Sequential ID3 (the derivations are in [Gratch94a]). Expressed in terms of A, δ, γ, and ε, the complexity also depends on B, which denotes the range of the selection criterion (for entropy, B = log(2)). In the worst case, the amount of data required by Sequential ID3 (i.e., its sample complexity) is on the order of

(B²/ε²) · (1/γ) · [log(1/δ · 1/γ · A)]²    (2)

The sample complexity grows rapidly with tighter estimates on the selection decisions (quadratic in 1/ε) and with a more liberal stopping criterion (1/γ [log 1/γ]²). In this worst case, the algorithm completes in time

O( (A² + log²(1/γ)) · (B²/ε²) · [log(1/δ · 1/γ · A)]² )    (3)

For most practical learning problems, Sequential ID3 will take far less data than these bounds suggest. Nevertheless, it is interesting to compare this worst-case sample complexity with the amount of data needed by a one-shot induction technique (which, as noted earlier, determines a sufficient sample size before learning begins). Using Hoeffding's inequality (see [Maron94]), one can show that one-shot induction requires a sample size on the order of

(B²/ε²) · (1/γ) · log(1/δ · 1/γ · A)    (4)

which is less by a log factor than the amount of data needed by Sequential ID3 (Equation 2).
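For intuition about such Hoeffding-based sizing, the per-decision sample requirement can be sketched as follows. This is the standard two-sided Hoeffding bound for estimating the mean of a range-B statistic to within ε with confidence 1 − α; the function name is my own, and it is a simplification, not the paper's exact Equation 4:

```python
import math

def hoeffding_sample_size(value_range, epsilon, alpha):
    """Examples needed so the sample mean of a statistic with the given range
    is within epsilon of its true value with probability at least 1 - alpha,
    by Hoeffding's inequality: n >= (B^2 / (2 eps^2)) ln(2 / alpha)."""
    return math.ceil((value_range ** 2 / (2 * epsilon ** 2))
                     * math.log(2 / alpha))
```

The quadratic dependence on 1/ε visible here is the same 1/ε² factor that dominates the worst-case bounds above.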
This potential disadvantage to sequential induction highlights the need for empirical evaluations over actual learning problems.

3. A node is not partitioned if it (probably) contains examples of only one class, or if all attributes (probably) yield trivial partitions. I ignore these caveats in the following discussion.

Evaluation

The statistical theory underlying Sequential ID3 provides only limited insight into the expected performance of the algorithm on actual learning problems. One can expect the algorithm to appropriately bound the quality of its inductive decisions, and the worst-case sample and time complexity to be less than the specified bounds. One cannot, however, state how much data is required for a particular learning problem a priori. More importantly, one cannot analytically characterize the relationship between decision quality and classification accuracy, because this relationship depends on the structure of specific learning problems. Knowledge of this relationship is essential, though, for if Sequential ID3 is to be a useful tool, an increase in decision quality must lead to a decrease in classification error.

I test two central claims. First, decision quality should be closely related to classification accuracy in actual learning problems. More specifically, classification error should decrease as the stopping parameter, γ, decreases or as the indifference parameter, ε, decreases. Second, the expected sample complexity of sequential induction should be less than that of a one-shot method which chooses a fixed-size sufficient sample a priori. I first describe the testing methodology and then examine each of these claims in turn. Although Sequential ID3 can incorporate arbitrary selection criteria, this evaluation only considers entropy, the most widely used criterion.

A secondary consideration is how to set the various learning parameters. If the first claim holds, one should expect a monotonic tradeoff between the amount of data taken (as controlled by γ and ε) and classification error. The ideal setting will depend on factors specific to the particular application (e.g., the cost of data and the accuracy demands) and the relationship between the parameter settings and classification error, which, unfortunately, can only be guessed at. In the evaluation I investigate the performance of Sequential ID3 over several parameter settings to give at least a preliminary idea of how these factors relate.

Sequential ID3 is intended for megainduction tasks involving vast amounts of data. Unfortunately, the current implementation of the algorithm is restricted to two-class problems with categorical attributes, and I do not currently have access to large-sized problems of this type. Nevertheless, by a simple transformation of a smaller-sized data set, I can construct a reasonably realistic evaluation. The idea is to assume that a set of classified examples completely defines the example distribution. Given a set of n training examples, I assume that each example in the set occurs with probability 1/n, and that each example not in the set occurs with zero probability. An arbitrarily large sample can then be generated according to this example distribution. Furthermore, as the example distribution is now exactly known, I can compute the exact classification error of a given decision tree.

One criticism of this method is that the learning algorithm can potentially memorize all of the original examples, allowing perfect accuracy when the original data is noise-free. However, this criticism is mitigated by the fact that the decision trees learned by Sequential ID3 are limited in size. I ensure that the learned decision trees have substantially fewer leaves than the number of original unique examples.

I test Sequential ID3 on nine learning problems.
Eight are drawn from the Irvine repository,⁴ including the DNA promoter dataset, a two-class version of the gene splicing dataset (made by collapsing EI and IE into a single class), the tic-tac-toe dataset, the three monks problems, a chess endgame dataset, and the soybean dataset. The ninth dataset is a second DNA promoter dataset provided by Hirsh and Noordewier [Hirsh94]. When the problems contain non-binary attributes, they are converted to binary attributes in the obvious manner. In all trials I set the level of decision error, δ, to 10%. Both the stopping parameter, γ, and the indifference parameter, ε, are varied over a wide range of values. To ensure statistical significance, I repeat all learning trials 50 times and report the average result. All of the tests are based on entropy as a selection criterion. Due to space limitations, I consider only the evaluations for the gene splicing dataset and the third monks dataset (monks-3) in detail here.

Classification Accuracy vs. Decision Quality

Sequential ID3 bases decision quality on the indifference, ε, and stopping, γ, parameters. As ε shrinks, the learning algorithm is forced to obey more closely the entropy selection criterion. Assuming that entropy is a good criterion for selecting attributes, classification error should diminish as selection decisions follow more closely the true entropy of the attributes. As γ shrinks, concept descriptions can become more complex, thus allowing a more faithful model of the underlying concept and, consequently, lower classification error.⁵ Additionally, an interaction may occur between these parameters: allowing a larger concept description may compensate for a poor selection criterion, as a bad initial split can be rectified lower in the decision tree (provided the tree is large enough). Figure 4 summarizes the empirical results for the splicing and monks-3 datasets.
The results of the splicing evaluation are typical and support the claim that classification accuracy and decision quality are linked: classification error diminishes as decision quality increases. The chess, both promoter, and monks-2 datasets all show the same basic trend, lending further support to the claim. (The soybean dataset shows near-zero error for all parameter settings.) The results of the monks-3 and, to a lesser extent, the monks-1 evaluations raise a note of caution, however: here classification error increases as quality increases. These latter findings suggest that, at least for these two problem sets, entropy is a poor selection criterion. This is perhaps not surprising, as the monks problems are artificial problems designed to cause difficulties for top-down decision-tree algorithms. The tic-tac-toe dataset, interestingly, shows almost no change in classification error as a result of changes in ε.

Sample Complexity

Sequential ID3 must draw sufficient data to satisfy the specified level of decision quality. The complex statistical machinery of sequential induction is justified to the extent that it requires less data than simpler one-shot inductive approaches. It is also interesting to consider just how much data is necessary to arrive at statistically sound inductive decisions while inducing decision trees. In addition to the decision quality parameters, the size of a sufficient sample depends on the number of attributes associated with the induction problem. The splicing problem uses 480 binary attributes, whereas the monks-3 problem uses fifteen. One-shot sample sizes follow from Equation 4. Figure 5 illustrates these sample sizes for the corresponding parameter values. Because it has more attributes, the splicing problem requires more data than the monks-3 problem. Figure 6 illustrates the sample sizes required for sequential induction.
The benefit is dramatic for the monks-3 problem: sequential induction requires 1/32 of the data needed by one-shot induction. In the case of the splicing dataset, a smaller but still significant improvement is observed: Sequential ID3 used one third of the data needed by the one-shot approach. A closer examination of the splicing dataset reveals that many of the selection decisions have several attributes tied for the best. In this situation, McSPRT has difficulty selecting a winner and is forced closer to its worst-case complexity.

Machine learning researchers may be surprised by the large sample sizes required for learning, because standard algorithms can acquire comparably accurate decision trees with far less data. This can in part be explained by the fact that the concepts being learned are fairly simple: most of the concepts are deterministic with noise-free data. There is also the fact that "making good decisions" and "being sure one is making good decisions" are not necessarily equivalent: the latter requires more data and, when most inductive decisions lead to good results (as can be the case with simple concepts), "being sure" can be overly conservative. Nevertheless, in many learning applications one can make a strong case for conservatism, especially when the results of our algorithms inform important judgements, and when these judgements are made automatically, without human oversight.

Table 1 summarizes the results of all datasets for two selected values of γ and ε (complete graphs can be found in [Gratch94a]). In all but one case, Sequential ID3 required significantly less data than one-shot learning. This advantage should become even more dramatic with smaller settings for the indifference and stopping parameters.

4. Available via anonymous FTP: ftp.ics.uci.edu/pub/machine-learning-databases
5. Claims that smaller trees have lower error [Breiman84] only apply when there is a fixed amount of data.
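The paper's Equation 4 for one-shot sufficient sample sizes is not reproduced in this excerpt, so the following sketch substitutes a generic Hoeffding-style bound with a Bonferroni correction over attributes; it is an assumption standing in for the real formula, intended only to show why the 480-attribute splicing problem demands more data than the 15-attribute monks-3 problem.

```python
import math

def one_shot_sample_size(num_attrs, epsilon, delta):
    # Illustrative Hoeffding-style bound (NOT the paper's Equation 4):
    # enough data that every attribute's criterion estimate is within
    # epsilon/2 of its mean, with overall failure probability delta
    # split across the attributes (Bonferroni correction).
    per_test_delta = delta / num_attrs
    return math.ceil((2.0 / epsilon ** 2) * math.log(2.0 / per_test_delta))

# More attributes demand more data: 480-attribute splicing vs. 15-attribute monks-3.
print(one_shot_sample_size(480, 0.09, 0.1) > one_shot_sample_size(15, 0.09, 0.1))  # True
```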
Summary and Conclusion

Sequential induction shows promise for efficient learning in problems involving large quantities of data, but there are several limitations to Sequential ID3 and many areas of future research.

Figure 4. Classification error of Sequential ID3 as a function of γ and ε. Decision error is 10%.
Figure 5. One-shot sufficient sample as a function of γ and ε. Decision error is 10%.
Figure 6. Average sample size of Sequential ID3 as a function of γ and ε. Decision error is 10%.
Table 1. Sequential sample sizes for γ=0.08, ε=0.36 and γ=0.02, ε=0.09.

A limitation is that ensuring statistical rigor comes at a significant computational expense. Furthermore, the empirical results suggest that the current statistical model may be too conservative for many problems. For example, in many problems the concept is deterministic and the data noise-free. It is unclear how to incorporate such knowledge into the models. Additionally, the Bonferroni method tends to be an overly conservative method for bounding the overall error level.

There are some practical limits to what kinds of problems can be handled by the sequential model. Whereas one could easily extend the approach to multiple classes and non-binary attributes, it is less clear how to address continuous attributes. Another practical limitation is that although the approach generalizes to arbitrary selection criteria, round-off error in computing selection and variance estimates may be a significant problem for some selection functions. Round-off error contributes to excessive sample sizes in some of my evaluations of the orthogonality criterion.

Probably the most significant limitation of Sequential ID3 (and of all standard inductive learning approaches) is the tenuous relationship between decision error and classification error.
Improving decision quality can reduce classification accuracy due to the hillclimbing nature of decision-tree induction (this was clearly evident in the monks-3 evaluation). In fact, standard accuracy-improving techniques exploit the randomness caused by insufficient sampling to break out of local maxima, by generating several trees and selecting one through cross-validation. An advantage of the sequential induction model, however, is that it clarifies the relationship between decision quality and classification accuracy, and suggests more principled methods for improving classification accuracy. For example, the generate-and-cross-validate approach mainly varies the inductive decisions at the leaves of learned trees (because the initial partitions are based on large samples and thus are less likely to change), whereas it seems more important to vary inductive decisions closer to the root of the tree. A sequential approach could easily make initial inductive decisions more randomly than later ones. Furthermore, the sequential model allows the easy implementation of more complex search strategies, such as multi-step look-ahead. More importantly, the statistical framework enables one to determine easily how these strategies affect the expected sample time. For example, performing k-step look-ahead search requires on the order of k times as much data as a non-look-ahead strategy to maintain the same level of decision quality [Gratch94a]. Therefore, sequential induction is suitable not only as a megainduction approach, but also as an analytic tool for exploring and characterizing alternative methods for induction.

Acknowledgements

I am indebted to Jason Hsu, Barry Nelson, and John Marden for sharing their statistical knowledge. Carla Brodley provided comments on an earlier draft. This work was supported by NSF under grant NSF IRI-92-09394.

References

[Auer95] P. Auer, R. C. Holte, and W.
Maass, "Theory and Applications of Agnostic PAC-Learning with Small Decision Trees," Proceedings of ML95, 1995, pp. 21-29.
[Berger80] J. O. Berger, Statistical Decision Theory and Bayesian Analysis, Springer Verlag, 1980.
[Bishop75] Y. M. M. Bishop, S. E. Fienberg and P. W. Holland, Discrete Multivariate Analysis: Theory and Practice, The MIT Press, Cambridge, MA, 1975.
[Breiman84] L. Breiman, J. H. Friedman, R. A. Olshen and C. J. Stone, Classification and Regression Trees, Wadsworth, 1984.
[Catlett91] J. Catlett, "Megainduction: a test flight," Proceedings of ML91, Evanston, IL, 1991, pp. 596-599.
[Cohn95] D. Cohn, D. Lewis, K. Chaloner, L. Kaelbling, R. Schapire, S. Thrun, and P. Utgoff, Proceedings of the AAAI95 Symposium on Active Learning, Boston, MA, 1995.
[deMantaras92] R. L. deMantaras, "A Distance-Based Attribute Selection Measure for Decision Tree Induction," Machine Learning 6 (1992), pp. 81-92.
[Etzioni93] O. Etzioni, N. Lesh and R. Segal, "Building Softbots for UNIX," Technical Report 93-09-01, 1993.
[Fayyad92] U. M. Fayyad and K. B. Irani, "The Attribute Selection Problem in Decision Tree Generation," Proceedings of AAAI92, San Jose, CA, July 1992, pp. 104-110.
[Fong95] P. W. L. Fong, "A Quantitative Study of Hypothesis Selection," Proceedings of the International Conference on Machine Learning, Tahoe City, CA, 1995, pp. 226-234.
[Govindarajulu81] Z. Govindarajulu, The Sequential Statistical Analysis, American Sciences Press, Inc., 1981.
[Gratch94a] J. Gratch, "On-line Addendum to Sequential Inductive Learning," anonymous ftp to beethoven.cs.uiuc.edu/pub/gratch/sid3-ad.ps.
[Gratch94b] J. Gratch, "An Effective Method for Correlated Selection Problems," Tech Rep UIUCDCS-R-94-1898, 1994.
[Gratch94c] J. Gratch, S. Chien, and G. DeJong, "Improving Learning Performance Through Rational Resource Allocation," Proceedings of AAAI94, Seattle, WA, pp. 576-581.
[Haussler92] D. Haussler, "Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Applications," Information and Computation 100, 1 (1992).
[Hirsh94] H. Hirsh and M. Noordewier, "Using Background Knowledge to Improve Learning of DNA Sequences," Proceedings of the IEEE Conference on AI for Applications, pp. 351-357.
[Hochberg87] Y. Hochberg and A. C. Tamhane, Multiple Comparison Procedures, John Wiley and Sons, 1987.
[Kearns92] M. J. Kearns, R. E. Schapire and L. M. Sellie, "Toward Efficient Agnostic Learning," Proceedings of COLT92, Pittsburgh, PA, July 1992, pp. 341-352.
[Maron94] O. Maron and A. W. Moore, "Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation," Advances in Neural Information Processing Systems 6, 1994.
[Moore94] A. W. Moore and M. S. Lee, "Efficient Algorithms for Minimizing Cross Validation Error," Proceedings of ML94, New Brunswick, NJ, July 1994.
[Musick93] R. Musick, J. Catlett and S. Russell, "Decision Theoretic Subsampling for Induction on Large Databases," Proceedings of ML93, Amherst, MA, 1993, pp. 212-219.
[Nelson95] B. L. Nelson and F. J. Matejcik, "Using Common Random Numbers for Indifference-Zone Selection and Multiple Comparisons in Simulation," Management Science, 1995.
[Quinlan86] J. R. Quinlan, "Induction of decision trees," Machine Learning 1, 1 (1986), pp. 81-106.
Learning to Take Actions
Roni Khardon
Aiken Computation Laboratory, Harvard University, Cambridge, MA 02138
roni@das.harvard.edu

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Abstract

We formalize a model for supervised learning of action strategies in dynamic stochastic domains, and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a particularly useful bias for action strategies based on production rule systems. We show that a subset of production rule systems, including rules in predicate calculus style, small hidden state, and unobserved support predicates, is properly learnable. The bias we introduce enables the learning algorithm to invent the recursive support predicates which are used in the action strategy, and to reconstruct the internal state of the strategy. It is also shown that hierarchical strategies are learnable if a helpful teacher is available, but that otherwise the problem is computationally hard.

Introduction

Planning and acting have been mainly studied in AI with a logical perspective, where knowledge about the world is encoded in declarative form. In order to achieve goals, one proves that they are true in some world state, and as a side effect derives a plan for these goals (McCarthy 1958). Similarly, in partial-order planning, declarative information is given, and search in plan space is performed to find a plan (Weld 1994). However, the computational problems involved in these approaches are computationally hard (Cook 1971; Bylander 1994). Furthermore, these approaches have difficulties in handling dynamic situations where "re-planning" is used, and situations where the world is non-deterministic or partially observable.

A different approach is taken by the reinforcement learning paradigm, where "reactive" action selection is used. In this model an agent wanders in a (partially observable) Markov decision process, and the only source of information is a (positive or negative) reinforcement signal given in response to its actions. The goal of the agent is to find a good mapping from situations to actions so as to maximize its future reinforcement. While interesting results on convergence to optimal strategies have been obtained (Sutton 1988; Fiechter 1994), the resulting strategies essentially enumerate the state space, and therefore require exponential space and time.

Explanation-Based Learning (EBL) (DeJong & Mooney 1986; Mitchell, Keller, & Kedar-Cabelli 1986) uses declarative knowledge and search, but learns from its experience, essentially compiling its knowledge into a more procedural form by saving generalized forms of the results of search as rules in the system. While arbitrary addition of rules may actually reduce performance, utility-based tests for added rules were found to be useful (Minton 1990). Note that, similar to reinforcement learning, EBL is an unsupervised process, since no external guidance for the search is given, and that both approaches ultimately try to find the optimal solution to problems.

In this paper we follow the framework of learning to reason (Khardon & Roth 1994; 1995) and previous formalizations of learning in deterministic domains (Tadepalli 1991; 1992; Tadepalli & Natarajan 1996) and suggest a new approach to these problems. The new formalization, learning to act, combines various aspects of previous approaches. In particular, we use the stochastic partially observable world model as in reinforcement learning, but on the other hand use symbolic representations and action strategies that are similar to the ones used in planning and explanation-based learning. Our model is similar to the reinforcement learning approach in that the agent tries to learn action strategies which are successful in the world; namely, no explicit reasoning power is required from the agent. Rather, it is sufficient that an agent chooses its actions so that most of the time it succeeds.

Our framework differs from previous approaches in a few aspects. First, no direct assumptions on the structure of the world are made. We do assume that the world behaves as a partially observable Markov process, but we do not make any restrictions on the size or structure of this process. On the other hand, in order to ensure tractability, we assume that some simple strategy provides good behavior in the world, where simple is properly quantified. We also assume some form of supervised learning, where the learner observes a teacher acting in the world and tries to find a strategy that achieves comparable performance. Unlike previous models, we do not require optimal performance (which in many cases is hard to achieve), but rather demand that the learner be able to reproduce things that have already been discovered. This can be seen as an attempt to model progress in some communities, where most agents only perform local discoveries or learning. However, once an important tool is found and established, it is transferred to the rest of the community relatively fast, with no requirement that everyone understand the process properly, or reinvent it.

Another important part of this work is the choice of knowledge representation. We concentrate on action strategies in the form of Production Rule Systems (PRS). This is a class of programs which has been widely studied (Anderson 1983; Laird, Rosenbloom, & Newell 1986). A nice property of PRS is that it allows for a combination of condition-action rules and declarative knowledge that can be used for search, under the same framework.
Previous studies have mainly used PRS in a manner similar to EBL, emphasizing the effect of declarative representations. In this paper, the rules are mainly used as a functional representation which chooses which actions to take. Learning of simple PRS strategies is performed with the help of an external teacher. Our strategies have a flavor of reactive agents. However, they are goal based, have internal state, and use predicates which compute simple recursive functions of the input.

We start by presenting the model of acting and learning in the world, and deriving a general learning result showing the utility of Occam algorithms. We then present a particular subset of PRS which we show learnable using this result, and briefly discuss the learnability of hierarchical strategies. We conclude with a discussion and some reference to future work. For lack of space, some details, proofs, and further discussion are omitted from the paper; these can be found in (Khardon 1995).

Technically, the framework presented here is similar to the one studied in (Tadepalli 1991; Tadepalli & Natarajan 1996). The main difference is that we do not incorporate assumptions about the deterministic structure of the world into the model. Intuitively, the world is modeled as a randomized state machine; in each step the agent takes an action, and the world changes its state depending on this action. The agent is trying to get to a state in which certain "goal" conditions hold. The interface of the agent to the world is composed of three components:

- The measurements of the learner are represented by a set of n literals, x1, x2, ..., xn, each taking a value in {0, 1}.1 The set X = {0, 1}^n is the domain of these measurements. For structural domains, similar to (Haussler 1989), the input, a multi-object scene, is composed of a list of objects and values of predicates instantiated with these objects.
1. Our results also hold in a more general model, where a third value, *, is used, denoting that the value of some variable is not known or has not been observed (Valiant 1995).

- The agent can be assigned a goal from a previously fixed set of goals G. For simplicity we assume that G is the class of conjunctions over the literals g1, g2, ..., gn and their negations, where gi represents the desired state of xi. (This is similar to conjunctive goals in STRIPS-style planning problems.)

- The agent has at its disposal a set of actions O = {o1, ..., on}. (The choice of n as the number of actions is simply intended to reduce the number of parameters used.) In the learning model, the agent is not given any information on the effects of the actions, or the preconditions for their application. In particular, there is no hidden assumption that the effects of the actions are deterministic, or that they can be exactly specified.

The protocol of acting in the world is modeled as an infinitely repeated game. At each round, nature chooses an instance (x, g), such that x ∈ X and g ∈ G. Then the agent is given some time, say N steps (where N is some fixed polynomial in the complexity parameters), to achieve the goal g starting with state x. In order to do this, the learner has to apply its actions, one at a time, until its measurements have a value y which satisfies g (i.e., g(y) = 1). Intuitively, each action that is taken changes the state of the world, and at each time point the agent can take an action and then read the measurements after it. However, some of the actions may not be applicable in certain situations, so the state does not have to change when an action is taken. Furthermore, the state may change even when no action is taken.

Definition 1 (strategy) A strategy s is composed of a state machine (I, i0, δs) and a mapping s : X × G × I → O from instances and states into actions.
An agent is following a strategy s if, before starting a run, it is in state i0, and whenever it is in state i ∈ I, on input (x, g), the agent chooses the action s(x, g, i) and changes its state to δs(x, g, i). Most of the strategies we consider are stationary, namely no internal state is used. In this case a strategy is simply a mapping s : X × G → O from instances into actions.

Definition 2 (run) A run of a strategy s on instance (x, g) is a sequence resulting from repeated applications of the strategy s, R(s, x, g) = x, s(x, g, i0), x^1, s(x^1, g, i1), x^2, ..., until g has been achieved or N steps have passed, where for each j ≥ 1, ij = δs(x^(j-1), g, i(j-1)).

Definition 3 (successful run) A run is successful if for some i ≤ N, g(x^i) = 1.

Notice that, depending on the world, a run might be a fixed value or a random variable.

Definition 4 (world) The world W is modeled as a partially observable Markov decision process whose transitions are effected by the actions of the agent.

Given this definition, for any fixed starting state, a probability distribution is induced on the values that the run may take. It should be noted that we do not make any assumptions on the size or structure of W. Furthermore, in contrast with reinforcement learning, we do not expect an agent to have complete knowledge of W. Instead, an agent needs to have a strategy that copes with its task when interacting with W.

The distribution of the states visited by the agent within a run depends on the actions it takes. Therefore the states visited are not independent random variables, and PAC results are not directly applicable. Nevertheless, the example runs are independent of each other. The proof of the next theorem follows by showing that most of the good runs of the teacher are also covered by a consistent strategy.2

When interacting with the world, the agent has some form of a reset button which draws a new problem to be solved.
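Definitions 1-3 can be sketched directly as code. The following is a minimal illustration with names of my own choosing (a goal is a conjunction of desired literal values, a strategy maps observation, goal, and internal state to an action, and `world_step` stands in for the possibly stochastic world W):

```python
def goal_satisfied(goal, x):
    # A goal g is a conjunction of desired literal values, e.g.
    # {0: 1, 2: 1} means "x0 must be 1 and x2 must be 1" (STRIPS-style).
    return all(x[i] == v for i, v in goal.items())

def run_strategy(world_step, strategy, x, goal, N, state=0):
    # A run: repeatedly choose an action from (observation, goal, internal
    # state), let the world produce the next observation, and stop on
    # success or after N steps.
    trace = [x]
    for _ in range(N):
        if goal_satisfied(goal, x):
            return trace, True        # successful: g(x^i) = 1 for some i <= N
        action, state = strategy(x, goal, state)
        x = world_step(x, action)
        trace.append(x)
    return trace, goal_satisfied(goal, x)

# Toy deterministic world over 3 bits: action i flips bit i.
flip = lambda x, i: tuple(b ^ (j == i) for j, b in enumerate(x))
# Stationary strategy: flip the first bit that disagrees with the goal.
policy = lambda x, g, s: (next(i for i, v in g.items() if x[i] != v), s)
trace, ok = run_strategy(flip, policy, (0, 0, 0), {0: 1, 2: 1}, N=5)
print(ok, trace)  # True [(0, 0, 0), (1, 0, 0), (1, 0, 1)]
```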
We assume that, at the beginning of a random run, a state of the Markov process is randomly chosen according to some fixed probability distribution D. This distribution induces a probability distribution D over the measurements X × G that the learner observes at the start of a run.

Definition 5 (random run) A random run of a strategy s with respect to a world W and probability distribution D, denoted R(s, D), is a run R(s, x, g) where (x, g) are induced by a random draw of D, and the successor states are chosen according to the transition matrix of W.

The above definition ensures that a random run is indeed a random variable. Finally,

Definition 6 (quality of a strategy) The quality Q(s, D) of a strategy s, with respect to a world W and probability distribution D, is Q(s, D) = Prob[R(s, D) is successful], where the probability is taken over the random variable R (which is determined by D and W).

We say that a strategy is consistent with a run R = x, o_{i1}, x^1, o_{i2}, x^2, ..., o_{il}, x^l if for all j, the action chosen by the strategy in step j, given the history on the first j − 1 steps (which determine the internal state of the strategy), is equal to o_{ij}.

Theorem 1 Let H be a class of strategies, and let L be an algorithm such that for any t ∈ H, and on any set of runs {R(t, D)}, L finds a strategy h ∈ H which is consistent with all the runs. Then L is a learn to act algorithm for H when given m = (1/ε) log(|H|/δ) independent example runs.

Using the above theorem we can immediately conclude that several learning results developed in the model with a deterministic world hold in our model as well. In particular, macro tables (Tadepalli 1991), and action strategies which are intersection closed and have a priority encoding over actions (Tadepalli & Natarajan 1996), are learnable in our model.

We study a supervised learning scenario, where the learner can observe a teacher acting in the environment.
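The sample bound in Theorem 1 is garbled in this copy of the paper, so the following sketch uses the standard consistent-hypothesis (Occam) form as an assumption; the function name is my own.

```python
import math

def occam_sample_size(hypothesis_count, epsilon, delta):
    # Textbook consistent-hypothesis bound (an assumption standing in for
    # the paper's exact constants): m >= (1/epsilon) * ln(|H| / delta)
    # independent example runs suffice to find, with probability 1 - delta,
    # a consistent strategy within epsilon of the teacher's quality.
    return math.ceil(math.log(hypothesis_count / delta) / epsilon)

# e.g. a strategy class of size 2^20, epsilon = 0.1, delta = 0.05:
print(occam_sample_size(2 ** 20, 0.1, 0.05))  # 169
```

Note that the bound is logarithmic in |H|, which is what makes the syntactically restricted PRS classes introduced later (whose counts m0 and m2 are polynomial-exponent expressions in n, k, and c) learnable from polynomially many runs.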
We assume that a teacher has some strategy t according to which it chooses its actions.

Definition 7 The oracle EXAMPLE(t), when accessed, returns a random sample of R(t, D).

A learning algorithm will get access to the oracle EXAMPLE and will try to find a strategy which is almost as good as the teacher's strategy.

Definition 8 (learning) An algorithm A is a Learn to Act algorithm, with respect to a class of strategies S, a class of worlds W, and a class of distributions D, if there exists a polynomial p(), such that on input 0 < ε, δ < 1, for all t ∈ S, for all W ∈ W, and for all D ∈ D, and when given access to EXAMPLE(t), the algorithm A runs in time p(n, 1/ε, 1/δ), where n is the number of attributes A observes, and with probability at least 1 − δ outputs a strategy s such that Q(t, D) − Q(s, D) < ε.

Representation of Strategies

Production rule systems (Anderson 1983; Laird, Rosenbloom, & Newell 1986) are composed of a collection of condition-action rules C → A, where C is usually a conjunction and A is used to denote an action. Actions in PRS denote either a real actuator of the agent, or a predicate which is "made true" if the rule is executed. PRS are simply a way to describe programs with a special kind of control mechanism. An important part of this mechanism is the working memory. The working memory captures the "current state" view of the system. Initially, the input is put into the working memory, and the PRS then works in iterations. In each iteration, the condition C of every rule is evaluated, to get a list of rules which may be executed. Out of these rules, one is selected by the "resolution mechanism", and its action A is executed. That is, either the actuator is operated, or the predicate mentioned as A is added to the working memory. The above cycle is repeated until the goal is achieved.

We study the learnability of a restricted form of PRS.
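The evaluate-resolve-execute cycle just described can be sketched in a few lines. This is a minimal propositional illustration (rule contents and names are my own); conditions are conjunctions encoded as sets, and the resolution mechanism is the priority list used throughout the paper:

```python
def prs_step(rules, working_memory):
    # One production-system cycle with a priority-list resolution mechanism:
    # the first rule (in listed order) whose condition holds in working
    # memory fires; its action is either an actuator name or a predicate
    # to be added to working memory by the caller.
    for condition, action in rules:       # rules listed in priority order
        if condition <= working_memory:   # conjunction = subset test
            return action
    return None                           # no rule matches

rules = [
    (frozenset({"hungry", "has_food"}), "eat"),   # higher priority
    (frozenset({"hungry"}), "forage"),
]
print(prs_step(rules, {"hungry"}))              # forage
print(prs_step(rules, {"hungry", "has_food"}))  # eat
```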
In particular, we use a priority list of rules as a resolution mechanism, and restrict the conditions to be conjunctions of bounded size. Furthermore, we restrict the amount and size of the working memory used. The working memory includes internal predicates and small state machines, and is combined with a particular control structure.

Learning Action Strategies

We now present a general result on learning algorithms. Similar to results in the PAC model (Blumer et al. 1987), we show that in order to learn it is sufficient to find a concise action strategy which is consistent with the examples given by the teacher. The main idea is that an action strategy which is very different from the teacher's strategy will be detected as different by a large enough random sample.

2. It is straightforward to generalize the theorem so that the hypothesis size will depend on the size of the strategy being learned, as in (Blumer et al. 1987).

Blocks World: In order to illustrate the style of PRS considered, we present a PRS for the Blocks World. We then proceed with formal definitions.
The idea is that if a block is above another block, which is part of the goal but is not yet in its goal place, then it has to be moved. If we arbitrarily move such blocks to the table, then we can easily build the required tower by moving each block once more. So blocks are moved twice, and each of them must be moved at least once in the optimal solution. We present a PRS which implements this algorithm (which assumes for simplicity that the target towers start on the table). Our PRS has three parts. The first part computes the support predicates of the system. The second part consists of a priority list of condition action rules which we will refer to as the main part of the PRS. The third part includes rules for updating the internal state. The production rule system first computes the support predicates inplace( and above(x, y). These have the intuitive meaning; namely &place(x) if x is already in its goal situation, and ubove(x, y) if x is in the stack of blocks which is above y. 1. inpZuce(T) 2. on(x, y) A G(on(x, y)) A inpZuce(y) - inpZuce(x) 3. on(x, y) - ubove(x, y) 4. on(x, y) A ubove(y, z) - ubove(x, z) Then the actions are chosen: 1. clear(x) A clear(y) A G(on(x, y)) A inpZuce(y) - move(x, Y) 2. inpZuce(y) A G(on(x, y)) A on(s,y) A ubove(z, x) A clear(z) A sad - move(z, T) 3. inpZuce(y) A G(on(x, y)) A on(z,y) A ubove(z, y) A clear(z) - move(2, T) 4. inpZuce(y) A G(on(x, y)) A on(z,y) A ubove(z,x) A clear(z) - move(z, T) Then the internal state is updated: 1. sad c sad @ inpZuce(y) A G(on(x, y)) A on(z,y) A ubove(z, x) A clear(z) The PRS computes its actions as follows: (1) First, the support predicate rules are operated until no more changes occur. (No ordering of the rules is required.) (2) Then, the main part of the PRS chooses the action. The main part of the PRS is considered as a priority list. Namely, the first rule that matches the situation is the one that chooses the action. 
(It is assumed that if the condition holds for more than one binding of the rule to the situation, then an arbitrary fixed ordering is used to choose between them.) (3) The internal state is updated after choosing the action. The form of the transition rules is defined by conditions under which the value of a state variable is flipped; the details are explained below. The state machine is not so useful in the strategy above. (The internal state sad is superfluous, and rule 4 could replace rule 2.) However, it is useful, for example, when memory for a particular event is needed.

While the PRS we consider have all these properties, we restrict these representations syntactically. First, for simplicity, we assume that n bounds the number of objects seen in a learning scenario, the number of predicates, and the number of action names.

Restricted Conjunctions: When using PRS, and when the input is given as a multi-object scene, one has to test whether the scene satisfies the condition of a rule under any substitution of objects for variables. This binding problem is NP-hard in general, and a simple solution (Haussler 1989) is to restrict the number of object variables in the conjunction by some constant. We restrict the rules so that there are at most k literals in the conjunction, and every literal has at most a constant C = 2 object variables. This bounds the number of variables in a rule to be 2k + 2. (A similar restriction has been used in (Valiant 1985).) Notice that the priority list only resolves between different rules in the list. We must specify which action to choose when more than one binding matches the condition of a rule. For this purpose we use a priority preference between bindings. In particular, for every condition-action rule we allow a separate ordering preference on its bindings. (This includes as a special case the situation where all rules use the lexicographical ordering as a preference ordering.)
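Step (1) of the cycle, running the blocks-world support-predicate rules to a fixpoint, can be sketched as follows. The scene encoding (dictionaries mapping each block to what it sits on, with 'T' for the table) is my own assumption; the four rules are the ones given in the example:

```python
def support_predicates(on, goal_on):
    # `on` maps block -> what it sits on ('T' = table); `goal_on` likewise
    # for the goal situation. Rules 1-2 define inplace; rules 3-4 define
    # above as the transitive closure of on. Each set is iterated to a
    # fixpoint, mirroring "operate the rules until no more changes occur".
    inplace = {"T"}                                   # rule 1: the table
    changed = True
    while changed:                                    # rule 2
        changed = False
        for b, below in on.items():
            if b not in inplace and goal_on.get(b) == below and below in inplace:
                inplace.add(b)
                changed = True
    above = {(b, below) for b, below in on.items()}   # rule 3: base case
    changed = True
    while changed:                                    # rule 4: recursive rule
        changed = False
        for (x, y) in list(above):
            for (y2, z) in list(above):
                if y == y2 and (x, z) not in above:
                    above.add((x, z))
                    changed = True
    return inplace, above

on = {"c": "T", "b": "c", "a": "b"}     # stack: a on b on c on the table
goal = {"c": "T", "b": "c", "a": "b"}   # already the goal configuration
inplace, above = support_predicates(on, goal)
print(sorted(inplace - {"T"}))          # ['a', 'b', 'c']
print(("a", "T") in above)              # True
```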
Support Predicates: The support predicates include a restricted type of recursive predicates, similar to the ones in the example. In particular, we consider support predicates that can be described using condition-action rules C → A, where C has at most k literals. Another restriction we employ is that the support predicates do not interact with each other; namely, the PRS can use many support predicates, but each of them is defined in terms of the input predicates. We consider both recursive and non-recursive predicates. A non-recursive predicate Z(x, y) is defined by a rule of the form C → Z(x, y). For a recursive predicate, we allow the same predicate to appear positively in the conjunction. We further associate with this predicate a "base case rule". This rule is a non-recursive statement as above. For example, for the predicate above in our example, on(x, y) → above(x, y) is the base case, and on(x, y) ∧ above(y, z) → above(x, z) is the recursive rule. The total number of predicates in this setting is bounded by m0, where m0 = n(n + 1)^k (2k + 2)^(2k+2).

Notice that recursive predicates enhance the computing power of PRS considerably. These predicates enable the PRS to perform computations which are otherwise impossible. For example, the predicate above cannot be described by a simple PRS. Evaluating this predicate may require an arbitrary number of steps, depending on the height of the stack of blocks.

Internal State: We next describe the restrictions on the state machines, whose transition function is restricted in a syntactic manner. Suppose the machine has c Boolean state variables s1, s2, ..., sc. The transition function of the machine is described for each variable separately. For each si we have a k-conjunction as before, conjoined in turn with an arbitrary conjunction of the state variables. This conjunction identifies a condition under which the value of si is to be flipped.
For example, we can have s1 ← s1 ⊕ (s2 ∧ ¬s3 ∧ on(x, z) ∧ move(x, y)), meaning that if s2 was 1, s3 was 0, and a certain block was just moved in a certain way, then we should flip the value of s1. We assume that the state machine is reset to the state 0^c at the beginning of every run. The number of state machines that can be defined in this way is at most m2, which is again polynomial in n for fixed k and c.

PRS and Decision Lists

To illustrate the basic idea, consider propositional PRS with no internal state and with no support predicates. The PRS is composed only of its main part, which is a priority list of condition-action rules. Each condition is a conjunction of at most k literals, and the actions specify a literal oi ∈ O. This class of PRS is very similar to the class of decision lists. Rivest (1987) showed that a greedy algorithm succeeds in finding a consistent hypothesis for this class; we observe that the same holds in our model. First, recall that by Theorem 1 it is sufficient to find a strategy consistent with the examples in order to learn to act. Since the strategies are stationary, we can partition each run into situation-action pairs and find a PRS consistent with the collection of these pairs, just as in concept learning.

The main observation (Rivest 1987) is that the teacher's strategy t is a consistent action strategy. Suppose we found a rule which explains some subset of the situation-action pairs. Then, if we add t after this rule, we get a consistent strategy. Therefore, explaining some examples never hurts. Furthermore, there is always a consistent rule which explains at least one example, since some rule in t does. By enumerating all rules and testing for consistency we can find such a rule, and by iterating on this procedure we can find a consistent PRS.

Learning PRS in Structural Domains

Assume first that the strategies are stationary. Therefore it is sufficient to consider situation-action pairs, which we refer to as examples.
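The greedy procedure for the propositional case can be sketched as follows (illustrative code, not from the paper; conditions are conjunctions of at most k literals):

```python
from itertools import combinations

def learn_prs(examples, variables, k=2):
    """Rivest-style greedy learning of a propositional PRS (a decision
    list): repeatedly find a rule that covers at least one remaining
    example and disagrees with none, append it to the priority list,
    and discard the examples it covers.

    examples: list of (situation, action) pairs; a situation is a dict
              mapping variable name -> 0/1.
    """
    def holds(cond, situation):
        return all(situation[v] == val for v, val in cond)

    literals = [(v, b) for v in variables for b in (1, 0)]
    conditions = [c for size in range(1, k + 1)
                  for c in combinations(literals, size)]

    remaining, strategy = list(examples), []
    while remaining:
        for cond in conditions:
            covered = [a for s, a in remaining if holds(cond, s)]
            if covered and len(set(covered)) == 1:
                strategy.append((cond, covered[0]))
                remaining = [(s, a) for s, a in remaining
                             if not holds(cond, s)]
                break
        else:
            return None   # no consistent rule: no PRS in the class fits
    return strategy

examples = [({"x1": 1, "x2": 0}, "A"),
            ({"x1": 1, "x2": 1}, "A"),
            ({"x1": 0, "x2": 1}, "B")]
print(learn_prs(examples, ["x1", "x2"]))
# -> [((('x1', 1),), 'A'), ((('x1', 0),), 'B')]
```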
Notice that for our PRS the main part is in the form of a priority list. This part of the PRS can be learned as in the propositional case by considering rules with structural predicates. (The restrictions make sure that the number of rules is polynomial for fixed k.) When testing whether a rule C → A is consistent with the examples, one has to check whether there is a binding order which is consistent for this rule. If an example matches the rule with two bindings, and only one of them produces the correct action, then we record a constraint on the binding order for this rule (the binding producing the correct action must precede the other one). The rule is consistent with the examples iff the set of constraints produced in this way is not cyclic.

Algorithm Learn-PRS
  Initialize the strategy S to the empty list.
  For each possible state machine do:
    Compute all possible support predicates for all examples.
    Separate the example runs into a set E of situation-action pairs.
    Repeat:
      Find a consistent rule R = C → A.
      Remove from E the examples chosen by R.
      Add R at the end of the strategy S.
    Until E = ∅ or there are no consistent rules.
    If E = ∅ then output S and stop.
    Otherwise, initialize the strategy S to the empty list, and go to the next iteration.

Figure 1: The algorithm Learn-PRS.

Our learning algorithm must also find the internal parts, not observable to it, namely the support predicates and the state machine. To find the support predicates, we pre-process the examples by computing, for each example, the values of all possible invented predicates that agree with the syntactic restrictions. (As noted above, the number of such predicates is polynomial.) This computation is clearly possible for the non-recursive predicates. For the recursive predicates, one can use an iterative procedure to compute these values: first apply the base case on all possible bindings, then in each iteration apply the recursive rule until no more changes occur.
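A sketch of this iterative computation on the running on/above example (illustrative code, not from the paper):

```python
def compute_above(on_atoms):
    """Iteratively evaluate the recursive support predicate `above`:
       base case:      on(x, y) -> above(x, y)
       recursive rule: on(x, y) and above(y, z) -> above(x, z)
    The recursive rule is applied until no more changes occur; since it
    only ever adds atoms, iteration reaches a fixed point.
    """
    above = set(on_atoms)                      # base case, all bindings
    changed = True
    while changed:
        changed = False
        for (x, y) in on_atoms:                # recursive rule
            for (y2, z) in list(above):
                if y == y2 and (x, z) not in above:
                    above.add((x, z))
                    changed = True
    return above

# A three-block stack: a on b on c.
print(sorted(compute_above({("a", "b"), ("b", "c")})))
# -> [('a', 'b'), ('a', 'c'), ('b', 'c')]
```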
This procedure is correct since the recursive rule is monotone in the new predicate.

To find the internal state machine we must deal with non-stationary strategies. This problem is much harder in general, since the actions selected can give the impression that the output of the teacher is random or arbitrary. However, if we know the values of the state variables then the problem is simple again, since the strategy is stationary when these values are added to the input. The restrictions imposed imply that the number of state machines is polynomial. Therefore we can enumerate the machines, and for each possibility compute the values of the internal state (assuming that this is the right state machine), and apply the algorithm described above. We are guaranteed that the algorithm will succeed for the choice of the correct machine, and PAC arguments imply that a bad state machine is not likely to be chosen. A high-level description of the algorithm Learn-PRS is given in Figure 1. Using the above observations, we can apply Theorem 1 to show that PRS under these restrictions are learnable.

Theorem 2 The algorithm Learn-PRS is a learn-to-act algorithm for the class of restricted PRS action strategies.

Hierarchical Strategies

The computing power of the PRS considered can be enhanced considerably if we allow a hierarchical construction. We consider such strategies where previously learned subroutines can be used as primitive actions in new strategies. In the full version of the paper we show that with annotation on the examples it is possible to learn hierarchical strategies, but that without annotation the task is hard even in propositional domains. For lack of space we just illustrate this point through an example.
A possible input for the hierarchical learning problem is the subroutine x1 → A; x2 → B; True → A, with goal x4 = 1, where the priority is from left to right, and the two example runs R1 = 01000, A, 01100, B, 10100, A, 11011 and R2 = 10100, B, 11100, A, 11010, A, 10011. The goal of the PRS being learned is to achieve x5 = 1, which is indeed satisfied in the last state of each example run.

Notice that if we know which actions were chosen by the subroutine, we can rewrite the examples by slicing out the parts of the subroutine and replacing them with a new action, S. Using this new input, the greedy algorithm can find a consistent strategy. For example, using the information that for R1 the second and third actions are taken by S, and for R2 the second action is taken by S, it is easy to see that the PRS x4 → A; x2 → S; x1 → B is consistent with the example runs. However, without this information the problem is NP-hard; the hardness of the problem essentially lies in determining a proper annotation for the examples.

Conclusions

We presented a new framework for studying the problem of acting in dynamic stochastic domains. Following the learning-to-reason framework, we stressed the importance of learning for accomplishing such tasks. The new approach combines features from several previous works, and in a sense narrows the gap between the reactive and declarative approaches. We have shown that results from the PAC model of learning can be generalized to the new framework, and thus many results hold in the new model as well. We also demonstrated that powerful representations in the form of production rule systems can be learned. We have also shown that bias can play an important role in predicate invention; by using an appropriate bias, and grading the learning experiments in their difficulty, a new set of useful predicates (which are not available for introspection) can be learned and then used in the future.
Our model shares some ideas with reinforcement learning and EBL, and several related questions are raised by the new approach. In particular, we believe that introducing bias in either of these frameworks can enhance our understanding of the problems. Other interesting questions concern learning of randomized strategies, and learning from "exercises" (Natarajan 1989), in the new model. Further discussion can be found in the full version of the paper.

Acknowledgments

I am grateful to Les Valiant for many discussions that helped in developing these ideas, and to Prasad Tadepalli for helpful comments on an earlier draft. This work was supported by ARO grant DAAL03-92-G-0115 and by NSF grant CCR-9504436.

References

Anderson, J. 1983. The Architecture of Cognition. Harvard University Press.
Blumer, A.; Ehrenfeucht, A.; Haussler, D.; and Warmuth, M. K. 1987. Occam's razor. Information Processing Letters 24:377-380.
Bylander, T. 1994. The computational complexity of propositional STRIPS planning. Artificial Intelligence 69:165-204.
Cook, S. A. 1971. The complexity of theorem proving procedures. In 3rd Annual ACM Symposium on the Theory of Computing, 151-158.
DeJong, G., and Mooney, R. 1986. Explanation based learning: An alternative view. Machine Learning 1:145-176.
Fiechter, C. N. 1994. Efficient reinforcement learning. In Proc. of Workshop on Comput. Learning Theory, 88-97.
Gupta, N., and Nau, D. 1991. Complexity results for blocks world planning. In Proceedings of AAAI-91, 629-633.
Haussler, D. 1989. Learning conjunctive concepts in structural domains. Machine Learning 4(1):7-40.
Khardon, R., and Roth, D. 1994. Learning to reason. In Proceedings of AAAI-94, 682-687.
Khardon, R., and Roth, D. 1995. Learning to reason with a restricted view. In Proc. Workshop on Comput. Learning Theory, 301-310.
Khardon, R. 1995. Learning to take actions. Technical Report TR-28-95, Aiken Computation Lab., Harvard University.
Laird, J.; Rosenbloom, P.; and Newell, A. 1986.
Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning 1:11-46.
McCarthy, J. 1958. Programs with common sense. In Brachman, R., and Levesque, H., eds., Readings in Knowledge Representation, 1985. Morgan-Kaufmann.
Minton, S. 1990. Quantitative results concerning the utility of explanation based learning. Artificial Intelligence 42:363-391.
Mitchell, T.; Keller, R.; and Kedar-Cabelli, S. 1986. Explanation based learning: A unifying view. Machine Learning 1:47-80.
Natarajan, B. K. 1989. On learning from exercises. In Proc. of Workshop on Comput. Learning Theory, 72-87.
Rivest, R. L. 1987. Learning decision lists. Machine Learning 2(3):229-246.
Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning 3(1):9-44.
Tadepalli, P., and Natarajan, B. 1996. A formal framework for speedup learning from problems and solutions. Journal of AI Research. Forthcoming.
Tadepalli, P. 1991. A formalization of explanation based macro-operator learning. In Proceedings of IJCAI-91, 616-622.
Tadepalli, P. 1992. A theory of unsupervised speedup learning. In Proceedings of AAAI-92, 229-234.
Valiant, L. G. 1985. Learning disjunctions of conjunctions. In Proceedings of IJCAI-85, 560-566.
Valiant, L. G. 1995. Rationality. In Proc. Workshop on Comput. Learning Theory, 3-14.
Weld, D. 1994. An introduction to least commitment planning. AI Magazine 15(4):27-61.
Testing the Robustness of the Genetic Algorithm on the Floating Building Block Representation

Robert K. Lindsay, Mental Health Research Institute, University of Michigan, Ann Arbor, MI 48109-0720, lindsay@umich.edu
Annie S. Wu, Artificial Intelligence Laboratory, University of Michigan, Ann Arbor, MI 48109-2110, aswu@eecs.umich.edu

Abstract

Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance.

Introduction

Intelligent natural systems are complexly structured, modular organisms. Acting over billions of years, evolution has created organisms by combining simple components into more complex systems through selection from a diverse population of varied individuals. The Genetic Algorithm (GA) was pioneered by Holland (Holland 1975) as an abstract model of evolution based upon only a few of its then-known salient features.
In the standard GA representation, individuals are represented as binary strings, usually a few hundred digits in length; recombination is limited to crossover (swapping of long segments between pairs of individuals) and mutation (random changes of single bits); adaptation is modeled by a real-number-valued fitness function that (typically, but not necessarily) treats substrings of the individual as parameters in an algebraic formula; and higher fitness results in a higher probability of reproduction. The bulk of research on the GA has been directed toward studying its ability to find function optima, a problem of interest to AI. However, a far more fundamental AI problem is how ever more complex organisms can evolve, whether or not they are optimizers.

In nature, the mechanisms are more complex than in the GA: there is a distinction between individuals that carry the genetic code (genotypes) and their subsequent development as organisms (phenotypes) that interact with the environment and each other; genotypes are strings of base-pairs, each of which can assume one of four values, and may be several billion in length; the range of genotype length among different species is very large; genotype length is not strictly correlated with organism complexity; there are several restructuring mechanisms, some of which are still not well understood; particular patterns of base-pair strings (genes) yield particular products, usually proteins; in the more complex organisms (eukaryotes) most of the genotype, strangely, does not code for any product (non-coding segments); the genes may lie anywhere on the genotype, since the product is determined solely by the base-pair pattern (location independence); genes that are functionally related are often but not always nearby, and though at no fixed distance are in relatively similar order in same-species individuals; genes may contain short base-pair sequences having no known purpose (introns) that are removed during the translation process; the genotype may comprise several disjoint strings (chromosomes); the
genes and gene products interact in complex ways, including switching the expression of one another on and off; phenotypes develop over extended periods of time through complex interactions of gene products; success means that an individual survives long enough to reproduce many times, which may take anywhere from a few seconds to more than a decade depending on the species; successful species persist over millions of years even after more complex species have evolved.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Although gene products interact in complex ways, they are inherently modular because biochemical reactions require only the presence of proteins in sufficient concentrations rather than in precise arrangements such as are required in typical mechanical systems or computer programs. Presumably, individual genes whose products serve useful functions in a variety of contexts act as building blocks. Building blocks may form larger building blocks when they become co-present in single individuals, thus producing a combination of components that is more fit than any taken individually. Hence over many generations a functional hierarchy is formed, and the components are preserved because in combination they are highly fit and hence more successful, on average, at reproduction.

In both nature and the GA, adjacency of base-pairs or bits is an important carrier of information, which is why crossover, which preserves most adjacency information, is fundamentally different from extensive mutation, which does not. The Building Block Hypothesis (BBH) of the GA states, in effect, that crossover is a mechanism that finds successful new combinations more rapidly than does random change because crossover takes advantage of the ordered structure of the fit fragments. The BBH is the central working hypothesis and motivation for GA research.
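The standard GA operators just described can be sketched as follows (illustrative code; the parameter values are invented for illustration, and the fitness function is assumed positive so that proportionate selection is well defined):

```python
import random

def crossover(p1, p2):
    """One-point crossover: swap the tails of two equal-length bit lists.
    Unlike heavy mutation, this preserves most adjacency information."""
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, rate=0.01):
    """Point mutation: flip each bit independently with small probability."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def next_generation(population, fitness, size):
    """Fitness-proportionate reproduction: higher fitness means a higher
    probability of being chosen as a parent."""
    weights = [fitness(ind) for ind in population]
    children = []
    while len(children) < size:
        p1, p2 = random.choices(population, weights=weights, k=2)
        c1, c2 = crossover(p1, p2)
        children += [mutate(c1), mutate(c2)]
    return children[:size]
```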
Some form of the BBH is essential to the success of evolution, which would otherwise amount to random hill-climbing and would be unlikely to produce complexly structured, stable organisms exhibiting intelligence even in geologic time. Recent research has cast doubt on the general validity of the BBH in the standard GA representation (Wu 1995; Wu & Lindsay 1996). What else is needed in an evolutionary model to result in the beneficial effects predicted by the BBH? We have explored this question by modifying the representation used by the GA to reflect two of the above-mentioned features of natural evolution that may plausibly contribute to this: location-independent building blocks and non-coding segments. We argue that the combination of these two features may lead to GA behavior that better maintains diversity, is robust in changing environments, better preserves building blocks that have been discovered, and increases the rate of successful recombination that produces structured organisms.

Initial studies on this new floating building block representation have produced favorable results (Wu 1995; Wu & Lindsay 1996). In this article, we investigate the effectiveness of the floating representation by studying its reaction to three potentially difficult situations: (1) a reduction in the amount of genetic material available to the GA during a run, (2) the existence of negative building blocks in the function to be solved, and (3) randomized non-coding segments.

Royal Road Functions

The RR functions are a class of functions that were designed for studying building block interactions (Mitchell, Forrest, & Holland 1991). Building blocks in the GA are groups of bits that affect the fitness of an individual. The RR functions have multiple levels of hierarchically arranged building blocks. All building blocks are defined and assigned optimum patterns and fitness contributions before a run begins. The optimum individual is also defined in advance.
The basic or lowest-level building blocks are shortest in length, and upper-level building blocks consist of combinations of lower-level building blocks. The fitness of an individual is the sum of the fitness contributions of all building blocks that exist on that individual. Table 1 shows an example of an RR function. This RR function has eight basic building blocks, fifteen total building blocks, and four levels. The additional fitness support provided by the intermediate-level building blocks is expected to encourage the recombination and preservation of the basic building blocks, in effect laying out a "royal road" for the GA to follow to the optimum individual. We chose the RR functions for our investigations because the pre-defined, hierarchical structure of an RR function allows us to carefully monitor the GA's progress in solving the function and also allows for easy modification of the characteristics of the function. For a detailed description of the GA used in these experiments, the reader is referred to (Wu 1995).

Floating Building Block Representation

The floating building block representation introduced in (Wu 1995; Wu & Lindsay 1996) investigates the idea of location-independent building blocks for the GA. Instead of defining building blocks by their location on an individual, the existence and placement of building blocks depend on the existence and placement of predefined tags. Table 2 shows an example of a floating Royal Road (FRR) function. Like the function shown in Table 1, this function has eight basic building blocks, fifteen total building blocks, and four levels. In the FRR function, however, basic building blocks may appear anywhere on an individual. Higher-level building blocks still consist of combinations of lower-level building blocks. At the start of a run, all building blocks are assigned fitness contributions; each basic building block is assigned a unique optimum pattern.
Optimum patterns are randomly generated at the start of each run, so building blocks have different optimum patterns from run to run. Everywhere the start tag is found on an individual, a possible building block exists immediately following this tag.

  b0   bits 0-7     level 0        b8    bits 0-15    level 1
  b1   bits 8-15    level 0        b9    bits 16-31   level 1
  b2   bits 16-23   level 0        b10   bits 32-47   level 1
  b3   bits 24-31   level 0        b11   bits 48-63   level 1
  b4   bits 32-39   level 0        b12   bits 0-31    level 2
  b5   bits 40-47   level 0        b13   bits 32-63   level 2
  b6   bits 48-55   level 0        b14   bits 0-63    level 3
  b7   bits 56-63   level 0

Table 1: The RR function 8x8.RR2. Each b_i, i = 0, ..., 14, is a 64-bit schema whose optimum pattern is a block of 1s at the indicated bit positions, with don't-care (*) bits elsewhere; the level l_i of each building block is also shown.

  Basic building blocks              Upper-level building blocks
  (level 0, start tag: 11)
  BB   Opt. pattern                  BB    Level   Component BBs
  0    01101111                      8     1       0 1
  1    10100110                      9     1       2 3
  2    00001001                      10    1       4 5
  3    10010111                      11    1       6 7
  4    11100010                      12    2       0 1 2 3
  5    10010111                      13    2       4 5 6 7
  6    00101010                      14    3       0 1 2 3 4 5 6 7
  7    11111110

Table 2: The FRR function 8x8.FRR2. Optimum patterns are randomly generated at the start of each run.

For example, suppose we use the function shown in Table 2. The following individual has eleven start tags indicating possible building blocks, but only three actual building blocks: two copies of building block 0 and one copy of building block 1. Start tags of possible building blocks are marked by an x, while the starts of existing building blocks are marked by the corresponding building block numbers.

  1000110110111101001100110110111100
      x  x  xxx     x   x  x  xxx
      0      1          0

Like the RR functions, FRR building blocks contribute to the fitness of an individual only if their optimum pattern exists on the individual. The goal of the GA, then, is to find at least one copy of each building block on an individual. Note that with the floating representation come many new issues, such as non-coding segments (segments of bits that have no effect on the fitness of the individual), multiple copies of building blocks, and overlapping building blocks. These issues are discussed in greater detail in (Wu 1995).

Figure 1 shows typical results from a comparison of GA performance on the fixed and floating representations. The x-axis shows increasing genome length (length of an individual) and the y-axis shows increasing generations.

Figure 1: Average number of function evaluations needed to find the optimum vs. genome length: GA on 8x8.RR2 and 8x8.FRR2.

In the fixed representation functions, extra bits from the longer genome lengths are distributed evenly between the basic building blocks as non-coding segments. With the floating representation, longer genome lengths simply allowed the GA more resources with which to work. Comparisons of GA performance on the fixed and floating representations clearly indicate that long genome lengths are a big advantage for the floating representation.
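The tag-scanning evaluation just described can be sketched as follows (a simplified illustration, not the authors' code: every building block contributes a unit fitness, and duplicate copies count once):

```python
def frr_fitness(bits, basic_patterns, upper_blocks, start_tag="11"):
    """Evaluate a floating-representation individual: every occurrence of
    the start tag marks a *possible* building block; a basic block exists
    if its optimum pattern immediately follows a tag.  An upper-level
    block exists when all of its component basic blocks exist.
    """
    found = set()
    for i in range(len(bits) - len(start_tag) + 1):
        if bits[i:i + len(start_tag)] == start_tag:
            rest = bits[i + len(start_tag):]
            for b, pattern in enumerate(basic_patterns):
                if rest.startswith(pattern):
                    found.add(b)
    score = len(found)
    for components in upper_blocks:
        if all(b in found for b in components):
            score += 1
    return score

# The example individual from the text, with the Table 2 patterns:
basic = ["01101111", "10100110", "00001001", "10010111",
         "11100010", "10010111", "00101010", "11111110"]
upper = [[0, 1], [2, 3], [4, 5], [6, 7],
         [0, 1, 2, 3], [4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7]]
print(frr_fitness("1000110110111101001100110110111100", basic, upper))
# -> 3   (blocks 0 and 1 exist, and with them level-1 block 8)
```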
At short genome lengths, the average GA performance on the FRR functions is worse than the average GA performance on the RR functions. As the genome length increases, GA performance on the floating representation improves exponentially and quickly becomes significantly better than GA performance on the fixed representation. Once the genome length exceeds a certain point, the performance levels off to ideal values.

Density of Genetic Material

Given the behavior shown in figure 1, why not simply run the GA on floating functions using as long a genome length as possible? The amount of "genetic material" used in each generation is not a trivial issue. The more genetic material there is to be analyzed, the more computation time is required. As a result, there is a tradeoff between the better performance of the floating representation on longer individuals and the extra computation time required to generate and process the longer individuals. The minimum genome length that a fixed RR function needs to code a solution is equal to the sum of the lengths of the basic building blocks. The minimum genome length for an FRR function depends on the arrangement of building blocks with the greatest overlap. This arrangement differs for each set of building block optima. Because of the ability to overlap building blocks, FRR runs on average use fewer bits to code for all of the building blocks of a function than corresponding RR runs.

What, then, is the minimum genome length necessary for the GA to be able to find an optimum individual for an FRR function? We attempt to estimate this minimum length empirically. A series of experiments was carried out on the FRR function 8x6.FRR2, which resembles the one described in Table 2 except that basic building blocks are six instead of eight bits long.
The longest genome length tested was 64 bits, and genome lengths were shortened by four bits in each consecutive experiment until the GA stopped finding optimum individuals. Each experiment was run 100 times and the average results reported in figure 2. The x-axis shows increasing genome length. The y-axis values may be interpreted in one of three ways. The percentage of successful runs refers to the number of runs (out of a total 100) in which an optimum individual was found. The average generation found is calculated from only those runs where an optimum individual was found. The percentage of coding bits, also calculated only from runs where an optimum was found, shows the average proportion of an optimum individual that was used to code for the solution.

As genome length decreased, GA performance grew progressively worse. The GA was less successful at finding optimum individuals, required more time to find the optimum in those runs where it was successful, and used up a larger percentage of the bits of an individual to code for the optimum.

Figure 2: Results from shrinking the genome length of the 8x6.FRR2 function (percent successful runs, average generation found, and percent coding bits vs. genome length).

The "percent coding bits" graph shows a tendency for the GA to use as much space as is available. As genome length shrank, the GA packed building blocks more compactly until the percentage of coding bits approached 100%. At this point, the GA had compacted the building blocks as tightly as possible and was unable to find optimum individuals at any shorter genome lengths. Studies on other FRR functions produced similar results.

Negative Building Blocks

A key phrase associated with evolutionary systems is "survival of the fittest." Positive qualities are rewarded with increased chance of survival and reproduction, and negative qualities are discouraged and eliminated.
The complex mechanisms of natural systems, however, have resulted in gene products that may interact with and affect their organism in many ways, possibly producing both positive and negative effects. This overlap in functionality is paralleled in the FRR function by overlapping building blocks. As in natural systems, one segment of an individual can affect the fitness of the individual in multiple ways.

In using negative building blocks in the FRR functions, we introduce a situation where building blocks may contribute both positively and negatively to an individual's fitness. Negative building blocks have negative fitness contributions that decrease an individual's fitness. However, a negative building block may be part of a larger, positive building block. This situation is similar to the deceptive problems described in (Goldberg 1989), where building blocks that are discouraged/encouraged early in a run may/may not lead to the optimum individual. Initial experiments showed that one or two negative building blocks had little effect on GA performance. Our next test, then, was to investigate the effects of setting entire intermediate levels of building blocks to negative values. For each function tested, the intermediate-level building blocks were assigned a fitness of -1 (while all other building blocks had fitness 1), one level at a time. The performance of the GA was evaluated to determine at which levels of a function negative reinforcement is most detrimental.

Figure 3: Number of function evaluations needed to find the optimum vs. level of negative building blocks: 32x8.FRR2 (curves for nclen 5 and nclen 10).

Figure 4: Average number of generations needed to find the optimum vs. genome length: GA and RNC-GA on 8x8.FRR2.
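The fixed-position RR scoring with a negated level can be illustrated as follows (the block positions and contributions here are invented for a tiny example, not taken from the experiments):

```python
def rr_fitness(bits, blocks):
    """Fixed Royal Road fitness: sum the contributions of every building
    block whose optimum pattern occupies its fixed position.  A level is
    'negated' by giving its blocks a contribution of -1.
    """
    return sum(c for start, pattern, c in blocks
               if bits[start:start + len(pattern)] == pattern)

# Two 4-bit basic blocks worth +1 each; their level-1 parent set to -1.
blocks = [(0, "1111", 1), (4, "1111", 1), (0, "11111111", -1)]
print(rr_fitness("11111111", blocks))   # -> 1  (i.e. 1 + 1 - 1)
print(rr_fitness("11110000", blocks))   # -> 1  (only the first basic block)
```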
Each experiment was run 50 times and the average values reported here. Figure 3 plots the data from a six-level function called 32x8.FRR2 using genome lengths of 416 and 576. The x-axis indicates the level at which building blocks were set to negative fitnesses and the y-axis indicates the generation at which the optimum was found. Setting the lower-level building blocks to negative fitnesses seemed to cause the most difficulty for the GA; more time was needed to find an optimum. This behavior supports the building block hypothesis' implication that it is important to encourage the GA to retain short building blocks because they are easy to find and can be easily combined to form larger building blocks. Performance also degrades when upper-level building blocks are set to negative fitnesses. Because of the hierarchical nature of the RR functions, a tendency to lose very large building blocks can result in significant setbacks in the search for solutions. This suggests that it is also important to encourage the GA to keep upper-level building blocks in the population. The best performance occurred when the middle-most levels were set to negative fitnesses. Tests on other RR and FRR functions produced similar results. The need for positive reinforcement at the lower levels is even more important with the RR functions.

Randomized Non-coding Segments

One of the unique features of the floating representation is the fact that floating building blocks allow the bits of an individual to switch back and forth between coding and non-coding status. As mentioned earlier, non-coding bits are bits that are completely ignored by the fitness function and make no contribution to an individual's fitness. Thus, when an existing building block is destroyed, parts of that building block may still be retained as non-coding regions.
Creating a building block - whether one that was just destroyed, or a different one - out of these parts should be easier than creating a new building block from scratch. In addition, there are some situations in which the fitness function may vary over time. The ability to save old building blocks in the population could reduce search time later on in a run when the old building blocks once again become "fitness contributing" building blocks. Thus the floating representation is thought to provide the GA with a limited "memory" capability that could improve its rediscovery abilities. To test this hypothesis, an additional step was added to the GA in between the evaluation and selection steps to "randomly initialize" the non-coding bits of each individual. This step should erase any partial building blocks that may have been saved in non-coding regions. Coding regions were left untouched.

Figure 4 plots the average number of generations needed to find an optimum individual as genome length increases for the 8x8.FRR2 function. "RNC" or randomized non-coding segments refers to the modified GA runs. Figure 5 plots the average number of times building blocks are found for the 8x8.FRR2 function.

[Figure 5: Average number of times building blocks are found vs. genome length: GA and RNC-GA on 8x8.FRR2.]

Both plots compare the performance of the modified GA with the performance of the original GA. Contrary to expectations, the modified GA performed better than the original GA in terms of both speed and stability. Similar behavior was seen in runs on other FRR functions. Analysis of these runs suggests the following reason for these unexpected results. While randomizing the non-coding bits may destroy previous building block parts, it also increases the mutation rate, and hence the exploration, in the non-coding regions.
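The extra randomization step can be sketched as follows. This assumes each individual carries a boolean coding mask; how the floating representation determines which bits are coding is described elsewhere in the paper, so the mask here is a stand-in.

```python
import random

def randomize_noncoding(genome, coding_mask, rng=random):
    """Re-randomize every non-coding bit of an individual while leaving
    coding bits untouched (applied between evaluation and selection)."""
    return [bit if coding else rng.randint(0, 1)
            for bit, coding in zip(genome, coding_mask)]

genome      = [1, 1, 0, 1, 0, 0, 1, 1]
coding_mask = [True, True, False, False, True, False, True, False]
scrambled = randomize_noncoding(genome, coding_mask, random.Random(42))
# Coding positions are preserved exactly; only non-coding bits change.
assert all(s == g for s, g, c in zip(scrambled, genome, coding_mask) if c)
```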
In effect, this modified GA saves existing building blocks with minimal risk of loss while continuing high-powered exploration in the as yet unused regions of the individuals. Though the randomization step may destroy previous building block parts, the detrimental effects of this action appear to be far outweighed by the benefits of increased exploration.

Conclusions

The first experiment looked into the density of building blocks in floating representation solutions. Results showed that the GA can effectively solve floating representation problems even with significantly reduced genetic material. The GA tends to use as much material as it is given: runs with longer genome lengths will scatter building blocks randomly all over the individuals while runs with shorter genome lengths will pack building blocks more densely and have more overlaps on the individuals. As expected, the probability that a solution will be found at all decreases as genome length shrinks and eventually becomes zero when it is no longer physically possible to fit all building blocks on an individual.

The second experiment studied the effects of negative building blocks on GA performance. Entire levels of building blocks were assigned negative fitness contributions and the average performance of the GA compared. The results suggest that the middle-most levels of building blocks are the least important to the GA. Lower level building blocks are the most important because the "pieces of the puzzle" must exist before the puzzle can be built. Upper level building blocks must be encouraged because they consist of such a large proportion of the entire solution.

The third experiment studied a modified GA in which non-coding bits were randomized during each generation of a GA run. This extra step was expected to diminish GA performance because it would remove any "memory" capabilities (for saving parts of previously existing building blocks) that non-coding segments may have provided the GA. Instead, results indicated that performance improved with randomization and this improvement was due to increased exploration in the non-coding regions.

The fact that the GA was able to solve most of the experiments described in this paper makes a strong statement for the effectiveness of the floating representation. Nevertheless, these are still preliminary results; additional studies are needed to fully understand the implications of the floating representation on the GA.

Acknowledgments

This research was sponsored by NASA/JSC under grant NGT-51057. The authors would like to thank John Holland for many discussions and suggestions relating to this work and Leeann Fu for many helpful comments on this article.

References

Goldberg, D. E. 1989. Genetic algorithms and Walsh functions: Part II, deception and its analysis. Complex Systems 3:153-171.

Holland, J. H. 1975. Adaptation in Natural and Artificial Systems. University of Michigan Press.

Mitchell, M.; Forrest, S.; and Holland, J. H. 1991. The royal road for genetic algorithms: Fitness landscapes and GA performance. In Toward a Practice of Autonomous Systems: Proc. of 1st ECAL.

Wu, A. S., and Lindsay, R. K. 1996. A comparison of the fixed and floating building block representation in the genetic algorithm. Forthcoming.

Wu, A. S. 1995. Non-coding DNA and floating building blocks for the genetic algorithm. Ph.D. Dissertation, University of Michigan.
Carla E. Brodley
School of Electrical and Computer Engineering
Purdue University
West Lafayette, IN 47906
brodley@ecn.purdue.edu

Mark A. Friedl
Department of Geography and Center for Remote Sensing
Boston University
Boston, MA 02215
friedl@crsa.bu.edu

Abstract

This paper presents a new approach to identifying and eliminating mislabeled training instances. The goal of this technique is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. The approach employs an ensemble of classifiers that serve as a filter for the training data. Using an n-fold cross validation, the training data is passed through the filter. Only instances that the filter classifies correctly are passed to the final learning algorithm. We present an empirical evaluation of the approach for the task of automated land cover mapping from remotely sensed data. Labeling error arises in these data from a multitude of sources including lack of consistency in the vegetation classification used, variable measurement techniques, and variation in the spatial sampling resolution. Our evaluation shows that for noise levels of less than 40%, filtering results in higher predictive accuracy than not filtering, and for levels of class noise less than or equal to 20% filtering allows the base-line accuracy to be retained. Our empirical results suggest that the ensemble filter approach is an effective method for identifying labeling errors, and further, that the approach will significantly benefit ongoing research to develop accurate and robust remote sensing-based methods to map land cover at global scales.

Introduction

A goal of an inductive learning algorithm is to form a generalization from a set of training instances such that classification accuracy on previously unobserved instances is maximized. The maximum accuracy achievable depends on the quality of the data and on the appropriateness of the biases of the chosen learning algorithm for the data. The work described here focuses on improving the quality of the training data by identifying and eliminating mislabeled instances prior to applying the chosen learning algorithm, thereby increasing classification accuracy.

For some learning tasks, domain knowledge exists such that noisy instances can be identified because they go against the "laws" of the domain. For example, in the domain of diagnosing Alzheimer's disease, it is known that the illness strikes the elderly. An instance, describing a patient, labeled as sick (versus not sick) for which the patient's age is ten is clearly incorrect. This is an example of an instance for which the class label is incorrect, or a faulty measurement of the age feature was recorded. For many domains, this type of knowledge does not exist, and an automated method is needed to eliminate mislabeled instances from the training data.

The idea of eliminating instances to improve the performance of nearest neighbor classifiers has been a focus of research in both pattern recognition and instance-based learning. Wilson (1972) used a three-nearest neighbor classifier (3-NN) to select instances that were then used to form a 1-NN; only instances that the 3-NN classified correctly were retained for the 1-NN. Aha, Kibler and Albert (1991) demonstrated that filtering instances based on records of their contribution to classification accuracy in an instance-based classifier improves the accuracy of the resulting classifier. Skalak (1994) created an instance selection mechanism for nearest neighbor classifiers with the goal of reducing their computational cost, which depends on the number of stored instances.

The idea of selecting "good" instances has also been applied to other types of classifiers. Winston (1975) demonstrated the utility of selecting "near misses" when learning structural descriptions.
Skalak & Rissland (1990) describe an approach to selecting instances for a decision tree algorithm using a case-based retrieval algorithm's taxonomy of cases (for example "the most-on-point cases"). Lewis and Catlett (1994) illustrate that sampling instances using an estimate of classification certainty drastically reduces the amount of data needed to learn a concept.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

In this article we address the problem of identifying training instances that are mislabeled. Quinlan (1986) demonstrated that, for higher levels of noise, removing noise from attribute information decreases the predictive accuracy of the resulting classifier if the same attribute noise is present when the classifier is subsequently used. In the case of mislabeled training instances (class noise) the opposite is true; cleaning the training data will result in a classifier with higher predictive accuracy. Cleaning can take one of two forms: removing mislabeled instances from the training data, or correcting their labels and retaining them.

In the next section we introduce a method for identifying mislabeled instances that is not specific to any single learning algorithm, but rather serves as a general method that can be applied to a dataset before feeding it to a specific learning algorithm. The basic idea is to use a set of learning algorithms to create classifiers that serve as a filter for the training data. The method was motivated by the technique of removing outliers in regression analysis (Weisberg, 1985). An outlier is a case (an instance) that does not follow the same model as the rest of the data, appearing as though it comes from a different probability distribution.
Candidates are cases with a large residual error. (Not all residual cases are outliers because, according to the model, large errors will occur with the frequency prescribed by the generating probability distribution.) Weisberg suggests building a model using all of the data except for the suspected outlier and testing whether it does or does not belong to the model using the externally studentized t-test. Here, we apply this idea by using a set of classifiers formed from part of the training data to test whether instances in the remaining part of the training data are mislabeled. An important difference between our work and previous approaches to outlier detection is that our approach assumes that the errors in the class labels are independent of the particular model being fit to the data. In essence, our method attempts to identify data points that would be outliers in any model.

Filtering Training Data

This section describes a general procedure for identifying and eliminating mislabeled instances from a training set. The first step is to identify candidate instances by using m learning algorithms (called filter algorithms) to tag instances as correctly or incorrectly labeled. To this end, an n-fold cross-validation is performed over the training data. For each of the n parts, the m algorithms are trained on the other n - 1 parts. The m resulting classifiers are then used to tag each instance in the excluded part as either correct or mislabeled. An individual classifier tags an instance as mislabeled if it cannot classify the instance correctly.

At the end of the n-fold cross-validation each instance in the training data has been tagged. Using this information, the second step is to form a classifier using a new version of the training data for which all of the instances identified as mislabeled are removed. The filtered set of training instances is provided as input to the final learning algorithm.
The resulting classifier is the end product of the approach. Specific implementations of this general procedure differ in how the filtering is performed, and in the relationship between the filter algorithm(s) and the final algorithm.

Implementing the General Procedure

One approach to implementing this procedure is to use the same algorithm to construct both the filter and the final classifier. This approach is most similar to removing outliers in regression analysis, for which the same model is used to test for outliers and for fitting the final model to the data once the outliers have been removed. A related approach is John's (1995) method for removing the training instances that are pruned by C4.5 (Quinlan, 1993). After the instances are removed a new tree is constructed using the filtered training data. In that approach training instances are filtered based on C4.5's pruning decisions, whereas our approach filters instances based on classification decisions.

A second way to implement filtering is to construct the filter using one algorithm and the final classifier using a different algorithm. The assumption underlying this approach is that some algorithms act as good filters for other algorithms, much like some algorithms act as good feature selection methods for others (Cardie, 1993). Wilson's (1972) approach to filtering data for a 1-NN using a 3-NN is an example of this approach.

A third method is based on ensemble classifiers, which combine the outputs of a set of base-level classifiers (Hansen & Salamon, 1990; Benediktsson & Swain, 1992; Wolpert, 1992). A majority vote ensemble classifier will outperform each individual base-level classifier on a dataset if two conditions hold: (1) the probability of a correct classification by each individual classifier is greater than 0.5 and (2) the errors in predictions of the base-level classifiers are independent (Hansen & Salamon, 1990).
For this work, we use an ensemble classifier to detect mislabeled instances by constructing a set of base-level detectors (classifiers) and then using them to identify mislabeled instances by consensus vote. This is distinct from majority vote in that all base-level detectors must agree that an instance is mislabeled for it to be eliminated from the training data.

[Figure 1: Types of detection errors]

Consensus Filters

In regression analysis, outliers are defined relative to a particular model. Here we assume that some instances in the data have been mislabeled and that the label errors are independent of the particular model being fit to the data. Therefore collecting information from different models will provide a better method for detecting mislabeled instances than from a single model.

Training data (ground truth) for the purposes of land cover mapping is generally scarce. Indeed, this problem is common in many classification and learning problem domains (e.g. medical diagnosis). Therefore, we want to minimize the probability of discarding an instance that is an exception rather than an error. Indeed, Danyluk and Provost (1993) note that learning from noisy data is difficult because it is hard to distinguish between noise and exceptions, especially if the noise is systematic. Ideally, the biases of at least one of the learning algorithms will enable it to learn the exception. Therefore, one or more of the classifiers that comprise the base-level set of detectors can have difficulty capturing a particular exception without causing the exception to be erroneously eliminated from the training data. Taking a consensus rather than a majority vote is a more conservative approach and will result in fewer instances being eliminated from the training data. The drawback of a conservative approach is the added risk in retaining bad data.
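The consensus-vote filter, combined with the cross-validation tagging described above, can be sketched as follows. The two scalar base-level classifiers here are illustrative stand-ins for the decision trees, linear machines, and 1-NN classifiers used in the experiments; the fold scheme and helper names are assumptions for the sketch.

```python
class NearestNeighbor:
    """1-NN on scalar features."""
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))
    def predict(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

class NearestCentroid:
    """Assign the class whose training mean is closest."""
    def fit(self, xs, ys):
        self.means = {c: sum(x for x, y in zip(xs, ys) if y == c) / ys.count(c)
                      for c in set(ys)}
    def predict(self, x):
        return min(self.means, key=lambda c: abs(self.means[c] - x))

def consensus_filter(xs, ys, classifier_types, n_folds=4):
    """Tag each instance via n-fold cross-validation; discard it only if
    every base-level detector (trained on the other folds) misclassifies
    it -- the consensus vote described in the text."""
    keep = []
    for fold in range(n_folds):
        train_idx = [i for i in range(len(xs)) if i % n_folds != fold]
        test_idx = [i for i in range(len(xs)) if i % n_folds == fold]
        detectors = []
        for make in classifier_types:
            clf = make()
            clf.fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
            detectors.append(clf)
        for i in test_idx:
            if any(clf.predict(xs[i]) == ys[i] for clf in detectors):
                keep.append(i)    # at least one detector agrees with the label
    return sorted(keep)

# Two well-separated clusters; the point at x=5.0 has its label flipped to 1.
xs = [0.0, 1.0, 2.2, 3.5, 5.0, 20.0, 21.0, 22.2, 23.5, 25.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
kept = consensus_filter(xs, ys, [NearestNeighbor, NearestCentroid], n_folds=5)
print(kept)  # index 4, the mislabeled point, is the only one filtered out
```

Changing `any` to a majority count would turn this into the less conservative majority-vote filter discussed below.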
In identifying mislabeled instances there are two types of error that can be made (see Figure 1). The first type (E1) occurs when an instance is incorrectly tagged as mislabeled (D). The second type of error (E2) occurs when a mislabeled instance (M) is tagged as correctly labeled.

A consensus filter has a smaller probability of making an E1 error than each of its base-level detectors if the errors made by the base-level detectors are independent. Let p_i be the probability that a base-level detector i makes an error of type E1; then the probability that a consensus filter comprised of m base-level detectors will make an error is:

P(E1) = p_1 * p_2 * ... * p_m

The probability of mistaking a mislabeled instance for a correctly labeled instance (E2) is computed differently. Let q_i be the probability that a base-level detector i makes an error of type E2. A consensus filter makes a type E2 error if one or more of the base-level classifiers makes a type E2 error. The probability that a consensus filter makes an E2 error is given by:

max(q_1, ..., q_m) <= P(E2) <= q_1 + q_2 + ... + q_m

The lower bound represents the case where each classifier makes identical errors. The upper bound represents the case where each base-level classifier makes type E2 errors on different parts of the data. Therefore, in direct contrast to type E1 errors, independence of the errors can lead to higher overall E2 error.

Empirical Evaluation

This research was motivated by the uncertainty caused by labeling errors in land-cover maps of the Earth's surface. In this context, we evaluate the ability of consensus filters to identify mislabeled training instances in remotely sensed data that was labeled using existing land cover maps. To simulate the type of error that is common to land-cover maps, we artificially introduced noise between pairs of classes that are likely to be confused in the original labels.
We chose not to introduce noise between all pairs of classes as this would not model the types of labeling errors that occur in practice. Our experiments are designed to evaluate the consensus filter's ability to identify mislabeled instances and the effect that eliminating mislabeled instances has on predictive accuracy.

Automated Land Cover Mapping

The dataset consists of a time series of globally distributed satellite observations of the Earth's surface. The dataset was compiled by Defries and Townsend (1994), and includes 3398 locations that encompass all major terrestrial biomes and land cover types at the Earth's surface (see Table 1). (A biome is the largest subdivision of the terrestrial ecosystems. Some examples of biomes are grasslands, forests and deserts.)

Table 1: Land cover classes

      Class Name                               Insts
   1  broadleaf evergreen forest                 628
   2  conif. evergreen forest & woodland         320
   3  high lat. decid. forest & woodland         112
   4  tundra                                     735
   5  decid.-evergreen forest & woodland          57
   6  wooded grassland                           212
   7  grassland                                  348
   8  bare ground                                291
   9  cultivated                                 527
  10  broadleaf decid. forest & woodland          15
  11  shrubs and bare ground                     153

The remote sensing observations are measurements of a parameter called the normalized difference vegetation index (NDVI). This index is commonly used to infer the amount of live vegetation present within a pixel at the time of data acquisition. Each one degree pixel is described by a time series of twelve NDVI values at monthly time increments from 1987, and by its latitude, which can be useful for discriminating among classes with otherwise similar spectral properties.

Learning Algorithms

We used three well-known algorithms from the machine learning and statistical pattern recognition communities: decision trees, nearest neighbor classifiers and linear machines.
The decision tree algorithm uses the information gain ratio (Quinlan, 1986) to construct the tree, and prunes the tree using C4.5's pruning algorithm with a confidence level of 0.10. We chose to set k = 1 for the k-nearest neighbor algorithm, but in future implementations we will experiment with varying values of k. To find the weights of a linear machine (Nilsson, 1965) we used the thermal training rule for linear machines (Brodley & Utgoff, 1995).

Experimental Method

To test the data filtering procedure described above, we introduced random noise into the training data between pairs of classes that are most likely to be confused in the original labels. In this way, we have realistically simulated a type of labeling error that is common to land cover maps. This type of error occurs because discrete classes and boundaries are used to distinguish between classes that have transitional boundaries in space and that have fairly small differences in terms of their physical attributes. For example, the distinction between a grassland and wooded grassland can be subtle. Consequently, pixels labeled
For example, an instance from class 8 (bare ground) has an s% chance of being changed to class 11 (shrubs and bare ground), and an instance from class 11 has an 2% chance of being changed to class 8. Using this method the percentage of the entire training set that is corrupted will be less than Z% because only some pairs of classes are considered problematic. The actual percentage of noise in the corrupted training data is reported in column 2 of Table 2. For each of six noise levels, ranging from 0% to 40%, we compared the average predictive accuracy of clas- sifiers trained using filtered and unfiltered data. For each of the ten runs that make up the average, we used a four-fold cross-validation to filter the corrupted instances from the training data. The consensus filter consisted of the following base-level classifiers: a de- cision tree, a linear discriminant function and a l-NN classifier. To assess the ability of the consensus filter to identify the corrupted instances we then trained each of the three algorithms twice: first using the unfiltered dataset then using the filtered dataset. In addition, we formed two majority vote ensemble classifiers: one from the filtered and one from unfiltered data. The ma- jority vote ensemble serves as the final classifier and not as the filter. The resulting classifiers were then used to classify the uncorrupted test data. AfTect of Filtering on Classification Accuracy Table 2 reports the accuracy of the classifiers formed by each of the three algorithms without a filter (none) and with a consensus filter (CF). When zero noise is introduced, filtering did not make a significant differ- ence for any of the methods. Since the original data is not guaranteed to be noise free, we have no way to evaluate whether it improves the true classification ac- curacy by using the test data. 
For noise levels of 5 to 30%, filtering significantly improved the classification accuracy (at the 0.05 level of significance using a paired Table 2: Comparison of classification accuracy of filtered versus unfiltered data Noise Actual 1-NN LM Level Noise None CF None CF 0 -----m- 87.3 87.5 78.6 80.0 5 3.4 84.4 87.8 76.7 78.9 10 7.1 81.8 86.4 77.5 79.2 20 13.8 75.8 83.1 70.2 78.1 30 23.3 68.6 75.2 63.4 74.0 40 36.1 58.4 59.9 49.0 54.2 t-test) in all cases except when a linear machine was the final classifier under noise levels 5% and 10%. For noise levels of 5% and 10% the filtering allows reten- tion of approximately the same accuracy as the original uncorrupted dataset. At 20% noise the accuracies of k- NN and the decision tree constructed from the filtered data begin to drop, but not as substantially as when they are constructed from the unfiltered data. For ex- ample, applying a decision tree to the unfiltered data set causes a drop of 12.2% (100 * ($5.6 - 75.2)/ $5.6) from the base-line accuracy versus a 4.4% drop when using the filtered data. At 30% noise, filtering cannot fully overcome the error in the data and for noise levels of 40% and over, consensus filtering does not help. A hypothesis of interest is whether a majority vote ensemble classifier can be used instead of filtering. The final column of Table 2 reports the accuracies of major- ity vote ensembles constructed from unfiltered and fil- tered data. Each ensemble consists of three base-level classifiers: a l-NN, a linear machine and a univariate decision tree. During classification, if no ma.jority cla.ss was predicted for an instance, then the ensemble se- lects the class predicted by the univariate decision tree. The results show that the majority ensemble classifier formed from the unfiltered dataset is not as accurate as the l-NN and the decision tree formed from filtered data (excluding the case of a tree constructed with 5% noise). 
Using a consensus filter in conjunction with a majority vote ensemble results in higher accuracy than any of the other methods in all but three cases (l-NN for noise levels 0, 5 and 40). In summary, the results show that the consensus filter improves the classifk- tion accuracies of all four learning methods. Another point worth noting is that applying the con- sensus filter to the training data leads to substantia.lly smaller decision trees. Table 3 reports the number of leaves in decision trees produced from the filtered and unfiltered data. For O-10% noise, the filtered data cre- ates a tree with fewer leaves than a tree induced from the original dataset. This affect was also observed by John (1995) and attributed to Robust C4.5’~ ability to remove “confusing” instances from the training data, thereby reducing the size of the learned decision trees. UTree None CF 85.6 85.5 84.4 $6.1 80.1 $5.1 75.2 $1.8 67.4 74.2 56.9 59.6 Table Majority None CF $7.3 87.1 $6.5 87.2 $4.4 86.7 $0.7 85.6 71.7 79.0 57.5 59.7 3: Tree size - number of Noise None CF 0 187.7 121.4 5 270.7 126.0 10 333.0 143.5 20 419.2 189.7 30 484.6 262.4 40 517.7 302.8 The reduction in tree size and the improvement in accuracy raise the question as to whether filtering is just getting rid of those parts of the data that are difIicultO to classify, thereby achieving higher overall accuracy. In other words, is the filtering procedure throwing out the instances of a particular class, or of a subset of the classes, to achieve higher accuracy on the rema.ining classes. To test this hypothesis we ex- amined the decision tree’s accuracy for each class at each noise level. The results are shown in Table 4. We have placed a t by the class numbers of those classes that were artificially corrupted. For a noise level of 5% filtering decreased accuracy for classes 3, 5, 7 and 9, but by less than 2.5%. For a noise level of 10% the accuracy of class 1 went down by 0.7 %. 
For a noise level of 20% classes 5 and 10 decreased in accuracy. For a noise level of 30% the accuracies of classes 5 and 7 decreased with filtering and for 40% the accuracy of classes 4, 5, 6, and 8 decreased. For classes that did not have noise added to them (classes 1 and 9), filtering generally did not decrease accuracy and in some cases increased accuracy. For class 5, filtering decreases accuracy in four out of the five cases. Class 5 is unique in that its labels were artificially corrupted with both class 10 and class 2, thereby doubling the noise level. In general, the results show that for this dataset, filtering does not sacrifice the accuracy of some of the classes to improve overall accuracy, and that it can retain base-line accuracy in noisy classes for noise levels of up to 20%.

Table 4: Decision tree accuracy by class (a † marks artificially corrupted classes)

  Noise  Filter  1     2†    3†    4†    5†    6†    7†    8†    9     10†   11†
  5      None    94.3  78.4  92.7  92.9  40.0  81.0  66.8  90.4  78.5  0.0   82.0
  5      CF      96.0  79.7  91.8  95.6  38.0  84.8  64.4  97.3  77.7  0.0   88.0
  10     None    96.0  73.1  80.9  86.7  46.0  74.8  58.2  89.7  73.5  0.2   74.0
  10     CF      95.3  79.7  88.2  94.8  46.0  83.8  63.2  94.2  78.4  0.2   81.3
  20     None    94.0  64.4  65.4  80.0  40.0  68.1  55.9  74.5  76.3  20.0  70.0
  20     CF      96.6  75.9  80.0  87.7  38.0  78.6  62.1  89.7  77.9  0.0   72.7
  30     None    95.3  50.9  54.5  64.4  22.0  58.1  52.4  67.9  74.2  0.0   54.0
  30     CF      95.7  64.1  54.5  76.3  16.0  68.1  50.9  74.8  81.5  0.0   71.3
  40     None    94.2  44.4  47.3  47.0  18.0  47.1  32.4  44.8  72.7  0.0   39.4
  40     CF      97.0  51.2  49.1  45.1  12.0  43.8  35.3  42.8  83.8  0.0   47.3

Filter Precision

To assess the consensus filter's ability to identify mislabeled instances, we examined the intersection between the set of instances that were corrupted and the set of instances that were tagged as mislabeled by the consensus filter. In Figure 1 this is the area M ∩ D. The results of this analysis are shown in Table 5.
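The per-row precision estimates used in this analysis depend only on the counts |D| (discarded), |M| (corrupted), and |M ∩ D|. A minimal sketch, reproduced here against the 5% noise row of Table 5:

```python
def filter_precision(discarded, corrupted, intersect, total):
    """Estimate P(E1), the chance of discarding good data, and P(E2),
    the chance of retaining mislabeled data, from the set sizes
    |D|, |M|, and |M n D|."""
    p_e1 = (discarded - intersect) / (total - corrupted)
    p_e2 = (corrupted - intersect) / corrupted
    return p_e1, p_e2

# 5% noise row of Table 5: |D| = 257.8, |M| = 103.0, |M n D| = 89.5,
# with 3063 training instances in total.
p_e1, p_e2 = filter_precision(257.8, 103.0, 89.5, 3063)
print(round(p_e1, 3), round(p_e2, 3))  # 0.057 0.131
```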
Each row in the table reports the average over the ten runs of the number of instances discarded by the filter |D|, corrupted in the data |M|, in both sets |M ∩ D|, and estimates of the probability of making an E1 or an E2 error. P(E1) represents the probability of throwing out good data and can be estimated as:

P(E1) = (Discarded - Intersect) / (Total - Corrupted) = (|D| - |M ∩ D|) / (Total - |M|)

P(E2) represents the probability of keeping bad data and can be estimated as:

P(E2) = (Corrupted - Intersect) / Corrupted = (|M| - |M ∩ D|) / |M|

There are 3063 (90% of 3398) total training instances. Therefore, for a noise level of 5%, P(E1) = (257.8 - 89.5) / (3063 - 103.0) = .057 and P(E2) = (103.0 - 89.5) / 103.0 = .131.

Table 5: Consensus filter precision

         Number of Instances          Prob. of Error
  Noise  |D|    |M|     |M ∩ D|      P(E1)  P(E2)
  5      257.8  103.0   89.5         0.057  0.131
  10     353.7  217.7   171.8        0.064  0.211
  20     465.0  422.7   272.8        0.073  0.355
  30     559.8  712.2   324.9        0.100  0.544
  40     609.8  1106.4  314.8        0.151  0.716

The results show that the probability of throwing out good data remains small even for higher noise levels, illustrating that the consensus filter is conservative in discarding data. On the other hand, the results illustrate that the probability of keeping bad data grows rapidly as the noise level increases. Indeed for a noise level of 40% it has a 72% chance of retaining mislabeled instances. This comes as no surprise since 40% noise makes it difficult to distinguish between pairs of classes that have been corrupted, which is evident from the low accuracies observed for these classes in Table 4.

For higher levels of noise, the consensus filter did not find many of the mislabeled instances. For domains with high class noise a less conservative approach may do a better job at minimizing type E2 errors. Indeed, a drawback of the consensus filter is that it minimizes E1 errors at the expense of incurring more E2 errors. Therefore future work will focus on modeling this tradeoff explicitly as a parameter.
In addition, we will develop methods for customizing this parameter to the dataset characteristics of the particular task at hand. A straightforward way to model the tradeoff is to have the parameter be set to the minimum number of base-level classifiers that must label an instance as noisy before it can be discarded. At one end of the spectrum all base-level classifiers must label an instance, and at the other only one must label an instance before it can be discarded.

Conclusions and Future Directions

This article presented a procedure for identifying mislabeled instances. The results of an empirical evaluation demonstrated that the consensus filter method improves classification accuracy for a land-cover mapping task for which the training data contains mislabeled instances. Filtering allowed the base-line accuracy to be retained for noise levels up to 20%. An evaluation of the precision of the approach illustrated that consensus filters are conservative in throwing away good data, at the expense of keeping mislabeled data.

A future direction of research will be to extend the filter approach to correct labeling errors in training data. For example, one way to do this might be to relabel instances if the consensus class is different than the observed class. Instances for which the consensus filter predicts two or more classes would still be discarded. This direction is particularly important because of the paucity of high quality training data available for many applications.

Finally, the issue of determining whether or not to use the consensus filter method for a given data set must be considered. For the work described here, the data were artificially corrupted. Therefore the nature and magnitude of the labeling errors were known a priori. Unfortunately, this type of information is rarely known for most "real world" applications.
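As a rough illustration of the proposed tunable parameter, the sketch below (our own illustrative code, not the authors' implementation) discards an instance only when at least `min_votes` of the m base-level classifiers disagree with its recorded label. Setting `min_votes = m` recovers the consensus filter described in the paper; `min_votes = 1` is the most aggressive setting.

```python
# A minimal threshold-parameterized filter. `predictions[j][i]` is
# classifier j's (cross-validated) prediction for instance i; an
# instance is tagged as mislabeled when enough classifiers disagree
# with its recorded label.

def filter_instances(labels, predictions, min_votes):
    """Return indices of instances tagged as mislabeled."""
    m = len(predictions)
    assert 1 <= min_votes <= m
    tagged = []
    for i, label in enumerate(labels):
        votes = sum(1 for j in range(m) if predictions[j][i] != label)
        if votes >= min_votes:
            tagged.append(i)
    return tagged

labels = ["a", "b", "a", "c"]
preds = [["a", "a", "a", "c"],   # classifier 1
         ["a", "a", "b", "c"],   # classifier 2
         ["a", "a", "a", "c"]]   # classifier 3
print(filter_instances(labels, preds, min_votes=3))  # consensus: [1]
print(filter_instances(labels, preds, min_votes=1))  # aggressive: [1, 2]
```

In the toy data, instance 1 is tagged by every classifier and is discarded at any threshold, while instance 2 is discarded only under the most aggressive setting, illustrating the E1/E2 tradeoff the parameter controls.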
In some situations, it may be possible to use domain knowledge to estimate the amount of label noise in a dataset. For situations where this knowledge is not available, the conservative nature of our filtering procedure dictates that relatively few instances will be discarded for data sets with low levels of labeling error (see Table 5). Therefore, the application of this method to relatively noise free datasets should not significantly impact the performance of the final classification procedure.

Acknowledgments

We would like to thank our reviewers for their careful and detailed comments and Ruth Defries for supplying the data used for this work.

References

Aha, D., Kibler, D., & Albert, M. (1991). Instance-based learning algorithms. Machine Learning, 6, 37-66.

Benediktsson, J., & Swain, P. (1992). Consensus theoretic classification methods. IEEE Transactions on Systems, Man, and Cybernetics, 22, 668-704.

Brodley, C. E., & Utgoff, P. E. (1995). Multivariate decision trees. Machine Learning, 19, 45-77.

Cardie, C. (1993). Using decision trees to improve case-based learning. Machine Learning: Proceedings of the Tenth International Conference (pp. 25-32). Amherst, MA: Morgan Kaufmann.

Danyluk, A., & Provost, F. (1993). Small disjuncts in action: Learning to diagnose errors in the telephone network local loop. Machine Learning: Proceedings of the Tenth International Conference (pp. 81-88). Amherst, MA: Morgan Kaufmann.

Defries, R. S., & Townsend, J. R. G. (1994). NDVI-derived land cover classifications at a global scale. International Journal of Remote Sensing, 15, 3567-3586.

Hansen, L. K., & Salamon, P. (1990). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 993-1001.

John, G. H. (1995). Robust decision trees: Removing outliers from data. Proceedings of the First International Conference on Knowledge Discovery and Data Mining (pp. 174-179). Montreal, Quebec: AAAI Press.

Lewis, D., & Catlett, J. (1994).
Heterogeneous uncertainty sampling for supervised learning. Machine Learning: Proceedings of the Eleventh International Conference (pp. 148-156). New Brunswick, NJ: Morgan Kaufmann.

Nilsson, N. J. (1965). Learning machines. New York: McGraw-Hill.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81-106.

Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.

Skalak, D., & Rissland, E. (1990). Inductive learning in a mixed paradigm setting. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 840-847). Boston, MA: Morgan Kaufmann.

Skalak, D. (1994). Prototype and feature selection by sampling and random mutation hill climbing algorithms. Machine Learning: Proceedings of the Eleventh International Conference (pp. 293-301). New Brunswick, NJ: Morgan Kaufmann.

Weisberg, S. (1985). Applied linear regression. John Wiley & Sons.

Wilson, D. (1972). Asymptotic properties of nearest neighbor rules using edited data. IEEE Trans. on Systems, Man and Cybernetics, 2, 408-421.

Winston, P. H. (1975). Learning structural descriptions from examples. In Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill.

Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5, 241-259.
Tracking Dynamic Team Activity

Milind Tambe
Information Sciences Institute and Computer Science Department
University of Southern California
4676 Admiralty Way, Marina del Rey, CA 90292
tambe@isi.edu
http://www.isi.edu/soar/tambe

Abstract

AI researchers are striving to build complex multi-agent worlds with intended applications ranging from the RoboCup robotic soccer tournaments, to interactive virtual theatre, to large-scale real-world battlefield simulations. Agent tracking - monitoring other agents' actions and inferring their higher-level goals and intentions - is a central requirement in such worlds. While previous work has mostly focused on tracking individual agents, this paper goes beyond by focusing on agent teams. Team tracking poses the challenge of tracking a team's joint goals and plans. Dynamic, real-time environments add to the challenge, as ambiguities have to be resolved in real-time. The central hypothesis underlying the present work is that an explicit team-oriented perspective enables effective team tracking. This hypothesis is instantiated using the model tracing technology employed in tracking individual agents. Thus, to track team activities, team models are put to service. Team models are a concrete application of the joint intentions framework and enable an agent to track team activities, regardless of whether the agent is a collaborative participant or a non-participant in the team. To facilitate real-time ambiguity resolution with team models: (i) aspects of tracking are cast as constraint satisfaction problems to exploit constraint propagation techniques; and (ii) a cost minimality criterion is applied to constrain tracking search. Empirical results from two separate tasks in real-world, dynamic environments - one collaborative and one competitive - are provided.

Introduction

In many multi-agent domains, the interaction among intelligent agents - collaborative or non-collaborative - is both dynamic and real-time.
For instance, in education, intelligent tutoring systems interact with students to provide real-time feedback (Anderson et al. 1990). In entertainment, projects such as virtual immersive environments (Maes et al. 1994) and virtual theatre (Hayes-Roth, Brownston, & Gen 1995) involve real-time and dynamic interaction. Similarly, in training, dynamic, real-time simulations - e.g., traffic or air-traffic control (Pimentel & Teixeira 1994) and combat (Rao & Murray 1994; Tambe et al. 1995) simulations - involve such collaborative and non-collaborative interaction among tens or hundreds of agents (and humans).

80 Agents

Such interaction is also seen in robotic domains, e.g., collaboration by observation (Kuniyoshi et al. 1994), RoboCup soccer (Kitano et al. 1995). In all these environments, agent tracking - monitoring other agents' observable actions and inferring their high-level goals, plans and behaviors - is a central capability required for intelligent interaction (Anderson et al. 1990; Rao 1994; Tambe & Rosenbloom 1995). While this capability is obviously essential in non-collaborative settings, it is also important in collaborative settings, where communication may be restricted due to cost or risk. The key to this capability is tracking an agent's flexible and reactive behaviors in dynamic, multi-agent domains. This contrasts with previous work in the related area of plan recognition (Kautz & Allen 1986), which mostly focuses on static, single-agent domains. This paper takes a step beyond tracking individual agents - the current state of the art in agent tracking and plan recognition - by focusing on team tracking.

We (humans) see team activity all around, e.g., teamwork in games (soccer, hockey or bridge), an orchestra, a ballet, a discussion, a play, etc. Naturally, this teamwork is being reflected in various agent worlds, e.g., RoboCup.
The key in tracking such teamwork is to recognize that it is not merely a union of individual simultaneous activity, even if coordinated (Grosz & Sidner 1990; Cohen & Levesque 1991). For instance, ordinary automobile traffic is not considered teamwork, despite the simultaneous activity, coordinated by traffic signs (Cohen & Levesque 1991). Teamwork involves team members' joint goals and joint intentions, i.e., joint commitments to joint activities (Cohen & Levesque 1991). Consequently, tracking teamwork as independent activities of individual members is difficult. Consider the example of two children collaboratively building a tower of blocks (Grosz & Sidner 1990) - they cannot be tracked as building two individual towers of blocks with gaps in just the right places. Similarly, in (RoboCup) soccer, the collaborative pass play of two attackers cannot be tracked as independent activities. Robotic collaboration by observation (Kuniyoshi et al. 1994) would also require tracking such joint activities.

Thus, team tracking raises the novel challenge of tracking a team's joint goals and intentions.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Previous approaches (Anderson et al. 1990; Rao 1994; Tambe & Rosenbloom 1995), which focus on tracking individual agents, fail to track such team activities. One basic problem is inexpressiveness. In particular, these approaches are based on model tracing, which involves executing an agent's runnable model, and matching the model's predictions with actual observations. However, an individual's model simply does not express a team's joint goal and activities. Furthermore, by failing to exploit such jointness, these approaches also fail to adequately meet the demands of dynamic, real-time domains (the focus of our current work).
The main difficulty in real-time domains is that the tracker (tracking agent) has to resolve ambiguities in multiple team members' actions in real-time. Here, the jointness of teamwork is itself key in addressing this challenge. In particular, given this jointness, recognizing one team-member's actions helps to disambiguate other members' actions. Unfortunately, unable to exploit such jointness, the individual-oriented approaches engage in unconstrained search; this can be particularly problematic when tracking large teams. Finally, the above approaches also fail to address the dynamism in teamwork, particularly, dynamic formation and dissolution of subteams for different subtasks.

Some recent work has attempted to go beyond individuals and track multiple agents. One such approach tracks a group of agents engaged in identical activity (Tambe 1995). Yet, a group (e.g., cars driving in ordinary traffic) differs from a team (e.g., driving in a convoy) precisely due to the lack of any jointness of purpose. Thus, for instance, this approach fails to track teamwork where agents engage in non-identical activities. An alternative approach (Rao & Murray 1994), although focused on multi-agent domains, tracks mental states of individual agents rather than joint mental states of teams. As shown later, such individual-oriented approaches are found lacking in both solution quality (due to lack of expressiveness) and efficiency (due to lack of jointness) when tracking teams.

As a concrete basis for the above discussion, consider the following example of real-world teamwork - a typical simulated air-combat scenario (Figure 1), from a real-world combat simulation environment (Calder et al. 1993). Here, a pilot agent D confronts a team of four enemy fighters J, K, L and M. In Figure 1-a, D detects the four opponents turning towards its aircraft, and infers that they are approaching it with hostile intent.
In Figure 1-b, the four opponents split up into two subteams, and begin a pincer maneuver (Shaw 1988). That is, one subteam (J and K) starts turning left, while the other subteam (L and M) starts turning right. Their goal is to trap D in the center, and attack it from two sides. By correctly tracking the pincer, D effectively counteracts it - it turns away from the center (Figure 1-b). Although this puts the second subteam (L and M) outside of sight, recognizing the pincer also enables D to anticipate this second subteam's possible actions. In Figure 1-c, upon reaching its missile firing range, D turns and fires a missile at J. In Figure 1-d, D executes an Fpole turn, to provide radar guidance to its missile, without flying right behind the missile. Although D's missile is invisible to their radars, J and K track D's maneuvers and infer a missile firing.

Figure 1: Simulated 1 vs 4 air-combat.

Therefore, in Figure 1-d, they attempt to evade the missile (actually its radar-guidance) via a 90° beam turn. While beaming defeats D's missile, it also, unfortunately, disrupts the team's pincer. Basically, unaware that J and K have abandoned the pincer, the second subteam (L and M) continues its part. Meanwhile, anticipating the second subteam's possible turn behind its (D's) back, D plans an appropriate response.

One key point of this scenario is a concrete illustration of the need to express a team's joint goals and intentions. In Figure 1-b, for instance, the four opponents are not executing independent left and right turns! They are jointly executing a pincer. Yet, by focusing on individual models, D may be unable to express such jointness. In particular, D may possibly execute an individual model to track an individual, such as J, as executing a pincer. However, J's singlehanded pincer is meaningless, since a pincer mandates the participation of two or more agents.
This expressive inadequacy persists even if all agents are tracked as simultaneously executing individual pincers, e.g., is this one pincer with all four agents involved (Figure 1-b) or two separate pincers with two agents each?

The scenario also illustrates the difficulty in real-time ambiguity resolution. In Figure 1-b, for instance, a pincer is only one of many possible team tactics. The team could be beginning a post-hole tactic, where one subteam turns in a circle, to confuse D by disappearing and reappearing on its radar, while the second subteam attempts to attack. Or, each subteam may possibly be attacking independently. In resolving such ambiguity, it is useful to exploit the team's jointness, i.e., to recognize that if one subteam is executing one half of the pincer, the other subteam must be executing the other half, and cannot be engaged in some unrelated activity. Finally, the scenario also shows the dynamic formation of subteams and their sometimes unsynchronized activities. Team members begin with almost identical activities (Figure 1-a), then dynamically split into subteams to begin a pincer (Figure 1-b), and finally, one subteam dynamically deviates from its role (Figure 1-d). (This work will assume that subteams begin a joint activity together; over time, though, a subteam may deviate.)

The key hypothesis in this paper is that adopting a team perspective enables effective and efficient tracking of a team's activities, thus alleviating the difficulties dogging the agent-oriented approaches. In model tracing terms, this implies executing a team's runnable model, which predicts the actions of the team and its subteams (rather than separate models of individual team members). A team model treats a team as a unit, and thus explicitly encodes the joint goals and intentions required to address the challenge of tracking a team's joint mental state.
Indeed, team tracking based on team models is among the first practical applications of the joint intentions theory developed in formalizing teamwork (Cohen & Levesque 1991). The paper shows that team models are uniformly applicable in tracking even if an agent is a participant in a team, rather than a non-participant. Furthermore, it shows that the team models are efficient: (i) they explicitly exploit a team's jointness to constrain tracking effort; and (ii) by abstracting away from individuals, they avoid the execution of a large number of individual agent models. This abstraction in team models also provides robustness, e.g., changes in the number of team members may not disturb tracking. To track with such team models in real-time dynamic environments, we build on the RESC (Tambe & Rosenbloom 1995) approach for tracking individual agents. The new approach, RESCteam, is aimed at real-time, dynamic team tracking. Before describing team models and RESCteam in more detail, the following section first provides an overview of RESC. The description below assumes, as a concrete basis, pilot agents based on the Soar architecture (Tambe et al. 1995). We assume some familiarity with Soar's problem solving, specifically, applying operators to states to reach a desired state (Newell 1990).

RESC: Tracking Individual Agents

The RESC (REal-time Situated Commitments) approach to agent tracking (Tambe & Rosenbloom 1995) builds on model tracing (Anderson et al. 1990). Here, a tracker executes a model of the trackee (the agent being tracked), matching the model's predictions with observations of the trackee's actions. One key innovation in RESC is the use of commitments. In particular, due to ambiguity in the trackee's actions, there are often multiple matching execution paths through the model. Given real-time constraints and resource-bounds, it is difficult to execute all paths, or wait so the trackee may disambiguate its actions. Therefore, RESC commits to one, heuristically selected, execution path through the model, which provides a constraining context for its continued interpretations. Should this commitment lead to a tracking error, a real-time repair mechanism is invoked. RESC is thus a repair-based approach to tracking (like a repair-based approach to constraint satisfaction (Minton et al. 1990)).

A second key technique in RESC leads to its situatedness, i.e., ability to track the trackee's dynamic behaviors. A key assumption here is that the tracker is itself capable of the flexible and reactive behaviors required in this environment. That is, the tracker's architecture can execute such behaviors. Therefore, this architecture is reused to execute the trackee's model to allow dynamic model execution.

As a concrete example of RESC, consider D tracking J in Figure 1-d, assuming J is the only opponent. To illustrate architectural reuse for tracking, we first describe D's generation of its own behaviors. Figure 2-a illustrates D's operator hierarchy during its Fpole (Figure 1-d). The top operator, execute-mission, indicates that D is executing its mission (e.g., defend against intruders). Since the mission is not complete, D selects the intercept operator in a subgoal to combat its opponents. In service of intercept, D applies employ-missile in the next subgoal. Since a missile has been fired, D selects the fpole operator in the next subgoal to guide the missile with radar. In the final subgoal, maintain-heading causes D to maintain its heading. All these operators used for generating D's own actions will be denoted with the subscript D, e.g., fpole_D. Operator_D will denote an arbitrary operator in D's operator hierarchy. State_D will denote D's state. Together, state_D and the operator_D hierarchy constitute D's model of its present dynamic self, referred to as model_D.

Figure 2: (a) Model_D; (b) Model_DJ.
Dashed lines are unselected alternative operators. Model_D and Model_DJ need not be identical.

To reuse its own architecture for tracking J, D creates a second operator hierarchy, such as the one in Figure 2-b. The hierarchy represents D's model of J's current operators in the situation in Figure 1-d. These operators are denoted with the subscript DJ. This operator_DJ hierarchy and state_DJ constitute D's model of J, or model_DJ, used to track J's behavior. For instance, in the final subgoal, D applies the start-&-maintain-turn_DJ operator, which predicts J's action. Thus, if J starts turning right towards beam, then there is a match with model_DJ - D believes that J is turning right to beam and evade its missile, as indicated by the higher-level operators in the operator_DJ hierarchy. Such architectural reuse provides situatedness in RESC, e.g., operator_DJ may now be reactively terminated, and flexibly selected, to respond to the dynamic world situation. (Such architectural reuse is also possible with other architectures (Hayes-Roth, Brownston, & Gen 1995; Rao 1994).)

As for RESC's commitments, notice that from D's perspective, there is some ambiguity in J's right turn in Figure 1-d - it could be part of a 90° beam turn or a 150° turn to run away. Yet, D commits to just one operator_DJ hierarchy. This commitment may be inaccurate, resulting in a match failure, i.e., a difference between the model_DJ's prediction and the actual observed action. For example, if J were to actually turn 150°, there would be a match failure. RESC's primary repair mechanism to recover from such failures is "current-state backtracking", which involves backtracking over the operator_DJ hierarchy, within the context of the current continuously updated state. Thus, RESC attempts to generate a matching new operator_DJ hierarchy without re-examining past states.
Tracking with Team Models

To step beyond tracking individuals, and track a team's goals and intentions, team models are put to service. A tracker's model of a team consists of a team state and team operators. A team state is used to track a team's joint state, and it is the union of a shared part and a divergent part. The shared part is one assumed common to all team members (e.g., overarching team mission, team's participants). The divergent part refers to aspects where members' states differ (e.g., 3-D positions). One approach to define this divergent part is to compute a region or boundary encompassing all individual members; another approach may be to compute an average of individual attributes. While these approaches are desirable, in the absence of appropriate low-level sensors, their cost can be prohibitive. Therefore, the approach currently preferred in this work is to equate the divergent part to the state of a single paradigmatic (or representative) agent within the team, e.g., the team's orientation is the paradigmatic agent's orientation (as in (Tambe 1995)). Such a paradigmatic agent is selected by a separate module (which currently selects one agent in a prominent location). Thus, a generic team Θ is represented as {m_p, {m_1, ..., m_n}}, where m_i are some arbitrary number of team members, and m_p is the paradigmatic agent. Θ may have N sub-teams, θ_1, ..., θ_N, each also possessing its own members, state and paradigmatic agents. Unless subteams are known in advance, they are detected dynamically based on individual agent movements. Similarly, merging of subteams into a larger team is also detected (see the final section for further discussion of detecting (sub)teams). A dynamically detected subteam inherits the joint part, but not the divergent part. Thus, T, the team of opponents in Figure 1-b, consists of {J,{J,K,L,M}}, with two subteams S1={J,{J,K}} and S2={L,{L,M}}.
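The team-state representation above can be sketched as a small data structure. The class and field names below are our own, and taking a subteam's first listed member as its paradigmatic agent is a stand-in for the separate selection module mentioned in the text.

```python
# Sketch of a team state Θ = {m_p, {m_1, ..., m_n}}: a shared part
# common to all members, a divergent part equated with a paradigmatic
# member, and dynamically detected subteams that inherit only the
# joint part. Illustrative only, not the paper's implementation.

from dataclasses import dataclass, field

@dataclass
class Team:
    paradigmatic: str            # m_p, supplies the divergent part
    members: set                 # {m_1, ..., m_n}
    shared: dict = field(default_factory=dict)   # joint part (mission, ...)
    subteams: list = field(default_factory=list)

    def split(self, *groups):
        """Dynamically detected subteams inherit the joint part only."""
        self.subteams = [Team(paradigmatic=g[0], members=set(g),
                              shared=dict(self.shared)) for g in groups]
        return self.subteams

# The four opponents of Figure 1: T = {J, {J, K, L, M}}
T = Team(paradigmatic="J", members={"J", "K", "L", "M"},
         shared={"mission": "intercept"})
s1, s2 = T.split(["J", "K"], ["L", "M"])   # pincer subteams S1, S2
print(s1.members, s2.members)
```

A singleton team, as in the text, is then simply `Team(paradigmatic="m1", members={"m1"})`.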
For the sake of consistency, a single agent is considered a singleton team, which is its own paradigmatic agent: {m_1, {m_1}}.

A team operator in a team model represents the team's joint commitment to a joint activity. A key aspect of a team operator are the roles, which define activities that subteams undertake in service of the team operator. For instance, in the game of bridge, the opponents' bidding team operator has two roles, e.g., NORTH and SOUTH. The pincer team operator in Figure 1-b has two roles, LEFT and RIGHT. While the notion of roles has previously appeared in the context of teamwork (Kinny et al. 1992), it is exploited here via the specification of a role coherency constraint, i.e., there must be one subteam per role for the performance of the team operator. However, roles for a single operator need not all be distinct. For the team Θ, a team operator with R roles is denoted as operator_Θ<γ_1, ..., γ_R>. The children of this operator in the operator hierarchy must then define the activities for subteams in these roles. Thus, for instance, the opponents' pincer in Figure 1-b is denoted pincer_T<LEFT,RIGHT> (and D's model of it is denoted pincer_DT<LEFT,RIGHT>). Some abstract high level operators may require multiple definitions as they allow multiple role combinations. Such operators may essentially impose no role coherency constraints; hence their roles are not explicitly denoted.

Following is now the RESCteam approach to team tracking:

1. Execute the team model on own (tracker's) architecture. That is, commit to a team operator hierarchy and apply it to a team state to generate predictions of a team's action. In doing so, if alternative applicable operators are available (ambiguity): (a) Prefer ones where the number of subteams equals the number of roles. (b) If multiple operators are still applicable, heuristically select one.

2. Check for any tracking failures, specifically, match or role failures; if none, goto step 1.

3.
If failure, determine whether the failure is in tracking the entire team or just one subteam. If a team failure, repair the team operator hierarchy. If one subteam's failure, remove the subteam's assignment to its role in the team operator, and repair only the subteam hierarchy. Goto step 1.

Step 1 reuses the tracker's architecture for flexible team model execution, to track dynamic team activity. Step 1(a) selects among multiple operators based on the number of subteams, while 1(b) relies on domain-independent and -dependent heuristics for such selection, e.g., one heuristic is assuming the worst about an opponent in an adversarial setting. The commitment in step 1 creates a single team operator hierarchy. With this commitment RESCteam always has a current best working hypothesis about the team activity - an anytime quality suitable for a real-time domain.

In step 2, tracking failure is redefined in RESCteam. Match failure - where a team's actions (e.g., orientation) do not match RESCteam's current predictions - is certainly a tracking failure. However, in addition, inaccurate commitments in RESCteam can also cause role failure, a new tracking failure, which may occur in one of three ways due to violation of the role coherency constraint. First, role overload failure occurs if the number of subteams exceeds the number of roles in a team operator. Second, role undersubscribe failure occurs if the number of subteams falls short of the required number of roles - particularly, if subteams merge together. Third, role assignment failure occurs if the number of subteams equals the number of roles, but they do not match the roles. Both match and role failures cause the same repair mechanism to be invoked - current-state backtracking - although in case of role failures, operators with a higher (or lower) number of roles may be attempted next. (Abstract higher level operators, which may not impose role-coherency constraints, are not susceptible to role failures.)
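The control flow of the three steps can be made concrete with a toy, runnable rendering. The stub operators and the simplified failure check below are our own inventions; only the loop structure (commit with the role-count preference of step 1(a), check for match failure, repair by backtracking to an alternative operator) follows the steps above, and subteam-level repair is omitted for brevity.

```python
# A toy rendering of the RESC-team loop. Each operator predicts one
# observable team action; repair is a simplified stand-in for
# current-state backtracking.

class Op:
    def __init__(self, name, num_roles, predicts):
        self.name, self.num_roles, self.predicts = name, num_roles, predicts

def track(observations, operators, num_subteams):
    """Return the sequence of committed operator names over time."""
    committed, history = None, []
    for obs in observations:
        if committed is None:
            # Step 1(a): prefer operators whose role count matches the
            # number of subteams; 1(b): otherwise pick the first.
            preferred = [op for op in operators
                         if op.num_roles == num_subteams] or operators
            committed = preferred[0]
        # Step 2: match failure if the prediction disagrees with the
        # observed team action.
        if committed.predicts != obs:
            # Step 3 (team-level repair): backtrack to a matching
            # alternative operator, keeping the current state.
            alternatives = [op for op in operators
                            if op is not committed and op.predicts == obs]
            if alternatives:
                committed = alternatives[0]
        history.append(committed.name)
    return history

ops = [Op("pincer", 2, "split-turn"), Op("post-hole", 2, "circle"),
       Op("independent-attack", 1, "split-turn")]
print(track(["split-turn", "split-turn", "circle"], ops, num_subteams=2))
# ['pincer', 'pincer', 'post-hole']
```

With two subteams, the tracker first commits to the pincer (role count matches), then repairs to the post-hole tactic when the observed action stops matching, mirroring the anytime, commit-and-repair character of the approach.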
One novel issue in team tracking, outlined in step 3, is whether the match failure of one subteam is one of just that subteam or the whole team (discussed in the next section). The result of RESCteam in tracking the situation from Figure 1-a is shown in Figure 3-a. At the top-most level, execute-mission_DT denotes the operator that D uses to track T's joint mission execution. Since T's mission is not yet complete, D applies the intercept_DT operator in the subgoal to track T's joint intercept. In the next subgoal, employ-weapons_DT is applied. Following that, get-firing-position_DT tracks D's belief that T is attempting to get to a missile firing position, and so on. Each operator in this operator_DT hierarchy indicates D's model of T's joint commitment to that activity.

Figure 3: Team tracking via team operators. Some team operators require exactly one role. Rather than naming such a role, we denote it with a "##".

When the team in Figure 1-a splits into two subteams, role overload failure causes employ-weapons_DT to fail. After current-state backtracking, the operator_DT hierarchy in Figure 3-b results, which correctly tracks the on-going pincer. Here, pincer_DT<LEFT, RIGHT> has two roles. The children of this operator specify activities - starting left and right arms of the pincer - for the two subteams formed.

Team models and RESCteam provide the improved expressiveness, efficient ambiguity resolution and robustness required for team tracking. Team operators are expressive, since they explicitly encode a team's joint activity, with roles expressing different activities for subteams. For instance, the team operator hierarchy in Figure 3-b clearly expresses the team's joint pincer, the subteams involved and their roles in it.
Team operators also facilitate real-time ambiguity resolution by enforcing jointness. Furthermore, role coherency in team operators adds further constraints, since subteams may only fill unassigned roles. For instance, in Figure 3-b, if one subteam is assigned to the LEFT role of a pincer, the other must also be part of the pincer, and in fact, must fulfill the RIGHT role. Additionally, team models also execute fewer operator hierarchies, e.g., instead of executing four separate operator hierarchies corresponding to four individual opponents, D executes only one team operator hierarchy. Even if subteam hierarchies are generated, they are still fewer than the number of agent hierarchies. Finally, team models provide robustness due to abstraction from individual agents to teams and subteams. Thus, tracking is not disturbed if agents switch allegiance from one subteam to another, or heretofore unseen agents emerge in a team, etc., unless this forms new subteams.

Team models and RESCteam are applicable for tracking even if an agent is a collaborative participant in the team. Consider the scenario in Figure 3-c, which shows a team of simulated helicopters executing its mission (Tambe, Schwamb, & Rosenbloom 1995), again in the real-world combat simulation environment (Calder et al. 1993). Helicopter radio communications are often restricted to avoid detection by the enemy. It is thus essential for a helicopter pilot agent to infer relevant information from the actions of its teammates, e.g., the team has reached a pre-specified holding area since teammates have begun hovering. To track team activities, a team member executes a team model, using RESCteam. In Figure 3-c, the tracker happens to be a subordinate in the team, and the result of its tracking is the operator hierarchy shown. That is, the tracker believes its own team is jointly engaged in execute-mission. In service of mission execution, the team is flying a flight plan via a technique called travelling.
Travelling involves a LEAD role and two other FOLLOWER roles, causing the operator hierarchy to branch out. The key point in Figure 3-c is the two types of uniformity shown. First, team models and RESC_team are shown to be uniformly applicable in a collaborative situation. Second, the process of an agent's generation of its own actions and its tracking of its teammates' actions are also shown to be uniform. The tracker executes the follow-lead operator branch to generate its own behaviors, while executing the other branches to track teammates.

While team tracking has received little attention in the literature, researchers are investigating teamwork (Grosz & Sidner 1990; Cohen & Levesque 1991; Kinny et al. 1992). One leading theory of teamwork is the joint intentions framework (Cohen & Levesque 1991). Very briefly, this theory states that a team jointly intends an activity if it is jointly committed to completing that activity (commitments have a common escape clause). Joint commitment implies that (at least initially) team members have a mutual belief that they are each committed to that activity. Furthermore, a team jointly intending an activity leads subteams to intend to do their share in that activity, subject to the joint intention remaining valid.

In a team model, a team operator selected in an operator hierarchy (as in Figure 3) is or tracks a joint intention in the above sense. Thus, team models are among the first practical applications of the joint intentions framework; and their application here, tracking team activities, is certainly novel. This application raises one issue: joint intentions pack with them the responsibility of a team member when it privately comes to believe that the team's jointly intended activity is unachievable (or achieved); this team member is left with the commitment to communicate this private belief to its teammates.
However, if communication itself is very costly, e.g., breaking radio silence may be risky for a helicopter pilot agent, such a commitment may be inappropriate. Thus, when tracking, RESC_team does not assume that all subteams are aware of a subteam's deviation from its role (more in the next section).

Enhancements in Team Tracking

Efficient Role Assignments

While team operators do constrain tracking effort via jointness and role coherency, role assignment in team operators can potentially be inefficient. Given a team operator with R roles, a tracker may need to test all R! permutations of subteam-to-role assignments, generating R child operators in each test. Furthermore, such a team operator could itself be defined in terms of multiple role combinations. Although in some such cases almost no role combinations may be disallowed, in other cases there may be a fixed set of allowable role combinations. For example, half-pincer<STRAIGHT,RIGHT> and half-pincer<STRAIGHT,LEFT> are two separate definitions of half-pincer: in one role combination, one subteam flies straight to attack while a second subteam attacks from the right; the second combination involves an attack from the left. If there are C such role combinations, the total operators executed are R x R! x C. Furthermore, observations of match failure are often not instantly available, and thus real-time role assignment can be difficult.

To alleviate this inefficiency, any multiple definitions of a team operator, corresponding to its multiple role combinations, are unified together, with explicit constraints to define the allowable role combinations. The role assignment problem for this unified operator is now cast as a constraint satisfaction problem (CSP) (Kumar 1992). In this CSP, subteams are variables and possible roles are the domains of those variables. Constraints are the explicit constraints among roles, i.e., their allowable combinations.
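For scale, the R x R! x C worst-case count above works out as follows (a toy calculation; the function name and the example numbers are ours, not from the paper):

```python
from math import factorial

def worst_case_operators(R, C):
    # Worst-case operators executed in role assignment before unification:
    # C role combinations, R! subteam-to-role permutations per combination,
    # and R child operators generated per tested permutation.
    return R * factorial(R) * C

# A half-pincer-style operator with R = 2 roles and C = 2 allowable
# role combinations:
print(worst_case_operators(2, 2))  # -> 8
```

The multiplicative blow-up is what the CSP formulation below avoids by testing variables independently.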
Figure 4 shows the role assignment for half-pincer cast as a CSP. Subteams S1 and S2 are variables, with the roles of half-pincer as their domains. The static constraints specify that in a half-pincer, one subteam must take on the role STRAIGHT, and the other must take on either RIGHT or LEFT. Observations of subteam actions provide dynamic unary constraints.

Figure 4: Detailed illustrative CSP (two variables). Variables S1 and S2 each have the domain {LEFT, RIGHT, STRAIGHT}; the static constraints are {(LEFT, STRAIGHT), (RIGHT, STRAIGHT)}; dynamic observations impose unary constraints on each variable.

Unifying role combinations and casting the problem as a CSP provides several benefits. First, while detecting match failures is the simplest consistency check (node consistency), other constraint propagation techniques such as arc consistency or path consistency (Kumar 1992) can accelerate the process of role assignment (or of detecting failures). For instance, in Figure 4, if node consistency rules out the role STRAIGHT for subteam S2, then arc consistency will automatically rule out the roles LEFT and RIGHT for S1. This is important, given the complex constraint graphs tested (up to four variables) and the absence of immediate observations. Additionally, independently testing the node consistency of variables converts the multiplicative effect of generating combinations of role assignments into an additive effect. In general, given this mapping, more of tracking could be cast as a CSP, an issue for future work.

Based on the above, RESC_team was modified so that roles in a team operator are assigned to subteams via CSP (unless the role assignment is known). So far, only arc consistency has been incorporated in RESC_team. RESC_team solves this CSP via a repair-based approach (Minton et al. 1990): it commits to one assignment of roles to subteams, and dynamically repairs inconsistent assignments.
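The half-pincer example can be sketched as a small CSP (our own illustrative code, not the RESC_team implementation; variable and role names follow Figure 4):

```python
# Variables are subteams; domains are roles; a static constraint lists
# the allowable (S1-role, S2-role) combinations of the unified operator.
DOMAIN = {"S1": {"LEFT", "RIGHT", "STRAIGHT"},
          "S2": {"LEFT", "RIGHT", "STRAIGHT"}}
ALLOWED = {("LEFT", "STRAIGHT"), ("RIGHT", "STRAIGHT"),
           ("STRAIGHT", "LEFT"), ("STRAIGHT", "RIGHT")}

def node_consistency(domains, observations):
    # Dynamic unary constraints: observations rule out roles directly.
    for var, impossible in observations.items():
        domains[var] -= impossible

def arc_consistency(domains):
    # Prune any value of one variable with no supporting value in the other.
    changed = True
    while changed:
        changed = False
        for a, b in (("S1", "S2"), ("S2", "S1")):
            pairs = ALLOWED if a == "S1" else {(y, x) for x, y in ALLOWED}
            for v in set(domains[a]):
                if not any((v, w) in pairs for w in domains[b]):
                    domains[a].discard(v)
                    changed = True

# Observation: S2 is clearly not flying straight.
domains = {k: set(v) for k, v in DOMAIN.items()}
node_consistency(domains, {"S2": {"STRAIGHT"}})
arc_consistency(domains)
print(domains)  # S1 is forced to STRAIGHT; S2 keeps LEFT and RIGHT
```

Running this reproduces the propagation described in the text: ruling out STRAIGHT for S2 by node consistency lets arc consistency remove LEFT and RIGHT from S1 automatically.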
Minimum Cost Repair

Repairing role assignments to subteams is, however, more complex than indicated in the previous section. In particular, team tracking raises a novel issue: ambiguity about a subteam's degree of adherence to its role. If a subteam is strongly adherent, it is very likely to fulfill its role in the joint activity; but if it is weakly adherent, it may deviate. Thus, when a subteam is observed to not fulfill its role, there are two possible explanations: (i) the subteam, being strongly adherent, is fulfilling its role, but there is an error in tracking the entire team tactic; or (ii) this single subteam, being weakly adherent, is deviating from its role to respond to some event. For example, in Figure 1-d, we assumed that a weakly adherent subteam responded to a missile firing by abandoning the team's on-going pincer. However, if this subteam were known to be strongly adherent, then it would never deviate from its role, and thus there is an error in tracking: the whole team was not executing a pincer. A symmetrical issue arises if a subteam is seen to fulfill its role despite a reason to deviate. This could be because it is strongly adherent; but if it is weakly adherent, then there is an error in tracking.

This ambiguity greatly increases the search space of repairs in RESC_team. To tame the search, RESC_team uses the heuristic of minimal cost repair, where cost measures the amount of repair effort required to continue error-free tracking. Zero repair to the team model is naturally considered lowest cost. Repairing a single subteam's model (i.e., operators involving just a single subteam) within the team model is considered higher cost (given a possible reason for the subteam deviation). Repairing operators involving the entire team is considered to be even higher cost. The tracker uses this minimal repair cost heuristic in attributing a degree of adherence to subteams.
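The cost ordering just described can be sketched as follows (our own illustrative code; the predicate and repair names are ours, not the authors'):

```python
def choose_repair(role_fulfilled, deviation_event_observed):
    """Return the cheapest repair consistent with the observations.

    role_fulfilled: whether the subteam is acting out its assigned role.
    deviation_event_observed: whether an event (e.g., a missile fired at
    the subteam) could explain a deviation from the role.
    Repairs in increasing cost order: no repair, single-subteam repair,
    team-wide repair.
    """
    if role_fulfilled:
        # Fulfilling the role, even under pressure to deviate:
        # attribute strong adherence; zero repair needed.
        return "no_repair"
    if deviation_event_observed:
        # The event supports a cheap, single-subteam repair: attribute
        # weak adherence and assume this subteam abandoned its role.
        return "repair_subteam_model"
    # Deviation with no explaining event: assume a team-wide error.
    return "repair_team_model"

print(choose_repair(False, True))   # -> repair_subteam_model
print(choose_repair(False, False))  # -> repair_team_model
```

This is only a decision-table rendering of the heuristic; in RESC_team the chosen repair additionally affects constraint propagation, as described next.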
Thus, if a subteam is fulfilling its role despite events that could cause deviation from that role, e.g., if a missile has been fired at the subteam, RESC_team attributes strong adherence to the subteam; this attribution suggests zero repair for continued error-free tracking. If, however, a subteam does not fulfill its role during such an event, then RESC_team attributes weak adherence to the subteam: the event supports a cheap repair to explain this single subteam's deviation. Thus, RESC_team assumes that this single subteam has abandoned its role in the joint activity. In terms of the CSP, this abandonment implies disabling outward constraint propagation from this subteam's variable. Thus, this subteam is no longer assumed to be part of the joint activity; however, the other subteam is assumed to continue. In contrast, if a subteam deviates without an explaining event, RESC_team assumes team-wide error (and applies normal CSP). The pragmatic rationale behind this heuristic is that it reduces real-time repair expense. Theoretically, it leans towards parsimonious explanations (it is based on minimality of fault models (Stefik 1995)). This heuristic is actually already incorporated into individual-agent RESC, in that once committed to an interpretation, RESC avoids repairs until failure.

Implementation Results and Evaluation

We have implemented experimental variants of Soar-based pilot agents for both simulated fighters and helicopters. The original pilot agents have participated in various large-scale exercises, some involving expert human pilots (Tambe et al. 1995). Our experimental pilots (called pilot-tracker) incorporate RESC_team (and contain over 1000 rules). Promising results have led the team models to be ported to the original agents.

We have run the pilot-tracker agents in several combat simulation scenarios outlined by our human experts. Figure 5-a compares a fighter pilot-tracker's performance when tracking with team models versus when using individuals' models.
The scenario in Figure 1 is used as the basis for comparison, with four agents in the opponents' team. Figure 5-a shows the percent of its total time that pilot-tracker spends in acting and tracking. Thus, when using team models in tracking, a pilot-tracker spends only 18% of its time in tracking. In contrast, it spends 71% of its time in tracking when using individual agents' models. Basically, individual agents' models fail to correctly track the team's pincer. This failure is not simply a matter of inexpressivity: unable to exploit the team's jointness, the individual models also engage in a large, unconstrained tracking effort. Thus, they run out of time before they can each at least individually detect the pincer. Similarly, with team models, pilot-tracker spends only 7% of its time deciding on its own actions (SELF), since it can quickly and accurately track its opponents. In contrast, pilot-tracker incorrectly readjusts its own maneuvers when using individual models, and hence spends 28% of its time deciding on its own actions. Figure 5-b provides similar comparative numbers for a team consisting of three opponents. (Thus, when using team models, a pilot-tracker spends 25% (18% TRACKING + 7% SELF) of its time in mental activity, and for the rest it waits for its maneuvers to complete. When using individual models, most of the time is spent tracking.)

Figure 5: Comparing the efficiency of team models and individual models (percent of total time spent in TRACKING and SELF). (a) Four agents in team; (b) three agents in team. Time measured in simulation cycles.

Focusing only on role assignment, Table 1 presents the reduction in tracking effort due to team models and CSP (assuming four agents in each tactic). Column 1 names the different tactics (Shaw 1988).
Column 2 estimates the (worst-case) total number of operators searched assuming roles are assigned to individual agents rather than subteams. Column 3 shows the factor reduction in the operators searched when role assignment is based on teams/subteams. The final column shows the actual results from the RESC_team implementation, with operator role-unification and CSP. The reduction in tracking effort is substantial, and it will only grow with increasing numbers of agents.

Table 1: Factor reduction in role assignment effort due to operator role-unification, team models and CSP.

  Tactic name   | Num operators,    | Reduction, | Reduction,
                | individual models | team model | team+CSP
  Pincer        | 56                | 14         | 14
  Half-pincer   | 112               | 14         | 22
  Posthole      | 112               | 14         | 22
  Pincer-trail  | 144               | 8          | 24

Summary

The animal and human (natural) world is full of collaborative and competitive team activities: a mongoose team surrounding a cobra, a pack of cheetahs hunting prey, games of soccer or cricket, an orchestra, a discussion, a coauthored paper, a play, etc. It is only natural that this teamwork is (and will be) reflected in virtual and robotic agent worlds, e.g., robotic collaboration by observation (Kuniyoshi et al. 1994), RoboCup soccer (Kitano et al. 1995), virtual theatre (Hayes-Roth, Brownston, & Gen 1995), and virtual battlefields (Tambe et al. 1995). If agents are to successfully inhabit such worlds, they must understand and track team activity. This paper has taken a step towards this goal and advanced the state of the art in agent tracking and plan recognition.
Key contributions/ideas in this paper include: (i) the use of explicit team models for team tracking; (ii) uniform application of team models regardless of an agent's being a participant or non-participant in a team; (iii) demonstration of the key advantages of team models, specifically the efficiency, robustness and expressivity gained via jointness and team abstractions; (iv) the use of constraint satisfaction for improved tracking efficiency; and (v) the use of a minimal cost repair criterion to constrain the tracking search.

Team models are among the first to practically instantiate the theoretical joint intentions framework. While team models and the other key ideas could be applied in different approaches to tracking, we presented one specific approach: RESC_team. Although based on RESC, RESC_team includes several additions to address subteam formation (merging), role assignments and subteam deviations. RESC_team has been applied to two different tasks in a synthetic yet real-world environment: (i) a collaborative task involving simulated helicopters; and (ii) an adversarial task involving fighter combat.

One concern raised in generalizing this work to other domains is detecting that observed agents actually form a team. Currently, the team is either known in advance (e.g., helicopters) or it is detected based on agents' proximity to each other (e.g., pilot agents conclude that enemy fighters within 3-4 kilometers form a team). It appears that in many domains, teams are indeed known in advance (e.g., RoboCup soccer or collaboration by observation); nonetheless, future work will hopefully uncover some general heuristics for team detection. Another issue for future work is tracking (apparently) ill-structured team activity. To this end, we are currently testing RESC_team in the RoboCup soccer simulation (Kitano et al. 1995).
Acknowledgement: I thank Randy Hill and Paul Rosenbloom for valuable comments; Ramesh Patil, Kevin Knight, Gal Kaminka and Michael Wooldridge provided useful feedback. This research was supported as part of contract N66001-95-C-6013 from ARPA/ISO. Domain expertise was provided by Bob Richards and Dave Sullivan of BMH Inc.

References

Anderson, J. R.; Boyle, C. F.; Corbett, A. T.; and Lewis, M. W. 1990. Cognitive modeling and intelligent tutoring. Artificial Intelligence 42:7-49.

Calder, R. B.; Smith, J. E.; Courtemanche, A. J.; Mar, J. M. F.; and Ceranowicz, A. Z. 1993. ModSAF behavior simulation and control. In Proceedings of the Conference on Computer Generated Forces and Behavioral Representation.

Cohen, P. R., and Levesque, H. J. 1991. Teamwork. Nous 25(4).

Grosz, B. J., and Sidner, C. L. 1990. Plans for discourse. Cambridge, MA: MIT Press. 417-445.

Hayes-Roth, B.; Brownston, L.; and Gen, R. V. 1995. Multiagent collaboration in directed improvisation. In Proceedings of the International Conference on Multi-Agent Systems (ICMAS-95).

Kautz, H. A., and Allen, J. F. 1986. Generalized plan recognition. In Proceedings of the National Conference on Artificial Intelligence, 32-37. Menlo Park, Calif.: AAAI Press.

Kinny, D.; Ljungberg, M.; Rao, A.; Sonenberg, E.; Tidhar, G.; and Werner, E. 1992. Planned team activity. In Castelfranchi, C., and Werner, E., eds., Artificial Social Systems, Lecture Notes in AI 830. New York: Springer Verlag.

Kitano, H.; Asada, M.; Kuniyoshi, Y.; Noda, I.; and Osawa, E. 1995. RoboCup: The robot world cup initiative. In Proceedings of the IJCAI-95 Workshop on Entertainment and AI/Alife.

Kumar, V. 1992. Algorithms for constraint-satisfaction problems: A survey. AI Magazine 13(1).

Kuniyoshi, Y.; Rougeaux, S.; Ishii, M.; Kita, N.; Sakane, S.; and Kakikura, M. 1994. Cooperation by observation: the framework and the basic task pattern. In Proceedings of the IEEE International Conference on Robotics and Automation.
Maes, P.; Darrell, T.; Blumberg, B.; and Pentland, A. 1994. Interacting with animated autonomous agents. In Bates, J., ed., Proceedings of the AAAI Spring Symposium on Believable Agents.

Minton, S.; Johnston, M. D.; Philips, A.; and Laird, P. 1990. Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proceedings of the National Conference on Artificial Intelligence.

Newell, A. 1990. Unified Theories of Cognition. Cambridge, Mass.: Harvard Univ. Press.

Pimentel, K., and Teixeira, K. 1994. Virtual Reality: Through the New Looking Glass. Blue Ridge Summit, PA: Windcrest/McGraw-Hill.

Rao, A. S., and Murray, G. 1994. Multi-agent mental-state recognition and its application to air-combat modelling. In Proceedings of the Workshop on Distributed Artificial Intelligence (DAI-94). Menlo Park, Calif.: AAAI Press, Technical Report WS-94-02.

Rao, A. S. 1994. Means-end plan recognition: Towards a theory of reactive recognition. In Proceedings of the International Conference on Knowledge Representation and Reasoning (KR-94).

Shaw, R. L. 1988. Fighter Combat: Tactics and Maneuvers. Annapolis, Maryland: Naval Institute Press.

Stefik, M. 1995. Introduction to Knowledge Systems. Palo Alto, CA: Morgan Kaufmann.

Tambe, M., and Rosenbloom, P. S. 1995. RESC: An approach for real-time, dynamic agent tracking. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).

Tambe, M.; Johnson, W. L.; Jones, R.; Koss, F.; Laird, J. E.; Rosenbloom, P. S.; and Schwamb, K. 1995. Intelligent agents for interactive simulation environments. AI Magazine 16(1).

Tambe, M.; Schwamb, K.; and Rosenbloom, P. S. 1995. Building intelligent pilots for simulated rotary wing aircraft. In Proceedings of the Fifth Conference on Computer Generated Forces and Behavioral Representation.

Tambe, M. 1995. Recursive agent and agent-group tracking in a real-time dynamic environment.
In Proceedings of the International Conference on Multi-Agent Systems (ICMAS).
Generation of Attributes for Learning Algorithms

Yuh-Jyh Hu and Dennis Kibler
Information and Computer Science Department
University of California, Irvine
{yhu, kibler}@ics.uci.edu

Abstract

Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results which demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.

Introduction

The ability of an inductive learning algorithm to find an accurate concept description depends heavily upon the representation. Concept learners typically make strong assumptions about the vocabulary used to represent these examples. The vocabulary of features determines not only the form and size of the final concept learned, but also the speed of the convergence (Fawcett & Utgoff, 1991). Learning algorithms that consider a single attribute at a time may overlook the significance of combining features. For example, C4.5 (Quinlan, 1993) splits on the test of a single attribute while constructing a decision tree, and CN2 (Clark & Niblett, 1989; Clark & Boswell, 1991) specializes the complexes in Star by conjoining a single literal or dropping a disjunctive element in its selector. Such algorithms suffer from the standard problem of any hill-climbing search: the best local decision may not lead to the best global result.

One approach to mitigate these problems is to construct new features. Constructing new features by hand is often difficult (Quinlan, 1983).
The goal of constructive induction is to automatically transform the original representation space into a new one where the regularity is more apparent (Dietterich & Michalski, 1981; Aha, 1991), thus yielding improved classification accuracy. Several machine learning algorithms perform feature construction by extending greedy, hill-climbing strategies, including FRINGE, GREEDY3 (Pagallo & Haussler, 1990), DCFringe (Yang et al., 1991), and CITRE (Matheus & Rendell, 1989). These algorithms construct new features by finding local patterns in the previously constructed decision tree. These new attributes are added to the original attributes, the learning algorithm is called again, and the process continues until no new useful attributes can be found. However, such greedy feature construction algorithms often show their improvement only on artificial concepts (Yang et al., 1991). FRINGE-like algorithms are limited by the quality of the original decision tree. Some reports show that if the basis for the construction of the original tree is a greedy, hill-climbing strategy, accuracies remain low (Rendell & Ragavan, 1993).

An alternative approach is to use some form of lookahead search. Exhaustive lookahead algorithms like IDX (Norton, 1989) can improve decision tree inductive learners without constructing new attributes, but the lookahead is computationally prohibitive, except in simple domains. LFC (Ragavan & Rendell, 1993; Ragavan et al., 1993) mitigates this problem by using directed lookahead and by caching features. However, LFC's quality measure (i.e., its blurring measure) limits this approach.

In this paper, we introduce a new feature construction algorithm which addresses both the bias of the initial representation and the search complexity in constructive induction.

Issues

There are several important issues about the constructive induction process.
Interleaving vs Preprocessing

By interleaving we mean that the learning process and the constructive induction process are intertwined into a single algorithm. Most current constructive induction algorithms fall into this category. This limits the applicability of the constructive induction method. By keeping these processes separate, the constructive induction algorithm can be used as a preprocessor to any learning algorithm. With the preprocessor model one can also test the appropriateness of the learned features over a wider class of learning algorithms. GALA follows the preprocessing approach.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Hypothesis-driven vs Data-driven

Hypothesis-driven approaches construct new attributes based on the hypotheses generated previously. This is a two-edged sword: they have the advantage of previous knowledge and the disadvantage of being strongly dependent on the quality of that previous knowledge. On the other hand, data-driven approaches cannot benefit from previous hypotheses, but can avoid the strong dependence. GALA is data-driven.

Absolute Measures vs Relative Measures

Absolute measures evaluate the quality of an attribute on the training examples without regard to how the attribute was constructed. Examples of such measures include entropy variants (e.g., blurring), gain ratio, and error rate. While it is important that a new attribute performs well, it is also important that it has a significant improvement over its parents. We refer to this as a relative measure. GALA uses both relative and absolute measures. GALA's relative measure is different from STAGGER's (Schlimmer, 1987) in two respects. First, STAGGER metrics are based on statistical measures rather than information measures. Second, STAGGER evaluates the quality of a son by its absolute difference in quality from its parents (i.e., using the parent's quality as a threshold) while GALA uses the relative quality difference.
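The contrast between absolute and relative measures can be made concrete with a minimal runnable sketch (our own illustrative code, not the authors' implementation; it anticipates the gain ratio and relative gain ratio definitions given later in the paper, and the toy data is ours):

```python
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def gain_ratio(split, labels):
    # Absolute measure: gain ratio of a boolean attribute, where `split`
    # gives the attribute's truth value on each training example.
    n = len(labels)
    groups = [[y for s, y in zip(split, labels) if s],
              [y for s, y in zip(split, labels) if not s]]
    gain = entropy(labels) - sum(len(g) / n * entropy(g) for g in groups if g)
    split_info = -sum(len(g) / n * log2(len(g) / n) for g in groups if g)
    return gain / split_info if split_info else 0.0

def relative_gain_ratios(gr_conj, gr_parent1, gr_parent2):
    # Relative measure: how much of the conjunction's gain ratio is
    # synergy over each parent (the paper's UPPER-RGR and LOWER-RGR).
    deltas = [(gr_conj - gr_parent1) / gr_conj,
              (gr_conj - gr_parent2) / gr_conj]
    return max(deltas), min(deltas)

# Toy data: the class is '+' exactly when both a and b hold, so the
# conjunction a&b scores far better than either parent alone.
a = [True, True, True, False, False, False]
b = [True, True, False, True, False, False]
labels = ["+", "+", "-", "-", "-", "-"]
conj = [x and y for x, y in zip(a, b)]
gr_a, gr_b, gr_ab = (gain_ratio(s, labels) for s in (a, b, conj))
upper, lower = relative_gain_ratios(gr_ab, gr_a, gr_b)
print(gr_ab, upper, lower)
```

Here the conjunction's absolute gain ratio is maximal (it separates the classes perfectly), and both relative gain ratios are well above zero, which is exactly the kind of synergy the relative measure is designed to detect.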
Operators

The simplest operators for constructing new attributes are boolean operators, which are what most constructive induction algorithms use. One could also consider relational operators or operations based on clustering. Currently GALA only uses (iteratively) the boolean operators "and" and "not".

Attribute Types

We say an attribute is crisp if it has a relatively small description as a boolean combination of the primitive features. Otherwise the attribute is not crisp. A common type of non-crisp attribute is a prototypical attribute. A prototypical attribute corresponds to an m-of-n concept. For example, the 5-of-10 concept requires 252 conjuncts when described in disjunctive normal form. Obviously there is a spectrum between crisp and non-crisp attributes. GALA finds crisp attributes.

The GALA Algorithm

The idea of GALA (Generation of Attributes for Learning Algorithms) is to consider those constructed features which have high relative and absolute gain ratio. This will be defined more precisely below. In later sections we show the advantage of GALA. We also show the value of using a combined metric with ablation studies.

Given: Primitive attributes P, training examples E, threshold, cycle limit c and new attributes NEW
       (NEW is empty when GALA is invoked the first time)
Return: a set of new attributes NEW
Procedure GALA(P, E, threshold, c, NEW)
  If (size(E) greater than threshold) and (E is not all of same class) Then
    Set Bool to boolean attributes from Booleanize(P, E)
    Set Pool to attributes from Generate(Bool, E, c)
    Set Best to attribute in Pool with highest gain ratio
    Add Best to NEW (if more than one, pick one of smallest size)
    Split on Best
    N = empty set
    For each outcome Si of the split on Best
      Ei = examples with outcome Si on the split
      NEWi = GALA(P, Ei, threshold, c, NEW)
      N = union of N and NEWi
    NEW = union of NEW and N
    Return NEW
  Else Return empty set

Figure 1: GALA

Given: Attributes P and examples E.
Return: set of candidate boolean attributes.
Procedure Booleanize(P, E)
  Set Bool to empty.
  For each attribute f in P,
    find the v such that Pos(f,v) has the highest gain ratio on E.
    Add Pos(f,v) and Neg(f,v) to Bool.
  Return Bool

Figure 2: Transforming real and nominal attributes to boolean attributes.

The algorithm has three basic steps. The general flow of the algorithm is generate-and-test, but it is complicated since testing is interlaced with generation. The overall control flow is given in Figure 1. For each partition of the data, only one new attribute is added to the original set of primitives. Partitioning is stopped when the set is homogeneous or below a fixed size, currently set at 10. The following subsections describe the basic steps in more detail.

Booleanize

Suppose f is a feature and v is any value in its range. We define two boolean attributes, Pos(f,v) and Neg(f,v), as follows:

  Pos(f,v) = (f = v)  if f is a nominal or boolean attribute
             (f >= v) if f is a continuous attribute

  Neg(f,v) = (f != v) if f is a nominal or boolean attribute
             (f < v)  if f is a continuous attribute

The idea is to transform each real or nominal attribute to a single boolean attribute by choosing a single value from its range. The algorithm is more precisely defined in Figure 2. This process takes O(AVE) time where A is the number of attributes, V is the maximum number of attribute-values, and E is the number of examples. The net effect is that there will be two boolean attributes associated with each primitive attribute.

Given: a set of boolean attributes P, a set of training examples E, cycle limit C
Return: new boolean attributes
Procedure Generate(P, E, C)
  Let Pool be P
  Repeat
    For each conjunction of Pos(f,v) or Neg(f,v) with Pos(g,w) or Neg(g,w),
      If the conjunction passes GainFilter, add it to Pool
  Until no new attributes are found or cycle limit C is reached
  Return Pool

Figure 3: Attribute Generation

Given: a set of new attributes N
Return: new attributes with high GR and RGR
Procedure GainFilter
  Set M to those attributes in N whose gain ratio is better than mean(GR(N)).
Set M’ to those attributes in M whose UPPER-RGR is better than mean(UPPER-RGR(N)) or LOWER-RGR is better than mean(LOWER-RGR(N)) Return M’. Figure 4: ratio Filtering by mean absolute and relative gain boolean tribute. attributes associated with each primitive at- Generation Conceptually, GALA uses only two operators, conjunc- tion and negation, to construct new boolean attributes from old ones. Repetition of these operators yields the possibility of generating any boolean feature. However only attributes with high heuristic value will be kept. Figure 3 describes the iterative and interlaced gener- ate and test procedure. If in each cycle we only keep B best new attributes (i.e., beam size is B), the procedure takes O(cB2E) time where c is the pre-determined cy- cle limit, B is the beam size, and E is the number of examples. Heuristic Gain Filtering In general, if A is an attribute we define GR(A) as the gain ratio of A. If a new attribute A is the conjunction of attributes Al and A2, then we define two relative gain ratios associated with A as: UPPER-RGR(A) = max{ GR(A) - GR(A1) GR(A) - GR(A2) GR(A) ’ GRC-4) 1. LOWER-RGR(A) = min{ GR(A) - GR(A1) GR(A) - GR(A2)) GR(A) ’ GR(A) ’ We consider the relative gain ratio only when the con- junction has a better gain ratio than each of its par- ents. Consequently this measure ranges from 0 to 1 and is a measure of the synergy of the conjunction over the value of the individual attributes. To con- sider every new attribute during feature construction is computational impractical. Coupling relative gain ratio with gain ratio constrains the search space with- out overlooking many useful attributes. We define mean(GR(S)) as th e average absolute gain ratio of each attribute in S. We also define the mean relative gain ratios (mean(UPPER-RGR(S)) and mean(LOWER- RGR(S))) over a set S of attributes similarly. We use these measures to define the GainFilter in Figure 4. Experimental We carried out three types of experiments. 
First, we show that GALA performs comparably with LFC on an artificial set of problems that Ragaven & Rendell (1993) used. They have kindly provided us with the code to LFC so that we could also perform other tests (Thanks to Ricardo Vilalta). Second, we consider the performance of GALA on a number of real world do- mains. Last, we consider various ablation studies to verify that our system is not overly complicated, i.e. the various components add power to the algorithm. In all experiments, the parameters of C4.5 and CN2 were set to default values to keep the consistency. For the backpropagation algorithm, we used the best pa- rameter settings after several tries. The learning rate and the momentum was between 0 and 1. We also adopted the suggested heuristics for a fully connected network structure: initial weights selected at random and a single hidden layer whose number of nodes was half the total number of input and output nodes (Ra- gavan & Rendell 1993; Rumelhart et. al., 1986) Artificial Domains We chose the same four boolean function as did Ragavan & Rendell (1993) and used their training methodology. Each boolean was defined over nine at- tributes, where four or five attributes were irrelevant. The training set had 32 examples and the remaining 480 were used for testing. The four boolean functions were: fl = x1x2x3 + 2122x4 + x1x2x5 f2 = 5122%3 + 2224??3 + jc3z4itl f3 = &j&x6 + ii62@5 + jc&& f4 = zf5xlx8 + 282451 + i?$&$xl These boolean functions are progressively more diffi- cult to learn, as verified by the experiments with C4.5, or in terms of blurring measure (see Ragavan & Ren- dell, 1993). Th e results of this experiment with respect to the learning algorithms C4.5, CN2 (using Laplace accuracy instead of entropy), perceptron, and back- prop are reported in Table 1. Each result is averaged over 20 runs. 
GALA always significantly improved perceptron and backpropagation, and usually improved the performance of C4.5 and CN2. This demonstrates that the construction process is different from the learning process. It also shows that the features generated by GALA are valuable with respect to several rather different learning algorithms. We believe this is a result of combining relative and absolute measures, so that the constructed attribute will be different from the ones generated by the learning algorithm.

Table 1: Accuracy and hypothesis complexity comparison for artificial domains. Significant improvement by GALA over learning algorithms is marked with *, and a significant difference between GALA+C4.5 (or CN2, perceptron, backprop) and LFC is marked with 1 (or 2, 3, 4).

Previous reports indicated that LFC outperforms many other learners in several domains (Ragavan & Rendell, 1993; Ragavan et al., 1993). The results of LFC are also reported in the table. In Table 1, from top to bottom, the rows denote: the accuracy of the learning algorithm, the accuracy of the learning algorithm using the generated attributes, the concept size (node count for C4.5 and number of selectors for CN2) without generated attributes, the concept size after using GALA, the accuracy of LFC, the concept size (node count) for LFC, and the number of new attributes LFC used. In all of these experiments, GALA produced an average of only one new attribute, and this attribute was always selected by C4.5 and CN2. The differences in hypothesis complexities and accuracies are significant at the 0.01 level in a paired t-test. Because CN2, which is a rule induction system, is different from decision tree induction algorithms, we did not compare its hypothesis complexity with LFC's.
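The significance figures quoted here come from ordinary paired t-tests; for reference, the statistic (compared against a t table with n−1 degrees of freedom) can be computed as in this small sketch, which is a generic illustration rather than the authors' code:

```python
import math

def paired_t_statistic(xs, ys):
    """Paired t statistic for matched samples xs and ys (e.g., per-run
    accuracies with and without generated attributes). Significance is
    then read from a t distribution with len(xs) - 1 degrees of freedom."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```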
In no case was the introduction of new attributes detrimental, and in 13 out of 16 cases the generated attribute was useful.

Real Domains

We are more interested in demonstrating the value of GALA on real-world domains. The behavior of GALA on these domains is very similar to its behavior on the artificial domains. We selected from the UCI repository several domains that have a mixture of nominal and continuous attributes: Cleveland heart disease, Bupa liver disorder, Credit screening, Pima Diabetes, Wisconsin Breast Cancer, Wine, and Promoters. In each case, two thirds of the examples form the training set and the remaining examples form the test set. The results of these experiments are given in Table 2. We do not report the results for the Diabetes domain since there was no significant difference in any field. Besides providing the same information as for the artificial domains, we have also added two additional fields: the average number of new attributes generated by GALA and the average number of attributes used (i.e., included in the final concept description) by the learning algorithm. We did not apply LFC or perceptron to the wine domain because they require two classes and the wine domain has three classes. Again each result is averaged over 20 runs. Table 2 shows the results for accuracies, hypothesis size, number of new attributes added, and number of new attributes used. The differences in concept complexities are significant at the 0.01 level, and the differences in accuracies are significant at the 0.02 level, in a paired t-test. In no case was the introduction of the generated attributes harmful to the learning algorithm, and in 21 out of 27 cases (6 out of 7 for C4.5, 4 out of 7 for CN2, 5 out of 6 for perceptron, 6 out of 7 for backprop) GALA significantly increased the resulting accuracy.
Excluding the Diabetes domain, GALA always improved the accuracy of both backpropagation and perceptron, mimicking the results for the artificial concepts. For CN2 and C4.5, GALA sometimes improved the performance and never decreased the accuracy.

Ablation Studies

To further demonstrate the value of combining the absolute gain ratio with the relative gain ratio, we conducted the following studies. First, we evaluated the importance of GainFilter (i.e., of combining the relative gain ratio with the absolute gain ratio). We reran all the experiments, including the artificial and the real domains, but used only the attributes not passing GainFilter to construct new attributes. The significant decrease in accuracy across all domains indicates that GainFilter effectively keeps promising attributes for constructing new attributes. Those attributes that would not contribute to forming useful new attributes were successfully filtered out by GainFilter.

Table 2: Accuracy and hypothesis complexity comparison for real domains. Significant improvement by GALA over learning algorithms is marked with *, and a significant difference between GALA+C4.5 (or CN2, perceptron, backprop) and LFC is marked with 1 (or 2, 3, 4).

Domain    | Heart       | Liver       | Credit      | Breast      | Wine        | Promoter
C4.5      | 72.3 ± 2.1  | 62.1 ± 5.0  | 81.6 ± 2.5  | 93.6 ± 1.4  | 89.5 ± 4.9  | 73.9 ± 8.8
GALA+C4.5 | 76.4 ± 2.5* | 65.4 ± 3.8* | 83.3 ± 2.2* | 95.2 ± 1.4* | 93.8 ± 3.0* | 79.5 ± 7.8*

Due to space considerations, we only report the accuracies for C4.5+GALA and C4.5+GALA- in Table 3, where GALA- stands for the modified algorithm. Second, we evaluated the value of the relative measure. In any iterative feature construction process, the major difficulty is determining the usefulness of attributes for the later process. One obvious way to avoid overlooking promising attributes is to keep all the attributes, but this is computationally prohibitive. Thus beam search naturally comes into play.
However, when the absolute measure is the only quality criterion, the danger of overlooking promising attributes increases. For example, a new attribute with only a minor chance increase in absolute gain ratio over its parents may be mistakenly selected into the beam because it happens to have a high absolute gain ratio. On the other hand, a new attribute with a significant increase in gain ratio over its parents, but a low absolute gain ratio itself, may be mistakenly ruled out of the beam even though it is promising. Increasing the beam size is one way to address the problem, but it is difficult to predict the right beam size, and arbitrarily increasing the beam size is also computationally prohibitive. We first demonstrated the problem mentioned above to emphasize the need for a measure other than the absolute one. We reran all the experiments using only the absolute gain ratio with a beam size of ten, and compared the results with those of GALA with the same beam size. GALA's accuracies for the heart, wine, and promoter domains were significantly better (by 1% to 3%, depending on the domain and the learning algorithm). The results for the other domains were not significantly different. Though the absolute gain ratio beam search sometimes reached the same accuracy as GALA, its search space was much larger (i.e., by 100% to 300%, depending on the domain).

Table 3: Effectiveness test on GainFilter (C4.5+GALA / C4.5+GALA-)

Function | C4.5+GALA  | C4.5+GALA-
f1       | 95.6 ± 2.0 | 93.8 ± 4.4
f2       | 95.2 ± 5.7 | 91.9 ± 5.7
f3       | 87.9 ± 6.8 | 84.0 ± 6.3
f4       | 85.9 ± 7.8 | 77.8 ± 9.1

The above results showed that the absolute measure does have the problem mentioned earlier. Increasing the beam size could solve the problem, but by how much would we need to increase it?
This suggests that we need another measure which, combined with the absolute measure, can not only put the right attributes in the beam and avoid overlooking promising attributes, but also effectively constrain the search space. Thus we introduced the relative measure. To further validate the contribution of the relative measure, we intentionally removed it from GainFilter (refer to Figure 4) to increase the search space; the new search space covered the old one. Again we reran all the experiments, and found that the accuracies were not significantly different, but the search space increased dramatically (i.e., by 25% to 200%, depending on the domain). Based on the above studies, we conclude that the absolute measure is insufficient, but combined with the relative measure it can both avoid overlooking important information and effectively constrain the search space.

Conclusion and Future Research

This paper presents a new approach to constructive induction which generates a small number (1 to 3 on average) of new attributes. The GALA method is independent of the learning algorithm, so it can be used as a preprocessor to various learning algorithms. We demonstrated that it could improve the accuracy of C4.5, CN2, the perceptron algorithm, and the backpropagation algorithm. The method relies on combining an absolute measure of quality with a relative one, which encourages the algorithm to find new features that are outside the space generated by standard machine learning algorithms. Using this approach we demonstrated significant improvement in several artificial and real-world domains and no degradation in accuracy in any domain. There is no pruning technique incorporated in the current version of GALA. We suspect that some new attributes generated by GALA may be too complicated to improve the accuracy; therefore, one direction of future research is to study various pruning techniques.
Another limitation of the current GALA is that it has only two Boolean operators: "and" and "not". Extending the operators could not only improve the performance of GALA but also expand its applicability. With hindsight, the results of GALA might have been anticipated. Essentially, GALA constructs crisp boolean features. Since some boolean features are outside the space of a perceptron, we should expect that GALA would help this learner the most. In fact, perceptron training with GALA-generated features was nearly as good as more general symbolic learners. With respect to backpropagation the expectation is similar: while neural nets can learn crisp features, their search bias favors prototypical attributes. Consequently we again expect that GALA will aid neural net learning. While our results were less impressive than for perceptron and backprop, it is almost more surprising that GALA improves CN2 and C4.5. Both of these algorithms find crisp attributes, although their search biases are somewhat different. However, for both algorithms, GALA found useful additional crisp attributes. This demonstrates that GALA's search is different from either of these algorithms. What GALA lacks is the ability to find non-crisp attributes; in our future research we intend to extend GALA to also find non-crisp attributes.

References

Aha, D. "Incremental Constructive Induction: An Instance-Based Approach", in Proceedings of the 8th Machine Learning Workshop, p. 117-121, 1991.

Clark, P. & Boswell, R. "Rule Induction with CN2: Some Recent Improvements", European Working Session on Learning, p. 151-161, 1991.

Clark, P. & Niblett, T. "The CN2 Induction Algorithm", Machine Learning 3, p. 261-283, 1989.

Dietterich, T. G. & Michalski, R. S. "Inductive Learning of Structural Descriptions: Evaluation Criteria and Comparative Review of Selected Methods", Artificial Intelligence 16 (3), p. 257-294, 1981.

Fawcett, T. E. & Utgoff, P. E.
"Automatic Feature Generation for Problem Solving Systems", in Proceedings of the 9th International Workshop on Machine Learning, p. 144-153, 1992.

Matheus, C. J. & Rendell, L. A. "Constructive Induction on Decision Trees", in Proceedings of the 11th International Joint Conference on Artificial Intelligence, p. 645-650, 1989.

Norton, S. W. "Generating Better Decision Trees", in Proceedings of the 11th International Joint Conference on Artificial Intelligence, p. 800-805, 1989.

Pagallo, G. & Haussler, D. "Boolean Feature Discovery in Empirical Learning", Machine Learning 5, p. 71-99, 1990.

Quinlan, J. R. "Learning Efficient Classification Procedures and Their Application to Chess End Games", in Michalski et al. (Eds.), Machine Learning: An Artificial Intelligence Approach, 1983.

Quinlan, J. R. C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA, 1993.

Ragavan, H., Rendell, L., Shaw, M., Tessmer, A. "Complex Concept Acquisition through Directed Search and Feature Caching", in Proceedings of the 13th International Joint Conference on Artificial Intelligence, p. 946-958, 1993.

Ragavan, H. & Rendell, L. "Lookahead Feature Construction for Learning Hard Concepts", in Proceedings of the 10th Machine Learning Conference, p. 252-259, 1993.

Rendell, L. A. & Ragavan, H. "Improving the Design of Induction Methods by Analyzing Algorithm Functionality and Data-Based Concept Complexity", in Proceedings of the 13th International Joint Conference on Artificial Intelligence, p. 952-958, 1993.

Rumelhart, D. E., Hinton, G. E., Williams, R. J. "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing: Explorations in the Microstructures of Cognition, Vol. 1, p. 318-362, 1986.

Schlimmer, J. C. "Incremental Adjustment of Representations for Learning", in Proceedings of the Fourth International Workshop on Machine Learning, p. 79-90, 1987.

Yang, D-S., Rendell, L. A., Blix, G.
"A Scheme for Feature Construction and a Comparison of Empirical Methods", in Proceedings of the 12th International Joint Conference on Artificial Intelligence, p. 699-704, 1991.
Structural Regression Trees

Stefan Kramer
Austrian Research Institute for Artificial Intelligence
Schottengasse 3
A-1010 Vienna, Austria
stefan@ai.univie.ac.at

Abstract

In many real-world domains the task of machine learning algorithms is to learn a theory for predicting numerical values. In particular, several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational, mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with non-determinate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems. SRT integrates the statistical method of regression trees into ILP. It constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy.

Introduction

Many real-world machine learning domains involve the prediction of a numerical value. In particular, several test domains used in Inductive Logic Programming (ILP), including the Mesh data sets (Dolsak, Bratko, & Jezernik 1994) and the problem of learning quantitative structure-activity relations (QSAR) (Hirst, King, & Sternberg 1994a; 1994b), are concerned with the prediction of numerical values from examples and relational background knowledge.
This kind of learning problem is called relational regression in (Dzeroski 1995), and can be formulated in the "normal" ILP framework (i.e., it is not part of the non-monotonic ILP framework, which includes the closed-world assumption). Nevertheless, relational regression differs from other ILP learning tasks in that there are no negative examples.

In this paper we present Structural Regression Trees (SRT), a new algorithm for relational regression. SRT can be viewed as integrating the statistical method of regression trees (Breiman et al. 1984) into ILP. To simplify the presentation, we first review work in statistics and machine learning that is related to our approach. In the third section we describe the method, including the solution to the problem of non-determinate literals. Furthermore, we present a new method for detecting outliers by analogy. Subsequently, we discuss results of experiments in several real-world domains. Finally, we draw our conclusions and sketch possible directions of further research.

Related Work

The classical statistical model for the prediction of numerical values is linear least-squares regression. Refinements and extensions such as non-linear models are also well known and used in many real-world applications. However, regression models have several limitations. First, regression models are often hard to understand. Second, classical statistical methods assume that all features are equally relevant for all parts of the instance space. Third, regression models do not allow for easy utilization of domain knowledge; the only way to include knowledge is to "engineer" features and to map these symbolic features to real-valued features.

To address some of these problems, regression tree methods (CART (Breiman et al. 1984), RETIS (Karalic 1992), M5 (Quinlan 1992)) have been developed. Regression trees are supposed to be more comprehensible than traditional regression models.
Furthermore, regression trees by definition do not treat all features as equally relevant for all regions of the instance space. The basic idea of regression trees according to CART is to minimize the least squared error for the next split of a node in the tree, and to predict, for unseen instances, the average of the dependent variable of all training instances covered by the matching leaf. RETIS and M5 differ in that they do not assign single values to the leaves, but linear regression models. Sophisticated post-pruning methods have been developed for CART, since the method grows the tree until every leaf is "pure", i.e., the leaves cover instances sharing the same value of the dependent variable. The regression tree resulting from the growing phase is usually bigger than a classification tree, since it takes more nodes to achieve pure leaves.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Manago's KATE (Manago 1989) learns decision trees from examples represented in a frame-based language that is equivalent to first-order predicate calculus. KATE makes extensive use of a given hierarchy and heuristics to generate the branch tests. To our knowledge, KATE was the first system to induce first-order theories in a divide-and-conquer fashion. Watanabe and Rendell (Watanabe & Rendell 1991) also investigated the use of divide-and-conquer for learning first-order theories. Although their so-called structural decision trees are used for the prediction of categorical classes and not continuous classes, it is the closest work found in the literature.

A more detailed comparison between FORS and SRT shows that FORS may induce clauses containing linear regression models, whereas SRT does not. Linear regression models usually improve the accuracy of the induced models, but they also reduce their comprehensibility. Moreover, FORS requires setting thirteen parameters, whereas SRT only has one parameter.
Seven parameters of FORS are used to control pre-pruning, whereas SRT uses a simple method for tree selection based on the minimum description length (MDL) principle (Rissanen 1978). Most importantly, we believe that the theories learned by FORS are very hard to understand, since the rules are basically ordered rule sets (Clark & Boswell 1991): to understand the meaning of a particular clause, we have to understand all preceding clauses in the theory.

So far, two methods have been applied to the problem of relational regression: DINUS/RETIS (Dzeroski, Todoroski, & Urbancic 1995), a combination of DINUS (Lavrac & Dzeroski 1994) and RETIS (Karalic 1992), and FORS (Karalic 1995). DINUS/RETIS transforms the learning problem into a propositional language, and subsequently applies RETIS, a propositional regression tree algorithm, to the transformed problem. In contrast to DINUS/RETIS, SRT solves the problem in its original representation, and does not require transforming the problem. Furthermore, the transformation does not work for non-determinate background knowledge, which is a strict limitation of the approach.¹

FORS is the only other algorithm so far that can be applied to the same class of learning problems as SRT, since it can also deal with non-determinate background knowledge. FORS differs from SRT in its use of separate-and-conquer instead of divide-and-conquer (i.e., it is a covering algorithm, not a tree-based algorithm). Generally, all the advantages and disadvantages known from other algorithms of these types (tree-based vs. covering) apply. A discussion of both strategies in the context of top-down induction of logic programs can be found in (Bostrom 1995): on the one hand, the hypothesis space for separate-and-conquer is larger than for divide-and-conquer, so more compact hypotheses might be found using separate-and-conquer. On the other hand, building a tree is computationally cheaper than searching for rules.
In (Weiss & Indurkhya 1995), a comparison of tree induction and rule induction in propositional regression basically draws the same conclusions. However, the approach to rule-based regression in (Weiss & Indurkhya 1995) is different from FORS, since it involves the discretization of the numeric dependent variable.

¹ Cohen (Cohen 1994b) demonstrated that certain types of non-determinate background knowledge can be propositionalized. However, this is not generally the case.

I: set of instances covered by a leaf l in a partial tree
c: the conjunction of all literals in the path from the root of the tree to l
I1: subset of I for which proving c ∧ t (t ∈ T, the set of possible tests) succeeds
I2: subset of I for which proving c ∧ t fails (I = I1 ∪ I2, I1 ∩ I2 = ∅)
n1: number of instances in I1 (n1 = |I1|)
n2: number of instances in I2 (n2 = |I2|)
y1,j: value of the dependent variable of a training instance j in I1
y2,j: value of the dependent variable of a training instance j in I2
ȳ1: average of all instances in I1
ȳ2: average of all instances in I2

Table 1: Definition of terms

Description of the Method

Overview

SRT is an algorithm which learns a theory for the prediction of numerical values from examples and relational (and even non-determinate) background knowledge. The algorithm constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. More precisely, SRT generates a series of trees of increasing complexity, and subsequently returns one of the generated trees according to a preference criterion. This preference criterion is based on the minimum description length (MDL) principle (Rissanen 1978), which helps to avoid overfitting the data, especially in the presence of noise. The whole process can be seen as a kind of pruning based on the MDL principle.
For the construction of a single tree, SRT uses the same method as used for the usual top-down induction of decision trees (Quinlan 1993a). The algorithm recursively builds a binary tree, selecting a literal or a conjunction of literals (as defined by user-defined schemata (Silverstein & Pazzani 1991)) in each node of the tree until a stopping criterion is fulfilled. With each selected literal or conjunction, the examples covered by a node are further partitioned, depending on the success or failure of the literal(s) on the examples.

activity(Drug,8.273) :-
    struct(Drug,Group1,Group2,Group3),
    (pi_doner(Group3,X), X < 2).
activity(Drug,6.844) :-
    struct(Drug,Group1,Group2,Group3),
    \+ (pi_doner(Group3,X), X < 2),
    h_doner(Group1,0).
activity(Drug,6.176) :-
    struct(Drug,Group1,Group2,Group3),
    \+ (pi_doner(Group3,X), X < 2),
    \+ h_doner(Group1,0).

Table 2: Example of a structural regression tree in clausal form

The selection of the literal or conjunction is performed as follows: let I be the set of training instances covered by a leaf l in a partial tree, and c be the conjunction of all literals in the path from the root of the tree to l. (For a definition of all terms used, see Table 1.) Every possible test t is then evaluated according to the resulting partitioning of the training instances I in l. The instances I are partitioned into the instances I1 ⊆ I for which proving c ∧ t succeeds, and the instances I2 ⊆ I for which proving c ∧ t fails. For every possible test t we calculate the sum of the squared differences between the actual values yi,j of the training instances and the average ȳi of Ii. From all possible tests, SRT selects the t* ∈ T which minimizes this sum of squared differences (see equation 1). When the stopping criterion (see below) is fulfilled, the average of the dependent variable of the covered instances is assigned to the leaf as the predicted value for unseen cases that reach the leaf.
Sum of Squared Errors = Σ_{i=1}^{2} Σ_{j=1}^{n_i} (y_{i,j} − ȳ_i)²    (1)

From another point of view, each path starting from the root can be seen as a clause. Every time the tree is extended by a further literal or conjunction, two further clauses are generated: one of them is the current clause (i.e., the path in the current partial tree) extended by the respective literal or conjunction of literals; the other is the current clause extended by the negation of the literal(s). Table 2 shows a simple example of a structural regression tree in clausal form. The three clauses predict the biological activity of a compound from its structure and its characteristics. Depending on the conditions in the clauses, the theory assigns either 8.273 or 6.844 or 6.176 to every unseen instance. In the following, we will use this latter, clausal view of the process.

The simplest possible stopping criterion is used to decide whether we should further grow a tree: SRT stops extending a clause when no literal(s) can produce two further clauses that both cover more than a required minimum number of training instances. In the following, this parameter will be called the minimum coverage of all clauses in the theory. Apart from its use as a stopping criterion, the minimum coverage parameter gives direct control over the complexity of the trees being built: the smaller the value of the parameter, the more complex the tree will be, since we allow for more specific clauses in the tree. In this way we can generate a series of increasingly complex trees, and return the one which optimizes a preference function. Note that if the minimum coverage parameter had a different value, different splits might have been selected in any node of the tree. SRT generates a series of increasingly complex trees by varying the minimum coverage parameter. The algorithm starts with a high minimum coverage, and decreases it from iteration to iteration.
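A minimal sketch of this split selection, combining equation (1) with the minimum-coverage check; representing a candidate test by the pair of dependent-variable value lists it induces is our simplification of the Prolog proving step, and all names are illustrative:

```python
def sse_of_split(values1, values2):
    """Equation (1): for each side of the partition, sum the squared
    deviations of the dependent variable from that side's mean."""
    total = 0.0
    for part in (values1, values2):
        mean = sum(part) / len(part)
        total += sum((y - mean) ** 2 for y in part)
    return total

def best_test(candidate_splits, min_coverage):
    """candidate_splits maps each test t to (I1 values, I2 values): the
    dependent-variable values of the instances for which proving c /\\ t
    succeeds and fails.  Returns the admissible t* with minimal SSE."""
    best_t, best_sse = None, None
    for t, (v1, v2) in candidate_splits.items():
        if len(v1) < min_coverage or len(v2) < min_coverage:
            continue  # stopping criterion: both new clauses need enough coverage
        sse = sse_of_split(v1, v2)
        if best_sse is None or sse < best_sse:
            best_t, best_sse = t, sse
    return best_t
```

If `best_test` returns `None`, no admissible split exists and the node becomes a leaf predicting the mean of its instances, mirroring the stopping criterion described above.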
Fortunately, many iterations can be skipped, since nothing would change for certain values of the minimum coverage parameter: in the process of building the tree, we always select the one literal or conjunction which produces two clauses with an admissible coverage and which yields the lowest sum of squared errors. There could be literals or conjunctions yielding an even lower sum of squared errors, but with a coverage that is too low. The maximum coverage of these literals or conjunctions is the next value of the parameter for which the tree would be different from the current tree, so we choose this value as the next minimum coverage.

Finally, SRT returns the one tree from this series that obtains the best compression of the data. The compression measure is based on the minimum description length (MDL) principle (Rissanen 1978), and will be discussed in the next section.

Tree Selection by MDL

The MDL principle tries to measure both the simplicity and the accuracy of a particular theory in a common currency, namely the number of bits needed for encoding theory and data. (Cheeseman 1990) defines the message length of a theory (called a model in his article) as:

Total message length = Message length to describe the model + Message length to describe the data, given the model.

This way a more complex theory will need more bits to be encoded, but might save bits when encoding more data correctly. The message length of the model consists of the encoding of the literals and the encoding of the predicted values in the leaves. The message length of the data, given the model, is the encoding of the errors. The encoding of the tree structure is simply the encoding of the choices made (for the respective literals) as the tree is built. For a single node, we encode the choice from all possible literals, so that the encoding considers predicates as well as all possible variabilizations of the predicates.
In addition to the tree structure, we also have to encode the real numbers assigned to the leaves. In our coding scheme, we turn them into integers by multiplication and rounding; the factor is the minimum integer that still allows us to discern the values in the training data after rounding. We then encode all these integers in a way that requires the same amount of information for all values, regardless of their magnitude. The errors are also real numbers, and they are turned into integers in the same way as above. Subsequently, however, these integers are encoded by the universal prior of integers (UPI) (Rissanen 1986); in this way the coding length of the errors roughly corresponds to their magnitude. We chose MDL instead of cross-validation since it is computationally less expensive, and it can be used for pruning in search (Pfahringer & Kramer 1995). However, we are planning to compare both methods for model selection in the future.

Non-Determinate Background Knowledge

Literals are non-determinate if they introduce new variables that can be bound in several alternative ways. Non-determinate literals often introduce additional parts of a structure, such as adjacent nodes in a graph. (Other examples are "part-of" predicates.) Clearly, non-determinate literals do not immediately reduce the error when they are added to a clause under construction; thus, any greedy algorithm without look-ahead would ignore them. The problem is how to introduce non-determinate literals in a controlled manner. In SRT, the user has to specify which literal(s) may be used to extend a clause. First, the user can define conjunctions of literals that are used for a limited look-ahead. (These user-defined schemata are similar to relational cliches (Silverstein & Pazzani 1991).) Furthermore, the user can constrain the set of possible literals depending on the body of the clause so far.
The conditions concerning the body are arbitrary Prolog clauses, and therefore the user has even more possibilities to define a language than with Antecedent Description Grammars (ADGs) (Cohen 1994a). To further reduce the number of possibilities, the set of literals and conjunctions is also constrained by modes, types of variables, and variable symmetries.

Outlier Detection by Analogy

Test instances that are outliers strongly deteriorate the average performance of learned regression models. Usually we cannot detect whether test instances are outliers, because only little information is available for this task. If relational background knowledge is available, however, a lot of information can be utilized to detect, by "analogy", whether test instances are outliers. Intuitively, when a new prediction is made, we check whether the test instance is similar to the training instances covered by the clause that fires. If the similarity between these training instances and the test instance is not big enough, we should consider a different prediction from the one suggested by the clause which succeeds on the instance. In this case, we interpret the regression tree as defining a hierarchy of clusters: SRT chooses the cluster which is most similar to the test instance, and predicts the average of this cluster for the test instance.

To implement this kind of reasoning by analogy, we first have to define similarity of "relational structures" (such as labeled graphs).² Our simple approximation of similarity is based on properties of such structures. In this context, we say that an instance i has a property P iff P is a literal or a conjunction (permitted by specified schemata) that immediately succeeds on i (i.e., it succeeds without the introduction of intermediate variables). The similarity is defined as follows: let P_instance(i) denote the set of properties of an instance i. Let P_in-common(I) be the set of properties all instances in a set I have in common.
Then the similarity between a test instance i and a set (cluster) of training instances I is

similarity(i, I) = |P_instance(i) ∩ P_in-common(I)| / |P_in-common(I)|

The similarity is simply defined as the number of properties that the test instance and the covered training instances have in common, divided by the number of properties that the training instances have in common. SRT uses a parameter for the minimum similarity to determine whether the similarity between a test instance and the training instances covered by the clause that fires is large enough. The minimum similarity parameter is the only real parameter of SRT, since the best value for the required minimum coverage of a clause is determined automatically using the MDL principle. Note that although we choose the cluster with the largest similarity, this similarity might be smaller than the specified minimum similarity. This way of detecting and handling outliers adds an instance-based aspect to SRT. However, it is just an additional possibility, and can be turned off by means of the minimum similarity parameter.

2 (Bisson 1992) defined a similarity measure for first-order logic, but it measures the similarity of two tuples in a relation, not of two "relational structures".

Method | Pyrimidines Mean (σ) | Triazines Mean (σ)
Linear Regression on Hansch parameters and squares | 0.693 (0.170) | 0.272 (0.220)
Linear Regression on attributes and squares | 0.654 (0.104) | 0.446 (0.181)
Neural Network on Hansch parameters and squares | 0.677 (0.118) | 0.377 (0.190)
Neural Network on attributes and squares | 0.660 (0.225) | 0.481 (0.145)
GOLEM | 0.692 (0.077) | 0.431 (0.166)
SRT | 0.806 (0.110) | 0.457 (0.089)
Table 3: Summary of all methods in the biomolecular domains of the inhibition of dihydrofolate reductase by pyrimidines and by triazines: performances as measured by the Spearman rank correlation coefficients

Experimental Results

We performed experiments in five real-world domains.
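The set-based similarity measure and the choice of the most similar cluster described above can be sketched directly as set operations. Property extraction itself is domain-specific, so instances are represented here simply by their sets of properties; the function names are illustrative, not taken from SRT.

```python
def similarity(test_props, cluster_props):
    """similarity(i, I) = |P_instance(i) & P_in_common(I)| / |P_in_common(I)|.
    test_props: set of properties of the test instance.
    cluster_props: list of property sets, one per covered training instance."""
    in_common = set.intersection(*cluster_props)
    if not in_common:
        return 0.0
    return len(test_props & in_common) / len(in_common)

def most_similar_cluster(test_props, clusters):
    """Pick the cluster (a list of property sets) most similar to the test instance."""
    return max(clusters, key=lambda c: similarity(test_props, c))
```

With this reading, a test instance that shares all common properties of the covered training instances gets similarity 1, and one that shares none gets 0, matching the ratio in the formula above.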
For each domain, we performed experiments with (minimum similarity = 0.75) and without outlier detection by analogy (minimum similarity = 0). In cases where outlier detection affects the results, we will mention it in the discussion. A common step in pharmaceutical development is forming a quantitative structure-activity relationship (QSAR) that relates the structure of a compound to its biological activity. Two QSAR domains, namely the inhibition of Escherichia coli dihydrofolate reductase (DHFR) by pyrimidines (Hirst, King, & Sternberg 1994a) and by triazines (Hirst, King, & Sternberg 1994b), have been used to test SRT. The pyrimidine dataset consists of 2198 background facts and 55 instances (compounds), which are partitioned into 5 cross-validation sets. For the triazines, the background knowledge comprises 2933 facts, and 186 instances (compounds) are used to perform 6-fold cross-validation. Hirst et al. made comprehensive comparisons of several methods in these domains, but they concluded there is no statistically significant difference between these methods. Table 3 shows the results of the methods compared in (Hirst, King, & Sternberg 1994a) and in (Hirst, King, & Sternberg 1994b), and the results of SRT. The table summarizes the test set performances in both domains as measured by the Spearman rank correlation coefficients. The Spearman rank correlation coefficient is a measure of how much the order of the test instances according to the target variable correlates with the order predicted by the induced theory. The only reason why Hirst et al. use the Spearman rank correlation coefficient instead of, say, the average error is to compare GOLEM (Muggleton & Feng 1992) (which cannot predict numbers) with other methods.3 For the pyrimidines, SRT performs better than the other methods, but the improvement is not statistically significant. Hirst et al.
emphasize that a difference in Spearman rank correlation coefficient of about 0.2 would have been required for a data set of this size. The comparatively good performance of SRT is mostly due to the detection of two outliers that cannot be recognized by other methods. These two outliers were the only ones identified in these two domains. For the triazine dataset, SRT performs quite well, but again the differences are not statistically significant. Since the Spearman rank correlation coefficient does not measure the quantitative error of a prediction, we included several other measures as proposed by Quinlan (Quinlan 1993b). Clearly, these measures have disadvantages, too, but they represent interesting aspects of how well a theory works for a given test set. Unfortunately, we do not yet have a full comparison with other methods that are capable of predicting numbers. Tables 4 and 7 contain the cross-validation test set performances of SRT in four test domains not only in terms of the Spearman rank correlation coefficient, but also in terms of several other accuracy measures. Furthermore, we performed experiments in the domain of finite element mesh design (for details see (Dolsak, Bratko, & Jezernik 1994)), where the background knowledge is non-determinate. Table 5 shows the results of SRT for the mesh dataset together with the results of FOSSIL (Fürnkranz 1994) and results of other methods that were directly taken from (Karalic 1995). SRT performs better than FOIL (Quinlan 1990) and mFOIL (Džeroski & Bratko 1992), but worse than the other methods. However, statistical analysis shows that only the differences between FOIL and the other algorithms are significant. We also applied SRT to the biological problem of learning to predict the mutagenic activity of a chemical, i.e., whether it is harmful to DNA (for details see (Srinivasan et al. 1994) and (Srinivasan, Muggleton, & King 1995)).
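The measures reported in the tables can be sketched in Python. The Spearman coefficient is computed here as the Pearson correlation of the two rank vectors; treating Quinlan's relative error RE as mean squared error divided by the variance of the target values is our assumption, as is the average-rank handling of ties.

```python
def ranks(xs):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def measures(y_true, y_pred):
    """Spearman coefficient, average absolute error, correlation r,
    and (assumed) relative error RE = MSE / variance of the target."""
    n = len(y_true)
    mt = sum(y_true) / n
    var = sum((t - mt) ** 2 for t in y_true) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    return {
        "spearman": pearson(ranks(y_true), ranks(y_pred)),
        "avg_error": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
        "r": pearson(y_true, y_pred),
        "RE": mse / var,
    }
```

A theory that predicts the right ordering but with a constant offset still gets a Spearman coefficient of 1 while its average error reflects the offset, which is why both kinds of measures are reported.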
This domain involves non-determinate background knowledge, too. In Table 6 we compiled results from (Karalic 1995) and (Srinivasan, Muggleton, & King 1995), and filled in the result of SRT. In the table, 'S' refers to structural background knowledge, 'NS' refers to non-structural features, 'PS' refers to predefined structural features that can be utilized by propositional algorithms, and 'MDL' refers to MDL pre-pruning. (Note that the results of FORS are the best that can be found by varying three of its parameters.) In the experiments we used the 188 instances (compounds) for a 10-fold cross-validation. The accuracy concerns the problem of predicting whether a chemical is active or not. Since SRT learns a theory that predicts the activity (a number) instead, we had to evaluate it in a different way (by discretization) to compare the results. Summing up, the experiments showed that SRT is competitive in this domain too, although the differences between SRT and the rest are not statistically significant.

3 Despite this disadvantage of GOLEM, Hirst et al. state that GOLEM is the only method that provides understandable rules about drug-receptor interactions. SRT can be seen as a step towards integrating both capabilities.

Measure of Accuracy | Pyr. Mean (σ) | Triaz. Mean (σ) | Mutagen. Mean (σ)
Spearman rank correlation coefficient | 0.806 (0.110) | 0.457 (0.089) | 0.683 (0.124)
Average error |E| | 0.435 (0.088) | 0.514 (0.084) | 1.103 (0.121)
Correlation r | 0.818 (0.091) | 0.457 (0.104) | 0.736 (0.089)
Relative Error RE | 0.218 (0.170) | 0.381 (0.132) | 0.170 (0.055)
Table 4: Performances of SRT in three domains in terms of several accuracy measures

Struct. | FOIL | mFOIL | GOLEM | MILP | FOSSIL | FORS | SRT
# correct | 36 | 62 | 80 | 90 | 90 | 87 | 68
% | 12.9 | 22.3 | 28.8 | 32.4 | 32.4 | 31.3 | 24.4
Table 5: Results of several methods in the domain of finite element mesh design: numbers and percentages of correctly classified edges
Finally, we applied SRT to a domain where we are trying to predict the half-rate of surface water aerobic aqueous biodegradation in hours (Džeroski & Kompare 1995). To simplify the learning task, we discretized this quantity and mapped it to {1, 2, 3, 4}. The background knowledge is non-determinate, and except for the molecular weight there are no "global" features available. The dataset contains 62 chemicals, and we performed 6-fold cross-validation in our tests. The results of SRT can be found in Table 7. SRT is the first algorithm to be tested on these data, and the results appear to indicate that there are too few instances to find good generalizations. Again, SRT with outlier detection improves upon the result of SRT without it. Note that neither a propositional algorithm (such as CART) nor an algorithm that cannot handle non-determinate background knowledge (such as FOIL, GOLEM and DINUS/RETIS) can be applied to this problem. To sum up the experiments, SRT turned out to be quantitatively competitive, but its main advantages are that it yields comprehensible and explicit rules for predicting numbers, even when given non-determinate background knowledge.

CART + NS | 0.82 (0.03)
FOIL + NS + S | 0.81 (0.03)
Progol + NS + S | 0.88 (0.02)
FORS + NS + S | 0.89 (0.06)
FORS + NS + S + MDL | 0.84 (0.11)
SRT + NS + S | 0.85 (0.08)
Table 6: Summary of accuracy of several systems in the mutagenicity domain

Conclusion and Further Research

In this paper we presented Structural Regression Trees (SRT), a new algorithm which can be applied to learning problems concerned with the prediction of numbers from examples and relational (and non-determinate) background knowledge. SRT can be viewed as integrating the statistical method of regression trees (Breiman et al. 1984) into ILP. SRT can be applied to a class of problems no ILP system except FORS can handle, and learns theories that may be easier to understand than theories found by FORS (section 2).
The advantages and disadvantages of SRT are basically the same as those of CART: regression trees have a great potential to be explanatory, but we cannot expect to achieve a very high accuracy, since we predict constant values for whole regions of the instance space.

Measure of Accuracy | Biod. with Outl. Det. Mean (σ) | Biod. w/o Outl. Det. Mean (σ)
Spearman rank correlation coefficient | 0.463 (0.213) | 0.402 (0.232)
Average error |E| | 0.744 (0.190) | 0.771 (0.210)
Correlation r | 0.382 (0.247) | 0.364 (0.223)
Relative Error RE | 0.363 (0.139) | 0.377 (0.141)
Table 7: Performances of SRT with and without outlier detection in the biodegradability domain

As it could help to build more accurate models, one of the next steps will be to assign linear regression models to the leaves. One of the biggest differences between SRT and FORS is that SRT is a tree-based and not a covering algorithm. So, generally, all the advantages and disadvantages known from other algorithms of these types apply (Boström 1995) (Weiss & Indurkhya 1995). However, FORS and SRT also differ in many other ways, and thus a real comparison of the search strategies employed is still to be done for relational regression. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that its advantages (the applicability to relational regression given non-determinate background knowledge and the comprehensibility of the rules) are not at the expense of predictive accuracy. SRT generates a series of increasingly complex trees, but currently every iteration starts from scratch. We are planning to extend the algorithm such that parts of the tree of one iteration can be reused in the next iteration. We also plan to compare our way of coverage-based pre-pruning and tree selection by MDL with more traditional pruning methods à la CART (Breiman et al. 1984). Besides, we addressed the problem of non-determinate literals.
We adopted and generalized solutions for this problem, but they involve the tiresome task of writing a new specification of admissible literals and conjunctions for each domain. We therefore think that a more generic solution would make the application of the method easier.

Acknowledgements

This research is sponsored by the Austrian Fonds zur Förderung der Wissenschaftlichen Forschung (FWF) under grant number P10489-MAT. Financial support for the Austrian Research Institute for Artificial Intelligence is provided by the Austrian Federal Ministry of Science, Research, and Arts. I would like to thank Johannes Fürnkranz, Bernhard Pfahringer and Gerhard Widmer for valuable discussions. I also wish to thank Sašo Džeroski for providing the biodegradability dataset and the anonymous AAAI referees for their comments which helped to improve this paper.

References

Bisson, G. 1992. Learning in FOL with a similarity measure. In Proc. Tenth National Conference on Artificial Intelligence (AAAI-92).
Boström, H. 1995. Covering vs. Divide-and-Conquer for Top-Down Induction of Logic Programs. In Proc. Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), 1194-1200. San Mateo, CA: Morgan Kaufmann.
Breiman, L.; Friedman, J.; Olshen, R.; and Stone, C. 1984. Classification and Regression Trees. The Wadsworth Statistics/Probability Series. Belmont, CA: Wadsworth International Group.
Cheeseman, P. 1990. On finding the most probable model. In Shrager, J., and Langley, P., eds., Computational Models of Discovery and Theory Formation. Los Altos, CA: Morgan Kaufmann.
Clark, P., and Boswell, R. 1991. Rule induction with CN2: Some recent improvements. In Proceedings of the Fifth European Working Session on Learning, 151-161. Berlin Heidelberg New York: Springer.
Cohen, W. 1994a. Grammatically biased learning: Learning logic programs using an explicit antecedent description language. Artificial Intelligence 68(2).
Cohen, W. 1994b.
Pac-learning nondeterminate clauses. In Proc. Twelfth National Conference on Artificial Intelligence (AAAI-94).
Dolsak, B.; Bratko, I.; and Jezernik, A. 1994. Finite element mesh design: An engineering domain for ILP application. In Proceedings of the Fourth International Workshop on Inductive Logic Programming (ILP-94), GMD-Studien Nr. 237, 305-320.
Džeroski, S., and Bratko, I. 1992. Handling noise in Inductive Logic Programming. In Proceedings of the International Workshop on Inductive Logic Programming.
Džeroski, S., and Kompare, B. 1995. Personal Communication.
Džeroski, S.; Todorovski, L.; and Urbancic, T. 1995. Handling real numbers in inductive logic programming: A step towards better behavioural clones. In Lavrac, N., and Wrobel, S., eds., Machine Learning: ECML-95, 283-286. Berlin Heidelberg New York: Springer.
Džeroski, S. 1995. Numerical Constraints and Learnability in Inductive Logic Programming. Ph.D. Dissertation, University of Ljubljana, Ljubljana, Slovenija.
Fürnkranz, J. 1994. FOSSIL: A robust relational learner. In Bergadano, F., and De Raedt, L., eds., Machine Learning: ECML-94, 122-137. Berlin Heidelberg New York: Springer.
Hirst, J.; King, R.; and Sternberg, M. 1994a. Quantitative structure-activity relationships by neural networks and inductive logic programming: The inhibition of dihydrofolate reductase by pyrimidines. Journal of Computer-Aided Molecular Design 8:405-420.
Hirst, J.; King, R.; and Sternberg, M. 1994b. Quantitative structure-activity relationships by neural networks and inductive logic programming: The inhibition of dihydrofolate reductase by triazines. Journal of Computer-Aided Molecular Design 8:421-432.
Karalic, A. 1992. Employing linear regression in regression tree leaves. In Neumann, B., ed., Proc. Tenth European Conference on Artificial Intelligence (ECAI-92), 440-441. Chichester, UK: Wiley.
Karalic, A. 1995. First Order Regression. Ph.D.
Dissertation, University of Ljubljana, Ljubljana, Slovenija.
Lavrac, N., and Džeroski, S. 1994. Inductive Logic Programming. Chichester, UK: Ellis Horwood.
Manago, M. 1989. Knowledge-intensive induction. In Segre, A., ed., Proceedings of the Sixth International Workshop on Machine Learning, 151-155. Morgan Kaufmann.
Muggleton, S., and Feng, C. 1992. Efficient induction of logic programs. In Muggleton, S., ed., Inductive Logic Programming. London, U.K.: Academic Press. 281-298.
Pfahringer, B., and Kramer, S. 1995. Compression-based evaluation of partial determinations. In Proceedings of the First International Conference on Knowledge Discovery and Data Mining. AAAI Press.
Quinlan, J. 1990. Learning logical definitions from relations. Machine Learning 5:239-266.
Quinlan, J. 1992. Learning with continuous classes. In Adams, S., ed., Proceedings AI'92, 343-348. Singapore: World Scientific.
Quinlan, J. 1993a. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.
Quinlan, J. 1993b. A case study in machine learning. In Proceedings ACSC-16 Sixteenth Australian Computer Science Conference.
Rissanen, J. 1978. Modeling by shortest data description. Automatica 14:465-471.
Rissanen, J. 1986. Stochastic complexity and modeling. The Annals of Statistics 14(3):1080-1100.
Silverstein, G., and Pazzani, M. 1991. Relational clichés: Constraining constructive induction during relational learning. In Birnbaum, L., and Collins, G., eds., Machine Learning: Proceedings of the Eighth International Workshop (ML91), 203-207. San Mateo, CA: Morgan Kaufmann.
Srinivasan, A.; Muggleton, S.; King, R.; and Sternberg, M. 1994. Mutagenesis: ILP experiments in a non-determinate biological domain. In Proceedings of the Fourth International Workshop on Inductive Logic Programming (ILP-94), GMD-Studien Nr. 237, 217-232.
Srinivasan, A.; Muggleton, S.; and King, R. 1995. Comparing the use of background knowledge by Inductive Logic Programming systems.
In Proceedings of the 5th International Workshop on Inductive Logic Programming (ILP-95), 199-230. Katholieke Universiteit Leuven.
Watanabe, L., and Rendell, L. 1991. Learning structural decision trees from examples. In Proc. Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), 770-776. San Mateo, CA: Morgan Kaufmann.
Weiss, S., and Indurkhya, N. 1995. Rule-based machine learning methods for functional prediction. Journal of Artificial Intelligence Research 3:383-403.
Discovering Robust Knowledge from Dynamic Closed-World Data

Chun-Nan Hsu and Craig A. Knoblock
Information Sciences Institute and Department of Computer Science
University of Southern California
4676 Admiralty Way, Marina del Rey, CA 90292
{chunnan,knoblock}@isi.edu
http://www.isi.edu/sims/

Abstract

Many applications of knowledge discovery require the knowledge to be consistent with data. Examples include discovering rules for query optimization, database integration, decision support, etc. However, databases usually change over time and make machine-discovered knowledge inconsistent with data. Useful knowledge should be robust against database changes so that it is unlikely to become inconsistent after database changes. This paper defines this notion of robustness, describes how to estimate the robustness of Horn-clause rules in closed-world databases, and describes how the robustness estimation can be applied in rule discovery systems.

Schema:
geoloc(name,glc-code,country,latitude,longitude),
seaport(name,glc-code,storage,rail,road,anch-offshore),
wharf(id,glc-code,depth,length,crane),
ship(name,class,status,fleet,year),
ship-class(class-name,type,draft,length,container-cap).

Rules:
R1: ;The latitude of a Maltese geographic location is greater
;than or equal to 35.89.
?latitude ≥ 35.89 ⇐ geoloc(_,_,?country,?latitude,_) ∧ ?country = "Malta".
R2: ;All Maltese geographic locations are seaports.
seaport(_,?glc-cd,_,_,_,_) ⇐ geoloc(_,?glc-cd,?country,_,_) ∧ ?country = "Malta".
R3: ;All ships built in 1981 belong to either "MSC" fleet or
;"MSC Lease" fleet.

Introduction

Databases are evolving entities. Knowledge discovered from one database state may become invalid or inconsistent with a new database state. Many applications require discovered knowledge to be consistent with the data. Examples are the problem of learning for database query optimization, database integration, knowledge discovery for decision support, etc.
However, most discovery approaches assume static databases, while in practice many databases are dynamic, that is, they change frequently. It is important that discovered knowledge is robust against data changes in the sense that the knowledge remains valid or consistent after databases are modified.

member(?R133, ["MSC", "MSC LEASE"]) ⇐ ship(_,_,_,?R133,?R132) ∧ ?R132 = 1981.
R4: ;If the storage space of a seaport is greater than 200,000 tons,
;then its geographical location code is one of the four codes.
member(?R213, ["APFD", "ADLS", "WMY2", "NPTU"]) ⇐ seaport(_,?R213,?R212,_,_,_) ∧ ?R212 > 200000.

Table 1: Schema and rules of an example database

This notion of robustness can be defined as the probability that the database is in a state consistent with discovered knowledge. This probability is different from predictive accuracy, which is widely used in learning classification knowledge, because predictive accuracy measures the probability that knowledge is consistent with randomly selected unseen data instead of with an entire database state. This difference is significant in databases that are interpreted using the closed-world assumption. For a Horn-clause rule C ⇐ A, predictive accuracy is usually defined as the conditional probability Pr(C|A) given a randomly chosen data instance (Cohen 1993; 1995; Cussens 1993; Fürnkranz & Widmer 1994; Lavrač & Džeroski 1994). In other words, it concerns the probability that the rule is valid with regard to newly inserted data. However, databases also change by updates and deletions, and in a closed-world database, these may affect the validity of a rule, too. Consider the rule R2 in Table 1 and the database fragment in Table 2.

*This research was supported in part by the National Science Foundation under Grant No. IRI-9313993 and in part by Rome Laboratory of the Air Force Systems Command and the Advanced Research Projects Agency under Contract No. F30602-94-C-0210.
R2 will become inconsistent if we delete the seaport instance labeled with a "*" in Table 2, because the value 8004 for variable ?glc-cd that satisfies the antecedent of R2 will no longer satisfy the consequent of R2. To satisfy the consequent of R2 requires that there be a seaport instance with glc-cd value 8004, according to the closed-world assumption. Closed-world databases are widely used partly because of the limitation of the representation systems, but mostly because of the characteristics of application domains. Instead of being a piece of static state of past experience, an instance of closed-world data usually represents a dynamic state in the world, such as an instance of employee information in a personnel database. Therefore, closed-world data tend to be dynamic, and it is important for knowledge discovery systems to handle dynamic and closed-world data.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

geoloc("Safaqis", 8001, Tunisia, . . .)
geoloc("Valletta", 8002, Malta, . . .)+
geoloc("Marsaxlokk", 8003, Malta, . . .)+
geoloc("San Pawl", 8004, Malta, . . .)+
geoloc("Marsalforn", 8005, Malta, . . .)+
geoloc("Abano", 8006, Italy, . . .)
geoloc("Torino", 8007, Italy, . . .)
geoloc("Venezia", 8008, Italy, . . .)
seaport("Marsaxlokk", 8003, . . .)
seaport("Grand Harbor", 8002, . . .)
seaport("Marsa", 8005, . . .)
seaport("St Pauls Bay", 8004, . . .)*
seaport("Catania", 8016, . . .)
seaport("Palermo", 8012, . . .)
seaport("Traparri", 8015, . . .)
seaport("AbuKamash", 8017, . . .)
Table 2: Example database fragment

This paper defines this notion of robustness, and describes how robustness can be estimated and applied in knowledge discovery systems. The key idea of our estimation approach is that it estimates the probabilities of data changes, rather than the number of possible database states, which is intractably large for estimation.
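The closed-world consistency test for R2 described above can be sketched concretely. The tuples below are an abbreviated rendering of the Table 2 fragment (only the attributes the rule touches); under the CWA, R2 holds exactly when every Maltese glc code is explicitly present among the seaport codes.

```python
# Abbreviated Table 2 fragment: (name, glc-code, country) for geoloc,
# and just the glc codes for seaport.
geoloc = {("Valletta", 8002, "Malta"), ("Marsaxlokk", 8003, "Malta"),
          ("San Pawl", 8004, "Malta"), ("Marsalforn", 8005, "Malta"),
          ("Safaqis", 8001, "Tunisia"), ("Abano", 8006, "Italy")}
seaport_codes = {8003, 8002, 8005, 8004, 8016, 8012, 8015, 8017}

def r2_consistent(geoloc, seaport_codes):
    """Under the closed-world assumption, R2 is consistent iff every
    glc code of a Maltese geoloc tuple appears among the seaport codes."""
    return all(code in seaport_codes
               for _, code, country in geoloc if country == "Malta")

print(r2_consistent(geoloc, seaport_codes))            # True on the fragment
print(r2_consistent(geoloc, seaport_codes - {8004}))   # deleting the "*" tuple breaks R2
```

This makes the point in the text concrete: the rule is invalidated not by inserting new data but by deleting the starred seaport tuple, something predictive accuracy over unseen instances does not capture.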
The approach decomposes data-changing transactions and estimates their probabilities using the Laplace law of succession. This law is simple and can bring to bear information such as database schemas and transaction logs for higher accuracy. The paper also describes a rule pruning approach based on the robustness estimation. This pruning approach can be applied on top of any rule discovery or induction system to generate robust rules. Our experiments demonstrate the feasibility of our robustness estimation and rule pruning approaches. The estimation approach can also be used by a rule maintenance system to guide the updates for more robust rules so that the rules can be used with a minimal maintenance effort. This paper is organized as follows. We establish the terminology on databases and rules in the next section. Then we define robustness and describe how to estimate the robustness of a rule. We present our rule pruning system next. Finally, we conclude with a summary of contributions and potential applications of the robustness estimation.

Terminology

This section introduces the terminology that will be used throughout this paper. In this paper, we consider relational databases, which consist of a set of relations. A relation is a set of instances (or tuples) of attribute-value vectors. The number of attributes is fixed for all instances in a relation. The values of attributes can be either a number or a string, but with a fixed type. Table 1 shows the schema of an example database with five relations and their attributes. Table 1 also shows some Horn-clause rules describing the data. We adopt standard Prolog terminology and semantics as defined in (Lloyd 1987) in our discussion of rules. In addition, we refer to literals on database relations as database literals (e.g., seaport(_,?glc-cd,?storage,_,_,_)) and literals on built-in relations as built-in literals (e.g., ?latitude ≥ 35.89). We distinguish two classes of rules.
Rules with built-in literals as their consequents (e.g., R1) are called range rules. The other class contains rules with database literals as their consequents (e.g., R2); those rules are relational rules. These two classes of rules are treated differently in robustness estimation. A database state at a given time t is the collection of the instances present in the database at time t. We use the closed-world assumption (CWA) to interpret the semantics of a database state. That is, information not explicitly present in the database is taken to be false. A rule is said to be consistent with a database state if all variable instantiations that satisfy the antecedents of the rule also satisfy the consequent of the rule. For example, R2 in Table 1 is consistent with the database fragment shown in Table 2, since for all geoloc tuples that satisfy the body of R2 (labeled with a "+" in Table 2), there is a corresponding instance in seaport with a corresponding glc-cd value. A database can be changed by transactions. A transaction can be considered as a mapping from a database state to a new database state. There are three kinds of primitive transactions: inserting a new tuple into a relation, deleting an existing tuple from a relation, and updating an existing tuple in a relation.

Estimating Robustness of Rules

This section first defines formally our notion of robustness and then describes an approach to estimating robustness. Following those subsections is an empirical demonstration of the estimation approach.

Robustness

Intuitively, a rule is robust against database changes if it is unlikely to become inconsistent after database changes. This can be expressed as the probability that a database is in a state consistent with a rule.

Definition 1 (Robustness for all states) Given a rule r, let D denote the event that a database is in a state that is consistent with r. The robustness of r is Robust(r) = Pr(D).
This probability can be estimated by the ratio between the number of database states consistent with the rule and the number of all possible database states. That is,

Robust(r) = (# of database states consistent with r) / (# of all possible database states)

There are two problems with this estimate. The first problem is that it treats all database states as if they were equally probable. That is obviously not the case in real-world databases. The other problem is that the number of possible database states is intractably large, even for a small database. Alternatively, we can define robustness from the observation that a rule becomes inconsistent when a transaction results in a new state inconsistent with the rule. Therefore, the probability of certain transactions largely determines the likelihood of database states, and the robustness of a rule is simply the probability that such a transaction is not performed. In other words, a rule is robust if the transactions that will invalidate the rule are unlikely to be performed. This idea is formalized as follows.

Definition 2 (Robustness for accessible states) Given a rule r and a database in a database state denoted as d, new database states are accessible from d by performing transactions. Let t denote the transactions on d that result in new database states inconsistent with r. The robustness of r in the states accessible from the current state d is defined as Robust(r|d) = Pr(¬t|d) = 1 − Pr(t|d).

This definition of robustness is analogous in spirit to the notion of accessibility and the possible-worlds semantics in modal logic (Ramsay 1988). If the only way to change database states is by transactions, and all transactions are equally probable to be performed, then the two definitions of robustness are equivalent. However, this is usually not the case in real-world databases, since the robustness of a rule could be different in different database states.
For example, suppose there are two database states d1 and d2 of a given database. To reach a state inconsistent with r, we need to delete ten tuples in d1 but only one tuple in d2. In this case, it is reasonable to have Robust(r|d1) > Robust(r|d2), because it is less likely that all ten tuples are deleted. Definition 1 implies that robustness is a constant, while Definition 2 captures the dynamic aspect of robustness.

Estimating Robustness

We first review a useful estimate for the probability of the outcomes of a repeatable random experiment. It will be used to estimate the probability of transactions and the robustness of rules.

Laplace Law of Succession Given a repeatable experiment with an outcome of one of any k classes, suppose we have conducted this experiment n times, r of which have resulted in some outcome C in which we are interested. The probability that the outcome of the next experiment will be C can be estimated as (r + 1)/(n + k).

A detailed description and a proof of the Laplace law can be found in (Howson & Urbach 1988). The Laplace law applies to any repeatable experiment (e.g., tossing a coin). The advantage of the Laplace estimate is that it takes both known relative frequency and prior probability into account. This feature allows us to include information given by a DBMS, such as database schemas, transaction logs, expected sizes of relations, and expected distributions and ranges of attribute values, as prior probabilities in our robustness estimation.

R1: ?latitude ≥ 35.89 ⇐ geoloc(_,_,?country,?latitude,_) ∧ ?country = "Malta".
T1: One of the existing tuples of geoloc with its ?country = "Malta" is updated such that its ?latitude < 35.89.
T2: A new tuple of geoloc with its ?country = "Malta" and ?latitude < 35.89 is inserted into the database.
T3: One of the existing tuples of geoloc with its ?latitude < 35.89 and its ?country ≠ "Malta" is updated such that its ?country = "Malta".
Table 3: Transactions that invalidate R1
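The Laplace law of succession is a one-line computation; the function below is an illustrative helper, not code from the paper.

```python
def laplace(successes, trials, num_classes):
    """Laplace law of succession: (r + 1) / (n + k)."""
    return (successes + 1) / (trials + num_classes)

# With no experience at all (r = n = 0), each of k outcomes gets 1/k:
# e.g., an unseen transaction is an insert, delete, or update with prob. 1/3.
```

The appeal noted in the text is visible here: with an empty transaction log the estimate falls back to the uniform prior 1/k, and as observations accumulate it converges to the observed relative frequency r/n.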
Our problem at hand is to estimate the robustness of a rule based on the probability of transactions that may invalidate the rule. This problem can be de- composed into the problem of deriving a set of in- validating transactions and estimating the probabil- ity of those transactions. We illustrate our estima- tion approach with an example. Consider Rl in Ta- ble 3, which also lists three mutually exclusive trans- actions that will invalidate Rl. These transactions cover all possible transactions that will invalidate Rl. Since Tl, T2, and T3 are mutually exclusive, we have Pr(TlVT2VT3) = Pr(Tl)+Pr(T2)+Pr(T3). The prob- ability of these transactions, and thus the robustness of RI, can be estimated from the probabilities of Tl, T2, and T3. We require that transactions be mutually exclusive so that no transaction covers another because for any two transactions t, and tb, if t, covers tb, then Pr(t, V tb) = Pr(t,) and it is redundant to consider tb. For example, a transaction that deletes all geoloc tuples and then inserts tuples invalidating RI does not need to be considered, because it is covered by T2 in Table 3. Also, to estimate robustness efficiently, each mu- tually exclusive transactions must be minimal in the sense that no redundant conditions are specified. For example, a transaction similar to Tl that updates a tuple of geoloc with its ?country = “Malta” such that its latitude < 35.89 and its longitude > 130.00 will invalidate RI. However, the extra condition “longitude > 130.00” is not relevant to Rl. With- out this condition, the transaction will still result in a database state inconsistent with Rl. Thus that trans- action is not minimal for our robustness estimation and does not need to be considered. We now demonstrate how Pr(T1) can be estimated only with the database schema information, and how we can use the Laplace law of succession when trans- action logs and other prior knowledge are available. 
Since the probability of T1 is too complex to be estimated directly, we have to decompose the transaction into more primitive statements and estimate their local probabilities first. The decomposition is based on a Bayesian network model of database transactions, illustrated in Figure 1 (Figure 1: Bayesian network model of transactions). Nodes in the network represent the random variables involved in the transaction. An arc from node x_i to node x_j indicates that x_j is dependent on x_i. For example, x2 is dependent on x1 because the probability that a relation is selected for a transaction depends on whether the transaction is an update, deletion, or insertion. That is, some relations tend to have new tuples inserted, and some are more likely to be updated. x4 is dependent on x2 because in each relation, some attributes are more likely to be updated. Consider our example database (see Table 1): the ship relation is more likely to be updated than the other relations, and among its attributes, status and fleet are more likely to be changed than the others. Nodes x3 and x4 are independent because, in general, which tuple is likely to be selected is independent of which attribute is likely to be changed. The probability of a transaction can be estimated as the joint probability of all variables, Pr(x1 ∧ ... ∧ x5). When the variables are instantiated for T1, their semantics are as follows:
- x1: a tuple is updated.
- x2: a tuple of geoloc is updated.
- x3: a tuple of geoloc with its ?country = "Malta" is updated.
- x4: a tuple of geoloc whose ?latitude is updated.
- x5: a tuple of geoloc whose ?latitude is updated to a new value less than 35.89.
From the Bayesian network and the chain rule of probability, we can evaluate the joint probability as a product of conditional probabilities:
Pr(T1) = Pr(x1 ∧ x2 ∧ x3 ∧ x4 ∧ x5) = Pr(x1) · Pr(x2|x1) · Pr(x3|x2 ∧ x1) · Pr(x4|x2 ∧ x1) · Pr(x5|x4 ∧ x2 ∧ x1)
We can then apply the Laplace law to estimate each local conditional probability. This allows us to estimate the global probability of T1 efficiently. We will show how information available from a database can be used in the estimation. When no information is available, we apply the principle of indifference and treat all possibilities as equally probable. We now describe our approach to estimating these conditional probabilities.
- A tuple is updated:
Pr(x1) = (t_u + 1)/(t + 3)
where t_u is the number of previous updates and t is the total number of previous transactions. Because there are three types of primitive transactions (insertion, deletion, and update), when no information is available, we assume that updating a tuple is one of three equally likely possibilities (with t_u = t = 0). When a transaction log is available, we can use the Laplace law to estimate this probability.
- A tuple of geoloc is updated, given that a tuple is updated:
Pr(x2|x1) = (t_u,geoloc + 1)/(t_u + R)
where R is the number of relations in the database (this information is available in the schema) and t_u,geoloc is the number of updates made to tuples of relation geoloc. As with the estimation of Pr(x1), when no information is available, the probability that the update is made on a tuple of any particular relation is one over the number of relations in the database.
- A tuple of geoloc with its ?country = "Malta" is updated, given that a tuple of geoloc is updated:
Pr(x3|x2 ∧ x1) = (t_u,x3 + 1)/(t_u,geoloc + G/I_3)
where G is the size of relation geoloc, I_3 is the number of tuples in geoloc that satisfy ?country = "Malta", and t_u,x3 is the number of updates made on the tuples in geoloc that satisfy ?country = "Malta". The number of tuples that satisfy a literal can be retrieved from the database.
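The three local estimates above can be sketched as small helper functions (a sketch; the function names are ours, and the no-information defaults follow the principle of indifference described in the text):

```python
def pr_update(t_u, t):
    # Pr(x1): a tuple is updated. There are 3 primitive transaction
    # types (insert, delete, update), hence k = 3 in the Laplace estimate.
    return (t_u + 1) / (t + 3)

def pr_relation(t_u_rel, t_u, R):
    # Pr(x2|x1): the update hits one particular relation out of R.
    return (t_u_rel + 1) / (t_u + R)

def pr_condition(t_u_cond, t_u_rel, G, I):
    # Pr(x3|x2,x1): the updated tuple satisfies the antecedent literal;
    # the G tuples are viewed as G/I classes of size I (I tuples satisfy it).
    return (t_u_cond + 1) / (t_u_rel + G / I)

# With an empty transaction log, the estimates fall back on indifference:
print(pr_update(0, 0))              # 1/3
print(pr_relation(0, 0, 5))         # 1/5
print(pr_condition(0, 0, 616, 10))  # 10/616
```

The last call reproduces the 10/616 factor used in the paper's numeric example for the Malta condition.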
If this is too expensive for large databases, we can use the estimation approaches used for conventional query optimization (Piatetsky-Shapiro 1984; Ullman 1988) to estimate this number.
- The value of ?latitude is updated, given that a tuple of geoloc with its ?country = "Malta" is updated:
Pr(x4|x2 ∧ x1) = (t_u,geoloc,latitude + 1)/(t_u,geoloc + A)
where A is the number of attributes of geoloc and t_u,geoloc,latitude is the number of updates made on the latitude attribute of the geoloc relation. Note that x4 and x3 are independent, so the condition that ?country = "Malta" can be ignored. Here we have an example of where domain-specific knowledge can be used in estimation: we can infer that latitude is less likely to be updated than the other attributes of geoloc from our knowledge that it will be updated only if the database has stored incorrect data.
- The value of ?latitude is updated to a value less than 35.89, given that a tuple of geoloc with its ?country = "Malta" is updated:
Pr(x5|x4 ∧ x2 ∧ x1) = 0.5 when no information is available, or 0.398 with range information.
Without any information, we assume that the attribute will be updated to any value with uniform probability. Information about the distribution of attribute values is useful in estimating how the attribute will be updated. In this case, we know that the latitude is between 0 and 90, and the chance that a new value of latitude is less than 35.89 should be 35.89/90 = 0.398.
For a range rule of the form θ(?x) ⇐ A(...,?x,...) ∧ L1 ∧ ... ∧ Ln:
T1: Update ?x of a tuple of A covered by the rule so that the new ?x does not satisfy θ(?x);
T2: Insert a new tuple to a relation involved in the antecedents so that the tuple satisfies all the antecedents but not θ(?x);
T3: Update one tuple of a relation involved in the antecedents not covered by the rule so that the resulting tuple satisfies all the antecedents but not θ(?x).
Table 4: Templates of invalidating transactions for range rules
This information can be derived from the data or provided by the users. Assuming that the size of the relation geoloc is 616, with ten tuples satisfying ?country = "Malta", no transaction log information, and, from the example schema (see Table 1), five relations and five attributes for the geoloc relation, we have
Pr(T1) = 1/3 · 1/5 · 10/616 · 1/5 · 1/2 ≈ 0.000108
Similarly, we can estimate Pr(T2) and Pr(T3). Suppose that Pr(T2) = 0.000265 and Pr(T3) = 0.00002; then the robustness of the rule can be estimated as 1 − (0.000108 + 0.000265 + 0.00002) = 0.999606. The estimation accuracy of our approach may depend on the available information, but even given only database schemas, our approach can still come up with reasonable estimates. This feature is important because not every real-world database system keeps transaction log files, and the logs that do exist may be at different levels of granularity. It is also difficult to collect domain knowledge and encode it in a database system. Nevertheless, the system must be capable of exploiting as much available information as possible. Templates for Estimating Robustness Deriving the transactions that invalidate an arbitrary logic statement is not a trivial problem. Fortunately, most knowledge discovery systems place strong restrictions on the syntax of discovered knowledge. Hence, we can manually generalize the invalidating transactions into small sets of transaction templates, as well as templates of probability estimates for robustness estimation. The templates allow the system to automatically estimate the robustness of knowledge in the procedures of knowledge discovery or maintenance. This subsection briefly describes the derivation of those templates. Recall that we have defined two classes of rules based on the type of their consequents.
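The arithmetic above can be checked directly; the Pr(T2) and Pr(T3) values below are the ones assumed in the text, not computed here:

```python
# Schema-only estimate of Pr(T1): 3 transaction types, 5 relations,
# geoloc has 616 tuples (10 Maltese) and 5 attributes, and no range
# information for the new latitude value (hence the final 1/2).
pr_t1 = (1/3) * (1/5) * (10/616) * (1/5) * (1/2)
print(round(pr_t1, 6))  # 0.000108

# Robustness is the complement of the total invalidation probability.
pr_t2, pr_t3 = 0.000265, 0.00002  # values assumed in the text
robustness = 1 - (pr_t1 + pr_t2 + pr_t3)
print(robustness)  # ~0.9996, matching the paper's 0.999606
```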
Table 5: Relation size and transaction log data
If the consequent of a rule is a built-in literal, then the rule is a range rule (e.g., R1); otherwise, it is a relational rule with a database literal as its consequent (e.g., R2). In Table 3 there are three transactions that will invalidate R1: T1 covers the transactions that update the attribute value used in the consequent, T2 covers those that insert a new tuple inconsistent with the rule, and T3 covers updates on the attribute values used in the antecedents. The invalidating transactions for all range rules are covered by these three general classes of transactions. We generalize them into a set of three transaction templates, illustrated in Table 4. For a relational rule such as R2, the invalidating transactions are divided into another four general classes, different from those for range rules. The complete templates are presented in detail in (Hsu & Knoblock 1996a). These two sets of templates are sufficient for any Horn-clause rules on relational data. From the transaction templates, we can derive the templates of the equations that compute the robustness estimate for each class of rules. The parameters of these equations can be evaluated by accessing the database schema or transaction log. Some parameters can be evaluated and saved in advance (e.g., the size of a relation) to improve efficiency. For rules with many antecedents, a general class of transactions may be evaluated into a large number of mutually exclusive transactions whose probabilities need to be estimated separately. In those cases, our estimation templates are instantiated into a small number of approximate estimates. As a result, the complexity of applying our templates for robustness estimation is always proportional to the length of the rules (Hsu & Knoblock 1996a). Empirical Demonstration We estimated the robustness of the sample rules on the database shown in Table 1.
This database stores information on a transportation logistics planning domain with twenty relations. Here, we extract a subset of the data with five relations for our experiment. The database schema contains information about the relations and attributes in this database, as well as the ranges of some attribute values. For instance, the range of year of ship is from 1900 to 2000. In addition, we also have a log file of data updates, insertions, and deletions over this database. The log file contains 98 transactions. The sizes of the relations and the distribution of the transactions over the different relations are shown in Table 5. Among the sample rules in Table 1, R1 seems to be the most robust because it is about the range of latitude, which is rarely changed. R2 is not as robust because data about a geographical location in Malta that is not a seaport may well be inserted. R3 and R4 are not as robust as R1, either. For R3, the fleet that a ship belongs to has no necessary implication for the year the ship was built, while R4 is specific because seaports with small storage may not be limited to those four geographical locations. Figure 2 shows the estimation results (Figure 2: Estimated robustness of sample rules). We have two sets of results. The first set, shown in black columns, is the results using only database schema information in the estimation. The second set, shown in grey columns, is the results using both the database schema and the transaction log information. The estimated results match the expected comparative robustness of the sample rules. The absolute robustness value of each rule, though, looks high (more than 0.93). This is because the probabilities of invalidating transactions are low, since they are estimated with regard to all possible transactions. We can normalize the absolute values so that they are uniformly distributed between 0 and 1. The results show that transaction log information is useful in estimation.
The robustness of R2 is estimated lower than that of the other rules without the log information because the system estimated that it is not likely for a country to have all of its geographical locations be seaports. (See Table 1 for the contents of the rules.) When the log information is considered, the system increases its estimate because the log shows that transactions on data about Malta are very unlikely. For R3, the log information shows that the fleet of a ship may change, and thus the system estimated its robustness significantly lower than when no log information is considered. A similar scenario appears in the case of R4. Lastly, R1 has a high estimated robustness, as expected, regardless of whether the log information is used. Applying Robustness in Rule Discovery This section discusses how to use the robustness estimates to guide knowledge discovery. Background and Problem Specification Although robustness is a desirable property of discovered knowledge, using robustness alone is not enough to guide a knowledge discovery system. Tautologies such as False ⇒ seaport(_,?glc_cd,_,_,_,_) and seaport(_,?glc_cd,_,_,_,_) ⇒ True have a robustness estimate equal to one, but they are not interesting. Therefore, we should use robustness together with other measures of interestingness to guide the discovery. One such measure of interestingness is applicability, which is important no matter what our application domains are. This section focuses on the problem of discovering rules from closed-world relational data that are both highly applicable and robust. In particular, we will use length to measure the applicability of rules. Generally speaking, a rule is more applicable if it is shorter, that is, if the number of its antecedent literals is smaller, because it is less specific. Many systems are now able to generate a set of Horn-clause rules from relational data.
These systems include inductive logic programming systems (Lavrac & Dzeroski 1994; Raedt & Bruynooghe 1993) and systems that discover rules for semantic query optimization (Hsu & Knoblock 1994). Instead of generating the desired rules in one run, we propose to use these existing algorithms to generate rules, and then use a rule pruning algorithm to prune a rule so that it is highly robust and applicable (short). The rationale is that rule construction algorithms tend to generate overly specific rules, but taking the length and robustness of rules into account during rule construction could be too expensive. This is because the search space of rule construction is already huge and evaluating robustness is not trivial. Previous work in classification rule induction (Cohen 1993; 1995; Furnkranz & Widmer 1994) shows that dividing a learning process into the two stages of rule construction and pruning can yield better results in terms of classification accuracy as well as the efficiency of learning. These results may not apply directly to our rule discovery problem; nevertheless, a two-stage system is clearly simpler and more efficient. Another advantage is that the pruning algorithm can be applied on top of existing rule generation systems. The specification of our rule pruning problem is as follows: take a machine-discovered rule as input, which is consistent with a database but potentially overly specific, and remove antecedent literals of the rule so that it remains consistent but is short and robust. The Pruning Algorithm The basic idea of our algorithm is to search for a subset of antecedent literals to remove until any further removal will make the rule inconsistent with the database, or will make the rule's robustness very low. We can apply the estimation approach described in the previous section to estimate the robustness of a partially pruned rule and guide the pruning search.
R5: ?length ≥ 1200 ⇐ wharf(_,?code,?depth,?length,?crane) ∧ seaport(?name,?code,_,_,_,_) ∧ geoloc(?name,_,?country,_,_) ∧ ?country = "Malta" ∧ ?depth < 50 ∧ ?crane > 0.
r7: ?length ≥ 1200 ⇐ wharf(_,?code,?depth,?length,?crane) ∧ seaport(?name,?code,_,_,_,_) ∧ geoloc(?name,_,?country,_,_) ∧ ?crane > 0.
r10: ?length ≥ 1200 ⇐ wharf(_,?code,?depth,?length,?crane) ∧ seaport(?name,?code,_,_,_,_) ∧ geoloc(?name,_,?country,_,_).
Table 6: Example rule to be pruned and results
The main difference between our pruning problem and previous work is that there is more than one property of rules that the system is trying to optimize, and these properties, robustness and length, may interact with each other. In some cases, a long rule may be more robust, because a long rule is more specific and covers fewer instances in the database. These instances are less likely to be selected for modification than in the case of a short rule, which covers more instances. We address this issue with a beam search algorithm. Let n denote the beam size. Our algorithm expands the search by pruning a literal in each search step, preserves the top n most robust rules, and repeats the search until any further pruning yields inconsistent rules. The system keeps all generated rules and then selects those with a good combination of length and robustness. The selection criterion may depend on how often the application database changes. Empirical Demonstration of Rule Pruning We conducted a detailed empirical study on R5 in Table 6, using the same database as in the previous sections. Since the search space for this rule is not too large, we ran an exhaustive search over all pruned rules and estimated their robustness. The entire process took less than a second (0.96 seconds). In this experiment, we did not use the log information in the robustness estimation. The results of the experiment are listed in Table 7. To save space, we list the pruned rules with their abbreviated antecedents.
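The beam-search pruning loop described above can be sketched as follows (a sketch under our own naming; the `consistent` and `robustness` callbacks stand in for the paper's database consistency check and robustness estimator):

```python
def beam_prune(antecedents, consistent, robustness, beam_size=3):
    """Beam search over pruned rules: at each step drop one antecedent
    literal, keep the top-n most robust consistent candidates, and stop
    when every further removal breaks consistency. All generated rules
    are returned so the caller can trade off length against robustness."""
    beam = [frozenset(antecedents)]
    seen = set(beam)
    results = list(beam)
    while beam:
        candidates = []
        for rule in beam:
            for lit in rule:
                pruned = rule - {lit}
                if pruned and pruned not in seen and consistent(pruned):
                    seen.add(pruned)
                    candidates.append(pruned)
        candidates.sort(key=robustness, reverse=True)
        beam = candidates[:beam_size]
        results.extend(beam)
    return results

# Toy run with the abbreviated literals of Table 7. The consistency and
# robustness functions here are illustrative stand-ins, not the paper's.
literals = {"W", "S", "G", "Ct", "D", "Cr"}
keeps_joins = lambda r: {"W", "S", "G"} <= r   # join literals must remain
toy_robustness = lambda r: 1 - 0.001 * len(r)
pruned = beam_prune(literals, keeps_joins, toy_robustness)
print(sorted(min(pruned, key=len)))  # ['G', 'S', 'W']
```

In the real system the search would stop earlier for candidates whose estimated robustness drops too low, per the algorithm description above.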
Each term represents a literal in the conjunctive antecedents. For example, "W" represents the literal wharf(_,?code,...); "Cr" and "Ct" represent the literals on ?crane and ?country, respectively. Inconsistent rules and rules with dangling literals are identified and discarded. A set of literals is considered dangling if the variables occurring in those literals do not occur in any other literals of the rule. (Table 7: Result of rule pruning on a sample rule, with columns Rule, Antecedents, Robustness, and Remarks.) Dangling literals are not desirable because they may mislead the search and complicate the robustness estimation. The relationship between the length and robustness of the pruned rules is illustrated in Figure 3. The best rule will be the one located in the upper right corner of the graph, with short length and high robustness. At the top of the graph is the shortest rule, r10, whose complete specification is shown in Table 6. Although this is the shortest rule, it is not so desirable because it is somewhat too general. The rule states that wharves in seaports will have a length greater than 1200 feet; however, we expect that there will be data on wharves shorter than 1200 feet. Instead, with the robustness estimation, the system can select the most robust rule, r7, also shown in Table 6. This rule is not as short, but it is still short enough to be widely applicable. Moreover, this rule makes more sense in that if a wharf is equipped with cranes, it is built to load/unload heavy cargo carried by large ships, and therefore its length must be greater than a certain value. Finally, this pruned rule is both more robust and shorter than the original rule. This example shows the utility of rule pruning with robustness estimation. Conclusions Robustness is an appropriate and practical measure for knowledge discovered from closed-world databases that change frequently over time.
An efficient estimation approach for robustness enables effective knowledge discovery and maintenance. This paper has defined robustness as the complement of the probability of rule-invalidating transactions and has described an approach to estimating robustness. Based on this estimation approach, we also developed a rule pruning approach that prunes a machine-discovered rule into a highly robust and applicable rule.
(Figure 3: Pruned rules and their estimated robustness; the plot shows rule length against estimated robustness over the range 0.965 to 1.)
Robustness estimation can be applied to many AI and database applications for information gathering and retrieval from heterogeneous, distributed environments on the Internet. We are currently applying our approach to the problem of learning for semantic query optimization (Hsu & Knoblock 1994; 1996b; Siegel 1988; Shekhar et al. 1993). Semantic query optimization (SQO) (King 1981; Hsu & Knoblock 1993; Sun & Yu 1994) optimizes a query by using semantic rules, such as all Maltese seaports have railroad access, to reformulate the query into a less expensive but equivalent query. For example, suppose we have a query to find all Maltese seaports with railroad access and 2,000,000 ft³ of storage space. From the rule given above, we can reformulate the query so that there is no need to check the railroad access of seaports, which may reduce execution time. In our previous work, we have developed an SQO optimizer for queries to multidatabases (Hsu & Knoblock 1993; 1996c) and a learning approach for the optimizer (Hsu & Knoblock 1994; 1996b). The optimizer achieves significant savings using learned rules. Though these rules yield good optimization performance, many of them may become invalid after the database changes.
To deal with this problem, we use our rule pruning approach to prune learned rules so that they are robust and highly applicable for query optimization.
Acknowledgements We wish to thank the SIMS project members and the graduate students of the Intelligent Systems Division at USC/ISI for their help with this work. Thanks also to Yolanda Gil, Steve Minton, and the anonymous reviewers for their valuable comments.
References
Cohen, W. W. 1993. Efficient pruning methods for separate-and-conquer rule learning systems. In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI-93).
Cohen, W. W. 1995. Fast effective rule induction. In Machine Learning, Proceedings of the 12th International Conference (ML-95). San Mateo, CA: Morgan Kaufmann.
Cussens, J. 1993. Bayes and pseudo-Bayes estimates of conditional probabilities and their reliability. In Machine Learning: ECML-93, 136-152. Berlin, Germany: Springer-Verlag.
Furnkranz, J., and Widmer, G. 1994. Incremental reduced error pruning. In Machine Learning, Proceedings of the 11th International Conference (ML-94). San Mateo, CA: Morgan Kaufmann.
Howson, C., and Urbach, P. 1988. Scientific Reasoning: The Bayesian Approach. Open Court.
Hsu, C.-N., and Knoblock, C. A. 1993. Reformulating query plans for multidatabase systems. In Proceedings of the Second International Conference on Information and Knowledge Management (CIKM-93).
Hsu, C.-N., and Knoblock, C. A. 1994. Rule induction for semantic query optimization. In Machine Learning, Proceedings of the 11th International Conference (ML-94). San Mateo, CA: Morgan Kaufmann.
Hsu, C.-N., and Knoblock, C. A. 1996a. Discovering robust knowledge from databases that change. Submitted to Journal of Data Mining and Knowledge Discovery.
Hsu, C.-N., and Knoblock, C. A. 1996b. Using inductive learning to generate rules for semantic query optimization. In Fayyad, U. M. et al., eds., Advances in Knowledge Discovery and Data Mining.
AAAI Press/MIT Press, chapter 17.
Hsu, C.-N., and Knoblock, C. A. 1996c. Semantic Optimization for Multidatabase Retrieval. Forthcoming.
King, J. J. 1981. Query Optimization by Semantic Reasoning. Ph.D. Dissertation, Stanford University, Department of Computer Science.
Lavrac, N., and Dzeroski, S. 1994. Inductive Logic Programming: Techniques and Applications. Ellis Horwood.
Lloyd, J. W. 1987. Foundations of Logic Programming. Berlin, Germany: Springer-Verlag.
Piatetsky-Shapiro, G. 1984. A Self-Organizing Database System - A Different Approach To Query Optimization. Ph.D. Dissertation, Department of Computer Science, New York University.
Raedt, L. D., and Bruynooghe, M. 1993. A theory of clausal discovery. In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI-93).
Ramsay, A. 1988. Formal Methods in Artificial Intelligence. Cambridge, U.K.: Cambridge University Press.
Shekhar, S.; Hamidzadeh, B.; Kohli, A.; and Coyle, M. 1993. Learning transformation rules for semantic query optimization: A data-driven approach. IEEE Transactions on Knowledge and Data Engineering 5(6):950-964.
Siegel, M. D. 1988. Automatic rule derivation for semantic query optimization. In Kerschberg, L., ed., Proceedings of the Second International Conference on Expert Database Systems. Fairfax, VA: George Mason Foundation.
Sun, W., and Yu, C. T. 1994. Semantic query optimization for tree and chain queries. IEEE Transactions on Knowledge and Data Engineering 6(1):136-151.
Ullman, J. D. 1988. Principles of Database and Knowledge-Base Systems, volume II. Palo Alto, CA: Computer Science Press.
Post-Analysis of
Bing Liu and Wynne Hsu
Department of Information Systems and Computer Science
National University of Singapore
Lower Kent Ridge Road, Singapore 119260, Republic of Singapore
(hub, whsu)@iscs.nus.sg
Abstract
Rule induction research implicitly assumes that after producing the rules from a dataset, these rules will be used directly by an expert system or a human user. In real-life applications, the situation may not be as simple as that, particularly when the user of the rules is a human being. The human user almost always has some previous concepts or knowledge about the domain represented by the dataset. Naturally, he/she wishes to know how the new rules compare with his/her existing knowledge. In dynamic domains where the rules may change over time, it is important to know what the changes are. These aspects of research have largely been ignored in the past. With the increasing use of machine learning techniques in practical applications such as data mining, this issue of post-analysis of rules warrants greater emphasis and attention. In this paper, we propose a technique to deal with this problem. A system has been implemented to perform the post-analysis of classification rules generated by systems such as C4.5. The proposed technique is general and highly interactive. It will be particularly useful in data mining and data analysis.
1. Introduction
Past research on inductive learning has mostly focused on techniques for generating concepts or rules from datasets (e.g., Quinlan 1992; Clark & Niblett 1989; Michalski 1980). Limited research has been done on what happens after a set of rules has been induced. It is assumed that these rules will be used directly by an expert system or some human user to infer solutions for specific problems within a given domain. We argue that having obtained a set of rules is not the end of the story.
As machine learning techniques are increasingly being used to solve real-life problems, post-analysis of rules will become increasingly important. The motivation for performing post-analysis of the rules comes from the fact that using a learning technique on a dataset does not mean that the user knows nothing at all about the domain and the dataset. This is particularly true if the user is a human being. Typically, the human user does have some pre-conceived notions or knowledge about the learning domain. Hence, when a set of rules is generated from a dataset, he/she naturally would like to know the following: Do the rules represent what I know? If not, which part of my previous knowledge is correct and which part is not? In what ways are the new rules different from my previous knowledge? Past research has assumed that it is the user's responsibility to analyze the rules to answer these questions. However, when the number of rules is large, it is very hard for the user to analyze them manually. In dynamic domains, the rules themselves may change over time. Obviously, it is important to know what has changed since the last learning. These aspects of research have largely been ignored in the past. Besides enabling a user to determine how well the new rules confirm/deny his/her existing concepts and to determine whether the rules have changed over time, post-analysis of rules also helps to deal with a major problem in data mining, i.e., the interestingness problem (Piatetsky-Shapiro & Matheus 1994; Piatetsky-Shapiro et al. 1994). This problem is typically described as follows: in data mining, it is all too easy to generate a huge number of patterns in a database, and most of these patterns (or rules) are actually useless or uninteresting to the user. But due to the huge number of patterns, it is difficult for the user to comprehend them and to identify those patterns that are interesting to him/her.
Thus, some techniques are needed to perform this identification task. This paper proposes a fuzzy matching technique to perform the post-analysis of rules. In this technique, existing rules (from previous knowledge) are regarded as fuzzy rules and are represented using fuzzy set theory (Zimmermann 1991). The newly generated rules are matched against the existing fuzzy rules using the fuzzy matching technique. Different algorithms are presented to perform matching according to different criteria. The matched results then enable us to answer some of the concerns raised above. In this paper, we focus on performing post-analysis of classification rules generated by induction systems such as C4.5 (Quinlan 1992). This is because classification rule induction is perhaps the most successful machine learning technique used in practice. However, our proposed technique is not bound to classification rules; it is also applicable to other types of rules generated by other learning techniques. The proposed technique is general: it can be used in any domain. It is also highly interactive, as it allows the user to modify the existing rules as and when modification is needed. We believe our technique is a major step in the right direction.
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.
2. Problem Definition
Assume a human user or an intelligent agent has some previous knowledge, or hypotheses, or a set of rules generated from an earlier dataset, about a particular domain. These concepts can be expressed as a set of rules E. There exists a dataset D from this domain. A learning technique T can be applied to D to induce a set of rules B. The rules in E and the rules in B have the same syntax and semantics. We are interested in knowing the degree of similarity and difference between set B and set E. Since the focus of this paper is on the classification rules produced by C4.5, T is the decision tree induction technique used in C4.5.
The rules in E and B have the same syntax and semantics as the rules produced by C4.5. The syntax of the rules generated by C4.5 has the following form (we use the If-then format instead of "→" as in C4.5, and we also add a "," between two propositions to facilitate presentation):
If P1, P2, P3, ..., Pn then C
where "," means "and", and Pi is a proposition of the form attr OP value, where attr is the name of an attribute in the dataset, value is a possible value for attr, and OP ∈ {=, ≠, <, >, ≤, ≥} is the operator. C is the consequent, of the form Class = value. We denote a new rule as Bi ∈ B and an existing rule as Ej ∈ E. We denote the set of attribute names in the conditional part of Bi as Fi, and the set of attribute names in the conditional part of Ej as Hj. For example, suppose we have
Bi: If Ht > 1.8, Wt < 70 then Class = underweight.
Ej: If Size = med, Wt ≥ 65, Wt < 70 then Class = fit.
Then Fi = {Ht, Wt} and Hj = {Size, Wt}. We now define what we mean by the similarities and differences between E and B. First, we define them intuitively (in this section), and then we define them computationally (in Sections 3 and 4). The intuitive definitions can be stated at two levels: the individual-rule level and the aggregate level.
Definitions at the Individual-Rule Level
Definition 1 (Rule-Similarity): Bi and Ej are similar if both the conditional parts of Bi and Ej and the consequent parts of Bi and Ej are similar.
Definition 2 (Rule-Difference): Bi and Ej are different if Bi and Ej are far apart in one of the following ways:
(1) Unexpected consequent: The conditional parts of Bi and Ej are similar, but the consequents of the two rules are quite different.
(2) Unexpected conditions: The consequents of Bi and Ej are similar, but the conditional parts of the two rules are far apart. There are two situations:
a) Contradictory conditions: The sets of attribute names in the conditional parts of Bi and Ej overlap to a large extent, but their attribute values are quite different.
b)
Unanticipated conditions: The sets of attribute names in the conditional parts of Bi and Ej do not overlap.

Definitions at Aggregate Level

Definition 3 (Set-Similarity): The subset of rules in B that are the same as or very close to some of the rules in E.

Definition 4 (Set-Difference): The subset of rules in B that are different from the set of rules in E in the sense of unexpected consequent or unexpected conditions as stated in Definition 2.

Notice that we have been using fuzzy terms such as similar and very different (or far apart) that we have not quantified. In the next two sections, we will define computational procedures that measure the similarity and difference. Our technique does not itself identify the similar rules and the different rules. It only ranks them according to the similarity and difference values. The final job of identifying the rules as similar or different is left to the user, as the system does not know what degree of match between two rules is considered similar or different.

3. The Fuzzy Matching Method

In this section, we present a high-level view of the fuzzy matching method. It consists of two main steps:

Step 1. The user converts each rule in E to a fuzzy rule. The fuzzy rule has the same syntax as the original rule, but its attribute values must be described using some fuzzy linguistic variables. See the definition below.

Step 2. The system matches each new rule Bi ∈ B against each fuzzy rule Ej ∈ E in order to obtain the degree of match for each new rule Bi against the set E. The new rules in B are then ranked according to their degrees of match with E. Four matching algorithms are used to perform matching for different purposes.

Before we proceed, let us review the definition of a fuzzy linguistic variable (Zimmermann 1991).
Definition 5: A fuzzy linguistic variable is a quintuple (x, T(x), U, G, M) in which x is the name of the variable; T(x) is the term set of x, that is, the set of names of linguistic values of x, with each value being a fuzzy variable denoted generally by X and ranging over a universe of discourse U; G is a syntactic rule for generating the names, X, of values of x; and M is a semantic rule for associating with each value X its meaning, M(X), which is a fuzzy subset of U. A particular X is called a term.

For example, if speed is interpreted as a linguistic variable with U = [1, 140], then its term set T(speed) could be T(speed) = {slow, moderate, fast, ...}. M(X) gives a meaning to each term. For example, M(slow) may be defined as follows:

M(slow) = {(u, μ_slow(u)) | u ∈ [1, 140]}

where

μ_slow(u) = 1,            for u ∈ [1, 30]
μ_slow(u) = -u/30 + 2,    for u ∈ (30, 60]
μ_slow(u) = 0,            for u ∈ (60, 140]

μ_slow(u) denotes the degree of membership of u in slow. A window-based user interface has been implemented to simplify the input of semantic rules (i.e., membership values). After the user has specified the semantic rules associated with each term used, the fuzzy matching process can begin.

Let us look at an example. Consider the set of classification rules generated from an accident database:

If Age > 50, Loc = straight then Class = slight
If Age > 65, Loc = bend, Spd > 50 then Class = killed
If Age > 50, Loc = T-junct then Class = slight

Assume the user believes that an old person crossing at a location with an obstructed view is likely to be involved in a serious accident. S/he specifies the hypothesis as follows:

If Age = OLD, Loc = OBSTRUCT then Class = BAD

To confirm or deny this hypothesis, the system must first know how to interpret the semantic meanings of "OLD", "OBSTRUCT", and "BAD". This is achieved by asking the user to provide the semantic rules associated with these terms.
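The piecewise-linear membership function above can be sketched directly in code. This is a small illustration of the definition only, not part of the paper's implemented system, and the function name is ours:

```python
def mu_slow(u):
    """Degree of membership of a speed u in the fuzzy set 'slow',
    following the piecewise definition over the universe [1, 140]."""
    if not 1 <= u <= 140:
        raise ValueError("u is outside the universe of discourse [1, 140]")
    if u <= 30:
        return 1.0                 # fully 'slow'
    if u <= 60:
        return -u / 30.0 + 2.0     # linear descent from 1 (at u=30) to 0 (at u=60)
    return 0.0                     # not 'slow' at all
```

For instance, mu_slow(30) is 1.0, mu_slow(45) is 0.5, and mu_slow(90) is 0.0.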
Once the semantic rules have been specified, the matching process is carried out to determine the degrees of match between the hypothesis and the system-generated rules. Different ranking algorithms are used for different purposes. For confirmation of the hypothesis, the system will give a higher ranking to rules that are similar to the hypothesis. The resulting ranking could be as follows:

1. If Age > 65, Loc = bend, Spd > 50 then Class = killed
2. If Age > 50, Loc = T-junct then Class = slight
3. If Age > 50, Loc = straight then Class = slight

On the other hand, if our purpose is to find those rules that are contradictory to the hypothesis, then a different ranking algorithm is used and the result is shown below:

1. If Age > 50, Loc = T-junct then Class = slight
2. If Age > 65, Loc = bend, Spd > 50 then Class = killed
3. If Age > 50, Loc = straight then Class = slight

This shows that rule 1 contradicts the hypothesis because instead of a serious injury, the old person suffers a slight injury. It is important to note that simply reversing the order of the ranking for finding similar rules does not work in general for finding different rules.

4. Matching Computation

Having seen an overview of the proposed technique, we shall now describe the detailed formulas used in the rule matching computation. Let E be the set of existing rules and B be the set of newly generated rules. We denote Wi as the degree of match between a newly generated rule Bi ∈ B and the set of user-specified rules E. We denote w(i,j) as the degree of match between Bi ∈ B and Ej ∈ E. Ranking of the new (or system-generated) rules is performed by sorting them in decreasing order according to their Wi (i.e., the rule with the highest Wi will be at the top).

4.1 Computing w(i,j) and Wi

The computation of w(i,j) consists of two steps: (1) attribute name match, and (2) attribute value match.

1. Attribute name match - The attribute names of the conditions of Bi and Ej are compared.
The set of attribute names that are common to both the conditions of Bi and Ej is denoted as A(i,j) = Fi ∩ Hj. Then, the degree of attribute name match of the conditional parts, denoted as L(i,j), is computed as follows:

L(i,j) = |A(i,j)| / max(|Fi|, |Hj|)

where |Fi| and |Hj| are the numbers of attribute names in the conditional parts of Bi and Ej respectively, and |A(i,j)| is the size of the set A(i,j). Likewise, the attribute names of the consequents of Bi and Ej are also compared. However, in the case of C4.5, the rules generated all have the same consequent attribute name, i.e., Class. Existing rules also use the same attribute. Thus, the attribute names of the consequents always match. For example, suppose we have

Bi: If Ht > 1.8, Wt < 70 then Class = underweight.
Ej: If Size = medium, Wt = middle then Class = fit.

The set of common attributes in the conditional parts of the two rules is A(i,j) = {Wt}. Hence, L(i,j) = 0.5.

2. Attribute value match - Once an attribute of Bi and Ej matches, the two propositions are compared, taking into consideration both the attribute operators and the attribute values. We denote V(i,j)k as the degree of value match of the kth matching attribute in A(i,j), and Z(i,j) as the degree of value match of the consequents. The computation of the two values is presented in Section 4.2.

We are now in a position to provide the formulas for computing w(i,j) and Wi. Due to the space limitation, we will not give a detailed explanation for each formula. Interested readers may refer to (Liu & Hsu 1995).

1. Finding similar rules:

w(i,j) = Z(i,j) × L(i,j) × (Σ_{k∈A(i,j)} V(i,j)k) / |A(i,j)|,  if |A(i,j)| ≠ 0
w(i,j) = 0,                                                    if |A(i,j)| = 0

2. The formula for Wi, which is the degree of match of the rule Bi with respect to the set of existing rules E, is defined as follows (see Figure 1):

Wi = max(w(i,1), w(i,2), ..., w(i,j), ..., w(i,m)), where m = |E|

[Figure 1: Computing Wi]

3. Finding different rules, in the following senses:

1) Unexpected consequent: The conditional parts are similar but the consequent parts are far apart:
w(i,j) = L(i,j) × (Σ_{k∈A(i,j)} V(i,j)k) / |A(i,j)| × (1 - Z(i,j)),  if |A(i,j)| ≠ 0
w(i,j) = 1 - Z(i,j),                                                if |A(i,j)| = 0

Wi is computed as follows:

Wi = max(w(i,1), w(i,2), ..., w(i,j), ..., w(i,m))

2) Unexpected conditions: The consequents are similar but the conditional parts are far apart. Two types of ranking are possible.

(a) Contradictory conditions (rules with |A(i,j)| > 0 will be ranked higher):

w(i,j) = Z(i,j) - L(i,j) × (Σ_{k∈A(i,j)} V(i,j)k) / |A(i,j)|,  if |A(i,j)| ≠ 0
w(i,j) = Z(i,j),                                               if |A(i,j)| = 0

Wi is computed as follows: Wi = max(w(i,1), w(i,2), ..., w(i,j), ..., w(i,m)).

(b) Unanticipated conditions (rules with |A(i,j)| = 0 will be ranked higher):

w(i,j) = Z(i,j) - L(i,j) × (Σ_{k∈A(i,j)} V(i,j)k) / |A(i,j)|,  if |A(i,j)| ≠ 0
w(i,j) = Z(i,j),                                               if |A(i,j)| = 0

4.2 Computing V(i,j)k and Z(i,j)

For the computation of V(i,j)k and Z(i,j), we need to consider both the attribute values and the operators. Furthermore, the attribute value types (discrete or continuous) are also important. Since the computations of V(i,j)k and Z(i,j) are the same, it suffices to consider only the computation of V(i,j)k, the degree of match for the kth matching attribute in A(i,j). Two cases are considered: the matching of discrete attribute values and the matching of continuous attribute values.

4.2.1 Matching of discrete attribute values

In this case, the semantic rule for each term (X) used in describing the existing rule must be properly defined over the universe of the discrete attribute. We denote Vk as the set of possible values for the attribute. For each u ∈ Vk, the user needs to input the membership degree of u in X, i.e., μ_X(u). For example, suppose the user gives the following rule:

If Grade = poor then Class = reject

Here, poor is a fuzzy term. To describe this term, the user needs to specify the semantic rule for poor. Assume the universe of the discrete attribute Grade is {A, B, C, D, F}. The user may specify that a "poor" grade means:
{(A, 0), (B, 0), (C, 0.2), (D, 0.8), (F, 1)}, where the left coordinate is an element in the universe of the Grade attribute, and the right coordinate is the degree of membership of that element in the fuzzy set poor, e.g., μ_poor(D) = 0.8.

When evaluating the degree of match V(i,j)k, two factors play an important role, namely the semantic rules associated with the attribute value descriptions and the operators used in the propositions. In the discrete case, the valid operators are "=" and "≠". For example, suppose that the two propositions to be matched are as follows:

User-supplied proposition:     attr Opu X
System-generated proposition:  attr Ops S

where attr is the matching attribute, Opu, Ops ∈ {=, ≠}, and X and S are the attribute values of attr. The matching algorithm must consider the combinations of both operators and attribute values. Four cases result:

Case 1. Opu = "=" and Ops = "=": V(i,j)k = μ_X(S).

Case 2. Opu = "=" and Ops = "≠":

V(i,j)k = (Σ_{u∈support(X), u≠S} μ_X(u)) / (|Vk| - 1)

where support(X) = {u ∈ Vk | μ_X(u) > 0}, and |Vk| is the size of Vk. |Vk| - 1 = 0 is not possible.

Case 3. Opu = "≠" and Ops = "=": V(i,j)k = 1 - μ_X(S).

Case 4. Opu = "≠" and Ops = "≠":

V(i,j)k = (Σ_{u∈support(X), u≠S} (1 - μ_X(u))) / (|Vk| - 1)

where, if Ops = "≠", |Vk| - 1 = 0 is not possible.

4.2.2 Matching of continuous attribute values

When an attribute takes continuous values, the semantic rule for the term (X) takes the form of a continuous function. To simplify the user's task of specifying the shape of this continuous function, we assume the function has a trapezoidal curve of the form shown in Figure 2. Thus, the user only needs to supply four points a, b, c, and d.

[Figure 2: Membership function]

For example, the user's pattern is:

If Age = young then Class = accepted.

Here, young is a term for the variable Age. Suppose that in this case Age takes continuous values from 0 to 80. The user has to provide the 4 points using values from 0 to 80, e.g., a = 15, b = 20, c = 30, and d = 35.
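The four-point specification can be read as a trapezoidal membership function: 0 up to a, rising on (a, b), 1 on [b, c], and falling back to 0 on (c, d). A small sketch, using our own helper names rather than anything from the paper:

```python
def trapezoid(a, b, c, d):
    """Build a trapezoidal membership function from the four user-supplied points."""
    def mu(u):
        if u <= a or u >= d:
            return 0.0                 # outside the support
        if u < b:
            return (u - a) / (b - a)   # rising edge
        if u <= c:
            return 1.0                 # plateau
        return (d - u) / (d - c)       # falling edge
    return mu

# The 'young' term for Age over [0, 80], using the points from the example.
mu_young = trapezoid(15, 20, 30, 35)
```

Here mu_young(25) is 1.0 and mu_young(17.5) is 0.5.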
In the continuous case, the range of operators is expanded to {=, ≠, ≥, ≤, ∈}. "∈" represents a range: X1 ≤ attr ≤ X2. With this expansion, the total number of possible operator combinations to be considered is 25. Due to the space limitation, we cannot list all the formulas here. Interested readers may refer to (Liu & Hsu, 1995).

5. Application in Data Mining

We have already mentioned that the proposed technique helps to solve the "interestingness" problem in data mining. We now outline this application in greater detail. Let D be the database on which the data mining technique T is applied. Let B be the set of patterns (or rules) discovered by T in D. If we denote I as the set of interesting patterns (or rules) that may be discovered in D, then I ⊆ B. Three points should be noted:

- The set of patterns in B could easily number in the hundreds or even thousands, and many of the patterns in B are useless. Because of the sheer number of patterns, it is very difficult for a user to focus on the "right" subset I of patterns that are interesting or useful to him/her.
- Not all patterns in I are equally interesting. Different patterns may have different degrees of interestingness.
- I may be a dynamic set in the sense that the user may be interested in different things at different points in time.

In general, the size of B is much larger than the size of I. It is desirable that a system give the user only the set of interesting patterns, I, and rank the patterns in I according to their degrees of interestingness. Hence, the interestingness problem can be defined as follows:

Definition 6: Given B, determine I and rank the patterns in I according to their degrees of interestingness to the user at the particular point in time.

So, how can a computer system know what is useful in a domain and what is considered interesting to a user at a particular moment? What are the criteria used to rank the discovered patterns?
Our proposed technique is able to provide a partial answer to these problems. The interestingness problem has been discussed in many papers (Piatetsky-Shapiro & Matheus 1994; Piatetsky-Shapiro et al. 1994; Silberschatz & Tuzhilin 1995). Many factors that contribute to the interestingness of a discovered pattern have also been proposed. They include: coverage, confidence, strength, significance, simplicity, unexpectedness, and actionability (Major & Mangano 1993; Piatetsky-Shapiro & Matheus 1994). The first five factors are called objective measures. They can be handled with techniques that do not require user and domain knowledge, and they have been studied extensively in the literature (e.g., Quinlan 1992; Major & Mangano 1993). The last two factors, unexpectedness and actionability, are called subjective measures. It has been noted in (Piatetsky-Shapiro & Matheus 1994) that although objective measures are useful, they are insufficient in determining the interestingness of a discovered pattern. Subjective measures are needed.

Our proposed technique is able to rank the discovered patterns according to their subjective interestingness. In particular, it helps to identify unexpected patterns. To date, a number of studies have been conducted on subjective interestingness, and some systems have been built (Major & Mangano 1993; Piatetsky-Shapiro & Matheus 1994) with interestingness filtering components to help users concentrate on only the useful patterns. However, these systems handle subjective interestingness in application-specific fashions (Piatetsky-Shapiro et al. 1994; Silberschatz & Tuzhilin 1995). For such systems, domain-specific theories and expert knowledge are hard-coded into the systems, thus making them rigid and inapplicable to other applications. In our proposed technique, we adopt a simple and effective approach:

1. The user is asked to supply a set of expected patterns (not necessarily a complete set).
2.
The user then gives the semantic meanings to the attribute values (as described in Section 3).
3. The newly discovered patterns are then matched against the expected patterns and ranked according to different requirements from the user.

Note that our technique does not identify the set of interesting patterns I. This task is left to the user. The assumption of this approach is that some amount of domain knowledge and the user's interests are implicitly embedded in his/her specified expected patterns. With the various types of ranking, the user can simply check the few patterns at the top of the list to confirm or deny his/her intuitions (or previous knowledge), and to find those patterns that are against his/her expectations. It should be noted that the user does not have to provide all his/her expected patterns at the beginning, which would be quite difficult. He/she can do this analysis incrementally, slowly modifying and building up the set of expected patterns. The highly interactive nature of our technique makes this possible.

6. Evaluation

The proposed technique is implemented in Visual C++ on a PC. A number of experiments have been conducted. A test example is given to evaluate how well the system performs its intended task of ranking the newly generated rules against the existing knowledge. An analysis of the complexity of the algorithm is also presented.

6.1 A test example

The set of rules below is generated from a database using C4.5. The attribute names and values have been encoded to ensure confidentiality. Due to the space limitation, only a subset of the rules is listed below for ranking.
Rule 1: A1 <= 41 -> Class NO
Rule 2: A1 <= 49, A3 <= 5.49, A4 > 60 -> Class NO
Rule 3: A1 > 49, A2 = 2 -> Class YES
Rule 4: A1 > 49, A1 <= 50 -> Class YES
Rule 5: A1 > 55 -> Class YES
Rule 6: A1 > 41, A3 > 5.49, A7 > 106, A4 > 60, A10 <= 5.06 -> Class YES
Rule 7: A1 > 41, A4 <= 60 -> Class YES
Rule 8: A1 > 41, A1 <= 47, A3 <= 3.91, A7 > 106, A4 > 60, A10 <= 5.06 -> Class YES

Two runs of the system are conducted in this test. In the first run, the focus is on finding similar (generated) rules, while in the second run the focus is on finding different rules (or unexpected rules). The system automatically cuts off rules with low matching values.

(I) Finding similar rules

The set of user's rules is listed below with the fuzzy set attached to each term.

User's rule set 1:
Rule 1: A1 <= Mid (a = 30, b = 35, c = 45, d = 50) -> Class NO {(NO, 1), (YES, 0)}
Rule 2: A1 >= Ret (a = 50, b = 53, c = 57, d = 60) -> Class YES {(NO, 0), (YES, 1)}

Ranking results:
RANK 1  Rule 5: A1 > 55 -> Class YES (confirming user-specified rule 2)
RANK 2  Rule 1: A1 <= 41 -> Class NO (confirming user-specified rule 1)
RANK 3  Rule 3: A1 > 49, A2 = 2 -> Class YES (confirming user-specified rule 2)
RANK 4  Rule 7: A1 > 41, A4 <= 60 -> Class YES (confirming user-specified rule 2)
RANK 5  Rule 2: A1 <= 49, A3 <= 5.49, A4 > 60 -> Class NO (confirming user-specified rule 1)

(II) Finding different rules

The set of user's rules for this test run is listed below, followed by three types of ranking for finding different rules.
User's rule set 2:
Rule 1: A7 >= HI (a = 147, b = 148, c = 152, d = 153), A4 >= HI (a = 92, b = 93, c = 97, d = 98) -> Class YES {(NO, 0), (YES, 1)}
Rule 2: A3 >= MI (a = 1.75, b = 1.8, c = 2.2, d = 2.25) -> Class YES {(NO, 0), (YES, 1)}

Unexpected consequent:
RANK 1  Rule 2: A1 <= 49, A3 <= 5.49, A4 > 60 -> Class NO (contradicting user-specified rule 2)

Contradictory conditions:
RANK 1  Rule 7: A1 > 41, A4 <= 60 -> Class YES (contradicting user-specified rule 1)
RANK 2  Rule 6: A1 > 41, A3 > 5.49, A7 > 106, A4 > 60, A10 <= 5.06 -> Class YES (contradicting user-specified rule 1)
RANK 3  Rule 8: A1 > 41, A1 <= 47, A3 <= 3.91, A7 > 106, A4 > 60, A10 <= 5.06 -> Class YES (contradicting user-specified rule 1)

Unanticipated conditions:
RANK 1  Rule 3: A1 > 49, A2 = 2 -> Class YES
RANK 2  Rule 4: A1 > 49, A1 <= 50 -> Class YES
RANK 3  Rule 5: A1 > 55 -> Class YES
RANK 4  Rule 7: A1 > 41, A4 <= 60 -> Class YES

6.2 Efficiency analysis

Finally, let us analyze the runtime complexity of the algorithm that implements the proposed technique. Assume the maximal number of propositions in a rule is M. Assume the attribute value matching (computing V(i,j)k and Z(i,j)) takes constant time. Combining the individual matching values to calculate w(i,j) also takes constant time. The computation of Wi is O(|E|). Then, without considering the final ranking, which is basically a sorting process, the worst-case time complexity of the technique is O(|E||B|M^2).
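The nested structure behind this bound can be sketched as follows. This is our own simplified rendering of the similar-rule ranking, not the paper's implementation: the value-match scores V(i,j)k and Z(i,j) are stubbed out as caller-supplied functions, and the exact form of the name-match score L(i,j) is an assumption (it does reproduce the L = 0.5 value in the paper's Ht/Wt example):

```python
def w_similar(b_rule, e_rule, V, Z):
    """Degree of match w(i,j) for the 'finding similar rules' case.
    b_rule and e_rule map attribute names to propositions; V and Z supply
    the value-match scores computed from the fuzzy semantic rules."""
    Fi, Hj = set(b_rule["conds"]), set(e_rule["conds"])
    A = Fi & Hj                                  # common attribute names A(i,j)
    if not A:
        return 0.0
    L = len(A) / max(len(Fi), len(Hj))           # attribute-name match (assumed form)
    avg_V = sum(V(k, b_rule, e_rule) for k in A) / len(A)
    return Z(b_rule, e_rule) * L * avg_V

def rank(B, E, V, Z):
    """Wi = max over j of w(i,j); sort B by decreasing Wi.
    Each of the |B| x |E| pairs does O(M^2)-ish work in the worst case."""
    return sorted(B, key=lambda b: max(w_similar(b, e, V, Z) for e in E),
                  reverse=True)
```

With perfect value matches (V and Z constantly 1), the Ht/Wt example rule pair scores w(i,j) = 0.5, driven entirely by the name-match factor.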
7. Related Work

To the best of our knowledge, there is no reported work on comparing existing rules with newly generated rules. Most of the work on machine learning focuses on the generation of rules from various types of datasets as well as the pruning of the generated rules (e.g., Quinlan 1992; Breiman et al. 1984; Clark & Matwin 1993). Some systems also use existing domain knowledge in the induction process (e.g., Ortega & Fisher 1995; Clark & Matwin 1993; Pazzani & Kibler 1992). However, their purpose is mainly to help the induction process so as to increase learning efficiency and/or to improve the prediction accuracy of the generated rules. Clearly, the focus of their research is quite different from ours because our technique is primarily a post-analysis method that aims to help the user or an intelligent agent analyze the rules generated.

In data mining research, the interestingness issue (Piatetsky-Shapiro & Matheus 1994; Piatetsky-Shapiro et al. 1994) has long been identified as an important problem (see Section 5). (Piatetsky-Shapiro & Matheus 1994) studied the issue of subjective interestingness in the context of a health care application. Their system (called KEFIR) analyzes health care information to uncover interesting deviations (from the norms), or "key findings". A domain expert system is built to identify findings that are actionable and the actions to be taken. KEFIR does not do rule comparison, nor does it handle the unexpectedness problem. The KEFIR approach is also application specific: its domain knowledge is hard-coded in the system as production rules. Our system is domain independent. (Silberschatz & Tuzhilin 1995) proposed to use probabilistic beliefs as the framework for describing subjective interestingness. Specifically, a belief system is used for defining unexpectedness. However, (Silberschatz & Tuzhilin 1995) is just a proposal, and no system was implemented. Our proposed approach has been implemented and tested. In addition, our approach allows the user to specify his/her beliefs in fuzzy terms, which are more natural than the complex probabilities that the user has to assign in (Silberschatz & Tuzhilin 1995).

8. Conclusion

This paper tries to bridge the gap between the user and the rules produced by an induction system. A fuzzy matching technique is proposed for rule comparison in the context of classification rules.
It allows the user to compare the generated rules with his/her hypotheses or existing knowledge in order to find out what is right and what is wrong about his/her knowledge, and to tell what has changed since the last learning. The technique is also useful in data mining for solving the interestingness problem. The proposed technique is general and highly interactive. We do not claim, however, that the issues associated with rule comparison are fully understood. In fact, much work remains to be done. We believe the proposed technique represents a major step in the right direction.

Acknowledgments: We would like to thank Hing-Yan Lee and Hwee-Leng Ong of the Information Technology Institute for providing us the databases, and for many useful discussions. We thank Lai-Fun Mun and Gui-Jun Yang for implementing the system. The project is funded by the National Science and Technology Board.

References

Breiman, L., Friedman, J., Olshen, R., and Stone, C. 1984. Classification and Regression Trees. Belmont: Wadsworth.
Clark, P. and Niblett, T. 1989. "The CN2 induction algorithm." Machine Learning 3, 261-284.
Clark, P. and Matwin, S. 1993. "Using qualitative models to guide induction learning." In Proceedings of the International Conference on Machine Learning, 49-56.
Liu, B. and Hsu, W. 1995. Finding interesting patterns using user expectations. DISCS Technical Report.
Major, J., and Mangano, J. 1993. "Selecting among rules induced from a hurricane database." In Proceedings of the AAAI-93 Workshop on KDD.
Michalski, R. 1980. "Pattern recognition as rule-guided induction inference." IEEE Transactions on Pattern Analysis and Machine Intelligence 2, 349-361.
Ortega, J., and Fisher, D. 1995. "Flexibly exploiting prior knowledge in empirical learning." IJCAI-95.
Pazzani, M. and Kibler, D. 1992. "The utility of knowledge in inductive learning." Machine Learning 9.
Piatetsky-Shapiro, G., and Matheus, C. J. 1994.
"The interestingness of deviations." In Proceedings of the AAAI-94 Workshop on KDD, 25-36.
Piatetsky-Shapiro, G., Matheus, C., Smyth, P., and Uthurusamy, R. 1994. "KDD-93: progress and challenges..." AI Magazine, Fall 1994: 77-87.
Quinlan, J. R. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann.
Silberschatz, A. and Tuzhilin, A. 1995. "On subjective measures of interestingness in knowledge discovery." In Proceedings of the First International Conference on Knowledge Discovery and Data Mining.
Zimmermann, H. J. 1991. Fuzzy Set Theory and Its Applications. Kluwer Academic Publishers.
KI: A Tool for Knowledge Integration

Kenneth S. Murray
Cycorp
3500 West Balcones Center Drive
Austin, Texas 78759
murray@cyc.com

Abstract

Knowledge integration is the process of incorporating new information into a body of existing knowledge. It involves determining how new and existing knowledge interact and how existing knowledge should be modified to accommodate the new information. KI is a machine learning program that performs knowledge integration. Through actively investigating the interaction of new information with existing knowledge, KI is capable of detecting and exploiting a variety of diverse learning opportunities during a single learning episode. Empirical evaluation suggests that KI provides significant assistance to knowledge engineers while integrating new information into a large knowledge base.

Introduction

Knowledge integration is the process of incorporating new information into a body of existing knowledge. It involves determining how new and existing knowledge interact and how existing knowledge should be modified to accommodate the new information. Knowledge integration differs significantly from traditional approaches to learning from instruction because no specific performance task is assumed. Consequently, the learning system must assess the significance of new information to determine how existing knowledge should be modified to accommodate it. Typically, new information has many consequences for existing knowledge, and many learning opportunities will result from one learning episode.

This paper describes exploratory research that investigates knowledge integration in the context of building knowledge-based systems. The goals of this research include formalizing knowledge integration as a machine learning task, developing a computational model for performing knowledge integration, and instantiating the computational model in KI, an implemented machine learning program (Murray 1995).
Specifically, this paper describes how KI performs knowledge integration and illustrates how KI exploits multiple learning opportunities during a single learning episode.

Knowledge integration addresses several critical issues that arise when developing knowledge bases. It is important to assess how new information interacts with existing knowledge because knowledge base modifications that are intended to correct a shortcoming may conflict with existing knowledge and introduce problems. For example, extending a drug therapy advisor (e.g., Mycin) to minimize the number of drugs prescribed to each patient conflicts with the therapy goal of maximizing the number of symptoms covered by the prescribed treatment (Mostow & Swartout 1986). Detecting and adjudicating conflicts as new information is entered prevents subsequent problem solving failures.

Alternatively, new information may interact synergistically with existing knowledge. One common type of beneficial interaction occurs when the new information explains existing knowledge. For example, adding the new information that chloroplasts contain the green pigment chlorophyll to a botanical knowledge base helps to explain the existing default beliefs that the leaves of plants are green and capable of conducting photosynthesis (Murray 1990). Recognizing this interaction between new and prior knowledge permits the system to better explain its conclusions.

Figure 1 presents a learning scenario that exemplifies learning as knowledge integration. The teacher presents new information about the anatomy of a plant leaf. The idealized student is very reactive and responds with several observations on the physiological effects of the leaf cuticle covering the leaf's epidermis. These responses indicate the student acquires knowledge beyond the literal content of the new information.
For example, the student suggests generalizing the new information: not only does the leaf have a cuticle, but so do the other parts of the plant's shoot system. Furthermore, the student's responses reveal to the teacher the existing state of the student's knowledge, enabling the teacher to provide follow-up comments that resolve the student's questions and misconceptions. Significantly, the student is not limited to a single learning goal, such as defining a particular target concept, but instead acquires a wealth of diverse knowledge during this single learning episode.

Teacher: The epidermis of the plant leaf is covered by the leaf cuticle, which is composed of cutin.
Student: Cutin is impermeable to gases, so the cuticle restricts water loss from the leaf.
Teacher: Yes, that's right.
Student: By reducing water loss, the cuticle helps the leaf avoid dehydration. Other plant organs that transpire would also benefit from a cuticle. Do stems, fruits, and flowers have cuticles?
Teacher: Yes.
Student: But the cuticle also causes the leaf to starve.
Teacher: Explain.
Student: The cuticle is impermeable to gases. This prevents carbon dioxide in the atmosphere from passing through the leaf's epidermis. Without carbon dioxide, the leaf cannot conduct photosynthesis and starves.
Teacher: Well, the cuticle is impermeable to carbon dioxide; however, the leaf survives.
Student: Does the cuticle only partially cover the epidermis? Or are there portals in the epidermis that permit restricted gas flow?
Teacher: Yes, the epidermis has portals called stomata.

Figure 1: Learning about leaf cuticle

As illustrated in Figure 1, learning proceeds without assuming how acquired knowledge will eventually be used. Consequently, the learning method cannot be tailored to a particular performance task, such as classification.
Furthermore, the new information includes general rules. Thus knowledge integration differs from traditional machine learning tasks in two respects. First, the learning system cannot assume that new information conforms to a narrow format and content, such as classified instances of a target concept. Second, the significance of new information is often varied and hidden; therefore, the learning system must assess the significance of new information to determine how existing knowledge should be modified to accommodate it. When new information has many significant consequences for existing knowledge, many learning opportunities will result from the single learning episode. The following sections describe how KI exhibits the learning behavior presented in Figure 1.

KI: An Overview

KI is an interactive tool for knowledge integration. It was developed to help extend the Botany Knowledge Base, a large knowledge base representing plant anatomy, physiology, and development (Porter et al. 1988). The knowledge base contains about 4000 terms and 17,000 rules. However, it was implemented in a version (circa 1990) of the Cyc knowledge base (Lenat & Guha 1990), which contains about 27,000 terms and 40,000 rules, all of which are accessible to KI.

KI interacts with a knowledge engineer to facilitate adding new general statements about the domain to the knowledge base and exploring the consequences of the new information. When a knowledge engineer provides new information, KI uses the existing knowledge base to identify possible gaps or conflicts between new and existing knowledge and to identify beliefs supported by the new information. By identifying beliefs supported by the new information, KI helps to verify that the actual effect of the new information accords with the knowledge engineer's intended effect. By posing questions back to the knowledge engineer, KI solicits additional information.
Thus, KI provides a highly interactive, knowledge-editing interface between the knowledge engineer and the knowledge base that guides knowledge-base development. KI goes beyond identifying "surface" inconsistencies, such as explicit constraint violations, by determining indirect interactions between new information and existing knowledge. This involves a focused, best-first search exploring the consequences of new information. KI's computational model of knowledge integration comprises three prominent activities:

1. Recognition: identifying the knowledge relevant to the new information.
2. Elaboration: applying relevant domain rules to determine the consequences of the new information.
3. Adaptation: modifying the knowledge base to accommodate the elaborated information.

During recognition, KI identifies concepts in the knowledge base that are relevant to the new information. KI uses views to determine which concepts, beyond those explicitly referenced by the new information, are relevant (Murray & Porter 1989; Murray 1990; 1995). Each view identifies a set of propositions that interact in some significant way. The views for concepts referenced in the new information determine which existing knowledge structures are recalled during recognition. When new information is presented, KI identifies the views defined for concepts referenced by the new information and heuristically selects one. The concepts contained in the selected view are deemed relevant to the new information, and KI limits its search to consider only the interaction of the new information with the existing knowledge of concepts recalled during recognition. During elaboration, KI investigates the consequences of the new information for relevant concepts in the knowledge base. This involves applying domain rules defined for the concepts recalled during recognition.
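The recognition-elaboration cycle built from these activities can be sketched as follows. This is an illustrative toy, not KI's implementation: propositions are triples, views are pre-instantiated proposition lists, rules are single-antecedent pairs, and all names are invented for the sketch.

```python
# Toy sketch of KI's recognition/elaboration cycle (illustrative only; KI's
# actual structures are far richer). Propositions are (subject, relation,
# object) triples; rules are (antecedent, consequent) pairs.

def recognize(context, views):
    """Recognition: adopt view propositions that mention a concept in the context."""
    concepts = {term for prop in context for term in prop}
    return {p for view in views for p in view if concepts & set(p)}

def elaborate(facts, rules):
    """Elaboration: exhaustively forward-chain the single-antecedent rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

def integrate(new_info, views, rules, rounds=2):
    """Alternate recognition and elaboration for a bounded number of rounds."""
    context = set(new_info)
    for _ in range(rounds):
        context |= recognize(context, views)
        context = elaborate(context, rules)
    return context

new_info = {("LeafCuticle1", "composedOf", "Cutin")}
views = [[("LeafCuticle1", "cover", "LeafEpidermis1")]]   # one pre-instantiated view
rules = [(("LeafCuticle1", "composedOf", "Cutin"),
          ("LeafCuticle1", "impermeableTo", "Gas"))]      # cf. rule 1 later in the paper
ctx = integrate(new_info, views, rules)
```

Adaptation would then scan the resulting context for conflicts and novel explanations; it is omitted from this sketch.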
Elaboration "expands the information content" of the new information and identifies new views relevant to the elaborated concepts. KI enters a cycle of recognition (i.e., selecting views) and elaboration (i.e., applying domain rules) as it searches for the consequences of new information. Conflicts are revealed when inferences completed during elaboration assert inconsistent conclusions. Novel explanations are detected when the new information enables inferences. Both conflicts and novel explanations suggest learning opportunities.

Figure 2: The initial learning context. (Numerical subscripts denote class membership, e.g., isa(LeafEpidermis1 LeafEpidermis).)

During adaptation, KI assists the user in modifying the knowledge base to accommodate the elaborated information. In response to conflicts, KI analyzes the support of the conflicting predictions to suggest modifications to the knowledge base that would resolve the conflict. Identifying and correcting conflicts during knowledge integration prevents subsequent problem solving failures. In response to novel explanations, KI evaluates the explanations to suggest ways in which the new information can be generalized or the representations of existing concepts can be augmented. Through recognition, elaboration, and adaptation, KI determines what existing knowledge is relevant to the new information, the consequences of the new information for the relevant knowledge, and how the relevant knowledge should be modified to accommodate the new information. The following three sections illustrate these activities while describing how KI performs the learning scenario presented in Figure 1.

Recognition

During recognition KI identifies knowledge that is relevant to the new information. This involves maintaining a learning context comprising only propositions on concepts deemed relevant to the new information.
The new information presented to KI is:

[∀(x) isa(x LeafEpidermis) ⇒ ∃(y) isa(y LeafCuticle) & cover(x y) & composedOf(y Cutin)]

KI initializes the learning context by creating a set of propositions that satisfy the new information (see Figure 2). This involves binding each variable appearing in the new information to a hypothetical instance of the class of objects over which the variable may range. To extend the learning context, KI uses views to determine which concepts in the knowledge base, beyond those explicitly referenced in the new information, are relevant.

¹This description is simplified for presentation; for a more precise discussion see (Murray 1995).

Figure 3: An example view type and view. The view type Qua Container identifies relations that are relevant when considering a concept as a container. The view LeafEpidermis Qua Container is a semantic network containing those propositions in the knowledge base that are relevant to LeafEpidermis in its role as a container.

Views are sets of propositions that interact in some significant way and should therefore be considered together. Each view is created dynamically by applying a generic view type to a domain concept. Each view type is a parameterized semantic net, represented as a set of paths emanating from a root node. Applying a view type to a concept involves binding the concept to the root node and instantiating each path. Figure 3 presents an example view type and view. The view type Qua Container identifies the knowledge-base paths emanating from a concept that access properties relevant to its function as a container. These properties include the contents of the container and the processes that transport items into and out of the container.
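Applying a view type to a concept, as just described, can be sketched as follows. The knowledge-base encoding, the single-relation paths, and the adjacency entries are simplifications invented for illustration, not the Botany Knowledge Base.

```python
# Illustrative sketch of view-type application: bind the root concept and
# instantiate each path. Paths here are single relations for simplicity;
# the adjacency table is a made-up fragment.

KB = {
    ("LeafEpidermis", "contains"): ["LeafMesophyll"],
    ("LeafEpidermis", "conduitIn"): ["LeafTranspiration", "LeafCO2Distribution"],
}

QUA_CONTAINER = ["contains", "conduitIn"]   # the view type: paths from the root

def apply_view_type(view_type, root, kb):
    """Instantiate each path of the view type from the given root concept."""
    view = set()
    for relation in view_type:
        for obj in kb.get((root, relation), []):
            view.add((root, relation, obj))
    return view

view = apply_view_type(QUA_CONTAINER, "LeafEpidermis", KB)   # LeafEpidermis Qua Container
```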
Applying this view type to leaf epidermis identifies the segment of the knowledge base representing the leaf epidermis in its role as a container. For example, this segment includes propositions representing that leaf transpiration transports water vapor from the leaf mesophyll, contained within the leaf epidermis, to the atmosphere outside the leaf epidermis.

rule 1: If an object is composed of cutin, then it is impermeable to gases.
[∀(x) composedOf(x Cutin) ⇒ impermeableTo(x Gas)]

rule 2: If the covering part of an object is impermeable to a substance, then the object is impermeable to the substance.
[∀(w x y z) cover(w x) & impermeableTo(x y) & unless(partialCover(w x)) & unless(portal(w z) & ¬cover(z x)) ⇒ impermeableTo(w y)]

rule 3: If the conduit is impermeable to the transportee, then the transportation event is disabled.
[∀(v w x y z) conduit(v w) & transportee(v x) & isa(x y) & impermeableTo(w y) & unless(conduit(v z) & ¬impermeableTo(z y)) ⇒ status(v Disabled)]

rule 4: If resource acquisition is disabled, then resource utilization is also disabled.
[∀(w x y z) acquiredIn(w x) & utilizedIn(w y) & status(x Disabled) & unless(acquiredIn(w z) & ¬status(z Disabled)) ⇒ status(y Disabled)]

rule 5: If a living thing's method of attaining nutrients is disabled, then it is starving.
[∀(w x y z) attainerIn(w x) & attainedIn(y x) & isa(y Sugar) & status(x Disabled) & unless(¬health(w Starving)) ⇒ health(w Starving)]

The operator unless permits negation-as-failure: unless(p) succeeds when p cannot be established.

Figure 4: Example domain rules

To extend the learning context, KI identifies the views defined for objects already contained in the learning context. Typically, several different views will be defined for each object, so a method is needed for selecting one from among the many candidate views. Each candidate view is scored with heuristic measures of its relevance to the current learning context and its interestingness.
Intuitively, relevance is a measure of reminding strength and is computed as a function of the overlap between the candidate view and the context (e.g., the percentage of concepts represented in the candidate view that are also represented in the learning context). Interestingness is computed using a small set of heuristics that apply to the individual propositions contained in the view. For example, propositions on concepts referenced by the new information are deemed more interesting than propositions on other concepts. The interestingness of each candidate view is a function of the interestingness of the propositions it contains. The candidate views are ordered by the product of their relevance and interestingness measures, and the view with the highest score is selected. The set of propositions contained in the selected view are added to the learning context. This results in a learning context containing propositions on those concepts in the knowledge base considered most relevant to the new information.

Elaboration

During elaboration KI determines how the new information interacts with the existing knowledge within the learning context. Non-skolemizing rules in the knowledge base are allowed to exhaustively forward-chain, propagating the consequences of the new information throughout the context. For example, one consequence of having a cuticle is that the leaf epidermis is impermeable to gases. Some of the domain rules applicable to this example are listed in Figure 4.

Figure 5: The extended learning context

KI enters a cycle of recognition (i.e., selecting views) and elaboration (i.e., applying rules) that explicates the consequences of the new information within an ever-expanding context.
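The relevance-times-interestingness ranking described above can be sketched as follows; the particular overlap and interestingness formulas are simplified stand-ins for KI's heuristics, and all data are invented for illustration.

```python
# Hypothetical sketch of KI's view selection: relevance is overlap with the
# learning context, interestingness favors propositions touching the new
# information, and the highest product wins. Formulas are simplifications.

def concepts(props):
    return {term for p in props for term in p}

def relevance(view, context):
    """Fraction of the view's concepts already represented in the context."""
    v = concepts(view)
    return len(v & concepts(context)) / len(v) if v else 0.0

def interestingness(view, new_info):
    """Fraction of the view's propositions that mention a new-information concept."""
    ni = concepts(new_info)
    return sum(1 for p in view if ni & set(p)) / len(view) if view else 0.0

def select_view(candidates, context, new_info):
    """Rank candidate views by relevance * interestingness; keep the best."""
    return max(candidates,
               key=lambda v: relevance(v, context) * interestingness(v, new_info))

new_info = {("Cuticle", "composedOf", "Cutin")}
context = new_info | {("Cuticle", "cover", "Epidermis")}
v1 = [("Cuticle", "restricts", "WaterLoss")]    # mentions a new-information concept
v2 = [("Root", "absorbs", "Water")]             # unrelated to the context
best = select_view([v1, v2], context, new_info)
```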
The propositions added to the learning context during recognition determine which implicit consequences of the new information will be made explicit during elaboration. This cycle continues until the user intervenes to react to some consequence that KI has identified, or until the computational resources expended exceed a threshold. Figure 5 illustrates the second round of this cycle. The recognition phase extends the learning context with a view containing propositions that describe how the leaf acquires and makes use of carbon dioxide. The elaboration phase propagates the consequences of the new information throughout the extended context and predicts that the leaf cannot perform photosynthesis and starves.

Adaptation

During adaptation, KI appraises the inferences completed during elaboration and assists the user in modifying the knowledge base to accommodate the consequences of the new information. This involves detecting and exploiting learning opportunities that embellish existing knowledge structures or solicit additional knowledge from the knowledge engineer.²

A common learning opportunity occurs when inconsistent predictions are made. For example, elaboration reveals that the leaf's cuticle prevents the leaf from acquiring carbon dioxide from the atmosphere. Since carbon dioxide is an essential resource for photosynthesis, KI concludes that leaves having cuticle cannot perform photosynthesis. This conflicts with the expectation that leaves, in general, must be able to perform photosynthesis. To resolve this conflict, KI inspects the censors of rules participating in the support of the anomalous prediction.

²In this example, KI suggests or directly asserts over one hundred new domain rules.
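This censor inspection can be sketched as follows, with the caveat that censors are reduced to ground propositions here (the paper's unless conditions quantify over variables); the rule and fact names are illustrative, loosely after rule 2 of Figure 4.

```python
# Illustrative sketch of unless-censors: a rule fires when its positive
# conditions hold and no censor is provable; inspecting the unproven censors
# of a rule behind an anomalous prediction yields candidate repairs.

def fires(rule, facts):
    """Negation-as-failure: unless(c) succeeds when c cannot be established."""
    return (all(c in facts for c in rule["if"])
            and not any(c in facts for c in rule["unless"]))

def candidate_repairs(rule, facts):
    """Censors that, if asserted, would block the rule's anomalous conclusion."""
    return [c for c in rule["unless"] if c not in facts]

rule2 = {  # covered objects inherit impermeability, unless a censor holds
    "if": [("Cuticle", "cover", "Epidermis"), ("Cuticle", "impermeableTo", "Gas")],
    "unless": [("Cuticle", "partialCover", "Epidermis"),
               ("Epidermis", "portal", "Stomata")],
    "then": ("Epidermis", "impermeableTo", "Gas"),
}

facts = {("Cuticle", "cover", "Epidermis"), ("Cuticle", "impermeableTo", "Gas")}
repairs = candidate_repairs(rule2, facts)              # both censors are candidates
facts_with_stomata = facts | {("Epidermis", "portal", "Stomata")}
```

Asserting either repair (here, the stomata portal) blocks the rule, mirroring how the knowledge engineer's answer about stomata resolves the starvation anomaly.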
Censors are conditions assumed false whenever a rule is used (see the unless conditions of the rules in Figure 4). Each censor identifies plausible modifications to the knowledge base that would allow the leaf to attain carbon dioxide and perform photosynthesis. Suggesting these plausible modifications prompts the knowledge engineer to provide additional information describing stomata, portals in the leaf's epidermis that allow restricted gas flow between the atmosphere and the leaf's interior. This adaptation method is an example of anomaly-based abduction: conditions are identified which, if assumed to be true, would resolve a contradiction.

A second learning opportunity occurs when consequences of the new information suggest generalizations of the new information. For example, elaboration reveals that the leaf cuticle enhances the leaf's physiology by restricting water loss through transpiration. KI recognizes this as a teleological consequence of the new information: the physiological benefit of moderating water loss explains why the leaf has a cuticle. A weakest-preconditions analysis of the explanation supporting this conclusion shows that other organs of a plant's shoot system (e.g., stems, fruits, flowers) will also benefit from having a cuticle, and KI suggests this generalization to the knowledge engineer. Consequently, the knowledge structures representing stems, fruits, and flowers are embellished to denote that they also have cuticles. While this adaptation method constitutes a form of abductive learning, no anomaly is involved. Abduction is motivated by the completion of a teleological explanation rather than a contradiction or a failure to explain some observation.

A third learning opportunity occurs when a property of a particular object in the learning context can be generalized into a property for every instance of a class of objects. For example, elaboration reveals that the hypothetical leaf cuticle is assumed to be translucent.
By analyzing the explanation of why the leaf cuticle is translucent, KI determines that all leaf cuticles are, by default, translucent.³ Consequently, KI asserts the inheritance specification that all instances of leaf cuticle are assumed to be translucent. This is an example of explanation-based learning (EBL) (Mitchell, Keller, & Kedar-Cabelli 1986). However, unlike existing EBL systems, compilation is not triggered when an instance of some specific goal concept has been established. Instead, compilation occurs whenever a sequence of inferences can be summarized by a rule having a specific format. In this case, the permitted format is restricted to the inheritance specification: (∀(x) isa(x Y) ⇒ p1(x Z)), for arbitrary binary predicate p1, class Y, and constant Z. Speed-up learning (Dietterich 1986) has occurred since a chain of rule firings which collectively reference an extensive set of preconditions has been compiled into a single rule having an exceedingly simple precondition.

³This assumption is made when KI determines that light energy, typically used by the leaf during photosynthesis, is acquired from the atmosphere and must pass through the leaf cuticle.

Discussion

This example illustrates how a tool for knowledge integration assists adding a new general rule to a large knowledge base. KI identifies appropriate generalizations of the new information and resolves indirect conflicts between the new information and existing knowledge. It exploits the existing domain knowledge to determine the consequences of new information and guide knowledge-base development. Intuitively, recognition and elaboration model the learner's comprehension of new information. In this model, comprehension is heavily influenced by the learner's existing knowledge: recognition determines what existing knowledge is relevant to the new information; elaboration determines what beliefs are supported by combining the new information with the knowledge selected during recognition.
Thus, comprehension produces a set of beliefs that reflect how the new information interacts with existing knowledge. This set of beliefs, and their justifications, afford many diverse learning opportunities, which are identified and exploited during adaptation. This accords with the goals of multistrategy learning (e.g., (Michalski 1994)). The computational model of learning presented in this paper and embodied in KI does not require a priori assumptions about the use of the new information. Existing approaches to compositional modeling (e.g., (Falkenhainer & Forbus 1991; Rickel & Porter 1994; Iwasaki & Levy 1994)) require a given goal query to guide the creation of a domain model, and traditional approaches to machine learning (e.g., (Michalski 1986; Dietterich 1986)) exploit assumptions about the eventual uses of acquired knowledge to determine what is to be learned. While such assumptions, when available, can provide substantial guidance to learning, they are not appropriate in many learning situations (e.g., reading a newspaper or textbook). KI relies on generic learning goals, such as resolving conflicts (to promote consistency) and embellishing the knowledge of the system's explicitly represented (i.e., named) concepts (to promote completeness). Consequently, this model of learning is not limited to skill refinement, where learning necessarily occurs in the context of problem solving, but can also be applied to learning activities where the eventual uses of acquired knowledge are as yet unknown (Lenat 1977; Morik 1989).

Empirical Analysis

This section presents an empirical analysis of KI's learning behavior during several learning episodes. The learning trials used for this analysis fall into three categories (Murray 1995): The first three trials are scripted trials. These trials were deliberately engineered to demonstrate learning behaviors that exemplify learning as knowledge integration.
For each, a targeted learning behavior was identified and the knowledge base was extended and corrected as necessary to support that learning behavior. The fourth through the tenth learning trials are representative trials. These were developed as a coherent progression of knowledge-base extensions thought to be representative for developing a botany knowledge base. For these trials, minor modifications to the knowledge base were performed in order to facilitate reasonable behaviors. This included, for example, correcting pre-existing knowledge-base errors that prevented any reasonable interpretation of the new information and launched the subsequent search for consequences in spurious directions. The eleventh through the seventeenth learning trials are blind trials. These were desired knowledge-base extensions submitted by knowledge engineers developing the Botany Knowledge Base. No modifications to the knowledge base were performed to facilitate these trials.

Each group of learning trials has a significantly different origin and extent to which the knowledge base was modified to facilitate desired learning behaviors. Consequently, the following empirical analyses include separate consideration for each of these three groups.

Diversity of learning behaviors

KI was designed to exploit a method of searching for the consequences of new information that was not dedicated to a single adaptation method. The methods for elaboration and recognition reveal the consequences of new and relevant prior knowledge; a suite of adaptation methods then searches these consequences for learning opportunities. This approach separates the search for the consequences of new and prior knowledge from the detection and exploitation of learning opportunities. This separation affords a single, uniform method for identifying consequences that can be used seamlessly and concurrently with a variety of adaptation methods and thus supports a variety of learning behaviors.
To provide evidence for this, the frequencies for each type of learning opportunity that was detected and exploited during the examples are summarized in Figure 6. The data indicate that the learning opportunities were both substantial and diverse: a variety of learning behaviors were exhibited during the learning trials, as demonstrated by the diversity of the types of knowledge acquired.

Figure 6: Scope of learning opportunities. The quantities presented are the average numbers of acquired rules per learning trial by type: acquired taxonomic rules (tax), inheritance rules (inh), skolemizing rules (skol), argument-typing constraints (arg), rules resulting from teleological learning (teleo), rules resulting from other abductive reasoning (abd), and all acquired rules (total).

Measuring learning gain

The obligation of every non-trivial learning system is to acquire knowledge beyond the literal content of new information. Learning gain is defined as the amount of acquired knowledge (e.g., measured in terms of the number of beliefs asserted or retracted) not included explicitly in the new information; it provides a natural measure to estimate the effectiveness of a learning program. The relative learning gain is defined as the amount of knowledge acquired by one agent (e.g., a learning program) beyond that acquired by another (e.g., a knowledge engineer). To determine the relative learning gain of KI, professional knowledge engineers were recruited to perform the same seventeen learning trials. These knowledge engineers were quite familiar with the representation language but only marginally familiar with botany and the contents of the knowledge base. However, most of these trials involve only a basic and common knowledge of botany. For each trial, a knowledge engineer was provided with the new information presented both as a semantic network and as a statement in English.
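In code, the relative-learning-gain measure defined above is simply a per-trial difference in axiom counts; the counts below are invented for illustration, not the paper's data.

```python
# Relative learning gain: knowledge acquired by one agent (KI) beyond that
# acquired by another (a knowledge engineer), per trial. Counts are made up.

def relative_learning_gain(ki_counts, ke_counts):
    """Per-trial differences in axiom counts, and their total across trials."""
    per_trial = [ki - ke for ki, ke in zip(ki_counts, ke_counts)]
    return per_trial, sum(per_trial)

gains, total = relative_learning_gain([12, 30, 7], [3, 5, 2])
```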
The knowledge engineers were free to make any knowledge-base modifications they felt were appropriate and to inquire about either the domain or the contents of the knowledge base. They were encouraged to follow their normal practices when formalizing and entering knowledge. The number of axioms produced manually by the knowledge engineers was then compared to the number of axioms produced automatically by KI. Figure 7 presents the results of this experiment. The relative knowledge gain exhibited by KI is significant. Overall, KI derives many times more axioms during these learning trials than were derived manually.

Figure 7: Relative learning gain. The relative learning gain is computed as the difference between the number of axioms produced by KI and the number of axioms developed manually by a knowledge engineer.

Measuring learning utility

While the data in Figures 6 and 7 indicate that KI identifies a diverse and relatively large number of learning opportunities during the learning trials, they do not indicate how useful the new axioms that result from those opportunities are. Traditionally, in machine learning, evaluating the utility of acquired knowledge is demonstrated by showing that after learning the system's performance has improved on a set of test queries. This approach is problematic for evaluating KI since by design there is no assumed application task with which to test the system's performance. However, a relative measure of utility can be estimated by subjectively comparing the axioms produced by KI with those produced manually by the knowledge engineers. For each learning trial, the axioms produced by KI that "correspond" to the axioms produced manually by a knowledge engineer were selected. Two axioms correspond if they are the same or if the predicates match and most of the arguments match (e.g., (genls GroundWater Water) and (genls GroundWater PlantAssimilableWater) correspond).
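The correspondence test just described can be sketched as follows, assuming axioms are (predicate, argument, ...) tuples and reading "most of the arguments match" as at least half, so that the genls example above counts as corresponding.

```python
# Sketch of axiom correspondence: identical axioms correspond, as do axioms
# with the same predicate when at least half of the argument positions agree.
# (The "at least half" reading is an assumption made to fit the genls example.)

def correspond(a, b):
    """True if the axioms are equal, or share a predicate and enough arguments."""
    if a == b:
        return True
    if a[0] != b[0] or len(a) != len(b) or len(a) < 2:
        return False
    matches = sum(x == y for x, y in zip(a[1:], b[1:]))
    return matches >= (len(a) - 1) / 2

same = correspond(("genls", "GroundWater", "Water"),
                  ("genls", "GroundWater", "PlantAssimilableWater"))
```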
Next, for each learning trial, the selected KI axioms were compared to the corresponding axioms developed by the knowledge engineer, and three sets of axioms were defined.⁴ The first set includes axioms produced both by KI and the knowledge engineer (i.e., those produced by KI that differed from a manually produced axiom only by variable names or by the order of literals). The second set includes axioms produced only by the knowledge engineer. The third set includes axioms produced only by KI. For each trial, the second and third sets were randomly labeled as resulting from Method 1 and Method 2.

Finally, for each trial, a knowledge engineer (other than the knowledge engineer who performed the learning trial) assessed the utility of the axioms that were produced by either KI or the knowledge engineer but not both. For each Method 1 axiom the evaluator was asked to indicate how much she agreed with the statements This axiom is useful and This axiom is subsumed by axioms of Method 2 and the prior knowledge base. For each statement, the evaluator scored each Method 1 axiom with an integer ranging from 1 (denoting strong disagreement with the statement) to 5 (denoting strong agreement with the statement). The evaluator was then asked to perform a similar evaluation of the Method 2 axioms. The axioms that were produced by both KI and the knowledge engineer were given scores of 5 both for utility and subsumption.

⁴The knowledge engineers did not produce axioms corresponding to the targeted learning behaviors of the first three trials. Therefore, these engineered learning behaviors were not included in this study.

Figure 8: The utility of acquired axioms. The subjective utility scores for all axioms produced by the knowledge engineer, axioms produced only by the knowledge engineer, all axioms produced by KI, and axioms produced only by KI.

Figure 9: KI's coverage of KE axioms. The subjective estimates of the extent to which axioms produced by a knowledge engineer were subsumed by the axioms produced by KI and the prior knowledge base. Column 2 presents the scores for all manually produced axioms. Column 3 presents the scores for those manually produced axioms deemed useful (i.e., having a utility score greater than 3).

Figure 8 presents the average utility score for axioms produced by KI and for axioms produced by the knowledge engineer. The overall utility score for axioms produced only by KI was 0.6 (or about 19%) higher than the scores for axioms produced only by the knowledge engineer. This difference is statistically significant at the .95 level of confidence. Figure 9 presents the extent to which axioms produced by the human knowledge engineer were subsumed by axioms produced by KI. In almost every learning trial, both KI and the knowledge engineer produced axioms that transcend the explicit content of the new information. Learning systems that exploit significant bodies of background knowledge are inherently idiosyncratic, and it would be unreasonable to expect any learning system (e.g., KI) to completely subsume the learning behavior of another learning system (e.g., a knowledge engineer). However, the data indicate that KI was fairly effective at producing axioms during these learning trials that subsume the useful axioms produced by human knowledge engineers. Overall, KI scored 4.6 out of a possible 5.0 for subsuming the useful axioms produced manually by professional knowledge engineers on these learning trials. Statistical analysis determined that with a 95% confidence coefficient this score would range between 4.4 and 4.8.

Summary

Knowledge integration is the task of incorporating new information into an existing body of knowledge. This involves determining how new and existing knowledge interact.
Knowledge integration differs significantly from traditional machine learning tasks because no specific performance task is assumed. Consequently, the learning system must assess the significance of new information to determine how existing knowledge should be modified to accommodate it. KI is a machine learning program that performs knowledge integration. It emphasizes the significant role of existing knowledge during learning, and it has been designed to facilitate learning from general statements rather than only from ground observations. When presented with new rules, KI creates a learning context comprising propositions on hypothetical instances that model the new information. By introducing additional propositions that model existing knowledge into the learning context and allowing applicable domain rules to exhaustively forward chain, KI determines how the new information interacts with existing knowledge.

Through actively investigating the interaction of new information with existing knowledge, KI is capable of detecting and exploiting a variety of diverse learning opportunities. First, KI identifies plausible generalizations of new information. For example, when told that leaves have cuticle, KI recognizes that the cuticle inhibits water loss and suggests that other organs of the plant's shoot system, such as fruit and stems, also have cuticle. Second, KI identifies indirect inconsistencies introduced by the new information and suggests plausible modifications to the knowledge base that resolve the inconsistencies. For example, KI predicts the leaf cuticle will inhibit the intake of carbon dioxide from the atmosphere and disrupt photosynthesis; consequently, KI suggests the leaf epidermis also has portals to allow the passage of carbon dioxide. Third, KI compiles inferences to embellish the representations of concepts.
For example, KI suggests every leaf cuticle is translucent since they must transmit the light assimilated into the leaf from the atmosphere and used during photosynthesis. By identifying the consequences of new information, KI provides a highly reactive, knowledge-editing interface that exploits the existing knowledge base to guide its further development.

Acknowledgments

I am grateful to Bill Jarrold and Kathy Mitchell for their assistance with the empirical evaluation of KI, and to Bruce Porter, who supervised this research and contributed to both its conception and development. This research was performed while the author was enrolled at the University of Texas at Austin and was supported, in part, by the Air Force Human Resources Laboratory under RICIS contract ET.14.

References

Dietterich, T. 1986. Learning at the knowledge level. Machine Learning 1(3):287-315.

Falkenhainer, B., and Forbus, K. 1991. Compositional modeling: Finding the right model for the job. Artificial Intelligence 51:95-143.

Iwasaki, Y., and Levy, A. 1994. Automated model selection for simulation. In Proceedings of the National Conference on Artificial Intelligence, 1183-1190.

Lenat, D., and Guha, R. 1990. Building Large Knowledge-Based Systems. Reading, MA: Addison-Wesley.

Lenat, D. 1977. The ubiquity of discovery. Artificial Intelligence 9:257-285.

Michalski, R. 1986. Understanding the nature of learning: Issues and research directions. In Michalski, R.; Carbonell, J.; and Mitchell, T., eds., Machine Learning: An Artificial Intelligence Approach, volume II. Los Altos, CA: Morgan Kaufmann Publishers, Inc. 3-25.

Michalski, R. 1994. Inferential theory of learning: Developing foundations for multistrategy learning. In Michalski, R., and Tecuci, G., eds., Machine Learning: A Multistrategy Approach, volume IV. Los Altos, CA: Morgan Kaufmann Publishers, Inc. 3-61.

Mitchell, T.; Keller, R.; and Kedar-Cabelli, S. 1986. Explanation-based generalization: A unifying view.
Machine Learning 1(1):47-80.

Morik, K. 1989. Sloppy modeling. In Morik, K., ed., Knowledge Representation and Organization in Machine Learning. Springer-Verlag. 107-134.

Mostow, J., and Swartout, W. 1986. Towards explicit integration of knowledge in expert systems: An analysis of Mycin's therapy selection algorithm. In Proceedings of the National Conference on Artificial Intelligence, 928-935.

Murray, K., and Porter, B. 1989. Controlling search for the consequences of new information during knowledge integration. In Proceedings of the Sixth International Workshop on Machine Learning, 290-295.

Murray, K. 1990. Improving explanatory competence. In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, 309-316.

Murray, K. 1995. Learning as knowledge integration. Technical Report TR-95-41, Department of Computer Sciences, University of Texas at Austin.

Porter, B.; Lester, J.; Murray, K.; Pittman, K.; Souther, A.; Acker, L.; and Jones, T. 1988. AI research in the context of a multifunctional knowledge base. Technical Report AI88-88, Department of Computer Sciences, University of Texas at Austin.

Rickel, J., and Porter, B. 1994. Automated modeling for answering prediction questions: Selecting the time scale and system boundary. In Proceedings of the National Conference on Artificial Intelligence.
Laurie Ihrig and Subbarao Kambhampati
Department of Computer Science and Engineering, Arizona State University, Tempe, AZ 85287
laurie.ihrig@asu.edu, rao@asu.edu

Abstract

In this paper we describe the design and implementation of the derivation replay framework, DERSNLP+EBL (Derivational SNLP+EBL), which is based within a partial-order planner. DERSNLP+EBL replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (EBL) techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, DERSNLP+EBL normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of DERSNLP+EBL in improving planning performance on randomly-generated problems drawn from a complex domain.

Introduction

Case-based planning provides significant performance improvements over generative planning when the planner is solving a series of similar problems, and when it has an adequate theory of problem similarity (Hammond 1990; Ihrig 1996; Ihrig & Kambhampati 1994; Veloso & Carbonell 1993). One approach to case-based planning is to store plan derivations which are then used as guidance when solving new similar problems (Veloso & Carbonell 1993).
Recently we adapted this approach, called derivational replay, to improve the performance of the partial-order planner, SNLP (Ihrig & Kambhampati 1994). Although it was found that replay tends to improve overall performance, its effectiveness depends on retrieving an appropriate case. Often the planner is not aware of the implicit features of the new problem situation which determine whether a certain case is applicable.

Earlier work in case-based planning has retrieved previous cases on the basis of a static similarity metric which considers the previous problem goals as well as the features of the initial state which are relevant to the achievement of those goals (Kambhampati & Hendler 1992; Ihrig & Kambhampati 1994; Veloso & Carbonell 1993). If these are again elements of the problem description then the case is retrieved and reused in solving the new problem. Usually the new problem will contain extra goal conditions not covered by the case. This means that the planner must engage in further planning effort to add constraints (including plan steps and step orderings) which achieve the conditions that are left open. Sometimes an extra goal will interact with the covered goals, and the planner will not be able to find a solution to the new problem without backtracking and retracting some of the replayed decisions. In the current work we treat such instances as indicative of a case failure. We provide a framework by which a planner may learn from the case failures that it encounters and improve its case retrieval.

*This research is supported in part by an NSF Research Initiation Award IRI-9210997, NSF Young Investigator Award IRI-9457634, and the ARPA/Rome Laboratory planning initiative under grants F30602-93-C-0039 and F30602-95-C-0247. I would like to thank Biplav Srivastava for his helpful comments.
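A static similarity test of the kind described above can be sketched minimally as follows. This is a hypothetical illustration, not DERSNLP's code; the function name and the plain-set representation of goals and initial conditions are ours.

```python
# Hypothetical sketch: a stored case matches when its goals and its
# relevant initial-state features all recur in the new problem description.

def static_match(case_goals, case_init, new_goals, new_init):
    """True if every stored goal and initial condition recurs in the new problem."""
    return case_goals <= new_goals and case_init <= new_init

print(static_match({"g1"}, {"i1"}, {"g1", "g2"}, {"i1", "i2"}))  # True
print(static_match({"g1"}, {"i1"}, {"g2"}, {"i1"}))              # False
```

As the text notes, such a metric is blind to implicit features of the new situation, which is what the failure-driven censors later address.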
In this paper, we present the derivation replay framework, DERSNLP+EBL, which extends DERSNLP, a replay system for a partial-order planner, by incorporating explanation-based learning (EBL) techniques for detecting and explaining analytical failures in the planner's search space. These include methods for forming explanations of search path failures and regressing these explanations through the planning decisions in the failing paths (Kambhampati, Katukam, & Qu 1996). Here we employ these techniques to construct reasons for case failure, which are then used to annotate the failing cases to constrain their future retrieval. Furthermore, each failure results in the storage of a new case which repairs the failure. DERSNLP+EBL normally stores plan derivations solving single input goals. When a case fails in that it cannot be extended to solve extra goals, a new multi-goal case is stored covering the set of negatively interacting goals. DERSNLP+EBL thus builds its case library incrementally in response to case failure, as goal interactions are discovered through the course of problem solving.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Figure 1: Schematic characterization of derivation storage and replay. Each time that a plan is derived, the decisions contained in the plan derivation (shown as the filled circles to the left of the figure) are stored as a sequence of instructions (shown as open rectangles) which are used to guide the new search process. The guidance provided by replay is considered successful if the skeletal plan that replay produces can be extended (by the addition of constraints) into a solution to the new problem. If replay has been successful, the skeletal plan lies on the new derivation path leading to the solution.

In (Ihrig & Kambhampati 1995), the potential effectiveness of this approach was evaluated in an empirical study which compared the replay performance of DERSNLP+EBL both with and without failure information. In this paper, we demonstrate overall performance improvements provided by multi-case replay when a case library is constructed on the basis of replay failures. In the next section, we describe DERSNLP+EBL, which implements derivation replay within the partial-order planner, SNLP.

Replay in Partial-Order Planning

Whenever DERSNLP+EBL attempts a new problem and achieves a solution, a trace of the decisions that fall on the derivation path leading from the root of the search tree to the final plan in the leaf node is stored in the case library. Then, when a similar problem is encountered, this trace is replayed as guidance to the new search process. Figure 1 illustrates the replay of a derivation trace.

DERSNLP+EBL employs an eager replay strategy. With this strategy, control is shifted to the series of instructions provided by the previous derivation, and is returned to from-scratch planning only after all of the valid instructions in the trace have been replayed. This means that the plan which is produced through replay, called the skeletal plan, contains all of the constraints that were added on the guidance of the previous trace. When the skeletal plan contains open conditions relating to extra goals not covered by the case, further planning effort is required to extend this plan into a solution for the new problem.

In the current work, replay success and failure are defined in terms of the skeletal plan. Replay is considered to fail if the skeletal plan cannot be extended by the addition of further constraints into a solution for the new problem (see Figure 2). In such instances, the planner first explores the failing subtree underneath the skeletal plan, then recovers by backtracking over the replayed portion of the search path.
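The eager replay loop just described can be sketched as follows. This is a minimal illustration under our own representation (a "plan" is just a set of constraint labels); the function and parameter names are ours, not the DERSNLP+EBL implementation.

```python
# Hypothetical sketch of eager replay: apply every still-valid instruction
# from the stored trace to build the skeletal plan, then hand the result
# back to ordinary from-scratch refinement.

def eager_replay(trace, plan, valid, apply_decision):
    """Replay each stored decision that is still valid in the new problem.

    trace          -- list of decisions from the previous derivation
    plan           -- current partial plan (any representation)
    valid          -- predicate: is this decision applicable to `plan`?
    apply_decision -- returns the plan refined by one decision
    """
    for decision in trace:
        if valid(plan, decision):                 # skip instructions that the
            plan = apply_decision(plan, decision) # new situation invalidates
    return plan                                   # the skeletal plan

# Toy usage: a "plan" is a set of constraint labels.
trace = ["s1", "s2", "s3"]
skeletal = eager_replay(
    trace,
    plan=set(),
    valid=lambda p, d: d != "s2",   # pretend s2 no longer applies
    apply_decision=lambda p, d: p | {d},
)
print(sorted(skeletal))             # ['s1', 's3']
```

After this loop, planning continues from the skeletal plan exactly as from-scratch refinement would, which is why replay cannot compromise soundness or completeness.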
Figure 2: A replay failure is indicated when a solution to the new problem can be reached only by backtracking over the skeletal plan, which now lies outside the new plan derivation (shown as filled circles). Explanations are constructed for the failing plans in the leaf nodes of the subtree directly beneath the skeletal plan, and are regressed up the search tree and collected at the root to become the reason for case failure.

Replay failure usually results in poor planning performance since, over and above the cost of the search effort, it entails the additional cost of retrieving a trace from the library, as well as the cost of validating each of the decisions in the trace. This means that when replay fails and the planner has to backtrack over the skeletal plan, performance may be worse than in from-scratch planning. When a case fails, and the planner goes on to find a new solution, the final plan that it reaches does not contain some of the constraints that are present in the skeletal plan. The new derivation path which leads from the root of the search tree to the final plan in the leaf node thus avoids (or repairs) the failure encountered in replaying the old case.

Consider a simple example taken from the logistics transportation domain of (Veloso & Carbonell 1993). Figure 3a illustrates the solution to a simple problem drawn from this domain. The goal is to have package OB1 located at the destination location ld. The package is initially at location l1. There is a plane located at lr, which can be used to transport the package. A previous plan which solves this problem will contain steps (shown by the curved arrows in Figure 3a) that determine the plane's route to the destination airport as well as steps which accomplish the loading of the package at the right place along this route. This plan may be readily extended to load and unload extra packages which lie along the same route.
However, if the new problem involves the additional transport of a package which is off the old route, the planner may not be able to reach a solution without backtracking over some of the previous step additions. The new plan shown in Figure 3b contains some alternative steps that achieve the goal covered by the previous case. The plane takes a longer route, which means that the plan may be readily extended to solve the extra goal.

Figure 3: An example of plan failure. The plan derived in an earlier problem-solving episode is shown in (a) Previous Case. This plan accomplishes the transport of a single package, OB1, to the destination airport ld. Replay fails for a new problem, whose solution is illustrated in (b) New Problem with Extra Goal. The new problem contains an extra goal which involves the additional transport to ld of a second package, OB2, which is initially located off the previous route.

DERSNLP+EBL detects that a previous case has failed when all attempts to refine the skeletal plan have been tried, and the planner is forced to backtrack over this plan. At this point, the planner has already constructed an explanation for the skeletal plan's failure (which becomes the reason for case failure). This explanation is incrementally formed with each path failure experienced in the subtree rooted at the skeletal plan. Each analytical failure that is encountered is regressed through the decisions in the failing path, and the regressed path failure explanations are collected at the root of the search tree to form the reason for case failure. An example of a case failure reason is shown in Figure 4. It gives the conditions under which a future replay of the case will again result in failure. These conditions refer to the presence in the new problem of a set, C, of negatively interacting goals, as well as some initial state conditions, contained in E.
A summary of the information content of the failure reason is: there is an extra package to transport to the same destination location, and that package is not at the destination location, is not inside the plane, and is not located on the plane's route.

Since replay merely guides the search process (without pruning the search tree), a replay failure does not affect the soundness or completeness of the planning strategy. After backtracking over the skeletal plan, the planner continues its search, and will go on to find a correct solution to the full problem if one exists. This new solution achieves all of the negatively interacting goals identified in the case failure reason. Moreover, since the interacting goals represent a subset of the new problem goals, the new derivation may be used to construct a new repairing case covering only these goals. The repairing case is indexed directly beneath the failing case so as to censor its retrieval. In the future, whenever the failure reason holds, the retriever is directed away from the case that experiences a failure and toward the case that repairs the failure.

We are now in a position to describe how the planner learns the reasons underlying a case failure. Specifically, we use EBL techniques to accomplish this learning. In the next section, we show how the techniques developed in (Kambhampati, Katukam, & Qu 1996) are employed to construct these reasons.

Case Failure Explanation:
C = {((AT-OB OB1 ld), tG), ((AT-OB OB2 ld), tG)}
E = {(tI, (¬AT-OB OB2 ld)), (tI, (¬INSIDE-PL OB2 ?PL)), (tI, (¬AT-OB OB2 l1)), (tI, (¬AT-OB OB2 lr))}

Figure 4: An example of a case failure reason.

Learning from Case Failure

DERSNLP+EBL constructs reasons for case failure through the use of explanation-based learning techniques which allow it to explain the failures of individual paths in the planner's search space.
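A failure reason of the kind shown in Figure 4 pairs the interacting goals C with the initial-state conditions E under which replay will fail again; the censor test on retrieval simply checks whether both recur in the new problem. The sketch below is hypothetical (the class, method names, and propositional tuple encoding are ours, not the paper's code).

```python
# Hypothetical sketch: a case-failure reason and the censor test that
# predicts whether replaying the annotated case would fail again.

from dataclasses import dataclass

@dataclass(frozen=True)
class FailureReason:
    goals: frozenset        # C: negatively interacting goal conditions
    init_conds: frozenset   # E: initial-state literals required for failure

    def predicts_failure(self, problem_goals, initial_state):
        """True if the new problem matches this failure reason."""
        return (self.goals <= problem_goals
                and self.init_conds <= initial_state)

# Toy instance loosely mirroring Figure 4 (illustration only).
reason = FailureReason(
    goals=frozenset({("AT-OB", "OB1", "ld"), ("AT-OB", "OB2", "ld")}),
    init_conds=frozenset({("NOT-AT-OB", "OB2", "l1")}),
)
new_goals = {("AT-OB", "OB1", "ld"), ("AT-OB", "OB2", "ld")}
new_init = {("NOT-AT-OB", "OB2", "l1"), ("AT-PL", "PL1", "lr")}
print(reason.predicts_failure(new_goals, new_init))   # True
```

When the test succeeds, the retriever is redirected to the repairing case indexed beneath the failing one.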
A search path experiences an analytical failure when it arrives at a plan which, because it contains a set of inconsistent constraints, cannot be further refined into a solution. EBL techniques are used to form explanations of plan failures in terms of these conflicting constraints (Kambhampati, Katukam, & Qu 1996). DERSNLP+EBL constructs explanations for each of the analytical failures that occur in the subtree beneath the skeletal plan.[1]

Since a plan failure explanation is a subset of plan constraints, these explanations are represented in the same manner as a partial plan. DERSNLP+EBL represents its partial plans as a 6-tuple, (S, O, B, L, E, C), where (Barrett & Weld 1994):

S is the set of actions (step names) in the plan, each of which is mapped onto an operator in the domain theory. S contains two dummy steps: tI, whose effects are the initial state conditions, and tG, whose preconditions are the input goals, G.

O is a partial ordering relation on S, representing the ordering constraints over the steps in S.

B is a set of codesignation (binding) and non-codesignation (prohibited binding) constraints on the variables appearing in the preconditions and post-conditions of the operators which are represented in the plan steps, S.

L is a set of causal links of the form (s, p, s'), where s, s' ∈ S. A causal link records that s causes (contributes) p, which unifies with a precondition of s'.

E contains step effects, represented as (s, e), where s ∈ S.

C is a set of open conditions of the partial plan, each of which is a tuple (p, s) such that p is a precondition of step s and there is no link supporting p at s in L.

The explanation for the failure of a partial plan contains a minimal set of plan constraints which represent an inconsistency in the plan. These inconsistencies appear when new constraints are added which conflict with existing constraints. DERSNLP makes two types of planning decisions, establishment and resolution.
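The (S, O, B, L, E, C) tuple just defined can be written down directly as a data structure. The sketch below is ours, not the DERSNLP+EBL implementation; field names follow the paper's notation.

```python
# Hypothetical sketch of the partial-plan 6-tuple as a Python structure.

from dataclasses import dataclass, field

@dataclass
class PartialPlan:
    S: set = field(default_factory=set)   # steps, incl. dummy tI and tG
    O: set = field(default_factory=set)   # ordering pairs (s, s')
    B: set = field(default_factory=set)   # (non-)codesignation constraints
    L: set = field(default_factory=set)   # causal links (s, p, s')
    E: set = field(default_factory=set)   # step effects (s, e)
    C: set = field(default_factory=set)   # open conditions (p, s)

# The null plan for goal g: only the dummy steps, with g still open.
g = ("AT-OB", "OB1", "ld")
null_plan = PartialPlan(
    S={"tI", "tG"},
    O={("tI", "tG")},
    E={("tI", ("AT-OB", "OB1", "l1"))},
    C={(g, "tG")},
)
print(len(null_plan.C))   # 1 open condition
```

Because failure explanations are themselves subsets of plan constraints, the same structure can hold an explanation, which is what makes the regression step below uniform.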
Each type of decision may result in a plan failure. For example, an establishment decision makes a choice as to a method of achieving an open condition, either through a new plan step or by adding a causal link from an existing step (see Figure 5).

Type: ESTABLISHMENT, Kind: NEW STEP
Preconditions: (p', s') ∈ C
Effects: S' = S ∪ {s}; O' = O ∪ {s ≺ s'}; B' = B ∪ unify(p, p'); L' = L ∪ {(s, p, s')}; E' = E ∪ effects(s); C' = (C − {(p', s')}) ∪ preconditions(s)

Type: ESTABLISHMENT, Kind: NEW LINK
Preconditions: (p', s') ∈ C
Effects: O' = O ∪ {s ≺ s'}; B' = B ∪ unify(p, p'); L' = L ∪ {(s, p, s')}; C' = C − {(p', s')}

Figure 5: Planning decisions are based on the current active plan (S, O, B, L, E, C) and have effects which alter the constraints so as to produce the new current active plan (S', O', B', L', E', C').

When an attempt is made to achieve a condition by linking to an initial state effect, and this condition is not satisfied in the initial state, the plan then contains a contradiction. An explanation for the failure is constructed which identifies the two conflicting constraints: (∅, ∅, ∅, {(tI, p, s)}, {(tI, ¬p)}, ∅). As soon as a plan failure is detected and an explanation is constructed, the explanation is regressed through the decisions in the failing path up to the root of the search tree. In order to understand the regression process, it is useful to think of planning decisions as STRIPS-style operators acting on partial plans.

[1] Depth limit failures are ignored. This means that the failure explanations that are formed are not sound in the case of a depth limit failure. However, soundness is not crucial for the current purpose, since explanations are used only for case retrieval and not for pruning paths in the search tree.
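The NEW-LINK decision of Figure 5, and the inverse regression step applied to failure explanations, can be sketched together as follows. This is a hypothetical illustration in our own representation (a dict of sets); bindings (B) and unification are omitted for brevity, and all names are ours.

```python
# Hypothetical sketch: the NEW-LINK establishment decision acting on a
# plan dict, plus the inverse regression step used on failure explanations.

def establish_new_link(plan, s, p, open_cond):
    """Close open condition (p', s') by linking from an existing step s."""
    _, s_prime = open_cond
    return {**plan,
            "O": plan["O"] | {(s, s_prime)},      # order s before s'
            "L": plan["L"] | {(s, p, s_prime)},   # record the causal link
            "C": plan["C"] - {open_cond}}         # the condition is closed

def regress_link(expl, link):
    """Regress an explanation through the decision that added `link`:
    the link regresses to the open condition it closed; everything else
    (e.g. initial-state effects in E) passes through unchanged."""
    _, p, s_prime = link
    out = {key: set(val) for key, val in expl.items()}
    if link in out["L"]:
        out["L"].discard(link)
        out["C"].add((p, s_prime))
    return out

# Linking (p, s) to the initial step tI, whose effects include not-p,
# yields the contradictory plan fragment used as a failure explanation.
plan = {"O": set(), "L": set(), "E": {("tI", "not-p")}, "C": {("p", "s")}}
p2 = establish_new_link(plan, "tI", "p", ("p", "s"))

e1 = {"O": set(), "L": {("tI", "p", "s")}, "E": {("tI", "not-p")}, "C": set()}
r = regress_link(e1, ("tI", "p", "s"))
print(r["L"], r["C"])   # set() {('p', 's')}
```

This mirrors the worked example in the text: the link regresses to the open condition that was a precondition of adding it, while the initial-state effect regresses to itself.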
The preconditions of these operators are specified in terms of the plan constraints that make up a plan flaw, which is either an open condition or, in the case of a resolution decision, a threat to a causal link. Each of the conflicting constraints in the failure explanation is regressed through the planning decision, and the results are sorted according to type to form the new regressed explanation. As an example, consider that a new decision, df, adds a link from the initial state which results in a failure. The explanation, e1, is: (∅, ∅, ∅, {(tI, p, s)}, {(tI, ¬p)}, ∅). When e1 is regressed through the final decision, df, to obtain a new explanation, df⁻¹(e1), the initial state effect regresses to itself. However, since the link in the explanation was added by the decision df, this link regresses to the open condition which was a precondition of adding the link. The new explanation, df⁻¹(e1), is therefore (∅, ∅, ∅, ∅, {(tI, ¬p)}, {(p, s)}). The regression process continues up the failing path until it reaches the root of the search tree. When all of the paths in the subtree underneath the skeletal plan have failed, the failure reason at the root of the tree provides the reason for the failure of the case. It represents a combined explanation for all of the path failures. The case failure reason contains only the aspects of the new problem which were responsible for the failure. It may contain only a subset of the problem goals. Also, any of the initial state effects that are present in a leaf node explanation are also present in the reason for case failure. The next section describes how case failure reasons are used to build the case library.

A large complex domain means a great variety in the problems encountered. When problem size (measured in terms of the number of goals, n) is large, it is unlikely that a similar n-goal problem will have been seen before. It is therefore an advantage to store cases covering smaller subsets of goals, and to retrieve and replay multiple cases in solving a single large problem.
In implementing this storage strategy, decisions have to be made as to which goal combinations to store. Previous work (Veloso & Carbonell 1993) has reduced the size of the library by separating out connected components of a plan and storing their derivations individually. Since DERSNLP+EBL is based on a partial-order planner, it can replay cases in sequence and later add step orderings to accomplish the merging of their subplans. It therefore has a greater capability of reducing the size of the library, since it may store smaller problems. In the current work, we store multi-goal cases only when subplans for individual goals cannot be merged to reach a full solution.

With this aim in mind, we have implemented the following deliberative storage strategy. When a problem is attempted which contains n goals, a single-goal problem containing the first goal in the set is attempted and, if solved, the case covering this goal alone is stored in the library. Multi-goal problems to be stored are solved incrementally by increasing the problem size by one goal at a time. For example, if the problem just attempted solved goals G = (g1, g2, ..., gi) through a decision sequence Di, then a second decision sequence, Di+1, is stored whenever Di cannot be successfully extended to achieve the next goal gi+1. When this occurs, the explanation of replay failure is used to identify a subset of input goals that are responsible for the failure. A new derivation is produced which solves only these negatively interacting goals. This derivation is then stored in the library. Whenever the next goal in the set is solved through simple extension of the previous decision sequence, no case is stored which includes that goal. This means that each new case that is stored corresponds to either a single-goal problem or to a multi-goal problem containing negatively interacting goals.
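The incremental storage strategy just described can be sketched as follows. This is a simplified, hypothetical version: it stores the whole goal prefix when extension fails, whereas the paper uses the failure explanation to extract only the negatively interacting subset of goals. All names are ours.

```python
# Hypothetical sketch: goals are added one at a time; a case is stored for
# the first goal and for any goal set whose subplans could not be merged
# by simple extension of the previous decision sequence.

def build_library(goals, solve, extends):
    """Return the list of goal tuples worth storing as cases.

    solve(goal_set)      -- derive a plan for goal_set (stub here)
    extends(case, goal)  -- True if the stored derivation for `case`
                            extends to cover `goal` without backtracking
    """
    library = []
    current = (goals[0],)
    library.append(current)              # single-goal case always stored
    for g in goals[1:]:
        if extends(current, g):
            current = current + (g,)     # positive interaction: no new case
        else:
            current = current + (g,)
            solve(current)               # re-derive for interacting goals
            library.append(current)      # store the multi-goal repair case
    return library

# Toy run: g3 interacts negatively with the goals before it.
lib = build_library(
    ["g1", "g2", "g3"],
    solve=lambda gs: None,
    extends=lambda case, g: g != "g3",
)
print(lib)   # [('g1',), ('g1', 'g2', 'g3')]
```

Only failures trigger new storage, which is what keeps the library small in the experiments reported below.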
Moreover, all of the plan derivations stored from a single problem-solving episode are different, in that no decision sequence stored in the library is a prefix of another stored case. This strategy drastically reduces the size of the library. It means that goals that interact positively, in that they can be solved through one or more common steps, are stored individually in single cases. Goals that are negatively interacting (in that solving one means having to alter the solution to the other) are stored together as multi-goal cases. The more experience that the planner has in problem solving, the more of these multi-goal cases are discovered and stored, and the less likely it is that the planner has to backtrack over its replayed paths. The aim is to store a minimum number of cases such that all of the problems encountered in the future may be achieved through sequential replay of multiple stored cases.

Figure 6: Local organization of the case library. (The discrimination net branches from an input goal, AT-OB(OB1 ...), through alternative initial conditions, IG1 and IG2, to derivation 1 and derivation 2, with failure reasons r1 and r2 indexed beneath.)

Multi-goal cases are indexed in the library so as to censor the retrieval of their corresponding single-goal subproblems. The discrimination net depicted in Figure 6 indexes one fragment of the case library. This fragment includes all of the cases which solve a single input goal. Individual cases which solve this goal are represented one level lower in the net. Each case is indexed by its relevant initial state conditions. When one of these cases is retrieved for replay and the case fails, the alternative derivation corresponding to the additional interacting goal is added to the library and indexed directly under the failing case so as to censor its future retrieval. Before the case that experienced a failure is retrieved again, the retriever checks whether the extra goals responsible for the failure are present under the same initial conditions.
If so, the retrieval process returns the alternative case containing these extra goals. The case failure reason is thus used to direct retrieval away from the case which will repeat a known failure, and towards the case that avoids it.

Multi-case replay can result in a lower quality plan if care is not taken to avoid redundancy in step addition. When derivations for positively-interacting goals are stored individually, replaying each case in sequence may result in superfluous steps in the plan. When the first retrieved derivation is replayed, none of its replayed step additions will result in redundancy. However, when subsequent goals are solved through replay of additional cases, some step additions may be unnecessary, in that there are opportunities for linking the open conditions they achieve to earlier established steps. We solved this problem and obtained shorter plans by increasing the justification for replaying a step addition decision. In order to take advantage of linking opportunities, before replaying a new step addition, the replay process takes note of any links which are currently available but were not present in the previous case. When new linking opportunities are detected, the decision to add a new step is rejected. After replay, the new links are explored through the normal course of plan refinement. This careful screening of the step addition decisions improves the quality of plans in terms of the number of steps they contain. The next section describes an empirical study demonstrating the performance improvements provided by multi-case replay.

Figure 7: Replay performance in the logistics transportation domain. The cumulative CPU time (in secs) on problem sets of increasing problem size (1 to 6 goals) is plotted for each level of training (0 to 120 training problems solved). The insert shows total CPU time on all of the 6 test sets after increasing amounts of training.

Experimental Setup: We tested the improvement in planning performance provided by multi-case replay on problems drawn from the logistics transportation domain (Veloso & Carbonell 1993). Problem test sets increasing in problem size were randomly generated from this domain. The initial state of each problem described the location of 6 packages and 12 transport devices (6 planes and 6 trucks) within 6 cities, each containing a post office and an airport. See (Ihrig 1996) for similar tests on larger problems in a 15-city domain. The experiments were run in six phases. At the start of each phase n, the library was cleared and thirty test problems, each with n goals, were randomly generated. The planner was then repeatedly tested on these problems after increasing amounts of training on randomly generated problems of the same size. During training, problems were solved and their plan derivations were stored as described above. Multi-goal problems were stored only when retrieved cases failed. In these instances the failure information was used to extract the subset of input goals responsible for the failure, and a case which solved these goals alone was stored in the library.

Experimental Results: The results are shown in Figures 7 and 8. Figure 7 plots replay performance measured as the cumulative CPU time taken in solving the 30-problem sets tested in the 6 phases of the experiment. The figure plots replay performance (including case retrieval time) for the various levels of training prior to testing. For example, level 0 represents planning performance after no training. Since in this instance the case library is empty, level 0 represents from-scratch planning on the problem test set.
Level 20 represents testing after training on 20 randomly generated problems of the same size as the test set. The results indicate that this relatively small amount of training provided substantial improvements in planning performance. Moreover, performance improved with increased levels of training. The improvements provided by multi-case replay more than offset the added cost entailed in retrieving and matching stored cases.

Figure 8: Percentage of test problems solved within the time limit (500 secs) is plotted for 30-problem test sets containing problems of 1, 3 and 5 goals. This percentage increased with training (0 to 120 training problems solved). The insert shows the corresponding increase in the size of the case library.

Figure 8 reports the percentage of test problems solved within the time limit which was imposed on problem solving. It shows how training raised the problem-solving horizon, particularly in the later phases of the experiment when larger problems were tested. Storing cases on the basis of case failure kept the size of the library low (see insert, Figure 8), and retrieval costs were minimal. In the next section, we discuss the relationship to previous work in case storage and retrieval.

Related Work and Discussion

The current work complements and extends earlier treatments of case retrieval (Kambhampati & Hendler 1992; Veloso & Carbonell 1993). Replay failures are explained and used to avoid the retrieval of a case in situations where replay will mislead the planner. Failures are also used to construct repairing cases which are stored as alternatives to be retrieved when a similar failure is predicted. CHEF (Hammond 1990) learns to avoid execution-time failures by simulating and analyzing plans derived by reusing old cases.
In contrast, our approach attempts to improve planning efficiency by concentrating on search failures encountered in plan generation. We integrate replay with techniques adopted from the planning framework provided by SNLP+EBL (Kambhampati, Katukam, & Qu 1996). This framework includes methods for constructing conditions for predicting analytical failures in its search space. EBL techniques have been previously used to learn from problem-solving failures (Kambhampati, Katukam, & Qu 1996; Minton 1990; Mostow & Bhatnagar 1987). However, the goal of EBL has been to construct generalized control rules that can be applied to each new planning decision. Here we use the same analysis to generate case-specific rules for case retrieval. Rather than learn from all failures, we concentrate on learning from failures that result in having to backtrack over the replayed portion of the search path. As learned information is used as a censor on retrieval rather than as a pruning rule, soundness and completeness of the EBL framework are not as critical. Furthermore, keeping censors on specific cases avoids the utility problem commonly suffered by EBL systems.

Conclusion

In this paper, we described a framework for a case-based planning system that is able to exploit case failure to improve case retrieval. A case is considered to fail in a new problem context when the skeletal plan produced through replay cannot be extended by further planning effort to reach a solution. EBL techniques are employed to explain plan failures in the subtree directly beneath the skeletal plan. These failure explanations are then propagated up the search tree and collected at the root. The regressed plan failures form the reason for case failure, which is used to censor the case and to direct the retriever to a repairing case. Our results provide a convincing demonstration of the effectiveness of this approach.

References

Barrett, A., and Weld, D. 1994.
Partial order planning: evaluating possible efficiency gains. Artificial Intelligence 67:71-112.
Hammond, K. 1990. Explaining and repairing plans that fail. Artificial Intelligence 45:173-228.
Ihrig, L., and Kambhampati, S. 1994. Derivation replay for partial-order planning. In Proceedings AAAI-94.
Ihrig, L., and Kambhampati, S. 1995. An explanation-based approach to improve retrieval in case-based planning. In Current Trends in AI Planning: EWSP '95. IOS Press.
Ihrig, L. 1996. The Design and Implementation of a Case-based Planning Framework within a Partial Order Planner. Ph.D. Dissertation, Arizona State University.
Kambhampati, S., and Hendler, J. A. 1992. A validation structure based theory of plan modification and reuse. Artificial Intelligence 55:193-258.
Kambhampati, S.; Katukam, S.; and Qu, Y. 1996. Failure driven dynamic search control for partial order planners: An explanation-based approach. Artificial Intelligence. To appear.
Minton, S. 1990. Quantitative results concerning the utility of explanation-based learning. Artificial Intelligence 42:363-392.
Mostow, J., and Bhatnagar, N. 1987. FAILSAFE: A floor planner that uses EBG to learn from its failures. In Proceedings IJCAI-87, 249-255.
Veloso, M., and Carbonell, J. 1993. Toward scaling up machine learning: A case study with derivational analogy in PRODIGY. In Minton, S., ed., Machine Learning Methods for Planning. Morgan Kaufmann.
Steven Minton
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
minton@isi.edu

Abstract

In this paper, we consider the role that domain-dependent control knowledge plays in problem solving systems. Ginsberg and Geddis (Ginsberg & Geddis 1991) have claimed that domain-dependent control information has no place in declarative systems; instead, they say, such information should be derived from declarative facts about the domain plus domain-independent principles. We dispute their conclusion, arguing that it is impractical to generate control knowledge solely on the basis of logical derivations. We propose that simplifying abstractions are crucial for deriving control knowledge, and, as a result, empirical utility evaluation of the resulting rules will frequently be necessary to validate the utility of derived control knowledge. We illustrate our arguments with examples from two implemented systems.

Introduction

In an AAAI paper entitled "Is there any Need for Domain-Dependent Control Information?", Ginsberg and Geddis (1991) (henceforth G&G) consider whether domain-dependent control knowledge is necessary for domain-independent problem solving systems. Their conclusion is aptly summarized by their abstract, which consists of the single word: "No". They argue that all domain-dependent control information can be, and should be, derived from simple declarative facts about the domain plus domain-independent control knowledge. In this paper we reconsider the question raised by G&G and arrive at a different conclusion. We argue that in many cases, domain-dependent control information cannot, in a practical sense, be derived solely in the manner specified by G&G, due to the complexity of the reasoning that would be required. In such cases, empirical testing will be required to validate the utility of control knowledge. Consequently, one cannot dispense with the control knowledge as simply as they imply.
The contribution of this paper is not simply to rebut the claim made by G&G, but more importantly, to investigate reasoning strategies for producing domain-specific control knowledge. In this vein, we propose that, due to the complexity required to derive control rules, simplifying abstractions and empirical utility evaluation can be valuable tools.

We begin this paper by reviewing G&G's arguments. Despite our disagreement with their conclusion, we believe the issues they raise are important, and their basic observations have considerable merit. Nevertheless, they leave the reader with an over-simplified view of the world. In order to show this, we consider some concrete examples that illustrate the complexities involved in acquiring control knowledge. We begin by reviewing the learning process used by PRODIGY-EBL (Minton 1988), and then turn our attention to some recent experiments with MULTI-TAC (Minton 1996).

Ginsberg and Geddis (G&G) begin by introducing the following distinction. They propose that meta-level information consists of (at least) two different types: meta-level information about one's base-level knowledge, which they call modal knowledge, and meta-level information about what to do with the base-level knowledge, which they call control knowledge. An example of a modal sentence is: "I know that Iraq invaded Kuwait". An example of a control rule is: "To show that a country c is aggressive, first try showing that c invaded another country".

Meta-level information can be either domain-dependent or domain-independent. For instance, the control rule we just mentioned is domain-dependent. An example of a domain-independent rule is: "When attempting to prove that a goal is true, choose a rule with the fewest subgoals first".
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

The crux of G&G's argument that domain-dependent control knowledge is unnecessary rests on a simple, but important, observation: the behavior of any domain-independent problem solving algorithm depends only upon the syntactic form of the base-level domain theory. They present the following Prolog theory as an example:

hostile(c) :- allied(d, c), hostile(d).
hostile(c) :- invades-neighbor(c).
allied(d, c) :- allied(c, d).
invades-neighbor(Iraq).
allied(Iraq, Jordan).

To prove hostile(Jordan), one must eventually expand the subgoal invades-neighbor(Iraq), rather than pursuing the infinite chain of subgoals: hostile(Jordan), hostile(Iraq), hostile(Jordan)... Thus, one appropriate control rule for this domain is that the subgoals involving invades-neighbor should be tried before the subgoals involving hostile. This might be expressed as follows:

Delay(hostile)

However, as G&G point out, the rationale for this control rule is domain independent. If one were to take our Prolog theory and change the predicate names and constant names, the same control rule would still hold (modulo the fact that the predicate names would have to be changed in the rule as well), as illustrated below:

acute(c) :- congruent(d, c), acute(d).
acute(c) :- equilateral(c).
congruent(d, c) :- congruent(c, d).
equilateral(T1).
congruent(T1, T2).

Here, a similar control rule, this time involving acute vs. equilateral, is valid:

Delay(equilateral)

The intuition that control knowledge only depends on the form of the base-level theory is the basis for G&G's claims. Explaining their view, they quote David Smith's thesis (Smith 1985):

Good control decisions are not arbitrary; there is always a reason why they work. Once these reasons are uncovered and recorded, specific control decisions will follow logically from the domain-independent rationale, and simple facts about the domain.

G&G's paper is divided into two parts. In the first part, they prove that any control rule can be replaced by a domain-independent control rule and a modal sentence describing the structure of the search space (with only a minimal effect on efficiency). For example, for the "hostility theory" above, G&G abbreviate the required modal sentence using a "type" predicate, which captures the relevant structural aspects of the theory:

Type37(hostile, invades-neighbor, allied, Iraq, Jordan)

Since the "triangle theory" has the same structure, it would also be a Type37 theory:

Type37(acute, equilateral, congruent, T1, T2)

Once the requisite modal sentence is formalized, the domain-independent form of the control rule is simple to express, e.g.:

Type37(p1, p2, p3, o1, o2) → Delay(p1)

This rule indicates that for Type37 Prolog theories, with arbitrary predicate symbols p1, p2, p3 and constant symbols o1, o2, subgoals involving predicate p1 should be delayed.

As G&G admit, this construction "is in many ways trivial, since in order to transfer a control rule from one domain to another we need to know that the two search spaces are identical". Nevertheless, the intuition underlying the construction is more general, since, as G&G explain, the "arguments underlying domain-dependent control rules do not typically depend on the complete structure of the search space". Therefore, in the second part of their paper, they go on to examine several examples of domain-dependent control rules and to informally elaborate the rationales underlying these rules. For instance, they consider planning a trip from Stanford to MIT. The domain-dependent control knowledge is this: "When planning a long trip, plan the airplane component first". The rationale for this control rule is based on two domain-dependent observations. First, the three legs of the journey (one in the air and two on the ground) are non-interfering, except for scheduling concerns, and second, the subproblem of selecting air transportation is likely to be more constrained than arranging for ground transportation. G&G point out that, given these two facts about the domain, the aforementioned control rule can be inferred in a domain-independent fashion.

Our main quarrel with G&G is that they leave the reader with an over-simplified view of the world. In their introduction they state: "The claim we are making - and we intend to prove it - is that there is no place in declarative systems for domain-dependent control knowledge". In fact, the only formal proof in the paper involves the construction in part 1, which trivially divides every control rule into a modal fact and a domain-independent rule, accomplishing little more than a change in terminology. The examples analyzed in the second part of their paper are more interesting, but at the end of this section they state:

Although the control rules used by declarative systems might in principle be so specific as to apply to only a single domain, the domain-independent control rules in practice appear to be general enough to be useful across a wide variety of problems... Whether or not this observation is valid is more an experimental question than a theoretical question...

In this regard their work is really a proposal (not a proof) and the practical significance is left for us to address in this paper.

If we carefully consider their claim that domain-dependent control rules can (and should!) always be derived from simple facts about the domain, a number of questions arise. Are the required modal facts - the facts about the domain - always simple? Is domain-independent inference always sufficient? Will the derivation process be efficient? These questions are not considered by G&G, but as we will see, they can be of critical importance.
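The effect of the Delay(hostile) rule can be made concrete with a toy depth-limited backward chainer for the hostility theory. This is our own illustrative encoding, not G&G's formalism: the `delay_hostile` flag plays the role of the control rule by trying the non-recursive invades-neighbor clause before the recursive allied/hostile clause.

```python
# Ground facts of the hostility theory.
FACTS = {("invades_neighbor", "Iraq"), ("allied", "Iraq", "Jordan")}
COUNTRIES = ["Iraq", "Jordan"]

def allied(d, c):
    # allied(d, c) :- allied(c, d).  (symmetric closure of the ground facts)
    return ("allied", d, c) in FACTS or ("allied", c, d) in FACTS

def hostile(c, depth, delay_hostile, stats):
    # hostile(c) :- invades_neighbor(c).
    # hostile(c) :- allied(d, c), hostile(d).
    if depth == 0:                  # depth bound guards the naive ordering
        return False
    stats["expansions"] += 1
    base = ("invades_neighbor", c) in FACTS
    def recursive():
        return any(allied(d, c) and hostile(d, depth - 1, delay_hostile, stats)
                   for d in COUNTRIES if d != c)
    # Delay(hostile): try the base clause first; otherwise recurse first.
    return (base or recursive()) if delay_hostile else (recursive() or base)

for flag in (True, False):
    stats = {"expansions": 0}
    print(flag, hostile("Jordan", 10, flag, stats), stats["expansions"])
```

With the control rule, proving hostile(Jordan) takes two goal expansions; without it, the prover chases the hostile(Jordan)/hostile(Iraq) chain all the way to the depth bound before succeeding, and would not terminate at all without the bound.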
We will henceforth restrict our consideration to the practical application of G&G's approach, as presented in the second part of their article. Thus, we assume that the domain-independent principles that G&G would use to derive domain-dependent control knowledge are substantial and broadly applicable, unlike the trivial transformation discussed above.

Control Knowledge in Prodigy-EBL

Prodigy's Explanation-Based Learning component (Minton 1988; Minton et al. 1989) acquires domain-dependent search control rules by analyzing problem-solving traces. Commenting on PRODIGY-EBL, G&G propose that instead of acquiring new control rules, it would be preferable to acquire new base-level or modal information. In fact, this criticism is largely misguided, since, as we will explain, PRODIGY-EBL actually does learn domain-dependent modal information and then converts this information into search control rules, just as G&G recommend. Thus, in many respects, PRODIGY-EBL supports G&G's contention that control knowledge can be acquired by learning domain-dependent modal information, and then converting that information into domain-dependent control rules using domain-independent principles. However, as we will see, there are some crucial pieces missing from G&G's story.

Simplifying Abstractions in Prodigy-EBL

Let us examine PRODIGY-EBL's learning process in more detail. After the Prodigy problem solver finishes solving a problem (or subproblem), PRODIGY-EBL examines the trace, looking for instances of its target concepts. For example, one target concept is GOAL-INTERACTION. A goal interaction occurs when achieving one goal necessarily clobbers (i.e., deletes) another goal. Consider, for instance, a blocksworld problem in which both (On A B) and (On B C) are goals. If the solver achieves (On A B) before (On B C), then all ways of subsequently achieving (On B C) will clobber (On A B).
A solution is still possible, since (On A B) can be re-established after block B is put on top of block C, but this solution is suboptimal. By constructing an explanation (a proof) as to why the interaction occurred, and finding the weakest conditions under which the proof holds, PRODIGY-EBL is able to generalize from the example, learning that achieving (On x y) before (On y z) will result in a goal interaction. Notice that this is a modal fact, in the sense of G&G. This modal fact is then converted into a control rule (a goal ordering rule) which says that if both (On x y) and (On y z) are goals, then (On y z) should be solved first.¹ The domain-independent justification for this last step is actually rather complex. In general, it is preferable to avoid clobberings because they tend to decrease the probability of finding a solution and, in addition, when solutions are found, the finished plans tend to be longer (since clobbered goals have to be re-established). Prodigy does not explicitly carry out any of this reasoning - it is, in effect, hardcoded into the system. The concept of a goal-interaction provides a simplifying abstraction, so that we can ignore the complexities mentioned above, i.e., we assume that avoiding goal interactions is preferable. Avoiding goal interactions isn't necessarily preferable (as discussed by Minton, 1988). For instance, it may turn out that all paths leading to a solution, or the shortest paths to the solution, involve one or more goal interactions.

G&G propose that, given domain-dependent modal knowledge, one can infer the necessary domain-dependent control knowledge, but they do not investigate in any depth the nature of the reasoning process. In particular, they do not discuss any role for simplifying abstractions in the reasoning process.
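The learned ordering rule can be read as a simple sort over On-goals. The following minimal sketch is our own encoding of the rule, not PRODIGY's representation; it assumes the goals describe acyclic towers and orders them bottom-up, so that (On y z) always precedes (On x y):

```python
def order_on_goals(goals):
    """Order blocksworld goals of the form ("On", x, y) so that ("On", y, z)
    precedes ("On", x, y), i.e., towers are built bottom-up.
    Assumes the goals are acyclic (they describe towers)."""
    below = {x: y for (_, x, y) in goals}   # block -> the block it must sit on
    def tower_depth(goal):
        _, x, _ = goal
        # number of On-goals transitively below x's own goal
        d, cur = 0, below[x]
        while cur in below:
            d, cur = d + 1, below[cur]
        return d
    return sorted(goals, key=tower_depth)

print(order_on_goals([("On", "A", "B"), ("On", "B", "C")]))
# [('On', 'B', 'C'), ('On', 'A', 'B')]
```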
We use the term "abstraction" (albeit informally) because the essential characteristic of a simplifying abstraction is that it deals only with a restricted aspect of the domain or the search space. We use the term "simplifying" because the theory is often incomplete, or involves some other simplifying assumptions, so that worst-case complexities are avoided. We claim that simplifying abstractions are often necessary in the reasoning process. However, we also recognize that control decisions based on such abstractions will not always be optimal, because of the simplifications involved (e.g., the simplifying assumptions may be violated). For example, the control rule "Achieve (On y z) before (On x y)" that was learned in our blocksworld example is not guaranteed to recommend the best course of action. As we will explain later, PRODIGY-EBL is able to recover from this (to some extent) by empirically evaluating the utility of the learned control rules and "discarding" any rules with negative utility, but G&G's scheme does not allow for this option. In G&G's scheme, one would only have the modal fact "Achieving (On x y) before (On y z) results in a goal interaction", and the domain-independent control rule "Avoid goal interactions". Since there is no place in their scheme for domain-dependent control rules, they cannot determine the utility of such rules and discard them when appropriate.

¹ In an aside, G&G question why the rule learned by PRODIGY-EBL on this example states that (On y z) should be solved before (On x y). They point out that conceptually, the execution order is not necessarily the same as the planning order. The answer is that in Prodigy, the order in which goals are solved does determine their ordering in the plan, because Prodigy is a means-ends, total-order planner. So there is no distinction made between "planning order" and "execution order".

In fact, they do not discuss
the possibility that their reasoning process may produce counter-productive results.

Simplifying abstractions play a crucial role in PRODIGY-EBL. Our blocksworld example illustrated one type of simplifying abstraction, which is used when learning goal-ordering rules. Another type of simplifying abstraction underlies the entire EBL approach. Consider that, in general, the utility of a control rule depends on all of the following factors:

- the cost of matching the rule,
- the frequency with which the rule is applicable, and
- the benefit derived when the rule is applied.

The explanation process considers only the structure of the search space, as opposed to the quantitative costs/benefits of the decision process. Thus the first two considerations are ignored when proposing a control rule (and the third is only partially dealt with). This constitutes a simplifying abstraction, above and beyond the simplifying abstractions used within the explanation process. We know of no simple way to reason about these factors in order to derive high-utility control rules.

In order for G&G's scheme to be successful, one must not only be able to derive control information from the modal facts, but the process must be accomplished efficiently. In this section we have argued that simplifying abstractions can be essential for making the reasoning process efficient.² Many varieties of simplifying abstractions can be found in analytic learning systems that derive control knowledge.³ Some examples include: the "preservability" assumption used by Bhatnagar and Mostow's FAILSAFE (Bhatnagar & Mostow 1994), the focus on non-recursive explanations in Etzioni's STATIC (Etzioni 1993), the "plausible" domain theory used to identify chronic resource bottlenecks in Eskey and Zweben's constraint-based payload scheduler (Eskey & Zweben 1990), and the use of depth-limits, domain-axioms and np-conditions by Katukam and Kambhampati (1994).
The Value of Utility Evaluation

The control rules generated by PRODIGY-EBL are not guaranteed to be useful, because, as we have seen, they are based on simplifying assumptions. In fact, most rules typically have negative utility! As a result, PRODIGY-EBL empirically evaluates each rule's utility by measuring its costs and benefits during subsequent problem solving episodes. Thus, in the blocksworld, a control rule that orders the goal (On y z) before the goal (On x y) will probably be kept, since it is likely to improve the solver's performance. In contrast, a rule that orders (Clear x) after (On x y) will have low benefit in most cases (even though it is appropriate and has low match cost), because achieving these goals in the wrong order will not generally decrease efficiency. (Note that (Clear x) is both a precondition and postcondition of achieving (On x y).)

In general, finding a good set of control rules is a complex problem, even when empirical utility evaluation is used; one complicating factor is that the utility of an individual control rule is influenced by the choice of which other control rules to include in the system.

² The simplifying abstractions that we have discussed so far involve ignoring, or leaving out, certain considerations. As a result, the control rules that are generated are not guaranteed to be useful. We note that there are other types of simplifying abstractions where the "simplification" results from restricting the theory to some easily analyzed special cases (Etzioni 1993; Etzioni & Minton 1992). This is an important technique for simplifying the explanation process, but not particularly germane to our argument.

³ In some simple analytic learning systems, the theory used to derive control knowledge is identical to the theory used by the problem solver, but in such systems, all that is possible is a form of generalized caching, a very simple type of control.
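The cost/benefit bookkeeping described above can be sketched as a small evaluation loop. This is a simplification of ours, not PRODIGY-EBL's actual implementation: a rule pays its match cost on every episode, earns its savings only when it applies, and is discarded if its net utility is negative.

```python
def evaluate_rules(rules, episodes):
    """episodes: one dict per problem-solving episode, mapping a rule name to
    a (match_cost, applied, savings) triple observed in that episode."""
    kept = []
    for rule in rules:
        net = 0.0
        for ep in episodes:
            match_cost, applied, savings = ep.get(rule, (0.0, False, 0.0))
            net -= match_cost            # matching is paid every time
            if applied:
                net += savings           # benefit accrues only when the rule fires
        if net > 0:                      # discard negative-utility rules
            kept.append(rule)
    return kept

# Hypothetical measurements: one rule saves search time, the other only costs.
episodes = [
    {"order-on-goals": (0.1, True, 5.0), "clear-after-on": (0.1, True, 0.0)},
    {"order-on-goals": (0.1, True, 4.0), "clear-after-on": (0.1, False, 0.0)},
]
print(evaluate_rules(["order-on-goals", "clear-after-on"], episodes))
# ['order-on-goals']
```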
For instance, the rule mentioned above that orders (Clear x) after (On x y) can, perhaps surprisingly, be very useful in some circumstances, because if there are blocks on top of x, and they are mentioned in other On goals, it can be preferable to delay (Clear x) until all the On goals have been achieved. However, this is only necessary if, while achieving (Clear x), the system is likely to do some action that interferes with one of these other goals. Thus, the utility of this rule depends on the other rules in the system. (Minton (1988) and Gratch (1995) describe schemes for evaluating the utility of control rules.)

It is difficult to imagine that any domain-independent system could logically derive guaranteed, high-utility control rules for complex domains, without any empirical component (and we are unaware of the existence of any such system).⁴ However, this is what G&G are implicitly suggesting when they recommend that the control rules themselves can be dispensed with. They do note that probabilistic domain-dependent modal information may play a part in the derivation, but the problem, of course, is knowing what probabilistic information will be sufficient to guarantee the utility of a control rule. Thus, we claim that the rationale for a control rule, i.e., the derivation based on domain-independent principles and domain-dependent modal information, will in many cases be incomplete. (Note that we are not claiming that a complete rationale does not exist, but simply that for practical reasons, the rationale generated by a system will often be based on simplifying assumptions.) For this reason, we dispute G&G's claim that control knowledge has no place in declarative systems. We need to explicitly represent the rules so we can test them and so that we can indicate which rules should actually be used for problem solving.
⁴ Obviously in certain limited situations it is possible to derive control knowledge that is guaranteed to be useful, but this is relatively uninteresting.

[Figure 1: An instance of MMM with K = 2. A solution E' = {edge2, edge5} is indicated in boldface.]

Control Knowledge in Multi-TAC

In this section we describe some recent experiments with MULTI-TAC which further illustrate the points raised in the previous section. MULTI-TAC is a system that synthesizes heuristic constraint-satisfaction programs (Minton 1993; Minton & Underwood 1994; Minton 1996). The system has a small library of generic algorithm schemas, currently including a backtracking schema and an iterative-repair schema. Given one of these schemas, MULTI-TAC can specialize it to produce an application-specific program. As part of the specialization process, the system generates search control rules that can be incorporated into the schema. MULTI-TAC uses a set of sample problem instances - training instances - to evaluate the utility of the search control rules, in an effort to determine the best set of control rules. The system can be regarded as a learning system because it uses training instances to guide the specialization process. The process is not the same as in Prodigy, but is similar in some respects.

The experimental work that we describe here was done using MULTI-TAC's backtracking schema, which is based on the standard CSP backtracking algorithm. The backtracking algorithm operates by successively selecting a variable and then assigning it a value. Backtracking occurs when all values for a variable fail to satisfy the constraints. The backtracking schema includes several decision points, two of the most important being the variable and value selection points. MULTI-TAC synthesizes control rules for these decision points.

The results that we describe here were produced as part of an in-depth study with a problem called Minimum Maximal Matching (MMM).
MMM is an NP-complete problem described in (Garey & Johnson 1979). An instance of MMM consists of a graph G = (V, E) and an integer K. The problem is to determine whether there is a subset E' ⊆ E with |E'| ≤ K such that no two edges in E' share a common endpoint and every edge in E - E' shares a common endpoint with some edge in E'. See Figure 1 for an example.

To formulate MMM as a CSP, we represent each edge in the graph by a boolean variable. If an edge is assigned the value 1, this indicates that it is in E'; otherwise, it is assigned the value 0, indicating it is in E - E'. The constraints are specified as follows.

1. If edge_i is assigned 1, then every edge_j that shares an endpoint with edge_i must be assigned 0.
2. If edge_i is assigned 0, then there must exist an edge_j such that edge_i and edge_j share an endpoint, and edge_j is assigned 1.
3. The cardinality of the set of edges that are assigned 1 must be less than or equal to K.

In MULTI-TAC each of these three constraints is represented by a sentence in a form of first-order logic.

In the original experiments with this problem (Minton 1993), we created three different instance distributions for MMM and, for each distribution, we compared programs synthesized by computer scientists for that distribution to programs synthesized by MULTI-TAC for the same distribution. MULTI-TAC's programs were generally competitive with the hand-coded programs, and in some cases, superior. However, one problem with the original study (pointed out by Ginsberg) is that the results might be deceiving. It is possible that if one picked a good pre-existing domain-independent algorithm and tried it on MMM, it might perform much better than both the handcoded programs and MULTI-TAC's programs.
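The three constraints above translate directly into a checker over boolean edge assignments. The following sketch uses our own representation (edges as vertex pairs, assignments as index-to-0/1 maps; this is not MULTI-TAC's first-order encoding), together with a brute-force solver that is only practical for tiny instances:

```python
from itertools import combinations, product

def is_mmm_solution(edges, assignment, K):
    chosen = [i for i, v in assignment.items() if v == 1]
    share = lambda i, j: bool(set(edges[i]) & set(edges[j]))
    # Constraint 1: no two edges assigned 1 share an endpoint (E' is a matching).
    if any(share(i, j) for i, j in combinations(chosen, 2)):
        return False
    # Constraint 2: every edge assigned 0 shares an endpoint with a chosen edge
    # (the matching is maximal).
    if any(v == 0 and not any(share(i, j) for j in chosen)
           for i, v in assignment.items()):
        return False
    # Constraint 3: at most K edges are assigned 1.
    return len(chosen) <= K

def solve_mmm(edges, K):
    # Exhaustive enumeration: fine for illustration, exponential in |E|.
    for bits in product((0, 1), repeat=len(edges)):
        assignment = dict(enumerate(bits))
        if is_mmm_solution(edges, assignment, K):
            return assignment
    return None

# A path on four vertices: the only maximal matching of size 1 is the middle edge.
print(solve_mmm([(0, 1), (1, 2), (2, 3)], K=1))
# {0: 0, 1: 1, 2: 0}
```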
To address this, we recently compared MULTI-TAC's programs (synthesized in the original study) to some well-known generic algorithms: TABLEAU (Crawford & Auton 1993), GSAT (Selman, Levesque, & Mitchell 1992), and FC-D (forward checking + minimum domain ordering, a standard CSP algorithm). We found that MULTI-TAC outperformed these programs (Minton 1996), and we will briefly summarize some of the results of the comparison between MULTI-TAC and TABLEAU, the "second-place" finisher, simply to illustrate the utility of the control knowledge produced by MULTI-TAC. We will then consider why this knowledge would be difficult to derive solely through domain-independent logical analysis.

As in the original experiments, we tested the programs using 100 randomly-selected test instances from each distribution. For each distribution, we also used the same time bound (per instance) that was used in the original study: 10 seconds for the first two distributions and 45 seconds for the third.⁵ Table 1 shows a CPU-time comparison between MULTI-TAC's programs and an implementation of TABLEAU provided by Crawford (TABLEAU's developer). For each distribution, the first column shows the total time used on the 100 instances. Since the programs did not necessarily solve each instance within the time bound, the second column shows the number of solved instances.

⁵ We did not include the time necessary for MULTI-TAC to synthesize its programs in the comparison, since as discussed in (Minton 1993), we assume that MULTI-TAC will only be used in applications where compile time is not a critical factor.

             Distribution 1         Distribution 2         Distribution 3
             CPU sec  num solved    CPU sec  num solved    CPU sec  num solved
  Multi-TAC    5.9       100          43.5       99          515       92
  Tableau     52.3       100         892.5       26         3002       43

Table 1: Comparison: MULTI-TAC vs. TABLEAU

The CPU-time results show that MULTI-TAC's programs outperformed TABLEAU, demonstrating the value of the control knowledge produced by MULTI-TAC.
However, other than establishing this fact, the performance results are tangential to this discussion. The interesting part is why MULTI-TAC outperformed TABLEAU. (Out of necessity, we can only summarize the detailed results described in (Minton 1996).)

The first two distributions were relatively easy, as described in the original study (Minton 1993), and MULTI-TAC synthesized essentially the same program for both distributions. TABLEAU did not perform as well as MULTI-TAC on these distributions, but a closer analysis shows that TABLEAU was partly handicapped by the clause form representation that it employs. Representing the third MMM constraint in clause form involved the addition of a potentially large (but polynomial) number of auxiliary variables used to "count" the size of the subset E'. The instances in the second distribution were particularly large, so there was a significant overhead for TABLEAU. However, if one looks at the search effort involved, it turns out that TABLEAU's heuristics were reasonably effective in the first two distributions. For instance, if TABLEAU is given 5 times more time on each instance in the second distribution, it can solve most of the instances.

In contrast, the third distribution was more difficult, even though the instances were smaller than those in the second distribution. Again, MULTI-TAC outperformed TABLEAU, but here we find that MULTI-TAC was actually searching much more effectively than TABLEAU; the overhead due to auxiliary variables was not sufficient to explain TABLEAU's difficulties on these instances. For instance, given 5 times more time, TABLEAU still performs relatively poorly.

Examining the program synthesized by MULTI-TAC for the third distribution, we find that the variable and value ordering heuristics MULTI-TAC used were very different than those it used for the first two distributions. For the first two distributions, MULTI-TAC used the following heuristics:⁶

value ordering: Try 1 before 0.
variable ordering: Prefer edges with the most adjacent edges.

For the third distribution, MULTI-TAC used these heuristics:

value ordering: Try 0 before 1.
variable ordering:
1. Prefer edges that have no adjacent edges along one endpoint.
2. Break ties by preferring edges with the most endpoints such that all edges incident along that endpoint are assigned. (I.e., an edge is preferred if all the adjacent edges along one endpoint are assigned, or, even better, if all adjacent edges along both endpoints are assigned.)
3. If there are still ties, prefer an edge with the fewest adjacent edges.

Notice that the rules in the first set, considered individually, are contrary to the rules in the second set. (This was considered a surprising result when the original study was conducted.) Nevertheless, when each rule set is considered as a single entity, they make sense. Intuitively speaking, the programs for the first and second distributions operate by attempting to incrementally construct the subset of edges involved in the matching (the subset E'), whereas the program for the third distribution operates by attempting to construct the complement of this set (E - E'). We speculate that in the third distribution, because the graphs are more dense, the system could more accurately guess which edges should be in E - E', which explains the change in strategy.

The differences in these rule sets illustrate the difficulty of deriving control information directly from domain knowledge. Here we have two sets of control rules for what normally would be considered the same domain (in standard AI terms); of course, in reality, the domains are not the same, because the instances were selected from different distributions. Although the instance distribution is not normally considered domain knowledge, the experimental results illustrate that the distributions can be critical for determining what types of control rules are appropriate. (This can be even more dramatically demonstrated by running the program derived for the third distribution on the second distribution, or vice versa; in either case, the performance suffers, as shown in (Minton 1993).)

⁶ When reading the heuristic rules, recall that each variable in MMM corresponds to an edge.

The Role of Simplifying Abstractions

G&G raise the possibility that probabilistic knowledge about the domain can be used to derive control rules. For instance, in one of their examples, they suggest that a problem solver could derive control knowledge by reasoning about how tightly constrained different subproblems are. Given our results, probabilistic information would indeed seem necessary, but the difficulty, as we have said earlier, is knowing what sort of probabilistic domain knowledge will be sufficient for producing good control rules (and formalizing the requisite inference rules). In fact, MULTI-TAC does use concepts such as "most constrained first" to derive control rules; however, these concepts are only used as simplifying abstractions, and the resulting control rules, while plausible, are certainly not guaranteed to be effective.

For example, MULTI-TAC derives variable-ordering rules by attempting to operationalize generic heuristics such as "choose the most-constraining variable first" and "choose the most-constrained variable first".⁷ Consider the variable-ordering rule "Prefer edges with the most adjacent edges first" which, as discussed above, was used for the first and second distributions.
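Before turning to how these rules are justified, note that the two rule sets can be rendered as pluggable ordering functions for a backtracking solver. In this sketch the function names are ours, and we collapse the third distribution's three-step variable ordering into a single fewest-adjacent-edges criterion; each function returns the variable to branch on and the value order to try:

```python
def degree(edges, i):
    # Number of edges adjacent to edge i (sharing an endpoint with it).
    u, v = edges[i]
    return sum(1 for j, (a, b) in enumerate(edges)
               if j != i and ({a, b} & {u, v}))

# Distributions 1-2: incrementally build E' itself.
def order_dist12(edges, unassigned):
    var = max(unassigned, key=lambda i: degree(edges, i))  # most adjacent edges
    return var, [1, 0]                                     # try 1 before 0

# Distribution 3: build the complement E - E' instead.
def order_dist3(edges, unassigned):
    var = min(unassigned, key=lambda i: degree(edges, i))  # fewest adjacent edges
    return var, [0, 1]                                     # try 0 before 1

edges = [(0, 1), (1, 2), (2, 3)]
print(order_dist12(edges, [0, 1, 2]))  # branches on the middle edge first
print(order_dist3(edges, [0, 1, 2]))   # branches on an end edge first
```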
MULTI-TAC determines that this is a plausible rule by reasoning about the first MMM constraint (i.e., if an edge has value 1, then all adjacent edges must have value 0). The system reasons that, according to the first constraint, a variable with many neighbors is more likely to constrain the most other variables. A rule which makes exactly the opposite recommen- dation, “Prefer edges with the fewest adjacent edges first” (used for the third distribution) is produced by reasoning about the second constraint (i.e., if an edge is assigned 0 then some adjacent edge must be assigned 1). The system reasons that, according to the second constraint, an edge with few neighbors will have a more constraining effect on the other variables. “Most Constraining”, as defined in MULTI-TAC, is a simplifying abstraction in several respects. First, the constraints are analyzed individually, which simplifies the analysis greatly. Second, the notion of “constrain- ing” that we use is very simplistic - variable A can constrain variable B if instantiating variable A can eliminate any of variable B’s possible values. Third, the system relies on a completely qualitative style of reasoning, rather than on a quantitative method. (As illustrated in our example, the system can infer that “more neighbors -+ more constraining”, but it cannot determine “how much” more constraining.) As a result tradeoffs cannot be logically analyzed. It is because of such simplifications that the derived control rules must be regarded as plausible, and why empirical utility evaluation is necessary to determine which rules really are sensible. (Of course, even if the recommendations made by the control rules were guar- anteed to be correct, utility evaluation would still be required to deal with the considerations outlined in the 7 MULTI-TAC includes two different learning components for producing control rules (Minton 1996), but for peda- gogical purposes we focus on only one of them here, the analytic mechanism. 
Our points could be illustrated with the other component as well.

previous section, i.e., their match cost, application frequency, and benefit.)

As we have discussed, empirical utility evaluation enables a system to bypass very complex analyses that would be impractical to expect a system to carry out, analyses that even experts might have difficulty with. A relatively simple example of the complexities we are referring to is illustrated by the rule sets above. If one thinks about the two rule sets, and why they each work, it should be apparent that the variable-ordering and value-ordering rules interact: the choice of value-ordering rule affects the choice of variable-ordering rule, and vice versa. This is, in our opinion, a potentially complex issue, and we are not aware of any analysis, probabilistic or otherwise, of this phenomenon in the existing literature. But clearly this is the type of issue that must be completely analyzed before there can exist a fully capable domain-independent reasoning system for deriving control knowledge.

It is instructive to consider what sort of theory would be required to be able to prove that, in any given circumstance, a variable is most likely to be most constraining. Presumably one would need to prove that after instantiating that variable, the average size of the explored search tree would be less than that for any other variable. It seems unlikely that there exists any tractable way of generating such proofs.

In general, the complexities involved in proving, through logical inference, that a search control rule is both definitely correct and definitely useful are daunting. That is why we believe empirical evaluation has an important role to play in finding good control rules.

Conclusions

This paper has made three contributions. First, we have pointed out the importance of simplifying abstractions in deriving control rules (a point that has not received the attention it deserves).
While much of the work with PRODIGY-EBL and MULTI-TAC depended heavily on simplifying abstractions to generate control rules, previous papers on these systems focused on other points. G&G do not discuss the role of simplifying abstractions when discussing how control rules could be derived. We contend that simplifying abstractions often play a critical role in systems that derive control knowledge for non-trivial domains.

Second, we have argued that, since the logical derivations used to produce control rules often involve simplifications, empirical utility evaluation can play a valuable role in determining whether a control rule is actually useful.

Finally, we have critically examined G&G's claim that domain-dependent control knowledge has no place in declarative systems. While their ideas are intriguing, their proof that domain-dependent control knowledge can be divided into a domain-dependent modal fact and a domain-independent control rule establishes only a much narrower point, one chiefly of technical interest. The more interesting practical question is, as they say, more suited for experimental studies. We have not disproven their claim (in fact, it is not clear that such a claim can be proven or disproven), but we have presented concrete, implemented examples challenging their assumptions. In particular, we argued that in many cases we will need to empirically evaluate the utility of control rules. In such cases, we cannot easily dispense with the rules themselves.

While this paper has focused on our differences with G&G's conclusions, we do not wish to draw attention away from their substantial contributions. G&G contend, rightly in our view, that domain-independent principles play a primary role in acquiring control knowledge.
The questions we have investigated here, and which deserve further study, are concerned with the nature of those principles and how the reasoning process can and should be engineered in practical systems.

Acknowledgements

We are indebted to Matt Ginsberg, for many provocative (and enjoyable) conversations, and to Jimi Crawford, for providing the Tableau implementation we used in our experiments.

References

Bhatnagar, N., and Mostow, J. 1994. On-line learning from search failures. Machine Learning 15(1).

Crawford, J., and Auton, L. 1993. Experimental results on the crossover point in satisfiability problems. In AAAI-93 Proceedings.

Eskey, M., and Zweben, M. 1990. Learning search control for constraint-based scheduling. In AAAI-90 Proceedings.

Etzioni, O., and Minton, S. 1992. Why EBL produces overly-specific knowledge: A critique of the PRODIGY approaches. In Proceedings of the Ninth International Machine Learning Conference.

Etzioni, O. 1993. Acquiring search control knowledge via static analysis. Artificial Intelligence 62(1).

Garey, M., and Johnson, D. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Co.

Ginsberg, M., and Geddis, D. 1991. Is there any need for domain-dependent control information? In AAAI-91 Proceedings.

Gratch, J. 1995. On efficient approaches to the utility problem in adaptive problem solving. Technical Report 1916, Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, Illinois.

Katukam, S., and Kambhampati, S. 1994. Learning explanation-based search control rules for partial order planning. In AAAI-94 Proceedings.

Minton, S., and Underwood, I. 1994. Small is beautiful: A brute-force approach to learning first-order formulas. In AAAI-94 Proceedings.

Minton, S.; Carbonell, J.; Knoblock, C.; Kuokka, D.; Etzioni, O.; and Gil, Y. 1989. Explanation-based learning: A problem solving perspective. Artificial Intelligence 40:63-118.

Minton, S. 1988.
Learning Search Control Knowledge: An Explanation-based Approach. Boston, Massachusetts: Kluwer Academic Publishers. Also available as Carnegie-Mellon CS Tech. Report CMU-CS-88-133.

Minton, S. 1993. Integrating heuristics for constraint satisfaction problems: A case study. In AAAI-93 Proceedings.

Minton, S. 1996. Automatically configuring constraint satisfaction programs: A case study. Constraints 1(1).

Selman, B.; Levesque, H.; and Mitchell, D. 1992. A new method for solving hard satisfiability problems. In AAAI-92 Proceedings.

Smith, D. 1985. Controlling Inference. Ph.D. Dissertation, Computer Science Dept., Stanford University.
Tim Oates and Paul R. Cohen
Computer Science Department, EGRC
University of Massachusetts
Box 34610, Amherst, MA 01003-4610
oates@cs.umass.edu, cohen@cs.umass.edu

Abstract

Providing a complete and accurate domain model for an agent situated in a complex environment can be an extremely difficult task. Actions may have different effects depending on the context in which they are taken, and actions may or may not induce their intended effects, with the probability of success again depending on context. We present an algorithm for automatically learning planning operators with context-dependent and probabilistic effects in environments where exogenous events change the state of the world. Empirical results show that the algorithm successfully finds operators that capture the true structure of an agent's interactions with its environment, and avoids spurious associations between actions and exogenous events.

Introduction

Research in classical planning has assumed that the effects of actions are deterministic and the state of the world is never altered by exogenous events, simplifying the task of encoding domain knowledge in the form of planning operators (Wilkins 1988). These assumptions, which are unrealistic for many real-world domains, are being relaxed by current research in AI planning systems (Kushmerick, Hanks, & Weld 1994) (Mansell 1993). However, as planning domains become more complex, so does the task of generating domain models. In this paper, we present an algorithm for automatically learning planning operators with context-dependent and probabilistic effects in environments where exogenous events change the state of the world.

We approach the problem of learning planning operators by first defining the space of all possible operators, and then developing efficient and effective methods for exploring that space. Operators should tell us when and how the state of an agent's world changes in response to specific actions.
The degree to which an operator chosen from operator space captures such structure can be evaluated by looking at the agent's past experiences. Has the state of the world changed in the manner described by the operator significantly often in the past? Exploration of operator space is performed by an algorithm called Multi-Stream Dependency Detection (MSDD) that was designed to find dependencies among categorical values in multiple streams of data over time (Oates et al. 1995) (Oates & Cohen 1996). MSDD provides a general search framework, and relies on domain knowledge both to guide the search and to reason about when to prune. Consequently, MSDD finds planning operators efficiently in an exponentially sized space.

Our approach differs from other work on learning planning operators in that it requires minimal domain knowledge; there is no need for access to advice or examples from domain experts (Wang 1995), nor for initial approximate planning operators (Gil 1994). We assume that the learning agent's initial domain model is weak, consisting only of a list of the different types of actions that it can take. The agent initially knows nothing of the contexts in which actions produce changes in the environment, nor what those changes are likely to be. To gather data for the learning algorithm, the agent explores its domain by taking random actions and recording state descriptions. From the agent's history of state descriptions, the learning algorithm produces planning operators that characterize how the agent's world changes when it takes actions in particular contexts.

Domain Model

Our approach to learning planning operators requires minimal domain knowledge: we assume that the learning agent has knowledge of the types of actions that it can take, the sensors by which it can obtain the state of the world, and the values that can be returned by those sensors. With this information, we define a space of possible planning operators.
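A weak domain model of this kind can be written down in a few lines. The following sketch is our own illustration, not code from the paper; the sensor and action names are borrowed from the block-painting robot introduced in the next section.

```python
# Sketch (our own encoding): a weak domain model is just the action
# names plus, for each sensor, the tokens that sensor can return.
domain = {
    "actions": ["dry", "new", "paint", "pickup"],
    "sensors": {
        "BP": ["BP", "NOT-BP"],  # block painted?
        "GC": ["GC", "NOT-GC"],  # gripper clean?
        "GD": ["GD", "NOT-GD"],  # gripper dry?
        "HB": ["HB", "NOT-HB"],  # holding block?
    },
}

# The model says nothing about when actions succeed or what they do;
# that is exactly what the learner must fill in from experience.
n_states = 1
for tokens in domain["sensors"].values():
    n_states *= len(tokens)
print(n_states)  # 16 distinct world states for this domain
```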
The Agent and its Environment

The agent is assumed to have a set of m sensors, S = {s_1, ..., s_m}, and a set of n possible actions, A = {a_1, ..., a_n}. At each time step, each sensor produces a single categorical value, called a token, from a finite set of possible values. Let T_i = {t_i1, ..., t_ik} be the token values associated with the ith sensor, and let s_i^t denote the value obtained from sensor s_i at time t. Each sensor describes some aspect of the state of the agent's world; for example, s_2 may indicate the state of a robot hand, taking values from T_2 = {open, closed}. The state of the world as perceived by the agent at time t, denoted x(t), is simply the set of values returned by all of the sensors at that time. That is, x(t) = {s_i^t | 1 <= i <= m} is a state vector.

(Footnote: Clearly, random exploration may be inefficient; nothing in our approach precludes non-random exploration.)

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Agent actions are encoded in a special sensor, s_a, so that s_a^t indicates which of the possible actions was attempted at time t. In general, s_a^t ∈ T_action = A ∪ {none}. For any time step t on which the agent does not take an action, s_a^t = none. Actions require one time step, only one action is allowed on any time step, and resulting changes in the environment appear a constant number of time steps later. (These restrictions are not required by the MSDD algorithm, but are instituted for the particular domain of learning planning operators.) Without loss of generality, we will assume that the effects of actions appear one time step later. We assume that the state of the world can change due to an agent action, an exogenous event, or both simultaneously. The latter case makes the learning problem more difficult.

Consider a robot whose task it is to pick up and paint blocks.
(This domain is adapted from (Kushmerick, Hanks, & Weld 1994), where it is used to explicate the Buridan probabilistic planner.) The robot has four sensors and can determine whether it is holding a block (HB), has a dry gripper (GD), has a clean gripper (GC), and whether the block is painted (BP). In addition, the robot can take one of four actions. It can dry its gripper (DRY), pick up the block (PICKUP), paint the block (PAINT), or obtain a new block (NEW). In terms of the notation developed above, the robot's initial domain model can be summarized as follows:

S = {ACTION, BP, GC, GD, HB}
A = {DRY, NEW, PAINT, PICKUP}
T_ACTION = {DRY, NEW, PAINT, PICKUP, NONE}
T_BP = {BP, NOT-BP}, T_GC = {GC, NOT-GC}
T_GD = {GD, NOT-GD}, T_HB = {HB, NOT-HB}

Planning Operators

Operator representations used by classical planners, such as STRIPS, often include a set of preconditions, an add list, and a delete list (Fikes & Nilsson 1971). The STRIPS planner assumed that actions taken in a world state matching an operator's preconditions would result in the state changes indicated by the operator's add and delete lists without fail. We take a less restrictive view, allowing actions to be attempted in any state; effects then depend on the state in which actions are taken. Specifically, an operator o = <a, c, e, p> specifies an action, a context in which that action is expected to induce some change in the world's state, the state that results from the change, and the probability of the change occurring. If the state of the world matches the context c and the agent takes action a, then on the next time step the state of the world will match the effects e with probability p.

Contexts and effects of operators are represented as multitokens. A multitoken is an m-tuple that specifies for each sensor either a specific value or an assertion that the value is irrelevant. To denote irrelevance, we use a wildcard token *, and we define the set T_i* = T_i ∪ {*}.
A multitoken is any element of the cross product of all of the T_i*; that is, multitokens are drawn from the set T_1* × ... × T_m*. Consider a two-sensor example for which T_1 = T_2 = {A, B}. Adding wildcards, T_1* = T_2* = {A, B, *}. The space of multitokens for this example ({A, B, *} × {A, B, *}) is the following set: {(A A), (A B), (A *), (B A), (B B), (B *), (* A), (* B), (* *)}.

An operator's context specifies a conjunct of sensor token values that serve as the operator's precondition. For any given action, the values of some sensors will be relevant to its effects and other sensor values will not. For example, it might be more difficult for a robot to pick up a block when its gripper is wet rather than dry, but the success of the pickup action does not depend on whether the block is painted. A multitoken represents this contextual information as (* * GD *), wildcarding irrelevant sensors (e.g. the sensor that detects whether a block is painted) and specifying values for relevant sensors (the sensor that detects whether the gripper is dry).

While contexts specify features of the world state that must be present for operators to apply, effects specify how features of the context change in response to an action. We allow effects to contain non-wildcard values for a sensor only if the context also specifies a non-wildcard for that sensor. We also require that each non-wildcard in the effects be different from the value given by the context for the corresponding sensor. That is, operators must describe what changes in response to an action, not what stays the same. This restriction is similar to Wang's use of delta-state (Wang 1995), the difference between the states of the world before and after the execution of an action, to drive learning of operator effects. Likewise, Benson (Benson 1995) uses differences between state descriptions to identify the effects of actions when learning from execution traces generated by domain experts.
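The multitoken space and the wildcard-matching rule above are easy to make concrete. The following is our own sketch, not the paper's code: it enumerates the nine multitokens of the two-sensor example and shows a matching test for a context like (* * GD *).

```python
from itertools import product

# Multitokens over two sensors with T1 = T2 = {A, B}, extended
# with the wildcard token '*'.
tokens = ("A", "B", "*")
multitokens = list(product(tokens, repeat=2))
print(len(multitokens))  # 9, as in the two-sensor example

def matches(multitoken, state):
    """A state matches a multitoken if they agree on every
    non-wildcard position; '*' matches any token."""
    return all(m == "*" or m == s for m, s in zip(multitoken, state))

# The context (* * GD *) matches any state whose third sensor
# (gripper dry) reads GD, regardless of the other sensors.
print(matches(("*", "*", "GD", "*"), ("BP", "GC", "GD", "NOT-HB")))  # True
```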
Assume that our block-painting robot's interactions with the world are governed by the following rules: The robot can successfully pick up a block 95% of the time when its gripper is dry, but can do so only 50% of the time when its gripper is wet. If the gripper is wet, the robot can dry it with an 80% chance of success. If the robot paints a block while holding it, the block will become painted and the robot's gripper will become dirty without fail. If the robot is not holding the block, then painting it will result in a painted block and a dirty gripper 20% of the time, and a painted block the remaining 80% of the time. Finally, when the robot requests a new block, it will always find itself in a state in which it is not holding the block, the block is not painted, and its gripper is clean; however, the gripper will be dry 30% of the time and wet 70% of the time. This information is summarized in our representation of planning operators in Figure 1.

<pickup, (* * GD NOT-HB), (* * * HB), 0.95>
<pickup, (* * NOT-GD NOT-HB), (* * * HB), 0.5>
<dry, (* * NOT-GD *), (* * GD *), 0.8>
<paint, (NOT-BP * * *), (BP * * *), 1.0>
<paint, (* GC * HB), (* NOT-GC * *), 1.0>
<paint, (* GC * NOT-HB), (* NOT-GC * *), 0.2>
<new, (BP * * *), (NOT-BP * * *), 1.0>
<new, (* NOT-GC * *), (* GC * *), 1.0>
<new, (* * * HB), (* * * NOT-HB), 1.0>
<new, (* * GD *), (* * NOT-GD *), 0.7>
<new, (* * NOT-GD *), (* * GD *), 0.3>

Figure 1: Planning operators in the block-painting robot domain.

The MSDD Algorithm

The MSDD algorithm finds dependencies (unexpectedly frequent or infrequent co-occurrences of values) in multiple streams of categorical data (Oates et al. 1995) (Oates & Cohen 1996). MSDD is general in that it performs a simple best-first search over the space of possible dependencies, terminating when a user-specified number of search nodes have been explored. It is adapted for specific domains by supplying domain-specific evaluation functions.
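The <a, c, e, p> operators of Figure 1 translate directly into data. The sketch below is our own code, not the authors': it encodes three of the operators over the sensor order (BP, GC, GD, HB) and applies one of them to a state, with wildcarded effect positions left unchanged.

```python
import random

# Sketch: three of Figure 1's operators as <action, context, effects, p>
# tuples; '*' is the wildcard.
operators = [
    ("pickup", ("*", "*", "GD", "NOT-HB"), ("*", "*", "*", "HB"), 0.95),
    ("dry",    ("*", "*", "NOT-GD", "*"),  ("*", "*", "GD", "*"),  0.8),
    ("paint",  ("NOT-BP", "*", "*", "*"),  ("BP", "*", "*", "*"),  1.0),
]

def apply_operator(state, op, rng=random):
    """If the state matches op's context, apply its effects with
    probability p; otherwise the state is unchanged."""
    action, context, effects, p = op
    if all(c == "*" or c == s for c, s in zip(context, state)):
        if rng.random() < p:
            return tuple(s if e == "*" else e for e, s in zip(effects, state))
    return state

state = ("NOT-BP", "GC", "GD", "NOT-HB")
print(apply_operator(state, operators[2]))  # paint succeeds with p = 1.0:
# ('BP', 'GC', 'GD', 'NOT-HB')
```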
MSDD assumes a set of streams, S, such that the jth stream, s_j, takes values from the set T_j. We denote a history of multitokens obtained from the streams at fixed intervals from time t_1 to time t_2 as H = {x(t) | t_1 <= t <= t_2}. For example, the three streams shown below constitute a short history of twelve multitokens, the first of which is (A C B).

MSDD explores the space of dependencies between pairs of multitokens. Dependencies are denoted prec => succ, and are evaluated with respect to H by counting how frequently an occurrence of the precursor multitoken prec is followed k time steps later by an occurrence of the successor multitoken succ. k is called the lag of the dependency, and can be any constant positive value. In the history shown below, the dependency (A C *) => (* * A) is strong. Of the five times that we see the precursor (A in stream 1 and C in stream 2) we see the successor (A in stream 3) four times at a lag of one. Also, we never see the successor unless we see the precursor one time step earlier.

Stream 1: A D A C A B A B D B A B
Stream 2: C B C D C B C A B D C B
Stream 3: B A D A B D C A C B D A

MSDD performs a general-to-specific best-first search over the space of possible dependencies. Each node in the search tree contains a precursor and a successor multitoken. The root of the tree is a precursor/successor pair composed solely of wildcards; for the three streams shown earlier, the root of the tree would be (* * *) => (* * *). The children of a node are its specializations, generated by instantiating wildcards with tokens. Each node inherits all the non-wildcard tokens of its parent, and it has exactly one fewer wildcard than its parent. Thus, each node at depth d has exactly d non-wildcard tokens distributed over the node's precursor and successor.

The space of two-item dependencies is clearly exponential. MSDD performs a systematic search, thereby avoiding redundant generation without requiring lists of open and closed nodes.
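The strength of a dependency in the twelve-step example history shown earlier can be checked by simple counting. This sketch (our own, not the paper's code) counts occurrences of the precursor (A C *) and of the successor (* * A) at a lag of one:

```python
# The three example streams, one character per time step.
s1 = "ADACABABDBAB"
s2 = "CBCDCBCABDCB"
s3 = "BADABDCACBDA"

prec = succ = 0
for t in range(len(s1) - 1):           # lag k = 1
    if s1[t] == "A" and s2[t] == "C":  # precursor (A C *)
        prec += 1
        if s3[t + 1] == "A":           # successor (* * A)
            succ += 1
print(prec, succ)  # 5 4: five precursors, four followed by the successor
```

The counts reproduce the figures quoted in the text: the precursor occurs five times, and the successor follows it four times at a lag of one.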
Specifically, the children of a node are generated by instantiating only those streams to the right of the rightmost non-wildcarded stream in that node. This method ensures that each dependency is explored at most once, and it facilitates reasoning about when to prune. For example, all descendants of the node (* A *) => (B * *) will have wildcards in streams one and three in the precursor, an A in stream two in the precursor, and a B in stream one in the successor. The reason is that these features are not to the right of the rightmost non-wildcard, and as such cannot be instantiated with new values. If some aspect of the domain makes one or more of these features undesirable, then the tree can be safely pruned at this node.

Refer to (Oates & Cohen 1996) for a more complete and formal statement of the MSDD algorithm.

Learning Planning Operators with MSDD

To learn planning operators, MSDD first searches the space of operators for those that capture structure in the agent's interactions with its environment; then, the operators found by MSDD's search are filtered to remove those that are tainted by noise from exogenous events, leaving operators that capture true structure. This section describes both processes.

First, we map from our operator representation to MSDD's dependency representation. Consider the planning operator described earlier:

<pickup, (* * NOT-GD NOT-HB), (* * * HB), 0.5>

The context and effects of this operator are already represented as multitokens. To incorporate the idea that the pickup action taken in the given context is responsible for the changes described by the effects, we include the action in the multitoken representation:

(pickup * * NOT-GD NOT-HB) => (* * * * HB)

We have added an action stream to the context and specified pickup as its value.
Because MSDD requires that precursors and successors refer to the same set of streams, we also include the action stream in the effects, but force its value to be *. The only item missing from this representation of the operator is p, the probability that an occurrence of the precursor (the context and the action on the same time step) will be followed at a lag of one by the successor (the effects). This probability is obtained empirically by counting co-occurrences of the precursor and the successor in the history of the agent's actions (H) and dividing by the total number of occurrences of the precursor. For the robot domain described previously, we want MSDD to find dependencies corresponding to the planning operators listed in Figure 1.

Guiding the Search

Recall that all descendants of a node n will be identical to n to the left of and including the rightmost non-wildcard in n. Because we encode actions in the first (leftmost) position of the precursor, we can prune nodes that have no action instantiated but have a non-wildcard in any other position. For example, the following node can be pruned because none of its descendants will have a non-wildcard in the action stream:

(* * * GD *) => (* * * * *)

Also, our domain model requires that operator effects can only specify how non-wildcarded components of the context change in response to an action. That is, the effects cannot specify a value for a stream that is wildcarded in the context, and the context and effects cannot specify the same value for a non-wildcarded stream.
Thus, the following node can be pruned because all of its descendants will have the value BP in the effects, but that stream is wildcarded in the context:

(pickup * * GD *) => (* BP * * *)

Likewise, the following node can be pruned because all of its descendants will have the value GD instantiated in both the context and the effects:

(pickup * * GD *) => (* * * GD *)

The search is guided by a heuristic evaluation function, f(n, H), which simply counts the number of times in H that the precursor of n is followed at a lag of one by the successor of n. This builds two biases into the search, one toward frequently occurring precursors and another toward frequently co-occurring precursor/successor pairs. In terms of our domain of application, these biases mean that, all other things being equal, the search prefers commonly occurring state/action pairs and state/action pairs that lead to changes in the environment with high probability. The result is that operators that apply frequently and/or succeed often are found by MSDD before operators that apply less frequently and/or fail often.

Filtering Returned Dependencies

We augmented MSDD's search with a post-processing filtering algorithm, FILTER, that removes operators that describe effects that the agent cannot reliably bring about and that contain irrelevant tokens. FILTER begins by removing all dependencies that have low frequency of co-occurrence or contain nothing but wildcards in the successor. Co-occurrence is deemed low when cell one of a dependency's contingency table is less than the user-specified parameter low-cell1. Those that remain are sorted in non-increasing order of generality, where generality is measured by summing the number of wildcards in the precursor and the successor.
The algorithm then iterates, repeatedly retaining the most general operator and removing from further consideration any other operators that it subsumes and that do not have significantly different conditional probabilities (measured by the G statistic). All of the operators retained in the previous step are then tested to ensure that the change from the context to the effects is strongly dependent on the action (again measured by the G statistic). When G is used to measure the difference between conditional probabilities, the conditionals are deemed to be "different" when the G value exceeds that of the user-specified parameter sensitivity. We have omitted pseudocode for FILTER due to lack of space.

Empirical Results

To test the efficiency and completeness of MSDD's search and the effectiveness of the FILTER algorithm, we created a simulator of the block-painting robot and its domain as described earlier. The simulator contained five streams: ACTION, BP, GC, GD and HB. Each simulation began in a randomly-selected initial state, and on each time step the robot had a 0.1 probability of attempting a randomly selected action. In addition, we added varying numbers of noise streams that contained values from the set T_n = {A, B, C}. There was a 0.1 probability of an exogenous event occurring on each time step. When an exogenous event occurred, each noise stream took a new value, with probability 0.5, from T_n.

The goal of our first experiment was to determine how the number of nodes that MSDD expands to find all of the interesting planning operators increases as the size of the search space grows exponentially. We ran the simulator for 5000 time steps, recording all stream values on each iteration. (Note that although the simulator ran for 5000 time steps, the agent took approximately 500 actions due to its low probability of acting on any given time step.)
These values served as input to MSDD, which we ran until it found dependencies corresponding to all of the planning operators listed in Figure 1. As the number of noise streams, N, was increased from 0 to 20 in increments of two, we repeated the above procedure five times, for a total of 55 runs of MSDD. A scatter plot of the number of nodes expanded vs. N is shown in Figure 2. If we ignore the outliers where N = 12 and N = 20, the number of nodes required by MSDD to find all of the interesting planning operators appears to be linear in N, with a rather small slope. This is a very encouraging result. The outliers correspond to cases in which the robot's random exploration did not successfully exercise one or more of the target operators very frequently. Therefore, the search was forced to explore more of the vast space of operators (containing 10^24 elements when N = 20) to find them.

Figure 2: The number of search nodes required to find all of the target planning operators in the block-painting robot domain as a function of the number of noise streams.

In a second experiment, we evaluated the ability of the FILTER algorithm to return exactly the set of interesting planning operators when given a large number of potential operators. We gathered data from 20,000 time steps of our simulation, with 0, 5, 10, and 15 noise streams. (Again, the agent took far fewer than 20,000 actions due to its low probability of acting on any given time step.) For each of the three data sets, we let MSDD generate 20,000 operators; that is, expand 20,000 nodes. Figure 2 tells us that a search with far fewer nodes will find the desired operators. Our goal was to make the task more difficult for FILTER by including many uninteresting dependencies in its input. We used low-cell1 = 6 and sensitivity = 30, and in all three cases FILTER returned the same set of dependencies. The dependencies returned with N = 0 are shown in Figure 3. Note that all of the operators listed in Figure 1 are found, and that the empirically-derived probability associated with each operator is very close to its expected value. For N > 0, the noise streams never contained instantiated values.

Interestingly, the last two operators in Figure 3 do not appear in Figure 1, but they do capture implicit structure in the robot's domain. The penultimate operator in Figure 3 says that if you paint the block with a clean gripper, there is roughly a 40% chance that the gripper will become dirty. Since that operator does not specify a value for the HB stream in its context, it includes cases in which the robot was holding the block while painting and cases in which it was not. The resulting probability is a combination of the probabilities of having a dirty gripper after painting in each of those contexts, 1.0 and 0.2 respectively. Similarly, the last operator in Figure 3 includes cases in which the robot attempted to pick up the block with a wet gripper (50% chance of success) and a dry gripper (95% chance of success).

<pickup, (* * GD NOT-HB), (* * * HB), 0.98>
<pickup, (* * NOT-GD NOT-HB), (* * * HB), 0.49>
<dry, (* * NOT-GD *), (* * GD *), 0.77>
<paint, (NOT-BP * * *), (BP * * *), 1.0>
<paint, (* GC * HB), (* NOT-GC * *), 1.0>
<paint, (* GC * NOT-HB), (* NOT-GC * *), 0.18>
<new, (BP * * *), (NOT-BP * * *), 1.0>
<new, (* NOT-GC * *), (* GC * *), 1.0>
<new, (* * * HB), (* * * NOT-HB), 1.0>
<new, (* * GD *), (* * NOT-GD *), 0.71>
<new, (* * NOT-GD *), (* * GD *), 0.31>
<paint, (* GC * *), (* NOT-GC * *), 0.38>
<pickup, (* * * NOT-HB), (* * * HB), 0.70>

Figure 3: Operators returned after filtering 20,000 search nodes generated for a training set with N = 0 noise streams.
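The G statistic that FILTER uses to compare conditional probabilities is a standard log-likelihood-ratio test on a contingency table of co-occurrence counts. The following is our own minimal sketch for the 2x2 case, not the authors' implementation:

```python
import math

def g_statistic(table):
    """G = 2 * sum(obs * ln(obs / exp)) over the cells of a 2x2
    contingency table [[a, b], [c, d]], with expected counts taken
    from the row and column marginals."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    g = 0.0
    for i in range(2):
        for j in range(2):
            obs = table[i][j]
            if obs > 0:
                exp = row[i] * col[j] / n
                g += obs * math.log(obs / exp)
    return 2.0 * g

# Identical conditional probabilities give G = 0 ...
print(round(g_statistic([[20, 20], [20, 20]]), 6))  # 0.0
# ... while a strong association gives a large G, well above a
# threshold like sensitivity = 30.
print(g_statistic([[40, 5], [5, 40]]) > 30)  # True
```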
Related Work

Existing symbolic approaches to learning planning operators via interaction with the environment have typically assumed a deterministic world in which actions always have their intended effects, and the state of the world never changes in the absence of an action (Gil 1994) (Shen 1993) (Wang 1995). One notable exception is (Benson 1995), in which the primary effect of a durative action is assumed to be deterministic, but side effects may occur with some probability. In contrast, the work described in this paper applies to domains that contain uncertainties associated with the outcomes of actions, and noise from exogenous events. Subsymbolic approaches to learning environmental dynamics, such as reinforcement learning (Mahadevan & Connell 1992), are capable of handling a variety of forms of noise. Reinforcement learning requires a reward function that allows the agent to learn a mapping from states to actions that maximizes reward. Our approach is not concerned with learning sequences of actions that lead to "good" states, but rather attempts to acquire domain knowledge in the form of explicit planning operators.

Much of the work on learning planning operators assumes the availability of fairly sophisticated forms of domain knowledge, such as advice or problem solving traces generated by domain experts (Benson 1995) (Wang 1995), or initial approximate planning operators (Gil 1994). Our approach assumes that the learning agent initially knows nothing of the dynamics of its environment. A model of those dynamics is constructed based only on the agent's own past interactions with its environment.

MSDD's approach to expanding the search tree to avoid redundant generation of search nodes is similar to that of other algorithms (Rymon 1992) (Schlimmer 1993) (Riddle, Segal, & Etzioni 1994).
MSDD's search differs from those mentioned above in that it explores the space of rules containing both conjunctive left-hand-sides and conjunctive right-hand-sides. Doing so allows MSDD to find structure in the agent's interactions with its environment that could not be found by the aforementioned algorithms (or any inductive learning algorithm that considers rules with a fixed number of literals on the right-hand-side).

Conclusions and Future Work

In this paper we presented and evaluated an algorithm that allows situated agents to learn planning operators for complex environments. The algorithm requires a weak domain model, consisting of knowledge of the types of actions that the agent can take, the sensors it possesses, and the values that can appear in those sensors. With this model, we developed methods and heuristics for searching through the space of planning operators to find those that capture structure in the agent's interactions with its environment. For a domain in which a robot can pick up and paint blocks, we demonstrated that the computational requirements of the algorithm scale approximately linearly with the size of the robot's state vector, in spite of the fact that the size of the operator space increases exponentially.

We will extend this work in several directions. Our primary interest is in the relationship between exploration and learning. How would the efficiency and completeness of learning be affected by giving the agent a probabilistic planner and allowing it to interleave goal-directed exploration and learning? However, our first task will be to apply our approach to larger, more complex domains.

Acknowledgements

This research was supported by ARPA/Rome Laboratory under contract numbers F30602-91-C-0076 and F30602-93-0100, and by a National Defense Science and Engineering Graduate Fellowship. The U.S.
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation hereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Advanced Research Projects Agency, Rome Laboratory or the U.S. Government.

References

Benson, S. 1995. Inductive learning of reactive action models. In Proceedings of the Twelfth International Conference on Machine Learning, 47-54.

Fikes, R. E., and Nilsson, N. J. 1971. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2(2):189-208.

Gil, Y. 1994. Learning by experimentation: Incremental refinement of incomplete planning domains. In Proceedings of the Eleventh International Conference on Machine Learning, 87-95.

Kushmerick, N.; Hanks, S.; and Weld, D. 1994. An algorithm for probabilistic least-commitment planning. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1074-1078.

Mahadevan, S., and Connell, J. 1992. Automatic programming of behavior-based robots using reinforcement learning. Artificial Intelligence 55(2-3):189-208.

Mansell, T. M. 1993. A method for planning given uncertain and incomplete information. In Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence, 350-358.

Oates, T., and Cohen, P. R. 1996. Searching for structure in multiple streams of data. To appear in Proceedings of the Thirteenth International Conference on Machine Learning.

Oates, T.; Schmill, M. D.; Gregory, D. E.; and Cohen, P. R. 1995. Detecting complex dependencies in categorical data. In Fisher, D., and Lenz, H., eds., Finding Structure in Data: Artificial Intelligence and Statistics V. Springer Verlag.

Riddle, P.; Segal, R.; and Etzioni, O. 1994. Representation design and brute-force induction in a Boeing manufacturing domain.
Applied Artificial Intelligence 8:125-147.

Rymon, R. 1992. Search through systematic set enumeration. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning.

Schlimmer, J. C. 1993. Efficiently inducing determinations: A complete and systematic search algorithm that uses optimal pruning. In Proceedings of the Tenth International Conference on Machine Learning, 284-290.

Shen, W.-M. 1993. Discovery as autonomous learning from the environment. Machine Learning 12(1-3):143-165.

Wang, X. 1995. Learning by observation and practice: An incremental approach for planning operator acquisition. In Proceedings of the Twelfth International Conference on Machine Learning.

Wilkins, D. E. 1988. Practical Planning: Extending the Classical AI Planning Paradigm. Morgan Kaufmann.
Tara A. Estlin and Raymond J. Mooney
Department of Computer Sciences
University of Texas at Austin
Austin, TX 78712
{estlin,mooney}@cs.utexas.edu

Abstract

Most research in planning and learning has involved linear, state-based planners. This paper presents SCOPE, a system for learning search-control rules that improve the performance of a partial-order planner. SCOPE integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help the planner choose between competing plan refinements. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

Introduction

Efficient planning often requires domain-specific search heuristics; however, constructing appropriate heuristics for a new domain is a difficult task. Research in learning and planning attempts to address this problem by developing methods that automatically acquire search-control knowledge from experience. Most work has been in the context of linear, state-based planners (Minton 1989; Leckie & Zuckerman 1993; Bhatnagar & Mostow 1994). Recent experiments, however, indicate that partial-order planners are more efficient than total-order planners in most domains (Barrett & Weld 1994; Minton et al. 1992). However, there has been little work on learning control for partial-order planning systems (Kambhampati et al., 1996).

In this paper, we describe SCOPE, a system that uses a unique combination of machine learning techniques to acquire effective control rules for a partial-order planner. Past systems have often employed explanation-based learning (EBL) to learn control knowledge.
Unfortunately, standard EBL can frequently produce complex, overly-specific control rules that decrease rather than improve overall planning performance (Minton 1989). By incorporating induction to learn simpler, approximate control rules, we can greatly improve the utility of acquired knowledge (Cohen 1990). SCOPE (Search Control Optimization of Planning through Experience) integrates explanation-based generalization (EBG) (Mitchell et al., 1986; DeJong & Mooney, 1986) with techniques from inductive logic programming (ILP) (Quinlan 1990; Muggleton 1992) to learn high-utility rules that can generalize well to new planning situations.

This research was supported by the NASA Graduate Student Researchers Program, grant number NGT-51332.

SCOPE learns control rules for a partial-order planner in the form of selection heuristics. These heuristics greatly reduce backtracking by directing a planner to immediately select appropriate plan refinements. SCOPE is implemented in Prolog, which provides a good framework for learning control knowledge. A version of the UCPOP planning algorithm (Penberthy & Weld 1992) was implemented as a Prolog program to provide a testbed for SCOPE. Experimental results are presented on two domains that show SCOPE can significantly increase partial-order planning efficiency.

The UCPOP Planner

The base planner we chose for experimentation is UCPOP, a partial-order planner described in (Penberthy & Weld 1992). In UCPOP, a partial plan is described as a four-tuple (S, O, L, B), where S is a set of actions, O is a set of ordering constraints, L is a set of causal links, and B is a set of codesignation constraints over variables appearing in S. Actions are described by a STRIPS schema containing precondition, add and delete lists. The set of ordering constraints, O, specifies a partial ordering of the actions contained in S. Causal links record dependencies between the effects of one action and the preconditions of another.
These links are used to detect threats, which occur when a new action interferes with a past decision.

UCPOP begins with a null plan and an agenda containing the top-level goals. The initial and goal states are represented by adding two extra actions to S, A0 and A∞. The effects of A0 correspond to the initial state, and the preconditions of A∞ correspond to the goal state. In each planning cycle, a goal is removed from the agenda and an existing or new action is chosen to assert the goal. After an action is selected, the necessary ordering, causal link and codesignation constraints are added to O, L, and B. If a new action was selected, the action's preconditions are added to the agenda. UCPOP then checks for threats and resolves any found by adding an additional ordering constraint. UCPOP is called recursively until the agenda is empty. On termination, UCPOP uses the constraints found in O to determine a total ordering of the actions in S, and returns this as the final solution.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Figure 1: Three competing refinement candidates for achieving the goal Clear(B).

Learning Control For Planning

SCOPE learns search-control rules for planning decisions that might lead to a failing search path (i.e. might be backtracked upon). Figure 1 illustrates an example from the blocksworld domain where control knowledge could be useful. Here, there are three possible refinement candidates for adding a new action to achieve the goal Clear(B). For each set of refinement candidates, SCOPE learns control rules in the form of selection rules that define when each refinement should be applied. A single selection rule consists of a conjunction of conditions that must all evaluate to true for the refinement candidate to be used. For example, shown next is a selection rule for the first candidate (from Figure 1) which contains several control conditions.
Select operator Unstack(?X,?Y) to establish goal (Clear(?Y), s1)
If exists-operator(s2) ∧ establishes(s2, On(?X,?Y)) ∧ possibly-before(s2, s1).

This rule states that Unstack(?X,?Y) should be selected to add Clear(?Y) only when there is an existing action s2 that adds On(?X,?Y) and s2 can be ordered before the action s1, which requires Clear(?Y). Learned control information is incorporated into the planner so that attempts to select an inappropriate refinement will immediately fail.

The Prolog programming language provides an excellent framework for learning control rules. Search algorithms can be implemented in Prolog in such a way that allows control information to be easily incorporated in the form of clause-selection rules (Cohen 1990). These rules help avoid inappropriate clause applications, thereby reducing backtracking. A version of the UCPOP partial-order planning algorithm has been implemented as a Prolog program.¹ Planning decision points are represented in this program as clause-selection problems (i.e. each refinement candidate is formulated as a separate clause). SCOPE is then used to learn refinement-selection rules which are incorporated into the original planning program in the form of clause-selection heuristics.

The SCOPE Learning System

SCOPE is based on the DOLPHIN learning system (Zelle & Mooney 1993), which optimizes logic programs by learning clause-selection rules. DOLPHIN has been shown successful at improving program performance in several domains, including planning domains which employed a simple state-based planner. DOLPHIN, however, has had little success improving the performance of a partial-order planner due to the higher complexity of the planning search space. In particular, DOLPHIN's simple control rule format lacked the expressibility necessary to describe complicated planning situations. SCOPE has greatly expanded upon the DOLPHIN algorithm to be effective on more complex planners.
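The effect of installing selection rules as guards can be sketched outside Prolog as well. The following Python fragment (with hypothetical names; the actual system compiles rules into Prolog clause-selection heuristics) shows how a guard makes an inappropriate refinement fail immediately instead of being explored and backtracked over:

```python
# Sketch: refinement candidates paired with learned guards.
# Only candidates whose guard accepts the subgoal are tried,
# so bad choices fail before any search effort is wasted.

def try_refinements(subgoal, plan, candidates):
    for guard, refine in candidates:
        if guard(subgoal, plan):
            yield refine(subgoal, plan)

def unstack_guard(subgoal, plan):
    # Mirror of the rule in the text: pick unstack only if some
    # existing step already establishes on(X, Y) for goal clear(Y).
    _, y = subgoal                      # subgoal = ("clear", y)
    return any(eff[0] == "on" and eff[2] == y
               for step in plan["steps"] for eff in step["effects"])

plan = {"steps": [{"name": "start", "effects": [("on", "a", "b")]}]}
candidates = [(unstack_guard, lambda g, p: ("unstack", "a", "b"))]
print(list(try_refinements(("clear", "b"), plan, candidates)))
print(list(try_refinements(("clear", "a"), plan, candidates)))
```

For the goal clear(b) the guard passes and the unstack refinement is offered; for clear(a) no step establishes on(_, a), so the candidate is filtered out without being tried.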
The input to SCOPE is a planning program and a set of training examples. SCOPE uses the examples to induce a set of control heuristics which are then incorporated into the original planner. SCOPE's algorithm has three main phases, which are presented in the next few sections. A more detailed description can be found in (Estlin 1996).

Example Analysis

In the example analysis phase, two outputs are produced: a set of selection-decision examples and a set of generalized proof trees. Selection-decision examples record successful and unsuccessful applications of plan refinements. Generalized proofs provide a background context that explains the success of all correct planning decisions. These two pieces of information are used in the next phase to build control rules.

Selection-decision examples are produced using the following procedure. First, training examples are solved using the existing planner. A trace of the planning decision process used to solve each example is stored in a proof tree, where the root represents the top-level planning goal and nodes correspond to different planning procedure calls. These proofs are then used to extract examples of correct and incorrect refinement-selection decisions. A "selection decision" is a planning subgoal that was solved correctly by applying a particular plan refinement, such as adding a new action. As an example, consider the planning problem introduced in Figure 1. The planning subgoal represented by this figure is shown below:²

For S = (0:Start, G:Goal), O = (0 < G),
Agenda = ((Clear(B),G), (On-Table(B),G)):
Select operator ?OP to establish goal (Clear(B),G)

¹Our Prolog planner performs comparably to the standard LISP implementation of UCPOP on the sample problem sets used to test the learning algorithm.

Figure 2: Top Portion of a Generalized Proof Tree

Selection decisions are collected for all competing plan refinements.
Refinements are considered "competing" if they can be applied in identical planning decisions, such as the three refinement candidates shown in Figure 1. A correct decision for a particular refinement is an application of that refinement found on a solution path. An incorrect decision is a refinement application that was tried and subsequently backtracked over. The subgoal shown above would be identified as a positive selection decision for candidate 2 (adding Putdown(A)), and would also be classified as a negative selection decision for candidates 1 and 3 (adding Unstack(B,A) or Stack(A,B)). Any given training problem may produce numerous positive and negative examples of refinement selection decisions.

The second output of the example analysis phase is a set of generalized proof trees. Standard EBG techniques are used to generalize each training example proof tree. The goal of this generalization is to remove proof elements that are dependent on the specific example facts while maintaining the overall proof structure. Generalized proof information is used later to explain new planning situations. The top portion of a generalized proof tree is shown in Figure 2. This proof was extracted from the solution trace of the problem introduced in Figure 1. The generalized proof of this example provides a context which "explains" the success of correct decisions.

²Binding constraints in our system are maintained through Prolog; therefore, the set of binding constraints, B, is not explicitly represented in planning subgoals.

Control Rule Induction

The goal of the induction phase is to produce an operational definition of when it is useful to apply a refinement candidate. Given a candidate, C, we desire a definition of the concept "subgoals for which C is useful". In the blocksworld domain, such a definition is learned for each of the candidates shown in Figure 1.
In this context, control rule learning can be viewed as relational concept learning. SCOPE employs a version of the FOIL algorithm (Quinlan 1990) to learn control rules through induction. FOIL has proven efficient in a number of domains, and has a "most general" bias which tends to produce simple definitions. Such a bias is important for learning rules with a low match cost, which helps avoid the utility problem. FOIL is also relatively easy to bias with prior knowledge (Pazzani & Kibler 1992). In our case, we can utilize the information contained in the generalized proof trees of planning solution traces.

FOIL attempts to learn a concept definition in terms of a given set of background predicates. This definition is composed of a set of Horn clauses that cover all of the positive examples of a concept, and none of the negatives. The selection-decision examples collected in the example analysis phase provide the sets of positive and negative examples for each refinement candidate.

Initialization:
  Definition := null
  Remaining := all positive examples
While Remaining is not empty:
  Find a clause, C, that covers some examples in Remaining, but no negative examples.
  Remove examples covered by C from Remaining.
  Add C to Definition.

FOIL's basic algorithm is shown above. The "find a clause" step is implemented by a general-to-specific hill-climbing search. FOIL adds antecedents to the developing clause one at a time. At each step FOIL evaluates all literals that might be added and selects the one which maximizes an information-based gain heuristic. SCOPE uses an intensional version of FOIL where background predicates can be defined in Prolog instead of requiring an extensional representation.
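The covering loop above is easy to state in code. The sketch below uses plain Python predicates over example objects as a stand-in for FOIL's relational literals and its information-gain heuristic; it illustrates only the outer algorithm, not SCOPE's Prolog implementation.

```python
# Toy sequential-covering loop in the style of FOIL's outer algorithm.
# A "literal" here is just a boolean test on an example; find_clause
# greedily conjoins literals until no negative example is covered.

def find_clause(positives, negatives, literals):
    clause, pos, neg = [], list(positives), list(negatives)
    while neg:
        # crude stand-in for FOIL's information-gain heuristic:
        # keep many positives while excluding many negatives
        best = max(literals, key=lambda lit:
                   sum(map(lit, pos)) - sum(map(lit, neg)))
        if all(best(e) for e in neg):
            break  # no literal makes progress (sketch-only safeguard)
        clause.append(best)
        pos = [e for e in pos if best(e)]
        neg = [e for e in neg if best(e)]
    return clause, pos

def covering(positives, negatives, literals):
    definition, remaining = [], list(positives)
    while remaining:
        clause, covered = find_clause(remaining, negatives, literals)
        if not covered:          # nothing separable remains; give up
            break
        definition.append(clause)
        remaining = [e for e in remaining if e not in covered]
    return definition

defn = covering([1, 2, 3], [10, 11],
                [lambda e: e < 5, lambda e: e % 2 == 0])
```

On this toy data a single one-literal clause (e < 5) covers all positives and no negatives, mirroring FOIL's preference for the most general definition that fits.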
One drawback to FOIL is that the hill-climbing search for a good antecedent can easily explode, especially when there are numerous background predicates with large numbers of arguments. When selecting each new clause antecedent, FOIL tries all possible variable combinations for all predicates before making its choice. This search grows exponentially as the number of predicate arguments increases. SCOPE circumvents this search problem by utilizing the generalized proofs of training examples. By examining the proof trees, SCOPE identifies a small set of potential literals that could be added as antecedents to the current clause definition. Specifically, all "operational" predicates contained in a proof tree are considered. These are usually low-level predicates that have been classified as "easy to evaluate" within the problem domain. Literals are added to rules in a way that utilizes variable connections already established in the proof tree. This approach nicely focuses the FOIL search by only considering literals (and variable combinations) that were found useful in solving the training examples.

SCOPE employs the same general covering algorithm as FOIL but modifies the clause construction step. Clauses are successively specialized by considering how their target refinements were used in solving training examples. Suppose we are learning a definition for when each of the refinements in Figure 1 should be applied. The program predicate representing this type of refinement is select-op, which inputs several arguments including the unachieved goal and outputs the selected operator. The full predicate head is shown below.

select-op(Goal,Steps,OrderCons,Links,Agenda,ReturnOp)

For each refinement candidate, SCOPE begins with the most general definition possible. For instance, the most general definition covering candidate 1's selection examples is the following; call this clause C.
select-op(Goal,Steps,OrderCons,Links,Agenda,unstack(A,B)).

This overly general definition covers all positive examples and all negative examples of when to apply candidate 1, since it will always evaluate to true. C can be specialized by adding antecedents to its body. This is done by unifying C's head with a (generalized) proof subgoal that was solved by applying candidate 1 and then adding an operational literal from the same proof tree which shares some variables with the subgoal. For example, one possible specialization of the above clause is shown below.

select-op((clear(B),S1),Steps0,OrderCons,Links,Agenda,unstack(A,B)) :-
    establishes(on(B,A),Steps1,S2).

Here, a proof tree literal has been added which checks if there is an existing plan step that establishes the goal on(B,A). Variables in a potential antecedent can be connected with the existing rule head in several ways. First, by unifying a rule head with a generalized subgoal, variables in the rule head become unified with variables existing in a proof tree. All operational literals in that proof that share variables with the generalized subgoal are tested as possible antecedents. A second way variable connections are established is through the standard FOIL technique of unifying variables of the same type. For example, the rule shown above has an antecedent with an unbound input, Steps1, which does not match any other variables in the clause.

select-op((clear(B),S1),Steps,OrderCons,Links,Agenda,unstack(A,B)) :-
    find-init-state(Steps,Init),
    member(on(A,B),Init),
    not(member((on(B,C),S1),Agenda)),
    member(on-table(B),Init).

select-op((clear(A),G),Steps,OrderCons,Links,Agenda,putdown(A)) :-
    not(member((on(A,B),G),Agenda)).

select-op((clear(A),S1),Steps,OrderCons,Links,Agenda,putdown(A)) :-
    member((on-table(A),S2),Agenda),
    not(establishes(on-table(A),S3)).

Figure 3: Learned control rules for two refinements
SCOPE can modify the rule, as shown below, so that Steps1 is unified with a term of the same type from the rule head, Steps0.

select-op((clear(B),S1),Steps0,OrderCons,Links,Agenda,unstack(A,B)) :-
    establishes(on(B,A),Steps0,S2).

SCOPE considers all such specializations of a rule and selects the one which maximizes FOIL's information-gain heuristic.

SCOPE also considers several other types of control rule antecedents during induction. Besides pulling literals directly from generalized proof trees, SCOPE can use negated proof literals, determinate literals (Muggleton 1992), variable codesignation constraints, and relational cliches (Silverstein & Pazzani 1991). Incorporating different antecedent types helps SCOPE learn expressive control rules that can describe partial-order planning situations.

Program Specialization Phase

Once refinement selection rules have been learned, they are passed to the program specialization phase which adds this control information into the original planner. The basic approach is to guard each refinement with the selection information. This forces a refinement application to fail quickly on subgoals to which the refinement should not be applied.

Figure 3 shows several learned rules for the first two refinement candidates (from Figure 1). The first rule allows unstack(A,B) to be applied only when A is found to be on B initially, and stack(B,C) should not be selected instead. The second and third rule allow putdown(A) to be applied only when A should be placed on the table and not stacked on another block.

Experimental Evaluation

The blocksworld and logistics transportation domains were used to test the SCOPE learning system. In the logistics domain (Veloso 1992), packages must be delivered to different locations in several cities. A test set of 100 independently generated problems was used to evaluate performance in both domains. SCOPE was
SCOPE was 846 Learning 200 150 a 2 t 100 F s B: 50 0 0 10 20 30 40 60 60 70 60 90 100 Tmlnlng Examples Figure 4: Performance in Blocksworld trained on separate example sets of increasing size. Ten trials were run for each training set size, after which re- sults were averaged. Training and test problems were produced for both domains by generating random ini- tial and final states. In blocksworld, problems con- tained two to six blocks and one to four goals. Logis- tics problems contained up to two packages and three cities, and one or two goals. No time limit was imposed on planning, but a uniform depth bound on the plan length was used during testing. For each trial, SCOPE learned control rules from the given training set and produced a modified planner. Since SCOPE only specializes decisions in the original planner, the new planning program is guaranteed to be sound with respect to the original one. Unfortunately, the new planner is not guaranteed to be complete. Some control rules could be too specialized and thus the new planner may not solve all problems solvable by the original planner. In order to guarantee complete- ness, a strategy used by Cohen (1990) is adopted. If the final planner fails to find a solution to a test prob- lem, the initial planning program is used to solve the problem. When this situation occurs in testing, both the failure time for the new planner and the solution time for the original planner are included in the total solution time for that problem. In the results shown next, the new planner generated by SCOPE was typi- cally able to solve 95% of the test examples. Figures 4 and 5 present the experimental results. The times shown represent the number of seconds re- quired to solve the problems in the test sets after SCOPE was trained on a given number of examples. In both domains, the SCOPE consistently produced a more efficient planner and significantly decreased solu- tion times on the test sets. 
In the blocksworld, SCOPE produced modified planning programs that were an av- A. . . 200 - ‘A.. ‘A . . *. loo - A--.-l, -. * -A - -. A-. . .A _ . . A- - 0 10 20 30 60 60 70 60 so 100 Tmlnlng Exampba Figure 5: Performance in Logistics erage of 11.3 times faster than the original planner. For the logistics domain, SCOPE produced programs that were an average of 5.3 times faster. These results indicate that SCOPE can significantly improve the per- formance of a partial-order planner. elated Work A closely related system to SCOPE is UCPOP+EBL (Kambhampati et al., 1996), which also learns control rules for UCPOP, but uses a purely explanation-based approach. UCPOP+EBL employs standard EBL to acquire control rules in response to past planning fail- ures. This system has been shown to improve planning performance in several domains, including blocksworld. To compare the two systems, we replicated an exper- iment used by Kambhampati et al. (1996). Prob- lems were randomly generated from a version of the blocksworld domain that contained between three to six blocks and three to four goals.3 SCOPE was trained on a set of 100 problems. The test set also contained 100 problems and a CPU time limit of 120 seconds was imposed during testing. The results are shown below. System Orig Final Speedup Orig Final Time Time %Sol %Sol UCPOP+EBL 7872 5350 1.47X 51% 69% SCOPE 5312 1857 2.86X 59% 94% Both systems were able to increase the number of test problems solved, however, SCOPE had a much higher success rate. Overall, SCOPE achieved a bet- ter speedup ratio, producing a more efficient planner. 31n order to replicate the experiments of Kambhampati et al.(1996), the blocksworld domain theory used for these tests slightly differed from the one used for the experiments presented previously. Both domains employed similar pred- icates however the previous domain definition consists of four operators while the domain used here has only two. 
By combining EBL with induction, SCOPE was able to learn better planning control heuristics than EBL did alone. These results are particularly significant since UCPOP+EBL utilizes additional domain axioms which were not provided to SCOPE.

Most other related learning systems have been evaluated on different planning algorithms, thus system comparisons are difficult. The HAMLET system (Borrajo & Veloso 1994) learns control knowledge for the nonlinear planner underlying PRODIGY4.0. HAMLET acquires rules by explaining past planning decisions and then incrementally refining them. Since PRODIGY4.0 is not a partial-order planner, it is difficult to directly compare HAMLET and SCOPE. When making a rough comparison to the results reported in Borrajo & Veloso (1994), SCOPE achieves a greater speedup factor in blocksworld (11.3 vs 1.8) and in the logistics domain (5.3 vs 1.8).

Future Work

There are several issues we hope to address in future research. First, replacing FOIL's information-gain metric for picking literals with a metric that more directly measures rule utility could further improve planning performance. Second, SCOPE should be tested on more complex domains which contain conditional effects, universal quantification, and other more-expressive planning constructs. Finally, we plan to examine ways of using SCOPE to improve plan quality as well as planner efficiency. SCOPE could be modified to collect positive control examples only from high-quality solutions so that control rules are focused on quality issues as well as speedup.

Conclusion

SCOPE provides a new mechanism for learning control information in planning systems. Simple, high-utility rules are learned by inducing concept definitions of when to apply plan refinements. Explanation-based generalization aids the inductive search by focusing it towards the best pieces of background information.
Unlike most approaches which are limited to total-order planners, SCOPE can learn control rules for the newer, more effective partial-order planners. In both the blocksworld and logistics domains, SCOPE significantly improved planner performance; SCOPE also outperformed a competing method based only on EBL.

References

Barrett, A., and Weld, D. 1994. Partial order planning: Evaluating possible efficiency gains. Artificial Intelligence 67:71-112.

Bhatnagar, N., and Mostow, J. 1994. On-line learning from search failure. Machine Learning 15:69-117.

Borrajo, D., and Veloso, M. 1994. Incremental learning of control knowledge for nonlinear problem solving. In Proc. of ECML-94, 64-82.

Cohen, W. W. 1990. Learning approximate control rules of high utility. In Proc. of ML-90, 268-276.

DeJong, G. F., and Mooney, R. J. 1986. Explanation-based learning: An alternative view. Machine Learning 1(2):145-176.

Estlin, T. A. 1996. Integrating explanation-based and inductive learning techniques to acquire search-control for planning. Technical report, Dept. of Computer Sciences, University of Texas, Austin, TX. Forthcoming. URL: http://net.cs.utexas.edu/ml/

Kambhampati, S.; Katukam, S.; and Qu, Y. 1996. Failure driven search control for partial order planners: An explanation based approach. Artificial Intelligence. Forthcoming.

Langley, P., and Allen, J. 1991. The acquisition of human planning expertise. In Proc. of ML-91, 80-84.

Leckie, C., and Zuckerman, I. 1993. An inductive approach to learning search control rules for planning. In Proc. of IJCAI-93, 1100-1105.

Minton, S.; Drummond, M.; Bresina, J. L.; and Phillips, A. B. 1992. Total order vs. partial order planning: Factors influencing performance. In Proc. of the 3rd Int. Conf. on Principles of Knowledge Rep. and Reasoning, 83-92.

Minton, S. 1989. Explanation-based learning: A problem solving perspective. Artificial Intelligence 40:63-118.

Mitchell, T. M.; Keller, R. M.; and Kedar-Cabelli, S. T. 1986.
Explanation-based generalization: A unifying view. Machine Learning 1(1):47-80.

Muggleton, S. H., ed. 1992. Inductive Logic Programming. New York, NY: Academic Press.

Pazzani, M., and Kibler, D. 1992. The utility of background knowledge in inductive learning. Machine Learning 9:57-94.

Penberthy, J., and Weld, D. S. 1992. UCPOP: A sound, complete, partial order planner for ADL. In Proc. of the 3rd Int. Conf. on Principles of Knowledge Rep. and Reasoning, 113-114.

Quinlan, J. 1990. Learning logical definitions from relations. Machine Learning 5(3):239-266.

Silverstein, G., and Pazzani, M. J. 1991. Relational cliches: Constraining constructive induction during relational learning. In Proc. of ML-91, 203-207.

Veloso, M. M. 1992. Learning by Analogical Reasoning in General Problem Solving. Ph.D. Dissertation, School of Computer Science, Carnegie Mellon University.

Zelle, J. M., and Mooney, R. J. 1993. Combining FOIL and EBG to speed-up logic programs. In Proc. of IJCAI-93, 1106-1111.
Sean P. Engelson
Dept. of Mathematics and Computer Science
Bar-Ilan University
52900 Ramat Gan
Israel
Email: engelson@bimacs.cs.biu.ac.il

Abstract

We address the problem of learning robust plans for robot navigation by observing particular robot behaviors. In this paper we present a method which can learn a robust reactive plan from a single example of a desired behavior. The system operates by translating a sequence of events arising from the effector system into a plan which represents the dependencies among such events. This method allows us to rely on the underlying stability properties of low-level behavior processes in order to produce robust plans. Since the resultant plan reproduces the original behavior of the robot at a high level, it generalizes over small environmental changes and is robust to sensor and effector noise.

Introduction

Recently, a number of sophisticated 'reactive' planning formalisms have been developed (Firby 1989; Gat 1991; McDermott 1991; Simmons 1994), which allow a great deal of flexibility in control flow and explicitly include a notion of an intelligent plan execution system. However, the complexity of these plan representations makes planning very difficult. Much of the effort in developing planners for these new planning languages, therefore, has focused on case-based, or transformational, planning approaches (Hammond 1986; McDermott 1992). In this paradigm, given a set of goals to achieve, the planner retrieves a set of plan fragments from a plan library which it believes will help achieve those goals. The planner combines the fragments to form a complete plan, and then adapts it to fit the particular task at hand. When the planner is satisfied, the plan gets executed. If execution is satisfactory, the plan may get stored back in the plan library for future reuse. An initial case-base is usually constructed by hand, to contain plan fragments thought to be useful for a particular domain.
As more planning problems are solved by the system, the plan library grows, containing plans that solved previous problems. However, under this learning strategy, the library will contain only complete cases derived for previously solved tasks, without the possibility of learning other types of cases. This method thus assumes implicitly that the problems that will arise in the future are similar to those arising now; it also precludes serendipitous learning, as the robot only learns plans relevant to its current task. We propose here another method for augmenting the plan library, by storing plan fragments derived by breaking up the robot's behavior in ways different from those given by its controlling plan. Suppose, for example, that our robot is to go from room A to room B, via a hallway which contains a water cooler. In the usual case-based learning framework, a plan fragment that gets the robot from room A to the water cooler would not be learned; in fact, the original plan may not mention the water cooler at all. However, if we observe the robot's behavior between room A and the water cooler, we may be able to 'reverse engineer' a useful plan fragment to store in its case library. This would enable the system to learn from its incidental experience as well as its planned experience. Several related problems are beyond our scope in this paper: (a) how to properly index a new plan in the case base (Hammond 1986; Kolodner 1993), (b) how to evaluate if a learned plan is actually useful (Minton 1988; Chaudhry & Holder 1996), and (c) how to recognize interesting world states (very much an open problem, cf. (Benson & Nilsson 1995)). In this paper, we describe a method for automatically constructing usable plan fragments from records of executed robot behavior over a period of time.
Specifically, given observations of the robot's behavior over a restricted period of time, our system constructs a reactive plan which reliably repeats the behavior when started in a similar situation. These plans are not sensitive to small changes in the environment, and are resistant to sensor and effector noise. The main idea behind our system is to represent robot behavior as a sequence of behavior events, which represent qualitative changes in the state of an underlying behavior-based control system. This representation corresponds naturally to statements in a modern reactive plan language, such that each type of event may be translated into a plan fragment for creating or handling that type of event. These plan fragments are then linked together to form a plan which reproduces the entire behavior sequence. Our algorithm also incorporates several techniques to ensure that the resulting plans are robust. We have applied the system in the domain of mobile robot navigation, where it produces plans which are quite robust.

The Execution Architecture

Reinforcement Learning 869
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Our system learns reactive plans within the context of the two-level control architecture depicted in Figure 1 (similar to those of (Gat 1991) and (Firby 1994)).

Figure 1: Our robot control architecture.

This architecture divides robot control into two levels: 'symbolic' and 'behavioral'. The behavioral level consists of (a) a set of behavior processes, which implement continuous low-level controllers (eg, wall-following or grasping), and (b) a set of sensing processes, which implement object recognition and tracking. The symbolic level consists of a transformational (case-based) planner and reactive execution system (as in (McDermott 1992)), as well as the learning system described in this paper.
In addition to control flow code, plans consist of activation and deactivation commands to behavior and sensing processes, as well as handlers for signals from those processes. Sensing processes are connected to behavior processes via designators (McDermott 1991), which are data structures which refer to particular objects in the local environment. They thus form a sort of deictic representation (Agre 1988). When a behavioral process is activated, it may be provided with a designator upon which to act. It will then control the robot using the continuously updated state of the designator.

Events and behavior traces

Both sensing and behavioral processes signal effector events to the symbolic level. One type of event is a termination event, signaling the termination of a process's execution, either as a success or a failure (with some error code). Another type of signal is used to return values from a sensing process, eg, a designator (which can then be given as a parameter to a behavior process). Many other kinds of events can also be accommodated; for example, a wall-following process may signal the presence of openings in the wall, which would enable a plan to count doorways. A sequence of effector events constitutes a behavior trace, which records the evolution of the behavioral level's state over some time interval. An event specification contains the name of the process producing the event, the event type (activation, completion with error code, etc.), the robot resources required (such as a camera or an arm), and the values of any parameters or signaled values.

The RPL Plan Language

Our goal is to translate a behavior trace into a plan which will reproduce the original behavior robustly, without being affected unduly by low-level failures or small changes in the environment. The plan notation we used for this work is a subset of McDermott's RPL, a reactive plan language developed for transformational planning (McDermott 1991).
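An event specification of the kind just described could be modeled as a small record type. The field names and sample process names below are illustrative inventions for this sketch, not the paper's actual encoding:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class EventSpec:
    """One entry in a behavior trace (hypothetical encoding)."""
    process: str                     # name of the process producing the event
    kind: str                        # 'activation', 'completion', 'signal', ...
    resources: Tuple[str, ...] = ()  # robot resources required (camera, arm, ...)
    values: Tuple = ()               # parameters or signaled values
    error: Optional[str] = None      # error code, for failed terminations

# A behavior trace is an ordered sequence of such events:
trace = [
    EventSpec("acquire-designator", "activation", ("camera",), ("door",)),
    EventSpec("acquire-designator", "signal", ("camera",), ("door-7",)),
    EventSpec("go-to-designator", "activation", values=("door-7",)),
    EventSpec("go-to-designator", "completion"),
]
```

The frozen dataclass makes events immutable, matching their role as a passive record of what the behavioral level did.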
RPL is a full parallel programming language based on Lisp. The language contains facilities for sequential and parallel execution, including explicit process control. Most communication between processes is accomplished through fluents, including receiving signals from sensing and behavior processes. Plan failure is expressed explicitly, so that plans can fail cognizantly (Firby 1989).

Behavior Traces to Reactive Plans

In this section, we describe our algorithm for translating a behavior trace into an RPL plan. Since behavior processes are nearly stateless, to a first approximation we can translate a behavior trace into an RPL plan by translating each event in sequence to a short sequence of RPL statements. Individual event specifications are translated by a set of translation rules, whose left-hand-sides are event specifications with variables to be instantiated. The right-hand-side of each rule is a plan fragment with variables to be substituted for. The plans we construct are sequences consisting of three sorts of steps: process activation, receiving a return value, and testing required conditions. Activations provide parameters to sensor and behavior processes, while signals and completion events supply both values the plan can later use and conditions that must be tested in order for the plan to continue safely (such as error codes). The way the translation rules work is roughly as follows (more detail may be found in (Engelson 1996)): An activation event becomes a plan step which activates a sensor or behavior process and supplies it with its parameters. It also initializes any fluents needed for receiving signals from the new process. A condition handling plan step waits for a signal of the right type from a given process, tests any error code, and ensures that return values are as expected. If the value is needed later in the plan, it is bound to a local plan variable.
The value-testing code ensures that if the values signaled at run-time do not match those in the original plan, the plan fails appropriately.

Variabilization: The use of plan variables allows plan steps to depend on one another by identifying values produced and used at different points in the trace. Value identification is impossible in general, without extensive knowledge, so our system simply assumes that equal values are meant to be logically identical. Quantities that are unequal are assumed to be unrelated; while this may be false, there is no way for us to resolve the ambiguity.

Post facto parameterization: In order to produce robust plans, we must also account for effector noise. For example, a command to turn the robot 90° may actually turn the robot 94.6°. If the command to turn the robot 90° is repeated some other time, it may end up turning the robot 85.2°, introducing a repetition error that is larger than strictly necessary. To avoid this problem, the original "turn 90°" should be translated as "turn 94.6°". This phenomenon also arises in perceptual routines, where such 'post facto parameterization' is needed. For example, when the robot looks for a doorway and two are in its field-of-view, the direction of the designator which is found and used should be inserted into the plan step for finding the doorway so that the correct doorway is found. We address this problem by allowing translation rules to refer to other relevant events in the trace to instantiate needed parameters.

Irrelevant events: One further complication is that some trace steps reflect actions which do not affect the results of the original behavior in any way. This can cause problems if these actions are unreliable. In such a case, requiring the action to be performed (and its results to be consistent with the original trace) will cause the new plan to fail needlessly. The solution is to remove those steps from the behavior trace before translation, so that they can't affect the results. This is difficult in general, since nearly any action could be relevant. However, some perceptual events are clearly not needed and can be removed. For example, if the robot acquires a designator that is never used in the behavior trace, the acquisition event is removed.

Dangling activation: A more serious problem requiring trace preprocessing is dangling activation. Consider the case of a behavior trace meant to represent the robot's behavior between two corridor intersections. If the robot did not stop at the first intersection, but simply continued to follow the wall, activation of the wall-following behavior will not be reflected in the trace produced by observing behavior events only between the intersections. Simply translating the trace will result in a plan that does not move the robot at all. Dangling activation can be dealt with by adding a virtual activation event for the dangling process at the beginning of the trace. This works because behavior processes are essentially stateless, so appropriately activating the process at an intermediate point will produce similar behavior. Prefixing requires taking a snapshot of the state of the behavior system when trace recording starts: which processes are active (and their parameters). Currently this is not implemented; a simpler form of prefixing is used which examines the trace for end events without matching activation events, and prefixes appropriate activation events to the trace. This works only if the dangling behavior process completes within the trace; in our tests, this was the case. A similar method is used by Torrance (1994) to deal with dangling activation, for a trajectory-based plan representation.

The algorithm: The full behavior trace translation algorithm for a trace T is given as TraceTranslate. First, activation events are added for any dangling activations. Second, irrelevant trace steps are elided. Then, the steps in a behavior trace are translated sequentially by applicable translation rules. When a post facto parameterizable step is encountered, the rest of the trace is searched for the needed parameter, which is inserted in the translation. If an event signals a value, a new variable name is created, which is indexed under the value in a variable table. Then, when a value is used as a parameter, the value's variable name is inserted, if it exists, otherwise the value is assumed to be a hard-wired constant. After all the steps are translated, they are wrapped in a let form binding the local variables in the plan. This implicitly executes the plan steps in the same order that they appear in the trace.

TraceTranslate(T = (e1, e2, ..., en)):
1. Let P = ∅;
2. Let H be a null hash-table, associating constants to variable names;
3. Find completion events in T with no corresponding activation events, and prefix appropriate activations to T;
4. Elide irrelevant events from T;
5. For each event ei ∈ T:
   (a) Find the translation rule r matching ei;
   (b) If r requires values from another event in T, retrieve those values from T;
   (c) For each constant in ei, substitute the associated variable name from H (if one exists);
   (d) Let S be the plan fragment resulting from translating ei according to r;
   (e) P ← append(P, S);
   (f) If ei is a signaling event, create a variable name for each new constant, and store them in H;
6. Bind all variable names in H around P and return the resulting plan.

Results

We evaluated our plan-learning system using the ARS MAGNA mobile robot simulator. The simulator provides a robot moving in a 2-dimensional environment containing walls and objects of various types. The robot is equipped with a set of basic behaviors such as "follow wall" and "go to designator", as well as a full set of sensing processes such as "acquire designator on door". All of these processes incorporate noise: behavior processes may fail randomly and sensing processes may return noisy or erroneous data. Details of the simulator can be found in (Engelson & Bertani 1992). In this section we will examine the performance of the trace translation technique described above on several navigational plans, using ARS MAGNA. Our tests demonstrate how our system produces robust repetition of robot behavior.

Hand-generated behavior

We first test the stable repetition of a short behavior. The robot behavior was generated by hand; the experimenter manually activated sensor and effector processes in sequence. Figure 2(a) shows the original trajectory of the robot in a simple world with no obstacles. The behavior trace generated by this behavior contained 48 event specifications. The translated RPL plan was run ten times in each of two situations, where the initial location of the robot was the same as in the original run. These runs are summarized in Figure 2(b), which depicts the robot's trajectories on those ten runs. Note the variance in the precise trajectories the robot followed, due to noise. The plan failed in just one of these runs, when the robot lost track of the final corner on its approach due to perceptual noise.

Figure 2: Robot trajectories and behavior repetition tests for a manually given behavior. (a) Original trajectory; the robot was started in the center of the first room. (b) Composite of trajectories of 10 executions of the translated plan. (c) Composite of trajectories of 10 tests with obstacles (squares) added.

The plan was also tested with the addition of some obstacles (Figure 2(c)). As is clear from the figure, low-level obstacle avoidance enabled the robot to stably repeat its original behavior. Note that different trajectories were followed on different runs, depending on how the robot (locally) decided to go around the obstacles. One of these ten runs also failed, when the robot lost track of the third door in its trajectory due to perceptual noise. Note that both plan failures could not have been avoided without much more domain knowledge.

Automatically generated behaviors

The next two experiments dealt with behaviors generated by a random wandering plan. Figure 3 shows the robot's trajectories during execution of the initial behavior (a) and the learned plan (b) for the first test case. The robot started in the rightmost doorway facing down, and ended up, as shown, in a corner of the rightmost upper room. The behavior trace contained 72 event specifications. Despite the variance in the robot's trajectory out of the door, 9 of the 10 tests succeeded. The one failure occurred when the robot failed to acquire a designator for the door on its return; it ended up in the upper-right corner of the lower room, as shown.

Figure 3: Robot trajectories for one test of plan generation from an automatically generated behavior trace. The robot was started in the rightmost doorway facing down. (a) Initial trajectory generated by random wandering. (b) Composite of trajectories of 10 test runs.

The second test case for automatically-generated behavior is depicted in Figure 4(a). The robot was started in the second doorway from the left, facing right. While describing a longer and more complex behavior than the last example, the behavior trace generated here was shorter, with 60 events in all. The behavior in this example was also less robust than those in the other tests, because it depends on the robot heading for the door it started from and then losing track of it due to occlusion. (This happens at the kink in the robot's trajectory, where it begins heading for the lower doorway.)
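The TraceTranslate procedure described above can be rendered as a runnable sketch. The event record, the fixed translation scheme, and the Lisp-ish output below are simplified stand-ins for the paper's translation rules and RPL plan fragments, invented here for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Event:
    process: str        # process producing the event
    kind: str           # 'activation', 'completion', or 'signal'
    values: Tuple = ()  # parameters or signaled values

def trace_translate(trace: List[Event]) -> str:
    """Toy TraceTranslate: behavior trace -> Lisp-ish plan text."""
    plan, var_table = [], {}                       # steps 1-2
    # Step 3: prefix activations for completions with no matching activation.
    activated = {e.process for e in trace if e.kind == "activation"}
    trace = [Event(e.process, "activation") for e in trace
             if e.kind == "completion" and e.process not in activated] + trace
    # Step 4: elide irrelevant events -- here, signaled values (e.g. an
    # acquired designator) that are never used by any later event.
    def used_later(i: int, e: Event) -> bool:
        return any(v in later.values for later in trace[i + 1:] for v in e.values)
    trace = [e for i, e in enumerate(trace)
             if e.kind != "signal" or used_later(i, e)]
    # Step 5: translate each event in sequence.
    for e in trace:
        if e.kind == "activation":
            # 5(c): substitute variable names for known constants.
            args = [var_table.get(v, repr(v)) for v in e.values]
            plan.append("(" + " ".join(["activate", e.process, *args]) + ")")
        elif e.kind == "completion":
            plan.append(f"(wait-for {e.process})")
        else:
            # 5(f): a signaling event binds each new constant to a variable.
            for v in e.values:
                var_table.setdefault(v, f"?v{len(var_table)}")
            names = " ".join(var_table[v] for v in e.values)
            plan.append(f"(receive {e.process} {names})")
    # Step 6: bind all variables around the plan body (the 'let' wrapper).
    bindings = " ".join(sorted(var_table.values()))
    return "(let (" + bindings + ")\n  " + "\n  ".join(plan) + ")"

trace = [
    Event("acquire-door", "activation"),
    Event("acquire-door", "signal", ("door-7",)),
    Event("goto", "activation", ("door-7",)),
    Event("goto", "completion"),
]
print(trace_translate(trace))
```

In this run the signaled designator "door-7" is variabilized to ?v0 at the signal step, and the later activation of "goto" picks up that variable instead of the hard-wired constant, which is exactly the dependency-tracking role the variable table plays in the paper's algorithm.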
Ten test runs were run on this example, with obstacles in different places than when the behavior was originally generated. Three of the test runs failed because the robot mistakenly headed for the lower doorway first. In all the other runs, the robot attained its goal, despite the different positions of the obstacles. Also note that the trajectories followed by the robot on different trials differed greatly, though the overall effect was the same.

Figure 4: Robot trajectories for another example of plan generation from an automatically generated behavior trace. The robot was started in the center of the second room. (a) Initial trajectory generated by random wandering. (b) Composite of trajectories of 10 test runs.

Limitations and Extensions

While the trace translation algorithm works well in many cases (as demonstrated above), it has some important limitations. Some can be remedied by simple extensions to the algorithm, while others are inherent in the current approach. In this section, we sketch some extensions to the approach which should correct for many of the limitations of the current approach.

Basic extensions

Timing: One limitation of our technique is the fact that the plans it produces have no notion of time; they are purely reactive. If two events A and B are adjacent in a trace, the translated plan will execute B right after A, even though there may have been a period of time between A's completion and B's start in the original behavior. This can cause problems if, in the original behavior, the world state changed significantly after A completed, such as if the robot was moving. One solution would be to add a timing 'behavior process', so that when the execution system wishes to wait for a period of time, it activates the timing process with the desired wait time, which process then signals when the time is up and the plan should continue.
These events go into the behavior trace like any other and become a part of the translated plan, causing the robot to wait appropriately upon repetition. If, however, the wait is implicit, caused by waiting for a computation, another approach is required. In navigation, the main problem with getting the timing wrong comes about if the robot is moving between events A and B, so that it is at the wrong place to do B if it doesn't wait. This problem may be ameliorated by using odometry to measure the movement between adjacent trace events. If the distance moved is significant, the plan waits before performing B until the robot has moved the requisite amount.

Effector failure: When plans failed in our experiments, the main cause was effector failures, such as losing track of a designator or getting stuck following a wall. The effects of such runtime failures can be ameliorated heuristically. RPL contains a construct known as a policy, which serves to maintain a needed condition for a subplan. For example, if the robot needs to carry a box from A to B, after picking up the box, it will move from A to B with a policy to pick up the box if, for any reason, it finds itself not holding the box. This will (usually) ensure that the robot arrives at B with the box. A set of such heuristic policies could be designed for the different types of plan steps produced by the translation algorithm, to make these steps more robust. For example, if a trace contains a "go to designator" event, the translated plan may contain, in addition to an activation of "go to designator", a policy which attempts to recover the proper designator if it is lost before "go to designator" properly completes. Such policies can be developed for many common runtime failures, improving greatly the robustness of the resultant plans.
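The policy construct can be approximated with a small wrapper: before each step of a subplan, re-establish the maintained condition if it has been violated. Everything below (the step-as-callable execution model, the box-carrying scenario) is invented for illustration and is not RPL:

```python
from typing import Callable, List

def with_policy(steps: List[Callable[[], None]],
                holds: Callable[[], bool],
                recover: Callable[[], None],
                max_retries: int = 3) -> None:
    """Run plan steps, re-establishing `holds` whenever it is violated."""
    for step in steps:
        for _ in range(max_retries):
            if holds():
                break
            recover()                       # e.g. re-grasp a dropped box
        else:
            raise RuntimeError("policy could not re-establish condition")
        step()

# Toy scenario: carry a box two steps forward; it slips once en route.
state = {"holding": False, "pos": 0, "slips": 1}

def grasp():
    state["holding"] = True

def step_forward():
    state["pos"] += 1
    if state["slips"]:                      # effector noise: box slips once
        state["slips"] -= 1
        state["holding"] = False

with_policy([step_forward, step_forward],
            holds=lambda: state["holding"], recover=grasp)
print(state)   # the robot arrives at pos 2 still holding the box
```

Without the policy, the slip midway would leave the robot at its goal empty-handed; the wrapper notices the violated condition before the second step and re-grasps, which is the recovery pattern the text describes.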
Plans as a resource for learning

Fundamentally, the limitations of our approach are due to the fact that it assumes no knowledge aside from the translation rules which encode the relationships between different events. This means that the system does not understand the complete context of each event, and hence how the plan should be constructed in a global fashion. In this section, we sketch a proposed method for using the base plan, which originally generated the behavior to be modeled, as a resource for constructing more robust plans. The idea is that a behavior trace that the learning system is given is a subsequence of that arising from the execution of some known plan. Let us conceive of that plan as a hierarchical decomposition of the robot's high-level goals into meaningful subplans, including partial ordering constraints among the subplans. In particular, execution of an RPL plan results in a task network, whose nodes are subplans and whose arcs are relationships between them. For example, the node for (seq A B C) will have three children, one for each subplan, and will contain the constraints that A is before B is before C. Now, suppose first that we wanted to generalize a behavior trace generated by a complete run of a given plan. The best generalization (assuming our planner is reliable) would be the original plan itself. (This corresponds to case-based learning by storing newly-created plans in memory.) In our case, however, we are interested in only some part of a complete run which achieves some intermediate state of affairs. In this case, we wish, rather than translating behavior events individually, to abstract up the tree, to find the highest subplans whose execution was confined to the interval of interest. By doing so, and properly propagating parameter values, we can create a new plan which duplicates the intended behavior during the period of interest, inheriting the robustness properties of the original plan.
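The 'abstract up the tree' operation can be sketched as a recursive search over a task network annotated with execution intervals. The Task record and the interval semantics here are assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Task:
    name: str
    interval: Tuple[float, float]     # (start, end) time of this subplan's run
    children: List["Task"] = field(default_factory=list)

def highest_subplans(node: Task, lo: float, hi: float) -> List[Task]:
    """Maximal subtrees whose execution falls wholly inside [lo, hi]."""
    start, end = node.interval
    if lo <= start and end <= hi:
        return [node]                 # whole subtree fits: abstract no further
    found = []
    for child in node.children:       # otherwise descend into the children
        found.extend(highest_subplans(child, lo, hi))
    return found

# Toy run: deliver = go to the water cooler (0-4), then on to room B (4-10).
plan = Task("deliver", (0, 10), [
    Task("goto-cooler", (0, 4), [
        Task("leave-room-A", (0, 2)),
        Task("follow-hall", (2, 4)),
    ]),
    Task("goto-room-B", (4, 10)),
])
print([t.name for t in highest_subplans(plan, 0, 4)])   # → ['goto-cooler']
```

For the interval up to the water cooler, the search returns the single "goto-cooler" subplan rather than its two leaves, which is the sense in which the highest fitting subplans are preferred.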
What this means is that some of the effort used in making the original plan more robust can be saved when using the new plan as a building block in later case-based planning. At the same time, the flexibility of the system is enhanced by repackaging the behavior in new ways.

Related Work

In the context of the development of a system for intelligent 'teleassistance', Pook (1995) describes a method for 'learning by watching' with similar goals to the current work. The system learns to perform an egg-flipping task from examples of master-slave teleoperation of a Utah/MIT hand. A coarse prior model of the task is given in the form of a Hidden Markov Model (HMM). For each example, the system segments signals from the hand's tendons into 'actions' by finding points of qualitative change. These sequences of actions are then matched to the HMM, and its parameters are estimated to form a complete model for the task. The primary difference between our work and Pook's is that her system relies on a prior model of the specific task, while ours makes use of predefined control primitives (sensing and behavior processes). Our goal of learning plans to achieve particular goals from observing robot behavior is also related to recent work in learning action models for planning (Shen 1994; Wang 1994). Most of this work assumes discrete actions, unlike the present work. One significant exception is the TRAIL system described by Benson and Nilsson (1995). Their system learns models of 'teleo-operators', which are a kind of continuous version of STRIPS operator descriptions. These teleo-operators can then be used as the building blocks in constructing 'teleo-reactive' plans. TRAIL repeatedly attempts to achieve a given goal (possibly conjunctive), learning from both successful and failed attempts.
The learning module uses inductive logic programming to induce, from a set of execution records which achieve a given goal, models of primitive operators that achieve that goal (Benson 1995). Another related area of research is that applying case-based reasoning (CBR) to robotic control (Kopeikina, Brandau, & Lemmon 1988; Ram et al. 1992). In this work, the results of robot actions are observed, and formed into 'cases' which inform future behavior. In particular, Ram and Santamaria (1993) describe a system which learns how to adjust the parameters of a local behavior-based control system (using motor schemas (Arkin 1989)), based on sensor readings, in order to effectively navigate in crowded environments. By contrast, our work in this paper focuses on learning high-level reactive plans which combine and sequence multiple behaviors. The two approaches could probably be combined beneficially. Finally, the problem of learning plans from observed behavior is particularly important in the context of topological mapping. Most previous work on topological mapping has assumed atomic action labels (eg, (Dean et al. 1994; Kortenkamp, Baker, & Weymouth 1992; Kuipers & Byun 1988)). This approach, however, is not robust to even small environmental changes. Kuipers and Byun (1988) and Mataric (1990) both represent actions by activations of low-level behaviors, but only one behavior is represented per link, so they need not consider interactions between multiple behaviors.

Conclusions

We have developed a system which learns reactive plans from traces of robot behavior. This problem arises both in the context of case-based planning systems (learning new cases) and in topological mapping (associating plans with links). In each case, we need to represent behavior over some period of time in a way that will enable it to be reliably repeated in the future.
The idea is to store useful fragments of behavior in a way that will allow them to be reused in the future. For reliable repetition, the plans that are thus derived must be robust with respect to sensor and effector noise, as well as small changes in the environment. Our system processes traces of the activity in a robot's behavioral control system, producing plans in a complex reactive plan language (RPL). Our results on learning navigation plans show the resulting plans to reliably repeat the original behavior, even in the presence of noise and non-trivial environmental modifications. The power of the approach comes from the choice of 'behavior events' as an action model. Rather than assume that a continuous action results from repetition of some discrete action, we take as given continuous control processes which signal the symbolic level regarding significant events. Our results show that this representation provides a natural level of abstraction for connecting symbolic (planning) and continuous (control) processes in intelligent robotic control.

Acknowledgements

Thanks to Drew McDermott and Michael Beetz for many interesting and helpful discussions. The author is supported by a fellowship from the Fulbright Foundation. The bulk of this work was performed while the author was at Yale University, supported by a fellowship from the Fannie and John Hertz Foundation.

References

Agre, P. E. 1988. The Dynamic Structure of Everyday Life. Ph.D. Dissertation, MIT Artificial Intelligence Laboratory.
Arkin, R. C. 1989. Motor schema-based mobile robot navigation. International Journal of Robotics Research.
Benson, S., and Nilsson, N. J. 1995. Reacting, planning, and learning in an autonomous agent. In Furukawa, K.; Michie, D.; and Muggleton, S., eds., Machine Intelligence 14. Oxford: Clarendon Press.
Benson, S. 1995. Inductive learning of reactive action models. In Proc. Int'l Conf. on Machine Learning.
Chaudhry, A., and Holder, L. B. 1996.
An empirical approach to solving the general utility problem in speedup learning. In Anger, F. D., and Ali, M., eds., Machine Reasoning. Gordon and Breach Science Publishers.
Dean, T.; Angluin, D.; Basye, K.; Engelson, S.; Kaelbling, L.; Kokkevis, E.; and Maron, O. 1994. Inferring finite automata with stochastic output functions and an application to map learning. Machine Learning.
Engelson, S. P., and Bertani, N. 1992. ARS MAGNA: The abstract robot simulator manual. Technical Report YALEU/DCS/TR-928, Yale University Department of Computer Science.
Engelson, S. P. 1996. Single-shot learning of reactive navigation plans. Technical report, Department of Mathematics and Computer Science, Bar-Ilan University.
Firby, R. J. 1989. Adaptive Execution in Complex Dynamic Worlds. Ph.D. Dissertation, Yale University. Technical Report 672.
Firby, R. J. 1994. Architecture, representation and integration: An example from robot navigation. In Proceedings of the 1994 AAAI Fall Symposium Series Workshop on the Control of the Physical World by Intelligent Agents.
Gat, E. 1991. Reliable Goal-Directed Reactive Control of Autonomous Mobile Robots. Ph.D. Dissertation, Virginia Polytechnic Institute and State University.
Hammond, K. J. 1986. Case-based Planning: An Integrated Theory of Planning, Learning and Memory. Ph.D. Dissertation, Yale University Department of Computer Science.
Kolodner, J. 1993. Case-Based Reasoning. Morgan Kaufmann.
Kopeikina, L.; Brandau, R.; and Lemmon, A. 1988. Case-based reasoning for continuous control. In Proc. Workshop on Case-Based Reasoning.
Kortenkamp, D.; Baker, L. D.; and Weymouth, T. 1992. Using gateways to build a route map. In Proc. IEEE/RSJ Int'l Workshop on Intelligent Robots and Systems.
Kuipers, B., and Byun, Y.-T. 1988. A robust qualitative method for robot spatial reasoning. In Proc. National Conference on Artificial Intelligence, 774-779.
Mataric, M. J. 1990.
A distributed model for mobile robot environment-learning and navigation. Technical Report 1228, MIT Artificial Intelligence Laboratory. McDermott, D. 1991. A reactive plan language. Techni- cal Report 864, Yale University Department of Computer Science. McDermott, D. V. 1992. Transformational plannin of reactive behavior. Technical Report YALEUICSDBRR #941, Yale University Department of Computer Science. Minton, S. 1988. Quantitative results concerning the util- ity of explanation-based learning. In Proc. National Con- ference on Artificial Intelligence. Pook, P. 1995. Teleassistance: Using Deictic Gestures to Control Robot Action. Ph.D. Dissertation, University of Rochester. Ram, A., and Santamaria, J. C. 1993. Multistrategy learn- ing m reactive control systems for autonomous robotic navigation. Informatica 17(4). Ram, A.; Arkin, R. C.; Moorman! K.; and Clark, R. J. 1992. Case-based reactive navigation. Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology. Shen, W.-M. 1994. Autonomous Learning from the En- vironment. Computer Science Press, W. H. Freeman and co. Simmons, R. 1994. Structured control for autonomous robots. Proc. Int’l Conf. on Robotics and Automation 10(l). Torrance, M. C. 1994. Natural communication with robots. Master’s thesis, MIT Artificial Intelligence Labo- ratory. Wang, X. 1994. Learning planning operators by observa- tion and practice. In Proc. 2nd Int’l Conf. on AI Planning Sys terns. 874 Learning
Nearly Monotonic Problems: A Key to Effective FA/C Distributed Sensor Interpretation?

Norman Carver
Computer Science Department
Southern Illinois University
Carbondale, IL 62901
(carver@cs.siu.edu)

Abstract

The functionally-accurate, cooperative (FA/C) distributed problem-solving paradigm is one approach for organizing distributed problem solving among homogeneous, cooperating agents. A key assumption of the FA/C model has been that the agents' local solutions can substitute for the raw data in determining the global solutions. This is not the case in general, however. Does this mean that researchers' intuitions have been wrong and/or that FA/C problem solving is not likely to be effective? We suggest that some domains have a characteristic that can account for the success of exchanging mainly local solutions. We call such problems nearly monotonic. This concept is discussed in the context of FA/C-based distributed sensor interpretation.

Introduction

The functionally accurate, cooperative (FA/C) distributed problem-solving paradigm (Lesser & Corkill 1981; Lesser 1991) has been important in cooperative distributed problem solving (CDPS) research. Several FA/C-based research systems have been built (e.g., (Carver, Cvetanovic, & Lesser 1991; Carver & Lesser 1995a; Lesser & Corkill 1983)). However, until some recent work of ours (Carver & Lesser 1994; Carver 1995; Carver & Lesser 1995b), there had never been any formal analysis of the conditions that are necessary for an FA/C approach to be successful or of the potential performance of FA/C systems. In this paper we examine some of the assumptions behind the FA/C model and look at a problem domain characteristic that can make the FA/C approach successful.

The development of the FA/C model was motivated by the recognition that in many CDPS domains it is impractical/impossible to decompose problems and/or transfer data so that individual agents work on independent subproblems.
FA/C agents are designed to produce tentative, partial solutions based on only local information (which may be incomplete, uncertain, or inaccurate). They then exchange these results with the other agents, exploiting inter-agent constraints among the subproblems to resolve uncertainties and inconsistencies due to the deficiencies in their local information.

88 Agents

Victor Lesser and Robert Whitehair
Computer Science Department
University of Massachusetts
Amherst, MA 01003
(lesser or whitehair @cs.umass.edu)

A critical issue for the FA/C model is whether high quality global solutions can be produced without the need for "excessive" communication among the agents (when exchanging and "integrating" local results). Most FA/C work has assumed that this is the case because it has been assumed that the local partial solutions can substitute for the raw data in resolving contradictions and uncertainties. Unfortunately, this is not true in general. Does this mean that researchers' intuitions have been wrong and/or that FA/C problem solving is not likely to be effective? In this paper we suggest that some domains have a characteristic that justifies the role of local solutions in producing global solutions (at least for approximate, satisficing problem solving). We call such problems nearly monotonic. Basically, while belief and/or solution membership may be nonmonotonic with increasing evidence, in nearly monotonic problems they become nearly monotonic once certain conditions (like fairly high belief) are reached.

This paper discusses the FA/C model and the concept of nearly monotonic problems in the context of distributed sensor interpretation (SI). We concentrate on this domain because most FA/C applications have been in distributed SI (particularly distributed vehicle monitoring), we are engaged in related research on the FA/C model for distributed SI, and we have available a new analysis tool for SI.
SI is also a very complex problem and distributed SI plays an important role in many situation assessment (decision support) systems.

The next section briefly describes distributed SI. The FA/C Issues section examines the use of local agent solutions to determine global solutions in FA/C problem solving. What we mean by near monotonicity is expanded on in the Nearly Monotonic Problems section. This is followed by a section that examines the role near monotonicity can play in developing coordination strategies for FA/C-based SI. The Analytic Support section provides some support for the existence of nearly monotonic SI problems using a framework for analyzing SI domains. The paper concludes with a summary of our conclusions and future plans.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Distributed Sensor Interpretation

By sensor interpretation, we mean the determination of high-level, conceptual explanations of sensor and related data. For example, vehicle monitoring applications involve tracking and identifying vehicles, and possibly determining the purpose of individual vehicles and patterns of vehicles. The model of SI that we assume is essentially that described in (Carver & Lesser 1991; Carver & Lesser 1994). An interpretation of a data set is an explanation of what caused all of the data. Typically it will be a composite of a set of hypotheses whose types are from a specified subset of the abstraction types (the explanation corpus (Pearl 1988)), each of which explains some subset of the data, and which together explain all of the data. In general, there will be multiple possible interpretations of a data set.

A solution to an SI problem is an interpretation of the available data that is judged "best" according to some criteria. One possible definition of best is the most probable explanation (MPE) (Pearl 1988) given the available data.
The problem with this definition is that for many SI problems it is impractical to compute the MPE or exact belief ratings (conditional probabilities). (Carver & Lesser 1994) contains an explanation of this issue.1 We will simply assume here that even centralized SI systems usually must use approximate, satisficing approaches to construct solutions (so solutions are only approximations of the MPE).2

In a centralized SI system, all of the data is available to the single agent. In a distributed SI system, typically each agent has (direct) access to data from only a subset of the sensors and each sensor is associated with a single agent. As a result, each agent monitors only a portion of the overall "area of interest," so agents must somehow combine their data in order to construct a global solution.

FA/C Issues

As we have said, a critical issue for the FA/C approach is whether high quality global solutions can be produced without the need for "excessive" communication among the agents.3 Because FA/C agents work on possibly interdependent local solutions, they must exchange and integrate these solutions to construct a global solution.4

1 (Pearl 1988) contains a discussion of some of the limitations of using the MPE even when it can be computed. For example, there may be very different utilities for identifying (or failing to identify) enemy vs. friendly aircraft.
2 While it is nearly always necessary to trade off solution quality for efficiency in SI problems, approximation can be done in a variety of ways; see (Bar-Shalom & Fortmann 1988; Carver & Lesser 1995b; Cox & Leonard 1994).
3 Precisely what constitutes excessive communication will depend on the reasons for taking a distributed approach to problem solving.
4 FA/C agents must have some mechanism to identify interdependencies among their solutions/hypotheses.

Integrating local solutions may not be straightforward, however, because these solutions may
be incomplete and/or highly uncertain, and because solutions from different agents may be inconsistent (since they are based on different incomplete data subsets). Conditions under which possibly interdependent local solutions can be efficiently integrated is the main issue to be addressed in this paper.

The FA/C model can impose substantial delays over a centralized model if the determination of a global solution requires some agent(s) to have access to all/most of the globally available raw data. This would happen, for instance, if interrelated local solutions could be integrated only with access to their supporting raw data and nearly all solutions were interrelated. Delays would result because FA/C agents obtain data from external sensors by communicating with the agents responsible for those sensors as it becomes clear that the data is needed (they either explicitly request data from other agents or wait for the other agents to decide that the data needs to be sent).

Thus, effective FA/C-based SI requires that agents need access to only limited amounts of raw data from external sensors. There are two basic ways in which this requirement may be met: (1) only a small subset of each agent's subproblems interact with those of other agents (agents' subproblems are largely independent of those in other agents) or (2) local solutions can be integrated with limited need for their supporting raw data. We focus on the second approach in this paper.

Subproblem independence is problematic for SI, since in many domains there is no way to determine a priori whether two pieces of data are interrelated and, in fact, virtually any two pieces of data may have a non-zero relationship.5 For example, in a situation analysis application for tactical air command, targets hundreds of miles apart may be interrelated since they might be acting in concert as part of some scenario/pattern. This means that even widely separated pieces of sensor data are potentially interrelated.
Furthermore, even where subproblem interactions are consistently limited, it must be possible to determine what data is relevant to which other agents, and this must be able to be done in a timely manner, without excessive communication.

We are interested in understanding whether or when local solutions can substitute for the raw data in determining the global solutions (where there are inter-agent subproblem interrelationships). If agents can transmit mainly solution-level hypotheses rather than raw data, then communication can be greatly reduced. Interpretation hypotheses are abstractions of the sensor data and can generally be represented using a fraction of the storage that would be required for their supporting data. Communication of solution hypotheses should also require receiving agents to do less processing (interpretation/probabilistic inference) than would be required with raw data. In addition, since the number of possible interpretations of a data set can be very large, focusing on the agents' local solutions can greatly reduce communications.

The developers of the FA/C paradigm certainly believed that local solutions (and other abstract hypotheses, or "results") could substitute for the raw data in determining global solutions.

4 (continued) (Carver, Cvetanovic, & Lesser 1991; Carver & Lesser 1995b) describes how it is possible to identify interdependencies with SI applications. Agent solutions are interdependent whenever data (evidence) for a hypothesis is spread among multiple agents or when agent "interest areas" overlap as a result of overlapping sensor coverage.
5 When we speak about data being interrelated and about the "strength" of this relationship, we mean evidential relationships: the presence, absence, characteristics, or interpretations of data can affect the belief in the possible interpretations of other data.

Multiagent Problem Solving 89
(Lesser & Corkill 1981) referred to "consistency checking" of the tentative local solutions with results received from other nodes as "an important part of the FA/C approach." When there were inter-agent subproblem interactions, agents would transmit their local solutions and check the consistency of the related components of these solutions. Consistent solution components would be "integrated" using only the abstract hypotheses (not the raw data), while inconsistencies would trigger additional communication. We will refer to this basic procedure for developing a global solution as the consistent local solutions strategy. This strategy has the potential to reduce communications because when local solutions are consistent they are integrated without requiring transmission of supporting raw data. Does the strategy produce high quality global solutions? To answer this question we must first consider what it should mean to integrate local solutions into a global solution.

Assume that there are two agents, A1 and A2, with local data sets D1 and D2, respectively. Each agent's local solution would be the "best" interpretation of its own local data set (using some common definition of best). Now for the global solution, what we would ideally like is the best interpretation of the joint, global data set (D1 ∪ D2), using the same definition of best interpretation. This is ideal because the distributed system would then produce the same solutions as an equivalent centralized system and solutions would not vary simply with differences in the distribution of the data among the group of agents.

Given this standard for global solutions, what can we say about the "consistent local solutions strategy?" Unfortunately, what we can say is that, in general, it provides no guarantees at all about the quality of the global solution.
Again, consider a situation with two agents, and suppose that there are the same two alternative interpretations Ia and Ib for each of the data sets D1 and D2. It is entirely possible to have P(Ia | D1) > P(Ib | D1) and P(Ia | D2) > P(Ib | D2), but P(Ia | D1, D2) < P(Ib | D1, D2). In other words, even though interpretation Ia is the most likely (best) solution given each agent's local data set separately, it may not be the globally most likely solution even though the local solutions are consistent (here identical). Likewise, if P(H | D1) > threshold and P(H | D2) > threshold, it is not necessarily the case that P(H | D1, D2) > threshold (where H is an interpretation hypothesis being selected for membership in the approximate solution based on its belief surpassing some acceptance threshold).

These are unavoidable consequences of the nonmonotonicity of domains like SI. The upshot of such observations is that integration of even consistent interrelated local solutions can require that agents recompute hypothesis beliefs and redetermine best solutions, just as with inconsistent local solutions. In some cases, this can require one agent to have complete knowledge of the other agent's raw data and evidential information (alternative interpretation hypotheses and their interrelationships).6

Nearly Monotonic Problems

We believe that one explanation for the apparent success of FA/C-based SI is that many SI domains have a property that makes the "consistent local solutions strategy" appropriate and effective. We have termed problems with this property nearly monotonic, because they nearly behave as if they are monotonic once certain conditions have been achieved. For example, while additional evidence can negatively affect the belief in a vehicle track hypothesis, once a fairly high degree of belief is attained, it is unlikely that the belief will change significantly and it is unlikely that the hypothesis will not be part of the best global solution.
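The two-agent reversal described above can be reproduced with a small numerical sketch. All numbers below are hypothetical, chosen only to exhibit the effect; the essential ingredient is that the two agents' observations are dependent given the interpretation:

```python
# Hypothetical priors and joint likelihoods, chosen only to exhibit the
# reversal; d1 and d2 are binary readings at agents A1 and A2.
PRIOR = {"Ia": 0.5, "Ib": 0.5}

# Joint likelihoods P(d1, d2 | H); the readings are dependent given H.
LIK = {
    "Ia": {(1, 1): 0.2, (1, 0): 0.4, (0, 1): 0.4, (0, 0): 0.0},
    "Ib": {(1, 1): 0.4, (1, 0): 0.1, (0, 1): 0.1, (0, 0): 0.4},
}

def posterior(d1=None, d2=None):
    """P(H | evidence); None means the reading is unobserved."""
    unnorm = {}
    for h, prior in PRIOR.items():
        lik = sum(p for (x1, x2), p in LIK[h].items()
                  if (d1 is None or x1 == d1) and (d2 is None or x2 == d2))
        unnorm[h] = prior * lik
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

p1 = posterior(d1=1)        # A1 alone: Ia is the more likely interpretation
p2 = posterior(d2=1)        # A2 alone: Ia again, by symmetry
pj = posterior(d1=1, d2=1)  # jointly: Ib wins, despite consistent locals
print(p1, p2, pj)
```

Each agent separately prefers Ia (posterior about 0.545), yet the joint posterior prefers Ib, exactly the "no guarantees" situation the text describes.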
Thus, while the domain is nonmonotonic in a strict sense, the effects of additional evidence are not totally unpredictable: solution components with particular attributes (e.g., high belief) are unlikely to be affected as additional evidence is considered.

To proceed in examining near monotonicity, the following notation will be used: 𝒟 is the complete, globally available data set; D is some subset of 𝒟 that currently has been processed by an agent; BEL(H) is the current belief in hypothesis H given data set D (it is P(H | D)); BEL*(H) is the "true" belief in hypothesis H for data set 𝒟 (it is P(H | 𝒟)); MPE is the current MPE solution given data set D; and MPE* is the "true" MPE solution for data set 𝒟.

It is impossible to give a single, precise definition of what a nearly monotonic problem is. What we will present are several formulas for statistically characterizing SI problems, which can be useful in assessing and using near monotonicity. The basic approach will be to characterize ultimate properties of interpretation hypotheses if/when 𝒟 were processed, given current characteristics based on partial data/evidence. The ultimate hypothesis properties that are of interest are belief and solution membership.

6 A detailed explanation of the recomputation of belief and solution membership for SI is beyond the scope of this paper. In belief network terms, think of the integration of local solutions in different agents as establishing new evidential links between the agents' belief networks and then recomputing beliefs by appropriate evidence propagation. This may require complete knowledge of another agent's data because recomputation in SI problems cannot in general be done by message passing (Carver & Lesser 1995b).

We have considered five possible characterizations for near monotonicity:
1. a conditional probability density function (conditional pdf) f_{BEL*|x}(b) that describes the probability that the ultimate belief in hypothesis H is b given that the current hypothesis belief is x, defined such that ∫_{p1}^{p2} f_{BEL*|x}(b) db = P(p1 ≤ BEL*(H) ≤ p2 | BEL(H) = x, ...).

2. the probability that the ultimate belief in the hypothesis will be greater than or equal to its current belief, P(BEL*(H) ≥ BEL(H) | BEL(H) = x, ...).

3. the probability that the ultimate belief in the hypothesis will be greater than or equal to some specified level d, P(BEL*(H) ≥ d | BEL(H) = x, ...).

4. the probability that the hypothesis will ultimately be in the MPE, P(H ∈ MPE* | BEL(H) = x, ...).

5. the probability that the hypothesis will eventually be in the MPE given that it is in the current MPE, P(H ∈ MPE* | BEL(H) = x, H ∈ MPE, ...).

For each of these characterizations, being nearly monotonic would require that once an interpretation hypothesis has certain characteristics (based on only a portion of the available data) then the probabilities will be high enough to make it appropriate to assume the hypothesis is in the solution. For example, using formula 4, we would like something along the lines of: once a vehicle track hypothesis reaches a belief level of 0.8, the probability that it is in MPE* is greater than 0.95. This would allow us to use the consistency of such a local hypothesis with another agent's local solution to conclude with high confidence that the track is in the global solution.

Which of the above characterizations is most appropriate will depend on: (1) the domain and its characteristics; (2) the statistical information that is available; and (3) the solution selection strategy. The probabilities in formulas 2 and 3 can be derived from the pdf of formula 1, but are included because detailed knowledge such as the pdf may not always be practical to obtain.
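As a sketch of how formulas 2 and 3 follow from the pdf of formula 1, suppose (purely as a hypothetical model; the paper commits to no parametric form) that f_{BEL*|x} is a Beta density with mean x and some fixed concentration. Both probabilities are then just integrals of that density:

```python
import math

def beta_pdf(b, alpha, beta):
    """Beta(alpha, beta) density at b in [0, 1]."""
    coef = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    return coef * b ** (alpha - 1) * (1.0 - b) ** (beta - 1)

def f_ultimate(b, x, k=20.0):
    """Hypothetical conditional pdf for BEL*(H) given BEL(H) = x:
    a Beta density with mean x and concentration k (our assumption)."""
    return beta_pdf(b, k * x, k * (1.0 - x))

def prob_between(lo, hi, x, n=10_000):
    """Formula 1: P(lo <= BEL*(H) <= hi | BEL(H) = x), by the trapezoid rule."""
    h = (hi - lo) / n
    s = 0.5 * (f_ultimate(lo, x) + f_ultimate(hi, x))
    s += sum(f_ultimate(lo + i * h, x) for i in range(1, n))
    return s * h

x = 0.8
p_formula2 = prob_between(x, 1.0, x)    # P(BEL*(H) >= current belief)
p_formula3 = prob_between(0.6, 1.0, x)  # P(BEL*(H) >= d) for d = 0.6
print(p_formula2, p_formula3)
```

The Beta choice and the concentration k are stand-ins; in practice these statistics would be estimated empirically from interpreted data sets, as the Analytic Support section discusses.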
Variations on these formulas can result from the use of approximate beliefs and solutions, rather than the exact ones used here.

We are exploring what hypothesis characteristics should be conditioning factors in the above formulas. Again, this will depend on the particular problem domain, as the predictiveness of different characteristics is likely to vary across domains, and systems vary in their solution quality requirements. From our experience, it appears that for SI problems both hypothesis belief and hypothesis type are important factors. Another possibility is the "quantity" of data supporting the hypothesis or the fraction of the overall data that has been processed. The RESUN SI framework (Carver & Lesser 1991) also provides detailed information about the reasons for uncertainty in beliefs, and such information may be necessary to identify hypotheses that are reliable enough to be assumed for global solutions.

Solution Quality and Coordination

Nearly monotonic problem domains are of interest for CDPS because they can make it possible to produce high quality global solutions with low communication costs. Near monotonicity means that consistency of local solutions can be highly predictive that the merged solution would be the best global solution. In this section we will examine the issue of solution quality when using the "consistent local solutions strategy." We will also discuss the trade-offs involved in developing FA/C coordination strategies to take advantage of near monotonicity.

MPE* is one possible standard to use in evaluating the quality of a global solution SG produced by an FA/C-based SI system. SG could be compared against MPE* in terms of P(SG = MPE*); however, this is often not the most meaningful metric for SI problems.
First, SG will tend to be incomplete (SG ⊂ MPE*) if it is based on incomplete data (a subset of 𝒟), but these missing hypotheses are not important in SI applications if the data is selected appropriately (i.e., we care about targets, but not "noise"). Second, the likelihood of individual hypotheses being correct is more useful than the likelihood of the complete set of hypotheses being correct, because it tends to be the individual hypotheses (e.g., vehicles) which we must decide whether to respond to rather than the entire set of solution hypotheses. Because of these factors, in judging solution quality we will consider P(H ∈ MPE* | H ∈ SG).

To produce solution quality results, we first need to better define what "consistency" of local solutions means and what it means to "integrate" local solutions to produce a global solution. Our definition of consistency of local solutions is an evidential one: solutions are consistent if hypotheses that comprise each of the local solutions are pairwise identical, independent, or corroborative. Two local solutions are inconsistent when any of their component hypotheses are contradictory (i.e., have a negative evidential relationship).

Hypotheses can be corroborative in either of two ways: when one is evidence for the other (one explains the other and the other supports the one), or when they are of the same type and can be merged into a single consistent hypothesis. Merging typically involves producing a single "more complete" hypothesis from two or more "less complete" hypotheses. For instance, two partial vehicle track hypotheses may be merged into a longer track hypothesis.
While the resulting hypothesis could always be built from scratch from the combined supporting data/evidence of the component hypotheses, when we refer to the "merging" of hypotheses we will assume that this is done from the solution hypotheses, without reference to their supporting data.7 While this is clearly more efficient, in general beliefs for combination hypotheses cannot be precisely computed in this way (i.e., without access to the supporting data).

7 The Distributed Vehicle Monitoring Testbed (DVMT) (Durfee & Lesser 1987; Lesser & Corkill 1983) had "merge" operators that did exactly this. DRESUN (Carver & Lesser 1995a) allows hypotheses to be "external evidence" for hypotheses of the same type.

The last thing that needs to be done to produce solution quality results is to provide a more complete definition of what we mean by the "consistent local solutions strategy" for developing global solutions:8

1. each agent first uses only its own local data to develop a (local) solution;
2. upon satisfying some solution criteria, an agent communicates its solution's abstract hypotheses to all agents with which it has recognized subproblem interactions;
3. the agent also sends its solution to any agents from which it has received solutions that were not included in step 2, and it continues to do so as solutions are received from any such additional agents;
4. the agent now proceeds to integrate its solution, one-by-one, with each of the solutions it has received and may yet receive prior to termination;
5. processing terminates when all agents have transmitted their solutions according to steps 2 and 3, and have integrated their solution with all received solutions;
6. the global solution is simply the union of all the final, integrated agent solutions.
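The integration steps above can be sketched in a minimal, single-process form. The data structures and the specific consistency and merge rules below are hypothetical stand-ins; real FA/C agents would run this asynchronously with domain-specific evidential tests:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyp:
    kind: str          # hypothesis type, e.g. "Track1"
    times: frozenset   # time points the hypothesis covers
    belief: float

def contradictory(h1, h2):
    # stand-in test: different vehicle types claimed over overlapping times
    return h1.kind != h2.kind and bool(h1.times & h2.times)

def merge(h1, h2):
    # Merge corroborative same-type hypotheses into a "more complete" one.
    # Per the paper, the merged belief is at least the max of the parts;
    # taking exactly the max here is a stand-in combination rule.
    return Hyp(h1.kind, h1.times | h2.times, max(h1.belief, h2.belief))

def integrate(s1, s2):
    """Integrate two local solutions (steps 4 and 6). Returning None signals
    inconsistency, which would trigger the further communication (possibly
    of raw data) described in the text."""
    if any(contradictory(a, b) for a in s1 for b in s2):
        return None
    out = list(s1)
    for b in s2:
        for a in [a for a in out if a.kind == b.kind and a.times & b.times]:
            out.remove(a)
            b = merge(a, b)   # corroborative: extend into a longer track
        out.append(b)         # independent (or merged) hypothesis joins
    return out

a1 = [Hyp("Track1", frozenset({1, 2, 3}), 0.8)]
a2 = [Hyp("Track1", frozenset({3, 4, 5}), 0.7),
      Hyp("Track2", frozenset({7, 8}), 0.9)]
joint = integrate(a1, a2)
print(joint)
```

Here the two overlapping Track1 fragments merge into one track covering times 1 through 5, while the independent Track2 hypothesis is simply added to the joint solution.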
If two agents' local solutions are consistent when they are exchanged, then the integrated solution will be as described above in our discussion of consistency and merging: independent hypotheses will be added to the joint solution and corroborative hypotheses will be linked or merged. If agents' local solutions are inconsistent when they are exchanged then the agents will be forced to engage in further communication (possibly involving raw data) to resolve the contradictions and determine the "best" joint solution.

8 Our description of the strategy is intended to be clear for the analysis; it is not intended to represent the way one would actually implement the strategy. It does not worry about agents duplicating work when integrating solutions, nor how to efficiently produce a final global solution, and so forth. Also, it does not worry about trade-offs in FA/C problem solving, discussed below.
9 This last assumption is included to simplify the theorem, but it can easily be relaxed to allow the agents to select a non-MPE joint solution, as long as they do compute proper conditional probabilities. If the assumption is relaxed, this results in additional approximations in the global solution.
10 The superscripts A1 and A2 denote which agent's belief we mean. The superscript A1,A2 denotes the merged result.

Solution Quality Theorem: To derive some results about solution quality, we will make the following assumptions:

- We have available statistical information as in formula 4 in the previous section, and this probability is well correlated with hypothesis type and belief (so no additional conditioning factors are needed).
- Thus, we have P(H ∈ MPE* | type(H), BEL(H)), which we will refer to as P_MPE*(H).
- Agents use the "consistent local solutions strategy" described above.
- Agents compute BEL(H) = P(H | D) for the subset D of their own local data that they process before transmitting their solutions.
- In the case of inconsistent local solutions, the agents
involved compute the MPE joint partial solution (based on the data they have jointly processed).9

Under these conditions, what we can say about the quality of the resulting global solution is that, for all H: P(H ∈ MPE* | H ∈ SG) ≥ P_MPE*(h_max). h_max is simply H, unless H resulted from the merging of consistent hypotheses (identical or of the same type). In this second case, h_max is the hypothesis with the maximum belief out of all the hypotheses merged to produce H (e.g., if H resulted from the combination of h1 from A1 and h2 from A2, and BEL(h1) ≥ BEL(h2), then h_max = h1).

Proof: Under the specified strategy, approximations will occur only when agents compute their local solutions, exchange them, and they are consistent. If they are inconsistent then the agents will engage in further communication to find the MPE solution to their joint data sets. When solutions are consistent, no further exchange of information will take place, and the joint solution will be the "merge" of the consistent solutions.

Suppose that agent A1's solution is S1 and agent A2's solution is S2, and they are consistent. If hypothesis H is in S1 then either it is (1) independent of every hypothesis Hi ∈ S2; (2) identical to some hypothesis Hi ∈ S2; or (3) corroborative with one or more hypotheses, say {Hj} ⊆ S2 (of the same or of different types as H).

If H is independent of all Hi ∈ S2 then BEL^{A1,A2}(H) = BEL^{A1}(H), so P_MPE*(H) is based on the local belief BEL^{A1}(H) computed by A1.10 If H is identical with some Hi ∈ S2, BEL^{A1,A2}(H) ≥ maximum(BEL^{A1}(H), BEL^{A2}(Hi)). Since P_MPE*(H) will be monotonically nondecreasing with increasing hypothesis belief, P_MPE*(H) following the merge must be greater than or equal to its value from either agent's local data. If H is corroborative with hypotheses in S2 then either (1) it is supporting or explanatory for these hypotheses, or (2) it can be merged with a hypothesis of the same type.
In the first case, BEL^{A1,A2}(H) ≥ BEL^{A1}(H) by the definition of being corroborative. Thus, P_MPE*(H) following the merge must be greater than or equal to its value from A1's local data. Now we must deal with corroborative hypotheses of the same type. Suppose that H is the result of merging h1 from A1 and h2 from A2. We must have BEL(H) ≥ BEL(h1), BEL(H) ≥ BEL(h2), and so BEL(H) ≥ maximum(BEL(h1), BEL(h2)), by our definition of corroborative hypotheses. Thus, we would have P_MPE*(H) ≥ maximum(P_MPE*(h1), P_MPE*(h2)).

What this theorem tells us is that we can use the "consistent local solutions strategy" and potentially get a global solution whose components are as likely to be in the MPE global solution as we desire (by selecting appropriate criteria that local solutions must meet prior to being exchanged). This is a very useful result even though we are not guaranteed to produce the best global solution under this strategy, since some type of approximation is required for most SI problems.

Of course, being able to use this strategy to efficiently achieve a desired likelihood depends on two things being true: (1) agents can produce local solutions whose hypotheses have high enough belief, and (2) local solutions are largely consistent. This suggests that effective use of the "consistent local solutions strategy" requires appropriate structuring of the distributed system.11 Usually at least one agent must have sufficient data to produce a high belief version of each solution hypothesis. Agents must also have enough overlap in their data coverage that it is unlikely that they will produce inconsistent solutions. When these conditions are not met, the agents may be forced to communicate considerably more information/data among themselves in order to produce a global solution of the desired quality.
For FA/C-based SI with limited communication, it is clearly advantageous to understand whether the do- main is nearly monotonic or not, and if it is to design coordination strategies to capitalize on this property. Still, the design of a coordination strategy must con- sider numerous trade-offs. For instance, to take maxi- mum advantage of a problem being nearly monotonic, agents should try to produce appropriate (nearly mono- tonic) interpretation hypotheses based on their local data and only then (or when it is found that this can- not be done) exchange them with other agents. The problem with this approach is that while it will mini- mize the communication of raw data among the agents, it may not produce the best performance in terms of time to reach a solution. This is because agents may not be able to produce nearly monotonic solution hy- potheses from their data and their local solutions may not be consistent even if they can. Should agents fail to produce nearly monotonic solution hypotheses and/or produce inconsistent solutions then raw data generally will have to be communicated and processed by some agents. If the need to do this is discovered only after a significant amount of processing time, then production of the ultimate solution will be delayed. In this type of situation, where agents require “constraint informa- tion” from other agents, it is advantageous to receive this information as early in processing as possible. Analytic Support While we have demonstrated that nearly monotonic problems have the potential to support efficient FA/C- “These basic requirements were noted in (Lesser 1991): “some qualitative intuitions on...requirements for the use of the FA/C paradigm: local partial solutions are valid suf- ficiently often to seed system-wide problem solving with enough correct constaints.. . .” P.l.l. Start --) Trackld (p = 0.7) P.1.2. Start + Trackat (p = 0.3) P.2.0. Tracklt -+ Vlt Nit Tracklt+l (p = 1.0) P.3.0. 
Track2_t → V2_t N_t Track2_t+1 (p = 1.0)
P.4.1. V1_t → S2_t (p = 0.5)
P.4.2. V1_t → S3_t S5_t (p = 0.5)
P.5.1. V2_t → S3_t (p = 0.5)
P.5.2. V2_t → S2_t S4_t (p = 0.5)
P.6.1. N_t → S4_t (p = 0.5)
P.6.2. N_t → S5_t (p = 0.3)
P.6.3. N_t → lambda (p = 0.2)

Figure 1: Simple vehicle tracking IDP grammar.

based distributed SI, we have not yet shown that real-world SI problems are indeed nearly monotonic. This could be done by taking data sets, determining the correct global solutions in a centralized fashion, and then collecting the necessary statistics by selecting random subsets of these data sets, interpreting them, and analyzing the (partial) solutions relative to the global solutions. As a first step toward this eventual goal, we have instead made use of a recently developed framework for analyzing SI domains and problem solvers to provide some support for the concept of nearly monotonic problems.

Complex SI problems can be represented and analyzed using the Interpretation Decision Problem (IDP) formalism (Whitehair & Lesser 1993; Whitehair 1996). In the IDP formalism, the structure of both problem domains and problem solvers is represented in terms of context free attribute grammars and functions associated with the production rules of the grammars. The formalism has been used to analyze a variety of simulated SI domains and SI problem-solving architectures. For example, grammars have been constructed that represent the SI domains and goal-directed blackboard architecture used in the Distributed Vehicle Monitoring Testbed (DVMT) (Durfee & Lesser 1987; Lesser & Corkill 1983).

We will first use a very simple vehicle tracking grammar to illustrate a nearly monotonic SI domain. In the problem domain defined by the grammar rules in Figure 1, there are two kinds of vehicles, V1 and V2. The nonterminals Track1 and Track2 correspond to vehicle tracks of these two types, respectively.
The terminal symbols in this grammar, S2, S3, S4, and S5, correspond to actual sensor data generated by the moving vehicles. The nonterminal N represents random noise in the environment. The terminal "lambda" that appears in rule P.6.3 corresponds to an empty string. The subscripts, t+n, correspond to the time in which an event occurs. The nonterminals Track1 and Track2 are the potential solutions for problem instances generated with this grammar.

Grammars of this form are referred to as Generational IDP Grammars (IDPg) (Whitehair 1996). An IDPg generates a specific problem instance using the probabilities that are shown in parentheses with each rule. For example, given a nonterminal V1, IDPg will generate an S2 with probability 0.5, or it will generate an S3 and S5 with probability 0.5. We refer to these probabilities as the grammar's distribution function.

This example grammar is important because it illustrates the relationships that can lead to nearly monotonic, complex domains. For example, consider a situation where the sensor data "S2 S4" is observed. This data is ambiguous because it could have been generated by either a Track1 or a Track2. A Track1 could have generated "S2 S4" by generating a V1 and an N, which would have generated an S2 and an S4 respectively. The probability of a V1 generating an S2 is 0.5 and the probability of an N generating an S4 is 0.5. A Track2 could have generated this data by generating a V2, which would have generated "S2 S4" with probability 0.5. Thus, given the possible interpretations V1 and V2 for "S2 S4," and given that BEL(H) is the problem solver's belief in interpretation hypothesis H, the values P(V1 ∈ MPE* | BEL(V1)) and P(V2 ∈ MPE* | BEL(V2)) are approximately equal for any values of BEL(V1) and BEL(V2).
This means that it is not possible to use the beliefs in V1 and V2 to differentiate between a Track1 and a Track2 interpretation of "S2 S4." Thus, the domain is nonmonotonic. On the other hand, consider the sequence of vehicle (position) hypotheses, "V2_t V2_t+1 V2_t+2 V2_t+3." Each of these V2 hypotheses would be associated with either an "S2 S4" or an S3. If a V2 explains an S3, then the probability that the partial results are from a Track2 is very high (actually it is a certainty). However, if a V2 explains an "S2 S4," it is possible that the sensor data was really generated by "V1 → S2" and "N → S4." However, as the length of the track (vehicle position sequence) "V2_t V2_t+1 V2_t+2 V2_t+3 ..." increases, it becomes more and more likely that the MPE of the data is a Track2.[12] In other words, as the length of the track increases, P(track ∈ MPE | BEL(track)) becomes very high.

This observation is intended to illustrate the following point. In complex, real-world domains, it is often the case that certain "equivalence classes" of partial results exhibit behavior that is nearly monotonic. In the vehicle tracking domain, this occurs for the equivalence class of partial tracks when the partial tracks extend over a significant number of time periods. More than likely, this phenomenon holds in other domains as well. For example, as the length of an interpreted fragment of speech increases, it is likely that the associated equivalence class of partial results will exhibit nearly monotonic properties.

[12] As the length of the partial track of V2 hypotheses increases, the probability that it was generated by a Track1 and noise decreases. If the probability of a single "S2 S4" being generated by a V1 (i.e., by a Track1) is 0.25 (as in the example grammar), then the probability of two "S2 S4" events occurring sequentially is 0.25 * 0.25. The probability of three such events occurring sequentially is 0.25 * 0.25 * 0.25. And so forth.
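The arithmetic in footnote 12 extends into a small posterior calculation that makes the near monotonicity concrete. This is a sketch under simplifying assumptions: the track priors are taken from rules P.1.1 and P.1.2 (0.7 and 0.3), every time step is assumed to yield "S2 S4," and only the two competing explanations discussed above are considered (per-step likelihood 0.25 for Track1 via V1 and N, 0.5 for Track2 via V2; other emission combinations are ignored).

```python
def posterior_track2(n_steps):
    """P(Track2 | n_steps consecutive "S2 S4" observations), using the
    Figure 1 priors and the per-step likelihoods from footnote 12."""
    track1 = 0.7 * 0.25 ** n_steps   # prior * (V1 -> S2, N -> S4) per step
    track2 = 0.3 * 0.5 ** n_steps    # prior * (V2 -> S2 S4) per step
    return track2 / (track1 + track2)

for n in (1, 2, 4, 8):
    print(n, round(posterior_track2(n), 3))
```

Even though the prior favors Track1, the Track2 posterior climbs steadily with track length, which is exactly the near-monotonic behavior of the partial-track equivalence class.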
As a further demonstration that this phenomenon occurs in complex domains, we used the IDP framework to collect statistics for a more complex vehicle tracking problem domain (Whitehair 1996). This grammar modeled all of the phenomena that have been studied in the DVMT, plus some additional factors. In this domain, we defined three different interpretation types: group level (GL), vehicle level (VL), and partial tracks of length 4 (PT). We then accumulated statistics using the grammar. Problem instances were repeatedly generated and their MPE* interpretations determined (i.e., the problem was solved). For each such cycle, we recorded the credibilities[13] of any instances (hypotheses) of each of the interpretation types. However, we also divided each of the resulting sets into those hypotheses that were subsequently used in MPE* and those that were not.

What we found was that the GL and VL types were nonmonotonic with respect to credibility. In other words, the credibility in a GL or VL hypothesis was not a good indicator of whether or not the hypothesis was an element of MPE*. The distribution of credibility(H) was approximately the same for all GL and VL hypotheses, regardless of whether or not they were actually part of MPE*, and P(H ∈ MPE* | type(H) = VL, credibility(H) = 0.7) was only 0.32. On the other hand, for PT hypotheses there was a strong correlation between credibility and membership in MPE*: if a partial track of length 4 had a fairly high credibility, it was very likely to be part of MPE*. For example, P(H ∈ MPE* | type(H) = PT, credibility(H) = 0.7) = 0.92, while P(H ∈ MPE* | type(H) = PT, credibility(H) = 0.55) = 0.5. Thus, PT hypotheses were nearly monotonic (in terms of solution membership) if they achieved a reasonable credibility.
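Statistics of this kind can be tabulated with a few lines of code. The sketch below is illustrative only: the records are synthetic stand-ins, not the data behind the numbers reported above, and the credibility-bucketing scheme is an assumption.

```python
from collections import defaultdict

def membership_rates(records, bucket=0.05):
    """Estimate P(H in MPE* | type(H), credibility bucket) from logged
    problem-solving cycles. records holds (hyp_type, credibility, in_mpe)
    triples; each key's second component is an integer bucket index."""
    counts = defaultdict(lambda: [0, 0])   # key -> [in-MPE* count, total]
    for hyp_type, cred, in_mpe in records:
        key = (hyp_type, round(cred / bucket))
        counts[key][0] += int(in_mpe)
        counts[key][1] += 1
    return {k: hits / total for k, (hits, total) in counts.items()}

# Synthetic example: PT hypotheses near credibility 0.7 are usually in
# MPE*, while VL hypotheses at the same credibility usually are not.
records = ([("PT", 0.7, True)] * 9 + [("PT", 0.7, False)]
           + [("VL", 0.7, True)] * 3 + [("VL", 0.7, False)] * 7)
rates = membership_rates(records)
print(rates)
```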
Conclusion

In this paper, we have shown that while consistency checking of local agent solutions has been used in previous FA/C-based SI systems, this strategy cannot necessarily produce high quality global solutions. However, we have also shown that in certain domains the strategy can be used to efficiently find approximate, satisficing solutions. In particular, problems that we call nearly monotonic allow consistency checking of local solutions to produce reasonable global solutions when the local solutions meet certain criteria.

This work furthers our understanding of when the FA/C model is appropriate for distributed SI and what appropriate coordination strategies are. Much remains to be done, however. The importance of a problem being nearly monotonic remains an open issue. Near monotonicity can support efficient FA/C problem solving, but does not alone guarantee efficiency, and FA/C-based SI can be efficient even without the property: local solutions may frequently be inconsistent or local data may provide insufficient belief to make use of the property, and in some cases (even inconsistent) local solutions can be integrated with limited communication of raw data, particularly if we need only approximate global solutions. The focus of our future research on near monotonicity will be assessing whether real-world SI domains are nearly monotonic and determining how important this is for efficient FA/C-based SI. In particular, we want to understand what other properties might make it possible to detect and resolve inconsistencies while still limiting the need to communicate raw sensor data among agents.

The concept of nearly monotonic problems should be of interest beyond the distributed AI and FA/C communities.

[13] The IDP analysis tools currently compute a credibility rating for each hypothesis. This is not exactly what we have termed belief (the conditional probability of the hypothesis). Credibility(H) ≥ BEL(H).
For example, this characteristic would support efficient satisficing problem solving in complex centralized SI problems. Here the issue is not limiting communications among agents, but simply limiting the amount of data that is processed to make the problem tractable or real-time. Problem solving must be approximate and satisficing, and a statistical characterization of the monotonicity/nonmonotonicity in the domain would make it possible to evaluate the reliability of approximate solutions based on incomplete data processing. The lesson is that while nonmonotonicity is a fact of life in SI and many other domains, it is often not completely arbitrary or unpredictable, and models of its characteristics might yield important benefits.

Acknowledgements

This work was supported in part by the Department of the Navy, Office of the Chief of Naval Research, under contract N00014-95-1-1198. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

References

Bar-Shalom, Y., and Fortmann, T. 1988. Tracking and Data Association. Academic Press.

Carver, N., and Lesser, V. 1991. A New Framework for Sensor Interpretation: Planning to Resolve Sources of Uncertainty. In Proceedings of AAAI-91, 724-731.

Carver, N., Cvetanovic, Z., and Lesser, V. 1991. Sophisticated Cooperation in FA/C Distributed Problem Solving Systems. In Proceedings of AAAI-91, 191-198.

Carver, N., and Lesser, V. 1994. A First Step Toward the Formal Analysis of Solution Quality in FA/C Distributed Interpretation Systems. In Proceedings of the 13th International Workshop on Distributed Artificial Intelligence.

Carver, N. 1995. Examining Some Assumptions of the FA/C Distributed Problem-Solving Paradigm. In Proceedings of the Midwest Artificial Intelligence and Cognitive Science Society Conference, 37-42.

Carver, N., and Lesser, V. 1995a.
The DRESUN Testbed for Research in FA/C Distributed Situation Assessment: Extensions to the Model of External Evidence. In Proceedings of the International Conference on Multiagent Systems, 33-40.

Carver, N., and Lesser, V. 1995b. A Formal Analysis of Solution Quality in FA/C Distributed Sensor Interpretation Systems. Technical Report 95-05, Computer Science Department, Southern Illinois University. (Can be obtained from: http://www.cs.siu.edu/~carver)

Cox, I., and Leonard, J. 1994. Modeling a Dynamic Environment using a Bayesian Multiple Hypothesis Approach. Artificial Intelligence, vol. 66, 311-344.

Durfee, E. 1987. A Unified Approach to Dynamic Coordination: Planning Actions and Interactions in a Distributed Problem Solving Network. Ph.D. diss., Computer Science Department, University of Massachusetts.

Lesser, V., and Corkill, D. 1981. Functionally Accurate, Cooperative Distributed Systems. IEEE Transactions on Systems, Man, and Cybernetics, vol. 11, no. 1, 81-96.

Lesser, V., and Corkill, D. 1983. The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks. AI Magazine, vol. 4, no. 3, 15-33.

Lesser, V. 1991. A Retrospective View of FA/C Distributed Problem Solving. IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 6, 1347-1362.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.

Whitehair, R., and Lesser, V. 1993. A Framework for the Analysis of Sophisticated Control in Interpretation Systems. Technical Report 93-53, Computer Science Department, University of Massachusetts. (Can be obtained from: http://dis.cs.umass.edu/)

Whitehair, R. 1996. A Framework for the Analysis of Sophisticated Control. Ph.D. diss., Computer Science Department, University of Massachusetts.
An Average-Reward Reinforcement Learning Algorithm

Sridhar Mahadevan
Department of Computer Science and Engineering
University of South Florida
Tampa, Florida 33620
mahadeva@csee.usf.edu

Abstract

Average-reward reinforcement learning (ARL) is an undiscounted optimality framework that is generally applicable to a broad range of control tasks. ARL computes gain-optimal control policies that maximize the expected payoff per step. However, gain-optimality has some intrinsic limitations as an optimality criterion, since for example, it cannot distinguish between different policies that all reach an absorbing goal state, but incur varying costs. A more selective criterion is bias optimality, which can filter gain-optimal policies to select those that reach absorbing goals with the minimum cost. While several ARL algorithms for computing gain-optimal policies have been proposed, none of these algorithms can guarantee bias optimality, since this requires solving at least two nested optimality equations. In this paper, we describe a novel model-based ARL algorithm for computing bias-optimal policies. We test the proposed algorithm using an admission control queuing system, and show that it is able to utilize the queue much more efficiently than a gain-optimal method by learning bias-optimal policies.

Motivation

Recently, there has been growing interest in an undiscounted optimality framework called average reward reinforcement learning (ARL) (Boutilier & Puterman 1995; Mahadevan 1994; 1996a; Schwartz 1993; Singh 1994; Tadepalli & Ok 1994). ARL is well-suited to many cyclical control tasks, such as a robot avoiding obstacles (Mahadevan 1996a), an automated guided vehicle (AGV) transporting parts (Tadepalli & Ok 1994), and process-oriented planning tasks (Boutilier & Puterman 1995), since the average reward is a good metric to evaluate performance in these tasks.
However, one problem with the average reward criterion is that it is not sufficiently selective, both in goal-based tasks and in tasks with no absorbing goals. Figure 1 illustrates the limitation of the average reward criterion on a simple two-dimensional grid-world task. Here, the learner is continually rewarded +10 for reaching and staying in the absorbing goal state G, and is rewarded -1 in all non-goal states. Clearly, all control policies that reach the goal will have the same average reward. Thus, the average reward criterion cannot be used to select policies that reach absorbing goals in the shortest time.

Figure 1: A simple grid-world navigation task to illustrate the unselectivity of the average-reward criterion. The two paths shown result in the same average reward (+10), but one takes three times as long to get to the goal G.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org).

A more refined metric called bias optimality (Blackwell 1962) addresses the unselectivity of the average reward criterion. A policy is bias-optimal if it maximizes average reward (i.e., it is gain-optimal), and also maximizes the average-adjusted sum of rewards over all states. The latter quantity is simply the sum of rewards received, subtracting out the average reward at each step. For example, in Figure 1, the shorter path yields an average-adjusted reward of -22, whereas the longer path yields an average-adjusted reward of -66. Intuitively, bias optimality selects gain-optimal policies that maximize the average adjusted sum of rewards over the initial transient states (e.g., all non-goal states in Figure 1). In many practical problems where the average reward criterion is most useful, such as inventory control (Puterman 1994) and queueing systems (Kleinrock 1976), there may be several gain-optimal policies which can differ substantially in their "start-up" costs. In all such problems, it is critical to find bias-optimal
policies.

While several ARL algorithms have been previously proposed (Schwartz 1993; Singh 1994; Tadepalli & Ok 1994), none of these algorithms will yield bias-optimal policies in general. In particular, while they can compute the bias-optimal policy for the simple grid-world task in Figure 1, they cannot discriminate the bias-optimal policy from the gain-optimal policy for the simple 3-state Markov decision process (MDP) given in Figure 2, or for the admission control queueing task shown in Figure 5. The main reason is that these algorithms only solve one optimality equation, namely the average reward Bellman equation. It can be shown (Puterman 1994) that solving the Bellman equation alone is insufficient to determine bias-optimal policies whenever there are several gain-optimal policies with different sets of recurrent states. The MDPs given in Figure 2 and Figure 5 fall into this category.

In this paper we propose a novel model-based ARL algorithm that is explicitly designed to compute bias-optimal policies. This algorithm is related to previous ARL algorithms but significantly extends them by solving two nested optimality equations to determine bias-optimal policies, instead of a single equation. We present experimental results using an admission control queuing system, showing that the new bias-optimal algorithm is able to learn to utilize the queue much more efficiently than a gain-optimal algorithm that only solves the Bellman equation.

Gain and Bias Optimality

We assume the standard Markov decision process (MDP) framework (Puterman 1994). An MDP consists of a (finite or infinite) set of states S, and a (finite or infinite) set of actions A for moving between states. In this paper we will assume that S and A are finite. We will denote the set of possible actions in a state x by A(x).
Associated with each action a is a state transition matrix P(a), where p_xy(a) represents the probability of moving from state x to y under action a. There is also a reward or payoff function r : S × A → R, where r(x, a) is the expected reward for doing action a in state x. A stationary deterministic policy is a mapping π : S → A from states to actions. In this paper we consider only such policies, since a stationary deterministic bias-optimal policy exists.

Two states x and y communicate under a policy π if there is a positive probability of reaching (through zero or more transitions) each state from the other. A state is recurrent under a policy π if, starting from the state, the probability of eventually reentering it is 1. Note that this implies that recurrent states will be visited forever. A non-recurrent state is called transient, since at some finite point in time the state will never be visited again. A recurrent class of states is a set of recurrent states that all communicate with each other, and do not communicate with any state outside this class. An MDP is termed unichain if the transition matrix corresponding to every policy contains a single recurrent class, and a (possibly empty) set of transient states. Many interesting problems involve unichain MDPs, such as stochastic grid-world problems (Mahadevan 1996a), the admission control queueing system shown in Figure 5, and an AGV transporting parts (Tadepalli & Ok 1994).

Figure 2: A simple 3-state MDP that illustrates the unselectivity of the average reward criterion in MDPs with no absorbing goal states. Both policies in this MDP are gain-optimal; however, only the policy that selects action a1 in state A is bias-optimal.

Average reward MDP aims to compute policies that yield the highest expected payoff per step.
The average reward ρ^π(x) associated with a particular policy π at a state x is defined as

    ρ^π(x) = lim_{N→∞} E( Σ_{t=1}^{N} R_t^π(x) ) / N,  ∀x ∈ S,

where R_t^π(x) is the reward received at time t starting from state x, and actions are chosen using policy π. E(·) denotes the expected value. A gain-optimal policy π* is one that maximizes the average reward over all states, that is, ρ^{π*}(x) ≥ ρ^π(x) over all policies π and states x. Note that in unichain MDPs, the average reward of any policy is state independent. That is,

    ρ^π(x) = ρ^π(y) = ρ^π,  ∀x, y ∈ S, ∀π.

As shown in Figure 1, gain-optimality is not sufficiently selective in goal-based tasks, as well as in tasks with no absorbing goals. A more selective criterion called bias optimality addresses this problem. The average-adjusted sum of rewards earned following a policy π (assuming an aperiodic MDP) is

    V^π(x) = lim_{N→∞} E( Σ_{t=0}^{N-1} (R_t^π(x) − ρ^π) ),

where ρ^π is the average reward associated with policy π. A policy π* is termed bias-optimal if it is gain-optimal, and it also maximizes the average-adjusted values, that is, V^{π*}(x) ≥ V^π(x) over all x ∈ S and policies π. The relation between gain-optimal and bias-optimal policies is depicted in Figure 3.

Figure 3: This diagram illustrates the relation between gain-optimal and bias-optimal policies.

In the example 3-state MDP in Figure 2, both policies are gain-optimal since they yield an average reward of 1. However, the policy π that selects action a1 in state A generates bias values V^π(A) = 0.5, V^π(B) = -0.5, and V^π(C) = 1.5. The policy π is bias-optimal because the only other policy is π′, which selects action a2 in state A and generates bias values V^{π′}(A) = -0.5, V^{π′}(B) = -1.5, and V^{π′}(C) = 0.5.

Bias-Optimality Equations

The key difference between gain and bias optimality is that the latter requires solving two nested optimality equations for a unichain MDP. The first equation is the well-known average-reward analog of Bellman's optimality equation.
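The average-adjusted sums quoted for Figure 1 (-22 for the short path, -66 for the long one) follow directly from this definition: once the goal is reached, R_t − ρ = 10 − 10 = 0, so only the transient steps contribute. A minimal check, assuming path lengths of 2 and 6 steps to match the figure:

```python
def average_adjusted_sum(transient_rewards, gain):
    """Sum of (r_t - gain) over the transient prefix; the absorbing goal
    contributes 10 - 10 = 0 at every later step, so the tail vanishes."""
    return sum(r - gain for r in transient_rewards)

short_path = average_adjusted_sum([-1] * 2, 10)   # two -1 steps, gain 10
long_path = average_adjusted_sum([-1] * 6, 10)    # three times as long
print(short_path, long_path)
```

Each transient step contributes -1 − 10 = -11, so doubling or tripling the path length scales the bias penalty proportionally.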
Theorem 1 For any MDP that is either unichain or communicating, there exists a value function V* and a scalar ρ* satisfying the equation

    V*(x) + ρ* = max_{a ∈ A(x)} ( r(x, a) + Σ_y p_xy(a) V*(y) )    (1)

over all states, such that the greedy policy π* resulting from V* achieves the optimal average reward ρ* = ρ^{π*}, where ρ^{π*} ≥ ρ^π over all policies π.

Here, "greedy" policy means selecting actions that maximize the right-hand side of the above Bellman equation. There are many algorithms for solving this equation, ranging from DP methods (Puterman 1994) to ARL methods (Schwartz 1993). However, solving this equation does not suffice to discriminate between bias-optimal and gain-optimal policies for a unichain MDP. In particular, none of the previous ARL algorithms can discriminate between the bias-optimal policy and the gain-optimal policy for the 3-state MDP in Figure 2. A second optimality equation has to be solved to determine the bias-optimal policy.

Theorem 2 Let V be a value function and ρ be a scalar that together satisfy Equation 1. Define A_V(i) ⊆ A(i) to be the set of actions that maximize the right-hand side of Equation 1. There exists a function W : S → R satisfying the equation

    W(x) + V(x) = max_{a ∈ A_V(x)} ( Σ_y p_xy(a) W(y) )    (2)

over all states, such that any policy formed by choosing actions in A_V that maximize the right-hand side of the above equation is bias-optimal.

These optimality equations are nested, since the set A_V of actions over which the maximization is sought in Equation 2 is restricted to those that maximize the right-hand side of Equation 1. The function W, which we will refer to as the bias offset, holds the key to policy improvement, since it indicates how close a policy is to achieving bias-optimality.

A Model-based Bias-Optimality Algorithm

We now describe a model-based algorithm for computing bias-optimal policies for a unichain MDP. The algorithm estimates the transition probabilities from online experience, similar to (Jalali & Ferguson 1989; Tadepalli & Ok 1994).
However, unlike these previous algorithms, the proposed algorithm solves both optimality equations (Equation 1 and Equation 2 above). Since the two equations are nested, one possibility is to solve the first equation by successive approximation, and then solve the second equation. However, stopping the successive approximation process for solving the first equation at any point will result in some finite error, which could prevent the second equation from being solved. A better approach is to interleave the successive approximation process and solve both equations simultaneously (Federgruen & Schweitzer 1984).

The bias optimality algorithm is described in Figure 4. The transition probabilities P_ij(a) and expected rewards r(i, a) are inferred online from actual transitions (steps 7 through 10). The set h(i) represents all actions in state i that maximize the right-hand side of Equation 1 (step 3). The set w(i), on the other hand, refers to the subset of actions in h(i) that also maximize the right-hand side of Equation 2. The algorithm successively computes A(i, ε_n) and w(i, ε_n), the set of gain-optimal actions that are within ε_n of the maximum value, and the set of bias-optimal actions within this gain-optimal set. This allows the two nested equations to be solved simultaneously. Here, ε_n is any series of real numbers that slowly decays as n → ∞, similar to a "learning rate". Note that since the algorithm is
In the description, we have proposed choosing the reference state that is recurrent under all policies, if such exists, and is known beforehand. For example, in a standard stochastic grid-world problem (Mahadevan 1996a), the goal state satisfies this condition. In the admission control queueing task in Figure 5, the state (0,O) sat- isfies this condition. The policy output by the algo- rithm maximizes the expected bias offset value, which, as we discussed above, is instrumental in policy im- provement . Bias Optimality in an Admission Control Queuing System We now present some experimental results of the pro- posed bias optimality algorithm using an admission control system, which is a well-studied problem in queueing systems (Kleinrock 1976). Generally speak- ing, there are a number of servers, each of which pro- vides service to jobs that are arriving continuously ac- cording to some distribution. In this paper, for the sake of simplicity, we assume the M/M/l queuing model, where the arrivals and service times are independent, memoryless, and distributed exponentially, and there is only 1 server. The arrival rate is modeled by param- eter X, and the service rate by parameter ,Y. At each arrival, the queue controller has to decide whether to admit the new job into the queue, or to reject it. If ad- mitted, each job immediately generates a fixed reward R for the controller, which also incurs a holding cost f(j) for the j jobs currently being serviced. The aim is to infer an optimal policy that will maxi- mize the rewards generated by admitting new jobs, and simultaneously minimize the holding costs of the exist- ing jobs in the queue. Stidham(Stidham 1978) proved that if the holding cost function f(j) is convex and non- decreasing, a control limit policy is optimal. A control limit policy is one where an arriving new job is admit- ted into the queue if and only if there are fewer than L jobs in the system. 
Recently, Haviv and Puterman (Haviv & Puterman) show that if the cost function f(j) = cj, there are at most two gain-optimal control limit policies, namely admit L and admit L + 1, but only one of them is also bias-optimal (admit L + 1). Intuitively, the admit L + 1 policy is bias-optimal because the additional cost of the new job is offset by the extra reward received. Note that since rejected jobs never return, a policy that results in a larger queue length is better than one that results in a smaller queue length, provided the average rewards of both policies are equal.

1. Initialization: Let n = 0, bias function V(x) = 0, and bias-offset function W(x) = 0.
2. Let the initial state be i. Initialize N(i, a) = 0, the number of times action a has been tried in state i. Let T(i, a, k) = 0, the number of times action a has caused a transition from i to k. Let the expected rewards r(i, a) = 0. Let s be some reference state, which is recurrent under all policies.
3. Let H(i, a) = r(i, a) + Σ_j P_ij(a)V(j), ∀a ∈ A(i). Let h(i) = {a ∈ A(i) | a maximizes H(i, a)}. Let A(i, ε_n) be the set of actions that are within ε_n of the maximum H(i, a) value.
4. Let w(i, ε_n) = {a ∈ A(i, ε_n) | a maximizes Σ_j P_ij(a)W(j)}.
5. With probability 1 − p_exp, select action A to be some a ∈ w(i, ε_n). Otherwise let action A be any random action a ∈ A(i).
6. Carry out action A. Let the next state be k, and the immediate reward be r_imm(i, A).
7. N(i, A) ← N(i, A) + 1.
8. T(i, A, k) ← T(i, A, k) + 1.
9. P_ik(A) ← T(i, A, k) / N(i, A).
10. r(i, A) ← r(i, A)(1 − 1/N(i, A)) + r_imm(i, A)/N(i, A).
11. V(i) ← max_{a ∈ A(i)} H(i, a) − max_{a ∈ A(s)} H(s, a).
12. W(i) ← max_{a ∈ A(i, ε_n)} (Σ_j P_ij(a)W(j) − V(i)) − max_{a ∈ A(s, ε_n)} (Σ_j P_sj(a)W(j) − V(s)).
13. If n < MAXSTEPS, set n ← n + 1 and i ← k, and go to step 3.
14. Output π(i) ∈ w(i, ε_n).

Figure 4: A model-based algorithm for computing bias-optimal policies for unichain MDPs.

Since the M/M/1 queuing model is a continuous-time MDP, we first convert it by uniformization (Puterman 1994) into a discrete-time MDP.
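The nested action filtering at the heart of Figure 4 (the sets A(i, ε_n) and w(i, ε_n) in steps 3 and 4) can be sketched in a few lines. The dictionaries below are hypothetical stand-ins: H maps each action to its H(i, a) value, and PW maps each action to Σ_j P_ij(a)W(j).

```python
def nearly_gain_optimal(H, eps):
    """A(i, eps): actions whose H(i, a) value is within eps of the maximum."""
    best = max(H.values())
    return {a for a, v in H.items() if v >= best - eps}

def bias_filtered(H, PW, eps):
    """w(i, eps): among A(i, eps), the actions maximizing sum_j P_ij(a)W(j)."""
    candidates = nearly_gain_optimal(H, eps)
    best_w = max(PW[a] for a in candidates)
    return {a for a in candidates if PW[a] == best_w}

# Two actions tie (to within eps) on the gain criterion, but the bias
# offset breaks the tie -- the situation arising in Figures 2 and 5.
H = {"a1": 1.00, "a2": 0.99}
PW = {"a1": 0.5, "a2": -0.5}
print(bias_filtered(H, PW, eps=0.05))
```

This is why the algorithm can separate the two gain-optimal control limit policies: both survive the first filter, and the bias offset selects admit L + 1.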
Figure 5 illustrates the general structure of the uniformized M/M/1 admission control queuing system. States in the figure are pairs (s, j), where s represents the number of jobs currently in the queue, and j is a boolean-valued variable indicating whether a new job has arrived. In states (s, 1), there are two possible actions, namely reject the new job (a = 0) or admit the new job (a = 1). In states (s, 0), there is only one possible action, namely continue the process (a = 0). Theoretically, there is an infinite number of states in the system, but in practice, a finite upper bound needs to be imposed on the queue size. Note that the two gain-optimal policies (admit L and admit L + 1) have different sets of recurrent states, just as in the 3-state MDP in Figure 2. States (0, x) to (L − 1, x) form a recurrent class in the admit L policy, whereas states (0, x) to (L, x) form the recurrent class in the admit L + 1 policy.

Figure 5: This diagram illustrates the MDP representation of the uniformized M/M/1 admission control queuing system for the average reward case.

The reward function for the average reward version of the admission control queuing system is as follows. If there are no jobs in the queue, and no new jobs have arrived, the reward is 0. If a new job has arrived and is admitted in state s, the reward equals the difference between the fixed payoff R for admitting the job and the cost of servicing the s + 1 resulting jobs in the queue. Finally, if the job is not admitted, the reward is the service cost of the existing s jobs. There is an additional multiplicative term λ + μ that results from the uniformization process.

    r((0, 0), 0) = r((0, 1), 0) = 0.
    r((s, 1), 1) = [R − f(s + 1)](λ + μ),  s ≥ 0.
    r((s, 0), 0) = r((s, 1), 0) = −f(s)(λ + μ),  s ≥ 1.

Table 1 compares the performance of the bias-optimal algorithm with a simplified gain-optimal algorithm for several sets of parameters for the admission control system.
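The uniformized reward function above is easy to transcribe. The sketch below assumes the linear holding cost f(j) = c·j studied by Haviv and Puterman; the parameter values λ, μ, R, and c are purely illustrative.

```python
LAM, MU = 2.0, 2.0    # arrival and service rates (illustrative values)
R, C = 15.0, 1.0      # admission payoff and per-job holding cost

def f(j):
    """Linear holding cost f(j) = c * j."""
    return C * j

def reward(state, action):
    """Uniformized reward r((s, arrived), action) from the equations above.
    state = (s, arrived): s jobs in the queue, arrived = 1 if a new job
    just arrived; action = 1 admits it, action = 0 rejects/continues."""
    s, arrived = state
    if arrived and action == 1:                # admit the new job
        return (R - f(s + 1)) * (LAM + MU)
    return -f(s) * (LAM + MU)                  # holding cost only

print(reward((0, 1), 1), reward((3, 1), 0))
```

Note that the s = 0 cases of the equations fall out of the same expression, since f(0) = 0 makes the holding-cost branch vanish.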
We selected these from a total run of around 600 parameter combinations since these produced the largest improvements. Each combination was tested for 30 runs, with each run lasting 200,000 steps. Of these 600 parameter sets, we observed improvements of 25% or more in a little over 100 cases. In all other cases, the two algorithms performed equivalently, since they yielded the same average reward and average queue length. In every case shown in the table, there is a substantial improvement in the performance of the bias-optimal algorithm, as measured by the increase in the average size of the queue. What this means in practice is that the bias-optimal algorithm allows much better utilization of the queue, without increasing the cost of servicing the additional items in the queue. Note that the improvement will occur whenever there are multiple gain-optimal policies, only one of which is bias-optimal. If there is only one gain-optimal policy, the bias optimality algorithm will choose that policy and thus perform as well as the gain-optimal algorithm.

4  4  12  1    48.0%
2  2  15  1    47.9%

Table 1: This table compares the performance of the model-based bias-optimal algorithm with a (gain-optimal) simplification of the same algorithm that only solves the Bellman equation.

Related Work

To our knowledge, the proposed bias optimality algorithm represents the first ARL method designed explicitly for bias optimality. However, several previous algorithms exist in the DP and OR literature. These range from policy iteration (Veinott 1969; Puterman 1994) to linear programming (Denardo 1970). Finally, Federgruen and Schweitzer (Federgruen & Schweitzer 1984) study successive approximation methods for solving a general sequence of nested optimality equations, such as Equation 1 and Equation 2. We expect that bias-optimal ARL algorithms, such as the one described in this paper, will scale better than these previous non-adaptive bias-optimal algorithms. Bias-optimal ARL algorithms also have the added benefit of not requiring detailed knowledge of the particular MDP. However, these previous DP and OR algorithms are provably convergent, whereas we do not yet have a convergence proof for our algorithm.

Future Work

This paper represents the first step in studying bias optimality in ARL. Among the many interesting issues to be explored are the following:

Model-free Bias Optimality Algorithm: We have also developed a model-free bias optimality algorithm (Mahadevan 1996b), which extends previous model-free ARL algorithms, such as R-learning (Schwartz 1993), to compute bias-optimal policies by solving both optimality equations.

Scale-up Test on More Realistic Problems: In this paper we only report experimental results on an admission control queuing domain. We propose to test our algorithm on a wide range of other problems, including more generalized queuing systems (Kleinrock 1976) and robotics related tasks (Mahadevan 1996a).

Acknowledgements

I am indebted to Martin Puterman for many discussions regarding bias optimality. I thank Larry Hall, Michael Littman, and Prasad Tadepalli for their detailed comments on this paper. I also thank Ken Christensen for helping me understand queueing systems. This research is supported in part by an NSF CAREER Award Grant No. IRI-9501852.

References

Blackwell, D. 1962. Discrete dynamic programming. Annals of Mathematical Statistics 33:719-726.
Boutilier, C., and Puterman, M. 1995. Process-oriented planning and average-reward optimality. In Proceedings of the Fourteenth IJCAI, 1096-1103. Morgan Kaufmann.
Denardo, E. 1970. Computing a bias-optimal policy in a discrete-time Markov decision problem. Operations Research 18:272-289.
Federgruen, A., and Schweitzer, P. 1984.
Successive approximation methods for solving nested functional equations in Markov decision problems. Mathematics of Operations Research 9:319-344.
Haviv, M., and Puterman, M. Bias optimality in controlled queueing systems. To appear in Journal of Applied Probability.
Jalali, A., and Ferguson, M. 1989. Computationally efficient adaptive control algorithms for Markov chains. In Proceedings of the 28th IEEE Conference on Decision and Control, 1283-1288.
Kleinrock, L. 1976. Queueing Systems. John Wiley.
Mahadevan, S. 1994. To discount or not to discount in reinforcement learning: A case study comparing R-learning and Q-learning. In Proceedings of the Eleventh International Conference on Machine Learning, 164-172. Morgan Kaufmann.
Mahadevan, S. 1996a. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning 22:159-196.
Mahadevan, S. 1996b. Sensitive-discount optimality: Unifying average-reward and discounted reinforcement learning. In Proceedings of the 13th International Conference on Machine Learning. Morgan Kaufmann. To appear.
Puterman, M. 1994. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley.
Schwartz, A. 1993. A reinforcement learning method for maximizing undiscounted rewards. In Proceedings of the Tenth International Conference on Machine Learning, 298-305. Morgan Kaufmann.
Singh, S. 1994. Reinforcement learning algorithms for average-payoff Markovian decision processes. In Proceedings of the 12th AAAI. MIT Press.
Stidham, S. 1978. Socially and individually optimal control of arrivals to a GI/M/1 queue. Management Science 24(15).
Tadepalli, P., and Ok, D. 1994. H-learning: A reinforcement learning method to optimize undiscounted average reward. Technical Report 94-30-01, Oregon State University.
Veinott, A. 1969. Discrete dynamic programming with sensitive discount optimality criteria. Annals of Mathematical Statistics 40(5):1635-1660.
Auto-Exploratory Average Reward Reinforcement Learning

DoKyeong Ok and Prasad Tadepalli
Computer Science Department
Oregon State University
Corvallis, Oregon 97331-3202
{okd,tadepalli}@research.cs.orst.edu

Abstract

We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.

Introduction

Reinforcement Learning (RL) is the study of learning agents that improve their performance at some task by receiving rewards and punishments from the environment. Most approaches to reinforcement learning, including Q-learning (Watkins and Dayan 92) and Adaptive Real-Time Dynamic Programming (ARTDP) (Barto, Bradtke, & Singh 95), optimize the total discounted reward the learner receives. In other words, a reward which is received after one time step is considered equivalent to a fraction of the same reward received immediately. One advantage of discounting is that it yields a finite total reward even for an infinite sequence of actions and rewards. While mathematically convenient, many real world domains to which we would like to apply RL do not have a natural interpretation or need for discounting. The natural criterion to optimize in such domains is the average reward received per time step.

Discounting encourages the learner to sacrifice long-term benefits for short-term gains, since the impact of an action choice on long-term reward decreases exponentially with time.
Hence, using discounted optimization when average reward optimization is what is required could lead to suboptimal policies. Nevertheless, it can be argued that it is appropriate to optimize discounted total reward if that also nearly optimizes the average reward. In fact, many researchers have successfully used discounted learning to optimize average reward per step (Lin 92; Mahadevan & Connell 92). This raises the question whether and when discounted RL methods are appropriate to use to optimize the average reward.

In this paper, we describe an Average reward RL (ARL) method called H-learning, which is an undiscounted version of Adaptive Real-Time Dynamic Programming (ARTDP) (Barto, Bradtke, & Singh 95). Unlike Schwartz's R-learning (Schwartz 93) and Singh's ARL algorithms (Singh 94), it is model-based, in that it learns and uses explicit action models. We compare H-learning with its discounted counterpart ARTDP to optimize the average reward in the task of scheduling a simulated Automatic Guided Vehicle (AGV), a material handling robot used in manufacturing. Our results show that H-learning is competitive with ARTDP when the short-term (discounted with strong discounting) optimal policy also optimizes the average reward. When short-term and long-term optimal policies are different, ARTDP either fails to converge to the optimal average reward policy or converges too slowly if discounting is weak.

Like most other RL methods, H-learning needs exploration to find a globally optimal policy. A number of exploration strategies have been studied in RL, including occasionally executing random actions, and preferring states which are least visited (counter-based) or actions least recently executed (recency-based) (Thrun 94). We introduce a version of H-learning which has the property of automatically exploring the unexplored parts of the state space while always taking a greedy action with respect to the current value function.
We show that this "Auto-exploratory H-learning" outperforms the previous version of H-learning under three different exploration strategies, including counter-based and recency-based methods.

The rest of the paper is organized as follows: Section 2 derives H-learning and compares it with ARTDP. Section 3 introduces Auto-exploratory H-learning, and compares it with a variety of exploration schemes. Section 4 is a discussion of related work and future research issues, and Section 5 is a summary.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Average Reward Reinforcement Learning

Markov Decision Processes (MDPs) are described by a set of n discrete states S and a set of actions A available to an agent. The set of actions which are applicable in a state i are denoted by U(i) and are called admissible. The Markovian assumption means that an action u in a given state i ∈ S results in state j with some fixed probability p_ij(u). There is a finite immediate reward r_ij(u) for executing an action u in state i resulting in state j. Time is treated as a sequence of discrete steps t = 0, 1, 2, .... A policy μ is a mapping from states to actions, such that μ(i) ∈ U(i). We only consider policies which do not change with time, which are called "stationary policies."

Let a controller using a policy μ take the agent through states s_0, ..., s_t in time 0 through t, accumulating a total reward r^μ(s_0, t) = Σ_{k=0}^{t-1} r_{s_k,s_{k+1}}(μ(s_k)). The expected total reward, E(r^μ(s_0, t)), is a good candidate to optimize; but if the controller has an infinite horizon, i.e., as t tends to ∞, it can be unbounded. The discounted RL methods make this sum finite by multiplying each successive reward by an exponentially decaying discount factor γ. In other words, they optimize the expected discounted total reward lim_{t→∞} E(Σ_{k=0}^{t-1} γ^k r_{s_k,s_{k+1}}(μ(s_k))), where 0 < γ < 1.
Discounting, however, tends to sacrifice bigger long-term rewards in favor of smaller short-term rewards, which is undesirable in many cases. A more natural criterion is to optimize the average expected reward per step over time t as t → ∞. For a given starting state s_0 and policy μ, this is denoted by ρ^μ(s_0) and is defined as:

ρ^μ(s_0) = lim_{t→∞} (1/t) E(r^μ(s_0, t))

We say that two states communicate under a policy if there is a positive probability of reaching each state from the other using that policy. A recurrent set of states is a closed set of states that communicate with each other, i.e., they do not communicate with states not in that set. Non-recurrent states are called transient. An MDP is ergodic if its states form a single recurrent set under each stationary policy. It is a unichain if every stationary policy gives rise to a single recurrent set of states and possibly some transient states.

For unichain MDPs, the expected long-term average reward per time step for any policy μ is independent of the starting state s_0. We call it the "gain" of the policy μ, denoted by ρ(μ), and consider the problem of finding a "gain-optimal policy," μ*, that maximizes ρ(μ) (Puterman 94).

Derivation of H-learning

Even though the gain of a policy, ρ(μ), is independent of the starting state, the total expected reward in time t may not be. The total reward for a starting state s in time t for a policy μ can be conveniently denoted by ρ(μ)t + ε_t(s). Although lim_{t→∞} ε_t(s) may not exist for periodic MDPs, the Cesaro-limit of ε_t(s), defined as lim_{l→∞} (1/l) Σ_{t=1}^{l} ε_t(s), always exists, and is called the bias of state s, denoted by h(s) (Bertsekas 95). Intuitively, h(i) - h(j) is the average relative advantage in long-term total reward for starting in state i as opposed to state j.
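For a unichain, the gain of a fixed policy can be computed directly from the stationary distribution of the policy's transition matrix. A small sketch on a hypothetical two-state chain (the numbers are ours, chosen purely for illustration):

```python
# Gain of a fixed stationary policy on a tiny two-state unichain MDP
# (hypothetical transition probabilities and rewards).
P = [[0.5, 0.5],          # P[i][j]: transition probability under the policy
     [0.2, 0.8]]
r = [1.0, 3.0]            # r[i]: expected one-step reward in state i

def gain(P, r, iters=10000):
    """rho(mu): long-run average reward = stationary distribution . r,
    with the stationary distribution found by power iteration."""
    n = len(r)
    d = [1.0 / n] * n
    for _ in range(iters):
        d = [sum(d[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(d[i] * r[i] for i in range(n))
```

Because the chain is a unichain, `gain(P, r)` is the same no matter which initial distribution the power iteration starts from, which is the state-independence property stated above.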
Theorem 1: For unichain MDPs, there exist a scalar ρ and a real-valued function h over S that satisfy the recurrence relation

∀i ∈ S, h(i) = max_{u∈U(i)} { r_i(u) + Σ_{j=1}^{n} p_ij(u) h(j) } - ρ    (1)

For any solution of (1), the policy μ* that attains the above maximum for each state i is gain-optimal, and ρ is its gain.

Notice that any one solution to Equation (1) yields an infinite number of solutions by adding the same constant to all h values. Setting the h value of an arbitrary recurrent "reference" state to 0 guarantees a unique solution for unichain MDPs. In White's relative value iteration method, the resulting equations are solved by synchronous successive approximation (Bertsekas 95). Unfortunately, the asynchronous version of this algorithm does not always converge, as was shown by Tsitsiklis, and cannot be the basis of an ARL algorithm (Bertsekas 82). Hence, instead of using Equation (1) to solve for ρ, H-learning estimates it from on-line rewards (see Figure 1).

The agent executes the algorithm in Figure 1 in each step, where i is the current state, and N(i, u) denotes the number of times u is executed in i, out of which T(i, u, j) times it resulted in state j. Our implementation explicitly stores the current greedy policy in the array GreedyActions. This gives a small improvement in performance in some domains because the policy is more stable than the value function. Before starting, the algorithm initializes α to 1, and all other variables to 0. GreedyActions are initialized to the set of admissible actions.

H-learning can be seen as a cross between R-learning, which is model-free and undiscounted, and Adaptive RTDP (ARTDP), which is model-based and discounted. Like ARTDP, it estimates the probabilities p_ij(a) and rewards r_i(a) by straightforward frequency counting, and employs the "certainty equivalence principle" by using the current estimates as the true values while updating the h values using Equation (1).
As in most RL methods, while using H-learning the agent makes some exploratory moves: moves that do not necessarily maximize the right hand side of Equation (1) and are intended to ensure that every state in S is visited infinitely often during training. These moves make the estimation of ρ slightly complicated. Simply averaging immediate rewards over non-exploratory moves would not do, because the exploratory moves could make the system visit states that it never visits if it were following the greedy policy, and accumulate rewards received by optimal actions in these states. Instead, we use R-learning's method of estimating the average reward (Schwartz 93). From Equation (1), for any "greedy" action u in any state i which maximizes the right hand side, ρ = r_i(u) - h(i) + Σ_{j=1}^{n} p_ij(u)h(j). Hence, the current ρ can be estimated by cumulatively averaging r_i(u) - h(i) + h(j) whenever a greedy action u is executed in state i resulting in state j.

H-learning is very similar to Jalali and Ferguson's Algorithm B, which is proved to converge to the gain-optimal policy for ergodic MDPs (Jalali and Ferguson 89). The ergodicity assumption allows them to ignore the issue of exploration, which is otherwise crucial for convergence to the optimal policy. Indeed, the role of exploration in H-learning is to transform the original MDP into an ergodic one by making sure that every state is visited infinitely often.

1. Take an exploratory action or a greedy action in the current state i. Let a be the action taken, k be the resulting state, and r_imm be the immediate reward received.
2. N(i, a) ← N(i, a) + 1; T(i, a, k) ← T(i, a, k) + 1
3. p_ik(a) ← T(i, a, k)/N(i, a)
4. r_i(a) ← r_i(a) + (r_imm - r_i(a))/N(i, a)
5. GreedyActions(i) ← all actions u ∈ U(i) that maximize {r_i(u) + Σ_{j=1}^{n} p_ij(u)h(j)}
6. If a ∈ GreedyActions(i), then
   (a) ρ ← (1 - α)ρ + α(r_i(a) - h(i) + h(k))
   (b) α ← α/(1 + α)
7. h(i) ← max_{u∈U(i)} {r_i(u) + Σ_{j=1}^{n} p_ij(u)h(j)} - ρ
8. i ← k

Figure 1: The H-learning Algorithm
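The per-step loop of Figure 1 can be sketched in code as follows. The class layout, the dictionary-based model, and the numerical greedy test are our own assumptions, not the authors' implementation:

```python
from collections import defaultdict

class HLearner:
    """Sketch of one H-learning step (Figure 1)."""
    def __init__(self, actions):
        self.U = actions               # U[i]: admissible actions in state i
        self.N = defaultdict(int)      # N[(i, a)]: action counts
        self.T = defaultdict(int)      # T[(i, a, k)]: transition counts
        self.r = defaultdict(float)    # r[(i, a)]: estimated expected reward
        self.P = defaultdict(dict)     # P[(i, a)][k]: estimated probability
        self.h = defaultdict(float)    # h[i]: bias values
        self.rho = 0.0                 # average-reward estimate
        self.alpha = 1.0

    def q(self, i, a):
        """r_i(a) + sum_j p_ij(a) h(j), the bracketed term in steps 5 and 7."""
        return self.r[(i, a)] + sum(p * self.h[k]
                                    for k, p in self.P[(i, a)].items())

    def update(self, i, a, k, r_imm):
        # steps 2-4: update counts, the action model, and the reward model
        self.N[(i, a)] += 1
        self.T[(i, a, k)] += 1
        n = self.N[(i, a)]
        for kk in {key[2] for key in self.T if key[:2] == (i, a)}:
            self.P[(i, a)][kk] = self.T[(i, a, kk)] / n
        self.r[(i, a)] += (r_imm - self.r[(i, a)]) / n
        # steps 5-6: update rho only when a greedy action was taken
        best_q = max(self.q(i, u) for u in self.U[i])
        if self.q(i, a) >= best_q - 1e-12:
            self.rho = ((1 - self.alpha) * self.rho
                        + self.alpha * (self.r[(i, a)] - self.h[i] + self.h[k]))
            self.alpha = self.alpha / (1 + self.alpha)
        # step 7: update the bias value of state i
        self.h[i] = best_q - self.rho
```

With α initialized to 1, the decay α ← α/(1+α) produces the sequence 1, 1/2, 1/3, ..., so ρ becomes an incremental average of r_i(u) - h(i) + h(k) over greedy steps.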
Secondly, to make the h values bounded, Algorithm B arbitrarily chooses a reference state and permanently sets its h value to 0. We found that this change slows down H-learning in many cases. In spite of these two differences, we believe that the convergence proof of Algorithm B can be extended to H-learning and R-learning as well.

Experimental Results on H-learning

In this section, we assume that we are in a domain where gain-optimality is the desired criterion, and experimentally study the question whether and when it may be appropriate to use discounting.

Our experimental results are based on comparing H-learning with its discounted counterpart ARTDP in a simplified task of scheduling simulated Automatic Guided Vehicles (AGVs). AGVs are mobile robots used in modern manufacturing plants to transport materials from one location to another. In our "Delivery domain" shown in Figure 2, there are two job generators on the left, one AGV, and two destination conveyor belts on the right. Each job generator produces jobs and puts them on its queue as soon as it is empty. The AGV loads and carries a single job at a time to its destination conveyor belt.

Figure 2: The Delivery domain

Each job generator can generate either of two types of jobs (when its queue is empty). Job 2, destined for belt 2, has a reward of 1 unit, while job 1, destined for belt 1, receives a reward K when delivered. The probability of generating job 1 is p for generator 1, and q for generator 2. The AGV moves on two lanes of 5 positions each, and can take one of six actions at a time: do-nothing, load, move-up, move-down, change-lane, and unload. load and unload can only be executed from the positions next to the source and the destination of jobs respectively. An obstacle randomly moves up and down in the right lane. There is a penalty of -5 for collisions with the obstacle.
There are a total of 540 different states in this domain, specified by the job numbers in the generator queues and the AGV, and the locations of the AGV and the obstacle. The goal is to maximize the average reward received per unit time.

We now present the results of comparing H-learning with ARTDP in the Delivery domain. p is set to 0.5, and q is set to 0.0. In other words, generator 1 produces both types of jobs with equal probability, while generator 2 always produces type 2 jobs. We compare the results of setting the reward ratio K to 1 and to 5 (Figure 3). The results shown are averages of 30 trials. For exploration, with 10% probability, we executed a randomly chosen admissible action.

When K = 1, since both jobs have the same reward, the gain-optimal policy is to always serve generator 2, which produces only type 2 jobs. Since the destination of these jobs is closer to their generator than that of type 1 jobs, it is also a discounted optimal policy. We call this type of domain a "short-range domain," where the discounted optimal policy for a small value of γ coincides with the gain-optimal policy. In this case, the discounted method, ARTDP, converges to the optimal policy slightly faster than H-learning, although the difference is negligible.

When K = 5, the gain-optimal policy conflicts with the discounted optimal policy when γ = 0.9. Whenever the AGV is close to belt 2, ARTDP sees a short-term opportunity in serving generator 2 and does not return to generator 1, thus failing to transport high reward jobs.

Figure 3: Average rewards per step for H-learning and ARTDP in the Delivery domain with p=0.5, q=0.0 for K=1 (above) and K=5 (below). Each point is the mean of 30 trials over the last 10K steps for K=1 and over the last 40K steps for K=5.
Hence it cannot find the optimal average reward policy when γ = 0.9. To overcome this difficulty, γ is set to 0.99. Even so, it could not find the optimal policy in 2 million steps. This is because high values of γ reduce the effect of discounting and make the temporally far off rewards relevant for optimal action selection. Since it takes a long time to propagate these rewards back to the initial steps, it takes a long time for the discounted methods to converge to the true optimum. Meanwhile the short-term rewards still dominate in selecting the action. Thus, as we can infer from Figure 3, in this "long-range" domain, ARTDP served generator 2 exclusively in all the trials, getting a gain less than 0.1, while H-learning was able to find a policy with a gain higher than 0.18.

We found that counter-based exploration improves the performance of both ARTDP and H-learning. While ARTDP is still worse than H-learning, the difference between them is smaller than with random exploration. We conclude that in long-range domains where the discounted optimal policy conflicts with the gain-optimal policy, discounted methods such as ARTDP and Q-learning either take too long to converge or, if γ is too low, converge to a sub-optimal policy. When there is no such conflict, H-learning is competitive with the discounted methods. In more exhaustive experiments with 75 different parameter settings for p, q and K, it was found that H-learning always converges to the gain-optimal policy, and does so in fewer steps in all but 16 short-range cases, where ARTDP is slightly faster (Tadepalli & Ok 94). We also found similar differences between Q-learning and R-learning. Our results are consistent with those of Mahadevan, who compared Q-learning and R-learning in a robot simulator domain and a maze domain and found that R-learning can be tuned to perform better (Mahadevan 96a).
Auto-exploratory H-learning

Recall that H-learning needs exploratory actions to ensure that every state is visited infinitely often during training. Unfortunately, actions executed exclusively for exploratory purposes could lead to decreased average reward, because they do not fully exploit the agent's currently known best policy.

In this section, we will describe a version of H-learning called Auto-exploratory H-learning (AH), which avoids the above problem by automatically exploring the promising parts of the state space while always executing current greedy actions. Our approach is similar to Kaelbling's Interval Estimation (IE) algorithm, and Koenig and Simmons's method of representing the reward functions using an action-penalty scheme (Kaelbling 90; Koenig & Simmons 96).

We are primarily interested in non-ergodic MDPs here because ergodic MDPs do not need exploration. Unfortunately, the gain of a stationary policy for a multichain (non-unichain) MDP depends on the initial state (Puterman 94). Hence we consider some restricted classes of MDPs. An MDP is communicating if for every pair of states i, j, there is a stationary policy under which they communicate. For example, our Delivery domain is communicating. A weakly communicating MDP also allows, in addition, a set of states which are transient under every stationary policy (Puterman 94). Although the gain of a stationary policy for a weakly communicating MDP also depends on the initial state, the gain of an optimal policy does not. AH-learning exploits this fact, and works by using ρ as an upper bound on the optimal gain. It does this by initializing ρ to a high value and by slowly reducing it to the gain of the optimal policy. AH is applicable for finding gain-optimal policies for weakly communicating MDPs, a strict superset of unichains.

There are two reasons why H-learning needs exploration: to learn accurate action and reward models, and to learn correct h values.
Inadequate exploration could adversely affect the accuracy of either of these, making the system converge to a suboptimal policy.

The key observation in the design of Auto-exploratory H-learning (AH) is that the current value of ρ affects how the h values are updated for the states in the current greedy policy. Let μ be the current sub-optimal greedy policy, and ρ(μ) be its gain. Consider what happens if the current value of ρ is less than ρ(μ). Recall that h(i) is updated to be max_{u∈U(i)} {r_i(u) + Σ_{j=1}^{n} p_ij(u)h(j)} - ρ. Ignoring the changes to ρ itself, the h values for states in the current greedy policy tend to increase on the average, because the sum of immediate rewards for this policy in any n steps is likely to be higher than nρ (since ρ < ρ(μ)). It is possible, under these circumstances, that the h values of all states in the current policy increase or stay the same. Since the h values of states not visited by this policy do not change, this implies that by executing the greedy policy, the system can never get out of this set of states. If the optimal policy involves going through states not visited by the greedy policy, it will never be learned.

Figure 4: The Two-State domain. The notation m(r, p) on the arc from state a to b indicates that r is the immediate reward and p is the probability of the next state being b when action m is executed in state a.

This is illustrated clearly in the Two-State MDP in Figure 4, which is a communicating multichain. In this domain, the optimal policy μ* is to take the action move in state 1 and stay in state 2, with ρ(μ*) = 2. Without any exploration, H-learning finds the optimal policy in approximately half of the trials for this domain: those trials in which the stay action in state 2 is executed before the stay action in state 1. If the stay action in state 1 is executed before that in state 2, it receives a reward of +1 and updates h(1) to 1 + h(1) - ρ.
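The runaway update just described is easy to reproduce numerically; a tiny sketch (the function name is ours):

```python
def inflated_h(rho, steps):
    """Repeat the greedy update h(1) <- 1 + h(1) - rho for the stay
    action; when rho < 1 (the gain of 'always stay'), h(1) only grows,
    so the greedy agent never has a reason to leave state 1."""
    h1 = 0.0
    for _ in range(steps):
        h1 = 1.0 + h1 - rho
    return h1
```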
Since ρ is between 0 and 1, this increases the value of h(1) in every update until finally ρ converges to 1. Since greedy action choice always results in the stay action in state 1, H-learning never visits state 2 and therefore converges to a suboptimal policy.

Now consider what happens if ρ > ρ(μ) for the current greedy policy μ. In this case, by the same argument as before, the h values of the states in the current greedy policy must decrease on the average. This means that eventually the states outside the set of states visited by the greedy policy will have their h values higher than some of those visited by the greedy policy. Since the MDP is assumed to be weakly communicating, the states with higher h values are reachable from the states with decreasing h values, and eventually will be visited, ignoring the transient states that do not affect the gain. Thus, as long as ρ > ρ(μ), there is no danger of getting stuck in a sub-optimal policy μ. This suggests changing H-learning so that it starts with a high initial ρ value, ρ_0, high enough so that it never gets below the gain of any sub-optimal policy.

In the preceding discussion, we ignored the changes to the ρ value itself. In fact, ρ is constantly changing at a rate determined by α. Hence, even though ρ was initially higher than ρ(μ), because it is now decreasing, it can become smaller than ρ(μ) after a while. To make the previous argument work, we have to adjust α so that ρ changes slowly compared to the h values. This can be done by starting with a sufficiently low initial α value, α_0. We denote H-learning with the initial values ρ_0 and α_0 by H^{ρ_0,α_0}. Hence, the H-learning of the previous section is H^{0,1}.

So far, we have considered the effect of lack of exploration on the h values. We now turn to its effect on the accuracy of action models. For the rest of the discussion, it is useful to define the utility R(i, u) of a state-action pair (i, u) to be

R(i, u) = r_i(u) + Σ_{j=1}^{n} p_ij(u) h(j) - ρ    (2)

Hence, the greedy actions in state i are actions that maximize the R value in state i.

Consider the following run of H^{6,0.2} in the Two-State domain, where, in step 1, the agent executes the action stay in state 1. It reduces h(1) = R(1, stay) to 1 - ρ and takes the action move in the next step. Assume that move takes it to state 1 because it has a 50% failure rate. With this limited experience, the system assumes that both actions have the same next state in state 1, and stay has a reward +1 while move has 0. Hence, it determines that R(1, stay) = 1 + h(1) - ρ > 0 + h(1) - ρ = R(1, move) and continues to execute stay, and keeps decreasing the value of h(1). Even though h(2) > h(1), the agent cannot get to state 2 because it does not have the correct action model. Therefore, it cannot learn the correct action model for move, and keeps executing stay. Unfortunately, this problem cannot be fixed by changing ρ_0 or α_0.

The solution we have implemented, called "Auto-exploratory H-learning" (AH-learning), starts with a high ρ_0 and low α_0 (AH^{ρ_0,α_0}), and stores the R values explicitly. In H-learning, all R values of the same state are effectively updated at the same time by updating the h value, which sometimes makes it converge to incorrect action models. In AH-learning, R(i, u) is updated by the right hand side of Equation (2) only when action u is taken in state i. When ρ is higher than the gain of the current greedy policy, the R value of the executed action is decreased, while the R values of the other actions remain the same. Therefore, eventually, the unexecuted actions appear to be the best in the current state, forcing the system to explore such actions.

We experimented with AH and H with different parameters ρ_0 and α_0 in our Two-State domain without any exploratory moves. Out of the 100 trials tested, AH^{6,0.2} found the optimal policy in all of them, whereas H^{0,1} and H^{6,0.2} found it in 57 and 69 trials respectively.
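The distinguishing update of AH-learning, refreshing R(i, a) by the right-hand side of Equation (2) only for the executed action, can be sketched as follows (the data layout and names are our assumptions):

```python
def ah_update(Rvals, h, model, i, a, rho):
    """Refresh only R(i, a); unexecuted actions keep their old, possibly
    optimistic, R values. model[(i, a)] = (expected_reward, probs), where
    probs maps next states to probabilities (illustrative layout)."""
    r_ia, probs = model[(i, a)]
    Rvals[(i, a)] = r_ia + sum(p * h[j] for j, p in probs.items()) - rho
```

Because ρ starts above the greedy policy's gain, each executed action's R value drifts downward while unexecuted actions keep their initial values, so the latter eventually look best and get tried.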
This confirms our hypothesis that AH-learning explores the search space effectively while always executing greedy actions.

Experimental Results on AH-learning

In this section, we compare AH-learning with some other exploration strategies in the Delivery domain of Figure 2. Unlike H-learning, our implementation of AH-learning does not explicitly store the policy.

We compared AH to three other exploration methods: random exploration, counter-based exploration, and recency-based exploration (Thrun 94). Random exploration was used as before, except that we optimized the probability with which random actions are selected. In counter-based exploration, in any state i, an action a is chosen that maximizes R(i, a) plus an exploration bonus that decreases with c(i), where c(i) is the number of times state i is visited, and δ is a small positive constant that scales the bonus. In recency-based exploration, an action a is selected which maximizes R(i, a) + ε√n(i, a), where n(i, a) is the number of steps since the action a was last executed in state i, and ε is a small constant. In all three cases, the parameters were tuned by trial and error until they gave the best performance.

The parameters for the Delivery domain, p, q and K, were set to 0.5, 0.0 and 5. Proper exploration is particularly important in this domain for the following reasons: First, the domain is stochastic; second, it takes many steps to propagate high rewards; and third, there are many sub-optimal policies with gain close to the optimal gain. For all these reasons, it is difficult to maintain ρ consistently higher than the gain of any sub-optimal policy, which is important for AH-learning to find the optimal policy. It gave the best performance with ρ_0 = 2 and α_0 = 0.0002.
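The three selection rules being compared can be sketched as scoring functions. The exact bonus forms below follow common Thrun (1994)-style bonuses and are our assumption, since the paper's formulas are only partially legible here:

```python
import math

def choose(i, actions, R, c, n, delta=0.05, eps=0.02, mode="counter"):
    """Pick the action maximizing R(i, a) plus an exploration bonus.
    c[i]: visit count of state i; n[(i, a)]: steps since a was last
    executed in i (illustrative data layout)."""
    if mode == "counter":        # bonus shrinks as state i is visited more
        score = lambda a: R[(i, a)] + delta / max(c[i], 1)
    elif mode == "recency":      # bonus grows with time since last use
        score = lambda a: R[(i, a)] + eps * math.sqrt(n[(i, a)])
    else:                        # pure greedy: what AH-learning relies on
        score = lambda a: R[(i, a)]
    return max(actions, key=score)
```

AH-learning uses only the greedy rule; the exploratory pressure comes from the optimistic initialization of ρ rather than from an explicit bonus term.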
Figure 5: The on-line mean rewards of the last 10K steps averaged over 30 trials for AH^{2,0.0002}, H^{1,0.001}, and H^{0,1} without exploration, and H^{0,1} with random, recency-based, and counter-based exploration strategies in the Delivery domain with p=0.5, q=0.0, and K=5.

Figure 5 shows the on-line mean rewards of 30 trials of AH^{2,0.0002}, H^{1,0.001}, and H^{0,1} with no exploratory actions, and of H^{0,1} with 3 different exploration methods: 8% random exploration, counter-based exploration with δ = 0.05, and recency-based exploration with ε = 0.02. Without any exploration, H^{0,1} could not find the optimal policy even once. With proper tuning of ρ0 and α0, it improved significantly, and was only slightly worse than AH, which found the optimal policy in all 30 trials. Counter-based exploration appears much better than random exploration for this domain, while recency-based exploration seems worse. AH achieved a much better on-line reward than all the other exploration methods, and did so more quickly. This result suggests that with proper initialization of ρ and α, AH-learning automatically explores the state space much more effectively than the other exploration schemes tested. A particularly attractive feature of AH-learning is that it does so without sacrificing any gain, and hence it should be preferred to these methods. Although AH-learning does involve tuning the two parameters ρ and α, it appears that at least ρ can be adjusted automatically. One way to do this is to keep track of the currently known maximum immediate reward over all state-action pairs, and to reinitialize ρ to something higher than this value whenever it changes.

Discussion and Future Work

There is an extensive literature on average reward optimization using dynamic programming approaches (Howard 60; Puterman 94; Bertsekas 95).
(Mahadevan 96a) gives a useful survey of this literature from a Reinforcement Learning point of view. Bias-optimality, or Schwartz's T-optimality, is a more refined notion that seeks a policy that maximizes the cumulated discounted total reward for all states as the discount factor γ → 1. All bias-optimal policies are also gain-optimal, but the converse does not hold. H-learning and R-learning can find the bias-optimal policies if and only if all gain-optimal policies give rise to the same recurrent set of states, and the transient states are repeatedly visited by some trial-based exploration strategy. To find bias-optimal policies for more general unichains, it is necessary to select bias-optimal actions from among the gain-optimal ones in every state using more refined criteria. Mahadevan extends both H-learning and R-learning to find the bias-optimal policies for general unichains, and illustrates that they improve their performance in an admission-control queuing system (Mahadevan 96b; Mahadevan 96c). Auto-exploratory H-learning is similar in spirit to the action-penalty representation of reward functions analyzed by Koenig and Simmons (Koenig & Simmons 96). They showed that a minimax form of Q-learning, which always takes greedy actions, can find the shortest cost paths from all states to a goal state in O(n^3) time, where n is the size of the state space. An analytical convergence result of this kind for AH-learning would be very interesting. In the similar Interval Estimation (IE) algorithm of Kaelbling (Kaelbling 90), the agent maintains a confidence interval of the value function and always chooses an action that maximizes its upper bound. This ensures that the chosen action either has a high utility or has a large confidence interval that needs exploration to reduce it. The idea of Auto-exploratory learning can be adapted to R-learning as well.
However, in our preliminary experiments we found that the value of ρ fluctuates much more in R-learning than in H-learning, unless α is initialized to be very small. Making α small has the consequence of slowing down learning. Although we have not seen this to be a problem with H-learning, the tradeoff between the need for exploration and the slow learning due to a small α value deserves further study. To scale H-learning to large domains, it is necessary to approximate its value function and action models. Elsewhere, we describe our results on learning action models in the form of conditional probability tables of a Bayesian network, and on using local linear regression to approximate its value function (Tadepalli & Ok 96). Both these extensions improve the space requirements of H-learning and the number of steps needed for its convergence. We also plan to explore other ways of approximating the value function that can be effective in multi-dimensional spaces.

Summary

The premise of our work is that many real-world domains demand optimizing the average reward per time step, while most work in Reinforcement Learning focuses on optimizing the discounted total reward. We presented a model-based average reward RL method called H-learning that demonstrably performs better than its discounted counterpart. We also presented Auto-exploratory H-learning, which automatically explores while always picking greedy actions with respect to its current value function. We showed that it outperforms many currently known exploration methods. Value function approximation for ARL systems is currently under investigation.

References

Barto, A. G., Bradtke, S. J., and Singh, S. P. 1995. Learning to Act using Real-Time Dynamic Programming. Artificial Intelligence, 73(1), 81-138.

Bertsekas, D. P. 1982. Distributed Dynamic Programming. IEEE Transactions on Automatic Control, AC-27(3).

Bertsekas, D. P. 1995.
Dynamic Programming and Optimal Control, Athena Scientific, Belmont, MA.

Howard, R. A. 1960. Dynamic Programming and Markov Processes. MIT Press, Cambridge, MA.

Jalali, A. and Ferguson, M. 1989. Computationally Efficient Adaptive Control Algorithms for Markov Chains. In Proceedings of the 28th IEEE Conference on Decision and Control, Tampa, FL.

Kaelbling, L. P. 1990. Learning in Embedded Systems, MIT Press, Cambridge, MA.

Koenig, S., and Simmons, R. G. 1996. The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement-Learning Algorithms. Machine Learning, 22, 227-250.

Lin, L-J. 1992. Self-improving Reactive Agents Based on Reinforcement Learning, Planning, and Teaching. Machine Learning, 8, 293-321.

Mahadevan, S. and Connell, J. 1992. Automatic Programming of Behavior-based Robots Using Reinforcement Learning. Artificial Intelligence, 55, 311-365.

Mahadevan, S. 1996a. Average Reward Reinforcement Learning: Foundations, Algorithms, and Empirical Results. Machine Learning, 22, 159-195.

Mahadevan, S. 1996b. An Average Reward Reinforcement Learning Algorithm for Computing Bias-optimal Policies. In Proceedings of AAAI-96, Portland, OR.

Mahadevan, S. 1996c. Sensitive Discount Optimality: Unifying Discounted and Average Reward Reinforcement Learning. In Proceedings of the International Machine Learning Conference, Bari, Italy.

Puterman, M. L. 1994. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley.

Schwartz, A. 1993. A Reinforcement Learning Method for Maximizing Undiscounted Rewards. In Proceedings of the International Machine Learning Conference, Morgan Kaufmann, San Mateo, CA.

Singh, S. P. 1994. Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes. In Proceedings of AAAI-94, MIT Press.

Tadepalli, P. and Ok, D. 1994. H-learning: A Reinforcement Learning Method for Optimizing Undiscounted Average Reward. Technical Report 94-30-1, Dept.
of Computer Science, Oregon State University.

Tadepalli, P. and Ok, D. 1996. Scaling Up Average Reward Reinforcement Learning by Approximating the Domain Models and the Value Function. In Proceedings of the International Machine Learning Conference, Bari, Italy.

Thrun, S. 1994. The Role of Exploration in Learning Control. In Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, Van Nostrand Reinhold.

Watkins, C. J. C. H. and Dayan, P. 1992. Q-learning. Machine Learning, 8, 279-292.

Acknowledgments

We gratefully acknowledge the support of NSF under grant number IRI-9520243. We thank Tom Dietterich, Sridhar Mahadevan, and Toshi Minoura for many interesting discussions on this topic. Thanks to Sridhar Mahadevan and the reviewers of this paper for their thorough and helpful comments.
Evolution-based Discovery of Hierarchical Behaviors

Justinian P. Rosca and Dana H. Ballard
Computer Science Department, University of Rochester, Rochester, NY 14627
Email: {rosca,dana}@cs.rochester.edu

Abstract

Procedural representations of control policies have two advantages when facing the scale-up problem in learning tasks. First, they are implicit, with potential for inductive generalization over a very large set of situations. Second, they facilitate modularization. In this paper we compare several randomized algorithms for learning modular procedural representations. The main algorithm, called Adaptive Representation through Learning (ARL), is a genetic programming extension that relies on the discovery of subroutines. ARL is suitable for learning hierarchies of subroutines and for constructing policies for complex tasks. ARL was successfully tested on a typical reinforcement learning problem of controlling an agent in a dynamic and nondeterministic environment, where the discovered subroutines correspond to agent behaviors.

Introduction

The interaction of a learning system with a complex environment represents an opportunity to discover features and invariant properties of the problem that enable it to tune its representations and optimize its behavior. This discovery of modularity while learning or solving a problem can considerably speed up the task, as the time needed for the system to "evolve" based on its modular subsystems is much shorter than if the system evolves from its elementary parts (Simon 1973). Thus machine learning, or machine discovery, approaches that attempt to cope with non-trivial problems should provide some hierarchical mechanisms for creating and exploiting such modularity. An approach that incorporates modularization mechanisms is genetic programming (GP) with automatically defined functions (ADF) (Koza 1994). In ADF-GP, computer programs are modularized through the explicit use of subroutines.
One shortcoming of this approach is the need to design an appropriate architecture for programs, i.e. to set in advance the number of subroutines and arguments, as well as the nature of references among subroutines. A biologically inspired approach to architecture discovery introduced in (Koza 1995) is based on new operations for duplicating parts of the genome. Code duplication transformations seem to work well in combination with crossover, as duplication protects code against the destructive effects of crossover. Duplication operations are performed such that they preserve the semantics of the resulting programs. This paper presents an alternative, heuristic solution to the problem of architecture discovery and modularization, called Adaptive Representation through Learning (ARL). ARL adopts a search perspective on the genetic programming process. It searches for good solutions (representations) and simultaneously adapts the architecture (representation system). In GP, the representation system or vocabulary is given by the primitive terminals and functions. By adapting the vocabulary through subroutine discovery, ARL biases the search process in the language determined by the problem primitives. The paper outline is as follows. The next section describes the task used throughout the paper: controlling an agent in a dynamic and nondeterministic environment, specifically the Pac-Man game. Section 3 estimates a measure of the complexity of the task by evaluating the performance of randomly generated procedural solutions, their iterative improvement through an annealing technique, and hand-coded solutions. Section 4 presents details of the ARL approach. Its performance and a comparison of results with other GP approaches are described in the following two sections. This work is placed into a broader perspective in the related work section, before concluding remarks.
The Application Task

We consider the problem of controlling an agent in a dynamic environment, similar to the well known Pac-Man game described in more detail in (Koza 1992). An agent, called Pac-Man, can be controlled to act in a maze of corridors. Up to four monsters chase Pac-Man most of the time. Food pellets, energizers, and fruit objects result in rewards of 10, 50, and 2000 points respectively when reached by Pac-Man. After each capture of an energizer (also called a "pill"), Pac-Man can chase monsters in its turn, for a limited time (while the monsters are "blue"), for rewards of 500 points for capturing the first monster, 1000 points for the next, etc. The environment is nondeterministic, as monsters and fruits move randomly 20% of the time. The problem is to learn a controller that drives the Pac-Man agent to acquire as many points as possible and to survive the monsters. The agent's movements rely on the current sensor readings, possibly past sensor readings, and internal state or memory. Pac-Man knows when monsters are blue and has smell-like perceptions to sense the distance to the closest food pellet, pill, fruit, and closest or second closest monster. Overt action primitives move the agent along the maze corridors towards or away from the nearest object of a given type.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Representation Approaches

The Pac-Man problem is a typical reinforcement learning (RL) task. In response to actions taken, the agent receives rewards. Often rewards are delayed. The task in reinforcement learning is to learn a policy maximizing the expected future rewards. Formally, an agent policy is a mapping

p : P × S → A

where P is the set of perceptions, A the set of actions, and S the agent state space. When searching for a policy, the size of the hypothesis space is the number of such mappings, i.e. ||A||^(||P||·||S||).
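The size computation above (|A| raised to the power |P|·|S|) can be made concrete with a toy calculation; the set sizes below are made up for illustration.

```python
import math

def num_policies(n_actions, n_perceptions, n_states):
    # Number of mappings P x S -> A: |A| ** (|P| * |S|).
    return n_actions ** (n_perceptions * n_states)

def log_hypothesis_space(n_actions, n_perceptions, n_states):
    # PAC sample complexity scales with the log of the hypothesis space,
    # i.e. with |P| * |S| * log|A|.
    return n_perceptions * n_states * math.log(n_actions)
```

Even tiny instances explode: with 2 actions, 2 perceptions, and 2 states there are already 2^4 = 16 distinct policies, and the exponent grows as the product |P|·|S|.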
The number of required examples for PAC-learnability is proportional to the logarithm of the hypothesis space size. This outlines two major problems. First, explicitly representing the state space is undesirable from a learnability perspective. Second, the large number of perceptions given by the various distance values is critical. In contrast to explicit representations, implicit representations such as programs have the potential for better generalization from a smaller number of training examples. This makes GP a candidate approach for learning policies.¹ Besides generalization, an implicit representation of the agent policy would improve on two important aspects: compressibility and modularity. Compressibility means that a solution is representable in a concise form. Also, small solutions may have increased generality, according to Ockham's razor principle. Representation modularity is important from a scale-up perspective. Ideally, a modular representation organizes the knowledge and competences of the agent such that local changes, improvements, or tuning do not affect the functioning of most other components. Researchers in "behavior-based" artificial intelligence (Maes 1993) talk about integrated competences or behaviors as given decompositions of the problem. We are interested in discovering decompositions that naturally emerge from the agent-environment interaction.

¹For the same reason, parameterized function approximators have been used to replace table lookup in reinforcement learning.

A general implicit representation that can be made modular is a procedural representation. Candidate solutions are programs. Modules naturally correspond to functions or subroutines.

Pac-Man Procedural Representations

Programs that define agent controllers will be built from perception, action, and program control primitives.
The agent perception primitives return the Manhattan distance to the closest food pellet, pill, fruit, and the closest or second closest monster (SENSE-DIS-FOOD, etc.). The "sense blue" perception is combined with an if-then-else control primitive into the if-blue (IFB) lazy evaluation function, which executes its first argument if the monsters are blue and otherwise executes its second argument. The action primitives advance/retreat the agent with respect to the closest object of a given type and return the distance² between the agent and the corresponding object (ACT-A-PILL/ACT-R-PILL, etc.). The above primitives were combined to form programs in two ways. In the first alternative (A) we used the if-less-than-or-equal (IFLTE) lazy function, which compares its first argument to its second argument. For a "less-than" result the third argument is executed; for a "greater-or-equal" result the fourth argument is executed. Representation A can develop an intricate form of state due to the side effects of actions that appear in the condition part of the IFLTE function. In the second alternative (B) the primitives are typed. We used the if-then-else (IFTE) lazy evaluation function, which takes a BOOL type as its first argument and two more ACT type arguments. All actions and control operators have type ACT, all logical expressions have type BOOL, and distances have type DIST. Relational operators (<, =, ≥, ≠) and logical operators (AND, OR, NOT) are used to generate complex logical expressions. The programs that can be written in representation B are equivalent to decision trees, which makes them easy to understand and analyze. While playing the game, the learner remembers mistakes that led the game to a premature end (the agent eaten by a monster). Equally useful is to improve the learner's control skills acquired in previous runs.
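A representation-B program, being a typed decision tree over these primitives, can be interpreted with a small recursive evaluator. This is a hedged sketch: the `world` dictionary of sensor/actuator callbacks is an illustrative stand-in for the game simulator, not the authors' system.

```python
def evaluate(node, world):
    if isinstance(node, (int, float)):          # a DIST constant
        return node
    if isinstance(node, str):                   # a sensor or action primitive
        return world[node]()
    op, *args = node
    if op == 'IFTE':                            # lazy: evaluate only one branch
        taken = args[1] if evaluate(args[0], world) else args[2]
        return evaluate(taken, world)
    if op == '<':
        return evaluate(args[0], world) < evaluate(args[1], world)
    raise ValueError('unknown primitive: %s' % op)

# Example tree: advance toward the fruit when it is closer than 19 cells,
# otherwise retreat from the closest pill.
program = ['IFTE', ['<', 'SENSE-DIS-FRUIT', 19], 'ACT-A-FRUIT', 'ACT-R-PILL']
```

The laziness of IFTE matters: only the chosen branch's action primitive runs, so evaluating a program produces exactly one overt move per decision.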
Similarly, a machine learning technique aims to generate better and better programs that control the agent by borrowing fragments from good previously learned programs. To facilitate modular learning, we defined the modular alternatives of representations A and B, called M-A and M-B. In these cases programs were made up of three fragments: two subroutines of two arguments each and a main program. Each fragment had access to the entire problem vocabulary. In addition, the second subroutine could also invoke the first one, and the main program could invoke either subroutine.

²If the shortest path/closest monster/food is not uniquely defined, then a random choice from the valid ones is returned by the corresponding function.

Figure 1: Distribution of fitness values over 50000 random programs generated using the ramped-half-and-half method from (Koza 1992). Each fitness class covers an interval of 100 game points. The fitness of each program is the average number of game points on three independent runs.

Programs are simulated and assigned a fitness equal to the number of points acquired by Pac-Man on average in several simulated games, which are the fitness cases. Solution generality was estimated by testing the learned programs on new cases. Next we explore several methods for designing programs to represent procedural agent policies.

First Solutions

Random Search

The parse trees of random programs are created recursively in a top-down manner. First a function is chosen as the label of the program root node, and then for each formal argument of the function new subprograms are generated recursively. (Koza 1992) describes a method called ramped-half-and-half for generating very diverse random tree structures. No particular structure is favored, due to both randomly choosing node labels and randomly varying the tree depth and balance.
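The generation scheme just described can be sketched as follows. This is a simplified illustration: the FUNCTIONS/TERMINALS vocabulary is a stand-in for the Pac-Man primitives, and the grow-method stopping probability is an assumed parameter.

```python
import random

FUNCTIONS = {'IFB': 2, 'IFLTE': 4}                 # name -> arity
TERMINALS = ['ACT-A-FOOD', 'ACT-A-PILL', 'ACT-R-PILL', 'SENSE-DIS-FRUIT']

def gen_tree(depth, full):
    # 'full' places functions at every node down to the target depth;
    # 'grow' may stop early, yielding trees of varying shape.
    if depth == 0 or (not full and random.random() < 0.3):
        return random.choice(TERMINALS)
    name = random.choice(list(FUNCTIONS))
    return [name] + [gen_tree(depth - 1, full) for _ in range(FUNCTIONS[name])]

def ramped_half_and_half(pop_size, min_depth=2, max_depth=8):
    pop = []
    for i in range(pop_size):
        depth = min_depth + i % (max_depth - min_depth + 1)   # ramp the depth
        pop.append(gen_tree(depth, full=(i % 2 == 0)))        # alternate methods
    return pop
```

Alternating the full and grow methods while ramping the target depth is what yields the diverse mix of balanced and irregular trees the text describes.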
In order to test the Pac-Man representations, we generated both simple and modular random programs. Figure 1 shows the distribution of fitness values obtained in the four alternative representations. The best programs were obtained with problem representation B, followed by the modular versions M-A and M-B. The worst alternative was A.

Simulated Annealing

A simple technique for iterative improvement is simulated annealing (SA) (Kirkpatrick, Gelatt, & Vecchi 1983). Although SA performs well in continuous spaces, it has also been applied to combinatorial optimization problems in search for optima of functions of discrete variables. For example, a similar search technique, GSAT (Selman & Kautz 1993), offers the best known performance for hard satisfiability problems. The space of programs is also non-continuous. SA has been previously tested on program discovery problems (O'Reilly 1995). The SA implementation for program search by O'Reilly used a mutation operation that modifies subtrees through an insertion, deletion, or substitution sub-operation, trying to distort the subtree only slightly. In contrast, we define a primitive mutation of a program p that replaces a randomly chosen subtree of p with a new randomly generated subtree. The SA-based algorithm for iterative improvement of programs will be called PSA from now on. The cooling schedule is defined by the following parameters: the initial temperature T0, the final temperature Tf, the length L of the Markov chain at a fixed temperature, and the number of iterations G (Laarhoven 1988). PSA used a simple rule to set these parameters: "accept the replacement of a parent with a 100-point worse successor with a probability of 0.125 at the initial temperature and a probability of 0.001 at the final temperature."³ No attempts have been made to optimize these parameters beyond these initial choices.

Hand-Coding

We carefully designed modular programs for both representations A and B.
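Returning to the PSA cooling schedule above: assuming the standard Metropolis acceptance rule exp(−Δ/T) = p, each quoted acceptance probability pins down a temperature via T = Δ / ln(1/p), which with Δ = 100 reproduces the T0 = 48 and Tf = 14 reported in footnote 3.

```python
import math

def temperature_for(delta, accept_prob):
    # Solve exp(-delta / T) = accept_prob for T.
    return delta / math.log(1.0 / accept_prob)

t0 = temperature_for(100, 0.125)   # initial temperature, ~48
tf = temperature_for(100, 0.001)   # final temperature, ~14
```

The Metropolis form of the acceptance rule is our assumption; it is the usual choice for SA, and the fact that both derived temperatures round to the paper's values supports it.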
This was not as easy as it might appear. The best programs were found after a couple of hours of code twiddling. Contrary to intuition, simpler programs proved to be better than the most complex programs we designed. The performance results of these first attempts to learn or design a Pac-Man controller are summarized in Table 1. The best result was obtained with PSA and representation M-A.

Table 1: Performance, in average number of game points, of best evolved programs and carefully hand-coded programs. The maximum depth of randomly generated programs was 8. PSA was seeded with the best solution from 500 randomly generated programs. Training was done on three cases and testing on 100 cases.

Representation    A       M-A     B       M-B
Random            4110.0  3420.0  4916.7  4916.7
PSA               5436.7  7646.7  5790    5663.3
Hand-coding       -       7460    -       5910

³We obtained T0 = 48, Tf = 14. We also chose L = 100 and G = 25000. The value of G is justified by the desire to make the overall computational effort (the total number of programs evaluated) similar for PSA and the GP techniques to be described next. These parameters determine an exponential cooling parameter of e^{ln(Tf/T0)/G}.

Architecture Discovery in ADF

In the standard genetic programming algorithm (SGP), an initial population of randomly generated programs is transformed through crossover, occasional mutation, and fitness-proportionate reproduction operations. SGP relies on a hypothesis analogous to the genetic algorithm (GA) building block hypothesis (Holland 1992), which states that a GA achieves its search capabilities by means of "building block" processing. Building blocks are relevant pieces of partial solutions that can be assembled together in order to generate better partial solutions. Our modular representations are modeled after the automatically defined functions (ADF) approach (Koza 1994).
ADF is an extension of GP where individuals are represented by a fixed number of components or branches to be evolved: a predefined number of function branches and a main program branch. Each function branch (consider for instance three such branches called ADF0, ADF1, and ADF2) has a predefined number of arguments. The main program branch (Program-Body) produces the result. Each branch is a piece of LISP code built out of a specific vocabulary and is subject to genetic operations. The set of function-defining branches, the number of arguments that each function possesses, and the vocabulary of each branch define the architecture of a program. The architecture imposes the possible hierarchical references between branches. For instance, if we order the branches in the sequence ADF0, ADF1, ADF2, Program-Body, then a branch may invoke any component to its left. (Rosca 1995) analyzed how this preimposed hierarchical ordering biases the way ADF searches the space of programs. In the "bottom-up evolution hypothesis" he conjectured that ADF representations become stable in a bottom-up fashion. Early in the process, changes are focused towards the evolution of low-level functions. Later, changes are focused towards higher levels in the hierarchy of functions (see also (Rosca & Ballard 1995)). ARL considers a bottom-up approach to subroutine discovery as the default.

The ARL Algorithm

The nature of GP is that programs containing useful code tend to have a higher fitness, and thus their offspring tend to dominate the population. ARL uses heuristics which anticipate this trend to focus search. It searches for good individuals (representations) while adapting the architecture (representation system) through subroutine invention to facilitate the creation of better representations. These two activities are performed on two distinct tiers (see Figure 2). GP search acts at the bottom tier. The fitness-proportionate selection mechanism of GP favors more fit program structures, which pass their substructures to offspring. At the top tier, the subroutine discovery algorithm selects, generalizes, and preserves good substructures. Discovered subroutines are reflected back into programs in the memory (current population) and thus adapt the architecture of the population of programs.

Figure 2: Two-tier architecture of the ARL algorithm.

Discovery of Subroutines

The vocabulary of ARL at generation t is given by the union of the terminal set T, the function set F, and the set of evolved subroutines St (initially empty). T ∪ F represents the set of primitives, which is fixed throughout the evolutionary process. In contrast, St is a set of subroutines whose composition may vary from one generation to the next. St extends the representation vocabulary in an adaptive manner. New subroutines are discovered and the "least useful" ones die out. St is used as a pool of additional problem primitives, besides T and F, for randomly generating some individuals in the next generation, t + 1. The subroutine discovery tier of the ARL architecture attempts to automatically discover useful subroutines and adapt the set St. New subroutines are created using blocks of genetic material from the population pool. The major issue is the detection of "useful" blocks of code. The notion of usefulness is defined by two concepts, differential fitness and block activation, which are defined next. The subroutine discovery algorithm is presented in Figure 3.

Differential Fitness

The concept of differential fitness is a heuristic which focuses block selection on programs that have the biggest fitness improvement over their least fit parent. Large differences in fitness are presumably created by useful combinations of pieces of code appearing in the structure of an individual.
This is exactly what the algorithm should discover. Let i be a program from the current population having raw fitness F(i). Its differential fitness is defined as:

DiffFitness(i) = F(i) − min_{p ∈ Parents(i)} {F(p)}    (1)

Blocks are selected from those programs satisfying the property:

max_i {DiffFitness(i)} > 0    (2)

Figure 4 shows the histogram of the differential fitness defined above for a run of ARL on the Pac-Man problem. Each slice of the plot for a fixed generation represents the number of individuals (in a population of size 500) vs. differential fitness values. The figure shows that only a small number of individuals improve on the fitness of their parents. ARL will focus on such individuals in order to discover salient blocks of code.

Subroutine-Discovery(Pt, S^new, Pt^dup)
1. Initialize the set of new subroutines S^new = ∅. Initialize the set of duplicate individuals Pt^dup = ∅.
2. Select a subset of promising individuals: B, the individuals with the largest positive differential fitness.
3. For each node of program i ∈ B, determine the number of activations in the evaluation of i on all fitness cases.
4. Create a set of candidate building blocks BB by selecting all blocks of small height, high activation, and with no inactive nodes.
5. For each block in the candidate set, b ∈ BB, repeat:
   (a) Let b ∈ i. Generalize the code of block b:
       i. Determine the terminal subset Tb used in the block.
       ii. Create a new subroutine s having as parameters a random subset of Tb and as body the block.
   (b) Create a new program p^dup by making a copy of i with block b replaced by an invocation of the new subroutine s. The actual parameters of the call to s are given by the replaced terminals.
   (c) Update S^new and Pt^dup:
       i. S^new = S^new ∪ {s}
       ii. Pt^dup = Pt^dup ∪ {p^dup}
6. Return S^new, Pt^dup.

Figure 3: ARL extension to GP: the subroutine discovery algorithm for adapting the problem representation. S^new is the set of additions to St.
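The selection step built on Equations (1) and (2) can be sketched as follows; the dictionary-based bookkeeping and function names are illustrative, not the authors' implementation.

```python
def diff_fitness(i, fitness, parents):
    # Equation (1): improvement over the least-fit parent.
    return fitness[i] - min(fitness[p] for p in parents[i])

def promising(population, fitness, parents):
    # Keep only programs that actually improved on a parent, best first.
    scored = [(diff_fitness(i, fitness, parents), i)
              for i in population if parents.get(i)]
    return [i for d, i in sorted(scored, reverse=True) if d > 0]
```

As Figure 4 indicates, only a small fraction of each generation passes the positivity test, which is what keeps block extraction focused on a handful of salient individuals.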
Duplicate individuals Pt^dup are added to the population before the next GP selection step.

Block Activation

During repeated program evaluation, some blocks of code are executed more often than others. The more active blocks become "candidate" blocks. Block activation is defined as the number of times the root node of the block is executed. Salient blocks are active blocks of code from individuals with the highest differential fitness. In addition, we require that all nodes of the block be activated at least once, or a minimum percentage of the total number of activations of the root node.⁴

Generalization of Blocks

The final step is to formalize the active block as a new subroutine and add it to the set St. Blocks are generalized by replacing some random subset of terminals in the block with variables (see Step 5a in Figure 3). The variables become formal arguments of the created subroutine.⁵

⁴This condition is imposed in order to eliminate from consideration blocks containing introns and hitch-hiking phenomena (Tackett 1994). It is represented by the pruning step (4) in Figure 3.

⁵In the typed implementation, block generalization additionally assigns a signature to each subroutine created. The subroutine signature is defined by the type of the function that labels the root of the block and the types of the terminals selected to be substituted by variables.

Figure 4: Differential fitness distributions over a run of ARL with representation A on the Pac-Man problem. At each generation, only a small fraction of the population has DiffFitness > 0.

Subroutine Utility

ARL expands the set of subroutines St whenever it discovers new subroutine candidates. All subroutines in St are assigned utility values which are updated every generation. A subroutine's utility is estimated by observing the outcome of using it. This is done by accumulating, as reward, the average fitness values of all programs that have invoked s, directly or indirectly, over a fixed time window of W generations. The set of subroutines co-evolves with the main population of solutions through creation and deletion operations. New subroutines are automatically created based on active blocks as described before. Low-utility subroutines are deleted in order to keep the total number of subroutines below a given limit.⁶ ARL inherited the specific GP parameters.⁷ In addition, ARL used our implementation of typed GP for runs with representation B. It was run only for a maximum of 50 generations. The ARL-specific parameters are the time window for updating subroutine utilities (10) and the maximum number of subroutines (20). Next we describe a typical trace of ARL on a run with representation B, which obtained the best overall results, and present statistical results and comparisons among SGP, ADF, PSA, and random generation of programs.

⁶In order to preserve the functionality of those programs invoking a deleted subroutine, calls to the deleted subroutine are substituted with the actual body of the subroutine.

⁷For SGP and ADF: population size = 500, number of fitness cases = 3, crossover rate = 90% (20% on leaves), reproduction rate = 10%, number of generations = 100.

Generation 3.
• S1663 (ACT 1 (BOOL)) (LAMBDA (A0) (IFTE (NOT-EQUAL (SENSE-DIS-PILL) A0) (ACT-A-FOOD) (ACT-A-PILL)))
Generation 10.
• S1749 (ACT 0) (LAMBDA () (IFTE (< (SENSE-DIS-FRUIT) 50) (ACT-A-FRUIT) (ACT-A-MON-1)))
Generation 17.
• S1765 (ACT 1 (DIS)) (LAMBDA (A0) (IFTE (< (SENSE-DIS-FRUIT) A0) (ACT-A-FRUIT) (ACT-R-PILL)))
• S1766 (BOOL 1 (DIS)) (LAMBDA (A0) (< (SENSE-DIS-FRUIT) A0))
Generation 30.
• S1997 (ACT 0) (LAMBDA () (IFTE (< (SENSE-DIS-FOOD) (SENSE-DIS-PILL)) (ACT-A-FOOD) (S1765 19)))
Generation 33.
Best-of-run individual:
o (IFB (IFTE (S1766 21) (ACT-A-FRUIT) (IFB (S1997) (S1663 21))) (IFB (IFTE (S1766 22) (ACT-A-FRUIT) (ACT-A-FOOD)) (IFTE (S1766 22) (ACT-A-PILL) (IFTE (< (SENSE-DIS-FOOD) (SENSE-DIS-PILL)) (ACT-A-FOOD) (S1749)))))

Programs evolved by ARL are modular. ARL usually evolves tens of subroutines in one run, only twenty of which are preserved in St at any given time t. Subroutines have small sizes due to the explicit bias towards small blocks of code. The hierarchy of evolved subroutines allows a program to grow in effective size (i.e. in expanded structural complexity, see (Rosca & Ballard 1994)) if this offers an evolutionary advantage. For instance, the best-of-generation program evolved by ARL in one run with problem representation B is extremely modular. ARL discovered 86 subroutines during the 50 generations while it ran. Only 5 subroutines were invoked by the best-of-run program, which was discovered in generation 33. These useful subroutines form a three-layer hierarchy on top of the primitive functions. Each of the five subroutines is effective in guiding Pac-Man for certain periods of time. A trace of this run is given in Table 2.

The five subroutines define interesting "behaviors." For example, S1749 is successfully used for attracting monsters. S1765, invoked with the actual parameter value of 19, defines a fruit-chasing behavior for blue periods. The other subroutines are: an applicability predicate for testing if a fruit exists (S1766), a food-hunting behavior (S1997), and a pill-hunting behavior (S1663). The main program decides when to invoke and how to combine the effects of these behaviors, in response to state changes.

Table 2: Evolutionary trace of an ARL run on the Pac-Man problem (representation B). For each discovered subroutine the table shows the signature (type, number of arguments, and type of arguments if any), and the subroutine definition.
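The utility bookkeeping described earlier (accumulating, as reward, the average fitness of all callers over a sliding window of W generations) can be sketched as follows. The class and method names are our own illustration, not the paper's implementation:

```python
from collections import deque

class SubroutineUtility:
    """Sliding-window utility estimate for one subroutine.

    A sketch of the scheme described in the text: each generation we
    record the average fitness of the programs that invoked the
    subroutine (directly or indirectly); utility is the accumulated
    reward over the last W generations."""

    def __init__(self, window_w=10):
        # deque with maxlen drops the oldest generation automatically
        self.window = deque(maxlen=window_w)

    def record_generation(self, caller_fitnesses):
        """caller_fitnesses: fitness values of all programs that
        invoked the subroutine during this generation."""
        if caller_fitnesses:
            self.window.append(sum(caller_fitnesses) / len(caller_fitnesses))
        else:
            self.window.append(0.0)  # subroutine unused this generation

    def utility(self):
        """Accumulated reward over the last W generations."""
        return sum(self.window)
```

A low value of `utility()` over the full window would mark the subroutine as a deletion candidate, with calls to it replaced by its body as footnote 6 describes.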
Comparison of Results
In order to detect differences in performance and qualitative behavior from PSA, the current SGP experiments used crossover as its only genetic operation and a zero mutation rate.

Table 3: Comparison of generalization performance of different Pac-Man implementations: average fitness of evolved solutions over 100 random test cases.

Rep.  Rand  PSA   SGP   ADF   ARL   Hand
A     1503  2940  2906  1569  3611  2424
B     2321  4058  3370  3875  4439  2701

Most importantly, we are interested in the generality of the best solutions obtained with the random, PSA, SGP, ADF and ARL approaches (see Table 3). For the random and PSA cases we took the results of the runs with representations M-A and M-B, which were the best. We tested all solutions on the same 100 random test cases. ARL achieved the best results. Hand solutions were improved from the initial ones reported in Section 3. We also determined the 95% confidence interval for the average number of points of a solution. For example, the ARL solution for representation B has an interval of ±90 points, i.e. the true average is within this interval relative to the average with 95% confidence. From the modularity perspective, ADF modularity is confined by the fixed initial architecture. ARL modularity emerges during a run as subroutines are created or deleted. SGP solutions are not explicitly modular.

Tackett studied, under the name "gene banking," ways in which programs constructed by genetic search can be mined off-line for subexpressions that represent salient problem traits (Tackett 1994). He hypothesized that traits which display the same fitness and frequency characteristics are salient. Unfortunately, many subexpressions are in a hierarchical "part-of" relation. Thus it may be hard to distinguish "hitchhikers" from true salient expressions. Heuristic reasoning was used to interpret the results, so that the method cannot be automated in a direct manner.
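A 95% confidence interval of the kind reported above can be computed with the usual normal approximation of the mean of the test-case scores. This is a minimal sketch; the paper does not state which interval formula was actually used:

```python
import math

def mean_ci95(scores):
    """Mean score over random test cases and the half-width of its
    95% confidence interval, via the normal approximation
    mean +/- 1.96 * s / sqrt(n)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)                      # CI half-width
    return mean, half
```

With 100 test cases, an average of 4439 points and a half-width of about 90 would be reported as 4439 ± 90 points, as in the text.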
In contrast, in ARL salient blocks have to be detected efficiently, on-line. This is possible because candidate blocks are only searched for among the blocks of small height present in individuals with the highest differential fitness.

Functional programming languages, such as LISP, treat code and data equivalently. ARL takes advantage of this feature to analyze the behavior of the code it constructs and to decide when and how subroutines can be created. More generally, pure functional languages such as ML treat functions and values according to a formal set of rules. As a consequence, the process of formal reasoning applied to program control structures can be automated. One recent example of such an attempt is ADATE (Olsson 1995). ADATE iteratively transforms programs in a top-down manner, searching the space of programs written in a subset of ML for a program that explains a set of initial training cases. ADATE creates new predicates by abstraction transformations. Algorithms that use predicate invention are called constructive induction algorithms. Predicate invention is also a fundamental operation in inductive logic programming, where it helps to reduce the structural complexity of induced structures through reuse. More importantly, invented predicates may generalize over the search space, thus compensating for missing background knowledge. A difficult problem is evaluating the quality of new predicates (Stahl 1993).

The predicate invention problem is related to the more general problem of bias in machine learning. In GP, the use of subroutines biases the search for good programs besides offering the possibility to reuse code. An adaptive learning system selects its bias automatically. An overview of current efforts in this active research area appeared recently in (Gordon & DesJardins 1995).
Conclusions
Although Pac-Man is a typical reinforcement learning task, it was successfully approached using GP. GP worked well for the task because it used an implicit representation of the agent state space. Therefore, GP solutions acquire generality and can be modularized. This last feature was particularly exploited in ARL. Programs, as the structures on which GP operates, are symbolically expressed as compositions of functions. By applying the differential fitness and block activation heuristics, the ARL algorithm manages to discover subroutines and evolve the architecture of solutions, which increases its chances of creating better solutions. Evolved subroutines form a hierarchy of "behaviors."

On average, ARL programs perform better than the ones obtained using other techniques. A comparison among solutions obtained using the PSA, GP, ADF and ARL algorithms with respect to search efficiency and generalization is ongoing on the Pac-Man domain as well as other problems. Additionally, a comparison with memoryless and state-maintaining reinforcement learning algorithms is also worth further investigation. ARL can be studied as an example of a system where procedural bias interacts with representational bias (Gordon & DesJardins 1995). This may shed additional light on how GP exploits structures and constructs solutions.

References
Gordon, D. F., and DesJardins, M. 1995. Evaluation and selection of biases in machine learning. Machine Learning 20:5-22.
Holland, J. H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence. Cambridge, MA: MIT Press. Second edition (first edition, 1975).
Kirkpatrick, S.; Gelatt, C.; and Vecchi, M. 1983. Optimization by simulated annealing. Science 220:671-680.
Koza, J. R. 1992. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press.
Koza, J. R. 1994. Genetic Programming II. MIT Press.
Koza, J. R. 1995. Gene duplication to enable genetic programming to concurrently evolve both the architecture and work-performing steps of a computer program. In Mellish, C. S., ed., IJCAI, volume 1, 734-740. Morgan Kaufmann.
van Laarhoven, P. J. M. 1988. Theoretical and Computational Aspects of Simulated Annealing. Netherlands: Centrum voor Wiskunde en Informatica.
Maes, P. 1993. Behavior-based artificial intelligence. In SAB-2. MIT Press.
Olsson, R. 1995. Inductive functional programming using incremental program transformation. Artificial Intelligence 74:55-81.
O'Reilly, U.-M. 1995. An Analysis of Genetic Programming. Ph.D. Dissertation, Ottawa-Carleton Institute for Computer Science.
Rosca, J. P., and Ballard, D. H. 1994. Hierarchical self-organization in genetic programming. In 11th ICML, 251-258. Morgan Kaufmann.
Rosca, J. P., and Ballard, D. H. 1995. Causality in genetic programming. In Eshelman, L., ed., ICGA95, 256-263. San Francisco, CA, USA: Morgan Kaufmann.
Rosca, J. P. 1995. Genetic programming exploratory power and the discovery of functions. In McDonnell, J. R.; Reynolds, R. G.; and Fogel, D. B., eds., Evolutionary Programming IV: Proceedings of the Fourth Annual Conference on Evolutionary Programming, 719-736. San Diego, CA, USA: MIT Press.
Russell, S. J., and Norvig, P. 1995. Artificial Intelligence: A Modern Approach. Englewood Cliffs, New Jersey: Prentice Hall.
Selman, B., and Kautz, H. A. 1993. An empirical study of greedy local search for satisfiability testing. In AAAI, 46-51. AAAI Press/The MIT Press.
Simon, H. A. 1973. The organization of complex systems. In Pattee, H. H., ed., Hierarchy Theory: The Challenge of Complex Systems. New York. 3-27.
Stahl, I. 1993. Predicate invention in ILP - an overview. In Brazdil, P. B., ed., ECML, 313-322. Springer-Verlag.
Tackett, W. A. 1994. Recombination, Selection and the Genetic Construction of Computer Programs. Ph.D. Dissertation, University of Southern California.
Estimating the Absolute Position of a Mobile Robot Using Position Probability Grids

Wolfram Burgard and Dieter Fox and Daniel Hennig and Timo Schmidt
Institut für Informatik III, Universität Bonn
Römerstr. 164, D-53117 Bonn, Germany
{wolfram,fox,hennig,schmidt}@uran.cs.uni-bonn.de

Abstract
In order to re-use existing models of the environment, mobile robots must be able to estimate their position and orientation in such models. Most of the existing methods for position estimation are based on special purpose sensors or aim at tracking the robot's position relative to the known starting point. This paper describes the position probability grid approach to estimating the robot's absolute position and orientation in a metric model of the environment. Our method is designed to work with standard sensors and is independent of any knowledge about the starting point. It is a Bayesian approach based on certainty grids. In each cell of such a grid we store the probability that this cell refers to the current position of the robot. These probabilities are obtained by integrating the likelihoods of sensor readings over time. Results described in this paper show that our technique is able to reliably estimate the position of a robot in complex environments. Our approach has proven to be robust with respect to inaccurate environmental models, noisy sensors, and ambiguous situations.

Introduction
In order to make use of environmental models, mobile robots always must know their current position and orientation¹ in their environment. Therefore, the ability of estimating their position is one of the basic preconditions for the autonomy of mobile robots. The methods for position estimation can be roughly divided into two classes: relative and absolute position estimation techniques (Feng, Borenstein, & Everett 1994). Members of the first class track the robot's relative position according to a known starting point.
The problem solved by these methods is the correction of accumulated dead-reckoning errors coming from the inherent inaccuracy of the wheel encoders and other factors such as slipping. Absolute position estimation techniques attempt to determine the robot's position without a priori information about the starting position. These approaches of the second class can be used to initialize the tracking techniques belonging to the first class.

¹In the remainder of this paper we use the notion "position" to refer to "position and orientation" if not stated otherwise.

896 Mobile Robots

This paper addresses the problem of estimating the absolute position of a mobile robot operating in a known environment. There are two reasons why we consider this problem as relevant:

1. Whenever the robot is switched on, it should be able to re-use its model of the environment. For this purpose, it first has to localize itself in this model.
2. If the position tracking has failed, i.e. the robot has lost its position in the environment, it should be able to perform a repositioning.

To avoid modifications of the environment and expensive special purpose sensors, we are interested in map-matching techniques, which match measurements of standard sensors against the given model of the environment. We have the following requirements for such a method:

The method should be able to deal with uncertain information. This is important because sensors are generally imperfect. This concerns wheel encoders as well as proximity sensors such as ultrasonic sensors or laser range-finders. Moreover, models of the environment are generally inaccurate. Possible reasons for deviations of the map from the real world come from imperfect sensors, measuring errors, simplifications, open or closed doors, or even moving objects such as humans or other mobile robots.

The method should be able to deal with ambiguities.
Typical office environments contain several places which cannot be distinguished with a single measurement. As an example, consider a long corridor, where changes of the position due to the limited range of the sensors do not necessarily result in changes of the measured values. Thus, the set of possible positions of the robot is a region in that corridor.

The method should allow the integration of sensor readings from different types of sensors over time. Sensor fusion improves reliability, while the integration over time compensates noise and is necessary to resolve ambiguities.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Position probability grids simultaneously address all these desiderata. They allow a mobile robot to determine its position in typical office environments within a short time. Moreover, our method is able to deal with uncertain sensor information and ambiguous situations.

The approach described in this paper is based on the construction of certainty grid maps described in (Moravec & Elfes 1985). Certainty grid maps have been proven to be a powerful means for the solution of different problems. Originally, they were designed to provide a probabilistic model of the robot's environment. In the past such occupancy probability maps or variants of them have been successfully used for collision avoidance (Borenstein & Koren 1990; 1991) and path planning (Buhmann et al. 1995; Moravec 1988). This paper issues a further application area of this technique, namely the estimation of the absolute position of a robot. The principle of our approach is to accumulate in each cell of the position probability grid the posterior probability of this cell referring to the current position of the robot. Because we have to consider a discrete set of possible orientations in addition to the discretization of the two-dimensional environment, position estimation is a three-dimensional problem.
This extension, however, does not result in any principal problems, because the certainty grid concept can easily be extended to problems with higher dimensionality (Moravec & Martin 1994).

Related work
Various techniques for the estimation of the position of mobile vehicles by matching sensor readings against a given model of the environment have been developed in the past (Cox & Wilfong 1990; Feng, Borenstein, & Everett 1994). Most of them address the problem of tracking the current position and orientation of the robot given its initial configuration. Recently, more and more probabilistic techniques are applied to position estimation problems. These approaches can be distinguished by the type of maps they rely on.

Techniques based on metric or grid-based representations of the environment generally generate unimodal or Gaussian distributions representing the estimation of the robot's position. (Weiß, Wetzler, & von Puttkamer 1994) store angle histograms constructed out of range-finder scans taken at different locations of the environment. The position and orientation of the robot is calculated by maximizing the correlation between histograms of new measurements and the stored histograms. (Schiele & Crowley 1994) compare different strategies to track the robot's position based on occupancy grid maps. They use two different maps: a local grid computed using the most recent sensor readings, and a global map built during a previous exploration of the environment or by an appropriate CAD-tool. The local map is matched against the global map to produce a position and orientation estimate. This estimate is combined with the previous estimate using a Kalman filter (Maybeck 1990), where the uncertainty is represented by the width of the Gaussian distribution.
Compared to the approach of Weiß et al., this technique allows an integration of different measurements over time rather than taking the optimum match of the most recent sensing as a guess for the current position.

Other researchers developed positioning techniques based on topological maps. (Nourbakhsh, Powers, & Birchfield 1995) apply Markov models to determine the node of the topological map which contains the current position of the robot. Different nodes of the topological map are distinguished by walls, doors or hallway openings. Such items are detected using ultrasonic sensors, and the position of the robot is determined by a "state-set progression technique", where each state represents a node in the topological map. This technique is augmented by certainty factors which are computed out of the likelihoods that the items mentioned above will be detected by the ultrasonic sensors. (Simmons & Koenig 1995) describe a similar approach to position estimation. They additionally utilize metric information coming from the wheel encoders to compute state transition probabilities. This metric information puts additional constraints on the robot's location and results in more reliable position estimates. (Kortenkamp & Weymouth 1994) combine information obtained from sonar sensors and cameras using a Bayesian network to detect gateways between nodes of the topological map. The integration of sonar and vision information results in a much better place recognition, which reduces the number of necessary robot movements respectively transitions between different nodes of the topological map.

Due to the separation of the environment into different nodes, the methods based on topological maps, in contrast to the methods based on metric maps described above, are able to deal with ambiguous situations. Such ambiguities are represented by different nodes having high position probabilities.
However, the techniques based on topological maps provide limited accuracy because of the low granularity of the discretization. This restricted precision is disadvantageous if the robot has to navigate fast through its environment or even grasp for objects.

The position probability grid method described here allows to estimate the robot's position up to a few centimeters. This is achieved by approximating a position probability function over a discrete metric space defining possible positions in the environment. It therefore can be used to provide an initial estimate for the tracking techniques. But even the methods based on topological maps could be augmented by our approach. If the nodes of the topological map additionally contain metric information, our approach could be used to position the robot within a node.

Building position probability grids
The certainty grid approach was originally designed by Elfes and Moravec as a probabilistic grid model for the representation of obstacles. The basic idea is to accumulate in each cell of a rectangular grid field the probability that this cell is occupied by an obstacle. Whereas Moravec and Elfes construct a model of the environment given the position of the robot and sensor readings, we go the opposite direction, estimating the position given the environmental model and the sensor readings. For this purpose, we construct a position probability grid P containing in each field the posterior probability that this field includes the current position of the robot. For a grid field x this certainty value is obtained by repeatedly firing the robot's sensors and accumulating in x the likelihoods of the sensed values, supposing the center of x currently is the position of the robot in the environment model m. Each time the robot's sensors are fired, the following two steps are carried out:

1. Update P according to the movement of the robot since the last update.
This includes a processing of P to deal with possible dead-reckoning errors.

2. For each grid field x of P and each reading s, compute the likelihood of s supposing x is the current position of the robot in m, and combine this likelihood with the probability stored in x to obtain a new probability for x.

The basic assumptions for our approach are:

- The robot must have a model m of the world the sensor readings can be matched against. Such models can either come from CAD-drawings of the environment or can themselves be grid representations of occupancy probabilities.
- The robot does not leave the environmental model. This assumption allows us to use the same size for the position probability grid P as for the environmental model m, and to set the probability for positions outside of P to 0.

In the remainder of this section we describe how to integrate different sensor readings into position probabilities. Furthermore, we show how to keep track of the robot's movements with explicit consideration of possible dead-reckoning errors.

Integrating multiple sensor readings
In order to give reliable position estimates, we have to integrate the information of consecutive sensor readings. Suppose m is the model of the environment, and p(x | s_1 ∧ ... ∧ s_{n-1} ∧ m) is the (posterior) probability that x refers to the current position of the robot, given m and the sensor readings s_1, ..., s_{n-1}. Then, to update the probability for x given new sensory input s_n, we use the following update formula (Pearl 1988):

p(x | s_1 ∧ ... ∧ s_{n-1} ∧ s_n ∧ m) = α · p(x | s_1 ∧ ... ∧ s_{n-1} ∧ m) · p(s_n | x ∧ m)   (1)

The term p(s_n | x ∧ m) is the likelihood of measuring the sensory input s_n given the world model m and assuming that x refers to the current position of the robot. The constant α simply normalizes the sum of the position probabilities over all x to 1. To initialize the grid we use the a priori probability p(x | m) of x referring to the actual position of the robot given m.
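Update rule (1) amounts to a pointwise multiply-and-normalize over the grid cells. A minimal sketch, with function names of our own choosing:

```python
def update_grid(grid, likelihood, sensor_reading, world_model):
    """One sensory update of a position probability grid, following
    p(x | s_1..s_n, m) = alpha * p(x | s_1..s_{n-1}, m) * p(s_n | x, m).

    grid maps each cell x to its current probability;
    likelihood(s, x, m) returns p(s | x, m) for the world model m."""
    # multiply every cell by the likelihood of the new reading there
    for x in grid:
        grid[x] *= likelihood(sensor_reading, x, world_model)
    # alpha: renormalize so the probabilities sum to 1
    total = sum(grid.values())
    for x in grid:
        grid[x] /= total
    return grid
```

Calling this once per sensor reading implements the integration over time; cells outside the environmental model are simply initialized to probability 0 and stay there.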
The estimation of p(x | m) and p(s_n | x ∧ m) depends on the given world model and the type of the sensors used for position estimation. Below we demonstrate how to use occupancy probability maps for position estimation and how sensor readings of ultrasonic sensors are matched against such maps.

Integrating the movements of the robot
In order to update the grid according to the robot's movements and to deal with possible dead-reckoning errors, we use a general formula coming from the domain of Markov chains. We regard each cell in P as one possible state of the robot, and determine a state transition probability p(x | x̄ ∧ τ ∧ t) for each pair x, x̄ of cells in P, which depends on the trajectory τ taken by the robot and the time t elapsed since the previous update. This transition probability should also model how the trajectory τ fits into the environment. For example, a trajectory leading the robot through free space has a higher probability than a trajectory blocked by an obstacle. Thus, the new probability of a grid field after a movement of the robot is:

P[x] := α · Σ_{x̄ ∈ P} P[x̄] · p(x | x̄ ∧ τ ∧ t)   (2)

where α is a normalizing constant. At any time the field of P with the highest probability represents the best estimation for the current position of the robot. The confidence in this estimation depends on the absolute value of the probability and on the difference to the probabilities in the remaining fields of the grid. Thus, ambiguities are represented by different fields having a similarly high probability.

Position estimation with occupancy probability maps as world model
In this section we describe the application of this approach by matching readings of ultrasonic sensors against occupancy grid maps.

Matching sonar sensor readings against occupancy grids
To compute the likelihood p(s | x ∧ m) that a sensor reading s is received given the position x and an occupancy grid map m, we use a similar approach as described in (Moravec 1988).
We consider a discretization R_1, ..., R_k of possible distances measured by the sensor. Consequently, p(R_i | x ∧ m) is the likelihood that the sonar beam is reflected in R_i. Suppose p(r(z̄) | x ∧ m) is the likelihood that the cell z̄ reflects a sonar beam, given the position x of the robot and the map m. Furthermore, suppose that z̄ belongs to R_i. Assuming the reflection of the sonar beam by z̄ to be conditionally independent of the reflection of the other cells in R_i, the likelihood that R_i reflects a sonar beam is

p(R_i | x ∧ m) = 1 − ∏_{z̄ ∈ R_i} (1 − p(r(z̄) | x ∧ m))   (3)

Before the beam reaches R_i, it traverses R_1, ..., R_{i−1}. Supposing that the sonar reading s is included in range R_i, the likelihood p(s | x ∧ m) equals the likelihood that R_i reflects the sonar beam given that none of the ranges R_{j<i} reflects it. Thus, we have

p(s | x ∧ m) = p(R_i | x ∧ m) · ∏_{j=1}^{i−1} (1 − p(R_j | x ∧ m))   (4)

Computing position estimates using occupancy grids
It remains to estimate the initial probability p(x | m) that the field x of m contains the current position of the robot. We assume that this probability directly depends on the occupancy probability m(x) of the field x in m: the higher the occupancy probability, the lower is the position probability, and vice versa. Therefore, the value p(x | m) is computed as follows:

p(x | m) = (1 − m(x)) / Σ_{z̄ ∈ m} (1 − m(z̄))   (5)

Experiments
In this section we show the results from experiments carried out with our robot RHINO in real-world environments such as typical offices and the AAAI '94 mobile robot competition arena. For the position estimation we match sensor readings coming from ultrasonic sensors against occupancy grid maps.

Implementation aspects
For the sake of efficiency we implemented a simplified model of sonar sensors to compute the likelihood of a reading: instead of considering all cells of the grid covered by the sonar wedge as done in (Moravec 1988), we only consider the cells on the acoustic axis of the sensor.
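The combination of equations (3) and (4) along the acoustic axis can be sketched as follows. This is our own illustrative simplification: each range R_i is collapsed to a single cell on the ray, and the reflection probability p(r(z̄) | x ∧ m) is taken directly from the occupancy map; names and the 15 cm default cell size are assumptions for the example:

```python
def sonar_likelihood(ray_cells, occ, measured_range, cell_size=0.15):
    """p(s | x, m) for a sonar reading, using only the cells on the
    acoustic axis, ordered by distance from the sensor.

    occ(c) returns the probability that cell c reflects the beam."""
    p_unreflected = 1.0  # product term of eq. (4): beam got this far
    for i, c in enumerate(ray_cells):
        p_reflect = occ(c)  # reflection likelihood for this range, eq. (3)
        if (i + 1) * cell_size >= measured_range:
            # eq. (4): reflected in this range, not reflected earlier
            return p_unreflected * p_reflect
        p_unreflected *= (1.0 - p_reflect)
    return 0.0  # reading lies beyond the modelled part of the ray
```

For example, a half-occupied cell at the measured distance behind free space yields a likelihood of 0.5, while the same cell behind an occupied cell yields a likelihood near 0.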
This rough simplification has already been applied successfully in (Borenstein & Koren 1991) to realize a fast collision avoidance technique for mobile robots.

[Figure 1: Outline and occupancy grid map of the office]

To compute the transition probability p(x | x̄ ∧ τ ∧ t) we assume the dead-reckoning errors to be normally distributed. Supposing frequent updates of the position information, we simply approximate the probability of the robot's trajectory τ by p(y | m), where y is the position corresponding to x. Thus we have

p(x | x̄ ∧ τ ∧ t) = φ_{x−x̄,t} · p(y | m)   (6)

where φ_{x−x̄,t} is a Gaussian distribution.

Position estimation in a typical office
To evaluate the capabilities of our approach, we used the task of estimating the position in a typical office of our department. Figure 1 shows an outline of this office, which has a size of 4 × 7 m², and the occupancy grid map used to compute the likelihoods of the sensor readings. For the position estimation we used only 8 of the 24 ultrasonic sensors our robot is equipped with. The size of one grid field is 15 × 15 cm², while we consider 180 possible orientations. For this grid and 8 sensor readings per step, the update of the grid takes about 6 seconds on a Pentium 90 computer. Figure 1 also shows the initial and final position of the path taken by the robot. At the beginning the robot turned to the left and moved between the bookcase and the desk. At the end of the trajectory the robot turned and started to leave the corner. On this trajectory, which is illustrated by the solid line, 12 sweeps of sonar readings were taken for the position estimation. In addition to the real trajectory A, two alternative paths B and C are shown.

Figure 2 shows plots of the maximum probabilities for the first, second, fourth, and twelfth reading sets for each position of the map. For the sake of simplicity, only the maximal probability over all orientations at each position is shown. Note that the z-axes of the four figures have different scales.
The probabilities of the corresponding points belonging to the trajectories A, B, and C are highlighted by vertical lines.

[Figure 2: Position probability distribution after 1, 2, 4, and 12 steps]

After the first reading we obtain a multi-modal distribution with several small local maxima. At the correct position we observe only a small peak, which is dominated by the starting position of trajectory B. After interpreting the second set of readings the probabilities become more concentrated. We observe four small peaks which now have their maximum in position 2 of trajectory C. The third and fourth reading sets support the initial position, so that the position on trajectory A gets the maximum probability. There are two peaks, where the smaller one is a super-imposition of two different peaks for the trajectories B and C. After evaluating 12 sonar sweeps all ambiguities are resolved, and the result is a significant and unique peak with probability 0.26 for the final point of trajectory A. This position in fact refers to the real position of the robot.

Dealing with large environments
In the previous example ambiguities appeared as several peaks in the position probability distribution. In large environments we have to expect that, due to the limited range of the proximity sensors, ambiguities spread out over complete regions. In order to demonstrate the capability of our approach to deal with such complex situations, we applied it to the arena of the AAAI '94 mobile robot competition (Simmons 1995). The size of this arena amounts to 20 × 30 m². Figure 3 shows the occupancy grid map of this arena constructed with the map-building tool described in (Thrun 1993). The sonar sensor measurements were recorded during an exploration run in this arena.
The trajectory of the robot and the 12 positions at which the sensors were fired are included in Figure 3. Again we used only 8 of the 24 sonar sensors and the same resolution for the position probability grid as in the previous example.

Figures 4 and 5 show logarithmic density plots of the maximum position probabilities over all orientations after interpreting 6 and 12 sets of sensor readings. Although the information obtained after the first 6 sensor readings does not suffice to definitely determine the current position of the robot, it is obvious that the robot must be in a long corridor. After 12 steps the position of the robot is uniquely determined. The corresponding grid cell has a probability of 0.96, while the small peak at the bottom of Figure 5 has a maximum of 8e-6.

Conclusions
We presented the position probability grid approach as a robust Bayesian technique to estimate the position of a mobile robot. Our method allows the integration of sensor readings from different types of sensors over time. We showed that this method is able to find the position of a robot even if noisy sensors such as ultrasonic sensors and approximative environmental models like occupancy grid maps are used. Our approach has an any-time characteristic, because it is able to give an estimation for the current position of the robot already after interpreting the first sensor reading. By incorporating new input this estimation is continuously improved. Position probability grids allow to represent and to deal with ambiguous situations. These ambiguities are resolved if sufficient sensory information is provided. Our technique has been implemented and tested in several complex real-world experiments.

The only precondition for the applicability of the position probability grid approach is an environmental model which allows to determine the likelihood of a sensor reading at a certain position in the environment.
In our implement- ation we used occupancy probability grids as world model in combination with ultrasonic sensors. Alternatively one could use a CAD-model of the environment and cameras for edge detection or integrate simple features like the color of the floor. Using the currently implemented system our robot needs about one minute to determine its position in a typical of- fice. Although the computation time depends linearly on the grid size, very large environments such as the 600m2 wide AAAI ‘94 robot competition arena do not impose any prin- ciple limitations on the algorithm. We are convinced that different optimizations will make our approach applicable online even in such large environments. The most import- ant source for speed-up lies in the pre-analysis of the envir- onmental model. This includes computing and storing the likelihoods of all possible sensor readings for all positions. Additionally, in orthogonal environments the reduction of possible orientations to the alignment of the walls drastic- ally reduces the complexity of the overall problem. Further- more, the application of a map resolution hierarchy as pro- posed in (Moravec 1988) can be used to produce rough po- Figure 3: Occupancy grid map of the AAAI ‘94 mobile robot competition Figure 4: Density plot after 6 s Ps Figure 5: Density plot after 12 steps sition estimates which are refined subsequently. Despite these encouraging results there are several war- rants for future research. This concerns optimizations as de- scribed above as well as active exploration strategies. Such strategies will guide the robot to points in the environment, which provide the maximum information gain with respect to the current knowledge. References Borenstein, J., and Koren, Y. 1990. Real-time obstacle avoidance for fast mobile robots in cluttered environments. In Proc. of the IEEE International Conference on Robotics and Automation, 572-577. Borenstein, J., and Koren, Y. 199 1. 
The vector field histogram: fast obstacle avoidance for mobile robots. IEEE Transactions on Robotics and Automation 7(3):278-288.

Buhmann, J.; Burgard, W.; Cremers, A. B.; Fox, D.; Hofmann, T.; Schneider, F.; Strikos, J.; and Thrun, S. 1995. The mobile robot Rhino. AI Magazine 16(2):31-38.

Cox, I., and Wilfong, G., eds. 1990. Autonomous Robot Vehicles. Springer Verlag.

Feng, L.; Borenstein, J.; and Everett, H. 1994. "Where am I?" Sensors and Methods for Autonomous Mobile Robot Positioning. Technical Report UM-MEAM-94-21, University of Michigan.

Kortenkamp, D., and Weymouth, T. 1994. Topological mapping for mobile robots using a combination of sonar and vision sensing. In Proc. of the Twelfth National Conference on Artificial Intelligence, 979-984.

Maybeck, P. S. 1990. The Kalman filter: An introduction to concepts. In Cox and Wilfong (1990).

Moravec, H. P., and Elfes, A. 1985. High resolution maps from wide angle sonar. In Proc. IEEE Int. Conf. Robotics and Automation, 116-121.

Moravec, H. P., and Martin, M. C. 1994. Robot navigation by 3D spatial evidence grids. Mobile Robot Laboratory, Robotics Institute, Carnegie Mellon University.

Moravec, H. P. 1988. Sensor fusion in certainty grids for mobile robots. AI Magazine 9(2):61-74.

Nourbakhsh, I.; Powers, R.; and Birchfield, S. 1995. DERVISH: An office-navigating robot. AI Magazine 16(2):53-60.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc.

Schiele, B., and Crowley, J. L. 1994. A comparison of position estimation techniques using occupancy grids. In Proc. of the IEEE International Conference on Robotics and Automation, 1628-1634.

Simmons, R., and Koenig, S. 1995. Probabilistic robot navigation in partially observable environments. In Proc. International Joint Conference on Artificial Intelligence.

Simmons, R. 1995. The 1994 AAAI robot competition and exhibition. AI Magazine 16(2):19-29.

Thrun, S. 1993.
Exploration and model building in mobile robot domains. In Proceedings of ICNN-93, 175-180. San Francisco, CA: IEEE Neural Network Council.

Weiß, G.; Wetzler, C.; and von Puttkamer, E. 1994. Keeping track of position and orientation of moving indoor systems by correlation of range-finder scans. In Proceedings of the International Conference on Intelligent Robots and Systems, 595-601.
Navigation for Everyday Life

Daniel D. Fu and Kristian J. Hammond and Michael J. Swain
Department of Computer Science, University of Chicago
1100 East 58th Street, Chicago, Illinois 60637

Abstract

Past work in navigation has worked toward the goal of producing an accurate map of the environment. While no one can deny the usefulness of such a map, the ideal of producing a complete map becomes unrealistic when an agent is faced with performing real tasks. And yet an agent accomplishing recurring tasks should navigate more efficiently as time goes by. We present a system which integrates navigation, planning, and vision. In this view, navigation supports the needs of a larger system as opposed to being a task in its own right. Whereas previous approaches assume an unknown and unstructured environment, we assume a structured environment whose organization is known, but whose specifics are unknown. The system is endowed with a wide range of visual capabilities as well as search plans for informed exploration of a simulated store constructed from real visual data. We demonstrate the agent finding items while mapping the world. In repeatedly retrieving items, the agent's performance improves as the learned map becomes more useful.

Introduction

Past work in navigation has generally assumed that the purpose of navigation is to endow a robot with the ability to navigate reliably from place to place. However, in focusing on this specific problem, researchers have ignored a more fundamental question: What is the robot's purpose in moving from place to place? Presumably the robot will perform some later-to-be-named task: the point of navigation is only to get the robot to the intended destination. What happens afterwards is not a concern. This notion has led researchers toward building systems whose sensing ultimately relies on the lowest common denominator (e.g., sonar, dead reckoning).
We believe that: (1) robots will use navigation as a store of knowledge in service of tasks, and (2) robots will have semantically rich perception in order to perform a wide range of tasks. Given these two beliefs, we suggest first that navigation must coexist with a robot's planning and action mechanisms instead of being a task in and of itself, and second that rich perception, used for tasks, can also be used for constructing a map to make route following and place recognition more tractable.

In this paper we present a system which embodies these two notions as they apply to five areas: run-time planning, context-based vision, passive mapping, path planning, and route following. This system differs from past navigation research in several ways, the principal difference being the integration of a passive mapper with a planning system. This notion was introduced by Engelson and McDermott (1992). They view a mapping system as a resource for a planner: a mapping subsystem maintains a map of the world as the planner accomplishes tasks. When applicable, the planner uses the acquired map information for achieving goals.

In contrast, traditional research has viewed the mapping subsystem as an independent program which explores and maps its environment. Eventually the program produces a complete map. While no one can doubt the usefulness of such a map, we believe that this mapping may be neither realistic nor necessary. Previously, we showed that an agent can instead use its knowledge of a domain's organization in order to accomplish tasks (Fu et al. 1995). This was shown to be effective in a man-made domain: a grocery store. Whereas past approaches have assumed an unknown and unstructured environment, e.g. (Elfes 1987), our approach assumes an environment whose structure is known, but whose specifics are unknown.
For common tasks such as shopping in a grocery store, finding a book in a library, or going to an airport gate, there is much known about each setting which allows the average person to achieve his goals without possessing prior knowledge of each specific setting. For example, people know that managers of grocery stores organize items by their type and by how they're used; librarians shelve books according to category; and airport architects account for typical needs of travelers by putting signs in relevant areas. People count on these regularities in order to act appropriately and efficiently without first completely mapping the environment. In this sense, the domains are known. Moreover, the domains are actively structured by someone else, so we can depend on regularities being maintained.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

We can also rely on certain stable physical properties. Previously we showed that grocery stores exhibit several useful physical properties allowing us to build fast and reliable vision sensing routines. For example, light comes from above, shelves always stock items, items are always displayed facing forward, etc. This work is similar to recent research done in the areas of context-based (Strat and Fischler 1991; Firby et al. 1995) and lightweight (Horswill 1993) vision. These paradigms have produced perceptual algorithms which compute useful information reasonably fast so long as the reliability conditions for each perceptual algorithm are known. An immediate consequence of these conditions is that at least some of the sensing routines don't have to be run continuously. If these routines are used in the navigation process, the map must represent the different conditions and results of running a sensing routine. In contrast, previous research has often committed to using uniform fixed-cost sensing (e.g., sonar).
This commitment allows the same sensing to be used for both map learning and route following. However, since we use conditional sensing routines, our map learning and route-following methods are markedly different.

In summary, the research presented here differs from traditional research in two major ways. First, we view map learning and route following in the context of a larger system which performs a wide range of tasks in addition to navigation. We present a framework from which existing planners can integrate navigation. Second, we assume a known and structured environment, which enables us to write effective search plans as well as to build a wide range of visual capabilities. We describe a method for using these capabilities in the navigation task.

Shopper

In order to study some of the types of knowledge and underlying mechanisms involved in everyday tasks, we selected grocery store shopping. Shopping is a common activity which takes place in a completely man-made environment. Previously, we showed how our system, SHOPPER, used structural knowledge of the environment to quickly find items using a small set of regularities: items of a similar nature are clustered together (e.g., cereals), as are items which are often used together (e.g., pancake mix and maple syrup). Several regularities were then encoded into search plans for finding items. For example, if SHOPPER is looking for Aunt Jemima's pancake mix and it sees a "syrup" sign at the head of an aisle, it executes a plan to search that aisle for syrup. After locating the syrup, it executes another search plan for finding the pancake mix in the local area close to the syrup.

These plans were tested in GROCERYWORLD, a simulator of a real grocery store. GROCERYWORLD provides range information from eight sonars plus compass information. SHOPPER is cylindrical, with sonars placed equidistant along the circumference. SHOPPER also possesses a panning head.
Figure 1 shows the GROCERYWORLD user interface. The top left pane shows SHOPPER in the grocery store with both head and body orientations. The top right pane shows the body-relative sonar readings, while the bottom pane shows the sign information available to the agent. If the agent is located at the head of an aisle and is facing the aisle, GROCERYWORLD can supply sign data.

Figure 1: User Interface

GROCERYWORLD differs from most robot simulators in that it supplies real color images taken from a local grocery store. Camera views are restricted to four cardinal directions at each point in the store. Altogether, the domain consists of 75,000 images. Figure 2 shows example views.

GROCERYWORLD also differs from past simulators in that travel is limited to moving along one dimension, except at intersections. However, stores, like office environments, don't have much free space; in fact, hallways and store aisles constrain movement to be in two obvious directions. Kortenkamp and Weymouth (1994) showed that a robot could stay centered in an office hallway with less than 3.5 degrees of orientation error. In light of this fact, we do not believe the one-dimensional travel restriction is a serious shortcoming, since we actually prefer SHOPPER to stay centered in an aisle for consistent vision perception.

Navigation

SHOPPER's navigation system comprises four subsystems: RETRIEVER, PATH PLANNER, FOLLOWER, and PASSIVE MAPPER. These subsystems are shown in Figure 3. Initially a goal item is given to the RETRIEVER, which uses similarity metrics and the current map to select a destination and a search plan to execute on arrival. The PATH PLANNER replaces the destination with a sequence of nodes (created on previous visits) leading from the current location to the destination. The nodes denote intersections and places of interest. Each node is annotated with accumulated sensor information from past visits.
Next, the FOLLOWER uses the sequence to follow the path. Vision algorithms are selected and run to ensure correspondence between the predicted sensor information and the current perception. Once the path has been followed, the PLAN INTERPRETER executes the search plan. The PASSIVE MAPPER updates the map while the search plan is executed.

Figure 2: Example views. Left: A typical view down an aisle. Right: A view to the side. Two horizontal lines denote shelf locations. Color regions are enclosed by the larger rectangle. The smaller rectangle around Corn Pops denotes identification.

SHOPPER's perceptual apparatus is primarily suited to moving around and identifying objects in an image. Below we outline the various sensing routines and explain their use.

Compass: SHOPPER moves and looks in four cardinal directions. We use a compass as an aid to mapping the environment. For example, if SHOPPER knows there is a "soup" sign in view at a particular intersection, it can turn to that direction and attempt to sense the sign.

Sonar: Sonar sensing runs continuously for classifying intersections and verifying proximity to shelves.

Signs: When SHOPPER is looking down an aisle and attempts to detect signs, GROCERYWORLD supplies the text of the signs. In Figure 2 a diamond-like sign can be seen above the aisle. However, the image resolution, sign transparency, and specularity prohibit any useful reading.

Shelf Detection: This sensor finds the vertical locations of steep gradient changes in an image by smoothing and thresholding for large gradients.

Color Histogram Intersection: Sample color histograms (Swain and Ballard 1991) are taken successively above a shelf and compared to a sought object's histogram in order to identify potential regions according to the intersection response.
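The histogram intersection measure of Swain and Ballard can be sketched in a few lines: it scores how much of the sought object's model histogram is accounted for by a candidate image region. This is an illustrative sketch with made-up bin counts, not SHOPPER's implementation.

```python
def histogram_intersection(region_hist, model_hist):
    """Swain & Ballard's histogram intersection: sum the bin-wise
    minima of the two histograms and normalize by the model's total,
    giving a score in [0, 1] (bin layout here is illustrative)."""
    matched = sum(min(r, m) for r, m in zip(region_hist, model_hist))
    return matched / sum(model_hist)

# A region whose color distribution closely matches the model scores
# near 1.0; unrelated regions score near 0.
model  = [10, 30, 50, 10]   # hypothetical color-bin counts for the sought item
region = [12, 28, 45, 15]   # counts sampled from a shelf region
score = histogram_intersection(region, model)
```

A threshold on this score is what lets a routine like this flag candidate regions cheaply before running a more expensive matcher.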
Edge Template Matcher: Given an image of an object, we use an edge-image template matching routine with the Hausdorff distance (Rucklidge 1994) as a similarity metric. This sensor is the most expensive, so it processes only areas first filtered by the shelf and color histogram detectors.

All of the above vision algorithms have been implemented and are used by SHOPPER.

Figure 3: SHOPPER's Architecture. Arrows indicate data flow from one module to another.

In the following sections we describe the four subsystems. Later, using examples, we explain how they interact with each other as well as with the rest of the system.

Passive Mapping

The purpose of the PASSIVE MAPPER subsystem is to maintain a topological map of the world. This subsystem is active when the agent is exploring the world via search plans, monitoring the EXECUTOR as it performs visual and physical actions. The actions and results are used for creating the map. For each physical action the agent performs, it commits, as a policy, to performing a fixed-cost sensing procedure consisting of a compass reading and sonar readings. When the agent knows where it is and exactly where it's going, the PASSIVE MAPPER is disabled, since the current map will suffice for route following.

Map Representation. Our map representation draws from previous navigation work using topological maps (Brooks 1985; Engelson and McDermott 1992; Kortenkamp and Weymouth 1994; Kuipers and Byun 1988; Mataric 1992). These maps use relations between places (also known as landmarks, distinctive places, gateways, or waypoints) for navigation. These methods require that the robot be able to recognize a place and travel from place to place.

We use a topological map consisting of distinctive places and connecting edges. In Kuipers and Byun's NX model, distinctive places are local maxima according to a geometric criterion. Examples of these can be beginnings of open space, transitions between different spaces, dead ends, etc. Essentially, a distinctive place is a landmark which a robot can recognize and use for map learning and route following. In contrast, our notion of a distinctive place is closely coupled to a place's usefulness to the agent, either as a navigation point where an agent may move into a different space, or a location that is somehow relevant to the agent. In the GROCERYWORLD domain these places are, respectively, intersections (INTERs) and places of interest (POIs).

Recall that SHOPPER is constrained to move along one dimension at a time; it can move in another dimension only at intersections. One example of a distinctive place is therefore an intersection. Similar to Kortenkamp and Weymouth, we categorize intersections qualitatively as T-SHAPE, CROSS, and CORNER. These descriptions of space are based only on sonar readings. Examples are illustrated in Figure 4. The other example of a distinctive place is a place of interest. These places denote locations important to the agent's goals. For SHOPPER, each place of interest corresponds to a location where SHOPPER found a sought item.

Figure 4: Qualitative descriptions of space: T-SHAPE, CROSS, CORNER, and CORRIDOR.

As SHOPPER encounters new distinctive places, it updates its map by storing perceptual information associated with the distinctive place. The distinctive place description, describing both intersections and places of interest, is a set of tuples (T, S x C, A, R) where:

C in {0, 1, 2, 3} is a compass direction.
T in {T-SHAPE, CROSS, CORNER, CORRIDOR} x C is the distinctive place type and orientation.
S in {sign, shelf, color, template matcher} is a sensing routine. S x C also accounts for the head's direction at the time the routine was executed.
A is a set of parameter arguments supplied to the sensing routine.
R is the output of the routine.
Note that the fixed-cost sensors, compass and sonar, are automatically associated with each distinctive place. For the agent's location in Figure 1, an example sign sensing tuple is: ((T-SHAPE, 2), (sign, 0), 8, {Aisle-S, Baby-foods, Asian, Mexican, Middle-east-foods, Canned-meals, Tuna, Rice, Pasta}).

Landmark Disambiguation. As the PASSIVE MAPPER encounters a new distinctive place, it attempts to determine whether or not it has been there before. For passive mapping, there are two problems in landmark disambiguation: passivity and disparate sensing.

A tension exists between keeping a mapper passive (so as not to interfere with plan execution) and supplying enough information to the mapper for dependable navigation. There are two ways to alleviate this tension:

Maintain possible distinctive places. This method, proposed by Engelson and McDermott, requires possibilities to be eventually culled as the agent moves about in the world. Map updates are delayed until the agent has disambiguated its position.

Assume rich sensing. The only reason the PASSIVE MAPPER is activated is precisely because the agent is exploring the environment. If the agent knew where it was, it would be following a route instead. Basically, if SHOPPER doesn't know where it is, assume it will "look around" (Ishiguro et al. 1994). Since the PASSIVE MAPPER is active when SHOPPER executes a search plan, we adopt this assumption, as it appears tenable for the time being.

A related problem surfaces when a passive mapper uses a wider range of sensing routines. Since SHOPPER does not run all its sensing routines all the time, we can't guarantee that any other sensing routines were run except for the fixed-cost compass and sonar. For
If the agent were executing a different set of plans which involve movement, there might be an altogether different set of sensing routines run. If, however, we commit to changing the agent’s location only through search plans or route following, distinc- tive places will not be confused with each other. Retriever Given a goal item, search plans, and similarity met- rics, the RETRIEVER selects a target destination and search plan to execute once SHOPPER arrives. Table 1 lists conditions for selecting a particular destination and search plan. Recall earlier that SHOPPER uses regularities of a do- main in order to find items via its search plans. Regu- larities are also used for deciding on a destination. For the examples discussed later, we use two regularities: type and counterpart. The type relationship denotes a category made up of items of the same type. The counterpart relation denotes a category of items that are often used together; e.g., pancake mix and maple syrup. As an example of how the type relationship is used, Surf and CheerFree are types of laundry detergents. Items of the same type are likely to be physically close. If, in a previous visit, SHOPPER found Surf and now wants to find CheerFree, it selects a place of interest (Surf) as the destination as well as a LOCAL SAME- SIDE search plan. This particular search plan looks for an item hypothesized to be nearby on the same side of the aisle as the previously found item. Path Planning Given a target destination from the RETRIEVER and the current location from the map, the PATH PLANNER plans a route that will get the agent from the current location to the destination. Because the map is orga- nized in terms of nodes and edges, the path planner uses Dijkstra’s algorithm for finding a shortest path. No metrical information is stored, so each edge is of equal cost. 
After the nodes along the route have been selected, the PATH PLANNER annotates each node with all the sensing information gathered from past visits. These annotations are used by the FOLLOWER.

Route Following

The FOLLOWER receives a path and search plan from the PATH PLANNER. The FOLLOWER's purpose is to follow the path and then pass the search plan to the PLAN INTERPRETER. In order to follow a path the FOLLOWER must verify that the current sensing is consistent with the predicted sensing. The stored sensing is processed according to the particular place prediction. Recall these are a set of tuples (T, S x C, A, R). The consistency check is based on a match function for each sensing routine:

for each s in S, m_s : A x R x A x R -> {True, False}

where m_s is the match function for sensing routine s. The match functions compare the arguments and results of the past and current sensing routines to ensure the agent is on course. If the results are consistent, the match function returns a match (True); otherwise, no match (False). For example, suppose SHOPPER is checking whether shelf positions match. After the Shelf Detector is run on the current image, the arguments (horizontal subsampling) and currently found shelves (vertical image positions) are passed to the shelf match function, along with the stored shelf information composed of the same arguments and vertical image positions. For this particular match function, we require that one-third of the shelf positions in the current image be within twenty pixels of the stored shelf positions, and vice versa. If both match correctly, the shelf match function returns True, otherwise False.

The consistency check is done by using the match functions over the sensing routines common to both the past and current perception. Let P be the set of stored perception at a place, and let Q be the set of current perception.
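The shelf match function just described can be sketched directly from the thresholds in the text (one-third of positions within twenty pixels, in both directions). The function name and signature are ours; only the thresholds come from the paper.

```python
def shelf_match(stored, current, tol=20, frac=1/3):
    """Match function m_s for the shelf detector: succeed when at
    least `frac` of each list of vertical shelf positions lies within
    `tol` pixels of some position in the other list. Thresholds are
    from the text; the signature and names are illustrative."""
    def covered(xs, ys):
        hits = sum(1 for x in xs if any(abs(x - y) <= tol for y in ys))
        return hits >= frac * len(xs)
    # "and vice versa": the check must hold in both directions.
    return covered(stored, current) and covered(current, stored)

# Two of three stored shelf heights reappear near their old positions.
ok = shelf_match([100, 220, 340], [105, 230, 500])
```

Each sensing routine would supply its own such m_s; the consistency check below simply runs the appropriate one per (routine, direction) pair.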
Place consistency is defined to be equivalent distinctive place types and a match on all the common sensing between P and Q. Figure 5 illustrates this method as a procedure. If the procedure returns True, the two locations are consistent. If False, the two locations are different.

procedure Consistency-Check(P, Q)
  for all sensors s in S
    for all compass directions c in C
      if (t, (s, c), a, r) in P and (t', (s, c), a', r') in Q
         and m_s(a, r, a', r') = False
      then return False
  return t = t'

Figure 5: Procedure for determining consistency between places P and Q.

This procedure is at the heart of the PASSIVE MAPPER. The FOLLOWER performs the stored sensing routines, and then makes use of the procedure to ensure it is on course. At an intersection the FOLLOWER checks for consistency by alternating between a consistency check and executing a sensing routine. After the intersection matches, the FOLLOWER orients the agent to move toward the next distinctive place. A similar method is employed for finding a place of interest, except that a consistency check failure is allowed, owing to the agent not yet being in the right place. Currently SHOPPER does not handle the case when it fails to find a place of interest.

Strategy Name     | Conditions                                       | Destination  | Search Plan
EXACT             | I1 was found before.                             | POI with I1  | None
TYPE              | Similar item I2 was found before.                | POI with I2  | LOCAL SAME-SIDE
COUNTERPART       | I1 and I2 are counterparts. I2 previously found. | POI with I2  | LOCAL
SIGN-TYPE         | Sign S seen. I1 is a type of S.                  | INTER with S | AISLE
SIGN-COUNTERPART  | Sign S seen. I1 and S are counterparts.          | INTER with S | AISLE
DEFAULT           | None                                             | None         | BASIC

Table 1: In order of preference: strategy names, their conditions, destinations, and search plans for locating item I1.

Examples

We select three items to demonstrate SHOPPER's navigation capabilities: Solo laundry detergent, Corn Pops cereal, and Downy fabric softener.

Solo. Initially, in first coming to the store, SHOPPER's map is empty.
Given an empty map, Solo laundry detergent, and the preferences shown in Table 1, the RETRIEVER picks a null destination and the BASIC search plan as the DEFAULT strategy. The BASIC search plan is simple: go to the beginning of an aisle, move across aisles until a relevant sign is seen, go into that aisle, and look left and right until the item is found. On receiving the null destination and search plan, the PATH PLANNER outputs a null path and the BASIC search plan. The FOLLOWER has no path to follow, so it passes control and the search plan to the PLAN INTERPRETER, which starts executing the search plan at the store entrance in front of Aisle 1. The BASIC search plan instructs SHOPPER to move around the outside perimeter of the store, reading signs and moving until it finds a relevant sign. For example, in Aisle 4 the sign reads: Aisle-4 Salad-dressing Canned-soup Sauce Nut Cereal Jam Jelly Candy. Eventually Solo is found on the left side of Aisle 6, since there is a "laundry aid" sign in front of that aisle. During this search, the PASSIVE MAPPER recorded the intersection types of Aisles 1 through 6 plus visual (sign) information. A place of interest is created where Solo was found. The POI is defined according to the shelf positions, color region, and item identification output by the sensing routines.

Corn Pops. Next, we give SHOPPER the goal of finding Corn Pops. The RETRIEVER recalls that a "cereal" sign was seen in front of Aisle 4 and selects the SIGN-TYPE strategy. The target destination is now the beginning of Aisle 4, and the search plan is AISLE. The PATH PLANNER plans a path from the current location (a POI) to Aisle 4 (an INTER). The FOLLOWER starts by verifying that it is at the current place: the currently accumulated sensor information matches the stored perception. SHOPPER now orients its body to the beginning of Aisle 6 and goes there.
Once it reaches the beginning, the intersection is verified to be Aisle 6 by matching the intersection type and turning the head around to match sign information. Using a similar method, SHOPPER continues to the beginnings of Aisle 5, then Aisle 4. The AISLE search plan and control are now passed to the PLAN INTERPRETER. The PLAN INTERPRETER searches Aisle 4 and eventually finds Corn Pops, where it creates a POI similar to Solo's.

Downy. Finally, we give SHOPPER the goal of finding Downy fabric softener. Since fabric softener is used with laundry detergent, SHOPPER uses the COUNTERPART strategy. The intended destination now becomes the POI containing Solo, with the search plan LOCAL. In a similar fashion, the FOLLOWER follows a route from the current POI to the intended POI. When the agent is at the beginning of Aisle 6, the FOLLOWER runs the POI sensing routines and compares results associated with Solo while moving down the aisle. Once Solo is reached, a LOCAL search plan is executed. This plan allows SHOPPER to search to the left and right of Solo as well as the other side of the aisle as it tries to find Downy. In this instance, Downy is on the other side. SHOPPER finds Downy and creates a new POI.

Status

SHOPPER has been tested successfully on fourteen items ranging over cereals, laundry aids, cake mixes, cleaning materials, and storage supplies. SHOPPER can quickly compute routes to likely areas and reliably arrive there. For the items SHOPPER cannot find, and there are many, it has been the case that its sensors
Because our search plans are declara- tive and can account for opportunistic types of behav- ior (e.g., recognizing a sought item unexpectedly), we would like the FOLLOWER to use a similar representa- tion for coping with contingencies during the naviga- tion process, c.f. (Simmons 1994). Earlier we cited the possibility of the system using different or new sensing routines, not necessarily hav- ing any overlap with previously stored sensing. We believe that landmark disambiguation is simpler if the PASSIVE MAPPER is sometimes active by signaling an ambiguity before the agent moves away. Then it du- plicates perceptual actions and compares the results to past perception. This method appears to be promising since Kortenkamp and Weymouth, in using a visual representation of vertical lines, were able to success- fully disambiguate locations without traveling away from the location. Another possible way to disambiguate position is to use dead reckoning. Currently SHOPPER'S map data does not indicate relative distances between places. So when sensor data alone indicates that SHOPPER could be in one of several places, dead reckoning al- lows SHOPPER to reject many places before needing more information. The navigation method we have presented here as- sumes that major errors in sensing will not hap- pen. For a specific set of items, our sensing rou- tines have empirically shown to be sufficient and re- liable in our particular domain. For integration with existing robots, this may not be a realistic assump- tion. However, we view errors in sensing as being just that: errors in sensing. We do not believe a map- per should bear the burden of coping with an incor- rect map because of error-prone and/or semantic-poor data. Surely there are instances in real life where one can become genuinely lost because of sheer size, or ab- sence of distinguishable cues. 
Although every navigation system must handle those inevitable situations, we believe those instances are rare simply because we live in and depend on a culturally rich world (Agre and Horswill 1992) full of distinguishing cues to support everyday activity - one of them being navigation.
Acknowledgments
The research reported here has benefited from discussions with Charles Earl, Mark Fasciano, Jim Firby, and Val Kulyukin as well as from helpful comments from reviewers. This work was supported in part by Office of Naval Research grant number N00014-91-J-1185.
References
Philip E. Agre and Ian Horswill. Cultural support for improvisation. In The Proceedings of the Tenth National Conference on Artificial Intelligence, pages 363-368, 1992.
Rodney A. Brooks. Visual map making for a mobile robot. In Proceedings IEEE International Conference on Robotics and Automation, 1985.
Alberto Elfes. Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation, RA-3(3):249-265, 1987.
Sean P. Engelson and Drew V. McDermott. Maps considered as adaptive planning resources. In AAAI Fall Symposium on Applications of Artificial Intelligence to Real-World Autonomous Mobile Robots, pages 36-44, 1992.
R. James Firby, Roger E. Kahn, Peter N. Prokopowicz, and Michael J. Swain. An architecture for vision and action. In The Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 72-79, 1995.
Daniel D. Fu, Kristian J. Hammond, and Michael J. Swain. Action and perception in man-made environments. In The Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 464-469, 1995.
Ian Horswill. Polly: A vision-based artificial agent. In The Proceedings of the Eleventh National Conference on Artificial Intelligence, 1993.
Hiroshi Ishiguro, Takeshi Maeda, Takahiro Miyashita, and Saburo Tsuji. A strategy for acquiring an environmental model with panoramic sensing by a mobile robot.
In Proceedings IEEE International Conference on Robotics and Automation, volume 1, pages 724-729, 1994.
David Kortenkamp and Terry Weymouth. Topological mapping for mobile robots using a combination of sonar and vision sensing. In The Proceedings of the Twelfth National Conference on Artificial Intelligence, 1994.
Benjamin J. Kuipers and Yung-Tai Byun. A robust, qualitative method for robot spatial learning. In The Proceedings of the Seventh National Conference on Artificial Intelligence, 1988.
Maja J. Mataric. Integration of representation into goal-driven behavior-based robots. IEEE Transactions on Robotics and Automation, 8(3):304-312, June 1992.
William Rucklidge. Efficient Computation of the Minimum Hausdorff Distance for Visual Recognition. PhD thesis, Cornell University Department of Computer Science, 1994. Technical Report TR94-1454.
Reid Simmons. Becoming increasingly reliable. In Proceedings, The Second International Conference on Artificial Intelligence Planning Systems, pages 152-157, 1994.
Thomas M. Strat and Martin A. Fischler. Context-based vision: Recognizing objects using both 2-d and 3-d imagery. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(10):1050-1065, 1991.
Michael J. Swain and Dana H. Ballard. Color indexing. International Journal of Computer Vision, 7:11-32, 1991.
Guaranteeing Safety in Spatially Situated Agents
Robert C. Kohout and James A. Hendler
Department of Computer Science and Institute for Advanced Computer Studies
University of Maryland
College Park, MD 20742
kohout,hendler@cs.umd.edu
phone: (301) 405-7027 fax: (301) 405-6707
David J. Musliner
Honeywell Technology Center
MN65-2200, 3660 Technology Drive
Minneapolis, MN 55418
musliner@src.honeywell.com
phone: (612) 951-7599
Abstract
"Mission-critical" systems, which include such diverse applications as nuclear power plant controllers, "fly-by-wire" airplanes, medical care and monitoring systems, and autonomous mobile vehicles, are characterized by the fact that system failure is potentially catastrophic. The high cost of failure justifies the expenditure of considerable effort at design time in order to guarantee the correctness of system behavior. This paper examines the problem of guaranteeing safety in a well-studied class of robot motion problems known as the "asteroid avoidance problem." We establish necessary and sufficient conditions for ensuring safety in the simple version of this problem which occurs most frequently in the literature, as well as sufficient conditions for a more general and realistic case. In doing so, we establish functional relationships between the number, size and speed of obstacles, the robot's maximum speed, and the conditions which must be maintained in order to ensure safety.
Introduction
Applications in which the failure of a system to perform correctly can result in catastrophe are known as mission-critical systems. The reliability requirements of such applications, which include nuclear power plant controllers, "fly-by-wire" airplanes, medical care and monitoring systems, and autonomous mobile vehicles, have motivated extensive research into the development of highly reliable software systems. Research into the development of systems-level support for mission-critical systems focuses upon "hard" real-time operating systems, which can guarantee that the system can deliver resources stipulated by some externally generated set of timing constraints.
Similarly, the programming language community has developed technologies to ensure that programs will always behave correctly, with respect to some externally provided performance specification. In contrast to the effort in the systems and programming languages communities, there is not a large body of research into the problem of generating the specifications which will ensure the correct and timely operation of a deployed mission-critical system. Determining a correct plan of action is the focus of AI Planning research. However, the high-variance time requirements of current techniques make it difficult to guarantee that they will produce a correct solution in time to actually use it. CIRCA (Musliner, Durfee, & Shin 1995) was developed to address this problem: by modeling the world as a finite set of situation-states, with well-defined transitions between them, CIRCA is able to search the situation space "offline" (i.e., before the system is actually deployed), in an effort to find a closed set of safe states such that, for any possible combination of external events, it will always be possible for the control system to take an action that will keep the current situation-state within the closed set of safe states. When the situation space includes continuous dimensions, this technique can only be used if we can somehow discretize the continuous space. When the dimension is time, it is usually straightforward to meaningfully distinguish between times before and after a deadline, as well as some small set of "deadline approaching" intervals. However, when the dimensions are spatial, there is often no simple partitioning which will allow us to reason about a finite set of discrete states. This paper examines the problem of guaranteeing safety in a well-studied class of robot motion problems.
By establishing sufficient conditions for ensuring safety, we provide the basis for automatic reasoning about maintaining safety in spatial domains.
The "Asteroid Avoidance Problem"
Consider one of the simplest natural problems in dynamic motion planning: how can we find a path for a robot, R, which travels from some initial location to some goal location, while avoiding each of n obstacles, O_1, ..., O_n, where each of the O_i is moving at a known, constant velocity? We are making three simplifying assumptions that would rarely occur in a real-world application:
1. The trajectories of the obstacles are known to the system in advance.
2. The obstacles move linearly.
3. The speed of the obstacles is fixed.
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.
(Reif & Sharir 1985) named this class of problems the asteroid avoidance problem, and they showed that, for the three-dimensional case, the problem is PSPACE-hard when the velocity of the robot is bounded, and NP-hard even when the robot's velocity is unbounded. (Canny & Reif 1987) showed that the 2-dimensional case is NP-hard. This has strong implications for any mission-critical application, such as a fly-by-wire or autonomous vehicle system, for which the failure to avoid obstacles could be catastrophic: in the general case, we will only be able to find timely results for problems of very small size, even under the strong simplifying assumptions listed above. Additionally, one should note that these results are for the problem of finding a path if it exists; there is no guarantee that such a path will in fact exist. This paper focuses upon finding restrictions to the problem for which we can guarantee that some safety-preserving path exists, and for which the problem of computing the solution is tractable.
For instance, (Fujimura & Samet 1989) give an O(n² log n) algorithm for solving the asteroid avoidance problem, under the assumption that the robot can move faster than all of the obstacles. (Reif & Sharir 1985) claim that under such an assumption, it is always possible to find such a path, as long as the initial position of the robot is not in the "shadow" of any obstacle, where the shadow of an obstacle is defined to be all those locations from which escape from that obstacle is impossible. The proofs that 1) a safety-preserving path always exists, and that 2) we have a relatively efficient way of finding it, are the two most important criteria for guaranteeing that under these conditions obstacles will always be avoided. We shall refer to these as the existence and ability criteria, respectively. If we have the luxury of knowing well in advance what the initial positions and trajectories of the robot and obstacles are, so that we can compute the solution "offline" and use it at the appropriate time, the existence and ability criteria are all we need to satisfy to ensure the robot's safety. However, this situation rarely occurs in practice. Normally, the relevant data become available at some time, t_0, and we must have the appropriate solution by some later deadline, t_d, or the solution will be obsolete by the time we begin to execute it. We call the requirement that a solution be produced before it is obsolete the timeliness criterion. In the case where the speed of the robot is greater than that of any obstacle, we can establish timeliness by expanding the shadow of each obstacle to account for the (worst-case) time required to compute a solution to the problem. In this section, we will consider the following variant of the asteroid problem: we have a single robot, R, and a set of n obstacles O_1, ..., O_n moving at constant speeds v_1, ..., v_n along linear trajectories.
The robot is capable of instantaneous, unbounded acceleration, up to some maximum speed V_r. Under what conditions can we guarantee that 1) a safety-preserving path exists which will allow the robot to avoid being hit by any obstacle, and 2) we can compute the path in time to execute it? We shall model the robot as a single point and the obstacles as circles with diameters d_1, ..., d_n. Most path-planning literature assumes we can bound the robot and obstacles by polygons. Reducing the robot to a point is a standard technique introduced in (Lozano-Perez & Wesley 1979): it turns out that a solution in the case where the robot is a convex polygon is equivalent to the case where the robot is a point and the sizes of the obstacles have been increased by the size of the robot. However, this technique can only be used when the obstacles do not rotate. Since a polygon that does not rotate can be bounded by a circle, and a polygon that does rotate can also be bounded by a (possibly larger) circle, centered at the center of rotation, we use circles to represent obstacles in order to simplify our presentation, and actually gain some generality. In this paper, we are concerned only with avoiding the obstacles: there is no goal position to which we are trying to move the robot. Finally, though the concepts we present do generalize to higher dimensions, we will limit our treatment to the two-dimensional case for ease of presentation.
The Threat Horizon
The central insight of this section is that, once we fix the number, speed and sizes of the obstacles, and the maximum speed of the robot, the obstacles can always be avoided, so long as they are each initially some minimum distance away from the robot. We call this distance the threat horizon, H. It should be obvious that, if we make H extremely large relative to the speeds of the obstacles, some safety-preserving path must exist. However, we would like to make H as small as possible.
We also need to satisfy the ability criterion, i.e., it is not enough to know a path exists; we must be able to find it. We address these issues in the proof of the following theorem:
Theorem 1 Let R be a point in a 2-dimensional Euclidean plane, which represents the location of a robot at time t_0. Assume that the robot can rotate and accelerate instantaneously, but is limited by a maximum speed V_r. Let O_1, ..., O_n be a set of n circular obstacles with diameters d_1, ..., d_n which move at known, constant velocities v_1, ..., v_n. Let V_o be the largest of the v_i. Let W be the sum of the widths of the obstacles, i.e., W = Σ_{i=1}^{n} d_i. If each of the obstacles is initially a distance greater than W(V_o + V_r)/2V_r from R, then there exists a "safe harbor" point S such that none of the O_i will touch S at any time, and the robot can move from R to S without intersecting any of the O_i.
Proof: Let Q_i be the space occupied by obstacle O_i from time t_0 onward. Since the obstacles move along linear paths, Q_i is comprised of all of the space between two parallel rays, separated by a width of d_i. Since the space which lies between two parallel lines has been named a plank, we shall call this region a half-plank of width d_i. If each of the obstacles begins at a distance greater than W(V_o + V_r)/2V_r from R, then for each obstacle O_i there must exist a positive number ε_i such that O_i begins exactly W(V_o + V_r)/2V_r + ε_i from R. Let ε = 2V_r ε_m/(V_o + V_r), where ε_m is the smallest of the ε_i. Then ε is positive, ε_m = ε(V_o + V_r)/2V_r, and each O_i begins a distance at least (W + ε)(V_o + V_r)/2V_r from R. The earliest time that one of the obstacles could intersect the robot would be in the case that the obstacle O_m, for which ε_m is the smallest of the ε_i, travels at speed V_o and heads directly towards R, while the robot travels a straight-line path at its maximum speed V_r towards O_m. In this case, O_m and the robot would collide at time t_0 + ((W + ε)(V_o + V_r)/2V_r)/(V_o + V_r), which is just t_0 + (W + ε)/2V_r.
Now consider the region which comprises all of the points to which the robot could move by time t_1 > t_0, while never moving at a speed greater than V_r: this area is just a circle, with radius V_r(t_1 - t_0). It follows that the area which comprises the locations to which the robot could travel before it could possibly be hit is a circle centered at R, with radius V_r((t_0 + (W + ε)/2V_r) - t_0), which simplifies to (W + ε)/2. We shall call the distance (W + ε)/2 the safety radius, and the circle of this radius centered at R the safety region. Now we need to show that the n half-planks, Q_1, ..., Q_n, cannot completely cover the safety region. To do this, we use the 2-dimensional version of Bang's solution to Tarski's "plank problem" (Bang 1951), which states¹:
Theorem 2 (Bang) If L is a convex body of minimal width l in a 2-dimensional Euclidean plane, and L is contained in the union of p planks of widths h_1, ..., h_p, then h_1 + ... + h_p ≥ l.
Clearly, the set of objects which can be covered by planks is a superset of the set of objects which can be covered by half-planks. Since the safety region is a convex body of width W + ε, by Bang's theorem, in order for the n half-planks to cover the safety region, Σ_{i=1}^{n} d_i must be greater than or equal to W + ε. But, by definition, W = Σ_{i=1}^{n} d_i, and ε is positive, so there must be some area within the safety region which is not covered by the Q_i. This proves the existence of S. To see that the robot can move from R to S without being hit, one only need remember that the safety radius was defined so that it is possible for the robot to move anywhere in the safety region by the time the first obstacle reaches its perimeter. Q.E.D.
¹This theorem generalizes to n dimensions.
This proof satisfies the existence criterion. In order to compute the solution, we need to compute the intersection of the n half-planks with the safety region.
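As an illustration of the geometry (not an efficient algorithm), the safe harbor whose existence Theorem 1 guarantees can be located by discretizing the safety region and testing each candidate point against every obstacle's swept half-plank. The representation and helper names below are our own sketch, not the paper's:

```python
import math

def in_half_plank(point, center, velocity, d):
    """True if `point` lies in the region swept from t0 onward by a circular
    obstacle of diameter d starting at `center` with nonzero `velocity`."""
    px, py = point[0] - center[0], point[1] - center[1]
    speed = math.hypot(velocity[0], velocity[1])
    ux, uy = velocity[0] / speed, velocity[1] / speed
    along = px * ux + py * uy          # signed distance along the motion ray
    across = abs(-px * uy + py * ux)   # distance from the ray's supporting line
    if along < 0:                      # behind the start: only the initial disk
        return math.hypot(px, py) <= d / 2
    return across <= d / 2             # ahead of the start: the width-d strip

def find_safe_harbor(robot, obstacles, safety_radius, step=0.05):
    """Grid-scan the safety region (radius (W + eps)/2 around the robot) for a
    point that no half-plank covers; Theorem 1 guarantees one exists when
    every obstacle starts beyond the threat horizon."""
    r = safety_radius
    n_steps = int(2 * r / step)
    for i in range(n_steps + 1):
        for j in range(n_steps + 1):
            x = robot[0] - r + i * step
            y = robot[1] - r + j * step
            if math.hypot(x - robot[0], y - robot[1]) > r:
                continue  # outside the safety region
            if all(not in_half_plank((x, y), c, v, d)
                   for (c, v, d) in obstacles):
                return (x, y)
    return None  # cannot happen under Theorem 1's hypothesis
```

For a single obstacle of diameter 1 heading straight at the robot (W = 1, so with ε = 0.2 the safety radius is 0.6), the scan returns a point off the obstacle's track, i.e., one with |y| > 0.5.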
If k is the number of intersections of the half-planks, this can be done in O((k + n) log n) using a modification of the (Bentley & Ottmann 1979) algorithm for reporting the intersection of line segments, as elaborated in (Mehlhorn 1984, pp. 154-160). (Chazelle & Edelsbrunner 1992) describe an O(n log n + k) algorithm which could also be modified to find safe harbors within the safety region. Either of these solutions satisfies the ability criterion. In order to establish timeliness, for a fixed number of obstacles n, we need to know the actual worst-case time required to compute the solution. Call this time t_c. If we increase H by the maximum distance an obstacle can travel in t_c, then we ensure that the system will have sufficient time to compute and execute the solution. Thus, the threat horizon H should be V_o t_c + W(V_o + V_r)/2V_r in order to guarantee timeliness. Note that we can use a similar argument to account for robot rotation and acceleration times, in the more realistic cases where acceleration and rotation are not assumed to be instantaneous.
The Necessity of the Threat Horizon
In the previous subsection, we showed that constraining obstacles to begin their travels outside of the threat horizon H was sufficient to ensure the safety of the robot. In this subsection, we show that the bound is tight. Note that the size of the safety region depends only upon the sizes of the obstacles, while H also depends upon the ratio of V_o to V_r. This leads to the following theorem:
Theorem 3 Let R be a point in a 2-dimensional Euclidean plane, which represents the location of a robot at time t_0. Assume that the robot can rotate and accelerate instantaneously, but is capable of only a maximum speed V_r. Let O_1, ..., O_n be a set of n circular obstacles with diameters d_1, ..., d_n, which move at known, constant velocities v_1, ..., v_n. Let W = Σ_{i=1}^{n} d_i, and let δ be a positive constant such that δ > 2WV_r/V_o.
If at time t_0 obstacles are allowed to start a distance (W - δ)(V_o + V_r)/2V_r from the robot, then there exist configurations for which it is not possible for the robot to avoid collision.
Proof: It suffices to show a single configuration for which it is not possible to avoid the obstacles. Let all the obstacles O_i have the same diameter D. Assume that the O_i start in the configuration depicted in Figure 1, where the obstacles are just touching (i.e., for i < n the distance from the center of O_i to the center of O_{i+1} is D), and they are all traveling at speed V_o, along parallel courses as indicated in the figure.
Figure 1: Necessity of the Threat Horizon
Let l_t be the line which is tangent to all of the obstacles at time t, and which is on the same side of the obstacles as it is at time t_0, when it is on the side of the obstacles nearest the robot. Let the distance from R to the line l_0 be (W - δ)(V_o + V_r)/2V_r. Clearly, all of the obstacles are initially at least this distance from R at time t_0. The robot can travel the distance (W - δ)/2 in time (W - δ)/2V_r. When it has done so, the line l_{(W-δ)/2V_r} will be a distance (W - δ)/2 away from the (original) point R. If the obstacles can travel the distance W before the robot can move δ/2, then they will completely traverse the circle of radius W/2 centered at R before the robot is able to move outside of this circle. That is, if W/V_o < δ/2V_r, the robot will be hit by at least one obstacle. Since, by definition, δ > 2WV_r/V_o, this completes the proof.
In cases where V_o is large relative to W and V_r, δ can be made arbitrarily close to 0. Consequently, H = W(V_o + V_r)/2V_r is the minimum distance for which we can guarantee that a safety-preserving path exists, in the general case.
The Dynamic Asteroid Avoidance Problem
In the previous section, we presented a necessary and sufficient criterion for guaranteeing safety in the asteroid avoidance problem.
In doing so, we have established a functional relationship between the number, size and speed of the obstacles, the maximum speed of the robot, and the distance which obstacles must initially be from a robot in order to ensure that the robot will never collide with any of the obstacles. We have also shown that, in those cases where we can guarantee safety, there is a simple and efficient means of finding the safety-preserving path. The problem we have addressed thus far makes the same simplifying assumption as is made in, e.g., (Fujimura & Samet 1989; Kant & Zucker 1986; Reif & Sharir 1985): the positions and the trajectories of the obstacles are known prior to execution time. While this formulation has proven challenging, it is overly optimistic. Normally, we can expect the existence, location and trajectories of obstacles to become known during execution, perhaps while the robot is already in the process of avoiding previously detected obstacles. In this section, we examine the problem of guaranteeing safety when the location of obstacles must be sensed at execution time, which we have named the "dynamic" asteroid problem. Using Theorem 1, we develop a sufficient condition for ensuring the existence of a safety-preserving path in this problem.
Obstacles with Uniform Velocity
Consider the asteroids problem described above, where we know there are at most n circular obstacles O_1, ..., O_n, traveling along linear trajectories at constant speeds. For simplicity, we will assume that all of the obstacles are of the same diameter, D. In addition, assume that all of the O_i move at the same speed, V_o. Unlike the previous section, we do not assume that we know the location of the obstacles in advance. Instead, the obstacles are allowed to appear, one or more at a time, up to a maximum of n obstacles.
We wish to determine a safety horizon, H̄, such that we know that a safety-preserving path exists as long as all of the obstacles initially appear at a distance of at least H̄ from the robot. The following is a corollary of Theorem 1 above:
Corollary 1 Let H be W(V_o + V_r)/2V_r, where W = nD, V_r is the maximum speed of the robot, and V_o is the (uniform) speed of the obstacles. If each of the obstacles O_i appears at some time t_a(i) ≥ t_0, at a distance greater than H̄ = nH + W from the position of the robot at that time, then there exists a collision-free path from the starting point R to some point S such that none of the O_i will touch S at any time.
Proof (by induction on m, the number of obstacles which have already appeared²): The base case is handled by Theorem 1, since nH + W ≥ H, and we can assume that t_a(1) = t_0. Assume, by induction, that after the first m < n of the obstacles have appeared, there exists a safety-preserving path to some point S_m which is safe from all of the obstacles O_1, ..., O_m. Let R_t denote the location of the robot at time t. If at time t_a(m+1) all of the obstacles O_1, ..., O_m are farther than H from R_a(m+1), then
²This is not induction on n, the maximum number of obstacles which can appear.
Thus we know that at some time, all of the m + 1 obstacles will be outside of the safety region, and so Theorem 1 applies. Thus there exists a (linear) path from S, to some new point ,?&+I. Q.E.D. Since by definition W = nD, it follows that H = n2D((v0 + K-)/K + I), and thus this corollary gives an O(n2) upper bound for the threat horizon in the dynamic asteroids problem. It is also easy to see that B is sufficient to guarantee safety so long as there are never more than n obstacles within fi of the robot R at any single time, even if many more than n obstacles appear over time. Allowing the speed of Obstacles to Range from Vl to V. There is a straightforward generalization of the above theorem in cases where the obstacles are constrained to have constant positive velocities ranging from a lower bound of & to a maximum speed of VO. Corollary 2 Let 01,. . . ,O, be n circular obstacles of fixed diameter D, each constrained to move at a con- stant velocity, Vi, such that tJi Vl 5 Vi 5 VO, where Vj and V, are fixed positive constants. Let W = nD, and H be W(VO + V,)/2V,, where VT is the maximum speed of the robot. If each of the obstacles Oi appears at some time t,(i) 2 to, at a distance greater than N = K&H + W> -_ Vi from the position of the robot at that time, then there exists a collision-free path from the starting point R to some point S such that none of the Oi will touch S at any time. Proof (by induction on m, the number of obsta- cles which have already appeared): The base case is again handled by Theorem 1, since that theorem ap- plies whenever obstacle velocities are constrained by some maximum, VO, and obstacles occur outside the threat horizon H. Since V, 2 q, it follows that, for all values of n, V,n/Vl 2 1, and thus H > H. 
The inductive step is very similar to that for Corollary 1: Assume, by induction, that after the first m < n of the obstacles have appeared, there exists a safety-preserving path to some point S_m which is safe from all of the obstacles O_1, ..., O_m. Let R_t denote the location of the robot at time t. If at time t_a(m+1) all of the obstacles O_1, ..., O_m are farther than H from R_a(m+1), then once again Theorem 1 holds, and a safety-preserving path exists from R_a(m+1) to some point S_{m+1}. If one (or more) of the O_i is within H of the robot, then the robot can wait at S_m until those obstacles have traveled at least a distance H from S_m. In the following, we have to change the earlier proof to account for the fact that slower-moving obstacles may remain near the robot for longer periods of time: in the worst case, this time is (H + D)/V_l (note that (H + D)/V_l ≥ (H + D)/V_o, which was the worst case in the previous proof). Again, another obstacle could move to within H of S_m in this time. Since there are only m < n obstacles, the longest we would have to wait would be m(H + D)/V_l before we can be assured that all of the obstacles are at least H from S_m. In this worst case, the most recent obstacle, O_{m+1}, can travel a distance of up to V_o m(H + D)/V_l (if it happens to have the maximum speed, V_o). In any case, then, it will still be a distance greater than V_o(nH + W)/V_l - V_o m(H + D)/V_l = (n - m)V_o(H + D)/V_l from S_m, and since n > m and V_o ≥ V_l, we know that this distance is greater than H, and Theorem 1 applies. Thus there exists a (linear) path from S_m to some new point S_{m+1}, which is safe with respect to all of the currently visible obstacles. Q.E.D.
Note that this threat horizon grows with the ratio of the maximum speed to the minimum speed of the obstacles, V_o/V_l. If this number is large, the horizon becomes prohibitive.
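The two horizons can be compared directly. The sketch below encodes the formulas of Corollaries 1 and 2 for obstacles of equal diameter D (the function names are ours, not the paper's):

```python
def dynamic_horizon_uniform(n, d, v_o, v_r):
    """Corollary 1: with n obstacles of diameter d, all at speed V_o, a safe
    harbor is guaranteed if each appears farther than n*H + W away, where
    W = n*d and H = W*(V_o + V_r)/(2*V_r)."""
    w = n * d
    h = w * (v_o + v_r) / (2 * v_r)
    return n * h + w

def dynamic_horizon_ranged(n, d, v_l, v_o, v_r):
    """Corollary 2: speeds ranging over [V_l, V_o] inflate the horizon to
    V_o*(n*H + W)/V_l, which blows up as V_l approaches zero."""
    return (v_o / v_l) * dynamic_horizon_uniform(n, d, v_o, v_r)
```

With n = 2, D = 1 and V_o = V_r, the uniform-speed horizon is nH + W = 6 diameters; halving the lower speed bound to V_l = V_o/2 doubles it, illustrating the V_o/V_l growth noted above.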
Intuitively, slower-moving obstacles should be easier to avoid, but in the context of this result, allowing the speed of obstacles to approach zero will make the threat horizon approach infinity. This is a clear indication that the bound is not tight. Even in the case where V_o = V_l, we choose H to account for the case when all n obstacles are a distance H from the robot, but we choose H̄ = nH + W to account for the times when the n obstacles are "evenly" spaced. But when all the O_i are visible, H alone is sufficient, and when the obstacles are evenly spaced, so that exactly one is within H of the robot at any given time, then a horizon of H' = H̄/n = H + D will suffice. With this insight, it is possible to reduce H̄ by a factor of 4, but the resulting horizon is still O(n²). We are currently trying to prove the conjecture that a threat horizon which is linear in the number of obstacles exists.
Goals of Achievement
In mission-critical applications, we can distinguish two components to the planning problem:
1. Achieving Goals - in most applications, the agent will be charged with satisfying (or perhaps optimizing) some goal function, where the successful achievement of a goal has some positive utility, and the failure to achieve a goal is not considered catastrophic (i.e., the utility is negative, but small compared to the cost incurred by a failure to maintain safety).
2. Maintaining Safety - avoiding catastrophic failure is the primary consideration of mission-critical control systems. For our purposes, all catastrophes are equivalent, in the sense that none is considered more or less desirable than any other. Since the cost of failure is extremely high, we seek problem solutions which will guarantee that the agent remains appropriately distant from any and all threats.
In these domains, the constraints imposed by the second component absolutely dominate the influences of the first.
So, for example, a mission-critical system will not attempt to achieve a non-critical goal, no matter what its utility is, unless it can assure itself that it is possible to maintain safety. This dominance allows us to effectively decouple the two components, and to consider the problem of maintaining safety independently of the influences of goals of achievement. This has allowed us to develop and implement a simulation in which a robot can achieve goals while avoiding moving obstacles. (Reif & Sharir 1985) present a search-based solution to the asteroids problem which is exponential in the number of moving obstacles. The high-variance time requirements of this algorithm make it unsuitable for ensuring that obstacles are avoided, but we are using a version of it to determine non-critical, safety-preserving paths to goals, while using a much faster algorithm based upon the results above to ensure safety. Using the Maruti hard real-time operating system (Saksena, da Silva, & Agrawala 1993), we are able to guarantee processing time to the safety-critical routines, and allow the search to use the processor time that is left over when the critical routines have finished. If the search algorithm is able to find a path to a goal quickly, then the system can use it without compromising safety. Otherwise, the lower-level competences for finding and reaching a safe harbor will ensure that the robot remains safe while it searches for a way to achieve its goals.
Conclusions
It is easy to see the limitations of this work in its current form. However, without similar, albeit more comprehensive, results, we cannot deploy mission-critical systems in spatially-situated domains. The time and expense involved in developing hard real-time operating systems running provably correct software is wasted if the system specifications are not sufficient to ensure that catastrophe will be avoided.
Presumably, the application domains will have sufficient restrictions and regularities to allow the development of provably correct behavioral competences. We believe the techniques introduced in this paper provide a basis for reasoning about safety maintenance in spatial domains in particular, and continuous domains in general.

Acknowledgements

This research was supported in part by grants from NSF (IRI-9306580), ONR (N00014-J-91-1451), AFOSR (F49620-93-1-0065), the ARPA/Rome Laboratory Planning Initiative (F30602-93-C-0039), the ARPA I3 Initiative (N00014-94-10907) and ARPA contract DAST-95-C0037. Dr. Hendler is also affiliated with the UM Institute for Systems Research (NSF Grant NSF EEC 94-02384). Thanks to V.S. Subrahmanian for additional support.

References

Bang, T. 1951. A solution of the "plank problem". In Proceedings of the American Mathematical Society, volume 2, 990-993.

Bentley, J. L., and Ottmann, T. A. 1979. Algorithms for reporting and counting geometric intersections. IEEE Transactions on Computers C-28(9):643-647.

Canny, J., and Reif, J. 1987. New lower bound techniques for robot motion planning problems. In Proceedings of the 28th IEEE Symposium on Foundations of Computer Science, 49-60.

Chazelle, B., and Edelsbrunner, H. 1992. An optimal algorithm for intersecting line segments in the plane. Journal of the Association for Computing Machinery 39(1):1-54.

Fujimura, K., and Samet, H. 1989. A hierarchical strategy for path planning among moving obstacles. IEEE Transactions on Robotics and Automation 5(1):61-69.

Kant, K., and Zucker, S. 1986. Towards efficient trajectory planning: Path velocity decomposition. International Journal of Robotics Research 5:72-89.

Lozano-Perez, T., and Wesley, M. 1979. An algorithm for planning collision-free paths among polyhedral obstacles. Communications of the ACM 22(10):560-570.

Mehlhorn, K. 1984. Multi-dimensional Searching and Computational Geometry, volume 3.
New York, Berlin, Heidelberg: Springer-Verlag.

Musliner, D. J.; Durfee, E. H.; and Shin, K. G. 1995. World modeling for the dynamic construction of real-time control plans. Artificial Intelligence 74(1):83-127.

Reif, J., and Sharir, M. 1985. Motion planning in the presence of moving obstacles. In Proceedings of the 25th IEEE Symposium on Foundations of Computer Science, 144-154.

Saksena, M.; da Silva, J.; and Agrawala, A. 1993. Design and implementation of Maruti. Technical Report CS-TR-3181, University of Maryland Department of Computer Science.
Recognizing and Interpreting Gestures on a Mobile Robot

David Kortenkamp, Eric Huber, and R. Peter Bonasso
Metrica, Inc.
Robotics and Automation Group
NASA Johnson Space Center - ER2
Houston, TX 77058
korten@mickey.jsc.nasa.gov

Abstract

Gesture recognition is an important skill for robots that work closely with humans. Gestures help to clarify spoken commands and are a compact means of relaying geometric information. We have developed a real-time, three-dimensional gesture recognition system that resides on-board a mobile robot. Using a coarse three-dimensional model of a human to guide stereo measurements of body parts, the system is capable of recognizing six distinct gestures made by an unadorned human in an unaltered environment. An active vision approach focuses the vision system's attention on small, moving areas of space to allow for frame rate processing even when the person and/or the robot are moving. This paper describes the gesture recognition system, including the coarse model and the active vision approach. This paper also describes how the gesture recognition system is integrated with an intelligent control architecture to allow for complex gesture interpretation and complex robot action. Results from experiments with an actual mobile robot are given.

Introduction

In order to work effectively with humans, robots will need to track and recognize human gestures. Gestures are an integral part of communication. They provide clarity in situations where speech is ambiguous or noisy (Cassell 1995). Gestures are also a compact means of relaying geometric information. For example, in robotics, gestures can tell the robot where to go, where to look and when to stop. We have implemented a real-time, three-dimensional gesture recognition system on a mobile robot.
Our robot uses a stereo vision system to recognize natural gestures such as pointing and hand signals and then interprets these gestures within the context of an intelligent agent architecture. The entire system is contained on-board the mobile robot, tracks gestures at frame rate (30 Hz), and identifies gestures in three-dimensional space at speeds natural to a human.

Gestures for mobile robots

Gesture recognition is especially valuable in mobile robot applications for several reasons. First, it provides a redundant form of communication between the user and the robot. For example, the user may say "Halt" at the same time that they are giving a halting gesture. The robot need only recognize one of the two commands, which is crucial in situations where speech may be garbled or drowned out (e.g., in space, underwater, on the battlefield). Second, gestures are an easy way to give geometrical information to the robot. Rather than give coordinates to where the robot should move, the user can simply point to a spot on the floor. Or, rather than try to describe which of many objects the user wants the robot to grasp, they can simply point. Finally, gestures allow for more effective communication when combined with speech recognition by grounding words such as "there" and "it" with recognizable objects.

However, mobile robot applications of gesture recognition impose several difficult requirements on the system. First, the gesture recognition system needs to be small enough to fit onto the mobile robot. This means that processing power is limited and care must be taken to design efficient algorithms. Second, the system needs to work when the robot and the user are moving, when the precise location of either is unknown and when the user may be at different distances from the robot. It is also likely that objects will be moving in the background of the image.
Third, precise calibration of cameras is difficult if not impossible on a mobile platform that is accelerating and decelerating as it moves around. Finally, the system needs to work at a speed that is comfortable for human tasks; for example, the halting gesture needs to be recognized quickly enough to halt the robot within a reasonable time. In this paper we present a gesture recognition system that meets these requirements.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Figure 1: A mobile robot with on-board vision system.

Related work

Gesture recognition has become a very important research area in recent years and there are several mature implementations. The ALIVE system (Darrell et al. 1994) allows people to interact with virtual agents via gestures. The ALIVE system differs from ours in that the cameras are fixed (i.e., not on a mobile platform as ours are) and that it requires a known background. Similar restrictions hold for a system by Gavrila and Davis (Gavrila & Davis 1995). The Perseus system (Kahn et al. 1996) uses a variety of techniques (e.g., motion, color, edge detection) to segment a person and their body parts. Based on this segmentation the Perseus system can detect pointing vectors. This system is very similar to ours in that it is mounted on a mobile robot and integrated with robot tasks. The Perseus system differs from ours in that it requires a static background, doesn't detect gestures in three dimensions and relies on off-board computation, which can cause delays in recognition of gestures. Wilson and Bobick (Wilson & Bobick 1995) use a hidden Markov model to learn repeatable patterns of human gestures. Their system differs from ours in that it requires people to maintain strict constraints on their orientation with respect to the cameras.
Recognizing gestures

Our gesture recognition system consists of a stereo pair of black and white cameras mounted on a pan/tilt/verge head that is, in turn, mounted on the mobile robot (see Figure 1). The basis of our stereo vision work is the PRISM-3 system developed by Keith Nishihara (Nishihara 1984). The PRISM-3 system provides us with low-level spatial and temporal disparity measurements. We use these measurements as input to our algorithms for gesture recognition.

Figure 2: A proximity space.

Our gesture recognition process has two components. First, we concentrate our vision system's attention on small regions of 3-D visual space. We call these regions proximity spaces. These spaces are designed to react to the visual input much as a robot reacts to its sensory input. Second, we spawn multiple proximity spaces that attach themselves to various parts of the agent and are guided by a coarse, three-dimensional model of the agent. The relationships among these proximity spaces give rise to gesture recognition. Each of these two components is described in the following two subsections, followed by their application to gesture recognition.

The proximity space method

The active vision philosophy emphasizes concentrating measurements where they are most needed. An important aspect of this approach is that it helps limit the number of measurements necessary while remaining attentive to artifacts in the environment most significant to the task at hand. We abide by this philosophy by limiting all our measurements in and about cubic volumes of space called proximity spaces. Within the bounds of a proximity space, an array of stereo and motion measurements are made in order to determine which regions of the space (measurement cells) are occupied by significant proportions of surface material, and what the spatial and temporal disparities are within those regions.
Surface material is identified by detecting visual texture, i.e., variations in pixel values across regions of the LOG (Laplacian of Gaussian) filtered stereo pair (see Figure 2).

The location of a proximity space is controlled by behaviors generating vectors that influence its motion from frame to frame. Behaviors generate motion vectors by assessing the visual information within the proximity space. There are behaviors for following, clinging, pulling, migrating to a boundary and resizing (which does not generate a motion vector, but a size for the proximity space). Patterned after the subsumption architecture (Brooks 1986), these behaviors compete for control of the proximity space. In dynamic terms, the proximity space acts as an inertial mass and the behaviors as forces acting to accelerate that mass (see (Huber & Kortenkamp 1995) for a more detailed description).

Figure 3: Coarse 3-D model of a human used for gesture recognition.

Chaining multiple proximity spaces using a human model

In order to recognize gestures, multiple proximity spaces are spawned, which attach themselves to various body parts in the image of the gesturing person. Each of these proximity spaces has its own set of behaviors independently controlling its location in space. However, these behaviors are constrained by a coarse three-dimensional, kinematic model of a human that limits their range of motion. With perfect tracking there would be no need for a model, as the proximity spaces would track the body parts in an unconstrained manner. However, real-world noise may sometimes cause the proximity spaces to wander off of their body parts and begin tracking something in the background or another part of the body. While the behaviors acting on the proximity spaces continue to generate motion vectors independent of the model, the final movement of the proximity spaces is overridden by the model if the generated vectors are not consistent with the model.
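The dynamics described in this section — behaviors acting as forces on an inertial mass, with the model vetoing inconsistent motion — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the function names, Euler integration, and veto policy are our own assumptions.

```python
def update_proximity_space(pos, vel, behaviors, consistent_with_model,
                           mass=1.0, dt=1.0 / 30):
    """One frame of proximity-space motion. Each behavior (e.g. follow,
    cling, pull, migrate) returns a force-like 3-D vector; the space
    integrates them as an inertial mass. If the resulting move violates
    the kinematic human model, the model overrides it (here: hold still)."""
    force = (0.0, 0.0, 0.0)
    for b in behaviors:
        f = b(pos)
        force = tuple(a + c for a, c in zip(force, f))
    new_vel = tuple(v + (f / mass) * dt for v, f in zip(vel, force))
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    if not consistent_with_model(new_pos):   # model override
        return pos, (0.0, 0.0, 0.0)
    return new_pos, new_vel
```

Treating the behaviors as competing forces (rather than letting one behavior win outright) matches the paper's "inertial mass" description, while the model check captures the constraint that tracking may never leave the kinematically plausible region.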
We have chosen a model that resembles the human skeleton, with similar limitations on joint motion. The model, shown in Figure 3, consists of a head connected to two shoulder joints. There are four proximity spaces that are tracking various parts of the body. The first proximity space (PS1) is tracking the head. Its behaviors allow it to move freely, but are also biased to migrate it upward. The second proximity space (PS2) tracks the shoulder. PS1 and PS2 are connected by two links, L1 and L2. These links are stiff springs that allow very little movement along their axes. The lengths of L1 and L2 are set at the start of tracking and do not change during tracking. We have a procedure for automatically determining the lengths of the links from the height of the person being tracked, which is described later in the paper. The connection between L1 and L2 (J1) acts as a ball joint. The motion of L2 relative to L1 (and thus, PS2 relative to PS1) is constrained by this joint to be 0 deg in the up-down direction (i.e., shrugging of the shoulders will not affect the location of the proximity space). The joint J1 also constrains the movements of L2 relative to L1 by ±38 deg in the in-out direction (i.e., towards and away from the camera). This means that the person must be facing the cameras to within 38 deg for the gesture recognition system to work correctly. The third proximity space (PS3) is tracking the end of the upper arm and is connected to PS2 by the link L3. Again, L3 is modeled as a very stiff spring with a length fixed at the start of tracking. L3 is connected to L2 with a ball joint (J2). L3 can move relative to L2 up at most 75 deg and down at most 70 deg. It can move into or out of perpendicular to the camera by at most ±45 deg. This means that a pointing gesture that is towards or away from the robot by more than 45 deg will not be recognized.
Finally, the fourth proximity space (PS4) tracks the end of the lower arm (essentially the hand) and is connected to PS3 with link L4 at joint J3. The range of motion of L4 relative to L3 at J3 is up at most 110 deg, down at most -10 deg and into or out of perpendicular with the camera by at most ±45 deg. All of the links in the model are continuously scaled based on the person's distance from the cameras.

One limitation of our gesture recognition system is that it can only track one arm at a time. There are two reasons for this limitation. First, we do not have enough processing power to calculate the locations of seven proximity spaces at frame rate. Second, when both arms are fully extended the hands fall out of the field of view of the cameras. If the person backs up to fit both hands into the field of view of the cameras then the pixel regions of the arms are too small for tracking. The arm to be tracked can be specified at the start of the tracking process and can be switched by the user. While Figure 3 only shows the model of the right arm for simplicity, the model for the left arm is simply a mirror image of the right.

Figure 4: Recognizable gestures.

Experiments, which are described in more detail later, show that our system can recognize gestures from distances as close as 1.25 meters from the cameras (at which point the person's arm extends out of the field of view of the cameras) to as far as 5 meters away from the robot. The system can track a fully extended arm as it moves at speeds up to approximately 36 deg per second (i.e., a person can move their arm in an arc from straight up to straight down in about five seconds without the system losing track).

Acquisition and reacquisition using the model

Gesture recognition is initiated by giving the system a camera-to-person starting distance and a starting arm.
Four proximity spaces are spawned and lie dormant waiting for some texture to which to attach themselves. As a person steps into the camera view at approximately the starting distance, the head proximity space will attach itself to the person and begin migrating towards the top, stopping when it reaches the boundary of the person. The other three proximity spaces are pulled up along with the head by their links. While the person's arm is at their side, these proximity spaces are continually sweeping arcs along the dashed arrows shown in Figure 3 looking for texture to which to attach themselves. When the arm is extended the three proximity spaces "lock onto" the arm and begin tracking it. If they lose track (e.g., the arm moves too fast or is occluded) they begin searching again along the dashed arcs shown in Figure 3. If the head proximity space loses track it begins an active search starting at the last known location of the head and spiraling outward. Many times this re-acquisition process works so quickly that the user never realizes that tracking was lost.

Defining gestures

Figure 4 shows the gestures that are currently recognized by the system. These gestures are very easily determined by looking at the relative angles between the links L2, L3 and L4 at the joints J2 and J3 (see Figure 3). Let's call the angle between L2 and L3 (i.e., the upper arm angle) θ1 and the angle between L3 and L4 (i.e., the lower arm angle) θ2. 0 deg is straight out to the left or right. Then, if θ1 < -50 deg and θ2 < 45 deg the gesture is relaxed. If θ1 < -50 deg and θ2 > 45 deg the gesture is thumbing. If -50 deg < θ1 < 45 deg and θ2 < 45 deg the gesture is pointing. If -50 deg < θ1 < 45 deg and θ2 > 45 deg the gesture is halt. If θ1 > 45 deg and θ2 < 45 deg the gesture is raised. If θ1 > 45 deg and θ2 > 45 deg the gesture is arched. Thus, the person is always producing some kind of gesture based on the joint angles.
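The thresholds above map directly to a small classifier. This is a minimal sketch using the angle ranges quoted in the text; the behavior exactly at the -50 deg and 45 deg boundaries is our choice, since the text leaves those cases unspecified.

```python
def classify_gesture(theta1, theta2):
    """Classify one of the six gestures from the upper-arm angle theta1
    (between links L2 and L3) and the lower-arm angle theta2 (between
    L3 and L4), both in degrees, with 0 deg straight out to the side."""
    forearm_up = theta2 > 45
    if theta1 < -50:                      # arm hanging down
        return "thumbing" if forearm_up else "relaxed"
    elif theta1 <= 45:                    # arm roughly level
        return "halt" if forearm_up else "pointing"
    else:                                 # arm above 45 deg
        return "arched" if forearm_up else "raised"
```

Because every (θ1, θ2) pair falls into exactly one region, the classifier always returns a label, which matches the observation that the person is always producing some gesture.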
Gesture recognition is not immediate as that may lead to many spurious gestures. Confidence in a gesture is built up logarithmically over time as the angles stay within the limits for the gesture. When the logarithmic confidence passes a threshold (0.8 in our experiments) then the gesture is reported by the system. That gesture continues to be reported until the confidence drops below the threshold.

This gesture recognition technique does not currently support recognizing gestures that occur over time (e.g., a waving gesture). We believe that our approach, because it is active, lends itself to this and we are working towards implementing it.

Connecting gestures to robot action

Simply recognizing gestures is not enough for them to be useful; they need to be connected to specific robot actions. For the last several years we have been working on an intelligent control architecture, known as 3T, which can integrate reactive vision and robot processes with more deliberative reasoning techniques to produce intelligent, reactive robot behavior (Bonasso et al. 1995). The architecture consists of three layers of control: skills, sequencing and planning. Only the first two layers (skills and sequencing) have been used in the system described in this paper. The next two subsections will describe the skills of our robot and how those skills can be intelligently sequenced to perform tasks.

Visual skills

Skills are the way in which the robot interacts with the world. They are tight loops of sensing and acting that seek to achieve or maintain some state. Skills can be enabled or disabled depending on the situation and the set of enabled skills forms a network in which information passes from skill to skill. Figure 5 shows the skill network for our work in gesture recognition.
skills labeled vf h are obstacle avoidance and robot mo- tion skills base on the Vector Field Histogram method (Borenstein & Koren 1991). They take a goal location, generated from any skill, and move the robot to that goal location. The move-to-point, the track-agent and the recognize-gesture skills allow can provide goal locations to the vfh skills. The recognize-gesture skill encapsulates the pro- cesses described in the previous section and produces one of the five gestures or no gesture as output. It also generates as output the (x,y,z) locations of the four proximity spaces when the gesture was recognized. The next several subsections described the more inter- esting gesture recognition skills in detail. Moving to a point This skill produces an (x,y) goal for the robot corresponding to the location at which the person is pointing. This skill takes the (x,y,z) location of the centroid of the shoulder proximity space (PS2 in Figure 3) and the hand proximity space (PS4 in Figure 3) and computes a three-dimensional vector. It then determines the intersection point of this vector with the floor. Assuming the vector does intersect with the floor, the skill begins generating a goal for the robot and the motion control and obstacle avoidance skills move the robot to that point. We conducted a number of experiments to determine the accuracy of the pointing gesture. The experimental set-up was to have a person point to a marked point on the floor. The vision system would recognize’ the pointing gesture and the move-to-point skill would determine the intersection of the pointing vector with the floor. We would then compare this point with the actual location of the target. We choose eight different target points on the floor in various directions and at various distances. We pointed five times at each target point. Two different people did the’ point, both of them familiar with the system. 
No feedback was given to the user between trials.

Figure 6: A sample of experimental results. The person is standing directly in front of the robot and pointing at different points on the floor (black circles). The 'X' is the point that the robot calculated as the intersection between the pointing gesture and the floor.

Figure 6 shows a sample of those points and the system's performance. For the five points that were between 2.5 and 4.5 meters away from the person, the mean error distance from the target to the vector intersection was 0.41 meters, with a standard deviation of 0.17. As the distance from the person to the target grew the error also grew rapidly, up to a mean error of over 3 meters at 5.5 meters away. These results need to be taken with a grain of salt. There are several factors that can introduce errors into the system and that cannot be accounted for, including: how accurately a person can actually point at a spot; the initial accuracy of the robot both in position and orientation; and the tilt of the robot due to an uneven floor.

Acquiring along a vector

When this skill is enabled, a pointing gesture will result in the vision system searching along the pointing vector and stopping if it acquires a distinct object. The vision system then begins tracking that object. This skill takes the (x,y,z) locations of the centroids of the shoulder proximity space and the hand proximity space and computes a three-dimensional vector. The skill then causes the vision system to search through a tube of space surrounding that vector until a patch of significant texture is encountered. The skill stops searching after a certain distance, which is passed to the skill as a parameter at run time. Informal experiments allowed two people standing about 2 meters apart to "pass control" of the system back and forth by pointing at each other. The system successfully moved from person to person over 10 consecutive times.
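The floor-intersection computation used by the move-to-point skill can be sketched as a ray-plane intersection. This is an illustrative reconstruction: the function name and the z-up coordinate convention are our assumptions, not the paper's interface.

```python
def floor_intersection(shoulder, hand, floor_z=0.0):
    """Intersect the shoulder->hand pointing ray with the floor plane
    z = floor_z. shoulder and hand are (x, y, z) centroids of PS2 and
    PS4. Returns the (x, y) target, or None when the ray points level
    or upward and so never reaches the floor."""
    (sx, sy, sz), (hx, hy, hz) = shoulder, hand
    dz = hz - sz
    if dz >= 0:                       # pointing at or above the horizon
        return None
    t = (floor_z - sz) / dz           # ray parameter, t > 0 past the hand
    return sx + t * (hx - sx), sy + t * (hy - sy)
```

This geometry also explains the error growth reported above: for a nearly level pointing ray, a small angular error in the shoulder-hand vector moves the floor intersection a long way.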
Tracking an agent

While the vision system is recognizing gestures it tracks the person's head. The position of the person's head is converted to a goal for the robot and the robot moves, under local obstacle avoidance, towards that goal. The speed of the robot is set at a maximum of 0.4 meters per second, but the robot moves more slowly as it approaches the person it is tracking or as it maneuvers to avoid obstacles. We have successfully tracked people for periods of twenty to thirty minutes in previous work (Huber & Kortenkamp 1995). For this work we added gesture recognition and allowed the person to stop the robot by giving the "halting" gesture. When the robot detects this gesture it stops moving, but the robot's head continues to track the person and the vision system continues to perform gesture recognition. The robot resumes moving when the person gives a "raised" gesture.

Determining tracking height

The coarse 3-D model used for gesture recognition requires a rough estimate of the height of the person. For this reason we have implemented a skill that will automatically acquire the height of a person being tracked and reset the 3-D model on the run. This skill uses the height of the centroid of the head proximity space as the height of the person. Experiments on five people ranging from 1.60m to 1.90m tall showed that the system estimated their height to within an average error of 0.07m. This is well within the gesture recognition system's tolerance for tracking based on a fixed model.

Interpreting gestures in task contexts

Our target environments involve robots working with astronauts in space or on planetary surfaces. Recently, in support of these environments, we have begun to investigate human-robot interaction through gesturing. Wanting to exploit the skills described above in as many situations as possible, we have observed that in many tasks a human pointing gesture can have a wide range of interpretations depending on the task.
The middle layer of our 3T architecture is the RAPs system (Firby 1994). A reactive action package (RAP) specifies how and when to carry out routine procedures through conditional sequencing. As such, a RAP provides a way to interpret gestures through context-limiting procedures of action.

Finding an agent to track

One example of interpreting the same gesture in two different contexts can be shown in the task of pointing out an agent to be tracked. In our research we have noted that the designator agent can simply point to the designated agent's feet and the robot can use the move-to-point skill. But when the designated agent is some distance away from the designator, the acquire-along-vector skill, while slower, is less error prone. We devised a two-step gesture approach wherein the first gesture tells the robot the method to be used to designate the agent to be tracked, and the second gesture would be the pointing gesture itself.

(define-rap (respond-to-gesture ?agent)
  (method motion-gesture
    (context (or (current-gesture "Pointing")
                 (current-gesture "Halting")))
    (task-net
      (t1 (interpret-gesture-for-motion ?agent))))
  (method normal-acquisition
    (context (current-gesture "Raised"))
    (task-net
      (sequence
        (t1 (speak "Point to the agent's feet"))
        (t2 (interpret-gesture-for-tracking ?agent)))))
  (method long-range-acquisition
    (context (current-gesture "Arched"))
    (task-net
      (sequence
        (t1 (speak "Point at the agent"))
        (t2 (find-agent-along-vector ?agent))))))

Figure 7: RAP that uses task context to interpret a gesture.

Figure 7 shows this RAP (simplified for the purposes of this paper). This RAP assumes a gesture has been received. If it is a pointing or halting gesture, a lower level RAP is called to stop the robot or to move to a point on the floor.
If the gesture received is "raised", the usual tracking RAP will be invoked (interpret-gesture-for-tracking), which gets a pointing gesture, computes the point on the floor, and then looks for an agent at the appropriate height above that point. If, on the other hand, the arched gesture is detected, the find-agent-along-vector RAP will be invoked to get a pointing gesture and find an agent somewhere along the indicated vector. That RAP also enables the tracking skill.

The higher level RAP in Figure 8 sets up a single gesture (such as go to place) or the first of a two gesture sequence. This RAP has three methods depending on whether there is a gesture stored in the RAP memory. Normally, there is no current gesture and the robot must look for the designating agent, get a gesture, and respond appropriately (as described in the previous RAP). Once a gesture task is completed, memory rules associated with lower level RAPs will remove the used gestures from the RAP memory.

(define-rap (get-and-respond-to-gesture ?agent)
  (succeed (or (last-result timeout)
               (last-result succeed)))
  (method no-current-gesture
    (context (not (current-gesture ?g)))
    (task-net
      (sequence
        (t1 (find-agent ?agent))
        (t2 (get-gesture ?agent))
        (t3 (respond-to-gesture ?agent)))))
  (method useful-current-gesture
    (context (and (current-gesture ?g)
                  (not (= ?g "Halting"))))
    (task-net
      (sequence
        (t1 (recognize-gesture ?agent))
        (t2 (respond-to-gesture ?agent)))))
  (method current-halt-gesture
    (context (and (current-gesture ?g)
                  (= ?g "Halting")))
    (task-net
      (sequence
        (t1 (speak "I need another gesture"))
        (t2 (find-agent-at ?agent))
        (t3 (get-gesture ?agent))
        (t4 (respond-to-gesture ?agent))))))

Figure 8: RAP that sets up the gesture recognition process.

But sometimes a lower level RAP will fail, e.g., when the designated agent can't be found, and a gesture such as "Raised" will remain in the RAP memory.
Thus, in the second method, a gesture other than halting is current and the robot will turn on the recognize-gesture skill (for subsequent gestures) and attempt to carry out (retry) the task indicated by the current gesture. In some cases, the robot will receive an emergency halting gesture before a lower level RAP is completed, such as in the middle of a movement. If this happens the robot's last recollection of a gesture will be "Halting." In these cases, the robot tells the designating agent that they need to start over, and continues as in the first method. These RAPs do not show the details of enabling actual skills; see (Bonasso et al. 1995) for details of how this works.

Conclusions

Our goal is to develop technologies that allow for effective human/robot teams in dynamic environments. The ability to use the human's natural communication tendencies allows the robot to be more effective and safer when working among humans. The contributions of our system include a demonstration of gesture recognition in real-time while on-board a mobile robot. The system does not require the user to wear any special equipment nor does it require that the robot, user or background be static. Our contributions also include integrating the gesture recognition system with an intelligent agent architecture that can interpret complex gestures within task contexts. This complete system is a first step towards realizing effective human/robot teams. In the future we hope to extend the system by recognizing gestures over time and by integrating gesture recognition with speech recognition.

References

Bonasso, R. P.; Kortenkamp, D.; Miller, D. P.; and Slack, M. 1995. Experiences with an architecture for intelligent, reactive agents. In Proceedings 1995 IJCAI Workshop on Agent Theories, Architectures, and Languages.

Borenstein, J., and Koren, Y. 1991. The Vector Field Histogram for fast obstacle-avoidance for mobile robots. IEEE Journal of Robotics and Automation 7(3).
Brooks, R. A. 1986. A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation 2(1).

Cassell, J. 1995. Speech, action and gestures as context for on-going task-oriented talk. In Working Notes of the 1995 AAAI Fall Symposium on Embodied Language and Action.

Darrell, T. J.; Maes, P.; Blumberg, B.; and Pentland, A. 1994. A novel environment for situated vision and behavior. In Workshop on Visual Behaviors: Computer Vision and Pattern Recognition.

Firby, R. J. 1994. Task networks for controlling continuous processes. In Proceedings of the Second International Conference on AI Planning Systems.

Gavrila, D. M., and Davis, L. 1995. 3-D model-based tracking of human upper body movement: A multi-view approach. In IEEE Symposium on Computer Vision.

Huber, E., and Kortenkamp, D. 1995. Using stereo vision to pursue moving agents with a mobile robot. In 1995 IEEE International Conference on Robotics and Automation.

Kahn, R. E.; Swain, M. J.; Prokopowicz, P. N.; and Firby, R. J. 1996. Gesture recognition using the Perseus architecture. Computer Vision and Pattern Recognition.

Nishihara, H. 1984. Practical real-time imaging stereo matcher. Optical Engineering 23(5).

Wilson, A., and Bobick, A. 1995. Configuration states for the representation and recognition of gesture. In International Workshop on Automatic Face and Gesture Recognition.
Classifying and Recovering from Sensing Failures in Autonomous Mobile Robots

Robin R. Murphy and David Hershberger
Center for Robotics and Intelligent Systems
Colorado School of Mines
Golden, CO 80401-1887
phone: (303) 273-3874 fax: (303) 273-3975
{rmurphy,dhershbe}@mines.edu

Abstract

This paper presents a characterization of sensing failures in autonomous mobile robots, a methodology for classification and recovery, and a demonstration of this approach on a mobile robot performing landmark navigation. A sensing failure is any event leading to defective perception, including sensor malfunctions, software errors, environmental changes, and errant expectations. The approach demonstrated in this paper exploits the ability of the robot to interact with its environment to acquire additional information for classification (i.e., active perception). A Generate and Test strategy is used to generate hypotheses to explain the symptom resulting from the sensing failure. The recovery scheme replaces the affected sensing processes with an alternative logical sensor. The approach is implemented as the Sensor Fusion Effects Exception Handling (SFX-EH) architecture. The advantages of SFX-EH are that it requires only a partial causal model of sensing failure, the control scheme strives for a fast response, tests are constructed so as to prevent confounding from corroborating sensors which have also failed, and the logical sensor organization allows SFX-EH to be interfaced with the behavioral level of existing robot architectures.

Introduction

The transfer of autonomous mobile robot (AMR) technology to applications in manufacturing, defense, space, hazardous waste cleanup, and search and rescue missions has been impeded by a lack of mechanisms to ensure robust and certain sensing. The actions of an AMR depend on its perception; if perception is faulty and goes unnoticed, the robot may "hallucinate" and act incorrectly.
One key mechanism for robust sensing is fault-tolerance: the ability to detect sensing failures and either recover from them in such a way as to allow the robot to resume performance of its task(s) or to gracefully degrade. Previous work in robotic sensing has demonstrated how certain types of sensing failures can be detected either at the behavioral (i.e., self-monitoring) (Ferrell 1993; Murphy & Arkin 1992) and/or deliberative layer (i.e., global monitoring) (Hughes 1993; Noreils & Chatila 1995). An open research question is how to recover from these failures. In the general case, recovery requires identification of the source of the problem; if the cause is not known, the wrong response may be employed. Detection of a failure does not necessarily mean that the cause is known. For example, in (Murphy 1992), three different problems which interfered with sensing in a security robot (sensor drift, incorrect placement of the robot, sensor malfunction) evinced the same symptom: a lack of consensus between the observations. The appropriate response to each problem was significantly different (recalibrate the offending sensor, rotate the robot until it reached the correct view, and replace the damaged sensor with an alternative, respectively). However, the correct response was known once the cause was identified. While classification is essential for the general case, it may be unnecessary in situations where the recovery options are limited, i.e., "do whatever works" (Payton et al. 1992).

This paper presents a symbolic AI approach to classifying and recovering from sensing failures. The characteristics of the AMR domain are differentiated from typical diagnosis applications (e.g., medicine, geological interpretation) in the next section. Related work in problem solving and diagnosis for robotic sensing follows. An overview of the approach taken in this paper is given next.
Classification of errors is done with a novel extension of the basic Generate and Test strategy developed for Dendral (Lindsay et al. 1980), with contributions from Generate, Test, Debug (Simmons & Davis 1987). This classification scheme takes advantage of the robot's ability to actively use other sensors and feature extraction algorithms to test hypotheses about the sensing failure; it can be considered a form of active perception (Bajcsy 1988). The classification and recovery scheme is implemented as the exception handling (EH) portion of the Sensor Fusion Effects (SFX) architecture. Demonstrations of SFX-EH on a mobile robot with a landmark navigation behavior are reviewed. The paper concludes with a summary and brief discussion, including on-going research efforts.

Sensing Failures in AMRs

A characterization of sensing failures in AMRs is useful at this point for two reasons. First, it provides the context for justifying the approach taken in this paper. Second, it will distinguish classifying and recovering from sensing failures for AMRs from the connotations associated with general diagnosis in other domains such as medicine and the identification of geological features. The unique attributes of this domain are:

The class of sensing failures includes more than sensor failures. For the purposes of this paper, a sensing failure is defined as any event leading to defective perception. These events may stem from sensor hardware malfunctions, bugs in the perceptual processing software (e.g., does not work in a particular situation), changes in the environment which negatively impact sensing either at the hardware or software level (e.g., turning the lights off), or errant expectations (e.g., looking for the wrong thing at the wrong time).
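As an illustrative aside, the four failure sources defined above form a small taxonomy; a sketch in Python (the enum and its names are assumptions for illustration, not part of SFX-EH):

```python
# The four sources of sensing failure defined in the text, as a simple
# taxonomy. Illustrative sketch only; names are not from the paper.
from enum import Enum, auto

class SensingFailureSource(Enum):
    SENSOR_MALFUNCTION = auto()    # hardware fault, e.g., a dead camera
    SOFTWARE_DEFECT = auto()       # perceptual code fails in a situation
    ENVIRONMENTAL_CHANGE = auto()  # e.g., the lights are turned off
    ERRANT_EXPECTATION = auto()    # looking for the wrong thing at the wrong time

assert len(SensingFailureSource) == 4
```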
The inclusion of software defects, environmental change, and errant expectations as sources of sensing failures makes classification particularly challenging. Indeed, one of the motivations for (Payton et al. 1992) is to avoid having to attempt to identify software defects. These sources of faulty sensing have the potential to interrupt progress, especially changes in the environment. Exploiting the environment is a fundamental principle of behavioral robotics. However, (Howe & Cohen 1990) note the difficulty of designing agents that can tolerate environmental change. Since AMRs function in an open world, this suggests that this difficulty will be exacerbated and environmental change will be a significant source of problems as robots are deployed in more demanding settings.

Sensing failures occur frequently, but different types occur infrequently. (Ferrell 1993) noted that in experiments with Hannibal, a hexapod robot with over 100 sensors, a hardware sensor failure occurred approximately once every two weeks. Our experience with two different mobile robots is consistent.

It is unrealistic and undesirable to attempt to explicitly model all possible failure modes. (Velde & Carignan 1984) devised one such explicit modeling scheme. However, this scheme assumed that all sensors were of the same type and their observations could be correlated statistically. But it begs the issue of how to acquire statistical data about a set of events, when, by definition, the very members of that set may not be known a priori. The difficulties are increased as roboticists turn to multiple sensors (sensor fusion).
Modeling the interactions between sensors for the environment and the task leads to a combinatorial explosion with a statistical method such as (Velde & Carignan 1984; Weller, Groen, & Hertzberger 1989), again ignoring that a sensing failure may result from a never encountered or unanticipated event. Even if multi-sensor modeling could be done satisfactorily, the causal models are unlikely to be portable to new sensor configurations and application domains.

An AMR can actively perceive. One advantage that an AMR has is that it can acquire new information by deliberately engaging its environment, per active perception (Bajcsy 1988), and/or by extracting new meanings from previous observations (e.g., examining the recent history of measurements).

An AMR may have both redundant and complementary sensing modalities. The trend in robotic sensing is to use a small set of general purpose sensors. Some sensors may be redundant (i.e., two or more of the same sensor). However, the majority of sensors are likely to be complementary. For example, at the AAAI Mobile Robot Competitions, the entries are invariably equipped with vision and sonar. This makes classification challenging because the scheme cannot assume that there is an alternative sensor which can directly corroborate a suspect sensor; instead, inferences from the behaviors of other sensors will have to be made.

Exception handling is a secondary function in an AMR. In other domains, diagnosis is the primary task. In an AMR, sensing failures can be viewed as exceptions which cause the robot's progress to be suspended. Reliable sensing must be reestablished before the robot can resume the behavior and complete the intended task. However, an AMR may have only a finite time to spend on exception handling. It can't remain indefinitely in a hostile environment such as Three Mile Island or an outgassing Near Earth Object without increasing the risk of a hardware failure from radiation or catastrophe.
Therefore, the time dedicated to exception handling is an important consideration in the development of any classification and recovery scheme.

Exception handling must be integrated with the whole system. To see how sensing failures impact the whole system, consider the following examples. First, because the robot cannot act correctly without adequate sensing, an AMR must cease execution of the failed behavior and possibly revert to a stand-by, defensive mode if it cannot continue other behaviors. This requires information about sensing failures to be propagated to the behavioral or task manager. If the behavior cannot recover quickly, the mission planner aspect of the robot must be informed so that it can replan or abort the mission. Second, since classification and recovery may involve active perception, contention for sensing resources may occur, e.g., is it safe to take away sensor X from behavior Y and point it in a different direction? Contention resolution requires knowledge about the robot's goals, interchangeability of sensors, etc., making exception handling a process which must communicate with other modules in the robot architecture. Third, if the source of a sensing failure is used by other behaviors, the recovery scheme should include replacing the failed component in the other behaviors which may be hallucinating, as well as the behavior that first detected the problem.

The attributes of the classification and recovery task for AMRs itemized above lead to a characterization of a desirable exception handling mechanism and an appropriate problem solving strategy. This exception handler is intended to be applicable to any AMR sensing configuration. Because sensor failures occur frequently and suspend progress of the robot, exception handling must attempt to effect a timely recovery.
The exception handler must interact with the task manager to prevent unsafe actions from occurring during classification and recovery. The exception handling scheme can reduce the down time by exploiting any situations where a recovery scheme can be directly invoked, either because the symptom clearly defines the cause or because all possible causes result in the same response. It should continue to attempt to identify the source of the sensing failure in a background mode if it invokes a direct recovery scheme. The exception handler can use active perception to overcome the open world assumption and the resultant difficulty in constructing a complete model of failure modes. But active perception leads to a new issue of how to safely reallocate sensing resources from other behaviors (if needed) to identify the source of the problem. Therefore, the exception handling mechanism must be a global, or deliberative, process in order to reason about possible corroborating sensors which may not be allocated to it. These sensors may or may not be redundant. The mechanism must be able to handle contention resolution or communicate its needs to the appropriate sensor allocation process. When the exception handler identifies the source of the failure, it propagates the information to other behaviors so they don't hallucinate or go into a redundant classification and recovery cycle.

Related Work

As noted in the introduction, detection, classification, and recovery from sensing failures in mobile robots has been addressed by (Noreils & Chatila 1995), (Ferrell 1993) and (Payton et al. 1992). Other noteworthy efforts are those by (Weller, Groen, & Hertzberger 1989), (Velde & Carignan 1984), (Hanks & Firby 1990), and (Chavez & Murphy 1993). (Weller, Groen, & Hertzberger 1989) and (Velde & Carignan 1984) deal with sensor errors in general.
(Weller, Groen, & Hertzberger 1989) create modules for each sensor containing tests to verify the input based on local expert knowledge. Environmental conditions determine whether a test can be performed or not. The partitioning of problem space by symptom is based on these modules. The approach taken in this paper follows (Weller, Groen, & Hertzberger 1989), testing corroborating sensors before using them for error classification.

(Hanks & Firby 1990) propose a planning architecture suitable for mobile robots. As with (Noreils & Chatila 1995), a plan failure triggers exception handling. The system recovers by either choosing another method randomly whose pre-conditions are currently satisfied (similar in concept to logical sensors (Henderson & Shilcrat 1984) and behaviors (Henderson & Grupen 1990)), or by running the same method again (similar to the retesting strategy used by (Ferrell 1993)). As with (Payton et al. 1992), there is no formal error classification scheme. No check is performed to confirm that the sensors providing information about the pre-conditions are still functioning themselves. If they are not, the recovery scheme may pick a method that will either fail, or, more significantly, hallucinate and act incorrectly.

An earlier version of SFX-EH was presented in (Chavez & Murphy 1993). This article builds on that work, with two significant advances. The control scheme is now a global, deliberative process with the ability to access information from sensors not allocated to the behavior. The original was restricted to using only information directly available to the behavior. This was intended to provide fault tolerance entirely within a behavior; in practice with landmark navigation and hall-following this proved to be too severe.

Approach

This paper concentrates on the exception handling strategy needed to classify a sensing failure.
It assumes that an AMR accomplishes a task via independent behaviors which have no knowledge about sensing processes being used by the other behaviors. A behavior is assumed to consist of two parts: a motor process or schema, which defines the pattern of activity for the behavior, and a perceptual process or schema, which supplies the motor process with the necessary perception to guide the next action. This assumption allows the perceptual process to be treated as a logical sensor. Alternative logical sensors may exist for the percept. The sensor and feature extraction algorithms used to compute the percept are referred to as a description of the percept, synonymous with a logical sensor. There may be more than one description of a percept using the same sensor. For example, a hazardous waste container can be modeled in terms of 2D visual features or 3D visual features; each set would form a unique description even though they were extracted from the same camera. A logical sensor may fuse the evidence from more than one description; this is generally referred to as sensor fusion of multiple logical sensors.

A description is the smallest granularity for identifying a sensing failure; therefore, the difference between a software defect (e.g., the algorithm fails after the 100th iteration) and errant instantiation (e.g., the algorithm is triggered with the wrong parameters) is indistinguishable. However, the exception handler should not assume that a failed logical sensor means that the physical sensor is "bad." Instead, it should attempt to isolate and test the physical sensor separately where possible.

Either the behavior or a global supervisory monitor is assumed to detect a sensing failure and supply the exception handler with the symptom and relevant information.
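The description/logical-sensor vocabulary above can be made concrete with a small sketch (class and field names are illustrative assumptions, not the paper's implementation):

```python
# A "description" pairs a physical sensor with a feature-extraction
# algorithm; a logical sensor gathers one or more descriptions for a
# percept. Illustrative sketch only; names are assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Description:
    sensor_id: int                       # physical sensor, e.g., camera 0
    extract: Callable[[object], dict]    # feature-extraction algorithm

@dataclass
class LogicalSensor:
    percept: str                         # e.g., "hazardous-waste-container"
    descriptions: List[Description]

    def observe(self, raw_readings: dict) -> List[dict]:
        # One body of evidence per description; a fusion step would
        # then combine them into belief in the percept.
        return [d.extract(raw_readings[d.sensor_id])
                for d in self.descriptions]

# Two descriptions of the same percept from the same camera:
two_d = Description(0, lambda img: {"model": "2D", "data": img})
three_d = Description(0, lambda img: {"model": "3D", "data": img})
container = LogicalSensor("hazardous-waste-container", [two_d, three_d])
evidence = container.observe({0: "frame-42"})
assert len(evidence) == 2
```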
The symptom may provide an explicit classification of the source of the problem, i.e., serve as a complete causal model; for example, upon malfunctioning, the hardware returns a failure code. The symptom may only be a partial causal model (e.g., lack of consensus between observations), thereby necessitating further investigation. The exception handler assumes that there is only one sensing failure at a time. This simplifies classification. By solving one sensing failure, it is hoped that any additional failures would be taken care of. If not, the new logical sensor will fail and exception handling will be reinvoked. It is worth emphasizing that the system does not assume that any additional sensors used for classification or recovery are operational; therefore any sensors used for corroboration must be validated as functional in advance. Also, the exception handling approach supports graceful degradation by acknowledging when it can't solve the problem and turning control over to whatever mission planning arrangement is used by the robot.

The exception handling strategy is divided into two steps: error classification and error recovery. The error classification module uses a variation of Generate and Test (Lindsay et al. 1980) to generate hypotheses about the underlying cause of the failure. There are three advantages to using Generate and Test. First, since it is an exhaustive search, it catches errors that occur infrequently. Second, Generate and Test allows the robot to actively collect additional information. Because robotic behaviors generally are reactive in the sense of (Brooks 1986), their perception is limited to local representations focused solely on the motor action. As a result, there is usually not enough information available to a behavior to isolate the cause locally. Active acquisition of additional information is critical to the success of error classification.
Third, the tests do not require redundant sensors; instead, information from other modalities can be used. A Generate and Test strategy does have one disadvantage; because it performs an exhaustive search, it can be time consuming. However, this disadvantage has not been encountered in practice to date because of the small search space for the set of sensors typically used by mobile robots.

Error classification follows the same basic procedure as Generate and Test (Lindsay et al. 1980):

1. Generate all possible causes based on the symptom.

2. Order the list of associated tests and execute the tests to confirm any of these causes.

3. Terminate classification when all tests have been performed or an environmental change has been confirmed.

Testing does not terminate upon the first confirmed sensor failure because an environmental change can cause a sensor diagnostic test to report a false positive. This can be determined by examining the results of all tests. If the list of tests is exhausted and no source of the failure can be identified, an errant expectation (i.e., planner failure) is assumed to be the cause.

There are five novel extensions to Generate and Test for classifying sensor failures in AMRs. One, the problem space is constrained by the symptom (e.g., missing observation, lack of consensus between multiple observations, highly uncertain evidence, etc.) in order to reduce search. Two, the exception handler generates all possible hypotheses and tests associated with that symptom at one time in order to reduce testing time and resources, and to prevent cycles in testing. Portions of the tests associated with the hypotheses may be redundant; this prevents them from being rerun. Three, the list of hypothetical causes always includes violations of the pre-conditions for each description (sub-logical sensor) in the logical sensor.
This is similar in philosophy to GTD (Simmons & Davis 1987) where the debugger challenges the pre-conditions of nodes in the dependency structure. Note that in this application, the challenge is part of the initial hypothesis generation step rather than a debugging step. Example pre-conditions are sufficient ambient light and adequate power supply. Four, the tests are ordered to ensure correctness. If additional sensors are being used in the tests to corroborate observations or verify the condition of the environment, they must first be tested (if possible) to confirm that they are operational. Five, the list of tests is examined and redundant tests removed in order to speed up testing.

Once the sensing failure is classified, recovery is straightforward since the logical sensor scheme explicitly represents equivalences between sensing processes. The search for an alternative degenerates to a table look-up. If the sensing failure is due to either a malfunction or an environmental change, error recovery attempts to replace the logical sensor with an alternative. The alternative must satisfy any new pre-conditions discovered by the classification process. For example, if the reason for a sensing failure with a video camera is because the ambient lighting is extremely low, then a logical sensor using a redundant video camera is not considered. If there is no viable alternative logical sensor, the error recovery process declares a mission failure and passes control to the planner portion of the robot.

Figure 1: Overview of SFX-EH (error classification: 1. generate hypotheses, 2. order tests and execute; error recovery: search for alternative; with interfaces to the behaviors' perceptual and motor processes, the active perception routines, and the mission planner/task manager).

Implementation: SFX-EH

The exception handling strategy described above has been implemented as an extension to the Sensor Fusion Effects (SFX) architecture (Murphy & Arkin 1992), called SFX-EH (SFX Exception Handling).
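The three-step Generate and Test classification loop from the Approach section can be rendered as a minimal sketch (the hypothesis table, test names, and early-exit rule are illustrative assumptions, not the actual SFX-EH code):

```python
# Sketch of symptom-driven Generate and Test classification.
# Illustrative only; table contents and names are assumptions.

def classify(symptom, hypothesis_table, tests):
    # 1. Generate all candidate causes indexed by the symptom.
    candidates = hypothesis_table.get(symptom, [])
    confirmed = []
    # 2. Execute the (already ordered, de-duplicated) tests.
    for cause in candidates:
        if tests[cause]():
            confirmed.append(cause)
            # 3. Terminate early only on a confirmed environmental
            # change, which can make sensor diagnostics report false
            # positives; a confirmed sensor failure does not stop testing.
            if cause.startswith("env:"):
                break
    # If no test confirms a cause, assume an errant expectation.
    return confirmed or ["errant-expectation"]

table = {"lack-of-consensus": ["env:low-light", "sensor-0-malfunction"]}
tests = {"env:low-light": lambda: False,
         "sensor-0-malfunction": lambda: True}
assert classify("lack-of-consensus", table, tests) == ["sensor-0-malfunction"]
```

Here a confirmed environmental change short-circuits the remaining tests, while a confirmed sensor failure does not, mirroring the termination rule above.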
Figure 1 shows a conceptual layout of sensing activities in SFX-EH. The perceptual process component of a behavior executes in three steps, as per SFX. First, observations are collected from each description in the logical sensor, e.g., grab an image, run feature extraction algorithms on it. Next, the descriptions are preprocessed to compensate for asynchronous observations, etc. The fusion step integrates the evidence for the percept from each description and passes it to the motor process. Situations where the logical sensor consists of a single description are treated as a degenerate case of sensor fusion and the fusion step is null.

At this time, self-monitoring perceptual processes within a behavior are the only mechanisms for detecting sensing failures, but behavioral and planning level monitoring is not precluded. SFX examines the data for a failure after each step. The four symptoms currently recognized by SFX are: missing data (the description has not been updated with a new reading), highly uncertain data (the observation of a description is vague or ambiguous), highly conflicting observations (the observations from multiple descriptions do not show a consensus), and below minimum certainty in the percept (the evidence that the percept is correct is too low for the motor process to safely use). Hardware or dedicated diagnostic software can short-circuit the detection process. If an explicit error is detected, perceptual processing for the behavior is immediately suspended, and the associated recovery scheme implemented (if any) or control is passed to the exception handler.

Figure 2: Diagram of the Exception Handling Knowledge Structure (EHKS). The environmental pre-condition frames include the environmental conditions, a feature ID, a feature value list, the number of affected sensors, the affected sensors list, and an environmental sensor function.

The exception handling module is global.
It relies on the Exception Handling Knowledge Structure (EHKS) to provide it with the relevant data about the sensing failure and the task. The EHKS, shown in Figure 2, is a frame with six slots. The failure step slot is a flag that describes at what stage of execution the failure occurred. The errors slot gives the failure condition encountered. The bodies of evidence slot is a list of frames, each of which holds data from each description in the logical sensor. The environmental pre-conditions slot also holds a list of frames, each of which describe the attribute of the environment (if any) which serves as a pre-condition for using that sensor, the expected value of the environmental attribute for acceptable performance of the sensors, and pointers to other sensors which share the same environmental pre-condition. The EHKS contains this so it can challenge the environmental pre-conditions.

The hypotheses take the form that a particular description or logical sensor has failed. The failure conditions describe if the failure occurred during the collection step, the pre-processing step, or the fusion step, along with what type of failure occurred. If the failure occurred during the collection step or the pre-processing step, then individual suspect bodies of evidence are directly known; otherwise, all bodies of evidence are considered suspect.

Once the suspect descriptions have been identified, the actual list of tests is generated. The tests are used to determine the specific cause of the error by investigating potential sensor malfunctions and environmental changes. Generating the test list requires deciding which environmental conditions need to be tested, based on which descriptions are suspect. Because the environmental pre-conditions may hold different attribute values for each sensor, an environmental change can affect some sensors, but not others.
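The EHKS frame just described might be rendered as follows (an illustrative sketch; the field names paraphrase the slot descriptions above and are not the actual SFX-EH representation, and only the four slots detailed in the text are shown):

```python
# Illustrative sketch of the EHKS frame; names are assumptions based on
# the slot descriptions in the text, not SFX-EH code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EnvPrecondition:
    attribute: str          # environmental attribute, e.g., ambient light
    expected_value: str     # value needed for acceptable performance
    shared_with: List[int]  # other sensors sharing this pre-condition

@dataclass
class EHKS:
    failure_step: str       # "collection", "pre-processing", or "fusion"
    errors: str             # failure condition encountered
    bodies_of_evidence: List[dict] = field(default_factory=list)
    env_preconditions: List[EnvPrecondition] = field(default_factory=list)

ehks = EHKS(
    failure_step="collection",
    errors="missing data",
    bodies_of_evidence=[{"description": 0, "sensor": 0}],
    env_preconditions=[EnvPrecondition("ambient-light", "normal", [0, 1])],
)
# A collection- or pre-processing-step failure pins the suspect bodies
# of evidence directly; otherwise all bodies of evidence are suspect.
suspects = ([b["description"] for b in ehks.bodies_of_evidence]
            if ehks.failure_step in ("collection", "pre-processing")
            else "all")
assert suspects == [0]
```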
Also, because challenging environmental pre-conditions may require additional sensing, the system must be certain that the sensor to be used for additional sensing is operating nominally. Thus, a sensor diagnostic must be run before collecting additional sensor data.

The test list is generated by initially checking if any descriptions in the Affected Sensors slot of an environmental frame and in the sensing plan contribute a suspect body of evidence. If so, and no sensing is required to acquire data to determine the value of the desired environmental attribute, then the environmental pre-condition challenge is added to the test list. If additional sensing is required to challenge an environmental pre-condition, then a diagnostic for the sensor which performs the additional sensing is added to the test list in front of the environmental pre-condition challenge. Finally, duplicate sensor diagnostic routines are removed from the list, if present. Each test list item contains identification of the test and indicates if the test is for an environmental change, a sensor diagnostic for a sensor contributing a body of evidence, or a sensor diagnostic for an environmental sensor.

Demonstrations

The current version of SFX-EH has been transferred to Clementine, a Denning MRV-4 mobile robot, shown in Figure 3, and demonstrated for landmark navigation using redundant sensors. The objective was to show the operation of the classification and recovery scheme for all types of failures in a realistic setting.

The behavior used for this demonstration was move-to-goal(goal=purple-square), where the goal was a purple square landmark. The behavior was purely reactive; the robot had no a priori knowledge of its relative position. The presence of the landmark in the image elicited a constant attraction potential field. Two logical sensors were available for perceiving the purple square.
The default logical sensor consisted of one description taken from the color video camera mounted on the front of the robot (camera 0). The landmark was represented by two features: the intensity values in HSV space corresponding to that shade of "purple" and shape via the Hu invariant spatial moments (Pratt 1991). The belief in the landmark was computed as how well the Hu spatial moments of a candidate "purple" region matched the landmark model. The alternative logical sensor applied the same algorithms but used the color video camera mounted on the rear of the robot (camera 1). While the logical sensors are redundant in terms of the type of information they produce, the robot must face backwards in order to use camera 1 for landmark navigation.

In each run, the robot was placed in an open area within 25 feet of the purple-square landmark. Depending on the purpose of the demonstration, the robot may or may not have been placed facing the landmark. As the robot made progress towards the landmark, a failure would be introduced. Sensor malfunctions were introduced by pulling the video cable out of a camera and putting a box over one camera to simulate a problem with the optics. Turning out the lights, an environmental change, was simulated by putting boxes over both cameras simultaneously. An errant expectation was generated by moving the landmark in the middle of a run or orienting the robot where it was not seeing the landmark.

Figure 3 shows instances from a typical sequence; the corresponding output of SFX-EH is in Figure 4. In Fig. 3a, the robot is making normal progress towards the landmark using the default logical sensor as a graduate student is about to place a box over the camera to simulate a sensor malfunction (e.g., dirt on the lens). In Fig. 3b, the robot has halted while it generates hypotheses and tests them. It uses the video camera in the rear to attempt to establish whether an environmental change has occurred. If so, both cameras should report images with a high average intensity level. The output of the two cameras does not agree; camera 1 shows no indication of an environmental change but camera 0 does. Since the cameras are mounted on the same small robot, it is unlikely that only one camera would be affected by an environmental change. Therefore, SFX-EH concludes that camera 0, or its software, has become defective and must be replaced. Fig. 3c shows the robot resuming progress towards the landmark, but turned 180° in order to use the alternative logical sensor. The sign of the motor commands is automatically reversed when camera 1 is the "leader;" other behaviors which depend on the direction of motion receive the reversed commands.

Figure 3: Landmark navigation: a.) Initiating sensor malfunction by covering camera b.) Recovery by turning to alternative sensor (shown in mid turn) c.) Resumption of behavior and completion of task.

Mobile Robots 927

  ** STARTING ERROR CLASSIFICATION **
  Body of evidence 0: Sensor type is color camcorder
  STEP 1: Identification of suspect body of evidence
  Pre-processing errors discovered
  Suspect body of evidence: 0
  This BOE did not report missing data
  STEP 2: Generation of candidate hypotheses (tests)
  Building test list.
  Color camera diagnostic: This is an environmental sensor diagnostic.
  Check intensity: This test challenges an environmental pre-cond.
  Color camera diagnostic: This is a suspect sensor diagnostic for sensor number 0.
  Check intensity: This is a suspect sensor diagnostic for sensor number 0.
  Done building test list.
  STEP 3: Execution of tests
  Test 1: This is a color video hardware diagnostic function
  to see if any good color cameras exist.
  Testing color sensor number 0 which was marked good...
  Found a good color sensor, number 0. Ok to run other tests.
  Test 2: This is to find out if any color sensor reports good intensity.
  Color sensor 0 reports below minimum intensity threshold.
  Environmental intensity is ok, detected with sensor 1.
  == CONFIRMED TEST LIST ==
  Color camera error
  ** ERROR CLASSIFICATION COMPLETE **
  ** STARTING ERROR RECOVERY **
  Recovery 1
  Original sensing plan:
  description 0: sensor number 0, named Sony-Videocam
  Performing color video hardware error recovery.
  REPLACING sensor number 0 with sensor number 1
  Repaired sensing plan
  Description 0: sensor number 1, named Sony-Videocam
  ** ERROR RECOVERY COMPLETE **

Figure 4: Abbreviated output from SFX-EH.

Conclusions and On-going Work

The Generate and Test approach taken by SFX-EH has several advantages. It requires only a partial causal model of sensing failure, and that partial causal model is based on interactions between physical sensors and the environment, rather than limited to models of how the sensors respond for a task, which are difficult to acquire. This is expected to allow the problem solving knowledge associated with a specific physical sensor configuration to be portable to other tasks. The classification process can be short-circuited when all causes of a symptom have the same recovery scheme. The construction of tests takes into account possible confounding from other failed sensors, adding more reliability to the classification and recovery process, plus preventing cycles in testing. Unlike previous systems, the tests themselves can extract information from complementary sensors. The logical sensor organization allows exception handling to be interfaced with the behavioral level of existing robot architectures.

SFX-EH has two disadvantages. The most significant is that the hypotheses and tests are based on domain-dependent knowledge, not purely general purpose problem solving skills. The basic structure can be ported to new applications, but new knowledge will have to be added. However, most of the domain-dependent knowledge is portable because the knowledge base is organized around sensor interactions, not a causal model of the sensors for a specific behavior. For example, a change in the environment can be confirmed with a redundant sensor regardless of what the robot was attempting to perceive prior to the failure. The addition of general problem solving strategies and a learning mechanism, such as Case-Based Learning, is being considered. Second, the logical sensor representation allows rapid generation of a small set of tests and ease of generation, but introduces other problems due to coarse granularity and a possible lack of available alternate logical sensors. However, these problems, especially the issue of when to re-consider a "bad" physical sensor, appear to be tractable and will be addressed in future refinements of SFX-EH.

The demonstrations provided additional insights and directions for future research. A practical issue is when to retry a sensor that has been identified as "bad." It should also be noted that experience with SFX-EH has shown that the testing, not the problem space search needed for hypothesis generation, is the bottleneck in recovering from a sensing failure. Part of this experience is due to the small set of possible hypotheses implemented at this time. But a large part of the rapid generation of hypotheses is due to a) the category of sensing failure indexing the classifier into the subspace of potentially applicable hypotheses and b) the coarse granularity of the failure modes. SFX-EH currently lacks the ability to resolve resource contention due to active perception demands and does not update the sensing status to other behaviors. These issues are being actively addressed by the addition of a global event-driven sensing manager. The utility of the SFX-EH style of classification and recovery is not limited to AMR; SFX-EH is currently being applied to intelligent process control for power generation as well.
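The camera cross-check at the heart of this demonstration — both cameras dark means the environment changed, only the suspect camera dark means it failed — can be sketched in a few lines of logic. The names and structure below are illustrative only, not taken from the SFX-EH implementation.

```cpp
#include <cassert>

// Sketch of the intensity cross-check used to separate an environmental
// change (e.g., lights out) from a single failed camera. Each camera
// reports whether its average image intensity is above a minimum
// threshold; cameras share the same environment, so a genuine
// environmental change should affect all of them.
enum class Diagnosis { EnvironmentalChange, SuspectSensorFailed, NoFault };

Diagnosis classify_intensity(bool suspect_ok, bool redundant_ok) {
    if (!suspect_ok && !redundant_ok)
        return Diagnosis::EnvironmentalChange;   // both dark: lights out
    if (!suspect_ok && redundant_ok)
        return Diagnosis::SuspectSensorFailed;   // only the suspect is dark
    return Diagnosis::NoFault;
}
```

In the run shown in Figures 3 and 4, camera 0 reports below-threshold intensity while camera 1 does not, which corresponds to the second branch: the suspect sensor, not the environment, is at fault.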
Acknowledgments

This research is supported in part by NSF Grant IRI-9320318, ARPA Grant AO#B460, and NASA/JSC Contract NAS9-19040. The authors would like to thank Greg Chavez for his original research efforts, Elizabeth Nitz for coding a previous version of SFX-EH and the Hu moments, and the anonymous reviewers.

References

Bajcsy, R. 1988. Active perception. Proceedings of the IEEE 76(8):996-1005.

Brooks, R. A. 1986. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation RA-2(1):14-23.

Chavez, G., and Murphy, R. 1993. Exception handling for sensor fusion. In SPIE Sensor Fusion VI, 142-153.

Ferrell, C. 1993. Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators. M.S. Dissertation, Department of Electrical Engineering and Computer Science, MIT.

Hanks, S., and Firby, R. 1990. Issues and architectures for planning and execution. In Proceedings of a Workshop on Innovative Approaches to Planning, Scheduling and Control, 59-70.

Henderson, T., and Grupen, R. 1990. Logical behaviors. Journal of Robotic Systems 7(3):309-336.

Henderson, T., and Shilcrat, E. 1984. Logical sensor systems. Journal of Robotic Systems 1(2):169-193.

Howe, A., and Cohen, P. 1990. Responding to environmental change. In Proceedings of a Workshop on Innovative Approaches to Planning, Scheduling and Control, 85-92.

Hughes, K. 1993. Sensor confidence in sensor integration tasks: A model for sensor performance measurement. In SPIE Applications of Artificial Intelligence XI: Machine Vision and Robotics, 169-193.

Lindsay, R.; Buchanan, B.; Feigenbaum, E.; and Lederberg, J. 1980. Applications of Artificial Intelligence for Organic Chemistry: The Dendral Project. New York: McGraw-Hill.

Murphy, R., and Arkin, R. 1992. SFX: An architecture for action-oriented sensor fusion. In 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 225-250.

Murphy, R. 1992. An Architecture for Intelligent Robotic Sensor Fusion.
Ph.D. Dissertation, College of Computing, Georgia Institute of Technology. GIT-ICS-92142.

Noreils, F., and Chatila, R. 1995. Plan execution monitoring and control architecture for mobile robots. IEEE Transactions on Robotics and Automation 11(2):255-266.

Payton, D.; Keirsey, D.; Kimble, D.; Krozel, J.; and Rosenblatt, J. 1992. Do whatever works: A robust approach to fault-tolerant autonomous control. Journal of Applied Intelligence 2:225-250.

Pratt, W. 1991. Digital Image Processing, 2nd ed. New York: Wiley.

Simmons, R., and Davis, R. 1987. Generate, test and debug: Combining associational rules and causal models. In Proceedings of the 10th International Joint Conference on Artificial Intelligence, 1071-1078.

Velde, W. V., and Carignan, C. 1984. Number and placement of control system components considering possible failures. Journal of Guidance and Control 7(6):703-709.

Weller, G.; Groen, F.; and Hertzberger, L. 1989. A sensor processing model incorporating error detection and recovery. In Henderson, T., ed., Traditional and Non-traditional Robotic Sensors. Maratea, Italy: Springer-Verlag. 225-250.
GARGOYLE: An Environment for Real-Time, Context-Sensitive Active Vision

Peter N. Prokopowicz, Michael J. Swain, R. James Firby, and Roger E. Kahn
Department of Computer Science
University of Chicago
1100 East 58th Street
Chicago, IL 60637
peterp@cs.uchicago.edu, swain@cs.uchicago.edu, firby@cs.uchicago.edu, kahn@cs.uchicago.edu

Abstract

Researchers in robot vision have access to several excellent image processing packages (e.g., Khoros, Vista, Susan, MIL, and XVision, to name only a few) as a base for any new vision software needed in most navigation and recognition tasks. Our work in autonomous robot control and human-robot interaction, however, has demanded a new level of run-time flexibility and performance: on-the-fly configuration of visual routines that exploit up-to-the-second context from the task, image, and environment. The result is Gargoyle: an extendible, on-board, real-time vision software package that allows a robot to configure, parameterize, and execute image-processing pipelines at run-time. Each operator in a pipeline works at a level of resolution and over regions of interest that are computed by upstream operators or set by the robot according to task constraints. Pipeline configurations and operator parameters can be stored as a library of visual methods appropriate for different sensing tasks and environmental conditions. Beyond this, a robot may reason about the current task and environmental constraints to construct novel visual routines that are too specialized to work under general conditions, but that are well-suited to the immediate environment and task. We use the RAP reactive plan-execution system to select and configure pre-compiled processing pipelines, and to modify them for specific constraints determined at run-time.

Introduction

The Animate Agent Project at the University of Chicago is an on-going effort to explore the mechanisms underlying intelligent, goal-directed behavior.
Our research strategy is centered on the development of an autonomous robot that performs useful tasks in a real environment, with natural human instruction and feedback, as a way of researching the links between perception, action, and intelligent control. Robust and timely perception is fundamental to the intelligent behavior we are working to achieve.

As others have done before us (Bajcsy 1988; Ullman 1984; Chapman 1991; Ballard 1991; Aloimonos 1990), we have observed that a tight link between the perceptual and control systems enables perception to be well tuned to the context: the task, environment, and state of the perceiving agent (or robot in our case). As a result, perception can be more robust and efficient, and in addition these links can provide elegant solutions to issues such as grounding symbols in plans of the control system.

Our computer vision research concerns the problems of identifying relevant contextual constraints that can be brought to bear on our more or less traditional computer vision problems, and applying these constraints effectively in a real-time system. We have demonstrated that certain vision problems which have proven difficult or intractable can be solved robustly and efficiently if enough is known about the specific contexts in which they occur (Prokopowicz, Swain, & Kahn 1994; Firby et al. 1995). This context includes what the robot is trying to do, its current state, and its knowledge of what it expects to see in this situation. The difficulty lies not so much in identifying the types of knowledge that can be used in different situations, but in applying that knowledge to the quick and accurate interpretation of images.
Our most important techniques in this regard so far have been the use of several low-level image cues to select regions of interest before applying expensive operators, and the ability of the robot to tune its vision software parameters during execution, according to what it knows about the difficulty or time-constraints of the problem. For example, one of the visual routines for object search that we have developed (Firby et al. 1995) can use a color model of the object to restrict search to areas where it is more likely to be found, prior to edge matching. The edge matcher is restricted by knowledge of the size of the object, and what is known about its likely orientations and locations. As another example, when tracking an object, the robot lowers the level of resolution at which images are processed to speed processing and thereby improve eye-body or eye-hand coordination. The loss of acuity is acceptable because the improved focus of attention created by tight tracking greatly reduces the number of false matches that might otherwise result from degraded inputs.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.
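The resolution-lowering trick mentioned above trades acuity for speed roughly quadratically: subsampling a W x H image by an integer factor s leaves only about (W/s)*(H/s) pixels for downstream operators to touch. The helper below is a generic sketch of integer-factor subsampling, not code from the system described here.

```cpp
#include <cstddef>
#include <vector>

// Keep one pixel per s x s cell of a row-major grayscale image.
// Per-frame processing cost downstream drops roughly by a factor of s*s.
std::vector<unsigned char> subsample(const std::vector<unsigned char>& img,
                                     std::size_t w, std::size_t h,
                                     std::size_t s) {
    std::vector<unsigned char> out;
    for (std::size_t y = 0; y < h; y += s)
        for (std::size_t x = 0; x < w; x += s)
            out.push_back(img[y * w + x]);
    return out;
}
```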
For example, the robot could decide which low-level cues it will use for determining regions of interest, according to feedback from the visual routines about the usefulness of the cues in a previous scene, or perhaps from long-term knowledge or short-term memory of what is likely to be around it. The visual interest cues provide image context that not only can be used to limit computation to a part of the scene, but that places geometric constraints on how an object can appear if it is to be in a certain region of the image. In general, careful restriction of viewpoint search based on image location may ameliorate the inherent combinatorics of most general-purpose model-matching algorithms.

We want to make it possible for a robot to construct, at run time, from a fixed set of visual operators, a visual routine appropriate for the current goal, and adapt it on-the-fly to the state of the task and environment. The routine itself would exploit appropriate low-level cues to reduce the aspects of the scene under consideration.

Gargoyle: Run-time configurable pipelines for active vision

In developing Gargoyle, one of our aims is to provide the composability that is available in most vision software packages, but not at the overhead required by these packages. For example, Khoros' Cantata is an interpreter that can combine operators into new routines at run-time. Unfortunately, Cantata's design is not suited for real-time processing, mainly because it uses expensive inter-process communication to pass copies of images from one module to another. XVision (Hager & Toyama 1994) allows new operators to be derived very flexibly from others, but requires new C++ classes and recompilation to use them.

Although no vision software package provides the performance and run-time flexibility we seek, we don't want to write an entirely new system if there is one that provides an adequate base.
We have chosen the Teleos AVP vision system¹, which provides low-level real-time image processing, because it is designed for high performance, and is supported on a standard Windows NT platform.

Gargoyle is a multithreaded, multiprocessing Java program that calls external C or C++ functions for most image processing. It augments the Teleos AVP vision library to provide the kinds of visual routines an autonomous mobile robot needs for navigation, target recognition and manipulation, and human-computer interaction, and is extendible to other tasks.

Gargoyle provides a run-time interpreter that allows dynamic configuration of image-processing pipelines. The pipelines constitute visual routines for accomplishing specific visual goals. Gargoyle includes a set of visual operator modules that are composed into pipelines, and it can be extended with new modules. The pipelines are constructed out of linked image-processing and computer vision modules. In a graphical representation of a visual routine in Gargoyle such as is shown in Figure 1, images and regions of interest (ROI) can be thought of as flowing one-way along the links. A feature of the Gargoyle runtime system is that no unnecessary copying is done when executing these pipelines. Images can be tagged with viewer state information that can be used to translate from image to world coordinates.

Gargoyle communicates with a robot control system (we use the RAP reactive plan-execution system (Firby et al. 1995)) via string messages. There are command messages sent to Gargoyle for constructing, reconfiguring, and executing pipelines. Gargoyle returns messages that contain the results from pipeline execution, and error conditions from modules within a pipeline.

Data structures and processing modules

Gargoyle defines two basic data structures, images and regions of interest, and provides a set of vision modules that process images within regions of interest.
This section describes the image and ROI data structures, and the vision modules we provide.

ROI: Regions of Interest

The ROI defines a rectangular area in an image, and specifies a level of resolution as an integer subsampling factor. Optionally, individual pixels within the region may be masked out. ROIs are used to restrict subsequent image processing to the given area. They are also used to annotate image buffers with information about the source of the data in the image.

¹ http://www.teleos.com

Figure 1: Region and sub-region of interest within full field of view. Offsets defining a region are always with respect to full field of view. Regions are rectangles with optional binary masks.

A ROI is specified with respect to a full size, high-resolution image area, whose upper left pixel is indexed at (0, 0) (Figure 1). A region is defined by its offsets, measured in pixels from the upper left corner of the full field of view, and its width and height. Regions of interest are created from other regions by specifying the offset and size of the subregion within the larger region. If the larger region was itself a subregion, its offsets are added to the new offsets, so that all ROIs remain in a canonical coordinate system.

ROIs may also include a binary 2-D mask for defining regions that are not rectangular. There are methods to initialize the mask to the entire region or to a pre-defined mask, empty it, and add or remove pixels from it. ROI objects also provide iterators that sequentially return each location in the region, whether it is a simple rectangular region or an arbitrary set of pixels in a mask.

Calibrated regions of interest

Regions of interest can be calibrated with an object that describes the viewer's (i.e., camera's) location and pose in a world coordinate system.
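The canonical-coordinate composition rule described above is simple enough to sketch. The struct below is our illustration of the idea only; the actual Gargoyle ROI class also carries a subsampling factor, an optional pixel mask, and iterators, and its interface is not shown in the paper.

```cpp
// Illustrative sketch of ROI composition: every region is kept in the
// canonical coordinate frame of the full field of view, so a sub-ROI's
// offsets are added to its parent's.
struct ROI {
    int x_off, y_off;      // offsets from the full-view upper-left (0, 0)
    int width, height;

    // Create a sub-region given offsets relative to this region.
    // The result is still expressed in full-field-of-view coordinates.
    ROI subregion(int dx, int dy, int w, int h) const {
        return ROI{x_off + dx, y_off + dy, w, h};
    }
};
```

Because sub-regions of sub-regions always report offsets relative to the full field of view, any module can interpret any ROI without knowing its derivation history.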
The calibrated ROI class is derived from the ROI class and adds camera height, location, viewing angles, magnification, real clock time, and a flag to indicate if the camera is in motion. To get this information, Gargoyle calls a global function called viewerstate, with a single parameter for the camera number in a multiple camera system, whenever image data is created (see Feature Maps, below). The Gargoyle user must replace this function with one that can query the robot control system for its current location, bearing, and the position and magnification of the camera if these are adjustable. The values are simply stored with the region of interest. Since images also define regions of interest, any image can be annotated with viewer state information.

Table 1: Calibration functions

  From                     | To
  monocular image point    | 3D line of sight
  stereo image point pair  | 3D world location
  3D world location        | image point
  3D world volume or plane | image region

Gargoyle includes a camera model class that can be parameterized for different cameras. The model can convert from image coordinates into lines of sight and back, and, if an image is annotated with viewer state information, image locations and regions can be converted into real world locations and regions, as listed in Table 1. Finally, Gargoyle can generate image ROIs corresponding to volumes in real-world coordinates, using the current viewer state. This is needed in order for the robot to give hints about where the visual system should look for an object. For example, if the task is to clean up trash from the floor, then the visual system only needs to look in the parts of the image that correspond to the floor and some small height above it.
If the cameras are tilted down so that only the floor is visible, the ROI generated will include the entire image, but if the camera needs to be raised slightly for some other purpose, or can't be moved, image processing will still take place only where it makes sense for the current task.

Image buffers

Image buffers hold one- or three-band images, and are defined as a C++ template class. The class provides methods only for pixel access, reading and writing to disk, and setting a color space tag. A visual processing module is available to convert images from one color space to another.

Image buffers inherit from the calibrated region of interest class, above. The ROI defines the source region of the image with respect to the maximum field of view. It also indicates if the pixel data represents a subsampling of full-resolution data. If the viewer state global function is redefined for the robot, the image will be annotated with the camera's position. If this information is not available, the image is simply stamped with the time it was taken and the camera number. Images, including viewer state information, can be written to disk, so that real experiments can be replayed later. This makes it possible to debug an experiment any time after it takes place.

Table 2: Gargoyle input modules or feature maps. Outputs (1- or 3-band images) are processed as inputs by other modules in a pipeline.

  Map         | Output      | Parameters
  Color       | 3bnd int.   | color space
  Gray scale  | 1bnd int.   | none
  Contrast    | 1bnd signed | filter size
  Edge        | 1bnd binary | filter size
  Motion      | 1bnd int.   | max. movement
  Disparity   | 1bnd int.   | max. disp.
  Frame diff. | 1bnd signed | interval

Visual processing operators

The processing modules which comprise every visual routine fall into three general categories, based on the type of links into and out of the operators. Feature maps provide system input. In a pipeline they take no image input, but have an optional ROI input, and produce an image buffer as output.
Segmenters take an image and optional ROI input, and produce an ROI output. Processors take image and ROI inputs, and may produce an image output. They also may send result messages to the robot control system. The bulk of image processing and computer vision modules are processors in this scheme.

Feature Map Modules

Feature maps get input from the cameras. For efficiency and programming simplicity, Gargoyle provides a small set of modules that interface to the underlying image processing system (Teleos AVP). These modules provide the input to all pipelines, and are often used by several visual routines that execute simultaneously. To efficiently share information about features, and to keep inputs to separately executing routines in synchronization, the lowest level of the Gargoyle system is a fixed set of image feature map modules. Since these modules include color and grayscale images, users can perform any type of image processing by starting with these, and adding new processing modules (see below).

The set of feature maps is determined by the underlying AVP hardware and libraries. The Gargoyle user is not expected to add new feature maps. The feature maps available are shown in Table 2. Figure 2 shows the contents of an edge feature map.

Figure 2: top: gray-scale image; second: binary edge image; third: color histogram back-projection of a picture of a brown trash can; bottom: rectangular regions-of-interest around local peaks.

Segmentation and Region of Interest Modules

These modules split an image into one or more regions for further processing. The output is a stream of ROIs. Some modules may use the optional image mask to define nonrectangular ROIs. Typically, the ROI stream is passed to a cropping module which produces a stream of sub-images corresponding to the ROIs. This image stream is then piped into other modules for processing. The segmentation and ROI-generating modules standard in Gargoyle are listed in Table 3.
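The segmenter-to-cropper hand-off described above can be sketched as follows. The types and function names here are illustrative stand-ins, not Gargoyle's API: a segmenter emits a stream of rectangular ROIs, and a cropping stage turns each one into a sub-image for downstream processing.

```cpp
#include <vector>

// Minimal rectangular ROI and row-major grayscale image stand-ins.
struct Rect { int x, y, w, h; };
using Image = std::vector<std::vector<unsigned char>>;

// Copy out the pixels covered by one ROI.
Image crop(const Image& img, const Rect& r) {
    Image out(r.h, std::vector<unsigned char>(r.w));
    for (int j = 0; j < r.h; ++j)
        for (int i = 0; i < r.w; ++i)
            out[j][i] = img[r.y + j][r.x + i];
    return out;
}

// One cropped sub-image per ROI in the segmenter's stream.
std::vector<Image> crop_stream(const Image& img,
                               const std::vector<Rect>& rois) {
    std::vector<Image> out;
    for (const Rect& r : rois)
        out.push_back(crop(img, r));
    return out;
}
```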
In Figure 2, the local peak segmenter finds local peaks in an intensity image, which in this case is a color histogram back-projection image. The peaks must be separated by a minimum distance, which is a parameter of the module. The peaks are bounded with rectangles whose edges are determined by where the intensity image falls below a threshold fraction of the local peak value, measured along horizontal and vertical lines through the local peak point. The threshold fraction is another parameter of the module.

The connected component segmenter uses image morphology operations to find connected or nearly connected binary image segments of sufficient size. Each connected component generates a rectangular ROI with a binary mask that gives the exact shape of the segment. Subsequent image processing would normally use the ROI pixel iterator to step through only the pixels in the segment. This is how, for example, the average intensity of an arbitrary segment would be computed.

The tracker module tracks the location defined by ROIs as they come into the module, and generates a prediction of the next ROI it will see. Since each ROI is stamped with the time that it was produced (from a feature map, originally), the tracker knows how fast the ROI is moving in image coordinates. It also can compute when the next ROI will likely be produced, because it tracks the time intervals between successive inputs. The output is a predicted ROI which can be used to direct perception toward where a moving object will likely be next.

The world volume ROI module produces an image ROI corresponding to a rectangular volume in the world. The volume is set from the robot control system as parameters of the module. The module does not take any input from the pipeline. This module requires a user-supplied viewer state function, and camera parameters (see Calibrated ROIs, above).
If the world volume described does not project into the image based on the current viewer state, an empty stream of ROIs will be generated.

The set operation module combines two ROIs into one, using the intersection, union, and subtraction operators.

Table 3: Region of interest and segmentation modules. Each module produces a ROI or stream of ROIs for further processing.

  ROI module  | Pipe In    | Parameters
  Local peaks | scalar img | thr, min sep
  Conn comp   | bin img    | max gap, min area
  Tracker     | ROI        | smoothing rate
  World vol   | calib. ROI | rect vol
  ROI comb    | ROI a, b   | set op (∩, ∪, −)

Processing modules

The processing modules comprise the bulk of the Gargoyle system. There are filtering modules for processing images into other images, and recognition and measurement modules for finding objects or calculating properties of the scene.

The filter modules currently in use are shown in Table 4. These are largely self-explanatory. The color back-projection module produces an intensity image whose pixel values indicate the "saliency" of the corresponding color image's pixel as an indicator of the presence of a known colored object (Swain & Ballard 1991). New filters are easily added using a C++ template class.

Table 4: Image filtering modules. Outputs from these modules are images that are processed as pipeline inputs by other modules.

  Filter module | Input    | Output  | Parameters
  Threshold     | img      | bin img | thr
  Convolution   | img      | img     | kernel
  Warp          | img      | img     | interp func
  Frame avg.    | img      | img     | interval
  Color conv.   | img      | img     | color space
  Back-proj.    | 3bnd img | img     | color hist
  Cropper       | img      | img     | none
  Lines         | img      | bin img | thr, min seg

Recognition and measurement modules implement computer vision algorithms, which can be very complex. In keeping with our research strategy we have tried to use well-established algorithms and software when possible. The modules we have incorporated so far are listed in Table 5.
These modules normally produce results that need to be returned to the robot control system, rather than an image that is sent further along the pipeline. Table 5 shows the messages that are sent back to the control system. New algorithms can be added using a template class. The programmer has access to an input image with a pixel-access iterator that will step through the image.

Table 5: Computer vision modules. The outputs of these modules encode answers to perceptual queries and are transmitted as signals to the RAP system.

  Module | Input | Signal | Params

If the image is calibrated with viewer state information, the algorithm can use image point-to-world-line functions (Table 1) to get information about where the image points may come from. For example, we find the lines of sight corresponding to the highest and lowest parts of the image, and intersect those lines with the plane of the floor to determine how far away an object in the image could be if it is assumed to be on the floor. This is used to control the viewpoint search in the Hausdorff template matching algorithm.

Visual routines as pipelined processes

Consider a visual routine for finding objects that we use frequently in our research. This routine searches for an object by shape, matching a library of edge models against an edge image. Other binary models and images could be used instead of edges. The model database contains different views of the object, which are needed to the extent that the object's surface is not planar or circularly symmetric. The search algorithm (Huttenlocher & Rucklidge 1992) translates, scales, and skews the models as it searches through the space of possible viewpoints. The viewpoint space can be controlled through parameters depending on the task context and knowledge of where the object might be, as well as through calibrated image context as described above.
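For a level floor and a simple pinhole camera, the floor-plane bound described above reduces to one line of trigonometry. This sketch assumes a camera at a known height tilted down by a known angle, with angles in radians; it is a simplification for illustration, not the paper's full camera model, which also handles pose and magnification.

```cpp
#include <cmath>

// Intersect one pixel's line of sight with the floor plane to bound how
// far away an object standing on the floor can be. pixel_angle is the
// pixel's angular offset below the optical axis; tilt_down is the
// camera's downward tilt from horizontal.
double floor_distance(double cam_height, double tilt_down,
                      double pixel_angle) {
    double a = tilt_down + pixel_angle;   // total angle below horizontal
    if (a <= 0.0)
        return INFINITY;                  // ray never reaches the floor
    return cam_height / std::tan(a);      // horizontal distance to hit point
}
```

Applying this to the highest and lowest image rows gives the far and near distance bounds used to prune the viewpoint search.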
Searching across a large space of viewpoints is expensive; with a cluttered, high resolution (roughly .25 megapixel) image, searching from all frontal viewpoints between, for example, 0.5 to 5 meters, for a single model, takes over a minute on a Sun Sparcstation 20. To reduce the space of possible viewpoints, we restrict the search to areas of the scene containing colors found in the model. This requires a color model of the object, which is represented as a 256-bin histogram of an 8-bit color-resolution image of the object. The 8 bits could represent, for example, 4 bits of hue, 4 bits of saturation, and 0 bits of value (a distribution we have found effective).

Figure 3 shows the processing pipeline that implements the routine.

Figure 3: An image processing pipeline for finding objects by shape, using color as an interest cue.

Two feature maps generate input simultaneously, or as close together as the underlying hardware allows. The RGB map is piped into a color histogram back-projection module. The RGB map can be parameterized with a low-resolution (subsampled) ROI so that the output image is very small and later segmentation is fast. The edge detector can also be operated at different levels of resolution according to how large and nearby the object is expected to be. The modules need not operate at the same resolution.

The color histogram back-projection output is piped into the local peak segmenter to generate regions of interest where the object could possibly be found. The output of the local peak module is a stream of ROIs. The interpreter processes the stream by marking the module as the head of an implicit iteration loop. Each ROI in the stream is then processed separately by the next module.
When the pipeline bottoms out, that is, no more links exist, or a module finishes processing without generating an output image, the interpreter returns to the nearest upstream iteration head, and continues processing with the next ROI. After the last ROI is processed, a special end-of-stream ROI is sent to the next module and the interpreter notes not to return for processing. Most modules do nothing but pass the end-of-stream markers on. Others may process a stream of ROIs and wait for the end-of-stream indicator to perform a calculation based on the entire stream of inputs. The Hausdorff template matcher is one such module, as we will see.

The ROI stream from the local peak segmenter passes into the cropper, which also receives an image input from the edge detector. The cropper produces a binary subimage from the edge image for each ROI passed in. The resolution of the binary image is preserved, even though the ROI may be at a lower resolution.

The template matcher searches each image for instances of its models. The viewpoint search is controlled according to parameters from the robot control system, which may know how far away the object should be, and by inferring the possible viewpoint of the object if it is to appear in the image, as described earlier. The template matcher runs in several search modes, in that it can report an answer as soon as it finds any match, or wait until the stream of images has been processed to return the best match. When the robot receives an answer from the module, it can tell the interpreter to halt further execution of the pipeline.

Toward more flexible visual routines

Pipeline configurations and the parameters for their operators can be stored as a library of visual methods appropriate for different sensing tasks and environmental conditions. We use the RAP system to store the configuration commands for constructing pipelines.
Visual routines, then, are represented as RAP plans for achieving particular perceptual goals.

On-the-fly pipeline reconfiguration

One somewhat more flexible routine we are developing uses two low-level cues, motion and color, for following a person (Prokopowicz, Swain, & Kahn 1994). A RAP encodes the connections for a simple pipeline to track an object by motion or color. The encoding is sent to GARGOYLE to build the pipeline. The pipeline is very simple to begin with: a motion map is piped into the local peak segmenter, which finds sufficiently large areas of motion. These are piped into a ROI tracker module. A color feature map is also created and piped into a histogram backprojector. This feature map is initially turned off to save processing power. GARGOYLE will not execute a pipeline whose input is off. The RAP is sketched in Figure 4.

If more than one moving area is segmented, the tracker module will follow the largest one, but will also send a warning message (:no-target or :multiple-target), which the GARGOYLE interpreter sends to the RAP system. Likewise, if no region is found, a warning is sent.
The RAP level is where the knowledge for how to track an object under different conditions is encoded. In this case, the RAP responds to the warning by instructing the GARGOYLE interpreter to reconfigure the pipeline so that a different module is used as input to the segmenter. If the RAP used the motion map the last time, it will use color and histogram backprojection. Of course, flipping between two modes of segmentation is only the simplest example of run-time flexibility. We feel that GARGOYLE provides the basis for flexible real-time vision, and that a high-level planning system like RAPS provides the reasoning and knowledge representation capacity needed to exploit that flexibility in much more interesting and powerful ways.

(define-rap (track-motion-or-color ?target-name ?color-model)
  (init (pipeline-create 1
          (1 motion-feature-map RESLEVEL1)
          (2 local-peak-segment MINPEAK MINSEP)
          (3 ROI-tracker)
          (4 RGB-feature-map)
          (5 histogram-BP ?color-model RESLEVEL2)
          (1 into 2) (2 into 3) (4 into 5)
          (4 off) (1 off)))
  // track by motion
  (method
    (context (not track-by color))
    (pipeline-config 1 (1 into 2) (1 on))
    (pipeline-execute 1)
    (wait-for (:object-at ?x ?y)
      (mem-set target-at ?target-name ?x ?y))
    (wait-for (or (:no-target) (:multiple-target))
      (pipeline-config 1 (1 off))   // stop pipe
      (mem-del track-by motion)     // use color
      (mem-add track-by color)))    // next time
  // track by color
  (method
    (context (track-by color))
    (pipeline-config 1 (5 into 2) (4 on))
    (pipeline-execute 1)
    (wait-for (:object-at ?x ?y)
      (mem-set target-at ?target-name ?x ?y))
    (wait-for (or (:no-target) (:multiple-target))
      (pipeline-config 1 (4 off))   // stop pipe
      (mem-del track-by color)      // use motion
      (mem-add track-by motion))))  // next time

Figure 4: RAP that constructs and modifies a visual routine for flexible tracking using color and motion
Because Gargoyle is a single process, creating and linking a module into a pipeline only involves adding an entry to the current configuration graph in memory. Executing a module requires spawning a new thread, which is also a very fast operation. The interpreter runs as a separate thread and accepts messages from the skill processes asynchronously, so it can respond immediately to requests to reconfigure, reparameterize, or control execution, even during image processing.

Related work

Ullman proposed that visual routines (Ullman 1984) which answer specific perceptual questions concerning shape and geometric relationships can be constructed out of elemental visual operators and control structures. Our work is fundamentally similar, but explores the issues that come up when these ideas are extended to a real-time active system that is interacting with its environment. In particular, it emphasizes the need for different ways to compute the same result, depending on the immediate context.

The Perseus vision system (Kahn & Swain 1995) is being developed at the University of Chicago to augment human-computer interaction through gestures such as pointing. Perseus is currently implemented with the DataCube server as the underlying visual processor. Perseus is based on the notion of active visual objects, or markers, that are spawned to recognize and track relevant parts of a scene, such as a person's head and hands. One of the Gargoyle design goals was to provide a platform to support Perseus on a visual robot. The markers will be implemented as RAPs that construct image pipelines for recognition and tracking. Gesture recognition is carried out by a higher-level visual routine that spawns markers and monitors their locations, looking for certain configurations that indicate human gestures.
Horswill has shown that it is possible to translate a Horn clause into a custom program, written in a "visual-computer assembly language", that attempts to satisfy the clause. The operators of this assembly language are much more primitive than what most robot researchers would like to use, and the underlying hardware (the Polly robot) is not in widespread use. Gargoyle will provide ourselves and other robotic researchers with the means for writing visual programs in a high-level language, using the best computer vision algorithms and software as operators, on standard PC hardware.

Conclusion

Gargoyle will provide us with a tool that is sufficiently flexible to create much more specialized and efficient visual routines that are adapted to solve specific tasks, and fully exploit run-time context, with no significant cost in run-time efficiency. Because Gargoyle will run on a standard, multi-platform operating system, it can be easily and affordably ported to other systems, and will benefit from further advances in these platforms. Gargoyle's extendibility will allow it to be useful for other domains besides the mobile robot domain described here - we are already working towards its use in creating a human-computer interface for a virtual reality environment.

References

Aloimonos, J. 1990. Purposive and qualitative active vision. In International Conference on Pattern Recognition, 346-360.
Bajcsy, R. 1988. Active perception. Proceedings of the IEEE 76:996-1005.
Ballard, D. H. 1991. Animate vision. Artificial Intelligence 48:57-86.
Chapman, D. 1991. Vision, Instruction, and Action. MIT Press.
Firby, R. J.; Kahn, R. E.; Prokopowicz, P. N.; and Swain, M. J. 1995. An architecture for vision and action. In Proceedings of the International Joint Conference on Artificial Intelligence.
Hager, G. D., and Toyama, K. 1994. A framework for real-time window-based tracking using off-the-shelf hardware. Technical Report 0.95 Alpha, Yale University Computer Science Dept.
Huttenlocher, D. P., and Rucklidge, W. J. 1992. A multi-resolution technique for comparing images using the Hausdorff distance. Technical Report CUCS TR 92-1321, Department of Computer Science, Cornell University.
Kahn, R. E., and Swain, M. J. 1995. Understanding people pointing: The Perseus system. In Proceedings of the IEEE International Symposium on Computer Vision.
Prokopowicz, P. N.; Swain, M. J.; and Kahn, R. E. 1994. Task and environment-sensitive tracking. In Proceedings of the IAPR/IEEE Workshop on Visual Behaviors.
Swain, M. J., and Ballard, D. H. 1991. Color indexing. International Journal of Computer Vision 7(1):11-32.
Ullman, S. 1984. Visual routines. Cognition 18:97-159.
Robot Navigation Using Image Sequences

Christopher Rasmussen and Gregory Hager
Department of Computer Science, Yale University
51 Prospect Street, New Haven, CT 06520-8285
rasmuss@powered.cs.yale.edu, hager@cs.yale.edu

Abstract

We describe a framework for robot navigation that exploits the continuity of image sequences. Tracked visual features both guide the robot and provide predictive information about subsequent features to track. Our hypothesis is that image-based techniques will allow accurate motion without a precise geometric model of the world, while using predictive information will add speed and robustness. A basic component of our framework is called a scene, which is the set of image features stable over some segment of motion. When the scene changes, it is appended to a stored sequence. As the robot moves, correspondences and dissimilarities between current, remembered, and expected scenes provide cues to join and split scene sequences, forming a map-like directed graph. Visual servoing on features in successive scenes is used to traverse a path between robot and goal map locations. In our framework, a human guide serves as a scene recognition oracle during a map-learning phase; thereafter, assuming a known starting position, the robot can independently determine its location without general scene recognition ability. A prototype implementation of this framework uses as features color patches, sum-of-squared differences (SSD) subimages, or image projections of rectangles.

Introduction

A robot whose duties include navigation may be called upon to perform them in many different environments. In such fluid situations, it is unlikely that a geometric model will always be available, and arranging and testing fiducial markers may be onerous or impossible.
An attractive way to alter the job description of a navigating robot would be to simply "show" it the path(s) it should follow, and have it rely on its own sensing systems to develop an internal representation sufficient to allow repetition of the path within some bounds. Specific points of interest should be easy to indicate, although special locations (e.g. a delivery site) may be adorned with guide markers which can be used for precise position control once they are approached. Our aim is to develop a vision-based navigation system capable of such performance. Our choice of vision is based on the necessity of observing large areas to find useful landmarks for defining the path, and the need to distinguish them from one another as easily as possible.

Navigation depends on the notion of location, which can be characterized in a number of ways. Strictly metrical approaches describe it as a position in a fixed world coordinate system. The system in (Kosaka & Kak 1992) knows the environment's three-dimensional geometry and its initial position and orientation; the navigation task is to find a path to a goal position and orientation. Kalman filtering is used to reconstruct and predict the coordinates of model features, which are matched at regular intervals during motion. The sonar-based algorithm of (Leonard, Durrant-Whyte, & Cox 1990) builds its own map of beacons' absolute positions using Kalman filtering to account for uncertainty. Their mapping process is dynamic in that the robot constantly compares sensory evidence with expectations, and updates its map and/or position estimate when the two disagree. Prediction allows directed sensing: "knowing where to look" can improve the speed and reliability of the mapping process.

Graph-based techniques smear out exactness somewhat by relating position regions to nodes or "places" in a connectivity graph, and rely on closed-loop feedback to guide the robot from place to place.
In (Taylor & Kriegman 1994) the model of the environment is a graph constructed while exploring, based on visibility of barcode landmarks. Similarly, in (Kortenkamp et al. 1992), (Kuipers & Byun 1981), and (Engelson 1994), the robot learns a model of the environment by building a graph based on topological connectivity of interesting locations, but it attempts to decide what constitute landmarks on its own. The robot in (Kortenkamp et al. 1992) learns the environment's topology from doorways detected by sonar while storing images for place recognition. The robot in (Kuipers & Byun 1981) uses sonar to detect places that maximize a distinctiveness function and to match or add them to its map. A disadvantage of graph-based approaches is their tendency to rely on a solution to the difficult problem of general place recognition.

We avoid reference to absolute coordinates by taking advantage of vision to define locations by the notion of view. A robot is at place A if what it sees corresponds to what can be seen at place A. Given that we know where we are, we use visual tracking to extend the definition of place to a range of motions, and require that all (or nearly all) of the world corresponds to some place. We then utilize the continuity of our representation to predict changes of view between places, thereby eliminating the need for a strong notion of recognition. Navigation then becomes a problem of moving from view to view. The desired motion is computed using visual feedback between what the robot currently sees, and what it saw when it was at the goal location before.

The work described in (Fennema et al. 1990) is similar to ours. They use a servoing approach to navigation, but with reference to an accurate geometric model of the environment which we do not assume.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.
Navigation proceeds by identifying landmarks along a path through the graph that are visible from their predecessors, and servoing on them in succession via visual feedback. Recognizing landmarks is a matching problem between 2-D image data and the 3-D world model. Other route-based approaches to navigation are described in (Hong et al. 1992) and (Zheng & Tsuji 1992). The robots in these papers are guided in a learning stage through the environment, periodically storing whole panoramic images. A graph is built up by matching new images with the known corpus; paths can be planned in the completed graph with a standard algorithm. When executing a path, image-based servoing is used to traverse individual edge segments. (Kortenkamp et al. 1992) argue that taking snapshots at regular intervals leads to the storage of much unnecessary information. Moreover, new scenes must be matched against all remembered ones, an inefficient process. Our system addresses the first of these issues by finding image features and storing them only as frequently as necessary for servoing. We sidestep the place recognition problem with human assistance during a learning phase; general recognition ability is not needed during navigation.

Map Construction

In order to present our navigation system it is useful to first define some terminology and to sketch the procedure used to create the internal map used by the robot.

Terms

We use the term marker to denote any visual entity that is in some way distinctive or meaningful to the system. A marker is characterized by a set of relatively invariant properties (color, shape, texture, function) and a dynamic state (image location and size). Within most images there are many potential markers that can be discovered and analyzed; the current scene is the set of markers upon which the robot is currently focusing its attention.
The question of what kind of visual entities will gain attention is left to the implementation, but these can run the gamut of complexity from chairs, doors, and trees to lines, points, and splotches of color. The limiting factors of the recognition process are sophistication and speed; these must be traded against the distinctiveness, trackability, and rarity of the markers thus found.

As an additional consideration, to successfully servo from one scene to another certain geometric constraints on the number and configuration of the markers in them must be met. These constraints vary according to what kinds of markers are used. As a shorthand, we will say that an evaluation function C(s) returns a truth value T if scene s meets these criteria and F otherwise. Likewise, we shall say that two scenes s1 and s2 are equivalent, s1 ≡ s2, if an abstract comparison function returns true. Intuitively, equivalency is measured by some matching function which is invariant over small changes in viewpoint caused by the motion of the robot. Furthermore, we say that the robot is viewing a previously stored scene s_i if s_viewed ≡ s_i, where s_viewed is the scene currently being viewed.

A sequence is an ordered list of scenes stored as the viewer moves through an environment. The currently-viewed scene is added to the current sequence when there is a scene change: here we use the occurrence of a marker appearing or disappearing to signal this. When a marker appears, the current scene is added. When a marker disappears, the last scene containing the disappearing marker is added. In essence, scenes serve as key frames for reconstructing movement later through visual servoing (Hutchinson, Hager, & Corke 1995).

A location is a set of two or more scenes from different sequences that are the same under the equivalency function; this is where sequences intersect. There are two kinds of locations: divergences and convergences.
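The key-frame rule above can be sketched in a few lines. This is one reading of the rule, with a hypothetical `record` function and event representation of our own: on a marker's appearance the new current scene is stored, and on a disappearance the last scene still containing that marker is stored.

```python
def record(events):
    """events: iterable of ('appear', marker) / ('disappear', marker).
    Returns the stored sequence of key-frame scenes (sets of markers)."""
    sequence = []
    current = set()
    for kind, marker in events:
        if kind == 'appear':
            current.add(marker)
            sequence.append(frozenset(current))  # store the new current scene
        else:
            sequence.append(frozenset(current))  # last scene containing marker
            current.discard(marker)
    return sequence
```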
A divergence is a choice point: multiple possible sequences can be followed away from a single scene. Convergences are scenes at which multiple sequences terminate. Since a scene can satisfy both of these definitions (where two corridors cross each other, for instance), the type assigned to a location can vary according to its place in the order of scenes encountered by the robot. The key quality of locations is that they are topologically meaningful events that can be recognized by the robot simply by keeping track of what it is seeing and what it remembers. A map is the directed graph consisting of all sequences recorded and all locations noted by the robot as it moves.

Map-Building

The state of the robot as it makes a map is described by the following variables: {S}, the set of sequences recorded so far; {L}, the set of known locations; S_new, the sequence the robot is in; {S_possible}, the set of scenes that the robot could be viewing; and s_viewed. s_i→ denotes the set of scenes reachable in one scene-step from s_i via the current map graph. Map-making begins with {S} = ∅; {L} = ∅; S_new = (), the empty list; and {S_possible} = ∞, the set of all possible scenes. This corresponds to seeing unfamiliar scenery; familiar scenery is indicated when {S_possible} has a finite number of members. When the robot is instructed to cease mapping, s_viewed is added to S_new and exploration halts. It is assumed that an exploration process that drives movement is running in the background. This could be the human guide suggested, a sonar-based obstacle avoidance routine, or any other desired program. While not finished, s_viewed is updated continually until C(s_viewed) = F; whenever s_viewed changes the steps in Figure 1 are executed.

The key question in this algorithm is the nature of the scene comparison function, with the additional problem of efficient image indexing to determine the influence of sequences.
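A much-simplified sketch of the bookkeeping behind these state variables follows. Scene equivalence is reduced to plain equality, sequence splitting at convergences is omitted, and the names are our own; unfamiliar scenery is represented by `possible = None`, familiar scenery by a finite list:

```python
def update_map(sequences, locations, current, possible, viewed):
    """One step of map bookkeeping; returns the new 'possible' scene list
    (None means the scenery is unfamiliar)."""
    if possible is None:                           # unfamiliar scenery
        for seq in sequences:
            if viewed in seq:                      # convergence with old sequence
                locations.append(('C', viewed))
                return seq[seq.index(viewed):]     # scenes now reachable
        current.append(viewed)                     # still exploring: extend
        return None
    if viewed in possible:                         # familiar: advance along map
        return possible[possible.index(viewed):]
    locations.append(('D', current[-1]))           # divergence: new branch
    sequences.append(list(current))
    current[:] = [current[-1], viewed]
    return None
```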
We describe the current instantiation of equivalency in the Implementation section. For the moment, we rely on a human guide to inform the robot when a convergence or divergence has occurred, and hence do not need to solve the general indexing problem.

In Figure 2 the map-making process is illustrated for a sample environment. By continuing this process of exploration, the robot would eventually arrive at a complete map of the floor, but even in its intermediate forms it has partial usefulness. A more complete exposition is given in (Rasmussen & Hager 1996).

1. If unfamiliar, does ∃ s_i ∈ S_r = (s_0, ..., s_n) s.t. s_viewed ≡ s_i?
   (a) Yes (convergence):
       i. Add location {s_viewed, s_i} to {L}
       ii. If S_new ≠ S_r, then replace S_r in {S} with S_r1 = (s_0, ..., s_i) and S_r2 = (s_i, ..., s_n)
       iii. Set {S_possible} = s_i→
   (b) No: Append s_viewed to S_new in {S}
2. If familiar, does ∃ s_i ∈ {S_possible} s.t. s_viewed ≡ s_i?
   (a) Yes: Set {S_possible} = s_i→
   (b) No (divergence):
       i. Add S_r1 = (s_0, ..., s_i), S_r2 = (s_i, ..., s_n) to {S}, and add location {s_i} to {L}
       ii. Replace S_new in {S} with S_new = (s_i, s_viewed)
       iii. Set {S_possible} = ∞

Figure 1: Decision process for map-building

During and after map-building, the robot can navigate between any two scenes in the map that are connected by a traversable path. A traversable path is a series of sequences or partial sequences in the map, all directed head to tail, from the robot's current scene to the goal scene. As the learning process progresses, more places in the environment will become represented in the map by scenes, and more pairs of scenes will be traversable. In order to initialize this process, the robot must analyze the current scene, identifying markers, and then match them to the markers it expects to see given its current location.

The basic building block of navigation is the traversal.
Let s_i and s_j be scenes in the map such that the robot is viewing s_i and there is a traversable path from s_i to s_j. Let M_ij = s_i ∩ s_j and define s'_j to be s_j restricted to the markers in M_ij. If M_ij is not empty and C(M_ij) = T (always the case if j = i + 1), then by using some variant of visual servoing the robot can move until s_viewed ≡ s'_j within an arbitrary tolerance. In the limit, this movement will bring the robot to the absolute position and angle that it was at when it recorded s_j.

In many cases, M_ij is empty, meaning that the goal scene is not even partially visible from the initial scene. This necessitates traversing intermediate scenes and performing a handoff between each. Let s_i, s_j, and s_k be scenes in the map such that the robot is currently viewing s_i and there is a traversable path from s_i to s_k through s_j. Let M_ij and M_jk be defined as above such that both are non-empty, but M_ik = M_ij ∩ M_jk is empty. The robot cannot traverse directly from s_i to s_k; instead it must first servo to s_j using markers in M_ij until the markers in M_jk are found, and then "hand off" and servo on those markers to the destination.

Figure 2: Hypothetical exploration. The path the robot follows is shown schematically on the left, starting at 1 (numbers are for spatial reference only); successive updates of the map occurring at steps A-E are shown on the right. Location nodes are marked to indicate why they were created: S = start event, C = convergence, and D = divergence.

The simplest way of implementing a handoff is to rely on the default marker search technique used for mapping. As the robot servos toward s_j and searches for new markers concurrently, it may find all of M_jk before it ceases movement, allowing it to smoothly switch servoing control at the appropriate moment. If not all of M_jk is found before stopping, the robot can continue searching until it acquires the markers, then change control to the next servoing phase.
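Deciding where handoffs happen along a path can be sketched greedily: servo on the marker set shared by the longest run of consecutive scenes, then hand off to the next run. This is a hypothetical illustration of the handoff-minimizing idea, not the paper's exact scheduler:

```python
def schedule_handoffs(path_scenes):
    """path_scenes: list of marker sets along a traversable path.
    Returns a list of (shared_marker_set, run_length) servoing phases."""
    plan, i = [], 0
    while i < len(path_scenes):
        common, j = set(path_scenes[i]), i + 1
        # Extend the run while some markers remain common to every scene in it.
        while j < len(path_scenes) and common & path_scenes[j]:
            common &= path_scenes[j]
            j += 1
        plan.append((common, j - i))   # one servoing phase, then a handoff
        i = j
    return plan
```

The number of phases minus one is the number of handoffs required.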
In practice, though, the time needed to find the markers can be quite long, and the matching process is likely to be error-prone. Furthermore, if a significant location discrepancy exists between s_viewed and s_i at the end of servoing, not all of the desired features for s_i may be in view. Angular errors are especially significant because they often cause more distant markers in M_jk to be partially or completely out of view.

Correcting such errors is critical, so we have investigated a predictive mode for marker location in which the robot dynamically estimates the image locations of M_jk as it moves (Rasmussen & Hager 1996). Since we do not have an underlying geometric map, all feature prediction must operate directly from image information. Thus, we have adapted a result from the theory of geometric invariance (Barrett et al. 1992) to vehicles moving in a plane. We have found that when the image projections of four three-dimensional points p_0, p_1, p_2, p_3 are known in two different views v_1 and v_2, knowledge of the image projections of p_0, p_1, and p_2 in a third view v_3 is sufficient to predict the image location of p_3 modulo certain geometric constraints on the points and the transformations between views. This result means that if three points in M_ij and one in M_jk are simultaneously visible in two frames during map-building, then during servoing on the markers in M_ij, any points that were visible in M_jk have predictable locations. The predictions are somewhat noisy, but they are reasonable hints for initializing and limiting search procedures. Furthermore, if the predicted point locations fall consistently outside the image, the robot can make an open-loop adjustment to bring them into view.

Putting this together, a navigational directive is issued as a pair of scenes (s_start, s_finish), where the robot is assumed to be currently viewing s_start.
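To make the prediction idea concrete, here is a simplified stand-in using affine transfer: the affine coordinates of p_3 relative to p_0, p_1, p_2 are measured in one view and reused to predict its location in a new view. This is exact only for coplanar points under an affine camera; the paper's actual result (Barrett et al. 1992) uses projective invariants and two calibration views. Function names are ours.

```python
import numpy as np

def affine_coords(p, a, b, c):
    """Coefficients (u, v) with p = a + u*(b - a) + v*(c - a)."""
    M = np.column_stack([b - a, c - a])
    return np.linalg.solve(M, p - a)

def predict_point(p3_v1, basis_v1, basis_v3):
    """Transfer p3 into view 3 using affine coordinates measured in view 1.
    basis_v1 / basis_v3: projections of p0, p1, p2 in each view (2-vectors)."""
    u, v = affine_coords(p3_v1, *basis_v1)
    a, b, c = basis_v3
    return a + u * (b - a) + v * (c - a)
```

As in the paper, such a prediction is best used as a noisy seed for a local image search rather than as an exact location.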
A path-planning algorithm returns an ordered list of scenes constituting a traversable path to the goal. This list is input to a greedy scheduling algorithm that selects which set of markers can be servoed on for the most consecutive scenes before the robot must switch to a new target set. This approach is based on a desire to minimize the number of handoffs.

Implementation

We have implemented most of the architecture described above on a tethered Nomad 200 robot (lack of an onboard digitizer has limited movement to one lab and an adjoining hallway). In this section, we describe the implementation of various system modules.

Rectangles

One type of marker we have tested is image rectangles, defined by two approximately horizontal image edges and two approximately vertical image edges at least 20 pixels in length meeting at four distinct corners. We are using the XVision system (Hager 1995) for real-time visual tracking as the basis for our marker identification and tracking algorithms, which are described in detail in (Rasmussen & Hager 1996). Rectangles were chosen primarily because they are simple to detect and track, and in an indoor environment they are relatively common. The default marker-finding method is random search over the entire image. When traversing a previously known area, the same search procedure is used, but the markers found are matched to those previously located in the scene. For our limited experiments, we have found matching rectangles by aspect ratio alone satisfactory, although we have also experimented with using statistics on the gray value distribution of the interior of the rectangle as well. Based on the techniques described in (Chaumette, Rives, & Espian 1991), the four corners of one rectangle are sufficient to servo on. Thus, the scene evaluation function C(s) is simply that s ≠ ∅. To maintain this condition, the robot must block movement whenever it loses track of the only marker it has until it finds another one.

Figure 3: Rectangle-finding. Corners are marked with circles of radius inversely proportional to the confidence of correctness. The door window was the only rectangle found here.

Experimental Results

We have tested the above modules on a variety of indoor scenes. This section briefly describes our experimental results.

Marker Finding

When not using prediction, the performance of our rectangle-finding algorithm varied. In particular, it performed poorly on cluttered scenes, due to the combinatorics of testing many pairs of edges as possible corners and many groups of corners as possibly belonging to rectangles. It performed roughly as well at locating lines when moving slowly as when immobile. A sample image and the rectangle found in it is shown in Figure 3.

One central issue is the ability of the algorithm to find markers quickly, and to consistently find roughly the same set of markers in an image. On a SPARC 10/40, over twenty runs of the algorithm on the door view in Figure 3, it found the window rectangle 11 times within 60 seconds (for those times, in an average of 29 seconds). Four other times it found another rectangle in the tile pattern on the wall above the coat rack. In the other five trials it did not find any rectangle in less than 60 seconds. Slight variations in robot position and viewing angle did not affect performance.

As described above, our goal is to use prediction to improve performance when attempting to refind markers. Searching the entire 640 x 480 image is cumbersome and slow. Our view is that prediction provides a seed about which a local search can be centered. If we are looking specifically for the door window rectangle of Figure 3.a and assume that prediction has yielded a seed approximately in its center, we can constrain search to a smaller window tailored to the remembered area of the rectangle.
Accordingly, another 20 runs were done with random search limited to a 300 x 300 square uniform distribution about the door window. The door window rectangle, and only the window rectangle, was found for all 20 trials. Furthermore, the average time to find it was reduced to only 12 seconds. In this manner distracting lines elsewhere in the image were effectively ignored.

Prediction

In Figure 4 the performance of our prediction scheme is illustrated. We must imagine that the first two images were taken during learning, and the other two (along with the four not pictured) were taken later as the robot consulted its map. The predicted feature is one that it wants to pick up while moving in order to effect a smooth handoff. The mean error magnitude over the six images of the sequence between the predicted and actual x coordinate of the feature was 14.1 pixels, and for the y coordinate it was 2.1 pixels, yielding a mean distance error of about 14.3 pixels. This is close enough that the robot could find it very quickly with a local search.

Figure 4: Prediction. (a) and (b) are calibration views taken at the start and finish of a three-foot leftward translation of the robot, with four tracked SSD features marked. (c) and (d) are the first and last images in a sequence of six taken at equal intervals along a ten-foot forward translation from (b). In them, only the image location of the feature marked with a square in (a) is unknown to the robot. The crosses indicate the predicted locations of this feature.

Servoing

Traversal-type servoing has been successfully done in a number of situations (such as approaching the door in Figure 3). We are working to improve the reliability of handoffs by addressing the angular error problem described above.

Discussion

One vulnerability of the scene-based framework is its assumption that an acceptable group of markers will always be in view.
For any kind of markers chosen, in a real-world environment there will be gaps. In the indoor environment expected by our implementation, for instance, there are not enough closely-spaced rectangles for the robot to handoff its way around a ninety-degree corner. Even straight corridors may be relatively featureless for long stretches, rendering them impassable. Clearly, we can use odometry locally as a second source of predictive information. Open-loop movements based on saved odometric information could be useful on a limited basis, say, to negotiate corners. The system we have presented is fairly robust with respect to slight angular and positional errors, and errors would not be cumulative.

Another possible solution is to consider a different kind of servoing. The servoing we have focused on so far is transformational, insofar as the robot tries, by moving, to make a tracked object change appearance to look like a target object. A different, homeostatic paradigm is suggested by road-following and wall-following systems (Turk et al. 1988). In them, the goal is not to change one sensory entity into another through movement, but rather to move while maintaining a certain constancy of sensory input. Road-followers seek to move forward while keeping the sides of the road centered. Something like this could be used to guide movement while no target is visible. If the robot can discern situations in which the maintenance of a visual constraint (e.g., staying centered in a hallway, keeping a building in front of it, etc.) can provide appropriate cues for movement, then it can bridge gaps between homing-type targets. The difficulty of this approach is that the robot does not know accurately, as with our framework, when something should be visible. Rather, it must be vigilant for the appearance of a marker signalling that it has arrived.
Some odometry, either in distance traveled or time spent, would help limit the time it spends looking for such a signal. With multiple markers in one scene, joining these individual discrimination criteria to constraints imposed by spatial relationships (above, next-to, etc.) as well as the invariant relationship used for prediction would lead to still greater confidence in matching. Finally, an important area of additional research is dynamically updating the map during motion. In particular, it is likely that certain markers are stable and easy to find, while others occasionally appear and disappear. Ideally, as the robot moves around it should modify its internal representation to more heavily weight more reliable markers, and possibly add new markers.

In conclusion, we have presented a vision-based navigation framework for mobile robots that requires no a priori environmental model and very little odometry, and implemented most of its fundamental components. The notion of the environment that it builds remains on the image level, both as stored and when generating movement. It is also modular, permitting easy modification, especially to its recognition and matching capabilities and its task description.

Acknowledgments

This research was supported by ARPA grant N00014-93-1-1235, Army DURIP grant DAAH04-95-1-0058, NSF grant IRI-9420982, and Yale University.

References

Barrett, E.; Brill, M.; Haag, N.; and Payton, P. 1992. Invariant Linear Methods in Photogrammetry and Model-Matching. In Mundy, J., and Zisserman, A., eds., Geometric Invariance in Computer Vision. Cambridge, Mass.: MIT Press.
Chaumette, F.; Rives, P.; and Espiau, B. 1991. Positioning of a Robot with Respect to an Object, Tracking It, and Estimating Its Velocity by Visual Servoing. In Proc. IEEE Inter. Conf. Robotics and Automation, 2248-2253. Sacramento, CA, April.
Engelson, S. P. 1994. Passive Map Learning and Visual Place Recognition. Ph.D. diss., Dept. of Comp. Sci., Yale Univ.
Fennema, C.; Hanson, A.; Riseman, E.; Beveridge, J.; and Kumar, R. 1990. Model-Directed Mobile Robot Navigation. IEEE Trans. Systems, Man, and Cybernetics 20(6): 1352-1369.
Hager, G. 1995. The 'X-Vision' System: A General-Purpose Substrate for Vision-Based Robotics. In Workshop on Vision for Robotics.
Hong, J.; Tan, X.; Pinette, B.; Weiss, R.; and Riseman, E. 1992. Image-Based Homing. IEEE Control Systems: 38-44.
Hutchinson, S.; Hager, G.; and Corke, P. 1995. A Tutorial Introduction to Visual Servo Control. IEEE Trans. Robotics and Automation.
Kortenkamp, D.; Weymouth, T.; Chown, E.; and Kaplan, S. 1992. A Scene-Based, Multi-Level Representation for Mobile Robot Spatial Mapping and Navigation. Technical Report CSE-TR-119-92, Univ. of Michigan.
Kosaka, A., and Kak, A. C. 1992. Fast Vision-Guided Mobile Robot Navigation Using Model-Based Reasoning and Prediction of Uncertainties. CVGIP: Image Understanding 56(3): 271-329.
Kuipers, B., and Byun, Y. T. 1991. A Robot Exploration and Mapping Strategy Based on a Semantic Hierarchy of Spatial Representations. Robotics and Autonomous Systems 8: 47-63.
Leonard, J.; Durrant-Whyte, H.; and Cox, I. 1990. Dynamic Map Building for an Autonomous Mobile Robot. In IEEE Inter. Workshop on Intelligent Robots and Systems, 89-95.
Rasmussen, C., and Hager, G. 1996. Robot Navigation Using Image Sequences. Technical Report DCS-TR-1103, Yale Univ.
Taylor, C., and Kriegman, D. 1994. Vision-Based Motion Planning and Exploration Algorithms for Mobile Robots. In Workshop on the Algorithmic Foundations of Robotics.
Turk, M.; Morgenthaler, D.; Gremban, K.; and Marra, M. 1988. VITS: A Vision System for Autonomous Land Vehicle Navigation. IEEE Trans. Pattern Analysis and Machine Intelligence 10(3): 342-361.
Zheng, Z., and Tsuji, S. 1992. Panoramic Representation for Route Recognition by a Mobile Robot. Inter. Journal of Computer Vision 9(1): 55-76.
Analysis of Utility-Theoretic Heuristics for Intelligent Adaptive Network Routing

Armin R. Mikler, Vasant Honavar, and Johnny S.K. Wong
Department of Computer Science
Iowa State University
Ames, Iowa 50011, USA
mikler|honavar|wong@cs.iastate.edu

Abstract

Utility theory offers an elegant and powerful theoretical framework for the design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. In this paper, we incrementally develop a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. We present an analysis of the properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics, and identify the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions for future research.

Introduction

With the unprecedented growth in the size and complexity of communication networks, the development of intelligent and adaptive approaches to network management (including such functions as routing, congestion control, etc.) has assumed considerable theoretical as well as practical significance. Knowledge representation and heuristic techniques (Pearl 1984) of artificial intelligence, utility-theoretic methods of decision theory, as well as techniques of adaptive control offer a broad range of powerful tools for the design of intelligent, adaptive, and autonomous communication networks. This paper develops and analyzes some utility-theoretic heuristics for adaptive routing in large communication networks.
Routing (Bertsekas & Gallager 1992) in a communication network refers to the task of propagating a message from its source towards its destination. For each message received, the routing algorithm at each node must select a neighboring node to which the message is to be sent. (Armin Mikler is currently a post-doctoral fellow at the Scalable Computing Laboratory of the Ames Laboratory (U.S. Department of Energy) at Iowa State University, Ames, IA 50011.) Such a routing algorithm may be required to meet a diverse set of often conflicting performance requirements (e.g., average message delay, network utilization, etc.). This makes routing an instance of a multi-criterion optimization problem.

For a network node to be able to make an optimal routing decision, as dictated by the relevant performance criteria, it requires not only up-to-date and complete knowledge of the state of the entire network but also an accurate prediction of the network dynamics during propagation of the message through the network. This, however, is impossible unless the routing algorithm is capable of adapting to network state changes in almost real time.

In practice, routing decisions in large communication networks are based on imprecise and uncertain knowledge of the current network state. This imprecision is a function of the network dynamics, the memory available for storage of network state information at each node, and the frequency of, and propagation delay associated with, updates of such state information. Thus, routing decisions have to be based on knowledge of the network state over a local neighborhood, supplemented by a summary of the network state as viewed from a given node. Motivated by these considerations, a class of adaptive heuristic routing algorithms has been developed over the past few years (Mikler, Wong, & Honavar 1994).
Experiments demonstrate that such algorithms have a number of interesting properties, including automatic load balancing and message delay minimization. The work described in this paper is a step toward the development of a theoretical framework for the design and analysis of such heuristics. In what follows, we draw upon concepts of utility theory (French 1986) to design and analyze utility-theoretic heuristics for routing in large communication networks. Various heuristics are designed and their properties are precisely analyzed. The paper concludes with a discussion of the relevance and limitations of the main results and some directions for further research.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Utility-Theoretic Heuristics for Routing

Routing messages in large communication networks so as to optimize some desired set of performance criteria presents an instance of a resource-bounded, multi-criterion, real-time optimization problem. Our proposed solution to this problem involves the use of utility-theoretic heuristics. Utility is a measure that quantifies a decision maker's preference for one action over another (relative to some criteria to be maximized) (French 1986). When the result of an action is uncertain, it is convenient to use the expected utility of each action to pick actions which maximize the expected utility. The heuristic function enables each node n_j in the network to select a best neighbor in its neighborhood to route a message M (which it has received or generated) towards its destination.

The utility U_i^d of node n_i (with respect to a destination n_d) is computed by a neighboring node n_j as n_j attempts to route a message M that it has received, along a desired (e.g., minimum delay) path, to M's destination, n_d. A node n_j preference-orders its neighbors n_i according to their respective utilities.
We say that the router at n_j is indifferent to the choice between two neighbors n_k and n_l if U_k^d = U_l^d (where n_d is the destination of the message M being routed by n_j). We denote the indifference between two nodes as n_k ~ n_l. We say that a neighboring node n_k is preferred (by the router at n_j) over another neighbor n_l if U_k^d > U_l^d. We denote this preference by n_k ≻ n_l.

For the purpose of the analysis that follows, it is assumed that the network is a regular rectangular grid (with adjacent nodes a unit distance from each other). Additional assumptions concerning load and load dynamics are made as necessary. A suitably defined reward function provides the directional guidance necessary to route each message towards its destination.

In the regular grid network, let D_{i,d} denote the Manhattan distance between a node n_i and n_d. Other topologies may require the use of other distance measures. We define the partial reward for node n_i as R_i^d = f_R(D_{i,d}), where f_R is a reward function chosen such that for all i, j: D_{i,d} ≤ D_{j,d} ⟺ f_R(D_{j,d}) ≤ f_R(D_{i,d}). There are many possible choices for the reward function f_R(·). A particular example of f_R(·) is given by f_R(D_{i,d}) = (m + n) − D_{i,d}, where n and m are the dimensions of the grid network. Note that the results that follow are independent of any particular choice of f_R(·), so long as the reward increases as a message approaches its destination. We define the cumulative reward R^P obtained by a message M traveling along a path P (from its source n_s to its destination n_d) as R^P = Σ_{n_i ∈ P} R_i^d.

At each node n_i along its path P, the delay encountered by a message M is modeled by a non-negative, bounded cost C_i. That is, for all i, 0 ≤ C_i ≤ ζ. It is further assumed that the cost C_i remains constant during the time it takes to make a routing decision for message M at node n_i. If cumulative delay is to be minimized, a natural interpretation of C_i is the delay (on account of load) at n_i.
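To make the reward and payoff definitions concrete, here is a minimal Python sketch on an m x n grid. The function and variable names are ours, and the linear form of f_R is an illustrative assumption; any function that increases as a message nears its destination would do.

```python
def manhattan(a, b):
    """Manhattan distance D_{i,d} between grid nodes a = (x, y) and b."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def partial_reward(node, dest, m, n):
    """An example f_R(D_{i,d}) = (m + n) - D_{i,d}: grows as node nears dest."""
    return (m + n) - manhattan(node, dest)

def partial_payoff(node, dest, cost, m, n):
    """Net partial payoff Z_i^d = R_i^d - C_i: reward minus the (load) cost."""
    return partial_reward(node, dest, m, n) - cost
```

On a 5 x 5 grid, a node two hops closer to the destination receives a strictly larger partial reward, which is all the monotonicity condition on f_R requires.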
However, since delays can become unbounded when there is queueing, it may be necessary to discard some messages in order to keep the delay bounded, at the expense of message loss. If cumulative load is to be minimized, C_i is guaranteed to be bounded by the maximum utilization ρ ≤ 1. The total cost incurred by a message along a path P is given by C^P = Σ_{n_i ∈ P} C_i. We can now define the net partial payoff Z_i^d received by a message M when it reaches the node n_i on its way to its destination n_d as Z_i^d = R_i^d − C_i. Correspondingly, the total payoff along a path P is given by Z^P = R^P − C^P. Let Π be a minimum cost path from a source n_s to a destination n_d. The cost C^Π along this path is given by C^Π = min over all P of C^P.

In the discussion that follows, in order to simplify our analysis, we proceed under the assumption that the network is uniformly loaded. This assumption is captured by the following definition:

Definition 1: If for all i, C_i = κ (0 ≤ κ ≤ ζ), we refer to the network as a uniform cost network.

In a uniform cost network, a simple utility function U^0 defined by U_i^d = Z_i^d is sufficient to route each message along a minimum cost path to its destination. The uniform cost assumption renders the cost component in the payoff function irrelevant to the routing decision. This is no longer true when the network is not a uniform cost network. In what follows, we relax the uniform cost assumption by allowing a single hotspot (a node with a high load relative to its neighbors) in an otherwise uniform cost network.

Routing in the Presence of a Single Hotspot

Definition 2: A hotspot n_h in an otherwise uniform cost network is a single network node which has a higher load than its neighbors, so that a message M traveling through it incurs a cost C_h > κ (where C_i = κ for all i ≠ h). Note that since the costs C_i are bounded by ζ, it follows that C_h ≤ ζ. Further note that the above definition of a hotspot does not say anything about the relative difference in the costs C_h and C_i.
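The effect of routing on the naive utility U^0 = Z_i^d can be illustrated with a toy greedy simulator. All names, the reward function, and the tie-breaking rule are illustrative assumptions, not the paper's implementation: at each hop the message moves to the in-bounds neighbor maximizing reward minus cost, never bouncing back to the previous node.

```python
def route(src, dst, cost, size):
    """Greedily forward a message on a size x size grid, one hop at a time,
    to the neighbor with maximal payoff; returns (path, total_cost)."""
    m = n = size
    path, total = [src], 0
    cur, prev = src, None
    while cur != dst:
        nbrs = [(cur[0] + dx, cur[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        # Stay on the grid and avoid bouncing back to the previous node.
        nbrs = [p for p in nbrs
                if 0 <= p[0] < m and 0 <= p[1] < n and p != prev]
        # Naive utility: reward (m + n) - D_{p,d} minus the cost at p.
        best = max(nbrs, key=lambda p: (m + n) - abs(p[0] - dst[0])
                                       - abs(p[1] - dst[1]) - cost(p))
        prev, cur = cur, best
        total += cost(cur)
        path.append(cur)
    return path, total
```

With a uniform cost of 1 and a single expensive node at (1, 1), the greedy message happens to skirt the hotspot in this toy instance; the analysis in the text shows this is not guaranteed in general.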
A more Multiagent Problem Solving 97 realistic definition of a hotspot might insist that the cost of routing a message through a hotspot is signif- icantly larger than that of routing the same message through a node in the neighborhood of the hotspot. Also, when a network deviates substantially from the uniform cost assumption, it is more useful to focus on the load distribution in the vicinity of a node rather than hotspots. However, to make the analysis mathe- matically tractable, the discussion that follows focuses on routing in an otherwise uniform cost networks with a single hotspot. As the uniform cost assumption is relaxed by allow- ing a single hotspot nh with cost Ch > Cj Vj # h in the network, it is easy to show that relying on partial payoffs alone as utilities for routing messages can result in sub-optimal routes. Consider a grid network with node coordinates increasing as a message M travels east and south. From the uniform cost assumption, we have Ci = Cj = K Vi, j # h. Let x~, g8, xd, and yd be the x and y coordinates of M’s source and destination, respectively. Let xh and ?Jh be the x and y coordinates of a hotspot in one of the following configurations: 1. 2s 5 xh 5 xd A ys 5 Yh 2 Yd 2. 2s 2 xh > xd A ys 2 yh 2 Yd Here, the probability that a shortest path from n, to ?2d passes through the hotspot nh is non-zero. That is, 3 a node ni in the neighborhood of hotspot nh that must decide how to route M so as to minimize the total cost incurred by M. As we show below, if this decision is based on a preference ordering induced by the naive utility function 77’ given by Uid = Zt, messages can be routed through the hotspot thereby incurring a higher cost than they would have otherwise. Assumption 1 For the discussion below, we as- sume that the reward functions chosen guarantee that Vnk Vni in the network such that 1 Rf - Ri I> < when- ever Di,d # Dj,d. 
This ensures that the cost C_i of a node n_i (and of n_h in particular) does not offset the guidance provided through R_i^d unless two nodes with equal rewards are being compared. In the following we distinguish four canonical cases (see Figure 1). In our analysis we focus on configuration 1 above; similar arguments hold for configuration 2.

Case 0: This case combines four scenarios of placing the nodes n_s, n_d, and n_h in the grid network, each of which presents a trivial routing problem. In these scenarios, at least two of the nodes n_s, n_d, and n_h are identical. That is, n_s = n_d = n_h; n_s = n_d; n_s = n_h ≠ n_d; and n_s ≠ n_h = n_d.

[Figure 1: Sample node placement.]

Clearly, in the first two scenarios, no routing decisions are needed as the message source coincides with the destination. Whenever the message source coincides with the hotspot, as in the third scenario, the routing algorithm will select a neighbor n_k ∈ H_s with the highest utility. Hence, the routing algorithm performs as in the case of a uniform cost network (without hotspots). For the fourth scenario, Assumption 1 assures that n_d yields the highest partial reward R_i^d for all i, despite the fact that the cost incurred by hotspot conditions reduces its partial payoff. Hence, routing decisions can be made without taking the cost C_i into consideration, as in the case of a network without hotspots.

Case 1: Let PA_{i,j} denote the number of minimum hop paths from a node n_i to a node n_j. This case encompasses all placements of the nodes n_s, n_h, and n_d such that

1. PA_{s,h} > 1 ∧ PA_{h,d} > 1, or
2. PA_{s,h} = 1 ∧ PA_{h,d} ≥ 1, where PA_{s,d} > 1.

Here, the hotspot n_h does not share either the x or y coordinate of n_s or n_d. That is, (x_s < x_h < x_d) ∧ (y_s < y_h < y_d). Here, all partial minimum hop paths from n_s to n_h may be part of a minimum cost path from n_s to n_d if all nodes n_i that neighbor the hotspot take action to route M so as to circumvent n_h. Thus, the utility function U^0 given by U_i^d = Z_i^d is guaranteed to route M on a minimum cost path to its destination n_d.
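The path counts PA_{i,j} used in the case analysis have a closed form on a rectangular grid: a minimum-hop path consists of dx moves along one axis and dy along the other, interleaved in any order, so PA_{i,j} = C(dx + dy, dx). A small helper (the function name is ours):

```python
from math import comb

def num_min_hop_paths(a, b):
    """Number of minimum-hop (monotone) grid paths between nodes a and b."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return comb(dx + dy, dx)
```

Note that the count is 1 exactly when the two nodes share a coordinate, which is the situation exploited in Cases 2 and 3 below.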
Lemma 1: In a uniform cost network with a single hotspot n_h located such that (x_s < x_h < x_d) ∧ (y_s < y_h < y_d), a routing algorithm which propagates a message M such that U^0 is maximized at every intermediate step will yield an optimal path Π with cost C^Π.

Case 2: Here, n_s, n_d, and n_h are placed such that (x_s < x_h < x_d) ∧ (y_s < y_h = y_d) or (x_s < x_h = x_d) ∧ (y_s < y_h < y_d); i.e., (PA_{s,h} > 1) ∧ (PA_{h,d} = 1). Assuming the former, there exists a node n_i with coordinates (x_i, y_i), with (x_s < x_i < x_h) ∧ (y_i = y_h = y_d), from which the number of minimum hop routes PA_{i,d} = 1. Since in a uniform cost network n_k ~ n_l for all k, l ≠ h, the naive utility function U^0 can guide a message M through n_i, thereby committing to a path P with cost C^P > C^Π. Assuming that M is only routed using utilities to choose among minimum hop routes, the additional cost (C^P − C^Π) is inflicted on M by n_h. If M is permitted to deflect from a minimum hop route, the additional cost (C^P − C^Π) is inflicted either by n_h itself or by the extended length of P in circumventing n_h.

Case 3: This scenario consists of all placements of n_s, n_d, and n_h such that (x_s = x_h = x_d) ∧ (y_s ≤ y_h ≤ y_d) or (x_s ≤ x_h ≤ x_d) ∧ (y_s = y_h = y_d). Since there is only a single optimal path Π from n_s to n_d, i.e., PA_{s,d} = 1, a message M must either visit n_h or deflect from the minimum hop path in order to circumvent n_h. U^0, however, is not sufficiently informative to guarantee an optimal routing decision. Hence, M may be routed along a path P for which C^P > C^Π.

Assumption 2: In the following we assume that a node n_j, upon receiving a message M from a neighbor node n_i ∈ H_j, will refrain from propagating M back to n_i. This is a natural assumption that is meant to avoid the so-called bouncing of messages back to a node from which they were routed.
Lemma 2: In a uniform cost network with a single hotspot n_h, a routing algorithm based on U^0 will deflect a message M at most once in order to circumvent n_h, provided bouncing is avoided (via Assumption 2).

The analysis of the performance of a routing algorithm based on U^0 for each of the four cases above yields the following theorem:

Theorem 1: In a uniform cost network with a single hotspot n_h with C_h > κ (where for all i ≠ h, C_i = κ), a routing algorithm which propagates a message M such that U^0 is maximized at every intermediate step is guaranteed to yield a path P with cost C^P such that C^P − C^Π ≤ max((C_h − κ), 2κ).

The proof of this theorem is given in (Mikler, Honavar, & Wong 1996).

Eliminating Suboptimality Using a Modified Utility Function

Z_i^d is determined solely from local information. The sub-optimal routing scenarios discussed above arise primarily as a result of a lack of knowledge at n_i, at the time it is routing a message M to a neighbor n_j, regarding the likely cost of completing the path from n_j to the destination of M, namely n_d. As shown in section 2.4, source-hotspot-destination configurations corresponding to the scenarios described in Case 2 and Case 3 can result in sub-optimal routes (i.e., C^P > C^Π) when routing decisions are based on U^0.

In what follows, we modify U^0 to obtain a utility function which is guaranteed to eliminate the suboptimal routing decisions that arise in source-hotspot-destination placements corresponding to the scenarios in Case 2 and Case 3. We proceed in two steps: First, we define a utility function U^1 that eliminates suboptimal routing decisions that arise in scenarios corresponding to Case 3. We then modify U^1 by introducing a cost estimator function to obtain a utility function U^2 designed to eliminate suboptimal routing decisions that arise in Case 2 scenarios as well.
Eliminating Sub-Optimality in Case 3

Definition 3: Let U^1 be the utility function given by:

U_j^d = R_j^d  if κ < C_j < 3κ, no node n_k has R_k^d = R_j^d, and n_j ≠ n_d
U_j^d = Z_j^d  otherwise

U^1 exploits the fact that messages are to be routed in a uniform cost network with a single hotspot. If routing decisions are based on the preference ordering induced by U^1 in an otherwise uniform cost network with a single hotspot, every message originating at a source n_s with a destination n_d that correspond to a source-hotspot-destination placement described in Case 3 is guaranteed to be propagated along an optimal path Π between n_s and n_d. Using U^1, n_i can decide whether to propagate M through a hotspot n_h in its neighborhood or to circumvent the hotspot by routing M through a different neighbor n_k ≠ n_h. In other words, the preference ordering induced by U^1 ensures that at a node neighboring a hotspot in a Case 3 scenario we have:

(C_h − C_k) = (C_h − κ) > 2κ ⟺ n_k ≻ n_h
(C_h − C_k) = (C_h − κ) ≤ 2κ ⟺ n_h ≻ n_k

Thus all routing decisions based on U^1 in Case 3 scenarios result in optimal (minimum cost) routes. However, it is easy to see that U^1 does not eliminate the possibility of a sub-optimal route in source-hotspot-destination configurations corresponding to the scenario in Case 2.

Eliminating Sub-Optimality in Case 2

As shown by the preceding analysis, U^1 can result in a sub-optimal routing decision in a source-hotspot-destination configuration corresponding to the scenario in Case 2. In particular, any routing decision in a configuration corresponding to Case 2 will result in a sub-optimal path P if it results in the propagation of a message M to a node n_k ∈ P such that x_k < x_h < x_d ∧ y_k = y_h = y_d, or x_k = x_h = x_d ∧ y_k < y_h < y_d. Routing decisions based on a preference ordering induced by U^1 can lead to such a situation since, in a neighborhood H_i of n_i such that n_h ∉ H_i, for all n_j, n_k ∈ H_i, n_k ~ n_j provided R_k^d = R_j^d.
Note that Case 2 scenarios include all placements of n_s, n_h, and n_d such that for every node n_i with x_i ≠ x_d ∧ y_i ≠ y_d there exist k, l such that (n_k ∈ Π) ∨ (n_l ∈ Π). These observations suggest the possibility of using an estimate of the cost along paths from n_k to n_d as a component of a modified utility function U^2, so as to induce a preference ordering between nodes (where no such preference ordering is induced by U^1) and thereby eliminate suboptimal routing decisions altogether. In other words, U^2 should be able to induce a preference ordering among nodes n_k and n_l in the neighborhood of a node n_i (the node making the routing decision for a message M) such that: (n_k ∈ Π) ∧ (n_l ∉ Π) ⟹ n_k ≻ n_l.

We now proceed to define a cost estimator function E_k^d as follows:

Definition 4: A cost estimator function E_k^d(·) estimates the cost E_k^d of a minimal cost path from a node n_k to a destination n_d.

It would be desirable for the cost estimator function defined above to help U^2 induce the preference ordering necessary to guarantee routing along an optimal path in the scenario corresponding to Case 2. We capture this property by defining what are called admissible cost estimator functions.

Definition 5: A cost estimator function is said to be admissible if, for all nodes n_i in the network and for all nodes n_k, n_l in the neighborhood H_i of n_i, it is guaranteed that (n_k ∈ Π) ∧ (n_l ∉ Π) ⟹ E_k^d < E_l^d.

Definition 6: We define a utility function U^2 as follows:

U_j^d = Z_j^d  if x_s = x_d ∨ y_s = y_d
U_j^d = Z_j^d − E_j^d  otherwise

In the discussion that follows, it is assumed that the cost estimator function E_k^d is admissible. The estimate returned by E_k^d(·) must be based, at the very least, on some knowledge of the current cost distribution in the network. More precise estimates would require knowledge of the network dynamics.
If the costs associated with each node are allowed to change with time, as would be the case in a more realistic routing task, then since E_k^d is computed at the time a message M is being considered for propagation through n_k to a destination n_d, E_k^d has to reflect changes in network load over time. We need to represent at each node the cost distribution over the network in a form that is independent of specific destination nodes (because the destinations become known only after the arrival of the respective messages). Any such representation, in order to be useful in practice in large networks, must not require the storage and update at (or broadcast to) each node of cost values for all the nodes in the network. Ideally, it must adequately summarize the load values in large regions of the network as viewed from a given node.

These considerations (among others) led us to define a view, V_k, which is maintained in every node in the network (Mikler, Wong, & Honavar 1994). In a rectangular grid network, this view consists of four components, one for each of the four directions: north, south, east, and west. Thus we have: V_k = [V_k^N, V_k^S, V_k^E, V_k^W]. Each component V_k^δ (δ ∈ {N, S, E, W}) represents a weighted average of the costs C_i along the minimum hop path from n_k to the border of the grid network in the direction specified by δ. Consider two nodes n_i and n_k located such that n_k ∈ H_i and n_k is to the east of n_i, i.e., x_i < x_k ∧ y_i = y_k. Then V_i^E is given by:

V_i^E = (C_k + V_k^E) / 2

V_i^N, V_i^S, and V_i^W are computed using analogous formulae.

In the discussion that follows, we assume that sufficient time has elapsed for the view computation to stabilize following major load changes in the network before the view is used in the computation of cost estimates using E_k^d(·). In practice, this assumption need not be satisfied exactly, so long as the views are adequately precise to ensure the admissibility of the cost estimator function defined below.
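The recursive view update above can be sketched for one direction. Here a single row of node costs is swept from the east border inward; taking the border node's view to be zero is our assumption, since the boundary case is not spelled out in the text.

```python
def east_views(costs):
    """costs: node costs along one row, ordered west to east.
    Returns the eastward view V^E for each node, with the
    paper's averaging V_i^E = (C_k + V_k^E) / 2 for the eastern
    neighbor n_k; the east-border node's view is taken as 0."""
    n = len(costs)
    views = [0.0] * n
    for i in range(n - 2, -1, -1):   # sweep from the east border inward
        k = i + 1                    # eastern neighbor n_k
        views[i] = (costs[k] + views[k]) / 2.0
    return views
```

The halving at each hop makes the view a geometrically weighted average, so nearby costs dominate while distant regions still contribute, which is exactly the destination-independent summary the text calls for.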
Assume that n_d is located such that x_s < x_d ∧ y_s < y_d. Let D_i^x = |x_i − x_d| and D_i^y = |y_i − y_d| denote the distance from n_i to n_d in the x and y direction, respectively. E_i^d(·) is then given by:

E_i^d = D_i^x · V_i^E + D_i^y · V_i^S

It is easy to verify that this estimator (which is one of several possible alternatives) is admissible.

Lemma 3: For all nodes n_i in the network, and for each message M from a source n_s to a destination n_d that reaches a node n_i, the routing decision at n_i based on the preference ordering induced by U^2 will route M along a path P selected only from the set of minimum hop paths from n_i to n_d, unless PA_{i,d} = 1 and (n_h ∈ P) ∧ (n_h ∈ H_i).

The preceding discussion sets the stage for Theorem 2 (proved in Mikler, Honavar, & Wong 1996), which establishes a major property of the utility function U^2: namely, that it eliminates suboptimal routes in an otherwise uniformly loaded grid network with a single hotspot.

Theorem 2: In a uniform cost network with a single hotspot n_h with an associated cost C_h > κ (where for all i ≠ h, C_i = κ), a routing algorithm which makes routing decisions at each node based on a preference ordering induced by U^2 is guaranteed to propagate each message M along a minimum cost path Π.

Discussion and Summary

In this paper, we have formulated some simple utility-theoretic heuristic decision functions for guiding messages along a near-minimum-delay path in a large network. We have analyzed some of the interesting properties of such heuristics under a set of simplifying assumptions regarding network topology and load dynamics. For a network with a regular grid topology and certain assumptions about load dynamics, we have identified the precise conditions under which a simple and computationally efficient utility-theoretic heuristic decision function is guaranteed to route a message along a minimum delay path.
This analysis was, at least in part, motivated by a desire to understand and explain, in more precise mathematical terms, the results of a wide range of experiments (Mikler, Wong, & Honavar 1994) using heuristics that are very similar in spirit to U^2.

Given the simplifying assumptions used in our analysis, it is natural to question the applicability of the results when those assumptions may not hold. It is worth pointing out that experiments with heuristics similar to U^2 display automatic load balancing in the network. This suggests that the simplifying assumption of uniform network load (except at a hotspot) is useful, at least as a crude first approximation of a more realistic scenario. In the presence of hotspots, the routing function compensates for the change by redistributing traffic away from the hotspots. This suggests that our analytical results are likely to be useful in guiding the design of utility-theoretic heuristics for more complex networks. Work in progress is aimed at extending our analysis to a range of increasingly complex scenarios, such as irregular grids, non-uniform load distributions, and multiple hotspots or contiguous hotspot regions.

The performance of utility-theoretic heuristics, as described in this paper, critically depends on the existence of an adequately precise estimator of the relevant performance measure. It would be useful to analyze different estimators and the resulting heuristics, especially since the design of good heuristics for complex problems is commonly based on the solution of simplified or relaxed versions of the original problem (Pearl 1984). Other interesting research directions include the investigation of methods for the adaptation or tuning of heuristics in real time.
For this we may draw upon machine learning techniques that modify existing heuristics as a function of measured network behavior, or as a function of information gathered through directed experiments initiated by the network during otherwise idle periods.

References

Bertsekas, D., and Gallager, R. 1992. Data Networks. Englewood Cliffs, NJ: Prentice-Hall.
French, S. 1986. Decision Theory: An Introduction to the Mathematics of Rationality. New York: Halsted Press.
Mikler, A.R.; Wong, J.S.K.; and Honavar, V.G. 1994. Quo Vadis - A Framework for Intelligent Routing in Large Communication Networks. Technical Report TR94-24, Dept. of Computer Science, Iowa State University. To appear in The Journal of Systems and Software.
Mikler, A.R.; Honavar, V.G.; and Wong, J.S.K. 1996. Utility-Theoretic Heuristics for Intelligent Adaptive Routing in Large Communication Networks. Technical Report TR95-14, Dept. of Computer Science, Iowa State University.
Pearl, J. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Reading, MA: Addison-Wesley.
Integrating Grid-Based and Topological Maps for Mobile Robot Navigation

Sebastian Thrun†§
†Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are considerably difficult to learn in large-scale environments. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and Bayesian integration. Topological maps are generated on top of the grid-based maps by partitioning the latter into coherent regions. By combining both paradigms, grid-based and topological, the approach presented here gains the best of both worlds: accuracy/consistency and efficiency. The paper gives results for autonomously operating a mobile robot equipped with sonar sensors in populated multi-room environments.

Introduction

To efficiently carry out complex missions in indoor environments, autonomous mobile robots must be able to acquire and maintain models of their environments. The task of acquiring models is difficult and far from being solved. The following factors impose practical limitations on a robot's ability to learn and use accurate models:

1. Sensors. Sensors often are not capable of directly measuring the quantity of interest (such as the exact location of obstacles).
2. Perceptual limitations. The perceptual range of most sensors is limited to a small range close to the robot. To acquire global information, the robot has to actively explore its environment.
3. Sensor noise.
Sensor measurements are typically corrupted by noise, the distribution of which is often unknown (it is rarely Gaussian).
4. Drift/slippage. Robot motion is inaccurate. Odometric errors accumulate over time.
5. Complexity and dynamics. Robot environments are complex and dynamic, making it principally impossible to maintain exact models.
6. Real-time requirements. Time requirements often demand that the internal model be simple and easily accessible. For example, fine-grain CAD models are often disadvantageous if actions must be generated in real-time.

§Institut für Informatik, Universität Bonn, D-53117 Bonn, Germany

Recent research has produced two fundamental paradigms for modeling indoor robot environments: the grid-based (metric) paradigm and the topological paradigm. Grid-based approaches, such as those proposed by Moravec and Elfes (Moravec 1988) and many others, represent environments by evenly-spaced grids. Each grid cell may, for example, indicate the presence of an obstacle in the corresponding region of the environment. Topological approaches, such as those described in (Engelson & McDermott 1992; Kortenkamp & Weymouth 1994; Kuipers & Byun 1990; Mataric 1994; Pierce & Kuipers 1994), represent robot environments by graphs. Nodes in such graphs correspond to distinct situations, places, or landmarks (such as doorways). They are connected by arcs if there exists a direct path between them.

Both approaches to robot mapping exhibit orthogonal strengths and weaknesses. Occupancy grids are comparatively easy to construct and to maintain even in large-scale environments (Buhmann et al. 1995; Thrun & Bücken 1996). Since the intrinsic geometry of a grid corresponds directly to the geometry of the environment, the robot's position within its model can be determined by its position and orientation in the real world, which, as shown below, can be determined sufficiently accurately using only sonar sensors, in environments of moderate size.
As a pleasing consequence, different positions for which sensors measure the same values (i.e., situations that look alike) are naturally disambiguated in grid-based approaches. This is not the case for topological approaches, which determine the position of the robot relative to the model based on landmarks or distinct sensory features. For example, if the robot traverses two places that look alike, topological approaches often have difficulty determining whether these places are the same or not (particularly if these places have been reached via different paths). Also, since sensory input usually depends strongly on the view-point of the robot, topological approaches may fail to recognize geometrically nearby places.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

On the other hand, grid-based approaches suffer from their enormous space and time complexity. This is because the resolution of a grid must be fine enough to capture every important detail of the world. Compactness is a key advantage of topological representations. Topological maps are usually more compact, since their resolution is determined by the complexity of the environment. Consequently, they permit fast planning, facilitate interfacing to symbolic planners and problem-solvers, and provide more natural interfaces for human instructions. Since topological approaches usually
do not require the exact determination of the geometric position of the robot, they often recover better from drift and slippage, phenomena that must constantly be monitored and compensated for in grid-based approaches. To summarize, both paradigms have orthogonal strengths and weaknesses, which are summarized in Table 1.

Grid-based approaches:
+ easy to build, represent, and maintain
+ recognition of places (based on geometry) is non-ambiguous and view-point independent
+ facilitates computation of shortest paths
- planning inefficient, space-consuming (resolution does not depend on the complexity of the environment)
- requires accurate determination of the robot's position
- poor interface for most symbolic problem solvers

Topological approaches:
+ permits efficient planning, low space complexity (resolution depends on the complexity of the environment)
+ does not require accurate determination of the robot's position
+ convenient representation for symbolic planners, problem solvers, natural language interfaces
- difficult to construct and maintain in larger environments
- recognition of places (based on landmarks) often ambiguous, sensitive to the point of view
- may yield suboptimal paths

Table 1: Comparison of grid-based and topological approaches to map building.

This paper advocates integrating both paradigms to gain the best of both worlds. The approach presented here combines both grid-based (metric) and topological representations. To construct a grid-based model of the environment, sensor values are interpreted by an artificial neural network and mapped into probabilities for occupancy. Multiple interpretations are integrated over time using Bayes' rule. On top of the grid representation, more compact topological maps are generated by splitting the metric map into coherent regions, separated through critical lines. Critical lines correspond to narrow passages such as doorways.
By partitioning the metric map into a small number of regions, the number of topological entities is several orders of magnitude smaller than the number of cells in the grid representation. Therefore, the integration of both representations has unique advantages that cannot be found for either approach in isolation: the grid-based representation, which is comparatively easy to construct and maintain in environments of moderate complexity (e.g., 20 by 30 meters), models the world consistently and disambiguates different positions. The topological representation, which is grounded in the metric representation, facilitates fast planning and problem solving.

The robots used in our research are shown in Figure 1. All robots are equipped with an array of 24 sonar sensors. Throughout this paper, we will restrict ourselves to the interpretation of sonar sensors, although the methods described here have (in a prototype version) also been operated using cameras and infrared light sensors in addition to sonar sensors, using the image segmentation approach described in (Buhmann et al. 1995). The approach proposed here has been tested extensively in various indoor environments, and is now distributed commercially by a leading mobile robot manufacturer (Real World Interface, Inc.) as part of the regular navigation software.

Figure 1: The robots used in our research: RHINO (University of Bonn), XAVIER, and AMELIA (both CMU).

Metric Maps

The metric maps considered here are two-dimensional, discrete occupancy grids, as originally proposed in (Elfes 1987; Moravec 1988) and since implemented successfully in various systems. Each grid cell (x, y) in the map has an occupancy value attached, which measures the subjective belief whether or not the center of the robot can be moved to the center of that cell (i.e., the occupancy map models the configuration space of the robot; see e.g., (Latombe 1991)).
This section describes the four major components of our approach to building grid-based maps (see also (Thrun 1993)): (1) sensor interpretation, (2) integration, (3) position estimation, and (4) exploration. Examples of metric maps are shown in various places in this paper.

Sensor Interpretation

To build metric maps, sensor readings must be “translated” into occupancy values occ_{x,y} for each grid cell (x, y). The idea here is to train an artificial neural network using Back-Propagation to map sonar measurements to occupancy values. The input to the network consists of the four sensor readings closest to (x, y), along with two values that encode (x, y) in polar coordinates relative to the robot (angle to the first of the four sensors, and distance). The output target for the network is 1 if (x, y) is occupied, and 0 otherwise. Training examples can be obtained by operating a robot in a known environment and recording its sensor readings; notice that each sonar scan can be used to construct many training examples for different x-y coordinates. In our implementation, training examples are generated with a mobile robot simulator.

Figure 2 shows three examples of sonar scans along with their neural network interpretation. The darker a value in the circular region around the robot, the larger the occupancy value computed by the network. Figures 2a&b depict situations in a corridor. Situations such as the one shown in Figure 2c, which defy simple interpretation, are typical for cluttered indoor environments.

Integration Over Time

Sonar interpretations must be integrated over time to yield a single, consistent map.
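The network's input encoding described above can be sketched as follows. The beam-selection and normalization details are assumptions for illustration; the paper does not fully specify them.

```python
import math

def encode_training_example(sonar, robot_xy, robot_theta, cell_xy, occupied):
    """Build one (input, target) pair for the interpretation network.

    `sonar` is a list of (beam_angle, range) readings in the robot frame.
    The input consists of the four readings whose beam axes are closest in
    angle to the query cell, plus the cell's polar coordinates relative to
    the robot (angle to the first of the four sensors, and distance).
    """
    dx = cell_xy[0] - robot_xy[0]
    dy = cell_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    bearing = (math.atan2(dy, dx) - robot_theta) % (2 * math.pi)
    # Four sensors closest in angle to the cell's bearing.
    nearest = sorted(
        sonar,
        key=lambda s: abs(((s[0] - bearing + math.pi) % (2 * math.pi)) - math.pi)
    )[:4]
    angle_to_first = ((nearest[0][0] - bearing + math.pi) % (2 * math.pi)) - math.pi
    inputs = [r for _, r in nearest] + [angle_to_first, dist]
    target = 1.0 if occupied else 0.0
    return inputs, target
```

Each sonar scan yields many such pairs, one per nearby grid cell, which is why a single scan produces a large number of training examples.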
To do so, it is convenient to interpret the network's output for the t-th sensor reading (denoted by s^{(t)}) as the probability that a grid cell (x, y) is occupied, conditioned on that sensor reading: Pr(occ_{x,y} | s^{(t)}). A map is obtained by integrating these probabilities for all available sensor readings, denoted by s^{(1)}, s^{(2)}, ..., s^{(T)}. In other words, the desired occupancy value for each grid cell (x, y) can be expressed as the probability Pr(occ_{x,y} | s^{(1)}, s^{(2)}, ..., s^{(T)}), which is conditioned on all sensor readings. A straightforward approach to estimating this quantity is to apply Bayes' rule (Moravec 1988; Pearl 1988). To do so, one has to assume independence of the noise in different readings. More specifically, given the true occupancy of a grid cell (x, y), the conditional probability Pr(s^{(t)} | occ_{x,y}) must be assumed to be independent of Pr(s^{(t')} | occ_{x,y}) for t ≠ t'. This assumption is not implausible; in fact, it is commonly made in approaches to building occupancy grids. The desired probability can now be computed as follows:

$$\Pr(occ_{x,y}\mid s^{(1)},\dots,s^{(T)}) \;=\; 1-\left(1+\frac{\Pr(occ_{x,y})}{1-\Pr(occ_{x,y})}\,\prod_{t=1}^{T}\frac{\Pr(occ_{x,y}\mid s^{(t)})}{1-\Pr(occ_{x,y}\mid s^{(t)})}\cdot\frac{1-\Pr(occ_{x,y})}{\Pr(occ_{x,y})}\right)^{-1}$$

Here Pr(occ_{x,y}) denotes the prior probability for occupancy (which, if set to 0.5, can be omitted in this equation). Notice that this formula can be used to update occupancy values incrementally. An example map of a competition ring constructed at the 1994 AAAI autonomous robot competition is shown in Figure 3.

Figure 2: Sensor interpretation: three example sonar scans (top row) and local occupancy maps (bottom row), generated by the neural network.

Position Estimation

The accuracy of the metric map depends crucially on the alignment of the robot with its map. Unfortunately, slippage and drift can have devastating effects on the estimation of the robot position. Identifying and correcting for slippage and drift is therefore imperative for grid-based approaches to robot navigation (Feng, Borenstein, & Everett 1994; Rencken 1993).
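The incremental form of the Bayes update can be sketched in a few lines; `p_map` holds the current map estimate Pr(occ | s_1..s_{t-1}) and `p_obs` is the network's interpretation Pr(occ | s_t) of the newest reading.

```python
def bayes_update(p_map, p_obs, prior=0.5):
    """Fold Pr(occ | s_t) into Pr(occ | s_1..s_{t-1}) using the odds form
    of Bayes' rule; with prior = 0.5 the prior terms cancel out."""
    odds = (p_map / (1.0 - p_map)) \
         * (p_obs / (1.0 - p_obs)) * ((1.0 - prior) / prior)
    return odds / (1.0 + odds)   # equals 1 - (1 + odds)^-1
```

Initializing every cell to the prior and applying this update per reading reproduces the batch product formula, which is why the map can be maintained incrementally.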
Figure 4 gives an example that illustrates the importance of position estimation in grid-based robot mapping. In Figure 4a, the position is determined solely based on dead-reckoning. After approximately 15 minutes of robot operation, the position error is approximately 11.5 meters. Obviously, the resulting map is too erroneous to be of practical use.

Figure 3: Grid-based map (23.1 by 32.2 meters), constructed at the 1994 AAAI autonomous mobile robot competition.

Figure 4b is the result of exploiting and integrating three sources of information:

Wheel encoders. Wheel encoders measure the revolution of the robot's wheels. Based on their measurements, odometry yields an estimate of the robot's position at any point in time. Odometry is very accurate over short time intervals.

Map correlation. Whenever the robot interprets an actual sensor reading, it constructs a “local” map (such as the ones shown in Figure 2). The correlation of the local map and the corresponding section of the global map is a measure of their correspondence (Schiele & Crowley 1994). Thus, the correlation, which is a function of the robot position, gives a second source of information for estimating the robot's position.

Wall orientation. The third source of information estimates and memorizes the global wall orientation (Crowley 1989; Hinkel & Knieriemen 1988). This approach rests on the restrictive assumption that walls are either parallel or orthogonal to each other, or differ by more than 15 degrees from these canonical wall directions. In the beginning of robot operation, the global orientation of walls is estimated by searching for straight line segments in consecutive sonar measurements. Once the global wall orientation has been estimated, it is used to readjust the robot's orientation based on future sonar measurements.

All three mechanisms basically provide a probability density for the robot's position (Thrun & Bücken 1996).
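Under the independence assumption, the three densities can be combined by multiplication (equivalently, by summing log-densities). The grid search below is an illustrative stand-in for the gradient-based optimization the system actually uses; all names are hypothetical.

```python
import math

def most_likely_pose(candidates, densities):
    """Combine independent position densities (e.g. odometry, map
    correlation, wall orientation) by summing their log-densities, and
    return the candidate pose with the highest combined score."""
    best_pose, best_score = None, -math.inf
    for pose in candidates:
        # Floor each density to avoid log(0) for impossible poses.
        score = sum(math.log(max(d(pose), 1e-300)) for d in densities)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose
```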
Gradient descent is then iterated to determine the most likely robot position (in an any-time fashion). Notice that position control based on odometry and map correlation alone (items 1 and 2 above) works well if the robot travels through mapped terrain, but ceases to function if the robot explores and maps unknown terrain. The third mechanism, which arguably relies on a restrictive assumption concerning the nature of indoor environments, has proven extremely valuable when autonomously exploring and mapping large-scale indoor environments.

Figure 4: Map constructed without (a) and with (b) the position estimation mechanism described in this paper.

Exploration

To autonomously acquire maps, the robot has to explore. The idea for (greedy) exploration is to let the robot always move on a minimum-cost path to the nearest unexplored grid cell; the cost for traversing a grid cell is determined by its occupancy value. The minimum-cost path is computed using a modified version of value iteration, a popular dynamic programming algorithm (Howard 1960), which bears similarities to A* (Nilsson 1982).

In a nutshell, starting at each unexplored grid cell, value iteration propagates values through the map. After convergence, each value measures the cumulative cost for moving to the cost-nearest unexplored grid cell. Figure 5a shows a value function after convergence. All white regions are unexplored, and the grey-level indicates the cumulative cost for moving towards the nearest unexplored point. Notice that all minima of the value function correspond to unexplored regions; there are no local minima. Once value iteration converges, greedy exploration simply amounts to steepest descent in the value function, which can be done very efficiently. Figure 5b sketches the path taken during approximately 15 minutes of autonomous exploration.
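A minimal sketch of this exploration value iteration, on a 4-connected grid where the traversal cost of a cell is its occupancy-derived cost and unexplored frontier cells act as zero-value sinks (both modeling choices are assumptions for illustration):

```python
def exploration_values(cost, unexplored, max_iters=1000):
    """Value iteration toward the nearest unexplored cell.

    `cost` maps explored free cells (x, y) to their traversal cost;
    `unexplored` is a set of frontier cells with value 0.  After
    convergence, v[c] is the cumulative cost from c to the cost-nearest
    unexplored cell; greedy exploration descends this value function.
    """
    INF = float("inf")
    v = {c: 0.0 for c in unexplored}
    v.update({c: INF for c in cost})
    for _ in range(max_iters):
        changed = False
        for (x, y) in cost:
            best = min((v.get(n, INF) for n in
                        ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))),
                       default=INF)
            new = best + cost[(x, y)]
            if new < v[(x, y)] - 1e-12:
                v[(x, y)] = new
                changed = True
        if not changed:
            break
    return v
```

Greedy exploration then repeatedly moves to the neighboring cell with the smallest value, which, as the text notes, never gets trapped in a local minimum.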
The value function can, however, be used to generate motion control at any time (Dean & Boddy 1988), long before dynamic programming converges. Value iteration has the nice property that values are generated for all cells in the grid, not just the current robot position. Thus, if the robot has to change its path to avoid a collision with an unexpected obstacle, it can directly continue exploration without further planning. During exploration, the robot moves constantly, and frequently reaches a velocity of 80 to 90 cm/sec (see also (Buhmann et al. 1995; Fox, Burgard, & Thrun 1995)).

Figure 5: Autonomous exploration. (a) Exploration values, computed by value iteration. White regions are completely unexplored. By following the grey-scale gradient, the robot moves to the next unexplored area on a minimum-cost path. (b) Actual path traveled during autonomous exploration, along with the resulting metric map. The large black rectangle in (a) indicates the global wall orientation.

In grid maps of size 30 by 30 meters, optimized value iteration, done from scratch, requires approximately 2 to 10 seconds on a SUN Sparc station. For example, the planning time in the map shown in Fig. 3 is typically under 2 seconds, and re-planning (which becomes necessary when the map is updated) is usually performed in a tenth of a second. In the light of these results, one might be inclined to think that grid-based maps are sufficient for autonomous robot navigation. However, value iteration (and similar planning approaches) requires time quadratic in the number of grid cells, imposing intrinsic scaling limitations that prohibit efficient planning in large-scale domains. Due to their compactness, topological maps scale much better to large environments. In what follows we will describe our approach for deriving topological graphs from grid maps.

Topological Maps

Topological maps are built on top of the grid-based maps.
The key idea is simple but very effective: the free-space of a grid-based map is partitioned into a small number of regions, separated by critical lines. Critical lines correspond to narrow passages such as doorways. The partitioned map is then mapped into an isomorphic graph. The precise algorithm is illustrated in Figure 6, and works as follows:

1. Thresholding. Initially, each occupancy value in the occupancy grid is thresholded. Cells whose occupancy value is below the threshold are considered free-space (denoted by C). All other points are considered occupied (denoted by C̄).

Figure 6: Extracting topological maps. (a) Metric map, (b) Voronoi diagram, (c) critical points, (d) critical lines, (e) topological regions, and (f) the topological graph.

2. Voronoi diagram. For each point in free-space (x, y) ∈ C, there is one or more nearest point(s) in the occupied space C̄. We will call these points the basis points of (x, y), and the distance between (x, y) and its basis points the clearance of (x, y). The Voronoi diagram (Latombe 1991) is the set of points in free-space that have at least two different (equidistant) basis points (see Figure 6b).

3. Critical points. The key idea for partitioning the free-space is to find “critical points.” Critical points (x, y) are points on the Voronoi diagram that minimize clearance locally. In other words, each critical point (x, y) has the following two properties: (a) it is part of the Voronoi diagram, and (b) the clearance of all points in an ε-neighborhood of (x, y) is not smaller. Figure 6c illustrates critical points.

4. Critical lines. Critical lines are obtained by connecting each critical point with its basis points (cf. Figure 6d). Critical points have exactly two basis points (otherwise they would not be local minima of the clearance function). Critical lines partition the free-space into disjoint regions (see Figure 6e).

5. Topological graph.
The partitioning is mapped into an isomorphic graph. Each region corresponds to a node in the topological graph, and each critical line to an arc. Figure 6f shows an example of a topological graph.

Critical lines are motivated by two observations. Firstly, when passing through a critical line, the robot is forced to move through a comparatively small region. Hence, the loss in performance incurred by planning with the topological map (as opposed to the grid-based map) is comparatively small. Secondly, narrow regions are more likely to be blocked by obstacles (such as doors, which can be open or closed).

Figure 7 illustrates the process of extracting a topological map from the grid-based map depicted in Figure 3. Figure 7a shows the Voronoi diagram of the thresholded map, and Figure 7b depicts the critical lines (the critical points are on the intersections of critical lines and the Voronoi diagram). The resulting partitioning and the topological graph are shown in Figure 7c&d. As can be seen, the map has been partitioned into 67 regions.

Figure 7: Extracting the topological graph from the map depicted in Figure 3: (a) Voronoi diagram, (b) critical points and lines, (c) regions, and (d) the final graph.

Performance Results

Topological maps are abstract representations of metric maps. As is generally the case for abstract representations and abstract problem solving, there are three criteria for assessing the appropriateness of the abstraction: consistency, loss, and efficiency. Two maps are consistent with each other if every solution (plan) in one of the maps can be represented as a solution in the other map. The loss measures the loss in performance (path length) if paths are planned using the more abstract, topological map as opposed to the grid-based map. Efficiency measures the relative time complexity of problem solving (planning).
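The critical-point step of the extraction algorithm can be illustrated on a non-branching piece of skeleton: a critical point is a local minimum of clearance along the Voronoi diagram. This 1-D sketch is a simplification; real skeletons branch, so the neighborhood test is two-dimensional in practice.

```python
def critical_points(skeleton, clearance):
    """Given an ordered list of Voronoi-skeleton cells and a map from cell
    to clearance (distance to the nearest obstacle), return the cells that
    are strict local minima of clearance along the skeleton."""
    crit = []
    for i in range(1, len(skeleton) - 1):
        c = clearance[skeleton[i]]
        if c < clearance[skeleton[i - 1]] and c < clearance[skeleton[i + 1]]:
            crit.append(skeleton[i])
    return crit
```

Connecting each such point to its two basis points then yields the critical lines that partition free-space into regions.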
Typically, when using abstract models, efficiency is traded off with consistency and performance loss.

Consistency

The topological map is always consistent with the grid-based map. For every abstract plan generated using the topological map, there exists a corresponding plan in the grid-based map (in other words, the abstraction has the downward solution property (Russell & Norvig 1995)). Conversely, every path that can be found in the grid-based map has an abstract representation which is an admissible plan in the topological map (upward solution property). Notice that although consistency appears to be a trivial property of topological maps, not every topological approach proposed in the literature generates maps that would be consistent with their corresponding metric representation.

Figure 8: Another example of a map. (a) Grid-based map, (b) topological regions.

Loss

Abstract representations, such as topological maps, lack detail. Consequently, paths found in the topological map may not be as short as paths found using the metric representation. To measure performance loss, we empirically compared paths generated using the metric map shown in Figure 3 with those generated using the corresponding topological map, shown in Figure 7d. Value iteration can be applied using both representations. In grid-based maps, value iteration is applied just as described above; however, instead of planning paths to unexplored regions, paths were planned from a particular start point to a particular goal point. To compare the results to those obtained using topological representations, first the corresponding shortest path in the topological graph was determined. Subsequently, the shortest path was determined that followed exactly this topological plan. As a result, the quality of topological plans can be compared directly to those derived using the metric map.
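Planning on the topological graph reduces to ordinary graph search. The Dijkstra sketch below assumes arc costs (e.g., distances between region centroids), which is an illustrative choice, not the paper's specification.

```python
import heapq

def shortest_topological_path(graph, start, goal):
    """Dijkstra over a topological graph; `graph` maps each region to a
    dict of neighboring regions and arc costs.  Returns (path, cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

Because the graph has only tens of nodes, such a search is trivially cheap compared with value iteration over tens of thousands of grid cells.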
We conducted a total of 23,881,062 experiments, each using a different starting and goal position generated systematically with an evenly-spaced grid. The results are intriguing. The average length of the shortest path is 15.88 meters. If robot motion is planned using the topological map, this path length increases on average by only 0.29 meters, which is only 1.82% of the total path length. It is remarkable that in 83.4% of all experiments, the topological planner returns a loss-free plan. The largest loss that we found in our experiments was 11.98 meters, which occurred in 6 of the 23,881,062 experiments. Figure 9a shows the average loss as a function of the length of the shortest path. Figure 8 depicts a different map. Here the loss is zero, since both maps are free of cycles.

Efficiency

The most important advantage of topological planning lies in its efficiency. Dynamic programming is quadratic in the number of grid cells. The map shown in Figure 3 happens to possess 27,280 explored cells. In the average case, the number of iterations of value iteration is roughly equivalent to the length of the shortest path, which in our example map is 94.2 cells. Thus, in this example map, value iteration requires on average 2.6 · 10^6 backups. Planning using the topological representation is several orders of magnitude more efficient. The average topological path length is 7.84. Since the topological graph shown in Figure 7d has 67 nodes, topological planning requires on average 525 backups. Notice the enormous gain in efficiency! Planning using the metric map is 4.9 · 10^3 times more expensive than planning with the topological map. In other words, planning on the topological level increases the efficiency by more than three orders of magnitude, while inducing a performance loss of only 1.82%.

Figure 9: Loss as a function of optimal path length.
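The backup counts quoted above follow directly from the figures given in the text and can be reproduced with simple arithmetic:

```python
# Figures quoted in the text for the map of Figure 3.
grid_cells = 27280       # explored grid cells
avg_grid_path = 94.2     # average shortest path, in cells (~ iterations)
graph_nodes = 67         # nodes in the topological graph
avg_topo_path = 7.84     # average topological path length

grid_backups = grid_cells * avg_grid_path    # ~ 2.6 million backups
topo_backups = graph_nodes * avg_topo_path   # ~ 525 backups
speedup = grid_backups / topo_backups        # ~ 4.9 * 10^3
```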
The map shown in Figure 8, which is smaller but was recorded at a higher resolution, consists of 20,535 explored grid cells and 22 topological regions. On average, paths in the grid-based map lead through 84.8 cells, while the average length of a topological plan is 4.82 (averaged over 1,928,540 systematically generated pairs of points). Here the complexity reduction is even larger: planning using the metric map is 1.6 · 10^4 times more expensive than planning with the topological map. While these numbers are empirical and only correct for the particular maps investigated here, we conjecture that the relative quotient is roughly correct for other maps as well.

It should be noted that the compactness of topological maps allows us to exhaustively pre-compute and memorize all plans connecting two nodes. Our example maps contain 67 (22) nodes, hence there are only 2,211 (231) different plans, which are easily generated and memorized. If a new path planning problem arrives, topological planning amounts to looking up the correct plan.

The reader may also notice that topological plans often do not directly translate into motion commands. In (Thrun & Bücken 1996), a local “triplet planner” is described, which generates cost-optimal plans for triplets of adjacent topological regions. As shown there, triplet plans can also be pre-computed exhaustively, but they are not necessarily optimal, and hence cause some small additional performance loss (1.42% and 1.19% for the maps investigated here).

Discussion

This paper proposes an integrated approach to mapping indoor robot environments. It combines the two major existing paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and Bayes' rule. Topological maps are generated by partitioning the grid-based map into critical regions. Building occupancy maps is a fairly standard procedure, which has proven to yield robust maps at various research sites.
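The pre-computed plan counts quoted above are simply the number of unordered node pairs in each topological graph:

```python
def plan_table_size(n_nodes):
    """Number of distinct (start, goal) region pairs, i.e. plans that can
    be exhaustively pre-computed and memorized: n * (n - 1) / 2."""
    return n_nodes * (n_nodes - 1) // 2
```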
To the best of our knowledge, the maps exhibited in this paper are significantly larger than maps constructed from sonar sensors by other researchers. The most important aspect of this research, however, is the way topological graphs are constructed. Previous approaches have constructed topological maps from scratch, memorizing only partial metric information along the way. This often led to problems of disambiguation (e.g., different places that look alike), and problems of establishing correspondence (e.g., different views of the same place). This paper advocates integrating both grid-based and topological maps. As a direct consequence, different places are naturally disambiguated, and nearby locations are detected as such. In the integrated approach, landmarks play only an indirect role, through the grid-based position estimation mechanisms. Integration of landmark information over multiple measurements at multiple locations is automatically done in a consistent way. Visual landmarks, which often come to bear in topological approaches, can certainly be incorporated into the current approach to further improve the accuracy of position estimation. In fact, sonar sensors can be understood as landmark detectors that indirectly, through the grid-based map, help determine the actual position in the topological map (cf. (Simmons & Koenig 1995)).

One of the key empirical results of this research concerns the cost-benefit analysis of topological representations. While grid-based maps yield more accurate control, planning with more abstract topological maps is several orders of magnitude more efficient. A large series of experiments showed that in a map of moderate size, the efficiency of planning can be increased by three to four orders of magnitude, while the loss in performance is negligible (e.g., 1.82%).
We believe that the topological maps described here will enable us to control an autonomous robot on multiple floors of our university building; complex mission planning in environments of that size was completely intractable with our previous methods.

A key disadvantage of grid-based methods, which is inherited by the approach presented here, is the need to accurately determine the robot's position. Since the difficulty of position control increases with the size of the environment, one might be inclined to think that grid-based approaches generally scale poorly to large-scale environments (unless they are provided with an accurate map). Although this argument is convincing, we are optimistic concerning the scaling properties of the approach taken here. The largest cycle-free map that was generated with this approach was approximately 100 meters long; the largest single cycle measured approximately 58 by 20 meters. We are not aware of any purely topological approach to robot mapping that has been demonstrated to be capable of producing consistent maps of comparable size. Moreover, by using more accurate sensors (such as laser range finders), and by re-estimating robot positions backwards in time (which would be mathematically straightforward, but is currently not implemented because of its enormous computational complexity), we believe that maps can be learned and maintained for environments that are an order of magnitude larger than those investigated here.

Acknowledgment

The authors wish to thank the RHINO mobile robot group at the University of Bonn, in particular W. Burgard, A. Cremers, D. Fox, M. Giesenschlag, T. Hofmann, and W. Steiner, and the XAVIER mobile robot group at CMU. We also thank T. Ihle for pointing out an error in a previous version of this paper.
This research is sponsored in part by the National Science Foundation under award IRI-9313367, and by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of NSF, Wright Laboratory or the United States Government.

References

Buhmann, J.; Burgard, W.; Cremers, A. B.; Fox, D.; Hofmann, T.; Schneider, F.; Strikos, J.; and Thrun, S. 1995. The mobile robot Rhino. AI Magazine 16(1).

Crowley, J. 1989. World modeling and position estimation for a mobile robot using ultrasonic ranging. In Proceedings 1989 IEEE International Conference on Robotics and Automation.

Dean, T. L., and Boddy, M. 1988. An analysis of time-dependent planning. In Proceedings Seventh NCAI, AAAI.

Elfes, A. 1987. Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation 3(3):249-265.

Engelson, S., and McDermott, D. 1992. Error correction in mobile robot map learning. In Proceedings 1992 IEEE International Conference on Robotics and Automation.

Feng, L.; Borenstein, J.; and Everett, H. 1994. “Where am I?” Sensors and methods for autonomous mobile robot positioning. TR UM-MEAM-12, University of Michigan at Ann Arbor.

Fox, D.; Burgard, W.; and Thrun, S. 1995. The dynamic window approach to collision avoidance. TR IAI-TR-95-13, University of Bonn.

Hinkel, R., and Knieriemen, T. 1988. Environment perception with a laser radar in a fast moving robot. In Proceedings Symposium on Robot Control.

Howard, R. A. 1960. Dynamic Programming and Markov Processes. MIT Press.

Kortenkamp, D., and Weymouth, T. 1994. Topological mapping for mobile robots using a combination of sonar and vision sensing. In Proceedings Twelfth NCAI, AAAI.

Kuipers, B., and Byun, Y.-T. 1990.
A robot exploration and mapping strategy based on a semantic hierarchy of spatial repre- sentations. TR, University of Texas at Austin. Latombe, J.-C. I99 1. Robot Motion Planning. Kluwer Academic Publishers. MatariC, M. J. 1994. Interaction and intelligent behavior. Techni- cal Report AI-TR-1495, MIT, AI-Lab. Moravec, H. I? 1988. Sensor robots. AI Magazine 6 l-74. fusion in certainty grids for mobile Nilsson, N. J. 1982. Principles of Artijcial Intelligence. Springer Publisher. Pearl, J. 1988. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann Publishers. Pierce, D., and Kuipers, B. 1994. Learning to explore and build maps. In Proceedings Twelfth NCAI, AAAI. Rencken, W. 1993. Concurrent localisation and map building for mobile robots using ultrasonic sensors. In Proceedings JEEE/RSJ International Conference on Intelligent Robots and Systems. Russell, S., and Norvig, P. 1995. Artificial Intelligence: A Modern Approach. Prentice Hall. Schiele, B., and Crowley, J. 1994. A comparison of position es- timation techniques using occupancy grids. In Proceedings IEEE International Conference on Robotics and Automation. Simmons, R., and Koenig, S. 1995. Probabilistic robot navigation in partially observable environments. In Proceedings ZJCAl-95. Thrun, S., and Bucken, A. 1996. Learning maps for indoor mobile robot navigation. TR CMU-CS-96- 121, Carnegie Mellon University. Thrun, S. 1993. Exploration and model building in mobile domains. In Proceedings ICNN-93, IEEE NN Council. robot 950 Mobile Robots
1996
Improving Model-Based Diagnosis through Algebraic Analysis: the Petri Net Challenge
Luigi Portinale
Dipartimento di Informatica - Universita' di Torino
C.so Svizzera 185 - 10149 Torino (Italy)
e-mail: portinal@di.unito.it
Abstract
The present paper describes the empirical evaluation of a linear algebra approach to model-based diagnosis, in the case where the behavioral model of the device under examination is described through a Petri net model. In particular, we show that algebraic analysis based on P-invariants of the net model can significantly improve the performance of a model-based diagnostic system, while keeping the integrity of a general framework defined from a formal logical theory. A system called INVADS is described, and experimental results, performed on a car fault domain and involving the comparison of different implementations of P-invariant based diagnosis, are then discussed.
Introduction
In some recent papers (Portinale 1993), we have shown that Petri nets (PNs) (Murata 1989) can be fruitfully employed to face the problem of model-based diagnosis. This is accomplished by taking into account a formal logical framework of reference, defining classical notions (from the AI point of view) concerning the characterization of a diagnostic problem. In particular, it is shown that classical reachability analysis of PNs can naturally be exploited in order to realize "formally correct" (with respect to the logical framework of reference) diagnostic inference procedures. In the present paper, we focus on the empirical evaluation of a particular reachability analysis technique, namely P-invariant analysis, in order to show its practical usefulness and its possible advantages with respect to a logical inference mechanism. This analysis exploits a matrix representation of the net model and is grounded on a linear algebra algorithm able to compute the so-called P-invariants of the net.
They informally represent the counterpart of logical derivations and form the basis for the computation of the diagnoses. We will report on the empirical results obtained from some tests performed on a car fault domain, by comparing different implementations of P-invariant diagnosis and a classical abductive approach.
Petri Nets: Outline
A Petri net is a directed bipartite graph N = (P, T, F) whose vertices are called places (the elements of P, represented as small circles) and transitions (the elements of T, represented as bars). The set of arcs is represented by F. In case the transitive closure F+ of the arcs is irreflexive, the net is said to be acyclic. In a Petri net, an arc multiplicity function is usually defined as W : (P × T) ∪ (T × P) → ℕ; in case W is such that W(f) = 1 if f ∈ F and W(f) = 0 if f ∉ F, the net is said to be an ordinary Petri net. We will mainly be interested in this kind of net. For each x ∈ P ∪ T we use the classical notations •x = {y | yFx} and x• = {y | xFy}. If •x = ∅, x is said to be a source, while if x• = ∅, x is said to be a sink. A marking is a function from the set of places to nonnegative integers, represented by means of tokens in places; a place containing a token is said to be marked. A marked Petri net is a pair (N, μ) where N is a Petri net and μ is a marking. The dynamics of the net is described by moving tokens from place to place according to the concession and firing rules. In ordinary Petri nets we say that a transition t has concession at a marking μ if and only if ∀p ∈ •t, μ(p) ≥ 1. If a transition t has concession in a marking μ, it may fire (execute), producing a new marking μ' such that ∀p ∈ P, μ'(p) = μ(p) - W(p, t) + W(t, p). A marking μ' is reachable from a marking μ in a net N (μ' ∈ R(N, μ)) if and only if there exists a sequence of transitions producing μ' from μ in N.
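As an illustrative aside (not code from the paper), the concession and firing rules for an ordinary net can be sketched as follows; the representation of arcs as pairs and markings as dictionaries is our own choice.

```python
# Minimal sketch of an ordinary Petri net: arcs F as a set of
# (source, target) pairs, so W(f) = 1 exactly when f is in F.

def preset(x, F):
    """The preset •x = {y | (y, x) in F}."""
    return {y for (y, z) in F if z == x}

def has_concession(t, mu, F):
    """t has concession at marking mu iff every input place holds a token."""
    return all(mu.get(p, 0) >= 1 for p in preset(t, F))

def fire(t, mu, F):
    """mu'(p) = mu(p) - W(p, t) + W(t, p) for an ordinary net."""
    mu2 = dict(mu)
    for p in preset(t, F):
        mu2[p] = mu2.get(p, 0) - 1
    for (src, dst) in F:
        if src == t:
            mu2[dst] = mu2.get(dst, 0) + 1
    return mu2

# Tiny net: p1 -> t -> p2, with one token on p1.
F = {("p1", "t"), ("t", "p2")}
mu = {"p1": 1, "p2": 0}
assert has_concession("t", mu, F)
mu2 = fire("t", mu, F)
print(mu2)  # {'p1': 0, 'p2': 1}
```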
If a place of a marked net cannot be marked with more than one token, the place is said to be safe; if the property holds for every place, the net itself and every marking are said to be safe. Given a Petri net N = (P, T, F), if n = |T| and m = |P|, the incidence matrix of N is the n × m matrix of integers A = [a_ij] such that a_ij = W(i, j) - W(j, i) (i ∈ T, j ∈ P). An m-vector of integers Y such that A · Y = 0 is said to be a P-invariant of the net represented by A, the entry Y(j) corresponding to place j. The support of a P-invariant Y is the subset of places corresponding to nonzero entries of Y. In a dual way, if A^T is the transpose matrix of A, an n-vector of integers X such that A^T · X = 0 is said to be a T-invariant (entries corresponding to transitions). It is well known that any invariant can be obtained as a linear combination of invariants having minimal (with respect to set inclusion) supports (Murata 1989).
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.
Petri nets and Model-based Diagnosis
Model-based diagnosis deals with the problem of determining the explanation of the abnormal behavior of a given device, by reasoning on a model (usually a behavioral model) of such a device (Hamscher, Console, & de Kleer 1992). Approaches based on "consistency" between the observed and the predicted system behavior (with some components assumed to be faulty) are usually considered when the model represents the expected behavior of the system; however, when the faulty behavior is also taken into account, approaches based on "abduction" can be more adequately adopted. Since both purely consistency-based and purely abductive approaches suffer from some drawbacks, some effort has been made to combine them (Poole 1989; Console & Torasso 1991). In the present paper, we will refer to the framework defined in (Console & Torasso 1991).
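The incidence matrix and the invariant condition A · Y = 0 are easy to check directly. The following sketch (our own illustration on a made-up three-place chain, not the paper's code) builds A and tests candidate P-invariants.

```python
# Illustrative sketch: incidence matrix a_ij = W(i, j) - W(j, i) for
# transition i and place j, plus a check of the P-invariant condition.

places = ["p1", "p2", "p3"]
transitions = ["t1", "t2"]
# Ordinary net: p1 -> t1 -> p2 -> t2 -> p3, so W(f) = 1 on these arcs.
F = {("p1", "t1"), ("t1", "p2"), ("p2", "t2"), ("t2", "p3")}

def W(x, y):
    return 1 if (x, y) in F else 0

# A is |T| x |P|.
A = [[W(t, p) - W(p, t) for p in places] for t in transitions]

def is_p_invariant(Y):
    """Y is a P-invariant iff A . Y = 0 (zero for every transition row)."""
    return all(sum(a * y for a, y in zip(row, Y)) == 0 for row in A)

# Token count is conserved along the chain, so Y = (1, 1, 1) is an invariant.
print(is_p_invariant([1, 1, 1]))  # True
print(is_p_invariant([1, 0, 0]))  # False
```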
Definition 1 A model-based diagnostic problem is a tuple DP = (M, H, CXT, (Ψ+, Ψ-)) where:
- M is a logical theory representing the model of the system to be diagnosed;
- H is a set of ground atoms of M identified as possible diagnostic hypotheses (abducibles);
- CXT is a set of ground atoms of M representing contextual information;
- Ψ+ is a set of ground atoms of M representing the observations to be covered in the current case;
- Ψ- is a set of ground atoms of M representing the values of observable parameters conflicting with the observations.
We assume that M is represented by a set of definite clauses. This allows us to focus on a simple kind of model that is, however, representationally adequate for significant classes of behavioral models (see (Console et al. 1993)). If OBS is the set of current observations, the set Ψ+ is in general a subset of OBS (Ψ+ ⊆ OBS), while Ψ- = {m(x) | m(y) ∈ OBS, x ≠ y}. In a similar way, given the set CXT we define the set CXT- = {c(x) | c(y) ∈ CXT, x ≠ y}. The framework has the implicit assumption of abstracting from time; this allows us to further simplify the logical model by assuming M to be a set of definite clauses without recursion (i.e. a hierarchical definite logic program). We also assume the set OBS to be composed of ground atoms having no consequences. Similarly, atoms in H cannot appear in the head of any clause (i.e. diagnostic hypotheses are independent), and the same holds for atoms in CXT.
Definition 2 Given a diagnostic problem DP, a diagnosis for DP is a set E ⊆ H such that
∀m(x) ∈ Ψ+ : M ∪ CXT ∪ E ⊢ m(x)
∀m(y) ∈ Ψ- : M ∪ CXT ∪ E ⊬ m(y)
We will refer to a diagnostic problem defined in this way as a logic-based diagnostic problem.
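Because M is a non-recursive set of definite clauses, Definition 2 can be checked by plain forward chaining. The sketch below is our own illustration (clause format and all atom names are invented, not from the paper).

```python
# Hedged sketch of Definition 2: a candidate E is a diagnosis iff the
# forward-chaining closure of M ∪ CXT ∪ E covers Ψ+ and derives nothing
# in Ψ-. Clause format: (set_of_body_atoms, head_atom).

def derive(clauses, facts):
    """Least fixpoint of the definite clauses starting from `facts`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

def is_diagnosis(clauses, cxt, E, psi_plus, psi_minus):
    d = derive(clauses, set(cxt) | set(E))
    return psi_plus <= d and not (psi_minus & d)

# Toy model: h1 explains obs_a; h2 (in context ctx) explains obs_b.
M = [({"h1"}, "obs_a"), ({"h2", "ctx"}, "obs_b")]
print(is_diagnosis(M, {"ctx"}, {"h1", "h2"},
                   {"obs_a", "obs_b"}, {"obs_c"}))  # True
```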
Notice that definition 2 does not require the set E to mention every abducible predicate of M; however, E could not be a partial diagnosis in the sense of (de Kleer, Mackworth, & Reiter 1992), since there could be an extension of E to unmentioned abducible predicates such that some atoms in the set Ψ- are derived. However, in the following we will consider only fault models (i.e. models describing only the consequences of the faulty behavior of the device under examination); in this case, a diagnosis E is interpreted as assigning a "normal" or "correct" value to abducible predicates not mentioned in E. The capability of dealing with models mixing the correct and the faulty behavior of the system requires a slight revision of the definition of diagnosis, by considering the set Ψ- to be a set of denials, to be used in a contrapositive way. This would allow us to get the equivalent of the kernel diagnoses defined in (de Kleer, Mackworth, & Reiter 1992)¹.
In (Portinale 1993) a simple Petri net model, called Behavioral Petri Net (BPN), has been introduced, in order to capture the representational issues discussed above.
Definition 3 A Behavioral Petri Net (BPN) is a 4-tuple N = (P, T_N, T_OR, F) such that (P, T_N ∪ T_OR, F) is an acyclic ordinary Petri net that satisfies the following axioms:
1. ∀p ∈ P (|•p| ≤ 1 ∧ |p•| ≤ 1)
2. ∀p1, p2 ∈ P ((•p1 = •p2 ∧ p1• = p2•) → p1 = p2)
3. ∀t ∈ T_N (|•t| = 1 ∧ |t•| > 0) ∨ (|•t| > 0 ∧ |t•| = 1)
4. ∀t ∈ T_OR (|•t| ≥ 2 ∧ |t•| = 1)
The set of transitions is partitioned into the two subsets T_N and T_OR: those in the former set are the usual kind of transitions of ordinary Petri nets, while a transition t ∈ T_OR has concession in a marking iff at least one of its input places is marked. OR-transitions are actually "macro-transitions" that can be obtained by means of a set of classical transitions (see (Portinale 1993)). It can be shown that a BPN encodes the same kind of knowledge as a hierarchical definite logic program.
Figure 1 shows an example of a BPN corresponding to the following set of clauses (OR-transitions are represented as empty thick bars):
grcl(low) ∧ roco(poor) → oils(holed)
oils(holed) → obca(huge_am)
roco(poor) → jerk(very_strong)
osga(worn) → oils(leaking)
oils(leaking) → obca(small_am)
piws(worn) → laoi(severe)
jerk(very_strong) → vibr(very_strong)
ente(incr) → htin(red)
engi(run) ∧ laoi(severe) → ente(incr)
oils(holed) → laoi(severe)
jerk(very_strong) ∧ oils(leaking) → laoi(severe)
Table 1 shows the key for the acronyms used. The net of figure 1 is just for explanatory purposes, corresponding to a simplified part of a more general model describing the faulty behavior of a car engine. Notice that the BPN contains some "dummy places" (labeled in figure 1 with capital letters) used to split places representing ground atoms involved in the body of more than one clause. This allows us to identify the token flow on the net with logical derivations in the logical model. OR-transitions model alternative ways of deriving a given atom.
Since a BPN is acyclic, a partial order ≺ over transitions is defined as t1 ≺ t2 ↔ t1 F+ t2. A concession rule with priority for transitions can then be introduced, resulting in the enabling rule of a BPN.
Definition 4 Given a BPN, a transition t is enabled (i.e. it may fire) in a given marking μ if and only if it has concession at μ and there is no t' ≺ t such that t' has concession at μ.
For example, in the net of figure 1, if both places piws(worn) and oils(holed) are marked, transition t27 is not enabled, since transition t2 has concession and t2 ≺ t27.
Definition 5 An initial marking of a BPN is a safe marking μ0 such that μ0(p) = 1 → •p = ∅. A marked BPN is always considered with respect to a marking reachable from an initial marking.
¹Notice that definition 2 can be directly used if we consider E to contain exactly one ground instance for each abducible predicate.
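Since token flow in the BPN mirrors logical derivation, the clause set of figure 1 can be exercised directly by forward chaining. The sketch below (our own illustration, not the paper's code) shows which observable atoms a given hypothesis set accounts for; it anticipates the worked example used later in the paper.

```python
# Forward chaining over the example clause set of figure 1.
# Clause format: (set_of_body_atoms, head_atom).

CLAUSES = [
    ({"grcl(low)", "roco(poor)"}, "oils(holed)"),
    ({"oils(holed)"}, "obca(huge_am)"),
    ({"roco(poor)"}, "jerk(very_strong)"),
    ({"osga(worn)"}, "oils(leaking)"),
    ({"oils(leaking)"}, "obca(small_am)"),
    ({"piws(worn)"}, "laoi(severe)"),
    ({"jerk(very_strong)"}, "vibr(very_strong)"),
    ({"ente(incr)"}, "htin(red)"),
    ({"engi(run)", "laoi(severe)"}, "ente(incr)"),
    ({"oils(holed)"}, "laoi(severe)"),
    ({"jerk(very_strong)", "oils(leaking)"}, "laoi(severe)"),
]

def derive(facts):
    """Least fixpoint of CLAUSES starting from the given ground atoms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in CLAUSES:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

d = derive({"piws(worn)", "osga(worn)", "engi(run)"})
print("htin(red)" in d, "obca(small_am)" in d)         # True True
print("vibr(very_strong)" in d, "obca(huge_am)" in d)  # False False
```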
As shown in (Portinale 1993), every marked BPN is safe, and in a marked BPN there is a unique marking, called the final marking, from which no transition can fire.

[Figure 1: Example of a BPN]

Table 1: Acronyms used in the BPN of fig. 1 and in the corresponding logical model
ENTITY | ACRONYM
engine status | engi
engine temperature | ente
ground clearance | grcl
high temperature indicator | htin
jerks | jerk
lack of oil | laoi
oil below car | obca
oil sump gasket | osga
oil sump status | oils
piston wear | piws
road conditions | roco
vibrations | vibr

Given a BPN N = (P, T_N, T_OR, F) corresponding to a hierarchical definite logic program M, if B_M is the Herbrand base of M, an interpretation function Φ : P → B_M associating places of N with ground atoms of M can be defined. The function is in general a partial function; for example, in figure 1 the function Φ is considered undefined (⊥) for places labeled with capital letters (dummy places having no direct correspondence with ground atoms of M), while for the other places the label itself shows the value of Φ. Given a conjunction of ground atoms J (represented as a set), we can determine a corresponding marking μ_J such that μ_J(p) = 1 if Φ(p) ∈ J and μ_J(p) = 0 otherwise.
Definition 6 Given a logic-based diagnostic problem DP = (M, H, CXT, (Ψ+, Ψ-)) and a BPN N_M corresponding to M, we can define the diagnostic problem in terms of the BPN model in the following way: BPN-DP = (N_M, P_H, P_C, (P+, P-)) where P_H = {p ∈ P | Φ(p) ∈ H}, P_C = {p ∈ P | Φ(p) ∈ CXT}, P+ = {p ∈ P | Φ(p) ∈ Ψ+} and P- = {p ∈ P | Φ(p) ∈ Ψ-}.
Notice that ∀p ∈ P+ ∪ P-, p• = ∅ (i.e. p is a sink place); similarly, ∀p ∈ P_H ∪ P_C, •p = ∅ (i.e. p is a source place).
The formal connection between logic-based and BPN-based characterizations is established by means of the following theorem, whose proof can be found in (Portinale 1993):
Theorem 1 Given a logic-based diagnostic problem DP = (M, H, CXT, (Ψ+, Ψ-)), let N_M be the BPN corresponding to M and μ_{E∪CXT} be the marking corresponding to E ∪ CXT (E ⊆ H); if we define (N_M, μ_{E∪CXT}) ⊢ Φ(c) as ∃μ ∈ R(N_M, μ_{E∪CXT}) such that μ(p) = 1 ∧ Φ(p) = Φ(c), then M ∪ E ∪ CXT ⊢ Φ(c) ↔ (N_M, μ_{E∪CXT}) ⊢ Φ(c).
Definition 7 Given a diagnostic problem BPN-DP = (N_M, P_H, P_C, (P+, P-)), a candidate diagnosis is a marking μ0 such that μ0(p) = 1 → p ∈ P_H. We indicate with μ_C the marking corresponding to contextual information (i.e. μ_C(p) = 1 ↔ p ∈ P_C) and with P_C- the set of places corresponding to CXT- (i.e. P_C- = {p ∈ P | Φ(p) ∈ CXT-}).
Definition 8 Given a diagnostic problem BPN-DP = (N_M, P_H, P_C, (P+, P-)), a candidate diagnosis μ0 is a solution to BPN-DP (i.e. is a diagnosis) if and only if
∀p ∈ P+ : (N_M, μ0 ∪ μ_C) ⊢ Φ(p)
∀q ∈ P- : (N_M, μ0 ∪ μ_C) ⊬ Φ(q)
Definition 9 A marking μ of a Behavioral Petri Net covers a set of places Q if and only if ∀p ∈ Q, μ(p) = 1, while it zero-covers Q if and only if ∀p ∈ Q, μ(p) = 0.
The following theorem provides us with an operational notion of diagnosis in a BPN framework (see (Portinale 1993) for the proof):
Theorem 2 A candidate diagnosis μ0 is a solution to BPN-DP = (N_M, P_H, P_C, (P+, P-)) (i.e. is a diagnosis) if and only if the final marking μ of (N_M, μ0 ∪ μ_C) covers P+ and zero-covers P-.
This means that the problem of finding the solutions to a diagnostic problem can be re-formulated as a reachability problem on the net model; this can classically be tackled in two different ways: with a reachability graph approach (as shown in (Anglano & Portinale 1994)) or with an algebraic (invariant-based) approach.
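Theorem 2 suggests a direct operational test: run the net from μ0 ∪ μ_C and inspect the final marking. The sketch below is our own approximation, not the INVADS code: for an acyclic BPN whose P+ and P- places are sinks, it suffices to compute the set of places that can ever become marked, so we ignore the priority rule of Definition 4 for simplicity.

```python
# Hedged sketch of the Theorem 2 test. Transitions are triples
# (input_places, output_places, kind) with kind "AND" (all inputs needed,
# as for T_N) or "OR" (any input suffices, as for T_OR).

def markable(transitions, initially_marked):
    """Set of places that can become marked from the initial marking."""
    marked = set(initially_marked)
    changed = True
    while changed:
        changed = False
        for inputs, outputs, kind in transitions:
            if kind == "AND":
                ok = all(p in marked for p in inputs)
            else:
                ok = any(p in marked for p in inputs)
            if ok and not set(outputs) <= marked:
                marked |= set(outputs)
                changed = True
    return marked

def satisfies_theorem2(transitions, mu0, muC, P_plus, P_minus):
    """Covers P+ and zero-covers P- (both sets are sink places)."""
    m = markable(transitions, set(mu0) | set(muC))
    return set(P_plus) <= m and not (set(P_minus) & m)

T = [(["h"], ["obs1"], "AND"), (["h", "ctx"], ["obs2"], "AND")]
print(satisfies_theorem2(T, ["h"], ["ctx"], ["obs1", "obs2"], ["obs3"]))  # True
```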
The aim of this paper is to concentrate on invariant analysis and to discuss the performance of a diagnostic algorithm based on such a principle, with respect to a classical approach based on symbolic manipulation.
Diagnostic Reasoning by Computing P-Invariants
In this section, we will show how to generate an initial marking satisfying the condition of theorem 2 from a set of P-invariant supports. By definition, P-invariants of a net N = (P, T, F) correspond to T-invariants of its dual net N^D = (T, P, F). The following lemma has been proved in (Peterka & Murata 1989).
Lemma 1 Let N = (P, T, F) be a Petri net such that ∀t ∈ T, |t•| ≤ 1, and let t ∈ T be a sink transition; there exists a T-invariant X of N such that X(t) ≠ 0 if and only if t is firable from the empty marking.
This means that in N there are some source transitions firing from the empty marking, eventually leading to the firing of t. Consider now an ordinary Petri net: in order for a place p to be marked, there must be a transition t ∈ •p that fires, while in order for a transition t to fire, every place p ∈ •t must be marked. If every transition of a Petri net has exactly one input place, the sentence "a place is marked" corresponds to the sentence "a transition can fire" in the dual net. Let us then consider the following transformation on a BPN:
∧-fusion. Given a BPN N = (P, T_N, T_OR, F), produce the ordinary Petri net N' = (P', T_N ∪ T_OR, F') as follows: for each t ∈ T_N such that •t = {p_1, ..., p_k} (k > 1), substitute in P the set {p_1, ..., p_k} with the place p_{1,k} such that •p_{1,k} = ∪_{i=1}^{k} •p_i and p_{1,k}• = {t}.
This transformation simply collapses places that are "AND-ed" into a single place representing their conjunction; even if the resulting net is no longer a BPN, it encodes the same kind of knowledge as the original BPN. In fact, let us consider the interpretation function Φ of N and the following operator ⊕ on it:
Φ(p) ⊕ Φ(q) = Φ(p) ∪ Φ(q), with Φ(p) ⊕ ⊥ = ⊥ ⊕ Φ(p) = Φ(p).
We can define an interpretation function Φ' for N' from the interpretation function Φ of N as follows: Φ'(p) = Φ(p) if p ∈ P ∩ P', and Φ'(p) = ⊕_{i=1}^{k} Φ(p_i) if p = p_{1,k} ∈ P' - P.
Figure 2 shows the net obtained from the BPN of figure 1 by means of the ∧-fusion. Places grcl(low) and A are collapsed into place "grcl(low) + A", V and R into place "V + R", laoi(severe) and engi(run) into place "laoi(severe) + engi(run)". The interpretation function Φ' is such that Φ'(grcl(low) + A) = {grcl(low)}, Φ'(V + R) = ⊥, Φ'(laoi(severe) + engi(run)) = {laoi(severe), engi(run)}, and Φ' ≡ Φ for the remaining places.
Theorem 3 Given a BPN N_M corresponding to a hierarchical definite logic program M, let N'_M be the net obtained from N_M through the ∧-fusion transformation, Φ' the interpretation function of N'_M and p a sink place; the following are equivalent propositions:
1) there is a P-invariant Y of N'_M such that Y(p) ≠ 0;
2) by marking source places p_s such that Y(p_s) ≠ 0, the place p can eventually be marked;
3) ∪_{p_s} Φ'(p_s) ∪ M ⊢ Φ'(p).
Proof. 1) ≡ 2) is a consequence of lemma 1 and of the fact that T-invariants of a net are P-invariants of its dual net. 2) ≡ 3) is a consequence of theorem 1. □

[Figure 2: BPN for P-invariant Computation]

From theorem 3 we conclude that the supports of the P-invariants of N'_M characterize the logical derivations from atoms representing diagnostic hypotheses and contexts to atoms representing observable parameters. Consider for instance the following diagnostic problem BPN-DP = (N_M, P_H, P_C, (P+, P-)) where N_M is the net of figure 1 (we recall that such a net is intended to represent a fault model). Let us suppose we have the following set of observations: OBS = {htin(red), obca(small_am), vibr(normal)} (i.e. the temperature indicator is red, there is a small amount of oil below the car, and vibrations are normal). Contextual information is CXT = {grcl(normal), engi(run)} (i.e.
we are considering a car with a normal ground clearance and in the context of the engine being running). Let us also suppose that all the "abnormal" observations have to be covered; then:
P_H = {piws(worn), osga(worn), roco(poor)}, P_C = {engi(run)}, P_C- = {grcl(low)},
P+ = {htin(red), obca(small_am)}, P- = {vibr(very_strong), obca(huge_am)}.
The net of figure 2 is the result of the ∧-fusion of N_M; the minimal supports of its P-invariants are:
σ1 = {grcl(low) + A, oils(holed), roco(poor), M, obca(huge_am)}
σ2 = {grcl(low) + A, oils(holed), roco(poor), Y, ente(incr), laoi(severe) + engi(run), htin(red)}
σ3 = {B, roco(poor), jerk(very_strong), osga(worn), V + R, W, oils(leaking), laoi(severe) + engi(run), ente(incr), htin(red)}
σ4 = {B, roco(poor), jerk(very_strong), L, vibr(very_strong)}
σ5 = {obca(small_am), osga(worn), oils(leaking), N}
σ6 = {piws(worn), laoi(severe) + engi(run), ente(incr), htin(red)}
Consider for instance σ4: we notice that the support contains the source place roco(poor) ∈ P_H and the sink place vibr(very_strong) ∈ P-. From theorem 3 we conclude that M ∪ {roco(poor)} ⊢ vibr(very_strong); this means that any initial marking having place roco(poor) marked is not a diagnosis, since it will eventually produce a final marking having place vibr(very_strong) ∈ P- marked.
From these considerations, we can devise a P-invariant based diagnostic algorithm: after having computed the minimal supports of the P-invariants (efficient algorithms exist for this task (Martinez & Silva 1982)), those related to predictions corresponding to places in P- are eliminated, taking into account the fact that if δ and δ' are two sets of ground atoms such that δ ⊆ δ', then δ ⊢ α implies δ' ⊢ α; in the same way, supports containing places belonging to P_C- are also eliminated.
We then have to consider the coverability of P+: for each place p ∈ P+, we build from the remaining supports the list of places having an interpretation corresponding to a diagnostic hypothesis and supporting p (i.e. contained in a P-invariant support containing p). Final diagnoses are obtained by combining such lists.
Let us consider again the diagnostic problem introduced above. Supports σ1 and σ2 are discarded since they contain place grcl(low) + A, for which Φ'(grcl(low) + A) = {grcl(low)} with grcl(low) ∈ CXT- (σ1 also contains obca(huge_am) ∈ P-), and support σ4 is discarded because it contains place vibr(very_strong) ∈ P-. Moreover, support σ3 is also discarded because of the pruning of σ4: indeed, δ4 = {roco(poor)} and δ3 = {roco(poor), osga(worn), engi(run)}, and since δ4 ⊆ δ3, we need to prune σ3 as well. Only supports σ5 and σ6 survive the pruning phase, and we then obtain:
θ1 = {osga(worn)} for obca(small_am) ∈ P+
θ2 = {piws(worn), engi(run)} for htin(red) ∈ P+
The only possible combination in this case is θ1 ∪ θ2 = {piws(worn), engi(run), osga(worn)}, representing the diagnosis "piws(worn) ∧ osga(worn)" in the context "engi(run) ∧ grcl(normal)" (i.e. if the engine is running and the car has a normal ground clearance, the normal intensity of vibrations, the red temperature indicator and the small amount of oil below the car are explained by the fact that both the pistons and the oil sump gasket are worn).
Experimental Results
We implemented the P-invariant approach to diagnosis in a system called INVADS (INVAriant based Diagnostic System) and we performed different series of experiments addressing the following two issues:
1. comparison of different implementations of invariant-based diagnosis;
2. comparison between invariant-based diagnosis and logical-abductive diagnosis.
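The pruning phase just described can be sketched as follows. This is our own reconstruction (not the INVADS Prolog/C code), and the example data mirrors σ3-σ6 above with the dummy places omitted for brevity.

```python
# Hedged sketch of support pruning: discard supports that predict a place
# in P- or contain a conflicting context, then (by the monotonicity of
# derivation) also discard any support whose hypothesis/context set is a
# superset of a discarded one.

def hyp_set(support, H, CXT):
    """Hypothesis and context places mentioned in a support."""
    return {x for x in support if x in H or x in CXT}

def prune(supports, P_minus, CXT_minus, H, CXT):
    bad_hyp_sets = [hyp_set(s, H, CXT) for s in supports
                    if (s & P_minus) or (s & CXT_minus)]
    return [s for s in supports
            if not (s & P_minus) and not (s & CXT_minus)
            and not any(b <= hyp_set(s, H, CXT) for b in bad_hyp_sets)]

H = {"piws(worn)", "osga(worn)", "roco(poor)"}
CXT = {"engi(run)"}
s3 = {"roco(poor)", "osga(worn)", "oils(leaking)", "engi(run)", "htin(red)"}
s4 = {"roco(poor)", "jerk(very_strong)", "vibr(very_strong)"}
s5 = {"obca(small_am)", "osga(worn)", "oils(leaking)"}
s6 = {"piws(worn)", "engi(run)", "ente(incr)", "htin(red)"}
kept = prune([s3, s4, s5, s6], {"vibr(very_strong)"}, {"grcl(low)"}, H, CXT)
print(kept == [s5, s6])  # True: s4 predicts P-, and s3 subsumes s4's hypotheses
```

Final diagnoses would then be obtained by combining, per P+ place, the hypothesis sets of the surviving supports.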
Both types of experiments have been done on a BPN relative to a knowledge base describing the fault "causal" model of a car engine, consisting of more than 100 places and more than 100 transitions. We ran the experiments on a SUN Sparc station Classic with 32 Mbytes of memory; the software environment has been realized in SICStus Prolog, with an embedded module for invariant computation written in C. We considered 48 different cases of car engine malfunctions, chosen so as to cover all the main fault evolutions described in the model. Different runs of the same batch of cases have been considered for each implementation; results about the running time showed quite low variance between different runs, so they have been simply averaged.
Implementation Testing
We tested three different kinds of implementation of P-invariant based diagnosis, classified as follows: off-line invariant computation (OFF); net simplification (SIM); observation addition (ADD).
The first kind of implementation (OFF) simply consists in the off-line computation of all the P-invariants of the net obtained from the ∧-fusion on the BPN under examination; this approach makes the information related to the P-invariants explicit once and for all, and any diagnostic case provided to the system will use the same set of P-invariants. However, we have to search for the solution in a search space (the set of P-invariant supports) that contains information not relevant to the current case. The complexity of a diagnostic algorithm based on this principle is just the complexity of the phase concerning the generation of diagnoses from P-invariant supports.
The SIM approach consists in simplifying the net with respect to the observations made in the case to be solved. This can be done by considering the sets P+ and P- from the current diagnostic problem and iteratively repeating the following actions until no transition is removed:
for each sink place p ∉ P+ ∪ P- do remove p;
for each transition t such that t• = ∅ do remove t;
This allows us to consider only the part of the net relevant to the current set of observations and to compute the P-invariants only for this reduced net. A diagnostic algorithm based on SIM must take into account three different phases for each case to be solved: net simplification, P-invariant computation and diagnosis generation. Since the set of P-invariant supports from which to generate diagnoses is reduced with respect to the previous approach, diagnosis generation can turn out to be much faster than in OFF.
The ADD approach is conceptually similar to SIM; we consider the net obtained from the ∧-fusion on the current BPN, with all the sink places representing observable parameters deleted. Given the current set of observations, we then add to this net the sink places corresponding to the sets P+ and P-. Also in this case we have to consider three distinct phases, namely observation addition, P-invariant computation and diagnosis generation; as in the previous case, the set of P-invariant supports we get does not contain information irrelevant to the current observations.
Results concerning the comparison of the three proposed strategies are summarized in figure 3, where the average computation times of the above strategies are reported for the 48 sample cases we used. The SIM approach resulted in very high execution times, essentially because of the expensiveness of the net simplification phase. This can be seen in figure 4, where the cpu times of the net simplification and observation addition phases are plotted. Notice also that the basic pattern of the SIM strategy in figure 3 is essentially determined by the net simplification phase. We did not investigate the possibility of directly performing the simplification on the incidence matrix of the net; our claim is that a matrix simplification would be less expensive, but it would not improve the result much.
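The SIM simplification loop above can be sketched as follows; this is our reconstruction of the two removal steps (the paper's actual implementation is in SICStus Prolog and C), reading the first step as removing sink places that are not current observations.

```python
# Hedged sketch of SIM net simplification: repeatedly drop sink places
# outside P+ ∪ P-, then drop transitions left without any output arc,
# until no transition is removed.

def simplify(places, transitions, F, observed):
    """F: set of (src, dst) arcs; observed: the places in P+ ∪ P-."""
    places, transitions, F = set(places), set(transitions), set(F)
    removed_transition = True
    while removed_transition:
        # Sink places (no outgoing arc) that are not observations.
        sinks = {p for p in places
                 if p not in observed and not any(s == p for s, _ in F)}
        places -= sinks
        F = {(s, d) for (s, d) in F if s not in sinks and d not in sinks}
        # Transitions whose postset became empty.
        dead = {t for t in transitions if not any(s == t for s, _ in F)}
        removed_transition = bool(dead)
        transitions -= dead
        F = {(s, d) for (s, d) in F if s not in dead and d not in dead}
    return places, transitions, F

# p1 -> t1 -> p2 (observed) and p1 -> t2 -> p3 (unobserved sink).
P, T = {"p1", "p2", "p3"}, {"t1", "t2"}
F = {("p1", "t1"), ("t1", "p2"), ("p1", "t2"), ("t2", "p3")}
P2, T2, F2 = simplify(P, T, F, {"p2"})
print(sorted(T2))  # ['t1']: t2 is removed once it loses its only output place
```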
Between OFF and ADD, the latter strategy turned out to be better in terms of global execution time, even if without showing the huge difference of the SIM strategy. Obviously, the OFF strategy resulted in the highest computation time (with respect to both the SIM and ADD strategies) for the diagnosis generation phase and for the invariant computation phase, but the fact that the latter phase is done off-line determined the situation depicted in figure 3.

[Figure 3: Comparison of different strategies for P-invariant diagnosis]
[Figure 4: Net Simplification vs Observation Addition]

Logical and P-invariant Diagnosis Comparison
To test the performance of a P-invariant diagnostic algorithm against a classical logical approach, we chose to compare the INVADS system using the ADD strategy with an abductive diagnostic system called AID (Console et al. 1993). The reasons for such a direct comparison are twofold:
1. both systems rely on the same formal framework of reference we previously discussed;
2. both systems share the same implementation environment (a SICStus Prolog implementation for SUN Sparc stations).
Also for this experiment, we tested different runs of the batch of our sample cases. In particular, we measured for each case C the percentage gain of INVADS vs AID, defined as follows:
G(C) = (T_AID^C - T_INVADS^C) / T_INVADS^C
where T_AID^C and T_INVADS^C represent the execution times on case C of AID and INVADS respectively. Results on our car engine fault domain showed quite good behavior of the P-invariant based approach (see figure 5); the average gain turned out to be 34.41%, with some peaks of about (or even more than) 100%.
Conclusions
In the present paper, we have shown how Petri net reachability analysis can be used as a formal basis for explaining the misbehavior of a given device. We briefly discussed a net model called BPN, used to describe the behavior of the device under examination.
The BPN model is not proposed as a direct tool for diagnostic knowledge representation, but rather as an analysis formalism that can be derived from other forms of knowledge representation, such as the causal networks described in (Portinale 1992). We concentrated on P-invariant reachability analysis, representing the starting point for the definition of an innovative approach to model-based diagnosis.
We tested the different implementations of the approach on a car engine fault domain, obtaining an encouraging comparison with a classical logical approach to diagnosis. Notice also that, besides the fact that P-invariants are obtained through a linear algebra based computation (which can be more efficient than symbolic computation), parallel algorithms can be devised for this kind of approach (Marinescu, Beaven, & Stansifer 1991; Lin et al. 1993). This clearly adds more interest to the net invariant approach to diagnosis, also taking into account the fact that its complementary approach (i.e. diagnosis based on reachability graph analysis) has been shown to be well suited to a parallel implementation (Anglano & Portinale 1994). Future work is planned in order to compare P-invariant diagnosis also with this approach.

[Figure 5: Percentage Gain INVADS vs AID]

References
Anglano, C., and Portinale, L. 1994. B-W analysis: a backward reachability analysis for diagnostic problem solving suitable to parallel implementation. In LNCS 815, 39-58. Springer Verlag.
Console, L., and Torasso, P. 1991. A spectrum of logical definitions of model-based diagnosis. Computational Intelligence 7(3):133-141.
Console, L.; Portinale, L.; Theseider Dupre, D.; and Torasso, P. 1993. Combining heuristic and causal reasoning in diagnostic problem solving. In Second Generation Expert Systems. Springer Verlag. 46-68.
de Kleer, J.; Mackworth, A.; and Reiter, R. 1992. Characterizing diagnoses and systems. Artificial Intelligence 56(2-3):197-222.
Hamscher, W.; Console, L.; and de Kleer, J. 1992. Readings in Model-Based Diagnosis. Morgan Kaufmann.
Lin, C.; Chaundhury, A.; Whinston, A.; and Marinescu, D. 1993. Logical inference of Horn clauses in Petri net models. IEEE TKDE 5(3):416-425.
Marinescu, D.; Beaven, M.; and Stansifer, R. 1991. A parallel algorithm for computing invariants of a Petri net model. In Proc. 4th PNPM, 136-143.
Martinez, J., and Silva, M. 1982. A simple and fast algorithm to obtain all invariants of a generalized Petri net. In Applications and Theory of Petri Nets. Springer Verlag. 301-310.
Murata, T. 1989. Petri nets: Properties, analysis and applications. Proceedings of the IEEE 77(4):541-580.
Peterka, G., and Murata, T. 1989. Proof procedure and answer extraction in Petri net model of logic programs. IEEE TSE 15(2):209-217.
Poole, D. 1989. Normality and faults in logic-based diagnosis. In Proc. 11th IJCAI, 1304-1310.
Portinale, L. 1992. Verification of causal models using Petri nets. International Journal of Intelligent Systems 7(8):715-742.
Portinale, L. 1993. Petri net models for diagnostic knowledge representation and reasoning. PhD Thesis, Dip. Informatica, Universita' di Torino. ftp://ftp.di.unito.it/pub/portinal.
Eleni Stroulia
Center for Applied Knowledge Processing
Helmholtzstr. 16
89081 Ulm, Germany
stroulia@faw.uni-ulm.de

Ashok K. Goel
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
goel@cc.gatech.edu

Abstract

Blame assignment is a classical problem in learning and adaptation. Given a problem solver that fails to deliver the behaviors desired of it, the blame-assignment task has the goal of identifying the cause(s) of the failure. Broadly categorized, these causes can be knowledge faults (errors in the organization, content, and representation of the problem-solver's domain knowledge) or processing faults (errors in the content, and control, of the problem-solving process). Much of AI research on blame assignment has focused on identifying knowledge and control-of-processing faults based on the trace of the failed problem-solving episode. In this paper, we describe a blame-assignment method for identifying content-of-processing faults, i.e., faults in the specification of the problem-solving operators. This method uses a structure-behavior-function (SBF) model of the problem-solving process, which captures the functional semantics of the overall task and the operators of the problem solver, the compositional semantics of its problem-solving methods that combine the operators' inferences into the outputs of the overall task, and the "causal" inter-dependencies between its tasks, methods and domain knowledge. We illustrate this model-based blame-assignment method with examples from AUTOGNOSTIC.

Introduction

Blame assignment is a classical problem in learning and adaptation (Samuel 1959). Given a problem solver that fails to deliver the behaviors desired of it, the general blame-assignment task has the goal of identifying the cause(s) of the failure. The types of the identified cause(s) can then be used as indices to appropriate learning strategies which can eliminate the causes of the failure and thus improve the problem solver. A problem solver may fail due to a wide variety of causes that may be broadly categorized into knowledge faults and processing faults. The former (Davis 1980; Weintraub 1991) pertain to errors in the organization, content, and representation of the problem-solver's domain knowledge, while the latter refer to errors in the content of, and control over, the steps of its processing. Much of AI research on blame assignment has focused on the identification of knowledge faults. In this paper, we focus on the identification of processing faults.

AI work on the identification of processing faults has itself focused on faults in the control of processing. For example, both Lex (Mitchell et al. 1981) and Prodigy (Carbonell et al. 1989) assume that the causes of the failures of their problem solvers lie in their incorrect operator-selection heuristics. Their blame-assignment methods assume that the set of available operators is both complete and correctly specified. This assumes that the exact same operators used in the failed problem-solving episode can in some way be combined to produce a correct solution. In contrast, in this paper we are interested in identifying faults in the specification of the operators. While we too assume that the set of available problem-solving operators is complete relative to the problem-solver's task, we admit the possibility that they may be incorrectly specified. That is, the information transformations the operators are designed to perform may not be sufficient for delivering a solution to (all of) the problems presented to the problem solver. Thus, the research issue becomes: given a problem solver that fails to deliver the overall behavior desired of it, specify a combination of knowledge and processing that enables the failing problem solver to identify faults in the specification of its operators.
Traditional blame-assignment methods for identifying faults in the control of processing are based on problem-solving traces. For example, both Lex and Prodigy require the trace of the processing in the failed problem-solving episode as well as a trace of the processing that would have led to problem-solving success. Under the assumption that the cause of the failure is that the problem solver does not know the exact conditions under which each of the operators should be used, both these systems compare the failed trace against the successful one to identify situations where operators were incorrectly used. In contrast, we describe a method which, in addition to the trace of the failed problem solving, uses a model of the problem-solver's processing and knowledge to identify faults in the specification of the problem-solving operators. We posit that the identification of faults in operator specification is facilitated by knowledge of (i) the functional semantics of the overall task of the problem solver, (ii) the functional semantics of its operators, (iii) the compositional semantics of the problem-solving methods that recursively synthesize the inferences carried out by the available operators into the outputs of its overall task, and (iv) the "causal" inter-dependencies between (sub)tasks, methods and domain knowledge. We use structure-behavior-function (SBF) models to capture this semantics of problem solving.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

This model-based method for blame assignment is implemented in AUTOGNOSTIC, a "shell" which provides (i) a language for representing SBF models of problem solvers, and (ii) mechanisms for monitoring the problem solving, receiving and assimilating feedback on the result, assigning blame in case of failure, and repairing the problem solver.
In this paper, we illustrate AUTOGNOSTIC'S blame-assignment method for processing faults with examples from AUTOGNOSTIC'S integration with ROUTER (Goel et al. 1994), a path-planning system.

SBF Models of Problem Solving

SBF models of problem solving analyze the problem-solver's task structure, its domain knowledge, and their inter-dependencies. In this model, the problem-solver's tasks constitute the building blocks of its problem-solving mechanism. The problem-solving methods that it employs decompose its complex overall tasks into simpler subtasks. These, in turn, get recursively decomposed into even simpler subtasks until they become elementary reasoning steps, i.e., "leaf" tasks. These leaf tasks are directly accomplished by the problem-solver's domain operators. A task is specified as a transformation from an input to an output information state. It is characterized by the type(s) of information it consumes as input and produces as output, and the nature of the transformation it performs between the two. The functional semantics of a task describes the nature of the information transformation the task is intended to perform, and thus constitutes a partial description of its expected, correct behavior. It is expressed in terms of specific domain relations that hold true among the task's inputs and outputs. For a non-leaf task, the functional semantics of the subtasks into which the task is recursively decomposed, and the ordering relations that the decomposing methods impose over them, constitute a partial description of a correct reasoning strategy for this task. Henceforth, we will use the term strategy to refer to the task tree that results from a task's decomposition by a particular method. Methods can be thought of as general plans for how the solutions to different low-level subtasks get combined to deliver the output desired of higher-level tasks.
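The task-and-method vocabulary described above can be sketched in code. The following is a minimal illustration only: the class layout, the `belongs_in` relation, and the tiny two-neighborhood world are invented for this sketch and are not AUTOGNOSTIC's actual representation.

```python
# Minimal sketch of how an SBF model might represent tasks and methods.
# All names and the toy world are assumptions made for illustration.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Method:
    name: str
    subtasks: list            # ordered subtasks: control inter-dependencies

@dataclass
class Task:
    name: str
    inputs: list              # information types consumed
    outputs: list             # information types produced
    semantics: Callable       # domain relation over (inputs, outputs)
    method: Optional[Method] = None   # None => leaf task, solved by an operator

# A leaf "elaboration"-style task: its semantics requires that the produced
# neighborhoods actually contain the given initial and goal locations.
world = {"n1": {"a", "b"}, "n2": {"c"}}

def belongs_in(location, neighborhood):
    return location in world[neighborhood]

elaboration = Task(
    name="elaboration",
    inputs=["initial-loc", "goal-loc"],
    outputs=["initial-neighborhood", "goal-neighborhood"],
    semantics=lambda ins, outs: belongs_in(ins[0], outs[0])
                                and belongs_in(ins[1], outs[1]),
)
print(elaboration.semantics(("a", "c"), ("n1", "n2")))  # True
```

The semantics predicate is what later lets a monitor check, for any episode, whether a task's actual input/output pair conforms to its intended behavior.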
They specify compositions of the problem-solver's domain operators into its higher-level tasks. Each method captures the semantics of the composition of a set of lower-level subtasks into a higher-level task in terms of control inter-dependencies (that is, a set of ordering relations) and information inter-dependencies (that is, a set of information producer-consumer relations) among these subtasks. In addition to the task structure, the SBF model of a problem solver specifies its domain knowledge in terms of the types of domain objects that the problem solver knows about, and the relations applicable to them. This specification of object types and relations captures the problem-solver's ontology of its domain. In addition, the information types flowing through the task structure are specified as instances of domain-object types, and thus each particular task input or output is related to the ontological commitments associated with the object type of which it is an instance. Finally, the specification of the tasks' functional semantics in terms of domain relations that must hold between their inputs and outputs captures the "causal" inter-dependencies of the inferences drawn to accomplish the tasks with the problem-solver's domain knowledge, where each inference is based on some specific domain knowledge. The assumption here is that if the semantics of a task is specified in terms of a particular domain relation, then in order to meet its semantics, the set of inferences drawn in service of this task will use the knowledge of the problem solver about this domain relation.

The Case Study

ROUTER is a multistrategy navigational planner which will be used in this paper to illustrate AUTOGNOSTIC'S model-based method for blame assignment. ROUTER'S task, path-planning, is to find a path from an initial location to a goal location in a physical space. Its spatial model of the navigation world is organized in a neighborhood-subneighborhood hierarchy.
High-level neighborhoods describe large spaces in terms of major streets and their intersections. They get refined into lower-level neighborhoods which describe both major and minor streets and their intersections, but over smaller spaces. Figure 1 diagrammatically depicts ROUTER'S task structure and gives part of the SBF specification of some of its tasks and types of domain knowledge. In addition to the spatial model, ROUTER contains a memory of past path-planning cases; the case memory is organized around the neighborhood-subneighborhood hierarchy. AUTOGNOSTIC'S SBF model of ROUTER'S problem solving specifies that its overall task, path-planning, is decomposed into the subtasks elaboration, retrieval, search and storage. The elaboration subtask classifies the initial and goal locations into the neighborhood-subneighborhood hierarchy; it identifies the neighborhoods to which the two locations belong. This is a leaf task, i.e., it is directly solved by the domain operator elaboration-op. The retrieval subtask recalls from ROUTER'S memory a similar path which connects locations spatially close to the current initial and goal locations. Next, the search subtask produces the desired path, and, subsequently, the storage subtask stores it in memory for future reuse. The search task can be accomplished by three different methods: the intrazonal, the interzonal, and the case-based method. The first two methods are model-based, that is, the semantics of the subtasks resulting from the use of these methods refer to model relations. Analogously, the semantics of the subtasks resulting from the use of the case-based method refer to case-memory relations. The first method is applicable only under the condition that the initial and the goal problem locations belong in the same neighborhood. It decomposes the search task into the subtasks search-initialization, tmp-path-selection and path-increase.
The first of these subtasks initializes the set of paths already explored by ROUTER to contain only a path consisting of a single location, i.e., the initial location. The tmp-path-selection subtask takes as input this set of explored paths and selects from it a particular temporary path, which feeds as input to the path-increase subtask. The latter task extends the temporary path to reach its neighboring points, i.e., all the intersections that belong in the common initial- and goal-neighborhood and are immediately adjacent to its last node. These extended paths are all added to the set of explored paths. The last two subtasks are repeatedly performed (as denoted by the small circle in the illustration of ROUTER'S task structure in Figure 1), until one of the explored paths reaches the goal location, in which case it is returned as the desired output of the overall task.

[Figure 1: Fragment of ROUTER'S planning task structure and part of the SBF specification of some of ROUTER'S domain objects and relations. The figure lists domain-object types (e.g., PATH, with attributes nodes and length; INTERSECTION, with attributes streets, connected-to and belongs-in), domain-relation types (e.g., PREFIX, with predicate is-prefix and inverse predicate prefix-of; CONNECTED-TO, with predicate adjacent-ints), ROUTER'S map, and the functional semantics of its tasks: Path-Planning: same-point(initial-loc initial-node(path)), same-point(goal-loc final-node(path)); Elaboration: belongs-in(initial-loc initial-neighborhood), belongs-in(goal-loc goal-neighborhood); Tmp-Path-Selection: ∃p ∈ explored-paths: same-path(p tmp-path); Path-Increase: ∀n ∈ nodes(path): belongs-in(n initial-neighborhood), prefix(tmp-path path).]
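The intrazonal search loop just described can be sketched as a breadth-first extension of explored paths. This is a hypothetical rendering only; the adjacency map and node names are invented, and ROUTER's actual implementation is not shown in the paper.

```python
# Hypothetical sketch of the intrazonal search loop: breadth-first
# extension of explored paths within a single neighborhood.

from collections import deque

def intrazonal_search(initial, goal, adjacency):
    explored = deque([[initial]])            # search-initialization
    while explored:
        tmp_path = explored.popleft()        # tmp-path-selection (FIFO => breadth-first)
        if tmp_path[-1] == goal:
            return tmp_path                  # an explored path reached the goal
        for nxt in adjacency[tmp_path[-1]]:  # path-increase: extend to adjacent nodes
            if nxt not in tmp_path:          # avoid revisiting nodes
                explored.append(tmp_path + [nxt])
    return None

adjacency = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(intrazonal_search("A", "D", adjacency))  # ['A', 'B', 'D']
```

The FIFO selection here is the "no particular quality criterion" behavior that the paper's repair step later revises.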
As shown in the bottom of Figure 1, ROUTER'S world is described in terms of several different object types, such as intersections, neighborhoods, streets and paths. For each of these domain-object types, the SBF model specifies the set of values that specific instances of objects may take, the attributes of the object type, the predicate that evaluates whether two instances of this object type are identical, and the domain relations that relate it to other domain objects. The objects in ROUTER'S world are related through relations, such as belongs-in, which relates intersections to neighborhoods. For each type of domain relation, the SBF model specifies the types of domain objects it applies to, and its truth table or the predicate that evaluates whether a tuple belongs in this relation or not.

Model-based Blame Assignment

In this paper, we focus on the blame-assignment task that arises when the problem solver is given as feedback information that a value desired for one of its outputs is different from the value actually produced. More specifically, the symptom of the failure is a divergence between the actual and the desired problem-solving behavior, although the actual behavior may be consistent with the range of behaviors intended of the problem solver. We will illustrate the model-based method for addressing this task with an example from ROUTER, which, given the problem of going from (10th center) to (walnut dalney), produces the path ((center 10th) (10th atlantic) (atlantic walnut) (walnut dalney)), for which AUTOGNOSTIC receives the shorter path ((center 10th) (center mapple) (mapple dalney) (dalney walnut)) as feedback (see Figure 1, right, for a map of ROUTER'S navigational domain).
Before localizing the cause of the failure into a specific operator or piece of domain knowledge, the blame-assignment method evaluates whether the feedback is within the class of values the overall problem-solver's task was intended to produce; otherwise it would be meaningless to examine why it was not actually produced. To this end, it evaluates whether the feedback conforms with the overall task's expected correct behavior as specified by the task's functional semantics. As mentioned above, the SBF specification of a task's semantics consists of the task's input and output information types and a domain relation. For each domain relation, the SBF model specifies a predicate (or a truth table), which makes it possible to evaluate whether or not the specific values of input and output information in the episode belong to the domain relation. The tuple formed by the specific values of the input and output information in a particular problem-solving episode should belong in the domain relation. In our example, AUTOGNOSTIC'S first step is to establish, based on the semantics of the overall path-planning task (see Figure 1), whether the feedback path belongs in the class of paths that ROUTER was intended to produce given its actual input initial and goal locations. The feedback path begins at the initial and ends at the goal location; therefore, AUTOGNOSTIC infers that the feedback is indeed a valid output for ROUTER'S current problem. If the feedback indeed belongs in the class of intended correct outputs for the current problem input (see Figure 2[1]), then the strategy employed to accomplish this task should have produced it. Thus, the blame-assignment method postulates that the cause of the failure must lie within this strategy, that is, within some of its subtasks.
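The first validation step above, checking the feedback against the overall task's functional semantics, can be sketched as follows. The `same_point` predicate is an assumption made for this sketch: intersections are treated as unordered street pairs.

```python
# Sketch of the first blame-assignment step: check that the feedback path
# satisfies the overall path-planning semantics, i.e. it begins at the
# initial location and ends at the goal location.

def same_point(a, b):
    # assumed equivalence predicate: an intersection is an unordered street pair
    return set(a) == set(b)

def path_planning_semantics(initial_loc, goal_loc, path):
    return same_point(initial_loc, path[0]) and same_point(goal_loc, path[-1])

feedback = [("center", "10th"), ("center", "mapple"),
            ("mapple", "dalney"), ("dalney", "walnut")]
print(path_planning_semantics(("10th", "center"), ("walnut", "dalney"), feedback))  # True
```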
From the trace of the failed problem-solving episode, it identifies the method which was used for the task in question, and focuses the search for the cause of the failure on the last subtask of this method producing the erroneous output (see Figure 2[1.2]). Having established that the feedback belongs in the class of paths that path-planning could have produced for this problem, AUTOGNOSTIC postulates that the cause of the failure must lie within the strategy used to accomplish this task. Thus, it successively refines the focus of its investigation to the subtasks involved in the production of the path, i.e., search and next path-increase. If at some point the semantics of some task is not validated by its actual input and the feedback (see Figure 2[2]), then the blame-assignment method attempts to infer alternative inputs which would satisfy it. This is meaningful only when the task's input is not part of the overall problem specification; otherwise it would be an attempt to redefine the problem in order to fit the desired solution. If, however, the input of the task in question is produced by some earlier subtask, and if alternative values can be found for it such that the current task's semantics is satisfied (see Figure 2[2.1]), then the blame-assignment method infers that the fault must lie within this earlier subtask which produced the "wrong" input for the task currently under examination. Therefore, it identifies the highest earlier subtask producing the information in question, and shifts its focus to assigning blame for its failure to produce the alternative desired value. To identify the producing subtask, the method uses the compositional semantics of the methods that gave rise to the subtask under examination. Had this knowledge not been available in the SBF model, the method would have to examine all the subtasks performed before the current subtask with failing semantics.
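The descend-or-shift-blame recursion described above can be rendered schematically in code. This is a simplified sketch: the dict-based task records, the inverse predicate, and the miniature ROUTER-like scenario are all invented for illustration and are not AUTOGNOSTIC's actual data structures.

```python
# Schematic, simplified rendering of the recursive blame-assignment descent:
# while the feedback is producible, recurse into the method's subtasks;
# when a task's semantics is violated, infer an alternative input via an
# inverse predicate and shift blame to that input's producer.

def is_prefix(p, q):
    """True iff path p is a prefix of path q."""
    return q[:len(p)] == p

def assign_blame(task, feedback, trace):
    actual_input = trace[task["name"]]["input"]
    if task["semantics"](actual_input, feedback):      # feedback producible [1]
        if task.get("subtasks"):                       # recurse into method [1.2]
            return assign_blame(task["subtasks"][-1], feedback, trace)
        return ("under-specified-task-semantics", task["name"])   # leaf [1.1]
    if task.get("producer") is not None:               # semantics violated [2]
        alt_input = task["invert"](feedback)           # inverse predicate
        return assign_blame(task["producer"], alt_input, trace)   # shift [2.1]
    return ("over-constrained-task-semantics", task["name"])      # [2.2]

# Miniature scenario: path-increase must extend the selected temporary path;
# tmp-path-selection may return any explored path.
explored = [["A"], ["A", "B"], ["A", "C"]]
tmp_sel = {"name": "tmp-path-selection",
           "semantics": lambda _inp, out: out in explored}
path_inc = {"name": "path-increase",
            "semantics": lambda inp, out: is_prefix(inp, out),
            "producer": tmp_sel,
            "invert": lambda out: out[:-1]}  # longest proper prefix as alternative
trace = {"path-increase": {"input": ["A", "B"]},
         "tmp-path-selection": {"input": None}}

# The desired path ["A", "C", "D"] does not extend the selected ["A", "B"],
# so blame shifts to tmp-path-selection, whose semantics admits ["A", "C"].
print(assign_blame(path_inc, ["A", "C", "D"], trace))
```

Recursing into only the last subtask is a simplification of the paper's "last subtask of the method producing the erroneous output."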
AB-undesired-value(task, info, feedback)
  IF feedback belongs in the class of values task produces for info THEN [1]
    IF task is accomplished by an operator
      THEN under-specified-task-semantics [1.1]
        IF the task semantics is an enumerated domain relation
          THEN incorrect-domain-relation [1.1.a]
    IF task is accomplished by a method M
      THEN AB-undesired-value(task-i, info, feedback) [1.2]
        where task-i ∈ subtasks(task, M) and info ∈ output(task-i)
  ELSE [2]
    IF there is an alternative value v for info i, where i ∈ input(task),
        for which task could have produced feedback for info
      THEN AB-undesired-value(task-i, i, v) [2.1]
        where i ∈ output(task-i) and ¬∃ task-j: i ∈ output(task-j) ∧ task-i ∈ subtasks(task-j)
    ELSE over-constrained-task-semantics [2.2]
      IF the violated semantics is an enumerated domain relation
        THEN incomplete-domain-relation [2.2.a]

Figure 2: The blame-assignment algorithm. The different diagnostic hypotheses that can be postulated are shown in boldface.

The ability to infer possible alternative values for types of information produced by the problem-solver's intermediate subtasks is based on the SBF specification, first, of the functional semantics of the task under inspection, and second, of the domain relations on which this semantics is based. As we have already described, based on these types of knowledge, the blame-assignment method is able to evaluate whether or not a particular value tuple validates a task's semantics. In addition, given a partially specified tuple, the blame-assignment method is potentially able to identify possible values for the unspecified tuple members such that the tuple belongs in the relation and satisfies the semantics. If the domain relation is exhaustively described in a truth table, then the possible values are inferred through a search of this table for all the tuples that match the partially specified one. If it is evaluated by a predicate, then there are two possibilities.
Either an inverse predicate is also specified in the SBF model, such that it maps the task's output to its possible inputs, or the task's input is an instance of an object type with an exhaustively described domain. In the former case, the inverse predicate is applied to the task's desired output to produce the possible values for the alternative input which could lead to its production. In the latter case, the input domain is searched for those values which, together with the desired output, would satisfy the semantics of the task. Thus, given the output desired of a task, and based on the SBF specification of its semantics, the blame-assignment method is potentially able to infer the input which could possibly lead the task to produce it. Clearly, if none of the above conditions is true, then no alternative inputs can be inferred. The semantics of path-increase, which specifies that the produced path must be an extension of the path selected by tmp-path-selection (prefix(tmp-path path)), fails for the feedback path. The path selected in the last repetition of the loop was ((center 10th) (10th atlantic) (atlantic walnut)), and it is not a prefix of the feedback; therefore the desired path could not possibly have been produced by the path-increase task given the temporary path it received as input. Thus, AUTOGNOSTIC attempts to infer alternative values for the input temporary path which could enable path-increase to produce the desired path. The relation prefix is evaluated by a predicate, and the SBF model also specifies an inverse predicate for it which, given a path, produces all its possible prefixes. Given the possible prefixes of the desired path, AUTOGNOSTIC infers that if ((center 10th) (center mapple) (mapple dalney)) had been selected, the path-increase subtask could, in principle, have produced the feedback. Thus, the cause of the failure must lie within the subtask which selected the "wrong" path, tmp-path-selection.
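A relation specified by a predicate together with an inverse predicate, like the prefix relation above, can be sketched as follows; the `|`-joined node labels are invented stand-ins for ROUTER's intersections.

```python
# Sketch of a predicate-plus-inverse-predicate relation, as in the SBF
# entry for prefix: the inverse maps a desired output path to every
# input (prefix) that could have led to it.

def is_prefix(p, q):
    return q[:len(p)] == p

def prefixes_of(path):
    # inverse predicate: all proper, non-empty prefixes of the path
    return [path[:i] for i in range(1, len(path))]

desired = ["center|10th", "center|mapple", "mapple|dalney", "dalney|walnut"]
candidates = prefixes_of(desired)
assert all(is_prefix(p, desired) for p in candidates)
print(candidates[-1])  # the longest candidate alternative input
```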
Therefore, it focuses the investigation towards identifying why this earlier subtask did not select the right path. The blame-assignment method may reach a leaf task whose semantics is validated by both the feedback and the value actually produced for its output (see Figure 2[1.1]). This situation implies that the problem-solver's task structure is not sufficiently tailored to producing the right kind of solutions. In such situations the blame-assignment method postulates the following two types of errors as possible causes for the failure. First, the task's semantics may be under-specified, allowing both the actual and feedback values to be produced, when only the latter conforms with the requirements of the problem-solver's environment. In such cases, the function of this subtask should be refined (i.e., more domain relations should be added to its functional semantics), so that the overall problem-solving process becomes more selective. Second, if the task semantics refers to domain relations exhaustively described in truth tables, the blame-assignment method hypothesizes, as an additional cause of the failure, incorrect domain knowledge of the problem solver regarding the relation which allows the mapping from its actual input to its actual output (see Figure 2[1.1.a]). This mapping could potentially be incorrect, in which case the task should never have produced the undesired actual value, and it should have preferred the feedback. Among these two hypotheses, the subsequent repair step will first attempt to address the former, which is the graver one, by identifying new semantics for the under-specified operator. If this is not possible, it will attempt to address the latter. AUTOGNOSTIC evaluates the functional semantics of tmp-path-selection and notices that it is satisfied by the desired temporary path. Indeed, this path belongs in the set of paths that ROUTER has already explored.
Thus, AUTOGNOSTIC infers that this desired value could have been produced by tmp-path-selection. This task is a leaf task, and therefore the error must lie within the operator that accomplishes it (notice that the task's semantics does not depend on any truth-table-defined domain relations). That is, the specification of the tmp-path-selection operator is incomplete with respect to the quality of solutions desired of the overall path-planning task. Indeed, the intrazonal-search method performs a breadth-first search in the space of possible paths with no particular quality criterion to guide it. If there is a quality desired of ROUTER'S paths, then a "best"-first search method would be more appropriate. By postulating the under-specification of the tmp-path-selection operator as the cause for this failure of ROUTER, AUTOGNOSTIC'S subsequent repair step is able to search for additional semantics with which to characterize the information transformation of this operator. As a result, it modifies this operator so as to select the shortest of the available explored paths, thus transforming the intrazonal-search method into a greedy, shortest-path-first search. Alternatively, the blame-assignment method may reach a task whose semantics is violated by the feedback, and no alternative values can be found for its input which can satisfy it (see Figure 2[2.2]). Under these circumstances, it postulates that the cause of the failure may be the over-constrained task semantics which does not allow it to produce the feedback as its output, although this value is acceptable given the information transformation intended of the overall task. In such cases the function of this subtask should be respecified in terms of other domain relations, in order to extend the class of its output values to include the feedback value.
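The repair just described, re-specifying tmp-path-selection to prefer the shortest explored path, can be sketched as below. The explored set and the measure of length (number of intersections) are assumptions of this sketch.

```python
# Sketch of the repaired operator: selecting the shortest explored path
# turns the breadth-first intrazonal search into a greedy,
# shortest-path-first search.

def tmp_path_selection(explored_paths):
    # repaired operator: minimize path length instead of arbitrary/FIFO choice
    return min(explored_paths, key=len)

explored = [["center|10th", "10th|atlantic", "atlantic|walnut"],
            ["center|10th", "center|mapple"]]
print(tmp_path_selection(explored))  # the two-intersection path is selected
```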
In addition, if the domain relations defining its semantics are exhaustively described by truth tables, the blame-assignment method postulates that the cause of the failure may be the incomplete domain knowledge of the problem solver regarding the relation which does not include a tuple relating the task's actual input with its desired output, although it belongs in this relation (see Figure 2[2.2.a]). This could have been the case in our example if the tmp-path-selection semantics specified that the selected temporary path had to be the most scenic (or, for that matter, satisfy any other property that the desired temporary path does not satisfy) of the already explored paths. In this case, the blame-assignment method would have postulated that this operator was over-constrained. Assigning blame is pointless unless it results in repairing the error and improving the problem solving. Indeed, whether or not repair of the fault identified by blame assignment results in improvement in problem-solving performance provides a good measure of the efficacy of the blame-assignment method. The repair of an incorrectly specified operator in AUTOGNOSTIC involves the discovery of relations that characterize the examples of behavior desired of the operator, and that differentiate them from the examples of its actual undesired behavior. As described in (Stroulia 1995), these discovered relations are used to re-specify the operator's functional semantics. In addition to ROUTER, AUTOGNOSTIC to date has been integrated with KRITIK2 (Goel 1989; Goel 1991), a design system, and REFLECS, a reactive robot. In one set of experiments, AUTOGNOSTIC was tested with 8, 4, and 1 individual problems with ROUTER, KRITIK2, and REFLECS respectively. Each experiment in this set addressed a different learning task in that blame assignment identified a different kind of fault. We found that after repair the problem-solver's performance improved in each experiment.
The differences among the three problem solvers (paradigm: deliberative vs. reactive; task: planning, design, navigation) provide some evidence for the generality of the SBF language and the model-based blame-assignment method. Also, to evaluate long-term learning, in another set of experiments, AUTOGNOSTIC'S integration with ROUTER was tested twice with 3 sequences of 40 randomly generated problems. For each problem, a different kind of "better" path was given as feedback. In this set of experiments, we found that AUTOGNOSTIC converged to a modified task structure of ROUTER after modifying the same three or four operators. The modified task structure was significantly superior to the original one in terms of problem-solving performance (Stroulia 1995). Collectively these experiments appear to indicate that the SBF language and the model-based method for blame assignment are appropriate for problem solvers whose behaviors can be described in terms of the interactions among a set of identifiable design elements with well-defined functions.

Related Research

The analysis of problem solving in terms of task structures builds on Chandrasekaran (1989). The representation language of SBF models is based on another type of SBF models that describe how physical devices work (Goel 1989). KRITIK2 uses SBF models of physical devices for diagnosis (Stroulia et al. 1992; Goel & Stroulia 1996) and for design adaptation (Goel 1991). In formulating AUTOGNOSTIC'S SBF language for modeling problem solvers, we needed to make many changes to KRITIK2's SBF models. For example, we had to significantly enhance the SBF language for capturing the functional semantics of tasks. Teiresias (Davis 1980), Gordius (Simmons 1988), Cream (Weintraub 1991), and Meta-Aqua (Ram & Cox 1994) identify knowledge faults. In addition to Lex (Mitchell et al. 1981) and Prodigy (Carbonell et al. 1989), which we have already discussed, Castle (Freed et al. 1992) too identifies processing faults. It uses a model of the problem solver that specifies the behavior expected of it, the interacting components of the problem solver, and the assumptions underlying their correct behavior. This is similar to the SBF specification of the functional semantics of the tasks and subtasks of the problem solver. Like AUTOGNOSTIC, Castle's model provides a justification structure for the expected problem-solving behavior. But Castle's models lack the hierarchical organization and compositional semantics of SBF models; thus, they do not provide any guidance in searching through the inter-dependencies of the problem-solver's components. The blame-assignment task in Castle is also different: given the failure of an explicitly stated assumption about the problem-solver's behavior, it identifies the component whose design assumptions support the failed expectation and postulates errors in its functioning. In contrast, given a behavior desired of the problem solver, AUTOGNOSTIC uses the functional semantics of tasks and subtasks to postulate alternative behaviors desired of them.

Conclusions

In this paper, we have described a blame-assignment method, able to identify faults in the specification of a problem-solver's operators, based on the problem-solver's SBF model. The SBF model of a problem solver captures (i) the functional semantics of the problem-solver's tasks, (ii) the compositional semantics of the methods that recursively synthesize the inferences drawn by its operators into the outputs of its overall task, and (iii) the "causal" inter-dependencies between its tasks and domain knowledge. The SBF specification of the tasks' functional semantics plays a variety of roles in this blame-assignment method. First, the functional semantics of the problem-solver's overall task establishes the range of behaviors that the problem solver is intended to deliver, irrespective of whether or not it is explicitly designed to do so.
Second, based on the functional semantics of the problem-solver's intermediate subtasks and the overall behavior desired of it, the blame-assignment method infers the behaviors desired of these subtasks. Third, by comparing the functional semantics of the problem-solver's operators with the behaviors desired of them, it identifies when the functions originally designed in these operators are incorrect (under-specified or over-constrained) with respect to the behaviors desired of the problem-solver.

The SBF specification of the compositional semantics of the methods that the problem solver uses to accomplish its overall task enables the blame-assignment method to investigate only the tasks involved in the production of the output for which an undesired value was produced. The blame-assignment method focuses the investigation from higher- to lower-level tasks and from one type of information to another, and thus limits the number of information inter-dependencies that it examines.

Finally, based on the "causal" inter-dependencies between the tasks' functional semantics and the problem-solver's domain knowledge, the blame-assignment method is able to identify incorrect uses of this knowledge in the specification of the problem-solver's operators, and errors in this domain knowledge.

References

Carbonell, J.G.; Knoblock, C.A.; and Minton, S. 1989. Prodigy: An Integrated Architecture for Planning and Learning. Architectures for Intelligence, Hillsdale, NJ: LEA.
Chandrasekaran, B. 1989. Task Structures, Knowledge Acquisition and Machine Learning. Machine Learning 4: 341-347.
Davis, R. 1980. Meta-Rules: Reasoning about Control. Artificial Intelligence 15: 179-222.
Freed, M.; Krulwich, B.; Birnbaum, L.; and Collins, G. 1992. Reasoning about performance intentions. In Proc. of the Fourteenth Annual Conference of the Cognitive Science Society, 7-12.
Goel, A. 1989.
Integration of Case-Based Reasoning and Model-Based Reasoning for Adaptive Design Problem Solving. Ph.D. diss., The Ohio State University.
Goel, A. 1991. A Model-Based Approach to Case Adaptation. In Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, 143-148.
Goel, A.; Ali, K.; Donnellan, M.; Gomez, A.; and Callantine, T. 1994. Multistrategy Adaptive Navigational Path Planning. IEEE Expert 9(6): 57-65.
Goel, A., and Stroulia, E. 1996. Functional Representation and Functional Device Models and Model-Based Diagnosis in Adaptive Design. Artificial Intelligence in Design, Engineering and Manufacturing (to appear).
Mitchell, T.M.; Utgoff, P.E.; Nudel, B.; and Banerji, R.B. 1981. Learning problem-solving heuristics through practice. In Proc. of the Seventh International Joint Conference on Artificial Intelligence, 127-134.
Ram, A., and Cox, M.T. 1994. Introspective Reasoning Using Meta-Explanations for Multistrategy Learning. In Machine Learning: A Multistrategy Approach IV, (eds.) R.S. Michalski and G. Tecuci, 349-377. San Mateo, CA: Morgan Kaufmann.
Samuel, A. 1959. Some studies in machine learning using the game of checkers. IBM Journal of R&D. Reprinted in Feigenbaum and Feldman (eds.): Computers and Thought, McGraw-Hill, New York.
Simmons, R.G. 1988. Combining Associational and Causal Reasoning to Solve Interpretation and Planning Problems. Ph.D. diss., MIT.
Stroulia, E.; Shankar, M.; Goel, A.; and Penberthy, L. 1992. A Model-Based Approach to Blame Assignment in Design. In J.S. Gero (ed.), Proc. of the Second International Conference on AI in Design, 519-537. Kluwer Academic Publishers.
Stroulia, E. 1995. Failure-Driven Learning as Model-Based Self-Redesign. Ph.D. diss., Georgia Inst. of Technology.
Weintraub, M. 1991. An Explanation-Based Approach to Assigning Credit. Ph.D. diss., The Ohio State University.
Qualitative Multiple-Fault Diagnosis of Continuous Dynamic Systems Using Behavioral Modes

Siddarth Subramanian, National Instruments - Georgetown, 1978 S. Austin Ave., Georgetown, Texas 78626, sid@georgetown.com
Raymond J. Mooney, Dept. of Computer Sciences, University of Texas at Austin, Austin, Texas 78712, mooney@cs.utexas.edu

Abstract

Most model-based diagnosis systems, such as GDE and Sherlock, have concerned discrete, static systems such as logic circuits and use simple constraint propagation to detect inconsistencies. However, sophisticated systems such as QSIM and QPE have been developed for qualitative modeling and simulation of continuous dynamic systems. We present an integration of these two lines of research as implemented in a system called QDOCS for multiple-fault diagnosis of continuous dynamic systems using QSIM models. The main contributions of the algorithm include a method for propagating dependencies while solving a general constraint satisfaction problem and a method for verifying the consistency of a behavior with a model across time. Through systematic experiments on two realistic engineering systems, we demonstrate that QDOCS achieves a better balance of generality, accuracy, and efficiency than competing methods.

Introduction

In a world increasingly filled with devices that exhibit complex dynamic behavior, online diagnostic systems are becoming increasingly important. To address this problem, researchers have devised various solutions over the last two decades (Shortliffe & Buchanan 1975; de Kleer & Williams 1987). These systems have been applied to the problems of medical diagnosis, as well as to combinational circuit diagnosis and similar domains. However, as we shall see, these diagnosis approaches are not directly suited to the kinds of continuous dynamic systems that we are interested in.

Traditional modes of reasoning about physical systems use differential equations to model their dynamics. However, these techniques are limited in their ability to usefully model large systems because of the difficulties in constructing accurate formulations of large systems, and because of the computational complexities involved in solving large systems of differential equations. One solution to this problem is to use Qualitative Reasoning (Forbus 1984; Kuipers 1984). Our work uses QSIM (Kuipers 1994) as the modelling language and applies a very general diagnostic technique to models described in this language.

Previous approaches to diagnosing faults in systems described with QSIM models have been limited in scope: they have been unable to work with fault modes (Ng 1990; Lackinger & Nejdl 1991) or have made a single-fault assumption (Dvorak 1992). Most previous work on model-based diagnosis (Reiter 1987; de Kleer & Williams 1987) has concentrated on static systems and is generally insufficient to diagnose continuous dynamic systems. Few of the other approaches to diagnosis of continuous systems (Oyeleye, Finch, & Kramer 1990; Dague et al. 1991) have made use of a general modelling language such as that provided by QSIM or used any of the general diagnostic formalisms introduced by Reiter or de Kleer.

This work is an integration of the two paradigms of model-based diagnosis and qualitative reasoning into a general, multiple-fault diagnosis system for continuous dynamic systems using behavioral modes with a priori probabilities. The diagnostic architecture is similar to SHERLOCK (de Kleer & Williams 1989) and the algorithm builds on INC-DIAGNOSE (Ng 1990). The system uses a general constraint-satisfaction technique to detect faults and trace dependencies in order to generate conflicts and diagnoses. A QSIM-based simulation component is used to verify hypotheses and detect additional inconsistencies.
The implemented system, QDOCS (Qualitative Diagnosis Of Continuous Systems), is powerful enough to accurately diagnose a number of different faults in the Space Shuttle's Reaction Control System and a simple chemical reaction tank. (A much more detailed account of this work can be found in (Subramanian 1995).)

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

An Example

An example used to illustrate the algorithm consists of a simple bathtub with a drain. It is assumed that the bathtub is monitored by sensors measuring the amount of water in the tub and the flow rate of the water through the drain. Some of the faults that can be posited about this system include a blocked drain, leaks in the tank, and sensors stuck at various levels.

This system is described using a qualitative differential equation, or QDE. A QDE is a set of constraints, each of which describes the relationship between two or more variables. For instance, an M+ relation is said to exist between two variables if one is a monotonically increasing function of the other. So, in our normal bathtub model, there is an M+ relation between the amount and the level of water in the bathtub, and also between the level and pressure, and the pressure and outflow rate. However, in a model of a blocked bathtub, the outflow rate is zero, and it is described by the constraint ZERO-STD. The use of discrete mode variables in QSIM allows us to combine normal and faulty models of a system into a single description as shown here:

  (M+ amount level)
  (M+ level pressure)
  (mode (drain-mode normal) (M+ pressure outflow))
  (mode (drain-mode blocked) (ZERO-STD outflow))
  (ADD netflow outflow inflow)
  (D/DT amount netflow)
  (CONSTANT inflow if*)

Here, the variable drain-mode takes on the possible values of normal, blocked, or unknown, and the constraints shown above correspond to the two known modes of the bathtub's behavior.
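The M+ and ZERO-STD constraints above can be illustrated with a small sketch. This is not QSIM; the state representation (a magnitude plus a direction of change) and the variable bindings are simplifications invented for illustration.

```python
# A small sketch of checking qualitative constraints in the spirit of the
# bathtub QDE above. Not QSIM itself; all names are illustrative.

def m_plus(a, b):
    """M+ : b is a monotonically increasing function of a, so both
    variables must have the same qualitative direction of change."""
    return a["dir"] == b["dir"]

def zero_std(a):
    """ZERO-STD : the variable is steady at zero."""
    return a["mag"] == 0 and a["dir"] == "std"

# A filling tub whose drain is blocked:
amount  = {"mag": "(0 top)", "dir": "inc"}
level   = {"mag": "(0 top)", "dir": "inc"}
outflow = {"mag": 0,         "dir": "std"}

print(m_plus(amount, level))   # the (M+ amount level) constraint holds
print(zero_std(outflow))       # consistent with (ZERO-STD outflow)
```

A qualitative state violating one of these checks is exactly the kind of local inconsistency the diagnostic algorithm exploits.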
For the purposes of diagnosis, these mode variables can then be associated with components of the system, and their different values with behavioral modes of the component. Each of these behavioral modes has an a priori probability specified by the model-builder. The component structure used to represent the bathtub looks like this:

  (defcomponents bathtub
    (drain drain-mode (normal 0.89) (blocked 0.1) (unknown 0.01))
    (levelsensor levelsensor-mode (normal 0.79) (stuck-at-0 0.1) (stuck-at-top 0.1) (unknown 0.01))
    (flowsensor flowsensor-mode (normal 0.79) (stuck-high 0.1) (stuck-at-0 0.1) (unknown 0.01))
    (inletvalve inletvalve-mode (normal 0.79) (stuck-closed 0.1) (unknown 0.01)))

Here, each entry consists of the component name (e.g., drain), the mode variable (drain-mode), and a list of behavioral modes with their a priori probabilities ((normal 0.89) (blocked 0.1) (unknown 0.01)).

The input to the diagnostic algorithm consists of a behavior, which is a sequence of qualitative values for a subset of the variables corresponding to sensor readings. The output of the algorithm is an assignment of values to the mode variables such that the resulting model is consistent with the observed behavior, i.e., the behavior corresponds to a QSIM simulation of the model.

As an example, suppose QDOCS is given the following single set of sensor readings from a behavior of the bathtub: (level-sensed (0 top)), (outflow-sensed 0) (i.e., the level sensed is somewhere between 0 and top and the outflow sensed is 0). This is clearly inconsistent with the normal model of the system, which would predict a flow through the drain. Some of the valid diagnoses for this behavior include [(drain-mode blocked)], [(flowsensor-mode stuck-at-0)], and [(drain-mode blocked) (flowsensor-mode stuck-at-0)].

The above example motivates an approach of applying QSIM's constraint satisfaction techniques to detect inconsistencies between the sensor readings and the model.
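Since QDOCS assumes faults are independent, the a priori probability of a candidate diagnosis is just the product of its components' mode probabilities. A sketch, using the bathtub numbers above (the function itself is illustrative, not QDOCS code):

```python
# Sketch: a priori probability of a candidate diagnosis as the product of
# its components' mode probabilities (faults assumed independent).
# Component names and numbers come from the bathtub example above.

COMPONENTS = {
    "drain":      {"normal": 0.89, "blocked": 0.1, "unknown": 0.01},
    "flowsensor": {"normal": 0.79, "stuck-high": 0.1,
                   "stuck-at-0": 0.1, "unknown": 0.01},
}

def prior(diagnosis):
    """diagnosis maps components to assumed behavioral modes;
    unmentioned components are taken to be in their normal mode."""
    p = 1.0
    for comp, modes in COMPONENTS.items():
        p *= modes[diagnosis.get(comp, "normal")]
    return p

# Single faults are a priori more probable than double faults:
print(prior({"drain": "blocked"}))                              # 0.1 * 0.79
print(prior({"drain": "blocked", "flowsensor": "stuck-at-0"}))  # 0.1 * 0.1
```

This ordering is what lets the search below consider single-fault diagnoses before multiple-fault ones.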
However, since the systems under study are dynamic systems that maintain temporal consistency, satisfying the constraints for a given set of sensor readings does not guarantee that the sequence of readings is consistent. The approach we discuss in the next section includes using the continuity checking of QSIM to check this temporal consistency.

QDOCS's Diagnostic Approach

QDOCS uses a standard diagnostic approach similar to that of (de Kleer & Williams 1989) and combines it with a hypothesis checker (and conflict generator) based on the QSIM algorithm of (Kuipers 1994).

Diagnosis Construction: Like SHERLOCK's technique for constructing diagnoses, QDOCS uses a best-first search mechanism and focuses its search on the leading candidate diagnoses as determined by their a priori probabilities. QDOCS maintains an agenda of hypotheses to be tested and a list of conflict sets. The former is initialized to the single hypothesis that everything is functioning normally, while the latter is initialized to the null set.

The hypothesis checker is first called with the initial hypothesis of all the components being normal. If it returns a null value, the given behavior is consistent with the hypothesis; in other words, the given behavior is a possible result of running QSIM on the model assuming all component mode variables are in the normal mode. If there is no QSIM simulation that results in the given behavior, the checker returns a conflict set of component mode variable values. This conflict set is then added to the set of conflict sets, and the agenda is expanded by adding all hypotheses generated by changing the mode value of a single component in such a way that it hits all the conflict sets. This process is repeated until one or more hypotheses are found to be consistent with the observations.

Checking Hypotheses: Most diagnostic systems like GDE (de Kleer & Williams 1987) use simple constraint propagation to determine conflict sets.
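Before turning to how hypotheses are checked, the best-first, conflict-hitting diagnosis-construction loop above can be sketched as follows. The checker, mode tables, and probabilities here are toy stand-ins (two components, two modes each), not the QDOCS implementation:

```python
import heapq

# Toy mode tables and priors; real QDOCS reads these from defcomponents.
MODES = {"drain": ["normal", "blocked"],
         "flowsensor": ["normal", "stuck-at-0"]}
PRIOR = {("drain", "normal"): 0.89, ("drain", "blocked"): 0.1,
         ("flowsensor", "normal"): 0.79, ("flowsensor", "stuck-at-0"): 0.1}

def prob(h):
    p = 1.0
    for comp_mode in h.items():
        p *= PRIOR[comp_mode]
    return p

def hits(h, conflict):
    # A hypothesis hits a conflict if it contradicts some literal in it.
    return any(h[comp] != mode for comp, mode in conflict)

def diagnose(check):
    """check(h) returns None if h is consistent with the observations,
    otherwise a conflict set of (component, mode) literals."""
    normal = {c: "normal" for c in MODES}
    agenda = [(-prob(normal), sorted(normal.items()))]
    conflicts = []
    while agenda:
        _, items = heapq.heappop(agenda)        # most probable first
        h = dict(items)
        conflict = check(h)
        if conflict is None and all(hits(h, c) for c in conflicts):
            return h
        if conflict is not None:
            conflicts.append(conflict)
        # Expand by changing the mode of a single component so that the
        # successor hits every conflict set recorded so far.
        for comp, modes in MODES.items():
            for mode in modes:
                h2 = {**h, comp: mode}
                if h2 != h and all(hits(h2, c) for c in conflicts):
                    heapq.heappush(agenda, (-prob(h2), sorted(h2.items())))
    return None

# Toy observations that contradict the all-normal model:
CONFLICT = [("drain", "normal"), ("flowsensor", "normal")]
check = lambda h: CONFLICT if not hits(h, CONFLICT) else None
print(diagnose(check))
```

With these toy priors the search refutes the all-normal hypothesis, then tries the most probable single-mode change that hits the recorded conflict.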
However, QSIM requires a more complete constraint satisfaction algorithm since a qualitative constraint typically does not entail a unique value for a remaining variable when all its other variables have been assigned. (A conflict is hit by a hypothesis if some literal in the diagnosis contradicts a literal in the conflict.) An earlier attempt to use QSIM to track dependencies for diagnosis (Ng 1990) only used a simple propagator. Since the propagator alone is not complete, Ng's program, INC-DIAGNOSE, is not guaranteed to detect all inconsistencies.

QSIM takes a set of initial qualitative values for some or all of the variables of a model and produces a representation of all the possible behaviors of the system. The inputs to QSIM are 1) a qualitative differential equation (QDE) represented as a set of variables and constraints between them, and 2) an initial state represented by qualitative magnitudes and directions of change for some of these variables. QSIM first completes the state by solving the constraint satisfaction problem (CSP) defined by the initial set of values and the QDE. For each of the completed states satisfying the constraints, QSIM finds qualitative states that are possible successors and uses constraint satisfaction to determine which of these are consistent. The process of finding successors to states and filtering on constraints continues as QSIM builds a tree of states called a behavior tree.

There are two possible ways in which the QDE corresponding to a hypothesis can be inconsistent with a given set of sensor readings: 1) a particular set of readings may be incompatible with the QDE, or 2) all the sets of readings may be compatible with the QDE but the sequence may not correspond to any particular behavior in a QSIM behavior tree.
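The second kind of inconsistency can be illustrated with a toy across-time check: each reading set may be individually satisfiable while no path through the successor relation realizes the whole sequence. The successor functions and matcher below are invented for illustration; QSIM's actual state representation is far richer:

```python
# Toy sketch of across-time consistency: a hypothesis is refuted when no
# path of successor states realizes the observed reading sequence, even
# though every individual reading set is satisfiable on its own.

def consistent_across_time(initial, successors, readings, matches):
    """True iff some successor path realizes the reading sequence,
    allowing any number of intermediate states per reading."""
    states = {s for s in initial if matches(s, readings[0])}
    for cur, nxt in zip(readings, readings[1:]):
        seen, frontier, advanced = set(states), list(states), set()
        while frontier:                  # explore within the current reading
            for s2 in successors(frontier.pop()):
                if matches(s2, nxt):
                    advanced.add(s2)
                elif matches(s2, cur) and s2 not in seen:
                    seen.add(s2)
                    frontier.append(s2)
        if not advanced:
            return False                 # sequence unrealizable: conflict
        states = advanced
    return True

# Filling tub: water amount 0..3; a working tub gains one unit per step.
filling = lambda s: [s + 1] if s < 3 else [s]
stuck   = lambda s: [s]                  # e.g., inlet valve stuck closed
low     = lambda s, r: (s <= 1) if r == "low" else (s >= 2)

print(consistent_across_time({0}, filling, ["low", "high"], low))
print(consistent_across_time({0}, stuck,   ["low", "high"], low))
```

The stuck model satisfies the "low" reading forever, yet can never reach "high", so the whole hypothesis is rejected.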
QDOCS's approach is to first test for consistency between individual sets of readings and the QDE by using QSIM's CSP algorithms, and then test to see if the model fits the sequence, i.e., if the sequence of readings corresponds to a behavior generated by QSIM.

For the first step, QDOCS modifies QSIM's constraint satisfier to keep track of mode variables whose values played a role in reducing the set of possible values for a variable. Each variable and constraint is associated with an initially empty dependency set of mode variables. Whenever a constraint causes a variable's set of possible values to decrease, the dependency set of the variable is updated with the union of its old dependency set, the dependency set associated with the constraint, and the mode variable, if any, that is associated with the constraint. When a variable reduces the set of possible tuples associated with the constraint, the constraint's dependency set is similarly updated with the union. When a variable is left with no possible values, its current dependency set is returned as a conflict set.

QDOCS's approach to solving the CSP, based on QSIM's, is to first establish node consistency by ensuring that each constraint is satisfied by the possible values of the variables it acts upon, and then use Waltz filtering (Waltz 1975) to establish arc consistency by propagating the results of the node consistency checker to other variables and constraints. Finally, QDOCS uses backtracking to assign values to variables. The first step above is a standard constraint propagation algorithm as used in traditional diagnostic systems, while the last two steps will be referred to as the constraint satisfaction algorithm of QDOCS. Mode variable dependencies are maintained at each stage of this process so that the procedure can stop if an inconsistency is detected at any step.
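The dependency-set bookkeeping described above can be sketched for a single variable's domain. This is a heavy simplification (real QSIM domains are tuples of qualitative values, and constraints update each other's sets too); all names are illustrative:

```python
# Sketch of dependency tracking during domain reduction: every time a
# constraint shrinks a variable's domain, the mode-variable values that made
# the constraint applicable are unioned into the variable's dependency set,
# so an empty domain immediately yields a conflict set. Illustrative only.

def filter_domain(var, allowed, constraint_deps, domains, deps):
    """Intersect var's domain with `allowed`; on any reduction, union the
    constraint's dependency set into the variable's. Returns a conflict
    set if the domain becomes empty, else None."""
    new = domains[var] & allowed
    if new != domains[var]:
        domains[var] = new
        deps[var] |= constraint_deps
    return deps[var] if not new else None

domains = {"outflow": {"zero", "plus"}}
deps = {"outflow": set()}

# The flow sensor, assumed normal, reads zero ...
assert filter_domain("outflow", {"zero"},
                     {("flowsensor-mode", "normal")}, domains, deps) is None
# ... but a normal drain under positive pressure forces positive outflow:
conflict = filter_domain("outflow", {"plus"},
                         {("drain-mode", "normal")}, domains, deps)
print(sorted(conflict))
```

The returned conflict set names exactly the mode assumptions implicated in emptying the domain, which is what the agenda expansion needs.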
The Waltz filtering step is performed incrementally and at each point selects the most restrictive constraint (i.e., the one most likely to fail) to process and propagates its effect on the rest of the network. This heuristic of first filtering on the most restrictive constraints helps reduce the size of conflict sets, since the most restrictive constraints are those with the least number of initial possible tuples and therefore are more likely to lead to an inconsistency.

For the second part of the algorithm, QDOCS must track a QSIM simulation and match all possible successors at each stage of the simulation with the given sensor readings. Successors that match neither the current set of sensor readings nor the next set of sensor readings in the observed sequence are pruned out. Whenever the computed states corresponding to a particular set of sensor readings fail to have any successors matching the next set of sensor readings, an inconsistency is noted and the entire hypothesis is returned as a conflict.

This last step differs from the general QDOCS approach of trying to isolate the individual mode variable values responsible for an inconsistency. We discovered through our experiments (Subramanian 1995) that keeping track of the dependencies of variable values on mode variables across time was computationally expensive while giving us little benefit, as most conflict sets were still almost as large as the entire hypothesis set. This kind of inconsistency was much rarer than inconsistencies in individual states detected through either the propagation or constraint satisfaction phases of the hypothesis checker.

Experiments

The experiments presented in this section test three primary claims about QDOCS. First, because it can detect inconsistencies and generate conflicts when propagation is blocked, QDOCS is more accurate than an approach that only uses propagation such as INC-DIAGNOSE.
Second, because it uses dependencies and conflict sets to focus diagnosis, QDOCS is more efficient than a baseline generate-and-test approach. Third, each of the phases of QDOCS's hypothesis checking algorithm contributes to improving its accuracy or efficiency.

Experimental Methodology: In each of our domains, the a priori probabilities in the model were used to randomly generate sets of multiple faults. QSIM was used to simulate the model corresponding to these multiple fault hypotheses, and a behavior randomly chosen from the resulting behavior tree was used to test QDOCS.

QDOCS was compared to various different techniques, and for each of these we collected data on the efficiency and accuracy of the methods. First, a generate-and-test method was used as a baseline comparison. This technique used the same hypothesis checker as QDOCS, and simply tests hypotheses generated in most-probable-first order until one or more hypotheses are found to be consistent with the observations. Note that given the fact that QSIM makes acausal inferences, a generate-and-test procedure is the best we can do without using QDOCS-style dependency propagation.

We also compared QDOCS with a number of ablated versions in order to justify all the different parts of the hypothesis checker. First, it was compared against a system that simply used QDOCS's constraint propagation procedure, which is equivalent to INC-DIAGNOSE (enhanced to handle behavioral modes). Another ablated version of QDOCS we test against is one with both the propagation and constraint satisfaction parts of the code but without across-time verification. This test is to determine if the across-time verifier (which is one of the most computationally expensive parts of QDOCS) is worthwhile in improving the accuracy of the system.
Finally, we test a version of QDOCS that used the constraint satisfaction and across-time verification portions of the hypothesis checker but skipped the constraint propagation portion. This comparison was run to verify that the constraint propagation algorithm speeds up the constraint satisfaction process even though the constraint satisfaction and across-time verification algorithms together are just as powerful (in terms of accuracy of diagnoses) as the complete QDOCS hypothesis checker.

On each problem, the tested technique was run until the best remaining hypothesis had a probability of less than a tenth of the probability of the best (i.e., most probable) hypothesis that was found to be consistent with the observations thus far. This would give us a range of all the consistent hypotheses that were within an order of magnitude of each other in a priori probability and would provide a termination condition for the top-level procedure of QDOCS. In each of our domains, we first generated a test suite of 100 examples and ran the above experiments on all of them.

Reaction Control System: The first problem we look at is that of diagnosing faults in the Reaction Control System (RCS) of the Space Shuttle. The RCS is a collection of jets that provides motion control for the orbiter when it is in space. These jets are fired appropriately whenever changes need to be made to the orientation or position of the craft. Detailed descriptions of this problem domain and our approaches to it can be found in (Subramanian 1995).

A QSIM model for this system was first built by (Kay 1992). This model has been extended and modified by us for the purposes of diagnosis. The complete QSIM model contains 135 constraints and 23 components, each with multiple behavioral modes.

Figure 1: Results in the RCS domain
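The order-of-magnitude stopping rule described above can be sketched as follows. The hypotheses, priors, and consistency oracle are hypothetical, not drawn from the paper's data:

```python
# Sketch of the termination condition: keep testing hypotheses in
# most-probable-first order until the best untested one is less than a
# tenth as probable a priori as the most probable consistent one found.
# All names and numbers below are made up for illustration.

def diagnoses_within_order_of_magnitude(hypotheses, is_consistent):
    """`hypotheses`: (prior, name) pairs, sorted most probable first."""
    found, best = [], None
    for p, name in hypotheses:
        if best is not None and p < best / 10.0:
            break                      # outside one order of magnitude
        if is_consistent(name):
            found.append((p, name))
            best = best if best is not None else p
    return found

hyps = [(0.08, "drain blocked"), (0.06, "sensor stuck"),
        (0.005, "both"), (0.004, "valve stuck")]
consistent = {"drain blocked", "both"}
print(diagnoses_within_order_of_magnitude(hyps, consistent.__contains__))
```

Note that the consistent "both" hypothesis is never tested: its prior already falls outside the order-of-magnitude window, which is what bounds the work per problem.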
Some of the kinds of faults modeled include pressure regulators stuck open and closed, leaks in the helium tank, the fuel tank, or the fuel line, and sensors being stuck low or high. Since the actual probabilities of the faults were unknown, they were assigned by us, with normal modes being much more common than the fault modes. As with all QDOCS models, we make the assumption that the faults are independent of each other.

We ran the series of experiments described above on the RCS system. The results are summarized in Figure 1. The first column reports the average number of hypotheses generated per diagnosis problem for each of the tests. The second and third columns show different measures of accuracy for each method, while the last two columns show different measures of efficiency.

For each method we separated out the most probable hypotheses (often more than one if there were a few equally probable hypotheses) and compared these to the correct hypothesis. The percentage of cases where the correct hypothesis was among these is reported in the second column. In many cases, a subset of the correct faults is sufficient to model the given behavior. The third column shows the percentage of cases in which some hypothesis is a subset of the faults of the correct hypothesis. The last two columns show respectively the average time taken for each problem on a Sparc 5 workstation running Lucid Common Lisp and the number of hypotheses the hypothesis checker actually had to test.

When we compare the generate-and-test method (first line) to the complete QDOCS algorithm (last line), we see that they both have identical accuracies - in 77% of the cases the correct solution was among the most probable. This result is as expected since a systematic elimination of hypotheses as in the generate and test method is guaranteed to reach the right hypotheses eventually.
The big difference appears in the average number of hypotheses tested - the generate and test method tests 8.7 times more hypotheses than QDOCS. This shows that QDOCS is able to narrow the search space considerably using its dependency propagation algorithms, but the ratio of run times, which is 2.8 to 1 in favor of QDOCS, indicates that there is a cost to be paid for this. This is still a substantial advantage for QDOCS over the simpler generate and test method.

Figure 1 also shows the results of the ablation tests. We find that using just propagation, or propagation and constraint satisfaction, reduces the accuracy of the method since we are not verifying the hypotheses across time, while leaving out propagation has no effect on accuracy (compared to QDOCS); but the propagation step does speed up the process of finding contradictions and hence the overall computation time.

Another interesting experiment we conducted regarding run time comparisons between QDOCS and the generate and test method was a study of a part of the RCS subsystem consisting of a single propellant flow path to the thruster. The model for this system is almost exactly half the size of the full RCS subsystem model. We generated problems in the same way as for the experiments reported on in Figure 1, and ran 100 problems through the generate and test and QDOCS algorithms. The accuracies were identical (86% correct, 100% subset) between the two methods. However, the run times averaged 264 seconds for the generate and test and 221 for QDOCS. This is a ratio of only 1.2 to 1 even though the ratio of hypotheses tested was 4.4 to 1. The corresponding ratios for the complete system are 2.9 to 1 and 8.7 to 1. This suggests that for similar problems, the larger the problem size, the greater the advantage of using a dependency propagation algorithm like QDOCS to generate conflict sets.

Figure 2: Results in the Level-Controller domain
We therefore expect the advantages of QDOCS to be greater for even larger problem sizes.

Level-Controlled Tank: We studied one other system, a level controller for a reaction tank, taken from a standard control systems textbook (Kuo 1991). The main reason this system is of interest is to show that the QDOCS mode of dependency propagation is useful even for feedback systems. Some researchers (e.g., (Dvorak & Kuipers 1992)) have held that such an algorithm would not be useful in dynamic systems with feedback loops because variable values are usually dependent on all constraints.

The level-controlled tank is modeled using a QSIM model with 45 constraints and a component structure with 14 components. We ran all the experiments described in the methodology section on this model of the controlled tank. The results are summarized in Figure 2. As in the equivalent experiments with the RCS, QDOCS does better than the other techniques. It is about 2.2 times faster and tests 6.3 times fewer hypotheses than the generate and test method. QDOCS is also more accurate than either propagation alone or propagation and constraint satisfaction.

Future Work

This work needs to be further extended and applied to a variety of different engineering systems. One important first step towards applying such a system is to integrate it with a monitoring system such as MIMIC (Dvorak 1992). This would require the use of semi-quantitative information, which is likely to add more power to QDOCS.

One area we have investigated but which could use further research is that of efficient caching of possible values of different variables during the constraint satisfaction phase of the algorithm. Traditional truth maintenance systems like the ATMS (de Kleer 1986) are not useful for this purpose since the range of possible values for a variable is rarely narrowed to a single one.
Initial results on our attempt at implementing a more general caching mechanism are reported in (Subramanian 1995), but these are somewhat discouraging in that the overheads required to maintain the caches in our implementation are often higher than the computational savings. Further investigation will be required to formulate a truly efficient caching scheme.

Related Work

Compared to QDOCS, the previous diagnosis systems for QSIM models all have important limitations. INC-DIAGNOSE (Ng 1990) was an application of Reiter's theory of diagnosis (Reiter 1987) to QSIM models. Its main limitations were that first, like Reiter's theory, it was restricted to models where no fault mode information was known, and second, it used a constraint propagator that was not guaranteed to detect all inconsistencies. Another system that used the INC-DIAGNOSE approach in the context of a monitoring system is DIAMON (Lackinger & Nejdl 1991). Again, due to its dependence on the simple constraint propagation in INC-DIAGNOSE, it is only able to detect a small subset of the possible faults which QDOCS can diagnose.

The other previous diagnosis work on QSIM models, MIMIC (Dvorak 1992), has several limitations. First, MIMIC requires the model builder to provide a structural model of the system in addition to the QSIM constraint model. This structural model was fixed and could not change under different fault models. QDOCS does not require this since it uses a constraint-satisfaction algorithm to determine the causes for inconsistencies. Second, MIMIC uses a very simple dependency tracing algorithm to generate potential single-fault diagnoses. This algorithm looks at the structural graph from the point at which the fault is detected and considers all components it finds upstream as possible candidates for failure, and thus generates a larger set of possible component failures.
A number of other researchers have looked at diagnosis in the context of monitoring continuous systems (Oyeleye, Finch, & Kramer 1990; Doyle & Fayyad 1991). Each of these systems concentrates on different aspects of the monitoring process, but none performs multiple-fault diagnosis using behavioral modes.

Some recent work by Dressler (Dressler 1994) performs model-based diagnosis on a dynamical system (a ballast tank system) using a variant of GDE. It first reduces the model to a version suitable for constraint propagation, and then considers only conflicts generated by constraints acting at a particular time. While this is an efficient technique that apparently works well for their application, it is not a general method since some systems may have faults which can only be detected using information gathered across time.

Conclusion

We have described an architecture for diagnosing systems described by qualitative differential equations that performs multiple-fault diagnosis using behavioral modes. An implemented system, QDOCS, has been shown to be powerful enough to accurately generate diagnoses from qualitative behaviors of a fairly complex system - the Reaction Control System of the Space Shuttle. The approach is more powerful than previous methods in that it uses 1) a general modelling framework (QSIM), 2) a more complete diagnostic architecture, and 3) a more complete constraint-satisfaction algorithm as opposed to simple propagation.

References

Dague, P.; Jehl, O.; Deves, P.; Luciani, P.; and Taillibert, P. 1991. When oscillators stop oscillating. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 1109-1115.
de Kleer, J., and Williams, B. C. 1987. Diagnosing multiple faults. Artificial Intelligence 32:97-130.
de Kleer, J., and Williams, B. C. 1989. Diagnosis with behavioral modes. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 1324-1330.
de Kleer, J.
1986. An assumption-based TMS. Arti- ficial Intelligence 281127-162. Doyle, R. J., and Fayyad, U. M. 1991. Sensor selection techniques in device monitoring. In Proceedings of the Second Annual Conference on AI, Simulation and Planning in High Autonomy Systems, 154-163. IEEE Computer Society Press. Dressler, 0. 1994. Model-based diagnosis on board: Magellan-MT inside. In Fifth International Workshop on Principles of Diagnosis, 87-92. Dvorak, D., and Kuipers, B. 1992. Model-based mon- itoring of dynamic systems. In Hamscher, W.; Con- sole, L.; and de Kleer, J., eds., Readings in Model- Based Diagnosis. San Mateo, CA: Morgan Kaufmann. 249-254. Dvorak, D. 1992. Monitoring and Diagnosis of Continuous Dynamic Systems Using Semiquantita- tive Simulation. Ph.D. Dissertation, University of Texas, Austin, TX. Forbus, K. D. 1984. Qualitative process theory. Ar- tificial Intelligence 24~85-168. Kay, H. 1992. A qualitative model of the space shuttle reaction control system. Technical Report AI92-188, Artificial Intelligence Laboratory, University of Texas, Austin, TX. Kuipers, B. J, 1984. Commonsense reasoning about causality: Deriving behavior from structure. Artificial Intelligence 24:169-203. Kuipers, B. J. 1994. Qualitative Reasoning: Model- ing and Simulation with Incomplete Knowledge. Cam- bridge, MA: MIT Press. Kuo, B. C. 1991. Automatic Control Systems. Engel- wood Cliffs, New Jersey: Prentice Hall. Lackinger, F., and Nejdl, W. 1991. Integrating model-based monitoring and diagnosis of complex dy- namic systems. In Proceedings of the Twelfth Inter- national Joint Conference on Artificial Intelligence, 1123-1128. Ng, H. T. 1990. Model-based, multiple fault diagnosis of time-varying, continuous physical devices. In Pro- ceedings of the Sixth IEEE Conference on Artificial Intelligence Applications, 9-15. Oyeleye, 0.0.; Finch, F. E.; and Kramer, M. A. 1990. Qualitative modeling and fault diagnosis of dynamic processes by Midas. 
Chemical Engineering Commu- nications 96:205-228. Reiter, R. 1987. A theory of diagnosis from first principles. Artificial Intelligence 32:57-95. Shortliffe, E., and Buchanan, B. 1975. A model of inexact reasoning in medicine. Mathematical Bio- sciences 23:351-379. Subramanian, S., and Mooney, R. J. 1995. Multiple- fault diagnosis using qualitative models and fault modes. In IJCAI-95 Workshop on Engineering Prob- lems in Qualitative Reasoning. Subramanian, S. 1995. Qualitative Multiple-Fault Di- agnosis of Continuous Dynamic Systems Using Be- havioral Modes. Ph.D. Dissertation, Department of Computer Science, University of Texas, Austin, TX. Also appears as Artificial Intelligence Lab- oratory Technical Report AI 95-239 and at URL ftp://ftp.cs.utexas.edu/pub/mooney/papers/qdocs- dissertation-95.ps.Z. Waltz, D. 1975. Understanding line drawings of scenes with shadows. In Winston, P. H., ed., The Psychology of Computer Vision. Cambridge, Mass.: McGraw Hill. 19-91. 970 Model-Based Reasoning
A Model-based Reactive Self-Configuring System

Brian C. Williams and P. Pandurang Nayak*
Recom Technologies, NASA Ames Research Center, MS 269-2
Moffett Field, CA 94305 USA
E-mail: {williams,nayak}@ptolemy.arc.nasa.gov

Abstract

This paper describes Livingstone, an implemented kernel for a model-based reactive self-configuring autonomous system. It presents a formal characterization of Livingstone's representation formalism, and reports on our experience with the implementation in a variety of domains. Livingstone provides a reactive system that performs significant deduction in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional feedback controller that generates focused, optimal responses. Livingstone's representation formalism achieves broad coverage of hybrid hardware/software systems by coupling the transition system models underlying concurrent reactive languages with the qualitative representations developed in model-based reasoning. Livingstone automates a wide variety of tasks using a single model and a single core algorithm, thus making significant progress towards achieving a central goal of model-based reasoning. Livingstone, together with the HSTS planning and scheduling engine and the RAPS executive, has been selected as part of the core autonomy architecture for NASA's first New Millennium spacecraft.

Introduction and Desiderata

NASA has put forth the challenge of establishing a "virtual presence" in space through a fleet of intelligent space probes that autonomously explore the nooks and crannies of the solar system. This "presence" is to be established at an Apollo-era pace, with software for the first probe to be completed in 1997 and the probe (Deep Space 1) to be launched in 1998. The final pressure, low cost, is of an equal magnitude.
Together this poses an extraordinary challenge and opportunity for AI. To achieve robustness during years in the harsh environs of space, the spacecraft will need to radically reconfigure itself in response to failures, and then navigate around these failures during its remaining days. To achieve low cost and fast deployment, one-of-a-kind space probes will need to be plugged together quickly, using component-based models wherever possible to automatically generate flight software. Finally, the space of failure scenarios and associated responses will be far too large to use software that requires pre-launch enumeration of all contingencies. Instead, the spacecraft will have to reactively think through the consequences of its reconfiguration options.

We made substantial progress on each of these fronts through a system called Livingstone, an implemented kernel for a model-based reactive self-configuring autonomous system. This paper presents a formalization of the reactive, model-based configuration manager underlying Livingstone. Several contributions are key. First, the approach unifies the dichotomy within AI between deduction and reactivity (Agre & Chapman 1987; Brooks 1991). We achieve a reactive system that performs significant deduction in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional feedback controller that generates focused, optimal responses. Second, our modeling formalism represents a radical shift from first order logic, traditionally used to characterize model-based diagnostic systems. It achieves broad coverage of hybrid hardware/software systems by coupling the transition system models underlying concurrent reactive languages (Manna & Pnueli 1992) with the qualitative representations developed in model-based reasoning. Reactivity is respected by restricting the model to concurrent propositional transition systems that are synchronous. Third, the long held vision of model-based reasoning has been to use a single central model to support a diversity of engineering tasks. For model-based autonomous systems this means using a single model to support a variety of execution tasks including tracking planner goals, confirming hardware modes, reconfiguring hardware, detecting anomalies, isolating faults, diagnosis, fault recovery, and safing. Livingstone automates all these tasks using a single model and a single core algorithm, thus making significant progress towards achieving the model-based vision.

Livingstone, integrated with the HSTS planning and scheduling system (Muscettola 1994) and the RAPS executive (Firby 1995), was demonstrated to successfully navigate the simulated NewMaap spacecraft into Saturn orbit during its one hour insertion window, despite about half a dozen failures. Consequently, Livingstone, RAPS, and HSTS have been selected to fly Deep Space 1, forming the core autonomy architecture of NASA's New Millennium program. In this architecture (Pell et al. 1996) HSTS translates high-level goals into partially-ordered tokens on resource timelines. RAPS executes planner tokens by translating them into low-level spacecraft commands while enforcing temporal constraints between tokens. Livingstone tracks spacecraft state and planner tokens, and reconfigures for failed tokens.

The rest of the paper is organized as follows. In the next section we introduce the spacecraft domain and the problem of configuration management. We then introduce transition systems, the key formalism for modeling hybrid concurrent systems, and a formalization of configuration management. Next, we discuss model-based configuration management and its key components: mode identification and mode reconfiguration. We then introduce algorithms for statistically optimal model-based configuration management using conflict-directed best-first search, followed by an empirical evaluation of Livingstone.

*Authors listed in reverse alphabetical order.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Figure 1: Engine schematic. Valves in solid black are closed, while the others are open.

Example: Autonomous Space Exploration

Figure 1 shows an idealized schematic of the main engine subsystem of Cassini, the most complex spacecraft built to date. It consists of a helium tank, a fuel tank, an oxidizer tank, a pair of main engines, regulators, latch valves, pyro valves, and pipes. The helium tank pressurizes the two propellant tanks, with the regulators acting to reduce the high helium pressure to a lower working pressure. When propellant paths to a main engine are open, the pressurized tanks force fuel and oxidizer into the main engine, where they combine and spontaneously ignite, producing thrust. The pyro valves can be fired exactly once, i.e., they can change state exactly once, either from open to closed or vice versa. Their function is to isolate parts of the main engine subsystem until needed, or to isolate failed parts. The latch valves are controlled using valve drivers (not shown), and the accelerometer (Acc) senses the thrust generated by the main engines.

Starting from the configuration shown in the figure, the high-level goal of producing thrust can be achieved using a variety of different configurations: thrust can be provided by either main engine, and there are a number of ways of opening propellant paths to either main engine.
For example, thrust can be provided by opening the latch valves leading to the engine on the left, or by firing a pair of pyros and opening a set of latch valves leading to the engine on the right. Other configurations correspond to various combinations of pyro firings. The different configurations have different characteristics, since pyro firings are irreversible actions and since firing pyro valves requires significantly more power than opening or closing latch valves.

A configuration manager constantly attempts to move the spacecraft into lowest cost configurations that achieve a set of high-level, dynamically changing goals. When the spacecraft strays from the chosen configuration due to failures, the configuration manager analyzes sensor data to identify the current configuration of the spacecraft, and then moves the spacecraft to a new configuration which, once again, achieves the desired configuration goals. In this sense a configuration manager is a discrete control system that ensures that the spacecraft's configuration always achieves the set point defined by the configuration goals.

Suppose that the main engine subsystem has been configured to provide thrust from the left main engine by opening the latch valves leading to it. Suppose that this engine fails, e.g., by overheating, so that it fails to provide the desired thrust. To ensure that the desired thrust is provided even in this situation, the spacecraft must be transitioned to a new configuration in which thrust is now provided by the main engine on the right. Ideally, this is achieved by firing the two pyro valves leading to the right side, and opening the remaining latch valves (rather than firing additional pyro valves).

Models of Concurrent Processes

Reasoning about a system's configurations and autonomous reconfiguration requires the concepts of operating and failure modes, repairable failures, and configuration changes.
These concepts can be expressed in a state diagram: repairable failures are transitions from a failure state to a nominal state; configuration changes are transitions between nominal states; and failures are transitions from a nominal to a failure state.

Selecting a restricted, but adequately expressive, formalism for describing the configurations of a hybrid hardware/software system is essential to achieving the competing goals of reactivity and expressivity. First-order formulations, though expressive, are overly general and do not lend themselves to efficient reasoning. Propositional formulations lend themselves to efficient reasoning, but are inadequate for representing concepts such as state change. Hence, we use a concurrent transition system formulation and a temporal logic specification (Manna & Pnueli 1992) as a starting point for modeling hardware and software. Components operate concurrently, communicating over "wires," and hence can be modeled as concurrent communicating transition systems. Likewise, for software routines, a broad class of reactive languages can be represented naturally as concurrent transition systems communicating through shared variables.

Where our model differs from that of Manna & Pnueli is that reactive software procedurally modifies its state through explicit variable assignments. On the other hand, a hardware component's behavior in a state is governed by a set of discrete and continuous declarative constraints. These constraints can be computationally expensive to reason about in all their detail. However, experience applying qualitative modeling to diagnostic tasks for digital systems, copiers, and spacecraft propulsion suggests that simple qualitative representations over small finite domains are quite adequate for modeling continuous and discrete systems. The added advantage of using qualitative models is that they are extremely robust to changes in the details of the underlying model.
Hence behaviors within states are represented by constraints over finite domains, and are encoded as propositional formulae which can be reasoned with efficiently.

Other authors such as (Kuipers & Astrom 1994; Nerode & Kohn 1993; Poole 1995; Zhang & Mackworth 1995) have been developing formal methods for representing and reasoning about reactive autonomous systems. The major difference between their work and ours is our focus on fast reactive inference using propositional encodings over finite domains.

Transition systems

We model a concurrent process as a transition system. Intuitively, a transition system consists of a set of state variables defining the system's state space and a set of transitions between the states in the state space.

Definition 1 A transition system S is a tuple (Π, Σ, T), where
• Π is a finite set of state variables. Each state variable ranges over a finite domain.
• Σ is the feasible subset of the state space. Each state in the state space assigns to each variable in Π a value from its domain.
• T is a finite set of transitions between states. Each transition τ ∈ T is a function τ : Σ → 2^Σ representing a state transforming action, where τ(s) denotes the set of possible states obtained by applying transition τ in state s.

A trajectory for S is a sequence of feasible states s₀, s₁, … such that for all i ≥ 0, s_{i+1} ∈ τ(s_i) for some τ ∈ T. In this paper we assume that one of the transitions of S, called τ_n, is designated the nominal transition, with all other transitions being failure transitions. Hence in any state a component may nondeterministically choose to perform either its nominal transition, corresponding to correct functioning, or a failure transition, corresponding to a component failure. Furthermore, in response to a successful repair action, the nominal transition will move the system from a failure state to a nominal state.

A transition system S = (Π, Σ, T) is specified using a propositional temporal logic.
Such specifications are built using state formulae and the ◯ operator. A state formula is an ordinary propositional formula in which all propositions are of the form y_k = e_k, where y_k is a state variable and e_k is an element of y_k's domain. ◯ is the next operator of temporal logic, denoting truth in the next state in a trajectory. A state s defines a truth assignment in the natural way: proposition y_k = e_k is true iff the value of y_k is e_k in s. A state s satisfies a state formula φ precisely when the truth assignment corresponding to s satisfies φ. The set of states characterized by a state formula φ is the set of all states that satisfy φ. Hence, we specify the set of feasible states of S by a state formula P_S. A transition τ is specified by a formula P_τ, which is a conjunction of formulae P_{τ_i} of the form φ_i ⇒ ◯ψ_i, where φ_i and ψ_i are state formulae. A feasible state s_k can follow a feasible state s_j in a trajectory of S using transition τ iff for all formulae P_{τ_i}, if s_j satisfies the antecedent of P_{τ_i}, then s_k satisfies the consequent of P_{τ_i}. A transition τ_i that models a formula P_{τ_i} is called a subtransition. Hence taking a transition τ corresponds to taking all its subtransitions τ_i. Note that this specification only adds the ◯ operator to standard propositional logic. This severely constrained use of temporal logic is an essential property that allows us to perform deductions reactively.

Example 1 The transition system corresponding to a valve driver consists of 3 state variables {mode, cmdin, cmdout}, where mode represents the driver's mode (on, off, resettable, or failed), cmdin represents commands to the driver and its associated valve (on, off, reset, open, close, none), and cmdout represents the commands output to its valve (open, close, or none).
The feasible states of the driver are specified by the formulae

  mode = on ⇒ (cmdin = open ⇒ cmdout = open) ∧
              (cmdin = close ⇒ cmdout = close) ∧
              ((cmdin ≠ open ∧ cmdin ≠ close) ⇒ cmdout = none)
  mode = off ⇒ cmdout = none

together with formulae like (mode ≠ on) ∨ (mode ≠ off) that assert that variables have unique values. The driver's nominal transition is specified by the following set of formulae:

  ((mode = on) ∨ (mode = off)) ∧ (cmdin = off) ⇒ ◯(mode = off)
  ((mode = on) ∨ (mode = off)) ∧ (cmdin = on) ⇒ ◯(mode = on)
  (mode ≠ failed) ∧ (cmdin = reset) ⇒ ◯(mode = on)
  (mode = resettable) ∧ (cmdin ≠ reset) ⇒ ◯(mode = resettable)
  (mode = failed) ⇒ ◯(mode = failed)

The driver also has two failure transitions, specified by the formulae ◯(mode = failed) and ◯(mode = resettable), respectively.

Configuration management

We view an autonomous system as a combination of a high-level planner and a reactive configuration manager that controls a plant (Figure 2). The planner generates a sequence of hardware configuration goals. The configuration manager evolves the plant transition system along the desired trajectory. The combination of a transition system and a configuration manager is called a configuration system. More precisely,

Definition 2 A configuration system is a tuple (S, σ₀, g), where S is a transition system, σ₀ is a feasible state of S representing its initial state, and g : g₀, g₁, … is a sequence of state formulae called goal configurations. A configuration system generates a configuration trajectory s₀, s₁, … for S such that s₀ is σ₀ and each s_{i+1} either satisfies g_i or results from a failure transition τ.

Configuration management is achieved by sensing and controlling the state of a transition system. The state of a transition system is (partially) observable through a set of variables O ⊆ Π. The next state of a transition system can be controlled through an exogenous set of variables μ ⊆ Π.
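To make Example 1 concrete, the driver's feasible-state formulae and nominal transition can be encoded directly as predicates over value assignments. This is a sketch in Python of our own devising (the dict/function encoding and all names are illustrative assumptions, not the paper's implementation):

```python
# Hypothetical encoding of the valve-driver component from Example 1.
# Variable names (mode, cmdin, cmdout) follow the paper; representing a
# state as a dict and formulae as Python conditionals is our own choice.

def feasible(state):
    """Check the driver's feasible-state formulae."""
    mode, cmdin, cmdout = state["mode"], state["cmdin"], state["cmdout"]
    if mode == "on":
        if cmdin == "open" and cmdout != "open":
            return False
        if cmdin == "close" and cmdout != "close":
            return False
        if cmdin not in ("open", "close") and cmdout != "none":
            return False
    if mode == "off" and cmdout != "none":
        return False
    return True

def nominal_next_mode(mode, cmdin):
    """Apply the nominal transition's subtransition formulae
    (antecedent => next-state consequent) to the driver's mode.
    Returns None when no subtransition constrains the next mode."""
    if mode in ("on", "off") and cmdin == "off":
        return "off"
    if mode in ("on", "off") and cmdin == "on":
        return "on"
    if mode != "failed" and cmdin == "reset":
        return "on"
    if mode == "resettable" and cmdin != "reset":
        return "resettable"
    if mode == "failed":
        return "failed"
    return None
```

For instance, a resettable driver that receives a reset command returns to the on mode, while a failed driver stays failed regardless of the command, mirroring the formulae above.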
We assume that μ are exogenous, so that the transitions of the system do not determine the values of variables in μ. We also assume that the values of O in a given state are independent of the values of μ at that state, though they may depend on the values of μ at the previous state.

Definition 3 A configuration manager C for a transition system S is an online controller that takes as input an initial state, a sequence of goal configurations, and a sequence of values for sensed variables O, and incrementally generates a sequence of values for control variables μ such that the combination of C and S is a configuration system.

A model-based configuration manager is a configuration manager that uses a specification of the transition system to compute the desired sequence of control values. We discuss this in detail shortly.

Plant transition system

We model a plant as a transition system composed of a set of concurrent component transition systems that communicate through shared variables. The component transition systems of a plant operate synchronously, that is, at each plant transition every component performs a state transition. The motivation for imposing synchrony is given in the next section. We require the plant's specification to be composed out of its components' specifications as follows:

Definition 4 A plant transition system S = (Π, Σ, T) composed of a set CD of component transition systems is a transition system such that:
• The set of state variables of each transition system in CD is a subset of Π. The plant transition system may introduce additional variables not in any of its component transition systems.
• Each state in Σ, when restricted to the appropriate subset of variables, is feasible for each transition system in CD, i.e., for each C ∈ CD, P_S ⊨ P_C, though P_S can be stronger than the conjunction of the P_C.
• Each transition τ ∈ T performs one transition τ_C for each transition system C ∈ CD.
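Definition 4's synchronous composition can be sketched by enumerating one transition choice per component and applying all choices to the same previous state. This is our own illustrative encoding (components as lists of transition functions), not the paper's:

```python
from itertools import product

# Hypothetical sketch of Definition 4: each component is a list of
# transition functions (by convention index 0 = nominal, the rest =
# failures), each mapping the previous global state (a dict) to the
# assignments it makes to that component's variables. A plant transition
# picks one transition per component and applies them synchronously:
# every function reads the same previous state.

def plant_transitions(components, state):
    """Enumerate all plant successor states: every combination of one
    transition per component, applied synchronously."""
    successors = []
    for choice in product(*components):   # one transition per component
        next_state = dict(state)
        for component_step in choice:
            next_state.update(component_step(state))
        successors.append(next_state)
    return successors
```

Because every component function reads the previous state, a component that copies a shared variable (a "wire") sees its value before the neighboring component changes it, which is exactly the synchronous-step semantics the definition requires.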
The concept of synchronous, concurrent actions is captured by requiring that each component performs a transition for each state change. Nondeterminism lies in the fact that each component can traverse either its nominal transition or any of its failure transitions. The nominal transition of a plant performs the nominal transition for each of its components. Multiple simultaneous failures correspond to traversing multiple component failure transitions.

Returning to the example, each hardware component in Figure 1 is modeled by a component transition system. Component communication, denoted by wires in the figure, is modeled by shared variables between the corresponding component transition systems.

Model-based configuration management

We now introduce configuration managers that make extensive use of a model to infer a plant's current state and to select optimal control actions to meet configuration goals. This is essential in situations where mistakes may lead to disaster, ruling out simple trial-and-error approaches. A model-based configuration manager uses a plant transition model M to determine the desired control sequence in two stages: mode identification (MI) and mode reconfiguration (MR). MI incrementally generates the set of all plant trajectories consistent with the plant transition model and the sequence of plant control and sensed values. MR uses a plant transition model and the partial trajectories generated by MI up to the current state to determine a set of control values such that all predicted trajectories achieve the configuration goal in the next state.

Figure 2: Model-based configuration management

Both MI and MR are reactive. MI infers the current state from knowledge of the previous state and observations within the current state. MR only considers actions that achieve the configuration goal within the next state. Given these commitments, the decision to model component transitions as synchronous is key. An alternative is to model multiple transitions through interleaving. This, however, places an arbitrary distance between the current state and the state in which the goal is achieved, defeating a desire to limit inference to a small number of states. Hence we use an abstraction in which component transitions occur synchronously, even though the underlying hardware may interleave the transitions. The abstraction is correct if different interleavings produce the same final result.

We now formally characterize MI and MR. Recall that taking a transition τ_i corresponds to taking all its subtransitions τ_{ij}. A transition τ_i can be defined to apply over a set of states S in the natural way:

  τ_i(S) = ∪_{s ∈ S} τ_i(s)

Similarly we define τ_{ij}(S) for each subtransition τ_{ij} of τ_i. We can show that

  τ_i(S) ⊆ ∩_j τ_{ij}(S)    (1)

In the following, S_i is the set of possible states at time i before any control values are asserted by MR, μ_i is the control values asserted at time i, O_i is the observations at time i, and S_{μ_i} and S_{O_i} are the sets of states in which the control and sensed variables have the values specified in μ_i and O_i, respectively. Hence, S_i ∩ S_{μ_i} is the set of possible states at time i. We characterize both MI and MR in two ways: first model theoretically and then using state formulae.

Mode Identification

MI incrementally generates the sequence S_0, S_1, … using a model of the transitions and knowledge of the control actions μ_i as follows:

  S_0 = {σ₀}    (2)
  S_{i+1} = [∪_{τ_j ∈ T} τ_j(S_i ∩ S_{μ_i})] ∩ S_{O_{i+1}}    (3)
         ⊆ [∪_{τ_j ∈ T} ∩_k τ_{jk}(S_i ∩ S_{μ_i})] ∩ S_{O_{i+1}}    (4)

where the final inclusion follows from Equation 1. Equation 4 is useful because it is a characterization of S_{i+1} in terms of the subtransitions τ_{jk}. This allows us to develop the following characterization of S_{i+1} in terms of state formulae (writing each subtransition τ_{jk} as φ_{jk} ⇒ ◯ψ_{jk}):

  P_{S_{i+1}} = P_{O_{i+1}} ∧ ∨_{τ_j ∈ T} ∧ { ψ_{jk} | P_{S_i} ∧ P_{μ_i} ⊨ φ_{jk} }    (5)

This is a sound but potentially incomplete characterization of S_{i+1}, i.e., every state in S_{i+1} satisfies P_{S_{i+1}}, but not all states that satisfy P_{S_{i+1}} are necessarily in S_{i+1}. However, generating P_{S_{i+1}} requires only that the entailment of the antecedent of each subtransition be checked. On the other hand, generating a complete characterization based on Equation 3 would require enumerating all the states in S_i, which can be computationally expensive if S_i contains many states.

Mode Reconfiguration

MR incrementally generates the next set of control values μ_i using a model of the nominal transition τ_n, the desired goal configuration g_i, and the current set of possible states S_i. The model-theoretic characterization of M_i, the set of possible control actions that MR can take at time i, is as follows:

  M_i = { μ | τ_n(S_i ∩ S_μ) ⊆ S_{g_i} }    (6)
     ⊇ { μ | ∩_k τ_{nk}(S_i ∩ S_μ) ⊆ S_{g_i} }    (7)

where, once again, the latter inclusion follows from Equation 1. As with MI, this weaker characterization of M_i is useful because it is in terms of the subtransitions τ_{nk}. This allows us to develop the following characterization of M_i in terms of state formulae:

  M_i ⊇ { μ_i | P_{S_i} ∧ P_{μ_i} is consistent, and ∧ { ψ_{nk} | P_{S_i} ∧ P_{μ_i} ⊨ φ_{nk} } ⊨ g_i }    (8)

The first part says that the control actions must be consistent with the current state, since without this condition the goals can be simply achieved by making the world inconsistent. Equation 8 is a sound but potentially incomplete characterization of the set of control actions in M_i, i.e., every control action that satisfies the condition on the right hand side is in M_i, but not necessarily vice versa. However, checking whether a given μ_i is an adequate control action only requires that the entailment of the antecedent of each subtransition be checked. On the other hand, generating a complete characterization based on Equation 6 would require enumerating all the states in S_i, which can be computationally expensive if S_i contains many states. If M_i is empty, no actions achieve the required goal. The planner then initiates replanning to dynamically change the sequence of configuration goals.
Statistically optimal configuration management

The previous section characterized the set of all feasible trajectories and control actions. However, in practice, not all such trajectories and control actions need to be generated. Rather, just the likely trajectories and an optimal control action need to be generated. We efficiently generate these by recasting MI and MR as combinatorial optimization problems.

A combinatorial optimization problem is a tuple (X, C, f), where X is a finite set of variables with finite domains, C is a set of constraints over X, and f is an objective function. A feasible solution is an assignment to each variable in X of a value from its domain such that all constraints in C are satisfied. The problem is to find one or more of the leading feasible solutions, i.e., to generate a prefix of the sequence of feasible solutions ordered in decreasing order of f.

Mode Identification

Equation 3 characterizes the trajectory generation problem as identifying the set of all transitions from the previous state that yield current states consistent with the current observations. Recall that a transition system has one nominal transition and a set of failure transitions. In any state, the transition system non-deterministically selects exactly one of these transitions to evolve to the next state. We quantify this non-deterministic choice by associating a probability with each transition: p(τ) is the probability that the plant selects transition τ.¹

With this viewpoint, we recast MI's task to be one of identifying the likely trajectories of the plant. In keeping with the reactive nature of configuration management, MI incrementally tracks likely trajectories by extending the current set of trajectories by the likely transitions. The only change required in Equation 5 is that, rather than the disjunct ranging over all transitions τ_j, it ranges over the subset of likely transitions.
The likelihood of a transition is its posterior probability p(τ|O_i). This posterior is estimated in the standard way using Bayes Rule:

  p(τ|O_i) = p(O_i|τ) p(τ) / p(O_i)

If τ(S_{i-1}) and O_i are disjoint sets then clearly p(O_i|τ) = 0. Similarly, if τ(S_{i-1}) ⊆ O_i then O_i is entailed and p(O_i|τ) = 1, and hence the posterior probability of τ is proportional to the prior. If neither of the above two situations arises then p(O_i|τ) ≤ 1. Estimating this probability is difficult and requires more research, but see (de Kleer & Williams 1987).

¹We make the simplifying assumption that the probability of a transition is independent of the current state.

Finally, to view MI as a combinatorial optimization problem, recall that each plant transition consists of a single transition for each of its components. We introduce a variable into X for each component in the plant, whose values are the possible component transitions. Each plant transition corresponds to an assignment of values to variables in X. C is the constraint that the states resulting from taking a plant transition are consistent with the observed values. The objective function f is the probability of a plant transition. The resulting combinatorial optimization problem hence identifies the leading transitions at each state, allowing MI to track the set of likely trajectories.

Mode reconfiguration

Equation 6 characterizes the reconfiguration problem as one of identifying a control action that ensures that the result of taking the nominal transition yields states in which the configuration goal is satisfied. Recasting MR as a combinatorial optimization problem is straightforward. The variables X are just the control variables μ with identical domains. C is the constraint in Equation 8 that μ_i must satisfy to be in M_i. Finally, as noted earlier, different control actions can have different costs that reflect differing resource requirements. We take f to be the negative of the cost of a control action.
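The two clean likelihood cases above (p(O_i|τ) = 0 and p(O_i|τ) = 1) can be sketched as a small scoring helper. This is our own illustration; since only the ranking of transitions matters, unnormalized posteriors suffice, and the intermediate case is filled with an explicitly labeled placeholder:

```python
# Hypothetical sketch of ranking transitions by posterior p(tau | O_i).
# predicted: states reachable via transition tau from S_{i-1};
# obs_states: states consistent with the observations O_i.

def unnormalized_posterior(predicted, obs_states, prior):
    if predicted.isdisjoint(obs_states):
        return 0.0            # p(O_i | tau) = 0: observation refutes tau
    if predicted <= obs_states:
        return prior          # p(O_i | tau) = 1: posterior ~ prior
    # Otherwise p(O_i | tau) <= 1; the paper notes estimating it is an
    # open problem. The factor 0.5 here is an arbitrary placeholder.
    return 0.5 * prior

def rank_transitions(candidates):
    """candidates: list of (name, predicted, obs_states, prior).
    Returns transition names with nonzero score, most likely first."""
    scored = [(unnormalized_posterior(p, o, pr), name)
              for name, p, o, pr in candidates]
    return [name for score, name in sorted(scored, reverse=True)
            if score > 0]
```

With a high-prior nominal transition and low-prior failure transitions, the ranking reproduces the intended behavior: failures are only promoted when the nominal transition's predictions are refuted by the observations.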
The resulting combinatorial optimization problem hence identifies the lowest cost control action that achieves the goal configuration in the next state.

Conflict-directed best first search

We solve the above combinatorial optimization problems using a conflict-directed best-first search, similar in spirit to (de Kleer & Williams 1989; Dressler & Struss 1994). A conflict is a partial solution such that any solution containing the conflict is guaranteed to be infeasible. Hence, a single conflict can rule out the feasibility of a large number of solutions, thereby focusing the search. Conflicts are generated while checking to see if a solution Xᵢ satisfies the constraints C.

Our conflict-directed best-first search algorithm, CBFS, is shown in Figure 3. It has two major components: (a) an agenda that holds unprocessed solutions in decreasing order of f; and (b) a procedure to generate the immediate successors of a solution. The main loop removes the first solution from the agenda, checks its feasibility, and adds the solution's immediate successors to the agenda. When a solution Xᵢ is infeasible, we assume that the process of checking the constraints C returns a part of Xᵢ as a conflict Nᵢ. We focus the search by generating only those immediate successors of Xᵢ that are not subsumed by Nᵢ, i.e., that do not agree with Nᵢ on all variables.
function CBFS(X, C, f)
  Agenda = {best-solution(X)}; Result = ∅;
  while Agenda is not empty do
    Soln = pop(Agenda);
    if Soln satisfies C then
      Add Soln to Result;
      if enough solutions have been found then
        return Result;
      else Succs = immediate successors of Soln;
    else
      Conf = a conflict that subsumes Soln;
      Succs = immediate successors of Soln not subsumed by Conf;
    endif
    Insert each solution in Succs into Agenda in decreasing f order;
  endwhile
  return Result;
end CBFS

Figure 3: Conflict-directed best-first search algorithm for combinatorial optimization

976 Model-Based Reasoning

Intuitively, solution Xⱼ is an immediate successor of solution Xᵢ only if f(Xᵢ) ≥ f(Xⱼ) and Xᵢ and Xⱼ differ only in the value assigned to a single variable (ties are broken consistently to prevent loops in the successor graph). One can show that this definition of the immediate successors of a solution suffices to prove the correctness of CBFS, i.e., to show that all feasible solutions are generated in decreasing order of f.

Our implemented algorithm further refines the notion of an immediate successor. The major benefit of this refinement is that each time a solution is removed from the agenda, at most two new solutions are added, so that the size of the agenda is bounded by the total number of solutions that have been checked for feasibility, thus preserving reactivity (details are beyond the scope of this paper). For MI, we use full propositional satisfiability to check C (transition consistency). Interestingly, reactivity is preserved since the causal nature of a plant's state constraints means that full satisfiability requires little search. For MR, we preserve reactivity by using unit propagation to check C (entailment of goals), reflecting the fact that entailment is usually harder than satisfiability. Finally, note that CBFS does not require minimal conflicts.
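A runnable sketch of CBFS in the spirit of Figure 3 follows. It is a simplification, not the paper's implementation: successors change one variable to its next value, the agenda-bounding refinement is omitted, and it assumes each domain is sorted best-first with f monotone in each variable so that successors never beat their parent. The toy instance and the conflict it returns are invented.

```python
import heapq

def cbfs(domains, check, f, k):
    """Conflict-directed best-first search sketch (cf. Figure 3).
    `domains` maps each variable to values sorted best-first;
    `check(soln)` returns (True, None) if soln satisfies C, else
    (False, conflict) where `conflict` is a partial assignment that
    no feasible solution can contain."""
    variables = list(domains)
    to_soln = lambda idx: {v: domains[v][i] for v, i in zip(variables, idx)}
    root = (0,) * len(variables)
    agenda = [(-f(to_soln(root)), root)]   # max-heap on f via negation
    seen, result = {root}, []
    while agenda and len(result) < k:
        _, idx = heapq.heappop(agenda)
        soln = to_soln(idx)
        feasible, conflict = check(soln)
        if feasible:
            result.append(soln)
            conflict = None
        for j, v in enumerate(variables):
            if idx[j] + 1 >= len(domains[v]):
                continue
            succ = idx[:j] + (idx[j] + 1,) + idx[j + 1:]
            s = to_soln(succ)
            # conflict-directed focusing: skip successors subsumed by
            # the conflict (i.e., agreeing with it on all its variables)
            if conflict and all(s[u] == val for u, val in conflict.items()):
                continue
            if succ not in seen:
                seen.add(succ)
                heapq.heappush(agenda, (-f(s), succ))
    return result

# Invented instance: values are themselves probabilities, f is their
# product, and any solution with a = 0.9 is infeasible (the conflict).
domains = {"a": [0.9, 0.1], "b": [0.8, 0.2]}
check = lambda s: (True, None) if s["a"] != 0.9 else (False, {"a": 0.9})
f = lambda s: s["a"] * s["b"]
best = cbfs(domains, check, f, k=2)
```

Note how the single conflict {"a": 0.9} prunes the whole half of the space where a keeps its best value, so only three candidates are ever examined.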
Empirically, the first conflict found by the constraint checker provides enough focusing, so that the extra effort to find minimal conflicts is unnecessary.

Implementation and experiments

We have implemented Livingstone based on the ideas described in this paper. Livingstone was part of a rapid prototyping demonstration of an autonomous architecture for spacecraft control, together with the HSTS planning/scheduling engine and the RAPS executive (Pell et al. 1996). In this architecture, RAPS further decomposes and orders HSTS output before handing goals to Livingstone. To evaluate the architecture, spacecraft engineers at JPL defined the Newmaap spacecraft and scenario. The Newmaap spacecraft is a scaled down version of the Cassini spacecraft that retains the most challenging aspects of spacecraft control. The Newmaap scenario was based on the most complex mission phase of the Cassini spacecraft: successful insertion into Saturn's orbit even in the event of any single point of failure. Table 1 provides summary information about Livingstone's model of the Newmaap spacecraft, demonstrating its complexity.

Table 1: Newmaap spacecraft model properties

Table 2: Results from the seven Newmaap failure recovery scenarios (for each failure scenario: solutions checked, solutions accepted, and time for MI; solutions checked and time for MR)

The Newmaap scenario included seven failure scenarios. From Livingstone's viewpoint, each scenario required identifying the failure transitions using MI and deciding on a set of control actions to recover from the failure using MR. Table 2 shows the results of running Livingstone on these scenarios. The first column names each of the scenarios; a discussion of the details of these scenarios is beyond the scope of this paper. The second and fifth columns show the number of solutions checked by algorithm CBFS when applied to MI and MR, respectively. One can see that even though the spacecraft model is large, the use of conflicts dramatically focuses the search.
The third column shows the number of leading trajectory extensions identified by MI. The limited sensing available on the Newmaap spacecraft often makes it impossible to identify unique trajectories. This is generally true on spacecraft, since adding sensors increases spacecraft weight. The fourth and sixth columns show the time in seconds on a Sparc 5 spent by MI and MR on each scenario, once again demonstrating the efficiency of our approach.

Livingstone's MI component was also tested on ten combinational circuits from a standard test suite (Brglez & Fujiwara 1985). Each component in these circuits was assumed to be in one of four modes: ok, stuck-at-1, stuck-at-0, unknown. The probability of transitioning to the stuck-at modes was set at 0.099 and to the unknown mode was set at 0.002. We ran 20 experiments on each circuit using a random fault and a random input vector sensitive to this fault. MI stopped generating trajectories after either 10 leading trajectories had been generated, or when the next trajectory was 100 times more unlikely than the most likely trajectory. Table 3 shows the results of our experiments. The columns are self-explanatory, except that the time is the number of seconds on a Sparc 2. Note once again the power of conflict-directed search to dramatically focus search. Interestingly, these results are comparable to the results from the very best ATMS-based implementations, even though Livingstone uses no ATMS.

Table 3: Testing MI on a standard suite of circuits (for each device, the number of components, number of clauses, solutions checked, and time; only the first row survived scanning: c17, 6 components, 18 clauses, 18 checked, 0.1 s)
Furthermore, initial experiments with a partial LTMS have demonstrated an order of magnitude speed-up.

Livingstone is also being applied to the autonomous real-time control of a scientific instrument called a Bioreactor. This project is still underway, and final results are forthcoming. More excitingly, the success of the Newmaap demonstration has launched Livingstone to new heights: Livingstone, together with HSTS and RAPS, is going to be part of the flight software of the first New Millennium mission, called Deep Space One, to be launched in 1998. We expect final delivery of Livingstone to this project in 1997.

Conclusions

In this paper we introduced Livingstone, a reactive, model-based self-configuring system, which provides a kernel for model-based autonomy. It represents an important step toward our goal of developing a fully model-based autonomous system (Williams 1996). Three technical features of Livingstone are particularly worth highlighting. First, our modeling formalism achieves broad coverage of hybrid hardware/software systems by coupling the transition system models underlying concurrent reactive languages (Manna & Pnueli 1992) with the qualitative representations developed in model-based reasoning. Second, we achieve a reactive system that performs significant deduction in the sense/response loop by using propositional transition systems, qualitative models, and synchronous component transitions. The interesting and important result of Newmaap, Deep Space One, and the Bioreactor is that Livingstone's models and restricted inference are still expressive enough to solve important problems in a diverse set of domains. Third, Livingstone casts mode identification and mode reconfiguration as combinatorial optimization problems, and uses a core conflict-directed best-first search to solve them. The ubiquity of combinatorial optimization problems and the power of conflict-directed search are central themes in Livingstone.

Livingstone, the HSTS planning/scheduling system, and the RAPS executive have been selected to form the core autonomy architecture of Deep Space One, the first flight of NASA's New Millennium program.

Acknowledgements: We would like to thank Nicola Muscettola and Barney Pell for valuable discussions and comments on the paper.
References

Agre, P., and Chapman, D. 1987. Pengi: An implementation of a theory of activity. In Procs. of AAAI-87.
Brglez, F., and Fujiwara, H. 1985. A neutral netlist of 10 combinational benchmark circuits. In Int. Symp. on Circuits and Systems.
Brooks, R. A. 1991. Intelligence without reason. In Procs. of IJCAI-91, 569-595.
de Kleer, J., and Williams, B. C. 1987. Diagnosing multiple faults. Artificial Intelligence 32(1):97-130.
de Kleer, J., and Williams, B. C. 1989. Diagnosis with behavioral modes. In Procs. of IJCAI-89, 1324-1330.
Dressler, O., and Struss, P. 1994. Model-based diagnosis with the default-based diagnosis engine: Effective control strategies that work in practice. In Procs. of ECAI-94.
Firby, R. J. 1995. The RAP language manual. Working Note AAP-6, University of Chicago.
Kuipers, B., and Astrom, K. 1994. The composition and validation of heterogenous control laws. Automatica 30(2):233-249.
Manna, Z., and Pnueli, A. 1992. The Temporal Logic of Reactive and Concurrent Systems: Specification. Springer-Verlag.
Muscettola, N. 1994. HSTS: Integrating planning and scheduling. In Fox, M., and Zweben, M., eds., Intelligent Scheduling. Morgan Kaufmann.
Nerode, A., and Kohn, W. 1993. Models for hybrid systems. In Grossman, R. L., et al., eds., Hybrid Systems. Springer-Verlag. 317-356.
Pell, B.; Bernard, D. E.; Chien, S. A.; Gat, E.; Muscettola, N.; Nayak, P. P.; Wagner, M. D.; and Williams, B. C. 1996. A remote agent prototype for spacecraft autonomy. In Procs. of SPIE Conf. on Optical Science, Engineering, and Instrumentation.
Poole, D. 1995. Sensing and acting in the independent choice logic. In Procs. of the AAAI Spring Symp. on Extending Theories of Action, 163-168.
Williams, B. C. 1996. Model-based autonomous systems in the new millennium. In Procs. of AIPS-96.
Zhang, Y., and Mackworth, A. K. 1995. Constraint nets: A semantic model for hybrid dynamic systems. Journal of Theoretical Computer Science 138(1):211-239.
A Formal Hybrid Modeling Scheme for Handling Discontinuities in Physical System Models

Pieter J. Mosterman and Gautam Biswas
Center for Intelligent Systems
Box 1679, Sta B
Vanderbilt University
Nashville, Tennessee 37235
pjm, biswas@vuse.vanderbilt.edu

Abstract

Physical systems are by nature continuous, but often exhibit nonlinearities that make behavior generation complex and hard to analyze. Complexity is often reduced by linearizing model constraints and by abstracting the time scale for behavior generation. In either case, the physical components are modeled to operate in multiple modes, with abrupt changes between modes. This paper discusses a hybrid modeling methodology and analysis algorithms that combine continuous energy flow modeling and localized discrete signal flow modeling to generate complex, multi-mode behavior in a consistent and correct manner. Energy phase space analysis is employed to demonstrate the correctness of the algorithm, and the reachability of a continuous mode.

Introduction

Recent advances in model-based and qualitative reasoning have led to researchers developing large scale models of complex, continuous systems, such as power plants and space station sub-systems. System complexity is handled by replacing nonlinear component behaviors by simpler piecewise linear behaviors, causing the system to exhibit multi-mode behavior[11]. For example, the Airbus A-320 fly-by-wire system includes the take off, cruise, approach, and go-around operational modes[13].

Quantitative and qualitative simulation methods (e.g., [6, 12]) typically impose continuity constraints to ensure generated behaviors are meaningful. However, system models that accommodate configuration changes and multi-mode components can exhibit discontinuous behavior. Consider the diode-inductor circuit in Fig. 1.
When the closed switch Sw is opened and the voltage drop across the diode exceeds 0.6V, the diode comes on and abruptly enforces this voltage across the inductor. Computational complexity is reduced by modeling the diode as an ideal switch with on and off modes. In reality, parasitic resistive and capacitive effects in the diode would force the on/off changes to be continuous with a very fast time constant.

Figure 1: Physical system with discontinuities.

Our goal is to derive a uniform approach to analyzing continuous and discontinuous system behavior without violating fundamental physical principles of conservation of energy and momentum. The solution is a hybrid modeling scheme that combines traditional energy-related bond graph elements to model the physical components and finite state automata controlled junctions to model configuration changes.

Characteristics of Physical Systems

A physical system can be regarded as a configuration of connected physical elements. The energy distribution in the system reflects its behavioral history up to that time and defines the traditional notion of system state. Future behavior of the system is a function of its current state and input to the system from the present time. State changes are caused by energy exchange between system components, expressed as power, the time derivative or flow of energy. Independent of the physical domain (mechanical, electrical, hydraulic, etc.), power is defined as the product of two conjugate power variables, effort, e, and flow, f. Correspondingly, energy comes in two forms, stored effort and flow, called generalized momentum, p, and generalized displacement, q, respectively. The variables p and q are called state variables.

Bond graphs capture continuous energy-related physical system behavior[12]. Their constituent elements are energy storage elements, inductors, I, and capacitors, C; dissipators, R; and sources/sinks of effort and flow, Se and Sf. Sources define interaction with the environment. Idealized, lossless 0- (common effort) and 1- (common flow) junctions connect sets of elements and define the system configuration. Two special junctions called signal transformers complete the set of bond graph primitive elements: the transformer, TF, and the gyrator, GY.

Qualitative Physics 985 From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Bond graph models describe system behavior by energy exchange among components. Depending on the type of stored energy, buffer elements impose either effort or flow on their respective junctions. This imposes a causal structure on the system effort and flow variables which is exploited to generate system behavior in the form of quantitative state equations[12] and qualitative relations among variables[1, 8]. In summary, bond graphs provide an elegant formalism to model the continuous behavior of physical systems.

Nature and Effects of Discontinuities

Conservation of energy enforces a time integral relation between energy and power variables, which implies continuity of power, and therefore of effort and flow. Discontinuities in behavior generation can be attributed to simplifying model descriptions[2, 11]. We contend that all discontinuities in the modeling of physical systems can be attributed to abstracting component behavior to simplify (i) the time-scale of the interactions, or (ii) the relations among parameters.

Often the time scale of nonlinear behavior in components is significantly less than the time scale at which overall system behavior is studied. Explicit modeling of system behavior at the smaller time scale may greatly increase the time complexity of behavior simulation and introduce numerical stiffness. To avoid this, components like electric switches, valves, and diodes are modeled to have abrupt or discontinuous changes in behavior.

Another cause for discontinuities in models stems from component parameter abstractions.
The detailed effects of particular component characteristics, such as fast nonlinear behaviors of transistors and oscillators, are usually not important except for their gross effects on overall system behavior. Behavior generation is simplified by approximating nonlinear behavior as a series of piecewise linear behaviors. In other situations, certain parameter effects that have negligible influence on gross behavior are omitted from system models. Since all changes in the state of any physical system are brought about by energy exchange or power, the constraint on power continuity plays an important role in meaningful behavior generation. However, in models with discontinuities, behavior generation schemes have to deal with discontinuities in power variables[9].

The Hybrid Modeling Scheme

In qualitative simulation systems, such as QSIM[6], a higher level global control structure (meta-model) determines when to switch QDE sets during behavior generation. In other approaches[2, 6], transition functions between configurations are specified as rules or state transition tables. In work based on bond graph schemes, researchers have introduced switching bonds[2] controlled by global finite state automata to connect and disconnect subsystem models. All these methods fail for systems whose range of behaviors has not been pre-enumerated. Compositional modeling approaches that build system models dynamically by composing model fragments[1, 10] overcome this problem. We adopt this methodology and implement a dynamic model switching methodology in the bond graph modeling framework.

We avoid global control structures and pre-enumerated bond graph models. Instead we translate the overall physical model to one bond graph model that covers the energy flow relations within the system. Next, the discontinuous mechanisms and components in the system are modeled locally as controlled junctions, which can assume one of two states: on and off.
Local finite state automata which control each junction constitute the signal flow model of the system. It is distinct from the bond graph model that deals with the energy-related behavior of the physical system variables. Signal flow models describe the transitional, mode-switching behavior of the system. A mode of a system is determined by the combination of the on/off states of all the controlled junctions in its bond graph model.

Controlled Junctions

When active (on), controlled junctions behave like normal 0- or 1-junctions. Deactivated 0-junctions force the effort, and deactivated 1-junctions force the flow, at adjoining bonds to become 0, thus inhibiting energy flow. In both cases, the controlled junction exhibits ideal switch behavior, and modeling discontinuous behavior in this way is consistent with bond graph theory[12]. Deactivating controlled junctions can affect the behaviors at adjoining junctions, and, therefore, the causal relations among system variables.

Controlled junctions are marked with subscripts (e.g., 1_1, 0_2) in the hybrid bond graph representation (Fig. 2). Each controlled junction has an associated finite state automaton that generates the on/off signals for the controlled junction. This is called a junction's control specification (CSPEC). CSPEC input consists of power variables from the bond graph and external control signals. Its output is the on/off signal for the controlled junction. In every CSPEC transition sequence, on/off signals have to alternate.

Figure 2: Hybrid bond graph model.

Mode Switching in the Hybrid Model

Discontinuous effects establish or break energetic connections in the model when threshold values are crossed. As a consequence, signals associated with bonds at the junction may change discontinuously. Also, when junctions become active, buffers may become dependent, causing an apparent instantaneous change in the energy distribution of the system[9].
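A CSPEC is a small two-state automaton whose on/off output gates its junction, with on and off transitions alternating by construction. The class and threshold predicates below are an invented sketch, not the paper's representation; the thresholds approximate CSPEC 2 of the diode example (on above a 0.6V drop, off when the current reaches 0).

```python
class ControlledJunction:
    """A controlled junction with its CSPEC: a two-state automaton
    whose on/off output gates energy flow at the junction. The
    transition predicates over power variables / external signals
    are supplied by the caller; illustrative sketch only."""
    def __init__(self, turn_on, turn_off, state="off"):
        self.turn_on, self.turn_off, self.state = turn_on, turn_off, state

    def step(self, signals):
        # on/off transitions alternate, as required of a CSPEC
        if self.state == "off" and self.turn_on(signals):
            self.state = "on"
        elif self.state == "on" and self.turn_off(signals):
            self.state = "off"
        return self.state

# Hypothetical CSPEC for the diode: on when the voltage drop reaches
# 0.6V, off when the current falls to or below 0.
diode = ControlledJunction(turn_on=lambda s: s["v"] >= 0.6,
                           turn_off=lambda s: s["i"] <= 0.0)
```

The mode of the whole system is then just the tuple of the `state` fields of all its controlled junctions.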
The use of controlled junctions is illustrated for the diode-inductor circuit (Fig. 1) in Fig. 2. The manual switch turns on or off based on an external control signal as shown in CSPEC 1, and the diode switches on or off based on CSPEC 2. Fig. 3 shows a simulation run of this system with parameter values V = 10V, R1 = 330Ω, L = 5mH, p(0) = 0. When the switch is closed (t = 10), the inductor is connected to the source and builds up a flux, p, by drawing a current. The diode is not active in this mode of operation (10). When the switch is opened (t = 100), the current drawn by the inductor drops to 0, causing its flux to discharge instantaneously. Because of the derivative nature of the constituent relation V_L = L di/dt, the result is an infinite negative voltage (the flux changes from a positive value to 0) across the diode (Fig. 4). Because its threshold value, V_diode, is exceeded, the diode comes on instantaneously, and the mode of operation where the switch is open and the diode inactive (00) is never realized in real time. If it were, the stored energy of the inductor would be released instantaneously in a mythical mode where the model has no real representation, producing an incorrect energy balance in the overall system. Consequently, there would be no flow of current after the diode becomes active. In real time the system switches from mode 10 to 01 directly.

At t = 350 the diode turns off because its current falls below I_th = 0. Since there is no stored energy in the system, 00 becomes the final mode. The spike observed is a simulation artifact caused by the simulation time step. The diode was inferred to switch off when the current had a small negative value, rather than 0. This small current in the inductor went to 0 instantaneously, which resulted in the spike shown.

Figure 3: Diode-inductor simulation.

Figure 4: A series of mode switches may occur.

A model that undergoes a sequence of instantaneous mode changes has no physical manifestation during these changes; therefore, these modes are termed mythical. Thermodynamically, the system is considered to be isolated[3] during mythical modes, i.e., there is no energy exchange with the environment. This establishes the energy distribution of the system as a switching invariant. Because the energy distribution in the system defines its state vector, it is referred to as the principle of invariance of state. In the diode-inductor example, the flux, p, of the inductor is invariant during switching. The invariance of state principle applies only if the state variables represent the energy distribution (buffer energy values) in the system.

Energy redistribution can occur in the real mode, and the challenge is to compute the initial energy distribution when a real mode is reached after a series of discontinuous mythical changes. At this point, the
Real time continuous simulation resumes at tf so system behav- ior in real time implies mode MN follows MO. The formal Mythical Mode Algorithm (MMA) is outlined below. 1. Calculate the energy values (Qs, P,) and signal val- ues (E,, F,) for bond graph model MO using (Qs, PO), at the previous simulation step as initial values. 2. Use CSPEC to infer a new mode given (Es, F,) . 3. If one or more controlled junctions switch states: (a) Derive the bond graph for this configuration. (b) Assign causal directions to bonds[l2]. (c) Calculate the signals (Ei, Fi) for the new mode, Mi, based on the initial values (Qs, Ps). (d) Use CSPEC again to infer a possible new mode based on (& , Fi) for the new mode, Mi . (e) Repeat step 3 till no more mode changes occur. 4. Establish the final mode, MN, as the new system configuration. 5. Map (&s, PO) to the energy distribution (&N, PN). Details of the complete simulation algorithm and soft- ware for modeling hybrid system behavior are de- scribed elsewhere[7]. Figure 6: Diode-relay circuit. Divergence of Time Consider a scenario where the diode requires a thresh- old current Ith > 0 to maintain its on state. If the inductor has built up a positive flux, the diode comes on when the switch opens. However, if the flux in the inductor is too low to maintain the threshold current, the diode goes off instantaneously, but in its oJJ‘state the voltage drop exceeds the threshold voltage again. The model goes into a loop of instantaneous changes (see Fig. 4). For instantaneous changes, real time does not progress or diverge, but this violates the physical principle of divergence of time[4]. Checking for diver- gence of time in model behavior is accomplished by a multiple energy phase plot method. Failure to diverge is linked back to the initial values of associated state variables. Consider the electrical circuit in Fig. 6. The three branches where voltage drops occur in this circuit are represented by O-junctions. 
The diode is modeled as an ideal voltage sink and the three branches and elements are connected using l-junctions. Two switches make up the control flow model 1. The diode switches on/o8 depending on its voltage drop or current. The corresponding controlled junc- tion is 1~ with CSPEC D. The input to D are the power signals eR, ec, eL, and fD. 2. The relay is closed/open depending on the voltage drop across the capacitor, modeled by controlled junction OR with the controlling power variable ec. A closed relay implies that OR is ofl and an open relay implies OR is on. To avoid discontinuous changes in power variables during analysis, CSPEC transition conditions are rewritten in terms of the energy variables which are invariant across mythical modes. Since discontinuities 988 Model-Based Reasoning Figure 7: Multiple energy phase space analysis. can cause changes in system configuration, and the re- lation between power and energy variables, an energy phase space diagram has to be constructed for each switch configuration. The energy phase space is k-dimensional, where k is the tot al number of independent buffers in the system. For example, the circuit in Fig. 6 has two energy buffers implying a two dimensional phase space with axes p, the flux in inductor L1, and q, the charge on capaci- tor Cl (Fig. 7). The four modes for the two switches are 00, 01, 10, and 11. The first digit indicates the open/closed state of the diode, and the second digit defines the on/off state of the relay. For each mode, the transition conditions based on the energy variables are grayed out in the phase spaces. For example, in mode 00 the relay turns on if ec > Vrelay.l Substitut- ing ec = & g enerates q > CIVrelay, which is grayed out in the phase space. The conditions under which the diode turns on are harder to derive because Li induces a Dirac pulse, 6.2 CSPEC D switches on if eR + ec -I- eL <_ -I&ode. When the diode is off, eR = ec = &-. 
A deactivated l-junction has 0 flow so the stored flux in the inductor becomes 0 instantaneously and because eL = g, this causes eL to be a Dirac pulse which approaches positive ‘Aspart ofa arg 1 er system (e.g., automobile ignitions), this circuit discharges the inductor through the diode and capacitor. The relay keeps the charge in the capacitor above a small threshold value so that the flux in the induc- tor does not increase first when it is switched to discharge. 2This is a pulse of finite area but infmitesimal width that occurs at a time point. Figure 8: One energy phase space. or negative infinity, depending on whether the stored flux was negative or positive, respectively. If the flux was 0, eL equals 0. Using the function sign I -1 ifa:<O sign(z) = 0 ifz=O (1) 1 ifz>O we derive eL = -sign(p)d 5 -V&de. The minus signs cancel and eR and ec can be neglected, so the condition for switching of the diode becomes sign(p)6 2 V&,&. Assuming the voltage enforced by the diode is 0.6V, this inequality holds for all values of p > 0. This area is grayed out in the phase space. The phase space representation for the four modes (Fig. 7) are superimposed (Fig. 8) to study possible divergence of time violations. If an energy distribution does not have a real energy phase space component, the state vector can never lead to a real mode and time does not diverge for this behavior. In this example, divergence of time is violated if Ith > 0 for the diode. This area will be reached for all energy distributions with positive initial flux, p. When p = 0 in the 00 and 01 modes, time does diverge. If the flux has a negative initial value, both the flux and capacitor charge converge asymptotically to 0. Discussion and Conclusions Hybrid models of physical systems may undergo a se- ries of discontinuous changes. These discontinuities are a result of abstracting the time scale and com- ponent parameters in system models. 
The Mythical Mode Algorithm uses the principle of invariance of state to correctly infer new modes of continuous op- eration and their state variable values. In pathological cases, system models result in mythical loops, implying the model is physically incorrect. Using the principle of invariance of state, a systematic energy phase space analysis method is developed to verify the correctness of system models. Note that our work verifies the cor- rectness of models, i.e., it ensures that these models do Qualitative Physics 989 not violate physical principles. This is different from model validation which establishes how well a system model behavior matches that of the exact physical sit- uation of interest. Previous work on model verification by Henzinger et &[4] relies on pre-enumeration of global modes of operation, and their method is restricted to variables that have linear rates of change. Our method applies more generally to linear and nonlinear models. In other work, Iwasaki et aZ.[5], introduce the concept of hyper- time to represent the instantaneous switching as an in- finitesimal interval. A sequence of switches can be an- alyzed in hypertime to determine state changes. This approach emulates physical effects of small time con- stants (e.g., parasitic dissipation) which can greatly increase simulation complexity. Moreover, the mod- eler often chooses to simplify the model by ignoring parasitic effects. If physical inconsistencies, e.g., non divergence of time arise in behavior generation, the modeler has to add more details in the model increas- ing its complexity, or adjust landmark values to estab- lish a physically correct but more simple and abstract model. Adding detail may not increase the accuracy of behavior generation because the additional param- eters required may be hard to estimate. Also, increas- ing the computational complexity of models and sim- ulation engines does not guarantee correct models. 
In the diode-inductor example, an infinitesimal change of time when both the switch and diode are off discharges the stored flux and generates incorrect behavior. On the other hand, explicit incorporation of invariance of state ensures that physical consistency of the chosen models can be determined. Another insight gained is that mythical modes arise from combinations of consistent switching elements, i.e., a single switch cannot cause mythical mode changes. When a number of switches interact via instantaneous relations with no intervening buffers, sequential behavior may ensue. Although these modes are modeling artifacts, they result from justifiable modeling decisions, which have to be dealt with appropriately. In future work we will attempt to demonstrate that reachability analysis can be applied in the multiple energy phase space approach by taking the cross product of all interacting local finite state automata. These sets of interacting automata represent local modes of operation. To avoid the computational complexity of the cross product of a number of automata, we will have to develop schemes that efficiently decompose the model into parts that are not instantaneously connected because of intervening energy buffers.

References

[1] G. Biswas and X. Yu. A formal modeling scheme for continuous systems: Focus on diagnosis. Proc. IJCAI-93, pp. 1474-1479, Chambery, France, Aug. 1993.
[2] J.F. Broenink and K.C.J. Wijbrans. Describing discontinuities in bond graphs. Proc. of the Intl. Conf. on Bond Graph Modeling, pp. 120-125, San Diego, CA, 1993.
[3] G. Falk and W. Ruppel. Energie und Entropie: Eine Einführung in die Thermodynamik. Springer-Verlag, Berlin, New York, 1976.
[4] T.A. Henzinger, X. Nicollin, J. Sifakis, and S. Yovine. Symbolic model checking for real-time systems. Information and Computation, 111:193-244, 1994.
[5] Y. Iwasaki, A. Farquhar, V. Saraswat, D. Bobrow, and V. Gupta.
Modeling time in hybrid systems: How fast is "instantaneous"? Qualitative Reasoning Workshop, pp. 94-103, Amsterdam, The Netherlands, 1995.
[6] B. Kuipers. Qualitative simulation. Artificial Intelligence, 29:289-338, 1986.
[7] P.J. Mosterman and G. Biswas. Modeling discontinuous behavior with hybrid bond graphs. Qualitative Reasoning Workshop, pp. 139-147, Amsterdam, The Netherlands, 1995.
[8] P.J. Mosterman, R. Kapadia, and G. Biswas. Using bond graphs for diagnosis of dynamic physical systems. Sixth Intl. Workshop on Principles of Diagnosis, pp. 81-85, Goslar, Germany, 1995.
[9] P.J. Mosterman and G. Biswas. Analyzing discontinuities in physical system models. Qualitative Reasoning Workshop, Fallen Leaf Lake, CA, 1996.
[10] P.P. Nayak, L. Joskowicz, and S. Addanki. Automated model selection using context-dependent behaviors. Proc. AAAI-91, pp. 710-716, Menlo Park, CA, 1991.
[11] T. Nishida and S. Doshita. Reasoning about discontinuous change. Proc. AAAI-87, pp. 643-648, Seattle, WA, 1987.
[12] R.C. Rosenberg and D. Karnopp. Introduction to Physical System Dynamics. McGraw-Hill Publishing Company, New York, NY, 1983.
[13] W. Sweet. The glass cockpit. IEEE Spectrum, pp. 30-38, Sept. 1995.

990 Model-Based Reasoning
Building steady-state simulators via hierarchical feedback decomposition

Nicolas Rouquette
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive, M/S 525-3660
Pasadena, CA 91109
Nicolas.Rouquette@jpl.nasa.gov

Abstract

In recent years, compositional modeling and self-explanatory simulation techniques have simplified the process of building dynamic simulators of physical systems. Building steady-state simulators is, conceptually, a simpler task consisting of solving a set of algebraic equations. This simplicity hides delicate technical issues of convergence and search-space size due to the potentially large number of unknown parameters. We present an automated technique for reducing the dimensionality of the problem by 1) automatically identifying feedback loops (a generally NP-complete problem), 2) hierarchically decomposing the set of equations in terms of feedback loops, and 3) structuring a simulator where equations are solved either serially without search or in isolation within a feedback loop. This paper describes the key algorithms and the results of their implementation on building simulators for a two-phase evaporator loop system across multiple combinations of causal and non-causal approximations.

Introduction

Recent advances in model-based reasoning have greatly simplified the task of building and using dynamic simulators of physical systems (Nayak 1993; Forbus & Falkenhainer 1995; Amador, Finkelstein, & Weld 1993). While the usefulness of dynamic simulators is well established in various fields from teaching to high-fidelity simulation, steady-state simulators are characterized by low computational requirements (i.e., that of solving a set of equations only once), which makes them attractive for a wide range of engineering analyses such as stress tolerance, sensitivity, and diagnosis (Biswas & Yu 1993).
However, building steady-state simulators can be a challenging task dominated by issues related to the existence of numerical solutions, the physical interpretability of the solutions found, and the convergence properties of the simulator (Manocha 1994). Building a steady-state simulator is conceptually a simple task, that of solving N algebraic equations in M < N unknown parameters with respect to N - M known parameter values. This simplicity hides the computational and numerical task of efficiently and accurately searching for a solution in a space as large as M dimensions. There are two extreme approaches for solving a set of algebraic equations: the brute-force approach uses an algorithm to search for the numerical solution in the M-dimensional space of possible values; the clever approach seeks to identify closed-form algebraic formulae for computing the unknown parameters in terms of the known values. This paper presents an intermediate approach relying on an automated technique for reducing the dimensionality of the original search space, thereby greatly simplifying the task of selecting numerical algorithms and initial solution estimates. This is achieved by 1) automatically identifying feedback loops, 2) hierarchically structuring an equation solver where groups of equations are solved either serially or independently from each other, and 3) structurally merging the modeler's choice of algorithms and initial estimates with the feedback decomposition to build the steady-state simulator. The construction of high-performance hierarchical steady-state simulators is organized as an operationalization process transforming the steady-state model into numerical simulation software. The former is a non-computable, unstructured, conceptual specification of the latter, which is a computable, structured, pragmatic description of the former.
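The brute-force extreme described above — searching the full M-dimensional space at once — can be sketched as a plain Newton iteration with a finite-difference Jacobian. The 2-equation system, starting point, and helper name `newton_solve` below are illustrative assumptions, not the paper's implementation.

```python
def newton_solve(F, x, tol=1e-10, max_iter=50):
    # Newton iteration over all unknowns at once, with a
    # forward-difference Jacobian: a brute-force style search.
    n = len(x)
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            return x
        h = 1e-7
        # J[i][j] = dF_i / dx_j, estimated by finite differences
        J = [[(F([x[k] + (h if k == j else 0.0) for k in range(n)])[i] - fx[i]) / h
              for j in range(n)] for i in range(n)]
        # Solve J * dx = -fx by Gaussian elimination (n is small here)
        A = [row[:] + [-fx[i]] for i, row in enumerate(J)]
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[piv] = A[piv], A[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n + 1):
                    A[r][k] -= f * A[c][k]
        dx = [0.0] * n
        for r in range(n - 1, -1, -1):
            dx[r] = (A[r][n] - sum(A[r][k] * dx[k]
                                   for k in range(r + 1, n))) / A[r][r]
        x = [x[i] + dx[i] for i in range(n)]
    return x

# Steady state of the toy system: x^2 + y - 3 = 0, x + y^2 - 5 = 0
sol = newton_solve(lambda v: [v[0]**2 + v[1] - 3, v[0] + v[1]**2 - 5], [1.0, 1.0])
```

Every unknown needs an initial estimate here; the hierarchical decomposition developed below reduces how many such guesses the search must juggle simultaneously.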
In doing so, we elicit engineering understanding of feedback loops related to closed-loop circuits (physical loops) or interdependent equations (algebraic loops). In varying modeling assumptions, we earmark conceptual progress in tuning modeling assumptions not only to the purpose of the model and the conditions of the physical system but to the various numerical and physical aspects of the simulator (initial estimates, speed of convergence, and physical interpretability of solutions). Therefore this approach enables modelers to progress on conceptual and cognitive fronts, the alternation of which is characteristic of the cyclic nature of the modeling process from theory formation and revision, to experimentation, evaluation and interpretation as described in (Aris 1978). Like all modeling tools, efficiency is a practical concern. Consequently, we have limited the hierarchical feedback decomposition technique to a tractable domain of models where the algorithms have polynomial-time complexity and efficient implementations.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

As an unstructured collection of equations, a model is computationally unoperational, for there is no indication of which values must be computed when. The first step of operationalization consists in determining a partial ordering of computations, a process we call algebraic ordering, whose emphasis on numerical computability distinguishes it from the physical notion of causation behind the process of causal ordering. We first present a very efficient algorithm for computing algebraic orderings as a maximum flow problem through a network. This ordering produces a graph of algebraic dependencies among parameters. The key contribution of this paper is in an algorithm that decomposes parameter dependency graphs to elucidate the hierarchical feedback structure of the equation model.
Knowledge of this structure can be exploited in many ways; we show here how it serves the purposes of constructing low-cost, high-performance steady-state simulators.

Algebraic ordering

In numerical analysis, it is well known that there is no general-purpose, universally good numerical equation solving algorithm (Press et al. 1992); there is only an ever-growing multitude of algorithms with various abilities in specific domains and for specific types of equations. For steady-state simulation, this variety poses a pragmatic problem: choosing the right algorithm for the job becomes more and more difficult as the size of the model grows. To efficiently reason about the possible ways to construct a simulation program for a given set of algebraic equations, we define the notion of an algebraic ordering graph that captures how each parameter of the model algebraically depends on the values of other model parameters via the algebraic model equations. Our notion of algebraic ordering bears close resemblance to that of causal ordering (Nayak 1993; Iwasaki & Simon 1993) and relevance in modeling (Levy 1993). These three notions of ordering share a common representation where an equation, E: PV = nRT, yields a parameter-equation graph whose edges indicate possible relations of physical causality and algebraic relevance between parameters and equations. Here, we focus on algebraic computation instead of physical causality, and we explicitly distinguish two types of relations: one where an equation can compute a parameter value (solid edges) and another where an equation needs other parameter values to perform such computations (dashed edges). This allows us to distinguish several ways to numerically compute parameter values. An equation e can directly constrain a parameter value p if e is algebraically solvable with respect to p.
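The two edge types of the parameter-equation graph can be represented directly. Treating n and R as known constants in the PV = nRT example is our simplifying assumption, and all names below are illustrative.

```python
# Parameter-equation graph for E: P*V = n*R*T, with two edge types:
# "computes" (solid): E can be solved directly for the parameter;
# "needs" (dashed): E needs the parameter's value to do so.
edges = {
    ("E", "P"): "computes", ("E", "V"): "computes", ("E", "T"): "computes",
    ("P", "E"): "needs", ("V", "E"): "needs", ("T", "E"): "needs",
    ("n", "E"): "needs", ("R", "E"): "needs",  # n, R assumed known constants
}

def solvable_for(eq, edges):
    """Parameters that equation `eq` can directly compute (solid edges)."""
    return sorted(p for (e, p), kind in edges.items()
                  if e == eq and kind == "computes")

direct_params = solvable_for("E", edges)
```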
An equation e indirectly constrains a parameter value p if e is not algebraically solvable with respect to p. For example, the equation y = x² can directly constrain y for a given x, since the solution is unique. This equation indirectly constrains x for a given y, since there are two possible solutions in x. Although it is preferable to compute all model parameters through direct constrainment, it is not always possible to do so. The combinations of possible direct and indirect constrainment relationships lead to three categories of parameter constrainment: 1) A parameter may not have any equation directly constraining it; we say it is under constrained because its solution value must be guessed, as there is no way to directly compute it. 2) When a parameter is directly constrained by exactly one equation, we say it is properly constrained because its value is unambiguously computed by solving a unique equation. 3) When multiple equations directly constrain the same parameter, we say it is over constrained because there is no guarantee that all such equations yield the same numerical value unless other parameters of these equations can be adjusted. An equation can only be solved with respect to a single parameter; thus, there are only two possible constrainment categories: 1) A properly constrained equation e of n parameters properly constrains exactly one parameter p if p is computed numerically or analytically by solving e with respect to values for the other n - 1 parameters. 2) An equation e of n parameters which constrains no parameter is said to be over constrained: there is no guarantee that the values given to the n parameters satisfy e unless some of the parameter values can be adjusted. We have established a validity test for a set of equations and parameters which determines when a set of indirect and direct constrainment relationships is solvable (see Ch. 4 in (Rouquette 1995)).
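The three parameter categories follow from counting candidate direct constraints. The tiny model below is an illustrative assumption; the paper's matching step, not this count, decides which equation ultimately computes which parameter.

```python
# direct[e] = set of parameters that equation e can be solved for directly
direct = {
    "e1": {"p1"},          # e1 directly solvable for p1
    "e2": {"p1", "p2"},    # e2 directly solvable for p1 or p2
    "e3": set(),           # e3 constrains its parameters only indirectly
}

def classify(params, direct):
    """Label each parameter by how many equations can directly constrain it:
    0 -> under constrained, 1 -> properly constrained, >1 -> over constrained."""
    out = {}
    for p in params:
        n = sum(1 for eqs in direct.values() if p in eqs)
        out[p] = "under" if n == 0 else ("proper" if n == 1 else "over")
    return out

labels = classify({"p1", "p2", "p3"}, direct)
```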
If all parameters and equations were properly constrainable, then a bipartite matching approach (e.g., (Nayak 1993; Serrano & Gossard 1987)) would suffice to establish a valid order of computations. To account for possible over and under constrainment, we defined an extended bipartite matching algorithm which ensures that each case of over constrainment is balanced by an adequate number of adjustable under-constrained parameters, thereby resulting in a valid, computable ordering.

Extended bipartite matching

Algorithm 1 constructs a network flow graph F to match parameters and equations (Steps 1 and 2). Step 3 creates paths between s and t for each exogenous parameter. The key difference with Nayak's algorithm is in the construction of paths corresponding to the possible constrainment relationships among equations and parameters. By default, equation ej could be solved iteratively to find the value of one of its parameters pi ∈ P(ej) (pi → ej^indirect → ej). For a given ej, at most one pi ∈ P(ej) can be computed in this manner. If an equation ej can properly constrain a parameter pi, then there is a path pi → ej^direct → ej in F (Step 4). Since all paths have unit capacity, the paths pi → ej^indirect → ej and pi → ej^direct → ej for all pi's and ej's are mutually exclusive. This property confers to a maximum flow the meaning of a bipartite matching between the set of equations and parameters (Step 5), as is also used in Nayak's causal ordering algorithm. Further, the costs associated to paths allow a maximum flow, minimum cost algorithm to optimize the use of direct computations as much as possible. Finally, the results of the matching are used to define the edges of the algebraic ordering graph (Step 7). For models where the equations are causal, it follows that an algebraic ordering is identical to a causal ordering when every non-exogenous parameter is directly computed.
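For the properly-constrainable special case mentioned above, the matching step can be sketched with plain augmenting-path (Kuhn's) bipartite matching. This is a stand-in for the paper's min-cost max-flow formulation, and all names and the toy model are illustrative assumptions.

```python
def bipartite_match(eqs, params_of):
    """Kuhn's augmenting-path matching: assign each equation a distinct
    parameter it will compute (stand-in for the max-flow matching step)."""
    match = {}  # parameter -> equation currently computing it
    def augment(e, seen):
        for p in params_of[e]:
            if p in seen:
                continue
            seen.add(p)
            # p is free, or its current equation can be re-routed
            if p not in match or augment(match[p], seen):
                match[p] = e
                return True
        return False
    for e in eqs:
        augment(e, set())
    return {e: p for p, e in match.items()}

assign = bipartite_match(
    ["e1", "e2", "e3"],
    {"e1": ["p1"], "e2": ["p1", "p2"], "e3": ["p2", "p3"]},
)
```

Here e3 forces e2 off p2? No: e1 claims p1, e2 then takes p2, so e3 falls through to p3, giving every equation a distinct parameter to compute.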
As an example, we consider the following hypothetical set of algebraic equations:

e1: f1(p1, p2, p3, p8) = 0        e5: exogenous(p5)
e2: f2(p2, p7) = 0                e6: exogenous(p6)
e3: f3(p3, p4) = 0                e7: f7(…) = 0
e4: f4(p4, p5) = 0                e8: f8(…) = 0

Suppose that e2 is solvable in p7 but not p2, and that e4 is solvable in p5 but not p4. The extended bipartite matching graph for this example is shown in Fig. 1. Intuitively, Alg. 1 combines the idea of using a perfect matching as a validity criterion and the flexibility of both direct and indirect computations. Edges of the form (e, p) represent direct computations where the value of p is computed by e as a function of some arguments. Edges of the form (p, e) represent indirect computations where the value of p is constrained by e: the solution value of p is computed by search. If the algebraic ordering graph were acyclic, a topological ordering would define an adequate order of computations. With cycles, the key to globally order computations is to relate the topological structure of the graph to feedback loops.

Feedback

Feedback is a property of the topological interdependencies among parameters.

Parameter Dependency Graph

Definition 1 (Algebraic dependency) A parameter p' depends on p, noted p ↝ p', iff there exists an equation e such that p ∈ P(e) and p' ∈ P(e)

Input: A parameter-equation graph G = (V, A)
Output: A predicate EBM(e, p) for e ∈ E and p ∈ P(e).
1) Create a network flow graph F = (Vf, Af).
2) Vf = V ∪ {e^direct, e^indirect | e ∈ E} ∪ {s, t} (s and t are respectively the source and sink vertices). Af = Pf ∪ Ef, where:
   Pf = {(s, p) | p ∈ P} ∪ {(p, e^x_p), (e^x_p, t) | p ∈ P ∧ exogenous(p)}
   Ef = {(e^direct, e), (e^indirect, e), (e, t) | e ∈ E}
3) Each path s → p → e^x_p → t (p exogenous) has unit flow capacity and zero cost.
4) Edges between pi and ej^direct and ej^indirect have unit capacity; paths through ej^direct have cost 0 and paths through ej^indirect have cost 1.
5) Apply a min. cost, max.
flow algorithm on F. Nonzero transhipment nodes are: the source, s, with b(s) = |P|; the sink, t, with b(t) = -|P|.
6) If f(s, t) < |P| then return ∅.
7) Define EBM() from the maximum flow topology:
   7a) EBM(pi, ej, indirect) holds iff f(pi, ej^indirect) = 1
   7b) EBM(pi, ej, direct) holds iff f(pi, ej^direct) = 1
8) Return EBM()

Algorithm 1: Extended bipartite matching for constructing an algebraic ordering.

(i.e., p and p' are parameters of e) and EBM(p', e, direct) holds.

The dependency digraph Gd = (Pd, Ad) corresponding to an algebraic ordering digraph G' = (V = P ∪ E, A') is defined as follows:
• Pd = {p | p ∈ P ∧ ¬exogenous(p)}
• Ad = {(p, p') | p, p' ∈ Pd ∧ p ↝ p'}

For notational convenience, we say that p ↝* p' when there exists a sequence of parameters p = p1, ..., pn = p' such that p = p1 ↝ p2 ↝ ... ↝ pn-1 ↝ pn. For the 7-equation example, we have a parameter-dependency graph in which parameters p4 and p2 are under-constrained while p1 is over constrained. The validity of this ordering stems from the proper balancing between over- and under-constrained parameters. Indeed, there are 3 ways to compute a value of p1, two of which (p4 → p3 → p1, p2 → p1) can be relaxed to match the value derived from the exogenous parameter p5 (p5 → p8 → p1).

Figure 1: Extended bipartite matching for the 7-equation example. The minimum cost, maximum flow solution is drawn with solid edges.

Hierarchical Feedback Decomposition

Intuitively, feedback occurs when there exist at least two parameters p and p' in the dependency graph Gd such that p ↝* p' and p' ↝* p hold in Gd. Feedback is described by various terms in various scientific disciplines and engineering fields. Terms such as closed-loop circuit (as opposed to an open-loop circuit), circular dependencies, closed-loop control, circular state dependencies, and state or control feedback are commonly used.
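Definition 1 can be read off a matching directly: an edge p ↝ p' exists when some equation contains both parameters and was matched to compute p'. The tiny matching and helper names below are illustrative assumptions.

```python
def dependency_graph(assign, params_of, exogenous):
    """Per Definition 1: p -> p' when some equation e contains both and
    EBM(p', e, direct) holds, i.e., e was matched to compute p'.
    Exogenous parameters are not vertices of Gd."""
    deps = {}
    for e, p_out in assign.items():
        for p in params_of[e]:
            if p != p_out and p not in exogenous:
                deps.setdefault(p, set()).add(p_out)
    return deps

g = dependency_graph(
    {"e1": "p2", "e2": "p3"},                       # equation -> matched parameter
    {"e1": ["p1", "p2"], "e2": ["p2", "p3"]},       # equation -> its parameters
    exogenous={"p1"},
)
```

Here e2 computes p3 from p2, so p2 ↝ p3; p1 is exogenous and thus excluded from Pd.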
Here, we follow some basic ideas of system theory (Padulo & Arbib 1974) and concepts of connectedness from graph theory (Even 1979) to distinguish two types of feedback structures, namely state and control feedback. In a state feedback loop, input parameters x affect the output parameters y through a feedforward transformation; in a dependency graph, we will have x ↝* y. The feedback transformation in turn makes the inputs x dependent on the outputs y, or y ↝* x. A control feedback loop is similar to a state feedback loop except that the feedback transformation (usually the controller) uses both inputs x and outputs y as inputs, i.e., x, y ↝* x.

Operationalizing Feedback

Except for degenerate cases, a feedback loop must be solved iteratively, for it corresponds to a system of N ≥ 2 equations in N unknown parameters. Optimizing the solution quality and its computational cost requires making a number of choices for each feedback loop in terms of numerical algorithms, initial solution estimates, and convergence criteria. Addressing these issues globally can be very difficult. With a decomposition of the model in terms of a hierarchy of feedback loops, we can address these issues in two phases: one for the model subset corresponding to a given feedback loop and another for the structure of the model encompassing this loop. Typically, the former focuses on finding a solution for the feedback loop while the latter addresses convergence issues at a global level. Unfortunately, identifying feedback in an arbitrary graph is an NP-complete problem. Fortunately, lumped-parameter algebraic models of physical systems are typically sparse (due to lumping) and have a low degree of connectivity (because most physical components have limited interactions with neighbor components).
Combined with the fact that most man-made devices are often engineered with closed-loop control designs, it is quite common for the corresponding dependency graphs of such models to be decomposable in terms of feedback loops.

Breaking feedback loops apart

The algebraic dependency relation defined for Gd induces an equivalence relation. By definition, two parameters p1 and p2 are in the same equivalence class iff p1 ↝* p2 and p2 ↝* p1. These relations are characteristic of state and control feedback loops. Thus, feedback loops are strongly-connected subgraphs of Gd; the converse is not true.¹ Thus, we now define a structural criterion for recognizing feedback loops. In (Even 1979), a set of edges T is an (a, b) edge separator iff every directed path from a to b passes through at least one edge of T. Then, for a strongly-connected component C, consider the smallest T such that C - T is either unconnected or broken into two or more strongly-connected subcomponents. For a given pair a, b, we call such a subset T a one-step optimal edge separator.² With the one-step optimal edge separator, we can solve a restricted version of the feedback vertex problem in polynomial time. Algorithm 2 shows how to remove optimal edge separators to analyze the topological structure of a graph G. The algorithm stops if G is not separable with the optimal edge separator (step 1). Consider G' = (V, A - T). By definition of an optimal edge separator, T will break apart the strong-connectedness of G. If G' is no longer strongly-connected, G is in fact a simple feedback loop (step 3). If G' is still strongly-connected, we need to analyze the remaining structure of G'.

¹A fully-connected graph is not a feedback loop.
²One-step because we make a single analysis of how removing T affects the strong connectivity of the given subgraph. See (Rouquette 1995, Ch. 6) for a polynomial-time algorithm.
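The strongly-connected components that this analysis repeatedly inspects can be found in linear time; Tarjan's algorithm below is a standard choice (the paper itself builds on (Even 1979)), and the toy dependency graph is an illustrative assumption.

```python
def sccs(graph):
    """Tarjan's strongly-connected components; components with more than
    one vertex are the candidate feedback loops of a dependency graph."""
    index, low, on, stack, out = {}, {}, set(), [], []
    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v); on.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:       # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); on.discard(w); comp.add(w)
                if w == v:
                    break
            out.append(comp)
    for v in list(graph):
        if v not in index:
            visit(v)
    return out

# P1 -> P2 -> P3 -> P1 is a state feedback loop; P4 hangs off it.
loops = [c for c in sccs({"P1": ["P2"], "P2": ["P3"],
                          "P3": ["P1", "P4"], "P4": []})
         if len(c) > 1]
```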
First, we remove the strongly-connected components already found (step 4).³ Let G'' be the remaining subgraph (step 5). We consider two cases according to the connectivity of G''. If G'' is not strongly connected, G has a 2-level hierarchical structure (if there are 2 or more sub-components) or a 2-level nested structure (if there is only one sub-component) (step 7). If G'' is strongly connected, it must have a single component, RCC.⁴ Topologically, either RCC only shares vertices with the other sub-components already found (i.e., CCS) (step 8) or it is distinct from them. In the latter case, we need to abstract the sub-components already found, CC1, ..., CCn, RCC, into equivalence class vertices C̄1, ..., C̄n, R̄ so that we can further analyze the remaining structure of G'' (step 9). Since G was strongly-connected, Ḡ is also strongly-connected (step 10). The algorithm stops if this abstract component, K̄, is not decomposable (step 12). Otherwise, we map to the base level graph G the optimal edge separator S̄ that breaks apart K̄ (step 13). Note that the DAD algorithm is recursive, for we also need to analyze the structure of the strongly-connected sub-components of G found (steps 7, 8, 14). The recursion stops if a strongly-connected component has either 1) the structure of a state or control feedback loop (step 3) or 2) a more complex structure than that of a feedback loop (steps 1, 12).

Hierarchical Feedback Example

³We use the notation A(X) to mean 'the set of edges of the subgraph X'.
⁴This follows from having removed all components found earlier (step 5) and from the nature of an optimal edge separator.

Input: G = (V, A), a strongly-connected digraph
Output: The hierarchical feedback tree (HFT) decomposition of G
1) Let T be a one-step optimal edge separator of G; HFT = ComplexFeedback(G) if T = ∅.
2) G' = (V, A - T) (remove T from G).
3) HFT = Feedback(G, T) if G' is not strongly connected.
4) Let CCS = {CC1, . . .
, CCn} be the remaining strongly-connected components of G'.
5) G'' = (V, A - ∪_{CC ∈ CCS} A(CC)) (remove CCS from G)
6) If G'' is strongly-connected, go to step 8.
7) HFT = …(G, T, ∪_{CC ∈ CCS} DAD(CC)) if n > 1; Nested(G, T, DAD(CC1)) if n = 1
8) G'' has a single strongly-connected component RCC. HFT = Aggr(G, T, DAD(RCC) ∪ ∪_{CC ∈ CCS} DAD(CC)) if A(G'') - A(RCC) = ∅.
9) Let C̄i be an abstract vertex representing CCi. Let R̄ be an abstract vertex representing RCC. Let Ḡ = (V̄, Ā) be the abstract graph of G, with V̄ = {C̄1, ..., C̄n, R̄} and Ā defined according to paths among CCS ∪ {RCC}.
10) Let K̄ be the strongly-connected component of Ḡ.
11) Let S̄ be the optimal edge separator of K̄.
12) HFT = ComplexFeedback(G) if S̄ = ∅.
13) Let S ⊆ A be the base edges corresponding to S̄.
14) HFT = Aggr(G, S, ∪_{CC ∈ CCS} DAD(CC) ∪ DAD(RCC))

Algorithm 2: DAD: Decomposition and Aggregation of Dependencies

As an illustration example, we show in Fig. 2 a schematic diagram of the evaporator loop of a two-phase, External-Active Thermal Control System (EATCS) designed at McDonnell Douglas. Liquid ammonia captures heat by evaporation from hot sources and releases it by condensation to cold sinks. The venturis maintain a sufficiently large liquid ammonia flow to prevent complete vaporization and superheating at the evaporators. The RFMD pump transfers heat between the two-phase evaporator return and the condenser loop (not shown). A model of the EATCS presented in (Rouquette 1995) yields a parameter-dependency graph of 55 parameters, 18 exogenous and 37 unknowns paired to 37 equations. A brute-force simulation approach consists in solving the 37 equations for the 37 unknown parameters, at the cost of finding 37 initial value estimators, one for each unknown. The DAD algorithm finds a 2-level feedback decomposition shown in Fig. 3.

Figure 2: The evaporator loop of the External-Active Thermal Control System.
This essentially amounts to determining, at problem-solving time, how to prioritize the unknown parameters to work on. In contrast, the hierarchical decomposition determines these priorities once and for all at compile time. With hierarchical decomposition, higher-level feedback loops effectively act as constraints on the possible values lower-level feedback parameters can take. This process is similar to the gradient-descent techniques used in numerical algorithms. The key difference is that a gradient-descent algorithm is continuously guessing the direction where the solution is. With hierarchical feedback decomposition, there is no guesswork about the whereabouts of the solution; the hierarchical equation solver is built to find it in a very organized manner specified at compile-time instead of run-time.

Figure 3: Physical and nested algebraic feedback loops (Venturi/Evaporator: pressure → flow).

With this feedback hierarchy, we now turn to producing a steady-state simulation, i.e., an equation solver for all the model equations. The modeler needs to choose for each feedback loop which subset of parameters will characterize its state,⁵ with the constraint that state parameters must be a graph cutset of the feedback loop.⁶ Other issues influence the choice of a feedback cutset as a state vector: numerical convergence, stability, and speed. For example, in Fig. 3, the modeler chose the pressures at each loop, namely P1, P2, and Ppitot, because the pressures have the widest range of behavior across possible states. Flow rates would be a poor choice because they are mostly constant during nominal circumstances. To produce the final hierarchical equation solver, the modeler must provide for each feedback loop the following information: 1) a state vector of parameters; 2) an initial function to compute the initial state vector values (this function can only use the exogenous parameters relative to the feedback loop.)
and 3) a numerical algorithm to find the final feedback parameter values from any state vector estimate.⁷ Without decomposition, the equation solver has all unknown parameters to handle simultaneously. The gradient-descent approach (?) is a method to guess where the solution may be and focus the search towards there.

⁵The state parameter common to control and state feedback structures is a good candidate.
⁶I.e., if state parameters are removed, the connectivity of the loop is broken.
⁷See (Rouquette 1995) for algorithms to generate C hierarchical equation solvers based on the above information.

For the EATCS, we start at Loop1. From Ppitot we compute new estimates of the lower-level feedback loop states (P1, P2). Then, Loop2 and Loop3 refine P1 and P2 to satisfy the algebraic feedback equations. With this decomposition, search costs are effectively divided among multiple feedback loops. Furthermore, we are spared from continuously evaluating the next-best direction to go, as is done with gradient descent. The charts below show experimental results demonstrating that, despite the lack of flexibility in determining the next-best parameters to adjust, the decomposition approach finds solutions of equal quality at much lower computational cost, even for difficult solutions. For each chart, we considered a series of 21 exogenous conditions defined by pump speed, venturi diameters (to analyze clogging conditions), and evaporator load. As long as the model converges to a nominal state for the EATCS, all three solvers are practically equivalent (data sets 0 through 12). Data sets 13 and above correspond to overload conditions where the heat applied is greater than what the evaporator loop can circulate. In such cases, the initial estimates are quite far from the actual abnormal solutions, which implies more search. In fact, the brute-force and intermediate solvers spend several orders of magnitude more time searching only to find wrong solutions.

[Chart: Venturi 1 diameter, Phi1 (input), over data sets 0-20.]
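The two-level solve described for the EATCS — an outer loop on Ppitot constraining inner loops on P1 and P2 — can be sketched with nested fixed-point iterations. Every equation, constant, and helper name below is a hypothetical stand-in for the real model, not the paper's generated C solver.

```python
def solve_loop(residual, x0, tol=1e-9, max_iter=200):
    # One feedback loop solved in isolation by damped fixed-point
    # iteration on its single state parameter (a stand-in for the
    # modeler's chosen numerical algorithm for that loop).
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x -= 0.5 * r
    return x

def hierarchical_solve(p_outer0=1.0):
    """Outer loop state constrains two inner loops; the inner solves run
    to convergence inside each outer residual evaluation (hypothetical
    equations standing in for the EATCS model)."""
    def outer_residual(p_outer):
        # inner loop 1: p1^2 = P_outer
        p1 = solve_loop(lambda p: p * p - p_outer, max(p_outer, 0.1))
        # inner loop 2: p2 = 2 * P_outer
        p2 = solve_loop(lambda p: p - 2 * p_outer, 0.0)
        # outer constraint: p1 + p2 = 5
        return p1 + p2 - 5.0
    return solve_loop(outer_residual, p_outer0)

p_outer = hierarchical_solve()
```

The outer search is one-dimensional, and each inner loop is also one-dimensional, in place of a single three-dimensional search; this mirrors how the decomposition divides search costs among loops.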
Only the hierarchical solver managed to predict the temperature increase at the evaporator outlet due to the overheating condition.

[Chart: Log of residual error over data sets 0-20, for the brute-force, intermediate, and hierarchical solvers.]
[Chart: Equation-solving time over data sets 0-20.]
[Chart: Evaporator 1 outlet temperature (T1) over data sets 0-20, for the brute-force, intermediate, and hierarchical solvers.]

Conclusion

To describe the possible ways of solving a set of parameters from a set of algebraic equations, we presented the notion of algebraic ordering, which is equivalent to causal ordering if all equations are believed to be causal. From an algebraic ordering, we constructed a parameter-dependency graph and described a decomposition algorithm based on analyzing the topological structure of the dependencies in terms of its strongly-connected components. By carefully choosing how to break apart such components, we showed how to construct the hierarchical decomposition of the dependency graph in terms of state and control feedback structures. Once the modeler chooses state feedback parameters and initial estimators for them, the set of all equations is solved bottom-up by applying a chosen equation solver according to the hierarchical feedback decomposition found.
Compared to knowledge-free gradient-descent approaches to equation solving, our knowledge-intensive approach seeks to elucidate knowledge about feedback from the model itself to help the modeler provide as much relevant equation-solving knowledge as possible in terms of initial solution estimates and convergence metrics. Experimentally, this produced faster, better, and cheaper simulation programs, trading off an expansive and broad search space (brute-force approach) for a narrow, structured search space (fewer independent parameters), thereby achieving greater computational efficiency without loss of accuracy.

References

Amador, F.; Finkelstein, A.; and Weld, D. 1993. Real-time self-explanatory simulation. In Proceedings of the Eleventh National Conference on Artificial Intelligence. The AAAI Press.
Aris, R. 1978. Mathematical Modeling Techniques, volume 24 of Research Notes in Mathematics. Pitman.
Biswas, G., and Yu, X. 1993. A formal modeling scheme for continuous systems: Focus on diagnosis. In International Joint Conference on Artificial Intelligence, 1474-1479.
Even, S. 1979. Graph Algorithms. Computer Science Press.
Forbus, K., and Falkenhainer, B. 1995. Scaling up self-explanatory simulators: Polynomial-time compilation. In International Joint Conference on Artificial Intelligence, 1798-1805.
Iwasaki, Y., and Simon, H. A. 1993. Retrospective on causality in device behavior. Artificial Intelligence Journal 141-146.
Levy, A. 1993. Irrelevance Reasoning in Knowledge Based Systems. Ph.D. Dissertation, Department of Computer Science, Stanford University.
Manocha, D. 1994. Algorithms for computing selected solutions of polynomial equations. J. Symbolic Computation 11:1-20.
Nayak, P. 1993. Automated Modeling of Physical Systems. Ph.D. Dissertation, Department of Computer Science.
Padulo, L., and Arbib, M. A. 1974. System Theory. W. B. Saunders Company.
Press, H.; Teukolsky, S.; Vetterling, W.; and Flannery, B. 1992. Numerical Recipes in C.
Qualitative Physics 997
Managing Occurrence Branching in Qualitative Simulation

Lance Tokuda
University of Texas at Austin
Department of Computer Sciences
Austin, Texas 78712-1188
unicron@cs.utexas.edu

Abstract

Qualitative simulators can produce common sense abstractions of complex behaviors given only partial knowledge about a system. One of the problems which limits the applicability of qualitative simulators is the intractable branching of successor states encountered with models of even modest size. Some branches may be unavoidable due to the complex nature of a system. Other branches may be accidental results of the model chosen. A common source of intractability is occurrence branching. Occurrence branching occurs when the state transitions of two variables are unordered with respect to each other. This paper extends the QSIM model to distinguish between interesting occurrence branching and uninteresting occurrence branching. A representation, algorithm, and simulator for efficiently handling uninteresting branching are presented.

Figure 1: Variable transition diagram

Systems of uncoupled variables

First consider the behaviors generated in a system with two uncoupled variables A and B. Let the initial value for both variables be 0, so we can represent the initial state of the system as the 2-tuple (0,0). Figure 2 displays the transition diagram for this system.

Introduction

Qualitative simulators can produce common sense abstractions of complex behaviors; however, they can also produce an intractable explosion of meaningless behaviors because they attempt to combinatorially order uncorrelated events. People who build brick walls obtain bricks, cement, and tools, and proceed to lay the wall. They don't worry about whether they obtain bricks, then tools, then cement; or cement, then tools, then bricks; or cement, then bricks, then tools; or cement and bricks, then tools; etc.
Common sense tells them that the order in which the events are completed does not matter; what matters is that the events are completed before the wall is laid. Qualitative simulators fail the common sense challenge when confronted with similar problems. Simulators such as QSIM (Kuipers 1994) attempt to calculate all possible orderings of inherently unordered events, which can lead to intractable branching in models of even modest size. This paper presents a representation (L-behavior diagrams), an algorithm (L-filter), and a simulator (LSIM) which manages this complexity.

Occurrence branching problems

To explore the problems associated with occurrence branching, we examine systems with variables defined to have the simple transition diagram displayed in Figure 1.

Figure 2: Transition diagram for 2 uncoupled variables (A,B)

Note that there are 3 possible behaviors for this system. Next consider the behavior tree when a third independent variable C is added. This system produces 13 possible behaviors (Figure 3). Note that for a system with n variables, the number of behaviors is greater than 2^n. This intractable explosion of behaviors arises due to the combinatorial ordering of variables transitioning from 0 to 1. The phenomenon of creating branches due to the ordering of variables attaining landmarks is known as occurrence branching. In the example systems, variables transition from 0 to 1. In systems where variables transition between three or more values, or are allowed to transition back and forth between values, the number of behaviors would grow even more rapidly. This is the nature of qualitative models.

Systems of coupled variables

Models of physical systems do not experience the explosion of behaviors demonstrated in the previous section.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Figure 3: Transition diagram for 3 uncoupled variables (A,B,C)

Variables in these models are often based on physical properties such as position, velocity and acceleration. The velocity of an object over time is related to both its acceleration and position. Properties of physical systems are captured through constraints. A constraint serves to prune the variable transition graph. A constraint may apply to a single variable (e.g. the variable is constant) or it may apply to multiple variables (e.g. A = B + C). Consider the simple model in Figure 4, which we will refer to as a wishbone. The wishbone has four nodes labeled A, B, C, and D.

Figure 4: Wishbone

The lines connecting nodes represent constraints between nodes. The system has the following constraints: A is in state 1 if and only if B is in state 1. B is in state 1 if and only if C and D are in state 1. The wishbone can experience three possible behaviors, displayed in Figure 5.

Figure 5: Transition diagram for wishbone (A,B,C,D)

The wishbone exhibits traces of occurrence branching since C can transition before, after, or at the same time as D. Next consider a double-wishbone system composed of two wishbones connected through node A (Figure 6).

Figure 6: Double-wishbone

The number of possible behaviors resulting from this composite system is 17 (Figure 7).

Figure 7: Transition diagram for double-wishbone (A,B,C,D,B',C',D')

The large number of behaviors is due to occurrence branching among variables C, D, C', and D'. For a system with m wishbone components (m > 1) joined at node A, the number of behaviors is greater than 3^m.

L-behavior diagrams

The C-filter algorithm employed by QSIM maintains a distinct tuple for each state of each behavior.
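The behavior counts quoted for uncoupled variables (3 behaviors for two variables, 13 for three) follow from a simple recurrence: each new state is created by a nonempty subset of the not-yet-transitioned variables moving together, so the counts are the ordered set partitions. A small sketch (our illustration, not code from the paper):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def behaviors(n):
    """Number of behaviors for n uncoupled variables that each make a single
    0 -> 1 transition: at every step a nonempty subset of the remaining
    variables transitions together, so this counts ordered set partitions."""
    if n == 0:
        return 1
    # choose which k >= 1 variables transition first, then order the rest
    return sum(comb(n, k) * behaviors(n - k) for k in range(1, n + 1))

print([behaviors(n) for n in range(1, 5)])  # [1, 3, 13, 75]
```

The growth quickly outpaces 2^n, which is the intractability the C-filter approach runs into when it enumerates every ordering explicitly.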
This work proposes a compact representation which shares states among multiple behaviors. Consider the system of two uncoupled variables in Figure 1. The states for the behaviors of independent variables are tracked separately (Figure 8). This representation asserts that there is no ordering specified as A and B transition from 0 to 1. Given the requirement that at least one variable must make a transition to create a new state, Figure 8 represents the same set of behaviors as Figure 2. Figure 9 displays the representation for three independent variables.

Figure 8: Representation for two uncoupled variables

Figure 9: Representation for three uncoupled variables

The cost of this representation grows linearly with the number of variables versus the exponential cost of the transition diagrams in Figure 2 and Figure 3. Next we extend this representation to support coupled variables. Consider the wishbone from the previous section. The system is divided into three levels (Figure 10).

Figure 10: Wishbone layering

Based on this layering, a new representation called an L-behavior diagram is constructed. The L-behavior diagram for a wishbone is displayed in Figure 11. The boxes around values represent aggregate states. Aggregate states store the behaviors of coupled variables within the same level. The check for coupling is made with respect to the next higher level, the current level, and all lower levels. For the wishbone, behaviors for C and D are placed in an aggregate state because C and D are jointly constrained by B. The dashed lines connecting an aggregate state to a single state represent corresponding states. Corresponding states co-occur in some branch of the simulation. In Figure 11, the states (0,0), (0,1), and (1,0) in cd1 co-occur with state (0) in b1.
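The layering of Figure 10 can be reproduced by a breadth-first search outward from the interesting variables, which is how the L-filter algorithm assigns levels. The sketch below is ours, with a hypothetical encoding of the constraint graph as a list of two-element frozensets:

```python
from collections import deque

def layer(variables, constraints, interesting):
    """Assign L-filter levels by breadth-first search from the interesting
    variables: interesting variables get level 1, their neighbours level 2,
    and so on. `constraints` is a list of 2-element frozensets, one per
    constraint arc (a hypothetical encoding; only an undirected constraint
    graph is needed)."""
    adjacent = {v: set() for v in variables}
    for arc in constraints:
        u, v = tuple(arc)
        adjacent[u].add(v)
        adjacent[v].add(u)
    level = {v: 1 for v in interesting}
    queue = deque(interesting)
    while queue:
        u = queue.popleft()
        for w in adjacent[u]:
            if w not in level:
                level[w] = level[u] + 1
                queue.append(w)
    return level

# The wishbone of Figure 4 (arcs A-B, B-C, B-D) with A interesting
# reproduces the Figure 10 layering: A in level 1, B in level 2, C and D in level 3
print(layer("ABCD", [frozenset("AB"), frozenset("BC"), frozenset("BD")], ["A"]))
```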
The L-behavior diagram in Figure 11 is equivalent to the transition diagram presented in Figure 5.

Figure 11: L-behavior diagram for wishbone

The Figure 5 representation uses six 4-tuples for a total of 24 variable states.¹ The L-behavior representation uses four 1-tuples and six 2-tuples for a total of 16 variable states. The L-behavior representation is more compact because the states for A and B are shared for different behaviors of C and D.

1. QSIM states refer to a tuple which stores the value of all model variables at a time-point or time-interval. States in this paper refer to individual variable states or aggregate variable states; all variable values are not tracked in a single tuple.

Figure 12 presents a layering for the double-wishbone. In this system there are two sets of aggregate states produced for level 3. One set of aggregate states contains C and D pairs and the other contains C' and D' pairs. The pairs are separated because they are not constrained by a common ancestor in level 2 (i.e. they are decoupled).

Figure 12: Double-wishbone layering

The L-behavior diagram for the double-wishbone is displayed in Figure 13. Note that the double-wishbone L-behavior diagram in Figure 13 is less than twice as large as the wishbone L-behavior diagram in Figure 11, while the double-wishbone transition diagram in Figure 7 is approximately six times larger than the wishbone transition diagram in Figure 5. The L-behavior diagram uses six 1-tuples and twelve 2-tuples for a total of 30 variable states. The transition diagram in Figure 7 uses thirty-four 7-tuples for a total of 238 variable states. This is an order of magnitude difference.

Figure 13: L-behavior diagram for double-wishbone

L-Filter

QSIM uses the C-filter algorithm (Kuipers 1994) to generate a behavior tree. C-filter, like the transition diagrams, attempts to assign an ordering to all variables as they attain landmarks.
Thus, C-filter is subject to intractable occurrence branching. L-behavior diagrams offer an implicit representation for unordered events and avoid the problems associated with occurrence branching. This section presents the L-filter algorithm for efficiently computing L-behavior diagrams.

I-Branching and U-Branching

L-filter distinguishes between interesting occurrence branching (I-branching) and uninteresting occurrence branching (U-branching). What is interesting or uninteresting depends on the user's perspective. The owner of a hydraulic power plant may want to know whether the warning light flashed red before or after the dam overflowed (I-branching). The stray dog downstream is more concerned with its own swimming ability than the ordering of the two events (U-branching).

L-filter is given a list of interesting variables as a part of the system model. I-branching is defined to be the occurrence branching involving interesting variables. U-branching is defined to be the branching involving uninteresting variables. Currently, QSIM does not distinguish between interesting and uninteresting variables in the qualitative model. The result is that many uninteresting states are calculated and displayed. Clancy was the first to address this problem by eliminating branches off of uninteresting variables as a post-process (Clancy & Kuipers 1993). This solves the display problem, but it does not reduce the computational complexity of the model.

L-Filter algorithm

L-filter requires the following five elements:

1. Variable transition diagram
2. Model variables
3. Model constraints
4. Identification of interesting variables
5. Initial state of variables

Given a system model, L-filter proceeds with the following steps:

1. The system diagram is constructed. All variables are identified and constraints between variables are connected with arcs (e.g. Figure 4).
2. Layering is added to the diagram. All interesting variables are placed in level 1.
Other levels are identified by performing a breadth-first search. Variables at depth 1 are placed in level 2, variables at depth 2 are placed in level 3, etc. (e.g. Figure 10). Let the number of levels be n. The highest level refers to level 1 and the lowest level refers to level n.
3. Aggregates within each level are identified (e.g. Figure 12). Aggregates are constructed by grouping variables which are coupled given that variables in the next higher level remain unchanged.¹
4. Initialize all variables and freeze all levels.
5. Set CL = lowest frozen level. Thaw(CL).
6. Advance(CL). ApplyConstraints(CL).²
7. If no new states are created and CL = 1, then end the simulation.
8. If no new states are created, then goto step 5.
9. If CL < n, then Freeze(CL) and set CL = CL + 1. Goto step 6.

1. Two variables in level 2 may be coupled by a constraint on a variable in level 1. For example, if A = B + C where A is in level 1 and B and C are in level 2, B and C are coupled given a constant A. For cyclic models, variables are coupled if they share a common descendant.
2. This step is analogous to running one iteration of C-filter on level CL with the additional constraint that all variable values in levels higher than CL do not change and all variables in lower levels are ignored.

Freeze(level) blocks any transitions of variables in level. Thaw(level) removes the transition block imposed by Freeze(level). Advance generates successor states for all aggregates in level CL. The successor states are obtained by advancing each variable in each aggregate one step in the variable
If all aggregates for some set of variables corresponding to a state are deleted, then the state is deleted. L-Filter applied to double-wishbone For the double-wishbone, the variable transition diagram, model variables, model constraints, and initial variable val- ues were given previously. The next step is to identify the interesting variables. If all variables were interesting, then the occurrence branching due to the ordering of variable transitions would be I-branching. C-filter would be an appropriate algorithm for this case since it explicitly calcu- lates every possible ordering. Suppose instead that the only interesting variable was A. For this system, the ordering of variable transitions for C, D, C’, and D’ constitutes U-branching. Given this knowledge, one would not need to unroll the L-behavior diagram to produce all possible transition orderings. Level 1 would be the only level of interest. This is the advantage of L-filter - I-branching is calculated explicitly while U- branching is implicit in the representation. To illustrate the advantage of L-filter, A is chosen as the only interesting variable. Given the double-wishbone system model, L-filter proceeds in the following steps: Construct the system-diagram (Figure 6). Divide the diagram into levels. Identify aggregates within each level (Figure 12). Set CL = 3. Thaw(3). Advance(3). ApplyConstraints(3). This produces cdl and c’d’l ? Since there are no lower levels, level 3 is advanced again. This time, no states which satisfy the constraints are produced. Thaw(2). Advance(2). ApplyConstraints(2). B transitions from 0 to 1 and constraints are checked between B and A. No states are possible since A and B must have the same value. Thaw( 1). Advance( 1). ApplyConstraints( 1). This pro- duces a2. Freeze( 1). Advance(2). ApplyConstraints(2). This produces b2 and b’2. Freeze(2). 10 Advance(3). ApplyConstraints(3). This produces cd2, cd3, cd4, c’d’2, c’d’3, and c’d’4. 
11. Since there are no lower levels, level 3 is advanced again. This time, no states which satisfy the constraints are produced.
12. Thaw(2). Advance(2). ApplyConstraints(2). No new states are possible since B is in a terminal state.
13. Thaw(1). Advance(1). ApplyConstraints(1). No new states are possible since A is in a terminal state.

1. Note that states cd1 and c'd'1 can be computed independently.

At this point, the algorithm terminates. The result is the L-behavior diagram in Figure 13.

LSIM

LSIM is a simulator which uses L-filter to run QSIM models. This section discusses the changes which enable L-filter to run on QSIM models.

Interesting variables

QSIM models specify variables, constraints, and the initial variable state.² To apply L-filter, the QSIM model is extended by identifying interesting variables as a part of the model. While variables of interest to the user often represent physical properties such as distance, velocity and acceleration, a modeler will often add abstract variables to model the hidden complexities of a system. A problem arises when the abstract variables create an intractable number of branches and obscure interesting behaviors.

The following scenario is commonplace: a modeler discovers that the simulation is generating too many behaviors. The modeler introduces new variables to place additional constraints on the system behavior. The revised model now generates more behaviors due to occurrence branching caused by the newly introduced variables. For example, a user may be interested in the velocity v and height h of a bouncing ball, but the modeler may add variables KE = K * v² and PE = L * h to model the kinetic and potential energy of the ball. Depending on the model, C-filter may generate more behaviors when KE and PE are added. By identifying v and h as the only interesting variables, L-filter attempts to reduce the cost of U-branching due to KE and PE by not explicitly ordering uncoupled changes in the two variables.³
Variable transition diagrams

Variable values in QSIM are a 2-tuple consisting of a magnitude and a direction. Magnitudes transition between landmarks and intervals, and directions transition between increasing, steady, and decreasing. Variable transitions in QSIM are constrained by continuity (all variables in QSIM are continuous). The transition of a continuous variable from a landmark to an interval is instantaneous. This instantaneous change restricts the possible transitions that other variables in corresponding states can make. For example, if a variable A transitions from (0,inc) to ((0,inf),inc), then a variable B in corresponding state ((0,inf),dec) cannot transition to (0,dec). If a variable transitions from a landmark to an interval, then all corresponding states in the same and lower levels must obey the time-point to time-interval transition rules. When a variable transitions from an interval to a landmark, all corresponding states in the same and lower levels cannot transition from a steady value to a non-steady value (analytic function constraint). Other transitions are possible since a time-interval to time-point transition is not instantaneous. A set of transition rules is given in (Kuipers 1994). LSIM adds time-point and time-interval transition rules to support continuous variables.

2. QSIM generates all possible starting states given incomplete initial state information. L-filter assumes a single initial state but could be extended to generate all possible starting states.
3. This would be true if KE and PE were uncoupled. A modeler is likely to define KE + PE to be constant; thus, KE and PE would be coupled.

Other Global Constraints

QSIM has a number of global constraints which should be detected and enforced: infinite time/infinite value, non-intersection, energy, analytic function, etc.
Only the infinite time/infinite value and the analytic function constraints are enforced at the current time; however, L-filter does not preclude the implementation of other global constraints.

LSIM applied to a QSIM model

The current implementation of LSIM is severely limited because it does not support the full complement of local and global constraints available in QSIM. A very simple QSIM model which exhibits occurrence branching was chosen to illustrate that LSIM can provide an advantage over QSIM. The system contains two objects traveling on a line at increasing speeds in opposite directions. Branching is introduced as the object velocities attain unevenly spaced landmarks (Figure 14). Constraints: v1 increasing, v2 decreasing, v1 = d/dt x1.

Figure 14: Two accelerating objects moving in opposite directions

Figure 15: System layering

Both LSIM and QSIM are capable of producing the desired behavior for d (d is positive and increasing). LSIM used 28 variable states, while QSIM was unable to solve the problem with 400 5-tuples (over 2000 variable states). When the QSIM state limit was set to 800, the simulator consumed over nine megabytes of memory before crashing. This example shows that LSIM can produce efficiency gains for QSIM models. It also demonstrates the U-branching problem for two variables with more than two landmarks.

Conclusions

This paper establishes the intractable nature of attempting to order uncorrelated events. The unordered transition of only three variables is shown to produce an order of magnitude increase in complexity for QSIM-style systems which use single tuples to represent system states. The L-behavior representation, which distinguishes between coupled and decoupled states, is shown to be potentially more efficient than the QSIM approach.
L-filter computes L-behavior diagrams, and LSIM uses L-filter and additional qualitative reasoning constraints to simulate QSIM models. For a simple model where U-branching is prevalent, LSIM demonstrates a two orders of magnitude benefit over QSIM. The hope is that this work will inspire a mature version of LSIM which supports the full complement of QSIM constraints and features. LSIM can then attempt to simulate models which were previously thought to be intractable.

The interesting variable is chosen to be d, the distance between the two objects. The system diagram is displayed in Figure 15.

References

Kuipers, B. J. 1994. Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge. Cambridge, MA: MIT Press.

Clancy, D., and Kuipers, B. J. 1993. Behavior Abstraction for Tractable Simulation. In Proceedings of the Seventh International Workshop on Qualitative Reasoning, 57-64.
Trajectory Constraints in Qualitative Simulation

Giorgio Brajnik*
Dip. di Matematica e Informatica
Università di Udine
Udine, Italy
giorgio@dimi.uniud.it

Daniel J. Clancy
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712
clancy@cs.utexas.edu

Abstract

We present a method for specifying temporal constraints on trajectories of dynamical systems and enforcing them during qualitative simulation. This capability can be used to focus a simulation, simulate non-autonomous and piecewise-continuous systems, reason about boundary condition problems and incorporate observations into the simulation. The method has been implemented in TeQSIM, a qualitative simulator that combines the expressive power of qualitative differential equations with temporal logic. It interleaves temporal logic model checking with the simulation to constrain and refine the resulting predicted behaviors and to inject discontinuous changes into the simulation.

Introduction

State space descriptions, such as differential equations, constrain the values of related variables within individual states and are often used in models of continuous dynamical systems. Besides continuity, which is implicit, these models cannot represent non-local information constraining the behavior of the system across time. Because qualitative simulation (Kuipers 1994; Forbus 1984) uses an abstraction of ordinary differential equations, it is based on a state space description too. The discretization of system trajectories into abstract qualitative states, however, makes the representation used by qualitative simulation amenable to the application of temporal formalisms to specify explicit across-time constraints. Figure 1 describes the relationship between the sources of constraining power within TeQSIM.
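Checking an across-time property over a discretized trajectory that is still being generated naturally yields three possible answers: already true, already false, or not yet determined. The following is our minimal sketch of that idea, not TeQSIM's model checker; the state encoding is an assumption made for the example.

```python
def eventually(p, states, completed):
    """Three-valued check of 'eventually p' over a behavior prefix:
    True if some known state satisfies p, False if the behavior is
    complete and none does, None (undetermined) otherwise."""
    if any(p(s) for s in states):
        return True
    return False if completed else None

def always(p, states, completed):
    """'always p' fails as soon as any state violates p, holds once the
    behavior is complete with no violation, and is otherwise undetermined."""
    if any(not p(s) for s in states):
        return False
    return True if completed else None

# On the incomplete prefix [0, 0], 'eventually positive' is still open
print(eventually(lambda s: s > 0, [0, 0], completed=False))  # None (undetermined)
```

The undetermined answer is what lets a checker be interleaved with an incremental simulation: a behavior that is neither satisfied nor violated is simply re-examined after it is extended.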
TeQSIM (Temporally Constrained QSIM, pronounced tek'sim) restricts the simulation generated by QSIM (Kuipers 1994) to behaviors (i.e. sequences of qualitative states) that satisfy continuous and discontinuous behavioral requirements specified via trajectory constraints.¹ In general, these trajectory constraints can be used to restrict the simulation to a region of the state space in order to focus the simulation, simulate non-autonomous systems, reason about boundary condition problems and incorporate observations into the simulation.

Trajectory constraints are formulated using a combination of temporal logic expressions, a specification of discontinuous changes and a declaration of external events. Temporal logic expressions are written using a variation of a propositional linear-time temporal logic (Emerson 1990) that combines state formulae specifying both qualitative and quantitative information about a qualitative state with temporal operators, such as until, always, and eventually, that quantify such properties over a sequence of states. Temporal logic model checking is interleaved with the simulation to ensure that all and only the behaviors satisfying temporal logic expressions are included within the resulting description. Our logic extends the work done by Shults and Kuipers (Kuipers & Shults 1994; Shults & Kuipers 1996) in two ways. A three-valued logic is used to allow an expression to be conditionally entailed when quantitative information contained within the expression can be applied to a behavior to refine it. In addition, the model checking algorithm is designed to handle the incremental nature of a qualitative simulation. An undetermined result occurs whenever the behavior is not sufficiently determined to evaluate the truth of a temporal logic expression.

Discontinuous change expressions define when a particular discontinuity can occur and specify the new values for the variables that change discontinuously. This information is propagated through the model to determine the variables that are indirectly affected by the change.
Finally, external events enable the modeler to refer to otherwise unpredictable events and to provide a quantitatively bounded temporal correlation between the occurrence of these events and distinctions predicted by the model.

*The research reported in this paper has been performed while visiting the Qualitative Reasoning Group at the University of Texas at Austin.

¹A trajectory for a tuple of variables <v1, ..., vn> over a time interval [a, b] ⊆ ℜ⁺ ∪ {0, +∞} is defined as a function τ mapping time to variable values defined over the set of the extended reals, i.e. τ : [a, b] → (ℜ ∪ {−∞, +∞})ⁿ.

TeQSIM uses three sources of information to constrain a simulation: structural constraints are specified as equations relating model variables within individual states; implicit continuity constraints restrict the relationship between variable values across time to ensure the continuity of each variable; and trajectory constraints restrict the behavior of individual variables and the interactions between variables. Each point in the above diagram represents a real-valued trajectory. A qualitative behavior corresponds to a region within this space of trajectories. Discontinuous changes specified by the user cause a relaxation of the continuity constraints applied during simulation (dotted line surrounding the continuity constraints). Incorporating external events into the simulation extends the set of trajectories consistent with the structural constraints (dotted line surrounding the structural constraints). The qualitative behaviors generated by QSIM correspond to the trajectories consistent with both the unextended structural constraints and the unrelaxed continuity constraints (thick boundary region), while the set of behaviors generated by TeQSIM corresponds to those trajectories consistent with all three constraint types (shaded region).

Figure 1: TeQSIM constraint interaction.

A Control Problem

TeQSIM has been applied to a variety of problems to address a range of tasks (Brajnik & Clancy 1996b). This section provides a simple example to demonstrate how trajectory information can be used to constrain a qualitative simulation in a realistic setting. Suppose that operators of a dam are told of a forecasted perturbation to the flow of water into the lake. When involved in risk assessment decision making, they face the control problem of determining how to react, in terms of operations on gates and turbines, in order to avoid flooding either the lake or the downstream areas. To show some of TeQSIM's capabilities, we use a simple model of a lake, consisting of a reservoir, an incoming river and an outgoing river; lake level and outflow are regulated through a dam that includes a single floodgate. The implemented model uses quantitative information concerning Lake Travis, near Austin (TX), obtained from the Lower Colorado River Authority. Quantitative information is provided by numerical tables which, in this specific case, are interpolated in a step-wise manner to provide lower and upper bounds for any intermediate point. Figure 2 shows a portion of the rating table of a floodgate of Lake Travis. Its columns indicate the lake stage, i.e. level with respect to the mean sea level, the gate opening, and the gate discharge rate. A similar table correlates the lake stage with its volume.

Stage (ft)   Opening (ft)   Discharge rate (cfs)
665.00       1.00           638.82
665.00       2.00           1277.65
...
720.00       8.50           6200.00

Figure 2: Rating table for floodgates of Lake Travis.

The simulation starts from a state with initial values for stage and gate opening that guarantee a steady outflow in the downstream leg of the river. It is forecasted
The task is to determine if there is any risk of overflowing the dam and, if so, what actions can be taken to prevent this. We use TeQSIM to specify trajectory constraints on input variables: input flow rate and gate opening. The following trajectory constraints specify the perturba- tion to the inflow rate (Colorado-up). (EVENT step-up :time (2 3)) (EVENT step-down :time (17 24)) (DISC-CHANGE (event step-up) ((Colorado-up (if* inf) :range (1500 1800)) ) ) (DISC-CHANGE (event step-down) ((Colorado-up if*))) The declaration of an event (e.g. (EVENT step-up :time (2 3))) d fi e nes a name for a time-point and provides quantitative bounds (i.e. between days 2 and 3 from the start of the simulation). The expres- sion (DISC-CHANGE (event step-up ((Colorado-up (if * inf > : range (1500 1800))) > states that when event step-up occurs, the qualitative magnitude of Colorado-up will instantaneously change into the in- terval (if * inf > and its value will be bounded by the range [1500, 18001. A simulation using these trajectory constraints shows that an overflow of the lake is indeed possible if no intervening action is taken. To guarantee that no overflow occurs, a significant opening action is re- quired. To this end, we postulate that an opening ac- tion to at least 4 feet occurs after Stage reaches the top-of-pool threshold. We are interested in know- ing the latest time at which such an action can oc- cur to prevent an overflow. The previous trajectory specification is extended by including an additional event (corresponding to the opening of the gate), the corresponding discontinuous action (the gate opening changes from its initial value op*=l to an interme- diate-value within (op* max) con&rained to be greater than or equal to 4) &d the drdering with respect to the threshold (using the temporal operator BEFORE applied to a state formula referring to the qualitative value of Stage and the external event). 
By focusing the simulation on behaviors that lead to an overflow condition (using the EVENTUALLY temporal operator applied to a state formula stating that Stage reaches the value Top), TeQSIM determines a lower bound for the temporal occurrence of actions leading to an overflow.

(EVENT open)
(DISC-CHANGE (event open)
  ((opening (op* max) :range (4 NIL))))
(BEFORE (qvalue stage (top-of-pool NIL))
        (EVENTUALLY (qvalue stage (top NIL)))
        open)

980 Model-Based Reasoning

Figure 4: TeQSIM architecture.

This simulation tells us (figure 3) that if the gate is opened to at least 4 feet after 15.5 days then an overflow may occur. Taking the action before 15.5 days, however, will prevent such an outcome. After constraining the action to occur before 15.5 days and removing the eventually constraint, a third simulation produces only two behaviors, verifying that an overflow cannot occur. In a similar manner, we infer an upper bound of 6 ft for the size of the opening action, given a restriction on the outflow rate expressed via the temporal logic expression (always (value-<= Colorado-dn 350)). It is worth noting the amount of uncertainty present in even such a simple problem: functions (especially the discharge rate) may be non-linear, numeric envelopes are based on a rough step-wise interpolation of tables, and the specification of input trajectories is uncertain (i.e. ranges for times and values). Nevertheless, with a few simple simulations a reasonably useful and reliable result has been achieved.

TeQSIM Architecture and Theory

TeQSIM can be divided into two main components. The preprocessor modifies the qualitative differential model and decomposes the trajectory specification into temporal logic and discontinuous change expressions.
The simulation and model checking component integrates temporal logic model checking into the simulation performed by QSIM by filtering and refining qualitative behaviors according to a set of temporal logic expressions; it also injects discontinuous changes into the simulation. Figure 4 provides an overview of the system architecture. The user provides trajectory constraints to TeQSIM in the form of a trajectory specification that consists of an external event list and a set of extended temporal logic and discontinuous change expressions. The external event list is a totally ordered sequence of named, quantitatively bound time points. Events are represented as landmarks of an auxiliary variable added to the model. The additional variable causes QSIM to branch on different orderings between external events and internal qualitative events identified during the simulation. The occurrence of external events is restricted by their quantitative bounds and the trajectory constraints specified by the modeler. The following subsections include a summary of the formal framework developed for the trajectory specification language. A more detailed treatment of the language and main theorems, along with proofs and additional lemmas, is given in (Brajnik & Clancy 1996a; 1996b).

Guiding and refining the simulation

Model checking and behavior refinement are performed by the Temporal Logic Guide (TL-Guide). Each time QSIM extends a behavior by adding a new state, the behavior is passed to the TL-Guide. The behavior is refuted if it contains sufficient information to determine that each of its completions fails to satisfy the set of TL expressions. If the behavior conditionally models the TL expressions, then it is refined by incorporating relevant quantitative information contained within the TL expressions. Otherwise, the behavior is retained unchanged.
The trajectory specification language includes propositional state formulae that can refer to qualitative and quantitative values of variables within states. Qualitative value information is specified using (qvalue v (qmag qdir)), where v is a model variable, qmag is a qualitative magnitude, and qdir is one of {inc, std, dec}. NIL can be used anywhere to match anything. Such a proposition is true for a state exactly when the qualitative value of v in the state matches the description (qmag qdir). Path formulae are defined recursively as either state formulae or combinations of path formulae using temporal and boolean operators. A state formula is true of a behavior if it is true for the first state in the behavior. The path formula (until p q), where both p and q are path formulae, is true for a behavior if p holds for all suffixes of the behavior preceding the first one where q holds, while (strong-next p) is true for a behavior if it contains at least two states and p holds in the behavior starting at the second state. Other temporal operators can be defined as abbreviations from these two. We have extended those defined in (Shults & Kuipers 1996) to provide a more abstract language to simplify the specification of assertions. Temporal logic formulae are given meaning with respect to linear-time interpretation structures. These structures are extended from their typical definition (e.g. (Emerson 1990)) in order to accommodate the refinement of behaviors with quantitative information. In addition to defining a sequence of states and a propositional interpretation function, means for representing, generating and applying refinement conditions are provided. Refinement conditions are needed because the language provides quantitative propositions whose truth value cannot always be determined.
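The semantics of (until p q) and (strong-next p) over closed behaviors can be sketched with a small recursive evaluator. This is an illustrative reading of the definitions above, not the TeQSIM/TL implementation; a behavior is represented as a list of states and a state formula as a predicate on one state.

```python
# Illustrative sketch: path formulae evaluated over a closed behavior given as
# a list of states. A state formula (a Python predicate) holds for a behavior
# iff it holds in the first state; until/strong-next follow the text above.
def holds(formula, behavior):
    if callable(formula):                      # state formula
        return bool(behavior) and bool(formula(behavior[0]))
    op = formula[0]
    if op == 'strong-next':                    # (strong-next p)
        return len(behavior) >= 2 and holds(formula[1], behavior[1:])
    if op == 'until':                          # (until p q)
        p, q = formula[1], formula[2]
        for i in range(len(behavior)):
            if holds(q, behavior[i:]):
                return True
            if not holds(p, behavior[i:]):
                return False
        return False
    raise ValueError(op)

# Other operators are abbreviations, e.g. eventually p == (until TRUE p).
def eventually(p):
    return ('until', lambda s: True, p)

states = [{'Stage': 'below'}, {'Stage': 'below'}, {'Stage': 'top'}]
print(holds(eventually(lambda s: s['Stage'] == 'top'), states))  # True
```

Note this evaluator only handles closed behaviors; the three-valued treatment of non-closed behaviors is discussed in the Model Checking subsection below.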
Qualitative Physics 981

TeQSIM produces two behaviors where Stage reaches Top. The first behavior is shown above. The opening action occurs at T3. The numeric bounds on this time-point show that an overflow can occur only if the opening action is performed after 15.5 days. The second behavior provides similar results.

Figure 3: Lake simulation with opening actions leading to overflow.

When ambiguity occurs for a formula, the interpretation is required to provide necessary and sufficient refinement conditions on quantitative ranges to disambiguate the truth value of the formula. A refinement condition is a boolean combination of inequalities between the partially known numeric value of a variable in a state and an extended real number. The trajectory specification language contains potentially ambiguous state formulae like (value-<= v n), where v is a variable and n ∈ ℝ ∪ {−∞, +∞}. The formula is true in a state s iff ∀x ∈ R(v, s): x ≤ n, where R(v, s) denotes the range of possible numeric values for variable v in state s; it is false iff ∀x ∈ R(v, s): n < x; otherwise, it is conditionally true. In such a case, the refinement condition is that the least upper bound of the possible numeric values of v is at most n (i.e. v_s ≤ n, where v_s is the unknown value of v in s). Applying a refinement condition to a state yields a new, more refined state. For example, the formula (value-<= X 0.3) generates the condition X_s ≤ 0.3 when interpreted on a state s where R(X, s) = [0, 1.0]. Applying the condition to s leads to a new state s' where R(X, s') = [0, 0.3]. Notice that ambiguity is not a purely syntactic property, but rather depends on state information. For example, (value-<= X 0.3) is (unconditionally) true in a state where R(X, s) = [0, 0.25], but only conditionally true if R(X, s) = [0, 1.0]. Due to potential ambiguity, two entailment relations are used to define the semantics of formulae.
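The three-valued interpretation of (value-<= v n), and the effect of applying its refinement condition, can be sketched directly from the definitions above. This is a minimal illustration, assuming ranges are closed intervals; the function names are not TeQSIM's.

```python
# Sketch of the three-valued semantics of (value-<= v n) on R(v, s) = [lo, hi].
# When the truth value is ambiguous, the necessary and sufficient refinement
# condition is v_s <= n; applying it tightens the state's range.
def value_le(lo, hi, n):
    """Return 'T', 'F', or ('U', condition) for (value-<= v n)."""
    if hi <= n:                       # every possible value satisfies x <= n
        return 'T'
    if lo > n:                        # every possible value violates x <= n
        return 'F'
    return ('U', ('<=', 'v_s', n))    # conditionally true

def apply_refinement(lo, hi, n):
    """Refine R(v, s) = [lo, hi] under the condition v_s <= n."""
    return (lo, min(hi, n))

print(value_le(0.0, 0.25, 0.3))         # 'T' (unconditionally true)
print(value_le(0.0, 1.0, 0.3))          # ambiguous: refinement condition needed
print(apply_refinement(0.0, 1.0, 0.3))  # (0.0, 0.3)
```

The last call mirrors the example in the text: refining R(X, s) = [0, 1.0] under X_s ≤ 0.3 yields R(X, s') = [0, 0.3].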
The first one, called models, characterizes unambiguously true formulae, while the second one, called conditionally models, characterizes ambiguous formulae. To avoid hindering the simulation process, the usage of ambiguous formulae must be restricted. The problem is that an arbitrary formula may yield several alternative refinement conditions. A disjunction of refinement conditions can be applied to states, but it requires the introduction of a new behavior that is qualitatively identical to the original behavior. For example, when interpreted on a particular state, (or (value-<= X 0.5) (value-<= Y 15)) may yield the condition (X_s ≤ 0.5 ∨ Y_s ≤ 15). Applying such a condition yields a state s' in which R(X, s') = [..., 0.5] and a state s'' where R(Y, s'') = [..., 15]. A similar, more severe problem occurs with path formulae. The set of admissible formulae is a syntactic restriction that excludes formulae that may result in disjunctive conditions. Even though such a restriction reduces the expressiveness of the language, it does not have an important impact from a practical point of view. If the modeler adheres to the general principle that all important distinctions are made explicit in the qualitative model (i.e. introduces appropriate landmarks with associated numerical bounds instead of using quantitative bounds), then the restriction to admissible formulae does not reduce the applicability of TeQSIM.

Discontinuous Changes

The injection of discontinuous changes into qualitative simulation consists of identifying when the change occurs and then propagating its effects through the model to determine which variables inherit their values across the change and which don't. A discontinuous change is specified by (disc-change precond effect), where precond is a boolean combination of qvalue propositions and effect is a list of expressions of the form (variable qmag [:range range]).
This expression is translated into the temporal logic path formula (occurs-at precond (strong-next effect')) where effect' is a conjunction of qvalue, value-<= and value->= formulae derived from effect. This formula is true for a behavior iff effect' is true for the state immediately following the first state in which precond is true. The Discontinuous Change Processor monitors states as they are created and tests them against the preconditions of applicable discontinuous change expressions. A new state is inserted into the simulation following state s if the preconditions are satisfied and a discontinuous change is required to assert the effects in the successor states. A new, possibly incomplete state s' is created by asserting the qualitative values specified within the effects and inheriting values from s for variables not affected by the discontinuous change via continuity relaxation (see below). All consistent completions of s' are computed and installed as successors of s. Continuity relaxation propagates the effects of a discontinuous change through the model by identifying potentially affected variables. The following assumptions are made: (i) state variables (variables whose time derivative is included in the model) are piecewise-C¹ (i.e. continuous everywhere, and differentiable everywhere except at isolated points); (ii) non-state variables are at least piecewise-C⁰ (i.e. continuous everywhere except at isolated points); (iii) all discontinuous changes in input variables are explicitly specified; and (iv) the model is valid during the transient caused by a discontinuous change. These assumptions suffice to support an effective criterion, proven to be sound, for automatically identifying all the variables that are potentially affected by the simultaneous discontinuity of variables in a set A = {V1 ... Vn}.
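The translation of (disc-change precond effect) into (occurs-at precond (strong-next effect')), described at the start of this subsection, can be sketched as a syntactic transformation. The tuple encoding and helper name below are illustrative assumptions, not TeQSIM's actual representation.

```python
# Sketch: translate a disc-change specification into the path formula
# (occurs-at precond (strong-next effect')), where effect' conjoins qvalue
# and value-<=/value->= formulae derived from the effect list.
def translate_disc_change(precond, effects):
    conjuncts = []
    for eff in effects:
        var, qmag = eff[0], eff[1]
        conjuncts.append(('qvalue', var, (qmag, None)))   # new magnitude
        rng = eff[2] if len(eff) > 2 else None
        if rng:
            lo, hi = rng
            if lo is not None:
                conjuncts.append(('value->=', var, lo))   # lower range bound
            if hi is not None:
                conjuncts.append(('value-<=', var, hi))   # upper range bound
    effect_prime = ('and',) + tuple(conjuncts)
    return ('occurs-at', precond, ('strong-next', effect_prime))

# The gate-opening action from the lake example, opening to at least 4 ft:
f = translate_disc_change(('qvalue', 'stage', ('top-of-pool', None)),
                          [('opening', ('op*', 'max'), (4, None))])
print(f[0], f[2][0])  # occurs-at strong-next
```

A range bound of NIL (here None) simply produces no inequality conjunct, matching the (4 NIL) range used in the example specification.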
Given a qualitative differential model M, a variable Z is totally dependent on a set of variables A (written A ⇝ Z) iff M includes a non-differential, continuous relation R(X1 ... Xi, Z, Xi+1 ... Xn) with n ≥ 1 such that ∀i: Xi ∈ A or A ⇝ Xi. For example, if M includes the constraint (add X Y Z) then {Y, Z} ⇝ X. Furthermore, let TD(A) represent the set of variables totally dependent on A (i.e. TD(A) = {X | A ⇝ X}). Let E be the set of input variables and S the set of state variables of M. Then define PD_A (the set of variables that are potentially affected by the discontinuity of variables in A) as the maximum set of variables of M that satisfies:

1. A ⊆ PD_A (by definition of A);
2. S ∩ PD_A = ∅ (by assumption (i));
3. E ∩ PD_A = A (by assumption (iii));
4. TD(S ∪ E − A) ∩ PD_A = ∅ (by definition of total dependency: if Z totally depends on a set of continuous variables, then Z must be continuous too and cannot belong to PD_A).

Continuity relaxation handles discontinuous changes of variables in A by computing the set PD_A so that, during a transient, variables in PD_A are unconstrained and can change arbitrarily, whereas those not in PD_A retain their previous qualitative magnitude. The direction of change (i.e. qdir) for all variables is assumed to be potentially discontinuous. In the simulation shown in figure 3, the discontinuity occurring to Opening at T3 cannot affect Stage because the latter is totally dependent on the state variable Volume. On the other hand, the discontinuity affects the magnitude of Discharge-rate (not shown in the figure) because none of the conditions above apply. Notice also how the discontinuities affect the qdir of variables.

Model Checking

The temporal logic model checking algorithm is designed to evaluate a QSIM behavior with respect to a set of temporal logic formulae as the behavior is incrementally developed. This allows behaviors to be filtered and refined as early as possible during the simulation.
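The computation of PD_A from the four conditions above can be sketched as a shrinking fixpoint: start from all non-state, non-excluded-input variables plus A, then repeatedly drop any variable all of whose constraint partners are known to be continuous. This is an interpretive sketch, not the TeQSIM implementation; constraints are given as plain variable tuples.

```python
# Sketch of continuity relaxation: compute the set of potentially affected
# variables PD_A. A variable is removed once, in some constraint containing
# it, every other variable is known to be continuous (condition 4).
def potentially_affected(variables, constraints, state_vars, input_vars, A):
    pd = set(variables) - set(state_vars)      # condition 2: no state variables
    pd -= (set(input_vars) - set(A))           # condition 3: only inputs in A
    pd |= set(A)                               # condition 1: A is included
    changed = True
    while changed:
        changed = False
        for con in constraints:
            for z in list(pd & set(con)):
                if z in A:
                    continue
                others = [x for x in con if x != z]
                if others and all(x not in pd for x in others):
                    pd.discard(z)              # z totally depends on continuous vars
                    changed = True
    return pd

# The lake example: Stage is tied to the state variable Volume, so it stays
# continuous; Discharge depends on the discontinuous Opening, so it is affected.
pd = potentially_affected(
    {'Opening', 'Volume', 'Stage', 'Discharge'},
    [('Volume', 'Stage'), ('Stage', 'Opening', 'Discharge')],
    state_vars={'Volume'}, input_vars={'Opening'}, A={'Opening'})
print(sorted(pd))  # ['Discharge', 'Opening']
```

This reproduces the paper's example: the discontinuity of Opening at T3 cannot affect Stage but does affect the magnitude of Discharge-rate.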
Kuipers and Shults (1994) developed a model checking algorithm to prove properties about continuous dynamical systems by testing a completed simulation against temporal logic expressions. We have extended this work to deal with conditionally true formulae and with behaviors that are not closed, i.e. still being extended by the simulator. The model checking algorithm, described in (Brajnik & Clancy 1996b), computes the function Γ: Formulae × Behaviors → {T, F, U} × C, where C is the set of all possible refinement conditions, including the trivial condition TRUE. A definite answer (i.e. T or F) is provided when the behavior contains sufficient information to determine the truth value of the formula. For example, a non-closed behavior b will not be sufficiently determined with respect to the formula (eventually p) if p is false for all suffixes of b, since p may become true anytime in the future. A behavior b is sufficiently determined with respect to a temporal logic formula φ (written b ▷ φ) whenever there is enough information within the behavior to determine a single truth value for all of its completions. If a behavior is not sufficiently determined for a formula, then U is returned by the algorithm. The definition of sufficiently determined (omitted due to space restrictions) is given recursively on the basis of the syntactic structure of the formula. We will write b ⋫ φ to signify that b is not sufficiently determined for φ. Notice that indeterminacy is a property independent from ambiguity: the former is related to incomplete behaviors, whereas the latter deals with ambiguous information present in states of a behavior. The following theorem supports our use of temporal logic model checking for guiding and refining the simulation.

Theorem 1 (TL-Guide is sound and complete) Given a QSIM behavior b and an admissible formula φ, then TL-Guide:

1. refutes b iff b ▷ φ and there is no way to extend b to make it a model for φ.

2.
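The notion of sufficiently determined for (eventually p) can be sketched with a small three-valued check. This is illustrative only (state formulae as predicates, refinement conditions omitted), not the algorithm of (Brajnik & Clancy 1996b).

```python
# Sketch of three-valued checking of (eventually p) on a possibly non-closed
# behavior: on an open behavior the answer can be T (p already holds in some
# state) or U (p may still become true later); F is only possible once the
# behavior is closed.
def check_eventually(p, states, closed):
    if any(p(s) for s in states):
        return 'T'
    return 'F' if closed else 'U'

top = lambda s: s['Stage'] == 'top'
print(check_eventually(top, [{'Stage': 'below'}], closed=False))  # 'U'
print(check_eventually(top, [{'Stage': 'below'}], closed=True))   # 'F'
print(check_eventually(top, [{'Stage': 'top'}], closed=False))    # 'T'
```

The 'U' case is exactly the example in the text: p false on every suffix of an open behavior leaves the formula undetermined, since p may become true in a completion.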
retains b without modifying it iff (a) b ▷ φ and b is a model for φ; or (b) b ⋫ φ and there is no necessary refinement condition C for refining b into a model for φ.

3. replaces b with b' iff (a) b ▷ φ and b conditionally models φ and there exists C that is necessary and sufficient for refining b into a model for φ; or (b) b ⋫ φ and there is a necessary condition C for refining b into a model for φ.

Proof. By induction on the length of φ; see (Brajnik & Clancy 1996b).

Discussion and Conclusions

We are currently exploring several directions to extend the expressiveness of the trajectory specification language. Enabling the comparison of magnitudes of variables across states (e.g. to specify a decreasing oscillation) requires a move from a propositional logic to some sort of first order logic. Expressing the possibility of a discontinuous change requires a more complex relationship between preconditions and effects of a discontinuous change. Addressing discontinuous feed-forward control problems requires that preconditions be specified using arbitrary temporal logic expressions, not simply state formulae. Simulating hybrid discrete-continuous systems calls for a more flexible specification of partially ordered external events. While the practical time-complexity of a TeQSIM simulation is dominated by quantitative inferences performed by QSIM, we are still investigating improvements to our algorithm with respect to complexity. The incorporation of trajectory information into a qualitative simulation has been explored by DeCoste (1994), who introduces sufficient discriminatory envisionments to determine whether a goal region is possible, impossible or inevitable from each state of the space. Washio and Kitamura (1995) also present a technique that uses temporal logic to perform a history oriented envisionment to filter predictions.
TeQSIM, within a rigorously formalized framework, provides a more expressive language not limited to reachability problems, refines behaviors as opposed to just filtering them, and incorporates discontinuous changes into behaviors. Discontinuities have been investigated by Nishida and Doshita (1987), Forbus (1989), Iwasaki and colleagues (1995), and others. The continuity relaxation method adopted in TeQSIM is conceptually simpler, sound, widely applicable and practically effective. Our trajectory specification language is similar in expressiveness to both Allen's interval algebra (Allen 1984) and Dechter, Meiri and Pearl's temporal constraint networks (Dechter, Meiri, & Pearl 1991). The usage of the language in TeQSIM, however, is quite different from these two formalisms. Instead of asserting temporal constraints in a database of assertions and querying whether certain combinations of facts are consistent, TeQSIM checks that a database of temporally related facts generated by QSIM satisfies a set of temporal logic constraints. TeQSIM supports a general methodology for incorporating otherwise inexpressible trajectory information into the qualitative simulation process. The correctness of TL-Guide, of the Discontinuous Change Processor, and of QSIM guarantees that all possible trajectories of the modeled system that are compatible with the model, the initial state and the trajectory constraints are included in the generated behaviors. In addition, the completeness of TL-Guide ensures that all behaviors generated by TeQSIM are potential models of the trajectory constraints specified by the modeler. For these reasons, and its limited complexity, TeQSIM can be applied to problems where QSIM alone would not be appropriate.

Acknowledgments

We thank Benjamin Shults for letting us use his TL program to implement TeQSIM. This work has taken place in the Qualitative Reasoning Group at the Artificial Intelligence Laboratory, The University of Texas at Austin.
Research of the Qualitative Reasoning Group is supported in part by NSF grants IRI-9216584 and IRI-9504138, by NASA grants NCC 2-760 and NAG 2-994, and by the Texas Advanced Research Program under grant no. 003658-242. QSIM and TeQSIM are available for research purposes via http://www.cs.utexas.edu/users/qr.

References

Allen, J. F. 1984. Towards a general theory of action and time. Artificial Intelligence 23:123-154.

Brajnik, G., and Clancy, D. J. 1996a. Guiding and refining simulation using temporal logic. In Proc. of the Third International Workshop on Temporal Representation and Reasoning (TIME'96). Key West, Florida: IEEE Computer Society Press. To appear.

Brajnik, G., and Clancy, D. J. 1996b. Temporal constraints on trajectories in qualitative simulation. Technical Report UDMI-RT-01-96, Dip. di Matematica e Informatica, University of Udine, Udine, Italy.

Dechter, R.; Meiri, I.; and Pearl, J. 1991. Temporal constraint networks. Artificial Intelligence 49:61-95.

DeCoste, D. 1994. Goal-directed qualitative reasoning with partial states. Technical Report 57, The Institute for the Learning Sciences, University of Illinois at Urbana-Champaign.

Emerson, E. 1990. Temporal and modal logic. In van Leeuwen, J., ed., Handbook of Theoretical Computer Science. Elsevier Science Publishers/MIT Press. 995-1072. Chap. 16.

Forbus, K. 1984. Qualitative process theory. Artificial Intelligence 24:85-168.

Forbus, K. 1989. Introducing actions into qualitative simulation. In IJCAI-89, 1273-1278.

Iwasaki, Y.; Farquhar, A.; Saraswat, V.; Bobrow, D.; and Gupta, V. 1995. Modeling time in hybrid systems: how fast is "instantaneous"? In IJCAI-95, 1773-1780. Montreal, Canada: Morgan Kaufmann Publishers, Inc.

Kuipers, B., and Shults, B. 1994. Reasoning in logic about continuous change. In Principles of Knowledge Representation and Reasoning (KR-94). Morgan Kaufmann Publishers, Inc.

Kuipers, B. 1994. Qualitative Reasoning: modeling and simulation with incomplete knowledge.
Cambridge, Massachusetts: MIT Press.

Nishida, T., and Doshita, S. 1987. Reasoning about discontinuous change. In AAAI-87, 643-648.

Shults, B., and Kuipers, B. J. 1996. Qualitative simulation and temporal logic: proving properties of continuous systems. Technical Report TR AI96-244, University of Texas at Austin, Dept. of Computer Sciences.

Washio, T., and Kitamura, M. 1995. A fast history-oriented envisioning method introducing temporal logic. In Ninth International Workshop on Qualitative Reasoning (QR-95), 279-288.
Diagrammatic Reasoning and Cases

Michael Anderson
Computer Science Department
University of Hartford
200 Bloomfield Avenue
West Hartford, Connecticut 06117
anderson@morpheus.hartford.edu

Robert McCartney
Department of Computer Science and Engineering
University of Connecticut
191 Auditorium Road
Storrs, Connecticut 06269-3155
robert@cse.uconn.edu

Abstract

We believe that many problem domains that lend themselves to a case-based reasoning solution can benefit from a diagrammatic implementation and propose a diagrammatic case-based solution to what we term the n-queens best solution problem, where the best solution is defined as that which solves the problem moving the fewest queens. A working system, based on a novel combination of diagrammatic and case-based reasoning, is described.

Further, we develop an inter-diagrammatic implementation of the min-conflicts heuristic [Gu 1989] to find solutions to randomly chosen n-queens problems [Stone & Stone 1986] themselves.

Introduction

Interest in computing with analogical representations is on the rise. This is evidenced by the attention given to them in recent symposia (e.g. [Narayanan 1992]), journals (e.g. [Narayanan 1993]), and books (e.g. [Glasgow, Narayanan, & Chandrasekaran 1995]). We believe that such attention is the natural outgrowth of the evolution of the currency of computing, the first two generations of which were numeric and symbolic. We present a syntax and semantics of inter-diagrammatic reasoning and then introduce the inter-diagrammatic operators and functions. Next, case-based reasoning is briefly overviewed. This is followed by a description of the diagrammatic solution of the n-queens problem and the diagrammatic case-based solution of the n-queens best solution problem. A brief discussion of related work follows and, finally, we offer our conclusions.
Our particular interest lies in developing a general set of operators that can be used to reason with sets of related diagrams: inter-diagrammatic reasoning. This concept has been explored in [Anderson 1994; Anderson & McCartney 1995a; Anderson & McCartney 1995b], in which a heuristic for a game has been developed, musical notation and Venn diagrams have been reasoned with, and information from cartograms and various types of charts has been inferred using a general set of operators. Along these lines, we have been investigating the integration of inter-diagrammatic reasoning with case-based reasoning. We contend that many problem domains that lend themselves to a case-based reasoning solution can benefit from an inter-diagrammatic implementation. For example, domains that deal with spatial configuration, navigation [Goel et al. 1994], and perception all might benefit from explicit representation of cases stored as diagrams, retrieved via a diagrammatic match, and modified diagrammatically.

Inter-diagrammatic Reasoning

Most generally, one can syntactically define a diagram to be a tessellation of a planar area such that it is completely covered by atomic two dimensional regions or tesserae. The semantic domain will be defined as {v0, ..., vi-1}, denoting an i-valued, additive gray scale incrementally increasing from a minimum value v0, WHITE, to a maximum value vi-1, BLACK. Intuitively, the gray scale values correspond to a discrete set of transparent gray filters that, when overlaid, combine to create a darker filter to a maximum of BLACK. The following primitive unary operators, binary operators, and functions provide a set of basic tools to facilitate the process of inter-diagrammatic reasoning.

Unary Operators

NOT, denoted ¬d, is a unary operator taking a single diagram that returns a new diagram where each tessera's value is the difference between BLACK and its previous value.
We propose a diagrammatic case-based solution to what we term the n-queens best solution problem, where the best solution is defined as that which solves the problem moving the fewest queens, leaving queens that are already in place untouched (versus a solution that solves the problem in the fewest moves).

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Binary Operators

Binary operators take two diagrams, d1 and d2, of equal dimension and tessellation and return a new diagram where each tessera has a new value that is some function of the two corresponding tesserae in the operands. OR, denoted d1 ∨ d2, returns the maximum of each pair of tesserae, where the maximum of two corresponding tesserae is defined as the tessera whose value is closest to BLACK. AND, denoted d1 ∧ d2, returns the minimum of each pair of tesserae, where the minimum of two corresponding tesserae is defined as the tessera whose value is closest to WHITE. OVERLAY, denoted d1 + d2, returns the sum of each pair of tesserae (to a maximum of BLACK), where the sum of values of corresponding tesserae is defined as the sum of their respective values' subscripts. PEEL, denoted d1 − d2, returns the difference of each pair of tesserae (to a minimum of WHITE), where the difference of values of corresponding tesserae is defined as the difference of their respective values' subscripts. ASSIGNMENT, denoted d1 ⇐ d2, modifies d1 such that each tessera has the value of the corresponding tessera in d2. (Note that non-diagrammatic assignment will be symbolized as := and the equality relation as =.)

Functions Over Diagrams

NULL, denoted NULL(d), is a one-place Boolean function taking a single diagram that returns TRUE if all tesserae of d are WHITE; otherwise it returns FALSE. NONNULL, denoted NONNULL(d), is a one-place Boolean function taking a single diagram that returns FALSE if all tesserae of d are WHITE; otherwise it returns TRUE.
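The tessera-wise operators just defined can be sketched concretely. This is a minimal illustration, assuming a diagram is a flat list of integer gray levels 0 (WHITE) through BLACK, with BLACK = 4 chosen arbitrarily; it is not the authors' implementation.

```python
# Sketch of the inter-diagrammatic operators on flat lists of gray levels.
# OVERLAY saturates at BLACK and PEEL at WHITE, matching the additive
# gray-scale semantics described above.
BLACK = 4  # an illustrative 5-valued gray scale

def NOT(d):          return [BLACK - t for t in d]
def OR(d1, d2):      return [max(a, b) for a, b in zip(d1, d2)]
def AND(d1, d2):     return [min(a, b) for a, b in zip(d1, d2)]
def OVERLAY(d1, d2): return [min(BLACK, a + b) for a, b in zip(d1, d2)]
def PEEL(d1, d2):    return [max(0, a - b) for a, b in zip(d1, d2)]
def NULL(d):         return all(t == 0 for t in d)

q1 = [3, 1, 1, 0]   # a "queen" tessera (gray 3) plus its attack extent (gray 1)
q2 = [1, 3, 0, 1]
board = OVERLAY(q1, q2)
print(board)                      # [4, 4, 1, 1]
print(NULL(PEEL(board, board)))   # True
```

The saturating OVERLAY and floor-at-WHITE PEEL are what later let a board be tested for attacked queens by peeling off a uniform gray level.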
Functions Over Sets of Diagrams

ACCUMULATE, denoted ACCUMULATE(d, ds, o), is a three-place function taking an initial diagram, d, a set of diagrams of equal dimension and tessellation, ds, and the name of a binary diagrammatic operator, o, that returns a new diagram which is the accumulation of the results of successively applying o to d and each diagram in ds. MAP, denoted MAP(f, ds), is a two-place function taking a function f and a set of diagrams of equal dimension and tessellation, ds, that returns a new set of diagrams comprised of all diagrams resulting from application of f to each diagram in ds. FILTER, denoted FILTER(f, ds), is a two-place function taking a Boolean function, f, and a set of diagrams of equal dimension and tessellation, ds, that returns a new set of diagrams comprised of all diagrams in ds for which f returns TRUE. RANDOM, denoted RANDOM([x,] s), is a one- or two-place function that returns a set of x unique elements of s at random, x defaulting to 1 if not present. CARDINALITY, denoted CARDINALITY(s), is a one-place function taking a finite set that returns the number of elements in s.

Case-based reasoning [Kolodner 1993] is the use of previous problem solving episodes (with solutions) to solve a new problem. Cases can be used for two purposes: 1) to support plausible inferencing in the absence of a complete domain theory [McCartney 1993], and 2) to increase efficiency by either providing partial solutions or providing focus and direction to problem solving efforts [Kambhampati & Hendler 1992]. For both of these purposes, case-based reasoning provides an obvious learning mechanism: as problems are solved, new episodes are incorporated into the case base, which can later be used in future problem solving. Implementing a case-based reasoning system requires answering a number of fundamental questions.

- representation: What is a case and how is it represented?
- indexing: How is a case stored and retrieved?
- similarity: How do we determine which case is most appropriate to use in solving a given problem?
- adaptation: How do we use an appropriate case once we get it?

These questions have obvious general answers given our interest in diagrammatic reasoning. Cases will be diagrams, represented in a way consistent with the proposed syntax and semantics, and algorithms used for indexing, similarity, and adaptation of a case will be defined in terms of diagrammatic operators. As we are working with a complete domain theory and no uncertainty, we are using case-based reasoning to increase efficiency and provide a mechanism to improve performance over time.

Diagrammatic Constraint Satisfaction

Diagrammatic reasoning can be used to solve constraint satisfaction problems: problems in the form of a set of variables that must satisfy some set of constraints. The n-queens problem, for example, can be viewed as a constraint satisfaction problem that can be solved diagrammatically. A solution to the n-queens problem is any configuration of n queens on an n by n chessboard in which no queen is being attacked by any other queen. Figure 1 shows a diagram of a solution to the problem when n = 8. When the location of each queen is considered a variable that must meet the constraint that no other queen can attack that location, a constraint satisfaction perspective of the problem arises. The min-conflicts heuristic, which advocates selecting a value for a variable that results in the minimum number of conflicts with other variables, can be implemented diagrammatically to solve the n-queens problem.

Spatial & Functional Reasoning 1005

Figure 1: n-queens solution where n = 8

A diagram in the n-queens domain is represented as an n by n tessellation of gray-scale valued tesserae. A set of n by n diagrams comprised of all possible single queen positions (denoted Queens) must be defined.
Each of these diagrams represents one possible position of a queen (in a medium-level gray) and the extent of its attack (in GRAY1). Figure 2 shows a diagram of n OVERLAYed queen diagrams and each of the corresponding diagrams from Queens that represent the individual queens in question, where n = 8. Given a random selection of queen positions, the strategy is to iteratively move the most attacked queen to a position on the board that currently is the least attacked, until a solution is discovered.

Discovering a Solution

After a random selection of queens is made, all the corresponding diagrams from Queens (denoted SelectedQueens) are OVERLAYed onto a single diagram (denoted CurrentBoard). This process can be more formally represented using the proposed diagrammatic operators as

CurrentBoard ⇐ ACCUMULATE(NullDiagram, SelectedQueens, +)

The CurrentBoard is checked to see if it is a solution by PEELing from it a diagram that is completely covered in the same gray level that represents a queen (denoted QueenGrayBoard). Only if the result of this operation is a diagram with all WHITE tesserae (denoted NullDiagram) has a solution been found. More formally stated, a solution will return TRUE for

NULL(CurrentBoard − QueenGrayBoard)

As long as the gray level representing queens is greater than any gray level achievable by simply OVERLAYing the GRAY1 tesserae representing queen attack extents, a tessera will only take on a gray level greater than that representing queens if one or more GRAY1 tesserae is OVERLAYed upon a queen. If such a level of gray is found, a queen is under attack. Therefore, if the previous PEEL operation does not remove all gray from a diagram, it cannot be a solution. If a solution has yet to be found, an attacked queen is PEELed from the current diagram and a new queen is OVERLAYed at a minimally attacked location.

Figure 2: OVERLAYing 8 queen diagrams
An attacked queen (denoted AttackedQueen) is found by ANDing a GRAY1-PEELed version of all diagrams from SelectedQueens with the result of the solution test and randomly selecting from those queens that do not produce the NullDiagram (i.e., those queens that correspond with non-WHITE tesserae in the diagram resulting from the solution test). More formally:

AttackedQueen ← RANDOM(FILTER(NULL, MAP(λ(x) ((x - Gray1Board) ∧ (CurrentBoard - QueenGrayBoard)), SelectedQueens)))

AttackedQueen is PEELed from the CurrentBoard and a minimally attacked queen is OVERLAYed in its place. By definition, the minimally attacked queen (denoted MinimalQueen) on the current diagram will be the queen at the location that has the lightest gray level. These locations are found by ANDing a GRAY1-PEELed version of all unused diagrams from Queens (denoted UnusedQueens) with the current diagram and randomly selecting from those queens that produce the NullDiagram (i.e., those queens that correspond with WHITE tesserae in CurrentBoard). More formally:

MinimalQueen ← RANDOM(FILTER(NONNULL, MAP(λ(x) ((x - Gray1Board) ∧ (CurrentBoard - AttackedQueen)), UnusedQueens)))

1006 Model-Based Reasoning

If no such queen is found, a diagram that is completely covered in GRAY1 (denoted Gray1Board) is iteratively PEELed from the current diagram, making all tesserae one gray level lighter, and the process repeated. More formally:

CurrentBoard ← CurrentBoard - Gray1Board

MinimalQueen is then OVERLAYed upon the current diagram. More formally:

CurrentBoard ← CurrentBoard - AttackedQueen + MinimalQueen

This new diagram is checked to see if it is a solution and the process continues until such a solution is discovered.

An Example

Figures 2 through 4 graphically display an example of the solution-finding process where n = 8. Figure 2 shows the queen diagrams selected from Queens as well as the diagram that results from OVERLAYing these diagrams. Figure 3 displays one iteration of this process.
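For comparison, the same search can be sketched conventionally rather than with grid operators: a standard min-conflicts loop over queen positions. Note an assumption made here for brevity: unlike the paper's version, which moves an attacked queen to the least-attacked square anywhere on the board, this common variant keeps one queen per column and moves it within its column.

```python
import random

def attacks(q1, q2):
    """True if two queens at (row, col) positions attack each other."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def min_conflicts(n=8, max_iters=20000, seed=0):
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]  # a random SelectedQueens
    for _ in range(max_iters):
        conflicted = [c for c in range(n)
                      if any(attacks((rows[c], c), (rows[o], o))
                             for o in range(n) if o != c)]
        if not conflicted:
            return [(rows[c], c) for c in range(n)]  # a solution placement
        c = rng.choice(conflicted)                   # an AttackedQueen
        # conflict counts for each candidate row: the MinimalQueen choice
        counts = [sum(attacks((r, c), (rows[o], o))
                      for o in range(n) if o != c) for r in range(n)]
        least = min(counts)
        rows[c] = rng.choice([r for r in range(n) if counts[r] == least])
    return None  # give up after max_iters (min-conflicts can plateau)
```

The random tie-breaking among least-conflicted rows plays the same role as the paper's RANDOM selection among minimally attacked locations.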
3a shows the solution check: QueenGrayBoard is PEELed from the current diagram. This diagram is not a solution because the result is not the NullDiagram. In 3b, one of the attacked queens is selected and PEELed from the current diagram. Since there are no WHITE tesserae, Gray1Board is PEELed from the result in 3c. In 3d, a queen diagram is randomly selected from the set of queen diagrams that correspond to the WHITE tesserae in the result and OVERLAYed on the current diagram. Figure 4 shows the next two iterations of the solution-finding process. 4a displays the solution check for the current diagram created by the last iteration. This too is found not to be a solution, so an attacked queen's diagram is PEELed from the current diagram in 4b. Since there is a WHITE tessera in the result, PEELing Gray1Board from it is not required. The only possible new queen diagram is then OVERLAYed on the current diagram in 4c. 4d shows the solution check for the third iteration and, as this is found to be a solution (i.e., the check results in the NullDiagram), processing stops. The result of the entire process is the 8-queens problem solution presented in 4e.

[Figure 3: 8-queens example, 1st iteration]
[Figure 4: 8-queens example, 2nd and 3rd iterations]

Diagrammatic Case-based Best Solution

A solution to an n-queens best-solution problem is an n-queens placement obtained by moving the fewest queens from some initial placement. Although finding this minimal solution can only be achieved at great computational cost, we have implemented a system that improves its performance at this task by making use of previous solutions it has developed. Solutions to previous problems can be used to provide partial solutions to the current problem. These previous solutions form the cases of our case-based reasoning solution. Case representation is defined diagrammatically as an OVERLAYed solution set of n queens without attack-extent information.
Case similarity is defined over cases that have the greatest number of queens in common with the current problem. This matching is accomplished diagrammatically by ANDing the current problem board (PEELed with QueenGrayBoard) with each of the stored solutions, counting all non-WHITE tesserae, and retrieving those solutions with the highest count. A partial solution to the current problem has then been found; all queens in common can be exempted from further consideration, as they are already in place. Case adaptation is the arrangement of those queens that are not yet in place to form a complete solution without disturbing the positions of the exempted queens. Lastly, case indexing is expedited by diagrammatically comparing a new candidate case with existing cases and rejecting duplicates.

An Example

Figure 5 details this case-based approach. 5a PEELs QueenGrayBoard from the current diagram, resulting in a diagram that is gray only where queens are placed on the current diagram (denoted QueenPlacement). More formally:

QueenPlacement ← CurrentBoard - QueenGrayBoard

5b shows the process of ANDing QueenPlacement with each stored solution in the CaseBase, 5c, resulting in a set of diagrams, 5d, that each display their similarity with QueenPlacement via the number of gray tesserae they have (denoted SimilaritySet). More formally:

SimilaritySet ← MAP(λ(x) (QueenPlacement ∧ x), CaseBase)

In this example, one case's queen placement matches six of the current diagram's, 5e. Such counting of certain-valued tesserae is accomplished diagrammatically as well (see [Anderson & McCartney 1995b]). This case is then chosen, and the placement of the remaining two queens proceeds as described previously, with the stipulation that the six matched queens are not to be moved.
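The retrieval step can be sketched compactly. An assumption made here for brevity: a stored case is reduced to the set of queen positions in a previous solution, so "AND the boards, then count non-WHITE tesserae" becomes counting shared positions.

```python
def similarity(problem_queens, case_queens):
    """Number of queens a stored case has in common with the problem
    (the diagrammatic AND-and-count, on a set encoding)."""
    return len(set(problem_queens) & set(case_queens))

def retrieve(problem_queens, case_base):
    """Return the stored cases with the highest similarity count."""
    scores = [similarity(problem_queens, case) for case in case_base]
    best = max(scores)
    return [case for case, s in zip(case_base, scores) if s == best]
```

Queens appearing in both the problem and the retrieved case are the ones exempted from further adaptation.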
Although this system cannot guarantee an optimal solution, it learns over time by storing previous solutions and therefore becomes progressively better at providing near-optimal solutions at reasonable computational cost.

[Figure 5: 8-queens example, case matching]

Related Research

Research in diagrammatic reasoning is just beginning to flourish after a long dormancy: it was virtually abandoned after a brief flirtation in the early days of AI (e.g., [Gelernter 1959; Evans 1962]). See, for instance, [Larkin & Simon 1987; Narayanan & Chandrasekaran 1991; Narayanan 1992; Chandrasekaran, Narayanan, & Iwasaki 1993; Narayanan 1993; Glasgow 1993; Glasgow, Narayanan, & Chandrasekaran 1995] for a representative sample of this work. We have previously proposed inter-diagrammatic reasoning as one way of using diagrammatic representations to solve problems [Anderson 1994; Anderson & McCartney 1995a; Anderson & McCartney 1995b]. The earliest work in diagrammatic reasoning can be considered the first example of inter-diagrammatic reasoning as well [Evans 1962]. Bieger and Glock [1985; 1986] and Willows and Houghton [1987] have done work on human use of sets of related diagrams. Case-based reasoning has generated a good deal of interest: much work has been and is being done in this area. See [Kolodner 1993] for an overview. Interestingly, case-based reasoning has previously been used to increase efficiency in solving constraint satisfaction problems [Purvis 1995]. [Narayanan & Chandrasekaran 1991] discuss what they term "visual cases" for diagrammatic spatial reasoning, but we believe that we are the first to successfully integrate diagrammatic and case-based reasoning.

Conclusion

We have shown how a diagrammatic case-based approach is useful in providing near-optimal solutions to the n-queens problem. It is straightforward to generalize our approach to involve objects of various sizes and extents.
This is the first step towards applying this approach to spatial configuration problems and other domains. We believe that, in general, a diagrammatic approach to case-based reasoning can help provide answers to the questions of case representation, similarity, indexing, and adaptation in many interesting real-world domains. Further, a case-based reasoning approach to diagrammatic reasoning provides a framework that enables the effectiveness of diagrammatic operators to emerge.

References

Anderson, M. 1994. Reasoning with Diagram Sequences. In Proceedings of the Conference on Information-Oriented Approaches to Logic, Language and Computation (Fourth Conference on Situation Theory and its Applications).

Anderson, M. and McCartney, R. 1995a. Developing a Heuristic via Diagrammatic Reasoning. In Proceedings of the Tenth Annual ACM Symposium on Applied Computing.

Anderson, M. and McCartney, R. 1995b. Inter-diagrammatic Reasoning. In Proceedings of the 14th International Joint Conference on Artificial Intelligence.

Bieger, G. and Glock, M. 1985. The Information Content of Picture-Text Instructions. The Journal of Experimental Education, 53(2), 68-76.

Bieger, G. and Glock, M. 1986. Comprehending Spatial and Contextual Information in Picture-Text Instructions. The Journal of Experimental Education, 54(4), 181-188.

Chandrasekaran, B., Narayanan, N. and Iwasaki, Y. 1993. Reasoning with Diagrammatic Representations. AI Magazine, 14(2).

Evans, T. G. 1962. A Heuristic Program to Solve Geometry Analogy Problems. MIT AI Memo 46. (Also in Semantic Information Processing as "A Program for the Solution of a Class of Geometric-analogy Intelligence-test Questions", 271-353, Minsky, M. L., ed. MIT Press, 1968.)

Feigenbaum, E. A. and Feldman, J., eds. 1963. Computers and Thought. McGraw-Hill.

Gelernter, H. 1959. Realization of a Geometry Theorem Proving Machine. In Proceedings of an International Conference on Information Processing, 273-282. UNESCO House.
(Also in [Feigenbaum & Feldman 1963].)

Glasgow, J. 1993. The Imagery Debate Revisited: A Computational Perspective. In [Narayanan 1993].

Glasgow, J., Narayanan, N., and Chandrasekaran, B. 1995. Diagrammatic Reasoning: Cognitive and Computational Perspectives. AAAI Press.

Goel, A., Ali, K., Donnellan, M., de Silva Garza, A., and Callantine, T. 1994. Multistrategy Adaptive Path Planning. IEEE Expert, 9(6), 57-65.

Gu, J. 1989. Parallel Algorithms and Architectures for Very Fast AI Search. Ph.D. diss., University of Utah.

Kambhampati, S., and Hendler, J. 1992. A Validation Structure Based Theory of Plan Modification and Reuse. Artificial Intelligence, 55:193-258.

Kolodner, J. L. 1993. Case-based Reasoning. Morgan Kaufmann, San Mateo.

Larkin, J. and Simon, H. 1987. Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cognitive Science, 11, 65-99.

McCartney, R. 1993. Episodic Cases and Real-time Performance in a Case-based Planning System. Expert Systems with Applications, 6:19-22.

Narayanan, N., ed. 1992. Working Notes of the AAAI Spring Symposium on Reasoning with Diagrammatic Representations.

Narayanan, N., ed. 1993. Taking Issue/Forum: The Imagery Debate Revisited. Computational Intelligence.

Narayanan, N. H. and Chandrasekaran, B. 1991. Reasoning Visually about Spatial Interactions. In Proceedings of the 12th International Joint Conference on Artificial Intelligence.

Purvis, L. 1995. Constraint Satisfaction Combined with Case-Based Reasoning for Assembly Sequence Planning. Technical Report CSE-TR-93-20, University of Connecticut.

Stone, H. S. and Stone, J. 1986. Efficient Search Techniques: An Empirical Study of the n-Queens Problem. Technical Report RC 12057, IBM Thomas J. Watson Research Center, Yorktown Heights, New York.

Willows, D. and Houghton, H. 1987. The Psychology of Illustration. Springer-Verlag.
The Use of Artificially Intelligent Agents with Bounded Rationality in the Study of Economic Markets

Vijay Rajan and James R. Slagle
Department of Computer Science
University of Minnesota
Minneapolis, MN 55455
(jayrajan|slagle)@cs.umn.edu

Abstract

The concepts of 'knowledge' and 'rationality' are of central importance to fields of science that are interested in human behavior and learning, such as artificial intelligence, economics, and psychology. The similarity between artificial intelligence and economics (both are concerned with intelligent thought, rational behavior, and the use and acquisition of knowledge) has led to the use of economic models as a paradigm for solving problems in distributed artificial intelligence (DAI) and multi-agent systems (MAS). What we propose is the opposite: the use of artificial intelligence in the study of economic markets. Over the centuries various theories of market behavior have been advanced. The prevailing theory holds that an asset's current price converges to the risk-adjusted value of the rationally expected dividend stream. While this rational expectations model holds in equilibrium or near-equilibrium conditions, it does not sufficiently explain conditions of market disequilibrium. An example of market disequilibrium is the phenomenon of a speculative bubble. We present an example of using artificially intelligent agents with bounded rationality in the study of speculative bubbles.

Introduction

Economics is concerned with agents making choices in situations where information is decentralized and agents have imperfect knowledge and finite resources. As pointed out by Hayek, 'the problem is to show how a solution is produced by the interactions of agents each of whom has partial knowledge' (Hayek 1945). Thus economic models based on market price systems can be used as a paradigm for solving problems in distributed artificial intelligence and multi-agent systems (Malone et al.
1988; Waldspurger et al. 1992; Wellman 1993; Rajan & Slagle 1995; Clearwater 1996). Here we are interested in the opposite: using artificial agents to understand the working of markets. The formation of prices in a market is influenced by the specific rules that govern trade in the market. These rules are referred to as the 'market institution'.

102 Agents

Market institutions have evolved in an ad-hoc manner over the years, and economists are still trying to understand them. The 1950s witnessed a big intellectual advance in economics with the theory of general equilibrium. However, general equilibrium theory is institution-free. There is currently no general theory of trading under various market institutions. Over the centuries economists have advanced various theories of market behavior. The prevailing hypothesis holds that the market is driven to a competitive rational expectations equilibrium that reflects the aggregate of all the information dispersed among the market participants. At this equilibrium there should be no opportunity for arbitrage. This hypothesis has proven to be robust, and its predictions are supported to some extent by experimental laboratory markets as well as by tests based on field data. However, there are certain market conditions that theory fails to explain. One such example is the speculative bubble. Under the condition that all market participants are rational, speculative bubbles would be impossible. Hence, to study such phenomena, the condition of rationality needs to be relaxed. However, incorporating the notion of bounded or limited rationality into economic theory has been extremely difficult. Theoretical models of bubbles have an infinity of solutions satisfying the equilibrium conditions. Hence theory does not provide any information about the process by which bubbles form and collapse.
In this paper we explore the use of artificially intelligent agents with bounded rationality to provide an alternative framework for studying the divergence of market prices from the rational expectations equilibrium. We believe that computer simulations using such agents offer a promising alternative for studying market conditions that are not easily explained by theory. Empirical research based on simulations is beginning to play an important role in the understanding of many complex systems, both natural and social, that defy precise mathematical characterization. For example, in physics, computer simulations have played an important role in the understanding of complex phenomena such as spin glasses, Ising magnets, and quantum chromodynamics. However, there is an important distinction between complexity in natural systems and social systems. The complexity in social systems arises from the nature of human thought and behavior. Hence the simulation of complex social systems requires an additional component to model the thought, behavior, and learning processes of the participants in the social system. Several areas of artificial intelligence, especially those related to reasoning about knowledge (Fagin, Halpern, & Vardi 1984; Fagin et al. 1995), agent theory (Shoham 1993), and goal-driven learning (Ram & Leake 1995), can play an important role in the study of social systems, including economic markets.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Bubbles

A phenomenon that has plagued markets since the inception of organized trading is the speculative bubble. A bubble is defined loosely as a sharp rise in the price of an asset, with the initial rise generating expectations of further rises and attracting new buyers. The rise is usually followed by a reversal of expectations, causing a sharp decline in the asset's price.
Bubbles are of widespread interest since their consequences are serious, often resulting in a financial crisis. Such a crisis can affect millions of people and have a significant impact on society. Little progress has been made so far in understanding this phenomenon. Some examples of bubbles are the Tulip mania that occurred in Holland around 1636-7, the Mississippi bubble that occurred in Paris around 1719-20, the South Sea bubble that occurred in London also around 1720, the Railway mania that occurred in England around 1846, and the stock market crashes that occurred in New York in 1929 and 1987. As of today, economic theory has failed to explain the phenomenon of bubbles. Theorists are still divided about even the existence of bubbles. Recently, experimental laboratory markets conducted with human traders have shown the possibility of the occurrence of speculative bubbles. Smith, Suchanek, and Williams conducted a set of experiments to study the possibility of speculative bubbles in asset markets (Smith, Suchanek, & Williams 1988). These experiments were organized such that the price of the asset was based on the expected value of the dividend stream. The dividend structure and the number of trading periods were publicly announced to all the traders, making them common knowledge. They found price bubbles in about half of the experiments they conducted. Cutler, Poterba, and Summers (Cutler, Poterba, & Summers 1990) suggest that models based solely on existing theory cannot account for the movement of asset prices (such as the crash of 1987) and the high trading volumes observed in modern securities markets. An approach based on 'feedback' or 'noise' traders is used by Cutler, Poterba, and Summers (Cutler, Poterba, & Summers 1990) and by DeLong, Shleifer, Summers, and Waldman (DeLong et al. 1989; 1990). The models in (DeLong et al. 1989) and (DeLong et al.
1990) were developed based on an overlapping-generations environment where investors live for two periods. (Cutler, Poterba, & Summers 1990) develops a model of price dynamics for investors using heterogeneous trading strategies. The model emphasizes the existence of 'feedback traders' who behave based on the history of past returns rather than future fundamentals. These models based on noise trading provide some insight into gradual asset price swings. However, they do not account for the sharp price swings observed during the crash of a price bubble in the laboratory. The failure of theory to explain the phenomenon of bubbles has led to empirical efforts to understand them. Empirical work attempting to identify bubbles using field data (Flood & Garber 1980; Flood, Garber, & Scott 1981) has turned up mixed results and has been inconclusive about the relevance of bubbles. However, studies based on laboratory markets have established the existence of bubbles. As previously mentioned, Smith, Suchanek, and Williams found frequent price bubbles in an experimental asset market that paid a dividend (from a known probability distribution) at the end of each period. Camerer and Weigelt (Camerer & Weigelt 1993) also observed bubbles, especially with inexperienced traders. King, Smith, Williams, and Boening (King et al. 1993) conducted further experiments building on the results of (Smith, Suchanek, & Williams 1988). Their goal was to determine whether additional policies such as short selling, buying on margin, and rules that limit price change could prevent price bubbles. None of these factors seemed to affect the possibility of observing bubbles. The experimental markets conducted with human traders have shown the existence of bubbles but have not been able to provide insight into the process of creation and collapse of such bubbles. We hope to accomplish this by using artificial agents rather than human subjects.
By using artificial agents programmed with specific behaviors we can observe, verify, control, and replicate (Gode & Sunder 1992) the decision rules used by the agents, enabling us to better understand the phenomenon of speculative bubbles.

Multiagent Problem Solving 103

The Study of Speculative Bubbles

A market can be considered a complex social system consisting of the individuals that participate in the market. The participants in a market act based on their expectations and beliefs about future market outcomes. The market outcomes, however, are themselves affected by the actions of the participants. Hence there is a mapping from beliefs to market outcomes back to beliefs, leading to a form of self-reference. By the efficient market theory the market reaches the rational expectations equilibrium. This equilibrium is a fixed point in the mapping from beliefs to beliefs. At the equilibrium, the actions of the market participants based on current beliefs generate market outcomes that confirm the beliefs. Hence the rational expectations model implies homogeneous beliefs among all the market participants. Our approach to studying phenomena such as the speculative bubble is based on representing a market as a multi-agent system organized as a society of artificially intelligent software agents. Agents can be programmed with different models of belief formation, allowing us to study the effects of trader heterogeneity on market prices.

Experimental Design

We use the experimental design of (Smith, Suchanek, & Williams 1988). However, instead of human subjects, we use 12 artificial agents as the traders in the market. At the beginning of the experiment, each agent receives an initial endowment of cash and shares. Trading occurs over a sequence of 15 market periods, each period lasting 5 minutes. Agents can make bids to buy or offers to sell shares at any time during a period. The cash and shares owned by an agent at the end of a period are carried over to the next period. In addition, at the end of each trading period, each share earns a dividend. The dividend is drawn from a probability distribution centered around a fixed value. The structure of the probability distribution is common knowledge to all the agents. At the end of the experiment the cash held by an agent is the profit earned for participating in the experiment. All shares are worthless at the end of the experiment.

The Market as a Multi-Agent System

The computerized market used for this study is organized as a multi-agent system. Each agent participating in the market is a UNIX process. In addition there is one more agent, the market monitor, whose role is similar to that of a specialist in the trading pit of a stock exchange. All market participants send their messages to the market monitor. The market monitor implements the rules of the double auction: it decides what messages are valid, maintains the current bid and ask, and decides when a transaction occurs. Agents can at any time request the current ask, the current bid, and the market price of the last transaction. Figure 1 shows the double auction organized as a multi-agent system.

[Figure 1: The double auction as a multi-agent system]

The Market Institution

As mentioned earlier, the market institution plays a significant role in the price formation process. Hence the notion of a bubble can make no sense in the absence of a precise model detailing the market's operation. The specific model we use is a computerized version of the 'Double Auction'. The double auction is a market institution in which traders make offers to buy and sell. There are several forms of the double auction. The market we use is a restricted form of the Continuous Double Auction (CDA). The lowest ask submitted so far is called the current ask, and the highest bid the current bid. An incoming ask has to be lower than the current ask, and an incoming bid higher than the current bid. If the incoming ask is equal to or less than the current bid, a transaction occurs (at the bid price); otherwise it becomes the current ask. Similarly, if the incoming bid is equal to or greater than the current ask, a transaction occurs (at the ask price); else it becomes the current bid. After a transaction, the current bid and ask are removed, and the first bid (ask) received after the transaction will become the current bid (ask). Most financial and commodities markets around the world, such as the New York Stock Exchange, are based on variants of the double auction.

The Agent Architecture

An agent at any time has a choice among five possible actions: enter a bid, enter an ask, accept the current bid, accept the current ask, or wait. In order to choose among the five possible actions, the agent needs to form expectations of the asset's price. Figure 2 shows the architecture for agents participating in the market.
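Before turning to the individual trader types, the market monitor's spread-improvement and crossing rules described under The Market Institution can be sketched as follows. This is a minimal sketch; the class and method names are assumptions, not the system's actual interface.

```python
class DoubleAuction:
    """Restricted CDA monitor: quotes must improve, crossing orders trade."""
    def __init__(self):
        self.bid = None  # current (highest) bid
        self.ask = None  # current (lowest) ask

    def submit_bid(self, price):
        if self.bid is not None and price <= self.bid:
            return None  # rejected: an incoming bid must beat the current bid
        if self.ask is not None and price >= self.ask:
            trade, self.bid, self.ask = self.ask, None, None
            return trade  # transaction at the ask price; quotes removed
        self.bid = price
        return None

    def submit_ask(self, price):
        if self.ask is not None and price >= self.ask:
            return None  # rejected: an incoming ask must undercut the ask
        if self.bid is not None and price <= self.bid:
            trade, self.bid, self.ask = self.bid, None, None
            return trade  # transaction at the bid price; quotes removed
        self.ask = price
        return None
```

After a trade both quotes are cleared, so the first subsequent bid (ask) becomes the new current bid (ask), as the rules state.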
The behavior of these agents were arrived at by studying the data from (Smith, Suchanek, & Williams 1988). The following are brief descriptions of the three types of agents: fundamental traders, speculative traders, and strategic traders. Fundamental traders forms their expectations based solely on information about the dividends. Their valu- ation of an asset’s price is the same as the theoret- ical value. The theoretical value for each share is the product of the expected value of the dividend at the end of a period and the number of trading periods that remain. The fundamental trader ignores all other mar- ket information. The valuations are updated once at the beginning of each trading period. A fundamental trader buys shares when the price is below the theoret- ical value and sells shares when the price is above the theoretical value. Speculative traders ignore all information about the dividend structure. They are interested purely in short term capital gains - buy low and sell high. In this study, speculative traders make use of information about the transactions in which they participate. If a speculative trader buys a share at a given price, it immediately changes its valuation to that price. Simil- arly when it sells a share it changes its valuation to the transaction price. Information about transactions con- ducted by other agents and information about all bids and asks are not used. Based on the number of shares in its possession a speculative trader frequently changes its role between being a buyer or a seller. Each time a speculative trader changes from a buyer to a seller, it increases its valuation of the price so that it can sell the shares at a higher price. If it is unable to sell for an extended period of time, it begins to drop its valuation. When a speculative trader changes from a seller to a buyer, it decreases its valuation of the price so that it can buy assets at a lower price. 
If it is unable to buy for an extended period of time, it begins to increase its valuation.

Strategic traders build their expectations of price based on a combination of dividend information and information about the current price at which shares are being traded. Strategic traders are similar to fundamental traders in that they know the theoretical value of the price based on the dividend information. However, they do not begin to sell as soon as the market price exceeds the theoretical value. Instead they build expectations on whether the price is likely to stay at the current level or go higher. If this is likely, they can hold on to their shares, collecting dividends at the end of each period, and sell later. In this way they can exploit the behavior of the short-term speculators.

Results

We conducted two sets of multi-agent simulations. We performed 100 runs of each set of simulations. However, only illustrative results are shown here. The first set of simulations attempted to study the interaction between fundamental and speculative traders. Figure 3 shows the results from two runs of these experiments. In the first run we used three fundamental traders and nine speculative traders. The three fundamental traders were able to hold the market price close to the theoretical value. They did so by buying assets when the price was below the theoretical value and selling assets when the price was above the theoretical value. The second run used one fundamental trader and eleven speculative traders. In this case too, any divergence from the theoretical value was quickly corrected by the single fundamental trader (in some runs of this case, the single fundamental trader ran out of cash or assets and could no longer influence the market price). Since fundamental traders were able to hold the market prices close to the theoretical value, the second set of simulations studied the interaction between strategic and speculative traders.
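The two simplest trading rules described above can be sketched as follows. The step size `delta` for the speculative trader's valuation moves is an assumed parameter; the paper does not specify one.

```python
def theoretical_value(expected_dividend, periods_remaining):
    # fundamental valuation: expected per-period dividend times periods left
    return expected_dividend * periods_remaining

def fundamental_action(price, expected_dividend, periods_remaining):
    """Buy below the theoretical value, sell above it, otherwise wait."""
    value = theoretical_value(expected_dividend, periods_remaining)
    if price < value:
        return "buy"
    if price > value:
        return "sell"
    return "wait"

class SpeculativeTrader:
    """Tracks only its own transactions: buy low, sell high."""
    def __init__(self, valuation, delta=1.0):
        self.valuation = valuation
        self.delta = delta  # assumed valuation step size

    def on_own_trade(self, price):
        self.valuation = price        # adopt own transaction price

    def become_seller(self):
        self.valuation += self.delta  # ask for more than it paid

    def become_buyer(self):
        self.valuation -= self.delta  # bid less than it sold for

    def on_stalled(self, selling):
        # unable to trade for a while: concede toward the market
        self.valuation -= self.delta if selling else -self.delta
```

A fundamental trader with an expected dividend of 24 and 15 periods remaining, for instance, values a share at 360 at the start of the experiment.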
Figure 4 shows the prices observed in one of the laboratory experiments conducted by Smith, Suchanek, and Williams (with human traders) versus the prices observed in one of our multi-agent simulations using two strategic traders and ten speculative traders. In this case the simulation produces a speculative bubble similar to the one observed in the laboratory market. The speculative bubble is identified by a rise in the asset's price to a value significantly over the fundamentals followed by a sharp reversal. The strategic traders allowed the price to rise to a high value before beginning to sell, thereby causing a reversal. Figures 5 and 6 show the prices from another typical run of the simulation with two strategic traders and ten speculative traders.

[Figure 3: Fundamental traders keep the price close to the theoretical value]
[Figure 4: Speculative bubbles]

Discussion

The speculative bubble is a good example of market disequilibrium. Since theory does not provide a reasonable explanation of the process of speculative bubbles, we need to rely on the empirical study of this phenomenon. There are three possible approaches to take when trying to empirically analyze the behavior of markets. The first, based on field data, involves studying actual market data. The second involves the use of human traders in experimental laboratory markets. The third is based on using artificial agents in computational markets.

[Figure 5: Average prices in a market with 2 strategic traders and 10 speculative traders]
[Figure 6: Transaction prices in a market with 2 strategic traders and 10 speculative traders]

While field data provides the best source of actual market behavior, due to the large number of variables involved it is difficult if not impossible to isolate the effect of individual factors on the market's outcome.
Over the last two decades, laboratory experiments have proved invaluable in the study of market institutions. However, to specifically study the effect of a single factor such as the agents' behavior on the market outcome, it is necessary to observe the participants' decision rules in order to isolate the conditions that impact the market. It is difficult to observe human decision rules both in the field and in experimental laboratory markets. Hence the use of multi-agent simulations based on artificial agents provides a promising alternative, in which it is possible to control factors such as the agents' rationality and decision rules. In this paper we have used multi-agent simulations, based on artificially intelligent agents, to show one possible explanation for how speculative bubbles can occur within the framework of the double auction market institution. We find that the interaction of two classes of agents, each with simple behavior, often results in a bubble. Further statistical analysis is required to validate this approach.

106 Agents

Acknowledgments

We would like to thank John Dickhaut, Arijit Mukherji, and the three anonymous reviewers for their suggestions. We also acknowledge the help of Shawn LaMaster of the Economic Science Laboratory at the University of Arizona in providing us with the experimental data from (Smith, Suchanek, & Williams 1988).

References

Camerer, C., and Weigelt, K. 1993. Convergence in experimental double auctions for stochastically lived assets. In Friedman, D., and Rust, J., eds., The Double Auction Market. Addison Wesley. 333-354.

Clearwater, S. H., ed. 1996. Market Based Control. World Scientific.

Cutler, D. M.; Poterba, J. M.; and Summers, L. H. 1990. Speculative dynamics and the role of feedback traders. American Economic Review, Papers and Proceedings 80:63-68.

DeLong, J. B.; Shleifer, A.; Summers, L. H.; and Waldman, R. J. 1989. The size and incidence of the losses from noise trading.
Journal of Finance 44:681-696.

DeLong, J. B.; Shleifer, A.; Summers, L. H.; and Waldman, R. J. 1990. Noise trader risk in financial markets. Journal of Political Economy 98:703-738.

Fagin, R.; Halpern, J. Y.; Moses, Y.; and Vardi, M. Y. 1995. Reasoning about Knowledge. MIT Press, Cambridge, Massachusetts.

Fagin, R.; Halpern, J. Y.; and Vardi, M. 1984. A model-theoretic analysis of knowledge: preliminary report. In Proc. of the 25th IEEE Symposium on Foundations of Computer Science.

Flood, R., and Garber, P. 1980. Market fundamentals versus price level bubbles: The first tests. Journal of Political Economy 745-770.

Flood, R.; Garber, P.; and Scott, L. 1981. Further evidence on price level bubbles. Working Paper, University of Rochester.

Gode, D. K., and Sunder, S. 1992. Some issues in electronic modeling of continuous double auctions with computer traders. Working Paper No. 1992-25, Graduate School of Industrial Administration, Carnegie Mellon University.

Hayek, F. A. 1945. The use of knowledge in society. American Economic Review 35(4):519-530.

King, R. R.; Smith, V. L.; Williams, A. W.; and Boening, M. V. 1993. The robustness of bubbles and crashes in experimental stock markets. In Day, R. H., and Chen, P., eds., Nonlinear Dynamics and Evolutionary Economics. Oxford University Press. 183-200.

Malone, T. W.; Fikes, R. E.; Grant, K. R.; and Howard, M. T. 1988. Enterprise: A market like task scheduler for distributed computing environments. In Huberman, B. A., ed., The Ecology of Computation. North Holland. 177-205.

Rajan, V., and Slagle, J. 1995. The use of markets in decentralized decision making. In Proceedings of the Eighth International Symposium on Artificial Intelligence.

Ram, A., and Leake, D. B., eds. 1995. Goal Driven Learning. A Bradford book, MIT Press, Cambridge, Massachusetts.

Shoham, Y. 1993. Agent-oriented programming. Artificial Intelligence 60(1):51-92.

Simon, H. A. 1958.
Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. Macmillan, 2nd edition.

Smith, V. L.; Suchanek, G. L.; and Williams, A. W. 1988. Bubbles, crashes, and endogenous expectations in experimental spot asset markets. Econometrica 56(5):1119-1151.

Waldspurger, C. A.; Hogg, T.; Huberman, B. A.; Kephart, J. O.; and Stornetta, S. 1992. Spawn: A distributed computational economy. IEEE Transactions on Software Engineering 18:103-117.

Wellman, M. P. 1993. A market-oriented programming environment and its applications to distributed multicommodity flow problems. Journal of Artificial Intelligence Research 1:1-23.
Luca Chittaro and Roberto Ranon
Dipartimento di Matematica e Informatica
Universita di Udine
Via delle Scienze 206, 33100 Udine, ITALY
chittaro@dimi.uniud.it

Abstract

In this paper, we consider flow-based approaches to functional diagnosis. First, we contrast the existing approaches, pointing out the major limitations of each. Then, we choose one of them and extend it in order to overcome the identified limitations. Finally, we show how the proposed extension can be introduced into the other flow-based approaches.

Introduction

Reasoning about function for diagnostic purposes has been investigated recently by several research groups (Chandrasekaran 1994; Chittaro 1995; Hawkins et al. 1994; Hunt, Pugh, & Price 1995; Kumar & Upadhyaya 1995; Larsson 1996). Nevertheless, much work still has to be done on the functional diagnosis of real complex systems. In this paper, we take into consideration flow-based approaches (Chittaro 1995; Kumar & Upadhyaya 1995; Larsson 1996) to functional diagnosis. These approaches propose to model a system by focusing on the flows (of mass, energy, or information) in the system and on the actions performed by components on the considered flows. From a diagnostic point of view, they typically implement diagnosis as a search in a graph structure and claim to perform diagnostic reasoning very efficiently. In this paper, we initially show that this efficiency is achieved at the expense of diagnostic power. Indeed, each approach exhibits at least one of the following limitations: (i) easy availability of measurements is assumed, while in real-world cases measurements are often difficult to take, too expensive, or too unreliable; (ii) a single-fault assumption is adopted, and it is thus not possible to handle multiple faults; (iii) the modeling of interactions among different physical domains is difficult or impossible.
Since we are dealing with the application of flow-based techniques to a real-world problem in the domain of marine engineering (Chittaro, Fabbri & Lopez Cortes 1996), we need to overcome these limitations. To this purpose, this paper: (i) compares the existing flow-based diagnostic engines, pointing out the major limitations of each; (ii) chooses one of them (i.e., FDef (Chittaro 1995)) and extends it in order to overcome the identified limitations; and (iii) shows how the proposed extension can be introduced into the other flow-based approaches.

1010 Model-Based Reasoning

Comparing flow-based approaches

This section contrasts the three main flow-based approaches to functional diagnosis. The considered approaches are MFM (Multilevel Flow Modeling) (Larsson 1996), Classes (Kumar & Upadhyaya 1995), and FDef (Functional Diagnosis with efforts and flows) (Chittaro 1995).

Flow-based approaches: a short overview

Flow-based approaches to functional representation are founded on the general concept of flow (Paynter 1961). Some specific instances of flow are electrical current, mechanical velocity, hydraulic volume flow rate, and thermal heat flow. Some approaches also support the general notion of effort (Paynter 1961), i.e., the force responsible for the flow. Specific instances of effort are voltage, force, pressure, and temperature. In flow-based approaches, function is represented by means of a set of primitives, which are interpretations of actions frequently performed on the substances flowing through physical systems. These approaches generally aim at representing function in isolation, separating it from other types of knowledge, such as behavioral or teleological, in order to increase the clarity and the reusability of models.

MFM. In MFM, functions are expressed in terms of primitives such as source, sink, storage, transport, barrier and balance. Instances of these primitives are connected together to build flow-structures.
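A flow-structure of this kind can be encoded as a small data model. The following sketch uses the MFM primitive names listed above; the class layout and the battery/wiring/resistor example are illustrative assumptions of this sketch, not part of MFM itself.

```python
from dataclasses import dataclass, field

# MFM functional primitives, as listed in the text.
PRIMITIVES = {"source", "sink", "storage", "transport", "barrier", "balance"}

@dataclass
class Function:
    name: str
    kind: str  # one of PRIMITIVES
    def __post_init__(self):
        if self.kind not in PRIMITIVES:
            raise ValueError(f"unknown primitive: {self.kind}")

@dataclass
class FlowStructure:
    domain: str                       # e.g. "electrical", "thermal"
    functions: list = field(default_factory=list)
    def add(self, name, kind):
        self.functions.append(Function(name, kind))
        return self

# An electrical flow-structure for a simple battery/resistor circuit.
electrical = (FlowStructure("electrical")
              .add("battery", "source")
              .add("wiring", "transport")
              .add("resistor", "sink"))
```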
Functions are linked to goals (i.e., purposes of the system) by two types of means-ends relations: achieve and condition. An achieve relation connects a set of functions to the goal they are meant to achieve. A condition relation connects a goal to a function: the goal must be fulfilled in order for the function to be available. In the diagnostic algorithm proposed by (Larsson 1996), the user starts the diagnostic process by choosing an unachieved goal. The search proceeds downwards from the goal, via achieve relations, into the connected network of functional primitives, each of which has to be investigated (by questioning the user or by sensor readings) to find out whether the associated function is available or not. If a functional primitive conditioned by a goal is found to be at fault (or has no means of being checked), then the connected goal is recursively investigated; if it is found to be working, the goal is skipped.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Classes. Classes represents the function of a component in terms of the ports of the component, i.e., function is a relation between input and output of energy, matter or information. A set of functional primitives, called classes (producer, consumer, data, store, control, and address) is defined. Every causal flow in the system is called a signal; a signal-line is the sequence of components along the path from the origin to the use of a signal. Signal-lines can be of different types (power, clock, control, address, and data) with respect to the port of the component to which they provide input. The diagnostic technique proposed by (Kumar & Upadhyaya 1995) starts from an incorrect system output S. Signal-lines that merge at output S are chosen for investigation, and ordered using heuristic criteria. Signal-lines are investigated following this order until a signal-line that contains a fault is found.
Components in the currently investigated signal-line are ordered exploiting heuristic criteria, and tested in the obtained order until the faulty one is found. If a suspect component is connected to another signal-line, that signal-line is recursively investigated.

FDef. FDef (Chittaro 1995) adopts a limited version of the functional model proposed in Multimodeling (Chittaro et al. 1993), where functional primitives (called roles) are defined as interpretations of the physical equations describing the behavior of components. This interpretation is carried out using the Tetrahedron of State (TOS) (Paynter 1961), an abstract framework that describes a set of generalized equations which are common to a wide variety of physical domains. When the TOS is instantiated in a specific domain, the ordinary physical variables and equations are obtained. Functional roles are interpretations of the generalized equations of the TOS, and are of four types: generator, conduit, barrier, and reservoir. The FDef diagnostic technique is based on the identification of so-called enabling sets and disabling sets. An enabling set is a set of roles which are all allowing the passage of flow or effort. A disabling set is a set of roles where there is at least one impediment to the passage of flow or effort. These sets are derived starting from the given measurements, using general axioms such as those provided in (Chittaro 1995). They are then used both for exoneration purposes (identifying normal roles) and to generate conflicts (i.e., sets of roles, each one containing at least one faulty role). A simple candidate generation algorithm (de Kleer & Williams 1987) uses the set of conflicts to produce the minimal diagnoses, and a minimum entropy (de Kleer & Williams 1987) prescription mechanism suggests the best measurement to discriminate among them.
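The conflict-to-candidates step just described can be sketched as a minimal hitting set computation in the style of (de Kleer & Williams 1987). The brute-force enumeration and the optional exoneration argument below are simplifying assumptions of this sketch, not FDef's actual algorithm.

```python
from itertools import chain, combinations

def minimal_candidates(conflicts, exonerated=frozenset()):
    """Each conflict is a set of roles containing at least one faulty role;
    minimal candidates are the minimal hitting sets of the conflicts,
    after removing roles exonerated by enabling sets."""
    conflicts = [set(c) - set(exonerated) for c in conflicts]
    universe = sorted(set(chain.from_iterable(conflicts)))
    hitting = []
    # Enumerate candidate sets by increasing size, keeping only those that
    # hit every conflict and contain no smaller hitting set already found.
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            s = set(cand)
            if all(s & c for c in conflicts) and \
               not any(set(h) <= s for h in hitting):
                hitting.append(cand)
    return [set(h) for h in hitting]
```

For instance, in the relay example discussed later, the conflict {Vc, coil} with Vc exonerated by an enabling set yields the single minimal candidate {coil}.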
Comparison

This section compares the three flow-based approaches in terms of the assumptions they make on the availability of measurements, their diagnostic output, and how they represent interactions among different physical domains.

Required availability of measurements. (Larsson 1996) uses questions (or sensor readings) in order to find out whether the currently investigated MFM functional primitive is faulty, and in order to decide which further parts of the model have (or have not) to be investigated by the search algorithm. In order to guarantee the progress of the diagnostic algorithm, it is thus necessary that as many functions in the system as possible (ideally all) are measurable (by diagnostic question or sensor readings). On the contrary, the scarce availability of measurements is a typical problem in real-world systems (e.g., because some measurements are too costly, or too unreliable, or physically impossible to take). In these cases, the diagnostic capability of the approach is impaired, leading to a partial, incomplete diagnosis or to a stuck reasoning process. It should also be noted that since MFM handles only flows, observations about efforts cannot be represented. In Classes (Kumar & Upadhyaya 1995), components have to be testable in order to proceed in the investigation of signal-lines. This can again lead to a stuck reasoning process, when it is not possible to test some inputs and outputs of components. Unlike MFM, Classes tries to focus first on the most probable diagnoses, by applying its heuristic ordering rules (e.g., a power signal-line has precedence over a control signal-line) to determine an order of investigation for signal-lines, and an order of testing for components inside the currently investigated signal-line.
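The Classes investigation strategy described above can be sketched as follows. The dictionary encoding of signal-lines and the numeric priorities are illustrative assumptions of this sketch; only the power-before-control ordering is taken from the text.

```python
# Assumed priority values; the text only states that e.g. a power
# signal-line has precedence over a control signal-line.
LINE_PRIORITY = {"power": 0, "clock": 1, "control": 2, "address": 3, "data": 4}

def first_fault(lines, test, order=None, visited=None):
    """Single-fault search over signal-lines.
    lines: {name: {"type": str, "components": [...], "links": {comp: line}}}
    test(comp) -> True if the component passes its test.
    Returns the first faulty component found, or None."""
    visited = set() if visited is None else visited
    names = order or sorted(lines, key=lambda n: LINE_PRIORITY[lines[n]["type"]])
    for name in names:
        if name in visited:
            continue
        visited.add(name)
        for comp in lines[name]["components"]:
            if not test(comp):
                # A suspect component connected to another signal-line
                # triggers recursive investigation of that line first.
                linked = lines[name]["links"].get(comp)
                if linked and linked not in visited:
                    deeper = first_fault(lines, test, [linked], visited)
                    if deeper is not None:
                        return deeper
                return comp
    return None
```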
FDef (Chittaro 1995) fully performs its diagnostic activity regardless of the number of measurements given, returning the complete set of minimal diagnoses that are physically consistent with the given measurements. After generating this set, it ranks all the possible additional measurements from the most to the least informative, following an entropy-based strategy (de Kleer & Williams 1987). In this way, it aims at isolating the real diagnosis using the least number of measurements.

Diagnostic output. The three approaches differ also in the type of diagnoses they produce. MFM produces just one diagnosis, including all the functions which have been measured to be faulty in the parts of the model which have been explored. In order to help the user in the interpretation of this output, it classifies faulty functions into primary and secondary (the secondary could be an effect of the primary). Fault masking cases can thus be identified only if specific evidence is obtained, i.e., after a measurement pinpoints that a function involved in the masking is faulty. Classes relies on the single-fault assumption, and produces an ordered set of single faults, which depend on the signal-line that has been currently reached by the investigation process. Multiple faults and fault masking cases are thus not covered. FDef produces the set of all the minimal diagnoses that are physically consistent with the given measurements. It does not rely on the single-fault assumption, and each diagnosis is a minimal explanation of the observations. In this way, it covers also all the minimal fault masking cases consistent with the observations, without needing to obtain specific evidence first. However, producing this more detailed and complete diagnostic output causes FDef to be less efficient than MFM and Classes.

Spatial & Functional Reasoning 1011

Figure 1. A simple circuit.

Representation of influences.
All flow-based approaches aim at modeling separately the different flows in a system, by organizing the model into different networks (often called flow-structures) of functional primitives. Each flow-structure operates in a single physical domain (e.g., electrical, thermal, ...). This modeling strategy is meant to allow: (i) the production of clear, easier to understand models that modularly represent the different physical aspects of system functioning, and (ii) the focusing of reasoning on a selected physical domain. As a consequence, a component that works in more than one physical domain has to be represented by more than one functional primitive. For example, consider a simple circuit composed of a resistor connected to a battery, with the goal of producing heat (Figure 1). The resistor has a function both in the electrical domain (to conduct current) and in the thermal domain (to generate heat). In flow-based approaches, this system is represented by an electrical and a thermal flow-structure, each one including a function associated to the resistor. Since the two functions belong to the same physical component, the modeling approach should also provide a way to represent the relation between them. In general, we call influence the relation between two interacting primitives belonging to different flow-structures: the state of one of the two (called the influencing one) has the capability to influence the state of the other (called the influenced one). In the resistor example, the electrical function of the resistor influences its thermal function: a heat flow is generated by the resistor if and only if the resistor is conducting electrical flow. MFM represents influences using means-ends relations. A condition relation connects a goal to the influenced function (i.e., the goal must be fulfilled in order for the function to be available). Then, the flow-structure containing the influencing function is connected to the goal by an achieve relation. It is interesting to note that the representation of influences in MFM requires switching to the teleological level of representation, and then returning to the functional one. Furthermore, from a practical point of view, the modeler is required to define a specific goal for every interaction he/she needs to model. An MFM model of the resistor circuit is shown in Figure 2(a). The diagnostic use of the condition relation in this example is the following: if the source associated to the resistor is measured to be malfunctioning, the goal "keep electrical flow through resistor" is investigated and thus the electrical flow-structure is checked; if the source associated to the resistor is functioning, the goal is not investigated. Classes represents a component that works in more than one physical domain by assigning it different functional primitives with respect to its inputs and outputs. Influences are implicitly represented in the model. From the Classes point of view (Figure 2(b)), the resistor has an electrical input and a thermal output: it is a consumer (i.e., it consumes flow) with respect to the electrical input, and a producer (i.e., it produces flow) with respect to the thermal output. The interactions between the two primitives that represent the resistor are not explicitly expressed in the model. During diagnosis, if the resistor becomes suspect in one signal-line, then the other signal-line can be recursively investigated. In Multimodeling (Chittaro et al. 1993), influences are defined as follows: a role FRi, which refers to a physical equation PEi, influences a role FRj, which refers to a physical equation PEj, if a physical variable of PEi is (or concurs to determine) a parameter of PEj. In the example, the conduit role associated to the resistor in the electrical domain influences the generator role associated to the resistor in the thermal domain.
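The Multimodeling definition of influence can be rendered directly as a predicate over the variables and parameters of the roles' physical equations. The encoding and the variable names below are illustrative assumptions of this sketch.

```python
def influences(role_i, role_j):
    """role_i influences role_j if a variable of role_i's physical
    equation is (or concurs to determine) a parameter of role_j's."""
    return bool(set(role_i["variables"]) & set(role_j["parameters"]))

# Resistor example: heat generation is parameterized by the current
# flowing through the electrical conduit.
electrical_conduit = {"variables": {"current", "voltage"},
                      "parameters": {"resistance"}}
thermal_generator = {"variables": {"heat_flow"},
                     "parameters": {"current"}}
```

With this encoding, the electrical conduit influences the thermal generator, but not vice versa.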
The resulting model is depicted in Figure 2(c), where the influence states that the presence of flow in the electrical conduit is required to activate the thermal generator. Although Multimodeling considers influences, FDef does not support them. As a consequence, FDef can only diagnose a flow-structure instantiated in one physical domain.

Figure 2. Models of the resistor system.

Introducing influences in FDef

The previous analysis has shown that while FDef exhibits interesting diagnostic capabilities, it imposes an unacceptable restriction to a single physical domain. In this section, we extend FDef in order to remove that restriction, allowing a scaling up in the complexity of the functional models to be handled. In addition to what is defined by Multimodeling, we further characterize influences as follows. With transduction influences, the influenced role is a generator, and the state of the influencing role determines whether the influenced generator is active. For example, the electrical conduit associated to the resistor causes (if it is traversed by current) the activation of the thermal generator associated to the resistor. With regulation influences, the influenced role is not a generator, and the state of the influencing role regulates the state of the influenced one. For example, the mechanical reservoir role associated to the screw of a tap regulates the passage of flow in the tap viewed as a hydraulic conduit.

Table 1. Normal and abnormal functioning of roles.
- generator: normal: produces effort and causes flow; abnormal: does not produce them and does not allow passage of flow and effort.
- conduit: normal: allows passage of flow and effort; abnormal: does not allow passage of flow and effort.
- barrier: normal: does not allow passage of flow and effort; abnormal: allows passage of flow and effort.

Table 2. Functioning of influenced roles.
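The layout of Table 2 did not survive extraction, so the following sketch encodes the transformation rules as they are applied later in the paper: positively influenced roles keep their function and a negatively influenced conduit becomes a barrier; the barrier-to-conduit swap and the inactive-generator case are inferred from the relay example, so treat the exact rule set as an assumption.

```python
def apply_influence(role_kind, positively_influenced):
    """Return the effective role kind of an influenced role, given the
    status (crossed/uncrossed) of its influencing role."""
    if positively_influenced:
        return role_kind  # positively influenced roles are unchanged
    # Negative influence: conduit and barrier swap (consistent with the
    # relay example); a transduction-influenced generator is inactive.
    swaps = {"conduit": "barrier",
             "barrier": "conduit",
             "generator": "inactive generator"}
    return swaps.get(role_kind, role_kind)
```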
Characterization of influences. For clarity purposes, Table 1 summarizes what is assumed by FDef as normal and abnormal functioning of roles (e.g., in their normal functioning, conduits allow passage of both flow and effort). FDef also qualitatively characterizes the status of functional roles in this way: a role is uncrossed (crossed) if the flow associated to it is (is not) zero, and a role is unpushed (pushed) if the effort associated to it is (is not) zero. Hereinafter, an influenced role is said to be positively influenced if the influencing role is crossed, or negatively influenced if the influencing role is uncrossed. Table 2 characterizes the functioning of positively influenced and negatively influenced roles.

The relay system case study

In the following, we consider a diagnostic example proposed by (Holtzblatt 1992), where the main component is a single pole, double throw relay. Holtzblatt presents two different cases: in the first, two components (both sensors) are connected to the relay; in the second, the sensors are substituted with bulbs. In order to show how we handle both situations, we consider the case in Figure 3, where both a sensor (sns) and a bulb (b) are connected to the relay. The relay can work in two different states: an energized state (Vc is greater than a given threshold) in which current is allowed to flow only between Pcommon and sns, or a de-energized state (Vc is lower than the given threshold), in which current is allowed to flow only between Pcommon and b. Hereinafter, we suppose that three observations are given: (i) Vc>threshold (the relay is thus expected to be in the energized state), (ii) sns is off, and (iii) b is lit. Figure 4 depicts the functional representation of the relay example, considering the same domains taken into account by (Holtzblatt 1992).

Figure 3. The relay system.

Figure 4. FDef model of the relay system (legend: generator, conduit and barrier roles; electrical, optical and information domains; transduction and regulation influences).

Reasoning with influences

This section presents the extension of FDef, which is structured in three phases: (i) generation of Influence Assigners, (ii) application of influences, and (iii) generation of candidates. First, we characterize influences in more detail. Then, we provide a general treatment of the three phases, also showing the results on the relay example for each of them.

Generation of Influence Assigners. The diagnosis of systems with influences requires considering different alternative situations: if the given observations do not allow one to derive the status of an influencing role with respect to flow (crossed or uncrossed), the functioning of the influenced role is undetermined. For example, while the three given observations in the relay case study allow us to conclude that influencing role b is crossed by electrical flow (because bulb b is lit), they do not allow us to conclude anything about either influencing role sns (the observation that the sensor is off does not mean that current is not flowing through it, because the sensor could be failing in communicating information) or influencing role coil (the observation Vc>threshold does not allow us to conclude anything about current through the coil). In order to handle this, we introduce the concept of Influence Assigner (IA). An IA is a set of observations from which it is possible to univocally derive the status of every influencing role. When the set of observations currently given on the system is not an IA (i.e., it is not possible to derive the status of at least one influencing role), we need to consider multiple possibilities (the alternative would be to prescribe and take additional measurements until there are no more ambiguities, but this solution would lead to the first problem pointed out in the Comparison section).
We thus build a set of IAs, by assuming additional observations concerning some undetermined influencing roles. More specifically, the algorithm that builds the set of IAs is the following (GivenObs denotes the set of observations given on the system, IAS the set of IAs produced by the algorithm, and the predicate obs(role,observation) describes observations on roles):

IAS := {};
SetsOfObs := {GivenObs};
repeat
    foreach set of observations S in SetsOfObs do
        derive all the consequences of the observations in S and
            let UndS be the set of undetermined influencing roles;
        if UndS = {} then
            remove S from SetsOfObs;
            add S to IAS;
        endif
    enddo
    if SetsOfObs is not empty then
        foreach set of observations S in SetsOfObs do
            remove S from SetsOfObs;
            choose r in UndS;
            add the set S ∪ {obs(r,crossed)} to SetsOfObs;
            add the set S ∪ {obs(r,uncrossed)} to SetsOfObs;
        enddo
    endif
until SetsOfObs = {}.

Generation of IAs is obviously a possible source of combinatorial explosion, especially when very few measurements are given. However, the propagation activity (carried out both backward and forward before a new observation is assumed) typically allows the algorithm to determine the status of a number of influencing roles which need not be considered in the generation of assumptions, and also ensures that only physically feasible IAs are considered and generated. Moreover, inferences are cached and not performed more than once; e.g., when a set S is used (in the second part of the algorithm) to generate two sets that differ by just one assumption, they both inherit the inferences already performed with S. Considering the relay example, GivenObs contains three elements: obs(Vc, pushed) (voltage is produced by generator Vc), obs(snsc, uncrossed) (information from the sensor is off), and obs(env, crossed) (light is flowing in the environment around the bulb). As discussed previously, these observations do not allow us to determine the status of all three influencing roles (coil, sns, and b), and thus they are not an IA.
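The IA-generation algorithm above can be rendered as runnable Python. The domain-specific propagation step is abstracted here into an undetermined() function supplied by the caller; the relay example below assumes the status of b has already been fixed by propagation, as described in the text.

```python
def generate_ias(given_obs, undetermined):
    """given_obs: iterable of (role, status) observations.
    undetermined(s): the influencing roles whose flow status cannot
    be derived (by propagation) from the observation set s."""
    ias, pending = [], [frozenset(given_obs)]
    while pending:
        s = pending.pop()
        und = undetermined(s)
        if not und:
            ias.append(s)       # every influencing role determined: an IA
        else:
            r = min(und)        # choose one undetermined influencing role
            pending.append(s | {(r, "crossed")})
            pending.append(s | {(r, "uncrossed")})
    return ias

# Relay example: coil and sns remain undetermined after propagation.
def relay_undetermined(s):
    known = {role for role, _ in s}
    return {"coil", "sns"} - known

given = {("Vc", "pushed"), ("snsc", "uncrossed"),
         ("env", "crossed"), ("b", "crossed")}
```

Running generate_ias(given, relay_undetermined) yields the four IAs of the relay example, one per combination of assumptions on coil and sns.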
Only the status of b can be determined by propagation: since env is crossed, generator bg must be positively influenced, i.e., b is crossed. In this case, four IAs are generated, one for each combination of the crossed/uncrossed assumptions on the two remaining undetermined influencing roles, coil and sns. These four IAs are to be considered as different diagnostic situations, and thus handled disjunctively.

Application of influences. For each generated IA, the functional model is transformed according to the definitions of influences in Table 2 (e.g., a negatively influenced conduit is replaced by a barrier, positively influenced roles remain unchanged, ...). For example, the application of the fourth generated IA to the model in Figure 4 results in three changes: conduit w1 becomes a barrier, barrier w2 becomes a conduit, and conduit snsc becomes a barrier.

Generation of candidates. Generation of candidates for an IA and the corresponding model (i.e., the result of the transformation described above) is performed as follows. First, generation of local minimal candidates is performed for each flow-structure in the model, by locally running the plain FDef engine (as described in (Chittaro 1995)) only on that single flow-structure, feeding it with the observations (contained in the considered IA) concerning roles of that flow-structure. An interesting feature of this procedure is that it focuses diagnostic reasoning only on small sets of components (i.e., those belonging to the currently considered flow-structure). In the case of the fourth generated IA, locally running FDef only on the electrical flow-structure composed of Vc and coil produces the enabling set {Vc} and the disabling set {Vc, coil}. The minimal candidate for this flow-structure is then {coil}, while the local consideration of the other flow-structures does not produce any conflict (and thus no candidates): the coil is faulty, and the remaining part of the relay behaves as expected (the relay is actually in the de-energized state).
Second, global candidates for the currently considered IA are simply obtained by Cartesian product of the sets of local candidates generated for the single flow-structures (for the fourth IA, {coil} is thus the only minimal candidate). The generation of consistent global candidates is guaranteed, because the IA and the model transformation ensure that the flow-structures in the model are in a mutually consistent state, and thus the local candidates are also mutually consistent and can be globally combined. The two activities above are performed for each generated IA, and then the complete set of candidates is obtained as the union of the sets of global minimal candidates obtained with the different IAs. In the relay example, the complete set of minimal candidates generated is {{coil}, {w2, w1}, {w2, sns}, {w2, snsc}, {w2, snsg}}. In order to speed up generation, the results of locally running FDef on a single flow-structure for an IA are saved, avoiding the need to repeat the computation with other IAs that make the same assumptions on that flow-structure. Our extension of FDef preserves the entropy-based mechanism for test prescription. In the relay example, measuring the flow associated to role coil (or, alternatively, to generator Vc) is the suggested best measurement.

Extending other flow-based approaches

This section provides guidelines to implement the extended FDef reasoning strategy inside the other flow-based approaches. Firstly, enabling sets and disabling sets have to be introduced in the considered approach. FDef uses axioms for the derivation of enabling and disabling sets from observations on flows and efforts (Chittaro 1995; Chittaro, Fabbri & Lopez Cortes 1996). Their adaptation to MFM and Classes is relatively straightforward.
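The two-step candidate combination of the previous section (Cartesian product of local candidates within each IA, then union across IAs, keeping only minimal sets) can be sketched as follows; the set contents are illustrative, and a flow-structure with no conflicts is treated as contributing the empty candidate.

```python
from itertools import product

def global_candidates(local_candidates_per_ia):
    """local_candidates_per_ia: for each IA, a list of per-flow-structure
    candidate lists, each candidate a frozenset of roles."""
    complete = set()
    for per_structure in local_candidates_per_ia:
        # A flow-structure without conflicts contributes frozenset().
        per_structure = [cands or [frozenset()] for cands in per_structure]
        for combo in product(*per_structure):
            complete.add(frozenset().union(*combo))
    # Keep only minimal candidates across all IAs.
    return {c for c in complete if not any(o < c for o in complete)}
```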
For example, reformulating FDef axioms in a more general context, we obtain statements such as: "if, in a circuit of functional primitives, we are given at least one observation of presence of flow, and no observations about absence of flow or effort, then that circuit is an enabling set", or "if, in a circuit of functional primitives, we are given at least one observation of absence of flow, and no observations about absence of effort, then that circuit is a disabling set". Consider for example the MFM flow-structure in Figure 2(a), representing an electrical circuit made of a battery, wiring and a resistor, and suppose we observe current flowing in the resistor. In this case, the first of the two rules mentioned above would conclude that the battery, wiring and resistor allow the passage of flow, i.e., they constitute an enabling set. On the contrary, if absence of current were observed in the resistor, the second rule would conclude that there is at least one component in the circuit that does not allow the passage of flow, i.e., they constitute a disabling set. The Classes case (Figure 2(b)) is analogous. The second step is the identification of influences in MFM and Classes. To do this, the following approach can be followed. In MFM, every condition relation can be considered as a transduction influence that connects two different functions representing the same component in two flow-structures (e.g., the sink and the source associated to the resistor in Figure 2(a)). In Classes, influences can be introduced when a component has ports belonging to different physical domains. For example, in Figure 2(b), the resistor is represented by a consumer class in the electrical signal-line and a producer class in the thermal signal-line, and thus the two can be connected by an influence.
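The two generalized rules quoted above can be expressed as simple predicates over a circuit's observations. The (variable, present) pair encoding of observations is an assumption of this sketch.

```python
def is_enabling_set(observations):
    """At least one observed presence of flow, and no observed
    absence of flow or effort."""
    return any(var == "flow" and present for var, present in observations) \
       and not any(not present for _, present in observations)

def is_disabling_set(observations):
    """At least one observed absence of flow, and no observed
    absence of effort."""
    return any(var == "flow" and not present
               for var, present in observations) \
       and not any(var == "effort" and not present
                   for var, present in observations)
```

In the battery/resistor circuit, observing current in the resistor makes the circuit an enabling set, while observing its absence makes the circuit a disabling set.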
Once the adaptations described above have been carried out, it is straightforward to run the extended FDef engine described in this paper on MFM and Classes models, thus overcoming the limitations of these approaches shown above at the expense of some efficiency.

Conclusions
This paper presented (i) an evaluation of the diagnostic power of existing flow-based diagnostic engines, (ii) a relevant and useful extension of the FDef diagnostic engine, and (iii) guidelines to implement the features of extended FDef in the other flow-based approaches. The techniques presented in this paper are being used on the technical marine system application presented in (Chittaro, Fabbri & Lopez Cortes 1996), where they allow us to move from the diagnosis of the considered hydraulic system to the diagnosis of the whole set of subsystems connected to it. Evaluation of the results on this application indicates that the approach is effective at isolating faults when they result in a loss of functionality. Since some faults in the considered domain are preceded by a slow degradation in performance before turning into a loss of functionality, one of the subjects we are considering is the introduction and exploitation of "too low"/"too high" flow and effort observations in flow-based diagnostic approaches.

References
Chandrasekaran B. 1994. Functional Representation and Causal Processes. Advances in Computers 38:73-143.
Chittaro L.; Guida G.; Tasso C. and Toppano E. 1993. Functional and Teleological Knowledge in the Multimodeling Approach for Reasoning About Physical Systems: A Case Study in Diagnosis. IEEE Transactions on Systems, Man, and Cybernetics 23(6): 1718-1751.
Chittaro L. 1995. Functional Diagnosis and Prescription of Measurements Using Effort and Flow Variables. IEE Control Theory and Applications 142(5): 420-432.
Chittaro L.; Fabbri R. and Lopez Cortes J. 1996. Functional Diagnosis Goes to the Sea: Applying FDef to the Heavy Fuel Oil Transfer System of a Ship.
In Proceedings of the Ninth Florida AI Research Symposium (FLAIRS), Key West, FL.
Hawkins R.; Sticklen J.; McDowell J.K.; Hill T. and Boyer R. 1994. Function-based Modeling and Troubleshooting. Journal of Applied Artificial Intelligence 8(2): 285-302.
Holtzblatt L.J. 1992. Diagnosing Multiple Failures Using Knowledge of Component States. In W. Hamscher, L. Console, J. de Kleer (eds.), Readings in Model-based Diagnosis, San Mateo, Calif.: Morgan Kaufmann, 165-169.
Hunt J.; Pugh D. and Price C. 1995. Failure Mode Effects Analysis: a Practical Application of Functional Modeling. Journal of Applied Artificial Intelligence 9(1): 33-44.
de Kleer J. and Williams B.C. 1987. Diagnosing Multiple Faults. Artificial Intelligence 32: 97-130.
Kumar A.N. and Upadhyaya S.J. 1995. Function Based Discrimination during Model-based Diagnosis. Journal of Applied Artificial Intelligence 9(1): 65-80.
Larsson J.E. 1996. Diagnosis Based on Explicit Means-End Models. Artificial Intelligence 80: 29-93.
Paynter H.M. 1961. Analysis and Design of Engineering Systems. Cambridge, Mass.: MIT Press.
Spatial & Functional Reasoning 1015
A Qualitative Model of Physical Fields
Monika Lundell
Artificial Intelligence Laboratory
Computer Science Department
Swiss Federal Institute of Technology
IN-Ecublens, 1015 Lausanne, Switzerland
lundell@lia.di.epfl.ch

Abstract
A qualitative model of the spatio-temporal behaviour of distributed parameter systems based on physical fields is presented. Field-based models differ from the object-based models normally used in qualitative physics by treating parameters as continuous entities instead of as attributes of discrete objects. This is especially suitable for natural physical systems, e.g. in ecology. The model is divided into a static and a dynamic part. The static model describes the distribution of each parameter as a qualitative physical field. Composite fields are constructed from intersection models of pairs of fields. The dynamic model describes processes acting on the fields, and qualitative relationships between parameters. Spatio-temporal behaviour is modelled by interacting temporal processes, influencing single points in space, and spatial processes that gradually spread temporal processes over space. We give an example of a qualitative model of a natural physical system and discuss the ambiguities that arise during simulation.

Introduction
Research in qualitative physics has so far mostly focused on lumped parameter models of man-made physical systems, e.g. refrigerators and electrical circuits. A lumped model describes the temporal but not the spatial variation of the parameters within a system. This paper focuses on distributed parameter models, describing temporal as well as spatial variation. These models are especially appropriate for natural physical systems that have so far received little attention in qualitative physics, e.g. the atmosphere, the ocean and many other environmental and ecological systems. Scientists often think of distributed parameter systems in terms of physical fields.
A physical field describes the spatial distribution of the values of a parameter. Its properties are functions of space coordinates and of time. This view contrasts with the object-based ontology commonly used in qualitative physics, where models are constructed around a set of interacting objects described by several physical parameters. In a distributed parameter system like the atmosphere, there is usually no obvious object structure to associate the parameters with. Instead, with a field-based view, the parameters of the system are seen as evolving patterns determined by the current spatial configuration of values. The patterns are combined spatially as needed to obtain a more complete view of the system, which evolves due to physical processes acting in regions where patterns intersect.

In this paper, we present a qualitative model of the spatio-temporal behaviour of distributed parameter systems based on physical fields. The model is divided into a static and a dynamic part. The static model describes each parameter as a qualitative physical field by dividing space into contiguous regions according to the quantity space of the parameter. The pattern of a field is described by its regions' boundaries and contiguities. Composite fields are constructed from the region boundaries' coincidence and traversal properties within other fields. The dynamic model describes processes acting on the fields and qualitative relationships between the parameters. Processes are triggered by spatial features in individual or composite fields. Spatio-temporal behaviour is modelled by interacting temporal and spatial processes. Temporal processes directly influence single points in space. Spatial processes have an indirect influence by gradually spreading temporal processes over space.

The purpose of the presented model is to support the qualitative methods used by scientists. In some sciences, e.g.
ecology, qualitative methods are a necessity, since many processes lack numerical models. One practical example is landscaping, e.g. envisioning the evolution of a planned garden in order to avoid undesirable events (flooding, spreading of weeds, pests, etc). This involves a number of distributed parameters (land elevation, soil quality, seed distribution, etc), whose interaction is seldom described by precise numerical models. In many cases, exact coordinates are also missing. In other sciences, qualitative methods are used as a complement to existing numerical models. One example is weather forecasting, which is carried out in two phases. The first phase, called the objective analysis, is entirely quantitative. Finite-element simulations of partial differential equations are run on huge amounts of observed data, resulting in a map of predicted data. The second phase, called the subjective analysis, is qualitative. The meteorologist analyzes both the observed and predicted data for each parameter by drawing patterns, e.g. isobars and rain regions, on the maps. This time-consuming analysis is carried out without computer support and builds up an "inner weather picture" (Perby 1988), i.e. an understanding of the current atmospheric processes that enables the meteorologist to produce a final forecast. The working methods of scientists can be characterized as qualitative, model-based and diagrammatic, thus touching on three related fields in artificial intelligence.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

In the following, after a brief survey of related work, we present the static and dynamic models and a diagrammatic formalism for visualization of qualitative physical fields. We give an example of a qualitative model of a natural physical system and discuss the ambiguities that arise during simulation.
We conclude by outlining further work and discussing the main contributions with respect to other research fields.

Related work
In qualitative physics, this research is related to the early work of FROB (Forbus 1983), in the sense that a qualitative physical field resembles a place vocabulary, i.e. a set of contiguous regions where some important property is constant. However, place vocabularies have not previously been used to model individual and composite parameters of natural physical systems. Instead, most qualitative models of natural physical systems are lumped and adopt the object-based ontology. The process-based ontology and the syntax used in our model have been inspired by Qualitative Process Theory (Forbus 1984).

In spatial information theory, there is extensive work on qualitative spatial reasoning. Most approaches describe relations between points and are not applicable to continuous fields. The topological properties of extended regions have been studied by e.g. (Cui, Cohn, & Randell 1992) and (Egenhofer & Al-Taha 1992). However, these approaches focus on pairs of regions and do not explicitly represent the properties and physics of continuous fields.

Geographical information systems provide methods for storage and analysis of large amounts of spatial data. Although these systems are mainly quantitative, they address many issues relevant to this research. In particular, the representation of individual parameter fields and their subsequent combination corresponds to the traditional cartographic technique of map overlay, where transparent maps of different themes are physically combined in order to produce a complete map. Map overlays can be manipulated using the map algebra of (Tomlin 1991), which, however, is not relevant to our purposes since it requires a predefined discretization of space into a grid.
Static model: structure
The static model describes the structure of a distributed parameter system as a set of qualitative physical fields. It consists of a distribution model for each individual field and an intersection model for each pair of fields that are to be combined in a composite field.

Distribution model: individual fields
Each individual parameter is described as a qualitative physical field by a distribution model. A qualitative physical field represents a double discretization:

First, the value domain of the parameter is discretized into a quantity space, i.e. a finite set of symbols called landmark values, representing qualitatively interesting events in the system, e.g. the critical values. A qualitative value either corresponds to a landmark value or to an interval between two landmark values. Representations of quantity spaces have been discussed in e.g. (Forbus 1984) and (Kuipers 1994). They are totally or partially ordered sets, whose values can only be compared for equality or order.

Second, the space of the physical field is discretized into a pattern of contiguous, non-overlapping regions corresponding to the parameter's qualitative values. The regions are maximal in the sense that no region is adjacent to a region with the same value. We represent the continuous pattern of a qualitative physical field by the boundaries and contiguities of its regions. The boundary number of a region indicates the number of topological holes. Each region has as many internal boundaries as holes, plus one external boundary. The boundaries, in their turn, represent the contiguities of the region, i.e. its adjacencies to other regions. Each boundary is divided into a number of segments, each indicating an adjacency to another region. For two-dimensional regions, with one-dimensional boundaries, the segments can be ordered in a sequence.
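As an illustration of the double discretization, a minimal data structure for a distribution model might look like the following. This is our sketch, not the paper's implementation; the class and method names are ours, and boundary segments are abstracted to a plain adjacency relation:

```python
class QualitativeField:
    """A distribution model: regions carry qualitative values drawn from
    an ordered quantity space; adjacency records region contiguities."""
    def __init__(self, quantity_space):
        self.quantity_space = list(quantity_space)  # landmark/interval symbols
        self.value = {}       # region name -> qualitative value
        self.adjacent = {}    # region name -> set of neighbouring regions

    def add_region(self, name, value, neighbours=()):
        assert value in self.quantity_space
        self.value[name] = value
        self.adjacent.setdefault(name, set())
        for n in neighbours:                 # contiguity is symmetric
            self.adjacent[name].add(n)
            self.adjacent.setdefault(n, set()).add(name)

    def is_maximal(self):
        """Maximality: no region touches a region with the same value."""
        return all(self.value.get(n) != v
                   for r, v in self.value.items()
                   for n in self.adjacent[r])

# A three-region temperature field over a cold/warm/hot quantity space.
temp = QualitativeField(["cold", "warm", "hot"])
temp.add_region("A", "hot")
temp.add_region("B", "warm", neighbours=["A"])
temp.add_region("C", "cold", neighbours=["B"])
```

Merging two adjacent regions that reach the same value (restoring maximality) is exactly the spatial limit point discussed later in the paper.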
In three dimensions, the boundaries are themselves two-dimensional regions with boundaries whose segments can be ordered.

Scientists often use diagrammatic methods to reason about distributed parameter systems. Consequently, we have developed a diagrammatic formalism for visualization of qualitative physical fields. The diagrams are deliberately abstract in order to convey only the qualitative properties represented by the model, i.e. the presence of distinct regions, continuity, boundaries and contiguities. This avoids the problematic issue in diagrammatic reasoning that pictorial representations may be interpreted as containing more information than intended (Wang, Lee, & Zeevat 1995).

Figure 1 shows the distribution models of two different fields. The qualitative information in the left-hand pattern is conveyed by the right-hand diagram, where the regions are represented as circles and circle sectors. Internal boundaries are represented by nesting the circles, and contiguities by shared perimeters. The unusual shape of the first diagram reflects the inner regions' contiguity with the outside world.

Figure 1: Diagrams of distribution models.

Intersection model: composite fields
A composite field is the spatial combination of two or more fields. It is constructed from intersection models describing the coincidence and traversal properties of the boundaries in each pair of fields to be combined. For each boundary, the behaviour of its segments within the other field is described. Each segment is divided into an ordered sequence of subsegments representing either coincidence, i.e. a shared path with a subsegment in the other field, or traversal, when the subsegment cuts through a region in the other field. The intersection model is visualized diagrammatically as indicated in figure 2, corresponding to the physical overlay of the two patterns in figure 1.
The subsegments are indicated by small white squares in the diagrams. The mapping between the fields is visualized as a graph. Each subsegment is connected by an edge to either a subsegment (coincidence) or a region (traversal) in the other field. The figure shows a partial mapping. For clarity, the segments in the pattern have been labelled and indicated by arrows.

Figure 2: Diagram of intersection model.

Dynamic model: behaviour
The dynamic model describes the behaviour of a distributed parameter system in terms of processes acting on the fields, and qualitative functional relationships between the parameters. A coherent framework is obtained by letting functional relationships and processes have spatial extent and representing them as physical fields. The regions of applicability are represented by the intersection model of a parameter field and a process or relationship field.

Functional relationships between parameters are described with the vocabulary of Qualitative Process Theory (QPT) (Forbus 1984). We write (qprop+ parameter1 parameter2) to indicate the existence of a function that determines the value of parameter1 and is increasing monotonic in its dependence on parameter2. A decreasing monotonic relationship is indicated by qprop-.

We follow the ontology of QPT and use processes as the sole mechanism of change. A process is an entity that acts over time to change a parameter. The concept has so far mostly been used in lumped models describing only the temporal progress of the processes. In a distributed model, spatial progress must also be considered. We model spatio-temporal behaviour as an interaction between temporal and spatial processes.

In the following, we discuss the properties of temporal and spatial processes by developing a model of a natural physical system: heat flow in a partially shaded meadow. The system consists of three distributed parameters: irradiation, shade and temperature.
Irradiation indicates the parts of the meadow that could be irradiated by the sun. The irradiation can be considered constant for meadows of normal size, but would have a varying distribution in a model of a larger area, e.g. an entire continent. The shade parameter distinguishes between shaded and unshaded regions, e.g. due to passing clouds or obstructing trees. The dynamic behaviour of the system is modelled by two processes:
1. The temperature will rise in all irradiated regions, but less in shaded regions.
2. A varying temperature distribution will lead to a horizontal flow transporting heat from warmer to cooler regions.

Temporal processes
A temporal process is similar to a QPT process. It selects regions from individual or composite fields and imposes direct influences on their values. A temporal process is represented as a field and acts on each individual point in the selected regions. Since a region is not a fixed object, but can be decomposed with respect to other fields, parts of the region may be influenced by other processes. The process fields are composed in order to determine the net change to the values in the influenced fields and to update the region structures.

We describe temporal processes with a syntax similar to QPT's but adapted to our model as follows. The regions referred to by the process are specified by pattern templates indicating the following:
1. The name of the region.
2. The individual or composite field to which it belongs.
3. Whether to retrieve an atomic region, which is the default, or a union of contiguous regions.
4. Conditions on values and spatial features.
Value conditions compare parameter values, while spatial conditions indicate constraints on boundaries and contiguities. The conditions can refer to other regions by their names. A parameter value is referred to by combining the name of the field and the name of the region.
The region-conditions indicate additional conditions on values and spatial features that could not be expressed earlier. The regions and region-conditions correspond to the individuals, preconditions and quantity-conditions of QPT. Names of local variables, to be used in the process, are defined. The relations indicate functional relationships valid during the lifetime of the process. The variables and relations correspond to QPT relations. Direct influences are specified with QPT syntax: (I+ parameter variable) indicates a monotonic increasing influence of the variable on the parameter, and I- indicates a monotonic decreasing influence. The stop-conditions indicate conditions on values and spatial features that stop the process.

The following example models the warming of each atomic region in the composite field of irradiation, shade and temperature. The heating-rate is inversely proportional to the amount of shade. The process stops when the irradiation is reduced to zero.

temporal-process solar-warming
  :regions (r :fields (irradiation shade temp)
              :atomic T
              :conditions (> (irradiation r) zero))
  :variables heating-rate
  :relations (qprop- heating-rate (shade r))
  :influences (I+ (temp r) heating-rate)
  :stop-conditions (<= (irradiation r) zero)

Spatial processes
Spatio-temporal behaviour is modelled by interacting spatial and temporal processes. Spatial processes are different from temporal processes in that they do not act in a single point but gradually spread influences over space, starting from a boundary between two regions. A spatial process is represented as a field with expanding applicability regions, called expansion regions. The segments of the expansion regions correspond to fronts that move at a certain rate. The path of a spatial process can be guided by defining functional relationships between the rates of the fronts and the values of the regions they move through.
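One qualitative step of a solar-warming-like temporal process can be sketched as below. This is our illustrative approximation, not the paper's process engine: the qprop- relation between shade and heating-rate is simplified to "fully shaded regions do not warm", and an I+ influence is realized as a one-step move up the temperature quantity space:

```python
def step_temporal_process(field_values, irradiated, shade, qspace):
    """Apply one qualitative step of a warming process to every region.
    `qspace` is the ordered temperature quantity space (cold -> hot)."""
    new_values = dict(field_values)
    for region, value in field_values.items():
        if not irradiated.get(region):        # stop-condition: no irradiation
            continue
        if shade.get(region) == "full":       # simplified qprop-: full shade
            continue                          # means zero heating rate
        i = qspace.index(value)
        if i + 1 < len(qspace):               # I+ influence on temperature
            new_values[region] = qspace[i + 1]
    return new_values

qspace = ["cold", "warm", "hot"]
temps = {"meadow": "cold", "under-tree": "cold"}
temps = step_temporal_process(
    temps,
    irradiated={"meadow": True, "under-tree": True},
    shade={"under-tree": "full"},
    qspace=qspace)
```

Because each region is updated independently, a region decomposed by other fields can receive different net influences in different parts, as the text notes.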
A spatial process can change other fields only indirectly, by spreading temporal processes. Each expansion region is associated with a temporal process defined as a local variable within the spatial process. These embedded temporal processes do not themselves select a region to act on, but are applied to the points encountered by the fronts of the expansion region. Once applied, a temporal process is activated and decoupled from the spatial process. It obeys its own stop conditions, which, however, can refer to the local variables of the spatial process.

A spatial process is defined similarly to a temporal process. The main difference lies in the influences. Since parameters are not directly influenced by spatial processes, I+ and I- are not used. Instead, the expansion regions are defined as follows:

(E expansion-region from-region to-region rate influence stop-conditions)

Expansion-region names a local variable for the expansion region. From-region and to-region define from which boundary and in which direction the region starts expanding at the specified rate. Influence is the name of an embedded temporal process. The stop-conditions for this particular expansion region are indicated. The spatial process can also have global stop-conditions, in analogy with a temporal process, indicating conditions that will stop all expansion regions.

The following example models the second process in our example, horizontal heat flow, which only concerns the temperature field. A spatial process is triggered by adjacent regions of different temperature, r1 and r2. Two local variables, heating-rate and expansion-rate, are directly proportional to the temperature difference. Two embedded temporal processes are defined, tp1 and tp2, that respectively increase and decrease the temperature at the specified heating-rate until the two expansion regions, ep1 and ep2, have equal temperature.
The expansion regions are spread into r1 and r2 respectively at the specified expansion-rate, starting from their common boundary, each applying a temporal process to each passed point. The expansion is defined to stop only when the other region is no longer expanding.

spatial-process heat-flow
  :regions (r1 :fields temp)
           (r2 :fields temp
               :conditions (adjacent? r1 r2)
                           (> (temp r1) (temp r2)))
  :variables heating-rate expansion-rate
             (diff (- (temp r1) (temp r2)))
             (tp1 (temporal-process heat-flow
                    :influences (I+ temp heating-rate)
                    :stop-conditions (= (temp ep1) (temp ep2))))
             (tp2 (temporal-process heat-flow
                    :influences (I- temp heating-rate)
                    :stop-conditions (= (temp ep1) (temp ep2))))
  :relations (qprop+ expansion-rate diff)
             (qprop+ heating-rate diff)
  :influences (E ep1 r1 r2 expansion-rate tp1 (not (expanding? ep2)))
              (E ep2 r2 r1 expansion-rate tp2 (not (expanding? ep1)))

This example demonstrates a tricky issue with dynamic fields: the identity of a region. The temporal processes must refer to the expansion regions instead of r1 and r2, since the latter cease to exist as distinct regions when the temperature starts changing. The local variable diff that is computed from the values of these regions must thus be considered constant during the lifetime of the process.

This example gives a flavour of the qualitative aspects of natural physical systems that can be modelled. It can be extended with fields describing e.g. the distribution of different kinds of seeds, water availability, soil conditions, etc. Compositions of these parameters indicate varying living conditions, and can be used to model different ecological systems.

Qualitative simulation: ambiguity
The purpose of a qualitative simulation is to describe the evolution of a system as a sequence of qualitatively interesting states. In lumped models, a new state is generated each time a parameter reaches a significant landmark value, called a limit point in QPT.
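The spreading of an expansion region such as ep1 over a field's contiguity structure can be sketched as a breadth-first traversal of the region adjacency graph, applying the embedded temporal process to each newly reached region. This is our illustrative approximation (front rates are abstracted to one region per step; names are ours):

```python
from collections import deque

def expand(adjacency, start_boundary, apply_tp, stop):
    """Spread an expansion region from the regions on one side of a
    boundary, applying the embedded temporal process `apply_tp` to each
    region the front reaches, until `stop(covered)` holds."""
    frontier = deque(start_boundary)
    covered = set(start_boundary)
    for r in start_boundary:
        apply_tp(r)
    while frontier and not stop(covered):
        r = frontier.popleft()
        for n in adjacency.get(r, ()):
            if n not in covered:
                covered.add(n)
                apply_tp(n)          # activate and decouple the temporal process
                frontier.append(n)
    return covered

# Regions B, C, D, E with B adjacent to the other three:
adjacency = {"B": {"C", "D", "E"}, "C": {"B"}, "D": {"B"}, "E": {"B"}}
reached = []
covered = expand(adjacency, ["B"], reached.append, stop=lambda c: False)
```

The order in which B's neighbours are reached is exactly where the spatial ambiguities discussed below arise: without shape information, nothing in the qualitative model fixes it.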
We use the same technique, but additionally consider spatial limit points that are reached when the structure of the system changes. The change can either concern a distribution model or an intersection model. One example is when a region reaches the same value as one of its neighbours due to an influencing temporal process. Since all regions must be maximal, the two regions will be merged into one, thus changing the distribution model. Another spatial limit point is when an expansion region crosses a boundary in another field, which entails a change to the intersection model of the process field and the parameter field.

Since qualitative models use incomplete information, ambiguities can arise when the next limit point is to be determined. Our model inherits the ambiguities of lumped qualitative models, which can be divided into value ambiguities, e.g. deciding whether the difference of two qualitative values is less or greater than a third value, and temporal ambiguities, e.g. deciding which of two changing parameters will next reach a limit point. Distributed qualitative models additionally have spatial ambiguities, which in our case arise when determining in which order spatial limit points will be encountered by an expansion region.

In lumped qualitative models, a tree of behaviours can be generated when there is a known number of alternatives for each ambiguity. This is not the case for distributed qualitative models lacking shape information. Figure 3 shows an example of two fields whose regions have the same qualitative structure but different shapes. The shaded region is a single expansion region, in a separate spatial process field, that gradually spreads from region A into region B. At the instant indicated in the figure, the expansion region's boundary traverses region C twice in the first field, but only once in the second.
The intersection model of the spatial process field and the parameter field thus cannot be unambiguously established within the framework of the qualitative model, nor is there a known number of alternatives, since region C could be of arbitrary shape.

Figure 3: Fields with identical qualitative structure. The grey region is a superimposed single expansion region spreading in the indicated direction.

The existence of spatial ambiguities means that our approach does not provide a general solution to the poverty conjecture of (Forbus, Nielsen, & Faltings 1987), stating that qualitative spatial reasoning requires a metric diagram conveying shape information. However, the poverty conjecture originated in reasoning about objects in small-scale space, e.g. gearwheels, where the coordinates of the metric diagram can be obtained. We argue that qualitative spatial reasoning based on topological and ordinal information is useful for a different kind of situation, where the initial metric data is sparse or incomplete, e.g. in the form of scattered observation points. In these situations, both quantitative and qualitative simulations have to rely on assumptions and simplifications.

In the case of scientists analyzing their data, we hypothesize that intractable ambiguities, like shape, are simplified to a known number of alternatives at a cognitive level, which makes it possible to use the envisioning techniques of qualitative reasoning. We believe this is done by assuming a non-complex spatial configuration, given the known qualitative constraints, as well as a non-complex spatial evolution of the system. Note that this does not mean assuming convex or regular regions, since a continuous field, unless it is a grid, must necessarily contain concave and irregular regions.

Based on this hypothesis, the qualitative simulation algorithm generates an envisionment as a sequence of diagrams differing in as few spatial features as possible. Figure 4 shows a non-complex behaviour of the initial situation in figure 3, generated from a few simple complexity-reducing heuristics. The shaded regions indicate the intersection of the field with a single expansion region as it spreads gradually from region A into region B. In the first diagram, only B is partially covered. The least complex transition to the next state is assuming that the immediate neighbours of B are reached, i.e. D and E. In the next state, B is completely covered and C has been reached. The final least complex transition is to the state where regions B, C, D, and E are entirely covered by the expansion region. The rules governing the simulation algorithm are described in detail in (Lundell 1995).

Figure 4: Qualitative simulation as a sequence of diagrams describing one possible non-complex evolution.

Conclusion
The main contributions of the research described in this paper, with respect to related research fields, can be summarized as follows:

Qualitative physics: We introduce the concept of a qualitative physical field, describing a physical system in terms of parameters instead of objects. A technique for modelling spatio-temporal behaviour and a language for spatial processes are presented.

Spatial information theory: We do not limit ourselves to pairs of regions, but describe the qualitative properties of continuous fields containing many regions. Spatial features are not only described topologically, but also with ordinal information suitable for qualitative analysis.

Geographic information systems: We present a qualitative alternative to the quantitative techniques for representation and simulation of spatial data used in current systems. Qualitative methods are advantageous in situations with incomplete spatial data that cannot be satisfactorily represented in a quantitative system.

Distributed parameter systems have several additional features that can be exploited in qualitative reasoning.
We are currently working on a number of related issues:
1. Extending the qualitative physical fields with regions describing not only point-wise parameters but also amounts, averages and totals. This will also entail extensions to the process language.
2. Representation of gradients of regions that are not described by a constant value but by an interval. This requires imposing a direction on the variation of values within a region and developing techniques for compositions of gradient fields.
3. Automatic generation of qualitative models from sparse metric data in the form of scattered observation points. Triangulation techniques and Voronoi diagrams combined with heuristics are a possible solution. Preliminary results have been presented in (Lundell 1994).
4. Extending the model with ordinal information on the sizes of spatial features. This would make it possible to model processes at different scales, and also to eliminate some of the spatial ambiguities. This technique has been used in a qualitative model of gradient flow presented in (Lundell 1995).

References
Cui, Z.; Cohn, A. G.; and Randell, D. A. 1992. Qualitative simulation based on a logical formalism of space and time. In AAAI, 679-684.
Egenhofer, M., and Al-Taha, K. 1992. Reasoning about gradual changes of topological relationships. In Theories and Methods of Spatio-Temporal Reasoning in Geographic Space. Springer-Verlag. 196-219.
Forbus, K.; Nielsen, P.; and Faltings, B. 1987. Qualitative kinematics: A framework. In AAAI, 430-436.
Forbus, K. 1983. Qualitative reasoning about space and motion. In Mental Models. Lawrence Erlbaum. 53-73.
Forbus, K. 1984. Qualitative process theory. Artificial Intelligence 24:85-168.
Kuipers, B. 1994. Qualitative Reasoning: Modelling and Simulation with Incomplete Knowledge. MIT Press.
Lundell, M. 1994. Qualitative reasoning with spatially distributed parameters. In Eighth International Workshop on Qualitative Reasoning about Physical Systems.
Lundell, M. 1995.
A qualitative model of gradient flow in a spatially distributed parameter. In Ninth International Workshop on Qualitative Reasoning about Physical Systems.

Perby, M.-L. 1988. Computerization and skill in local weather forecasting. In Knowledge, Skill and Artificial Intelligence. Springer-Verlag. 39-52.

Tomlin, C. D. 1991. Cartographic modelling. In Geographical information systems: principles and applications. Longman. 361-374.

Wang, D.; Lee, J.; and Zeevat, H. 1995. Reasoning with diagrammatic representations. In Diagrammatic Reasoning, Cognitive and Computational Perspectives. AAAI Press. 339-401.

Spatial & Functional Reasoning 1021
Generating Multiple New Designs From a Sketch

Thomas F. Stahovich, Randall Davis, Howard Shrobe*
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139
stahov@ai.mit.edu

Abstract

We describe a program called SKETCHIT that transforms a single sketch of a mechanical device into multiple families of new designs. It represents each of these families with a "BEP-Model," a parametric model augmented with constraints that ensure the device produces the desired behavior. The program is based on qualitative configuration space (qc-space), a novel representation that captures mechanical behavior while abstracting away its implementation. The program employs a paradigm of abstraction and resynthesis: it abstracts the initial sketch into qc-space then maps from qc-space to new implementations.

Introduction

SKETCHIT is a computer program capable of taking a single sketch of a mechanical design and generalizing it to produce multiple new designs. The program takes as input a stylized sketch of the original design and a description of the desired behavior and from this generates multiple families of new designs.

It does this by first transforming the sketch into a representation that captures the behavior of the original design while abstracting away its particular implementation. The program then maps from this abstract representation to multiple new families of implementations. This representation, which we call qualitative configuration space, is the key tool allowing SKETCHIT to perform its tasks.
The program represents each of the new families of implementations with what we call a behavior ensuring parametric model ("BEP-Model"): a parametric model augmented with constraints that ensure the geometry produces the desired behavior.1 Our program takes as input a single sketch of a device and produces as output multiple BEP-Models, each of which will produce the desired behavior.

*Support for this project was provided by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-91-J-4038.

1A parametric model is a geometric model in which the shapes are controlled by a set of parameters.

1022 Model-Based Reasoning

Figure 1: (a) One structure for the circuit breaker. (b) Sketch as actually input to program. Engagement faces are in bold. The actuator represents the reset motion imparted by the user. (Labels for engagement pairs: (f1 f6)=push-pair, (f2 f5)=cam-follower, (f3 f4)=lever-stop, (f7 f8)=pushrod-stop.)

We use the design of a circuit breaker to illustrate the program in operation; one implementation is shown in Figure 1a. In normal use, current flows from the lever to the hook; current overload causes the bimetallic hook to heat and bend, releasing the lever and interrupting the current flow. After the hook cools, pressing and releasing the pushrod resets the device.

The designer describes the circuit breaker to SKETCHIT with the stylized sketch shown in Figure 1b, using line segments for part faces and icons for springs, joints, and actuators. SKETCHIT is concerned only with the functional geometry, i.e., the faces where parts meet and through which force and motion are transmitted (lines f1-f8). The designer's task is thus to indicate which pairs of faces are intended to engage each other. Consideration of the connective geometry (the surfaces that connect the functional geometry to make complete solids) is put off until later in the design process.
The designer describes the desired behavior of a device to SKETCHIT using a state transition diagram (Figure 2b). Each node in the diagram is a list of the pairs of faces that are engaged and the springs that are relaxed. The arcs are the external inputs that drive the device. Figure 2b, for instance, describes how the circuit breaker should behave in the face of heating and cooling the hook and pressing the reset pushrod.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Figure 2: The desired behavior of the circuit breaker. (a) Physical interpretation. (b) State transition diagram. In each of the three states, the hook is either at its hot or cold neutral position.

Figure 3 shows a portion of one of the BEP-Models that SKETCHIT derives in this case. The top of the figure shows the parameters that define the sloped face on the lever (f2) and the sloped face on the hook (f5). The bottom shows the constraints that ensure this pair of faces plays its role in achieving the overall desired behavior: i.e., moving the lever clockwise pushes the hook down until the lever moves past the point of the hook, whereupon the hook springs back to its rest position.

As one example of how the constraints enforce the desired behavior, the ninth equation, 0 > R14/TAN(PSI17) + H2_12/SIN(PSI17), constrains the geometry so that the contact point on face f2 never moves tangent to face f5. This in turn ensures that when the two faces are engaged, clockwise rotation of the lever always increases the deflection of the hook.

The parameter values shown in the top of Figure 3 are solutions to the constraints of the BEP-Model, hence this particular geometry provides the desired behavior. These specific values were computed by a program called DesignView, a commercial parametric modeler based on variational geometry. Using DesignView, we can easily explore the family of designs defined by this BEP-Model.
Figure 4, for example, shows another solution to this BEP-Model. Because these parameter values satisfy the BEP-Model, even this rather unusual geometry provides the desired behavior. As this example illustrates, the family of designs defined by a BEP-Model includes a wide range of design solutions, many of which would not be obtained with conventional approaches.

Figures 3 and 4 show members of just one of the families of designs that the program produces for the circuit breaker. SKETCHIT produces other families of designs (i.e., other BEP-Models) by selecting different motion types (rotation or translation) for the components and by selecting different implementations for the pairs of interacting faces. For example, Figure 5 shows a design obtained by selecting a new motion type for the lever: in the original design the lever rotates, here it translates. Figure 6 shows an example of selecting different implementations for the pairs of interacting faces: In the original implementation of the cam-follower engagement pair, the motion of face f2 is roughly perpendicular to the motion of face f5; in the new design of Figure 6, the motions are parallel.

S13 = 2.728   PSI17 = 134.782   H2_12 = 0.041

H1_11 > 0
H2_12 > 0
S13 > H1_11
L15 > 0
PHI16 > 90
PHI16 < 180
PSI17 > 90
PSI17 < 180
0 > R14/TAN(PSI17) + H2_12/SIN(PSI17)
R14 = SQRT(S13^2 + L15^2 - 2*S13*L15*COS(PHI16))

Figure 3: Output from the program (a BEP-Model). Top: the parametric model; the decimal number next to each parameter is the current value of that parameter. Bottom: the constraints on the parameters. For clarity, only the parameters and constraints for faces f2 and f5 are shown.

Figure 4: Another solution to the BEP-Model of Figure 3. Shading indicates how the faces might be connected to flesh out the components. This solution shows that neither the pair of faces at the end of the lever nor the pair of faces at the end of the hook need be contiguous.

Figure 5: A design variant obtained by replacing the rotating lever with a translating part.

Figure 6: A design variant obtained by selecting different implementations for the engagement faces. The pushrod is pressed so that the hook is just on the verge of latching the lever.

Representation: QC-Space

SKETCHIT's approach to its task is to use a representation that captures the behavior of the original design while abstracting away the particular implementation, providing the opportunity to select new implementations.

For the class of devices that SKETCHIT is concerned with, the overall behavior is achieved through a sequence of interactions between pairs of engagement faces. Hence the behavior that our representation must capture is the behavior of interacting faces.

Our search for a representation began with configuration space (c-space), which is commonly used to represent this kind of behavior. Although c-space is capable of representing the behaviors we are interested in, it does not adequately abstract away their implementations. We discovered that abstracting c-space into a qualitative form produces the desired effect; hence we call SKETCHIT's behavioral representation "qualitative configuration space" (qc-space). This section begins with a brief description of c-space, then describes how we abstract c-space to produce qc-space.

C-Space

Consider the rotor and slider in Figure 7. If the angle of the rotor UR and the position of the slider US are as shown, the faces on the two bodies will touch. These values of UR and US are termed a configuration of the bodies in which the faces touch, and can be represented as a point in the plane, called a configuration space plane (cs-plane).
If we determine all of the configurations of the bodies in which the faces touch and plot the corresponding points in the cs-plane (Figure 7), we get a curve, called a configuration space curve (cs-curve). The shaded region "behind" the curve indicates blocked space, configurations in which one body would penetrate the other. The unshaded region "in front" of the curve represents free space, configurations in which the faces do not touch.

Figure 7: Left: A rotor and slider. The slider translates horizontally. The interacting faces are shown with bold lines. Right: The c-space. The inset figures show the configuration of the rotor and slider for selected points on the cs-curve.

The axes of a c-space are the position parameters of the bodies; the dimension of the c-space for a set of bodies is the number of degrees of freedom of the set. To simplify geometric reasoning in c-space, we assume that devices are fixed-axis. That is, we assume that each body either translates along a fixed axis or rotates about a fixed axis. Hence in our world the c-space for a pair of bodies will always be a plane (a cs-plane) and the boundary between blocked and free space will always be a curve (a cs-curve).2 However, even in this world, a device may be composed of many fixed-axis bodies, hence the c-space for the device as a whole can be of dimension greater than two. The individual cs-planes are orthogonal projections of the multi-dimensional c-space of the overall device.

Abstracting to QC-Space

C-space is already an abstraction of the original design. For example, any pair of faces that produces the cs-curve in Figure 7 will produce the same behavior (i.e., the same dynamics) as the original pair of faces. Thus, each cs-curve represents a family of interacting faces that all produce the same behavior. We can, however, identify a much larger family of faces that produce the same behavior by abstracting the numerical cs-curves to obtain a qualitative c-space.
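The numerical cs-curve that is the starting point for this abstraction can be sampled directly. The sketch below is illustrative only: it assumes the contact condition x = -r*cos(theta), chosen to match the end-point expressions given later for the rotor-and-slider library entry, and is not the paper's implementation.

```python
import math

# Sample the contact configurations (theta, x) of a rotor/slider pair,
# assuming contact satisfies x = -r*cos(theta) for rotor angles between
# arcsin(h/r) and pi - arcsin(h/r). These formulas are an assumption
# transcribed from the library-entry expressions later in the paper.

def sample_cs_curve(r, h, n=10):
    """Return n+1 (theta, x) points on the cs-curve."""
    t0 = math.asin(h / r)
    t1 = math.pi - t0
    return [(t, -r * math.cos(t))
            for t in (t0 + i * (t1 - t0) / n for i in range(n + 1))]

curve = sample_cs_curve(r=1.0, h=0.2)
# configurations on one side of these points are blocked, on the other free
```

Plotting such samples in the cs-plane reproduces the kind of curve shown in Figure 7; the abstraction step below then discards everything but its qualitative slope and end points.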
In qualitative c-space (qc-space) we represent cs-curves by their qualitative slopes and the locations of the curves relative to one another. By qualitative slope we mean the obvious notion of labeling monotonic curves as diagonal (with positive or negative slope), vertical, or horizontal; by relative location we mean the relative location of the curve end points.3

2The c-space for a pair of fixed-axis bodies will always be 2-dimensional. However, it is possible for the c-space to be a cylinder or torus rather than a plane. See Section "Selecting Motion Type" for details.

3We restrict qcs-curves to be monotonic to facilitate qualitative simulation of a qc-space.

To see how qualitative slope captures something essential about the behavior, we return to the rotor and slider. The essential behavior of this device is that the slider can push the rotor: positive displacement of the slider causes positive displacement of the rotor, and vice versa. If the motions of the rotor and slider are to be related in this fashion, their cs-curve must be a diagonal curve with positive slope. Conversely, any geometry that maps to a diagonal curve with positive slope will produce the same kind of pushing behavior as the original design.

There are eight types of qualitative cs-curves, shown in Figure 10. Diagonal curves always correspond to pushing behavior; vertical and horizontal curves correspond to what we call "stop behavior," in which the extent of motion of one part is limited by the position of another.

The key, more general, insight here is that for monotonic cs-curves, the qualitative slopes and the relative locations completely determine the first order dynamics of the device. By first order dynamics we mean the dynamic behavior obtained when the motion is assumed to be inertia-free and the collisions are assumed to be inelastic and frictionless.4
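The slope abstraction can be sketched as a small classifier. This is a simplification (the paper abstracts the full numerical curve, not just its end points, and the string labels here are illustrative only):

```python
def qualitative_slope(start, end, eps=1e-9):
    """Classify a monotonic cs-curve from its end points as diagonal
    (positive or negative slope), vertical, or horizontal."""
    dq1 = end[0] - start[0]
    dq2 = end[1] - start[1]
    if abs(dq1) < eps:
        return "vertical"
    if abs(dq2) < eps:
        return "horizontal"
    return "diagonal+" if dq1 * dq2 > 0 else "diagonal-"

# a diagonal curve with positive slope corresponds to pushing behavior;
# vertical and horizontal curves correspond to stop behavior
print(qualitative_slope((0.0, 0.0), (1.0, 0.8)))  # diagonal+
```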
The consequence of this general insight is that qc-space captures all of the relevant physics of the overall device, and hence serves as a design space for behavior. It is a particularly convenient design space because it has only two properties: qualitative slope and relative location.

Another important feature of qc-space is that it is constructed from a very small number of building blocks, viz., the different types of qcs-curves in Figure 10. As a consequence we can easily map from qc-space back to implementation using precomputed implementations for each of the building blocks. We show how to do this in Section "Selecting Geometries."

The SKETCHIT System

Figure 8 shows a flow chart of the SKETCHIT system with its two main processes: abstraction and resynthesis.

Abstraction Process

SKETCHIT uses generate and test to abstract the initial design into one or more working qc-spaces, i.e., qc-spaces that provide the behavior specified in the state transition diagram.

The generator produces multiple candidate qc-spaces from the sketch, each of which is a possible interpretation of the sketch. The simulator computes each candidate's overall behavior (i.e., the aggregate behavior of all of the individual interactions), which the tester then compares to the desired behavior.

The generator begins by computing the numerical c-space of the sketch, then abstracts each numerical cs-curve into a qcs-curve, i.e., a curve with qualitative slope and relative location.

4"Inertia-free" refers to the circumstance in which the inertia terms in the equations of motion are negligible compared to the other terms. One important property of inertia-free motion is that there are no oscillations. This set of physical assumptions is also called quasi-statics.

As with any abstraction process, moving from specific numerical curves to qualitative curves can introduce ambiguities.
For example, in the candidate qc-space in Figure 9 there is ambiguity in the relative location of the abscissa value (E) for the intersection between the push-pair curve and the pushrod-stop curve. This value is not ordered with respect to B and C, the abscissa values of the end points of the lever-stop and cam-follower curves in the hook-lever qcs-plane: E may be less than B, greater than C, or between B and C.5 Physically, point E is the configuration in which the lever is against the pushrod and the pushrod is against its stop; the ambiguity is whether in this particular configuration the lever is (a) to the left of the hook (i.e., E < B), (b) contacting the hook (i.e., B < E < C), or (c) to the right of the hook (i.e., C < E). When the generator encounters this kind of ambiguity, it enumerates all possible interpretations, passing each of them to the simulator.

The relative locations of these points are not ambiguous in the original, numerical c-space. Nevertheless, SKETCHIT computes all possible relative locations, rather than taking the actual locations directly from the numerical c-space. One reason for this is that it offers one means of generalizing the design: The original locations may be just one of the possible working designs; the program can find others by enumerating and testing all the possible relative locations.

A second reason the program enumerates and tests all possible relative locations is because this enables it to compensate for flaws in the original sketch. These flaws arise from interactions that are individually correct, but whose global arrangement is incorrect.

5We do not consider the case where E = B or E = C.

Figure 8: Overview of SKETCHIT system.

Figure 9: Candidate qc-space for the circuit breaker.
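The enumeration of interpretations can be sketched directly: an unordered abscissa value is slotted into every gap of the known ordering. The function name is illustrative, not SKETCHIT's.

```python
def interpretations(unordered, ordered):
    """Enumerate the candidate orderings the generator produces when an
    abscissa value (here E) is not ordered with respect to already-ordered
    values (here B < C). Equality cases are excluded, as in the paper."""
    out = []
    for i in range(len(ordered) + 1):
        out.append(" < ".join(ordered[:i] + [unordered] + ordered[i:]))
    return out

print(interpretations("E", ["B", "C"]))
# ['E < B < C', 'B < E < C', 'B < C < E']
```

Each of the three orderings yields one candidate qc-space, which the simulator and tester then accept or reject.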
For example, in Figure 1b the interaction between the lever and hook, the interaction between the pushrod and the lever, and the interaction between the pushrod and its stop may all be individually correct, but the pushrod-stop may be sketched too far to the left, so that the lever always remains to the left of the hook (i.e., the global arrangement of these three interactions prevents the lever from actually interacting with the hook). By enumerating possible locations for the intersection between the pushrod-stop and push-pair qcs-curves, SKETCHIT will correct this flaw in the original sketch.

Currently, the candidate qc-spaces the generator produces are possible interpretations of ambiguities inherent in the abstraction. The simulator and tester identify which of these interpretations produce the desired behavior. We are also working on repairing more serious flaws in the original sketch, as we describe in the Future Work section.

SKETCHIT employs an innovative qualitative simulator designed to minimize branching of the simulation. See [12] for a detailed presentation of the simulator.

Re-Synthesis

In the resynthesis process, the program turns each of the working qc-spaces into multiple families of new designs. Each family is represented by a BEP-Model.

Qc-space abstracts away both the motion type of each part and the geometry of each pair of interacting faces. Hence there are two steps to the resynthesis process: selecting a motion type for each part and selecting a geometry for each pair of engagement faces.

Selecting Motion Type

SKETCHIT is free to select a new motion type for each part because qc-space abstracts away this property.
More precisely, qc-space abstracts away the motion type of parts that translate and parts that rotate less than a full revolution.6 Changing translating parts to rotating ones, and vice versa, permits SKETCHIT to generate a rich assortment of new designs.

6Qc-space cannot abstract away the motion type of parts that rotate more than a full revolution because the topology of the qc-space for such parts is different: If one of a pair of parts rotates through full revolutions, its motion will be 2π-periodic, and what was a plane in qc-space will become a cylinder. (If both of the bodies rotate through full revolutions the qc-space becomes a torus.) Hence, if a pairwise qc-space is a cylinder or torus, the design must employ rotating parts (one for a cylinder, two for a torus) rather than translating ones.

Selecting Geometries

The general task of translating from c-space to geometry is intractable ([1]). However, qc-space is carefully designed to be constructed from a small number of basic building blocks, 40 in all. The origin of 32 of these can be seen by examining Figure 10: there are four choices of qualitative slope; for each qualitative slope there are two choices for blocked space; and the qc-space axes q1 and q2 can represent either rotation or translation. The other 8 building blocks represent interactions of rotating or translating bodies with stationary bodies.

Because there are only a small number of basic building blocks, we were able to construct a library of implementations for each building block. To translate a qc-space to geometry, the program selects an entry from the library for each of the qcs-curves.

Figure 10: The eight types of qcs-curves, labeled A through H. For drawing convenience, qcs-curves are shown as straight line segments; they can have any shape as long as they are monotonic.
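The two resynthesis choices, a motion type per part and a library implementation per engagement pair, multiply out into families of designs. The sketch below is an illustration of that enumeration; the part names and the two-entry "library" are assumptions (the real library has entries for all 40 interaction types).

```python
from itertools import product

MOTION_TYPES = ("rotation", "translation")
# hypothetical library: two implementations for one engagement pair
LIBRARY = {"cam-follower": ("offset follower", "centered follower")}

def design_variants(parts, pairs):
    """Yield (motion assignment, implementation assignment) pairs:
    one motion type per part, one library entry per engagement pair."""
    for motions in product(MOTION_TYPES, repeat=len(parts)):
        for impls in product(*(LIBRARY[p] for p in pairs)):
            yield dict(zip(parts, motions)), dict(zip(pairs, impls))

variants = list(design_variants(["lever", "hook"], ["cam-follower"]))
print(len(variants))  # 2**2 motion assignments * 2 implementations = 8
```

Each variant corresponds to one BEP-Model family, as in Figures 4 through 6.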
Each library entry contains a pair of parameterized faces and a set of constraints that ensure that the faces implement a monotonic cs-curve of the desired slope, with the desired choice of blocked space. Each library entry also contains algebraic expressions for the end point coordinates of the cs-curve.

For example, Figure 11 shows a library entry for qcs-curve F in Figure 10, for the case in which q1 is rotation and q2 is translation. For the corresponding qcs-curve to be monotonic, have the correct slope, and have blocked space on the correct side, the following ten constraints must be satisfied:

w > 0
L > 0
h > 0
s < h
r > h
π/2 < φ ≤ π
ψ > 0
ψ < arcsin(h/r) + π/2
arccos(h/r) + arccos((L^2 + r^2 - s^2)/(2Lr)) < π/2
r = (s^2 + L^2 - 2sL cos(φ))^(1/2)

The end point coordinates of the cs-curve are:

θ1 = arcsin(h/r)        x1 = -r cos(θ1)
θ2 = π - arcsin(h/r)    x2 = -r cos(θ2)

Figure 11: The two faces are shown as thick lines. The rotating face rotates about the origin; the translating face translates horizontally. θ is the angle of the rotor and x, measured positive to the left, is the position of the slider.

Figure 12: The two faces are shown as thick lines. The rotating face rotates about the origin; the translating face translates horizontally. θ is the angle of the rotor and x, measured positive to the left, is the position of the slider.

Figure 12 shows a second way to generate qcs-curve F, using the constraints:

h1 > 0
h2 > 0
s > h1
L > 0
π/2 < φ < π
π/2 < ψ < π
0 > r/tan(ψ) + h2/sin(ψ)
r = (s^2 + L^2 - 2sL cos(φ))^(1/2)

The end point coordinates of this cs-curve are:
θ1 = -arcsin(h2/r)                x1 = -r cos(θ1) + h2/tan(ψ)
θ2 = arcsin(h1/s) + arccos(…)     x2 = -s cos(arcsin(h1/s)) - h1/tan(ψ)

In the first of these designs the motion of the slider is approximately parallel to the motion of the rotor, while in the second the motion of the slider is approximately perpendicular to the motion of the rotor.7 The two designs thus represent qualitatively different implementations for the same qcs-curve.

7The first design is a cam with offset follower, the second is a cam with centered follower.

To generate a BEP-Model for the sketch, we select from the library an implementation for each qcs-curve. For each selection we create new instances of the parameters and transform the coordinate systems to match those used by the actual components. The relative locations of the qcs-curves in the qc-space are turned into constraints on the end points of the qcs-curves. We assemble the parametric geometry fragments and constraints of the library selections to produce the parametric model and constraints of the BEP-Model.

Our library contains geometries that use flat faces, although we have begun work on using circular faces.8 We have at least one library entry for each of the 40 kinds of interactions. We are continuing to generate new entries.

8Circular faces are used when rotors act as stops.

SKETCHIT is able to produce different BEP-Models (i.e., different families of designs) by selecting different library entries for a given qcs-curve. For example, Figure 4 shows a solution to the BEP-Model SKETCHIT generates by selecting the library entry in Figure 12 for the cam-follower qcs-curve. Figure 6 shows a solution to a different BEP-Model SKETCHIT generates by selecting the library entry in Figure 11 for the cam-follower. As these examples illustrate, the program can generate a wide variety of solutions by selecting different library entries.
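The first library entry's constraints and end-point expressions can be transcribed into a direct checker. This is a sketch under two assumptions: r is treated as derived from s, L and φ rather than as an independent parameter, and the specific numeric test values are made up.

```python
import math

def satisfies_entry(w, L, h, s, phi, psi):
    """Check the ten constraints given above for the first library
    entry implementing qcs-curve F (rotation vs. translation)."""
    r = math.sqrt(s**2 + L**2 - 2*s*L*math.cos(phi))
    if not (w > 0 and L > 0 and h > 0 and s < h and r > h):
        return False
    if not (math.pi/2 < phi <= math.pi and psi > 0):
        return False
    if not psi < math.asin(h/r) + math.pi/2:
        return False
    return (math.acos(h/r)
            + math.acos((L**2 + r**2 - s**2) / (2*L*r))) < math.pi/2

def endpoints(h, r):
    """End points of the cs-curve: theta1 = arcsin(h/r),
    theta2 = pi - arcsin(h/r), with x = -r*cos(theta)."""
    t1 = math.asin(h/r)
    t2 = math.pi - t1
    return (t1, -r*math.cos(t1)), (t2, -r*math.cos(t2))
```

Any parameter assignment accepted by such a checker lies in the family of geometries the entry represents; note that the two end-point abscissas are symmetric about θ = π/2.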
Refining a Concept

As we have noted, the constraints in each BEP-Model represent the range of values that the geometric parameters can take on and still provide the behavior originally specified. The constraints thus define an entire family of solutions a designer can explore in order to adapt an initial conceptual design to meet additional design requirements.

We illustrate this with a new example concerning the design of the yoke and rotor device shown in Figure 13a. Continuous counter-clockwise rotation of the rotor causes the yoke to oscillate left and right with a brief dwell between each change in direction.

Figure 13: The yoke and rotor device. (a) Structure. (b) Stylized sketch. Each of the rotor faces is intended to engage each of the yoke faces.

We describe the device to SKETCHIT with the stylized sketch in Figure 13b. The desired behavior is to have each of the rotor blades engage each of the yoke faces in turn. From this input SKETCHIT generates the BEP-Model in Figure 14.

The designer now has available the large family of designs specified by the BEP-Model and can at this point begin to specify additional design requirements. Imagine that one requirement is that all strokes have the same length. A simple way to achieve this is to constrain the yoke and rotor to be symmetric. We do this by adding additional constraints to the BEP-Model, such as the following, which constrain the rotor blades to be of equal length and have equal spacing:

R1 = R2 = R3, AOFF1 - AOFF = 120°, AOFF2 - AOFF1 = 120°

Imagine further that all strokes are required to be 1.0 cm long. We achieve this by adding the additional constraint:9

LM29 - LM27 = 1.0

9LM29 and LM27 are variables that SKETCHIT assigns to the extreme positions of the yoke. We obtain the names of these variables by using a graphical browser to inspect SKETCHIT's simulation of the device.
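The refinement step, narrowing the BEP-Model's family by conjoining extra constraints, can be sketched as a feasibility check over candidate parameter values. The variable names (R1..R3, AOFF, AOFF1, LM27, LM29) follow the text; the checker itself and the numeric values are illustrative, not part of SKETCHIT or DesignView.

```python
def meets_refinements(params, tol=1e-6):
    """Check the added design requirements: symmetric rotor blades
    (equal length, 120-degree spacing) and a 1.0 cm stroke."""
    symmetric = (abs(params["R1"] - params["R2"]) < tol and
                 abs(params["R2"] - params["R3"]) < tol and
                 abs((params["AOFF1"] - params["AOFF"]) - 120.0) < tol)
    stroke_ok = abs((params["LM29"] - params["LM27"]) - 1.0) < tol
    return symmetric and stroke_ok

# a hypothetical solution to the augmented constraint set
design = {"R1": 2.0, "R2": 2.0, "R3": 2.0,
          "AOFF": 0.0, "AOFF1": 120.0,
          "LM27": 3.0, "LM29": 4.0}
print(meets_refinements(design))  # True
```

A constraint solver such as DesignView plays the same role for the full BEP-Model: any assignment it returns satisfies both the behavioral constraints and the added performance requirements.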
Because we have constrained the yoke and rotor to be symmetric, all the strokes have the same length.

PHI <= 180
PHI > 90
R > H
H > 0
L > 0
W > 0
PSI > 0
PSI < ASIN(H/R) + 90
ACOS(H/R) + ACOS((L^2 + R^2 - S^2)/(2*L*R)) < 90

Figure 14: Sample constraints from the yoke and rotor's BEP-Model. For simplicity, new variable names have been substituted for sets of variables constrained to be equal. For example, because all three rotor blades are constrained to have equal length, R replaces R1, R2, and R3.

Finally, imagine that the dwell is required to be 40°, i.e., between each stroke, the rotor turns 40° while the yoke remains stationary. We can achieve this by adding one additional constraint:

LM6 - LM8 = 40°

We can now invoke DesignView to find a solution to this augmented set of constraints; the solution will be guaranteed to produce both the designed behavior and the desired performance characteristics. We have been able to do this design refinement simply by adding additional constraints to the BEP-Model.

RELATED WORK

Our techniques can be viewed as a natural complement to the bond graph techniques of the sort developed in [15]. Our techniques are useful for computing geometry that provides a specified behavior, but because of the inertia-free assumption employed by our simulator, our techniques are effectively blind to energy flow. Bond graph techniques, on the other hand, explicitly represent energy flow but are incapable of representing geometry.

Our techniques focus on the geometry of devices which have time varying engagements (i.e., variable kinematic topology). Therefore, our techniques are complementary to the well-known design techniques for fixed topology mechanisms, such as the gear train and linkage design techniques in [3].

There has been a lot of recent interest in automating the design of fixed topology devices. A common task is the synthesis of a device which transforms a specified input motion to a specified output motion ([10], [14], [16]).
For the most part, these techniques synthesize a design using an abstract representation of behavior, then use library lookup to map to implementation. However, because our library contains interacting faces, while theirs contain complete components, we can design interacting geometry, while they cannot. Like SKETCHIT, these techniques produce design variants.

To construct new implementations (BEP-Models), we map from qc-space to geometry. [8] and [1] have also explored the problem of mapping between c-space and geometry. They obtain a geometry that maps to a desired c-space by using numerical techniques to directly modify the shapes of parts. However, we map from qc-space to geometry using library lookup.

Our work is similar in spirit to research exploring the mapping from shape to behavior. [9] uses kinematic tolerance space (an extension of c-space) to examine how variations in the shapes of parts affect their kinematic behavior. Their task is to determine how a variation in shape affects behavior; ours is to determine what constraints on shape are sufficient to ensure the desired behavior. [5] examines how much a single geometric parameter can change, all others held constant, without changing the place vocabulary (topology of c-space). Their task is to determine how much a given parameter can change without altering the current behavior; ours is to determine the constraints on all the parameters sufficient to obtain a desired behavior.

More similar to our task is the work in [6]. They describe an interactive design system that modifies user-selected parameters until there is a change in the place vocabulary, and hence a change in behavior. Then, just as we do, they use qualitative simulation to determine if the resulting behavior matches the desired behavior. They modify c-space by modifying geometry; we modify qc-space directly.
They do a form of generalization by generating constraints capturing how the current geometry implements the place vocabulary; we generalize further by constructing constraints that define new geometries. Finally, our tool is intended to generate design variants while theirs is not.

Our work builds upon the research in qualitative simulation, particularly the work in [4], [7], and [11]. Our techniques for computing motion are similar to the constraint propagation techniques in [13].

FUTURE WORK

As Section "Abstraction Process" described, the current SKETCHIT system can repair a limited range of flaws in the original sketch. We are continuing to work on techniques for repairing more serious kinds of flaws. Because there are only two properties in qc-space that matter, the relative locations and the qualitative slopes of the qcs-curves, to repair a sketch, even one with serious flaws, the task is to find the correct relative locations and qualitative slopes for the qcs-curves. We can do this using the same generate and test paradigm described earlier, but for realistic designs this search space is still far too large. We are exploring several ways to minimize search, such as debugging rules that examine why a particular qc-space fails to produce the correct behavior, based on its topology.

The desired behavior of a mechanical device can be described by a path through its qc-space, hence the topology of the qc-space can have a strong influence on whether the desired path (and the desired behavior) is easy, or even possible. For example, the qc-space may contain a funnel-like topology that "traps" the device, preventing it from traversing the desired path. If we can diagnose these kinds of failures, we may be able to generate a new qc-space by judicious repair of the current one.

We are also working to expand the class of devices that SKETCHIT can handle. Currently, our techniques are restricted to fixed-axis devices.
Although this constitutes a significant portion of the variable topology devices used in actual practice (see [11]), we would like to extend our techniques to handle particular kinds of non-fixed-axis devices. We are currently working with a commonly occurring class of devices in which a pair of parts has three degrees of freedom (rather than two) but the qc-space is still tractable. We are beginning to explore how our techniques can be applied to other problem domains. For example, we believe that the BEP-Model will be useful for kinematic tolerance analysis (see [2] for an overview of tolerancing). Here the task is to determine if a given set of variations in the shapes and locations of the parts of a device will compromise the desired behavior. We have also begun to explore design rationale capture. We believe that the constraints of the BEP-Model will be a useful form of design documentation, serving as a link between the geometry and the desired behavior. The constraints might, for example, be used to prevent subsequent redesign efforts from modifying the geometry in a way that compromises hard-won design features in the original design.

CONCLUSION

This work is clearly at an early stage; we have yet to determine how well our techniques will scale to design problems that are more complex than the working examples reported here. Even so, we have successfully used the program on three design problems: the circuit breaker, the yoke and rotor, and the firing mechanism from a single action revolver. We have demonstrated that SKETCHIT can generate multiple families of designs from a single sketch and that it can repair a limited range of flaws in the initial design. One reason this work is important is that sketches are ubiquitous in design. They are a convenient and efficient way to both capture and communicate design information.
By working directly from a sketch, SKETCHIT takes us one step closer to CAD tools that speak the engineer's natural language. Given the intimate connection between shape and behavior, design of mechanical artifacts is typically conceived of as the modification of shape to achieve behavior. But if changes in shape are attempts to change behavior, and if the mapping between shape and behavior is quite complex [1], then, we suggest, why not manipulate a representation of behavior? Our qualitative c-space is just such a representation. We suggest that it is complete and yet offers a far smaller search space. It is complete because any change in shape will produce a c-space that maps to a new qc-space differing from the original by at most changes in relative locations and qualitative slopes. Qc-space is far smaller precisely because it is qualitative: often many changes to the geometry map to a single change in qc-space. Finally, it is an appropriate level of abstraction because it isolates the differences that matter: changes in the relative locations and qualitative slopes of a qc-space are changes in behavior.

REFERENCES

[1] Caine, M. E., 1993, "The Design of Shape from Motion Constraints," MIT AI Lab, TR 1425, September.
[2] Chase, K. W. and Parkinson, A. R., 1991, "A Survey of Research in the Application of Tolerance Analysis to the Design of Mechanical Assemblies," Research in Engineering Design, Vol. 3, pp. 23-37.
[3] Erdman, A. and Sandor, G., 1984, Mechanism Design: Analysis and Synthesis, Vol. 1, Prentice-Hall, Inc., NJ.
[4] Faltings, B., 1990, "Qualitative Kinematics in Mechanisms," Artificial Intelligence, Vol. 44, pp. 89-119.
[5] Faltings, B., 1992, "A Symbolic Approach to Qualitative Kinematics," Artificial Intelligence, Vol. 56, pp. 139-170.
[6] Faltings, B. and Sun, K., 1995, "FAMING: Supporting Innovative Mechanism Shape Design," CAD.
[7] Forbus, K., Nielsen, P., and Faltings, B., 1991, "Qualitative Spatial Reasoning: The CLOCK Project," Northwestern Univ., The Institute for the Learning Sciences, TR #9.
[8] Joskowicz, L. and Addanki, S., 1988, "From Kinematics to Shape: An Approach to Innovative Design," Proceedings AAAI-88, pp. 347-352.
[9] Joskowicz, L., Sacks, E., and Srinivasan, V., 1995, "Kinematic Tolerance Analysis," 3rd ACM Symposium on Solid Modeling and Applications, Utah.
[10] Kota, S. and Chiou, S., 1992, "Conceptual Design of Mechanisms Based on Computational Synthesis and Simulation of Kinematic Building Blocks," Research in Engineering Design, Vol. 4, #2, pp. 75-88.
[11] Sacks, E. and Joskowicz, L., 1993, "Automated Modeling and Kinematic Simulation of Mechanisms," CAD, Vol. 25, #2, Feb., pp. 106-118.
[12] Stahovich, T., 1996, "SKETCHIT: a Sketch Interpretation Tool for Conceptual Mechanical Design," MIT AI Lab, TR 1573, March.
[13] Stallman, R. and Sussman, G., 1976, "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis," MIT AI Lab, TR 380.
[14] Subramanian, D., and Wang, C., 1993, "Kinematic Synthesis with Configuration Spaces," The 7th International Workshop on Qualitative Reasoning about Physical Systems, May, pp. 228-239.
[15] Ulrich, K., 1988, "Computation and Pre-parametric Design," MIT AI Lab, TR-1043.
[16] Welch, R. V. and Dixon, J. R., 1994, "Guiding Conceptual Design Through Behavioral Reasoning," Research in Engineering Design, Vol. 6, pp. 169-188.
Tree-bank Grammars

Eugene Charniak
Department of Computer Science, Brown University
Providence RI 02912-1910
ec@cs.brown.edu

Abstract

By a "tree-bank grammar" we mean a context-free grammar created by reading the production rules directly from hand-parsed sentences in a tree bank. Common wisdom has it that such grammars do not perform well, though we know of no published data on the issue. The primary purpose of this paper is to show that the common wisdom is wrong. In particular, we present results on a tree-bank grammar based on the Penn Wall Street Journal tree bank. To the best of our knowledge, this grammar outperforms all other non-word-based statistical parsers/grammars on this corpus. That is, it outperforms parsers that consider the input as a string of tags and ignore the actual words of the corpus.

Introduction

Recent years have seen many natural-language processing (NLP) projects aimed at producing grammars/parsers capable of assigning reasonable syntactic structure to a broad swath of English. Naturally, judging the creations of your parser requires a "gold standard," and NLP researchers have been fortunate to have several corpora of hand-parsed sentences for this purpose, of which the so-called "Penn tree-bank" [7] is perhaps the best known. It is also the corpus used in this study. (In particular, we used the Wall Street Journal portion of the tree bank, which consists of about one million words of hand-parsed sentences.) However, when a convenient standard exists, the research program subtly shifts: the goal is no longer to create any-old parser, but rather to create one that mimics the Penn tree-bank parses. Fortunately, while there is no firm NLP consensus on the exact form a syntactic parse should take, the Penn trees are reasonably standard and disagreements are usually about less common, or more detailed, features. Thus the attempt to find Penn-style trees seems a reasonable one, and this paper is a contribution to this effort.
Of those using tree banks as a starting point, a significant sub-community is interested in using them to support supervised learning schemes so that the grammar/parser can be created with minimal human intervention [1,2,5,6,8]. The benefits of this approach are twofold: learning obviates the need for grammar writers, and such grammars may well have better coverage (assign parses to more sentences) than the hand-tooled variety. At any rate, this is the game we have chosen.

Figure 1: A simple parsed entry in a tree-bank: (S (NP (pron She)) (VP (vb heard) (NP (dt the) (nn noise))))

Now the simplest way to "learn" a context-free grammar from a tree-bank is to read the grammar off the parsed sentences. That is, we can read the following rules off the parsed sentence in Figure 1:

S → NP VP
NP → pron
VP → vb NP
NP → dt nn

We call grammars obtained in this fashion "tree-bank grammars." It is common wisdom that tree-bank grammars do not work well. We have heard this from several well-known researchers in the statistical NLP community, and the complete lack of any performance results on such grammars suggests that, if they have been researched, the results did not warrant publication. The primary purpose of this paper is to refute this common wisdom. The next section does this by presenting some results for a tree-bank grammar. Section 3 compares these results to prior work and addresses why our results differ from the common expectations.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

The parser used in our experiments is, for the most part, a standard chart parser. It does differ from the standard, however, in two ways. One is an efficiency matter: we improved its ability to search for the most probable parse. This is discussed briefly in section 3 as well. The second difference is more unusual.
On impressionistic evidence, we have come to believe that standard PCFGs do not match English's preference for right-branching structures. In section 4 we present some ideas on how this might be corrected and show how these ideas contribute to the performance results of section 2.

The Experiment

We used as our tree bank the Penn parsed Wall Street Journal corpus, release 2.¹ We divided the sentences into two separate corpora, about 100,000 words for testing and about 1,000,000 words for training. We ignored all sentences in the testing data of length greater than 40 because of processing-time considerations; at any rate, the actual number of such sentences is quite low, as the overall average sentence length is about 22 words and punctuation. Of the 100,000 words of testing data, half were used for preliminary testing and the other half for "official" testing, the results reported here. With the exception of the right-bracketing bias discussed later, the training was particularly simple. We obtained a context-free grammar (CFG) by reading the rules off all the sentences in the training data. Trace elements indicated in the parse were ignored. To create a probabilistic CFG, a PCFG, we assigned a probability to each rule by observing how often it was used in the training corpus. Let |r| be the number of times rule r occurred in the parsed training corpus and x(r) be the non-terminal that it expands. Then the probability assigned to r is

p(r) = |r| / Σ_{r' ∈ {r' | x(r') = x(r)}} |r'|    (1)

After training we test our parser/grammar on the test data. The input to the tester is the parsed sentence with each word assigned its (presumably) correct part of speech (or tag). Naturally the parse is ignored by the parser and only used to judge the parser's output. Also, our grammar does not use lexical information, but only the tags.
Thus the actual words of the sentence are irrelevant as far as our parser is concerned; it only notices the tag sequence specified by the tree-bank. For example, the sentence in Figure 1 would be "pron vb dt nn." We used as our set of non-terminals those specified in the tree-bank documentation, which is roughly the set specified in [7]. It was necessary to add a new start symbol, S1, as all the parses in our version of the tree bank have the following form:

((S (NP The dog) (VP chewed (NP the bone)) .))

Note the topmost unlabeled bracketing with the single S subconstituent, but no label of its own. We handled such cases by labeling this bracket S1.² We use the full set of Penn-tree-bank terminal parts of speech augmented by two new parts of speech, the auxiliary verb categories aux and auxg (an aux in the "ing" form). We introduced these by assigning all occurrences of the most common aux-verbs (e.g., have, had, is, am, are, etc.) to their respective categories. The grammar obtained had 15953 rules, of which only 6785 occurred more than once. We used all the rules, though we give some results in which only a subset are used. We obtained the most probable parse of each sentence using the standard extension of the HMM Viterbi algorithm to PCFGs. We call this parse the map (maximum a posteriori) parse.

¹An earlier draft of this paper was based upon a preliminary version of this corpus. As this earlier version was about one-third the size and somewhat less "clean," this version of the paper sports (a) a larger tree-bank grammar (because of more training sentences), and (b) somewhat better results (primarily because of the cleaner test data).

Figure 2: Parsing results for the tree-bank grammar

Sentence Lengths  Average Length  Precision  Recall  Accuracy
2-12              8.0             91.5       89.1    96.9
2-16              11.5            89.6       87.1    95.0
2-20              13.9            87.3       84.9    92.9
2-25              16.3            85.5       83.3    91.2
2-30              18.8            83.6       81.6    89.7
2-40              22.0            82.0       80.0    88.0
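The training procedure, reading rules off the parses and assigning each rule the relative-frequency probability of Equation 1, can be sketched as follows. This is our own toy illustration using the tag-level parse of Figure 1, not the actual Penn trees, and the tuple tree encoding is an assumption made for the sketch:

```python
from collections import defaultdict

def read_rules(tree, counts):
    """Read CFG rules off a parse tree given as nested tuples
    (label, child, child, ...); leaves are pre-terminal tags,
    since the parser sees only the tag sequence."""
    label, children = tree[0], tree[1:]
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    counts[(label, rhs)] += 1
    for child in children:
        if not isinstance(child, str):
            read_rules(child, counts)

def train_pcfg(trees):
    """Relative-frequency estimate of Equation 1:
    p(r) = |r| / sum of |r'| over rules r' with the same LHS."""
    counts = defaultdict(int)
    for tree in trees:
        read_rules(tree, counts)
    lhs_total = defaultdict(int)
    for (lhs, _), n in counts.items():
        lhs_total[lhs] += n
    return {rule: n / lhs_total[rule[0]] for rule, n in counts.items()}

# The parse of Figure 1 ("She heard the noise"), tags only:
tree = ("S", ("NP", "pron"), ("VP", "vb", ("NP", "dt", "nn")))
pcfg = train_pcfg([tree])
# NP expands two ways in this tiny corpus, so each NP rule gets 0.5
```

On the real training corpus the same two passes (counting, then normalizing per left-hand side) yield the 15953-rule grammar described above.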
We then compared the map parse to the one given in the tree-bank testing data. We measured performance by three observations:

1. precision: the percentage of all non-terminal bracketings appearing in map parses that also appear as a non-terminal bracketing in the corresponding tree-bank parse,

2. recall: the percentage of all non-empty non-terminal bracketings from the tree bank that also appeared as non-terminal bracketings in the map parse, and

3. accuracy: the percentage of all bracketings from the map parses that do not cross over the bracketings in the tree-bank parse.

The results obtained are shown in Figure 2. At about sixteen thousand rules, our grammar is rather large. We also ran some tests using only the subset of rules that occurred more than once. As noted earlier, this reduced the number of rules in the grammar to 6785. Interestingly, this reduction had almost no impact on the parsing results. Figure 3 gives first the results for the full grammar followed by the results with the 6785-rule subset; the differences are small.

²One interesting question is whether this outermost bracketing should be counted when evaluating the precision and recall of the grammar against the tree-bank. We have not counted it in this paper. Note that this bracketing always encompasses the entire sentence, so it is impossible to get wrong. Including it would improve our results by about 1%, i.e., precision would increase from the current 82% to about 83%.

Figure 3: Parsing results for a reduced tree-bank grammar

Sentence Lengths  Grammar Size  Precision  Recall  Accuracy
2-16              Full          89.6       87.1    95.0
2-16              Reduced       89.3       87.2    94.9
2-25              Full          85.5       83.3    91.2
2-25              Reduced      85.1       83.3    91.1
2-40              Full          82.0       80.0    88.0
2-40              Reduced       81.6       80.0    87.8

Figure 4: Accuracy vs. average sentence length for several parsers: the tree-bank grammar, the PCFG of [5], the transformational parser of [2], and the PCFG of [8]
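The three measures can be computed directly from sets of labeled bracketings. The sketch below is our own illustration, with constituents represented as (label, start, end) triples and "crossing" taken as partial overlap of spans (neither containment nor disjointness):

```python
def crosses(span_a, span_b):
    """True if the two (start, end) spans partially overlap."""
    (i, j), (k, l) = span_a, span_b
    return (i < k < j < l) or (k < i < l < j)

def evaluate(map_brackets, gold_brackets):
    """map_brackets, gold_brackets: sets of (label, start, end).
    Returns (precision, recall, accuracy) as percentages."""
    correct = len(map_brackets & gold_brackets)
    precision = 100.0 * correct / len(map_brackets)
    recall = 100.0 * correct / len(gold_brackets)
    gold_spans = {(s, e) for _, s, e in gold_brackets}
    # accuracy counts map bracketings that cross no gold bracketing
    non_crossing = [b for b in map_brackets
                    if not any(crosses((b[1], b[2]), g) for g in gold_spans)]
    accuracy = 100.0 * len(non_crossing) / len(map_brackets)
    return precision, recall, accuracy

# Gold bracketing for "She heard the noise" (words 0-3, punctuation 3-4 dropped)
gold = {("S", 0, 4), ("NP", 0, 1), ("VP", 1, 4), ("NP", 2, 4)}
```

A map parse proposing ("NP", 1, 3) would both miss the gold ("NP", 2, 4) and cross it, so it hurts all three scores; a merely mislabeled span hurts precision and recall but not accuracy.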
Discussion

To put the experimental results into perspective it is useful to compare them to previous results on Wall Street Journal data. Figure 4 compares the accuracy figures for our tree-bank grammar with those of three earlier grammars/parsers that also used Wall Street Journal text for testing purposes. We compare only accuracy figures because the earlier work did not give precision and recall figures. It seems clear that the tree-bank grammar is more accurate than the others, particularly when the average sentence length increases, i.e., when longer sentences are allowed into the testing corpus. The only data point that matches our current results is one for an earlier grammar of ours [5], and that only for very short sentences. This is not to say, however, that there are no better grammars/parsers. Magerman [6] reports precision and accuracy figures of 86% for WSJ sentences of length 40 and less. The difference is that Magerman's parser uses statistics based upon the actual words of the sentence, while ours and the others shown in Figure 4 use only the tags of the words. We believe this shows the importance of including lexical information, a point to which we return below. Next we turn to the discrepancy between our results and the prevailing expectations. Roughly speaking, one can identify five reasons why a parser does not identify the "correct" parse for a sentence:

1. the necessary rules are not in the grammar,
2. the rules are there, but the probabilities are incorrect,
3. the probabilities are correct, but the tag sequence by itself does not provide sufficient information to select the correct parse,
4. the information is sufficient, but because the parser could not consider all of the possible parses, it did not find the correct parse,
5. it found the correct parse, but the tree-bank "gold standard" was wrong (or the correct parse is simply not clear).

Of these, (3) and (5) are important but not relevant to the current discussion.
Of the rest, we believe that (1) is a major component of the low expectations for tree-bank grammars. Certainly it was our major concern. Penn-style trees tend to be rather shallow, and the 40-odd parts of speech allow many possible combinations. For example, consider the NP "the $200 to $400 price range", which has the tag sequence dt $ cd to $ cd nn nn. Our tree-bank grammar does not have the corresponding NP rule (or any reasonable combination of rules as far as we can tell) and thus could not assign a correct parse to a sentence that contained this NP. For this reason we gave some thought to how new rules might be introduced and assigned non-zero probability. Indeed, we started on this work because we believed we had an interesting way to do this. In the event, however, no such complications proved necessary. First, our grammar was able to parse all of the test sentences. Second, it is not too hard to show that coverage is not a first-order problem. In retrospect, our concerns about coverage were not well thought out because of a second property of our tree-bank grammar, its extreme overgeneration. In particular, the following fact is true:

Let x be the set of the tree-bank parts of speech minus the following parts of speech: forward and backward single quote mark (neither of which occurred in our corpus), sym (symbol), uh (interjection), . (final punctuation), and ). Any string in x* (where "*" is the normal Kleene star operator) is a legitimate prefix to a sentence in the language of our tree-bank grammar, and furthermore, any non-terminal may start immediately following x*.

Figure 5: Parsing results for the tree-bank grammar

Sentence Lengths  Data Used  Precision  Recall  Accuracy
2-16              Testing    89.6       87.1    95.0
2-16              Training   90.7       88.6    95.4
2-25              Testing    85.5       83.3    91.2
2-25              Training   86.7       84.0    91.6
2-40              Testing    82.0       80.0    88.0
2-40              Training   83.7       81.1    88.6
In other words, our grammar effectively rules out no strings at all, and every possible part of speech can start at almost any point in the sentence. The proof of this fact is by induction on the length of the string and is straightforward but tedious.³

³So tedious that after proving this fact for the tree-bank grammar used in the first draft of this paper, we could not muster the enthusiasm necessary to confirm it for the current grammar. However, since the new grammar is larger than that in the earlier draft, the above theorem or a similar one will surely hold.

Of course, that our grammar comes up with some parse for a sentence does not mean that it is immune to missing rules. However, we can show that possible missing rules are not a first-order problem for our grammar by applying it to sentences from the training corpus. This gives an upper bound on the performance we can expect when we have all of the necessary rules (and the correct probabilities). The results are given in Figure 5. Looking at the data for all sentences of length less than or equal to 40, we see that having all of the necessary rules makes little difference. We noted earlier that the tree-bank grammar not only overgenerates, but also places almost no constraints on what part of speech may occur at any point in the sentence. This fact suggests a second reason for the bad reputation of such grammars: they can be hard on parsers. We noticed this when, in preliminary testing on the training corpus, a significant number of sentences were not parsed, despite the fact that our standard parser used a simple best-first mechanism. That is, the parser chooses the next constituent to work on by picking the one with the highest "figure of merit." In our case this is the geometric mean of the inside probability of the constituent. Fortunately, we have also been working on improved best-first chart parsing and were able to use some new
techniques on our tree-bank grammar. We achieved the performance indicated in Figure 2 using the following figure of merit for a constituent N^i_{j,k}, that is, a constituent headed by the ith non-terminal, which covers the terms (parts of speech) t_j ... t_{k-1}:

p(N^i_{j,k} | t_{0,n}) ≈ p(N^i | t_{j-1}) p(t_{j,k} | N^i) p(t_k | N^i) / p(t_{j,k+1})    (2)

Here p(t_{j,k+1}) is the probability of the sequence of terms t_j ... t_k and is estimated by a tri-tag model, p(t_{j,k} | N^i) is the inside probability of N^i_{j,k} and is computed in the normal fashion (see, e.g., [4]), and p(N^i | t_{j-1}) and p(t_k | N^i) are estimated by gathering statistics from the training corpus. It is not our purpose here to discuss the advantages of this particular figure of merit (but see [3]). Rather, we simply want to note the difficulty of obtaining parses, not to mention high-probability parses, in the face of extreme ambiguity. It is possible that some of the negative "common wisdom" about tree-bank grammars stems from this source.

Right-branching Bias

Earlier we noted that we made one modification to our grammar/parser other than the purely efficiency-related ones discussed in the last section. This modification arose from our long standing belief that our context-free parsers seemed, at least from our non-systematic observations, to tend more toward center-embedding constructions than is warranted in English. It is generally recognized that English is a right-branching language. For example, consider the following right-branching bracketing of the sentence "The cat licked several pans."

( (The (cat (licked (several pans)))) .)

While the bracketing starting with "cat" is quite absurd, note how many of the bracketings are correct. This tendency has been exploited by Brill's [2] "transformational parser," which starts with the right-branching analysis of the sentence and then tries to improve on it. On the other hand, context-free grammars have no preference for right-branching structures.
Indeed, those familiar with the theory of computation will recognize that the language aⁿbⁿ, the canonical center-embedded language, is also the canonical context-free language. It seemed to us that a tree-bank grammar, because of the close connection between the "gold-standard" correct parses and the grammar itself, offered an opportunity to test this hypothesis. As a starting point in our analysis, note that a right-branching parse of a sentence has all of the closing parentheses just prior to the final punctuation. We call constituents that end just prior to the final punctuation "ending constituents" and the rest "middle constituents." We suspect that our grammar has a smaller propensity to create ending constituents than is warranted by correct parses. If this is the case, we want to bias our probabilities to create more ending constituents and fewer middle ones. The "unbiased" probabilities are those assigned by the normal PCFG rules for assigning probabilities:

P(π) = Π_{c ∈ π} p(rule(c))    (3)

Here π is a parse of the tag sequence, c is a non-terminal constituent of this parse, and rule(c) is the grammar rule used to expand this constituent in the parse. Assume that our unbiased parser makes x percent of the constituents ending constituents whereas the correct parses have y percent, and that conversely it makes u percent of the constituents middle constituents whereas the correct parse has v percent. We hypothesized that y > x and u > v. Furthermore, it seems reasonable to bias the probabilities to account for the underproduction of ending constituents by dividing out by x to get an "uninfluenced" version and then multiplying by the correct probability y to make the influence match the reality (and similarly for middle constituents).
This gives the following equation for the probability of a parse:

P(π) = Π_{c ∈ π} p(rule(c)) × { y/x if c is ending; v/u otherwise }    (4)

Note that the deviation of this equation from the standard context-free case is heuristic in nature: it derives not from any underlying principles, but rather from our intuition. The best way to understand it is simply to note that if the grammar tends to underestimate the number of ending constituents and overestimate middle constituents, the above equation will multiply the former by y/x, a number greater than one, and the latter by v/u, a number less than one. Furthermore, if we assume that on the average the total number of constituents is the same in both the map parse and the tree-bank parse (a pretty good assumption), and that y and v (the numbers for the correct parses) are collected from the training data, we need collect only one further number, which we have chosen as the ending factor ε = y/x. To test our theory, we estimated ε from some held-out data. It came out 1.2 (thus confirming, at least for this test sample, our hypothesis that the map parses would underestimate the number of ending constituents). We modified our parse probability equation to correspond to Equation 4. The data we reported earlier is the result. Not using this bias yields the "Unbiased" data shown here:

             Precision  Recall  Accuracy
With bias    82.0       80.0    88.0
Unbiased     79.6       77.3    85.4
Difference   2.4        2.7     2.6

The data is for sentences of lengths 2-40. The differences are not huge, but they are significant, both in the statistical sense and in the sense that they make up a large portion of the improvement over the other grammars in Figure 4. Furthermore, the modification required to the parsing algorithm is trivial (a few lines of code), so the improvement is nearly free. It is also interesting to speculate whether such a bias would work for grammars other than tree-bank grammars. On the one hand, the arguments that lead one to suspect a problem with context-free grammars are not peculiar to tree-bank grammars. On the other, mechanisms like counting the percentage of ending constituents assume that the parser's grammar and that of the gold standard are quite similar, as otherwise one is comparing incomparables. Some experimentation might be warranted.

Conclusion

We have presented evidence that tree-bank grammars perform much better than one might at first expect and, in fact, seem to outperform other non-word-based grammars/parsers. We then suggested two possible reasons for the mistaken impression of tree-bank grammars' inadequacies. The first of these is the fear that missing grammar rules will prove fatal. Here we observed that our grammar was able to parse all of our test data, and by reparsing the training data have shown that the real limits of the parser's performance must lie elsewhere (probably in the lack of information provided by the tags alone). The second possible reason behind the mistaken current wisdom is the high level of ambiguity of Penn tree-bank grammars. The ambiguity makes it hard to obtain a parse because the number of possible partial constituents is so high, and similarly makes it hard to find the best parse even should one parse be found. Here we simply pointed to some work we have done on best-first parsing and suggested that this may have tamed this particular problem. Last, we discussed a modification to the probabilities of the parses to encourage more right-branching structures and showed how this led to a small but significant improvement in our results. We also noted that the improvement came at essentially no cost in program complexity. However, because of the informational poverty of tag sequences, we recognize that context-free parsing based only upon tags is not sufficient for high precision, recall, and accuracy. It seems clear to us that we need to include lexical items in the information mix upon which we base our statistics. Certainly the 86% precision and recall achieved by Magerman [6] supports this contention. On the other hand, [6] abjures grammars altogether, preferring a more complicated (or at least, more unusual) mechanism that, in effect, makes up the rules as it goes along. We would suggest that the present work, with its accuracy and recall of about 81%, indicates that the new grammatical mechanism is not the important thing in those results. That is to say, we estimate that introducing word-based statistics on top of our tree-bank grammar should be able to make up the 5% gap. Showing this is the next step of our research.

Acknowledgements

This research was supported in part by NSF grant IRI-9319516.

References

1. Bod, R. Using an annotated language corpus as a virtual stochastic grammar. In Proceedings of the Eleventh National Conference on Artificial Intelligence. AAAI Press/MIT Press, Menlo Park, 1993, 778-783.
2. Brill, E. Automatic grammar induction and parsing free text: a transformation-based approach. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics. 1993, 259-265.
3. Caraballo, S. and Charniak, E. Figures of merit for best-first probabilistic chart parsing. Department of Computer Science, Brown University, Technical Report, forthcoming.
4. Charniak, E. Statistical Language Learning. MIT Press, Cambridge, 1993.
5. Charniak, E. Parsing with context-free grammars and word statistics. Department of Computer Science, Brown University, Technical Report CS-95-28, 1995.
6. Magerman, D. M. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. 1995, 276-283.
7. Marcus, M. P., Santorini, B. and Marcinkiewicz, M. A. Building a large annotated corpus of English: the Penn treebank.
Computational Linguistics 19 (1993), 313-330.
8. Pereira, F. and Schabes, Y. Inside-outside reestimation from partially bracketed corpora. In 27th Annual Meeting of the Association for Computational Linguistics. ACL, 1992, 128-135.
Left-corner Unification-based

Steven L. Lytinen and Noriko Tomuro
DePaul University
School of Computer Science, Telecommunications and Information Systems
243 S. Wabash Ave.
Chicago IL 60604
lytinen@cs.depaul.edu

Abstract

In this paper, we present an efficient algorithm for parsing natural language using unification grammars. The algorithm is an extension of left-corner parsing, a bottom-up algorithm which utilizes top-down expectations. The extension exploits unification grammar's uniform representation of syntactic, semantic, and domain knowledge, by incorporating all types of grammatical knowledge into parser expectations. In particular, we extend the notion of the reachability table, which provides information as to whether or not a top-down expectation can be realized by a potential subconstituent, by including all types of grammatical information in table entries, rather than just phrase structure information. While our algorithm's worst-case computational complexity is no better than that of many other algorithms, we present empirical testing in which average-case linear time performance is achieved. Our testing indicates this to be much improved average-case performance over previous left-corner techniques.

Introduction

A family of unification-based grammars has been developed over the last ten years, in which the trend has been to represent syntactic and semantic information more uniformly than in previous grammatical formalisms. In these grammars, many different types of linguistic information, including at least some kinds of syntactic and semantic constraints, are encoded as feature structures. In the most extreme versions, such as HPSG (Pollard and Sag, 1994), and our own previous work (Lytinen, 1992), feature structures are used to encode all syntactic and semantic information in a completely uniform fashion. Standard approaches to unification-based parsing do not reflect this uniformity of knowledge representation.
Often a unification-based parser is implemented using an extension of context-free parsing techniques, such as chart parsing or left-corner parsing. The context-free (phrase structure) component of the grammar is used to drive the selection of rules to apply, and the additional feature equations of a grammar rule are applied afterward. The result remains a syntax-driven approach, in which in some sense semantic interpretation (and even the application of many syntactic constraints) is performed on the tree generated by the context-free component of the unification grammar. This standard approach to unification-based parsing is not efficient. Worst-case complexity must be as bad as that of context-free parsing (O(n^3)), and perhaps worse, due to the additional work of performing unifications. Empirical examinations of unification-based parsers have indicated nonlinear average-case performance as well (Shann, 1991; Carroll, 1994). Other popular parsing algorithms, such as Tomita's algorithm (Tomita, 1986), also fail to achieve average-case linear performance, even without the inclusion of semantic interpretation.

Our hypothesis is that a uniform approach to processing will result in a more efficient parsing algorithm. To test this hypothesis, we have developed a further extension of left-corner parsing for unification grammars. The extension exploits unification grammar's uniform representation, by incorporating all types of grammatical knowledge into parser expectations. In particular, we have extended the notion of the reachability table, which provides information as to whether or not a top-down expectation can be realized by a potential subconstituent, by including all types of grammatical information in table entries, rather than just phrase structure information.[1] We have implemented the extended left-corner parsing algorithm within our unification-based NLP system, called LINK (Lytinen, 1992).
To evaluate the efficiency of our algorithm, we have tested LINK on a corpus of example sentences, taken from the Fifth Message Understanding Competition (MUC-5) (Sundheim, 1993). This corpus consists of a set of newspaper articles describing new developments in the field of microelectronics. Since we competed in MUC-5 using a previous version of LINK, we were able to test our left-corner algorithm using a knowledge base that was developed independent of the algorithm, and compare its performance on this corpus directly to the performance of more standard approaches to unification-based parsing. A regression analysis of the data indicates that our algorithm has achieved linear average-case performance on the MUC-5 corpus, a substantial improvement over other unification-based parsing algorithms.

[1] The extended reachability table will be referred to as a reachability net, since the additional complexity of table entries requires it to be implemented as a discrimination network.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

This paper is organized as follows: first we present the uniform knowledge representation used in LINK to represent syntactic, semantic, and domain knowledge. We then discuss LINK's parsing algorithm. Finally, we present results of empirical testing, and discuss its implications.

LINK's Knowledge Representation

All knowledge is encoded in LINK's unification grammar in the form of feature structures. A feature consists of a name and a value. Values may either be atomic or may themselves be feature structures. A feature structure may also have an atomic label. Thus, each rule in LINK's knowledge base can be thought of as a directed acyclic graph (DAG), whose edges correspond to feature names, and whose nodes correspond to feature values. Figure 1 shows a few simple LINK rules.

[Figure 1: Example LINK grammar rules]
The S rule encodes information about one possible structure of a complete sentence. The cat feature of the root indicates that this rule is about the syntactic category S. The numbered arcs lead to subconstituents, whose syntactic categories are NP and VP respectively. Implicit in the numbering of these features is the order in which the subconstituents appear in text. In addition, this rule indicates that the VP functions as the head of the sentence, that the NP is assigned as the subj of the sentence, and that the NP and VP share the same agr feature (which encodes the number and person features which must agree between a verb and its subject). Each of the other two rules displayed in figure 1 describes one possible structure for an NP and a VP, respectively. Other rules exist for the other possible structures of these constituents.

The purpose of the head feature is to bundle a group of other features together. This makes it easier for a constituent to inherit a group of features from one of its subconstituents, or vice versa. In this case, the agr feature is passed up from the noun and verb to the NP and VP constituents, to be checked for compatibility in the S rule. In the other direction, the subj feature is passed down to the verb, where its semantics is checked for compatibility with the semantics of the verb (see figure 2).

[Figure 2: Example LINK lexical entries and frames]

Other rules in LINK's knowledge base encode lexical and domain information, such as those in figure 2. Lexical rules typically provide many of the feature values which are checked for compatibility in the grammar rules. For example, the entry for "ate" indicates that this verb is transitive, and thus may be used with the VP rule in figure 1. Lexical items also provide semantic information, under the sem feature. Thus, "ate" refers to a frame called EAT, and "apple" refers to a FOOD.
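One way to picture this encoding (a hypothetical sketch in Python, not LINK's actual implementation) is as nested dictionaries: edges are feature names, values are atoms or further structures, and reentrancy, such as the shared agr feature of the S rule, is expressed by aliasing one dictionary object under both paths:

```python
# Hypothetical encoding of the S rule of Figure 1 as nested dicts.
agr = {}  # one shared (reentrant) node for subject/verb agreement

s_rule = {
    "cat": "S",
    "1": {"cat": "NP", "head": {"agr": agr}},  # first subconstituent
    "2": {"cat": "VP", "head": {"agr": agr}},  # second subconstituent
}
s_rule["subj"] = s_rule["1"]   # the NP is assigned as the subj
s_rule["head"] = s_rule["2"]   # the VP functions as the head

# Because both paths reach the same agr node, constraining agreement
# through the NP automatically constrains it through the VP as well:
s_rule["1"]["head"]["agr"]["num"] = "sing"
assert s_rule["2"]["head"]["agr"]["num"] == "sing"
```

The aliasing trick is what makes this a DAG rather than a tree: two feature paths can lead to one and the same node.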
The operation responsible for checking compatibility of features is unification, which can be thought of as the combining of information from two DAGs. The result of unifying two DAGs is a DAG with all features from both of the original DAGs. Two DAGs fail to unify if they share a feature with incompatible values.

Domain knowledge is encoded in frame definition rules, such as the EAT frame. A node whose cat feature has a frame definition must unify with the definition. As a result, semantic type-checking is performed during parsing, resulting in the construction of a semantic interpretation. In these example rules, since the lexical entry for "ate" unifies the subj of the verb with the actor of its semantic representation, this means the subject of "ate" must be HUMAN.

Note that LINK's knowledge base is completely uniform. All rules, including grammar rules, lexical entries, and frame definitions, are represented as DAGs. Moreover, within a DAG there is no structural distinction between syntactic and semantic information. While certain naming conventions are used in the rules for different kinds of features, such as using the cat feature for the syntactic category of a constituent and the sem feature for its semantic representation, these conventions are only for mnemonic purposes, and play no special role in parsing.

Parsing

The Reachability Net

Context-free left-corner parsers generate top-down expectations in order to filter the possible constituents that are constructed via bottom-up rule application. In order to connect top-down and bottom-up information, a reachability table is used to encode what constituents can possibly realize a top-down expectation. The table is constructed by pre-analyzing the grammar in order to enumerate the possible left corner constituents of a particular syntactic category.
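For the context-free backbone, this pre-analysis can be sketched as the reflexive-transitive closure of the left-corner relation. The grammar below is an invented miniature, not LINK's; the sketch only illustrates the closure computation:

```python
# Precompile a context-free reachability (left-corner) table:
# reach[X] = all categories that can appear as the leftmost
# descendant of an X.  Toy grammar for illustration only.
rules = {
    "S":  [["NP", "VP"]],
    "NP": [["DET", "N"], ["N"]],
    "VP": [["V", "NP"], ["V"]],
}

def left_corner_table(rules):
    reach = {cat: {cat} for cat in rules}      # reflexive base case
    changed = True
    while changed:                              # iterate to a fixed point
        changed = False
        for cat, expansions in rules.items():
            for rhs in expansions:
                first = rhs[0]                  # the left corner of this rule
                new = reach.get(first, {first}) - reach[cat]
                if new:
                    reach[cat] |= new
                    changed = True
    return reach

reach = left_corner_table(rules)
assert "DET" in reach["S"]        # a DET can start an S (via NP)
assert "PREP" not in reach["S"]   # nothing reaches PREP from S
```

Retrieval then reduces to a membership test: a bottom-up constituent is worth pursuing only if its category is a possible left corner of the current expectation.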
For example, possible left corners of an NP (noun phrase) might include DET, ADJ, and N (noun), but not PREP. In most left-corner unification-based parsers (e.g., see Carroll, 1994), the reachability table is the same: only the syntactic labels of an expectation and a potential subconstituent are used as indices into the table, which then provides information as to which rules may lead to the satisfaction of the expectation.

In LINK, an extended reachability net is used, in which entire DAGs, rather than just syntactic labels, serve both as indices and entries. During grammar precompilation in LINK, net entries are constructed by connecting each possible expectation (represented as a DAG) with possible constituents that could be found in a sentence to realize the expectation (also DAGs). A net entry is generated for each possible constituent, which is placed in the lc (left corner) arc of the expectation. For example, figure 3 shows the entry for the situation in which a VP is expected and a transitive verb is encountered.

[Figure 3: LINK reachability net entry]

[Figure 4: A portion of the parse of the sentence fragment "John ate ..."]

The use of the reachability net sometimes enables LINK to prune incorrect parses earlier than they otherwise would be. For example, consider the sentence "John slept." After the word "John," the expectation is for a VP to follow. Upon encountering "slept", a standard reachability table would indicate that two possible rules could apply: the VP rule for transitive verbs pictured in figure 1, and a similar rule for intransitive verbs. Application of the transitive rule would result in a unification failure, assuming that "slept" is marked as intransitive, while the intransitive rule would succeed.
In LINK, because net entries contain more than just syntactic category information, only the intransitive verb rule is retrieved in this situation, because the marking of "slept" as intransitive is part of the DAG which is used as an index into the net. Thus, the unification failure is avoided.

Because all features are utilized in retrieval of net entries, semantics can also come into play in the selection of rules. For example, figure 4 shows the VP constituent from the parse of the sentence fragment "John ate ...". At this point, the expectation is for an NP which means FOOD. This semantic information may be used in lexical disambiguation, in the case of an ambiguous noun. For instance, the word "apple" at this point would be immediately disambiguated to mean FOOD (as opposed to COMPUTER) by this expectation. Structural ambiguities may also be immediately resolved as a result of the semantic information in expectations. For example, consider the sentence fragment "The course taught ...". Upon encountering "taught", a standard left-corner parser would attempt to apply at least two grammar rules: the VP rule for transitive verbs (see figure 1), and another rule for reduced relative subclauses. In LINK, assuming the existence of a TEACH frame whose ACTOR should be a HUMAN, the transitive VP rule would not be retrieved from the reachability net, since the semantics of "The course taught" do not agree with the ACTOR constraint of TEACH.

The Parsing Algorithm

At the beginning of the parse of a sentence, LINK constructs an expectation for an S (a complete sentence). As the parse proceeds left-to-right, LINK constructs all possible interpretations that are consistent with top-down expectations at each point in the sentence. A rule is applied as soon as its left corner is found in the sentence, assuming the reachability net sanctions the application of that rule given the current expectations. A single-word lookahead is also used to further constrain the application of rules.

LINK's parsing algorithm extends standard left-corner parsing in the way top-down constraints are propagated down to the lower subconstituents. When a subconstituent is completed (often called a complete edge in chart parsing), it is connected to the current expectation through the lc 1 path. Then, that expectation is used to retrieve the possible rule(s) to apply from the net. If the unification succeeds (creating an active edge with the dot just after the first constituent), the algorithm first checks to see if an expectation for the next word is generated (i.e., there are more constituents to be found after the dot). If there is a new expectation, the iteration stops. Otherwise, the DAG under the lc arc is complete. That DAG is demoted to the lc 1 path, and the process is repeated. This way, the gap between the expectation and the input word is incrementally filled in a bottom-up fashion, while the top-down constraints are fully intact at each level. Thus, the top-down constraints are applied at the earliest possible time.

In LINK, a constituent under the lc arc is only implicitly connected to the expectation (i.e., the expectation is not completed yet). After all the subconstituents under the lc arc are found, if the root DAG and the DAG under its lc arc unify, it means that the expectation has been fully realized. One possible action at this point is to replace the root with its lc arc and continue. This action corresponds to the decision that a constituent is complete.

Some simple examples will illustrate the key aspects of the algorithm. At the beginning of a sentence, the first DAG in figure 5 is constructed if the word "the" is the first word of a sentence. This DAG is matched against entries in the reachability net, retrieving the entry shown. This entry indicates that the NP rule should be applied, resulting in the third DAG shown in figure 5. At this point, the algorithm identifies the N at the end of the lc 2 path as the expectation for the next word.

[Figure 5: Net entries and DAGs constructed while parsing the word "the"]

Empirical Results

To test the performance of our parsing algorithm, we selected a random set of sentences from the MUC-5 corpus, and parsed them using two different versions of LINK. One version used the extended reachability net as described above; the second version used a standard reachability table, in which only phrase structure information was utilized.

Both versions of LINK were run using a pre-existing knowledge base, developed for the MUC-5 competition.[2] Thus, both versions successfully parsed the same set of 131 sentences from the random sample. These 131 sentences formed the basis of the performance analysis.

Performance was analyzed in terms of several factors. First, a left-corner parser can be thought of as performing several "primitive" actions: rule instantiation and subsequent "dot" advancing, indicating the status of a partially matched grammar rule (i.e., how many of the right-hand side constituents of the rule have matched constituents in the sentence). These two actions involve different operations in the implementation.[3] A rule is instantiated when a constituent in the text (either a lexical item or a completed edge) matches with its left-corner child on the right-hand side. This action involves retrieving the rules from the reachability net and unifying the two constituents. On the other hand, when the dot is advanced, the subconstituent only needs to trace the pointer back to the (partially filled) parent DAG which predicted that constituent at the position right after the dot.
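The rule-instantiation step, retrieving candidate rules from the net and then unifying, can be sketched as follows. This is a hypothetical toy, not LINK's implementation: the `trans` feature, the rule names, and the flat net representation are all invented for illustration. It shows how indexing the net with full feature structures (here, transitivity) prunes a doomed rule before it is ever applied:

```python
def unify(a, b):
    """Combine two feature structures (nested dicts, atomic string
    leaves); return the merged structure, or None if incompatible."""
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for feat, val in b.items():
            if feat in merged:
                sub = unify(merged[feat], val)   # shared feature: recurse
                if sub is None:
                    return None                  # incompatible values
                merged[feat] = sub
            else:
                merged[feat] = val               # feature only in b
        return merged
    return a if a == b else None                 # atoms must match exactly

# Toy "reachability net": each entry pairs an index DAG with a rule.
net = [
    ({"cat": "VP", "lc": {"cat": "V", "trans": "yes"}}, "VP -> V NP"),
    ({"cat": "VP", "lc": {"cat": "V", "trans": "no"}},  "VP -> V"),
]

def retrieve(expectation_cat, constituent):
    """Return the rules whose index unifies with the current situation."""
    query = {"cat": expectation_cat, "lc": constituent}
    return [rule for index, rule in net if unify(query, index) is not None]

# "slept" is marked intransitive, so only the intransitive rule is
# retrieved; the failing unification with "VP -> V NP" never happens.
slept = {"cat": "V", "trans": "no"}
assert retrieve("VP", slept) == ["VP -> V"]
```

With a label-only table, both VP rules would have been retrieved and one unification would have been attempted and failed.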
Also, since all the expected features were already propagated down when the prediction was made, the subconstituent can be simply replaced into the rule. For performance monitoring purposes, those two actions are recorded separately.

[Figure 6: Actions vs. sentence length in LINK using extended and standard reachability tables]

Figure 6 shows plots of the number of actions executed during a parse vs. sentence length for LINK using the standard and extended reachability nets. The number of actions also includes failures; i.e., rule instantiations or dot advances which were attempted but in which unification failure occurred (see the discussion of rule failures in the Parsing section). A best regression model analysis[4], using the adjusted R^2 metric, indicates that when using the extended reachability net, LINK achieved linear performance in this respect (R^2 = .599).[5] This is an encouraging result, because parsing time in context-free chart parsing is linearly proportional to the number of edges entered in the chart. When using the standard reachability table, a best regression analysis indicates a small quadratic component to the best-fitting curve (adjusted R^2 = .682 vs. .673 for the best linear model). When comparing best-fit linear models, on average LINK performed 40% more actions using the standard reachability table than when using the extended reachability net.

[Figure 7: CPU time vs. sentence length in LINK using extended and standard reachability tables]

Figure 7 shows plots of CPU time used vs. sentence length for the two versions of LINK. The best regression model in both cases for this variable is quadratic.[6] Thus, the number of primitive actions taken by the parser is not linearly proportional to processing time, as it would be for a context-free parser. Average CPU time is 20% longer with the standard reachability table than with the extended reachability net. Thus, this analysis indicates that we have achieved a considerable speed-up in performance over the standard left-corner technique.[7]

Further analysis indicated that a potential source of nonlinear performance in our system is the need to copy DAGs when multiple interpretations are produced. If the reachability net indicates that more than one rule can be applied at some point in the parse, it is necessary to copy the DAG representing the parse so far, so that the alternate interpretations can be constructed without interference from each other. Indeed, a regression analysis of the number of DAGs generated during a parse vs. sentence length using the extended reachability net indicates that a quadratic model is the best for this variable (R^2 = .637).

To remedy this problem, we re-implemented the version of LINK using the extended reachability net, this time using a more efficient algorithm for copying DAGs. Our approach is similar to the lazy unification algorithm presented in (Godden, 1990). Space constraints prohibit us from describing the copying algorithm in detail. The same set of 131 test sentences was parsed again, and the results were analyzed in a similar fashion. The modified copying algorithm did not affect the number of actions vs. sentence length, since copying had no effect on which rules could or could not be applied. However, it did have a marked effect on the CPU time performance of the system.

[Figure 8: CPU time vs. sentence length in LINK using extended reachability net and improved DAG copying]

Figure 8 shows the plot of CPU time vs. sentence length for the lazy version of LINK. On average, the lazy copying algorithm achieved an additional 43% reduction in average CPU time per parse, and an average total speedup of 54% when compared to the version of LINK which used the standard reachability table. In addition, a regression analysis indicates a linear relationship between CPU time and sentence length for the lazy version of LINK (adjusted R^2 = .726, vs. an adjusted R^2 of .724 for a quadratic model[8]).

[2] In order to improve the coverage of the domain, we added to LINK's original MUC-5 knowledge base for this test.
[3] Both of these actions correspond to the construction of an edge in chart parsing.
[4] In all analyses, best-fitting curves were restricted to those with no constant coefficient (i.e., only curves which pass through the origin). Intuitively, this makes sense when analyzing actions vs. sentence length, since parsing a sentence containing 0 words requires no actions.
[5] Although not shown, performance was also analyzed for a no-lookahead version of this algorithm. The action vs. sentence length result was also linear, but with a much steeper slope.
[6] Intuitively, the best model of CPU time vs. sentence length may contain a constant coefficient, since the algorithm may include some constant-time components; however, when allowing for a constant coefficient, the best regression model results in a negative value for the constant. Thus, we did not allow for constant coefficients in the best models.
[7] We speculate that the difference of the reduction ratio between the number of actions and CPU time comes from the processing overhead of other parts of the system, such as the added complexity of looking up entries in the reachability net.
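The lazy copying algorithm is not described in the text, but the core idea it shares with persistent data structures, namely copying only the nodes along the path being changed while sharing everything else, can be illustrated with a small hypothetical sketch (nested dicts as before; not LINK's or Godden's actual code):

```python
def path_copy(dag, path, value):
    """Persistent update of a nested-dict DAG: copy only the nodes
    along `path`; all structure off the path is shared, not copied."""
    if not path:
        return value
    new = dict(dag)                        # shallow copy of this node only
    child = dag.get(path[0], {})
    new[path[0]] = path_copy(child, path[1:], value)
    return new

parse = {
    "cat": "S",
    "1": {"cat": "NP", "head": {"agr": {"num": "sing"}}},
    "2": {"cat": "VP", "head": {}},
}
# Build an alternate interpretation that differs in a single feature:
alt = path_copy(parse, ["1", "head", "agr", "num"], "plur")
assert alt["1"]["head"]["agr"]["num"] == "plur"
assert parse["1"]["head"]["agr"]["num"] == "sing"   # original untouched
assert alt["2"] is parse["2"]                       # off-path nodes shared
```

Only the four nodes on the updated path are copied, so the cost of keeping alternate interpretations separate grows with the path length rather than the size of the whole parse DAG.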
[8] While the adjusted R^2 figures for the linear and quadratic models are very close, statistical analysis indicates that the quadratic coefficient in the latter model is not significantly different from 0.

Related Work

Efficient Parsing Algorithms

Many previous efforts have been focused on the construction of efficient parsing algorithms. Some deterministic algorithms, such as Marcus' (1980) parser and Register Vector Grammar (Blank, 1989), achieve linear time complexity. However, because linear time is achieved due to the restrictions imposed by determinism, those algorithms consequently limit the generative capacity of the grammar. Our approach, on the other hand, does not limit the generative capacity of our system's unification grammar.

Some nondeterministic algorithms have been developed which utilize efficient encoding techniques. The chart-parsing algorithm uses a chart (or table) to record partial constituents in order to eliminate redundant search. Earley's algorithm (Earley, 1970), a variant of chart parsing, is proven to run in time O(n^3) for general context-free grammars. Tomita's Generalized LR parsing algorithm (GLR) (Tomita, 1986, 1991) uses a precompiled table, an extension of the LR parse table, to guide the search at any given point in the parse. GLR also employs other efficient encoding techniques such as the graph-structured stack and the packed shared forest. However, the worst-case complexity of GLR is proven to be no better than Earley's algorithm (Johnson, 1991).

In (Shann, 1991), the performance of several variations of chart-parsing algorithms is empirically tested and compared. In this report, left-corner parsing (LC) with a top-down filtering strategy ranked the highest, and scored even or better in timing than Tomita's GLR. In particular, top-down filtering seemed to make a significant contribution to reducing the parse time.
The timing results of this report, however, show that neither LC nor GLR achieved linear performance in the average case.

Parsing Algorithms for Unification Grammars

In (Shieber, 1992), a generalized grammar formalism is developed for the class of unification grammars, and an abstract parsing algorithm is defined. This abstract algorithm involves three components: prediction, in which grammar rules are used to predict subsequent constituents that should be found in a sentence; scanning, in which predictions are matched against the input text; and completion, in which predictions are matched against fully realized subconstituents. Shieber leaves the prediction component intentionally vague; depending on the specificity of the predictions generated,[9] the algorithm behaves as a bottom-up parser, a top-down parser, or some combination thereof. On one extreme, if no information is used, the predictor does not propagate any expectations; hence, the algorithm is in essence equivalent to bottom-up parsing. If the predictor limits itself to only the phrase structure information in unification rules, then the algorithm is analogous to traditional (syntax-driven) left-corner parsing. Our algorithm can be characterized as a version of this abstract algorithm in which the most extreme prediction component is used, one in which all possible information is included in the predictions.

[9] A prediction is created after the filtering function p is applied by the predictor.

Top-down Filtering

Shieber (1985) shows how Earley's algorithm can be extended to unification-based grammars, and the extended algorithm in effect gives greater power in performing top-down filtering. He proposes restriction, a function which selects a set of features by which top-down prediction is propagated. By defining the restriction to select more features (e.g.,
subcategorization, gap, or verb form features) than just phrase structure category, those features are used to prune unsuccessful rule applications at the earliest time. Although with a very small example, a substantial effect on parsing efficiency from the use of restriction is reported.

Another approach, taken in (Maxwell and Kaplan, 1994), encodes some (functional) features directly in the context-free symbols (which requires modification of the grammar), thereby allowing those features to be propagated down by the predictor operation of Earley's algorithm. Not only does this strategy enable the early detection of parse failure, it can also help exploit the efficiency of context-free chart parsing (O(n^3)) in unification-based systems. In their report, despite the increased number of rules, the modified grammar showed improved efficiency. Early detection of failure is accomplished in LINK in a more principled way, by simply including all information in reachability net entries rather than deciding in an ad hoc fashion which constraints to encode through subcategorization and which to encode as features.

Conclusion and Future Work

We have presented a unification-based parser which achieves a significant improvement in performance over previous unification-based systems. After incorporating an improved version of DAG copying into the parser, our extended left-corner algorithm achieved average-case linear-time performance on a random sample of sentences from the MUC-5 corpus. This is a significant improvement over standard left-corner parsing techniques used with unification grammars, both in terms of average-case complexity and overall average speed. The improvement is indicated by our own comparative analysis, as well as by comparing our results with empirical testing done by others on standard left-corner parsers and other algorithms such as Tomita's algorithm (e.g., Shann, 1991).
Linear time performance was not achieved without the addition of an improved DAG copying algorithm. Further analysis is required to determine more precisely how much of the improvement in performance is due to the extended reachability net and how much is due to the improved DAG copying. However, our testing indicates that, even without improved copying, the extended reachability net achieves significant improvements in performance as compared to the use of a standard reachability table.

Acknowledgement

The authors would like to thank Joseph Morgan for very useful comments and help on the statistical analysis of the experiment data.

References

Blank, G. (1989). A Finite and Real-Time Processor for Natural Language. Communications of the ACM, 32(10), pp. 1174-1189.

Carroll, J. (1994). Relating complexity to practical performance in parsing with wide-coverage unification grammars. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics.

Earley, J. (1970). An efficient context-free parsing algorithm. Communications of the ACM, 13(2).

Godden, K. (1990). Lazy unification. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, Pittsburgh PA, pp. 180-187.

Johnson, M. (1991). The computational complexity of GLR parsing. In Tomita, 1991, pp. 35-42.

Lytinen, S. (1992). A unification-based, integrated natural language processing system. Computers and Mathematics with Applications, 23(6-9), pp. 403-418.

Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.

Maxwell, J. and Kaplan, R. (1994). The interface between phrasal and functional constraints. Computational Linguistics, 19(4).

Pollard, C. and Sag, I. (1994). Head-driven Phrase Structure Grammar. Stanford, CA: Center for the Study of Language and Information. The University of Chicago Press.

Shann, P. (1991). Experiments with GLR and chart parsing. In Tomita, 1991, pp. 17-34.
Shieber, S. (1985). Using restriction to extend parsing algorithms for complex-feature-based formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, IL, pp. 145-152.

Shieber, S. (1992). Constraint-based Grammar Formalisms. Cambridge, MA: MIT Press.

Sundheim, B. (1993). Proceedings of the Fifth Message Understanding Conference (MUC-5). San Francisco: Morgan Kaufmann Publishers.

Tomita, M. (1986). Efficient Parsing for Natural Language. Boston: Kluwer Academic Publishers.

Tomita, M. (1991). Generalized LR Parsing. Boston: Kluwer Academic Publishers.
Automatically Generating Extraction Patterns from Untagged Text

Ellen Riloff
Department of Computer Science
University of Utah
Salt Lake City, UT 84112
riloff@cs.utah.edu

Abstract

Many corpus-based natural language processing systems rely on text corpora that have been manually annotated with syntactic or semantic tags. In particular, all previous dictionary construction systems for information extraction have used an annotated training corpus or some form of annotated input. We have developed a system called AutoSlog-TS that creates dictionaries of extraction patterns using only untagged text. AutoSlog-TS is based on the AutoSlog system, which generated extraction patterns using annotated text and a set of heuristic rules. By adapting AutoSlog and combining it with statistical techniques, we eliminated its dependency on tagged text. In experiments with the MUC-4 terrorism domain, AutoSlog-TS created a dictionary of extraction patterns that performed comparably to a dictionary created by AutoSlog, using only preclassified texts as input.

Motivation

The vast amount of text becoming available on-line offers new possibilities for conquering the knowledge-engineering bottleneck lurking underneath most natural language processing (NLP) systems. Most corpus-based systems rely on a text corpus that has been manually tagged in some way. For example, the Brown corpus (Francis & Kucera 1982) and the Penn Treebank corpus (Marcus, Santorini, & Marcinkiewicz 1993) are widely used because they have been manually annotated with part-of-speech and syntactic bracketing information. Part-of-speech tagging and syntactic bracketing are relatively general in nature, so these corpora can be used by different natural language processing systems and for different domains. But some corpus-based systems rely on a text corpus that has been manually tagged in a domain-specific or task-specific manner.
For example, corpus-based approaches to information extraction generally rely on special domain-specific text annotations. Consequently, the manual tagging effort is considerably less cost effective because the annotated corpus is useful for only one type of NLP system and for only one domain.

Corpus-based approaches to information extraction have demonstrated a significant time savings over conventional hand-coding methods (Riloff 1993). But the time required to annotate a training corpus is a non-trivial expense. To further reduce this knowledge-engineering bottleneck, we have developed a system called AutoSlog-TS that generates extraction patterns using untagged text. AutoSlog-TS needs only a preclassified corpus of relevant and irrelevant texts. Nothing inside the texts needs to be tagged in any way.

Generating Extraction Patterns from Tagged Text

Related work

In the last few years, several systems have been developed to generate patterns for information extraction automatically. All of the previous systems depend on manually tagged training data of some sort. One of the first dictionary construction systems was AutoSlog (Riloff 1993), which requires tagged noun phrases in the form of annotated text or text with associated answer keys. PALKA (Kim & Moldovan 1993) is similar in spirit to AutoSlog, but requires manually defined frames (including keywords), a semantic hierarchy, and an associated lexicon. Competing hypotheses are resolved by referring to manually encoded answer keys, if available, or by asking a user. CRYSTAL (Soderland et al. 1995) also generates extraction patterns using an annotated training corpus. CRYSTAL relies on both domain-specific annotations plus a semantic hierarchy and associated lexicon.
LIEP (Huffman 1996) is another system that learns extraction patterns but relies on predefined keywords, object recognizers (e.g., to identify people and companies), and human interaction to annotate each relevant sentence with an event type. Cardie (Cardie 1993) and Hastings (Hastings & Lytinen 1994) also developed lexical acquisition systems for information extraction, but their systems learned individual word meanings rather than extraction patterns. Both systems used a semantic hierarchy and sentence contexts to learn the meanings of unknown words.

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

AutoSlog

AutoSlog (Riloff 1996) is a dictionary construction system that creates extraction patterns automatically using heuristic rules. As input, AutoSlog needs answer keys or text in which the noun phrases that should be extracted have been labeled with domain-specific tags. For example, in a terrorism domain, noun phrases that refer to perpetrators, targets, and victims may be tagged. Given a tagged noun phrase and the original source text, AutoSlog first identifies the sentence in which the noun phrase appears. If there is more than one such sentence and the annotation does not indicate which one is appropriate, then AutoSlog chooses the first one. AutoSlog invokes a sentence analyzer called CIRCUS (Lehnert 1991) to identify clause boundaries and syntactic constituents. AutoSlog needs only a flat syntactic analysis that recognizes the subject, verb, direct object, and prepositional phrases of each clause, so almost any parser could be used. AutoSlog determines which clause contains the targeted noun phrase and applies the heuristic rules shown in Figure 1.

PATTERN                   EXAMPLE
<subj> passive-verb       <victim> was murdered
<subj> active-verb        <perp> bombed
<subj> verb infin.        <perp> attempted to kill
<subj> aux noun           <victim> was victim
passive-verb <dobj>¹      killed <victim>
active-verb <dobj>        bombed <target>
infin. <dobj>             to kill <victim>
verb infin. <dobj>        tried to attack <target>
gerund <dobj>             killing <victim>
noun aux <dobj>           fatality was <victim>
noun prep <np>            bomb against <target>
active-verb prep <np>     killed with <instrument>
passive-verb prep <np>    was aimed at <target>

Figure 1: AutoSlog Heuristics

¹In principle, passive verbs should not have direct objects. We included this pattern only because CIRCUS occasionally confused active and passive constructions.

The rules are divided into three categories, based on the syntactic class of the noun phrase. For example, if the targeted noun phrase is the subject of a clause, then the subject rules apply. Each rule generates an expression that likely defines the conceptual role of the noun phrase. In most cases, they assume that the verb determines the role. The rules recognize several verb forms, such as active, passive, and infinitive. An extraction pattern is created by instantiating the rule with the specific words that it matched in the sentence. The rules are ordered so the first one that is satisfied generates an extraction pattern, with the longer patterns being tested before the shorter ones.

As an example, consider the following sentence:

Ricardo Castellar, the mayor, was kidnapped yesterday by the FMLN.

Suppose that "Ricardo Castellar" was tagged as a relevant victim. AutoSlog passes the sentence to CIRCUS, which identifies Ricardo Castellar as the subject. AutoSlog's subject heuristics are tested and the <subj> passive-verb rule fires. This pattern is instantiated with the specific words in the sentence to produce the extraction pattern <victim> was kidnapped. In future texts, this pattern will be activated whenever the verb "kidnapped" appears in a passive construction, and its subject will be extracted as a victim.

AutoSlog can produce undesirable patterns for a variety of reasons, including faulty sentence analysis, incorrect pp-attachment, or insufficient context.
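For illustration, the subject-rule matching and instantiation described above (the Ricardo Castellar example) can be sketched as follows. This is a minimal sketch: the clause representation, rule list, and function names are assumptions for exposition, not the internals of AutoSlog or CIRCUS.

```python
# A few of the subject heuristics from Figure 1, ordered so that longer
# (more specific) patterns are tested before shorter ones, as in AutoSlog.
SUBJECT_RULES = [
    ("<subj> verb infin.", lambda c: c["voice"] == "active" and c.get("infinitive")),
    ("<subj> passive-verb", lambda c: c["voice"] == "passive"),
    ("<subj> active-verb", lambda c: c["voice"] == "active"),
]

def instantiate_subject_pattern(clause):
    """Return an extraction pattern for a clause whose subject is the
    targeted noun phrase, or None if no rule fires. <x> marks the slot
    that will be filled by the extracted noun phrase."""
    for name, test in SUBJECT_RULES:
        if test(clause):
            if name == "<subj> passive-verb":
                return f"<x> was {clause['verb']}"
            if name == "<subj> active-verb":
                return f"<x> {clause['verb']}"
            if name == "<subj> verb infin.":
                return f"<x> {clause['verb']} to {clause['infinitive']}"
    return None

# "Ricardo Castellar ... was kidnapped ...": subject of a passive verb.
clause = {"voice": "passive", "verb": "kidnapped"}
print(instantiate_subject_pattern(clause))  # <x> was kidnapped
```

In future texts the instantiated pattern fires whenever its verb appears in the matching construction, and the slot filler is extracted in the tagged role.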
Therefore a person must manually inspect each extraction pattern and decide which ones should be accepted and which ones should be rejected. This manual filtering process is typically very fast. In experiments with the MUC-4 terrorism domain, it took a user only 5 hours to review 1237 extraction patterns (Riloff 1993). Although this manual filtering process is part of the knowledge-engineering cycle, generating the annotated training corpus is a much more substantial bottleneck.

Generating Extraction Patterns from Untagged Text

To tag or not to tag?

Generating an annotated training corpus is a significant undertaking, both in time and difficulty. Previous experiments with AutoSlog suggested that it took a user about 8 hours to annotate 160 texts (Riloff 1996). Therefore it would take roughly a week to construct a training corpus of 1000 texts. Committing a domain expert to a knowledge-engineering project for a week is prohibitive for most short-term applications.

Furthermore, the annotation task is deceptively complex. For AutoSlog, the user must annotate relevant noun phrases. But what constitutes a relevant noun phrase? Should the user include modifiers or just the head noun? All modifiers or just the relevant modifiers? Determiners? If the noun phrase is part of a conjunction, should the user annotate all conjuncts or just one? Should the user include appositives? How about prepositional phrases? The meaning of simple NPs can change substantially when a prepositional phrase is attached. For example, "the Bank of Boston" is different from "the Bank of Toronto." Real texts are loaded with complex noun phrases that often include a variety of these constructs in a single reference. There is also the question of which references to tag. Should the user tag all references to a person? If not, which ones?
It is difficult to specify a convention that reliably captures the desired information, but not specifying a convention can produce inconsistencies in the data.

To avoid these problems, we have developed a new version of AutoSlog, called AutoSlog-TS, that does not require any text annotations. AutoSlog-TS requires only a preclassified training corpus of relevant and irrelevant texts for the domain.² A preclassified corpus is much easier to generate, since the user simply needs to identify relevant and irrelevant sample texts. Furthermore, relevant texts are already available on-line for many applications and could be easily exploited to create a training corpus for AutoSlog-TS.

AutoSlog-TS

AutoSlog-TS is an extension of AutoSlog that operates exhaustively by generating an extraction pattern for every noun phrase in the training corpus. It then evaluates the extraction patterns by processing the corpus a second time and generating relevance statistics for each pattern. The process is illustrated in Figure 2.

Figure 2: AutoSlog-TS flowchart

In Stage 1, the sentence analyzer produces a syntactic analysis for each sentence and identifies the noun phrases. For each noun phrase, the heuristic rules generate a pattern (called a concept node in CIRCUS) to extract the noun phrase. AutoSlog-TS uses a set of 15 heuristic rules: the original 13 rules used by AutoSlog plus two more: <subj> active-verb dobj and infinitive prep <np>. The two additional rules were created for a business domain from a previous experiment and are probably not very important for the experiments described in this paper.³ A more significant difference is that AutoSlog-TS allows multiple rules to fire if more than one matches the context.

²Ideally, the irrelevant texts should be "near-miss" texts that are similar to the relevant texts.
As a result, multiple extraction patterns may be generated in response to a single noun phrase. For example, the sentence "terrorists bombed the U.S. embassy" might produce two patterns to extract the terrorists: <subj> bombed and <subj> bombed embassy. The statistics will later reveal whether the shorter, more general pattern is good enough or whether the longer pattern is needed to be reliable for the domain. At the end of Stage 1, we have a giant dictionary of extraction patterns that are literally capable of extracting every noun phrase in the corpus.

In Stage 2, we process the training corpus a second time using the new extraction patterns. The sentence analyzer activates all patterns that are applicable in each sentence. We then compute relevance statistics for each pattern. More specifically, we estimate the conditional probability that a text is relevant given that it activates a particular extraction pattern. The formula is:

Pr(relevant text | text contains pattern_i) = rel-freq_i / total-freq_i

where rel-freq_i is the number of instances of pattern_i that were activated in relevant texts, and total-freq_i is the total number of instances of pattern_i that were activated in the training corpus. For the sake of simplicity, we will refer to this probability as a pattern's relevance rate. Note that many patterns will be activated in relevant texts even though they are not domain-specific. For example, general phrases such as "was reported" will appear in all sorts of texts. The motivation behind the conditional probability estimate is that domain-specific expressions will appear substantially more often in relevant texts than irrelevant texts.

Next, we rank the patterns in order of importance to the domain. AutoSlog-TS's exhaustive approach to pattern generation can easily produce tens of thousands of extraction patterns and we cannot reasonably expect a human to review them all.
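A minimal sketch of this relevance statistic, together with the ranking score later computed from it (relevance rate * log2(frequency), zeroed for negatively correlated patterns). One assumption here: we read "frequency" in the ranking formula as the pattern's total frequency in the corpus.

```python
import math

def relevance_rate(rel_freq, total_freq):
    # Pr(relevant text | text contains pattern_i) = rel-freq_i / total-freq_i
    return rel_freq / total_freq

def rank_score(rel_freq, total_freq):
    # relevance rate * log2(frequency); zero when the pattern is
    # negatively correlated with the domain (relevance rate <= 0.5,
    # assuming the corpus is 50% relevant).
    rate = relevance_rate(rel_freq, total_freq)
    return 0.0 if rate <= 0.5 else rate * math.log2(total_freq)

# A moderately relevant but very frequent pattern (like "was killed")
# still earns a high score through the log-frequency factor.
print(rank_score(70, 100))  # 0.7 * log2(100)
```

A purely relevance-sorted list would bury such high-frequency patterns, which is exactly the behavior the log-frequency factor is meant to avoid.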
Therefore, we use a ranking function to order them so that a person only needs to review the most highly ranked patterns. We rank the extraction patterns according to the formula: relevance rate * log2(frequency), unless the relevance rate is ≤ 0.5, in which case the function returns zero because the pattern is negatively correlated with the domain (assuming the corpus is 50% relevant). This formula promotes patterns that have a high relevance rate or a high frequency. It is important for high frequency patterns to be considered even if their relevance rate is only moderate (say 70%) because of expressions like "was killed" which occur frequently in both relevant and irrelevant texts. If only the patterns with the highest relevance rates were promoted, then crucial expressions like this would be buried in the ranked list. We do not claim that this particular ranking function is the best; on the contrary, we will argue later that a better function is needed. But this function worked reasonably well in our experiments.

³See (Riloff 1996) for a more detailed explanation.

Experimental Results

Automated scoring programs were developed to evaluate information extraction (IE) systems for the message understanding conferences, but the credit assignment problem for any individual component is virtually impossible using only the scores produced by these programs. Therefore, we evaluated AutoSlog and AutoSlog-TS by manually inspecting the performance of their dictionaries in the MUC-4 terrorism domain. We used the MUC-4 texts as input and the MUC-4 answer keys as the basis for judging "correct" output (MUC-4 Proceedings 1992).

The AutoSlog dictionary was constructed using the 772 relevant MUC-4 texts and their associated answer keys. AutoSlog produced 1237 extraction patterns, which were manually filtered in about 5 person-hours. The final AutoSlog dictionary contained 450 extraction patterns.
The AutoSlog-TS dictionary was constructed using the 1500 MUC-4 development texts, of which about 50% are relevant. AutoSlog-TS generated 32,345 unique extraction patterns. To make the size of the dictionary more manageable, we discarded patterns that were proposed only once under the assumption that they were not likely to be of much value. This reduced the size of the dictionary down to 11,225 extraction patterns. We loaded the dictionary into CIRCUS, reprocessed the corpus, and computed the relevance rate of each pattern. Finally, we ranked all 11,225 patterns using the ranking function. The 25 top-ranked extraction patterns appear in Figure 3. Most of these patterns are clearly associated with terrorism, so the ranking function appears to be doing a good job of pulling the domain-specific patterns up to the top.

1. <subj> exploded           14. <subj> occurred
2. murder of <np>            15. <subj> was located
3. assassination of <np>     16. took-place on <np>
4. <subj> was killed         17. responsibility for <np>
5. <subj> was kidnapped      18. occurred on <np>
6. attack on <np>            19. was wounded in <np>
7. <subj> was injured        20. destroyed <dobj>
8. exploded in <np>          21. <subj> was murdered
9. death of <np>             22. one of <np>
10. <subj> took-place        23. <subj> kidnapped
11. caused <dobj>            24. exploded on <np>
12. claimed <dobj>           25. <subj> died
13. <subj> was wounded

Figure 3: The Top 25 Extraction Patterns

The ranked extraction patterns were then presented to a user for manual review.⁴ The review process consists of deciding whether a pattern should be accepted or rejected, and labeling the accepted patterns.⁵ For example, the second pattern murder of <np> was accepted and labeled as a murder pattern that will extract victims. The user reviewed the top 1970 patterns in about 85 minutes and then stopped because few patterns were being accepted at that point. In total, 210 extraction patterns were retained for the final dictionary. The review time was much faster than for AutoSlog, largely because the ranking scheme clustered the best patterns near the top so the retention rate dropped quickly.

⁴The author did the manual review for this experiment.
⁵Note that AutoSlog's patterns were labeled automatically by referring to the text annotations.

Note that some of the patterns in Figure 3 were not accepted for the dictionary even though they are associated with terrorism. Only patterns useful for extracting perpetrators, victims, targets, and weapons were kept. For example, the pattern exploded in <np> was rejected because it would extract locations.

To evaluate the two dictionaries, we chose 100 blind texts from the MUC-4 test set. We used 25 relevant texts and 25 irrelevant texts from the TST3 test set, plus 25 relevant texts and 25 irrelevant texts from the TST4 test set. We ran CIRCUS on these 100 texts, first using the AutoSlog dictionary and then using the AutoSlog-TS dictionary. The underlying information extraction system was otherwise identical.

We scored the output by assigning each extracted item to one of four categories: correct, mislabeled, duplicate, or spurious. An item was scored as correct if it matched against the answer keys. An item was mislabeled if it matched against the answer keys but was extracted as the wrong type of object, for example, if "Hector Colindres" was listed as a murder victim but was extracted as a physical target. An item was a duplicate if it was coreferent with an item in the answer keys, for example, if "him" was extracted and was coreferent with "Hector Colindres." The extraction pattern acted correctly in this case, but the extracted information was not specific enough. Correct items extracted more than once were also scored as duplicates. An item was spurious if it did not refer to any object in the answer keys. All items extracted from irrelevant texts were spurious. Finally, items in the answer keys that were not extracted were counted as missing.
Therefore correct + missing should equal the total number of items in the answer keys.⁶ Tables 1 and 2 show the numbers obtained after manually judging the output of the dictionaries. We scored three items: perpetrators, victims, and targets. The performance of the two dictionaries was very similar. The AutoSlog dictionary extracted slightly more correct items, but the AutoSlog-TS dictionary extracted fewer spurious items.⁷

Slot     Corr.  Miss.  Mislab.  Dup.  Spur.
Perp     36     22     1        11    129
Victim   41     24     7        18    113
Target   39     19     8        18    108
Total    116    65     16       47    350

Table 1: AutoSlog Results

Table 2: AutoSlog-TS Results

We applied a well-known statistical technique, the two-sample t test, to determine whether the differences between the dictionaries were statistically significant. We tested four data sets: correct, correct + duplicate, missing, and spurious. The t values for these sets were 1.1012, 1.1818, 0.1557, and 2.27, respectively. The correct, correct + duplicate, and missing data sets were not significantly different even at the p < 0.20 significance level. These results suggest that AutoSlog and AutoSlog-TS can extract relevant information with comparable performance. The spurious data, however, was significantly different at the p < 0.05 significance level. Therefore AutoSlog-TS was significantly more effective at reducing spurious extractions.

We applied three performance metrics to this raw data: recall, precision, and the F-measure. We calculated recall as correct / (correct + missing), and computed precision as (correct + duplicate) / (correct + duplicate + mislabeled + spurious). The F-measure (MUC-4 Proceedings 1992) combines recall and precision into a single value, in our case with equal weight.

⁶"Optional" items in the answer keys were scored as correct if extracted, but were never scored as missing.
⁷The difference in mislabeled items is an artifact of the human review process, not AutoSlog-TS.
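The three metrics just defined can be computed directly; as a worked example, the sketch below plugs in the AutoSlog totals from Table 1 (correct 116, missing 65, mislabeled 16, duplicate 47, spurious 350).

```python
def recall(correct, missing):
    # recall = correct / (correct + missing)
    return correct / (correct + missing)

def precision(correct, duplicate, mislabeled, spurious):
    # precision = (correct + duplicate) /
    #             (correct + duplicate + mislabeled + spurious)
    return (correct + duplicate) / (correct + duplicate + mislabeled + spurious)

def f_measure(r, p):
    # Equal-weight (beta = 1) combination of recall and precision.
    return 2 * r * p / (r + p)

# Totals from Table 1 (AutoSlog).
r = recall(116, 65)
p = precision(116, 47, 16, 350)
print(round(r, 2), round(p, 2), round(f_measure(r, p), 2))  # 0.64 0.31 0.42
```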
As the raw data suggests, Table 3 shows that AutoSlog achieved slightly higher recall and AutoSlog-TS achieved higher precision. The F-measure scores were similar for both systems, but AutoSlog-TS obtained slightly higher F scores for victims and targets. Note that the AutoSlog-TS dictionary contained only 210 patterns, while the AutoSlog dictionary contained 450 patterns, so AutoSlog-TS achieved a comparable level of recall with a dictionary less than half the size.

Table 3: Comparative Results

The AutoSlog precision results are substantially lower than those generated by the MUC-4 scoring program (Riloff 1993). There are several reasons for the difference. For one, the current experiments were done with a debilitated version of CIRCUS that did not process conjunctions or semantic features. Although AutoSlog does not use semantic features to create extraction patterns, they can be incorporated as selectional restrictions in the patterns. For example, extracted victims should satisfy a human constraint. Semantic features were not used in the current experiments for technical reasons, but undoubtedly would have improved the precision of both dictionaries. Also, the previously reported scores were based on the UMass/MUC-4 system, which included a discourse analyzer that used domain-specific rules to distinguish terrorist incidents from other events. CIRCUS was designed to extract potentially relevant information using only local context, under the assumption that a complete IE system would contain a discourse analyzer to make global decisions about relevance.

Behind the scenes

It is informative to look behind the scenes and try to understand why AutoSlog achieved slightly better recall and why AutoSlog-TS achieved better precision. Most of AutoSlog's additional recall came from low frequency patterns that were buried deep in AutoSlog-TS's ranked list.
The main advantage of corpus tagging is that the annotations provide guidance so the system can more easily hone in on the relevant expressions. Without corpus tagging, we are at the mercy of the ranking function. We believe that the ranking function did a good job of pulling the most important patterns up to the top, but additional research is needed to recognize good low frequency patterns.

In fact, we have reason to believe that AutoSlog-TS is ultimately capable of producing better recall than AutoSlog because it generated many good patterns that AutoSlog did not. AutoSlog-TS produced 158 patterns with a relevance rate ≥ 90% and frequency ≥ 5. Only 45 of these patterns were in the original AutoSlog dictionary.

The higher precision demonstrated by AutoSlog-TS is probably a result of the relevance statistics. For example, the AutoSlog dictionary contains an extraction pattern for the expression <subj> admitted, but this pattern was found to be negatively correlated with relevance (46%) by AutoSlog-TS. Some of AutoSlog's patterns looked good to the human reviewer, but were not in fact highly correlated with relevance.

In an ideal ranking scheme, the "heavy hitter" extraction patterns should float to the top so that the most important patterns (in terms of recall) are reviewed first. AutoSlog-TS was very successful in this regard. Almost 35% recall was achieved after reviewing only the first 50 extraction patterns! Almost 50% recall was achieved after reviewing about 300 patterns.

Future Directions

The previous results suggest that a core dictionary of extraction patterns can be created after reviewing only a few hundred patterns. The specific number of patterns that need to be reviewed will ultimately depend on the breadth of the domain and the desired performance levels.
A potential problem with AutoSlog-TS is that there are undoubtedly many useful patterns buried deep in the ranked list, which cumulatively could have a substantial impact on performance. The current ranking scheme is biased towards encouraging high frequency patterns to float to the top, but a better ranking scheme might be able to balance these two needs more effectively. The precision of the extraction patterns could also be improved by adding semantic constraints and, in the long run, creating more complex extraction patterns.

AutoSlog-TS represents an important step towards making information extraction systems more easily portable across domains. AutoSlog-TS is the first system to generate domain-specific extraction patterns automatically without annotated training data. A user only needs to provide sample texts (relevant and irrelevant), and spend some time filtering and labeling the resulting extraction patterns. Fast dictionary construction also opens the door for IE technology to support other tasks, such as text classification (Riloff & Shoen 1995). Finally, AutoSlog-TS represents a new approach to exploiting on-line text corpora for domain-specific knowledge acquisition by squeezing preclassified texts for all they're worth.

Acknowledgments

This research was funded by NSF grant MIP-9023174 and NSF grant IRI-9509820. Thanks to Kern Mason and Jay Shoen for generating much of the data.

References

Cardie, C. 1993. A Case-Based Approach to Knowledge Acquisition for Domain-Specific Sentence Analysis. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 798-803. AAAI Press/The MIT Press.

Francis, W., and Kucera, H. 1982. Frequency Analysis of English Usage. Boston, MA: Houghton Mifflin.

Hastings, P., and Lytinen, S. 1994. The Ups and Downs of Lexical Acquisition. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 754-759. AAAI Press/The MIT Press.

Huffman, S. 1996. Learning information extraction patterns from examples. In Wermter, S.; Riloff, E.; and Scheler, G., eds., Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing. Springer-Verlag, Berlin.

Kim, J., and Moldovan, D. 1993. Acquisition of Semantic Patterns for Information Extraction from Corpora. In Proceedings of the Ninth IEEE Conference on Artificial Intelligence for Applications, 171-176. Los Alamitos, CA: IEEE Computer Society Press.

Lehnert, W. 1991. Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two Worlds. In Barnden, J., and Pollack, J., eds., Advances in Connectionist and Neural Computation Theory, Vol. 1. Ablex Publishers, Norwood, NJ. 135-164.

Marcus, M.; Santorini, B.; and Marcinkiewicz, M. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics 19(2):313-330.

MUC-4 Proceedings. 1992. Proceedings of the Fourth Message Understanding Conference (MUC-4). San Mateo, CA: Morgan Kaufmann.

Riloff, E., and Shoen, J. 1995. Automatically Acquiring Conceptual Patterns Without an Annotated Corpus. In Proceedings of the Third Workshop on Very Large Corpora, 148-161.

Riloff, E. 1993. Automatically Constructing a Dictionary for Information Extraction Tasks. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 811-816. AAAI Press/The MIT Press.

Riloff, E. 1996. An Empirical Study of Automated Dictionary Construction for Information Extraction in Three Domains. Artificial Intelligence, Vol. 85. Forthcoming.

Soderland, S.; Fisher, D.; Aseltine, J.; and Lehnert, W. 1995. CRYSTAL: Inducing a conceptual dictionary. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1314-1319.
Learning to Parse Database Queries Using Inductive Logic Programming

John M. Zelle
Department of Mathematics and Computer Science, Drake University, Des Moines, IA 50311
jz60ilr@acad.drake.edu

Raymond J. Mooney
Department of Computer Sciences, University of Texas, Austin, TX 78712
mooney@cs.utexas.edu

Abstract

This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a pre-existing, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application.

Introduction

Empirical or corpus-based methods for constructing natural language systems have been an area of growing research interest in the last several years. The empirical approach replaces hand-generated rules with models obtained automatically by training over language corpora. Recent approaches to constructing robust parsers from corpora primarily use statistical and probabilistic methods such as stochastic grammars (Black, Lafferty, & Roukos 1992; Pereira & Schabes 1992; Charniak & Carroll 1994) or transition networks (Miller et al. 1994). Several current methods learn some symbolic structures such as decision trees (Black et al. 1993; Magerman 1994; Kuhn & De Mori 1995) and transformations (Brill 1993). Zelle and Mooney (1993, 1994) have proposed a method called CHILL based on the relational learning techniques of Inductive Logic Programming.

To date, these systems have been demonstrated primarily on the problem of syntactic parsing, grouping the words of a sentence into hierarchical constituent structure. Since syntactic analysis is only a small part of the overall problem of understanding, these approaches have been trained on corpora that are "artificially" annotated with syntactic information. Similarly, they are typically evaluated with artificial metrics of parsing accuracy. While such metrics can provide rough comparisons of relative capabilities, it is not clear to what extent these measures reflect differences in performance on real language-processing tasks. The acid test for empirical approaches is whether they allow the construction of better natural language systems, or perhaps allow for the construction of comparable systems with less overall effort. This paper reports on the experience of using CHILL to engineer a natural language front-end for a database-query task.

A database-query task was a natural choice as it represents a significant real-world language-processing problem that has long been a touch-stone in NLP research. It is also a nontrivial problem of tractable size and scope for actually carrying out evaluations of empirical approaches. Finally, and perhaps most importantly, a parser for database queries is easily evaluable. The bottom line is whether the system produces a correct answer for a given question, a determination which is straight-forward for many database domains.

Learning to Parse DB Queries

Overview of CHILL

Space does not permit a complete description of the CHILL system here. The relevant details may be found in (Zelle & Mooney 1993; 1994; Zelle 1995). What follows is a brief overview.
The input to CHILL is a set of training instances consisting of sentences paired with the desired parses. The output is a shift-reduce parser that maps sentences into parses. CHILL treats parser induction as a problem of learning rules to control the actions of a shift-reduce parser expressed as a Prolog program. Control-rules are expressed as definite-clause concept definitions. These rules are induced using a general concept learning system employing techniques from Inductive Logic Programming (ILP), a subfield of machine learning that addresses the problem of learning definite-clause logic descriptions from examples (Lavrač & Džeroski 1994; Muggleton 1992).

The central insight in CHILL is that the general operators required for a shift-reduce parser to produce a given set of sentence analyses are directly inferable from the representations themselves. For example, if the target is syntactic analysis, the fact that the parser requires a reduction to combine a determiner and a noun to form an NP follows directly from the existence of such an NP in the training examples. However, just inferring an appropriate set of operators does not produce a correct parser, because more knowledge is required to apply operators accurately during the course of parsing an example.

The current context of a parse is contained in the contents of the stack and the remaining input buffer. CHILL uses parses of the training examples to figure out the contexts in which each of the inferred operators is and is not applicable. These contexts are then given to a general induction algorithm that learns rules to classify the contexts in which each operator should be used.
Since the contexts are arbitrarily-complex parser-states involving nested (partial) constituents, CHILL employs an ILP learning algorithm which can deal with structured inputs and produce relational concept descriptions.

Figure 1 shows the basic components of CHILL. During Parser Operator Generation, the training examples are analyzed to formulate an overly-general shift-reduce parser that is capable of producing parses from sentences. The initial parser is overly-general in that it produces a great many spurious analyses for any given input sentence. In Example Analysis, the training examples are parsed using the overly-general parser to extract contexts in which the various parsing operators should and should not be employed. Control-Rule Induction then employs a general ILP algorithm to learn rules that characterize these contexts. Finally, Program Specialization "folds" the learned control-rules back into the overly-general parser to produce the final parser.

Figure 1: The CHILL Architecture

Previous experiments have evaluated CHILL's performance in learning parsers to perform case-role parsing (Zelle & Mooney 1993) and syntactic parsers for portions of the ATIS corpus (Zelle & Mooney 1994; 1996). These experiments have demonstrated that CHILL works as well or better than neural-network or statistical approaches on comparable corpora.
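As a toy illustration of the parser state (stack plus input buffer) that CHILL's control rules classify, the sketch below hand-writes one plausible control condition for the determiner-noun reduction mentioned above. This is purely illustrative: CHILL represents the parser in Prolog and induces such conditions with ILP rather than hard-coding them.

```python
def shift(stack, buffer):
    # Move the next input word onto the stack.
    stack.append(buffer.pop(0))

def reduce_np(stack):
    # Inferred operator: combine a determiner and a noun into an NP.
    noun, det = stack.pop(), stack.pop()
    stack.append(("np", det, noun))

def control(stack, buffer):
    # Stand-in for an induced control rule: reduce when a determiner
    # sits just below the top of the stack.
    if len(stack) >= 2 and stack[-2] == "the":
        return "reduce_np"
    return "shift"

def parse(words):
    stack, buffer = [], list(words)
    while buffer or control(stack, buffer) == "reduce_np":
        if control(stack, buffer) == "reduce_np":
            reduce_np(stack)
        else:
            shift(stack, buffer)
    return stack

print(parse(["the", "man", "ate"]))  # [('np', 'the', 'man'), 'ate']
```

The learned rules play exactly the role of `control` here: given a structured parser state, they decide which of the inferred operators applies next.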
The choice of a logical query language rather than the more ubiquitous SQL was made because the former provides a more straightforward, compositional mapping from natural language utterances, a property that is necessary for the CHILL approach. The process of translating from an unambiguous logical form into other query formats is easily automated.

The domain of the chosen database is United States geography. The choice was motivated by the availability of an existing natural language interface for a simple geography database. This system, called Geobase, was supplied as an example application with a commercial Prolog available for PCs, specifically Turbo Prolog 2.0 (Borland International 1988). Having such an example provides a database already coded in Prolog for which a front-end can be built; it also serves as a convenient benchmark against which CHILL's performance can be compared.

What is the capital of the state with the largest population?
answer(C, (capital(S,C), largest(P, (state(S), population(S,P))))).

What are the major cities in Kansas?
answer(C, (major(C), city(C), loc(C,S), equal(S,stateid(kansas)))).

Figure 2: Sample Database Queries

Type      Form                 Example
country   countryid(Name)      countryid(usa)
city      cityid(Name,State)   cityid(austin,tx)
state     stateid(Name)        stateid(texas)
river     riverid(Name)        riverid(colorado)
place     placeid(Name)        placeid(pacific)

Figure 3: Basic Objects in Geoquery

The Geobase data contains about 800 Prolog facts asserting relational tables for basic information about U.S. states, including: population, area, capital city, neighboring states, major rivers, major cities, and highest and lowest points along with their elevation. Figure 2 shows some sample questions and associated query representations.

Development of the database application required work on two components: a framework for parsing into the logical query representations, and a specific query language for the geography database.
The first component is domain-independent and consists of algorithms for parsing operator generation and example analysis to infer the required operators and parse the training examples. The resulting parsing framework is quite general and could be used to generate parsers for a wide range of logic-based representations. The second component, which is domain specific, is a query language having a vocabulary sufficient for expressing interesting questions about geography. The database application itself comprises a parser produced by CHILL coupled with an interpreter for the query language. The specific query language for these experiments (hereafter referred to as Geoquery) was initially developed by considering a sample of 50 sentences. A simple query interpreter was developed concurrently with the query language, thus ensuring that the representations were grounded in the database-query task.

The Query Language, Geoquery. The query language considered here is basically a first-order logical form augmented with some higher-order predicates, or meta-predicates, for handling issues such as quantification over implicit sets. This general form of representation is useful for many language processing tasks. The particular constructs of Geoquery, however, were not designed around any notion of appropriateness for representation of natural language in general, but rather as a direct method of compositionally translating English sentences into unambiguous, logic-oriented database queries.

The most basic constructs of the query representation are the terms used to represent the objects referenced in the database and the basic relations between them. The basic forms are listed in Figure 3. The objects of interest are states, cities, rivers and places (either a high-point or low-point of a state). Cities are represented using a two-argument term with the second argument containing the abbreviation of the state.
This is done to ensure uniqueness, since different states may have cities of the same name (e.g. cityid(columbus,oh) vs. cityid(columbus,ga)). This convention also allows a natural form for expressing partial information; a city known only by name is given an uninstantiated variable for its second term.

Form             Predicate
capital(C)       C is a capital (city).
city(C)          C is a city.
major(X)         X is major.
place(P)         P is a place.
river(R)         R is a river.
state(S)         S is a state.
area(S,A)        The area of S is A.
capital(S,C)     The capital of S is C.
equal(V,C)       Variable V is ground term C.
density(S,D)     The (population) density of S is D.
elevation(P,E)   The elevation of P is E.
high-point(S,P)  The highest point of S is P.
higher(P1,P2)    P1's elevation is greater than P2's.
loc(X,Y)         X is located in Y.
low-point(S,P)   The lowest point of S is P.
len(R,L)         The length of R is L.
next-to(S1,S2)   S1 is next to S2.
size(X,Y)        The size of X is Y.
traverse(R,S)    R traverses S.

Figure 4: Basic Predicates in Geoquery

The basic relations are shown in Figure 4. The equal/2 predicate is used to indicate that a certain variable is bound to a ground term representing an object in the database. For example, a phrase like "the capital of Texas" translates to (capital(S,C), equal(S, stateid(texas))) rather than the more traditional capital(stateid(texas),C). The use of equal allows objects to be introduced at the point where they are actually named in the sentence.

Although the basic predicates provide most of the expressiveness of Geoquery, meta-predicates are required to form complete queries. A list of the implemented meta-predicates is shown in Figure 5. These predicates are distinguished in that they take completely-formed conjunctive goals as one of their arguments.

Form              Explanation
answer(V,Goal)    V is the variable of interest in Goal.
largest(V,Goal)   Goal produces only the solution that maximizes the size of V.
smallest(V,Goal)  Analogous to largest.
highest(V,Goal)   Like largest (with elevation).
lowest(V,Goal)    Analogous to highest.
longest(V,Goal)   Like largest (with length).
shortest(V,Goal)  Analogous to longest.
count(D,Goal,C)   C is the count of unique bindings for D that satisfy Goal.
most(X,D,Goal)    Goal produces only the X that maximizes the count of D.
fewest(X,D,Goal)  Analogous to most.

Figure 5: Meta-Predicates in Geoquery

The most important of the meta-predicates is answer/2. This predicate serves as a "wrapper" for query goals, indicating the variable whose binding is of interest (i.e. answers the question posed). The other meta-predicates provide for the quantification over and selection of extremal elements from implicit sets.

A Parsing Framework for Queries. Although the logical representations of Geoquery look very different from the parse-trees or case-structures on which CHILL has been previously demonstrated, they are amenable to the same general parsing scheme as that used for the shallower representations. Adapting CHILL to work with this representation requires only the identification and implementation of suitable operators for the construction of Geoquery-style analyses.

The parser is implemented by translating parsing actions into operator clauses for a shift-reduce parser. The construction of logical queries involves three different types of operators. Initially, a word or phrase at the front of the input buffer suggests that a certain structure should be part of the result. The appropriate structure is pushed onto the stack. For example, the word "capital" might cause the capital/2 predicate to be pushed on the stack. This type of operation is performed by an introduce operator. Initially, such structures are introduced with new (not co-referenced) variables. These variables may be unified with variables appearing in other stack items through a co-reference operator.
For example, the first argument of the capital/2 structure may be unified with the argument of a previously introduced state/1 predicate. Finally, a stack item may be embedded into the argument of another stack item to form conjunctive goals inside of meta-predicates; this is performed by a conjoin operation.

For each class of operator, the overly-general operators required to parse any given example may be easily inferred. The necessary introduce operators are determined by examining what structures occur in the given query and which words that can introduce those structures appear in the training sentence. Co-reference operators are constructed by finding the shared variables in the training queries; each sharing requires an appropriate operator instance. Finally, conjoin operations are indicated by the term-embedding exhibited in the training examples. It is important to note that only the operator generation phase of CHILL is modified to work with this representation; the control-rule learning component remains unchanged.

As an example of operator generation, the first query in Figure 2 gives rise to four introduce operators: "capital" introduces capital/2, "state" introduces state/1, "largest" introduces largest/2 and "population" introduces population/2. The initial parser-state has answer/2 on the stack, so its introduction is not required. The example generates four co-reference operators for the variables (e.g., when capital/2 is on the top of the stack, its second argument may be unified with the first argument of answer/2, which is below it). Finally, the example produces four conjoin operators. When largest/2 is on the top of the stack, state/1 is "lifted" into the second argument position from its position below in the stack. Conversely, when population/2 is on the top of the stack, it is "dropped" into the second argument of largest/2 to form the conjunction.
Similar operators embed capital/2 and largest/2 into the conjunction that is the second argument of answer/2.

Experimental Results

Experiments. A corpus of 250 sentences was gathered by submitting a questionnaire to 50 uninformed subjects. For evaluation purposes, the corpus was split into training sets of 225 examples with the remaining 25 held out for testing. CHILL was run using default values for various parameters.

Testing employed the most stringent standard for accuracy, namely whether the application produced the correct answer to a question. Each test sentence was parsed to produce a query. This query was then executed to extract an answer from the database. The extracted answer was then compared to the answer produced by the correct query associated with the test sentence. Identical answers were scored as a correct parsing; any discrepancy resulted in a failure. Figure 6 shows the average accuracy of CHILL's parsers over 10 trials using different random splits of training and testing data. The line labeled "Geobase" shows the average accuracy of the Geobase system on these 10 testing sets of 25 sentences. The curves show that CHILL outperforms the existing system when trained on 175 or more examples.

[Figure 6: Geoquery: Accuracy]

In the best trial, CHILL's induced parser, comprising 1100 lines of Prolog code, achieved 84% accuracy in answering novel queries. In this application, it is important to distinguish between two modes of failure. The system could either fail to parse a sentence entirely, or it could produce a query which retrieves an incorrect answer. The parsers learned by CHILL for Geoquery produced few spurious parses. At 175 training examples, CHILL produced 3.2% spurious parses, dropping to 2.3% at 200 examples. This compares favorably with the 3.8% rate for Geobase.

Discussion. These results are interesting in two ways. First, they show the ability of CHILL to learn parsers that map sentences into queries without intermediate syntactic parsing or annotation. This is an important consideration for empirical systems that seek to reduce the linguistic expertise needed to construct NLP applications. Annotating corpora with useful final representations is a much easier task than providing detailed linguistic annotations. One can even imagine the construction of suitable corpora occurring as a natural side-effect of attempting to automate processes that are currently done manually (e.g. collecting examples of the queries produced by database users in the normal course of their work).

Second, the results demonstrate the utility of an empirical approach at the level of a complete natural-language application. While the Geobase system probably does not represent a state-of-the-art standard for natural language database query systems, neither is it a "straw man." Geobase uses a semantics-based parser which scans for words corresponding to the entities and relationships encoded in the database. Rather than relying on extensive syntactic analysis, the system attempts to match sequences of entities and associations in sentences with an entity-association network describing the schemas present in the database. The result is a relatively robust parser, since many words can simply be ignored. That CHILL performs better after training on a relatively small corpus is an encouraging result.

Related Work

As noted in the introduction, most work on corpus-based parsing has focused on the problem of syntactic analysis rather than semantic interpretation. However, a number of groups participating in the ARPA-sponsored ATIS benchmark for speech understanding have used learned rules to perform some semantic interpretation. The Chronus system from AT&T (Pieraccini et al. 1992) used an approach based on stochastic grammars. Another approach employing statistical techniques is the Hidden Understanding Models of Miller et
al. (1994). Kuhn and De Mori (1995) have investigated an approach utilizing semantic classification trees, a variation on the decision trees familiar in machine learning.

These approaches differ from the work reported here in that learning was used in only one component of a larger hand-crafted grammar. The ATIS benchmark is not an ideal setting for the evaluation of empirical components per se, as overall performance may be significantly affected by the performance of other components in the system. Additionally, the hand-crafted portions of these systems encompassed elements that were part of the learning task for CHILL. CHILL learns to map from strings of words directly into query representations without any intermediate analysis; thus, it essentially automates construction of virtually the entire linguistic component. We also believe that CHILL's relational learning algorithms make the approach more flexible, as evidenced by the range of representations for which CHILL has successfully learned parsers. Objective comparison of various approaches to empirical NLP is an important area for future research.

Future Work and Conclusions

Clearly, there are many open questions regarding the practicality of using CHILL for the development of NLP systems. Experiments with larger corpora and other domains are indicated. Another interesting avenue of investigation is the extent to which performance can be improved by corpus "manufacturing." Since an initial corpus must be annotated by hand, one method of increasing the regularity in the training corpus (and hence the generality of the resulting parser) would be to allow the annotator to introduce related sentences. Although this approach would require extra effort from the annotator, it would be far easier than annotating an equal number of random sentences and might produce better results.
The development of automated techniques for lexicon construction could also broaden the applicability of CHILL. Currently, the generation of introduce operators relies on a hand-built lexicon indicating which words can introduce various predicates. Thompson (1995) has demonstrated an initial approach to corpus-based acquisition of lexical mapping rules suitable for use with CHILL-style parser acquisition systems.

We have described a framework using ILP to learn parsers that map sentences into database queries using a training corpus of sentences paired with queries. This method has been implemented in the CHILL system, which treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser. Experimental results with a complete application for answering questions about U.S. geography show that CHILL's parsers outperform a pre-existing hand-crafted counterpart. These results demonstrate CHILL's ability to learn semantic mappings and the utility of an empirical approach at the level of a complete natural-language application. We hope these experiments will stimulate further research in corpus-based techniques that employ ILP.

Acknowledgments

Portions of this research were supported by the National Science Foundation under grant IRI-9310819.

References

Abramson, H., and Dahl, V. 1989. Logic Grammars. New York: Springer-Verlag.

Black, E.; Jelinek, F.; Lafferty, J.; Magerman, D.; Mercer, R.; and Roukos, S. 1993. Towards history-based grammars: Using richer models for probabilistic parsing. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 31-37.

Black, E.; Lafferty, J.; and Roukos, S. 1992. Development and evaluation of a broad-coverage probabilistic grammar of English-language computer manuals. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, 185-192.

Borland International. 1988.
Turbo Prolog 2.0 Reference Guide. Scotts Valley, CA: Borland International.

Brill, E. 1993. Automatic grammar induction and parsing free text: A transformation-based approach. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 259-265.

Charniak, E., and Carroll, G. 1994. Context-sensitive statistics for improved grammatical language models. In Proceedings of the Twelfth National Conference on Artificial Intelligence.

Kuhn, R., and De Mori, R. 1995. The application of semantic classification trees to natural language understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(5):449-460.

Lavrač, N., and Džeroski, S., eds. 1994. Inductive Logic Programming: Techniques and Applications. Ellis Horwood.

Magerman, D. M. 1994. Natural Language Parsing as Statistical Pattern Recognition. Ph.D. Dissertation, Stanford University.

Miller, S.; Bobrow, R.; Ingria, R.; and Schwartz, R. 1994. Hidden understanding models of natural language. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 25-32.

Muggleton, S. H., ed. 1992. Inductive Logic Programming. New York, NY: Academic Press.

Pereira, F., and Schabes, Y. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, 128-135.

Pieraccini, R.; Tzoukermann, E.; Gorelov, Z.; Gauvain, J.-L.; Levin, E.; Lee, C.-H.; and Wilpon, J. 1992. A speech understanding system based on statistical representation of semantics. In Proceedings ICASSP 92, I-193-I-196.

Thompson, C. A. 1995. Acquisition of a lexicon from semantic representations of sentences. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 335-337.

Warren, D. H. D., and Pereira, F. C. N. 1982. An efficient easily adaptable system for interpreting natural language queries. American Journal of Computational Linguistics 8(3-4):110-122.
Zelle, J. M., and Mooney, R. J. 1993. Learning semantic grammars with constructive inductive logic programming. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 817-822.

Zelle, J. M., and Mooney, R. J. 1994. Inducing deterministic Prolog parsers from treebanks: A machine learning approach. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 748-753.

Zelle, J., and Mooney, R. 1996. Comparative results on using inductive logic programming for corpus-based parser construction. In Wermter, S.; Riloff, E.; and Scheler, G., eds., Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing. Springer Verlag.

Zelle, J. M. 1995. Using Inductive Logic Programming to Automate the Construction of Natural Language Parsers. Ph.D. Dissertation, University of Texas, Austin, TX. Available via http://cs.utexas.edu/users/ml.
HUNTER-GATHERER: Three Search Techniques Integrated for Natural Language Semantics

Stephen Beale, Sergei Nirenburg and Kavi Mahesh
Computing Research Laboratory
Box 30001
New Mexico State University
Las Cruces, New Mexico 88003
sb,sergei,mahesh@crl.nmsu.edu

Abstract

This work[1] integrates three related AI search techniques - constraint satisfaction, branch-and-bound and solution synthesis - and applies the result to semantic processing in natural language (NL). We summarize the approach as "Hunter-Gatherer:"

- branch-and-bound and constraint satisfaction allow us to "hunt down" non-optimal and impossible solutions and prune them from the search space.
- solution synthesis methods then "gather" all optimal solutions avoiding exponential complexity.

Each of the three techniques is briefly described, as well as their extensions and combinations used in our system. We focus on the combination of solution synthesis and branch-and-bound methods which has enabled near-linear-time processing in our applications. Finally, we illustrate how the use of our technique in a large-scale MT project allowed a drastic reduction in search space.

Introduction

The number of possible semantic analyses in an average-sized sentence in the Spanish corpus used in the Mikrokosmos MT project is fifty-six million, six hundred eighty-seven thousand, and forty. Complex sentences have gone past the trillions. Exhaustive search methods applied to real sentences routinely require several minutes to finish, with larger sentences running more than a day. Clearly, techniques must be developed to defuse this exponential explosion.

Hunters and Gatherers in AI

Search is the most common tool for finding solutions in artificial intelligence. The two paths to higher efficiency in search are:

1. Reducing the search space. Looking for sub-optimal or impossible solutions. Removing them. Killing them.
"Hunting"

2. Efficiently extracting answer(s) from the search space. Collecting satisfactory answer(s). "Gathering"

Much work has been done with regard to the hunters. Finding and using heuristics to guide search has been a major focus. Heuristics are necessary when other techniques cannot reduce the size of the search space to reasonable proportions. Under such circumstances, "guesses" have to be made to guide the search engine to the area of the search space most likely to contain acceptable answers. "Best-first" search (see, among many others, (Charniak et al. 1987)) is an example of how to use heuristics.

The "hunting" techniques applied in this research are most closely related to the field of constraint satisfaction problems (CSP). (Beale 1996) overviews this field and (Tsang 1993) covers it in depth. Further references include (Mackworth 1977), (Mackworth & Freuder 1985) and (Mohr & Henderson 1986).

"Gathering" has been studied much less in AI. Most AI problems are content with a single "acceptable" answer. Heuristic search methods generally are sufficient. Certain classes of problems, however, demand all correct answers. "Solution synthesis" addresses this need. Solution synthesis techniques (Freuder 1978; see also Tsang & Foster 1990) iteratively combine (gather) partial answers to arrive at a complete list of all correct answers. Often, this list is then rated according to some separate criteria in order to pick the most suitable answer.

In a "blocks" world, CSP techniques and solution synthesis are powerful mechanisms. Many "real-world" problems, however, have a more complex semantics: constraints are not "yes" or "no" but "maybe" and "sometimes." In NL, certain word-sense combinations might make sense in one context but not in another.

[1] Research reported in this paper was supported in part by Contract MDA904-92-C-5189 from the U.S. Department of Defense.
This is the central problem with previous attempts at using constraint analysis for NL disambiguation (Nagao 1992; Maruyama 1990).[2] We need a method as powerful as CSP for this more complex environment.

[Figure 1: Example Sentence - the Spanish words and phrases (Grupo Roche, a traves de, su compania, en espana, adquirir, Dr. Andreu) shown with their candidate senses, e.g. ORGANIZATION, LOCATION, OWNER, CORPORATION, NATION, ACQUIRE, INSTRUMENT, SOCIAL-EVENT, LEARN]

Our proposal is to 1) use constraint dependency information to partition problems into appropriate sub-problems, 2) combine (gather) results from these sub-problems using a new solution synthesis technique, and 3) prune (hunt) these results using branch-and-bound techniques. The rest of this paper addresses each of these issues in turn.

Constraint Satisfaction: Hunters in NL

NL problems can almost always be viewed as bundles of tightly constrained sub-problems, each of which combine at higher, relatively constraint-free levels to produce a complete solution. Beale (1996) argues that syntactic and semantic constraints effectively partition discourse into clusters of locally interacting networks. Here, we summarize those results and report how solution synthesis and branch-and-bound techniques can improve search efficiency.

Figure 1 illustrates the basic lexical ambiguities in a very simple Spanish sentence from the corpus processed by our semantic analyzer. In the figure the Spanish words and phrases are shown with their readings, expressed as corresponding concepts in the underlying ontology. An exhaustive decision tree for this sentence would include 36 possible combinations of word senses, but, when some fairly obvious "literal" semantic constraints are imposed and propagated using arc consistency, all but one of the paths can be eliminated.

Unfortunately, a literal imposition of constraints does not work in NL. For example, a traves de, in a traves de su compania, could very well be location, even though a literal constraint would expect compania to be a place, because corporation names are often used metonymically to stand for "the place of the corporation:"

I walked to IBM.
I walked to where IBM's building is.

Therefore, the fact that compania is not literally a place does not rule out the location interpretation. In fact, in certain contexts, the location interpretation might be preferred. Constraint satisfaction techniques such as arc-consistency, therefore, will be of limited value.

[Figure 2: Constraint Dependencies in Sample Sentence]

Figure 2 gives a different view of this same NL problem by graphically displaying the constraint dependencies present in Figure 1. These dependencies can be identified simply by iterating through the list of constraints, retrieved from the Mikrokosmos lexicon and ontology (Beale, Nirenburg & Mahesh 1995), and linking any words involved in the same constraint. In Figure 2, three relatively independent sub-parts can be identified. If these sub-parts, or "circles" in our terminology, could be identified, the processing involved in finding a complete solution could be decomposed into three sub-problems. In this paper we assume such a decomposition is possible so that we may concentrate on describing the methods used to combine results from individual circles to form larger and larger solutions, the largest of which will be the solution to the entire problem.

[2] For instance, Nagao eliminates an ownership meaning on the basis that a file-system is not a human agent. As shown in the next section, metonymy and other figurative language often overrides such constraints.
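The constraint-linking step just described can be sketched as follows; this is a simplified illustration, not the Mikrokosmos implementation. Each constraint's word set is taken as a base circle, and pairs of circles that share a variable are the candidates for synthesis into larger circles.

```python
# Illustrative sketch: derive base "circles" from constraint dependencies
# and record which circles overlap (share a variable), since overlapping
# circles are the ones to combine first during synthesis.

def base_circles(constraints):
    """Each constraint's variable set is one base circle."""
    return [frozenset(c) for c in constraints]

def overlaps(circles):
    """Pairs of circles sharing a variable: candidates for synthesis."""
    shared = []
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            common = circles[i] & circles[j]
            if common:
                shared.append((i, j, sorted(common)))
    return shared

# The three constraint groupings of the example sentence (simplified).
constraints = [("adquirir", "grupo_roche", "dr_andreu"),
               ("adquirir", "a_traves_de", "compania"),
               ("compania", "en", "espana")]
cs = base_circles(constraints)
for i, j, common in overlaps(cs):
    print(f"circles {i} and {j} share {common}")
```

Running this reports that circles 0 and 1 share adquirir, and circles 1 and 2 share compania, mirroring the decomposition into three sub-problems described above.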
Solution Synthesis: Gatherers in NL

Freuder (1978) introduced Solution Synthesis (SS) as a means to "gather up" all solutions for a CSP without resorting to traditional search methods. Freuder's algorithm (SS-FREUDER) created a set of two-variable nodes that contained combinations of every two variables. These two-variable nodes were then combined into three-variable nodes, and so on, until a node containing all the variables, i.e. the solution, was synthesized. At each step, constraints were propagated down and then back up the "tree" of synthesized nodes.

Tsang improved on this scheme with the Essex Algorithms (SS-ESSEX). These algorithms assumed that a list of the variables could be made, after which two-variable nodes were created only between adjacent variables in the list. Higher-order nodes were then synthesized as usual, starting from the two-variable nodes. Tsang noted that some orderings of the original list would prove more efficient than others, most notably a "Minimal Bandwidth Ordering" (MBO), which seeks to minimize the distance between constrained variables.

The work described here extends and generalizes the concept of MBO. The basic idea of synthesizing solution sets one order higher than their immediate ancestors is discarded. Instead, solution synthesis operates with maximally interacting groupings (circles) of variables of any order and extends to the highest levels of synthesizing. Tsang only creates second-order nodes from adjacent variables in a list, with the original list possibly ordered to maximize second-order interactions. After that, third and higher order nodes are blindly created from combinations of second-order nodes. We extend MBO to the higher levels. The circles of co-constrained variables described in the previous section guide the synthesis process from beginning to end.
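The Essex-style construction of second-order nodes can be illustrated with a short sketch; this is a simplified rendering with a toy binary constraint (the published algorithms go on to synthesize higher-order nodes from these pair nodes).

```python
# Illustrative sketch of Essex-style synthesis: build two-variable nodes
# only for adjacent variables in an ordered list, keeping the value pairs
# that satisfy the binary constraint between them.

def second_order_nodes(order, domains, ok):
    """ok(x, vx, y, vy) -> True iff assigning vx to x and vy to y is allowed."""
    nodes = {}
    for x, y in zip(order, order[1:]):            # adjacent pairs only
        nodes[(x, y)] = [(vx, vy)
                         for vx in domains[x]
                         for vy in domains[y]
                         if ok(x, vx, y, vy)]
    return nodes

# Toy constraint: adjacent variables must take different values.
order = ["A", "B", "C"]
domains = {"A": [1, 2], "B": [1, 2], "C": [1]}
nodes = second_order_nodes(order, domains, lambda x, vx, y, vy: vx != vy)
print(nodes[("B", "C")])   # -> [(2, 1)]
```

A good ordering matters here exactly as MBO suggests: if two tightly constrained variables are far apart in the list, no pair node exists to apply their mutual constraint early.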
The main improvement of this approach comes from a recognition that much of the work in SS-FREUDER and SS-ESSEX was wasted on finding valid combinations of variables which were not related. Even though relatively few words in a sentence are connected through constraints, SS-FREUDER looks for valid combinations of every word pair. Depending on the ordering used, many irrelevant combinations can also be inspected by SS-ESSEX. Furthermore, the ESSEX algorithm tends to carry along unneeded ambiguity. If two related variables are not adjacent in the ESSEX algorithm, their disambiguating power will not be applied until they happen to co-occur in a higher-order synthesis.[3] The current work combines the efficiency of the ESSEX algorithms with the early disambiguation power of the Freuder method.

Our SS-GATHERER algorithm only expends energy on variables directly related by constraints. For instance, for the example in Figure 2, three "base" circles would be formed:

1. adquirir, grupo roche, dr andreu
2. adquirir, a traves de, compania
3. compania, en, espana

The last two are synthesized into a larger circle: adquirir, a traves de, compania, en, espana, su. This is then synthesized with the first "base" circle above to give the answer to the complete problem.

The bulk of disambiguation occurs in the lower-order circles, which were chosen to maximize this phenomenon. The correct solution to the example problem was obtained by SS-GATHERER in only five steps. SS-FREUDER uses hundreds of extra nodes for this example and SS-ESSEX, 31 extra nodes. Focusing the synthesizer on circles that yield maximum disambiguation power produces huge savings while still guaranteeing the correct solution.

One objection that could be raised to this process is that more work might be needed to create higher-level nodes.

[3] Freuder's algorithm does not have this disadvantage, because all combinations of variables are created, though at great expense.
For instance, if each variable had three possible values, one needs to test 9 (3^2) combinations for each second-order node, but 27 (3^3) combinations for third-order nodes.[4] If two second-order nodes could be created that would form a third-order node, and each second-order node could be completely disambiguated to a single solution, then the third-order node could be created without any combinatorics, yielding a total of 18 combinations (9 + 9) that were searched in the case of three values for each variable. Directly creating the third-order node requires the 27 combinations to be searched. However, if the second-order nodes do not disambiguate, nothing is gained from them. For this reason, base circles can be further sub-divided into groups of second-order nodes, if those second-order nodes are connected in the constraint graph.

The algorithm below accepts a list of Circles, ordered from smaller to larger. Each circle has the sub-circles from which it is made identified.

 1  PROCEDURE SS-GATHERER(Circles)
 2    FOR each Circle in Circles
 3      PROCESS-CIRCLE(Circle)

 4  PROCEDURE PROCESS-CIRCLE(Circle)
    ;; each Circle is of the form (Vars-in-Circle Sub-Circles)
 5    Output-Plans <-- nil
 6    Incoming-Non-Circles <-- REMOVE all variables in Sub-Circles from Vars-in-Circle
 7    Non-Circle-Combos <-- Get-Combos(Incoming-Non-Circles)
 8    Circle-Combos <-- Combine-Circles(Sub-Circles)
 9    FOR each Non-Circle-Combo in Non-Circle-Combos
10      FOR each Circle-Combo in Circle-Combos
        ;; each incoming circle has consistency info stored in arrays:
11        AC-Info <-- access arc-consistency info from input circles
12        Plan <-- add Non-Circle-Combo to Circle-Combo
          ;; Plan is a potential solution for this Circle,
          ;; with a value assigned to each variable
13        IF Arc-Consistent(Plan, AC-Info) THEN
14          Output-Plans <-- Output-Plans + Plan
15          ;; update AC-Info for this circle
16    RETURN Output-Plans

The Get-Combos procedure (line 7) simply produces all combinations of value assignments for the input variables.
This procedure has complexity O(a^x), where x is the number of variables in the input Incoming-Non-Circles and a is the maximum number of values for a variable. In the worst case, x will be n; this is the case when the initial circle contains all the variables and no sub-circles. Of course, this is simply an exhaustive search, not Solution Synthesis. In practice, the circles usually contain no more than two variables not involved in input sub-circles, the exceptions almost always pertaining to the base circle, in which case the Combine-Circles procedure does not add complexity.

4 It should be pointed out that sometimes second-order nodes are used in SS-GATHERER, if the dependency structure calls for them. Incidentally, there is nothing special about third-order nodes in SS-GATHERER, although NL constraints seem to produce them the most. It is quite possible that even higher-order nodes could be the starting point.

The Combine-Circles procedure (line 8) combines all consistent5 plans already calculated for each input Sub-Circle. In the worst case, where each Sub-Circle contained a single variable and Sub-Circles contained every variable, the time complexity would be O(a^n),6 where a is the maximum size of a variable domain. This is unavoidable, and is the nature of CSPs. However, if the number of circles in Sub-Circles is limited to c, and each circle has at most p possible Plans, then the complexity of this step is O(p^c). This step dominates the time complexity of SS-GATHERER. The next section illustrates how this number can be reduced to a "near" constant value.

The FOR loops in lines 9 and 10 simply combine the possible Non-Circle-Combos from the Incoming-Non-Circles with the Circle-Combos calculated for the Sub-Circles. The worst-case time complexity is no worse than the worst-case time complexity of either Combine-Circles or Get-Combos.
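The control flow just analyzed can be sketched in a few lines. This is an illustrative reading of the pseudocode under stated assumptions (plans represented as Python dicts mapping variable to value, binary constraints as predicates keyed by variable pairs, and the arc-consistency bookkeeping of lines 11-15 collapsed into a direct consistency check); it is not the authors' implementation:

```python
from itertools import product

def get_combos(domains, variables):
    # Line 7: all value assignments for the incoming non-circle variables.
    return [dict(zip(variables, values))
            for values in product(*(domains[v] for v in variables))]

def combine_circles(sub_plans):
    # Line 8: combine the plans of the input sub-circles; two plans are
    # consistent only if they agree on every shared variable.
    combos = [{}]
    for plans in sub_plans:
        combos = [dict(combo, **plan) for combo in combos for plan in plans
                  if all(combo.get(v, plan[v]) == plan[v] for v in plan)]
    return combos

def consistent(plan, constraints):
    # Stand-in for Arc-Consistent (line 13): test every binary constraint
    # whose two variables are both assigned in the plan.
    return all(test(plan[a], plan[b])
               for (a, b), test in constraints.items()
               if a in plan and b in plan)

def process_circle(circle_vars, sub_plans, domains, constraints):
    # Line 6: variables of this circle not covered by any sub-circle.
    new_vars = [v for v in circle_vars
                if not any(v in plan for plans in sub_plans for plan in plans)]
    circle_combos = combine_circles(sub_plans)           # line 8
    output = []
    for combo in get_combos(domains, new_vars):          # line 9
        for circle_combo in circle_combos:               # line 10
            plan = dict(circle_combo, **combo)           # line 12
            if consistent(plan, constraints):            # line 13
                output.append(plan)                      # line 14
    return output
```

Synthesizing two base circles for a toy three-variable chain (A < B, B = C) this way yields a single plan for the top circle without ever enumerating all eight value combinations, which is the behavior the complexity argument above describes.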
If Get-Combos produces a^n combinations, then Combine-Circles will produce none, and vice versa. In practice, Combine-Circles produces p^c combinations while Get-Combos produces a constant7 number of combinations. The total complexity of PROCESS-CIRCLE is therefore O(p^c). Again, this number can be reduced to a "near" constant, as shown below. The complexity of SS-GATHERER, then, is O(p^c) times the number of circles, which is proportional to the number of variables, n. If O(p^c) can be shown to be a "near" constant, then SS-GATHERER has time complexity that is "near" linear with respect to the number of variables.

For each synthesis, arc consistency may be performed (line 13). As discussed above, however, unmodified CSP techniques such as arc consistency are not usually helpful in problems with non-binary-valued constraints. The next section presents a computational substitute that will produce similar efficiency for these kinds of problems.

Using Branch-and-Bound in an Uncertain World

The key observation that enables the application of branch-and-bound to solution synthesis problems is that some variables in a synthesis circle are unaffected by variables outside the circle. For example, in the first circle of Figure 2, (Adquirir, Grupo-Roche, Dr-Andreu), neither Grupo-Roche nor Dr-Andreu is connected (through constraints) to any other variables outside the circle. This will allow us to optimize, or reduce, this circle with respect to these two variables. The reduction process uses branch-and-bound techniques.

5 If one circle has a Plan1 with the assignment < A, X > (value X assigned to variable A) and another circle has a Plan2 with the assignment < A, Y >, then Plan1 and Plan2 are not consistent and cannot be combined.
6 Combining n variables, each with a possible values.
7 a^x, where x is the number of variables in Incoming-Non-Circles, usually 1 or at most 2, except for base circles.
Implementing this type of branch-and-bound is quite simple using the apparatus of the previous sections. It is a simple matter to determine whether, for a given circle, a variable is connected, through constraints, to variables outside the circle. To implement SS-GATHERER with branch-and-bound, we first need to add to the inputs a list of variables that are affected outside the circle.

All that is needed to complete the upgrade of SS-GATHERER is the addition of one procedure and a modification to SET-UP-CONSTRAINTS, the initialization procedure (not shown), so that it sets up the consistency arrays based not on yes-no constraints but rather on values from the 0-to-1 scale. The best approach is to set a THRESHold below which a constraint score is considered "not satisfied." This allows the CSP mechanism to eliminate combinations with low-scoring constraint scores. All other combinations will be allowed to go through.

 1 PROCEDURE PROCESS-CIRCLE(Circle)
   ...
16a  REDUCE-PLANS(Output-Plans, Constrained-Vars)
16   RETURN Output-Plans

17 PROCEDURE REDUCE-PLANS(Plans, Constr-Vars)
18   FOR each Plan in Plans
19     Affected-Assignments <- all value assignments from Plan that involve a Constr-Var
20     IF Affected-Assignments is NIL THEN
         ;; this will only happen for the topmost circle
21       Affected-Assignments <- TOP
22     This-Score <- Get-Score(Plan)
23     Best-Score <- Best-Score[Affected-Assignments]
24     IF (This-Score > Best-Score) THEN
25       Best-Score[Affected-Assignments] <- This-Score
26       Best-Plan[Affected-Assignments] <- Plan
27   RETURN the list of all Best-Plans

Why does this work? First of all, each previously processed circle has been reduced, so the input Circle-Combos will only contain reduced plans. In REDUCE-PLANS, then, we want to keep all possible combinations of variables that are affected outside the circle. Line 19 calculates what these affected combinations are for the input plan.
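A minimal sketch of REDUCE-PLANS under the same assumed representation (plans as dicts of variable to value); `score` here is a stand-in for Get-Score and the 0-to-1 constraint scoring, which this sketch does not model:

```python
def reduce_plans(plans, constrained_vars, score):
    # For each combination of values of the variables that are constrained
    # outside this circle, keep only the best-scoring plan (lines 17-27).
    best = {}
    for plan in plans:
        affected = tuple(sorted((v, plan[v]) for v in constrained_vars
                                if v in plan))
        # An empty key can only happen for the topmost circle (lines 20-21).
        key = affected if affected else 'TOP'
        this_score = score(plan)
        if key not in best or this_score > best[key][0]:
            best[key] = (this_score, plan)
    return [plan for _, plan in best.values()]
```

With a single externally constrained variable, exactly one plan survives per candidate value of that variable; the assignments of all internally constrained variables are optimized away, which is the pruning the text describes.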
The Best-Score and Best-Plan arrays are then indexed by this (consistently ordered) list of combinations. The goal is, for each possible combination of assignments of variables affected outside the circle, to find the Plan that maximizes that combination. Because all of the other, Unconstrained-Vars, are not affected outside the circle, we can find the Plan that maximizes each of the combinations that are affected outside the circle.

In the first circle, (Adquirir, Grupo-Roche, Dr-Andreu), only adquirir is affected outside the circle. Because there exist other constraints that are not in this circle, we cannot choose a final value for adquirir. We will need to retain plans for both possible value assignments: < adquirir, acquire > and < adquirir, learn >. On the other hand, Grupo Roche and Dr. Andreu are not constrained outside the circle; all of the constraints involving them are taken care of within the circle. For this reason, we can find the value assignments of Grupo Roche and Dr. Andreu that produce the maximum score for the < adquirir, acquire > assignment, and then find the value assignments that produce the maximum score for < adquirir, learn >. All other plans involving non-optimal combinations can be discarded. "Scores" are calculated by comparing constraints, such as a learn concept requiring an animate agent, with the actual relationships between the value assignments under consideration.

It must be stressed here that discarding the non-optimal plans in no way incurs the risk of finding sub-optimal solutions. These are not heuristic decisions that might be wrong. Branch-and-bound techniques such as these simply prune off areas of the search space in which optimal solutions are guaranteed not to be found.
The only non-certainty present is in the scoring of constraints, which is an inexact science; however, once given a set of scores, these techniques are guaranteed to give the optimal value assignment combinations.

Branch-and-Bound Results

To illustrate how branch-and-bound dramatically reduces the search space, consider the results of applying it to the sample sentence:

Circle  In-Circles  In-Combos    Reduced-Combos
  1     none        2*2*1 = 4    2
  2     none        2*2*2 = 8    4
  3     none        2*2*1 = 4    2
  4     2 and 3     synth only   2
  5     1 and 4     synth only   1

The total number of combinations examined is the sum of the input combos; in this case, 4+8+4 = 16. Compare this to an exhaustive search, which would examine 2*1*2*2*2*1*2*1 = 32 combinations. As the input problem size increases, the savings are even more dramatic. This happens because the problem is broken up into manageable sub-parts; the total complexity of the problem is the sum of the individual complexities. Without these techniques, the overall complexity is the product of the individual complexities. This is the central fact that supports our claim that the number of circles in Sub-Circles is limited to a "near" constant, leading to a "near" linear time complexity for the whole algorithm.

[Figure 3: Cross Dependencies]

The only way multiplicative growth can occur in SS-GATHERER is when there are constraints across trees, as in Figure 3.
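The counts in the table can be verified with a few lines of arithmetic (the per-word sense counts and the split into three base circles are read off the example):

```python
# Exhaustive search multiplies the per-word sense counts...
word_senses = [2, 1, 2, 2, 2, 1, 2, 1]
exhaustive = 1
for n in word_senses:
    exhaustive *= n              # 2*1*2*2*2*1*2*1

# ...while synthesis only sums the in-combos of the base circles;
# circles 4 and 5 combine already-reduced plans ("synth only").
base_circle_combos = [2 * 2 * 1, 2 * 2 * 2, 2 * 2 * 1]
synthesized = sum(base_circle_combos)

print(exhaustive, synthesized)   # 32 16
```

The additive-versus-multiplicative contrast is exactly why the savings grow with problem size.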
In that figure, several of the circles cannot be fully reduced due to interactions outside the circle. Variable A in Circle 1 cannot be fully reduced8 because of Arc a. Note, however, that when Circle 4 is synthesized, Variable A can be reduced because, at that point, it does not interact outside the larger circle. In Circle 4, Variable B cannot be reduced because it interacts with Variable C. Likewise, Variable C cannot be reduced in Circle 2 because of its interaction with Variable B. In all of these cases, ambiguity must be carried along until no interactions outside the circle exist. For Variables B and C, that does not occur until Circle 6, the entire problem, is processed.

Practically speaking, though, NL problems generally do not allow interactions such as Arc a and Arc b.9 "Governed" (Haegeman 1991) interactions, such as Variable D directly constraining Variable A, can occasionally occur, but these only delay reduction to the next higher circle. Thus, some local multiplicative effects can occur, but over the problem as a whole, the complexity is additive.

To illustrate this point, consider what happens as the size of the problem increases. The following table shows actual results of analyses of various-size problems. We have tested the SS-GATHERER algorithm extensively on a wide variety of sentences in the context of the Mikrokosmos Machine Translation Project. Over 70 sentences have been analyzed (a relatively large corpus for knowledge-based MT). The claims of near-linear time processing and guaranteed optimal solutions have been verified. These three sentences are representative:

8 By "fully reduced" we mean all child variables maximized with respect to a single parent, which cannot be reduced because it connects higher up in the tree.
9 "Long-distance" dependencies do exist, but are relatively rare.

              Sentence A   Sentence B   Sentence C
# plans       79           95           119
exh. combos   7,864,320    56,687,040   235 billion
SS-GATHERER   179          254          327

It is interesting to note that a 20% increase in the number of total plans10 (79 to 95) results in a 626% increase (7.8M to 56M) in the number of exhaustive combinations possible, but only a 42% increase (179 to 254) in the number of combinations considered by SS-GATHERER. As one moves on to even more complex problems, a 25% increase (95 to 119) in the number of plans catapults the exhaustive complexity by 414,600% (56M to 235B) and yet only increases the SS-GATHERER complexity by 29% (254 to 327). As the problem size increases, the minor effects of "local multiplicative" influences diminish with respect to the size of the problem. We expect, therefore, the behavior of this algorithm to move even closer to linear with larger problems (i.e., discourse). And, again, it is important to note that SS-GATHERER is guaranteed to produce the same results as an exhaustive search.

Although time measurements are often misleading, it is important to state the practical outcome of this type of control advancement. Prior to implementing SS-GATHERER, our analyzer failed to complete processing of large sentences. The largest sentence above was analyzed for more than a day with no results. Using SS-GATHERER, on the other hand, the same sentence was finished in 17 seconds. It must be pointed out as well that this is not an artificially selected example: it is a real sentence occurring in natural text, and not an overly large sentence at that.

Conclusion

We have presented a new control environment for processing Natural Language semantics. By combining and extending the AI techniques known as constraint satisfaction, solution synthesis, and branch-and-bound, we have reduced the search space from billions or more to thousands or less.
This paper has concentrated on the combination of branch-and-bound "hunters" with solution synthesis "gatherers."

In the past, the utility of knowledge-based semantics has been limited, subject to arguments that it only works in "toy" environments. Recent efforts at increasing the size of knowledge bases, however, have created an imbalance with existing control techniques, which are unable to handle the explosion of information. We believe that this methodology will enable such work. Furthermore, because this work is a generalization of a control strategy used for simpler binary constraints, we believe that it is applicable to a wide variety of real-life problems. We intend to test this control paradigm on problems outside NLP.

10 The total number of plans corresponds to the total number of word senses for all the words in the sentence.

References

Beale, S. 1996. Hunter-Gatherer: Applying Constraint Satisfaction, Branch-and-Bound and Solution Synthesis to Natural Language Semantics. Technical Report MCCS-96-289, Computing Research Lab, New Mexico State Univ.

Beale, S., and Nirenburg, S. 1995. Dependency-Directed Text Planning. In Proceedings of the 1995 International Joint Conference on Artificial Intelligence, Workshop on Multilingual Text Generation, 13-21. Montreal, Quebec.

Beale, S.; Nirenburg, S.; and Mahesh, K. 1995. Semantic Analysis in the Mikrokosmos Machine Translation Project. In Proceedings of the 2nd Symposium on Natural Language Processing, 297-307. Bangkok, Thailand.

Charniak, E.; Riesbeck, C.K.; McDermott, D.V.; and Meehan, J.R. 1987. Artificial Intelligence Programming. Hillsdale, NJ: Erlbaum.

Freuder, E.C. 1978. Synthesizing Constraint Expressions. Communications of the ACM 21(11): 958-966.

Haegeman, L. 1991. An Introduction to Government and Binding Theory. Oxford, U.K.: Blackwell Publishers.

Lawler, E.W., and Wood, D.E. 1966. Branch-and-Bound Methods: A Survey. Operations Research 14: 699-719.

Mackworth, A.K. 1977.
Consistency in Networks of Relations. Artificial Intelligence 8(1): 99-118.

Mackworth, A.K., and Freuder, E.C. 1985. The Complexity of Some Polynomial Consistency Algorithms for Constraint Satisfaction Problems. Artificial Intelligence 25: 65-74.

Maruyama, H. 1990. Structural Disambiguation with Constraint Propagation. In Proceedings of the 28th Conference of the Association for Computational Linguistics, 31-38. Pittsburgh, Pennsylvania.

Mohr, R., and Henderson, T.C. 1986. Arc and Path Consistency Revisited. Artificial Intelligence 28: 225-233.

Nagao, K. 1992. A Preferential Constraint Satisfaction Technique for Natural Language Analysis. In Proceedings of the 10th European Conference on Artificial Intelligence, 523-527. Vienna.

Tsang, E. 1993. Foundations of Constraint Satisfaction. London: Academic Press.

Tsang, E., and Foster, N. 1990. Solution Synthesis in the Constraint Satisfaction Problem. Technical Report CSM-142, Dept. of Computer Science, Univ. of Essex.
Semantic Interpretation of Nominalizations

Richard D. Hull and Fernando Gomez
Department of Computer Science
University of Central Florida
Orlando, FL 32816
hull@cs.ucf.edu

Abstract

A computational approach to the semantic interpretation of nominalizations is described. Interpretation of nominalizations involves three tasks: deciding whether the nominalization is being used in a verbal or non-verbal sense; disambiguating the nominalized verb when a verbal sense is used; and determining the fillers of the thematic roles of the verbal concept or predicate of the nominalization. A verbal sense can be recognized by the presence of modifiers that represent the arguments of the verbal concept. It is these same modifiers which provide the semantic clues to disambiguate the nominalized verb. In the absence of explicit modifiers, heuristics are used to discriminate between verbal and non-verbal senses. A correspondence between verbs and their nominalizations is exploited so that only a small amount of additional knowledge is needed to handle the nominal form. These methods are tested in the domain of encyclopedic texts and the results are shown.

Introduction

Quirk (Quirk et al. 1985) defines nominalization as "a [noun phrase] which has a systematic correspondence with a clause structure," where the head nouns of nominalization phrases are related morphologically to either a verb (deverbal noun) or adjective (deadjectival noun). In this paper we focus on deverbal nominalizations, a common linguistic device used as a vehicle for describing specific events, generic actions, and action concepts such as evolution. Understanding sentences with nominalizations requires the ability to determine the meaning of the nominalization and to make sense of the nominalization's modifiers, namely, the other words in the noun phrase (NP) containing the nominalization and the prepositional phrases that follow it.
An important problem is distinguishing between the verbal and non-verbal senses of the nominalization, as is necessary for words like "support", "decoration", and "publication". In order to distinguish between verbal and non-verbal senses, we will use the term nominalization to refer to only those senses of the noun which are derived from verbs. For example, the noun "decoration" has several senses, including a military badge of honor, an ornament, and the nominalization sense, which means the act or process of decorating. There is also the equally serious problem of what to do when the nominalized verb is polysemous. This problem is quite prevalent, and while there is a large body of work discussing word sense disambiguation using knowledge-based approaches (McRoy 1992; Hirst 1992; Jacobs & Rau 1993; Voorhees 1993; Li, Szpakowicz, & Matwin 1995), a computational approach to this problem has not, to our knowledge, been specifically addressed in the literature. As an illustration of the problem, consider the verb promote, which has several different meanings, e.g., to promote a product, to promote a person to a higher position of authority, and to promote a cause. The nominalization, "promotion," therefore, can have different meanings, as in the promotion of Peter or the promotion of liberalism.

The framework of a model of semantic interpretation that can be used to solve these problems is described in detail in (Gomez, Segami, & Hull 1997). While a partial treatment of nominalizations involving the attachment of prepositional phrases to them is presented in that work, its focus is not a complete model of nominalization interpretation, and as such it does not address the issues of interpreting nominalization NP modifiers and deciding between a nominalization's verbal and non-verbal senses, which will be discussed here.

The remainder of this paper comprises five sections.
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Section describes the essential aspects of the semantic interpreter of which the nominalization algorithms, explained in section , are a part. The testing of these algorithms on a collection of nominalizations is discussed in section . A comparison of this approach to other work in the literature is explained in section . Finally, section presents the authors' conclusions.

Semantic Interpreter

The parser used by our semantic interpreter leaves structural syntactic relations underspecified along the lines of D-Theory (Marcus, Hindle, & Fleck 1983) and minimal commitment parsers (Weinberg 1993). The parser recognizes the following syntactic relations: subject, object1, object2, predicate, and prepositional phrases (PPs). Object1 is built for the first postverbal noun phrase (NP) of transitive verbs, and object2 for the second postverbal NP of ditransitive verbs. The parser also recognizes temporal adjuncts of the verb; but, as indicated, it does not resolve structural ambiguity. PPs are left unattached in the structure built by the parser until the semantic interpreter finds their meaning and attaches them.

As each syntactic relation is identified, it is passed to the semantic interpreter to make semantic sense of it. All important semantic decisions are delayed until the meaning of the verb is determined. Determining verb meaning is done by rules that use the types of syntactic relations built by the parser and the semantic categories of the nouns in them. For instance, what determines the meaning of "drive" in (1) is the direct object of the verb and the fact that "bus" is a motor-vehicle (here "drive" means to operate a vehicle), and that "nail" is a device (here "drive" means to hammer).

(1) Peter drove the bus/the nail.

In addition to NP complement rules, prepositional phrase rules (PP rules) are also used to determine the meaning of the verb.
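As an illustration of such complement-driven disambiguation, here is a toy version of the "drive" rules, with an assumed miniature is-a hierarchy; the category and concept names are illustrative, not the system's actual knowledge base:

```python
# Tiny is-a hierarchy: each concept points to its parent.
ISA = {'bus': 'motor-vehicle', 'nail': 'device',
       'motor-vehicle': 'thing', 'device': 'thing'}

def is_a(concept, category):
    # Walk up the hierarchy to test subsumption.
    while concept is not None:
        if concept == category:
            return True
        concept = ISA.get(concept)
    return False

# NP complement rules for "drive":
# (category of direct object, verbal concept chosen).
DRIVE_RULES = [('motor-vehicle', 'operate-vehicle'),
               ('device', 'hammer-in')]

def verbal_concept(rules, direct_object):
    for category, concept in rules:
        if is_a(direct_object, category):
            return concept
    return None  # meaning left unspecified, awaiting more evidence

print(verbal_concept(DRIVE_RULES, 'bus'))   # operate-vehicle
print(verbal_concept(DRIVE_RULES, 'nail'))  # hammer-in
```

Returning None when no rule fires mirrors the interpreter's strategy of delaying the decision until later evidence, such as a PP rule, can disambiguate.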
In (2), the meaning of "left" is identified by two for-rules stored in its lexical entry: it is identified as a transfer of possession in the first case because the object of the PP is a human, and as depart in the second because the object of the PP is a location.

(2) Jennifer left the orange groves for her son/for home.

Once the meaning of the verb is established, additional knowledge is needed to interpret subsequent syntactic relations and those which have been parsed but have remained uninterpreted. This knowledge is stored in the representation of the verbal concept or predicate. Below are examples of NP complement rules and preposition rules for the verb "defend":

(defend
  ((object
     ((if% is-a obj location)
      (verbal-concept-is defend-physical-thing))
     ((if% is-a obj idea)
      (verbal-concept-is defend-idea)))
   (prep
     (in ((if% equal obj-of-prep court)
          (verbal-concept-is legal-defend))))
   (end-of-clause
     ((if% is-a obj championship)
      (verbal-concept-is defend-championship))
     (t (verbal-concept-is defend)))))

The first set of rules selects the verbal concept based upon the NP complement. If the direct object is a subconcept or an instance of a location within a hierarchy of concepts, then the meaning of the verb is defend-physical-thing. This rule is designed to handle constructions such as Carleton defended Quebec. If the direct object is a subconcept or instance of an idea in the concept hierarchy, then the meaning of the verb is represented by the verbal concept defend-idea, as in her defense of the theorem. If the direct object does not pass either of these two constraints, then the meaning of the verb is left unspecified in hopes that later evidence will disambiguate it. The next rule is a preposition rule for PPs that follow the verb "defend" and begin with the preposition "in".
If the object of such a prepositional phrase (the object of the PP is the head of its complement NP) is equal to the word "court", then the meaning of the verb is represented by the verbal concept legal-defend. The last set of rules, called end-of-clause rules, is used if the parser reaches the end of a clause and the meaning of the verb is still unknown. If the direct object is a subconcept or instance of a championship in the concept hierarchy, then the meaning of the verb is represented by the verbal concept defend-championship. Otherwise, the meaning of the verb is the generic defend. These five verbal concepts are displayed below.

(defend
  (is-a (action-r))
  (subj (agent (actor)))
  (obj (thing (theme)))
  (prep (against (thing (against (strong))))
        (from (thing (against (strong))))))

; WordNet sense defend 1.47
(defend-idea
  (is-a (defend))
  (obj (idea (theme))))

; WordNet sense defend2.5
(defend-physical-thing
  (is-a (defend))
  (obj (physical-thing (theme))))

; WordNet sense defend3
(defend-championship
  (is-a (defend))
  (obj (championship (theme))))

; WordNet sense defend6
(legal-defend
  (is-a (defend))
  (obj (agent (theme)))
  (prep (in (court (at-hoc (strong))))))

The first entry in the verbal concept defend, (is-a (action-r)), places defend within the hierarchy of action concepts. The next entry is a restriction: if the subject of defend is subsumed by the agent concept in the concept hierarchy, then make it fill the actor role. The other entries represent restrictions on the object and prepositional phrases. Each of the subconcepts of defend (defend-idea, defend-physical-thing, defend-championship, and legal-defend) inherits entries from defend.

This fine-grained decomposition of "defend" is necessary if one wishes to make specific inferences depending on the type of defense.
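The role-filling behavior such frames encode can be sketched as follows; the frame layout (role mapped to a syntactic slot plus a selectional restriction) and the miniature hierarchy are assumptions for illustration only, not the system's actual representation:

```python
# Miniature concept hierarchy: concept -> parent.
HIERARCHY = {'quebec': 'location', 'location': 'physical-thing',
             'theorem': 'idea', 'carleton': 'agent', 'agent': 'thing'}

def subsumes(category, concept):
    # True if concept is category or a descendant of it.
    while concept is not None:
        if concept == category:
            return True
        concept = HIERARCHY.get(concept)
    return False

# Assumed frame for defend-physical-thing:
# role -> (syntactic slot, selectional restriction).
DEFEND_PHYSICAL_THING = {'actor': ('subj', 'agent'),
                         'theme': ('obj', 'physical-thing')}

def fill_roles(frame, relations):
    roles = {}
    for role, (slot, restriction) in frame.items():
        filler = relations.get(slot)
        if filler is not None and subsumes(restriction, filler):
            roles[role] = filler
    return roles

# "Carleton defended Quebec"
print(fill_roles(DEFEND_PHYSICAL_THING,
                 {'subj': 'carleton', 'obj': 'quebec'}))
# {'actor': 'carleton', 'theme': 'quebec'}
```

A filler that fails its restriction, for example an idea in the object slot of defend-physical-thing, simply leaves that role unfilled, which is what allows the competing subconcepts to discriminate among readings.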
The structures above embody the knowledge necessary to understand clauses containing the verb "defend" (see (Gomez, Segami, & Hull 1997) for a detailed discussion of VM rules and verbal concepts). This verbal knowledge can be exploited and reused for the interpretation of nominalizations, if a small amount of additional information is constructed detailing how the nominalization's modifiers relate to the syntactic relations of the VM rules and verbal concepts. Knowledge indicating whether any thematic roles are "obligatory," that is, necessary for a verbal sense interpretation, is also stored.

The nominalization "defense" is shown below. Defense has one obligatory role, theme. Because "defense" has both verbal and non-verbal senses, a requirement is made that the theme must be present for a verbal sense to be chosen. A nominalization that has only verbal senses does not need an obligatory-role slot. No special mapping rules for genitives or prepositions are present because "defense" behaves like most nominalizations: genitives represent either the actor or theme of the action; and besides the preposition "of", which represents the theme of transitive verbs and the actor of intransitive verbs, "defense" inherits the meanings of its PP modifiers from its root verb, "defend."

(defense (obligatory-role (theme)))

For the majority of nominalizations, no information over and above that of the verbal knowledge is necessary. However, there are exceptions. Genitive modifiers of the nominalization "attendance" only make sense as the actors of "attend". In this case, a slot specifying that the genitive should fill the actor role is needed. Another situation where additional information is needed is in the handling of certain prepositional phrases. The nominalization "control" takes PPs using the preposition "over" as the verb's object, as in his control over the business, while the verb "control" does not.
To handle this, a slot mapping the preposition "over" to the verb's object is added.

In addition to providing the means for disambiguating between nominal and deverbal senses of the nominalization, verbal knowledge can also be used to disambiguate the underlying verb of the nominalization when it exhibits polysemy. It is the prepositional phrase "in court" that selects the meaning legal-defend for "defense" in her defense in court, and it is the prepositional phrase "of Richmond" that selects the meaning defend-physical-thing in Lee gave up the defense of Richmond.

Interpretation Algorithms for Nominalizations

The interpretation algorithms attempt to determine the verbal concept of the nominalization and to fill its thematic roles. Determination of the verbal concept requires disambiguation of the meaning of the nominalization's root verb. This ambiguity may be resolved by examining the noun phrase in which the nominalization occurs, or, as is true in many cases, disambiguation can only be accomplished by examining postnominal prepositional phrases. Once the verbal concept has been identified, surrounding nouns are then interpreted as verbal concept arguments. There are three separate interpretation algorithms: the nominalization noun phrase algorithm, the prepositional attachment and meaning determination algorithm, and the end-of-clause algorithm. We will discuss the main points and then the details of each algorithm in turn.

Nominalization Noun Phrase Algorithm

The nominalization noun phrase algorithm is triggered when the head noun1 of some NP is determined to be a potential nominalization. This is accomplished by consulting WordNet (Miller et al. 1993) to see if any of the senses of the noun are hypernyms of either actions or events. Conceptually, the algorithm has two objectives:
1. To determine the verbal concept or predicate of the nominalization, and
2. To identify which thematic roles of the verbal concept, if any, each of the remaining nouns and adjectives of the NP fill.

Determining the verbal concept of the nominalization establishes its meaning within the context of the sentence. Occasionally, the nominalization has a single meaning, and in those cases we can immediately determine the verbal concept. This trivial disambiguation is attempted first and works for nominalizations like "invasion" and "murder." More often, however, determining the verbal concept requires disambiguating the nominalization because the root verb of the nominalization is polysemous.

In order to disambiguate polysemous nominalizations, the algorithm uses the root verb to select VM rules. In addition, mapping rules and heuristics are needed to handle the fact that nominalizations do not take bare NPs; the verb's syntactic subject and object reappear as genitival, adjectival, or prepositional modifiers. This algorithm addresses genitives, possessive pronouns, single prenominal nouns, i.e., pairs of the form noun nom, and adjectives which fill thematic roles. The prepositional attachment and meaning determination algorithm described later handles prepositional modifiers.

Determining the Verbal Concept

The VM rules of the root verb do not include any for handling genitives, pronouns, or noun/adjective modifiers. Therefore, if these rules are to be reused, some way of selecting the appropriate ones is needed. The central problem associated with disambiguation of the nominalization within the NP then becomes: Which VM rules should be fired? Consider the case where the NP is of the form (genitive nominalization). The genitive may correspond semantically to the verb's subject or object, as illustrated in the examples below.

(3) Lincoln's election; The representatives elected Lincoln.
1 The algorithm does not currently handle nominalizations in positions other than the head.

(4) Metternich's resignation; Metternich resigned.

In (3), the genitive corresponds to the object position, while in (4), it corresponds to the subject position. The verb resign is intransitive, except for colloquial expressions such as "resigned his office", and therefore genitive modifiers of the nominalization resignation correspond to the verb's subject. This idea forms the first rule selection heuristic. Passive nominals behave differently; their genitive modifiers correspond to the verbal object. A mapping rule selects the verb's object rules when the nominalization is passive. If the nominalization's verb is neither intransitive nor the nominalization passive, then the genitive could correspond to either the subject or object. Consequently, the selection of VM rules is postponed, in hopes that following prepositional phrases will disambiguate the nominalization.

(a) If the verbal concept has been determined, then
 i. Fire nominalization mapping rules.
 ii. If no rules are triggered, check the genitive/pronoun against the selectional restrictions of the verbal concept's object and subject entries (subject entries only for intransitive verbs).
 iii. If no meaning was found for the genitive/pronoun, go to step 4.
(b) Else (the verbal concept has not been determined):
 i. Fire nominalization mapping rules.
 ii. Else, if the nominalization is an -ing nominalization, fire the subject rules of the verb.
 iii. If no rules are triggered and the verb is intransitive, fire the subject rules of the verb.
 iv. Else, try both the object and subject rules of the verb. If only one type has a rule that fires, take the triggered verbal concept.
 v. If no rules are triggered, the verbal concept cannot be determined; exit.
3. If the NP containing n9 includes no other qualifier and the verbal concept is unknown, exit.
4.
Else, if the NP containing verbal concept is known: n9 includes some other qualifier(s) and the (a) If the modifier is a noun, attempt to determine which role the noun plays in the verbal concept underlying the nominalization as follows: If the nominalization is modified by either a noun or adjective, it is not possible to determine exactly which disambiguation rules must be fired. Any ordering of the rules is guaranteed to be wrong in a large percent- age of cases. In addition, it would be unproductive to try all of the rules in hopes that only the appropriate one would be triggered. Therefore, disambiguation is postponed until more ev dence is available in the form i. Examine the selectional the verbal concept. restrictions found in the representation of ii. If the modifier satisfies of the modifier. a single role, make that the interpretation iii. Else, procrastinate until more evidence is available. (b) Else, if the modifier is an adjective, determine if it can: fill an at-time role, e.g., the 1972 election; is derived from a noun which may fill a role; or is an ordinal adjective, in which case, mark the adjective as a temporal indicator. Prepositional Attachment and Meaning Algorithm The prepositional attachment and meaning determi- nation algorithm is activated for each prepositional phrase found within the scope of some nominalization, which is defined to be any postnominal position within the same sentence clause as the nominalization, up to the main verb (for nominalizations before the verb) and to the end of the clause (for nominalizations after the verb). This algorithm has two objectives: of roles filled by prepositional phrases. Filling Thematic Roles Once the verbal concept of the nominalization has been determined, the next step is to determine which of the other constituents of the NP fill thematic roles of the verbal concept and what those roles are. 
A syntac- tic relation is said to fill a thematic role if the concept it represents in the concept hierarchy passes the selec- tional restrictions associated with that role. For exam- ple, humans elect humans to institutions as social-roles. The hierarchy of concepts is consulted to determine if the argument of the syntactic relation under considera- tion is-subsumed by the subhierarchy of the selectional restriction. If it is, then the restriction is passed. 1. To determine if the prepositional the nominalization, and phrase attaches to 2. To determine the meaning of the prepositional phrase attachment within the context of the nom- inalization. As each such prepositional phrase is parsed from left to right, the preposition is used to select either VM rules, if the verbal concept has not been established, or to select verbal concept selection has been established. If one of these al restrictions, if it rules fires, indicat- Even when the verbal concept of the nominalization has been determined, identifying the role that the nom- inalization’s modifier pl ays is difficult. For this reason it seems appropriate to wait until all of the roles stem- ming from prepositional phrases have been identified before trying to resolve the nominalization’s modifiers in the NP. That way, candidate roles, if already filled, can be weeded out. Thus, only the remaining unfilled roles need to be checked. Below is a detailed descrip- ing that the nominalization takes the preposition, the PP is attached to the nominalization, and its thematic role is noted. The prepositional phrase attachment al- gorithm, shown below, is part of a general semantic interpretation algorithm that is described in (Gomez, Segami, & Hull 1997). Prepositional Phrase Attachment Algorithm Let n9 be the head noun of some noun phrase where one or more of the senses of n9 represent a nominalized verb, and let the verbal concept of that verb be VCO. Let ppl, pp2, . . . . 
pp, be a list of one or more preposi- tional phrases that follow no in the sentence. Applv the algorithm below to of determine whether ppi at‘taches to pp; is, that is, its thematic role: (modifies) &, and whgt the 1. Ifnghas continue. a single sense, set the concept vco to this meaning and meaning 2. If the NP containing no includes a genitive or a possessive pronoun, and there is no modifying crof” PP, attempt to realize the qualifier (or its anaphoric referent) as a thematic role of a verbal concept from the nominalization’s root verb as follows: 1. If the then concept, vc9, underlying the nominalization is not known, Semantics SC Discourse 1065 (a) If the preposition is “of,” to ensure i. Use the “of” mapping rules of the nominaliaation, if any exist. ii. If no “of” mapping rules exist, attempt to fire the obj rules of the nominalized verb. (b) Else, (the preposition is not rrofJ’) i. Use the exist. appropriate mapping rules of the nominalization, if any ii. Else, attempt to fire the preposition for the given preposition. 2. Else, if vco is known, nominaliaed (a) If the nominalization has mapping rules for the preposition, use them to select the appropriate verbal concept entry. If the entry’s selectional restriction is passed, goto step 3 else goto step 4. (“1 Else, if the preposition is “of,” try the obj entry’s selectional restriction is passed, goto entries of step 3 else UC&-J. got0 If the step 4. (c) Else, use the entries of vco indexed under the appropriate preposi- tion. If the entry’s selectional restriction is passed, goto step 3 else got0 step 4. 3. pp; attaches to vcg, therefore, save tion, save its meaning, and exit. the attachment nominaliaa- 4. If vco does not claim ppi, see if any superconcept of vco has an entry, under the appropriate preposition or using the mapping, that vco in- herits, which determines attachment and meaning. 
Repeat step 2 with the ancestor of vc0 recursively until either an attachment is found, or the list of superconcepts is exhausted.

Discussion

We examine the progress of the interpreter as the nominalization and its modifying constituents of the sentence below are parsed:

(5) The king sent another fleet to break the [Muslims']genitive [control]n0 [over spice]pp1 [in that country]pp2.

In (5), the meaning of "control" cannot be determined by the noun phrase interpretation algorithm because "Muslims' control" may have several different interpretations. Now the PP algorithm is called with "over spice," and because the verbal concept is unknown, step 1 executes. The nominalization "control" has mapping rules for the preposition "over," which ultimately take the object of the preposition as its theme, and determine the verbal concept of "control" to be control-physical-thing. The last constituent, pp2, is handled by step 2c, which attaches it to "control" as the location of the action. Pp2 can also be attached to "spice," but preference is given to the nominalization. The genitive modifier is handled by the end-of-clause algorithm, which is described in the next section.

End-of-Clause Algorithm

The end-of-clause algorithm is activated when the parser reaches the end of a clause containing a nominalization.2 This algorithm has two objectives:

1. To determine the verbal concept of the nominalization, if it is still unknown, and to determine if the nominalization is being used in a non-verbal sense, and
2. To reevaluate each nominalization modifier to verify that an interpretation has been found.

2 Actually, a general end-of-clause algorithm is activated when any clause ends. For the sake of brevity we will describe only those parts of the general end-of-clause algorithm related to the interpretation of nominalizations, and will treat them as a separate algorithm.

If the verbal concept is still undetermined, this algorithm makes one last effort to establish it.
First the algorithm fires the end-of-clause rules of the root verb. If none of the rules fires, it may be that the nominalization is part of a collocation. The NP of the nominalization is used to search WordNet's list of collocations. This will provide the verbal concept in cases such as "primary election" and "free trade." If no matching collocation can be found and the nominalization has both verbal and non-verbal senses, a set of heuristics based on work by Grimshaw (Grimshaw 1990) is used to reject any verbal sense. In the absence of any thematic roles, a verbal sense can be rejected if the nominalization is plural or has an indefinite article, e.g., Maxwell moved the controls and Tasha wanted a decoration. If the verbal sense of the nominalization can be rejected and the nominalization has only one non-verbal sense, then that sense can be selected. If these heuristics are unsuccessful or a non-verbal sense is selected, a verbal concept will not be found and further processing is abandoned.

However, if the verbal concept is already known or is established by the first step of the end-of-clause, each prepositional phrase within the scope of the nominalization and each noun within the nominalization NP is reexamined to verify that it has been interpreted. Reexamination means to reactivate the appropriate algorithm for the nominalization modifier. This is necessary because the determination of the verbal concept might come after several prepositional phrases have been parsed.

End-of-Clause Algorithm

1. If end-of-clause rules do not determine the verbal concept,
   (a) Look up the NP (minus articles, quantifiers, etc.) in WordNet's list of collocations.
   (b) If the meaning of the NP is found, save it and goto step 2.
   (c) Else, if the nominalization is plural and there are competing non-verbal senses, assume that this is a non-verbal use of the nominalization and exit.
   (d) Else, if there is only one verbal sense, make it the verbal concept.
2. Reevaluate each nominalization modifier (if the verbal concept has been determined):
   (a) See if the modifier has either been assigned a thematic role or has been attached to some other constituent.
   (b) If a modifier that has not been attached is found, fire the appropriate rules (depending on whether the modifier is a prepositional phrase or resides within the nominalization's noun phrase).
   (c) If a rule fires, be sure that the thematic role indicated by the rule has not already been filled.
   (d) If the role has been filled, reject that role and continue firing any other appropriate rules.

Testing

The algorithms were tested to determine how successful they were in disambiguating the nominalization, and in recognizing the underlying verbal concept of the nominalization and filling its thematic roles. The discourse domain was comprised of biographical articles from the World Book Encyclopedia, which are being used in an ongoing research project to acquire historical knowledge from encyclopedic texts (Hull 1994).

1066 Natural Language

Table 1: Algorithm Results

  nominal    n    senses   disambig.   gen.    NP     PP
  arrest     29   3        100%        100%    33%    91%
  birth      78   4        81%         100%    50%    97%
  capture    34   5        100%        100%    50%    100%

The algorithms assume the existence of rules for disambiguating the root verb of each of the nominalizations, as well as the mapping rules for those syntactic constructions which are specific to the nominalization. The verb disambiguation rules had already been written as part of our ongoing research, and therefore, the effort needed to handle the nominalizations of these verbs was quite small. Moreover, a list of proper nouns representing proper names was used for recognizing people and locations.

Procedure

The results of the testing are shown in Table 1. Ten nominalizations were selected randomly from a list of nominalizations with at least 20 occurrences in 5000 biography articles from the World Book Encyclopedia.
The column n shows how many occurrences of the nominalization were found in those articles. The algorithms were applied to each occurrence, and the results of the interpreter were examined to see if the nominalization was correctly disambiguated, if the genitive and the rest of the NP were correctly interpreted, and how successfully the algorithms interpreted prepositional phrases modifying the nominalization.

Analysis

The results in Table 1 illustrate the strengths and the one limitation of the algorithms. The correct sense of each nominalization was selected more than 70% of the time, with the worst disambiguation score, 72%, occurring when testing "control," the most ambiguous nominalization with 11 WordNet senses. Failures to disambiguate were most often caused by situations where the verb rules could not be directly applied. For example, in the sentence Court was noted for her endurance and control, nothing triggers any of the verb rules. Further, because "control" has both verbal and non-verbal senses, one can't assume that this is an instance of either one. Other disambiguation errors resulted from rules that didn't fire or selected the wrong verbal concept, or that missed a non-verbal sense. On the whole, however, these algorithms provide an effective means of nominalization disambiguation.

The results of determining the thematic roles of deverbal nominalizations are given by the next three columns of Table 1. The thematic roles of genitives were found 93% of the time, showing how regular genitives are. The only statistically relevant problem involved two possessives used together, as in "his party's nomination" or "their country's trade." This problem could be easily handled in a general manner.

Interpreting the other elements of the noun phrase shows a limitation of the algorithms, which shouldn't be surprising considering the difficulty of NP interpretation.
The most significant problem was the interpretation of adjectives which do not fill thematic roles but portray a manner of the action. Examples include "sole control," "tight control," "profitable trade," "mass murder," and "powerful defense." Related to this problem are other adjectives which are not manners of the action but could not be interpreted as thematic roles, e.g., "foreign trade," "extraordinary breath control," and "important capture."

PPs were correctly attached and their meaning determined over 90% of the time. This shows that the verb's mechanism for handling PPs can be readily used by the nominalization interpretation algorithms. Most failures were due to ambiguous PP heads and cases where the nominalization took prepositions different from the verb, which were unanticipated.

Related Research

Several knowledge-based approaches to the interpretation of nominalizations can be found in the current literature. PUNDIT is a system for processing natural language messages, which was used for understanding failure messages generated on Navy ships (Dahl, Palmer, & Passonneau 1987). Nominalizations in PUNDIT are handled syntactically like noun phrases but semantically as clauses, with predicate/argument structure. In fact, PUNDIT uses the same decomposition as the associated verb. Special nominalization mapping rules are used to handle the diverse syntactic realization of constituents of nominalizations. Some components of our approach are similar; nominalizations inherit selectional restrictions and syntactic mappings from their associated verbal concepts and can have their own specialized mappings when appropriate. PUNDIT avoids handling the ambiguity of the nominalization, including the ambiguity between the verbal and non-verbal senses and the polysemy of the nominalized verb. KERNEL (Palmer et al. 1993), a successor of PUNDIT, treats nominalizations in much the same way.
Voorhees (Voorhees 1993) and Li et al. (Li, Szpakowicz, & Matwin 1995) both use WordNet as a source of disambiguation information, but neither addresses the interpretation of nominalizations.

Grimshaw (Grimshaw 1990) states that a subclass of nouns, which she refers to as process or event nominals, have argument structure that is filled by grammatical arguments. Further, these arguments are obligatory to the same extent to which they are obligatory for the nominal's associated verb. Other nouns, which she calls simple events or result nominals, do not have argument structure, though they may take either complements or modifiers. Grimshaw explains that the common belief that nouns take arguments optionally (Anderson 1983; Dowty 1989) is really just a case of confusing ambiguous nouns that have both event and simple or result senses, e.g., examination. She then provides a comprehensive list of evidence supporting the notion that nouns take obligatory arguments, including methods for disambiguating these nouns. While knowing that a particular nominalization does or does not have argument structure can help in choosing between its verbal and non-verbal senses, it cannot disambiguate the nominalization further. Moreover, the restriction that argument structure is obligatory begs the question of what to do when not all the arguments are present and the nominalization clearly describes an action. This phenomenon, illustrated by the sentences below, occurs quite frequently:

(6) Some of Johnson's accusers tried to implicate him in Lincoln's murder, but failed.
(7) When the news of Pompey's defeat at Pharsalus in 48 B.C. reached him, Cato fled to North Africa.
(8) He saw that city's destruction by British and American bombing in 1945.
Although the nominalizations "murder," "defeat," and "destruction" in the sentences above do not meet Grimshaw's criteria for having argument structure, they do take the arguments "Lincoln," "Pompey," and "city" respectively as their themes, and they do denote events. Instead of portraying these nouns as passive nominals and calling their arguments adjuncts, our approach handles them in the same manner as if they had been written as the murder of Lincoln, the defeat of Pompey, and the destruction of that city.

Conclusions

We have provided knowledge-based algorithms for the semantic interpretation of nominalizations. These algorithms address the problems of differentiating between the nominalization's verbal and non-verbal senses and interpreting the nominalization when it occurs as the head noun of NPs. Interpreting the nominalization involves determining the predicate of the nominalization when it is polysemous, and determining the attachment of the nominalization's PP modifiers and the identification of their thematic roles.

One major limitation of this approach is the need for hand-crafted representations of VM rules, verbal concepts, and a general ontology. We are working on a parallel project that integrates WordNet's lexical knowledge base into our system. Preliminary results indicate that the task of defining VM rules and verbal concepts can be highly simplified by interfacing our ontology with the WordNet noun ontology and verb hierarchy.

Acknowledgments

This research has been funded by NASA-KSC Contract NAG-10-0120.

References

Anderson, M. 1983. Prenominal genitive NPs. Linguistic Review 3:1-24.
Dahl, D.; Palmer, M.; and Passonneau, R. 1987. Nominalizations in PUNDIT. In Proc. of the 25th Annual Meeting of the ACL, 131-137.
Dowty, D. R. 1989. On the semantic content of the notion 'thematic role'. In Chierchia, G.; Partee, B. H.; and Turner, R., eds., Properties, Types, and Meaning. Dordrecht: Kluwer.
Gomez, F.; Segami, C.; and Hull, R.
1997. Determining prepositional attachment, prepositional meaning, verb meaning and thematic roles. Computational Intelligence 13(1). To appear.
Grimshaw, J. 1990. Argument Structure. Cambridge, Mass.: MIT Press.
Hirst, G. 1992. Semantic interpretation and the resolution of ambiguity. New York: Cambridge University Press.
Hull, R. 1994. Acquisition of historical knowledge from encyclopedic texts. Technical report, University of Central Florida, Department of Computer Science. CS-TR-94-05, dissertation proposal.
Jacobs, P., and Rau, L. 1993. Innovations in text interpretation. Artificial Intelligence 63:143-191.
Li, X.; Szpakowicz, S.; and Matwin, S. 1995. A WordNet-based algorithm for word sense disambiguation. In Proc. of IJCAI-95, 1368-1374.
Marcus, M.; Hindle, D.; and Fleck, M. 1983. D-theory: Talking about talking about trees. In Proc. of the Annual Meeting of the ACL.
McRoy, S. 1992. Using multiple sources for word sense discrimination. Computational Linguistics 18:1-30.
Miller, G.; Beckwith, R.; Fellbaum, C.; Gross, D.; and Miller, K. 1993. Introduction to WordNet: An on-line lexical database. Technical report, Princeton. CSL Report 43, revised March 1993.
Palmer, M.; Passonneau, R.; Weir, C.; and Finin, T. 1993. The KERNEL text understanding system. Artificial Intelligence 63:17-68.
Quirk, R.; Greenbaum, S.; Leech, G.; and Svartvik, J. 1985. A Comprehensive Grammar of the English Language. New York: Longman Group, Inc.
Voorhees, E. 1993. Using WordNet to disambiguate word senses for text retrieval. In Proc. of the 16th Annual Int'l ACM SIGIR Conference on Research and Development in Information Retrieval, 171-180.
Weinberg, A. 1993. Parameters in the theory of sentence processing: Minimal commitment theory goes east. Journal of Psycholinguistic Research 22(3):339-364.
Building Up Rhetorical Structure Trees

Daniel Marcu
Department of Computer Science
University of Toronto
Toronto, Ontario
Canada M5S 3G4
marcu@cs.toronto.edu

Abstract

I use the distinction between the nuclei and the satellites that pertain to discourse relations to introduce a compositionality criterion for discourse trees. I provide a first-order formalization of rhetorical structure trees and, on its basis, I derive an algorithm that constructs all the valid rhetorical trees that can be associated with a given discourse.

Motivation

Driven mostly by research in natural language generation, rhetorical structure theory (RST) (Mann & Thompson 1988) has become one of the most widely applied discourse theories. Despite its popularity, RST still lacks both a formal specification that would allow one to distinguish between well- and ill-formed rhetorical structure trees (RS-trees) and algorithms that would enable one to determine all the possible rhetorical analyses of a given discourse. For example, consider the following text (in which each textual unit1 is labeled for reference):

(1) [No matter how much one wants to stay a non-smoker,A1] [the truth is that the pressure to smoke in junior high is greater than it will be any other time of one's life.B1] [We know that 3,000 teens start smoking each day,C1] [although it is a fact that 90% of them once thought that smoking was something that they'd never do.D1]

1 Throughout this paper, I use interchangeably the terms textual unit and minimal unit to refer to clauses.

According to Mann and Thompson's definitions (1988), the rhetorical relations given in (2) below hold between the individual text units,2 because the understanding of both A1 and D1 will increase the reader's readiness to accept the writer's right to present B1; the understanding of C1 will increase the reader's belief of B1; the recognition of C1 as something compatible with the situation presented in
D1 will increase the reader's positive regard for the situation presented in D1; and the situation presented in D1 is a restatement of the situation presented in A1.

(2) RR = { rhet_rel(JUSTIFICATION, A1, B1),
           rhet_rel(JUSTIFICATION, D1, B1),
           rhet_rel(EVIDENCE, C1, B1),
           rhet_rel(CONCESSION, C1, D1),
           rhet_rel(RESTATEMENT, D1, A1) }

2 Throughout this paper, I use the convention that rhetorical relations are represented as sorted, first-order predicates having the form rhet_rel(name, satellite, nucleus), where name, satellite, and nucleus represent the name, satellite, and nucleus of a rhetorical relation, respectively. Multinuclear relations are represented as predicates having the form rhet_rel(name, nucleus1, nucleus2).

Assume now that one is given the task of building an RS-tree for text (1) and that one produces the candidates in figure 1.3 Any student in RST would notice from the beginning that the tree in figure 1.d is illegal with respect to the requirements specified by Mann and Thompson (1988) because C1 belongs to more than one text span, namely A1-C1 and C1-D1. However, even a specialist in RST will have trouble determining whether the trees in figure 1.a-c represent all the possible ways in which a rhetorical structure could be assigned to text (1), and moreover, in determining if these trees are correct with respect to the requirements of RST.

In this paper, I provide a formalization of the structure of RS-trees and show how one can use it to find answers to the questions given above. Section 2 reviews the elements of RST that are relevant for this paper, provides an explanation for the ambiguity of RS-trees, and proposes an informal mechanism that would enable one to alleviate the problems that are associated with this ambiguity. Section 3 creates the setting for the full formalization of RS-trees, which is presented in section 4.
The last section is dedicated to an algorithmic perspective of the formalization and a discussion of its relevance to discourse processing.

A critical analysis of RS-trees: informal intuitions

I believe that the explanation for the current lack of algorithms capable of automatically building the RS-trees that pertain to a given discourse can be found not only in the ambiguous definition of the rhetorical relations, but also in the incomplete description of RS-trees that is provided in the original theory. A careful analysis of the constraints provided by Mann and Thompson (1988, p. 248) shows that their specification for RS-trees is not complete with respect to some compositionality requirements, which would be necessary in order to formulate precisely the conditions that have to be satisfied if two adjacent spans are to be put together. Assume, for example, that an analyst is given text (1) and the set of rhetorical relations that pertain to the minimal units (2), and that that analyst takes the reasonable decision to build the spans A1-B1 and C1-D1, as shown in figure 2. To complete the construction of the RS-tree, the analyst will have to decide what the best relation is that could span over A1-B1 and C1-D1.

3 Throughout this paper, I use the graphical representation for RS-trees that is described by Mann and Thompson (1988).

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

[Figure 1 omitted: A set of possible rhetorical analyses of text (1).]

[Figure 2 omitted: An example of the ambiguity that pertains to the construction of RS-trees.]
If she considers the elementary relations (2) that hold across the two spans, she has three choices, which correspond to the relations rhet_rel(JUSTIFICATION, D1, B1), rhet_rel(EVIDENCE, C1, B1), and rhet_rel(RESTATEMENT, D1, A1). Which is the correct one to choose? More generally, suppose that the analyst has already built two partial RS-trees on the top of two adjacent spans that consist of ten and twenty minimal units, respectively. Is it correct to join the two partial RS-trees in order to create a bigger tree just because there is a rhetorical relation that holds between two arbitrary minimal units that belong to those spans? A possible answer is to say that rhetorical relations are defined over spans that are larger than one unit too; therefore, in our case, it is correct to put the two partial RS-trees together if there is a rhetorical relation that holds between the two spans that we have considered. But if this is the case, how can one determine the precise boundaries of the spans over which that relation holds? And how do the rhetorical relations that hold between minimal units relate to the relations that hold between larger text spans? Mann and Thompson (1988) provide no precise answer for these questions.

Nuclearity and RS-trees

Despite the lack of a formal specification of the conditions that must hold in order to join two adjacent text spans, I believe that RST contains an implicit specification, which can be derived from Mann and Thompson's (1988) and Matthiessen and Thompson's (1988) discussion of nuclearity. During the development of RST, these researchers noticed that what is expressed by the nucleus of a rhetorical relation is more essential to the writer's purpose than the satellite, and that the satellite of a rhetorical relation is incomprehensible independent of the nucleus, but not vice versa.
Consequently, deleting the nuclei of the rhetorical relations that hold among all textual units in a text yields an incomprehensible text, while deleting the satellites of the rhetorical relations that hold among all textual units in a text yields a text that is still comprehensible. In fact, as Matthiessen and Thompson put it, "the nucleus-satellite relations are pervasive in texts independently of the grammar of clause combining" (1988, p. 290).

A careful analysis of the RS-trees that Mann, Thompson, and many others built shows that whenever two large text spans are connected through a rhetorical relation, that rhetorical relation holds also between the most important parts of the constituent spans. For example, in figure 1.a, the justification relation that holds between text spans C1-D1 and A1-B1 holds between their most salient parts as well, i.e., between the nuclei D1 and B1.

I propose that this observation can constitute the foundation for a formal treatment of compositionality in RST. More specifically, I will formalize the idea that two adjacent spans can be joined in a larger span by a given rhetorical relation if and only if that relation holds also between the most salient units of those spans. Obviously, such a formalization will also specify the rules for determining the most salient units of the spans.

A precise formulation of the RST problem

Formally, the problem that I want to solve is the following: given a sequence of textual units U = u1, u2, ..., uN and a set RR of rhetorical relations that hold among these units, find all legal discourse structures (trees) that could be built on the top of the linear sequence u1, u2, ..., uN.
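To make the problem statement concrete, the input for text (1) can be written down directly as data. The following Python sketch is my own illustrative encoding, not the paper's implementation; the names UNITS, RR, position, and rhet_rel are my choices, and the relation tuples follow the (name, satellite, nucleus) convention of (2).

```python
# Illustrative encoding of the RST problem input for text (1).
# Each tuple in RR is (name, satellite, nucleus), as in convention (2).

UNITS = ["A1", "B1", "C1", "D1"]  # the linear sequence u1, ..., uN

RR = {
    ("JUSTIFICATION", "A1", "B1"),
    ("JUSTIFICATION", "D1", "B1"),
    ("EVIDENCE", "C1", "B1"),
    ("CONCESSION", "C1", "D1"),
    ("RESTATEMENT", "D1", "A1"),
}

def position(unit: str) -> int:
    """position(ui, i): the 1-based index of a unit in the linear sequence."""
    return UNITS.index(unit) + 1

def rhet_rel(name: str, satellite: str, nucleus: str) -> bool:
    """True iff the elementary relation holds between the two units."""
    return (name, satellite, nucleus) in RR
```

With this encoding, the problem asks for all trees over UNITS whose joins are licensed by RR.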
Throughout this paper, I use the predicates position(ui, i) and rhet_rel(name, satellite, nucleus) with the following semantics: predicate position(ui, i) is true for a textual unit ui in sequence U if and only if ui is the i-th element in the sequence; predicate rhet_rel(name, ui, uj) is true for textual units ui and uj with respect to rhetorical relation name if and only if the definition provided by Mann and Thompson (1988) for rhetorical relation name applies for textual units ui, in most cases a satellite, and uj, a nucleus. For example, from a rhetorical perspective, text (1) is completely described at the minimal unit level by the relations given in (2) and the relations given below in (3).

(3) { position(A1, 1), position(B1, 2), position(C1, 3), position(D1, 4) }

[Figure 3 omitted: An isomorphic representation of the tree in figure 1.a according to the status, type, and promotion features that characterize every node. The numbers associated with each node denote the limits of the text span that that node characterizes. The horizontal segments that pertain to each node underline the limits of the span that that node spans over.]

The formalization that I propose here is built on the following features:

- An RS-tree is a binary tree whose leaves denote elementary textual units.
- Each node has associated a status (nucleus or satellite), a type (the rhetorical relation that holds between the text spans that that node spans over), and a salience or promotion set (the set of units that constitute the most "important" part of the text that is spanned by that node).

By convention, for each leaf node, the type is LEAF and the promotion set is the textual unit to which it corresponds. A representation for the tree in figure 1.a, which reflects these characteristics, is given in figure 3. The status, type, and salience unit that are associated with each leaf follow directly from the convention that I have given above.
The status and the type of each internal node are a one-to-one mapping of the status and rhetorical relation that are associated with each non-minimal text span in the original representation. The status of the root reflects the fact that text span A1-D1 could play either a NUCLEUS or a SATELLITE role in any larger span that contains it.

The most significant differences between the tree in figure 3 and the tree in figure 1.a pertain to the promotion sets that are associated with every internal node. Consider, for example, the JUSTIFICATION relation that holds between units A1 and B1: according to the discussion of nuclearity in section 2, the nucleus of the relation, i.e., unit B1, is the one that expresses what is more essential to the writer’s purpose than the satellite A1. Therefore, it makes sense that if span A1-B1 is to be related through other rhetorical relations to another part of the text, it should do so through its most important, or most salient, part, i.e., B1. Similarly, the nucleus D1 of the rhetorical relation CONCESSION that holds between units C1 and D1 is the most salient unit for text span C1-D1. The intuition that the tree in figure 3 captures is that spans A1-B1 and C1-D1 could be assembled in a larger span A1-D1 because there is some rhetorical relation, in this case JUSTIFICATION, that holds between their most salient parts, i.e., D1 and B1.

The status, type, and promotion set that are associated with each node in an RS-tree provide sufficient information for a full description of an instance of a discourse structure. Given the linear nature of text and the fact that one cannot predict in advance where the boundaries between various text spans will be drawn, I will provide a methodology that permits one to quantify over all possible ways in which a tree could be built on top of a linear sequence of textual units.
The solution that I propose relies on the same intuition that constitutes the foundation of chart parsing: just as a chart parser is capable of quantifying over all possible ways in which different words in a sentence could be clustered into higher-order grammatical units, so my formalization is capable of quantifying over all the possible ways in which different text spans could be joined into larger spans. Let spani,j, or simply [i,j], denote a text span that includes all the textual units between positions i and j. Then, if we consider a sequence of textual units u1, u2, ..., uN, there are N ways in which spans of length one could be built, span1,1, span2,2, ..., spanN,N; N-1 ways in which spans of length two could be built, span1,2, span2,3, ..., spanN-1,N; N-2 ways in which spans of length three could be built, span1,3, span2,4, ..., spanN-2,N; ...; and one way in which a span of length N could be built, span1,N. Since it is impossible to determine a priori the text spans that will be used to make up an RS-tree, I will associate with each text span that could possibly become part of an RS-tree a status, a type, and a promotion relation, and let the constraints described by Mann and Thompson (1988, p. 248) and the nuclearity constraints that I have described in section 2 generate the correct RS-trees. In fact, my intent is to determine, from the set of N(N+1)/2 (= N + (N-1) + (N-2) + ... + 1) potential text spans that pertain to a sequence of N textual units, the subset that adheres to the constraints that I have mentioned above.

Semantics & Discourse 1071

For example, for text (1), there are 10 (= 4 + 3 + 2 + 1) potential spans, i.e., span1,1, span2,2, span3,3, span4,4, span1,2, span2,3, span3,4, span1,3, span2,4, and span1,4, but only seven of them play an active role in the representation given in figure 3, i.e., span1,1, span2,2, span3,3, span4,4, span1,2, span3,4, and span1,4.
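The span bookkeeping above is easy to make concrete. The following is a minimal sketch (not the paper's Lisp/Screamer implementation) that enumerates every potential span over a sequence of N textual units and verifies the N(N+1)/2 count; n = 4 corresponds to text (1).

```python
# Enumerate every potential span over N textual units and check the counts
# stated in the text: N spans of length one, N-1 of length two, ..., one of
# length N, for a total of N(N+1)/2.

def spans(n):
    """All (i, j) pairs with 1 <= i <= j <= n, i.e., every potential span_{i,j}."""
    return [(i, j) for i in range(1, n + 1) for j in range(i, n + 1)]

n = 4  # text (1) has four units
all_spans = spans(n)

lengths = [sum(1 for (i, j) in all_spans if j - i + 1 == k)
           for k in range(1, n + 1)]
assert lengths == [4, 3, 2, 1]
assert len(all_spans) == n * (n + 1) // 2  # 10 potential spans for N = 4
```

The same enumeration scales directly: for N = 5 units there are 15 potential spans, and so on.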
In formalizing the constraints that pertain to an RS-tree, I assume that each possible text span, spanl,h, which will or will not eventually become a node in the final discourse tree, is characterized by the following relations:

• S(l, h, status) denotes the status of spanl,h, i.e., the text span that contains units l to h; status can take one of the values NUCLEUS, SATELLITE, or NONE according to the role played by that span in the final RS-tree. For example, for the RS-tree depicted in figure 3, some of the relations that hold are: S(1,2,NUCLEUS), S(3,4,SATELLITE), S(1,3,NONE).

• T(l, h, relation_name) denotes the name of the rhetorical relation that holds between the text spans that are immediate subordinates of spanl,h in the RS-tree. If the text span is not used in the construction of the final RS-tree, the type assigned by convention is NONE. For example, for the RS-tree in figure 3, some of the relations that hold are: T(1,1,LEAF), T(1,2,JUSTIFICATION), T(3,4,CONCESSION), T(1,3,NONE).

• P(l, h, unit_name) denotes the set of units that are salient for spanl,h and that can be used to connect this text span with adjacent text spans in the final RS-tree. If spanl,h is not used in the final RS-tree, by convention the set of salient units is NONE. For example, for the RS-tree in figure 3, some of the relations that hold are: P(1,1,A1), P(1,2,B1), P(1,3,NONE), P(3,4,D1).

A complete formalization of RS-trees

Using the ideas that I have discussed in the previous section, I present now a complete first-order formalization of RS-trees. In this formalization, I assume a universe that consists of the set of natural numbers from 1 to N, where N represents the number of textual units in the text that is considered; the set of names that were defined by Mann and Thompson for each rhetorical relation; the set of unit names that are associated with each textual unit; and four extra constants: NUCLEUS, SATELLITE, NONE, and LEAF.
The only function symbols that operate over this domain are the traditional + and - functions that are associated with the set of natural numbers. The formalization uses the traditional predicate symbols that pertain to the set of natural numbers (<, ≤, ≥, =, ≠) and five other predicate symbols: S, T, and P to account for the status, type, and salient units that are associated with every text span; rhet-rel to account for the rhetorical relations that hold between different textual units; and position to account for the index of the textual units in the text that one considers. Throughout the paper, I apply the convention that all unbound variables are universally quantified and that variables are represented in lower-case letters while constants are in SMALL CAPITALS. I also make use of two extra relations (relevant-rel and relevant-unit), which I define here as follows: for every text span spanl,h, relevant-rel(l, h, name) (4) describes the set of rhetorical relations that are relevant to that text span, i.e., the set of rhetorical relations that span over text spans that have their boundaries within the interval [l, h].

(Footnote: In what follows, l and h always denote the left and right boundaries of a text span.)
For every text span spanl,h, relevant-unit(l, h, u) (5) describes the set of textual units that are relevant for that text span, i.e., the units whose positions in the initial sequence are numbers in the interval [l, h]:

(4) relevant-rel(l, h, name) ≡ (∃s, n, sp, np)[position(s, sp) ∧ position(n, np) ∧ (l ≤ sp ≤ h) ∧ (l ≤ np ≤ h) ∧ rhet-rel(name, s, n)]

(5) relevant-unit(l, h, u) ≡ (∃x)[position(u, x) ∧ (l ≤ x ≤ h)]

For example, for text (1), which is described formally in (2) and (3), the following is the set of all relevant-rel and relevant-unit relations that hold with respect to text segment [1,3]:

{relevant-rel(1,3,JUSTIFICATION), relevant-rel(1,3,EVIDENCE), relevant-unit(1,3,A1), relevant-unit(1,3,B1), relevant-unit(1,3,C1)}

The constraints that pertain to the structure of an RS-tree can be partitioned into constraints related to the range of objects over which each predicate ranges and constraints related to the structure of the tree. I describe each set of constraints in turn.

Constraints that concern the objects over which the predicates that describe every span [l, h] of an RS-tree range

• For every span [l, h], the set of objects over which predicate S ranges is the set {NUCLEUS, SATELLITE, NONE}. Since every textual unit has to be part of the final RS-tree, the elementary text spans, i.e., those spans for which l = h, constitute an exception to this rule: they can play only a NUCLEUS or SATELLITE role.

(6) [(1 ≤ h ≤ N) ∧ (1 ≤ l ≤ h)] → {[l = h → (S(l, h, NUCLEUS) ∨ S(l, h, SATELLITE))] ∧ [l ≠ h → (S(l, h, NUCLEUS) ∨ S(l, h, SATELLITE) ∨ S(l, h, NONE))]}

• The status of any text span is unique.

(7) [(1 ≤ h ≤ N) ∧ (1 ≤ l ≤ h)] → [(S(l, h, status1) ∧ S(l, h, status2)) → status1 = status2]

• For every span [l, h], the set of objects over which predicate T ranges is the set of rhetorical relations that are relevant to that span. By convention, the rhetorical relation associated with a leaf is LEAF.
(8) [(1 ≤ h ≤ N) ∧ (1 ≤ l ≤ h)] → {[l = h → T(l, h, LEAF)] ∧ [l ≠ h → (T(l, h, NONE) ∨ (T(l, h, name) → relevant-rel(l, h, name)))]}

• At most one rhetorical relation can connect two adjacent text spans.

(9) [(1 ≤ h ≤ N) ∧ (1 ≤ l ≤ h)] → [(T(l, h, name1) ∧ T(l, h, name2)) → name1 = name2]

• For every span [l, h], the set of objects over which predicate P ranges is the set of units that make up that span.

(10) [(1 ≤ h ≤ N) ∧ (1 ≤ l ≤ h)] → [P(l, h, NONE) ∨ (P(l, h, u) → relevant-unit(l, h, u))]

Constraints that concern the structure of the RS-trees

The following constraints are derived from Mann and Thompson’s formulation of RS-trees and from the nuclearity constraints that I have described in section 2.

• Text spans do not overlap.

(11) [(1 ≤ h1 ≤ N) ∧ (1 ≤ l1 ≤ h1) ∧ (1 ≤ h2 ≤ N) ∧ (1 ≤ l2 ≤ h2) ∧ (l1 < l2) ∧ (l2 ≤ h1) ∧ (h1 < h2)] → [¬S(l1, h1, NONE) → S(l2, h2, NONE)]

• A text span with status NONE does not participate in the tree at all.

(12) [(1 ≤ h ≤ N) ∧ (1 ≤ l ≤ h)] → [(S(l, h, NONE) ∧ P(l, h, NONE) ∧ T(l, h, NONE)) ∨ (¬S(l, h, NONE) ∧ ¬P(l, h, NONE) ∧ ¬T(l, h, NONE))]

• There exists a text span, the root, that spans over the entire text.

(13) ¬S(1, N, NONE) ∧ ¬P(1, N, NONE) ∧ ¬T(1, N, NONE)

• The status, type, and promotion set that are associated with a text span reflect the structural and nuclearity constraints that were discussed in section 2.

(14) [(1 ≤ h ≤ N) ∧ (1 ≤ l < h) ∧ ¬S(l, h, NONE)] → (∃name, split_point, s, n)[(l ≤ split_point < h) ∧ (Nucleus_first(name, split_point, s, n) ∨ Satellite_first(name, split_point, s, n))]

Formula (14) specifies that whenever a text span [l, h] denotes an internal node (l < h) in the final RS-tree, i.e., its status is not NONE, the span [l, h] is built on top of two text spans that meet at index split_point, and either the formula denoted by Nucleus_first or the one denoted by Satellite_first holds.
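The range constraints on T and P rely on the relevant-rel and relevant-unit relations of formulas (4) and (5). A minimal sketch of those two relations follows; the relation set below is reconstructed from the examples in the text (JUSTIFICATION between A1 and B1 and between D1 and B1, EVIDENCE between C1 and B1, CONCESSION between C1 and D1) and is only an illustration, not the paper's actual encoding of (2).

```python
# Sketch of relevant-rel and relevant-unit (formulas (4) and (5)) for text (1).
# Relations are (name, satellite, nucleus) triples, reconstructed from the
# examples in the surrounding text.

POSITION = {"A1": 1, "B1": 2, "C1": 3, "D1": 4}
RHET_REL = [("JUSTIFICATION", "A1", "B1"),
            ("EVIDENCE", "C1", "B1"),
            ("JUSTIFICATION", "D1", "B1"),
            ("CONCESSION", "C1", "D1")]

def relevant_rel(l, h):
    """Rhetorical relations whose satellite and nucleus both lie in [l, h]."""
    return {name for (name, s, n) in RHET_REL
            if l <= POSITION[s] <= h and l <= POSITION[n] <= h}

def relevant_unit(l, h):
    """Textual units whose position lies in [l, h]."""
    return {u for (u, x) in POSITION.items() if l <= x <= h}

# Matches the example given for text segment [1,3]:
assert relevant_rel(1, 3) == {"JUSTIFICATION", "EVIDENCE"}
assert relevant_unit(1, 3) == {"A1", "B1", "C1"}
```

Note that CONCESSION is not relevant to [1,3] because its nucleus D1 lies outside the segment, exactly as in the worked example above.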
(15) Nucleus_first(name, split_point, s, n) ≡
rhet-rel(name, s, n) ∧ T(l, h, name) ∧ position(s, sp) ∧ position(n, np) ∧ (l ≤ np ≤ split_point) ∧ (split_point < sp ≤ h) ∧ P(l, split_point, n) ∧ P(split_point + 1, h, s) ∧
{(name = CONTRAST ∨ name = JOINT) → S(l, split_point, NUCLEUS) ∧ S(split_point + 1, h, NUCLEUS) ∧ (∀p)[P(l, h, p) → (P(l, split_point, p) ∨ P(split_point + 1, h, p))]} ∧
{name = SEQUENCE → S(l, split_point, NUCLEUS) ∧ S(split_point + 1, h, NUCLEUS) ∧ (∀p)(P(l, h, p) → P(l, split_point, p))} ∧
{(name ≠ SEQUENCE ∧ name ≠ CONTRAST ∧ name ≠ JOINT) → S(l, split_point, NUCLEUS) ∧ S(split_point + 1, h, SATELLITE) ∧ (∀p)(P(l, h, p) → P(l, split_point, p))}

Formula (15) specifies that there is a rhetorical relation with name name, from a unit s (in most cases a satellite) that belongs to span [split_point + 1, h], to a unit n, the nucleus, that belongs to span [l, split_point]; that unit n is salient with respect to text span [l, split_point] and unit s is salient with respect to text span [split_point + 1, h]; and that the type of span [l, h] is given by the name of the rhetorical relation. If the relation is multinuclear, i.e., CONTRAST or JOINT, the status of the immediate sub-spans is NUCLEUS and the set of salient units for text span [l, h] consists of all the units that make up the sets of salient units that are associated with the two sub-spans. If the relation is a SEQUENCE relation, both sub-spans have NUCLEUS status, but the salient units for text span [l, h] are given only by the salient units that are associated with the last member in the sequence, which in this case is realized first. If the relation is not multinuclear, the status of text span [l, split_point] is NUCLEUS, the status of text span [split_point + 1, h] is SATELLITE, and the set of salient units for text span [l, h] is given by the salient units that are associated with the subordinate nucleus span.
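The promotion-set cases of formula (15) can be sketched as a small function. This is an illustration of the three cases only (multinuclear union, SEQUENCE, and ordinary nucleus-satellite promotion); the unit names in the assertions are invented.

```python
# Sketch of the promotion-set rules in formula (15): how the salient units of
# a span are derived from the salient units of its two sub-spans, depending
# on the relation class. promo_nuc is the promotion set of the sub-span whose
# units are promoted (the nucleus side); promo_other is the other sub-span's.

def promotion(name, promo_nuc, promo_other):
    if name in ("CONTRAST", "JOINT"):
        # multinuclear: the larger span promotes the units of both sub-spans
        return promo_nuc | promo_other
    # SEQUENCE and mononuclear relations promote only one sub-span's units
    return promo_nuc

assert promotion("CONTRAST", {"U1"}, {"U2"}) == {"U1", "U2"}
assert promotion("SEQUENCE", {"U1"}, {"U2"}) == {"U1"}
assert promotion("CONCESSION", {"D1"}, {"C1"}) == {"D1"}   # as in figure 3
```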
The difference between the formalization of the multinuclear relation SEQUENCE and the other multinuclear relations stems from the fact that, unlike JOINT or CONTRAST, SEQUENCE is not symmetric. Formula Satellite_first(name, split_point, s, n) is a mirror image of (15), and it describes the case in which the satellite that pertains to rhetorical relation rhet-rel(name, s, n) belongs to text span [l, split_point], i.e., in which the satellite comes before the nucleus. Due to space constraints, I do not reproduce it here.

Figure 4: The set of all RS-trees that could be built for text (1).

An algorithmic view of RS-trees

Given the mathematical foundations of RS-trees, i.e., formulas (4)-(14), finding the RS-trees for a discourse described along the lines given in (2) and (3), for example, amounts to finding a model for the first-order theory that consists of formulas (2) to (14). There are a number of ways in which one can proceed with an implementation: a straightforward choice, for example, is one that applies constraint-satisfaction techniques. Given a sequence U of N textual units, one can take advantage of the structure of the domain and associate with each of the N(N+1)/2 possible text spans a status, a type, and a salience or promotion variable whose domains consist of the sets of objects over which the corresponding predicates S, T, and P range. This gives one a constraint-satisfaction problem with 3N(N+1)/2 variables, whose domains are defined by formulas (6) to (10). The constraints associated with these variables are a one-to-one mapping of formulas (11) to (14). Finding the set of RS-trees that are associated with a given discourse then reduces to finding all the solutions of this constraint-satisfaction problem. I have used Lisp and Screamer (Siskind & McAllester 1993), a macro package that provides constraint-satisfaction facilities, to fully implement a system that builds RS-trees.
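The search for valid trees can also be sketched as a brute-force enumeration rather than constraint satisfaction: recursively split each span, and license a join only when some input relation holds between the salient units of the two sub-spans, propagating the promotion set of the nucleus side upward. The relation set below is reconstructed from the examples in this section (it is an assumption standing in for the relations in (2), which are not reproduced here); all relations are mononuclear, so the multinuclear cases of formula (15) do not arise.

```python
# Brute-force enumeration of RS-trees over the four units of text (1),
# using the nuclearity constraint: two spans may be joined only if a
# rhetorical relation holds between their salient (promoted) units.

UNITS = ["A1", "B1", "C1", "D1"]
RELS = [("JUSTIFICATION", "A1", "B1"),   # (name, satellite, nucleus)
        ("EVIDENCE", "C1", "B1"),
        ("JUSTIFICATION", "D1", "B1"),
        ("CONCESSION", "C1", "D1")]

def build(i, j):
    """Yield (tree, promotion_set) pairs for the span UNITS[i..j]."""
    if i == j:
        yield UNITS[i], {UNITS[i]}
        return
    for k in range(i, j):
        for left, pl in build(i, k):
            for right, pr in build(k + 1, j):
                for name, s, n in RELS:
                    if s in pl and n in pr:      # nucleus in the right sub-span
                        yield (name, left, right), set(pr)
                    elif s in pr and n in pl:    # nucleus in the left sub-span
                        yield (name, left, right), set(pl)

trees = list(build(0, len(UNITS) - 1))
assert len(trees) == 5   # five configurations, matching the count in figure 4
```

Under this relation set, every resulting tree promotes B1 to the top, which is consistent with B1 being the nucleus of every relation that can license the topmost join.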
My program takes as input a linear sequence of textual units U = u1, u2, ..., uN and the set of rhetorical relations that hold among these units. The algorithm builds automatically the corresponding constraint-satisfaction problem and then uses Screamer to find all the possible solutions for it. A simple procedure prints the RS-trees that pertain to each solution. For example, for text (1), the program produces five RS-tree configurations (see figure 4). Among the set of trees in figure 4, trees 4.a and 4.b match the trees given in the introductory section in figures 1.a and 1.c. Trees 4.c-e represent trees that are not given in figure 1. Consequently, it follows that five RS-trees could be built on top of text (1), and that tree 1.b is incorrect. It is easy to see why tree 1.b is incorrect with respect to the formalization: one of the constraints, the one that pertains to the rhetorical relation of EVIDENCE that is depicted between spans [3,4] (C1-D1) and [1,2] (A1-B1), does not hold. More precisely, the rhetorical relation of CONCESSION between C1 and D1 projects D1 as the salient unit for text span [3,4] (C1-D1). The initial set of rhetorical relations (2) depicts an EVIDENCE relation only between units C1 and B1 and not between D1 and B1. Since the nuclearity requirements make it impossible for C1 to play both a satellite role in the span [3,4] (C1-D1) and to be, at the same time, a salient unit for it, it follows that tree 1.b is incorrect.

The formalization and the algorithm that I presented here account for the construction of RS-trees in the cases in which the input specifies rhetorical relations between non-elementary spans as well.
For example, if the input is enhanced such that, besides the relations given in (2), it also contains the rhetorical relation rhet-rel(JUSTIFICATION, C1, [B1-D1]), only the trees that are consistent with this extra constraint will be valid, i.e., trees 4.c and 4.e.

The formalization presented here distinguishes between correct and incorrect RS-trees only with respect to the original theory (Mann & Thompson 1988). Theme, focus, intention, or other pragmatic factors could rule out some of the trees that are produced by the algorithm; but a discussion of these issues is beyond the scope of this paper.

Conclusion

In this paper I provided a mathematical formulation of rhetorical structure trees that is based on the original Rhetorical Structure Theory (Mann & Thompson 1988) and the nuclearity features that pertain to natural language texts. On the basis of a first-order formulation of valid rhetorical structure trees, I implemented an algorithm that takes as input a sequence of textual units and a set of rhetorical relations that hold between those units, and that builds all the valid rhetorical structure trees that pertain to that sequence.

Acknowledgments. I am especially grateful to Graeme Hirst for long discussions and invaluable comments that helped me polish this work, and to Jeff Siskind for bringing to my attention the similarity between charts and rhetorical structure trees, a similarity that catalyzed the emergence of the ideas presented in this paper. I am also grateful to Eduard Hovy, Ray Reiter, Manfred Stede, and Toby Donaldson for their comments on early drafts of the paper. This research was supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

References

Mann, W., and Thompson, S. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text 8(3):243-281.

Matthiessen, C., and Thompson, S. 1988. The structure of discourse and ‘subordination’.
In Haiman, J., and Thompson, S., eds., Clause Combining in Grammar and Discourse, volume 18 of Typological Studies in Language. John Benjamins Publishing Company. 275-329.

Siskind, J., and McAllester, D. 1993. Nondeterministic Lisp as a substrate for constraint logic programming. In Proceedings of the Eleventh National Conference on Artificial Intelligence, AAAI-93, Seattle, 133-138.
Total-Order Multi-Agent Task-Network Planning for Contract Bridge

S. J. J. Smith and D. S. Nau
Computer Science Department and Institute for Systems Research
University of Maryland
College Park, MD 20742, USA
sjsmith@cs.umd.edu nau@cs.umd.edu

Abstract

This paper describes the results of applying a modified version of hierarchical task-network (HTN) planning to the problem of declarer play in contract bridge. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. This approach avoids the difficulties that traditional game-tree search techniques have with imperfect-information games such as bridge, but it also differs in several significant ways from the planning techniques used in typical HTN planners. We describe why these modifications were needed in order to build a successful planner for bridge. This same modified HTN planning strategy appears to be useful in a variety of application domains; for example, we have used the same planning techniques in a process-planning system for the manufacture of complex electro-mechanical devices (Hebbar et al. 1996). We discuss why the same technique has been successful in two such diverse domains.

Introduction

Tignum 2 is a computer system for declarer play at the game of contract bridge. Tignum 2 currently performs better at declarer play than the strongest commercially available program.(1) On 5000 randomly generated deals (including both suit contracts and notrump contracts),

* This material is based on work supported in part by an AT&T Ph.D. scholarship to Stephen J. J.
Smith, by Maryland Industrial Partnerships (MIPS) grant 501.15, by Great Game Products, by ARPA grant DABT 63-95-C-0037, and by the National Science Foundation under Grants NSF EEC 94-02384 and IRI-9306580. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders.

(1) It is probably safe to say that the Bridge Baron is the best program in the world for declarer play at contract bridge. It has won a number of important computer bridge competitions.

108 Agents

T. A. Throop
Great Game Products
8804 Chalon Drive
Bethesda, MD 20817, USA
bridgebaron@mcimail.com

Tignum 2 beat the strongest commercially available program by 254 to 202, with 544 ties. These results are statistically significant at the α = 0.05 level. We had never run Tignum 2 on any of these deals before this test, so these results are free from any training-set biases in favor of Tignum 2. This paper discusses the following issues:

• Unlike traditional game-playing computer programs, Tignum 2 is based not on brute-force game-tree search but on a modified version of Hierarchical Task-Network (HTN) planning. We discuss why bridge presents problems for the traditional approach, and why an HTN planning approach has worked well in Tignum 2.

• Although Tignum 2 is an HTN planner, it incorporates several significant modifications to the planning techniques used in typical HTN planners. We extended the HTN framework to incorporate multi-agency and uncertain information, but restricted it to allow only totally-ordered plans. We describe why these modifications were needed in order to build a successful planner for the game of bridge.

• This same modified HTN planning strategy appears to be useful in a variety of application domains. For example, as described in (Hebbar et al. 1996), the same planning techniques (and some of the same code!)
used in Tignum 2 have been used to build a process-planning system for the manufacture of complex electro-mechanical devices. We discuss why the same kind of planning technique has been successful in two such diverse domains.

In this paper, we present only a sketch of our approach. Full details are in (Smith, Nau, & Throop 1996).

From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.

Background

Although game-tree search works well in perfect-information games (such as chess (Levy & Newborn 1982; Berliner et al. 1990), checkers (Samuel 1967; Schaeffer et al. 1992), and Othello (Lee & Mahajan 1990)), it does not always work as well in other games. One example is the game of bridge. Bridge is an imperfect-information game, in which no player has complete knowledge about the state of the world, the possible actions, and their effects. Thus the branching factor of the game tree is very large. Because the bridge deal must be played in just a few minutes, a full game-tree search will not search a significant portion of this tree within the time available.

HTN Planning. Our work on hierarchical planning draws on (Tate 1977; Sacerdoti 1977). In addition, some of our definitions were motivated by (Erol, Hendler, & Nau 1994; Erol, Nau, & Subrahmanian 1995). In particular, Wilkins’s SIPE planning system (Wilkins 1984; Wilkins 1988) was a very successful hierarchical planning system. However, these works do not address the uncertainty and incomplete information required in bridge.

To address this problem, some researchers have tried making assumptions about the placement of the opponents’ cards based on information from the bidding and prior play, and then searching the game trees resulting from these assumptions. However, such approaches have several limitations, as described below.
In our work, we have taken a different approach to this problem, based on the observation that bridge is a game of planning. For addressing various card-playing situations, the bridge literature describes a number of tactical schemes (short-term card-playing tactics such as finessing and ruffing) and strategic schemes (long-term card-playing tactics such as crossruffing). It appears that there is a small number of such schemes for each bridge deal, and that each of them can be expressed relatively simply. To play bridge, many humans use these schemes to create plans. They then follow those plans for some number of tricks, replanning when appropriate.

Planning with Uncertainty. Some work has been done on planning with uncertainty and incomplete information (Peot & Smith 1992; Draper, Hanks, & Weld 1994; Kushmerick, Hanks, & Weld 1994; Collins & Pryor 1995). However, these works do not address problems on the scale of bridge, where there is incomplete information about twenty-five cards. Encouragingly, problems on a grander scale are starting to be studied (Haddawy, Doan, & Goodwin 1995; Boutilier, Dearden, & Goldszmidt 1995; Lin & Dean 1995).

We have taken advantage of the planning nature of bridge by adapting and extending some ideas from task-network planning. To represent the tactical and strategic schemes of card-playing in bridge, we use multi-agent methods: structures similar to the “action schemas” or “methods” used in hierarchical single-agent planning systems such as Nonlin (Tate 1977), NOAH (Sacerdoti 1977), O-Plan (Currie & Tate 1985), and SIPE (Wilkins 1984; Wilkins 1988), but modified to represent multi-agency and uncertainty.

Multi-Agent Planning. Much of the previous research on multi-agent planning has dealt with different issues than those that concern us here. In reactive planning (Dean et al.
1993), the agent must respond in real time to externally-caused events, and the necessity of making quick decisions largely precludes the possibility of reasoning far into the future. In cooperative multi-agent planning (Gmytrasiewicz & Durfee 1992; Pednault 1987), the primary issue is how to coordinate the actions of cooperating agents, and this makes it largely unnecessary for a single planning agent to generate a plan that accounts for all of the alternative actions that another agent might perform.

To generate game trees, we use a procedure similar to task decomposition. The methods that perform our tasks correspond to the various tactical and strategic schemes for playing the game of bridge. We then build up a game tree whose branches represent moves generated by these methods. This approach produces a game tree in which the number of branches from each state is determined not by the number of actions an agent can perform, but instead by the number of different tactical and strategic schemes the agent can employ. If at each node of the tree the number of applicable schemes is smaller than the number of possible actions, this will result in a smaller branching factor and a much smaller search tree.

Bridge. Some of the work on bridge has focused on bidding (Lindelof 1983; Gamback, Rayner, & Pell 1990; Gamback, Rayner, & Pell 1993). Stanier (1975) and Quinlan (1979) took some tentative steps towards the problem of bridge play. Berlin (1985) proposed an approach to play of the hand at bridge similar to ours; sadly, he never had a chance to develop the approach (his paper was published posthumously). There are no really good computer programs for card-playing in bridge, especially in comparison to the success of computer programs for chess, checkers, and Othello; most computer bridge programs can be beaten by a reasonably advanced novice.
Sterling and Nygate (1990) wrote a rule-based program for recognizing and executing squeeze plays, but squeeze opportunities in bridge are rare. Recently, Frank, Basin, and Bundy (1992) have proposed a proof-planning approach, but thus far, they have only described the results of applying this approach to planning the play of a single suit. Khemani (1994) has investigated a case-based planning approach to notrump declarer play, but hasn’t described the speed and skill of the program in actual competition. The approaches used in current commercial programs are based almost exclusively on domain-specific techniques.

Multiagent Problem Solving 109

One approach is to make assumptions about the placement of the opponents’ cards based on information from the bidding and prior play, and then search the game tree resulting from these assumptions. This approach was taken in the Alpha Bridge program (Lopatin 1992), with a 20-ply (5-trick) search. However, this approach didn’t work very well: at the 1992 Computer Olympiad, Alpha Bridge placed last.

Game-Tree Search with Uncertainty. Play of better quality can be achieved by generating several random hypotheses for what hands the opponents might have, and doing a full complete-information game-tree search for each hypothesized hand. This approach is used late in the deal in Great Game Products’ Bridge Baron program. Ginsberg (1996) has proposed using such an approach throughout the deal, employing clever techniques that make it possible to perform such a full game-tree search in a reasonable amount of time. However, Frank and Basin (1996) have shown some pitfalls in any approach that treats an incomplete-information problem as a collection of complete-information problems, as these approaches do. There is not yet any evidence that these pitfalls can be overcome.
Some work has been done on extending game-tree search to address uncertainty, including Horacek’s (1990) and Ballard’s (1983) work on backgammon. However, these works do not address the kind of uncertainty involved in bridge, and thus it does not appear to us that these approaches would be sufficient to accomplish our objectives.

Planning in Games. Wilkins (1980; 1982) uses “knowledge sources” to generate and analyze chess moves for both the player and the opponent. These knowledge sources have a similar intent to the multi-agent methods that we describe in this paper, but there are two significant differences. First, because chess is a perfect-information game, Wilkins’s work does not address uncertainty and incomplete information, which must be addressed for bridge play. Second, Wilkins’s work was directed at specific kinds of chess problems, rather than the problem of playing entire games of chess; in contrast, we have developed a program for playing entire deals of bridge.

Problem Characteristics

In our work, we consider the problem of declarer play at bridge. Our player (who may be a person or a computer system) controls two agents, declarer and dummy. Two other players control two other agents, the defenders. The auction is over and the contract has been fixed. The opening lead has been made and the dummy is visible. The hands held by the two agents controlled by our player are in full view of our player at all times; the other two hands are not, hence the imperfect information. Bridge has the following characteristics that are necessary for our approach:

• Only one player may move at a time.

• In general, no player has perfect information about the current state S. However, each player has enough information to determine whose turn it is to move.

• A player may control more than one agent in the game (as in bridge, in which the declarer controls two hands rather than one).
• If a player is in control of the agent A whose turn it is to move, then the player knows what moves A can make.

• If a player is not in control of the agent A whose turn it is to move, then the player does not necessarily know what moves A can make. However, in this case the player does know the set of possible moves A might be able to make; that is, the player knows a finite set of moves M such that every move A can make is a member of M.

Our approach is applicable to any domain with these characteristics. Modifications of our approach are possible if some of these characteristics are missing.

Conclusion

By using techniques adapted from task-network planning, our approach to playing imperfect-information games reduces the large branching factor that results from uncertainty in such games. It does this by producing game trees in which the number of branches from each state is determined not by the number of actions an agent can perform, but instead by the number of different tactical and strategic schemes the agent can employ. By doing a modified game-tree search on this game tree, one can produce a plan that can be executed for multiple moves in the game.

An especially efficient reduction in the branching factor occurs when an agent plans a string of actions in succession that are all part of one strategic scheme; frequently, at a given point in time, only one action is consistent with the strategic scheme. Another important reduction occurs when an opponent is to follow to the play of a trick; the opponent’s move is selected by finding the matching actions in the task network, and frequently there are only one or two matching actions. Tignum 2, our implementation of the above approach, uses the above techniques to do card-playing for declarer in the game of bridge.
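The branching-factor reduction described above can be sketched in a few lines. Everything in this fragment is invented for illustration (the scheme names, the card names, and the fixed move lists); it shows only the shape of the idea: branch on the union of scheme-proposed moves rather than on every legal card.

```python
# Hypothetical sketch: at a node, the game tree branches only on moves
# proposed by the applicable schemes, not on all thirteen legal cards.

LEGAL_CARDS = ["S2", "S7", "SQ", "H3", "H9", "HK", "D5", "DA",
               "C4", "C8", "CJ", "CA", "SK"]

def finesse_moves(state):       # a short-term tactical scheme (invented)
    return ["SQ"]               # lead toward the queen

def cash_winners_moves(state):  # a long-term strategic scheme (invented)
    return ["DA", "CA"]

SCHEMES = [finesse_moves, cash_winners_moves]

def branch(state):
    """Moves considered at this node: the union of scheme-proposed moves."""
    moves = []
    for scheme in SCHEMES:
        for m in scheme(state):
            if m not in moves:
                moves.append(m)
    return moves

assert len(branch(None)) == 3               # three scheme-driven branches
assert len(branch(None)) < len(LEGAL_CARDS) # versus thirteen legal cards
```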
In previous work, we showed that Tignum 2 performed better in playing notrump contracts than the declarer play of the strongest commercially available program; we have now shown that Tignum 2 performs better in playing all contracts (both notrump and suit). We hope that the approach described in this paper will be useful in a variety of imperfect-information domains, possibly including defensive play in bridge. We intend to investigate this issue in future work. In addition, we have a number of observations about planning and game playing; these appear below.

Total-Order HTN Planning

Unlike almost all other HTN planners, Tignum 2 is a total-order planner: in all of its task networks and methods, and thus in all of the plans that it generates, the tasks are totally ordered. Also unlike most HTN planners, Tignum 2 expands tasks in the order in which they will be achieved: in choosing which task to expand in a task network, Tignum 2 will always choose the task that needs to be performed first.

We adopted the above approach because of the difficulty of reasoning with imperfect information. It is difficult enough to reason about the probable locations of the opponents' cards. If our task networks were partially ordered, then in many planning situations we wouldn't know what cards the opponents had already played. This would make reasoning about the probable locations of the opponents' cards nearly impossible; this reasoning is a more serious problem than the problems with uninstantiated variables that occur in perfect-information domains.

Being forced into total ordering, however, can be turned to our advantage. To provide a coherent framework for reasoning about partially-ordered plans, most current AI planning systems are restricted to manipulating predicates in order to decide whether to apply methods and operators. In Tignum 2 no such restriction is needed: to decide whether to apply a method, Tignum 2 can use arbitrary pieces of code.
This gives us a mechanism for reasoning about the probable locations of the opponents' cards. In addition, these arbitrary pieces of code are often simpler than the predicates would be. For example, consider a method that takes a finesse in a suit. Tignum 2 currently recognizes nineteen different kinds of finesse situations, such as Jx opposite KTx, xx opposite AJT, and x opposite AQ; one of these finesse situations must exist as a precondition for using the method. Using an arbitrary piece of code, it's easy to check whether one of these finesse situations exists in the suit, and then to apply the method, or not, as appropriate. If we were to use the method while leaving some of the variables in the precondition uninstantiated, and then later in planning try to achieve the precondition through the play of tricks earlier in the deal, we would have to decide which of the nineteen situations to try to achieve, and it wouldn't be immediately obvious which of them were even possible to achieve.

We have been quite successful in applying Tignum 2's total-order HTN planning technique (as well as some of the same code used in Tignum 2!) to another application domain very different from bridge: the task of generating process plans for the manufacture of complex electro-mechanical devices such as microwave transmit-receive modules (Hebbar et al. 1996). That this same set of techniques should occur in two such widely varying areas is quite striking. In particular, we can make the following observations:

- HTN planning has long been thought to be more useful in practical planning domains than planning with STRIPS-style operators (Wilkins 1988), and our experience confirms this opinion. Bridge has a natural element of hierarchical planning. Humans use hierarchies of schemes to create plans to play bridge deals. The bridge literature describes many such schemes.
Hierarchical planning gives each play a context; without such a context, one might search through many methods at each play. Hierarchical planning is also quite natural in process planning for complex electro-mechanical devices. In this case, the planning hierarchy derives naturally from the part-whole hierarchy of the device itself.

- Our experience also suggests that in a number of application domains, it may work well to develop plans in a totally-ordered manner, expanding tasks in the order that they are to be performed in the final plan. In AI planning, it is more common to expand tasks in some order other than the order in which they are to be performed. This way, the planner can constrain its search space by making some "important" or "bottleneck" decision before committing to other less-important steps. For example, if I want to fly to another continent, it probably is better for me first to decide what flight to take, rather than how to get to the airport. By considering the "fly" task before the "get to airport" task, I can probably constrain the search space a great deal.

However, deciding on a step that will come later in a plan before deciding on a step that will come earlier in the plan also incurs a drawback: when I am deciding on the later step, I cannot know what its input state will be, because I have not yet decided what sequence of steps to use to produce that input state. This fundamental source of uncertainty can make the planning mechanisms much more complex than they would be otherwise. In both of the problem domains we have examined (contract bridge, and process planning for microwave modules), there can be situations where it might be desirable to make a decision about a later step before making a decision about an earlier step in the plan, but in both domains, such situations do not seem to occur often enough to make it worth the trouble to put in the necessary planning mechanisms.

Multiagent Problem Solving 111
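The two ideas discussed above — expanding tasks strictly in execution order, with method preconditions implemented as arbitrary code rather than predicates — can be sketched together. This is an illustrative reconstruction, not Tignum 2's implementation; the domain, the method table, and the stand-in "finesse" precondition are all hypothetical.

```python
# Illustrative sketch of total-order HTN expansion: tasks are kept in a
# list in execution order, and the planner always expands the first task.
# Method preconditions are arbitrary Python callables, as the text
# describes, rather than declarative predicates.

def expand(tasks, methods, state):
    """Depth-first, total-order expansion; returns a primitive plan or None."""
    if not tasks:
        return []
    first, rest = tasks[0], tasks[1:]
    if first not in methods:                      # primitive task: keep it
        tail = expand(rest, methods, state)
        return None if tail is None else [first] + tail
    for precond, subtasks in methods[first]:      # compound task: try methods
        if precond(state):                        # arbitrary code, not predicates
            plan = expand(subtasks + rest, methods, state)
            if plan is not None:
                return plan
    return None

# Toy domain with a code precondition standing in for a finesse check.
methods = {
    "win_trick": [
        (lambda s: s["has_finesse_position"], ["lead_low", "play_honor"]),
        (lambda s: True, ["cash_top_card"]),
    ],
}
print(expand(["win_trick"], methods, {"has_finesse_position": True}))
# -> ['lead_low', 'play_honor']
```

Because tasks are expanded in the order they will be performed, each precondition callable sees a state in which every earlier play is already decided — which is exactly what makes card-location reasoning tractable in the scheme above.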
Planning in Games

As in most games, the plan existence problem in bridge is rather trivial; a sequence of legal plays is all that is required. We focus instead on the optimization problem: coming up with the best, or nearly the best, line of play. To do this, our planning procedure produced structures similar to game trees. Evaluation and selection of plans occurred in these trees, and we learned that these structures seem to be a natural solution to the optimization problem. In addition, the wealth of developed efficiency improvements for game trees, such as transposition tables, adaptations of alpha-beta pruning, and the like, are available, although we have not yet implemented any of them.

Because our approach avoids examining all possible moves for all agents, it is related to the idea of forward pruning in game-playing. The primary difference from previous approaches to forward pruning is that previous approaches used heuristic techniques to prune "unpromising" nodes from the game tree, whereas our approach simply avoids generating nodes that do not fit into a tactical and strategic scheme for any player. Although forward pruning has not worked very well in games such as chess (Biermann 1978; Truscott 1981), our recent study of forward pruning (Smith & Nau 1994) suggests that forward pruning works best in situations where there is a high correlation among the minimax values of sibling nodes. Part of our motivation for the development of Tignum 2 is our belief that bridge has this correlation.

Some tasks and methods we added for suit play turned out to improve notrump play as well. For example, because discarding losers is much more a factor in suit play than it is in notrump play, it wasn't until we concentrated on suit play that some vulnerabilities in the discarding routines became clear. From this, we learn that broadening the focus of a task network can improve the task network on its original, narrower focus.
There are some pitfalls in suit play that didn't exist in notrump play. For example, to get a ruff, one might have to cross to another hand. One way to cross to that other hand might be in trump, which might use up the trump for the ruff. Once the trump is used up, there's no way to get it back. From this, we learn that so-called "white knights" (actions that make a false precondition, one that was once true, true again) rarely arise in bridge.

References

Ballard, B. W. 1983. The *-minimax search procedure for trees containing chance nodes. Artificial Intelligence 21:327-350.

Berlin, D. L. 1985. SPAN: integrating problem solving tactics. In Proc. 9th International Joint Conference on Artificial Intelligence, 1047-1051.

Berliner, H. J.; Goetsch, G.; Campbell, M. S.; and Ebeling, C. 1990. Measuring the performance potential of chess programs. Artificial Intelligence 43:7-20.

Biermann, A. W. 1978. Theoretical issues related to computer game playing programs. Personal Computing, September 1978:86-88.

Boutilier, C.; Dearden, R.; and Goldszmidt, M. 1995. Exploiting structure in policy construction. In Proc. 14th International Joint Conference on Artificial Intelligence.

Collins, G. and Pryor, L. 1995. Planning under uncertainty: some key issues. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1670-1676. Morgan Kaufmann, San Mateo, California.

Currie, K. and Tate, A. 1985. O-Plan: control in the open planner architecture. BCS Expert Systems Conference, Cambridge University Press, UK.

Dean, T.; Kaelbling, L. P.; Kirman, J.; and Nicholson, A. 1993. Planning with deadlines in stochastic domains. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 574-579. MIT Press, Cambridge, Massachusetts.

Draper, D.; Hanks, S.; and Weld, D. 1994. Probabilistic planning with information gathering and contingent execution.
In Proceedings of the 2nd International Conference on AI Planning Systems, Kristian Hammond, editor. AAAI Press, Menlo Park, California.

Erol, K.; Hendler, J.; and Nau, D. S. 1994. UMCP: A sound and complete procedure for hierarchical task-network planning. In Proc. Second International Conf. on AI Planning Systems (AIPS-94), 249-254.

Erol, K.; Nau, D. S.; and Subrahmanian, V. S. 1995. Complexity, decidability and undecidability results for domain-independent planning. Artificial Intelligence 76:75-88.

Frank, I.; Basin, D.; and Bundy, A. 1992. An adaptation of proof-planning to declarer play in bridge. In European Conference on Artificial Intelligence.

Frank, I. and Basin, D. 1996. Search in games with incomplete information: a case study using bridge card play. Under review.

Gamback, B.; Rayner, M.; and Pell, B. 1990. An architecture for a sophisticated mechanical bridge player. In Beal, D. F. and Levy, D. N. L., editors, Heuristic Programming in Artificial Intelligence: The Second Computer Olympiad. Ellis Horwood, Chichester, UK.

Gamback, B.; Rayner, M.; and Pell, B. 1993. Pragmatic reasoning in bridge. Tech. Report 299, Computer Laboratory, University of Cambridge.

Ginsberg, M. 1996. How computers will play bridge. Bridge World, to appear.

Gmytrasiewicz, P. J. and Durfee, E. H. 1992. Decision-theoretic recursive modeling and the coordinated attack problem. In Proceedings of the 1st International Conference on AI Planning Systems, James Hendler, editor. Morgan Kaufmann, San Mateo, California.

Haddawy, P.; Doan, A.; and Goodwin, R. 1995. Efficient decision-theoretic planning: techniques and empirical analysis. In Proceedings UAI-95, 229-236.

Hebbar, K.; Smith, S. J. J.; Minis, I.; and Nau, D. S. 1996. Plan-based evaluation of designs for microwave modules. ASME Design for Manufacturing conference, to appear.

Horacek, H. 1990. Reasoning with uncertainty in computer chess. Artificial Intelligence 43:37-56.

Khemani, D. 1994.
Planning with thematic actions. In Proceedings of the 2nd International Conference on AI Planning Systems, Kristian Hammond, editor. AAAI Press, Menlo Park, California.

Kushmerick, N.; Hanks, S.; and Weld, D. 1994. An algorithm for probabilistic least-commitment planning. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1123-1128. AAAI, Menlo Park, California.

Lee, K.-F. and Mahajan, S. 1990. The development of a world class Othello program. Artificial Intelligence 43:21-36.

Levy, D. and Newborn, M. 1982. All About Chess and Computers. Computer Science Press.

Lin, S.-H. and Dean, T. 1995. Generating optimal policies for Markov decision processes formulated as plans with conditional branches and loops. In Third European Workshop on Planning.

Lindelof, E. 1983. COBRA: the computer-designed bidding system. Victor Gollancz Ltd, London, UK.

Lopatin, A. 1992. Two combinatorial problems in programming bridge game. Computer Olympiad, unpublished.

Manley, B. 1993. Software 'judges' rate bridge-playing products. The Bulletin (published monthly by the American Contract Bridge League), 59(11), November 1993:51-54.

Pednault, E. P. D. 1987. Solving multiagent dynamic world problems in the classical planning framework. In Reasoning about Actions and Plans: Proceedings of the 1986 Workshop, 42-82. Morgan Kaufmann, Los Altos, California.

Peot, M. and Smith, D. 1992. Conditional nonlinear planning. In Proc. First International Conf. on AI Planning Systems, 189-197. Morgan Kaufmann, San Mateo, California.

Quinlan, J. R. 1979. A knowledge-based system for locating missing high cards in bridge. In Proc. 6th International Joint Conf. on Artificial Intelligence, 705-707.

Sacerdoti, E. D. 1977. A Structure for Plans and Behavior. American Elsevier Publishing Company.

Samuel, A. L. 1967. Some studies in machine learning using the game of checkers. II: Recent progress. IBM Journal of Research and Development 11:601-617.
Schaeffer, J.; Culberson, J.; Treloar, N.; Knight, B.; Lu, P.; and Szafron, D. 1992. A world championship caliber checkers program. Artificial Intelligence 53:273-290.

Smith, S. J. J. and Nau, D. S. 1994. An analysis of forward pruning. In Proc. 12th National Conference on Artificial Intelligence, 1386-1391.

Smith, S. J. J.; Nau, D. S.; and Throop, T. 1996. A planning approach to declarer play in contract bridge. Computational Intelligence, 12:1, February 1996, 106-130.

Stanier, A. 1975. Bribip: a bridge bidding program. In Proc. 4th International Joint Conf. on Artificial Intelligence.

Sterling, L. and Nygate, Y. 1990. Python: an expert squeezer. Journal of Logic Programming 8:21-39.

Tate, A. 1977. Generating project networks. In Proc. 5th International Joint Conf. on Artificial Intelligence.

Truscott, T. R. 1981. Techniques used in minimax game-playing programs. Master's thesis, Duke University, Durham, NC.

Wilkins, D. E. 1980. Using patterns and plans in chess. Artificial Intelligence 14:165-203.

Wilkins, D. E. 1982. Using knowledge to control tree searching. Artificial Intelligence 18:1-51.

Wilkins, D. E. 1984. Domain independent planning: representation and plan generation. Artificial Intelligence 22:269-301.

Wilkins, D. E. 1988. Practical Planning. Morgan Kaufmann, San Mateo, California.
Using Plan Reasoning in the Generation of Plan Descriptions*

R. Michael Young
Intelligent Systems Program
University of Pittsburgh
Pittsburgh, PA 15260
myoung+@pitt.edu

Abstract

Previous work on the generation of natural language descriptions of complex activities has indicated that the unwieldy amount of text needed to describe complete plans makes for ineffective and unnatural descriptions. We argue here that concise and effective text descriptions of plans can be generated by exploiting a model of the hearer's plan reasoning capabilities. We define a computational model of the hearer's interpretation process that views the interpretation of plan descriptions as refinement search through a space of partial plans. This model takes into account the hearer's plan preferences and the resource limitations on her reasoning capabilities to determine the completed plans she will construct from a given partial description.

Introduction

A number of natural language systems have been developed for the generation of textual descriptions of plans (Mellish & Evans 1989; Vander Linden 1993; Haller 1994). However, systems have been limited in their ability to deal with the large amount of detail found in complex activities: either these systems dealt exclusively with artificial plans of limited size or have generated verbose text describing more realistic plans. The quality of a textual description is strained when that description contains an inappropriate amount of detail. Providing a hearer with too much detail may needlessly cause her to eliminate from consideration compatible alternate plans or may overtax her attentional constraints (Walker 1996). Too little detail, alternatively, may allow the hearer to infer that the speaker is describing a plan that is inconsistent with the speaker's actual plan. Providing too little detail may so underconstrain the interpretation that the hearer's plan reasoning resources are overtaxed.
*Research supported by the Office of Naval Research, Cognitive and Neural Sciences Division, DOD FY92 Augmentation of Awards for Science and Engineering Research Training (ASSERT) Program (Grant Number: N00014-93-1-0812).

For systems responsible for automatically generating descriptions of plans, understanding the interaction between the quality of the description and the quantity of information it contains is essential. In the approach we describe here, plans are represented by collections of components and the task of describing a plan involves communicating these components. The problem that this research addresses is how to determine an appropriate subset of the components of a plan to communicate as the plan's description. A principal claim we make is that effective description of one plan can be achieved by describing a second partial plan that is appropriately related to the first. The partiality of the second plan must be chosen so that the hearer can reconstruct the first from it based on the hearer's knowledge about plans and planning. The hearer must be able to complete the description in much the same way that a planning system completes a partial plan.

Exploiting a Model of Plan Reasoning

This work addresses the communication of plan descriptions in a context we call plan identification. In this context, a speaker describes a plan P, called the source plan, in order to identify P as a solution to what the speaker believes is a mutually understood planning problem. When identifying a plan, a speaker provides a description of P that is sufficient for the hearer to single out P (or a plan sufficiently close to P) as the indicated plan in the space of possible solutions. The description the speaker provides will contain a description of a subset of the plan components found in P; the speaker constructs this subset by anticipating how the hearer will reconstruct a complete plan from the partial plan defined in the description.
In this paper we will only consider utterances that describe the presence of a component in the source plan.1 In instructional text, for instance, these types of utterances are often realized as imperatives like "Do action Q." The problem we address here is the selection of a subset of the components of a source plan P sufficient to identify P to the hearer. In our approach, a speaker uses the model of the hearer's plan reasoning capabilities to select a subset with greater or fewer components depending on (at least) two factors. First, the hearer's plan reasoning resources may limit her ability to find completions; the constraints in a description may be so sparse that finding the completions of the constraints is too great a task for the hearer. Candidate subsets that overtax the hearer's abilities can be eliminated from consideration.

The second factor determining the content of a description is the amount of variance the speaker can tolerate between the plan he is describing and the plan the hearer subsequently will form. In general, there may be a number of plans closely related to a source plan which the speaker would be happy for the hearer to identify. The degree to which these closely related plans vary from the actual source plan is dependent upon the measure of acceptability that the speaker uses. In the limiting case, acceptability corresponds to identity, although for many domains acceptability may be a much weaker notion. Given the hearer's plan preferences, the particular constraints in a description may guide her to solutions that are unacceptable to the speaker. Subsets that define planning problems where unacceptable plans are likely to be selected can also be eliminated.

1 This formalism is readily extendable to other types of utterances; see below for a brief discussion.

Semantics & Discourse 1075
From: AAAI-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved.
We will use a representation of the planning process referred to as plan-space search (Kambhampati 1993). Plan-space search provides a flexible representation of partial plans and the types of planning operations hearers may perform when interpreting a partial plan description. In addition, plan-space search characterizes the planning activity of a wide class of current planning systems; developing a text generation system built on plan-space search allows us to apply these techniques to any plan representation that can be characterized in this way.

In plan-space search, each node in the search space is defined by a partial plan and denotes a set of plans; this set, called the node's candidate set, corresponds to the class of all legal completions of the partial plan at that node. Search through the plan space is performed by refining the partial plan at a given node. Refinements correspond to one of a well-specified set of plan-construction operations (e.g., adding a new step to establish an open precondition). Each refinement of a parent node creates a child node whose candidate set is a subset of the parent's. The plan space forms a graph whose single root node is an empty plan and whose leaf nodes are either inconsistent plans or solutions to the planning problem.

In refinement search, an evaluation function is used to characterize the candidate set of each node encountered in the search, mapping the node to one of three values. When a node evaluates to FAIL, there is no plan in the node's candidate set that solves the planning problem. Consequently, the node must be pruned from the search space and the search algorithm must backtrack. When a node evaluates to a complete plan contained in the candidate set, that plan solves the planning problem and the search algorithm can return this plan. When a node evaluates to ⊥, the evaluation function cannot determine if a solution is contained in the candidate set and further refinement is needed.
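The three-valued refinement-search loop just described can be sketched directly. This is a toy illustration under stated assumptions: nodes are represented as strings of steps rather than as real partial plans, and the refinement and evaluation functions are hypothetical stand-ins, not DPOCL's.

```python
# A minimal sketch of refinement search over a plan space: FAIL prunes a
# node, a complete plan is returned, and an "unknown" verdict triggers
# further refinement of the partial plan.

FAIL, UNKNOWN = "fail", "unknown"

def refine_search(root, refine, evaluate):
    """evaluate(node) returns FAIL, UNKNOWN, or a complete solution plan."""
    frontier = [root]
    while frontier:
        node = frontier.pop()
        verdict = evaluate(node)
        if verdict == FAIL:
            continue                    # no solution in this candidate set
        if verdict != UNKNOWN:
            return verdict              # a completed plan was found
        frontier.extend(refine(node))   # refine the partial plan further
    return None

# Toy instance: "plans" are strings of steps; a refinement appends one
# step; a plan is complete at three steps and fails if it contains the
# flawed step "x".
refine = lambda plan: [plan + step for step in "abx"]
def evaluate(plan):
    if "x" in plan:
        return FAIL
    return plan if len(plan) == 3 else UNKNOWN

print(refine_search("", refine, evaluate))
```

Every child's candidate set here is a subset of its parent's (each refinement commits to one more step), mirroring the subset property of plan-space refinement described above.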
In this paper, we will use a partial-order, causal link planner called DPOCL (Young, Pollack, & Moore 1994). This planner extends the UCPOP planner (Penberthy & Weld 1991) by incorporating a hierarchical plan representation. Use of this representation mirrors the hierarchical, incremental development of plans indicated in the manner that people talk about planning (Dixon 1987). Because DPOCL is not a system built especially for the generation of task descriptions (i.e., it is a domain-independent planning algorithm), DPOCL plans contain sufficient structure to ensure their executability. Consequently, they serve as strong test cases for the generation of plan descriptions. In addition, DPOCL is readily characterized as a plan-space search algorithm.

A Hearer Model Based on Plan Reasoning

In this section we propose a model to be used for determining an appropriate description of a plan; using this model involves determining specific inferences to be drawn by the hearer from any candidate description. In particular, our model anticipates the plan reasoning that the hearer undertakes to complete the partial description the speaker provides.

Definitions

The computational model we use here is made of a number of components representing the planning algorithms and action representations used by a speaker when modeling the domain of discourse and the hearer's model of the same domain. In our approach, the speaker has a planning model representing his own plan reasoning capabilities and a separate model of the hearer's plan reasoning capabilities.2

A planning problem consists of a complete specification of the problem's initial state and the goal state and a complete specification of the domain's action and decomposition operators.
Definition 1 (Planning Problem) A planning problem PP is a three-tuple ⟨P0, A, Δ⟩, where P0 is a plan specifying only the initial state and the goal state, A is the planning problem's set of action operator definitions and Δ is the set of decomposition operator definitions.

As described above, a plan-space planning algorithm searches a plan space to find a solution to a planning problem. Typical implementations produce a plan graph during this search representing the portion of the plan space searched to that point.

Definition 2 (Plan Graph) A plan graph GA = ⟨n, a⟩ for some planning algorithm A is a singly-rooted, strongly connected graph with nodes n and arcs a. Each node ni ∈ n is a plan defined by algorithm A and an arc ni → nj appears in a precisely when nj is a refinement of ni using algorithm A.

During the planning process, the hearer model employs a heuristic evaluation function to direct search through the space of plans. This function ranks plans that appear in the fringe of the current plan graph; search proceeds by expanding those fringe nodes that are ranked most promising.

Definition 3 (Plan Ranking Function) For any plan p and plan graph GA, GA = ⟨n, a⟩ and p ∈ n, a plan ranking function f maps p and GA into a set of plans such that 1) f partitions the plans in n into a totally ordered set of sets of plans, 2) this total order has a single minimal element and 3) each plan in n must be assigned by f into precisely one of these sets.

For ease of reference, we will identify the total ordering on these sets with the non-negative integers; plans that are assigned a lower number are more preferred than plans assigned a higher ranking.

2 We will refer to the speaker's planning model as the speaker model and the speaker's model of the hearer's planning model as the hearer model.

1076 Natural Language
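Definitions 1-3 can be rendered as simple data structures. The field names and the toy ranking function below are illustrative assumptions for concreteness; the paper defines these objects abstractly, and a real ranking function would be a planner's search heuristic.

```python
# The definitions above as simple Python data structures; names are
# illustrative, not taken from the paper's formalism.
from dataclasses import dataclass, field

@dataclass
class Plan:
    steps: tuple = ()
    complete: bool = False

@dataclass
class PlanningProblem:              # Definition 1: <P0, A, Delta>
    p0: Plan
    actions: frozenset
    decompositions: frozenset

@dataclass
class PlanGraph:                    # Definition 2: <n, a>, singly rooted
    nodes: list = field(default_factory=list)
    arcs: list = field(default_factory=list)   # (parent, child) refinements

def rank(plan: Plan, graph: PlanGraph) -> int:
    """A trivial plan ranking function (Definition 3): prefer shorter
    plans. Lower rank = more preferred; every plan gets exactly one rank."""
    return len(plan.steps)

g = PlanGraph(nodes=[Plan(("fly",)), Plan(("train", "ferry"))])
print([rank(p, g) for p in g.nodes])  # -> [1, 2]
```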
In order to model the resource limits of a hearer when she is interpreting a description, the hearer's limitations will be represented by a search limit function that accepts as input a plan graph representing the space already explored during a plan-space search. The function returns T if the plan graph exceeds the hearer's search limit and returns F if it does not.

Definition 4 (Search Limit Function) A search limit function d maps a plan graph G onto the set {T, F}. For any agent a, da(G) = T precisely when G exceeds the planning resource limit for a.

The hearer model's planning system combines a particular planning algorithm, a search limit function and an evaluation function.

Definition 5 (Planning System) A planning system PS is a three-tuple ⟨A, d, f⟩ where A is a plan-space search algorithm, d is a search limit function and f is a plan evaluation function.

Finally, a planning model PM is a pair consisting of a planning problem and a planning system.

Definition 6 (Planning Model) A planning model PM consists of a planning problem PP and a planning system PS.

Complete plans assigned the rank 0 by an evaluation function in a planning model PM are called the preferred plans in PM. In this work we will use as a measure of acceptability the difference between the value of the speaker's evaluation function f applied to a plan and f's value when applied to the speaker's source plan P. We will assume that the speaker has some value δ that serves as a measure of the amount of variance from P that he will tolerate. Let Gs be the plan graph in which P was found by the speaker, and let Gh be some plan graph constructed by the hearer while solving the same planning problem. The set of acceptable plans (or simply the acceptance set) for a given source plan P contains precisely those plans P' in Gh such that |fs(P, Gs) - fs(P', Gh)| ≤ δ.
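The acceptance-set test just defined can be sketched directly: a plan P' is acceptable when |fs(P, Gs) - fs(P', Gh)| ≤ δ. The ranking table and plan names below are hypothetical stand-ins; in the paper, fs is a ranking over plans within a plan graph.

```python
# A sketch of the acceptance-set test: candidate plans whose rank is
# within delta of the source plan's rank are acceptable. The graphs are
# carried along for fidelity to the definition but ignored by the toy fs.

def acceptance_set(source_plan, candidate_plans, fs, gs, gh, delta):
    base = fs(source_plan, gs)
    return [p for p in candidate_plans
            if abs(fs(p, gh) - base) <= delta]

# Toy ranking: lower rank = more preferred (illustrative values only).
ranks = {"fly-degaulle": 0, "fly-orly": 0, "train-eurostar": 2, "drive-ferry": 3}
fs = lambda plan, graph: ranks[plan]
print(acceptance_set("fly-degaulle", list(ranks), fs, None, None, delta=0))
# -> ['fly-degaulle', 'fly-orly']
```

With δ = 0, acceptability collapses to "ranked exactly as well as the source plan"; a larger δ widens the set, matching the text's remark that acceptability may be much weaker than identity.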
Constraints on the Hearer Model

There are a number of constraints that must be placed on the plans produced by either the speaker or the hearer model. First, as described earlier, we will use DPOCL to model the planning algorithm of the hearer. That is, Ah = DPOCL. Second, the planning algorithms of both speaker and hearer will be constrained to produce only complete plans that contain no unnecessary steps. We assume that the definitions of the planning problem in the speaker and hearer models are identical. That is, the specifications of the initial and goal states are the same. Furthermore, the discussion here will not deal with any incompatibility between the speaker and the hearer models' representations of the operators in our domain.4

Putting the Hearer Model to Use

This hearer model is put to use during the selection of the contents of a plan description. The constraints in this description create a new planning problem for the hearer, one in which the empty plan P0 is replaced as the root node by the partial plan characterized by the description. We will refer to this new plan as Pv. Pv has the same initial and goal states as P0 but has some amount of plan detail already filled in. As a result, the characteristics of the plan space below this node differ from that of the plan space of the original problem.

By examining the manner in which this new plan space will be searched in the hearer model, the speaker can determine the efficacy of the corresponding description. To be cooperative, a speaker should select a set of plan constraints Pv such that

- Acceptability: the speaker believes that all completions of Pv of equal or greater preference to the source plan in the hearer model also occur in the acceptance set of the speaker, and
- Resource Adequacy: the speaker believes that at least one such acceptable completion exists in the plan space of the hearer model within the bound dh.

When identifying a plan, the speaker should describe a partial plan whose completions in the hearer model are acceptable with respect to the source plan. One interpretation of Grice's maxim of quantity (Grice 1975) suggests that the speaker must determine a minimal set of constraints that meet these requirements. To find a minimal subset of plan constraints to use as a description, the approach defined here uses the planning system of the hearer model to evaluate candidate descriptions. To determine if a collection of plan constraints describes a set of plans that are all acceptable with respect to the source plan, we can initialize the hearer model's planning problem using a subset of the source plan's components and search the space of plans rooted at that node. To find a description that obeys the maxim of quantity, we begin our search with the empty subset and increase the size of the initial component set until we reach the first set defining a plan space where every preferred plan is acceptable.

4 In general, there may be considerable variance between the planning model used by the speaker and that assumed to be used by the hearer. Our approach does not commit to a particular policy for reconciling differences between speaker and hearer models but instead allows implementors to impose policy as their applications dictate.

A Sample Problem

This section examines three descriptions for the same example planning problem. The planning problem we will use involves travel from London to Paris.5 In this domain, there are three basic ways to travel from London to Paris: by train, by plane and by automobile.
Each option involves the specification of some further detail: one can fly to either of the two airports in Paris, take the train to Paris either by ferrying across the English Channel or by traveling directly through the Channel Tunnel (on the Eurostar), or drive to Paris, again by taking the ferry across the Channel. The complete plan space for this planning problem is shown in figure 1. Each node in this graph represents a (possibly partial) plan; the graph is rooted at P_0, the null plan that describes only the initial state (being in London) and the goal state (being in Paris). Each arc between two nodes in the graph indicates a refinement of the plan at the first node to form a new plan at the arc's second node. The leaf nodes in this graph are all solutions to the planning problem and are labeled with text giving a rough indication of their structure. Each node is also labeled with an integer indicating the order in which the hearer's plan search function f_h will search the space (described further below).

We will define the hearer model as follows. For illustrative purposes, we will assume a limited resource bound on the hearer's plan reasoning, bounding the search she can perform to graphs with fewer than 5 nodes. Formally, for any graph G = (N, A), d_h(G) = F precisely when |N| < 5. The speaker's source plan P is the plan to fly to Paris by taking a plane from Heathrow to De Gaulle (numbered #8 in figure 1).

(5) In these examples, we will use a simplified version of the DPOCL planner and its plan representation to limit the length of the discussion. The techniques we present here, however, are applicable to planning problems of arbitrary complexity.

1078 Natural Language

[Figure 1: Complete Plan Space for Travel Problem. The graph is rooted at the initial (empty) plan; its leaves include the Train/Eurostar, Train/ferry, Fly to De Gaulle, Fly to Orly, and Drive/ferry plans.]
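As a sketch, the hearer's ordered, resource-bounded search can be mimicked with a child table over numbered nodes. Only the node ids mentioned in the text (#3, #5 through #9) are grounded in the paper; the remaining ids and the child table itself are assumptions about figure 1.

```python
# Partial, assumed encoding of figure 1's plan space: node -> refinements.
# #3 = bare TRAVEL plan; #4 = train, #7 = fly, #10 = drive (hypothetical);
# #5/#6 = train solutions, #8/#9 = fly to De Gaulle / Orly, #11 = drive/ferry.
CHILDREN = {
    3: [4, 7, 10],
    4: [5, 6],
    7: [8, 9],
    10: [11],
    5: [], 6: [], 8: [], 9: [], 11: [],
}

def bounded_search(root, bound):
    """Expand nodes lowest-number-first (approximating f_h's search order),
    stopping once the explored graph reaches the d_h node limit; return the
    complete plans (leaves) found along the way."""
    explored, frontier, solutions = [], [root], []
    while frontier and len(explored) < bound:
        node = min(frontier)
        frontier.remove(node)
        explored.append(node)
        if not CHILDREN[node]:
            solutions.append(node)
        frontier.extend(CHILDREN[node])
    return solutions
```

With the 5-node budget, a search rooted at the bare TRAVEL plan (#3) reaches only the two train solutions, while a search rooted at the FLY plan (#7) finds both airport plans; this is the contrast the sample descriptions exploit.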
We will assume that the speaker has two simple but strong factors that affect his plan preferences: strong preferences against any plan that involves train travel and against any plan that involves ferry travel. The speaker's plan evaluation function f_s and his measure of variance S will be set such that the only acceptable plans are those numbered #8 and #9 in figure 1.

Providing Too Much Detail

Consider the following description:

Description 1: To travel to Paris from London, take the Tube to Heathrow. Next, take a plane to De Gaulle. Then take a bus from De Gaulle to Paris. In order to be in London before taking the Tube, use the effect of starting in London. In order to be at Heathrow before taking the plane to Paris, use the effect of being at Heathrow after taking the Tube. In order to be at De Gaulle before taking a bus to Paris, use the effect of taking a plane from Heathrow to De Gaulle. In order to be in Paris, use the effect of taking a bus from De Gaulle into Paris.

This description provides enough detail that it specifies exactly one completed plan: the source plan P that appears as a leaf node (#8) in the original plan space of figure 1. The new plan space rooted at this node contains just this single plan. To evaluate this candidate description, the speaker uses the hearer model described above to complete this plan. Because the plan specified by the description is already complete, no search is required; this plan is the only plan in the set of preferred solutions in the hearer model. The plan is acceptable, since it is the source plan. By including so much detail in the description, however, the speaker eliminates other acceptable plans, in this case the plan to fly into Paris's Orly airport. Moving the root node of the new plan space farther from the source plan would make for a more concise description while including other acceptable plans in the hearer model's preference set.
However, as we show in the next section, this may also result in the inclusion of unacceptable plans in the preference set.

Providing Insufficient Detail

Consider the following description:

Description 2: In order to be in Paris, travel to Paris from London.

This description describes a plan containing only the abstract TRAVEL action with no additional detail. This new plan space is rooted at the node labeled #3 in figure 1. With a search limit function constraining the hearer model's plan graph to contain no more than 5 nodes, the hearer model will only find two solutions to the planning problem: nodes #5 and #6. Both of these nodes are unacceptable given the definition of f_s described earlier. There are, then, no plans in the preferred set of the hearer model that are also acceptable to the speaker, making description 2 unacceptable.

Locating an Appropriate Description

Effective descriptions may make reference to any or all of the components of a plan. Consequently, candidate descriptions lie in the power set of the constraints present in the speaker's source plan. The previous examples use descriptions that lie on either side of an effective, concise description for the source plan. One obvious technique for finding the minimal set of constraints that successfully describes the plan is to use a brute-force search algorithm: consider each set in the power set of the constraints of the source plan. This technique would be initialized with the initial, null plan P_0 and would incrementally evaluate sets of constraints, always considering the unexamined sets with smallest cardinality next. The algorithm would halt when it had either found an acceptable plan or exhausted the power set of plan constraints. Using this technique, it's possible to locate a set of plan constraints corresponding to the following description:

Description 3: In order to be in Paris, travel to Paris from London. Travel to Paris by flying.
This description describes a plan that contains the TRAVEL step and a decomposition for that step involving a FLY action. The plan is partial, since it does not specify which airport to fly to. This new plan space is rooted at the node labeled #7 in figure 1. The hearer model will search the plan space below this node and find two solutions to the planning problem: nodes #8 and #9. Both of these nodes are acceptable (in fact, they are the only two), making the description acceptable. Any other description of similar or lesser size would be rooted either at one of node #7's siblings or at its parents, with spaces that would either lead to unacceptable solutions in the preference set of the hearer model or that would contain no solutions at all; description 3 is therefore minimal as well.

The technique of selecting minimal descriptions using exhaustive search corresponds to Dale and Reiter's full brevity interpretation of Grice's maxims used in the generation of referring expressions (Dale & Reiter 1995). This approach has two weaknesses. First, as Dale and Reiter point out, the approach is computationally expensive, making it unappealing for describing complicated plans or plan spaces. Second, it is not guaranteed to isolate a unique description. For any given planning problem, there may be a number of acceptable descriptions, all of minimal size. This simple technique is unable to distinguish between such competing descriptions.

In these cases, the plan constraints themselves may suggest heuristics for choosing between candidate descriptions. For instance, partial plans that are more referentially coherent (Long & Golding 1993; Kintsch 1988), that is, whose plan graphs have fewer strong components, may be preferred for explanation over those that are not.
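The brute-force full-brevity search can be sketched as follows: subsets of the source plan's constraints are tried in order of increasing size, and each candidate is scored by an oracle standing in for the hearer model. The oracle, constraint names, and plan names are all hypothetical.

```python
from itertools import combinations

def minimal_description(source_constraints, hearer_preferred, acceptance_set):
    """Return the smallest subset of constraints whose preferred completions
    in the hearer model are all acceptable to the speaker, or None if the
    power set is exhausted without success."""
    constraints = list(source_constraints)
    for size in range(len(constraints) + 1):      # smallest subsets first
        for subset in combinations(constraints, size):
            preferred = hearer_preferred(subset)
            if preferred and all(p in acceptance_set for p in preferred):
                return set(subset)
    return None

# Toy oracle: without the "by-flying" constraint the bounded hearer model
# reaches only the train plans; with it, she finds the two flying plans.
def hearer_model(subset):
    if "by-flying" in subset:
        return ["fly/degaulle", "fly/orly"]
    return ["train/ferry", "train/tunnel"]

minimal_description(["travel", "by-flying", "to-degaulle"], hearer_model,
                    {"fly/degaulle", "fly/orly"})    # -> {"by-flying"}
```

As the paper notes, this enumeration is exponential in the number of constraints and may return one of several equally small descriptions, depending on enumeration order.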
Work in the comprehension of narrative texts (Long & Golding 1993; Graesser, Lang, & Roberts 1991; Graesser, Robertson, & Anderson 1981; Trabasso & van den Broek 1985) describes the types of inferences drawn from descriptions of actions. It is possible that these cognitive models can be given computational definitions in plan identification.

Related Work

Several researchers interested in task-related discourse have employed action representations based on AI plans. The principal work on explaining plans produced by AI planning systems is described by Mellish and Evans (Mellish & Evans 1989). Their system takes a plan produced by the NONLIN planner (Tate 1977) and produces a textual description of the plan. Their system generates clauses describing each component of the input plan and, as Mellish and Evans point out, this often results in unnatural descriptions containing too much detail.

Other projects (Vander Linden 1993; Delin et al. 1994) produce more concise texts describing activities, but rely on simplified models of plans whose size and complexity were limited. Dale's dissertation (Dale 1992), focusing on the generation of anaphoric referring expressions in text describing cooking plans, avoided the generation of overly detailed descriptions by a combination of domain-specific techniques (e.g., linguistic information about the verbs associated with actions) and domain-independent ones (e.g., exploiting focus constraints within the text being produced).

Conclusions and Future Work

This paper has defined a computational model of a hearer's plan reasoning capabilities that is useful for selecting between competing candidate descriptions. By viewing the hearer's interpretation of a partial description as the task of searching for a completion in a space of plans, we have been able to provide a formal account of the requirements of this task.
The requirements are described in terms of the hearer's planning algorithm, her individual plan preferences, and any resource limits placed on her planning capabilities. This model characterizes a number of domain-independent planning algorithms; as a result, the model can potentially be used to generate descriptions of plans produced by a number of automatic planning systems.

Future work will address a number of issues. We will examine the use of additional forms of constraints in text descriptions beyond those discussed here (e.g., negative imperatives) and their role in bounding the search space that the hearer model must deal with. In addition, our future work will explore ways to extend this model to contexts where groups of agents use dialog to coordinate their plan-related beliefs. Finally, we will investigate techniques for reconciling differences between a speaker's model of the hearer and the methods actually employed by the hearer during interpretation of a partial plan description.

Acknowledgements

The author thanks Johanna Moore, Martha Pollack and the reviewers for their helpful comments.

References

Dale, R., and Reiter, E. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Applied Artificial Intelligence Journal 9. To appear.

Dale, R. 1992. Generating Referring Expressions: Constructing Descriptions in a Domain of Objects and Processes. Cambridge, Massachusetts: MIT Press.

Delin, J.; Hartley, A.; Paris, C.; Scott, D.; and Vander Linden, K. 1994. Expressing procedural relationships in multilingual instructions. In Proceedings of the Seventh International Workshop on Natural Language Generation, 61-70.

Dixon, P. 1987. The structure of mental plans for following directions. Journal of Experimental Psychology: Learning, Memory and Cognition 13:18-26.

Elzer, S.; Chu-Carroll, J.; and Carberry, S. 1994.
Recognizing and utilizing user preferences in collaborative consultation dialogues. In Proceedings of the Fourth International Conference on User Modeling, 19-24.

Graesser, A.; Lang, K.; and Roberts, R. 1991. Question answering in the context of stories. Journal of Experimental Psychology: General 120:254-277.

Graesser, A.; Robertson, S.; and Anderson, P. 1981. Incorporating inferences in narrative representations: a study of how and why. Cognitive Psychology 13:1-26.

Grice, H. P. 1975. Logic and conversation. In Cole, P., and Morgan, J. L., eds., Syntax and Semantics III: Speech Acts. New York, NY: Academic Press. 41-58.

Haller, S. 1994. Interactive Generation of Plan Descriptions and Justifications. Ph.D. Dissertation, State University of New York at Buffalo.

Kambhampati, S. 1993. Planning as refinement search: A unified framework for comparative analysis of search space size and performance. Technical Report 93-004, Arizona State University, Department of Computer Science and Engineering.

Kintsch, W. 1988. The role of knowledge in discourse comprehension: a construction-integration model. Psychological Review 95:163-182.

Long, D. L., and Golding, J. M. 1993. Superordinate goal inferences: Are they automatically generated during comprehension? Discourse Processes 16:55-73.

Mellish, C., and Evans, R. 1989. Natural language generation from plans. Computational Linguistics 15(4).

Penberthy, J. S., and Weld, D. 1991. UCPOP: A sound, complete partial order planner for ADL. In Proceedings of the Third International Conference on Knowledge Representation and Reasoning.

Tate, A. 1977. Generating project networks. In Proceedings of the International Joint Conference on Artificial Intelligence, 888-893.

Trabasso, T., and van den Broek, P. 1985. Causal thinking and the representation of narrative events. Journal of Memory and Language 24:612-630.

Vander Linden, K. 1993.
Speaking of Actions: Choosing Rhetorical Status and Grammatical Form in Instructional Text Generation. Ph.D. Dissertation, University of Colorado, Department of Computer Science.

Walker, M. 1996. The effect of resource limits and task complexity on collaborative planning in dialog. Artificial Intelligence. To appear.

Young, R. M.; Pollack, M. E.; and Moore, J. D. 1994. Decomposition and causality in partial order planning. In Proceedings of the Second International Conference on AI and Planning Systems, 188-193.