Compliance Viewed as Programming a Damped Spring

Stephen J. Buckley*
Manufacturing Research
IBM Thomas J. Watson Research Center
Yorktown Heights, NY

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Abstract

Parts mating often requires the use of compliant motions, which cause an object in the grasp of a robot to slide along obstacles in its environment. This paper is about the interface between a compliant motion programming system and a compliant motion control system. We propose that in this interface the robot can be modeled as a damped spring. This model allows the programming system to specify and reason about compliant motions without worrying about low-level control details. The utility of the damped spring model is demonstrated by applications in teaching and planning of compliant motion strategies.

1 Introduction

A compliant motion causes an object in the grasp of a robot to slide along obstacles in its environment, using them as guides toward a goal region. This type of motion is particularly useful for parts mating. For a robot to perform a variety of compliant tasks, its compliance must be programmable.

Figure 1 shows the logical structure of a robot system. The user gives task information to the programming system via the user interface, which is usually a high-level language or a teleoperator-based teaching system. In the future, tasks will probably be presented via geometric models. Once the programming system has been presented with a user task, it sends compliant motion commands to the control system via the control interface. Finally, the control system sends hardware commands to the robot via the robot interface.

This paper is about the programming system, and in particular, its interface to the control system. It is the task of the control system to transform the actual dynamics of the robot to desired dynamics. We propose that for compliant motion the desired dynamics should take the form of a damped spring. We will demonstrate the utility of the damped spring model for two different types of programming systems: a robot teaching system, and a model-based planning system. We will not address control issues here; work is in progress elsewhere to implement compliant control systems of this nature [Whitney 1985].

This section of the paper presents an example of a compliant motion strategy, and describes the damped spring model. Sections 2 and 3 sketch the application of the model to robot teaching and planning, respectively. Section 4 presents concluding remarks.

*Performed at the MIT Artificial Intelligence Laboratory. Supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-85-K-0124, in part by General Motors Research Laboratories, under Contract 50-10, in part by the National Science Foundation, under contract 8451218-DMC, and in part by the IBM Corporation.

1.1 An Example

Figure 2 depicts a three-dimensional T-shaped part in the grasp of a robot. An obstacle is shown which contains a hole with an adjoining slot. First, the T-shape is to be inserted into the hole. Then, the shaft of the T-shape is to be slid into the adjoining slot.

Figure 2: An insertion task. A three-dimensional T-shaped part is in the grasp of a robot. First, the T-shape is to be inserted into the hole of the obstacle. Then, the shaft of the T-shape is to be slid into the slot which adjoins the hole.

We will assume that the robot can translate in three dimensions, but cannot rotate. Finding a solution to this problem is complicated by uncertainty in the initial configuration of the robot, in the position and force sensing of the robot, and in controlling the position and velocity of the robot. We will assume that these uncertainties are bounded.
We can use the configuration space representation [Lozano-Perez 1983] to simplify the geometry of the problem. Let r be an arbitrarily chosen reference point on the robot. Consider the positions which r can take without causing a collision between the T-shape and the obstacle. Each face of the obstacle imposes a constraint on the free motion of r. These constraints are represented explicitly as configuration space surfaces in Figure 3. The configuration space surfaces form a sequence of two holes. The first hole represents the constraints on the T-shape while it is inserted into the hole in the obstacle. The second hole follows from an intermediate chamber at the bottom of the first hole, and represents the constraints on the shaft of the T-shape while it is sliding into the slot of the obstacle. Configuration space surfaces such as these comprise a representation of the task geometry that is equivalent to the original geometry, but more explicit. In the new representation, we can think of the robot simply as the reference point r. It is possible to represent arbitrarily complicated polyhedral environments using configuration space [Lozano-Perez 1983].

Figure 3: Front and left views of the configuration space representation of the T-shape insertion. The start region is an edge on the surface above the holes. The goal region is the bottom face of the second hole.

Assume initially that the T-shape is in contact with the top of the obstacle, laterally aligned with the hole. The configuration space representation of this start region is shown in Figure 3. The goal region is the bottom face of the second hole. We are to specify a sequence of compliant motions which moves the T-shape from the start region to the goal region despite bounded sensing and control errors in the robot. The model-based compliant motion planner described in Section 3 was used to compute a solution for this problem.
The planner returned a sequence of two compliant motions. Figure 4 shows the first commanded motion. The robot is to aim for any point in the black polyhedron. With this commanded motion, the robot will reach and stop on a face of the first hole, as shown in the figure. Friction between the T-shape and the hole face will cause it to stay there. The black polyhedron is behind and more narrow than the stopping region to account for possible trajectory errors. The stopping region then becomes the start region for the second commanded motion.

Figure 4: Front and left views of the first commanded motion. The start region is an edge on the surface above the holes. The subgoal for the motion is a face of the first hole, as shown. To attain the subgoal, the robot should aim for any commanded position in the black polyhedron.

Figure 5 shows the second commanded motion. If the robot aims for any commanded position in the black polyhedron shown in the figure, then the robot will enter the second hole, slide along the lower side of the hole, and stop in the goal region.

Figure 5: Front and left views of the second commanded motion. The start region is a face of the first hole, as shown. The goal region is the bottom of the second hole. To attain the goal, the robot should aim for any commanded position in the black polyhedron.

1.2 Assumptions

We will use polyhedral models to represent the geometry of the robot and its environment. Many objects can be accurately modeled by polyhedra. Curved objects can be approximated by polyhedral models with many faces. To simplify the computations, we will assume that the robot can translate in three dimensions, but cannot rotate. We will assume that the robot is equipped with three-dimensional position and force sensors, which have bounded uncertainty.
1.3 The Damped Spring Compliance Model

In order for a programming system to specify compliant motions, it is necessary to have an abstract model of compliance. Currently, most noncompliant robots are programmed by commanding position goals. We have extended this method to compliant robots by attaching an imaginary spring/damper combination between the robot and the commanded position, as illustrated in Figure 6. Then, we simply let the robot go to where the spring pulls it. This simple model is called the damped spring compliance model, and can be described mathematically by the equation

    f_r = b(ẋ + x - p)    (1)

where f_r is the reaction force on the robot, b is a damping constant, p is the desired position of the robot, x is the actual position, and ẋ is the actual velocity.

Figure 6: To specify the insertion of a block into a square hole, we attach an imaginary spring/damper combination between the block and a commanded position p behind the hole.

Given a commanded position p, this compliance model implies the following robot behavior under ideal conditions: In free space, if x is not equal to p, the robot moves in a straight line from x to p. In contact, if x - p is not contained in the friction cone of the contact, the robot slides toward the projection of p onto the configuration space surface.

The damped spring compliance model is an extension of the generalized spring formulation of Salisbury [1980], and draws on ideas from Lozano-Perez, Mason, and Taylor [1984]. It is also the first-order analog of the generalized impedance model [Hogan 1984]. We chose to ignore the second-order terms present in Hogan's equation to avoid parabolic trajectories, which are more difficult to specify and reason about than straight-line trajectories.
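The free-space behavior implied by Equation 1 can be checked numerically. With zero reaction force, f_r = b(ẋ + x - p) = 0 gives ẋ = p - x, so the robot moves along a straight line from x toward p, converging exponentially. The sketch below is illustrative only; the function name and step sizes are assumptions, and the paper's control system, not reproduced here, realizes these dynamics on real hardware.

```python
# Minimal free-space simulation of the damped spring model:
# setting f_r = 0 in equation (1) yields xdot = p - x.

def simulate_free_space(x0, p, dt=0.01, steps=1000):
    """Euler-integrate xdot = p - x and return the sampled trajectory."""
    x = list(x0)
    path = [tuple(x)]
    for _ in range(steps):
        x = [xi + dt * (pi - xi) for xi, pi in zip(x, p)]
        path.append(tuple(x))
    return path

path = simulate_free_space((0.0, 0.0, 0.0), (1.0, 2.0, 0.0))
# Every sample lies on the straight segment from x0 to p (here, y = 2x),
# and the trajectory converges to the commanded position p.
assert all(abs(y - 2.0 * x) < 1e-9 for x, y, _ in path)
print(tuple(round(c, 3) for c in path[-1]))  # (1.0, 2.0, 0.0)
```

Note that b drops out of the free-space path entirely, matching the observation below that, under stable conditions, the damping constant affects only execution time.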
A lone spring would have been even simpler, but damping was necessary because the spring equation f_r = kx does not contain a time parameter, and thus cannot be used to specify the trajectory of the robot. The damping constant b may affect the stability of the robot, but under stable conditions it has no effect on the outcome of a motion, other than on the time it takes to execute the motion. Thus, the desired position p is the only parameter required from the user.

It is the task of the underlying control system to present the desired dynamics of Equation 1 to the teaching system. For short-term testing purposes, we executed compliant motions at low speed on an IBM 7565 robot using a simple feedback loop written in AML.

In the presence of sensing and control uncertainty, ideal trajectories cannot be attained reliably. Let ε_p be the maximum distance between a commanded position and the actual position attained by the control system in free space. Let θ_v be the maximum angle between a commanded velocity and the actual velocity attained by the control system in free space. In Buckley [1987], we show that, in terms of these control error bounds, the robot can track a desired trajectory to within a distance ε_t, a function of ε_p, θ_v, and d, where d is the distance between interpolation points along the commanded trajectory. If the trajectory controller is implemented as an analog circuit, then a continuous stream of control positions can be passed to the position controller, reducing ε_t to ε_p. The set of possible free space trajectories can thus be bounded by a cylinder of radius ε_t.

Figure 7: An operational view of the robot teaching system. The user submits a problem to the system, consisting of an environment model, a start region, and a goal region. Then, the user is asked to suggest a series of commanded positions.
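The cylinder bound can be phrased as a simple geometric test: a free-space trajectory is consistent with the model if every sample lies within ε_t of the straight segment from the start position to the commanded position. This is an illustrative check only; the value of ε_t comes from the control error bounds above, and a value is simply assumed here.

```python
import math

# Hedged sketch: test whether sampled trajectory points stay inside the
# cylinder of radius eps_t around the commanded straight-line segment.

def dist_to_segment(q, a, b):
    """Distance from point q to segment a-b (any dimension)."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    aq = [qi - ai for ai, qi in zip(a, q)]
    t = sum(x * y for x, y in zip(aq, ab)) / sum(c * c for c in ab)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    closest = [ai + t * ci for ai, ci in zip(a, ab)]
    return math.dist(q, closest)

def within_cylinder(samples, a, p, eps_t):
    return all(dist_to_segment(q, a, p) <= eps_t for q in samples)

traj = [(0.0, 0.0, 0.0), (0.5, 0.05, 0.0), (1.0, 0.0, 0.0)]
print(within_cylinder(traj, (0, 0, 0), (1, 0, 0), eps_t=0.1))  # True
```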
When the robot strikes a surface, this cylinder is projected onto the surface, forming a planar cylinder of radius ε_t.

1.4 Motion Termination

The damped spring model allows one to program a compliant robot by issuing a commanded position. By choosing the commanded position carefully, one can often cause the robot to stop in a desired goal region by sticking. It is sometimes useful to specify other types of motion termination as well. Our system allows additional motion termination by position and force sensing. As the robot approaches a commanded position, it stops when its sensed position is contained in a specified set of termination positions, and the orientation of its sensed force is contained in a specified set of termination forces.

2 Application to Robot Teaching

This section describes the use of the damped spring model in an implemented robot teaching system. Figure 7 shows an operational view of the teaching system. The user submits a problem to the system, consisting of:

- a geometric model, representing the robot and its environment (e.g. workpieces, feeders, fixtures, tools).
- a start region, which contains all possible initial configurations of the robot.
- a goal region, in which the robot is to terminate under the desired compliant motion strategy.

The teaching system displays the start and goal regions graphically, and prompts the user. The user then submits a commanded position p. In principle, p could be entered by guiding the robot, or with a light pen. In our experiment, p was simply typed in. The user should choose p in the hope that it will cause the robot to reach the goal region from the start region. If this is impossible, then the user should choose p in the hope that it will cause the robot to reach the goal region from an achievable intermediate goal.
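The position-and-force termination condition described under Motion Termination above can be sketched as a predicate. The set representations here are assumptions made for illustration: termination positions are modeled by an axis-aligned box, and termination force orientations by a cone of directions around a nominal contact normal, rather than the paper's configuration-space sets.

```python
import math

# Hedged sketch: a motion terminates when the sensed position is in the
# termination-position set AND the sensed force orientation is in the
# termination-force set. Box and cone representations are illustrative.

def in_box(pos, box):
    lo, hi = box
    return all(l <= c <= h for c, l, h in zip(pos, lo, hi))

def in_force_cone(force, axis, half_angle_deg):
    fmag = math.sqrt(sum(c * c for c in force))
    amag = math.sqrt(sum(c * c for c in axis))
    if fmag == 0.0:
        return False  # no contact force sensed: orientation undefined
    cos_angle = sum(f * a for f, a in zip(force, axis)) / (fmag * amag)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def should_terminate(pos, force, term_box, term_axis, half_angle_deg=15.0):
    return in_box(pos, term_box) and in_force_cone(force, term_axis, half_angle_deg)

# Robot pressing nearly straight down on a face inside the termination box:
print(should_terminate((1.0, 1.0, 0.0), (0.0, 0.1, 1.0),
                       ((0, 0, -0.1), (2, 2, 0.1)), (0, 0, 1)))  # True
```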
When p is entered, the teaching system computes a set R of configurations in which the robot is in contact with its environment, and from which the goal region can be reached reliably via p. R is called a pre-image of the goal region under the commanded motion p [Lozano-Perez, Mason, and Taylor 1984]. The pre-image R is stored in a table along with p. R is now said to be solved, and is added to the goal region. If a subset S' of the start region is recognizably contained in R despite sensing uncertainty, then S' is solved, and can be removed from the start region. If the new start region is empty, then the problem is solved. Otherwise, the system displays the new start and goal regions, and user interaction continues. By iteratively reducing the size of the start region, and increasing the size of the goal region, it is hoped that the user and system can together converge on a successful strategy.

Each solved region R in the final table is accompanied by a commanded position, which is to be issued upon reaching R. On execution, the robot looks up its present sensed position and force orientation in the table. If the table entry corresponds to a goal region, then the robot stops. Otherwise, it executes the corresponding commanded motion. This iterative lookup process implements a conditional test, which chooses the next commanded motion based on sensory input.

This teaching system has several advantages over previous teaching systems, including:

1. Compliant motions have traditionally been difficult to specify by teaching. In our teaching system, specifying a compliant motion is simply a matter of specifying a commanded position.

2. Conditional tests are difficult to specify by teaching. In our teaching system, conditional tests are inferred automatically.

3. Debugging a compliant motion strategy is time-consuming and costly.
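The execution-time lookup loop just described can be sketched as follows. Solved regions are stored with the commanded position to issue from each, and reaching a region tagged as a goal stops the robot. Region membership and motion execution are stubbed out; all names and the toy one-dimensional world are illustrative, not the paper's implementation.

```python
# Sketch of table-driven strategy execution: look up the sensed state in
# the table of solved regions; stop in a goal region, otherwise issue the
# corresponding commanded motion and sense again.

def execute_strategy(table, sense, move, max_steps=10):
    """table: list of (region_test, commanded_position, is_goal) entries."""
    for _ in range(max_steps):
        state = sense()
        for region_test, p, is_goal in table:
            if region_test(state):
                if is_goal:
                    return state  # in the goal region: stop
                move(p)           # issue the next commanded motion
                break
        else:
            raise RuntimeError("sensed state not in any solved region")
    raise RuntimeError("no convergence within step budget")

# Toy 1-D world: goal region is x >= 2; each commanded motion moves the
# robot at most 1 unit toward its commanded position.
world = {"x": 0.0}
def sense():
    return world["x"]
def move(p):
    world["x"] += max(-1.0, min(1.0, p - world["x"]))

table = [(lambda x: x >= 2.0, None, True),
         (lambda x: x < 2.0, 5.0, False)]
print(execute_strategy(table, sense, move))  # 2.0
```

The inner lookup is exactly the inferred conditional test: which motion runs next depends only on which solved region the sensed state falls in.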
In principle, there is no need to debug the motion strategies produced by our teaching system; their reliability is ensured, assuming bounded uncertainty in the starting configuration of the robot, and in robot sensing and control.

Pre-images were first proposed by Lozano-Perez, Mason, and Taylor as a subtask in a motion planner. Their proposal did not specify an implementation. Erdmann [1984, 1986] showed that for certain classes of termination conditions, pre-images can be computed by geometric backprojection. Erdmann implemented his scheme for planar robots, under the generalized damper compliance model [Whitney 1977]. Rotations in the plane were implemented by constructing slice projections for various ranges of rotations. We adapted Erdmann's algorithm to three Euclidean dimensions, using the damped spring compliance model. Under this compliance model, backprojection can be implemented by a series of three-dimensional set operations. The details of our algorithm are given in Buckley [1987, 1988].

3 Application to Robot Planning

This section describes the use of the damped spring model in a model-based compliant motion planner. Canny and Reif [1987] showed that the problem of planning compliant motions with uncertainty is exponential time hard. To simplify the problem, we approximate the environment of the robot as a finite state space. Each state is a set of configuration space vertices, edges, and faces. The planner searches for a compliant motion strategy by repeatedly choosing a state, and constructing arcs which connect the state to other states. An arc represents a set of commanded positions which are guaranteed to get from one state to another. Arc construction proceeds until a successful compliant motion strategy has been constructed from the start state to a goal state. The main computational activity in the planner is arc construction.
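The planner's search over the finite state space can be sketched as a graph search. This is an illustrative skeleton, not the paper's planner: states stand in for sets of configuration-space features, and the geometric arc construction (backprojection by three-dimensional set operations) is abstracted behind a callback that returns guaranteed (commanded position, next state) pairs.

```python
from collections import deque

# Sketch of the planner's search: breadth-first search over states, where
# construct_arcs(s) yields (commanded_position, next_state) pairs that are
# guaranteed to move the robot from state s to next_state.

def plan(start, goals, construct_arcs):
    frontier = deque([start])
    parent = {start: None}  # state -> (previous state, commanded position)
    while frontier:
        s = frontier.popleft()
        if s in goals:
            strategy = []    # walk back to recover the motion sequence
            while parent[s] is not None:
                s, p = parent[s]
                strategy.append(p)
            return list(reversed(strategy))
        for p, t in construct_arcs(s):
            if t not in parent:
                parent[t] = (s, p)
                frontier.append(t)
    return None              # no guaranteed strategy found

# Toy state graph standing in for the T-shape task's C-space features:
arcs = {"start": [("p1", "first_hole_face")],
        "first_hole_face": [("p2", "goal_face")],
        "goal_face": []}
print(plan("start", {"goal_face"}, lambda s: arcs[s]))  # ['p1', 'p2']
```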
Thanks to the damped spring compliance model, arc construction can be implemented by a series of three-dimensional set operations. The details of our algorithm are given in Buckley [1987].

The planner was implemented and applied to the problem shown in Figure 2. The configuration space environment for this problem contains 118 vertices, edges, and faces (Figure 3). In this nontrivial environment, the planner synthesized the two-step motion strategy shown in Figures 4 and 5.

4 Concluding Remarks

The damped spring model is a useful model for programming compliant motions, for the following reasons:

1. It is easy to specify compliant motions using the model. One needs to specify a commanded position and a termination condition, which consists of a set of termination positions and a set of termination forces. Specifying a commanded position is already a popular method of programming robots. Specifying termination conditions is not quite as commonplace, but in Sections 2 and 3 we have shown that a programming system can automatically compute termination conditions from task information.

2. Using the model, one can specify compliant motions with wide utility. Commanded positions in combination with termination conditions provide a great deal of flexibility.

3. Reasoning about damped spring motions is basically an application of set theory, as illustrated by the teaching and planning examples. With three degrees of freedom or less, set operations can be implemented using traditional computational geometry. Above three dimensions, there are some computational difficulties, which are the subject of ongoing research.

Acknowledgments

Tomás Lozano-Pérez advised this research. John Canny, Bruce Donald, and Mike Erdmann provided technical advice. Jim Korein gave advice on the presentation.

References

Buckley, S. 1987. "Planning and Teaching Compliant Motion Strategies", Ph.D. dissertation, MIT Department of Electrical Engineering and Computer Science; also AI-TR-936, MIT Artificial Intelligence Laboratory.

Buckley, S. 1988. "Teaching Compliant Motion Strategies", to be published in IEEE Journal of Robotics and Automation.

Canny, J., and Reif, J. 1987. "New Lower Bound Techniques for Robot Motion Planning Problems", 28th Annual Symposium on Foundations of Computer Science, Cambridge, Massachusetts, pp. 49-60.

Erdmann, M. 1984. "On Motion Planning With Uncertainty", S.M. dissertation, MIT Department of Electrical Engineering and Computer Science; also AI-TR-810, MIT Artificial Intelligence Laboratory.

Erdmann, M. 1986. "Using Backprojections for Fine Motion Planning with Uncertainty", International Journal of Robotics Research 5(1), pp. 19-45.

Hogan, N. 1984. "Impedance Control of Industrial Robots", Robotics and Computer-Integrated Manufacturing 1(1), pp. 97-113.

Lozano-Pérez, T. 1983. "Spatial Planning: A Configuration Space Approach", IEEE Transactions on Computers 32(2), pp. 108-120.

Lozano-Pérez, T., Mason, M., and Taylor, R. 1984. "Automatic Synthesis of Fine-Motion Strategies for Robots", International Journal of Robotics Research 3(1), pp. 3-24.

Salisbury, J.K. 1980. "Active Stiffness Control of a Manipulator in Cartesian Coordinates", IEEE Conference on Decision and Control, Albuquerque, pp. 95-100.

Whitney, D. 1977. "Force Feedback Control of Manipulator Fine Motions", ASME Journal of Dynamic Systems, Measurement, and Control 99, pp. 91-97.

Whitney, D. 1985. "Historical Perspective and State of the Art in Robot Force Control", IEEE International Conference on Robotics and Automation, St. Louis, pp. 262-268.
Reasoning About Grasping

S. A. Stansfield
Sandia National Laboratories
Albuquerque, N.M. 87185

Abstract

The promise of robots for the future is that of intelligent, autonomous machines functioning in a variety of tasks and situations. If this promise is to be met, then it is vital that robots be capable of grasping and manipulating a wide range of objects in the execution of highly variable tasks. A current model of human grasping divides the grasp into two stages, a precontact stage and a postcontact stage. In this paper, we present a rule-based reasoning system and an object representation paradigm for a robotic system which utilizes this model to reason about grasping during the precontact stage. Sensed object features and their spatial relations are used to invoke a set of hand preshapes and reach parameters for the robot arm/hand. The system has been implemented in PROLOG and results are presented to illustrate how the system functions.

I. Introduction

Most robots today are little more than advanced automation, limited to simple, repetitive tasks in highly structured environments. But the promise of robotics for the future is that of autonomous machines capable of carrying out complex tasks in unknown or partially specified worlds - robotic explorers, for example, functioning in space and beneath the sea, or intelligent machines capable of entering hazardous environments to perform rescue and clean-up operations. Such robots will need not only to reason about their environment, but also to act upon it in intelligent and flexible ways.

For this reason, the design of general purpose hands and grasping algorithms has begun to receive attention. Not only is grasping of vital importance to our goal of intelligent, autonomous robots, but it will also find utility in such near-term industrial applications as flexible manufacturing and dextrous assembly.
Ultimately, it will provide the robot with a mechanism for interfacing with and learning about its world, independent of human programmers.

Previous research into robotic grasping has been predominantly analytical. Unfortunately, while such research provides us with useful insights into the mechanisms of grasping, it is often difficult to apply to "real world" situations. This is due to the fact that the analysis itself often becomes unwieldy when too many factors are taken into consideration. Therefore, simplifying assumptions about the world must be made. For example, grasps are often assumed to be planar. They are often modeled as individual contacts on an object, sometimes with the further constraint that the contacts be point only, or that there be no friction involved in the grasp. Unknowns and uncertainties in the environment are usually not taken into account. There are no mechanisms within these models for automatically integrating information about the task or the perceived state of the world. And finally, the difficulty in controlling a multiple-degree-of-freedom robot hand capable of providing such contacts - and the inherent inaccuracies of such a device - are usually not considered.

Recently, psychologists and cognitive scientists have also become interested in the grasping problem. Their observations on human grasping provide a number of potentially useful insights for the researcher in robotic grasping. For example, it has been noted that humans tend to use a predetermined set of grasp configurations in the initial stage of a grasp [Jeannerod78, Lyons85]. Often, several fingers are coupled, reducing the degrees of freedom within the system [Iberall87]. High-level knowledge about the task, the object to be grasped, and the perceived state of the world affect grasp choice and execution [Arbib85, Cutkosky87, Klatzky86]. And finally, perceptual information is utilized in all stages of the grasp.
What all of this offers to robotic grasping is a mechanism for simplifying the synthesis and control of a grasp, while making fewer limiting assumptions about the world in general.

This research addresses the design and implementation of a robotic system that incorporates these ideas. The goal is robotic grasping of generic, or basic level, objects. If our robot is familiar with the object "screwdriver," for example, then we would like it to be able to grasp any screwdriver which it may encounter during the execution of some task - even a screwdriver which it may not have previously encountered. To accomplish this, we need not only to "partition" the grasp into a set of achievable subtasks, but also to consider fundamental questions concerning the synergism of motor-interaction and high-level knowledge as mediated by perception. Theories of human grasping and manipulation offer guidance in both the former and the latter, and we propose to use these as a foundation for our design.

This paper presents a rule-based system to be used by a robot to reason about grasping in the precontact stage. The object representation paradigm upon which this rule-base operates is also described. Both the reasoning and the representation have been implemented in PROLOG, and we present some examples as a way of illustrating the operation of the system. Finally, we discuss briefly how the system will be extended in the future.

II. A Two-stage Model of Grasping

One model of human grasping proposed by Jeannerod [Jeannerod78], Arbib, et al. [Arbib83], and Tomovic, et al. [Tomovic87], divides the grasp into two stages. In the initial, precontact stage, visual information and high-level knowledge are used in a feedforward manner to drive hand preshaping, hand/wrist orientation, and a ballistic reach toward the object.
In the second stage of the grasp, haptic (tactile plus kinesthetic) information is used in a feedback manner to adjust and fine-tune the grasp. Both Arbib and Tomovic hypothesize complex, high-level control mechanisms involving both perceptual and cognitive functions, as well as low-level actions.

An approach to robotic grasping would implement this model in the following way: In the precontact stage of the grasp, a vision system (or a combination of vision and exploratory touch) would be used to extract object features relevant to the task of grasping. These features would include location of the object in space, segmentation of the object into components, and the determination of the relations among these components. High-level reasoning would then utilize this information, as well as knowledge about the task and the perceived state of the world, to select an appropriate hand preshape and to drive the reach toward the target. In the postcontact stage, the hand would make contact with the object and relevant haptic features would be extracted and used to fine-tune the grasp.

This paper concerns itself with the precontact stage. We are interested primarily in the following question: What is the proper knowledge-base and object representation paradigm to allow the system to generate the appropriate set of grasp preshapes and grasp/reach parameters for the given object? We do not currently address the issue of what the optimal set of hand preshapes is. Lyons [Lyons85] has done such an analysis. He proposes a set of hand configurations useful for grasping, and the three which we have chosen to implement are similar to those proposed by him.

The research environment in which this work is being carried out consists of a Salisbury robot hand under position and force control; a PUMA 560 robot arm upon which the hand is mounted; a vision system; and a Symbolics 3645 running PROLOG to provide the high-level programming environment.
Figure 1: Wrap hand preshape.

III. Hand Preshaping and Reach

We have chosen to implement the following three grasp preshapes: wrap, pinch, and grip. Figures 1 - 3 show these preshapes as implemented using the Salisbury hand. Each grasp is described by the number of virtual fingers which it uses (one, two, or three) and by the type of contact which the fingers make with the object (point or extended). The concept of virtual fingers is due to Arbib and Iberall [Arbib83]. Virtual fingers involve the use of one or more real fingers to implement a grasp. The number of real fingers used depends upon the parameters of the object or component to be grasped. In this work, we will fix the number of real fingers to be used per virtual finger and will not yet attempt to do a dynamic mapping. Thus, a wrap involves two virtual fingers, the object is enclosed, and the contacts are extended. A grip involves three virtual fingers, the object is held at the fingertips, and the contacts are point.

In the precontact stage of the grasp, the system uses information extracted from the object to generate a set of grasp preshapes and to determine parameters for both the hand shape and the reach. Currently, the only parameter of the hand shape with which we are concerned is the distance between the fingers. The high-level reasoning module checks that the selected object component will fit into the span of the hand for the chosen grasp. Parameters of the reach are: the approach plane, that is, the plane perpendicular to the axis along which the arm/hand will approach the target; the target point, that is, the point on the component which the approach axis will intersect; and the rotation of the wrist/hand to provide proper placement of the fingers on the component.

The hand shapes and reach parameters are generated by a set of rules which operate on an object representation scheme which we call the spatial polyhedron.

Figure 2: Pinch hand preshape.
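The span check mentioned above, verifying that the chosen component fits between the (virtual) fingers for a given preshape, can be sketched as a table lookup. The finger counts and contact types come from the text; the span values and all names below are assumptions made for illustration, not measurements of the Salisbury hand.

```python
# Hedged sketch of preshape feasibility by hand span. The max_span_mm
# values are invented placeholders; virtual-finger counts and contact
# types follow the description in the text.

PRESHAPES = {
    "wrap":  {"virtual_fingers": 2, "contact": "extended", "max_span_mm": 80},
    "pinch": {"virtual_fingers": 2, "contact": "point",    "max_span_mm": 50},
    "grip":  {"virtual_fingers": 3, "contact": "point",    "max_span_mm": 60},
}

def feasible_preshapes(component_width_mm):
    """Return the preshapes whose hand span can accommodate the component."""
    return [name for name, p in PRESHAPES.items()
            if component_width_mm <= p["max_span_mm"]]

print(feasible_preshapes(55))  # ['wrap', 'grip']
```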
Figure 3: Grip hand preshape.

This representation is the driving force behind the high-level reasoning within the system, so we will present it briefly below. We will then return to our discussion of reasoning for grasping and present some examples.

IV. Object Representation

The spatial polyhedral representation was developed as a mechanism for representing generic, or basic level, objects for active tactile exploration and categorization. It is a feature-based representation which includes both definitional information about an object in the form of a hierarchy of frames, and spatial information in the form of a set of aspects [Koenderink79]. All information is symbolic, and the goal is to represent objects in such a way that generic objects may be modelled and reasoned about. An additional requirement is that the representation must be robust enough to allow its use within a robotic system equipped with real sensors. The representation is presented fully in [Stansfield87], and its successful use for categorizing basic objects explored by an active robotic system is shown. In this paper, we show how it may be used by a reasoning system to generate grasps. To that end, we will describe the representation informally and present an example.

Objects within the system are defined hierarchically. An object is made up of a set of invariant components (a cup consists of a body and a handle, for example). Each component is described by a set of features (the body consists of a curved surface, a planar surface, and a rim contour). The features are described by view-dependent parameters (the handle viewed from above has a different appearance than it does when viewed from the side). In addition, as one moves around an object, different components will come into or leave one's view. This information is embodied in the spatial polyhedron, which may be described informally as follows.

Imagine an object at the center of an n-sided polyhedron.
If the object were to be viewed, or sensed, along a line normal to each face of this polyhedron, then certain components and features of the object would be viewable while all others would not. Slight changes in attitude as the viewer moves around the object will not result in any new features coming into view. When the viewer has moved sufficiently, however, then he will be sensing the object from a different aspect (or face of the spatial polyhedron), and different components and features will be viewable. Thus we model an object by mapping to each face of the spatial polyhedron all of the features which we expect to be viewable along that face. This mapping consists of a list of these features and their appearance from the specified view. Thus the faces of the spatial polyhedron represent different aspects of the object. Figure 4 shows a pseudo-PROLOG implementation of this representation for a glass. The face predicates implement the spatial polyhedron.

    object(glass, bounding-volume, dimension)
    component(glass, body, bounding-volume)
    face(glass, body, contour:(rim, curved,
         bounds-on-radius), side1)
    face(glass, body, surface:(nonelastic,
         noncompliant, smooth, planar,
         bottom-surface), contour:(border,
         curved, bounds-on-radius), side2)
    face(glass, body, surface:(nonelastic,
         noncompliant, smooth, curved,
         side-surface), side3)
    face(glass, body, surface:(nonelastic,
         noncompliant, smooth, curved,
         side-surface), side4)
    face(glass, body, surface:(nonelastic,
         noncompliant, smooth, curved,
         side-surface), side5)
    face(glass, body, surface:(nonelastic,
         noncompliant, smooth, curved,
         side-surface), side6)

Figure 4: Representation of glass.

The object database to be used by the system consists of a set of generic objects represented as described above. It is assumed that vision, or a combination of vision and exploratory touch, has been used to extract the pertinent features and relations from the object to be grasped, and that this information has been integrated into a symbolic representation also accessible to the system. Stansfield [Stansfield87] describes in detail a robotic system which utilizes vision and touch to explore and categorize objects. The output of such a system serves as the input to the one described here.

V. Reasoning About Grasping

Reasoning for grasping consists of a set of rules concerning which features and relations must be present in the explored object in order to invoke a particular grasp preshape and to generate the grasp/reach parameters. Reasoning about grasps works as follows: An attempt is first made to categorize the object as one of those known to the system. It is not necessary that the robot recognize an object in order to grasp it. Indeed, the robot should be capable of grasping objects which it does not recognize. However, if an object is recognized, then assumptions may be made about those portions which have not been sensed (that the back-side of a glass is a curved surface, for instance). These assumptions may then be used by the system when generating grasps. If the object is not recognized, then the system will only generate grasps which place the fingers on sensed portions of the object, since it assumes that nothing is known about the unsensed portions. When there are multiple hypotheses concerning object identity, the system chooses the simplest. (The simplest object is the one with the fewest number of unsensed components.)

The rule-base for grasping consists of a set of rules concerning which features of an object must be present, and what the relations among these features must be, in order for a particular hand preshape to be valid for grasping the sensed object. If these conditions are met, then an additional set of rules for generating the parameters of the grasp and reach are invoked. Figure 5 shows the rule for invoking a wrap grasp of the body of an object from the right.

    wrap-right-body(object)
    IF   right-face has curved surface AND
         right-face has no other components AND
         front-face has curved surface AND
         back-face has curved surface AND
         distance between front-face and back-face
           fits within span of hand
    THEN valid grasp for object is wrap for component body.
         approach target is center of curved surface from right.
         oppositions are curved surface in front and
           curved surface in back.

Figure 5: Rule for invoking wrap preshape for a body from the right.

The necessary conditions are that there be a set of three adjacent curved surfaces; that the approach surface be unoccluded by other components of the object; and that the body fit within the span of the hand. All information concerning which features of the object are present, and what the parameters of these features are for a given aspect, comes directly from the spatial polyhedron, as do the target approach plane and the oppositions. The target point is currently predetermined for each feature type. Oppositions determine the placement of the virtual fingers and the rotation of the wrist. The term opposition is borrowed from Iberall [Iberall87]. She uses the term to describe the way in which the hand can apply forces around a given object for a given task. We use the term in essentially the same way. Oppositions are to some extent implicit in the relations between pairs of faces of the spatial polyhedron. The type of feature and its parameters, as well as the type of hand preshape under consideration, determine the choice of oppositions.

VI. Discussion

Example 1: Glass. In our first example, the input object is a glass. The system generates the following output:

    Object hypothesis is: glass
    If object is glass then these components are missing:
      none
    Object hypothesis is: mug
    If object is mug then these components are missing:
      handle
    I'm going to assume that the object is glass
    The object may be grasped as follows:
    Use hand preshape Wrap for component Body
      Approach target is the center of the curved surface from the right
      Opposition 1: curved surface from the front
      Opposition 2: curved surface from the back
    Use hand preshape Wrap for component Body
      Approach target is the center of the curved surface from the left
      Opposition 1: curved surface from the front
      Opposition 2: curved surface from the back
    Use hand preshape Wrap for component Body
      Approach target is the center of the curved surface from the front
      Opposition 1: curved surface from the right
      Opposition 2: curved surface from the left
    Use hand preshape Pinch for component Body
      Approach target is point on rim from top
      Opposition 1: inside of chosen rim point
      Opposition 2: outside of chosen rim point
    Use hand preshape Grip for component Body
      Approach target is center of contour from top
      Opposition 1: contour from left
      Opposition 2: contour from right
      Opposition 3: contour from front

Since the object is assumed to be a glass, the system generates the wrap preshape for the body in three different configurations (approach from the front, the left, and the right). This is because it assumes a curved surface behind the object upon which it may place fingers. In addition, grip and pinch preshapes are generated for the rim from above.

Example 2: Unknown Object. When the glass and mug objects are removed from the database, and the same sensed object is input, the system can no longer categorize the object as one with which it is familiar.
Therefore, it generates the following:

    I don't recognize the object
    I'm not making any assumptions about object identity
    The object may be grasped as follows:
    Use hand preshape Wrap for component Body
      Approach target is the center of the curved surface from the front
      Opposition 1: curved surface from the right
      Opposition 2: curved surface from the left
    Use hand preshape Pinch for component Body
      Approach target is point on rim from top
      Opposition 1: inside of chosen rim point
      Opposition 2: outside of chosen rim point
    Use hand preshape Grip for component Body
      Approach target is center of contour from top
      Opposition 1: contour from left
      Opposition 2: contour from right
      Opposition 3: contour from front

The wrap preshapes which approach from the left and the right are not generated this time because they would require that a finger be placed on the unexplored back-side of the object.

Example 3: Mug. Our final example illustrates a set of grasps generated for an object with multiple components. The object is a mug with the handle visible and to the left of the body.
The system generates the following:

    Object hypothesis is: mug
    I'm going to assume that the object is mug
    The object may be grasped as follows:
    Use hand preshape Wrap for component Body
      Approach target is the center of the curved surface from the right
      Opposition 1: curved surface from the front
      Opposition 2: curved surface from the back
    Use hand preshape Pinch for component Body
      Approach target is point on rim from top
      Opposition 1: inside of chosen rim point
      Opposition 2: outside of chosen rim point
    Use hand preshape Grip for component Body
      Approach target is center of contour from top
      Opposition 1: contour from left
      Opposition 2: contour from right
      Opposition 3: contour from front
    Use hand preshape Wrap for component handle
      handle is on the left
      Approach target is the center of the part from the left
      Opposition 1: part from the front
      Opposition 2: part from the back

Since the handle is on the left side of the body, the only wrap preshape generated for the body is an approach from the right. This is because the handle would be in the way if a wrap preshape were attempted from any other approach direction. The system has also generated a wrap preshape for the handle of the mug.

VII. Summary and Future Work

In this paper, we have presented a rule-based reasoning system and an object representation paradigm for reasoning about the precontact stage of a grasp. During this stage, object features and high-level knowledge are used in a feedforward manner to generate a hand preshaping, wrist/hand orientation, and ballistic reach toward the object. The spatial polyhedron was introduced as a mechanism for representing and reasoning about generic objects. Rules which utilize the information embodied in this representation were used to generate a set of grasp preshapes and reach parameters for the given object.

Future work in this research will continue in two directions.
First, the set of grasps generated by the system must be pruned down to one, which will be sent to the robotic devices. The grasp choice is currently left to the operator. An obvious extension to the high-level reasoner presented here would prune the grasps based on the task and the perceived state of the world. Thus, if the task was to "pour from the glass", then any grasp which placed the hand over the cavity would be pruned. Such knowledge could be embodied in a set of rules utilized by a task planner. The perceived state of the world will affect grasp choice as well. A grasp which requires the hand to approach the object from the side would be pruned if the object were surrounded on all sides by other objects. The reasoning might also be extended to allow for the use of multiple hands: the system currently generates a set of grasps for the object independent of the devices. Rules which map these grasps to different hands in a coordinated manner might be used to allow the system to grasp objects which are too large for a single hand.

The second direction in which this research will proceed is to extend the grasp procedure to include the postcontact stage. This will involve bringing the hand (equipped with tactile sensors) into contact with the object and then utilizing a robotic tactile perception system, such as that described in Stansfield [Stansfield87], to fine-tune the grasp.

References

[Arbib83] Arbib, M., T. Iberall, and D. Lyons. "Coordinated Control Programs for Movements of the Hand." COINS TR 83-25. University of Massachusetts, Amherst. August, 1983.

[Cutkosky87] Cutkosky, M. and P. Wright. "Modeling Manufacturing Grips and Correlations with the Design of Robotic Hands." Proceedings of the IEEE Conference on Robotics and Automation. Raleigh, N.C. April, 1987. pp. 1533-1539.

[Iberall87] Iberall, T. "The Nature of Human Prehension: Three Dextrous Hands in One." Proceedings of the IEEE Conference on Robotics and Automation. Raleigh, N.C. April, 1987. pp. 396-401.

[Jeannerod] Jeannerod, M. "Intersegmental Coordination During Reaching at Natural Visual Objects." In Attention and Performance, IX. Long, J. and A. Baddley, Eds. Lawrence Erlbaum. Hillsdale, N.J. 1978.

[Klatzky86] Klatzky, R., B. McCloskey, S. Doherty, J. Pellegrino, and T. Smith. "Hand Shaping and Object Processing." Cognitive Science TR 8605. University of California, Santa Barbara. 1986.

[Koenderink79] Koenderink, J. and A. van Doorn. "The Internal Representation of Solid Shape with Respect to Vision." Biological Cybernetics. Vol. 32 (1979). pp. 211-216.

[Lyons85] Lyons, D. "A Simple Set of Grasps for a Dextrous Hand." Proceedings of the IEEE Conference on Robotics and Automation. St. Louis, MO. April, 1985. pp. 588-593.

[Stansfield87] Stansfield, S. "Visually-guided Haptic Object Recognition." Ph.D. Thesis. University of Pennsylvania. December, 1987.

[Tomovic87] Tomovic, R., G. Bekey, and W. Karplus. "A Strategy for Grasp Synthesis with Multifingered Robot Hands." Proceedings of the IEEE Conference on Robotics and Automation. Raleigh, N.C. April, 1987. pp. 83-89.
Benjamin J. Kuipers and Yung-Tai Byun
Department of Computer Sciences
The University of Texas at Austin
Austin, Texas 78712

Abstract

We present a qualitative method for a mobile robot to explore an unknown environment and learn a map, which can be robust in the face of various possible errors in the real world. Procedural knowledge for movement, a topological model for the structure of the environment, and metrical information for geometrical accuracy are separately represented in our method, whereas traditional methods describe the environment mainly by metrical information. The topological model consists of distinctive places and local travel edges linking nearby distinctive places. A distinctive place is defined as the local maximum of some measure of distinctiveness appropriate to its immediate neighborhood, and is found by a hill-climbing search. Local travel edges are defined in terms of the local control strategies required for travel. How to find distinctive places and follow edges is the procedural knowledge which the robot learns dynamically during the exploration stage and which guides the robot in the navigation stage. An accurate topological model is created by linking places and edges, and allows metrical information to be accumulated with reduced vulnerability to metrical errors. We describe a working simulation in which a robot, NX, with range sensors explores a variety of 2-D environments, and we give its successful results under varying levels of random sensor error.

1. Introduction

Traditional approaches to robot exploration, navigation and map-learning, based on the accumulation of accurate metrical descriptions of the environment, are highly vulnerable to metrical inaccuracy in sensory devices and movement actuators [Brooks, 1985; Chatila and Laumond, 1985; Koch et al., 1985; Moravec and Elfes, 1985; Rao et al., 1986; Turchan and Wong, 1985; Kadonoff et al., 1986].
Recent work taking a more qualitative approach [Kuipers and Byun, 1987; Levitt et al., 1987] shows great promise of overcoming the fragility of purely metrical methods. Humans perform well at spatial learning in spite of sensory and processing limitations [Kuipers, 1979] and partial knowledge [Kuipers, 1983]. Many cognitive scientists [Lynch, 1960; Piaget and Inhelder, 1967; Siegel and White, 1975] observe that a cognitive map is organized into successive layers.*

*Support for this research is provided by NASA, under grant number NAG9-200.

These results suggest that the basic element of a useful and powerful description of the environment is a topological description. The layered model consists of the identification and recognition of landmarks and places, procedural knowledge of routes, a topological model of connectivity, order, and containment, and metrical information of shapes, distance, direction, orientation, and local and global coordinate systems. Our approach attempts to apply this method to the problem of robot exploration and map-learning.

The central description of the spatial environment in our qualitative approach is a topological model, as in the TOUR model [Kuipers, 1978]. The model consists of a set of nodes and arcs, where nodes represent distinctively recognizable places in the environment, and arcs represent travel edges connecting them. The nodes and arcs are defined procedurally in terms of the sensorimotor capabilities of the robot. Metrical information is added on top of the topological model.

A place in the environment corresponding to a node in the topological model must be locally distinctive within its immediate neighborhood by one geometric criterion or another. We introduce locally meaningful "distinctiveness" measures defined on a subset of the sensory features, by which some distinctive features can be maximized at a distinctive place.
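To make the layered organization concrete, the following is a small sketch of a topological map whose nodes carry place signatures and whose arcs carry the control strategy needed to traverse them. This is our own illustration, not the authors' implementation; every class and field name here is invented.

```python
# Sketch of the layered map: a topological graph whose nodes are
# distinctive places and whose arcs are travel edges. Metrical
# information is attached on top of the topology, as in the paper.

class Place:
    def __init__(self, name, signature):
        self.name = name            # e.g., "P1"
        self.signature = signature  # distinctiveness measures maximized here
        self.edges = []             # travel edges leaving this place
        self.metrical = {}          # accumulated local geometry

class Edge:
    def __init__(self, start, end, control_strategy):
        self.start = start                        # Place where the edge begins
        self.end = end                            # Place where the edge ends
        self.control_strategy = control_strategy  # e.g., "Follow-Midline"
        self.metrical = {}                        # distance, width, orientation change

class TopologicalMap:
    def __init__(self):
        self.places = {}

    def add_place(self, name, signature):
        self.places[name] = Place(name, signature)
        return self.places[name]

    def connect(self, a, b, control_strategy):
        # Directed arc: following the stored control strategy from a
        # should bring the robot into the neighborhood of b.
        edge = Edge(self.places[a], self.places[b], control_strategy)
        self.places[a].edges.append(edge)
        return edge

m = TopologicalMap()
m.add_place("P1", {"symmetry": True, "equal-distance": True})
m.add_place("P2", {"temporal-discontinuity": True})
m.connect("P1", "P2", "Follow-Midline")
```

The point of the sketch is that route knowledge lives on the arcs (which strategy to execute), while recognition knowledge lives on the nodes (which signature to hill-climb toward).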
We define the signature of a distinctive place to be the subset of features, the distinctiveness measures, and the feature values which are maximized at the place. A hill-climbing search is used to identify and recognize a distinctive place when the robot is in its neighborhood. When exploring, both the signature and the local maximum must be found. When returning to a known place, the robot is guided by the known signature.

Travel edges corresponding to arcs are defined by local control strategies which describe how the robot can follow the link connecting two distinctive places. This local control strategy depends on the local environment, and there may be several possible strategies. For example, in one environment, following the midline of a corridor may be reasonable; in another environment, maintaining a certain distance from a single boundary on one side is appropriate.

We have implemented and tested our approach successfully with a working simulator. We will discuss our method in detail, simulation results, and further extensions.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

2. Topological Model with Procedural and Metrical Information

The basic structure of a map, in our approach, is the topological model, of which the nodes are distinctive places and the arcs are travel edges. We discuss how to define distinctive places and travel edges, and their procedural and metrical descriptions, with a robot instance, NX.

2.1 A Robot Instance NX

We hypothesize that our approach is supported by any sensorimotor system that provides sufficiently rich sensory input and takes sufficiently small steps through the environment. For simplicity and concreteness, we currently define a specific robot instance, NX, which has sixteen sonar-type distance sensors covering 360 degrees with equal angular spacing between adjacent sensors, two tractor-type chains for movement, and an absolute compass for global orientation.
Thus the input to NX is a vector of time-varying, real-valued functions [S1(t), S2(t), ..., S16(t), Compass(t)]. Although we use NX to test our qualitative method, our approach does not depend critically on the choice of sensors and movement actuators.

2.2 Distinctive Places

In order to have the nodes of the network-structured topological model, we need to look for distinctive places (DPs). If we consider the geometry of a simple 2-D local neighborhood in Figure 1, we can argue that the dotted lines define a set of places that are qualitatively distinctive for one reason or another.

Figure 1: Distinctive points in a neighborhood.

There is clearly a place which is the most distinctive compared to its surroundings. Our approach attempts to find a suitable criterion for defining a maximally distinctive place in any given neighborhood. In environments dominated by obstacles and extended landmarks, we believe that a map based on DPs and connecting edges provides a more robust topological representation than, for example, regions related by adjacency. In an environment dominated by remote, point-like landmarks, the reverse may be true [Levitt et al., 1987].

In order to formulate locally meaningful "distinctiveness" measures, we need to determine which sensory characteristics provide the distinguishing features by which a place becomes locally distinctive. We hypothesize that any reasonably rich sensory system will have distinctiveness measures that can be defined in terms of low-level sensory input. Note that it is not necessary for a place to be globally distinctive; it is only necessary that it be distinguished from other points in its immediate neighborhood.

A set of production rules is used to decide whether NX is in the neighborhood of a DP and what distinctive features can be maximized in that neighborhood. Each rule consists of assumptions and a decision for the distinctive features.
Here is an example:

    (defrule DP-R10
      (if (>= (number-of-objects) 3)
          (not (all-objects-far-away))
          (not (there-is-wide-open-space)))
      (then (am-I-in-neighborhood-DP is 'DP-symm-equal)))

Once NX knows what distinctive features can be maximized locally in the neighborhood of a DP, NX performs a hill-climbing search around the neighborhood, looking for the point of maximum distinctiveness (e.g., minimizing differences of distances to near objects, if DP-R10 is true). When a DP is identified, it is added to the topological model with its distinctiveness measures, connectivity to edges, and metrical information.

The individual distinctiveness measures are an open-ended, domain- and sensor-specific set of measures. For our current robot, the measures we can define include the following:

- Extent of distance differences to near objects.
- Extent and quality of symmetry across the center of the robot or a line.
- Temporal discontinuity in one or more sensors, given a small step.
- Number of directions of reasonable motion into open spaces around the robot.
- Temporal change in the number of directions of motion provided by the distinct open spaces, with a small step.
- The point along a path that minimizes or maximizes lateral distance readings.

We summarize the levels of description of DPs (an example is given in Section 3):

- Procedural knowledge for a DP: Ability to recognize the neighborhood, knowledge of what features can be maximized in the neighborhood, and ability to perform the hill-climbing search to get to the DP. Learned in the exploration stage and used in the navigation stage.
- Topological description of a DP: A node in the topological model, connected to edges and other DPs. Added to the topological model when it is found and possibly updated during the process of constructing the model.
- Metrical information about a DP: Local geometry such as directions to OPEN-SPACE, shape of near objects, distances and directions to objects, etc.
Continuously accumulated in the exploration and navigation stages and averaged to minimize metrical error.

2.3 Travel Edges

Travel edges are defined in terms of local control strategies (LCSs). Once a DP has been identified, the robot moves to another place by choosing an appropriate control strategy. While following an edge with a chosen strategy, the robot continues to analyze its sensory input for evidence of new distinctive features. Once the next place has been identified and defined, the arc connecting the two DPs is defined procedurally in terms of the LCS required to follow it.

The edges followed during exploration are defined by some distinctiveness criterion that is sufficient to specify a one-dimensional set of points. Therefore, following our control strategies, the robot will follow the midline of a corridor, or walk along the edge of a large space, but will not venture into the interior of a large space, where the points have no qualitatively distinctive characteristics.

As shown in Figure 2, when the robot is following a known edge from one node to another, it starts by using the hill-climbing algorithm to locate itself at the DP corresponding to the first node. It then follows the LCS associated with the arc and ends up somewhere in the neighborhood of the second place. Then the hill-climbing algorithm brings it to the DP corresponding to the second node. This method uses continuous sensory feedback to eliminate cumulative error.

Figure 2: Movement with error (hill-climbing; edge-following).

A set of production rules to decide a proper LCS depending on the current sensory information is given to NX. An example of a rule is given below.
    (defrule LCS-rule10 ()
      (if (>= (number-of-objects) 2)
          (TwoWalls-near-to-each-other)
          (TwoWalls-almost-opposite-direction))
      (then (proper-LCS is 'pass-on-the-midline)))

The current local control strategies are:

- Follow-Midline
- Walk-along-Object-Right
- Walk-along-Object-Left
- Blind-Step

In summary for edges (an example is given in Section 3):

- Procedural knowledge: Ability to choose and perform a proper LCS, and knowledge of which control strategy defines the edge. Learned in the exploration stage and used in the navigation stage.
- In the topological model: An edge with direction, connected to two end-places. Added to the topological model when the second end-place is found.
- Metrical information: Curvature, distance, change of orientation, lateral width while traveling, etc. Continuously accumulated in the exploration and navigation stages and averaged to minimize metrical error.

2.4 Position Referencing Problem

While NX explores the given environment, it needs to know its current position. The current position is described topologically rather than metrically. When NX is at a DP, the current position is described by the current place name, the current orientation in degrees, and the travel edge through which NX has come to the current place from the previous place. When NX is on an edge, the current position is described by the previous place name, the current orientation, and an indication "ON-EDGE".

2.5 Matching Process to Determine the Current Position

When NX reaches a place during its exploration, the identification of the place is its most important task. If a place has been visited before and NX comes back to that place, NX should recognize it. A new place must be recognized as new, even if it is very similar to one of the previously visited places. Our matching process is done topologically as well as metrically.
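The metrical half of that matching allows a certain looseness so that small sensory variations do not defeat recognition. It might be sketched as follows; the feature names and the 15% tolerance are our own illustrative choices, not values taken from NX.

```python
# Sketch of loose metrical matching between a stored place
# description and a currently sensed one. A match requires every
# feature to agree within a relative tolerance; the 15% figure is
# purely illustrative.

def descriptions_match(stored, current, tolerance=0.15):
    if stored.keys() != current.keys():
        return False
    for feature, stored_value in stored.items():
        limit = tolerance * max(abs(stored_value), 1e-9)
        if abs(current[feature] - stored_value) > limit:
            return False
    return True

stored = {"dist-to-wall-1": 46.0, "dist-to-wall-2": 38.0}
noisy  = {"dist-to-wall-1": 44.0, "dist-to-wall-2": 40.5}   # sensor jitter
other  = {"dist-to-wall-1": 90.0, "dist-to-wall-2": 38.0}   # a different place
```

Because such a loose match can produce false positives between genuinely similar places, it is only a filter; the topological rehearsal procedure described next makes the final decision.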
While NX explores, it uses an exploration agenda to keep information about where, and in which direction, it should explore further to complete its exploration. If (Place1 Direction1) is in the exploration agenda, it means that Direction1 is a reasonable direction for travel (e.g., it points to OPEN-SPACE) from Place1 and has not been explored. Therefore, in order to delete (Place1 Direction1) from the exploration agenda, NX should either visit Place1 later and leave in the direction Direction1, or return to Place1 from the opposite direction.

When NX gets to a place in the exploration stage, the exploration agenda can be either empty or not empty. If the exploration agenda is empty, it means that there is no known place with directions which require further exploration. Therefore the current place must be new, unless NX has intentionally returned to a previously known place through a known edge. If the exploration agenda is not empty, the current place could be one of the places saved in the exploration agenda. This is only possible when the current place description is similar to that of a place saved in the exploration agenda, and the difference between the current orientation and the direction saved on the agenda is approximately 180 degrees.

The current and stored place descriptions are compared metrically, allowing a certain amount of looseness of match to provide robustness in the face of small variations in sensory input. But mismatching is possible. If there is any possibility of a mismatch, the topological matching process is initiated. From the topological model and procedural knowledge of edges and nearby DPs, the rehearsal procedure [Kuipers, 1985] is activated to test the hypothesis that the current place is equal to a previously known place. NX constructs routes between the known place and adjacent DPs. It then tries to follow the routes and return to the current place. If the routes perform as predicted, then the current place matches the previously known one, and NX has identified the current place. If not, then the current place must be a new place with the same sensory description as the old one (e.g., two intersections in the first environment in Figure 5). For any fixed search radius of this topological match, it is possible to construct an environment that will yield a false positive match. However, if there is a reference place that is somehow marked so as to be globally unique (e.g., "home"), false positives can be eliminated.

3. Simulator and Results

We have developed a simulation system, NX-SIM. Figure 3 is a copy of the simulation window. NX is represented as a triangle at P4 in Figure 3. The metrical lines in the "Measured Distances" box in the upper right corner show the 16 sensor readings at the current instant. The length of each line represents the sensor reading perceived by the robot. In this example, the sensor readings are subject to a 10% random error, so the true distance is indicated by an "x" (perceived only by the researchers).

At the top left corner, the result of analyzing each currently considered distinctiveness measure is displayed. NX was located near Place1 initially. The first peak on the second row shows the symmetric and equal-distance analysis while it tried to find Place1. The second, the third, and the fourth peaks correspond to Place2, Place3, and Place4, respectively.

We show the graphic exploration results for three different error rates: 0% error in Figure 4a, 5% error in Figure 4b, and 10% error in Figure 4c. NX starts near P1 in each case, marked S. Pi means Place-i and Ei means Edge-i. We will trace NX's movement with Figure 4c very briefly. It constructs the correct map successfully in all three cases, but careful examination of Figures 4a-c reveals subtle differences.

Starting from S in Figure 4c, NX chooses Pass-on-the-midline and moves downward.
Because of sensory error, it does not initially recognize that it is in a neighborhood. But while continuing to perform Pass-on-the-midline, it recognizes a qualitative change, and so it performs a hill-climbing search to minimize the difference of distances to near objects. This search turns it around, converges on a local maximum, and defines the place Pl. If we look at Figure 4a and 4b, we do not see this kind of backtracking around Pl. NX recognizes the neighborhood sooner than in Figure 4c. Once NX finds Pl, it records Pi’s information in the map as follows. PLACE : Name=Pl Procedural: Symm-Equal (i.e., Symmetry & Equal distance) Topological: Nil Metrical: Direction-requiring-more-exploration: 345 & 282 deg. Angle and Distance to Objects: (70 deg. 46 units) (317 38) (160 51) There is no topological information for Pl at this time. There are two directions in which NX can go from Pl. If there is no particular reason to choose an indicated direction, it chooses the direction which requires the least rotation. It rotates to the direction toward P2 and keeps the other direction on the agenda This selection rule, of course, would cause NX to lose badly in an infinite environment. An alternate rule, selecting the direction requiring the most rotation, would cause the explored region to grow roughly concentrically. While NX is moving ahead from Pl, it chooses Pass-on-the-midline and gathers metrical information about the edge such as dis- tance, shape, width of the edge, change of the width, and so on. Then NX finds the second DP, P2, which is characterized by Temporal-discontinuity. Status indicators; NBD: a neighborhood of a DP EDG: on an edge EXP: Exploration REN: Rehearsal NAV: Navigation Am : a current place Was : a previous place TO : a destination place (Values of several distinctive measures are shown on the left top, and measured dis- tances with error are shown on the right top.) Figure 3. NX-SIM Window Kuipers and Byun 777 Figure 4-a. 
Exploration result with 0% sensor error
Figure 4-b. Exploration result with 5% sensor error
Figure 4-c. Exploration result with 10% sensor error

EDGE : Name=E1
  Procedural: Pass-on-the-Midline
  Topological: from P1 to P2
  Metrical: Travel-history
    Distance : ((DIR+ (8 10) (5 6) (11 9) (18 18)))
    Lateral-width : ((DIR+ (81 ALMOST-STD 43)))
    Minimum-width : 80
    D-Orientation : ((DIR+ -8))

Once P2 has been defined, the above is recorded in the map for E1. Travel-history is a record of the number of rotations of each chain. DIR+ specifies the topological direction from P1 to P2. (81 ALMOST-STD 43) means that the distance between the two walls is approximately 81 units and almost steady while it moves approximately 43 steps. The minimum distance between the two walls along E1 is 80 units. D-Orientation gives the net change of orientation in degrees along edges. NX also updates the topological information of P1 at this moment, since E1 is connected to P1.

While NX leaves P2, NX thinks that Pass-on-the-midline is the appropriate LCS. You can see a line stretching in the direction between E2 and E6. But it soon realizes that Move-along-object-on-left or Move-along-object-on-right is more appropriate. Because it prefers smaller rotation angles, it chooses Move-along-object-on-left. We can see a significant difference between this and what happens in Figures 4a and 4b, as the result of the different amounts of error. However, the exploration process recovers from temporary errors and is successful in all three cases. Then NX finds P3, E3, P4, E4, P5, E5, and P6. It moves along E6 and finds a place which looks similar to P2. The rehearsal procedure is activated for topological matching. Notice here that NX does not make the same trace stretching in the middle direction between E2 and E6 as before, because it already knows that if the current place is P2, Move-along-object-on-left is the proper LCS.
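One way to transcribe the PLACE and EDGE records above into executable form is as a pair of record types. This is our illustration, not the authors' implementation; the field names and dictionary keys are ours.

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str                       # e.g. "P1"
    procedural: str                 # distinctiveness measure, e.g. "Symm-Equal"
    topological: list = field(default_factory=list)  # connected edges (Nil at first)
    metrical: dict = field(default_factory=dict)     # angles/distances to objects
    agenda: list = field(default_factory=list)       # directions still to be explored

@dataclass
class Edge:
    name: str                       # e.g. "E1"
    procedural: str                 # local control strategy along the edge
    start: str                      # place where the edge begins
    end: str                        # place where the edge ends
    metrical: dict = field(default_factory=dict)     # travel history, widths, etc.

# The E1 record from the text, transcribed into this form:
e1 = Edge(name="E1", procedural="Pass-on-the-Midline", start="P1", end="P2",
          metrical={"minimum_width": 80, "d_orientation": -8})
```

Updating P1's topological slot when E1 is defined then amounts to appending the edge name to `topological`, mirroring the update described in the text.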
We need to emphasize that a place visited several times need not be at exactly the same location in the environment. Accumulated metrical information and the rest of the exploration are discussed in detail in [Kuipers and Byun, 1988]. We present more results with various environments in Figure 5.

Figure 5. More exploration results

4. Summary and Future Work

We have demonstrated a successful, robust, qualitative robot exploration and mapping method. The results show that our method can solve several of the problems of traditional approaches. The major achievement of our approach is the elimination of cumulative metrical error. Key development tasks, under way now or planned for the near future, are listed below.

- Handling of systematic error (e.g., the acoustic peculiarities of sonar).
- Use of metrical information for optimizing routes, edge-following procedures, and correction of topological errors.
- Dynamic worlds (e.g., doors opening and closing; moving pedestrians).
- Removal of dependence on the global compass, and use of local orientation frames and their connections.
- Hierarchical representation of complex maps.

References

[Brooks, 1985] R. A. Brooks. Visual Map Making for a Mobile Robot, IEEE Proc. International Conference on Robotics and Automation, pp 824-829.
[Chatila and Laumond, 1985] R. Chatila and J. Laumond. Position Referencing and Consistent World Modeling for Mobile Robots, IEEE Proc. International Conference on Robotics and Automation, pp 138-170.
[Kadonoff et al., 1986] M. B. Kadonoff, F. Benayard-Cheriff, and A. Franklin. Arbitration of Multiple Control Strategies for Mobile Robots, Proc. SPIE, Mobile Robots, Cambridge, MA.
[Koch et al., 1985] E. Koch, C. Yeh, G. Hillel, A. Meystel and C. Isik. Simulation of Edge Planning for a System with Vision and Map Updating, IEEE Proc. International Conference on Robotics and Automation, pp 147-170.
[Kuipers, 1978] B. Kuipers. Modeling spatial knowledge.
Cognitive Science, 2: pp 129-153.
[Kuipers, 1979] B. Kuipers. On representing commonsense knowledge. In N. V. Findler (Ed.), Associative Networks: The Representation and Use of Knowledge by Computers. New York: Academic Press.
[Kuipers, 1983] B. Kuipers. Modeling human knowledge of routes: Partial knowledge and individual variation. Proc. AAAI-83, Washington, D.C.
[Kuipers, 1985] B. Kuipers. The Map-Learning Critter. UT AI Lab TR-18.
[Kuipers and Byun, 1987] B. Kuipers and Y. T. Byun. A Qualitative Approach to Robot Exploration and Map-Learning, Proc. Spatial Reasoning and Multi-Sensor Fusion Workshop, Chicago.
[Kuipers and Byun, 1988] B. Kuipers and Y. T. Byun. A Robust Qualitative Method for Robot Exploration and Map-Learning, UT AI Lab TR-73.
[Levitt et al., 1987] T. S. Levitt, D. T. Lawton, D. M. Chelberg and P. C. Nelson. Qualitative Navigation, Proc. DARPA Image Understanding Workshop, Los Altos.
[Lynch, 1960] K. Lynch. The Image of the City, Cambridge, MA: MIT Press.
[Moravec and Elfes, 1985] H. Moravec and A. Elfes. High Resolution Maps from Wide Angle Sonar, IEEE Robotics and Automation, pp 116-121.
[Rao et al., 1986] N. S. V. Rao, S. S. Iyengar, C. C. Jorgensen and C. R. Weisbin. Robot Navigation in an Unexplored Terrain, Journal of Robotic Systems, Vol. 3, No. 4.
[Piaget and Inhelder, 1967] J. Piaget and B. Inhelder. The Child's Conception of Space, New York: Norton.
[Siegel and White, 1975] A. W. Siegel and S. H. White. The development of spatial representations of large-scale environments. In H. W. Reese (Ed.), Advances in Child Development and Behavior, New York: Academic Press.
[Turchan and Wong, 1985] M. P. Turchan and A. K. C. Wong. Low Level Learning for a Mobile Robot: Environment Model Acquisition, Proc. The 2nd International Conference on AI Applications, Miami Beach, FL, pp 156-161.
1988
59
803
Invariant Logic: A Calculus for Problem Reformulation

Michael R. Lowry
Stanford Robotics Laboratory, Box 3350, Stanford CA 94309
and Kestrel Institute, 1801 Page Mill Road, Palo Alto CA 94304
lowry@kestrel.arpa

Abstract

Symmetries abound in nature. Observing symmetries often provides the key to discovering internal structure. In problem solving, observing and reasoning about symmetries is a powerful tool for shifting viewpoints on a problem. A calculus for reasoning about problem symmetries has been developed, called Invariant Logic. Invariant Logic is partially implemented in STRATA, a system which synthesizes algorithms through problem reformulation. In STRATA, Invariant Logic is used to reason about generalized problem symmetries for several purposes. The first purpose is as a calculus for generating expressions denoting problem symmetries. The second purpose is problem abstraction: generating abstract problem descriptions which denote models in which the problem symmetries have been collapsed. The third purpose is problem reduction: specializing a problem description by adding constraints in order to realize performance gains.

1 Introduction

One hundred years ago mathematics was undergoing a revolution. The Kantian dictate that Euclidean geometry is the only rationally conceivable basis for the physical universe had been debunked. Numerous alternative geometries, each self-consistent, were being discovered, axiomatized, and developed. Felix Klein found a unifying principle for relating and classifying the various geometries: Invariant Theory. The key idea is to classify mathematical structures by the transformations under which they are invariant. Invariant Theory has achieved wide influence in mathematics, physics (including relativity and quantum mechanics), and computer science. The calculus developed here is based upon relatively simple aspects of Invariant Theory.
This paper presents initial work on Invariant Logic, a tool to automate reasoning about symmetries (denoted by groups of transformations) and invariants. In the STRATA system, Invariant Logic is used for problem reformulation, generating abstract data types, and algorithm synthesis. Section 2 overviews the basic concepts of Invariant Logic, illustrated with Euclidean symmetries of geometric figures. Section 3 shows how Invariant Logic can be used to abstract the representation of a simple combinatorial problem. Section 4 demonstrates Invariant Logic applied to generating abstract data types, when given a domain theory and a problem specification. Section 5 explores mathematical aspects related to problem reformulation, including duality and isomorphism between theories. More technical detail can be found in [Lowry, 1988].

Prior work in problem reformulation and algorithm synthesis has addressed specific aspects of the use of symmetry. Amarel [Amarel, 1968] showed how the symmetry under time reversal of solutions to the missionary and cannibals problem could be used to halve the depth of the search space. Cohen [Cohen, 1977] later generalized this work to the class of state space search problems. Korf [Korf, 1980] gave many interesting examples of the potential use of symmetries in abstracting problem representations, and developed a set of primitive "isomorphic" reformulation rules. The mathematical basis for isomorphism between theories is formalized in Section 5 of this paper. Kokar's COPER system [Kokar, 1986] discovers equations for physical laws from experimental data. COPER uses the same mathematical foundation as Invariant Logic, though in a different setting: dimensional analysis. McCartney's Medusa system [McCartney, 1987] uses predefined geometric dual transforms for synthesizing algorithms in computational geometry.
2 Symmetry, Invariance, and Transformations

This section describes the underlying concepts of Invariant Logic using geometric figures and shapes.

The double-headed eagle, crest of the Dukes of Savoy, has bilateral symmetry. It is mapped to itself through reflection about the line l. Reflection about l defines a one-to-one transformation R which leaves the figure invariant. R maps point p to point p', and vice versa. Note that R is its own inverse; applying R twice takes p back to itself: R compose R = identity. R and the identity transformation form a group of transformations. A group of transformations is a set of transformations which includes the identity, includes an inverse for each transformation, and is closed under composition. Closure under composition means that two transformations composed together result in another transformation from the group. In order to be invertible, each transformation must be one-to-one. Formally, the group elements are the transformations, and the group operation is composition of transformations.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

The hexagram, or Star of David, has both rotational and reflective symmetries. It is invariant under the six rotations about its center (multiples of 60 degrees) and six axes of reflection. In Invariant Logic, this is denoted: Invariant(hexagram, Rotations Join Reflections). The Join operation takes two groups of transformations, forms the union, and then generates the closure of this union under composition. (An interesting geometric fact is that the closure of the reflective symmetries includes the rotational symmetries.) The Meet operation takes two groups of transformations and returns their intersection. The meet of the rotational symmetries and the 3 reflective symmetries with axes through apexes of the hexagram is the set of rotations of multiples of 120 degrees.
This is because two reflections whose axes form an angle of n x 60 degrees generate a rotation of n x 120 degrees. The groups of transformations which leave a geometric figure invariant form a lattice structure with respect to its subgroups, illustrated below for the hexagram. A subgroup is a subset of transformations which is closed under composition and inverses. Notice that as more subgroups are joined together they converge upon the total group of transformations. The search space is commutative and convergent. In later sections of the paper, problem specifications will be abstracted by discovering problem symmetries and incorporating these symmetries into the problem formulation. Because of the commutative and convergent lattice structure of the subgroups of a group, search control is a minor issue in problem abstraction using Invariant Logic.

(Figure: the subgroup lattice of the hexagram's symmetry group, from the identity subgroup up through the rotations of multiples of 120 degrees to the full symmetry group.)

The orbit of an object under a transformation group is the equivalence class of objects to which it is transformed. It is an equivalence class because a transformation group includes the identity (reflexivity), includes inverses (symmetry), and is closed under composition (transitivity). The metaphorical origin of the term orbit is illustrated by the orbits of the triangles in a hexagram under rotations of multiples of 120 degrees. The black triangles are mapped to each other, as are the checkered triangles, thereby forming two equivalence classes of triangles in the hexagram. Each equivalence class can be generated by a representative element and the group of transformations. Symmetry can be used for simplifying representations. With bilateral symmetry only half a figure needs to be given, along with the axis of reflective symmetry. The hexagram, considered as a set of line segments, can be represented by an isosceles triangle (one of the apexes) and the transformation group defined by the six rotations: Hexagram = Apply(Rotations, triangle).
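Join, Meet, and orbits can be made concrete by representing each symmetry as a permutation of the hexagram's six outer vertices. The following sketch is ours, not part of Invariant Logic's implementation; it reproduces the facts quoted above, including the parenthetical that closing the reflections under composition yields the rotations as well.

```python
def compose(p, q):
    # (p compose q)[i] = p[q[i]]; permutations are tuples over 0..n-1
    return tuple(p[i] for i in q)

def closure(generators):
    # Close a set of permutations under composition. A finite set of
    # invertible maps closed under composition is a group (inverses
    # arise as positive powers of finite-order elements).
    n = len(next(iter(generators)))
    group = {tuple(range(n))}          # start from the identity
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for s in generators:
            c = compose(s, g)
            if c not in group:
                group.add(c)
                frontier.append(c)
    return group

def join(g1, g2):
    return closure(g1 | g2)            # union, then closure under composition

def meet(g1, g2):
    return g1 & g2                     # intersection of two groups is a group

def orbits(group, points):
    out, seen = [], set()
    for x in points:
        if x not in seen:
            orb = {g[x] for g in group}
            seen |= orb
            out.append(orb)
    return out

# vertex i -> (i+1) mod 6 is rotation by 60 degrees
rotations = closure({tuple((i + 1) % 6 for i in range(6))})
# reflection about axis m: i -> (m - i) mod 6, for all six axes
reflections = closure({tuple((m - i) % 6 for i in range(6)) for m in range(6)})
# only the three axes through the apexes 0, 2, 4
apex_reflections = closure({tuple((2 * k - i) % 6 for i in range(6)) for k in (0, 2, 4)})

hexagram_group = join(rotations, reflections)      # full symmetry group, order 12
rot_120 = meet(rotations, apex_reflections)        # rotations of 0, 120, 240 degrees
```

The orbits of the six vertices under `rot_120` split into {0, 2, 4} and {1, 3, 5}, exactly the two triangle equivalence classes described in the text.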
3 Extensional Invariant Reasoning

This section shows how Invariant Logic can be applied to problem abstraction when the semantics are given extensionally, i.e. as an explicit listing of a set.

The classic missionary and cannibals problem [Amarel, 1968] is to move 6 people across a river in a 2-man boat without any of them eating each other. Most accounts of this problem begin with the formulation that there are 3 missionaries and 3 cannibals, and a legal intermediate state is one in which the missionaries are never outnumbered, and then eaten, on either side of the river. However, this is already an abstract formulation which incorporates a great deal of relevant information about the legal states. An observer would not see this formulation; instead he would see distributions of specific people on the left and right banks. He would probably begin to notice patterns, especially nearly identical intermediate states which only differed by interchanging specific people.

Just as the triangles of a hexagram describe orbits under rotations of multiples of 120 degrees, so do the intermediate states describe orbits under the transformations defined by interchanging specific people. Assume the people are Mike, Mary, Max, Cal, Cindy, Cory, and that our observer notes that Mike, Mary, Max are mutually interchangeable, as are Cal, Cindy, Cory. He also notes that the left and right banks can be switched. On this basis he is able to partition the 34 legal states he observes into five orbits. Representative states in the five orbits are given below (the river is represented by !!):

nobody !! Cal Cindy Cory Mary Max Mike
Cal Cindy !! Cory Mary Max Mike
Cal Cindy Cory !! Mary Max Mike
Cal Mary !! Cindy Cory Max Mike
Cal !! Cindy Cory Mary Max Mike

These five representative states contain sufficient information to generate the complete set of 34 states, given the transformation group which arises from the possible interchanges among people and banks.
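The counts quoted above (34 legal states, five orbits) can be recovered by brute force. The following sketch is our illustration, not STRATA: it enumerates the person-level states and partitions them into orbits under the group of interchanges of missionaries, cannibals, and banks.

```python
from itertools import product, permutations

PEOPLE = ("Mike", "Mary", "Max", "Cal", "Cindy", "Cory")  # 3 M's, 3 C's

def legal(state):
    # Cannibals may never outnumber missionaries on a bank that
    # holds at least one missionary.
    for bank in "LR":
        m = sum(1 for p, b in zip(PEOPLE, state) if b == bank and p[0] == "M")
        c = sum(1 for p, b in zip(PEOPLE, state) if b == bank and p[0] == "C")
        if 0 < m < c:
            return False
    return True

states = {s for s in product("LR", repeat=6) if legal(s)}

def transforms():
    # Permute the missionaries (indices 0-2), permute the cannibals
    # (indices 3-5), optionally swap the banks: 3! * 3! * 2 = 72 elements.
    for pm in permutations(range(3)):
        for pc in permutations(range(3)):
            for swap in (False, True):
                yield pm, pc, swap

def apply_t(t, state):
    pm, pc, swap = t
    s = tuple(state[pm[i]] for i in range(3)) + \
        tuple(state[3 + pc[i]] for i in range(3))
    if swap:
        s = tuple("R" if b == "L" else "L" for b in s)
    return s

def orbit(state):
    return frozenset(apply_t(t, state) for t in transforms())

orbs = {orbit(s) for s in states}
# 34 legal states fall into exactly five orbits, as in the text
```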
Note that each of these representative states is the smallest representative in its orbit with respect to lexicographic ordering on names. Except for the fourth state, they are all lexicographically ordered. This is the basic idea for problem reduction through transformations: choosing a representation which satisfies additional constraints, such as being ordered. A constraint can be added to a problem description if some representative of each orbit satisfies the additional constraint. This is especially useful for algorithm synthesis, because a more efficient algorithm can often be synthesized when additional constraints can be assumed in an input-output specification.

While problem reduction chooses a representative for each orbit, problem abstraction generates an abstract description in terms of invariant properties for each orbit. Invariance under a transformation group is a filter to determine which properties are relevant to an abstract description. The value of an invariant property is shared by every element in an orbit. A set of invariant properties is complete if they uniquely determine an orbit from the abstract values and the transformation group.

As an example, let X be a set, P be the powerset of X, and R be a subset of P. Thus R is a set of subsets of X. Let AllPi(X) denote the group of all possible permutations of the elements in X. A permutation is a one-to-one mapping of the elements in a finite set to themselves. It can be thought of as a reordering of a sequence. If Invariant(R, AllPi(X)), this means that the size of the subsets in R is a complete set of invariants. Proof: Let r1 be an element of R, e.g. a subset of X, whose size is n1. Then for any other r2, a subset of X with size n1, there is a one-to-one transformation from the elements of r1 to the elements of r2. This transformation is contained in AllPi(X). For any subset of X, s1, whose size is not n1, there is no one-to-one transformation from r1 to s1.
Thus the orbit of r1 contains all subsets of equal size from X, and only the subsets of equal size. QED.

The following partial set of rules abstracts set-theoretic types in terms of invariant properties. Proofs similar to the one above can be found in [Lowry, 1988]. In the rules, MS is some mathematical structure which is being abstracted. In the missionary and cannibals example, MS is the set of legal intermediate states. MS is invariant under AllPi(X), where X is some set used in MS. R is a subtype of MS. The type declarations are based on the REFINE language. Map(domain, range) is the declaration for a partial function from domain to range. Set(domain) is the declaration for a set with elements from domain. Tuple(domain1...domainN) is the declaration for an ordered tuple with successive elements from domain1, domain2, etc. The abstractions of both the subtype and the extension are given.

1. The invariant of a subset of X is its size:
R : set(X) AND Invariant(MS, AllPi(X)) => R : integer
The extension is the size of the subset.

2. By the proof given above, and embedding rule 1:
R : set(set(X)) AND Invariant(MS, AllPi(X)) => R : set(integer)
The extension is a set of integers, representing the sizes of the subsets in R.

3. If the value of a multi-argument function is independent of one of its arguments, then delete the argument:
R : map(tuple(..X..), range) AND Invariant(MS, AllPi(X)) => R : map(tuple(....), range)
Extension: project out the argument whose domain is X.

4. A similar rule for a relation:
R : set(tuple(..X..)) AND Invariant(MS, AllPi(X)) => R : set(tuple(....))
Extension: project out the argument whose domain is X.

5. A function from domain to range can be transformed to a function from range to subsets of domain, i.e. the domain elements which map to a given range element. The invariant in this rule is the number of domain elements which map to a range element.
R : map(X, range) AND Invariant(MS, AllPi(X)) => R : map(range, integer); alternatively R : bag(range)
Extension: each range element is mapped to the number of elements in its inverse image.

6. Every function defines an equivalence relation on its domain: the elements which map to the same range element. This partitioning of the domain is the invariant in this rule:
R : map(Domain, X) AND Invariant(MS, AllPi(X)) => R : partition(Domain)
Extension: the domain is partitioned.

When X is only a subset of the domain or range Y, the following rules apply, where D is the set difference between Y and X. The extensions and subtypes are analogous to the previous rules, but involve tupling to separate X and D.

7. R : set(Y) AND Invariant(MS, AllPi(X)) => R : tuple(integer, set(D))

8. R : map(Y, range) AND Invariant(MS, AllPi(X)) => R : map(range, tuple(integer, set(D)))

9. R : map(Domain, Y) AND Invariant(MS, AllPi(X)) => R : tuple(partition(Subdomain1), map(Subdomain2, D))
Subdomain1 is the elements of Domain which map to X, and Subdomain2 is those which don't map to X.

These rules can be applied to obtain an abstract representation for the set of legal intermediate states in the missionary and cannibals problem. (These rules are not strictly compositional on subtypes; technical details can be found in [Lowry, 1988].) Each state is a mapping from people to locations, so the set of legal states has the following type:

set(map({Mike, Mary, Max, Cindy, Cal, Cory}, {left, right}))

The transformation group which leaves the 34 legal states invariant is defined using the AllPi(X) construction:

AllPi(Mike, Mary, Max) Join AllPi(Cal, Cory, Cindy) Join AllPi(left, right)

This transformation group is composed of three subgroups which are joined together. These subgroups will be used in successive rules to abstract the representation.
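Rule 1's invariant can be checked by brute force on a small example. In this sketch (ours, not STRATA's reasoning), the orbit of any subset of a four-element X under AllPi(X) is exactly the class of subsets of the same size, so size is a complete invariant.

```python
from itertools import permutations, combinations

X = (0, 1, 2, 3)
group = list(permutations(X))          # AllPi(X): all 4! = 24 permutations

def orbit(subset, group):
    # apply each permutation elementwise to the subset
    return {frozenset(p[i] for i in subset) for p in group}

subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
for s in subsets:
    # the orbit of s is exactly the subsets of X with the same size
    assert orbit(s, group) == {t for t in subsets if len(t) == len(s)}
```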
Rule 8 uses AllPi(Mike, Mary, Max) to obtain the abstracted type:
set(map({left, right}, tuple(integer, set(Cal, Cory, Cindy))))
Rule 1 then uses AllPi(Cal, Cory, Cindy) to obtain:
set(map({left, right}, tuple(integer, integer)))
Finally, Rule 5 uses AllPi(left, right) to obtain:
set(bag(tuple(integer, integer)))

The extension for this abstract type is given below, with the corresponding representative state from each orbit:

bag((0,0),(3,3))   nobody !! Cal Cindy Cory Mary Max Mike
bag((0,2),(3,1))   Cal Cindy !! Cory Mary Max Mike
bag((0,3),(3,0))   Cal Cindy Cory !! Mary Max Mike
bag((1,1),(2,2))   Cal Mary !! Cindy Cory Max Mike
bag((0,1),(3,2))   Cal !! Cindy Cory Mary Max Mike

4 Intensional Invariant Reasoning

This section describes how STRATA uses Invariant Logic for abstracting a problem when the semantics are given intensionally, i.e. as a theory. First the rules for computing symmetries of composite relations are given. Then a simple problem is introduced, and it is shown how the Knuth-Bendix completion algorithm can be used to calculate additional problem symmetries. A special type of symmetry, the congruence, is described. The application of Invariant Logic to the abstraction process is shown, and the alternative of specialization through problem reduction is described.

The following rules of Invariant Logic provide a calculus for determining the invariants of a composite relation based on the invariants of its subparts. The primary observation for boolean operations on relations is that the result is invariant under the intersection (meet) of the transformation groups for the separate arguments. This supports "greatest common divisor" reasoning on composite relations. In the rules which follow, R is a relation, i.e. of type tuple(domain1, domain2, ..., domainN), and TG is a transformation group over the tuples.
These rules are slightly simplified versions of ones in [Lowry, 1988], which address some technical issues concerning whether the domains are distinct.

If a relation is invariant, then so is its complement:
Invariant(R, TG) = Invariant(not R, TG)

Boolean operations such as union and intersection preserve invariance with respect to the meet of transformation groups (R1, R2 are sets of tuples of the same type):
Invariant(R1, TG1) AND Invariant(R2, TG2) => Invariant(R1 boolop R2, (TG1 meet TG2))

The Cartesian product over relations preserves invariance under the direct product of transformation groups:
Invariant(R1, TG1) AND Invariant(R2, TG2) => Invariant(R1 x R2, TG1 x TG2)

If a relation is invariant under a transformation group, then it is invariant under any of its subgroups:
Invariant(R, TG) AND Subgroup(TG1, TG) => Invariant(R, TG1)

Consider the problem of common-members: given two lists as input, output a list whose elements are the common members of the two lists. Abstractly, this problem is simply set intersection. The problem can be specialized by problem reduction to assume the two input lists are ordered; an efficient algorithm is to march down the two ordered lists in tandem. The reduction is achieved by sorting the two input lists, which is a transformation which leaves the problem invariant.

STRATA generates abstract problem descriptions from a concrete problem description and a domain theory. The conceptual foundation is partially described in [Lowry, 1987]; this section discusses interesting aspects of the implementation not covered earlier. For common-member, the domain theory is that of lists; the following equational theory is given to STRATA, where variables are denoted by &. Equational theories are often used for specifying abstract data types. The equations can be turned into a logic program by making one side of each equation into the left hand side of a rewrite rule, and the other side into the right hand side.
Equality between ground terms can be decided by rewriting them to normal form with the derived rewrite rules.

(append nil &L2) = &L2
(append (cons &a &L1) &L2) = (cons &a (append &L1 &L2))
(member &x nil) = false
(member &x (cons &y nil)) = (EQ &x &y)
(member &x (append &L1 &L2)) = (member &x &L1) OR (member &x &L2)
;; equations defining semantics of AND, OR, IFF
(common-member &L1 &L2 &L3) = (member &x &L3) IFF (member &x &L1) AND (member &x &L2)

The first step is to generate the problem symmetries. The rule for boolean composition of relations deduces that the symmetries of member are a subset of the symmetries of common-member. The symmetries of member are not known, and the compositional calculus does not apply (the rules above don't handle recursive definitions). The homomorphism method described in [Lowry, 1987] finds the symmetries of member; it is implemented by applying the Knuth-Bendix completion algorithm. The basic idea in setting up the Knuth-Bendix algorithm is to make two copies of the problem name and link them. One copy has a heavier weight than any other symbol (e.g. member). The other copy has a lighter weight than any other symbol (e.g. member'). Intuitively, the K-B algorithm percolates the problem definition through the domain theory to the lighter-weight problem name, generating problem symmetries.

**Derived from K-B when member is defined with append**
(member' &x (append &y &z)) = (member' &x (append &z &y))
(member' &x (append &y &y)) = (member' &x &y)
**Derived from K-B when member is defined with cons**
(member' &x (cons &a (cons &b &L))) = (member' &x (cons &b (cons &a &L)))
(member' &x (cons &a (cons &a &L))) = (member' &x (cons &a &L))

These derived theorems denote the symmetry of lists under transformations of reordering and deleting repeated elements. Structural induction is used to verify that new equations can be extracted from these theorems.
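The derived member symmetries can be spot-checked by direct evaluation. The sketch below is ours: it implements member and append as the equations define them, then tests the two Knuth-Bendix-derived append theorems on random lists.

```python
import random

def member(x, lst):
    # (member &x nil) = false; otherwise recurse down the cons cells
    if not lst:
        return False
    return lst[0] == x or member(x, lst[1:])

def append(l1, l2):
    return l1 + l2

random.seed(0)  # deterministic spot check
for _ in range(200):
    l1 = tuple(random.randrange(5) for _ in range(random.randrange(4)))
    l2 = tuple(random.randrange(5) for _ in range(random.randrange(4)))
    x = random.randrange(5)
    # the Knuth-Bendix-derived symmetries: member is invariant under
    # reordering of appended lists and under duplication of a list
    assert member(x, append(l1, l2)) == member(x, append(l2, l1))
    assert member(x, append(l1, l1)) == member(x, l1)
```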
An equation represents a special kind of problem symmetry (a congruence) which, when substituted into itself, still denotes a problem symmetry. As part of the structural induction proof for the commutativity of append, STRATA generates and proves the following theorem:

(member' &x (append (append &x &y) &z)) = (member' &x (append (append &y &x) &z))

After proving structural induction, STRATA extracts the following axioms and adds them to the theory of lists, thereby deriving the abstract data type for sets. The added axioms for append make it commutative and idempotent; thus the semantics are set union, and append becomes set-union (names don't matter for the denotation of a theory):

Invariant(common-members, (append' &y &z) = (append' &z &y))
  justifies add-axiom((append' &y &z) = (append' &z &y))
Invariant(common-members, (append' &x &x) = &x)
  justifies add-axiom((append' &x &x) = &x)

An alternative to abstraction is problem reduction: adding constraints which can be achieved with the transformations which leave the problem invariant. In contrast to abstraction, there are many possible problem reductions, because there are many possible representative elements in each orbit. One source of constraints is derived preconditions for operators. In this example, if the lists are ordered, an inexpensive necessary condition on membership is that the first element of the list is less than or equal to the element being tested for membership. This gives rise to the derived precondition that the input lists are ordered, which can be achieved by repeatedly switching adjacent list elements which are out of order (bubble sort). This switching transformation is denoted by one of the derived equations for cons:

(cons &a (cons &b &L)) = (cons &b (cons &a &L))

See [Lowry, 1988] for the application of problem reduction to synthesizing Karmarkar's linear optimization algorithm.
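The reduced algorithm described above (sort both inputs, then march down the two ordered lists in tandem) can be sketched as follows. This is our illustration, not STRATA's synthesized code.

```python
def common_members(l1, l2):
    # Problem reduction: sorting is the invariance-preserving
    # transformation that establishes the "ordered" precondition.
    a, b = sorted(l1), sorted(l2)
    i = j = 0
    out = []
    # March down the two ordered lists in tandem: linear after sorting.
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            if not out or out[-1] != a[i]:   # suppress duplicates
                out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out
```

For example, `common_members([3, 1, 2, 2], [2, 5, 3])` yields `[2, 3]`, matching the abstract set-intersection semantics of common-member.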
5 Duality and Isomorphic Theories

Duality can be expressed as a symmetry among the symbols of a theory which leaves the true sentences invariant. A transformation from symbols to symbols is a representation map, designated RMap. Duality has bilateral symmetry; an example is boolean algebra (Not is self-dual):

And <-> Or    true <-> false    Not <-> Not

This representation map transforms true sentences to true sentences:

(x And true) = x  <->  (x Or false) = x

Duality of a theory is easy to verify: simply transform the axioms with the representation map, and prove the transformed axioms using the original theory:

Dual(Theory, RMap) = Axioms |- RMap(Axioms)

Because proofs are invariant under renaming of symbols, we obtain the dual proofs for free by applying the representation map, which is its own inverse:

RMap(Axioms) |- RMap(RMap(Axioms)) = Axioms

Duality is often exploited in algorithms. Mini-max search and alpha-beta pruning use duality to efficiently search the space of look-ahead moves in a competitive game. Linear optimization problems, particularly the special class of network flow problems, can be efficiently solved by primal-dual algorithms. In geometric algorithms, the duality between lines and points in 2-D projective geometry can be used to expand the uses of subroutines. For example, collinearity of points can be mapped to intersection of lines.

In duality, the representation map is from symbols to symbols. Isomorphic theories are defined by generalizing the representation map from symbols to terms. A representation map from symbols to terms is not invertible, so the definition is more complex. Theory A and theory B are isomorphic iff there exist representation maps R1 from A to B and R2 from B to A which satisfy:

1. Axioms(B) |- R1(Axioms(A))
2. Axioms(A) |- R2(Axioms(B))
3. R1 o R2 = Identity(A)
4.
R2 o R1 = Identity(B)

Boolean algebra with primitives Nand, true, false is isomorphic to boolean algebra defined with primitives And, Or, Not, true, false. The representation maps R1, R2 are given below:

Or(x, y) -> Nand(Nand(x), Nand(y))
And(x, y) -> Nand(Nand(x, y))
Not(x) -> Nand(x)
Not(And(x, y)) <- Nand(x, y)

The last two conditions on isomorphism between theories are that the compositions of representation maps yield the identity. This entails that an equivalence be proven; for Nand the composition of representation maps yields:

Nand(x, y) -> Not(And(x, y)) -> Nand(Nand(Nand(x, y)))

The following equivalence has to be proved in order to show that R1 o R2 is the identity: Nand(x, y) = Nand(Nand(Nand(x, y))). Similar equivalences are needed for And, Or, Not.

An alternative definition of isomorphism between theories is that each can be conservatively extended with defined relations and functions to include the other. Isomorphism between theories is one way to define reformulation.

6 Acknowledgements

This paper benefitted from the technical and editing help provided by Tom Binford, Leonid Frants, Joseph Goguen, Walter Bill, Douglas Smith, George Stolfi, and Yinyu Ye. Special thanks go to the reviewers, who made substantial comments on clarifying the paper. This work was supported in part by DARPA sub-contract AIADS-S10935-1 and ONR contract N00014-87-K-0550. The views and conclusions in this paper are solely those of the author.

References

[Amarel, 1968] Saul Amarel. On representations of problems of reasoning about actions. Machine Intelligence 3, 1968.
[Cohen, 1977] Brian Cohen. The mechanical discovery of certain problem properties. Artificial Intelligence, 8, 1977.
[Kokar, 1986] Mieczyslaw M. Kokar. Determining arguments of invariant functional descriptions. Machine Learning, 1, 1986.
[Korf, 1980] Richard E. Korf. Towards a model of representation change. Artificial Intelligence, 14(1), April 1980.
[Lowry, 1987] Michael R. Lowry. The abstraction/implementation model of problem reformulation. In IJCAI-87, August 1987.
[Lowry, 1988] Michael R. Lowry. Algorithm Synthesis through Problem Reformulation. PhD thesis, Stanford University, 1988.
[McCartney, 1987] Robert D. McCartney. Synthesizing algorithms with performance constraints. In AAAI-87, 1987.
Subassembly Stability

Nico Boneschanscher, Hans van der Drift
Laboratory for Manufacturing Systems, Department of Mechanical Engineering, Delft University, The Netherlands

Abstract

Planning a product assembly requires that we determine the order in which the product subparts are to be assembled. One constraint on this ordering is that the subassembly must be stable at each stage under the gravitational force and the insertion force of the next part to be assembled. In this paper, we discuss the stability problem for the case where the subassembly sits on a table. A program has been written to solve this problem for a class of subassemblies. The input to the program consists of a model of the subparts and their interconnections, and a set of external insertion forces. The program tests whether the total disturbance force is contained in the set of all stable forces between each subpart and the table. A linearized model of friction in six dimensions is used in the computation.

1.0 Introduction

In recent years, attention has focused on the need for an automatic system for planning product assemblies (Lozano-Pérez 1976, Taylor 1976, Lieberman and Wesley 1977, Lozano-Pérez et al. 1987). Planning an assembly requires that we determine the order in which the product parts are to be assembled. In general, this is a difficult problem, because of the large number of possible solutions (Homem de Mello and Sanderson 1986). One constraint on this ordering is that the subassembly must be stable at each stage under the gravitational force and the insertion force of the next part to be assembled. In this paper, we discuss this stability problem for the case where a three-dimensional subassembly sits on a table, and the insertion force is given. We will assume that the parts of the subassembly can be accurately modeled by rigid polyhedra.
Fahlman (1973) investigated the stability of a subassembly of blocks (bricks and right triangular wedges), using an iterative numerical method which propagates forces through the subassembly. The correctness and computational complexity of his method have not been established. Blum, Griffith, and Neumann (1970) implemented a subassembly stability test based on linear programming.

Stephen J. Buckley, Russell H. Taylor
Manufacturing Research, IBM T.J. Watson Research Center, Yorktown Heights, NY

For each part, force balance equations are written in terms of point contacts with adjacent parts and gravitational forces. Friction at each contact point is approximated by four linear inequalities. The program searches for a simultaneous solution to the force balance equations in which the forces acting on each body are either zero or internal to the body.

Palmer (1987) established the computational complexity of the subassembly stability problem for rigid polygons in the plane. He showed that guaranteed stability is NP-hard, under assumptions similar to what we call "limited superposition" in "2.2 Multi-Point Friction" on page 2. He also showed that potential stability is in P, and that for a special class of subassemblies, guaranteed stability without friction is in P.

This paper is organized as follows. "2.0 Friction" discusses our representation for friction. Our stability algorithm is then described in "3.0 Algorithm" on page 3. "4.0 Experiments" on page 5 presents some of our experimental results. "5.0 Conclusions" on page 5 summarizes our contributions.

2.0 Friction

Consider Figure 1, which shows an insertion force applied to a two-dimensional block on a table. To compute the stability of the block with respect to the table, we must take friction into account.

Figure 1. An insertion force applied to a two-dimensional block on a table (the figure shows the insertion force and the inverted friction cone).

780 Robotics

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org).
All rights reserved.

The friction cone of the table with respect to the block is defined as the set of possible reaction forces that the table can exert in response to an applied force from the block (Coulomb, see Baumeister 1978). The angle of the friction cone is equal to arctan(μ), where μ is the coefficient of friction between the block and the table. Coulomb made the following empirical observation about friction cones: if an applied force opposes a possible reaction force in the friction cone, then an equal and opposite reaction force will be generated, and no relative motion will occur. The inverted friction cone of the table with respect to the block thus contains the set of stable forces that can be exerted on the block at the point of contact.

2.1 Point Friction

In this subsection, we present a mathematical formulation of the inverted friction cone for a point contact in three dimensions. Our formulation is taken directly from Erdmann (1984). For a point contact in three dimensions, the number of degrees of freedom is six. However, only the three translational degrees of freedom are subject to friction. Thus, the friction cone is a three-dimensional subset of six-dimensional space. The inverted friction cone can be derived by writing down the equations for static equilibrium of an applied force/torque f at the contact point. The following constraints hold:

1. The applied torque must be zero.
2. The applied force must be interior to the contact surface.
3. (Coulomb's Law) The tangential component of the applied force must be less than or equal to μ times the normal component of the applied force.
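The three constraints above can be illustrated for a single planar point contact (a two-dimensional sketch of Coulomb's criterion, not the paper's six-dimensional formulation; the function name and the numbers are illustrative):

```python
import math

def in_inverted_friction_cone(force, normal, mu):
    """Stability of a 2-D applied force at a point contact under
    Coulomb's law: the force must press into the surface and its
    tangential part must not exceed mu times its normal part.
    `normal` is the unit normal pointing into the supporting surface."""
    fn = force[0] * normal[0] + force[1] * normal[1]   # normal component
    tx = force[0] - fn * normal[0]                     # tangential remainder
    ty = force[1] - fn * normal[1]
    return fn > 0 and math.hypot(tx, ty) <= mu * fn

# A block on a table: the inward normal points down, (0, -1); the
# friction cone half-angle is arctan(mu), as in the text.
mu = 0.5
down = (0.0, -1.0)
assert in_inverted_friction_cone((0.2, -1.0), down, mu)      # stable
assert not in_inverted_friction_cone((0.8, -1.0), down, mu)  # would slide
```

With μ = 0 the test degenerates to requiring a purely normal force, matching the frictionless case discussed later in the paper.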
Erdmann showed that these constraints can be written as the following system of linear equations:

f_τ = 0    [2.1]
f · X_T x_p ≤ (f · X_N N)/μ    [2.2]
f · X_T x_l ≤ (f · X_N N)/μ    [2.3]
f · X_N N ≥ 0    [2.4]

where:

f_τ = torque at the contact point
N = six-dimensional outward normal vector
x_p = a pure sliding vector
x_l = a pure sliding vector perpendicular to x_p
X_T = orthogonal projection onto the real-space tangent plane
X_N = orthogonal projection onto the real-space normal.

The constants N, x_p, and x_l can be computed from geometric models of the parts in contact. There are many possible choices for x_p and x_l. For a particular choice, Equations 2.1-2.4 define a two-dimensional slice of the inverted friction cone. A linearized three-dimensional cone can be obtained by taking the Minkowski sum of a finite number of cone slices. In practice, we found that eight slices give a fairly accurate approximation. Let f_1 ... f_8 represent stable force/torques in the eight slices, where each f_i is defined similarly to f in Equations 2.1-2.4. We can then write the linearized cone as

f = f_1 + f_2 + ··· + f_8.    [2.5]

Equation 2.5 can be combined with all of the slice equations in a single system of linear constraints, where the unknowns X consist of f, f_1, ..., f_8, and f represents the forces in the inverted friction cone.

2.2 Multi-Point Friction

When two parts are in n-point contact, each possible reaction force can be viewed as a nonnegative linear combination of n forces, each of which comes from the friction cone of one of the contact points. We call this the full superposition assumption. Under full superposition, we can write the set of possible reaction forces of an n-point contact as

F_composite = F_1 ⊕ F_2 ⊕ ··· ⊕ F_n    [2.6]

where F_1 ... F_n are the friction cones at the contact points, and ⊕ denotes Minkowski sum. F_composite can be viewed as the composite friction cone of the n-point contact. We will assume that Coulomb's criterion for stability extends to composite friction cones under multi-point contact.
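The Minkowski-sum construction of Equation 2.6 can be sketched for planar cones, where the sum of convex cones is the cone generated by the union of their edge directions (an illustrative simplification; the generator representation and the numbers below are not from the paper):

```python
import math

def cone_generators(normal_angle, mu):
    """Edge directions of a planar friction cone whose axis points along
    `normal_angle` (radians), with half-angle arctan(mu)."""
    half = math.atan(mu)
    return [(math.cos(normal_angle - half), math.sin(normal_angle - half)),
            (math.cos(normal_angle + half), math.sin(normal_angle + half))]

def in_generated_cone(f, generators):
    """Membership of f in the convex cone of nonnegative combinations of
    `generators`.  For planar cones spanning less than pi (no wraparound),
    f is inside iff its angle lies between the extreme generator angles."""
    angles = sorted(math.atan2(g[1], g[0]) for g in generators)
    a = math.atan2(f[1], f[0])
    return angles[0] - 1e-12 <= a <= angles[-1] + 1e-12

# Two contact points sharing an upward normal (angle pi/2), mu = 0.3.
g1 = cone_generators(math.pi / 2, 0.3)
g2 = cone_generators(math.pi / 2, 0.3)
# Full superposition: the Minkowski sum of convex cones is the cone
# generated by the union of their generators.
composite = g1 + g2
assert in_generated_cone((0.1, 1.0), composite)       # nearly normal: stable
assert not in_generated_cone((1.0, 0.1), composite)   # nearly tangential: not
```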
Note that the composite inverted friction cone of an n-point contact can be computed using Equation 2.6 by first inverting the point friction cones F_i.

There is reason to believe that some of the reaction forces under full superposition are impossible, or at least unreliable. If the applied force is exterior to some subset of the contact surfaces, then it is difficult to believe that these contact surfaces contribute to the reaction force. Let N_1, ..., N_n denote the set of outward normal vectors at the n contact points. Then we might write the composite friction cone as a Minkowski sum over only those contact points i for which the applied force is not in exterior(N_i), where

exterior(N) = { x | x · N > 0 }.

We call this the limited superposition assumption. For example, consider the planar two-point contact shown in Figure 2. For simplicity, ignore rotations. F_1 and F_2 represent the friction cones of the contact points. If we assume full superposition, then the composite friction cone of B with respect to A is given by F_1 ⊕ F_2, which yields the entire plane. This implies that A may not slide at all. If we assume limited superposition, then the composite friction cone is equal to F_1 ∪ F_2. (The applied force can never be strictly interior to both surfaces.) This implies that sliding will occur as long as the applied force is not in either of the inverted friction cones.

Figure 2. A planar two-point contact.

Ramifications of full and limited superposition on stability will be discussed later in this paper.

3.0 Algorithm

In this section, we will describe an algorithm which assumes full superposition. In general, the algorithm computes only potential stability. In "3.4 Guaranteed Stability" on page 5, we will discuss cases for which we conjecture that the algorithm computes guaranteed stability.

The input to the program is a geometric model of a subassembly and a set of disturbance forces. The modeler knows a set of basic bodies, namely cuboids, cylinders, cones and wedges, all represented as polyhedra.
It also knows a basic set of contact situations:

AGAINST (vertex, face)   A vertex against a face.
INSIDE (face1, face2)    Face 1 is contained in face 2.
TIED (part1, part2)      Part 1 is rigidly attached to part 2.

The output of the algorithm is the set of orientations for which the subassembly will be stable on a table.

Figure 3. The structure of the program.

Figure 3 depicts the structure of the program. "Subassembly Stability" is the block controlling the entire flow of the algorithm. It uses "Candidate Orientations", "Makearc", "Net Relation" and "Simplex" to find all stable orientations.

"Candidate Orientations" determines all orientations in which the subassembly is stable on a table, assuming that the parts are rigidly attached to each other. With a convex hull algorithm, all "first guess" orientations are found. Then, orientations in which the center of gravity of the complete subassembly is not supported are pruned, leaving the set of orientation candidates.

Next, we relax the assumption that the parts are rigidly attached. In "Makearc", the geometric model is translated into a network of parts. A relation is created in the network for each pair of parts in contact. Then, for each part in the network, the network is reduced to a single relation between the part and the table in "Net Relation". The final relation is tested for stability in "Simplex". If for a particular orientation all parts are found to be stable, then the orientation is stable. If one or more parts are found to be unstable, then the orientation is unstable.

3.1 Makearc

"Makearc" generates a relation for each contact point in the subassembly. A relation connects two parts, describing the stable forces that one of the parts can exert on the other through the contact point (ignoring the rest of the subassembly).
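The center-of-gravity pruning performed by "Candidate Orientations" above can be sketched in the plane as a point-in-convex-polygon test (a hypothetical two-dimensional simplification of the 3-D convex-hull computation; the footprint and coordinates are illustrative):

```python
def cog_supported(cog_xy, support_polygon):
    """Prune test for candidate orientations (planar sketch): the
    projected center of gravity must lie inside the convex support
    polygon, given in counter-clockwise vertex order."""
    n = len(support_polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = support_polygon[i], support_polygon[(i + 1) % n]
        # Cross product: the COG must be on the left of every edge.
        if (x2 - x1) * (cog_xy[1] - y1) - (y2 - y1) * (cog_xy[0] - x1) < 0:
            return False
    return True

base = [(0, 0), (2, 0), (2, 1), (0, 1)]       # footprint of the bottom part
assert cog_supported((1.0, 0.5), base)         # COG over the base: keep
assert not cog_supported((3.0, 0.5), base)     # overhanging COG: prune
```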
The stable forces at a contact point are given by the inverted friction cone at that point, which is represented by the system of linear equations described in "2.1 Point Friction" on page 2.

3.2 Net Relation

"Net Relation" reduces a network of part relations to a single relation between a part and the table. Similar to strategies described by Smith and Cheeseman (1986), the actions to reduce the network are merging and compounding. Merging is performed when there are two parallel relations between two parts (see Figure 4). Assuming full superposition, merging is accomplished by computing the Minkowski sum of the two relations, and results in a single relation between the parts. In a compound action, a chain of two relations is reduced to a single relation by intersecting them, thereby eliminating the middle part in the chain (see Figure 5).

Figure 4. A simple merge action.

Consider Figure 6, which represents an abstract network of three parts. We are interested in determining the net relation of part A relative to part C. Network reduction is accomplished by iteratively repeating the following steps until a single relation remains:

1. Merge until no longer possible.
2. Compound until no longer possible.

In Figure 6, a compound action is performed first, eliminating part B. Then, the two remaining relations are merged to produce a single relation between A and C.

Sometimes, neither compounding nor merging is possible during the reduction of a network. Two possible causes for this are bidirectional relations and loops. Assume that we are interested in the stability of a part A with respect to the table. A bidirectional relation is a relation between two parts in which forces can propagate in either direction on the way from A to the table. A loop is a circular path of relations on the way from A to the table, but not including A or the table.
When these structures occur, transformations must be performed on the network so that reduction can continue. We are investigating algorithms to perform these transformations.

Figure 5. A simple compound action.

Figure 6. An assembly requiring merging and compounding.

Each initial relation in the network represents an inverted friction cone, represented by a system of linear equations. Let F_1 represent an initial relation, given by a system of linear equations of the form described in "2.1 Point Friction", and let f_1 represent the stable disturbance forces in this system. Let F_2 represent another initial relation, given by a similar system of linear equations, and let f_2 represent the stable disturbance forces in this system.

During the reduction of a network, several merge and compound actions are performed, each adding new constraints to the initial system of equations. In the case of a merge action involving F_1 and F_2, the following six constraints are introduced:

f = f_1 + f_2

f represents stable disturbance forces in the merged relation. It can be seen that merging is the Minkowski sum of the two relations. In the case of a compound action, the following six constraints are introduced:

f_1 = f_2

f_1 represents stable disturbance forces in the compounded relation. It can be seen that compounding is the intersection of the two relations. This is a simplified version, without disturbance forces acting on the part to be eliminated. Disturbance forces can be added to the framework fairly easily.

Previous constraints remain undisturbed during network reduction. When the reduction is complete, one set of six variables represents the net relation of the part relative to the table. The test for stability is accomplished by assigning the disturbance force on the part in question to the final set of six variables, resulting in six additional equations. The complete system of equations is then passed to a Simplex program, which determines their feasibility.
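The merge/compound reduction can be sketched on a toy representation in which a relation is just a finite set of stable scalar forces (the paper's relations are six-dimensional linear systems; the sets and the three-part network below are illustrative):

```python
def merge(r1, r2):
    """Parallel relations: a disturbance force splits across the two
    contacts, so the merged relation is the (Minkowski) sum."""
    return {a + b for a in r1 for b in r2}

def compound(r1, r2):
    """Serial relations: the same force must pass through both, so the
    compounded relation is the intersection."""
    return r1 & r2

# Abstract network as in Figure 6: A--r1--B--r2--C in series,
# plus a direct relation A--r3--C in parallel with the chain.
r1 = {0, 1, 2}
r2 = {1, 2, 3}
r3 = {0, 1}

chain = compound(r1, r2)      # eliminates the middle part B
net = merge(chain, r3)        # single net relation between A and C
assert chain == {1, 2}
assert net == {1, 2, 3}
```

In the paper the final feasibility question is handed to a Simplex routine; here set membership plays that role.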
3.3 Computational Complexity

This subsection gives the average running time of the stability algorithm. Our analysis assumes full superposition, and does not necessarily hold if bidirectional relations and loops exist in the subassembly.

Let m be the number of parts in the subassembly, and n the number of vertices. Then there are O(n) initial relations in the subassembly network. Since each merge and compound action eliminates one relation from the network, network reduction can be performed in O(n) steps. The output of network reduction is a system of O(n) equations in O(n) variables. These equations are passed to the Simplex program, which is known to run in time O(n³) on the average. Thus, it takes O(n³) steps on the average to determine the stability of a single part. Furthermore, it takes O(mn³) steps on the average to determine the stability of an entire subassembly at a given orientation. In extreme cases, it is possible for the Simplex algorithm to take exponential time. Polynomial worst-case time can be guaranteed by substituting Karmarkar's algorithm (Karmarkar 1984).

3.4 Guaranteed Stability

Palmer (1987) proved that for subassemblies of polygons in the plane without friction, if no contact points have an interior angle of less than π on both sides of the contact, then the problem of guaranteed stability is in P, and can be solved by linear programming. We conjecture that this result extends to three dimensions by taking the minimum interior angle on both sides of each contact point. If so, then our algorithm computes guaranteed stability for a class of frictionless subassemblies, including all subassemblies that consist of stacked rectangular parts. Another conjecture is that an algorithm based on limited superposition will compute guaranteed stability in the presence of friction.
In many cases, limited superposition and full superposition yield the same result (the next section presents one such case). In these cases, we conjecture that our current algorithm computes guaranteed stability in the presence of friction.

We are attempting to establish these conjectures. In addition, we are investigating algorithms to:

• identify cases where limited and full superposition are equivalent.
• compute guaranteed stability for cases where limited and full superposition are not equivalent.

Although the latter problem is in general NP-hard, we anticipate that many practical cases can be solved by efficient search procedures.

4.0 Experiments

The described algorithm has been implemented in AML/X, linked to a Fortran Simplex routine. It has been tested on a number of geometric models. Figure 7 summarizes one of our tests. In this test, μ was equal to 0.1.

Figure 7. The right hand cases are unstable.

5.0 Conclusions

An algorithm to determine the static stability of a subassembly on a table has been developed. It converts a geometric model into a network of parts, and represents the relations between the parts as linear equations. A linearized model of friction in six dimensions is used to represent the contact situations in the subassembly. After reducing the network to a net relation between a part and the table, linear programming is used to determine the stability.

In general, the algorithm computes potential stability. We conjecture that it computes guaranteed stability in the absence of friction when no contact points exist which have minimum interior angles of less than π on both sides of the contact. We also conjecture that the algorithm computes guaranteed stability for cases where limited and full superposition are equivalent. We are investigating algorithms to recognize these cases, and to compute guaranteed stability for cases where limited and full superposition are not equivalent.
Our method is similar to that of Blum, Griffith, and Neumann (BGN), in the sense that we too use linear programming. However, we have gone beyond the BGN results in the following ways:

1. When a subassembly is unstable, the BGN algorithm does not indicate which parts are unstable. Since this information is useful to an assembly planner, we compute the stability of each part individually.
2. We address the issue of external insertion forces, while BGN limits its scope to gravitational forces.
3. Our method for linearizing friction is based on Erdmann's equations, rather than the BGN method. Erdmann's equations allow an arbitrary number of faces in a linearized friction cone, while the BGN method appears to be limited to four faces.
4. The BGN paper implicitly assumes full superposition. We have identified the alternative concept of limited superposition, and we are investigating an efficient algorithm which uses it to compute guaranteed stability.

We are investigating an algorithm to compute the robustness of a stable insertion force; that is, the "closeness" of a stable insertion force to an unstable force.

We would like to acknowledge contributions from the following people: Bela Musits, for valuable comments on the work; John Forrest, for advice on the Simplex program; Mike Erdmann, for advice on friction; Wally Dietrich and Lee Nackman, for advice on AML/X; V.T. Rajan, for advice on friction and linear programming; and Bob Wittrock, for advice on linear programming.

References

1. Baumeister, T., editor 1978. Marks' Standard Handbook for Mechanical Engineers, McGraw-Hill.
2. Blum, M., Griffith, A., and Neumann, B. 1970. "A Stability Test for Configurations of Blocks", AI Memo 188, MIT Artificial Intelligence Laboratory.
3. Erdmann, M. 1984. "On Motion Planning With Uncertainty", S.M. dissertation, MIT Department of Electrical Engineering and Computer Science, also AI-TR-810, MIT Artificial Intelligence Laboratory.
4. Fahlman, S.
1973. "A Planning System for Robot Construction Tasks", AI-TR-283, MIT Artificial Intelligence Laboratory.
5. Homem de Mello, L., and Sanderson, A. 1986. "AND/OR Graph Representation of Assembly Plans", CMU-RI-TR-86-8, The Robotics Institute, Carnegie-Mellon University.
6. Karmarkar, N. 1984. "A New Polynomial-time Algorithm for Linear Programming", ACM Symposium on the Theory of Computing 16, pp. 302-311.
7. Lieberman, L., and Wesley, M. 1977. "AUTOPASS: An Automatic Programming System for Computer Controlled Mechanical Assembly", IBM Journal of Research and Development 21(4), pp. 321-333.
8. Lozano-Pérez, T. 1976. "The Design of a Mechanical Assembly System", S.M. dissertation, MIT Department of Electrical Engineering and Computer Science, also AI-TR-397, MIT Artificial Intelligence Laboratory.
9. Lozano-Pérez, T., Jones, J., Mazer, E., O'Donnell, P., Grimson, W.E.L., Tournassoud, P., and Lanusse, A. 1987. "Handey: A Robot System That Recognizes, Plans, and Manipulates", IEEE International Conference on Robotics and Automation, Raleigh, North Carolina, pp. 843-849.
10. Palmer, R. 1987. "Computational Complexity of Motion and Stability of Polygons", Ph.D. dissertation, Cornell University.
11. Smith, R., and Cheeseman, P. 1986. "On the Representation and Estimation of Spatial Uncertainty", International Journal of Robotics Research 5(4), pp. 56-68.
12. Taylor, R. 1976. "A Synthesis of Manipulator Control Programs From Task-Level Specifications", Ph.D. dissertation, Stanford University, also AIM-282, Stanford Artificial Intelligence Laboratory.
Inferring Ignorance from the Locality of Visual Perception

Ernest Davis
New York University
251 Mercer St.
New York, NY 10012

Abstract

This paper presents a logical theory that supports high-level reasoning about knowledge and perception. We construct a formal language in which perception can be described. Using this language, we state some fundamental axioms, and we show that these are sufficient to justify some elementary but interesting inferences about perception. In particular, our axioms make it possible in some cases to infer that an agent does not know about a particular event, because he has had no way to find out about it.

1. Introduction

Intelligent creatures learn much of what they know through direct perception. Therefore, reasoning about the acquisition of knowledge over time often requires a high-level understanding of the power and limits of perception. This paper presents a logical theory that supports high-level reasoning about knowledge and perception.

One type of reasoning that may be supported by a theory of perception is the inference that an agent must be ignorant of a particular fact because he has had no way to find out whether it is true. Such means of inferring ignorance may be important, either to infer that a secret can be kept from another party, or to help an agent plan to find out a given fact. We focus on the following problem:

I. Steve is in a closed room with no windows and walks across the room. Claire is outside the room. Infer that Claire does not know now that Steve has crossed the room.

We further require that our theory support the following inferences:

II. Andrew is in his office and does not see any cows there. Infer that he knows that there are no cows in his office.

III. Joanne does not know whether there are flowers on the dining room table. However, she knows that the top of the dining room table is visible from any point in the dining room.
Infer that Joanne knows that she can find out whether there are flowers on the dining room table by going and looking.

(This work was supported by NSF grant #DCR-8603758.)

IV. Fred has seen that Max has been with him all night. Infer that Fred knows that Max was not five miles away any time that night.

V. Judy sees Sharon standing facing a bus. Infer that Judy knows that Sharon knows that there is a bus in front of her.

These problems illustrate various aspects of perception. In (I) we infer that an agent is ignorant of a fact from our knowledge of the physical limits of vision. Since Claire cannot see Steve inside the room, she cannot know what is happening in the room. (II) shows the gaining of knowledge from inferences based on both prior knowledge and perception. Given what Andrew knows about the size of cows, the presence of a cow is physically incompatible with what he sees. (III) involves reasoning about possible future states of perception and knowledge. (IV) involves perception extended over an interval of time. (V) shows that one agent can infer the perceptions and knowledge of another agent by perceiving their physical situations and knowing their perceptual and inferential powers. The formal model developed in this paper is rich enough to support the statement and proof of close analogues of all the above inferences.

Our purposes are quite different from those of computer vision research; we therefore need a theory of perception at a different level of description. The facts about vision needed to solve problems (I) through (V) above are little used in computer vision programs. For example, consider the fact that it is impossible to see what is happening outside a closed room from inside it. Such a fact has little importance in computer vision, since it rarely constrains the interpretation of a particular image.
However, it is of importance in reasoning qualitatively about how vision will augment knowledge in a given situation; one can use it to deduce that an agent will have to leave a closed room to know what is happening outside it.

Some limitations of our theory should be noted. First, we ignore learning through perceptions of conventional signs such as writing. Second, we model only visual perception. Finally, our models of vision and of physical causality are greatly simplified. We are interested here in the connection between perception and knowledge, and a more complex physical theory would have added irrelevant complexity. We believe that the essential structure of our theory will carry over to more realistic physical models.

2. The Toy World

We formulate our theory within a "toy" world. Many of the assumptions below are obviously invalid in general; however, the basic structure is largely independent of these assumptions. In particular, the relations between knowledge and perception are independent of the particular physics of motion and vision assumed.

Our toy world is constructed as follows: There is a fixed set of physical objects moving about through space. Objects are rigid, maintaining a constant shape; however, their position may change continuously. At each time instant, each object occupies a connected regular set of points. The places occupied by two objects at a single time may not overlap. Besides their shapes, objects have time-invariant visible properties, such as their coloring. Objects may also be characterized in terms of non-physical properties and relations, such as "being a Republican". There are no other physical restrictions on the world. A course of events is physically possible iff each object maintains a constant shape and properties and moves continuously, and no two objects ever overlap. Some objects are agents.
At each instant of time, an agent has a body of knowledge with the following properties:

A.1 Knowledge of axioms: All general axioms - axioms of predicate calculus, geometry, time, physics, knowledge and perception - are known.
A.2 Consequential closure: Any logical implication of the agent's knowledge is known.
A.3 Veridicality: All knowledge is true.
A.4 Positive introspection: If an agent knows a fact, he knows that he knows it.
A.5 Memory: If an agent knows a fact (with no time indexicals) at one time, he knows it at all later times.
A.6 Internal Clock: An agent always knows what time it is.

An agent also has perceptions. A point is visible to an agent if it is not occluded from him by an object in between. An agent A can see the visible properties of an object at a visible point. Moreover, A can see the properties, not only of individual points but also of connected sets of points. Specifically, let X be a connected set such that each point of X is visible to A. If all of X lies inside an object O with visible properties P1, P2, ..., then A sees that X lies in some object with properties P1, P2, .... If X lies in free space, then A sees that X lies entirely in free space.

Perception provides knowledge about the world, and it is the only source of information as to which of the many physically possible courses of events is actually happening. Specifically, we assume the following (axioms A.7 and A.8 below).

3. Formal Model

Our model of knowledge derives from Hintikka's (1971) possible worlds semantics for knowledge. Moore (1980) combined this with a temporal logic, by identifying epistemic possible worlds and temporal situations. We modify Moore's model in two ways. First, we adopt a continuous model of time [McDermott, 82]. Second, we use two levels of possible worlds: layouts and situations. A layout is a timeless physical description of the instantaneous state of the world.
A situation is a placement of a layout within a temporal structure and a system of knowledge relations. Perceptions are associated with layouts; knowledge is associated with situations.

In our toy physics, an object is an atomic individual with a set of visual properties. A layout specifies the objects in the world and the figure occupied by each object. The figure occupied by an object O in layout L is denoted "place(O,L)". A behavior describes the progression of layouts over time. The function "scene(B,T)" maps a behavior B and a time T to the layout of B at time T. Behaviors are constrained by the requirements that all layouts in the behavior have the same objects; that objects have constant shape over time; and that objects move continuously.

We distinguish certain layouts and behaviors as physically possible. In our simple physics, a layout is physically possible if no two objects overlap, and a behavior is physically possible if each of its layouts is physically possible. We allow physically impossible layouts and behaviors as valid objects of thought; this simplifies the physical axioms. Even physically impossible behaviors must obey the above constraints.

What the agent can see in a given layout is determined by the laws of vision. The perceptions of an agent A in a layout L fix all aspects of L at points which are not occluded from A, and fix no aspects of L at points which are occluded from A. We say that a layout L1 is visually compatible with L with reference to A, written "v-compatible(A,L,L1)", if L1 is consistent with everything that A can perceive in L. V-compatibility is an equivalence relation over layouts. Our theory of the scope and limits of vision is expressed in terms of the properties of the v-compatible relation.
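The occlusion structure underlying v-compatibility can be sketched in the plane: a point can constrain an agent's perception only if the line of sight to it is unobstructed. A minimal sketch, with hypothetical wall and position coordinates:

```python
def _ccw(a, b, c):
    """Signed area orientation test for three 2-D points."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper intersection test for two 2-D line segments."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def visible(agent, point, walls):
    """A point is visible to the agent iff no wall segment occludes the
    straight line of sight -- the locality property the theory relies on."""
    return not any(segments_intersect(agent, point, w1, w2)
                   for (w1, w2) in walls)

# A wall between Claire (outside) and Steve (inside the closed room):
wall = [((5.0, -10.0), (5.0, 10.0))]
claire, steve = (0.0, 0.0), (8.0, 0.0)
assert not visible(claire, steve, wall)    # occluded: no knowledge gained
assert visible(claire, (3.0, 1.0), wall)   # points on her side are visible
```

Two layouts that agree at every point visible to the agent would then be v-compatible for that agent.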
In our model of vision, L2 is v-compatible with L1 with respect to agent A if both L2 and L1 are physically possible and the following condition holds: Let X be a connected set of points, such that every point in X is visible to A in L1. Then each point of X is visible to A in L2. Moreover, if X lies entirely in A in L1, then X lies entirely in A in L2; if X lies entirely in some object O1 in L1, then, in L2, X lies entirely inside some object O2 with the same visible properties as O1; if X lies entirely in free space in L1 then X lies entirely in free space in L2.

A.7 Anything that is perceived is known.
A.8 If a physical statement is physically possible, and it does not contradict any past or present perceptions, then it cannot be known to be false.

A behavior B1 is visually compatible with behavior B0 for agent A up to time T if, as far as A can see in B0 up to time T, the world could be going through B1. We write this relation "bv-compatible(A,B0,B1,T)". In our model of vision, we assume that all such information comes through the layouts; that is, two physically possible behaviors are visually compatible if corresponding layouts are visually compatible.

bv-compatible(A,B1,B2,T0) ≡ ∀T≤T0 v-compatible(A,scene(B1,T),scene(B2,T))

Two states of the world may be identical in their physical layout and yet differ in other respects. To accommodate this, we define a situation as a state of the world, including the physical layout, the non-physical properties and relations of objects, and the knowledge states of agents. The knowledge states of agents, however, are not a component of the situation, but are encoded in accessibility relations between situations. The function "layout(S)" maps a situation to its layout.

A.1, knowledge of the axioms, and A.2, consequential closure, follow immediately from the definition of knowledge, together with the axiom that all chronicles have a physically possible behavior.

∀C phys-poss(behavior(C))

A.3, veridicality, holds if knowledge accessibility is reflexive.
∀A,S k(A,S,S)

A.4, positive introspection, holds if knowledge accessibility is transitive.

∀A,S1,S2,S3 [ k(A,S1,S2) ∧ k(A,S2,S3) ] ⇒ k(A,S1,S3)

A.5, memory, holds given the following: Let situation S1B be accessible from S0B, and let S0A precede S0B in the same chronicle. Then, since everything the agent knows in S0A he also knows in S0B, and S1B is consistent with everything he knows in S0B, there must be a situation S1A in the chronicle of S1B which is accessible from S0A. (Figure 1: Axiom of Memory.)

∀A,S1B,S0B,S0A [ k(A,S0B,S1B) ∧ precedes(S0A,S0B) ] ⇒ ∃S1A [ k(A,S0A,S1A) ∧ precedes(S1A,S1B) ]

Non-physical properties of objects are made parts of situations rather than of layouts, in order to allow different agents to have different degrees of knowledge about them. In our system, we can allow Tom to know that all cows are large, but Sid not to know this. This would not be possible if we associated the non-physical property of being a cow with layouts.

The knowledge of an agent in a situation S is represented by an accessibility relation between S and other situations that are consistent with his knowledge. Let A be an agent and let S1 and S2 be two situations. S2 is accessible from S1 relative to A, written "k(A,S1,S2)", if as far as A knows in S1, the state of the world might be S2. We say that A knows in S1 that φ is true if φ is true in all situations that are accessible to A from S1.

Note that in both the visual compatibility relation and the knowledge accessibility relation, more information corresponds to a smaller extension of the relation. The more you know, the more variations in the world you can rule out as false, and therefore the fewer possible states of the world are consistent with your knowledge.

A chronicle describes a progression of situations over time. The function "situation(C,T)" maps a chronicle C and a time T onto the situation in that chronicle at that time.
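The accessibility semantics can be checked on a tiny finite model. The situations and the relation k below are invented purely for illustration.

```python
# Knowledge via accessibility: A knows phi in s iff phi holds in every
# situation accessible from s under the relation k (a set of pairs).

def knows(k, s, phi):
    return all(phi(s2) for (s1, s2) in k if s1 == s)

# If k is reflexive, knowledge is veridical (A.3): anything known in s
# holds in s itself.  If k is transitive, positive introspection (A.4)
# follows; if accessible situations always share a time, A.6 follows.
```

In a model where both s0 and s1 are accessible from s0, the agent knows the disjunction "s0 or s1" but cannot know which of the two actually holds, which is exactly the pattern of ignorance exploited in the proof below.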
Each situation S has a unique time in a unique chronicle, denoted "time(S)" and "chronicle(S)". Each chronicle C has associated a behavior, which is the progression of layouts of its situations.

scene(behavior(C),T) = layout(situation(C,T))

If situations S1 and S2 are in the same chronicle, and the time of S1 is earlier than the time of S2, we say that S1 precedes S2.

precedes(S1,S2) ≡ [ chronicle(S1) = chronicle(S2) ∧ time(S1) ≤ time(S2) ]

A.6, internal clock, holds if any two accessible situations occur at the same time.

∀A,S0,S1 k(A,S0,S1) ⇒ time(S0) = time(S1)

We can achieve properties A.1 - A.8 by imposing the following requirements: A.1, knowledge of the axioms, and A.2, consequential closure, follow immediately from the definition of knowledge. A.7, that perceptions are known, holds given the following: If S1 is knowledge accessible from S0, then the layout of S1 is visually compatible with the layout of S0. That is, for a situation S1 to be consistent with an agent's knowledge, the layout of S1 must be compatible with what the agent sees; conversely, if a layout is not compatible with what the agent sees, the agent knows that that cannot be the real layout.

∀A,S0,S1 k(A,S0,S1) ⇒ v-compatible(A,layout(S0),layout(S1))

A.8, that perception is the only source of knowledge of the course of events, holds given the following: Let C0 be the real chronicle. Let B1 be a behavior that is visually compatible with the behavior of C0 up to time T relative to agent A; thus, as far as A could have seen up to time T, B1 could be the real behavior. Then it is consistent with A's knowledge that B1 actually was the real behavior; that is, B1 is the behavior of some knowledge accessible chronicle C1.

∀A,C0,B1,T bv-compatible(A,behavior(C0),B1,T) ⇒ ∃C1 [ k(A,situation(C0,T),situation(C1,T)) ∧ B1 = behavior(C1) ]

4.
Proof

We can now sketch how an analogue to inference (I) can be formulated in our model, and proven from the above axioms of perception and knowledge, together with suitable axioms of geometry and physics. The full version of this paper [Davis, 88] shows the formulation and sketches the proofs of all the inferences (I) through (V), and it gives a complete formal proof of (I). These are omitted here due to length limitations.

Claire is on one side of a wall, for an interval of time i0. On the other side of the wall, occluded from Claire, is an object omystery. The object lies strictly within some larger region, which is entirely occluded from Claire. (Figure 2). During i0, Claire stays motionless, the object stays within its envelope, and no other object ever intersects the envelope. We wish to prove that there is no way for Claire to know whether the object is motionless or whether it is moving around within its envelope, since either is equally compatible with the motions of the objects that Claire does see. This conclusion can be formalized as follows: At the end of i0, there is one knowledge accessible situation that follows on a chronicle in which the object is motionless; there is another knowledge accessible situation that follows on a chronicle in which the object is in motion.

[ ∃C1 k(aclaire,s0,situation(C1,end(i0))) ∧ motionless(omystery,behavior(C1),i0) ] ∧ [ ∃C2 k(aclaire,s0,situation(C2,end(i0))) ∧ ¬motionless(omystery,behavior(C2),i0) ]

To prove this, we construct two particular behaviors. In the first, every object moves just as it does in the real world except that omystery stays motionless throughout i0. In the second, every object moves just as it does in the real world except that omystery moves continuously within xenvelope throughout i0.
We show that both of these are physically possible, since no other object comes within the envelope, by hypothesis, and so no other object interacts with omystery. (Figure 2: Claire cannot know whether the object is moving.) Both are compatible with Claire's perceptions, since the identical objects are visible to Claire in the identical places. Hence, by axiom A.8, Claire cannot know of either of them that it did not occur.

5. The Problem with Inferring Ignorance

The above proof rests on axiom A.8, which states that, if a course of events is possible and it is compatible with the agent's perceptions over time, then the agent cannot know that this is not the course of events that took place. Clearly, some such axiom is needed if we are to deduce the ignorance of agents from the limits of their perceptions.

Unfortunately, axiom A.8 is so strong that it rules out many plausible states of knowledge. This problem is particularly exacerbated by the weakness of our physics, which allows all kinds of courses of events as possible. For example, suppose that Fred and a table are the only objects in a closed room. In our theory, there is no way that Fred can ever be sure that the table is the only other object in the room; nor is there any way that he can be sure that he will ever see the table in the future. For it is physically possible that there is a swarm of small bees which has hitherto always hidden itself on the far side of the table, but which is just about to come and completely surround him, so as to occlude his viewing anything in the future. Since this is physically possible, by axiom A.8 he cannot know that it is false.

The ultimate problem here is that our theory of knowledge, like most such theories, approximates rationality in terms of the axiom of consequential closure, the assumption that an agent can make all logical deductions.
It has often been pointed out that this axiom is too strong [Konolige, 86] [Levesque, 84]; it is less often noted that it is also too weak, and that plausible reasoning must be allowed as a source of knowledge.

6. Previous Work

Little work has been done in AI on reasoning about perception. ATTEND, the focussing of a sensory organ, was a primitive act in conceptual dependency [Schank, 75] and was causally connected to MBUILD, the performance of a mental act; but the logic of these acts was not developed in detail. A number of theories, such as [Appelt, 82], [Allen and Perrault, 80] and [Morgenstern, 87], have studied the acquisition of knowledge through communication, but these have not looked at direct perception. The definition of a perception as a set of physical layouts was put forward in [Davis, 84]. Hintikka (1969) used a modal theory of perceptions to eliminate the need for sense-data as ontological primitives. The situation semantics of [Barwise and Perry, 82] studied the logical structure of sensory verbs in detail, but did not relate it to knowledge acquisition or to physical constraints. Reiter and Mackworth (1987) give a formal account of the relation between an image and a physical situation.

There have been a number of "active" vision systems that have reasoned about where to look to acquire relevant knowledge. For example, the SHAKEY robot looked for landmarks to locate itself. There have also been vision programs, such as [Garvey, 76] and [Selfridge, 82], that have used constraints based on the limitations of visual processes in object recognition. The work in this paper can be viewed as presenting a formal mechanism which justifies such inferences, and which will enable an intelligent system to reason about them directly.

7. Conclusions

We have given a formalism in which a few basic problems relating perception and knowledge can be stated and solved. There are two significant technical innovations in this work.
The first is the concept of a physical layout, which specifies just the physical state of the world, and the description of perception in terms of layouts. The second is axiom A.8, which limits an agent's knowledge of contingent physical facts of certain types to that which can be deduced from his perceptions, together with physical laws. This axiom, however, places constraints on an agent's knowledge that are often unacceptably strong. In most practical problems, the positive inference, "Since A sees φ, A knows φ," is more important than the negative inference, "Since A doesn't see φ, A cannot know φ;" hence, it may be best to drop axiom A.8 or to restrict its scope.

The theory of the connection between perception and knowledge is largely independent of the physical theory. In particular, axioms A.1 through A.7 may be used together with any set of physical laws, and with any laws delimiting the powers of vision in terms of bv-compatibility. Axiom A.8 can also be used with any definition of physical laws and laws of vision, but it may put strong limits on the range of agents' knowledge of physical properties; that is, it may require that either all agents always know the value of a particular physical property or that no agent ever know it.

Acknowledgements

Thanks to Leora Morgenstern and John Sterling for helpful suggestions and to Ralph Grishman and Larry Manevitz for pointing out the difficulties discussed in section 5.

References

[Appelt, 82] Douglas Appelt. Planning Natural-Language Utterances to Satisfy Multiple Goals. SRI Technical Note 259, 1982.
[Allen and Perrault, 80] James Allen and C. Raymond Perrault. Analyzing intention in utterances. Artificial Intelligence, vol. 15, 143-178, 1980.
[Barwise and Perry, 82] Jon Barwise and John Perry. Situations and Attitudes, MIT Press, 1982.
[Davis, 84] Ernest Davis. An Ontology of Physical Action. NYU Tech. Rep. 123, 1984.
[Davis, 88] Ernest Davis.
Inferring Ignorance from the Locality of Visual Perception. In D. Miller and D. Atkinson (eds.), Planning Systems for Mobile Robots, Prentice Hall, to appear, 1988.
[Garvey, 76] Tom Garvey. Perceptual Strategies for Purposive Vision, SRI AI Center Tech. Rep. 117, 1976.
[Hintikka, 69] Jaakko Hintikka. On the Logic of Perceptions. In Models for Modalities, D. Reidel Publishing, Dordrecht, Holland, 1969.
[Hintikka, 71] Jaakko Hintikka. Semantics for Propositional Attitudes. In L. Linsky (ed.), Reference and Modality, Oxford University Press, 1971.
[Konolige, 86] Kurt Konolige. A Deduction Model of Belief, Pitman Publishing, London, 1986.
[Levesque, 84] Hector Levesque. A Logic of Explicit and Implicit Belief. In Proc. AAAI-84, 198-202, 1984.
[McDermott, 82] Drew V. McDermott. A Temporal Logic for Reasoning about Processes and Plans. Cognitive Science, vol. 6, 101-155, 1982.
[Moore, 80] Robert Moore. Reasoning about Knowledge and Action. SRI Technical Note 191, 1980.
[Morgenstern, 88] Leora Morgenstern. Foundations of a Logic of Knowledge, Action, and Communication. NYU Ph.D. Thesis, 1988.
[Reiter and Mackworth, 87] Ray Reiter and Alan Mackworth. The Logic of Depiction. Tech. Rep. 87-24, U. British Columbia Computer Science Dept., 1987.
[Schank, 75] Roger C. Schank. Conceptual Information Processing, North Holland, Amsterdam, 1975.
[Selfridge, 82] Peter Selfridge. Reasoning about Success and Failure in Aerial Image Understanding, Tech. Rep. 103, U. Rochester Computer Science, 1982.
Jahir Pabon and David Gossard
Massachusetts Institute of Technology
77 Massachusetts Avenue, Room 3-449
Cambridge, MA 02139

A central problem inherent to autonomous systems is the absence of an external reference frame in which sensory inputs can be interpreted. It is hypothesized that, in natural systems, sensory information is transformed into a consistent internal representation that serves as an internal invariant reference frame. This paper presents a hierarchical connectionist network for learning coordinated motion in an autonomous robot. The robot model used in the adaptation studies consists of three subsystems: an eye-like visual receptor, a head, and an arm. The network contains a hierarchy of adaptive subnetworks for processing sensory information. The performance of the hierarchical system was observed to improve towards an asymptotic value. The performance was found to be one order of magnitude better than that of non-hierarchical systems. This suggests that the intermediate layers may be serving as an internal invariant reference frame for the robot.

1. Introduction

Autonomous systems research is an area of large practical importance. Autonomous robots have the potential to play an important role in factory automation as well as in unmanned missions for space and undersea exploration. Autonomous systems also provide a vehicle for the study of natural, i.e. human, systems. Such research has two potential benefits: increased insight into perception and control strategies used by humans, and the identification of subsets of those strategies that can be efficiently implemented in autonomous robots.

A central problem inherent to autonomous systems is the absence of an external reference frame in which sensory inputs, such as positions, can be interpreted.
In particular, an autonomous system must be able to interpret sensory information in a way which takes into account the relative positioning of sensory and motor components with respect to the system's external environment. Sensory information must be transformed into a consistent internal representation that serves as an internal invariant reference frame. Furthermore, it is desirable that the system learn this representation in order to compensate for unforeseen changes in the environment or in the system itself following growth or damage.

This paper reports on a hierarchical system of parallel distributed processing elements for producing coordinated motion in an autonomous robot. We consider it very important for an autonomous system to be able to "close the motor coordination loop" adaptively. Closing the loop involves two processes: learning to form an internal reference frame on which sensory input can be consistently interpreted, and learning to use that internal interpretation to generate voluntary movement. Simulation studies demonstrate that the system learns an internal invariant reference frame and uses that internal reference frame to produce coordinated motion.

2.

Most of the research work in adaptive control techniques for robotic applications (Koivo and Guo, 1983; Leininger, 1984; Dubowsky and Kornbluh, 1985; Atkeson and McIntyre, 1986; Slotine and Li, 1987) is based on the use of some external reference frame to measure performance errors of the system and is therefore not directly applicable to the study here.

Networks of parallel distributed processing elements, i.e. "neural networks", possess a number of useful computational properties (Grossberg, 1988; Hopfield, 1982; Kohonen, 1984; Rumelhart, 1986).
Several preliminary studies have partially shown that these networks have the potential to accomplish adaptation in autonomous systems (Barto, 1984; Bullock, 1988; Grossberg and Kuperstein, 1986; Kawato, 1987; Kuperstein, 1987; Pabon and Gossard, 1987b; Psaltis, 1987). Kuperstein (1987) presented a connectionist model that adaptively controls a visually guided robot arm to reach target spots in three dimensions. The visual input in his model, however, was produced by two cameras preprogrammed to point at the target. This, in essence, provided an external reference frame for the system. The camera orientations were subsequently encoded into the activation of two two-dimensional arrays of units. These activation maps were then used as the input to the adaptive network that sent signals to the arm actuators to move the arm endpoint to the desired position.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Kuperstein makes reference to his previous work (Grossberg and Kuperstein, 1986) on adaptive control of saccadic eye movements as a possible mechanism to close the loop, but no model of the complete process was presented. In addition, no mathematical support for convergence of the adaptation method was presented.

3. Neural Networks: A Short Review

Connectionist networks are arrays of simple, neuron-like, highly interconnected computing elements. One of the basic network architectures is the two-layer feed-forward network. Figure 1 shows the topology of this network. The network consists of a set of input units, x, connected to a set of output units, y, through a set of weights, w. The activation of the output units in the network is given by

y_i = f( Σ_{j=1..n} w_ij x_j )

where x_j is the activation of unit j connected to unit y_i, w_ij is the strength (weight) of the connection from unit x_j to y_i, and f is an output activation function.
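A minimal sketch of such a two-layer network, together with the delta-rule weight update discussed in this section. The activation f(u) = (1 - e^-u)/(1 + e^-u) is the one used later in the simulations; the learning-rate value is an illustrative choice.

```python
import math

def f(u):
    """Output activation used in the paper's runs: (1 - e^-u)/(1 + e^-u)."""
    return (1 - math.exp(-u)) / (1 + math.exp(-u))

def forward(w, x):
    """y_i = f(sum_j w_ij * x_j) for a two-layer feed-forward network."""
    return [f(sum(wij * xj for wij, xj in zip(row, x))) for row in w]

def delta_rule(w, x, r, alpha=0.1):
    """One delta-rule update: dw_ij = alpha * x_j * (r_i - y_i) * f'(u_i).
    For this activation, f'(u) = (1 - f(u)^2) / 2, so f' follows from y."""
    y = forward(w, x)
    return [[wij + alpha * xj * (ri - yi) * (1 - yi * yi) / 2
             for wij, xj in zip(row, x)]
            for row, ri, yi in zip(w, r, y)]
```

Repeated updates drive the output y toward the reference r, which is the sense in which the network "learns" the mapping.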
Adaptation in these networks is achieved by regulating the strength of the connections (weights) among the network units. One well-known adaptation method is that of backpropagation (Rumelhart, 1986). During the learning process, the weights w_ij are modified so as to minimize the difference between the output activation y_i and a reference output r_i. This is achieved by using the expression:

δw_ij = α x_j (r_i - y_i) f′

where δw_ij is the change in the weight w_ij, α is the learning rate parameter, and f′ is the derivative of the output activation function. This adaptation law is frequently referred to as the delta rule. The learning parameter, α, determines the rate and performance of the adaptation process. It should be emphasized that this network is just one building block in the control structure, and that the reference signal, r_i, although external to the network, is produced by components internal to the autonomous system that form part of the same structure.

Two-layer feed-forward networks are useful as building blocks in applications involving adaptive mappings. The application in question here is that of learning motion control. In this application, the input units encode commands and sensory signals; the output units produce motor signals that are fed to plant actuators to generate motion; and the reference signals represent desired plant responses.

4. Model and Control

Figure 2 presents the robot model used for our adaptation studies. It consists of three hierarchical subsystems: an eye-like visual receptor, a head, and an arm. Each subsystem has two degrees of freedom: the eye subsystem can rotate in two directions, the head subsystem can translate in two directions, and the (planar) arm has two links with rotational joints. In each degree of freedom, the position is controlled by an antagonist pair of muscle-like actuators, i.e. opposing springs whose stiffness is regulated by control (activation) signals.
In the eye, for example, changing the activation signals to a pair of actuators causes a rotation of the eye to a new position where the spring forces are in equilibrium. See Figure 3. The eye contains a population of light receptor units arranged in a two-dimensional array, called here the retina. The level of activation of each receptor is determined by the amount of light incident upon it. Thus a target light spot impinging upon the retina generates a distribution of activations across the units in which the most active units will be those closest to the point where the light strikes the receptor array. This distribution of activation is called the retinal map. In this study a decaying exponential (gaussian) distribution was assumed and is described in the Appendix. An on-center off-surround receptive field similar to those in human retinal receptors could also be used and would produce similar results. The sets of activation signals sent to the eye and head actuators were similarly encoded into 2D arrays of units. These arrays are called the eye position map and the head position map respectively.

The protocol for the learning experiments was the following. During the learning phase, the current endpoint of the arm is used as the target. A random signal generator is used to supply activation signals to the arm actuators so as to span the complete arm workspace. The system's goal is to use the sensory information (retinal map, eye position map, and head position map) to generate command signals to the arm actuators which match those produced by the random generator, so as to keep its endpoint in the original position. The distance between desired and actual arm endpoint positions is taken as the error, a measure of the system performance. Learning is assumed to be complete when the average error over the arm's workspace is sufficiently small (e.g., less than 5% of the characteristic length of the workspace).
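The retinal, eye-position, and head-position maps all use the exponential population encoding given in Appendix A. The grid size and unit ranges below are illustrative (the paper's simulations use 5*5 maps and an encoding parameter of 30).

```python
import math

def encode(sigma, gamma, n=5, s=30.0,
           sigma_range=(-1.0, 1.0), gamma_range=(-1.0, 1.0)):
    """Encode the continuous pair (sigma, gamma) as an n x n activation map:
    x_ij = exp(-s * [((sigma - sigma_i)/d_sigma)^2 + ((gamma - gamma_j)/d_gamma)^2]),
    where (sigma_i, gamma_j) are the characteristic values of unit ij."""
    d_s = sigma_range[1] - sigma_range[0]
    d_g = gamma_range[1] - gamma_range[0]
    centers_s = [sigma_range[0] + d_s * i / (n - 1) for i in range(n)]
    centers_g = [gamma_range[0] + d_g * j / (n - 1) for j in range(n)]
    return [[math.exp(-s * (((sigma - si) / d_s) ** 2 +
                            ((gamma - gj) / d_g) ** 2))
             for gj in centers_g] for si in centers_s]
```

The most active unit is the one whose characteristic value is nearest the encoded pair, so the map peaks at the encoded position and falls off smoothly around it.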
After this, a testing phase can be carried out with visual targets presented to the system in the form of light spots on the viewing plane.

The adaptive control scheme used is presented in Figure 4. The thick lines denote that the given signal is encoded onto a population of units (a 2D array). Given current eye and head positions (centered at the beginning of the learning process), the target generates a retinal map on the visual receptor array. The retinal map and the current eye position map are input to a first adaptive network (number 1 in Figure 4). The output of this first network evolves, during learning, into a representation of the target that is invariant with respect to eye orientation, i.e. given a fixed target and head position, this signal remains constant independent of changes in orientation of the eye. This signal is called TPME (for Target Position Map invariant with respect to Eye orientation). The TPME is then input to a second adaptive network (number 2 in Figure 4) which generates the motor commands to the eye actuators. After the eye reaches its new orientation, an error signal is generated by the retinal unbalance decoder, which weighs the eccentricity of the retinal map. The error signal is used to modify the connectivity matrix of network 2 using the delta rule. Backpropagation is then used to modify the connections in network 1. A simple network architecture to measure the eccentricity of an activation map is described in (Pabon, 1987a).

The TPME and the head position map are used as input to a third network (number 3 in Figure 4). The output of this network evolves, during learning, into a representation of the target that is invariant with respect to head position, i.e. given a constant target, this signal remains constant independent of changes in orientation of the eye or position of the head. This signal is called TPMH (Target Position Map invariant with respect to Head position).
The TPMH is then input to a fourth adaptive network (number 4 in Figure 4) which generates the motor commands to the head actuators. After the head reaches its new position, an error signal, obtained by weighing the eccentricity of the eye position, is used to modify the connectivity matrix of network 4 according to the delta rule. Backpropagation is again used to modify the connections in network 3.

The TPMH is also used as input to a fifth network (number 5 in Figure 4). The outputs from network 5 are the command signals to the arm actuators. The arm will then move, attempting to reach the target. During learning, the output from network 5 is compared to the random signal that originated the arm movement. The difference is then used to modify the connectivity matrix of network 5.

5. Results

A number of simulation studies were conducted to examine the qualitative and quantitative behavior of the model and its control. In the first study the eye subsystem alone was examined. The objective was to select appropriate values for the model parameters. The results from this study were presented in (Pabon and Gossard, 1987b), where it was found that the values of the learning rate parameter, α, proposed by the authors were always in reasonable agreement with the best values derived from the simulations.

5.1 Entire System with Internal Layers

The results from the eye simulations were used in a second study, where the eye, head and arm subsystems were examined working together as proposed in Figure 4.
The parameter values used in the simulations of the entire system were the following (length values are normalized and therefore nondimensional): eye radius, 0.25; distance from eye center of rotation to the viewing plane, 3.0; head workspace, square of dimension 4*4 centered about the base joint of the arm; arm link length, 1.0; arm workspace, defined by the joint limiting angles (0°, 135°); retina composed of a square array of 5*5 receptor units; eye position encoded into a square array of 5*5 units; head position encoded into a square array of 5*5 units; encoding parameter, 30 for all maps; output activation function, (1 - e^-u)/(1 + e^-u).

The error of the arm (i.e. the global error) is defined as the distance between the target point and the actual arm endpoint position, expressed as a percentage of the arm link length. The time history of the error of the arm position is shown in Figure 6(a)-(b). It can be seen that the performance of the system approaches an asymptotic value. The steady state error, defined as the average error over the last 10% of a run of 10^4 iterations, was 8.4% of the arm link length.

5.2 Process Without Internal Layers

Simulations were also carried out of an alternative adaptive controller with no internal layers, i.e. all the sensory information was fed directly into a two-layer network which generated the motor commands to the arm. This controller is shown in Figure 5. Using the same number of units per map and a similar number of iterations, the steady state error of the arm (global error) in the model without internal layers was on the order of 50%. This is about one order of magnitude larger than the error observed in the model with internal layers.

6. Conclusions

The steady state error of the arm decreases asymptotically towards a small value (~8% of the arm's length).
The asymptotic value to which the system’s error tends is sensitive to several parameters: the number of units used in the encoding maps, n , the encoding parameter, s, and the learning rate parameter, a. The number and structure of internal layers play an important role in the efficiency of the adaptation process. Specifically, the fact that the performance of the system with internal layers is so much superior to the performance of the system without internal layers suggests that the system is using networks 1 and 3 as what amount to an internal invariant representation of its environment. The TPM-E described here (i.e. the output of adaptive network 1 in Figure 4) was inspired by experimental evidence that activation levels of certain cells in the posterior parietal cortex (of the monkey brain) are a function of both retinal maps and current eye position. The control system presented here is an extrapolation of this basic idea to handle additional degrees of freedom. The successful performance of the system studied here suggests that it is possible that a set of cells exists whose activity further correlates the activity of those cells in the posterior parietal cortex with current head position, corresponding to the TPM-H of the system here. Pabon and Gossard 793 Parallel distributed controls for artificial systems would be very robust. The weights in the control system studied here were initialized to random values. Through learning, they eventually “encoded” the kinematics of the particular robot geometry. The same control system could, without modification, learn other robot geometries. Such systems could thus compensate for unforeseen changes in the environment or in the robot itself following growth or damage. The performance of these systems would also degrade gracefully with the loss of individual units. Appendix A. 
Encoding of Continuous Variables on a Population of Units

A pair of continuous variables (σ,γ) in the ranges (σmax, σmin) and (γmax, γmin) respectively can be encoded as the activation of a two-dimensional set of units {x_ij}, i,j = 1,...,n, using the encoding function:

x_ij = exp( -s [ ((σ - σ_i)/Δσ)^2 + ((γ - γ_j)/Δγ)^2 ] )

where s is the encoding parameter; Δσ = σmax - σmin; Δγ = γmax - γmin; and (σ_i, γ_j) are the characteristic values of unit ij (the value of the pair (σ,γ) that produces a maximum activation of the unit). The encoding parameter, s, determines the degree to which the activation is distributed across the units.

References

[1] Atkeson, C. G., and McIntyre, J., "Robot trajectory learning through practice", Proc. of IEEE Conf. on Rob. and Autom., 1986.
[2] Barto, A. G., Sutton, R. S., and Anderson, C. W., "Neuronlike elements that can solve difficult learning control problems", IEEE Trans. on Systems, Man, and Cybernetics, Vol. SMC-14, 1984, 834-846.
[3] Bullock, D., and Grossberg, S., "Neural dynamics of planned arm movements: Emergent invariants and speed-accuracy properties during trajectory formation", submitted to Psychological Review, 95(1), 1988.
[4] Dubowsky, S., and Kornbluh, R., "On the development of high performance adaptive control algorithms for robotic manipulators", in Robotics Research: The Second International Symposium, Hanafusa, H., and Inoue, H., eds., MIT Press, Cambridge, MA, 1985.
[5] Grossberg, Stephen, "Nonlinear Neural Networks: Principles, Mechanisms, and Architectures", Neural Networks, Vol. 1, No. 1, pp. 17-61, 1988.
[6] Grossberg, S., and Kuperstein, M., Neural Dynamics of Adaptive Sensory-Motor Control, North-Holland, New York, 1986.
[7] Hopfield, J., "Neural networks and physical systems with emergent collective computational abilities", Proc. Natl. Acad. Sci., vol. 79, pp. 2554-2558, 1982.
[8] Kawato, M., K. Furukawa and R. Suzuki, "A Hierarchical Neural-Network Model for Control and Learning of Voluntary Movement", submitted to Biol. Cybern., 1987.
Figure 1. A two-layer feed-forward network.

794 Robotics

Figure 2. The Autonomous Robot.
Figure 3. The eye subsystem.
Figure 4. The adaptive control loop.
Figure 5. Adaptive process without internal layers.
Figure 6. Time evolution of the arm position error.
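For concreteness, the population encoding of Appendix A can be sketched in code. This is a minimal illustration; the even spacing of the characteristic values over each range is an assumption, since the paper does not specify how they are placed:

```python
import math

def encode(theta, gamma, n, s, theta_range, gamma_range):
    """Encode a pair of continuous variables as activations of an n x n
    grid of units: unit ij has characteristic value (theta_i, gamma_j)
    and responds with a Gaussian-like bump of width controlled by s."""
    t_min, t_max = theta_range
    g_min, g_max = gamma_range
    d_theta = t_max - t_min   # delta-theta in the appendix
    d_gamma = g_max - g_min   # delta-gamma in the appendix
    # Characteristic values, assumed evenly spaced over each range.
    thetas = [t_min + d_theta * i / (n - 1) for i in range(n)]
    gammas = [g_min + d_gamma * j / (n - 1) for j in range(n)]
    return [[math.exp(-s * (((theta - thetas[i]) / d_theta) ** 2 +
                            ((gamma - gammas[j]) / d_gamma) ** 2))
             for j in range(n)] for i in range(n)]
```

A unit whose characteristic value coincides with the input reaches the maximum activation of 1; larger s concentrates the activation on fewer units.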
Ian Douglas Horswill and Rodney Allen Brooks
MIT Artificial Intelligence Lab
545 Technology Square
Cambridge, MA 02139

Abstract

We describe a system that approaches and follows arbitrary moving objects in real time using vision as its only sense. The system uses multiple simple vision computations which, although individually unreliable, complement each other in a manner mediated by a situated control network. The objects can move over a wide variety of backgrounds, including those with strong secondary reflections from light sources. Previously unseen objects can be tracked against backgrounds that include other moving objects. Computations are carried out in image coordinates at roughly 5 frames per second on a Lisp machine. The camera need not be calibrated or precisely aligned, and the system can tolerate a wide range of dynamically changing actuator response characteristics.

1 Introduction

In this paper we describe a robot that follows arbitrary moving objects in real time using vision as its only sense. The system is constructed from simple components using simple hardware and runs at approximately five frames per second. It has been tested by the authors and several others with a variety of objects and lighting conditions. Testing was performed in normal environments inhabited by people. Neither the environments nor the behavior of the inhabitants were altered for the experiments.

Approach-and-follow is an example of a simple navigation behavior. Having identified an object as being in some sense interesting, the agent moves so as to keep the object centered in its visual field. The agent effectively computes and follows a smoothed approximation to the object's path. Alternatively, the object may be viewed as "dragging" the agent as it moves.
Approach-and-follow is simple enough to be implemented very efficiently, yet can operate in a chaotic and unpredictable world. The short path from sensors to effectors allows the system to be highly responsive in dynamic situations.

The system is inspired by the way a kitten plays with a ball. It is largely passive at first, but reacts to the object's motion by following at a respectful distance. People experimenting with the robot during demos sometimes "play" with the robot as if it were a trained animal, sometimes lead it around, and sometimes herd it by causing it to back away from the object. Thus approach-and-follow allows complex and interesting dynamic behavior in spite of its simplicity of implementation.

2 Implementation

To implement an agent which exhibits approach-and-follow behavior, it is necessary to divide the scene into objects (segmentation). The agent must also choose a specific object to follow (triggering). Having done this, the agent must maintain the identity of the object from frame to frame (tracking). Finally, the motors must be driven in such a way as to follow the object in question (motor control). In our system, each of these tasks is implemented by one or more asynchronous processes. The system is implemented on a Lisp machine with a standard frame-grabber and the MIT AI Lab Allen robot. Processes are simulated as Lisp functions which are called in a round-robin fashion. Wires connecting them are simulated with global variables. Visual input is obtained by subsampling the grabbed image to 32 x 28. The camera is uncalibrated¹, but must face roughly forward. The Lisp machine sends motor commands to the robot via an RS-232 serial line.
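The process-and-wire simulation just described (functions polled round-robin, wires as global variables) can be sketched as follows. This is a hypothetical illustration in Python rather than the original Lisp, and the box names and wiring are invented for the example:

```python
# Minimal sketch of the simulated situated control network: each "box"
# is a function that reads and writes named wires, and the scheduler
# calls every box once per tick in a fixed round-robin order.
wires = {"in": 0, "doubled": 0, "out": 0}

def doubler():
    # Illustrative box: reads one wire, writes another.
    wires["doubled"] = wires["in"] * 2

def offset():
    wires["out"] = wires["doubled"] + 1

boxes = [doubler, offset]

def step():
    for box in boxes:  # round-robin: one call per box per tick
        box()

wires["in"] = 3
step()
# wires["out"] now holds 7
```

Because the boxes share no state other than the wires, any box can be replaced or rewired without touching the others, which is the modularity the paper relies on.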
Figure 1 shows the processes and connectivity of the system. Fat lines represent visual pathways between processes, while thin lines represent simpler control signals. Visual information enters on the left side, while motor commands exit the right side and feed back through the environment.

2.1 Segmentation

The system must divide the visual world into visual objects. We make the distinction between visual objects and physical objects because they need not lie in one-to-one correspondence. It is sufficient that the visual object being tracked correspond to a single physical object over time². In our system, we take visual objects to be connected regions of the image. Segmentation is the process of dividing the image into such regions.

There exists a large literature on segmentation (see [BB82] for a survey). For our purposes a very simple technique is sufficient. We first apply a local predicate to all points in the image and then label connected components of ones in the resulting binary image using a blob coloring algorithm [BB82, page 151]. Many possible predicates could be used. For example, we could use a simple threshold on the grey level. This is only useful for following objects of known homogeneous intensity however. An alternative would be local homogeneity; regions of homogeneous intensity would be grouped into connected regions. Local homogeneity is relatively stable with respect to camera noise in the sense that most pixels do not change.

¹The original camera mounting consisted of a wedge of newspaper and some duct tape.
²In this sense, it is not even necessary to divide the world into objects, only "the-object-I-am-tracking" and "everything-else".

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Figure 1: Structure of the approach-and-follow system
Figure 2: Leaking of homogeneous regions
The connectivity of the binary image formed by applying the local homogeneity test to each pixel is highly unstable however. This is due to the fact that images of objects consist of large homogeneous regions separated by relatively thin edges. When camera noise perturbs the values at edge points, changes in connectivity can result, perhaps causing the target object to be considered a part of the floor. Figure 2 shows the binary image of a white sheet of paper on an untextured floor with clutter in the background. Black pixels correspond to homogeneous regions. The top edge of the paper and one pixel of the bottom edge have been blurred out, causing the interior to be joined to the floor. The problem is solved by using a local heterogeneity constraint, the inverse of the previous constraint. Since connected regions of change tend to be separated by larger distances than connected regions of homogeneity, the topology of the resulting image is stable.

The performance of the system is relatively insensitive to the actual operator used to detect heterogeneity³. Similarly, the system is relatively insensitive to the choice of threshold. The segmentation threshold was set once, and has not needed to be modified since. The segmentation is implemented by two processes: the heterogeneous box continuously computes the binary image of heterogeneous points, and the coloring box computes the connected components.

³Our current implementation computes the sum of the absolute values of the differences in the x and y directions, for efficiency reasons. It applies this operator not to the image itself, but to its pixel-wise logarithm, so as to normalize for local intensity.

2.2 Tracking

Tracking is implemented using a matcher box and a latch.
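A minimal sketch of the segmentation stage of Section 2.1 — a local heterogeneity predicate followed by blob coloring — is given below. The threshold value and the 4-connected flood fill are illustrative assumptions, and the pixel-wise logarithm used by the real operator is omitted for brevity:

```python
def heterogeneity(img):
    """Binary image marking locally heterogeneous pixels: the sum of
    absolute x- and y-differences exceeds a threshold."""
    THRESH = 10  # illustrative; the real threshold was tuned once
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):       # last row/column left at zero
        for x in range(w - 1):
            d = abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
            out[y][x] = 1 if d > THRESH else 0
    return out

def color_blobs(binary):
    """Label 4-connected components of ones: a simple stand-in for the
    blob coloring algorithm of [BB82]."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                next_label += 1
                stack = [(y, x)]
                while stack:  # iterative flood fill
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and binary[cy][cx] \
                            and not labels[cy][cx]:
                        labels[cy][cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, next_label
```

Each labeled component is one candidate visual object; the matcher described next picks among them.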
The matcher continuously outputs the silhouette of the blob in the coloring which has maximal overlap with the other input (usually the matcher output from a previous iteration), subject to the constraint that the input and output blobs cannot vary in area by more than a constant factor. This implements the tracking of a blob. The matcher also asserts a control line whenever it finds a match. The latch is a state element which holds the silhouette of the object being tracked from a previous match. When a new match is found, the latch updates itself. When no match is made, the latch retains its state. The two processes, matching and latching, track the current object wherever it moves in the visual field. If the object is briefly lost, it can be found again provided that it has not moved too far. The latched silhouette of the truck from figure 3 is shown in figure 3.

Tracking is bootstrapped by inserting a multiplexer between the output of the latch and the input of the matcher. To attend to a new object, the latch is bypassed and the silhouette of the new object is injected into the matcher. More complicated matching algorithms are imaginable. The current algorithm has the disadvantage that it requires the object's silhouette to overlap from frame to frame, thus constraining the velocity of the tracked object. It is, however, fast. Since the current system runs at approximately five frames per second, a one-foot object is only constrained to move slower than five feet per second. This speed is considerably faster than the motors can drive the robot. Figure 3 shows a snapshot of the states of the retina-bus, the output of the heterogeneous box, and the latch, during the course of tracking a toy truck.

Horswill and Brooks 797

Figure 3: Grey-scale image, heterogeneous pixels, and silhouette of a toy truck

2.3 Motor Control

Having tracked the object from frame to frame, the system can now drive its motors so as to follow the object.
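The matcher and latch just described can be sketched as follows, with blobs represented as sets of pixel coordinates. The area-ratio constant is an assumed value; the paper does not state the factor used:

```python
def match(candidates, previous, max_ratio=2.0):
    """Return the candidate blob (a set of pixel coordinates) with
    maximal overlap with the previously latched silhouette, rejecting
    blobs whose area differs by more than a constant factor.
    max_ratio is illustrative, not the paper's constant."""
    best, best_overlap = None, 0
    for blob in candidates:
        ratio = len(blob) / len(previous)
        if ratio > max_ratio or ratio < 1.0 / max_ratio:
            continue  # area changed too much to be the same object
        overlap = len(blob & previous)
        if overlap > best_overlap:
            best, best_overlap = blob, overlap
    return best  # None means "no match"

def latch_update(latched, matched):
    """State element: update on a match, otherwise hold the old value."""
    return matched if matched is not None else latched
```

Returning `None` plays the role of the control line: downstream boxes (boredom, crash) can count consecutive failures without the matcher needing any state of its own.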
To do this the system must have some understanding of the relative position of the object. This need not mean a precise set of coordinates however. The information will be used only to determine whether the robot should turn left or right, move forward or backward, etc. If we constrain the camera to point roughly forward and down at a flat floor, then the robot need only keep the object centered in its visual field⁴. In effect, the object drags the robot as it moves.

Motor control is accomplished by six asynchronous processes. The centroid box computes the centroid of the object being tracked and asserts its x and y image coordinates on the object-x and object-y wires, and its area on the object-area wire. The rotate box drives the turning motor left or right so as to reduce the difference between the value on the object-x wire and the center of the screen. The advance box drives the drive motor so as to reduce the difference between the object-y wire and an arbitrary screen line (scan line 18 is presently used within a 28-line image). Two extra boxes were added after the fact to ensure that the advance box did not get into trouble. The advance-hold box inhibits the output of the advance box until the rotate box has had a chance to move the object from the left or right edge of the screen. This prevents the combined advance and rotate motions from running the object off the edge of the screen. The advance-hold box and the rotate box do not communicate directly. Instead, the advance-hold box samples the object-x wire. In effect, they may be said to communicate through the world. The crash box samples the output of the matcher and inhibits the advance box when the object being tracked runs off of the bottom of the screen. This saves the robot when the object it is tracking overlaps a much larger object such as a baseboard, and the segmenter joins them.
When the robot tries to center the baseboard in its field of view, the advance box would sometimes drive the robot into the wall. Finally, the motor-control box actually drives the motors. It also stops the robot when the area of the object is zero (i.e. when no object is being tracked).

⁴It is also necessary to assume here that the object is resting on the floor.

2.4 Triggering

We have described all of the machinery necessary to follow a visual object, but we also need machinery to control that machinery. That is, we need machinery to initiate following behavior when an "interesting" object is found, and to terminate following when it is lost. Rather than have a separate mechanism which detects interesting objects, we reuse the existing components as follows. We add a set of processes to detect "interesting" pixels, and use the matcher already in the tracker to find the interesting object. Using this system, it is not even necessary to check whether there is an interesting object to be tracked. The system has two states: tracking and bored. When the system is tracking, it has an object and the multiplexer feeds the matcher with the output of the latch. When the system is bored, the multiplexer feeds the matcher with the set of "interesting" pixels. If there are no interesting pixels, then no match will occur and the system will stay bored. If there are interesting pixels, they will be matched to the interesting object and the system will switch to the tracking state. Boredom is determined by another set of processes monitoring the success of the matcher. Triggering is thus decomposed into candidate detection and attention control.

Candidate detection is accomplished by looking for pixels whose grey levels change significantly over time. These pixels are assumed to correspond to moving objects. Changing pixels are detected by comparing two versions of the camera input, one of which is fed through a delay line.
The comparison is made using a thresholded difference operator by the motion box.

Attention is controlled by a boredom box which forces the tracker into bored mode by switching the multiplexer whenever the matcher has failed for two consecutive frames. This allows it to tolerate occasional matching failures, but prevents it from locking on to camera noise. However, if someone walks in front of the object being tracked, the system will get bored and retarget before the object becomes unoccluded. For this reason, an attention box was added later. The attention box waits until it has seen seven successive matches and then raises the boredom box's threshold to four frames, or slightly less than a second. The attention box is reset when the boredom wire is asserted, thus resetting the threshold of the boredom box.

The system has one extra component, the wall box, which further filters the candidates by suppressing any object which runs off the top of the screen. The suppression is accomplished by clearing the tracker's latch. The wall box has the effect of preventing the system from trying to follow objects on walls (see below).

3 Experimentation

The system was tested in two areas of the MIT AI Lab which are regularly inhabited by people: a small laboratory crowded with chairs and equipment, and a larger lobby area. The areas were not modified, other than to clear paths for the robot. The small laboratory is carpeted, while the lobby has a shiny linoleum floor which reflects images of the overhead lights. It is necessary that the floor not have significant texture at the same scale as the objects, although the images of the lights on the linoleum floor have not proven to be a problem. The system was tested under conditions ranging from early morning when the areas were deserted to mid-afternoon when members of other research groups were rushing about preparing for a demonstration.
Workpeople and researchers unfamiliar with the system would regularly walk in front of the robot or have conversations in front of it during the course of the experiments. The system has been tested with and without sunlight, and with varying amounts of overhead lighting.

The system has tracked a black videotape box, a pink plastic flamingo, a dark grey notebook, and a white plastic food container as they were dragged along by a bright white string. Later, a radio-controlled toy truck was used. The system was able to follow all of the objects. It was relatively insensitive to the relative contrast of the object and the background. For example, the blue toy truck is easily tracked on the blue linoleum floor. The grey notebook is also easily found on the carpet, which is a close shade of grey. The major constraint is simply that the object have significant visual angle so that it may be found in the low-resolution image.

Several things can confuse the system however. The biggest problem is the overlap of objects. The present segmentation algorithm joins adjacent or overlapping objects into a single object, which is usually rejected by the matcher as being too large. Thus the tracked object is lost when it moves too close to furniture or the baseboards of walls to be distinguished at low resolution. Sometimes the joined objects do get through the matcher however, and this can lead to disastrous results. Since the introduction of the crash box, however, there have been no such mishaps. A particularly annoying example of the problem is that the string dragging the objects is usually bright enough to be seen against the relatively dark background, in spite of its thinness. Sometimes it is seen as a part of the object and the system will track along the string to the person dragging it. Sometimes it will just pan back and forth along the string.
The string is thin enough, however, that it is lost eventually, and the system retargets.

The system as yet has no notion of collision avoidance. This is usually not a problem because it approximates the path of the object being followed, a path assumed to be clear. There are three cases where this has led to trouble. The first is when it follows an object which does not rest on the floor, such as a wall poster. This is partially alleviated by requiring that no object run off of the top of the screen (the wall box). The second case is when the object "pushes" the robot backward into a wall. The final case is when the robot tries to cut a corner through a doorway. The farther the robot is from the object being tracked, the more it smooths the path of the object. When the object is far away and takes a tight corner, the robot can smooth its path into the door frame. Thus while collision avoidance is simplified by the robot's pattern of interaction with the world, a backup system is called for.

In another set of experiments, static objects were followed either by forcibly gating their silhouettes into the matcher⁵ or by waving a hand in front of them to attract the system's attention. Books and magazines with different amounts of texture on their covers could be approached this way. When several objects were placed along a path and successively gated into the matcher, the robot could be made to follow a static path. At one point, one of the authors (IDH) gated the silhouettes of the legs of successive people talking in the lobby. The robot drove up to each and waited quietly. Thus while the current triggering components only facilitate patterns of interaction such as leading, herding, and play, other triggering mechanisms could implement other patterns of interaction. Such triggering mechanisms and the patterns of interaction to which they lead are presently being explored.
4 Related Work

There are a number of projects which use situated control networks. These include our own subsumption networks [Bro86], which consist of graphs of finite state machines augmented with timers, the REX-based networks of simulated synchronous gates of [RK88], which have also been interfaced with a real time vision system [III87], and the Pengi nets of simulated synchronous gates of [AC87], which have relied on simulated vision processors.

The most relevant of these is the work of Wells [III87]. He, though, concentrated on the more traditional role assumed of vision: to produce a three-dimensional model of the world. His work does share similarities with ours however; he bootstraps from one simple process to another, and uses the results from one time step to focus the attention of his vision algorithms in the next step.

Agre and Chapman [AC87] treat vision as a set of subroutines that can be invoked by a "central system" (funnily enough, the only state in their system is buried in which subroutine calls to vision are currently active). An abstraction barrier between the central and vision systems provides a clean interface. Wallace et al. have also developed a navigation system (for road following) which uses only computations in image coordinates [WST*85].

⁵This feature was added for purposes of debugging. The output of the segmenter is updated on the screen as it is computed. Objects can then be gated into the matcher by pointing with the mouse.

A case can be made that the approach we have taken to linking sensing to action is reminiscent of the cybernetic approaches of the fifties and sixties. This is true to the extent that our system opts for short paths from sensors to effectors and uses feedback through the world to quickly verify its decisions.
These are desirable traits, particularly for a system operating in an unpredictable environment, and a behavior such as approach-and-follow is simple enough to be easily implemented in this manner.

The major difference is that our system exhibits a modularity which aids its understanding and evolution. Cybernetics, and often connectionism, tend to treat the brain as an amorphous whole with little structure. The components of the system described in this paper can be modified individually without hurting the structure as a whole. New triggering components can be added, the matching algorithm can be modified, etc. In addition, we consider the system we have described here as just one component of a larger system controlling a mobile robot. Other modules with similar importance in a complete system might implement behaviors such as collision avoidance, exploration of a static world, etc. We have in fact extended the system described here to follow corridors. This work, and the approach-and-follow system, are described in detail in [Hor88].

5 Conclusions

We have presented a robot that follows moving objects in a variety of environments using vision as its only sense. The system is built from relatively simple and unreliable components. Some unusual aspects of its structure bear mention.

The system performs segmentation and motion analysis, tasks which are quite difficult to do well. We have not solved either problem. Our system solves only limited forms of the problems, and still does a poor job of each. Nonetheless, the system performs its task as a whole quite well. Two lessons can be learned from this. First, while the segmenter does a poor job of determining the set of pixels that corresponds to a given object, or even of finding all of the pixels in a given set of connected heterogeneous pixels, it does a perfectly adequate job of finding stable blobs attached to isolated objects, which is all it needs to do.
That is to say, it is adequate for the task. The same is true of the motion detector. It does a lousy job of finding all of the moving pixels, but it finds enough for the matcher to uniquely determine the moving object. Second, while the tracker may lose an object occasionally, the motion detector will usually find it again. The motion detector sometimes hallucinates motion, but it is usually in homogeneous areas. Since there are no objects in homogeneous areas, the matcher simply rejects the motion. Thus individually unreliable components can combine synergistically to form a more reliable whole.

It is also interesting that the system performs no planning or collision avoidance. While an account of these capabilities would be necessary for a full theory of navigation, our system avoids much of the problem through the structure of its interactions with the world⁶. In effect, it lets its target perform these tasks.

Finally, we have argued [Bro86] that it is possible to build systems that appear intelligent to observers yet have no central representations or loci of control. We have also argued [Bro87] that things such as central representations of the world are purely in the mind of the observer. A system situated and acting within the world does not need them. Ours is an example of such a system. Instead of representing the visual world in a single, central database, our system distributes the representation in a principled manner.

⁶As was mentioned above, some sort of extra collision mechanism is called for, perhaps a set of whiskers.

Acknowledgements

Support for this research is provided in part by the University Research Initiative under Office of Naval Research contract N00014-86-K-0685, and in part by the Advanced Research Projects Agency under Office of Naval Research contract N00014-86-K-0124.

References

[AC87] Philip E. Agre and David Chapman. Pengi: an implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 268-272, 1987.

[BB82] Dana H. Ballard and Christopher M. Brown. Computer Vision. Prentice Hall, 1982.

[Bro86] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14-23, March 1986.

[Bro87] Rodney Brooks. Intelligence without representation. In Proceedings of the Workshop on the Foundations of Artificial Intelligence, MIT, June 1987.

[Hor88] Ian D. Horswill. Reactive Navigation for Mobile Robots. Master's thesis, Massachusetts Institute of Technology, March 1988. A revised version to appear as a technical report of the MIT Artificial Intelligence Laboratory.

[III87] William M. Wells III. Visual estimation of 3-d line segments from motion - a mobile robot vision system. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 772-776, AAAI, July 1987.

[RK88] Stanley J. Rosenschein and Leslie Pack Kaelbling. The synthesis of digital machines with provable epistemic properties. In Joseph Halpern, editor, Proceedings of the Conference on Theoretical Aspects of Reasoning About Knowledge, pages 83-98, Morgan Kaufmann, 1988.

[WST*85] R. Wallace, A. Stenz, C. Thorpe, H. Moravec, W. Whittaker, and T. Kanade. First results in robot road-following. In IJCAI-85, 1985.
CUBRICON includes methodology to handle these problems. Some systems use default techniques to handle ambiguous pointing. These techniques include: (1) a point returns the entity represented by the "foremost" icon, where the system uses a mechanism to remember the order in which icons are "painted" on the display (i.e., which are further in the background and which are foremost in the foreground); (2) the icons or entities are assigned weights representing importance, and the icon with the largest weight is selected as the interpretation of an ambiguous point; or (3) the icon whose "center" is closest to the location pointed at is selected. Combinations of such techniques can also be used. A serious disadvantage of the above listed point-interpretation techniques is that it is difficult, if not impossible, for certain icons to be selected via a point reference. Such default techniques have deliberately not been used in the CUBRICON system. CUBRICON's acceptance of NL accompanying a point gesture overcomes the limitations of the above weak default techniques and provides a more flexible referencing capability.

CUBRICON also includes the ability to infer the intended referent when the referring dual-media expression is inconsistent or produces no apparent candidate referents. A dual-media expression is inconsistent when the natural language part of the expression and the accompanying point cannot be interpreted as referring to the same object(s). For example, the user might say "this SAM" and point to an airbase. A dual-media expression has no apparent referent when the user's point touches no icons (i.e., he points to an "empty" area).

The referent resolution problem has been addressed for systems that accept natural language only [Grosz, 1981, 1986; Sidner, 1983].
The problem of ambiguity, including referent ambiguity, is well recognized in natural language understanding [Hirst, 1988]. The problem of correcting reference identification failures during the natural language understanding process has been addressed using a relaxation technique [Goodman, 1985]. Generation of natural language references is addressed by Sondheimer et al. [1986] and McDonald [1986]. In contrast to these efforts, the work discussed in this paper addresses the problem of referent identification and reference generation for language consisting of combined natural language and deictic pointing gestures.

Related work includes the development of TEMPLAR [Press, 1986] at TRW and XTRA [Kobsa et al., 1986] at the University of Saarbrucken. The TEMPLAR system seems to provide only for a pointing gesture to substitute for a natural language definite reference within a natural language sentence during input, rather than allow a pointing gesture to also be used simultaneously with a NL reference during both input and output. In the TEMPLAR system, the natural language phrase for the object selected by the point is inserted in the input string to allow the NL parser to complete its processing. Our work is closer to that of Kobsa and colleagues with the XTRA system. XTRA accepts input of simultaneous NL and pointing gestures. Our approach provides for a more diverse set of referent types and resolution knowledge sources.

The next section presents a brief overview of the CUBRICON system. Subsequent sections discuss the knowledge sources used to process these dual-media expressions in input and output, the syntax and interpretation of such expressions used in input, and the process of generating such combined media output.
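The three default point-interpretation techniques criticized above can be sketched as follows. The icon record fields are hypothetical, not taken from CUBRICON:

```python
# Sketch of the three default point-interpretation heuristics the text
# describes, over hypothetical icon records (fields are illustrative).

def foremost(icons_under_point):
    # (1) pick the icon painted last, i.e. foremost in display order
    return max(icons_under_point, key=lambda i: i["paint_order"])

def heaviest(icons_under_point):
    # (2) pick the icon with the largest importance weight
    return max(icons_under_point, key=lambda i: i["weight"])

def nearest_center(icons_under_point, point):
    # (3) pick the icon whose center is closest to the point location
    px, py = point
    return min(icons_under_point,
               key=lambda i: (i["cx"] - px) ** 2 + (i["cy"] - py) ** 2)
```

The sketch also makes the cited disadvantage concrete: an icon that is never foremost, never heaviest, and never nearest can never be selected by any of these rules alone, which is why CUBRICON disambiguates with the accompanying natural language instead.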
2 System Overview

The CUBRICON system is intended to imitate, to a certain extent, the ability of humans to simultaneously accept input from different sensory devices (such as eyes and ears), and to simultaneously produce output in different media (such as voice, pointing motions, and drawings). The design provides for input to be accepted from three input devices: speech input device, keyboard, and mouse device pointing to objects on a graphics display. Output is produced for three output devices: color-graphics display, monochrome display, and speech output device. The CUBRICON software is implemented on a Symbolics Lisp Machine using the SNePS semantic network processing system [Shapiro, 1979, 1986], an ATN parser/generator [Shapiro, 1982], and Common Lisp. Speech recognition is handled by a Dragon Systems VoiceScribe 1000.

Subsequent sections of this paper present example sentences that include simultaneous coordinated pointing gestures to objects on a graphics display. Figure 1 shows one of the geographical displays that was generated by the CUBRICON system. The example sentences in this paper are expressed with simultaneous pointing to objects on such a display. CUBRICON also generates other types of displays including other visual illustrations, tables, and forms.

[Figure 1: Example CUBRICON Display]

The CUBRICON system includes several critical knowledge types that are used during language understanding and generation: (1) task domain knowledge, (2) dual-media language knowledge, (3) sentential syntax and semantics, and (4) the discourse context. These knowledge types are discussed in the next section.

3 Resources for Referent Determination and Reference Generation

3.1 Domain Knowledge

Task domain entities with information about them are represented declaratively in a semantic network knowledge base.
The associated information includes information relevant to mission planning as well as information that is relevant for communication purposes. Task domain entities include airbases, surface-to-air missile (SAM) systems, fuel storage facilities, and targets. The knowledge base is structured as an AKO ("a kind of") hierarchy. The hierarchy is a tree structure with each node representing an entity type or class. Associated with each entity type are attributes and possibly subparts. Examples of attributes are an entity's disposition (friendly vs. enemy) and its geographical location, when appropriate. Relations between concepts are also represented in the knowledge base, for example, the relation between an aircraft type and the type of munitions that it carries.

3.2 Dual-Media Language Knowledge

The CUBRICON grammar defines the language accepted as input and used for output. According to this grammar, pointing gestures are allowed with (or in place of) a noun phrase or a locative adverbial phrase. Case frames associated with the verbs of the lexicon are used for language understanding. Constraints on the slots of the case frames are used during referent resolution for the dual-media noun phrases and locative adverbial phrases.

Another form of semantic language knowledge is the association of words and graphics forms/icons with domain entities in the knowledge base. Each entity type and instance represented in the knowledge base can have one or more names as attributes. Such names would include "fighter base" for a certain type of airbase and "SA-2" for a certain type of SAM. If appropriate, each entity in the knowledge base can have a graphical form or icon for expressing the entity visually.

3.3 Sentential Context

When analyzing user input, the immediate linguistic context (syntax and semantics) of the sentence processed thus far is used in the process of interpreting the remainder of the sentence, including dual-media references.
The semantic structures that are particularly useful are:

1. the case frame associated with the main verb of the sentence;
2. a type or category named in a noun phrase or locative adverbial phrase;
3. a property or attribute named in a noun phrase;
4. a relation expressed in a noun phrase.

The use of these semantic structures is discussed in Section 6.

3.4 The Discourse Model

The attentional discourse focus space [Grosz, 1978, 1986; Sidner, 1983; Grosz and Sidner, 1985] is a key knowledge structure that supports continuity and relevance in dialogue. The CUBRICON system tracks the attentional discourse focus space of the dialogue carried out in multi-media language and maintains a representation of the focus space in two structures: (1) a main focus list and (2) a set of ancillary focus lists called virtual displays. The main focus list includes those entities and propositions that have been explicitly expressed (by the user or by CUBRICON) via natural language, pointing, highlighting, or blinking. A virtual display is a list of all the objects that are "in focus" because they are visible in a given window on one of the displays. CUBRICON maintains one virtual display per window.

The dialogue focus space representation is used in both understanding user input and generating system output. When processing user input, the attentional focus space representation is used for determining the interpretation of anaphoric references [Sidner, 1983] and definite descriptive references [Grosz, 1981]. In the case of a definite reference, if an appropriate referent is not found in the main focus list, then CUBRICON consults the virtual displays.

4 Multi-Media Language Understanding

CUBRICON accepts coordinated simultaneous natural language and pointing (via a mouse device). The user can input natural language (NL) via the speech device and/or the keyboard.
Input from the NL and pointing devices is accepted and fused into a compound stream, maintaining the information as to which point gesture(s) occurred with (or between) which word(s) of the sentence. The CUBRICON Parser/Interpreter is an ATN that accepts the compound stream produced by the Input Coordinator and produces an interpretation of the compound stream.

Each noun phrase or locative adverbial phrase can consist of zero or more words of text along with zero or more pointing references to icons on the display (there must be at least one point or one word). The pointing input that is a component of a noun phrase or locative adverbial phrase can occur anywhere within the phrase. From observing people using mouse points, it seems that mouse points commonly (a) substitute for an entire noun phrase: "What is the status of <point>?"; (b) substitute for the head noun: "What type of SAMs are these <point>, <point>, <point>?"; or (c) are used in conjunction with a complete NL noun phrase: "Display the status of this <point> airbase."

The objects that can be referenced via pointing in the CUBRICON system are of four types:

1. a geometric point represented by a pair of coordinates on a map or graph;
2. an entity represented graphically;
3. a table entry;
4. a window on a display.

Pointing gestures alone can be categorized according to the following: (1) the mouse click is on the intended icon(s) only, (2) the mouse click is on a region where the extents of two or more icons overlap and not all were intended to be selected, or (3) the mouse click misses the intended icon(s) altogether. The problem is in determining the intended referent(s) of a combined natural language and pointing reference. In the second case listed above, the mouse point alone is ambiguous. In the third case, the point gesture has no immediate referent.
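The Input Coordinator's fusion of the word and point streams into one compound stream might be sketched as below. The timestamps and tuple shapes are assumptions for illustration; the paper does not publish CUBRICON's actual data structures.

```python
def fuse_streams(words, points):
    """Merge timestamped word tokens and point gestures into a single
    compound stream, ordered by time, so a parser can tell which point
    gesture(s) occurred with (or between) which word(s).
    words:  list of (timestamp, word)
    points: list of (timestamp, point_payload)"""
    tagged = [(t, "word", w) for (t, w) in words] + \
             [(t, "point", p) for (t, p) in points]
    tagged.sort(key=lambda item: item[0])  # order by timestamp only
    return [(kind, payload) for (_, kind, payload) in tagged]
```

For example, fusing the spoken words "what is the status of" with a point gesture made just after the last word yields a stream in which the point event follows the words it accompanied, exactly the positional information the parser needs.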
When pointing gestures are used in the context of natural language dialogue, several knowledge sources can be applied to the problem of identifying the intended referent. The CUBRICON methodology for referent resolution is discussed in the next section.

5 Referent Resolution Methodology

CUBRICON uses the several knowledge sources discussed in Section 3 when determining the referent of a combined natural language and pointing reference. For "ill-formed" expressions in which the interpretation of the NL is inconsistent with the object(s) touched by the point, and those expressions which apparently have a null reference (e.g., the user points at an empty area), CUBRICON infers the intended referent according to the methodology discussed at the end of this section. The following examples illustrate the CUBRICON methodology.

The first example depends primarily on the use of the task domain knowledge represented in the knowledge base as well as the ancillary graphical discourse focus list (refer to Section 3.4).

USER: "What is the name of this <point> airbase?"

When the phrase "this <point> airbase" is parsed, the system uses the point coordinates to determine which icons are touched by the point. The virtual display is then searched in order to retrieve the semantic network nodes representing the objects which were graphically displayed by the "touched" icons. Within the knowledge base, the system has a representation of the category to which each object belongs as well as a representation of the airbase concept. From the hierarchy of the knowledge base, the system determines which of the objects selected by the point gesture are airbases and discards the others. If the user has pointed at a minimum of one airbase, then the system uses this (these) airbase instance(s) as the referent of the dual-media noun phrase.
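The category filtering in this first example can be sketched as follows. The AKO fragment and the entity records here are hypothetical stand-ins, not CUBRICON's semantic network.

```python
# Hypothetical fragment of an AKO ("a kind of") hierarchy: child -> parent.
AKO = {
    "fighter-base": "airbase",
    "airbase": "facility",
    "sa-2": "sam",
    "sam": "weapon-system",
}

def is_a(entity_type, category):
    """Walk up the AKO chain to test category membership."""
    while entity_type is not None:
        if entity_type == category:
            return True
        entity_type = AKO.get(entity_type)
    return False

def filter_referents(touched, category):
    """Keep only the touched entities whose type satisfies the category
    named in the noun phrase (e.g. 'airbase'); discard the rest."""
    return [e for e in touched if is_a(e["type"], category)]
```

Given a point that touches both a fighter base and an SA-2 site, the phrase "this <point> airbase" filters the candidate set down to the fighter base alone, because only its type chains up to `airbase` in the hierarchy.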
Discussion of the situation in which the user has pointed at no airbases is deferred to the end of the section.

The second example entails the use of the syntax and semantics of the sentence processed thus far, along with the knowledge base, to determine the referent of the phrase "this <point>". Here the concept of "mobility" is the critical item of information.

USER: "What is the mobility of this <point>?"

From the virtual display, the system retrieves the objects represented by the icons which were touched by the point gesture. From the syntax of the noun phrase "the mobility of this <point>" and the semantics of the word "mobility" as represented in the knowledge base, the system deduces that mobility is a property (as opposed to a subpart or some other possible relation that could exist between the concepts mentioned) of the object mentioned in the prepositional phrase. The system then determines which of the objects selected by the point gesture have a property called mobility by consulting the knowledge base. The other objects selected by the point are discarded. The resulting set is used as the referent of the phrase "this <point>".

In the next example sentence, the case frame plays an important role in referent determination.

USER: "Are these battalions <point>, <point>, <point>, based here <point>?"

In order to determine the interpretation of the phrase "here <point>", the use of the case frame for the verb phrase "is based" is necessary. If we consider the phrase "here <point>" alone, the interpretation is unclear. Should it be a location represented by a pair of coordinates, or should it be some institution at the location indicated by the deictic reference? The case frame of the verb phrase "is based" provides the necessary information. This case frame requires an agent and an object. The object must be an institution with which the agent is (or can be) officially affiliated.
The knowledge base provides information concerning what types of entities are based at what types of facilities or institutions. Thus the phrase "here <point>" of the example sentence is interpreted as the institution at the location specified by the <point> due to the constraints of the verb's case frame. If the user's point gesture touches no graphic icon, then the system infers the intended referent, as discussed in the next paragraph.

In the above paragraphs, we deferred discussion of the event in which the interpretation of natural language together with the point reference results in an apparent null referent. This event can occur in two ways: (1) the user's point touches at least one icon, but it (they) is (are) inconsistent with the natural language expression (e.g., the user says "airbase" but points to a SAM), or (2) the user points at a location on the display which contains no objects. CUBRICON includes methodology to infer the intended referent in both of these situations. CUBRICON uses the information from the sentence parsed and interpreted thus far as filtering criteria for candidate objects. The system searches in the vicinity of the location of the user's point to find the closest object(s) that satisfy the filtering criteria. If one is found, then the system responds to the user's input (e.g., command for action, request for information), but also indicates to the user that the object to which he pointed was not consistent with the natural language phrase that he used and states the inferred referent. In the event that no qualified object is found in the vicinity of the user's point, then an appropriate response is made to the user with a request for him to restate his input.

The methodology described in this section provides CUBRICON with the ability to determine the referent of expressions that consist of natural language and pointing gestures.
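The vicinity-search fallback for apparent null referents can be sketched as below. The object records and the `satisfies` predicate (standing in for the filtering criteria built from the parsed sentence) are illustrative assumptions.

```python
import math

def infer_referent(point, all_objects, satisfies):
    """Fallback when the point touches nothing consistent with the NL
    phrase: search the display for the nearest object that satisfies
    the filtering criteria derived from the sentence so far."""
    candidates = [o for o in all_objects if satisfies(o)]
    if not candidates:
        return None  # no qualified object: ask the user to restate
    return min(candidates, key=lambda o: math.dist(o["center"], point))
```

If a qualified object is found, the system would answer the request but also flag the mismatch between the point and the NL phrase; a `None` result corresponds to asking the user to restate the input.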
This methodology handles both well-formed expressions as well as expressions in which the user's point is inconsistent with the accompanying natural language.

6 Multi-Media Reference Generation

CUBRICON has the ability to intelligently use combined pointing and natural language references when responding. The system currently points at an object displayed on a CRT by blinking the object. We are considering other pointing techniques such as displaying a blinking arrow that points to the desired object and displaying a blinking circle around the desired object.

The algorithm that CUBRICON uses to generate an expression for a given entity is as follows:

1. If the entity to be expressed is the most salient one of its gender and number according to the discourse focus list, then express the entity with a pronoun of the appropriate gender, number, and case.
2. Else if the entity to be expressed is currently displayed on one of the CRTs (determined by consulting the virtual display), then express the entity by the natural language phrase "this XXXX" with simultaneous "pointing" to the entity on the display. The name for the entity represented by "XXXX" is selected from the name of the basic level category [Peters and Shapiro, 1987] to which the entity belongs.
3. Else if the entity to be expressed is the most salient one of its kind according to the discourse focus list, then express the entity with the definite determiner "the" followed by the name of the class.
4. Else generate the most specific reference possible for the entity.

7 Current Status and Future Direction

The work discussed in this paper has been implemented in the CUBRICON system. Our current discrete speech recognition system will be replaced by a continuous speech recognition system in the near future.
When this change occurs, we anticipate that we may need a more sophisticated method of coordinating the timing of the individual words of an input sentence with the user's mouse-pointing gestures. We anticipate two possible problems: (a) accounting for the "speech recognition delay" (the delay between the time a word is spoken and the time it is available to the processor) since mouse-point gestures entail no delay, and (b) the occurrence of pointing gestures that are not expressed by the user in coordination with their corresponding natural language phrase (if it exists).

Additional work needs to be done on the question of when the system should generate a reference in combined natural language and pointing. Generation of such references should depend on a variety of factors such as: the modality of the user's input, the level of complexity or clutter on the graphics display, the level of sophistication of the user, and attributes of the discourse context such as the number of times a given entity has recently been referenced.

8 Summary

Multi-modal communication is common among humans. People frequently supplement natural language communication with simultaneous coordinated pointing gestures and drawing on ancillary visual aids. Such multi-modal communication can be used very effectively for human-computer dialogue also. The Intelligent Multi-Media Interface Project [Neal and Shapiro, 1988] is devoted to the development of intelligent interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. This paper focused on the use of deictic pointing gestures with simultaneous coordinated natural language in both user input and system-generated output.
We discussed several critical knowledge types that are used during multi-media language understanding and generation: (1) task domain knowledge, (2) dual-media language knowledge, (3) sentential syntax and semantics, and (4) the discourse context. A referent resolution methodology for processing dual-media input references was discussed. This methodology handles the synergistic mutual disambiguation of simultaneous natural language and pointing as well as inferring the referent(s) of inconsistent NL/pointing expressions and expressions that have an apparent null referent. We also presented a methodology that supports context-sensitive generation of deictic dual-media references based on the above knowledge sources. The work discussed in this paper has been implemented in the CUBRICON system.

References

[Goodman, 1985] B.A. Goodman, "Repairing Reference Identification Failures by Relaxation," Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, Ill., pp. 204-217, 1985.

[Grosz, 1978] B.J. Grosz, "Discourse Analysis," in D. Walker (ed.), Understanding Spoken Language, Elsevier North-Holland, New York, pp. 229-345, 1978.

[Grosz, 1981] B.J. Grosz, "Focusing and Description in Natural Language Dialogues," in Elements of Discourse Understanding, A. Joshi, B. Webber, and I. Sag (eds.), Cambridge Univ. Press, pp. 84-105, 1981.

[Grosz and Sidner, 1985] B.J. Grosz and C.L. Sidner, "Discourse Structure and the Proper Treatment of Interruptions," Proc. of IJCAI-85, pp. 832-839, 1985.

[Grosz, 1986] B.J. Grosz, "The Representation and Use of Focus in a System for Understanding Dialogs," in Readings in Natural Language Processing, B.J. Grosz, K.S. Jones, B.L. Webber (eds.), Morgan Kaufmann Publishers, pp. 353-362, 1986.

[Hirst, 1988] G. Hirst, "Semantic Interpretation and Ambiguity," Artificial Intelligence, Vol. 34, No. 2, pp. 131-177, 1988.

[Kobsa et al., 1986] A. Kobsa, J. Allgayer, C. Reddig, N. Reithinger, D.
Schmauks, K. Harbusch, W. Wahlster, "Combining Deictic Gestures and Natural Language for Referent Identification," Proceedings of the 12th International Conference on Computational Linguistics, Bonn, FR Germany, 1986.

[McDonald, 1986] D. McDonald, "Description Directed Control: Its Implications for Natural Language Generation," in Readings in Natural Language Processing, B.J. Grosz, K.S. Jones, B.L. Webber (eds.), Morgan Kaufmann Publ., pp. 519-537, 1986.

[Neal and Shapiro, 1988] J.G. Neal and S.C. Shapiro, "Intelligent Multi-Media Interface Technology," Proceedings of the Workshop on Architectures for Intelligent Interfaces: Elements and Prototype, Lockheed AI Center, Monterey, CA, pp. 69-91, 1988.

[Peters and Shapiro, 1987] S.L. Peters and S.C. Shapiro, "A Representation for Natural Category Systems," Proc. of IJCAI-87, Milan, Italy, pp. 140-145, 1987.

[Press, 1986] B. Press, "The U.S. Air Force TEMPLAR Project Status and Outlook," Western Conf. on Knowledge-Based Engineering and Expert Systems, Anaheim, CA, pp. 42-48, 1986.

[Shapiro, 1979] S.C. Shapiro, "The SNePS Semantic Network Processing System," in N. Findler (ed.), Associative Networks: The Representation and Use of Knowledge by Computers, Academic Press, New York, pp. 179-203, 1979.

[Shapiro, 1982] S.C. Shapiro, "Generalized Augmented Transition Network Grammars for Generation from Semantic Networks," AJCL, Vol. 8, No. 1, pp. 12-25, 1982.

[Shapiro and Rapaport, 1986] S.C. Shapiro and W. Rapaport, "SNePS Considered as a Fully Intensional Propositional Semantic Network," Proc. of AAAI-86, pp. 278-283; in G. McCalla and N. Cercone (eds.), Knowledge Representation, Springer-Verlag, 1986.

[Sidner, 1983] C.L. Sidner, "Focusing in the Comprehension of Definite Anaphora," in Computational Models of Discourse, M. Brady and R.C. Berwick (eds.), The MIT Press, pp. 267-330, 1983.

[Sondheimer and Nebel, 1986] N.K. Sondheimer and B.
Nebel, "A Logical-Form and Knowledge-Base Design for Natural Language Generation," Proc. of AAAI-86, pp. 612-618, 1986.

Neal, Dobes, Bettinger and Byoun 823
Gerhard Fischer and Christian Rathke
Department of Computer Science and Institute of Cognitive Science
University of Colorado, Campus Box 430, Boulder, CO 80309

Abstract

Spreadsheet systems have changed the way the world perceives and deals with computers. In an attempt at maintaining the positive elements of spreadsheets while overcoming some of their limitations, we have developed FINANZ, a computational environment for developing financial planning systems. FINANZ contains a form-based user interface construction system, which allows the creation of advanced user interfaces without the need for conventional programming. It uses constraint-based programming for the representation of knowledge about the application domain. Its layered architecture (based on object-oriented knowledge representation) supports the modification and extension of the system and the dynamic generation of explanations.

1. Introduction

If we believe that the real impact of the computer in the information age will be determined by whether domain experts and technologically unsophisticated users will perceive the computer as a useful and usable device -- then spreadsheet systems have changed the way that the world perceives and deals with computers.

In this paper we first describe dimensions of success models for user-centered computer systems, which provide some rationale for the success of spreadsheets and which help us identify some of their shortcomings. To overcome these shortcomings, we have used methods and techniques from Artificial Intelligence to develop FINANZ, a computational environment to develop financial planning systems. The major contributions of FINANZ are illustrated. We conclude by evaluating our system building effort and by indicating extensions and future research in this area.

2.
Success Models for User-Centered Computer Systems

One way to advance the state of the art in a field is to identify "success models", i.e., activities, systems, and tools which work well. Previously, this approach has provided us with a great deal of insight in our work in designing computer-based learning and working environments by looking at skiing as a success model [Fischer 81; Burton, Brown, Fischer 84]. In doing so, we have identified the features of success models and tried to transfer them to less successful systems. In the work described in this paper, we have taken a similar approach in the area of user-centered computer systems by focusing on spreadsheet-based programs. We see a strong mutual relationship between research in user-centered system design and artificial intelligence, especially if we consider the important goal of AI being to build systems augmenting human intelligence (as interactive knowledge media, as tools for conversation, and as intelligent support systems, which support cooperative problem solving processes between humans and computers [Stefik 86]).

2.1 Dimensions of Success Models

Without any attempt to compile a complete list (see [Norman, Draper 86] for additional views and features of user-centered systems), we try to characterize the dimensions which explain why spreadsheets are success models, indicate the shortcomings of spreadsheets, and demonstrate the contributions of FINANZ.

Conviviality. According to Illich [Illich 73], "convivial tools are those which give each person who uses them the greatest opportunity to enrich the environment with the fruits of his or her vision. Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user." Currently most systems belong either to the class of general purpose programming languages or to the class of turn-key systems.
General purpose programming languages are convivial in the sense that they allow "in principle" the user to do everything, but they are too far away from the conceptual structure of the problem and it takes too long to get a task solved. Turn-key systems are easy to use, but they cannot be modified by the user and therefore they do not allow users to contribute to their goals. Convivial systems (supporting modifiability, tailorability, and extensibility) are a necessity if we believe in the fundamental assumption that it is impossible for a system designer to create a problem domain-oriented environment which captures all functionality that might conceivably be needed for a given application.

Useful and Usable Systems. Useful computers which are not usable are of little help; but so are usable computers which are not useful. One of the major research goals of user-centered system design is the resolution of this design trade-off and the achievement of both goals simultaneously. Useful computers require complex systems with a rich functionality (e.g., providing a large number of suitable abstractions) and are therefore in danger of becoming unusable. Making high functionality systems usable and exploiting their power requires computer-based intelligent support systems which take advantage of the interactive and dynamic nature of computer systems. Usable systems are often limited in their usefulness by their limited applicability and extensibility.

Subjective Computability. In user-centered system design the crucial issue is not what users can do "in principle" -- what matters is what users can really do. The epistemological adequacy of a formalism in user-centered system design is primarily not a formal or theoretical issue (theoretically almost all formalisms and programming languages are Turing-equivalent) but a cognitive issue.
For many problems, the question of subjective computability (to create systems which are usable for tasks which many users were unable to tackle in the past) is more relevant than whether a problem is computable in theory. Subjective computability can be increased by eliminating prerequisite knowledge and skills and by raising the level of abstraction towards the expertise of the user. Constrained design processes (such as selection, simple combination, instantiation, etc.) which users can handle are of greater relevance than unconstrained design possibilities which are beyond their grasp.

Human Problem-Domain Communication. Most computer users are not interested in computers per se, but they want to use the computer to solve problems and to accomplish their tasks. The goals of human problem-domain communication [Fischer, Lemke 88] include building the important abstract operations and objects of a given application area directly into the environment. This implies that the user can operate with personally meaningful abstractions. In most cases the semantics of a problem domain should not be eliminated by reducing the information to formulas in first-order logic or to general graphs. Systems are better understood when users can directly manipulate the concepts of an application.

Reducing Complexity. User-centered system design is a worthwhile goal because there is no "conservation law of complexity" [Simon 81] which requires that the complexity and usability of a system is a given constant. Complexity is reduced by exploiting what people already know and what they are already familiar with, by using familiar representations (based on previous knowledge and analogous situations), by exploiting the strengths of human information processing, and by designing "better" systems which exploit the unique possibilities of interactive computer systems (e.g., by generating custom-tailored and user-centered representations).
2.2 Spreadsheets as Success Models

Spreadsheets are considered success models by the sheer fact that they have changed the way the world perceives computing. They have created a turn-around in buying considerations; users want a spreadsheet -- on which hardware it would operate is only a secondary consideration ("software buys hardware"). The popularity and usefulness of spreadsheets is based on the fact that they make contributions to all of the criteria enumerated in the previous section: they let users do their tasks; they provide usability and usefulness by handling a wide range of problems; they increase the subjective computability of non-programmers; they let domain experts operate effectively by matching their conceptualizations; and they reduce complexity through their value propagation mechanisms by eliminating concerns about low level computations and consistency maintenance.

2.3 Shortcomings of Spreadsheets

Despite their success, spreadsheets have a number of limitations. They are not "smart" programs -- there is no underlying knowledge machinery to attach arbitrarily complex daemons to individual fields (e.g., parsers for allowing input information to be given in different notations, or dependency relationships to allow the creation of dynamic explanations). They suffer from a lack of extensibility (despite the examples provided by [Kay 84]), which limits their applicability for problems which do not fit exactly into the basic spreadsheet paradigm. This lack of extensibility is due to the fact that spreadsheet systems are constructed as monolithic systems instead of as layered systems using multiple levels of abstraction. Spreadsheets do not support constraint-based computations; they only allow value propagation in one direction ("one-way constraints"). They cannot be extended in natural ways to more general form-based systems (e.g., the usability decreases when one has to deal with several spreadsheets simultaneously). Models of the user are not supported.
They could be used to present different external representations and views of reduced complexity (e.g., in the case of a grant proposal (see next section) for the applicant or the program director in the granting agency).

FINANZ is a computational environment to develop financial planning systems that are based on an extension of the spreadsheet paradigm. It supports its users in various domains such as project budget planning. It gains its power by being tuned to very specific application domains in which operations often are only meaningful to the domain expert. By building on a more powerful object-oriented base, designers can develop more powerful sets of functions than are found in spreadsheet programs. In the following sections we describe FINANZ from the viewpoints of the user and the system designer.

3.1 Interacting with FINANZ

Spreadsheets have been successful because they adopted an interaction format that people were already familiar with, and enhanced its functionality by making the entries change dynamically. With FINANZ we want to keep the basic interaction style and at the same time apply it to a richer domain. In most spreadsheet systems, there are two conceptually distinct modes of operation: programming and executing. In the programming phase, the dependency structure is established; in the execution phase, values are supplied by the user and propagated to dependent fields by the system. In FINANZ, a value for a field is supplied by selecting it using the mouse. A formula is specified by selecting the operations from a menu that is associated with the field (Figure 3-1).

[Figure 3-1: Formula Specifications. A formula is specified by selecting the appropriate operations from a menu. Operations are domain-dependent and reflect the system's knowledge about the application domain. Entries such as "Indirect Costs" are applied to a field.]
The system guides the user by asking for the fields that contribute to the selected operation. The user specifies these fields by pointing at them with the mouse.

There are some differences on the interface level between FINANZ and spreadsheet systems:

Free positioning of fields. Instead of having a predefined grid of fields, FINANZ cells are "liberated" [Lewis 87] in the sense that they can be put anywhere inside the form's boundaries. Their sizes can also be changed to allow for longer pieces of text. New fields are generated by copying existing ones. They initially take the shape of their originals, but can then be modified using operations such as move and reshape that are generic to all screen objects.

Typed fields. A field's content is an external representation of some data object. The user's input is interpreted and converted to a standardized internal representation. From there the possibly modified external representation is produced and displayed in the field. The interpretation and conversion processes are determined by the type of the field. This allows the connection of sophisticated parsers to fields. For instance, a field containing dates accepts the date specification in a variety of syntactical forms. The printed representation of a date can depend on, for instance, the length of the field, the demands of the specific application, or the user's preference.

Multiple forms. FINANZ is integrated in a window-based environment [Boecker, Fabian, Lemke 85] which supports the concurrent display and activation of multiple forms. Dependencies can easily be established between fields of different forms using direct manipulation.

The increased functionality provides new challenges to the user interface. In spreadsheets the value of a field is determined either by the user or by a formula. In FINANZ, a field can be part of any number of constraints.
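The typed-field mechanism described above (parse a variety of input syntaxes into one internal representation, then render an external representation from it) can be sketched in a few lines. This is our own illustrative code, not FINANZ's actual implementation (FINANZ was built in OBJTALK); the class name, the set of accepted formats, and the display format are assumptions for the example:

```python
from datetime import datetime

class DateField:
    """Hypothetical sketch of a typed field: user input is parsed into a
    standard internal representation (a date object), and the displayed
    external representation is derived from that internal value."""
    INPUT_FORMATS = ["%d.%m.%Y", "%Y-%m-%d", "%B %d, %Y"]  # accepted syntaxes

    def __init__(self, display_format="%d.%m.%Y"):
        self.value = None                     # internal representation
        self.display_format = display_format  # external representation

    def set(self, text):
        # Try each registered parser until one accepts the input.
        for fmt in self.INPUT_FORMATS:
            try:
                self.value = datetime.strptime(text, fmt).date()
                return
            except ValueError:
                pass
        raise ValueError(f"unrecognized date: {text!r}")

    def display(self):
        return self.value.strftime(self.display_format)

f = DateField()
f.set("1988-07-15")          # input in one syntax ...
print(f.display())           # ... displayed in another: 15.07.1988
```

The same internal value can back differently formatted fields, which is what lets the printed representation depend on field length or user preference, as the text describes.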
The user can ask for all the information that is needed to determine the rationale for a value of a certain field. The system displays the relationships and the user's input values that are responsible for a derived value (Figure 3-2). It is important to note that this includes non-static information. In contrast to spreadsheets, the dependency structure of a derived value of a FINANZ field is not predetermined. It depends on the previous dialogue and activations of value propagations.

Explanation capabilities become especially important when the user is asked to resolve a conflict that is generated by more than one constraint and several field values (see Section 3.2). FINANZ signals the conflict by highlighting all the responsible fields and displaying a message asking for user control (Figure 3-3). The system designer may have implemented some conflict resolution strategies, such as preference of one field over another. If there is only one field that is in conflict with the user's input, the value typed in by the user may be preferred. Users can influence conflict resolution by marking fields as constants. This has the effect of fixing their value for the conflict resolution process, and allows users to explore effects of modifications under the condition that certain fields remain unchanged.

3.2 Designing FINANZ

In designing FINANZ, we combined the two main perspectives of user interface design in Artificial Intelligence: FINANZ is an interface using AI techniques and an interface to an AI system. In systems such as FINANZ which gain their power through domain-oriented communication capabilities, a strict separation between interface and application seems neither desirable nor possible.

The internal representational mechanisms of FINANZ are based on constraints [Borning 79; Steele 80]. By selecting an operation from the menu, users establish a constraint between fields.
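The conflict-resolution behavior described in Section 3.1 -- several constraints deriving different values for one field, the responsible fields highlighted, and user-pinned constants excluded from retraction -- might be sketched as follows. The function and field names are ours, invented for illustration; FINANZ's actual machinery lives in OBJTALK constraint classes:

```python
def detect_conflict(field, derivations):
    """derivations: list of (constraint_name, computed_value, source_fields).
    Returns the set of fields responsible for a conflict, or None."""
    values = {v for _, v, _ in derivations}
    if len(values) > 1:                       # constraints disagree
        responsible = {f for _, _, srcs in derivations for f in srcs}
        return responsible | {field}          # highlight all involved fields
    return None

# Hypothetical budget fields: two constraints derive the same target field.
constants = {"overall_salary"}                # user marked this as constant
derivs = [("YearlySum", 42000, ["year1", "year2"]),
          ("OverallSalary", 40000, ["overall_salary"])]

conflict = detect_conflict("salary_total", derivs)
retractable = conflict - constants            # constants keep their values
print(sorted(retractable))                    # -> ['salary_total', 'year1', 'year2']
```

Marking a field constant simply removes it from the retractable set, so exploring "what changes if these fields stay fixed" falls out of the same mechanism.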
804 User Interfaces

Figure 3-2: Explanations. The user has asked for an explanation of a field's contents. The system displays the fields and constraints that are ultimately responsible for the derived value. Constraints are verbalized in the explanation window, which is displayed as an answer to the explanation request. Explanations of constraints are augmented by field names and actual field values. Explanation capabilities are especially important when there is no obvious way by which a field value is determined.

Constraints operate bi-directionally and they propagate changes automatically. Understanding the functionality of a spreadsheet system in terms of constraints rather than in terms of operations allows not only for multi-directional propagation of values, but also for a better way of representing complex relationships. The computational paradigm is that of a constraint satisfaction process that takes all of the specifications into account.

In the representational basis of FINANZ, constraints are special classes in OBJTALK, the object-oriented knowledge representation language [Rathke 86] which is used as the implementation vehicle for FINANZ. When the user selects an operation from the menu, the system instantiates the appropriate constraint class with the fields specified by the user. Changing one of these fields is internally represented as a message-passing event to all constraint instances in which the field plays a role. As a result new values are computed and propagated to other fields. The computations are represented as methods in the constraints' classes. When a dependent field is set, the sources are recorded with it, i.e., the method that computed the value along with the fields that triggered the method. This information is used for explanation purposes, detecting reasons for conflicts, and dependency-directed backtracking.
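The message-passing scheme just described -- a field change notifies every constraint instance the field takes part in, new values are propagated, and each derived value records its sources for later explanation -- can be sketched as follows. This is a minimal illustrative sketch in Python, not OBJTALK; the class and field names are our assumptions:

```python
class SumConstraint:
    """Hypothetical constraint total = a + b over a shared field store.
    Propagates bi-directionally and records (constraint, source fields)
    for every derived value, mirroring the explanation machinery."""
    def __init__(self, store, a, b, total):
        self.store, self.a, self.b, self.total = store, a, b, total

    def notify(self, changed):
        f, s = self.store.fields, self.store.sources
        if changed != self.total and self.a in f and self.b in f:
            f[self.total] = f[self.a] + f[self.b]                  # forward
            s[self.total] = (type(self).__name__, [self.a, self.b])
        elif changed == self.total and self.a in f:
            f[self.b] = f[self.total] - f[self.a]                  # backward
            s[self.b] = (type(self).__name__, [self.total, self.a])

class Store:
    def __init__(self):
        self.fields, self.sources, self.constraints = {}, {}, []
    def set(self, name, value):
        self.fields[name] = value
        for c in self.constraints:     # message-passing event to constraints
            c.notify(name)

store = Store()
store.constraints.append(SumConstraint(store, "salary", "overhead", "total"))
store.set("salary", 1000)
store.set("overhead", 450)
print(store.fields["total"], store.sources["total"])
store.set("total", 1500)               # bi-directional: overhead is adjusted
print(store.fields["overhead"])        # -> 500
```

The recorded sources are exactly what an explanation window would verbalize, and what conflict detection and dependency-directed backtracking would consult.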
New constraints can be introduced by simple subclassing because they are objects that define their behavior in classes of OBJTALK. Most of the properties, such as recording reasons for derived values and producing explanations, are located in a common superclass and need not be specified each time a new constraint class is introduced.

The described representational mechanisms enhance spreadsheet programming in various ways (these enhancements to spreadsheet technology are currently only available to system designers; users are unable to define new constraints):

Constraints can be non-numeric. Dependencies between spreadsheet fields are represented as reactions to messages. These reactions are not restricted to performing numerical operations. Any kind of symbolic computation is possible. For instance, information about a person such as their qualifications and their status can be related to their salaries and/or their duties.

Fields can be involved in more than one constraint. In spreadsheet systems, the contents of a dependent field has to be determined by a single formula. In FINANZ, fields usually take part in more than one constraint. Field values can therefore be derived from multiple sources. This can cause conflicts if users provide more information than necessary. FINANZ provides several built-in conflict resolution strategies that, for instance, rank the user-supplied inputs. If no conflict resolution strategy is specified, the system points out the conflicting fields and asks the users to specify which field they would like to be changed (see Figure 3-3).

Figure 3-3: Conflict Resolution. The user is asked to resolve a conflict caused by several constraints and a number of fields. The amount assigned to "Person A" in the second year of the project is in conflict with his overall salary and the percentage and time values of the "Academic Year". Either of these values can be retracted to solve the conflict.
The percentage and time values are also constrained to be equal to those of "Person B". Their modification is likely to affect "Person B's" figures. In this situation, users can ask for explanations of all of the field values, including those which are highlighted.

Constraints can be combined. Often, constraints that are specific to an application domain are combinations of more primitive constraints. The constraint for the time percentage of the working hours during the Academic Year, for instance, is constructed by combining the more primitive percentage and multiplication constraints.

Constraints are established during the programming phase. FINANZ becomes a meta-system for designing budgetary relationships. The important difference from other meta-systems (such as EMYCIN) is the level of abstraction that is used. FINANZ primitives are domain dependent. This makes them less general but at the same time more usable for the designed purpose (see Section 2.1). The gap that needs to be bridged from the primitives of the language to the intended result is much narrower than the one starting from general representational formalisms such as frames, rules, and constraints. By intentionally reducing the generality we are supporting the knowledge engineering task.

4. Conclusions and Future Directions

Using more powerful representational mechanisms than in spreadsheets, we must be careful not to lose those aspects which made spreadsheets a success model. Design tradeoffs are balanced in a different way in FINANZ than in spreadsheets. With FINANZ we have overcome some of the shortcomings of spreadsheets mentioned previously -- but have we introduced other ones? By providing more support for specific application domains, the tradeoff between generality and familiarity of concepts on one side and specialization on the other side must be carefully evaluated. There is a strong interdependency between systems which support human problem-domain communication and the necessity for modifiability and tailorability of systems. FINANZ shows that an object-oriented approach towards knowledge representation [Rathke 86] provides a good environment for making the construction of domain-oriented systems a practical and worthwhile activity. FINANZ as a major application system has had a strong impact on the shape of our tools and has served as a major driving force for the continuous enhancement of our tools over the years.

Abandoning general computational environments in favor of increased subjective computability raises the important question of what kind of general characteristics a problem must have to make spreadsheets or FINANZ a useful implementation technology. Describing this space is important, so that users can get a feeling for the kind of problems they can solve.

One of the extensions which we want to address in our future work on FINANZ is to increase the number of abstractions used (e.g., to include abstractions from related domains such as budget or tax law). Also, the modifiability and tailorability of the system should be enhanced by providing a kit for the construction of new constraints at the end-user level (with the goal that the domain expert becomes even more independent of the knowledge engineer [Borning 86]).

The design methodology used in FINANZ is important. For moving from general purpose spreadsheets to specific application domains, the development of FINANZ-like systems employing high-level abstractions is crucial. In the same way as FINANZ provides the appropriate domain-related abstractions for its users, the representational technology provides the appropriate abstractions for constructing FINANZ-like systems.

Acknowledgments

The authors would like to thank Andi di Sessa, Hal Eden, Jonathan Grudin, Francesca Iovine, Andreas Lemke, Clayton Lewis, Helga Nieper, Curt Stevens and Bill Swartout for criticizing drafts of this paper. The research was supported by grant No. DCR-8420944 from the National Science Foundation, and grant No. MDA903-86-C0143 from the Army Research Institute.

References

[Boecker, Fabian, Lemke 85] H.-D. Boecker, F. Fabian Jr., A.C. Lemke, WLisp: A Window Based Programming Environment for FranzLisp, Proceedings of the First Pan Pacific Computer Conference, Australian Computer Society, Melbourne, Australia, September 1985, pp. 580-595.

[Borning 79] A.H. Borning, Thinglab -- A Constraint-Oriented Simulation Laboratory, Technical Report SSL-79-3, Xerox Palo Alto Research Center, Palo Alto, CA, 1979.

[Borning 86] A.H. Borning, Defining Constraints Graphically, Human Factors in Computing Systems, CHI'86 Conference Proceedings (Boston, MA), ACM, New York, April 1986, pp. 137-143.

[Burton, Brown, Fischer 84] R.R. Burton, J.S. Brown, G. Fischer, Analysis of Skiing as a Success Model of Instruction: Manipulating the Learning Environment to Enhance Skill Acquisition, in B. Rogoff, J. Lave (eds.), Everyday Cognition: Its Development in Social Context, Harvard University Press, Cambridge, MA - London, 1984, pp. 139-150.

[Fischer 81] G. Fischer, Computational Models of Skill Acquisition Processes, Computers in Education, Proceedings of the 3rd World Conference on Computers and Education (Lausanne, Switzerland), R. Lewis, D. Tagg (eds.), July 1981, pp. 477-481.

[Fischer, Lemke 88] G. Fischer, A.C. Lemke, Construction Kits and Design Environments: Steps Toward Human Problem-Domain Communication, Human-Computer Interaction, Vol. 3, No. 3, 1988.

[Illich 73] I. Illich, Tools for Conviviality, Harper and Row, New York, 1973.

[Kay 84] A. Kay, Computer Software, Scientific American, Vol. 251, No. 3, September 1984, pp. 52-59.

[Lewis 87] C.H. Lewis, NoPumpG: Creating Interactive Graphics with Spreadsheet Machinery, Technical Report CS-CU-372-87, Department of Computer Science, University of Colorado, Boulder, CO, August 1987.

[Norman, Draper 86] D.A. Norman, S.W. Draper (eds.), User Centered System Design: New Perspectives on Human-Computer Interaction, Lawrence Erlbaum Associates, Hillsdale, NJ, 1986.

[Rathke 86] C. Rathke, ObjTalk: Repraesentation von Wissen in einer objektorientierten Sprache, PhD Dissertation, Universitaet Stuttgart, Fakultaet fuer Mathematik und Informatik, 1986.

[Simon 81] H.A. Simon, The Sciences of the Artificial, The MIT Press, Cambridge, MA, 1981.

[Steele 80] G.L. Steele, The Definition and Implementation of a Computer Programming Language based on Constraints, Technical Report MIT-TR 595, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1980.

[Stefik 86] M.J. Stefik, The Next Knowledge Medium, AI Magazine, Vol. 7, No. 1, Spring 1986, pp. 34-46.
Automatic Construction of User-Interface Displays*

Yigal Arens, Lawrence Miller, Stuart C. Shapiro, Norman K. Sondheimer
USC/Information Sciences Institute
4676 Admiralty Way
Marina Del Rey, CA 90292
(213) 822-1511

Abstract

Construction of user interfaces for most computer applications remains time consuming and difficult. This is particularly true when the user interface system must dynamically create displays integrating the use of several interface modes. This paper shows how Artificial Intelligence knowledge base and rule technology can be used to address this problem.

NIKL is used to model the entities of the application domain and the facilities of the user interface. Rules are written connecting the two models. These rules range from application specific to general rules of presentation. The situation to be displayed is asserted into a PENNI database. A Presentation Designer interprets this data using the domain model, chooses the appropriate rules to use in creating the display, and creates a description of the desired display in terms of the interface model.

A system, Integrated Interfaces, using this design for an integrated multi-modal map graphics, natural language, menu, and form interface has been created and applied to a database reporting application.

1 Introduction

In spite of the development of user interface tool kits, construction and enhancement of user interfaces for most computer applications remains time consuming and difficult. Estimates of user interface code as a percentage of application code run as high as 60%. Among the most difficult interfaces to build are those that dynamically create displays. Such systems must automatically choose between multiple media (hardware), multiple modes (software systems), and multiple methods (choices within software systems). Simply having several modes available is not enough - their use must be integrated.
By this we mean that different items of information must be distributed to appropriate modes, the amount of redundancy should be limited to the amount needed to establish co-reference, and the different presentation modes must all work from a common meaning representation to assure accurate presentation. Further, the interface system integrating a set of modes must be capable of dynamically producing displays. Fixed multi-modal displays are not sufficient for rapidly changing environments. Finally, the techniques employed must support generalization and enhancement, since the initial interface is certain to require enhancement over time. Existing systems do not and cannot achieve these objectives.

Artificial intelligence knowledge base and rule technology can be used as a basis for automatic display construction. Information to be displayed can be recognized and classified, and display creation can then be performed based on the categories to which the information to be presented belongs. Decisions can be made based on given rules. This approach to developing and operating a user interface allows interfaces to be more quickly created and more easily modified. We call such a system a model-driven presentation design system.

In the Integrated Interfaces project at ISI we have begun to address the problem of constructing integrated user-interface displays. We have produced a design that supports integration of display modes, dynamically produces multi-modal displays, and supports generalization and enhancement. It does all this through a system of models and rules.

*This research is supported by the Defense Advanced Research Projects Agency under Contract No. N0014-87-K-0130. Views and conclusions contained in this report are the authors' and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any person or agency connected with them.
The interface model brings together the different modes in a single uniform way. Another model describes the application, providing a uniform meaning representation. The rules explicitly state how information described in application terms relates to presentation modes. These rules take advantage of the model of interface capabilities to integrate the modes. Given information to display, a Presentation Designer applies these rules to dynamically produce display descriptions. Device drivers interpret such descriptions to create the actual displays.

Employing this design, our Integrated Interfaces system is able to present retrieved information using a combination of output modes - natural language text, maps, tables, menus, and forms. It can also handle input through several modes - menus, forms, and pointing. As a demonstration, we have implemented an interface to an existing Naval database reporting application. Our presentation designer creates displays similar to those being prepared manually for the Navy on a daily basis.

Section 2 of this paper discusses knowledge bases and rules in more detail. Section 3 describes the knowledge representation systems we are depending on. Section 4 gives examples. Section 5 compares Integrated Interfaces to the two systems most like ours. Section 6 summarizes our conclusions, and Section 7 discusses some of our plans for the future. The paper concludes with a description of our current status.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Figure 1: Fragment of Application Model

2

Presentation design is achieved in our system by the application of a system of antecedent-consequent rules. The rules classify the information that needs to be presented and map types of information to appropriate types of presentations.

2.1 Models

Our models characterize or define the categories of entities our user interface can deal with.
The application model identifies the categories of objects and actions in the application's view of the world. We indicate subclass relations present among categories, as well as relationships between objects and actions. For the Naval database application, for example, we have a class of ships, which has subclasses categorized by operational status. (See Figure 1 for a small fragment of this model.)

The interface model describes the categories of objects and actions of the interface world. The objects here include windows, tables, maps, text strings, and icons. The actions include creation, deletion, movement, and structuring of displays. Describing all interface modes together in a single model is a necessary basis of integration. (See Figure 2 for a small fragment of this model.)

Only the application model needs to be created for each new application interface.

2.2 Rules

The presentation rules are simple in essence: they map objects from the application model into objects in the interface model. For example, a daily status report may be mapped into a map. A position report may be mapped onto a point on the map. A ship's planned future activities may be mapped onto a text string. The following is a paraphrase of part of a rule for the naval briefing application: "To display a Ship whose Mission is Mobile, use an Arrow, with its Course establishing the arrow's Orientation, and Text as a Tag presenting its Schedule."

Figure 2: Fragment of Interface Model

As can be seen, this rule takes its condition from the application model and the rest of its vocabulary from the application and interface models. It is the rules, in conjunction with the interface model, that allow integration. They can be used to distribute information among modes, minimize redundancy, and coordinate presentations. For example, the above rule creates a mixed graphic and natural language text display.
These rules are arranged according to the class subsumption hierarchy of the models. For example, the rules applicable to all ships are further up the hierarchy than those applying only to ships on exercises. We allow both "low-level," application-specific rules and "high-level," application-independent rules. The above rule is an example of the first type. The following is an example of the second: "To request a Choice Among Alternatives when the Cardinality is Large, use a Fill-in-the-Blank Form; otherwise use a Menu."

2.3 Rule Application

Presentation design can now be described as the task of realizing the application domain categories within which a request for information presentation falls, selecting the appropriate rules that apply to those categories, and redescribing the application terms in the request into appropriate presentation terms.

Realization relates the facts about instances to the abstract categories of the model. For example, the concrete facts about Sprite, a ship with a malfunctioning radar, must lead to the realization that it is a Disabled Ship. Selection works by allowing for the appropriate mapping rules to be chosen, allowing for additivity. Selection also assures that all aspects of the demand for presentation are met by some rule. Redescription applies the rules, mapping each aspect of a common-sense view of a presentation into an equivalent presentation form.

The forms produced by rule application are not actually the commands to the output subsystems (i.e., the map graphics system, text generator, and the forms system). Instead, they are interpretable by device drivers that control these systems.

Arens, Miller, Shapiro and Sondheimer 809

3 Knowledge Representation Tools

Our implementation of presentation design depends on two knowledge representation systems: NIKL and KL-TWO. NIKL holds our models. KL-TWO automatically carries out realization.
KL-TWO also holds the demands for presentation and receives the forms read by the device drivers. This section provides a brief introduction to these tools.

NIKL [Kaczmarek et al., 1986] is a network knowledge-base system descended from KL-ONE [Brachman and Schmolze, 1985]. This type of system supports description of the categories of entities that make up a domain. The central components of the notation are sets of concepts and roles, organized in IS-A hierarchies. These hierarchies identify when membership in one category entails membership in another. The roles are associated with concepts (as role restrictions), and identify the relationships that can hold between individuals that belong to the categories. The role restrictions can also hold number restrictions on the number of entities that can fill these roles.

We have been experimenting with a naval assets domain model for the naval database reporting application mentioned above. It has a concept Disabled-Ship that is meant to identify the ships that are unable to carry out their missions. Disabled-Ship IS-A type of Ship, distinguished from Ship by having a role restriction Readiness that relates Disabled-Ship to NonOperational-Status, i.e., all ships with nonoperational status are disabled. All Ships can have exactly one filler of the Readiness role restriction. The concept NonOperational-Status is partly defined through the IS-A relation to a concept Readiness-Status. This situation is shown graphically in Figure 1 in the typical network notation used for KL-ONE knowledge bases.

KL-TWO is a hybrid knowledge representation system that takes advantage of NIKL's formal semantics [Vilain, 1985]. KL-TWO links another reasoner, PENNI, to NIKL. For our purposes, PENNI can be viewed as managing a database of propositions of the form (P a) and (Q a b) where the forms are variable free.
The first item in each ordered pair is the name of a concept in an associated NIKL network, and the first item in each ordered triple is the name of a role in that network. So the assertion of any form (P a) states that the individual a is a kind of thing described by the concept P. The assertion (Q a b) states that the individuals a and b are related by the abstract relation described by Q.

NIKL adds to PENNI the ability to do taxonomic reasoning. Assume the NIKL database contains the concepts just described in discussing NIKL. Assume that we assert just the following three facts: (Ship Sprite), (Readiness Sprite C4) and (NonOperational-Status C4); C4 is a U.S. Navy readiness code. Using the knowledge base, PENNI is able to deduce that (Disabled-Ship Sprite) is true.

PENNI also provides a truth maintenance system that keeps track of the facts used to deduce others. When our rules are used to determine aspects of a presentation from facts about the world, the truth maintenance system records the dependencies between the application domain and the presentation. For example, (Readiness Sprite C4) triggers a rule which asserts (Disabled-Ship Sprite). If (Readiness Sprite C4) is retracted, PENNI's truth maintenance system will automatically retract the assertion that Sprite is disabled.

4 Examples

The power of a model-driven presentation design is in its flexibility. The designer of a system does not specify rigidly in advance in what form information will be requested from the user, and how data and results will be displayed. Instead, our models contain descriptions of the types of information the application programs deal with, and of the types of graphical tools and instruments available. The rules for presentation enable the system to generate on-demand displays appropriate for given needs. Here are some concrete examples.
4.1 Construction of a Visual Representation of an Object

Consider the knowledge about ships and about graphical instruments encoded in the NIKL models in Figure 1 and Figure 2. Let us assume that the user wishes to show ships engaged in a Mobile mission with a special Icon, and that the icon should be oriented in a direction identical to the ship's course. In addition, assume that Disabled-Ships are to be shown with Red icons and that the Schedule of a ship is to be shown in the natural language Tag of the Icon representing it. A version of the rules that we would use to achieve this is shown in Figure 3. The antecedent considers the categories of one or more individuals and their relationships, all in NIKL terms. The consequents provide assertions about the graphic representation of objects for the PENNI database. These rules are asserted into PENNI so that the truth maintenance system may keep track of the dependencies between antecedent facts and their resultant consequents, as explained in the previous section.

The functions Image and Textual-Description map the constants of the common sense world into constants of the visual and textual world, respectively. For example, Rule 5 states that if some individual, x, is a Ship and another individual, y, is its Schedule, then the Tag of the image of x is the textual-description of y. The textual-description of y will be created by the invocation of our text generator.

1. IF (Operational-Ship x) or (NonDeployed-Ship x) THEN (Coloration Image(x) Green)
2. IF (Disabled-Ship x) THEN (Coloration Image(x) Red)
3. IF (Ship x) and (Course x y) THEN (Orientation Image(x) y)
4. IF (Ship x) and (Mission x y) and (Mobile y) THEN (Icon-Type Image(x) Arrow)
5. IF (Ship x) and (Schedule x y) THEN (Tag Image(x) Textual-Description(y))

Figure 3. Sample Presentation Rules.
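As a concrete illustration, the rules of Figure 3 can be rendered as executable antecedent-consequent checks. This is a simplified sketch of our own (plain Python, not NIKL/PENNI rule assertions; the function name and the flattened role representation are assumptions), and it presumes that realization has already derived class memberships such as Disabled-Ship:

```python
def design_image(classes, roles):
    """Apply the Figure 3 rules. `classes` is the set of NIKL categories the
    individual belongs to; `roles` maps role names to fillers. Returns
    assertions about the individual's graphic image."""
    image = {}
    if "Operational-Ship" in classes or "NonDeployed-Ship" in classes:
        image["Coloration"] = "Green"                        # Rule 1
    if "Disabled-Ship" in classes:
        image["Coloration"] = "Red"                          # Rule 2
    if "Ship" in classes and "Course" in roles:
        image["Orientation"] = roles["Course"]               # Rule 3
    if "Ship" in classes and roles.get("Mission") == "Mobile":
        image["Icon-Type"] = "Arrow"                         # Rule 4
    if "Ship" in classes and "Schedule" in roles:
        image["Tag"] = f"Textual-Description({roles['Schedule']})"  # Rule 5
    return image

# Sprite after realization: a Ship that is also a Disabled-Ship, on a
# Mobile mission, course 220, with employment schedule U46.
sprite = design_image({"Ship", "Disabled-Ship"},
                      {"Course": 220, "Mission": "Mobile", "Schedule": "U46"})
print(sprite)
# -> {'Coloration': 'Red', 'Orientation': 220, 'Icon-Type': 'Arrow',
#     'Tag': 'Textual-Description(U46)'}
```

In the actual system these consequents would be assertions in PENNI, so the truth maintenance system could retract, say, the Red coloration if (Readiness Sprite C4) were withdrawn; the sketch omits that machinery.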
To complete the example, suppose that the following set of facts was asserted into the PENNI database: (Ship Sprite), (Readiness Sprite C4), (NonOperational-Status C4), (Mission Sprite X37), (Patrol X37), (Schedule Sprite U46), (Course X37 220), and (Employment-Schedule U46).

Figure 4: Model Fragment

Suppose further that the NIKL model defined Patrol to be a subclass of Mobile missions. Realization would recognize the 'Sprite' as a Disabled Ship and one engaged in a Mobile mission on a course of 220 degrees. Selection would identify that Rules 2, 3, 4 and 5 apply. Redescription would result in the addition to the PENNI database of the description of the image of the 'Sprite' as a red arrow with an orientation of 220, and with a textual representation of its schedule as its label. Due to the use of KL-TWO's truth maintenance system, if any of the facts pertaining to Sprite is retracted, an automatic change in the description of its graphic image will occur.

4.2 Classifying Collections of Data

For many requests for information encountered in our application domain, the design of a presentation requires global considerations that rules of the kind listed above cannot provide for. It would therefore be hopeless, at this point, to try to write rules that would attempt to derive an elaborate presentation entirely from low-level information about the objects to be described. Our approach provides us with a partial solution to this problem.

The availability of models of the application and of displays to our Presentation Designer gives it the advantage of being able to recognize collections of data as representing information of a certain known type. The Presentation Designer can then make use of presentation techniques specialized for this type of data to provide the user with more appropriate displays.
For example, Figure 4 provides portions of our model that include the class Pacific Situation, a collection of data about ships and ports in the Pacific Region, which includes certain specific information from the ships' employment schedules.

When provided with data about ships in the Pacific region and their employments, the Presentation Designer would classify the data in its model of the application, recognizing that it has received a collection of data belonging to the class Pacific Situation. Then the Presentation Designer can use specific presentation rules appropriate for displaying the information. In the application domain we have considered there is a preferred way for presenting this information, to which we try to conform. This preferred presentation has developed in the Navy in the course of years of handcrafted situation briefing presentations. The specific presentation rules appropriate only for Display Pacific Situation will combine the entities created by more general rules, of the kind described in the previous section, to produce the final presentation.

4.3 Generation of an Input Display

A presentation design system must also deal with the preparation of displays for the purpose of soliciting necessary information from the user. Here, again, the models of all aspects of the task and the application are valuable. At some point the user may indicate a desire to view data concerning one or more ships in some region. In terms of our model (see Figure 4), that would mean indicating a preference for Display a Situation. As it turns out, the Presentation Designer does not have any rules that can be used to redescribe this general request into a presentation, but there exist ways of satisfying more specific requests. For example, there exist ways to satisfy a request for displaying a single ship's situation or the situation of all ships in a region.
In this case, the system collects all options the user can choose among to construct an executable request. A rule of the Presentation Designer is used to compose a display form that will present these options to the user. The result of this design is a set of assertions in PENNI that the device driver for a separate form management package (QFORMS) [Kaczmarek, 1984] will use to prepare the input form. The form below, presented to the user, allows the user to make one of several specific choices:

Pacific Regions:
  [ ] Western Pacific
  [ ] South China Sea
  [ ] Indian Ocean
  [ ] Eastern Pacific
  [ ] Pacific Command Region
Ship: ________

It is instructive to examine precisely how this form is created. The concept Display a Situation has two subclasses of actions, namely Display Ship Situation and Display Pacific Situation. Our system considers the possibility of generating an intermediate two-item submenu, something like:

  Situation in Pacific Region
  Situation of Ship

Such a small menu is unsatisfactory from a human factors standpoint. We therefore formulated a general condition, stated in the rule used here, saying that if the number of choices is less than N, and if the choices can be further subdivided, then the proposed menu should not be displayed. Instead, a more detailed form should be generated, one based on the subchoices. Our prototype uses the value 3 for N, so in this case the rule causes the Presentation Designer to immediately generate the more specific form. A user is free to change the value of N, thus modifying the design of forms the system generates in situations like the one above.

Arens, Miller, Shapiro and Sondheimer 811

Note that the geographic regions available were specified by name in the form created, while ships were not. Rather, the user is allowed to specify the desired ship by typing it on the form¹.
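The menu-expansion rule described above (an N-choice threshold plus a subdivisibility test) might be sketched as follows. The function and data names are invented, and the sub-choice table is a stand-in for the model's subclass structure.

```python
# Hypothetical sketch of the form-generation rule: a proposed menu with
# fewer than N choices, all of which can be subdivided, is replaced by a
# more detailed form built from the sub-choices.

N = 3  # the prototype's threshold; users may change it

def expand_menu(choices, subchoices, n=N):
    if len(choices) < n and all(c in subchoices for c in choices):
        expanded = []
        for c in choices:
            expanded.extend(subchoices[c])
        return expand_menu(expanded, subchoices, n)  # may expand again
    return choices

subchoices = {
    "Situation in Pacific Region": [
        "Western Pacific", "South China Sea", "Indian Ocean",
        "Eastern Pacific", "Pacific Command Region",
    ],
    "Situation of Ship": ["Ship (fill in)"],
}

menu = expand_menu(
    ["Situation in Pacific Region", "Situation of Ship"], subchoices)
```

With the two-item starting menu, the rule fires and the six sub-choices are offered instead; a menu that already meets the threshold is left alone.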
This distinction is a result of information concerning the cardinality of the relevant collections of objects, information encoded in our models. Since the number of possible choices for region is small, they are enumerated. However, the number of ships is larger, so the user is provided with a way to specify a choice explicitly instead. Finally, the result of an end user completing this form is also controlled by the model. QFORMS allows actions to be associated with menu choices and form fillings. In creating a menu, the Presentation Designer adds an action to each field, conditioned on the field being selected with the mouse. This action will result in an assertion in PENNI, indicating that the user is requesting the action described by the model category from which the menu choice originated. Fill-in-the-blank forms work similarly.

5 Related Work

Perhaps the best known previous work dealing with the issue of presentation design is that of Mackinlay [Mackinlay, 1986]. Much like part of our system, Mackinlay's APT uses information about characteristics of data provided to it to produce a graphical representation of that data. The differences between the two systems become clear when we consider the variety of data each deals with and the variety of presentations they produce. APT produces graphs of various kinds, and much of its effort goes into deciding which axes to choose and how to indicate the values along each axis. The data dealt with is limited to what can be presented using such graphs. Consequently, Mackinlay has succeeded in producing a system which can generate graphical presentations automatically using only "low-level" information about the objects and their attributes. Our system is expected to generate a much wider variety of displays. Certain display layouts are often chosen simply to conform to pre-existing preferences of Navy personnel.
Consequently, unlike Mackinlay, we must provide for the possibility of following pre-set stereotypical instructions in certain cases. We thus must devote considerable effort to recognizing which cases require these special displays. A further significant difference between the systems is the complexity of the data we are required to present. We needed a sophisticated knowledge representation language, NIKL, a facility which Mackinlay found unnecessary. Both systems make use of sophisticated reasoning facilities.

The CUBRICON system [Neal and Shapiro, 1988] shares many of the same goals with our system, but differs in initial focus. Like our system, CUBRICON uses a sophisticated knowledge representation/reasoning system to manage an integrated, multi-modal interface, including maps, icons, tables, and natural language text. Whereas the CUBRICON project is trying to construct a unified communicating agent, with multi-modal input/output melded within a natural language understanding/generation system, our system highlights the rules that map between the application and interface models, and views the natural language generator as a rather impermeable display agent. CUBRICON is more focused on producing the grammar and rules for a multi-modal language; we are more focused on producing an easily used, multi-modal user-interface management system.

¹The actual form (Figure 5) uses the title Report as opposed to Ship, since it allows other types of reports as well.

6 Conclusions

We have realized the Integrated Interfaces design in a system that utilizes natural language, graphics, menus, and forms. Specifically, the Integrated Interfaces system can create maps containing icons with string tags and natural language descriptions attached to them. It can further combine such maps with forms and tables presenting additional, related information.
In addition, the system is capable of dynamically creating menus for choosing among alternative actions, and more complicated forms for specifying desired information.

We have constructed application models describing concepts in an important real-world domain, the naval situation briefing. We have implemented rules that enable the creation of different types of integrated multi-modal output displays based on the Navy's current manual practices. We have represented large enough portions of both the general and application-specific domains to demonstrate that a model-driven presentation design approach is potentially useful in real-world situations.

In achieving this result, we have done more than produce a system for constructing and controlling multi-modal application interfaces. We have shown that what would otherwise appear to be distinct communication mechanisms, viz., graphics, natural language, tables, etc., can be treated as part of an integrated whole, all relating to a common level of meaning representation. We have further shown that the decisions on the use of the appropriate mode can be represented straightforwardly by explicit rules relating information to be presented to the method of presentation. This work can serve as the basis of a comprehensive theory of multi-modal communication.

7 Future Work

Despite the successes illustrated in the previous examples, outstanding problems remain. Our future plans include adding the following structures to our system.

A User Model - A user model will enhance the Presentation Designer by allowing it to tailor presentations to individual user preferences. For example, it would enable the system to label only ports and regions unfamiliar to a user, thereby reducing screen clutter.

A Dialogue Model - A dialogue model will allow the presentations to be more closely tailored to specific users' requests. Currently, the Presentation Designer is simply provided with data to display.
It is not aware of the purpose of the display.

A Screen Model - A screen display is more than a bitmap; it is viewed by a user as containing icons and text which have real-world denotations. The interface's internal description of the display must be rich enough to allow a user to alternate between references to screen entities and their denotations. A screen model will make such relationships explicit.

812 User Interfaces

A demonstration version of the Integrated Interfaces system is now available at ISI. The current version models the domain of Navy ships in the Pacific Ocean. A user may use the system to access information about ships' locations, tasks, readiness status, and more. The resulting information is displayed using combinations of maps, menus, tables, and natural language output (Figure 5). The system is written in Common Lisp and runs in the X Windows environment under UNIX on HP 9000 Model 350 workstations. Displays are presented on a Renaissance color graphics monitor. The map graphic modality is supported by ISI's Graphics Display Agent. Menus and forms are created using QFORMS [Kaczmarek, 1984]. Natural language output is produced by ISI's Penman system [Sondheimer and Nebel, 1986].

Acknowledgements

We wish to acknowledge the crucial help provided by others working on the Integrated Interfaces project. Paul Raveling has developed the graphical interface and continues to maintain the GDA. Chin Chee has ported QFORMS and Penman to the HP workstation and is responsible for coordinating the various parts of the system. Jim Geller has contributed to the implementation of the spatial reasoner.

References

[Brachman and Schmolze, 1985] Ronald J. Brachman and James G. Schmolze. An Overview of the KL-ONE Knowledge Representation System. Cognitive Science 9(2), 1985, pp. 171-216.

[Kaczmarek, 1984] Tom Kaczmarek. CUE Forms Description. ISI Internal Report. USC/ISI, Marina del Rey, CA, 1984.
[Kaczmarek et al., 1986] Tom Kaczmarek, Ray Bates, and Gabriel Robins. Recent Developments in NIKL. Proceedings, AAAI-86, Philadelphia, PA, August 1986.

[Mackinlay, 1986] Jock D. Mackinlay. Automatic Design of Graphical Presentations. Ph.D. Thesis, Department of Computer Science, Stanford University, Stanford, CA, December 1986.

[McAllester, 1982] D. A. McAllester. Reasoning Utility Package User's Manual. Massachusetts Institute of Technology, AI Memo 667, Cambridge, MA, April 1982.

[Neal and Shapiro, 1988] J. G. Neal and S. C. Shapiro. Intelligent Multi-Media Interface Technology. Proceedings, Architectures for Intelligent Interfaces: Elements and Prototypes, J. W. Sullivan & S. W. Tyler, Eds., Lockheed AI Center, 1988, pp. 69-91.

[Sondheimer and Nebel, 1986] Norman K. Sondheimer and Bernhard Nebel. A Logical-Form and Knowledge-Base Design For Natural Language Generation. Proceedings, AAAI-86, Philadelphia, PA, August 1986, pp. 612-618.

[Vilain, 1985] Mark Vilain. The Restricted Language Architecture of a Hybrid Representation System. IJCAI-85: Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985, pp. 547-551.

Figure 5.
Specification By Reformulation: A Paradigm for Building Integrated User Support Environments¹

John Yen, Robert Neches, and Michael DeBellis
USC / Information Sciences Institute
4676 Admiralty Way, Marina del Rey, CA 90292

Abstract

Specification by reformulation is a general interface paradigm. It is an abstraction of retrieval by reformulation, a paradigm used in previous systems for assisting users in formulating database queries. Specification by reformulation serves as a general foundation upon which domain-specific applications can be built. To illustrate its usage, we describe three services built within it: a database retrieval aid, a notecards facility, and an electronic-mail interface to an on-line procurement system. Building systems in this way illustrates the concept of an integrated user support environment - a set of cooperating tools for end users that can be extended by application builders.

1. Introduction

An integrated user support environment is a set of modular software tools that can interact and smoothly pass partial results back and forth [Neches 88]. It is similar to the concept of a tool-oriented programming environment such as Unix. However, whereas a programming environment consists of tools that are used primarily by computer programmers, a user support environment consists of higher-level tools that can cooperate and aid application users in their day-to-day activities. Just as good programming environments are marked by close integration between their tools (for example, the ability to invoke an editor from a debugger in order to edit the source code of the function being examined in the debugger), the tools of a user support environment should be constructed in an open fashion so that they will naturally be able to work together. In this paper, we will demonstrate a particular approach to constructing such environments.
Our starting point is an interface paradigm called retrieval by reformulation that has been used for knowledge-based database retrieval systems. We will show how this paradigm can be abstracted to a general paradigm which we call specification by reformulation. By this we mean a paradigm of human/computer interaction in which the user develops, by successive approximations, a specification of the objects a system is to manipulate and/or the behavior it is to evince. In this style of interaction, the system provides an environment which facilitates the refinement of the specification, largely by generating feedback for the user about the specification in its current form and by providing guidance about means for modifying that specification. The implementation of the specification by reformulation paradigm consists of a knowledge representation structure that represents the current specification, and general functions that provide feedback and guidance on refinement of the specification. Once an environment for operating within the specification by reformulation paradigm has been implemented, it is relatively easy to build applications on top of it. Although such applications will seem to have little in common on the surface, at a higher level of abstraction they will all utilize the same paradigm for accomplishing their various tasks. The one thing that they will have in common is that they will all depend on the structure of a database or knowledge base to successfully complete their tasks.

¹The research described in this paper was supported by DARPA under contract No. MDA903-86-C-0178, and by the Air Force Logistics Command under contract No. F33600-87-C-7047. Views and conclusions contained in this paper are those of the authors, and should not be interpreted as representing the official opinion or policy of the sponsoring agencies.
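A minimal skeleton of this alternation between refinement, feedback, and guidance might look as follows. The driver and its toy instantiation are assumptions for illustration, not the actual implementation described later in the paper.

```python
# Sketch of the specify/feedback/guidance cycle, with the three steps as
# pluggable functions; every name here is illustrative, not a real API.

def reformulation_loop(spec, feedback, guidance, refine, satisfied,
                       max_rounds=10):
    for _ in range(max_rounds):
        fb = feedback(spec)                      # show effect of current spec
        if satisfied(spec, fb):
            return spec
        spec = refine(spec, guidance(spec, fb))  # user-driven modification
    return spec

# Toy instantiation: grow a set of constraints until it has three members.
result = reformulation_loop(
    spec=set(),
    feedback=lambda s: len(s),
    guidance=lambda s, fb: f"constraint-{fb}",
    refine=lambda s, hint: s | {hint},
    satisfied=lambda s, fb: fb >= 3,
)
```

The point of the skeleton is that only the four plugged-in functions vary across applications; the loop itself is the shared paradigm.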
Constructing such systems benefits users, who get a consistent interface and cooperating tools, and system builders, who get modularity and reusable code. We will discuss the BACKBORD system [Yen 88], an implementation of the specification by reformulation paradigm. BACKBORD illustrates how the paradigm can be applied to database query formulation, knowledge base browsing, the creation and attachment of notes to a knowledge base, and the creation of mail messages. These services are tools which must operate in an integrated fashion within an intelligent workstation for procurement of standard electronic parts [Neches 88].

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

2. Specification by Reformulation

Specification by reformulation is an abstraction of retrieval by reformulation, a paradigm for assisting users in formulating database queries [Tou 82, Williams 84]. Retrieval by reformulation assists users who know what they are trying to retrieve but cannot construct a query to retrieve it, either because they do not understand the query language or because they lack knowledge about the structure of the database. The major techniques of the paradigm came from a psychological theory of human remembering [Williams 81]. Stelzner and Williams [Stelzner 86] developed the term specification by reformulation to refer to a generalization of the retrieval by reformulation paradigm used to develop knowledge base interfaces. By abstracting the retrieval by reformulation paradigm, we achieve a general paradigm for interacting with large amounts of stored data that is based on a psychological theory of human information retrieval.

2.1. Previous Work

Previous systems such as RABBIT [Tou 82, Williams 84], ARGON [Patel-Schneider 84] and Intelliscope™ (a recent commercial product of Intellicorp) have used retrieval by reformulation to aid database users.
Such systems consist of a query (i.e., the description), a matching list, and an example. The query serves as the current context that the user has been able to establish and a description of what is being searched for in the database. The matching list contains all database records that match the description used for the last retrieval. The example shows the detailed content of one of the matching records. The major idea is to provide interactive guidance on possible ways to reformulate the query. Using this guidance, users can modify the original query to better reflect their intent. For example, the user can select values from the example to further constrain the query. The refined query is then used for another retrieval. Reformulation and retrieval iterate until the user is satisfied with the retrieval results.

2.2. A General Interface Paradigm

Specification by reformulation provides a high-level tool for applications utilizing databases or knowledge bases. The paradigm consists of alternating between (1) creating/refining a specification for achieving a user's goals and (2) obtaining feedback on the effect of the current specification and guidance about how it can be modified. When satisfied with this process, the user can then execute actions that utilize the resultant specification. In a retrieval-by-reformulation aid, refinement means modifying the query, feedback is obtained by retrieving against the query, guidance is obtained through menus indicating ways that contents of information fields in the display can be used to modify the query, and actions consist of tools for graphically displaying the retrieved data. By generalizing the notions of queries, retrievals, and examples to those of specifications, feedback, and guidance, we can address applications well beyond database browsing. We next will describe an architecture for doing so, followed by some illustrative applications.
3. Representing Specification by Reformulation

This section elaborates the paradigm and describes its implementation in BACKBORD. Since BACKBORD operates on NIKL knowledge bases, a few terms in the NIKL knowledge representation language must be introduced. A NIKL knowledge base consists of concepts and roles, which correspond to frames and slots in frame-based systems. The value restrictions in NIKL are used both to provide values of a slot and to provide restrictions on possible values. In this paper all references to knowledge base objects will appear in bold face. A more detailed description of the NIKL language is provided by [Moser 83]. The most significant aspect of NIKL related to our concerns is the presence of an automatic classifier [Schmolze and Lipkis 83], which utilizes the semantics of concept and role definitions to reason about where new concept descriptions fit with respect to pre-existing concepts in a subsumption (isa) hierarchy.

3.1. Refinement: Search in a Description Space

Specification by reformulation involves a search through a space representing an abstraction hierarchy of descriptions. At any given point in time, the current specification can be thought of as a node in that space. Based on the node's location, and the feedback obtained from it, the system helps users select operations to produce descriptions that are closer to expressing their intent. The specification consists of various roles. Each role itself has properties (information on possible values, restrictions on cardinality, etc.). Based on these properties, the specification can always be classified to determine its current position in the knowledge base. Based on its classification, certain things can be said about the specification in its current state (Feedback). Based on the system's model of valid specifications, it can make inferences about how the specification needs to be changed in order to classify into a valid place (Guidance).
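As a toy illustration of classification by subsumption (far simpler than NIKL's classifier, and with invented concepts), a concept can be modeled as a set of constraints, where a more general concept's constraints are a subset of a more specific one's:

```python
# Toy subsumption classifier: a concept is a set of (role, restriction)
# pairs; concept A subsumes description D when A's constraints are a
# subset of D's. Real NIKL reasoning is much richer than this.

taxonomy = {
    "Thing": set(),
    "Message": {("kind", "message")},
    "Quote-Request": {("kind", "message"), ("topic", "quote")},
}

def classify(description):
    # most specific concept whose constraints the description satisfies
    subsumers = [c for c, constraints in taxonomy.items()
                 if constraints <= description]
    return max(subsumers, key=lambda c: len(taxonomy[c]))

spec = {("kind", "message"), ("topic", "quote"), ("part", "K11SM")}
```

Classifying the current specification this way determines its position in the hierarchy, which is what the feedback and guidance steps are keyed on.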
An application suitable for the paradigm must be formulated so that successful completion can be viewed in terms of one or more concepts to be found or created in the knowledge base. Once formulated in this manner, execution of the application takes the form of searching through the space of possible descriptions until the correct one is found. Thus, an application developer's task is to provide BACKBORD with: (1) a taxonomy of the concepts in the application domain (e.g., a knowledge base model of the information covered in a database); (2) mappings from class concepts in the taxonomy to procedures for obtaining feedback (e.g., functions for converting a concept into a database query); (3) associations between class concepts and actions applicable to members of that class (e.g., offering auto-dialing on data entries containing phone numbers). With this information, the system can provide a general-purpose user interface for viewing specification descriptions, obtaining feedback and guidance, modifying the descriptions, and invoking actions upon resultant descriptions. The interface is customizable within certain parameters, because an application developer specifies how feedback is generated from a description and what actions can be applied to it. However, the interface provides a framework in which application developers are freed from specifying much that would otherwise be required, and in which users see a high degree of consistency across individual tools.

3.2. Guiding the User Through the Search Space

Because of the general nature of the specification by reformulation paradigm, aids can be provided to help the user navigate through the space of possible descriptions which can be used with little or no modification by specific applications. In BACKBORD the specification is represented by a structure with superconcepts, roles, and value restrictions for the roles. This is a NIKL concept² to be classified into the knowledge base.
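The three developer-supplied ingredients enumerated above might be packaged as a simple registry; the structure, class names, and the taxonomy walk are assumptions for illustration, not BACKBORD's actual interface.

```python
# Sketch: an application supplies (1) a taxonomy, (2) per-class feedback
# procedures, (3) per-class actions. Feedback for a concept is found by
# walking up the taxonomy to the nearest class that defines one.

application = {
    "taxonomy": {"Memory-Part": "Part", "Part": "Thing"},  # child -> parent
    "feedback": {"Part": lambda spec: f"SELECT * WHERE {spec}"},
    "actions": {"Phone-Number": ["auto-dial"]},
}

def feedback_for(concept, spec, app):
    while concept is not None:
        if concept in app["feedback"]:
            return app["feedback"][concept](spec)
        concept = app["taxonomy"].get(concept)  # climb toward the root
    return None

query = feedback_for("Memory-Part", "type = RAM", application)
```

Inheriting feedback procedures through the taxonomy is what lets developers specify behavior once, at a general class, rather than per concept.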
Figure 3-1 shows a BACKBORD screen browsing a database of standard electronic parts. The specification is displayed in a window called the description window. BACKBORD also has a scrollable window which contains all the objects that classify below the current specification (the window labeled "Matching Instances" in Figure 3-1). The user can select any one of these to be displayed in detail in the "Example" window, which appears just above the "Matching Instances" window.

Figure 3-1: A BACKBORD screen for database browsing

²The specification is actually a special structure that allows disjunctive value restrictions and other facilities that are not available in the current version of NIKL.

The specification by reformulation paradigm as implemented in BACKBORD consists of operations on the specification and matching objects.

3.2.1. Operations on the Specification

BACKBORD allows the user to modify the superconcepts and value restrictions of the specification. These modifications take advantage of the fact that all superconcepts and value restrictions are concepts in the knowledge base. All such concepts will henceforth be referred to as specification concepts. The operations provided for modifying specification concepts are:

Generalize, Specialize, and Siblings: These options allow the user to replace a reference to a specification concept by a reference to one or more of its immediate parents, children, or siblings in the concept taxonomy.

NIKL Browser: This invokes the ISI Grapher [Robins 87] to give the user a graphical display of the knowledge base relative to the chosen specification concept. The user may then choose any concept from the graph to replace the specification concept.
Search: The search option creates a recursive call to BACKBORD with the specification concept as the starting specification.

3.2.2. Operations on the Matching Objects

The matching objects are used as prompts for things that should (or should not) be in the specification. One can modify or add a role or a superconcept to the specification by transferring the value from a matching object. One can also perform any of the operations described for specification concepts on a concept that serves as a value restriction or superconcept for a matching object.

4. Example Applications of Specification by Reformulation

Once we have implemented the specification by reformulation paradigm, building applications on top of it consists of developing customizations or extensions to the representation of the specification and/or the functions that implement feedback and guidance. The following are examples of applications in the BACKBORD system developed using this methodology.

4.1. Database Retrieval

When using BACKBORD for database retrieval, the specification represents a query to the database. The matching objects represent database instances that would be retrieved using the current specification. The specification is refined by selecting roles and role values from the matching instances, until it retrieves the instances that are desired. This is very similar to retrieval by reformulation systems such as ARGON and RABBIT. The main difference between the database capabilities of BACKBORD and retrieval by reformulation systems is that the retrieval by reformulation systems worked in a mostly bottom-up fashion. Although such systems created an internal knowledge base representation for the data being retrieved, the user was never able to explicitly view the structure of that representation. Thus, the feedback in these systems all came from the prompts provided by matching instances.
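The bottom-up refinement these systems share can be sketched as a small loop over query, matching list, and example; the part records and attribute names below are invented for illustration.

```python
# Toy retrieval-by-reformulation cycle: retrieve against the current
# query, inspect one matching record as the example, and copy a value
# from it back into the query to constrain the next retrieval.

records = [
    {"type": "RAM", "access_time": 100, "organization": "1Kx4"},
    {"type": "RAM", "access_time": 70, "organization": "4Kx1"},
    {"type": "ROM", "access_time": 100, "organization": "1Kx8"},
]

def retrieve(query):
    return [r for r in records
            if all(r.get(k) == v for k, v in query.items())]

query = {"type": "RAM"}               # initial, underconstrained query
matching = retrieve(query)            # the matching list
example = matching[0]                 # detailed view of one match
query["access_time"] = example["access_time"]   # refine from the example
matching = retrieve(query)            # iterate until satisfied
```

Each pass narrows the matching list; the user stops when it contains exactly the desired instances.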
In addition to this type of feedback, BACKBORD makes possible a top-down manner of specification. The user is able to view the hierarchical representation of the database. In this way the user can modify the specification based on feedback from matching examples (bottom up) and by specializing the type of object that is being searched for (top down).

4.2. The Mail Interface

ISI's FAST project [Neches 88] provides price quotes and handles purchase requests for electronic parts via computer mail. The BACKBORD mail interface to FAST (see Figure 4-1) helps the user construct messages for part quotations and orders. In the mail interface, the specification represents a message being constructed by the user and the matching instances are examples of previously completed messages that can be used to help construct the current message. The following describes the steps that were necessary in order to build the mail interface using the specification by reformulation paradigm:

4.2.1. A Model of Legal Specifications

New concepts describing the hierarchy of message types must be entered into the knowledge base. The concept (in this case Message) that should serve as the starting point when entering the mail interface must also be specified.

4.2.2. Feedback Procedures

The most important method of feedback is the retrieval of objects that classify under the specification. From these objects, the user can find a message or messages similar to the one being created. The default commands for manipulating examples allow the user to utilize information in the examples to modify the new message, e.g., by copying a field. A type of feedback that is specific to the mail interface is the ability to compare the specification to valid message classes and advise the user on how to change the specification in order to classify it into a message type that is valid to send.
4.2.3. Associations Between Actions and Concepts

The actions associated with the mail interface concepts are sending and incorporating mail messages. This was the primary effort involved for this application. It consisted of interfacing the mail concepts to the Unix mail facility. This included converting concepts to messages and messages to concepts and executing Unix commands (such as Send and Inc) from the Lisp workstation. All of this is transparent to the user.

Figure 4-1: A BACKBORD screen for the mail interface

The following describes some of the advantages of building the mail interface using the specification by reformulation paradigm:

4.2.4. Integration

The mail interface illustrates the integrated user support environment concept. It is entered as a result of the user locating a Part, using BACKBORD's database browsing capabilities, and deciding to send a message to the FAST broker regarding the Part. Thus, the roles of the message specification that refer to the part (the Part and the Manufacturer) will by default be filled in with values from the Part found by the database query.

4.2.5. Consistent Interface

As can be seen from Figure 4-1, the mail interface is very similar to the browsing interface (Figure 3-1). The operations for constructing a mail message are also similar to those for constructing a query. All of the operations described in section 3.2.1 can be used to construct a message.
4.3. The Notecards Interface - TINT

The Intelligent Note Taker (TINT) enables users to create notes, attach notes to knowledge base objects, and retrieve notes relevant to an object. Notes are classified into a note taxonomy in the knowledge base. A detailed discussion of TINT can be found in [Harp 88]. For the note creation task, the specification represents a new note to be created. The procedure for creating a note is almost identical to that for creating a mail message. Just as in the mail interface, the user must decide the type of note to create (using a taxonomy of note types) and correctly instantiate the fields of the note using objects from the knowledge base.

5. Conclusion

This paper has described specification by reformulation, an interface paradigm that extends the retrieval by reformulation paradigm into activities beyond query-based retrieval. Examples of the use of the paradigm were shown through BACKBORD. BACKBORD is an example of an integrated user support environment - a modular set of tools that naturally interact with each other. Such an environment has the following advantages:

Integration and Consistency: The user is provided with one environment with the same interface conventions and with integrated capabilities.

Reusability and Ease of Maintenance: Modules that accomplish very different tasks can all be based on the specification by reformulation paradigm. By having a general module that captures the user interface paradigm, the code size and maintenance problems for these application modules are significantly reduced.

BACKBORD is a domain-independent shell for information systems. In essence, BACKBORD is analogous to an expert system shell in that it separates domain-independent components from domain-specific feedback and actions, just as expert system shells separate a domain-independent inference engine from domain-specific rule bases.
The mechanisms that implement these facilities make BACKBORD the beginning of an integrated user support environment. By building the appropriate knowledge base and adding domain-specific feedback and actions, system builders can extend BACKBORD to their own applications.

Acknowledgments

We would like to thank Brian Harp, John Granacki, and Paul Rosenbloom for their comments on earlier drafts of the paper.

References

[Harp 88] Harp, B. & Neches, R., "A Knowledge-based Notecard Environment," in Proceedings of the Workshop on Architectures for Intelligent Interfaces: Elements and Prototypes, Monterey, California, March 1988.

[Moser 83] M. G. Moser, "An Overview of NIKL, the New Implementation of KL-ONE," in Research in Natural Language Understanding, Bolt, Beranek, and Newman, Inc., Cambridge, MA, 1983. BBN Technical Report 5421.

[Neches 88] Robert Neches, FAST Workstation Project Overview, USC/Information Sciences Institute, Technical Report ISI/RS-88-203, December 1988.

[Patel-Schneider 84] P. F. Patel-Schneider, R. J. Brachman, and H. J. Levesque, "ARGON: Knowledge Representation Meets Information Retrieval," in Proceedings of the First Conference on Artificial Intelligence Applications, Denver, Colorado, December 1984.

[Robins 87] Gabriel Robins, "The ISI Grapher: a Portable Tool for Displaying Graphs Pictorially," in Symboliikka '87, Helsinki, Finland, August 1987. Reprints available as USC/ISI Technical Report ISI/RS-87-196.

[Schmolze and Lipkis 83] James Schmolze and Thomas Lipkis, "Classification in the KL-ONE Knowledge Representation System," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, 1983.

[Stelzner 86] Marilyn Stelzner and Michael D. Williams, Specification by Reformulation: An Approach to Knowledge Based Interface Design, IntelliCorp, Mountain View, CA, 1986.

[Tou 82] Tou, F. F., M. D. Williams, R. Fikes, A. Henderson, and T. Malone, "RABBIT: An Intelligent Database Assistant," in Proceedings AAAI-82, pp.
314-318, 1982.

[Williams 81] M. D. Williams and J. D. Hollan, "The process of retrieval from very long term memory," Cognitive Science 5, 1981, 87-119.

[Williams 84] M. D. Williams, "What makes RABBIT run?," Int. J. Man-Machine Studies 21, 1984, 333-352.

[Yen 88] John Yen, Robert Neches, and Michael DeBellis, "BACKBORD: Beyond Retrieval by Reformulation," in Proceedings of the Workshop on Architectures for Intelligent Interfaces, Monterey, California, 1988.
The Extraction of Shape Information From Shading

Alex Pentland
Vision Science Group, The Media Lab, Massachusetts Institute of Technology
Room E15-410, 20 Ames St., Cambridge MA 02138

Abstract

We present a closed-form solution to the problem of extracting shape information from image shading, given standard assumptions and oblique illumination. Neither integration nor iterative propagation of information is required. An improved method for estimating the illuminant direction is also presented.¹

1. Introduction

The extraction of shape from shading has a relatively long history within the field of computer vision. There have been two general classes of algorithm developed: local algorithms, which attempt to estimate shape from local variations in image intensity, and information propagation algorithms, which attempt to propagate contour information across a shaded surface.

Local algorithms, such as Pentland [1] or Ferrie and Levine [2], use shading information within a small region to estimate surface orientation. Thus a subsequent integration step is required to obtain surface shape. These local methods of estimating surface orientation have been shown [2,3] to produce accurate estimates whenever z(x, y), the imaged surface, has derivatives that obey:

(z_u² − z_v²) / (z_u z_v) = (z_uu − z_vv) / z_uv   (1)

Examples of surfaces which satisfy this condition everywhere are surfaces of revolution whose axis is parallel to the z axis and cylinders whose axis lies in the image plane [4]; however, on more general surfaces this condition holds only at infrequent, isolated points.

¹This research was made possible by National Science Foundation Grant No. DCR-85-19283. I wish to thank Berthold Horn, Graham Smith, and Yvan Leclerc for their comments and insights.

The global algorithms, principally developed by Horn and his students [5], make use of a smoothness assumption to relate adjoining points.
This enables the strong information available at smooth occluding contours to be iteratively propagated across the surface. The assumption of smoothness is perhaps the primary limitation to the applicability of these algorithms; for instance, the smoothness constraint adopted in [5] implies that the algorithm will converge to the correct surface only when the condition in Eqn. (1) holds [6]. Integration is normally required to obtain the surface shape.

In this paper we develop a novel formulation of the shape-from-shading problem, one that permits a direct, closed-form solution for the surface. Neither integration nor iterative propagation of information is required to solve for the height field; however, low-frequency shape information cannot be recovered. This formulation also permits us to develop an improved estimator of illuminant direction.

2. Imaging of Surfaces

Let z = z(x, y) be a surface, and let us assume that: (1) the surface is Lambertian, (2) the surface is illuminated by (possibly several) distant point sources, (3) the surface is not self-shadowing. We will also take z < 0 within the region of interest, and assume orthographic projection onto the x, y plane.

We will let L = (x_L, y_L, z_L) = (cos τ sin σ, sin τ sin σ, cos σ) be the unit vector in the mean illuminant direction, where τ is the tilt of the illuminant (the angle the image-plane component of the illuminant vector makes with the x-axis) and σ is its slant (the angle the illuminant vector makes with the z-axis).

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Under these assumptions the normalized image intensity I(x, y) will be

I(x, y) = (p cos τ sin σ + q sin τ sin σ + cos σ) / (p² + q² + 1)^{1/2}   (2)

where p = ∂z/∂x and q = ∂z/∂y.

2.1 Linear Approximation

If we then take the Taylor series expansion of I(x, y) about p, q = 0 up through the quadratic terms, we obtain

I(x, y) ≈ cos σ + p cos τ sin σ + q sin τ sin σ − (cos σ / 2)(p² + q²)   (5)

This expression gives an excellent approximation when |p|, |q| < 1. Under the condition |p|, |q| < 1 the linear terms of Eqn. (5) will dominate the power spectrum except when the average illuminant is within roughly ±30° of the viewer's position. When either p, q ≪ 1 or the illumination direction is roughly perpendicular to the line of sight, the quadratic terms will be negligible. We will assume that such is the case in the following analysis². Thus we will approximate the intensity function by:

I(x, y) = cos σ + p cos τ sin σ + q sin τ sin σ

Note that this is exactly the lunar reflectance function.

²Note that when these conditions are not true, then the image gradient direction is approximately parallel to the image-plane component of the surface orientation (tilt), and the magnitude of the image intensity gradient is proportional to the remaining component of orientation (slant). Thus when our assumptions are seriously violated, the recovery of surface orientation may be accomplished by local analysis of the image gradient field.

2.2 Spectral Properties

We will let the complex Fourier spectrum F_z(f, θ) of z(x, y) be

F_z(f, θ) = m_z(f, θ) e^{i φ_z(f, θ)}   (7)

where m_z(f, θ) is the magnitude at position (f, θ) on the Fourier plane, and φ_z is the phase. Now since p and q are partial derivatives of z(x, y), their transforms F_p and F_q are related to F_z in an elementary fashion. We can write

F_p(f, θ) = 2π f cos θ m_z(f, θ) e^{i(φ_z(f, θ) + π/2)}   (8)

F_q(f, θ) = 2π f sin θ m_z(f, θ) e^{i(φ_z(f, θ) + π/2)}   (9)

In this case, the Fourier transform of the image I is (ignoring the DC term):

F_I(f, θ) = 2π sin σ f m_z(f, θ) e^{i(φ_z(f, θ) + π/2)} [cos θ cos τ + sin θ sin τ]   (10)

2.3 Recovery of Shape

This spectrum depends, as expected, upon the illuminant direction and the surface z(x, y).
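The spectral relation above rests on the linear approximation of Section 2.1; how closely those linear terms track the full Lambertian model of Eqn. (2) is easy to probe numerically. The sketch below uses hypothetical slope and illuminant values (not from the paper):

```python
import math

# Hypothetical illuminant: tilt tau = 45 deg, slant sigma = 60 deg.
tau, sigma = math.radians(45.0), math.radians(60.0)

def full_model(p, q):
    # Full Lambertian image model, Eqn. (2)
    num = p * math.cos(tau) * math.sin(sigma) + q * math.sin(tau) * math.sin(sigma) + math.cos(sigma)
    return num / math.sqrt(p * p + q * q + 1.0)

def linear_model(p, q):
    # Linear (lunar-like) approximation: the quadratic term is dropped
    return math.cos(sigma) + p * math.cos(tau) * math.sin(sigma) + q * math.sin(tau) * math.sin(sigma)

# For small slopes the discrepancy is of order (cos(sigma)/2)(p^2 + q^2)
for p, q in [(0.05, -0.03), (0.1, 0.1), (0.2, -0.15)]:
    err = abs(full_model(p, q) - linear_model(p, q))
    assert err < 0.02
```

The loose 0.02 threshold is an illustrative choice; the residual shrinks quadratically as the slopes (p, q) approach zero.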
What is remarkable is that given the illuminant direction we can recover the surface's Fourier transform directly, except for an overall scale factor and the low-frequency terms which are lost in going from Eqn. (7) to Eqns. (8) and (9). That is, if we let

F_I(f, θ) = m_I(f, θ) e^{i φ_I(f, θ)}   (11)

then the Fourier transform of the surface is simply

F_z(f, θ) = m_I(f, θ) e^{i(φ_I(f, θ) − π/2)} / (2π sin σ f [cos θ cos τ + sin θ sin τ])   (12)

The ability to directly recover surface shape from the Fourier components of the image suggests a theory of human shape perception. It is known that the visual system's initial cortical processing areas contain many cells that are tuned to orientation, spatial frequency and phase. Although the tuning of these cells is relatively broad, it is clear that one could produce a coarse estimate of shape by (1) phase-shifting the cells' responses by π/2, (2) scaling the cells' activity by 1/f, where f is the spatial frequency that the cell is tuned for, and (3) biasing the cells' activity to remove coarse variation in the distribution of activity versus orientation, i.e., to remove the effects of the illumination direction.

2.4 Estimating the Illuminant Direction

Pentland [7] introduced a method of estimating illuminant direction from the distribution of image derivatives as a function of image direction. The method works by assuming a statistically uniform distribution of surface orientations, and then performing a maximum-likelihood analysis to estimate the cosine variation in image gradient magnitude induced by the directionality of the illuminant. In summary, the result is that:

(x̃_L, ỹ_L)ᵀ = (βᵀ β)⁻¹ βᵀ (dI₁, dI₂, …, dIₙ)ᵀ   (13)

where (x̃_L, ỹ_L) are the unnormalized x and y components of the illuminant direction, β is an n × 2 matrix of directions (dx_i, dy_i), and dI_i is the mean magnitude of dI(x, y)/dx_i + dI(x, y)/dy_i.

Given (x̃_L, ỹ_L) we may then find the complete illuminant direction, which is simply:

x_L = x̃_L / k,  y_L = ỹ_L / k,  z_L = √(1 − x_L² − y_L²)   (14)

where

k = √(E(dI²) − E²(dI))   (15)

and E(dI) is the expected value of dI/dx_i + dI/dy_i over all directions i.

This method has proven to be quite robust [2,3,7]; however, the assumption of uniformly distributed surface orientations is disagreeably strong. We can improve this method substantially by observing that in Eqn. (10) the illuminant produces a similar effect in each frequency band. Thus if we make the much weaker assumption that the power in a particular spatial frequency band is uniformly distributed over orientation³, then we can use a similar method to estimate the illuminant direction, substituting the magnitude of the Fourier components for the magnitude of the first derivatives. In particular, Eqn. (13) becomes

(x̃_L, ỹ_L)ᵀ = (βᵀ β)⁻¹ βᵀ (m₁, m₂, …, mₙ)ᵀ   (16)

where the m_i are the magnitudes of the Fourier components within the selected frequency band in direction (dx, dy).

³Or, more precisely, is not distributed in a way that is correlated with the illuminant effects.

We have applied Eqn. (12) to both synthetic images of complex surfaces, such as is shown in Figure 1(a) (this is a fractal Brownian surface with D = 2.3; max(p, q) ≈ 5.0), as well as to complex natural images such as shown in Figures 2(a) and 3(a). The use of synthetic imagery is necessary to answer the two important questions concerning this method: one, is the Taylor series approximation a good one, and two, is the recovery stable and accurate?

Figure 1(b) shows the distribution of intensity values obtained when the surface of Figure 1(a) is illuminated from L = (1, 1, 1)/√3. Figure 1(c) shows the distribution of errors between the full imaging model and the Taylor series approximation using only the linear terms. As can be seen, the approximation is a good one, even though this surface is often steeply sloped (i.e., max(p, q) = 5.0). Figure 1(d) shows the surface recovered by use of Eqn. (12).
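The closed-form recovery of Eqn. (12) can be sketched end-to-end with FFTs. In the sketch below the surface, grid size, and illuminant are hypothetical choices, and the image is rendered with the linear reflectance model, so the inversion is exact up to the unrecoverable low-frequency (here, DC) terms:

```python
import numpy as np

N = 64
tau, sigma = np.pi / 4, np.pi / 3                      # hypothetical illuminant tilt and slant
y, x = np.mgrid[0:N, 0:N] / float(N)
phase = 2 * np.pi * (3 * x + 2 * y)
z = 0.005 * np.sin(phase)                              # synthetic surface z(x, y)
p = 0.005 * 2 * np.pi * 3 * np.cos(phase)              # dz/dx
q = 0.005 * 2 * np.pi * 2 * np.cos(phase)              # dz/dy

# Linear (lunar-like) reflectance model
I = np.cos(sigma) + np.sin(sigma) * (np.cos(tau) * p + np.sin(tau) * q)

# In Cartesian frequency coordinates (u, v), Eqn. (10) reads
#   F_I = i * 2*pi*sin(sigma) * (u cos(tau) + v sin(tau)) * F_z,
# so F_z follows by division wherever the factor is nonzero.
u = np.fft.fftfreq(N, d=1.0 / N)                       # integer cycles per image
U, V = np.meshgrid(u, u)                               # U varies along x (columns), V along y (rows)
factor = 1j * 2 * np.pi * np.sin(sigma) * (U * np.cos(tau) + V * np.sin(tau))

FI = np.fft.fft2(I - I.mean())                         # remove the DC term cos(sigma)
Fz = np.zeros_like(FI)
ok = np.abs(factor) > 1e-9                             # low frequencies where factor ~ 0 are lost
Fz[ok] = FI[ok] / factor[ok]
z_rec = np.real(np.fft.ifft2(Fz))
err = float(np.max(np.abs(z_rec - z)))
```

Because the test surface has zero mean and its frequency content avoids the null set of the filter, `z_rec` here matches `z` to machine precision; on real imagery the overall amplitude and low frequencies remain undetermined, exactly as the text describes.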
Because the low-frequency terms and the overall amplitude cannot be recovered, it was necessary to scale the recovered surface to have the same standard deviation as the original surface before we could compare the two surfaces. Figure 1(e) shows the differences between the original surface and the recovered surface. As can be seen, the recovery errors are uniformly distributed across the surface. These errors have a standard deviation that is approximately 5% of the standard deviation of the original surface. It appears that these errors can be attributed to the Taylor expansion approximation breaking down for steeply-sloped regions of the surface, i.e., those with |p|, |q| > 1.

Figure 2(a) shows a high-altitude image of a mountainous region outside of Phoenix, Arizona. This area has been the subject of intensive study, so that we are able to compare our shape-from-shading algorithm to, for instance, results obtained using stereopsis. In particular, the Defense Mapping Agency has created a depth map of this region using their interactive stereo system. The stereo depth map they recovered is shown in Figure 2(b). Such maps are hard to interpret, so we created a synthetic image from this stereo depth map using standard computer graphics techniques. The image created from the stereo depth map is shown in Figure 2(c). In addition, Figure 2(d) shows a perspective view of this stereo depth map.

Figure 2(e) shows the depth map recovered from the shading information in Figure 2(a), by use of Eqn. (12). As part of the recovery process, the illuminant direction was estimated from the Fourier transform of the image by use of Eqn. (16). To aid in the evaluation of this shading-derived depth map, we also created an image from it.

References

[5] Brooks, M. J., and Horn, B. K. P. (1985) Shape and Source from Shading, Proc. Int. Joint Conf. on Artificial Intelligence, Los Angeles, pp. 932-936.

[6] Smith, G. B., Personal Communication.

[7] Pentland, A. P.
(1982) Finding the illuminant direction, J. Optical Society of America, Vol. 72, No. 4, 448-455.

[8] Frankot, R. T., and Chellappa, R. (1987) A Method For Enforcing Integrability In Shape From Shading Algorithms, Proc. First Int. Conf. on Computer Vision, pp. 118-127, June 8-11, London, England.

Figure 1: (a) A fractal Brownian surface, (b) the distribution of intensities within the image of the surface in (a), (c) the distribution of differences between the image and our linear-term-only Taylor series approximation, (d) the surface recovered from shading (compare to (a)), and (e) the errors in the recovery process.

Figure 2: (a) An image of a mountainous region outside of Phoenix, Arizona, (b) a depth map of this region obtained from a stereo pair by the Defense Mapping Agency, (c) an image created from this stereo depth map, (d) a perspective view of the stereo depth map, (e) the depth map recovered from shading information alone, by use of Eqn. (12), (f) an image created from this shading depth map, and (g) a perspective view of the depth map derived from image shading.

Figure 3: (a) An image of a woman used in image compression research, (b) a perspective view of the depth map recovered from shading information alone, by use of Eqn. (12), (c) a close-up of the recovered surface in the neighborhood of the woman's face; note the presence of eyes, cheek, lips, nostrils, and nose arch, (d) a shaded, oblique view of the recovered surface.
Feature Recognition Using Correlations Contained in Multiple Neighborhoods

Andrea Califano
Exploratory Computer Vision Group
IBM Thomas J. Watson Research Center
PO Box 704, Yorktown Heights, NY 10598

Abstract

Parameter transforms play a very important role in the recognition of geometric features in image data. Local operators devised to compute parametric descriptions of geometric entities using a small neighborhood p(x, y) about points of interest have been successfully employed. These operators fail to exploit the long-distance correlations present in the image (distant points belonging to the same feature). Thus, their accuracy decreases with the order of the parametric properties (e.g., position, direction, curvature, torsion, etc.) and they are very sensitive to noise.

This paper presents a generalized neighborhood concept that allows parameter-extraction operators to use the joint information of different portions of the same feature. This produces up to a few orders of magnitude improvement in accuracy (signal/noise ratio) and a smoother response of the transform. A general framework, based on a connectionist approach, is presented to deal with the complex response in parameter space generated by such operators. A layered and concurrent scheme to extract 3D surface intersection curves is presented which, exploiting the properties of these operators, is able to reconstruct lines and conic sections in three-space.

1. Local Parameter Transforms

The usual approach for the recognition of complex geometric features in images is to introduce a, possibly partial, parametric description of the feature of interest and then devise an operator to extract parameters from sets of image data. For example see [Kimme, Ballard and Sklansky, 1975; Sklansky, 1978]. If multiple features are present in the data, the portion of image used as an input by the operator must be small enough to avoid, on average, the interference of more than one feature.
Thus small, possibly overlapping, neighborhoods of fixed size are generally used; these are chosen such that the entire image is covered. Usually, for discrete samplings, the input to the operator is composed of the data points contained in a window of small size. Several problems limit the usefulness of local operators:

• Poor resolution and high noise sensitivity: due to the multiple noise sources, namely measurement and quantization noise, and due to the limited size of the neighborhoods, only zero- and first-order properties of the features can be extracted with sufficient accuracy.¹ For a curve in three-space, for instance, these would be the location and the orientation of a curve element. Recognition of features with higher-order properties (e.g., curvature for conic sections) is extremely difficult [Bolle, Kjeldsen and Sabbah, 1987].

• Small number of points: some features like surface intersection curves, given their one-dimensional nature, extend over O(N) data points (pixels) of an N × N image. This results in a small number of neighborhoods (computational units) taking part in the transform.

• Poor handling of complex parametrization: complex parametric features are handled by leaving some of the parameters undetermined; this introduces a one-to-many mapping from image to parameter space. This works well providing that the dimension of the undetermined portion of parameter space is small [Shapiro, 1978a; Shapiro, 1978b]. It can be experimentally shown that, when this dimension exceeds two, the parameter space tends to become overloaded so that no structure appears.

¹Thanks to Ruud M. Bolle for his valuable contributions.

2. Generalized Neighborhoods

For the above reasons, a different, non-local approach is introduced. Such an approach must preserve some important characteristics of local operators, for instance their intrinsically parallel nature. Let us consider a bidimensional image where each point (x, y) has an associated value z (intensity, range, etc.).
We define p(x, y) as a neighborhood centered about the point (x, y), which is small compared to the image size. Let P = {p(x_i, y_i); i = 1 … N} be a partition of the image, that is, a set of neighborhoods containing all the points of the image, with possible overlapping. Let us define a new partition MP ("multi-partition") whose elements are all possible (unique) combinations of N neighborhoods p(x, y) in groups of K:

MP = { p(x_i, y_i) ∪ p(x_j, y_j) ∪ … ∪ p(x_k, y_k); i = 1 … N, j = i+1 … N, … }   (2.1)

An element of the partition contains all the image data points of p(x_i, y_i), p(x_j, y_j) to p(x_k, y_k). The total number n of (unique) elements of the partition MP is given by the combinations of N objects in groups of K:

n = C(N, K) = N! / ((N − K)! K!)   (2.2)

where an element is considered not unique if it can be obtained from another by a permutation of the indices i, j, …, k. For instance, the two elements

MP_{ji…k} ≡ MP_{ij…k}   (2.3)

would be considered equivalent and only one would appear in the partition.

Figure 1

For convenience we will use the word "unit" to indicate a local neighborhood and "K-unit" to indicate one of the generalized neighborhoods obtained by combining the local ones in groups of K (unit and 1-unit are synonyms). As an example, let us consider the case of a discrete image of size L × L. If we choose P to be the set of all possible non-overlapping windows of size M × M [N = (L / M)² is an integer], we have N windows in the partition. If we choose a group size of two, the multi-partition MP is composed of N(N − 1)/2 possible combinations of two windows, as shown in figure 1.

3.
Properties of K-Unit Transforms

Using K-unit neighborhoods for parameter transforms has many advantages, among which:

Enhanced accuracy. The accuracy of the extracted parameters increases with the number of points taking part in the transform; the accuracy also depends on the relative position of the points on the feature. A K-unit contributes to the extraction of the parameters of a feature if all the composing local neighborhoods p(x, y) contain only points on the feature. Thus, since a K-unit contains the points of K different units, the computation has an expected increase K in the number of active points, compared to a local transform where only the points of a single unit would be considered.

Another factor contributing to the enhanced accuracy is that distant points on the same feature are jointly used for the parameter extraction. In general, due to the complexity of the parameter transform model, it is impossible to quantify this contribution. However, using a rough and simplified model, we can quantitatively estimate the decrease in error when K-units are used, for the special case of a 2D line of specified length parallel to the x-axis.

Suppose we have a set of points {(x, y)} evenly spaced with respect to the x-axis over an interval of length L, with the value of y distributed within a finite interval 2Δy:

y(x) = y₀ ± Δy   (3.1)

If Δy is small with respect to L, these points "correspond" to a line parallel to the x-axis. Now choose the partition to be the set of windows {W_i} of size M × M centered about the points {(x_i, 0); i = 1 … N = L / l} (l = x_{i+1} − x_i), with l ≤ M ≪ L (see fig. 2a). With these assumptions, all the points in a window are contained in a rectangle of size 2Δy × M. If we compute the angle of the line using 1-units, the worst-case estimate for the angle is θ = atan(2Δy / M), as shown in figure 2b.
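As a quick numerical check of this bound (hypothetical values for Δy and M; the angle estimate is modeled as the line through the window's two extreme samples, which is what the worst-case geometry assumes):

```python
import math
import random

M, dy = 1.0, 0.1                       # hypothetical window width and vertical spread
bound = math.atan(2 * dy / M)          # worst case: endpoints at (0, -dy) and (M, +dy)

random.seed(1)
for _ in range(1000):
    y0 = random.uniform(-dy, dy)       # y at the left edge of the window
    y1 = random.uniform(-dy, dy)       # y at the right edge
    theta = math.atan(abs(y1 - y0) / M)
    assert theta <= bound + 1e-12      # no configuration exceeds atan(2*dy/M)

# The same geometry over a longer horizontal baseline gives a tighter bound:
assert math.atan(2 * dy / 4.0) < bound
```

The last line previews the effect the text quantifies next: stretching the effective baseline beyond a single window shrinks the worst-case angular error.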
If we use the result of the computation for all possible N windows on the line to generate a distribution in parameter space, we can estimate an upper bound for the error by taking the average of the worst-case errors of the single measures:

Δθ₁ = (1/N) Σ_{i=1}^{N} |Δθ_i| = atan(2Δy / M)   (3.2)

which is, as one would have expected, the same as in the single-measure case.

Let us now use the generalized neighborhood concept with two windows per group. As shown in figure 2c, considering two windows W_i and W_j, all the points are contained in a rectangle of size 2Δy × [abs(x_j − x_i) + M]. Thus the maximum error for θ on the single measure is given by:

Δθ_{ij} = atan( 2Δy / (l |j − i| + M) )   (3.3)

The N windows in groups of two generate N(N − 1)/2 2-units that can be used, like in the previous case, to produce a distribution on the parameter space; this time, the upper bound for the error becomes:

Δθ₂ = (2 / (N(N − 1))) Σ_{i=1}^{N} Σ_{j=i+1}^{N} |Δθ_{ij}| = (2 / (N(N − 1))) Σ_{i=1}^{N} Σ_{j=i+1}^{N} atan( 2Δy / (l (j − i) + M) )   (3.4)

Figure 3 shows the behavior of the ratio between the error in the two cases, compared to the number of windows over the line (in the example Δy = M = 1).

Figure 2(a,b,c). Figure 3: Accuracy increase.

Increased number of computational units. In some cases, especially with small images or low-dimensional features, the number of units that can take part in a local
For instance, then, it can be established that a unit has voted a certain number of times for the parameter vector of some specific features. This information is very important to define a competition process between mutually exclusive fea- tures generated by the same data points. This mechanism is described in detail in the next sections. The trade-off for the enhanced accuracy is a complex response of the parameter transform, an increase in computation time and a nonlinear response of the transform. Correlation between different features. When generalized neighborhoods are used on images where several features are present, we have two possible configurations for a K-unit. 0 All the units in the K-unit are located about different points of the same feature. = The units are shared among different features. In the first case, the transform produces an estimate for the parameters of the feature. In the second case, the units are still used to produce the parameter vector of a single feature and they generate an unpredictable value. Figure 4 shows the result obtained by fitting lines to points on two different features. As shown, lines are scattered in all possible directions and positions generating a quasi-random re- sponse in parameter space (due to the deterministic nature of the process the distribution is pseudo-random &amperti]). Usually, due to the nature of the parameter vectors produced in this sec- ond case, these K-units contribute to the general noise back- ground and no accumulation is produced in parameter space . However it is experimentally shown that false confidence peaks can be generated if several features are present in the image. This is a consequence of the highly structured information present in the image which, when jointly used, produces corre- L Plane 2 - 45 vote‘s Figure 4 lated noise and hence false confidence peaks. In particular, some feature can alter the response in parame- ter space of other features. 
For instance, as shown in figure 5, a line in three-space can amplify the response of all the planes con- taining it. In fact, any single point not on the line can cooperate with different combinations of points on the line producing al- ways the same parameterse (those of the plane containing the line and passing through the point). If there are N units on the line and one on the point, ( ) Ni1 K-units will return the parameters corresponding to the plane as a result . This increases combinatorially with the lenght of the line and can produce erroneous hypotheses. binatoriaii computational ti The combinatorial increase in computation time of units per group grows, [given by (2.2)] is counterbalanced by the smoother response of the parameter extraction due to the in- creased number of K-units taking part to the transform. Com- putational complexity is inversely related to the size of the units. In fact, if the size is reduced by a factor M, the number of units in the partition increases by the same factor. This is due to the necessity of covering the entire image. In this case, if N is the previous number of units in the parti- tion, the number of K-units that participate to the transform be- comes: M*N n= ( 1 K (4.1) Nonlinear response If two features of the same kind (two conic section, two planes, etc.) have different “size” (length for curves, area for surfaces, etc.) the local response of the transform is not a linear function of the size. As we have seen above, if there are N units on a fea- ture, the total number nfl of K-units that produce an estimate for the parameter vector of the feature is given by (2.2). If N is large with respect to K, this value can be approximated by a simple power law: NK n(f)sz (4.2) thus allowing for a simple renormalization rule [Califano and Bolle, 19871. 
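The binomial count of Eqn. (2.2) and its power-law approximation of Eqn. (4.2) are easy to compare directly (the values of N and K below are hypothetical):

```python
import math

# Exact number of K-units on a feature with N units, Eqn. (2.2),
# vs. the power-law approximation N**K / K! of Eqn. (4.2).
for N, K in [(50, 2), (100, 3), (400, 3)]:
    exact = math.comb(N, K)
    approx = N ** K / math.factorial(K)
    rel_err = abs(approx - exact) / exact
    # The relative error is roughly K*(K-1)/(2*N), so it shrinks as N >> K
    assert rel_err < K * K / N
```

This is why the approximation is only useful for the renormalization regime the text describes, where each feature carries many units (N large) relative to the group size K.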
It is important to notice that due to the pseudo-random nature of the correlation noise, the renormalization should only be performed after having isolated the confidence peaks from the noise background. Since the latter is not subject to the same law, the signal-to-noise ratio would otherwise be greatly deteriorated.

5. Hypothesis Filtering

The noise sources and the correlation induced by the interaction between different features in the image produce a large number of hypotheses in the parameter space. Thus a "filtering" mechanism has to be introduced in order to distinguish between noisy, erroneous and true hypotheses.

This can be elegantly accomplished by instantiating the parameter spaces as networks where nodes correspond to hypotheses characterized by the appropriate parameters. The links in the network are connections between nodes and computational units (K-units) or between nodes themselves.
That is, a measure of confidence that the corre- sponding hypothesis exists based only on data measurements (see [Sabbah and Bolle, 19861). E is a normalizing factor and kbu a rate parameter. The term D is the decay term that suppresses spurious hypotheses. The lateral inhibition term LI is generated as a weighed sum of the activation levels of competing units. This term insures that semantically incompatible hypotheses inhibit each other so that stronger ones survive while the others are eventually deactivated by the noise threshold term NT in a “winner-take-all” network Feldman and Ballard, 19821. In previous connectionist networks involving parameter transforms [Belle, Kjeldsen and Sabbah, 19871, hypotheses would mutually inhibit each other when their parameters are “close”. This implies the generation of a metric in parameter space so that all hypotheses within a certain radius would be con- nected with inhibitory links. This approach has some limitations, namely: a The parametrization must be chosen to ensure the stability of hypotheses in parameter space biapunov, 19471. This means that a small perturbation of the geometric feature in the image must produce a small perturbation of the parameter vector. This is not always possible. For instance, no matter how little the direction of a line is perturbed, the variation i the position of a point on the line (needed for the complete parametriza- tion) can become arbitrarily large depending on where the per- turbation is applied. . Such parametric interaction does not make use of geometrical or topological knowledge (domain knowledge) about the hypotheses. Thus only hypotheses that have the same repre- sentation can interact and no inter- submitted to ICCV Con.. 1988.parameter space interaction is possible. 
For instance, the hypothesis of a sphere should support one of a circle of the same radius in the image, while the hypothesis of a conic section should be incompatible with one of a line if they have both been supported by the same image units.

• A totally symmetric interaction between hypotheses presents some difficult normalization issues. In fact, since the number of competing hypotheses grows as the dimension of the parameter space increases, a hypothesis with a high level of activation, corresponding to an effectively existing feature in the image, could be killed by a large number of competing ones just above the noise threshold. Also, once domain knowledge becomes an active element of the network structure, different sources of inhibition should be normalized separately.

For the above reasons, a more involved interaction model is introduced, based on the following rules:

• Hypotheses compete when:
  - Domain knowledge establishes their incompatibility, or
  - Their activation is supported by some identical image I-units (windows) and domain knowledge does not preclude their incompatibility. In this case the strength of the interaction is proportional to the percentage of common supporting I-units.

• Hypotheses support each other when domain knowledge establishes their mutual consistency.

• Hypotheses can only inhibit others with a lower or equal level of activation.

It is very important to notice that this kind of approach is consistent with K-unit parameter transforms. In fact, by definition, the K-unit structure allows different hypotheses to be supported by the same I-unit. This implicit knowledge can be used to correctly set up the inhibition network using the above guidelines. When a hypothesis meets some existence criteria, namely a certain ratio between activation and lateral inhibition, the corresponding feature is considered to be present in the image.
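The update of equations (5.1)-(5.2) together with these interaction rules can be sketched as follows. The node records, rate values, and the toy two-hypothesis network are illustrative stand-ins, not the authors' implementation:

```python
# Sketch of the relaxation of (5.1)-(5.2) with the competition rules above:
# inhibition only flows from nodes with activation >= the target's, and
# units falling below the noise threshold NT are deactivated.
NT, D = 0.05, 0.02   # noise threshold and decay term (illustrative values)

def make_node(BU, competitors=(), supporters=()):
    # competitors/supporters: (name, weight) pairs; a competitor's weight
    # would be proportional to the fraction of shared supporting I-units.
    return {"AL": 0.0, "BU": BU, "active": True,
            "competitors": list(competitors), "supporters": list(supporters)}

def relax_step(nodes):
    """One synchronous step of AL(i) = AL(i-1) + BU + TD - LI(i-1) - D."""
    prev = {n: h["AL"] for n, h in nodes.items()}
    for name, h in nodes.items():
        if not h["active"]:
            continue
        # only stronger-or-equal competitors inhibit (asymmetric rule)
        LI = sum(w * prev[c] for c, w in h["competitors"]
                 if nodes[c]["active"] and prev[c] >= prev[name])
        TD = sum(w * prev[s] for s, w in h["supporters"] if nodes[s]["active"])
        h["AL"] = prev[name] + h["BU"] + TD - LI - D
        if h["AL"] < NT:                      # prune noisy hypotheses
            h["active"], h["AL"] = False, 0.0

# Two incompatible line hypotheses sharing half their supporting I-units:
nodes = {"line_A": make_node(0.10, competitors=[("line_B", 0.5)]),
         "line_B": make_node(0.04, competitors=[("line_A", 0.5)])}
for _ in range(6):
    relax_step(nodes)
```

In this toy run the strongly supported hypothesis survives and keeps accumulating activation, while the weaker one falls below NT on the first iteration and drops out of the relaxation.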
If only a partial description of the feature is produced, the unit can start another process in a higher-level parameter space to extract the remaining parameters. A layered structure of parameter spaces is generated in this way. The above mechanism allows for concurrent extraction of different geometric features and their mutual support or inhibition through LI- and TD-links. This model for the interaction has proven very reliable and robust, as shown in the section on curve extraction in three-space.

6. Computational complexity

It is important to reduce the combinatorial explosion of the computation time with an increasing size of the image to a more appealing linear growth. To do that we can introduce the concept of a "search radius" R_c, such that each of the neighborhoods' center points (x, y) in a K-unit is within a distance R_c from the center point of the next one (see figure 6). Since R_c can be thought of as the distance over which we expect two neighborhoods to possibly contain coherent information, we can think of R_c as a radius of coherence. Let N_R be the total number of I-units (local neighborhoods) contained in a circle of coherence of radius R_c. Depending on the metric we select in order to define the circle of coherence, the number n of possible K-units generated by keeping the first unit fixed and choosing the others within R_c of each other is:

    n = N_R^(K-1)    (6.1)

which does not depend on N, the total number of units in the image. Thus the total number of K-units N_tot becomes N * n, due to the N possible choices for the first unit. N_tot is now a linear function of the image partition size N, and thus of the image size itself. However, even if more contained, we still have a combinatorial explosion with respect to K. It is important to notice that such a definition of the coherence radius allows K-units to extend over distances much longer than R_c, depending on the value of K.
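The bookkeeping behind this count can be sketched directly: enumerate ordered chains of unit centers in which each center lies within R_c of the previous one, and observe that the count grows with the number of units N rather than with all pairs. The grid layout, metric, and parameter values below are illustrative assumptions:

```python
# Sketch: counting K-unit candidates under the coherence-radius constraint.
from itertools import product

def count_k_chains(centers, K, Rc):
    """Count ordered chains of K unit centers where each center lies
    within Euclidean distance Rc of the previous one. Immediate repeats
    are excluded; longer revisits are allowed (this is only a sketch)."""
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= Rc * Rc
    def extend(chain):
        if len(chain) == K:
            return 1
        return sum(extend(chain + (c,)) for c in centers
                   if c != chain[-1] and near(chain[-1], c))
    return sum(extend((c,)) for c in centers)

# 2-unit chains on a 6x6 grid of window centers, Rc = 1 grid step:
centers = list(product(range(6), range(6)))
pairs = count_k_chains(centers, K=2, Rc=1.0)
```

With Rc fixed, each unit has a bounded number of neighbors regardless of image size, so the total count scales linearly with the number of units N while remaining exponential in K, matching the discussion above.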
In fact, it is not required that all the units have centers within the coherence radius, but only that a sequence can be established on the K-unit ordering such that every I-unit's center is within R_c of the next one. It immediately follows that, if K is larger than two, chains of units of maximum length K*R_c can be formed, thus allowing for longer-distance correlation extraction. Statistical properties of the distribution generated by K-units within a coherence radius are under further investigation. However, the experimental results are very promising and the reduction of computation time is significant.

7. Experiments

Figures 7b,c show the distributions generated in parameter space by a line parameter transform using the 128 x 128 pixel synthetic image in figure 7a as input. Lines are parametrized using their angle (x-axis) and their signed distance from the origin (y-axis). The parameter transform is based on a least-square-error line fitting algorithm using points contained either in I-units or in 2-units on the image. Here the units are windows of size 8 x 8. Due to the high level of noise in the image and to the small difference in the lines' parameter vectors, the result for I-units does not show a significant accumulation. In the distribution obtained with the multi-neighborhood approach, on the contrary, the parameter vector values corresponding to the three lines in the image show significant accumulation with respect to the noise background.

Figure 8b shows the distribution on a partial parameter space generated by an ellipse parameter transform. The x-axis corresponds to the ratio between the squares of the two axes of the ellipse (a ratio of one corresponds to a circle), while the y-axis corresponds to the rotation angle of the ellipse with respect to the image x-axis. The parameter transform uses a fitting algorithm proposed by Bookstein [1979] based on the scatter matrix of the image data.
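A line parameter transform of this kind can be sketched as a total-least-squares fit to the points pooled from one unit (or a 2-unit pair), reported in the (angle, signed distance) parametrization used above. The code is an illustrative stand-in, not the authors' fitting routine:

```python
# Sketch of a line parameter transform: fit a line to the edge points of
# a window (or window pair) and emit its (theta, rho) parameter vector.
import math

def line_parameters(points):
    """Fit a line to (x, y) points; return (theta, rho) with the line
    x*cos(theta) + y*sin(theta) = rho (theta is the normal direction)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # principal-axis orientation of the scatter, rotated to the normal
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy) + math.pi / 2
    rho = mx * math.cos(theta) + my * math.sin(theta)
    return theta, rho

pts = [(x, 2.0) for x in range(8)]          # points on the line y = 2
theta, rho = line_parameters(pts)
```

Each (theta, rho) vote is then accumulated in the angle/distance parameter space; only coherent multi-window evidence produces the significant peaks described above.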
The image is shown in figure 8a and has the same size as in the previous case. It contains a circle and an ellipse (ratio between the axes of 2). The fitting algorithm extracts in a single step the five-dimensional parameter vector for an ellipse. Since it is impossible to display a distribution on a five-dimensional parameter space, its projection along the two parameter axes is shown. Even when only quantization noise is present, the local approach produces just scattered points in parameter space. Figure 8b shows only the results using 2-units. The same unit size as in the previous case is used. As can be seen, the values corresponding to the two image features (ratio 1, angle 0 for the circle and ratio 4, angle 0 for the ellipse) show a significant accumulation.

Figure 7(a,b,c). Figure 8(a,b).

8. Curve recognition in three-space

We give here a brief outline of a layered and concurrent scheme for the extraction of lines and conic sections in three-space with respect to the use of generalized neighborhoods. This system is part of a general effort of our group [Bolle, Kjeldsen and Sabbah, 1987; Bolle, Califano and Kjeldsen, 1988] for the recognition of objects generated by patches of planes and quadrics of revolution in range-data images. The recognition hierarchy for curve parameter extraction is shown in figure 9. 3D edge detection techniques are used to generate maps of the image discontinuities. Since the dimensionality of the parameter vectors of interest can range from four for 3D lines to eight for 3D ellipses or hyperbolas, we divide our recognition process in two stages. First, using parameter transforms based on scatter-matrix fitting algorithms [Duda and Hart, 1973], we concurrently search for (1) lines and (2) planes that contain intersection curves. Whenever a significant plane is found, we search for conic sections contained in the plane, again using a fitting algorithm based on the scatter matrix [Bookstein, 1979].
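The scatter-matrix idea behind these fits can be illustrated with second-order moments of boundary points: for an ellipse sampled uniformly in its angular parameter, the eigen-structure of the scatter matrix yields the rotation angle and the squared-axis ratio plotted above. This is a simplified stand-in for the Bookstein fit, with illustrative sampling:

```python
# Sketch: ellipse rotation angle and squared-axis ratio from the
# eigenvalues of the boundary points' scatter matrix.
import math

def ellipse_pose(points):
    """Return (rotation_angle, squared_axis_ratio >= 1). Assumes the
    points sample the boundary uniformly in the angular parameter."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)     # major-axis direction
    d = math.hypot(sxx - syy, 2 * sxy)
    lam1, lam2 = (sxx + syy + d) / 2, (sxx + syy - d) / 2   # eigenvalues
    return angle, lam1 / lam2

# Axis-aligned ellipse with semi-axes a = 2, b = 1 (squared ratio 4):
pts = [(2 * math.cos(t), math.sin(t))
       for t in (2 * math.pi * k / 360 for k in range(360))]
angle, ratio = ellipse_pose(pts)
```

For this input the sketch recovers angle 0 and squared-axis ratio 4, the same parameter-space location as the ellipse peak described in the experiment.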
The best experimental compromise between accuracy and computability is to use 2-units for line extraction and 3-units for plane and conic extraction (see figure 9). The whole ensemble of parameter spaces is set up as a network with the structure described in section 5.

Figure 10a shows a noisy, artificially generated 64 x 64 depth map (laser-range-finder images have also been used and produce similar results [Bolle, Califano and Kjeldsen, 1988]). It contains a cylinder with its axis aligned with the y-axis, with a hemisphere of the same radius at one end and a cone at the other. In this case, units have a size of 2 x 2 for line extraction and of 4 x 4 for plane and conic section extraction. Figure 10b shows the situation after two iterations of the relaxation process. At this time 8 out of the 13 initially active line hypotheses and 9 out of the more than 80 plane hypotheses are still active. Some of the planes, having satisfied our existence criteria, have initiated the search for conics. In fact, three conic section hypotheses associated with two planes have just been created.

After six iterations (see figure 10c) only the correct hypotheses for lines and conics have survived, and their lateral inhibition has reached a level of zero, so that they will continue to exist. The four lines are the linear limbs of the object; one circle is the limb of the sphere, while the other is the intersection between the cylinder and the cone. The second-order discontinuity curve, between the cylinder and the sphere, is not found because the low-level edge operators do not detect such discontinuities. A promising operator for second-order discontinuities is under investigation. Figures 10b and 10c display the projection of the active curves at the two stages of recognition. Activation levels are indicated by the gray level (darker = more active). Planes are not displayed since we only use them as an intermediate step for conic extraction.
More planes than actually exist in the image (2) have been found after our relaxation process. This is due to the interaction between lines and planes as described in section 4. This contribution can be eliminated by changing the lateral inhibition model for plane hypotheses to include links with the line parameter space. A new model is under investigation.

Figure 10(a,b,c): after iteration 1, 36 active curves in PLANAR-CURVES and 0 active objects in 3D-CONICS; after iteration 6, only the correct hypotheses remain.

9. Conclusions

A new approach to parameter extraction in images has been proposed, where using correlated evidence from distant parts of the image allows for complex parametric feature recognition. Up to eight-dimensional features have been experimentally reconstructed from synthetic and laser-range-finder range-data images.

References

[Bolle, Kjeldsen and Califano, 1988] Ruud M. Bolle, R. Kjeldsen and A. Califano, "Evidence Integration of Curves and Surfaces for 3D Object Recognition," IBM TR RC 13592, 1988.

[Bolle, Kjeldsen and Sabbah, 1987] R.M. Bolle, R. Kjeldsen and D. Sabbah, "Primitive shape extraction from range data," in Proc. IEEE Workshop on Comp. Vision, Nov.-Dec. 1987, Miami Beach, FL, pp. 324-326; also IBM Tech. Rep. RC 12392, AI Systems Group, IBM T.J. Watson Res. Center, July 1987.

[Bookstein, 1979] F.L. Bookstein, "Fitting conic sections to scattered data," Computer Graphics and Image Processing, Vol. 9, No. 1, Jan. 1979, pp. 56-71.

[Califano and Bolle, 1987] Andrea Califano and Ruud M. Bolle, "Localized noise propagation effects in parameter transforms," in Proc. SPIE Conf. Intell. Robots and Comp. Vision, Nov. 1987, Cambridge, MA.

[Duda and Hart, 1973] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis, New York, John Wiley & Sons, 1973.

[Feldman and Ballard, 1982] J.A. Feldman and D.H. Ballard, "Connectionist models and their properties," Cognitive Science, Vol. 6, pp. 205-254.

[Kimme, Ballard and Slansky, 1975] C. Kimme, D. Ballard and J.
Slansky, "Finding circles by an array of accumulators," Comm. of the ACM, Vol. 18, No. 2, Feb. 1975, pp. 120-122.

[Lamperti] J. Lamperti, Stochastic Processes, Academic Press, New York.

[Liapunov, 1947] A.M. Liapounov, "Problème général de la stabilité du mouvement," Ann. of Math. Studies, No. 17, Princeton Univ. Press, 1947, Princeton, NJ.

[Sabbah, 1985] D. Sabbah, "Computing with connections in visual recognition of Origami objects," Cognitive Science, Vol. 9, No. 1, Jan.-March 1985, pp. 25-50.

[Sabbah and Bolle, 1986] D. Sabbah and Ruud M. Bolle, "Extraction of surface parameters from depth maps viewing planes and quadrics of revolution," in Proc. SPIE Conf. Intell. Robots and Comp. Vision, Oct. 1986, Cambridge, MA, pp. 222-232.

[Shapiro, 1978a] S.D. Shapiro, "Properties of the transform for the detection of curves in noisy pictures," Comp. Graphics and Image Processing, Vol. 9, 1978, pp. 219-236.

[Shapiro, 1978b] S.D. Shapiro, "Feature space transforms for curve detection," Pattern Recognition, Vol. 10, 1978, pp. 129-143.

[Slansky, 1978] J. Slansky, "On the Hough transform for curve detection," IEEE Trans. on Comp., Vol. 27, No. 10, Oct. 1978, pp. 923-926.
Being Suspicious: Critiquing Problem Specifications

Stephen Fickas and P. Nagarajan
Computer Science Department
University of Oregon
Eugene, OR 97403

Abstract

One should look closely at problem specifications before attempting solutions: we may find that the specifier has only a vague or even erroneous notion of what is required, that the solution of a more general or more specific problem may be of more use, or simply that the problem as given is misstated. Using software development as an example, we present a knowledge-based system for critiquing one form of problem specification, that of a formal software specification.

Suppose one were given a problem P to solve. Suppose further that it was known that generating a solution S for P will require a large effort. The question we ask in this paper is a pragmatic one: should we take P on blind faith and forge ahead, or should we scrutinize P carefully before committing resources to its solution?

We will argue for the latter approach. More specifically, we propose that three types of specification critiques can account for a useful and interesting set of specification errors:

Unsupported policy: a domain goal or policy that we wish the system to obey is not supported by any specification component.

Obstructed policy: a domain goal or policy that we wish the system to obey is actively obstructed by a specification component.

Superfluous component: a specification component can be seen to support no domain goal or policy of importance.

To investigate this hypothesis, we have built a computer-based system that, given a problem specification in a specific domain, will view the specification as suspect until it can be rationalized, using the three criteria above, against a set of domain policies. In this paper we describe this system, and report on our efforts to evaluate it on a standard specification problem.
2 Towards a specification critic

The characterization of P, for our project, is that of a formal software specification. In earlier work, we also studied formal and automated means of mapping specifications to implementations, i.e., the solution space S [Fickas, 1985]. While our project is concerned with software specification techniques, we suggest that the approach we propose here might find application in any domain where problem specification is difficult (because of complexity, ambiguity, ignorance) and solution techniques are costly.

(This work is supported under National Science Foundation grant DCR-8603893.)

Our interest in a specification critic (henceforth, we will use problem specification and specification synonymously) is one part of a larger project whose goal is to provide assistance to a software analyst in producing a formal specification. This project, called Kate [Fickas, 1987], rests on the following three components:

1. A model of the domain of interest. This includes the common objects, operations, and constraints of the domain, as well as information on how they meet the types of goals or policies one encounters in the domain.

2. A specification construction component that controls the design of the emerging specification.

3. A critic that attempts to poke holes in the client's problem specification.

Our focus in this paper is on the first and third components, the domain model and the specification critic (see [Swartout, 1983] and [Yue, 1988] for a complementary, domain-independent approach to specification analysis).

2.1 Basic critic components

The critic consists of a model part, an example part, and correspondence links between components in model and example. The use of example is as the representation of a specification under review. Our problem description language, used by both model and example, can be viewed as equivalent to a Petri net in its support of places, tokens, transitions, and non-deterministic control.
However, it also extends the basic Petri-net model in the following ways: it supports token objects, token types, and token abstraction through a class hierarchy similar to Greenspan's RML language [Greenspan, 1984]; it introduces the notion of a place type with capacity [Wilbur-Ham, 1985].

(We will use the singular form of client as a useful simplification in this paper. In reality, there are often many "clients" to satisfy.)

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Model cases are linked to policies. Each such link can take on one of two values: positive (the case supports the policy) or negative (the case obstructs the policy). Figure 1 shows a small portion of the resource management model, with policies denoted by square boxes, domain cases denoted by rounded boxes, and policy-to-case links as highlighted arcs: negative arcs end with a black circle, positive arcs are drawn normally. Arcs between domain cases are taxonomic. Note that figure 1 presents a static view of model; when used in a critique, correspondence links would exist between cases and example components. Further, each policy would be marked with a value important, unimportant, or unknown. Thus, the final step the user must carry out to run the critic is to give some or all of the policies a value; unmarked policies are given a value unknown, which in turn is conservatively viewed as important by the system. By choosing various policy values, we can "take the view" of various system users. For instance, if we take the (selfish) view of a user of the library, we might mark a large selection set and unlimited borrowing as important, and all other policies as unimportant. We can later change policy values to reflect the good of the whole (i.e., take the library administration's view by marking privacy and minimization of cost as important) and rerun the critic.
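These policy marks and link values are what the critic's checks operate on. A minimal sketch of the three critiques listed in the abstract (unsupported policy, obstructed policy, superfluous component), with illustrative policy and case names that are not from the Kate system itself:

```python
# Sketch of the three critiques. Policy marks, link signs, and the
# case-match set are illustrative stand-ins for the critic's data.

def critique(policies, links, matched_cases):
    """policies: {name: 'important' | 'unimportant' | 'unknown'}
    links: {(policy, case): 'positive' | 'negative'}
    matched_cases: cases with a matching sub-net in example."""
    findings = []
    for (policy, case), sign in links.items():
        mark = policies.get(policy, "unknown")
        # unknown is conservatively treated as important
        effective = "important" if mark == "unknown" else mark
        matched = case in matched_cases
        if effective == "important" and sign == "positive" and not matched:
            findings.append(("non-support", policy, case))
        elif effective == "important" and sign == "negative" and matched:
            findings.append(("obstruction", policy, case))
        elif mark == "unimportant" and sign == "positive" and matched:
            findings.append(("superfluous", policy, case))
    return findings

policies = {"user-privacy": "important", "force-turnaround": "unimportant"}
links = {("user-privacy", "query-borrow-record"): "negative",
         ("force-turnaround", "restrict-borrowing"): "positive"}
out = critique(policies, links,
               matched_cases={"query-borrow-record", "restrict-borrowing"})
```

Rerunning with different marks corresponds to "taking the view" of a different system user, exactly as described above.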
2.2 Critic execution

There are three types of problems that are of interest to our critic:

Non-support: A policy marked as important (or unknown) is linked positively to a model case. A case match is not found in example.

Obstruction: A policy marked as important (or unknown) is linked negatively to a model case. A case match is found in example.

Superfluousness: A policy marked as unimportant is linked positively to a model case. A corresponding case match is found in example.

The third critique is based on the notion that components added to a specification in support of an unimportant policy will tend to add unnecessarily both to the complexity of the system and to the cost of its operation and maintenance.

We note that it is possible, and in fact typical, for the same case to have both positive and negative links to some set of policies. That is, the case may positively support policy P1 and negatively affect policy P2. Taking an example from figure 1, P1 is give users a good stock on hand, P2 is give users a useful working set, and the case is force turnaround (or restrict borrowing). While we would like to think that P1 and P2 are never both simultaneously marked as important, it is not untypical for a client to describe a conflicting set of goals or policies. In fact, most borrowing systems can be viewed almost solely as compromises between conflicting concerns. One key process in specification design is then coming to grips with conflicting policies through various forms of trade-off and compromise. While some of our other work on Kate has begun to explore this area [Fickas, 1987], this version of the critic does not attempt to temper its criticism by looking beyond single links. On the other hand, it does allow policy values to be changed and a critique to be rerun.

There are three forms of output from the critic.
The first is a parameterized version of canned text, e.g., "policy P1 is marked as important and is not supported in example", "policy P2 is marked as important and is obstructed by component C in example".

Second, and more importantly, we have begun to explore the use of simulation to back up a critique. Our long-term goal is to be able to provide the rich and seemingly inexhaustible example generation (in the form of verbal simulations) seen in protocols of human analysts attempting to back up a point [Fickas et al., 1987]. As a start, we have built a simulation component that 1) allows each of the critic's cases to include one or more scenarios for demonstrating that case dynamically, and 2) uses the scenarios to animate the corresponding portions of example, in the form of transitions firing and tokens being moved around the net. Each scenario includes an initial marking of the relevant sub-net in example, and constraints on the non-deterministic control to force exemplary paths to be taken. The initial marking data may be either a) abstract -- if the case pattern uses abstract objects, we can use instances of the same abstract objects in setting up an example run -- or b) concrete -- we may decide to use a refinement of the case objects, e.g., Mary Smith checking out The Life of a Gene and keeping it 3 years beyond its due date. In this way, the scenario set for any case C may contain a mixture of abstract and concrete examples of C. The system runs a case's scenarios, one by one and in ordered fashion, under user direction.

Figure 2 shows a snapshot of a run-on-a-depository scenario being simulated using its matching example components (shown in parentheses). The corresponding case, that of a resource underflow condition, matches on any unconstrained check-out action (unpredicated transitions are represented by vertical lines) where human borrowers and physical resources are involved.
As an initial marking for this scenario, we set up a small number of resources and a larger number of resource users. The simulation will be abstract in the sense that tokens represent any physical resource (any type of library resource in this case) and any human borrower (staff, faculty, students in this case). It is concrete in that each token represents a single physical resource or single human user (as opposed to, say, an information resource, a mechanical resource-consumer, or aggregates of each). The other key piece of information necessary to make this scenario work is a constraint on non-deterministic control, one that will run the check-out action continuously until no resources are left (but demand still exists). In other words, to give the worst-case view we will ignore other processes that may exist to replenish the resource stock (e.g., buy more, force check-in) or lessen demand (e.g., remove borrowers) as long as the check-out transition is enabled.

Figure 1: Portion of model.

The third and final type of output the critic can supply is in a more positive form. In particular, the user can ask to see all example components supporting a particular policy, or, given a specific component in example, he or she can ask to see the cases that have been matched using the component, and the policies those cases support. When integrated with a specification editor, this has shown to be a useful tool in determining the ramifications of specification changes in terms of the overall goals of the system.

3 Evaluation

We have run the critic on the problem description in the Appendix from various points of view, e.g., with policy values reflecting selfish users, with policy values reflecting the good of the whole. In this section, we will discuss a critique of our best reconstruction of the implicit policies of the problem, after looking at the origins of the library example and talking to the authors of the version used in [FIWSSD, 1987]. We set the policies to reflect a small academic library, possibly a department library run by a secretary. While we feel confident in this interpretation after talking with the various authors of the text, informal descriptions such as this are clearly a problem for any translator, human or machine, in terms of ambiguity and missing policy information. With this in mind, we give several representative findings of our critic:

1. The query actions in L5 and L6 are found to be obstructive; they may be used to give out user-confidential information. In general, any action that gives out information about a user's borrowing record, whether now or in the past, and whether to the same user or to someone else, is part of a case that is linked negatively to the policy of maintaining user privacy; this policy is marked as important here.

It is worth discussing one supporting scenario for this case in more detail, that of a devious-borrower. It consists of the following actions (transitions): borrower B checks out resource R; borrower C gains access to B's identity; C queries the system, as B, to find what resources B has checked out; C learns that B has checked out R. The point to note is the need to represent behavior for both the system and its environment. In this case, the scenario uses existing components of example (check out, query), but also supplies environment components of its own (illicit gain of one borrower's id by another) to run the simulation. In summary, the scenario extends example with new components (objects, places, transitions) to make a point.

2.
The constraint that the check-out action must be carried out by a staff person (L8) is obstructive; it matches a monitored-withdrawal case, which in turn is linked negatively to a (sub)policy of minimizing circulation staff.

3. Certain actions are not supported. In particular, the actions (and associated cases) of adding and removing a book (see L3) can be viewed in finer grain, e.g., remove-lost, remove-stolen, remove-damaged, replace-lost, replace-stolen, replace-damaged. These types of actions are captured in cases linked positively to various sub-policies of accounting for human foibles. Since the general human-foibles policy is viewed as important, these cases look for a match. None is found.

Also, the division of users into groups (staff and ordinary borrowers in L7) is without corresponding actions to add and remove members from a group. There are corresponding cases in model that are linked positively to the policy (and sub-policies) of recognizing the human dynamics of group membership, and these cases expect to match if that policy is important. It is, and no matches are found.

While no superfluous components were found on this run, it is not hard to change policy values to generate such a critique. For instance, by marking the (sub)policy of accounting for human forgetfulness as unimportant, the query in L5 becomes not only obstructive (see 1 above), but superfluous as well, a bad combination in general.

To further evaluate these results, we asked an experienced library analyst to critique the text description of the Appendix out loud, and recorded the session in both audio and video form. We will summarize the four major points to come from this work; our protocol analysis and results are described more fully in [Fickas, 1987] and [Fickas et al., 1987].
Initial versions used NIIU [Kaczmarek et aE, 19861 as the basis for implementa- tion; more recent versions are built on the SIMKIT pack- age of KEE. The model is used to represent a set of “interesting” problem specification cases to consider for a particular domain. The domain we have chosen initially is that of resource management systems. Our cases, to date, are hand- coded transcriptions taken from 1) written texts and arti- cles on analyzing problems in the resource management domain, and 2) protocols of human analysts, familiar with the domain, constructing and critiquing specifications. A case consists of the following fields: . A description of a particular pattern to look for in example. As discussed above, the representation used in example is that of an augmented state-transition net; the pattern here takes the form of a sub-net. . A link to a policy. This is used to index the case to higher level concerns within the domain. This will be discussed shortly. . An ordered set of simulation scenarios. These are used to demonstrate various aspects of the case. Each sce- nario contains operational instructions for 1) setting up initial data, 2) constraining non-deterministic con- trol to exemplary paths, and 3) running the sub-net in example linked to the case. . Canned text description. As the name implies. The correspondence links tie model cases to actual con- structs within the specification, i.e., they bind components of a case’s sub-net to components in example. As an illus- tration, figure 2 depicts a sub-net pattern in a resource underflow case. Correspondence links have been built to bind the sub-net components to a particular specification of a library under critique; the specific library/example components that are bound are shown in parentheses. To use the critic, we must first translate the specifica- tion to be critiqued into example format, i.e., into our aug- mented Petri-net representation. 
The specification/example we will discuss in this paper is that of an automated library system, a standard in discussions of specification research. The particular incarnation we will use comes from the problem set handed out prior to the Fourth International Workshop on Software Specification and Design [FIWSSD, 1987]; it is reproduced, with line numbers for reference, in the Appendix.

After translation, correspondence links must be forged between case components in model and specification components in example. The system supplies some matching help here by looking in example for token, place, transition, and predicate names that are commonly used in resource management domains, and hence in model cases. Components unrecognized by the system must be manually linked by the user.

Finally, we must deal with the overall goals of the client. In particular, we have come to believe that there is no such thing as an inherently good or bad specification, only one that does not conform to the resource limitations and users' goals in force. Thus, the goodness or badness of a component in example can only be judged relative to the user's goals and the resources available. We will use the term policy to denote both organizational goals and resource constraints. For the latter, we will include resource limits on both the development of an implementation and on the operational environment, e.g., "minimize operational staffing costs". Based on discussions with domain experts and a study of the domain literature, seven broad policy classes were defined for resource borrowing systems:

1. Allow users to have a large selection to choose from.
2. Allow users to gain access to a useful working set and keep it as long as necessary.
3. Maintain the privacy of users.
4. Recognize the human dynamics of group (patron, staff, administration) membership.
5. Account for human foibles, e.g., forgetting, losing items, stealing.
6. Account for development resource limitations, e.g., money, staff, and time available to develop the system.
7. Account for production environment limitations, e.g., money, staff, and time available to run and maintain the delivered system.

Each of these seven can be further refined, e.g., maintain privacy of users' borrowing record, maintain privacy of users' queries, etc. We can also further specify each policy in terms of more specific domains, e.g., maintain an adequate stock of books on the shelves, maintain an adequate stock of video tapes available for rental.

We allow each policy to be in one of three states: important - the client has explicitly noted that the policy should be enforced; unimportant - the client has explicitly noted that the policy should be ignored; unknown - no explicit statement has been made about the policy. A value given to policy P is inherited by all refinements of P. Thus, marking the policy of accounting for human foibles as important will in turn mark all refinements of that policy -- forgetting books, stealing video tapes -- as important. Conversely, we can mark policies in a finer grain if necessary: prevention of stolen items is unimportant, but reminding forgetful users of borrowed items is important.

2We make no claim that this is either a necessary or sufficient list of policies, but simply one that has allowed us to handle the set of resource management problems that we have studied to date. Also, it is clear that certain policies in this list extend beyond this domain.

22 Automated Reasoning

[Figure 2. Resource underflow case (with bindings) -- sub-net nodes include physical resources (books), checked out (on loan), and borrowers (patrons).]

In [Fickas et al., 1987] we describe our protocol analysis and results more fully.

1. We found general support for our representation of policies. The analyst spent the first part of the session establishing the "goals" of the library. These goals were all covered by our current set of policies.

2.
The analyst's critique registered well with the critic's. With few exceptions (see below), the same type of case was given by analyst and critic, if not in the same style.

3. The major discrepancy between analyst and critic was in the analyst's ability to deftly juggle competing concerns. In particular, she was able to weigh the importance of various policies, and order them when considering a particular component in the description, e.g., given that providing borrowers a useful working set must supersede concerns with keeping adequate stock on the shelves, the query in L5 along with the constraint in L9 could be viewed as a livable compromise.

4. The analyst's ability to generate, at will, both abstract and concrete examples to back up a point was impressive. The critic's contrasting lack of sophistication in this area is discussed in the next section.

In summary, the comparison of the critic's analysis with that of the human analyst points to the representation of policies in a specification critic as a key component. Our findings also point to the need for a more refined view of policies, their interaction, and their connection to domain cases, and a more powerful means of backing a case with a range of scenarios. The next section discusses these issues and others raised by our experience with this critic.

Below we list the issues that we feel must be of immediate concern for the next version of the critic.

Policy interaction and utility. Wilensky describes different types of goal interaction, and plans for handling each of the types [Wilensky, 1980]. It seems clear that we will need something similar for our policies, e.g., "user privacy always overrides timely access to resources". Along the same line, a notion of policy utility beyond the simple values important and unimportant will be necessary.
This has been brought home, in particular, as we have begun to look at compromise strategies that allow two or more conflicting policies to each be partially met simultaneously [Fickas, 1987]. We also note the correspondence of features in [Chapman, 1982] and goals in [Mostow and Voigt, 1987] to our work on policies in general.

Simulation. We have shown how simulation can be an effective critiquing tool. However, its full potential would seem to rest on better models of explanation in general, e.g., when should we use abstract or concrete data, how much of the context must be provided, how far do we have to follow the results. As an example, we have shown a case of "a run on a depository". Is this enough to convince a client that a potential problem exists in his or her specification? In particular, we do not show a real consequence of a depository running out of resources, i.e., loss of confidence by borrowers turned away from the depository that it is a reliable source. In other words, we expect the client to infer this type of knowledge, and to decide if it is worth worrying about. Whether this is warranted or not is clearly dependent on the sophistication of the client in the domain.

Along the same lines, what is the right mix of scenarios to attach to a case? For example, in the scenario in figure 2, we jump right to the unlikely event of everyone wanting resources at once. Examples such as this are sometimes easy to dismiss as too extreme (however, see "bank runs", the formation of the FDIC to prevent them, etc.). A more convincing scenario might show a gradual depletion of resources under average (or even favorable) borrowing conditions. In the end, we might like a progression of best case to worst case scenarios. We can simulate this crudely in our current critic by attaching an ordered set of scenarios to a case. In operation, we expect a best case criticism to be presented first.
If the user decides to address the criticism by editing the specification, the critic moves on to a slightly worse case. This cycle of system critiques and user fixes continues until either the system has thrown its toughest critiques at the specification (e.g., the extreme scenario in figure 2 is reached) and the user has addressed them all, or the user has decided to live with some scenarios not being handled (because of limitations in space, time, money or any number of other reasons).

We note the similarity of the above argument style with that of Rissland's work in the area of case-based legal reasoning [Rissland, 1986]. We also note that in her system a single case or scenario is represented in addition to one or more dimensions along which the case can be stretched (for instance, "resource supply and demand"). A separate example generator can instantiate the base case by moving along one or more dimensions. In general, this dimension/generator approach is clearly more powerful than the explicit scenario list we now employ, and is one that we believe will move us closer to that seen in our human analyst.

Fickas and Nagarajan 23

An interactive critic. The goal here is an interactive editor for developing specifications, i.e., a system that provides tools for both construction and criticism in an interleaved fashion. This is in much the same spirit as that of deficiency-driven algorithm design in Steier and Kant's DESIGNER system [Steier and Kant, 1985]. Our current system supports both a specification/example editor and the critic we have discussed in this paper. Thus, a user can edit a specification, run the critic, respond to criticism through further editing changes, etc. The problem is a lack of automation in matching, and rematching after changes. We are exploring two approaches to the matching and rematching problem. First, we have given the editor a component catalog for the resource management domain.
These components are ones found in our cases. If the user selects components from the catalog in the construction of his or her specification, we can automatically match them against cases (actually, we just follow component-to-case links, avoiding matching altogether). If the user supplies non-catalog components, then we must fall back on common names, and finally, user intervention. Interleaving of editing and critiquing brings up the rematching problem: given a local editing change, we would like to avoid rematching the entire specification to the entire case-base. Our approach has been to isolate changes to a small subset of specification components, rematching these while retaining past matches outside of the local context. While we have had some preliminary success in localizing changes in the specification language we are using [Fickas, 1987], the problem remains an open and difficult one.

References

[Chapman, 1982] Chapman, D., A program testing assistant, Communications of the ACM, September, 1982

[Fickas, 1985] Fickas, S., Automating the Transformational Development of Software, IEEE Transactions on Software Engineering, Vol. 11, No. 11, November, 1985

[Fickas, 1987] Fickas, S., Automating the Software Specification Process, Technical Report 87-05, December, 1987, Computer Science Department, University of Oregon, Eugene, OR 97403

[Fickas et al., 1987] Fickas, S., Collins, S., Olivier, S., Problem Acquisition in Software Analysis: A Preliminary Study, Technical Report 87-04, August, 1987, Computer Science Department, University of Oregon, Eugene, OR 97403

[FIWSSD, 1987] Fourth International Workshop on Software Specification and Design, IEEE Computer Society, Order Number 769, Monterey, 1987

[Greenspan, 1984] Greenspan, S., Requirements Modeling: A Knowledge Representation Approach to Software Requirements Definition, Ph.D.
Thesis, Computer Science Dept., Toronto, 1984

[Kaczmarek et al., 1986] Kaczmarek, T., Bates, R., Robins, G., Recent Developments in NIKL, In Proceedings of AAAI-86, Philadelphia, 1986

[Mostow and Voigt, 1987] Mostow, J., Voigt, K., Explicit integration of multiple goals in heuristic algorithm design, In Proceedings of IJCAI-87, Milan, 1987

[Rissland, 1986] Rissland, E., Dimension-based analysis of Hypotheticals from Supreme Court Oral Argument, COINS, University of Massachusetts

[Steier and Kant, 1985] Steier, D., Kant, E., The Roles of Execution and Analysis in Algorithm Design, IEEE Transactions on Software Engineering, Vol. 11, No. 11, Nov. 1985

[Swartout, 1983] Swartout, W., The GIST Behavior Explainer, In Proceedings of AAAI-83, Washington, DC, 1983

[Wilbur-Ham, 1985] Wilbur-Ham, M., Numerical Petri Nets - A Guide, Report 7791, Telecom Research Laboratories, 1985, 770 Blackburn Road, Clayton, Victoria, Australia 3168

[Wilensky, 1980] Wilensky, R., Meta-planning, In Proceedings of AAAI-80, Stanford, 1980

[Yue, 1988] Yue, K., Directionality and stability in system behaviors, In Proceedings of the 4th Conference on AI Applications, San Diego, 1988

Appendix

L1. Consider a small library database with the following transactions:
L2. 1. Check out a copy of a book / Return a copy of a book;
L3. 2. Add a copy of a book to / Remove a copy of a book from the library;
L4. 3. Get a list of books by a particular author or in a particular subject area;
L5. 4. Find out the list of books currently checked out by a particular borrower;
L6. 5. Find out what borrower last checked out a particular copy of a book.
L7. There are two types of users: staff and ordinary borrowers.
L8. Transactions 1, 2, 4 and 5 are restricted to staff users,
L9. except that ordinary borrowers can perform transaction 4 to find out the list of books currently borrowed by themselves.
L10. The data base must also satisfy the following constraints:
L11. - All copies in the library must be available for checkout or be checked out.
L12. - No copy of the book may be both available and checked out at the same time.
L13. - A borrower may not have more than a predefined number of books checked out at one time.
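The three constraints above read as invariants on the database state. A minimal sketch of checking them over a toy state follows; the representation and names are our own, not part of the problem statement.

```python
def check_invariants(copies, loans, max_books):
    """copies: {copy_id: status}, status in {'available', 'checked_out'};
       loans: {borrower: set of copy_ids currently checked out};
       max_books: the predefined per-borrower checkout limit."""
    errors = []
    # Every copy must be available for checkout or be checked out (L11);
    # a single status field also enforces "never both at once" (L12).
    for cid, status in copies.items():
        if status not in ("available", "checked_out"):
            errors.append(f"{cid}: must be either available or checked out")
    # A borrower may not exceed the predefined checkout limit (L13).
    for who, held in loans.items():
        if len(held) > max_books:
            errors.append(f"{who}: holds {len(held)} books, limit is {max_books}")
    return errors

# A consistent state passes; a copy in a bogus status is flagged.
assert check_invariants({"c1": "available", "c2": "checked_out"},
                        {"pat": {"c2"}}, max_books=3) == []
assert check_invariants({"c1": "lost"}, {"pat": set()}, max_books=3) != []
```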
Performance of a System to Locate Address Blocks on Mail Pieces¹

Ching-Huei Wang, Paul W. Palumbo and Sargur N. Srihari
Department of Computer Science
State University of New York at Buffalo
Buffalo, NY 14260

Abstract

The objective of an Address Block Location System (ABLS) is to determine the position and orientation of a destination address block in a mail piece image of either a letter, magazine, or parcel. The corresponding sub-image can then be presented to either a human or machine reader (OCR) to direct the mail piece to the appropriate sort category based on the ZIP code. ABLS is capable of dealing with a wide range of environments from those having a high degree of global spatial structure to those with no structure. The system consists of several specialized tools and a control structure so that the tools are opportunistically invoked and coordinated. Its performance on a training and testing image database of difficult cases is described.

1. Introduction

Machines for automatically sorting letter mail have existed for several years. Present machines can only correctly process about 55% of the letter mail presented to them [HKP84, USP84]. The reasons for this relatively poor performance have been determined to be in the areas of address block location as well as locating and reading the ZIP code within the address block. Either a standard address location is assumed or a few features such as window reflectivity and high edge density are used to locate a block of data likely to contain the destination address. Therefore, current automatic letter mail sorting machines can be easily confused by extraneous data on the face of an envelope, which is frequently present in third class mail. Flats (e.g., magazines) and irregular parcels and pieces (IPP) are not presently sorted automatically.

In a previous paper [WaS86], we introduced the general problem.
This paper describes a refinement of the architecture of ABLS and performance with an image database consisting of many difficult cases. Described is a solution methodology (Section 2), the software and hardware implementation (Section 3), a description of experimental results and analysis (Section 4), and the main research contributions of this system (Section 5).

¹This work was supported by the Office of Advanced Technology of the USPS under Task Order 104230-85-M3349.

2. Solution Methodology

ABLS inputs several types of images (photopic, RGB, infra-red, and color under ultraviolet illumination) of a single mail piece, and produces one or more candidate blocks, their orientation(s) and confidence values associated with being the destination address block (DAB).

2.1. Components of ABLS

ABLS is composed of six major components: a mail statistical database, a rule-based inference engine, a control mechanism, control data, a blackboard, and a tool box.

The mail statistical database [GTR86, SHP86] contains the statistics of the geometric features of all meaningful information blocks on many samples of mail pieces. This includes the probability that a destination address block and a return address block are in a particular location in a 3 by 3 grid on the image, the average and standard deviation of the aspect ratio, number of text lines, and the address block length of a typical hand and machine generated destination address block.

The rule-based inference engine performs forward reasoning on various rule modules which are stored with each tool. The inference engine acts as the interpreter of all the rules.

The control mechanism is responsible for checking the termination condition, selecting a tool, combining new evidence, and updating the context. Tool selection is based on the benefit/cost estimation of each tool. The control mechanism estimates the utility of each tool in the current context using the tool's utility module.
The tool with the maximum utility is chosen as the tool to be applied next and its utility is reset to zero before it is applied.

The control data provides information for the control mechanism about the interdependency between the tools, and the criteria for accepting a block as a destination address candidate. The dependency graph (Figure 1) is a directed graph to specify the temporal sequence of applying tools as well as to minimize continuously re-estimating the utility of each tool. A node in a dependency graph is triggered if one of the arcs entering the node is activated. Each node in the dependency graph represents whether the utility of the associated tool should be re-evaluated during tool selection. The control mechanism will not invoke the utility rule module of a tool unless its associated node in the dependency graph is triggered.

Wang, Palumbo and Srihari 837

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

[Figure 1. Dependency graph for specifying the temporal sequence of applying specialized tools.]

The design of ABLS consists of many specialized tools that are appropriately deployed. An effective method for integrating knowledge from various sources is the blackboard model of problem solving [EHL80, Nii86]. The blackboard model divides a complex problem into loosely coupled subtasks, and each subtask is attacked by a specialized tool. The ultimate goal is to integrate knowledge from various sources to achieve a common goal. The blackboard contains the current context, the confidence values of labeling hypotheses, and the geometric attributes of blocks extracted from low-level image processing.

The tool box contains a collection of tools, many of which are image analysis related.
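The selection loop described above — re-estimate utility only for tools whose dependency-graph node is triggered, apply the maximum-utility tool, and reset its utility to zero — might be sketched as follows. The tool names, utility values, and estimator are illustrative, not the system's actual parameters.

```python
def select_tool(utilities, triggered, estimate):
    """utilities: current utility value per tool name;
    triggered: tools whose node in the dependency graph was triggered;
    estimate: callable standing in for a tool's utility rule module."""
    for tool in triggered:
        utilities[tool] = estimate(tool)      # re-evaluate only triggered nodes
    best = max(utilities, key=utilities.get)  # maximum-utility tool is applied next
    utilities[best] = 0.0                     # reset to zero before it is applied
    return best

# Hypothetical context: new evidence is pending, so EVHP's node is triggered
# and its utility rule module gives it a high score.
utils = {"MSEG": 0.4, "HSEG": 0.1, "EVHP": 0.0}
chosen = select_tool(utils, triggered={"EVHP"}, estimate=lambda t: 0.9)
assert chosen == "EVHP" and utils["EVHP"] == 0.0  # chosen, then reset
assert utils["MSEG"] == 0.4                       # untriggered utilities untouched
```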
The tools in ABLS can be divided into three categories based on their purpose: (i) destination address candidate generation tools for generating destination address candidates from the mail piece images, (ii) destination address candidate discrimination tools for distinguishing the actual destination address from the destination address candidates, and (iii) control mechanism tools for unifying the block features between overlapping blocks, combining evidence and initiating new hypotheses, and determining when to halt the processing. Table I lists all the tools in ABLS. Each tool contains rules for estimating the benefit and cost of using it, selecting parameters, and evaluating and interpreting results.

2.2. Control Strategy

The control strategy of ABLS is an integration of both bottom-up and top-down processing. For the bottom-up processing, initially, one of the thresholding tools is chosen and applied to the entire mail piece image. The thresholded image is then segmented, bottom-up, into blocks using a segmenter tool. The physical attributes of a segmented block are then interpreted to generate evidence either supporting or refuting a block as being the destination address. In addition to using bottom-up processing, ABLS occasionally will use top-down processing to search for the address block on a particular portion of the mail piece image.

The global orientation of a mail piece represents the rotation needed (in 90 degree increments) to correctly orient the assumed destination address. Initially, mail piece orientation is unknown to ABLS but the location of the postage or meter mark may be able to help determine the correct global orientation of a mail piece. If the correct orientation cannot be determined prior to the interpretation of segmented blocks, ABLS will interpret each segmented block in all four global orientations with the correct global orientation assumed to be the one which obtains the maximum degree of support.

2.3.
Computation of Utility Value

The strategy of tool selection in ABLS is determined using the dependency graph and the utility value of each tool. The dependency graph specifies the interdependency between tools and dictates which tools are eligible for estimating the utility value. After the utility value estimation, ABLS always chooses the tool with the maximum utility value as the tool to be used next. Through the experimental results on 174 complex training images, the utility value of a tool is computed from the following five measures:

(1) the efficiency of the tool, i.e., the ratio of the number of times the tool is used to the number of times it generates evidence to support the real destination address block as the destination address.
(2) the effectiveness of the tool, i.e., the ratio of the number of pieces of evidence generated by this tool to the number of pieces of evidence that support the real destination address block as the destination address.
(3) the average CPU processing time of the tool.
(4) the percentage of mail population that the tool is designed to deal with. For example, the MSEG tool is designed to segment machine-generated address blocks, while the HSEG tool is for segmenting hand-generated address blocks.
(5) the special situation adjustment, i.e., those occasions which require the immediate attention of the system to invoke a particular tool. For example, whenever there are new pieces of evidence generated, the evidence combination tool (EVHP) will be invoked to combine evidence.

2.4. Confidence Values and Evidence Combination

ABLS can utilize one of several tools at any different time while processing an input mail piece image. When several tools are used, it is necessary to combine evidence gathered from the application of each tool.
838 Vision

[TABLE I. Functional Descriptions of Knowledge Sources -- tools grouped into destination address candidate generation, destination address candidate discrimination, and control mechanism categories; per-tool descriptions use features, e.g., aspect ratio, number of components, etc., to classify how likely a block is a destination address, return address, or advertising text.]

Each new piece of evidence generated by the application of a specialized tool is associated with a confidence value to represent the degree to which it supports or refutes a particular labeling hypothesis. An example of a rule with a confidence value is:

IF aspect ratio(A) = x and size(A) = y
THEN A is the DAB with confidence z

where A is a particular block being tested, x and y are rule parameters, and z is the probability associated with the confidence of the rule results. The confidence value z represents the a posteriori probability P(hi | e), where e is the evidence (i.e., the condition or "IF" part of a rule), and hi is the labeling hypothesized to be associated with the block (i.e., destination address, return address, postage, advertising text, and graphics). In other words, P(hi | e) represents the probability of assigning label hi to block A given evidence e. In the above example, evidence e is the aspect ratio and size testing portion of the rule while label hi is the DAB label.

There are several ways to estimate the a posteriori probability P(hi | e). The approach we have taken is to use the available statistics from the mail statistics database plus subjective estimation and then use the experimental results (from the image database) to fine tune the subjective estimation. This approach is by no means complete, or statistically sound, but it represents the best possible from the available information.
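Confidences from several tools must then be merged; the paper bases this on Dempster-Shafer combination. A minimal sketch of Dempster's rule over the five-label frame follows; the label spellings and the mass values are invented for illustration.

```python
# The five-label frame: one label per block on a mail piece.
FRAME = frozenset({"destination", "return", "postage", "ad_text", "graphics"})

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset of labels -> mass).
    Mass on FRAME itself represents ignorance (no commitment)."""
    out, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            both = a & b
            if both:
                out[both] = out.get(both, 0.0) + x * y
            else:
                conflict += x * y  # mass committed to contradictory labels
    # Renormalize by the non-conflicting mass (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in out.items()}

dab = frozenset({"destination"})
m1 = {dab: 0.6, FRAME: 0.4}  # one tool's evidence; the rest is ignorance
m2 = {dab: 0.5, FRAME: 0.5}  # a second tool's evidence for the same block
combined = dempster_combine(m1, m2)
assert abs(combined[dab] - 0.8) < 1e-9  # agreement strengthens DAB support
```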
The scheme to combine confidence values of evidence is based on the Dempster-Shafer theory of evidence combination [Bar81, Sha76], with a block on a mail piece having one label of the following five-label set: destination address, return address, postage, advertising or miscellaneous text, and graphics.

3. Implementation

In terms of the implementation running on a single CPU system (SUN-3), the tools in ABLS can be divided into two categories. The first category of tools is primarily implemented in C (10,000 lines) with some additional knowledge rules and Lisp functions (7,000 lines) to do the interface between the control structure and these tools. Tools falling into the first category are the ADTH, COTH, MSEG, HSEG, HWDE, HWDI, TEXA, SHAP, and ICDE tools. The detailed descriptions of tools in the first category can be found in [SWP87]. Tools in the second category are basically coded in knowledge rules with some additional Lisp functions to implement those tasks not easily coded in the knowledge rules. Tools belonging to the second category include the BLCM, BLCS, ZIPM, LAYO, LOCA, SIZE, UVDE, COVF, HEUR, UNIF, EVHP, and STOP tools.

4. Experimental Results and Analysis

In order to test the performance of ABLS, experiments were conducted using an image database consisting of 174 complex training mail piece images. The current training image database consists of four categories: letters, flats (e.g., magazines, newspapers), irregular parcels and pieces, and manual letters. The images are not a strict statistical sample but skewed towards cases that would be difficult to handle. All input images in the training image database have been oriented manually to the correct orientation. This manual orientation has not been performed for testing ABLS with 30 USPS testing images which will be described below. The statistics of performance on 174 training images are shown in Table II.
Each test is classified as a success (S), i.e., the destination address is the highest ranked block, and the segmented destination address contains enough of the address to correctly sort the mail piece; a partial success (P), i.e., the destination address is the highest ranked block, but the segmented destination address contains insufficient address to correctly sort the mail piece; a reject (R), i.e., the system cannot recommend any block as the destination address block because all figures of confidence were too low; or an error (E), i.e., the highest ranked block is not the destination address.

The cause of failure of those pieces not classified as success (Table III) is roughly divided into six categories: hand generated destination address (H), poor image quality (Q), the destination address located in an unusual location (L), the size of the destination address is either unusually large or unusually small (S), the destination address near other text blocks which causes an over-segmentation (N), and other text blocks are similar to the destination address and located in the usual location for a destination address (C). Since a failure could be the result of multiple causes, the summation of each category's percentage may be over 100.

[TABLE II. Statistics of Performance -- S/P/R/E counts by mail class (letters, flats, irregular parcels and pieces, manual letters); overall percentages: S 81%, P 4%, R 5%, E 10%.]

The statistics of performance in Table II show that ABLS achieves an 81% overall success rate. The overall success rate of ABLS cannot be higher than the percentage of acceptable segmentations, which is 83%. This means that ABLS can achieve a very high success rate (81 / 83 = 98%) given an acceptable segmentation result. Therefore, the key to substantially enhancing the performance of ABLS lies in the improvement of segmentation results. In the early stage of ABLS development, only one bottom-up segmentation tool (MSEG) was used, and it could only achieve about 60% correct segmentation results.
However, with the incorporation of more tools using either different methods to segment an address block or to repair the segmentation results, ABLS now can achieve an 83% acceptable segmentation rate.

The experimental statistics (Table II) also show that ABLS achieved a high success rate on letter mail since letters are well structured and have the destination address in a standard position. For the other three mail classes, generally speaking, the major cause of failure, besides segmentation failure, is confusing text blocks which are too close to the destination address. The block splitting tool (BLCS) of ABLS is aimed at solving this kind of failure, and achieves limited success.

The CPU time required by the system for these training images, on average, is 10.4 minutes, with 6.3 minutes for letters, 8.2 minutes for manual letters, 7.7 minutes for IPPs, and 18.2 minutes for flats. The long processing time required for flats is mainly because of the large digital image size encountered in this mail stream. The average processing time per invocation of each of the image analysis tools is as follows: TEXA-3 seconds, SHAP-12 seconds, COTH-23 seconds, MSEG-1.5 minutes, HSEG-1.6 minutes, HWDI-2.0 minutes, ICDE-3.1 minutes, ADTH-5.5 minutes, and HWDE-6.3 minutes. The average processing time per invocation of each of the control structure tools is as follows: LAYO-1 second, EVHP-1 second, STOP-1 second, UNIF-1 second, SIZE-1 second, BLCM-2 seconds, LOCA-2 seconds, BLCS-3 seconds, COVF-4 seconds, HEUR-6 seconds, and ZIPM-9 seconds.

We are presently focusing on methods to reduce the processing time per mail piece by several orders of magnitude using several specialized image processing boards. This should produce a system which performs at real-time rates. This system has also been tested on 30 images that were not part of the training set and are skewed toward more difficult cases.
The performance on the 30 test images was: 77% (success with correct global orientation), 0% (success with incorrect global orientation), 23% (P), 0% (R), and 0% (E), which is consistent with the results of the previous experiments. The performance codes for test images are slightly different from the earlier definitions because the definition of partial success (P) is relaxed to include every test image which does not have a satisfactory segmentation result. Therefore, there are no rejects or errors because all the failure cases in the test images are due to poor segmentation results.

[TABLE III. Statistics of Cause of Failure.]

5. Conclusion

A methodology for designing a system to recognize address blocks in an environment that may be structured, partially structured or random has been described. The approach has been to utilize specialized tools to generate several candidates for the destination address block, and to distinguish the destination address from other candidates. The framework is flexible enough to incorporate as many tools as possible into the system if experimental results can establish the usefulness of those tools. Knowledge about the selection and utilization of each tool is kept independently and separately on each tool so that the addition, deletion, or modification of a tool will not cause side effects on other tools.

The experimental results demonstrate that using multiple thresholding tools, segmentation tools, and refined-segmentation tools is a promising direction toward locating address blocks on mail pieces with varying degrees of complexity and quality. Commercial letter mail sorting machines of today largely assume a standard position for the address block and cannot process mail pieces with complex structure.
The key set of features which are useful for locating an address block has been identified and the necessary image analysis tools for extracting those features have also been developed and evaluated on an image database. This system shows an interesting and effective method for using black- boards in addition to many image processing routines to locate information in a digital image. eferences [Bar8 l] Bamett, J. A., “Computational Methods for a Mathematical Theory of Evidence”, Proc. 7th IJCAI, 198 1, 868-875. [EHL80] Erman, L. D., Hayes-Roth, F., Lesser, V. R. and Reddy, D. R., “The Hearsay-II Speech- Understanding System: Integrating Knowledge to Resolve Uncertainty”, Computing Surveys 12, 2 [GTR86] [m841 [Nii86] [Sha76] [SHP86] [SWP87] WSP84] [was 861 (June 1980), 213-253. GTRI, Automated Processing of Irregular Parcel Post: IPP, Letter, and Flat Statistical Database, Electronics and Computer Systems Lab. Georgia Tech Research Institute., 1985-1986. Hull, J. J., Krishnan, G., Palumbo, P. and Srihari, S. N., “Optical character recognition techniques in mail sorting: a review of algorithms”, Tech. Rep.-214, Dept. of Computer Science, State University of New York at Buffalo, June 1984. Nii, H. P., “PART ONE Blackboard Systems: The Blackboard Model of Problem Solving and the Evolution of Blackboard Architectures”, AZ kfagazine 7, 2 (Summer 1986), 38-53. Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, 1976. Srihari, S. N., Hull, J. J., Palumbo, P. W. and Wang, C. H., ‘ ‘Address Block Location: Evaluation of Image and Statistical Database”, Tech. Rep. 86-09, Dept. of Computer Science, SUNY at Buffalo, April 1986. Srihari, S. N., Wang, C. H., Palumbo, P. W. and Hull, J. J., “Recognizing Address Blocks on Mail Pieces: Specialized Tools and Problem Solving Architecture”, AZ magazine 8, 4 (Winter 1987), 25-40. USPS, Engineering Report on OCR Readability Guidelines, June 1984. Wang, C. and Srihari, S. 
N., “Object Recognition in Structured and Random Environment: Locating Address Block on Mail Pieces”, Proc. AAAI-86: Fifth National Corference on Artifscial Intelligence, Philadelphia, PA, August 1986, 1133-l 137. Wang, Palumbo and Srihari 841
Michael L. Brodie
GTE Laboratories Incorporated
Waltham, MA 02254
brodie@gte.com (csnet)

Abstract

The effective application of AI Technology and the development of future computing systems require the integration of AI and Database Technologies. The integration will benefit both AI and Databases and will substantially advance the state of computing. Information Systems are among the greatest potential beneficiaries of AI Technology. What if advanced reasoning capabilities could be added to any Information System? What if intelligent interfaces could replace unfriendly interfaces? What if AI techniques were used to extend Database Management functionality? At the same time, AI Technology will benefit dramatically from Database Technology. What if knowledge could be shared concurrently by existing and unanticipated applications (Knowledge Independence)? What if AI systems could contain hundreds of thousands of rules and access millions of facts? What if new reasoning techniques could be directly applied to existing knowledge bases? What if knowledge could be reorganized, independently of applications, to optimize for current usage (Representation Independence)? What if AI systems development involved simply incremental augmentation of existing knowledge bases? What if AI systems were as robust as On-Line Transaction Processing Systems? Future computing systems will require AI and Database Technology to work together with other technologies. These systems will consist of large numbers of heterogeneous, distributed agents with varying abilities to work cooperatively. Each will have its own knowledge and reasoning schemes, languages, and capabilities. Data, procedures, knowledge, and objects in these systems may be shared, incomplete, and inconsistent with those of other agents; but will certainly persist and will together form a massive distributed information base.
The current trend to Interconnectivity -- one system accessing another via standard interfaces -- will evolve into Interoperability -- intelligent cooperation amongst systems to best achieve specified goals. Such future computing systems pose major challenges for both AI and Database Technologies. Many of these challenges (e.g., managing, structuring, searching, sharing, and modifying objects; reasoning about tasks and specific domains) are present in simpler forms in current AI and Database Systems. Extending existing solutions to new contexts requires a deep understanding of both technologies, their requirements, their capabilities, and their limitations. Difficulties with current research and commercial AI systems, let alone future systems, are due, in part, to the lack of such a deep understanding. Integration of AI into conventional Information Systems requires resolution of knowledge/object management and access issues addressed by Database Technology for simple tabular and structured data. Basic systems issues underlying this resolution include the appropriate use of secondary storage and effective architectures.

844 Invited Talks

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Database Technology currently solves the above problems for about 0.5% of existing corporate and scientific data. Exciting database research is extending these solutions to new application domains such as AI. Much of this work utilizes the object paradigm. It is hoped that the use of this evolving paradigm in AI, Databases, Programming Languages, Operating Systems, and other areas will provide a common ground for technology integration. For example, combining the object paradigm with that of heterogeneous, distributed databases could provide network-wide object management regardless of the systems in which objects reside. This interoperability would permit systems to cooperate with less need to know exact details of foreign systems and objects.
Open problems here include the appropriate partitioning of functionality amongst, and the nature of interfaces between, Database, Object Management, AI, and other systems. These issues must be addressed at the Knowledge, Symbol, and Architectural/Organizational Levels. This talk presents a vision of future computing that provides a framework and goals for AI-Database Integration. Short- and long-term benefits are illustrated and the critical nature of this progress is emphasized. Progress over the past ten years is illustrated. The applicability of existing Database Techniques to AI systems and of AI Techniques to Database Systems is described. Open problems are identified together with promising research directions.

Brodie 845
HOW TO DRAW THREE PEOPLE IN A BOTANICAL GARDEN

Harold Cohen
The University of California at San Diego
Department of Visual Arts, La Jolla, CA 92093

Abstract

AARON is a program designed to investigate the cognitive principles underlying visual representation. Under continuous development for fifteen years, it is now able autonomously to make "freehand" drawings of people in garden-like settings. This has required a complex interplay between two bodies of knowledge: object-specific knowledge of how people are constructed and how they move, together with morphological knowledge of plant growth; and procedural knowledge of representational strategy. AARON's development through the events leading up to this recently-implemented knowledge-based form is discussed as an example of an "expert's system" as opposed to an "expert system." AARON demonstrates that, given appropriate interaction between domain knowledge and knowledge of representational strategy, relatively rich representations may result from sparse information.

Figure 1: AARON drawing, 1987

Brother Giorgio is a 12th Century scholar-monk whose task it is to record what is known of the world's geography, and he is currently making a map of Australia, an island just off the coast of India. Since an essential part of map-making involves representing the animals of the country, he is making a drawing of a kangaroo. Now Brother Giorgio has never seen a kangaroo. But he understands from what he has been told that the kangaroo is a large rat-like creature with a pouch, and with an exceptionally thick tail. And he draws it accordingly (figure 2a).

Figure 2a

While he is so engaged, a traveller visits the monastery, and he tells Giorgio that his drawing is wrong. For one thing -- and Giorgio finds this quite implausible -- the kangaroo doesn't carry a pouch -- its pouch is part of its belly!
And, says the traveller, it doesn't go on all fours: it stands upright, on rear legs much bigger and thicker than the front legs (figure 2b).

Figure 2b

And the tail doesn't stick straight out, it rests on the ground. Giorgio completes all the necessary changes, and the traveller assures him that though he hasn't got it quite right, it's close (figure 2c).

AARON, late in the 20th Century, is a knowledge-based program that is capable of the autonomous generation of original "freehand" drawings, like the one in Figure 1. Like Brother Giorgio, AARON has never seen the things it draws.

Figure 2c

It, too, is obliged to rely upon what it is told. Unlike Giorgio, however, it cannot make use of associative knowledge. There would be no point in telling it that a kangaroo looks a bit like a rat, for example, not only because it doesn't know about rats, but because it has never looked at anything. What both Giorgio and AARON make clear is that the plausibility of a representation does not rest upon the correctness of the knowledge it embodies. Indeed, for anyone lacking knowledge of marsupials, the "correct" knowledge of the kangaroo's pouch is at least as implausible as Giorgio's initial understanding. Nor does plausibility rest upon the completeness of that knowledge, since representations only ever represent what is being represented with respect to an arbitrarily small set of properties. Given one important proviso -- that the representation-builder has general knowledge about how to make representations -- there would appear to be no lower limit of knowledge below which the making of a representation is impossible. And that proviso points to the main thrust of this paper. It will show AARON's visual representations to involve a spectrum of representational procedures, and a spectrum of different kinds of world knowledge.
It will also show the degree to which the particular quality of those representations depends upon the intimate meshing of the program's world knowledge with its knowledge of representing. AARON has been under continuous development for nearly fifteen years now and it has gone through many generations. At fifteen it may well be the oldest continuously-operational expert system in existence and perhaps the only computer program to have had something akin to a life-story. But perhaps AARON would be better described as an expert's system than as an expert system: not simply because I have served as both knowledge engineer and as resident expert, but because the program serves as a research tool for the expansion of my own expert knowledge rather than to encapsulate that knowledge for the use of others. The goal of this research is to understand the nature of visual representation. The term should not be understood to imply the various mechanical methods -- perspective, photography, ray-tracing -- by which two-dimensional transforms of a three-dimensional world may be generated. All of these are knowledge-free, in the sense that the photographic process neither knows, nor needs to know, what is in front of the lens.

Cohen 847

AARON helps to address questions that are both more fundamental and more general. What do computer programs -- and, paradigmatically, human beings -- need to know about the external world in order to build plausible visual representations of it? What kind of cognitive activity is involved in the making and reading of those representations? The making of representational objects -- the drawings, paintings, diagrams, sketches in which representations are embodied -- constitutes the only directly-examinable evidence we have of "visual imagining." I mean those internal cognitive processes that underpin and inform the making of representational objects, and which we all enjoy to some extent, whether or not we make representational objects.
I assume that the reading of representations involves essentially similar processes. But making requires more than reading does. It requires a special body of knowledge -- knowledge of representation itself -- that is part of the expertise of the artist, just as the representation of a body of knowledge within an expert system requires an analogous expertise of the knowledge engineer. Understanding the nature of visual representation requires asking what artists need to know in order to make representational objects; what they need to know, not only about the world, but also about the nature and the strategies of representation itself. AARON's role in this investigation, then, has been to serve as a functional model for a developing theory of visual representation. The stress is on the word "functional," for the most convincing test of a theory of representation is the model's ability to make representational objects, just as the plausibility of a theory of art resides in art-making. AARON was last reported in detail in 1979, in the proceedings of IJCAI-6, at which time it was making drawings like that in figure 3. The differences in its output have been matched, of course, by large changes in the program itself. But these have been developmental rather than radical changes, following a pattern analogous to that of human cognitive development, and AARON has retained its identity and its name throughout. Part of my purpose here is to describe the current state of the program. The other part is to account for its development. That means, necessarily, to describe the domain of interaction between program and programmer, to delineate the purpose that the one serves for the other. AARON has been a research tool for me, but also something very like an artist's assistant, capable always of enacting, without human aid or interference, the understanding of art embodied in its structure. And my relationship to the program has become increasingly symbiotic.
Without AARON's sophisticated enactment of my own understanding, that understanding would not have developed as it did. Without that developing understanding AARON could never have become the sophisticated adjunct artist that it is. My long-held conviction that AARON could only have been written by a single individual has been based on rather vague suspicions of cultural incompatibilities existing between the disciplines of knowledge engineering and art. Now I believe, rather more precisely, that the problem -- and, indeed, a fundamental limitation of expert systems -- lies in the artificial separation of two bodies of knowledge, that of domain-expert on the one hand and knowledge-system-expert on the other.

2 AARON: Early Versions

In all its versions prior to 1980, AARON dealt with exclusively internal aspects of human cognition. It was intended to identify the functional primitives and differentiations used in the building of mental images and, consequently, in the making of drawings and paintings. The program was able to differentiate, for example, between figure and ground, and insideness and outsideness, and to function in terms of similarity, division and repetition. Without any object-specific knowledge of the external world, AARON constituted a severely limited model of human cognition, yet the few primitives it embodied proved to be remarkably powerful in generating highly evocative images: images, that is, that suggested, without describing, an external world [Cohen, 1979]. This result implied that experiential knowledge, inevitably less than constant across a culture and far less so between cultures, is less a determinant to the communicability of visual representations than is the fact that we all share a single cognitive architecture.
Figure 3: AARON drawing, 1979

From the program's inception around 1973, I had been convinced that AARON would need to be built upon a convincing simulation of freehand drawing, and gave much attention to modeling the feedback-dependent nature of human drawing behavior. As a consequence of this stress the program was formulated, initially, largely in terms of line generation. Closed forms, those universal signifiers for solid objects, also were generated from rules directing the development of lines: rather like the way one might drive a closed path in a parking lot by imagining a series of intermediate destinations, veering towards each in turn and finally returning to one's starting point [Cohen, Cohen, Nii, 1984]. Following a paradigm we see exemplified in rock drawings and paintings all over the world, AARON observed a general injunction against allowing closed forms to overlap each other, and would be obliged to modify its closure plans frequently in order to prevent overlap. This resulted in a richer, less predictable, set of forms than the unmodified closure rule would have permitted. But underlying this richness was the fact that AARON had no prior spatial conception of the closed forms it drew: their spatial presence, their identity, was the result, not the cause, of an entirely linear operation. Throughout this phase of AARON's development, a constant sub-goal was to increase the range and diversity of its output. And in 1980 this desire led to the development of a new generating strategy for closed forms. It had its basis in an attempt to simulate the drawing behavior of young children, specifically at that immediately post-scribbling stage at which a round-and-round scribble migrates out from the scribble mass to become an enclosing form (figure 4). It was while this work was in progress that a colleague expressed an interest in having AARON make "realistic," as opposed to evocative, drawings.
Could it, for example, make a drawing of an animal?

Figure 4

I must avoid here what would be a lengthy digression on the nature of realism. Let it suffice to say that I took my colleague's words to imply a visual representation of an animal, as opposed, say, to a diagram. Since I had never drawn animals and had little idea about their appearances I thought it unlikely that I could oblige. What little knowledge I could place at AARON's disposal was barely sufficient to construct a diagrammatic stick figure: a representation, certainly, but not a visual representation. Now it happens that the "enclosing" stage of children's drawing is also the stage at which they begin to assign representational significance to their drawings. If this was more than a coincidence, I speculated, perhaps it would be possible to generate an adequate representation by enclosing a stick figure the way a child encloses a scribble. It proved to be a good guess. On the first attempt the program's drawings showed a startling resemblance to the rock drawings of the African Bushmen (figure 5a). Encouraged by the result I amplified the program's knowledge to take some account of the bulk of the animal's body, and the drawings shifted their stylistic affiliations to the caves of Northern Europe (figure 5b) [Cohen, 1981].

Figure 5b: AARON, animal drawing

In retrospect it seems obvious that the closed forms of these drawings would have produced a richer evocation of "real" animals than a diagrammatic stick-figure could. The clear differentiation of style that resulted exclusively from the change in the enclosed, subsequently invisible, diagram is more problematic, however. Perhaps "style" in art is less a question of autography than of what the artist believes to be significant. AARON was now potentially able to generate a large variety of geometrically complex closed forms without requiring geometrically complex descriptions.
The gain was obvious enough to ensure to this new strategy a permanent place in AARON's repertoire even without the goal of visual representation. From that point forward, all closed forms involved the initial construction of a "conceptual core," corresponding to the child's scribble, around which the form was "embodied" (figure 6). One important result of the new strategy was to shift the stress in AARON's drawing mode away from its initial linearity, yet the greater gain had less to do with the growth of AARON's formal skills than with its "cognitive" development. For the first time AARON now had some concept of what it was to draw before it began to draw it.

Which did not mean that AARON proceeded to draw real-world objects; on the contrary, the representation of real-world objects seemed as unnecessary to my research goals as it was inconsistent with my own history as an artist. The animals disappeared from AARON's repertoire and no further attempt was made at that time to apply the new strategy to the representation of real-world objects. Yet even in the absence of real-world knowledge, the new cognitive mode endowed AARON's images with an increasingly "thing-like" presence that seemed to demand an explicitly visual space in which to exist. Thus, for example, where the earlier versions of the program had avoided overlapping figures, occlusion now became a fundamental principle of pictorial organization. By 1984 the earlier "rock-art" pictorial paradigm had given way entirely. The pressure to provide real-world knowledge of what AARON's new visual space contained became inescapable and the first of several knowledge-based versions of the program was constructed (figure 7).

Figure 6: AARON drawing, 1983
Figure 7: AARON drawing, 1985

I do not intend by this account to imply some metaphysical force guiding AARON's development and my own hand. Nor is it necessary to do so.
Every system has its own logic, and the need to follow the dictates of that logic, to discover where it will lead, may easily transcend the private inclinations of the investigator.

3 AARON: Recent and Current Versions

I said earlier that the goal of this research is to discover what the artist needs to know about the world in order to make plausible representations of it: not correct representations, or complete representations, or realistic representations -- none of these notions hold up very well under examination -- but plausible representations. If I had asked how much the artist needs to know, the answer would have been that the question is hardly relevant: we make representations with whatever we know. Given adequate knowledge of the representational procedures themselves, there is virtually no lower limit of world-knowledge below which representation is impossible. The goal, rather, is to discover how representational structures represent what they represent: how we use what we know to build those structures. What does AARON represent, and how -- by means of what structures -- is it represented? As the title of this paper suggests, AARON represents a small part of the flora and the fauna of the world, with a little geology thrown in: a tiny part of the whole of nature. Because plausibility does not rest upon how much the image-maker knows about the world, AARON has never been provided with a large object-specific knowledge base -- large, that is, in the sense of referring to many different objects. And because object-specific knowledge is also purpose-specific, no attempt has been made to give it knowledge that might be considered essential for representations of other kinds than its own and within other disciplines.
Most particularly, its object-specific knowledge contains very little about appearances, and the program's overall strategy rests upon being able to accumulate several different kinds of non-visual knowledge into visually representable forms. This is not a neatly sequential process. As I will show, different knowledge is called into play at different stages of accumulation; the program's representational knowledge is not simply invoked as a final step. In the category of object-specific knowledge the program has five levels, each with its own representational formalism. At the first level is AARON's declarative knowledge. For example: a human figure has a head, a torso, two arms and two legs. A plant has a trunk, branches and leaf clusters. This declarative knowledge is represented outside the program itself in frame-like forms that are read in as they are needed. So, also, is knowledge of several pictorial "classes." A class is characterized simply by what elements may be used in a given drawing and -- since AARON does not use an eraser -- the front-to-back order in which they may be used. Thus AARON begins a new drawing by selecting a pictorial class, and proceeds by expanding each entry in the class hierarchically into an internal tree-structure, at the lowest levels of which are the management procedures responsible for the generation of individual elements of the drawing. There is, for example, a
A class is only minimally prescriptive; it will call for “some” trees or people, rather than two trees or three people, where “some” may be specified, for example, as any number between zero and four. Conse- quently the expansion is not deterministic. Decision-making is relatively unconstrained at the start of the drawing and, though it becomes increasingly constrained as the drawing proceeds, AARON randomizes unless it has some clear reason for preferring one thing or one action over another, as people do. All higher-level decisions are made in terms of the state of the drawing, so that the use and availability of space in particular are highly sensitive to the history of the program’s decisions. AARON’s first and ongoing task, then, has to do with the disposition of its few objects in a plausible visual space. 3.1 The Nature of Appearances When I first provided AARON with the knowledge it would need to make blatantly representational drawings, I reasoned that, since anything one sees through a window is as real as anything else, pictorial composition was hardly relevant to the issue of plausibility. I assumed, therefore, that I could safely fall back upon the simplest, and perhaps the most universal, of compositional paradigms: put it where you can find space for it. And this paradigm, extensively used in AARON’s two- dimensional days, remained valid in its new world to the extent that three people in open view make neither a better nor a worse composition than five people hiding in the foliage. A fundamental problem emerged, however, centered on the ambiguity of the word “where.” Until recently AARON has never had a fully 3-dimensional knowledge-base of the things it draws: foreshortening of arms or the slope of a foot in the representation were inferred from AARON’s knowledge of the principles of appearance, not by constructing the figure in 3-space and generating a perspective projection. 
And it happened too frequently in the program's first efforts at representation that people in the picture would stand on each other's feet (figure 8).

Figure 8: AARON drawing, 1986

I've been using the term "plausible representations" to mean representations that are plausible with respect to appearance, and I must now consider what appearance means and what it implies. Appearance implies what the world looks like. It implies the existence of a viewer, and a viewpoint that controls the disposition of objects within the viewer's visual field. Since much of what the viewer sees is illuminated surfaces, it implies also some condition of lighting that controls visibility in some particular way. And since lighting is arbitrary with respect to the object itself it follows that the appearance of objects -- as opposed, for example, to their structure, their mass or their dimensions -- is a transitory characteristic. In order for appearance to imply specific knowledge of how particular objects look under particular and transitory lighting conditions, we would have to be able to store and retrieve, not merely "visual fragments," but complete "mental photographs." And that is surely not the general case. On the other hand, we can regard the way solid objects occlude each other, the way objects take less space in the visual field as they get further away, the way light falls on surfaces and so on, as a set of principles. In theory we should be able to infer a particular appearance by applying the principles of appearance to a particular surface description; that is exactly what the various strategies of "solids modeling" do.
But the human mind is rather poor at inferring appearance, partly because it rarely has adequate surface descriptions available to it -- we use appearance to provide those descriptions, not the other way around -- and partly because the human cognitive system makes use of a gamut of "cognitive perspectives" quite unlike the self-consistent geometries upon which solids modeling relies. One result is that in the one period of history when art has concerned itself explicitly with appearance -- the western world since the Italian Renaissance -- it has inferred the appearance of simple surface configurations, but has relied heavily upon direct observation for the depiction of complex surfaces. For example, the artists of the Renaissance used perspective in depicting objects with simple surfaces -- buildings, tesselated floors -- but almost never attempted to use perspective in depicting the human figure (figure 9). And, of course, solids modeling has balked at the surface complexity of the human figure for the same reason: the difficulty of providing adequate surface descriptions.

Figure 9: Fra Angelico, Annunciation, 1437

3.2 Pictorial Organization versus Dimensional Plausibility

Fortunately, the cognitive system provides a convenient shorthand for the representation of surfaces. Since the eye functions as a contrast amplifier we are able to detect the bounding edges of surfaces very efficiently, and we make heavy use of the behavior of those edges to provide information about the surfaces inside them. In using edges as the basis for a representational mode, then, much of the problem of surface illumination is bypassed. Plausibility rests upon the behavior of the edge, and upon issues that can be addressed in terms of edges: notably occlusion and spatial distribution. Actually, very little is required, in terms of occlusion and perspective, in drawing a single figure or a single plant. However, the need to represent objects plausibly with respect to other objects requires a significant level of control over 2-space placement and the relative sizes of objects within the representation, and requires more extended control of occlusion.

This is more complicated than it may seem. As I remarked, visual representation in general rests upon a complex of cognitive "perspectives," not upon the automatic and knowledge-free 2-space mapping of the visual field provided by photography, or its computer-based equivalent, solids modeling. So great is the difference between the cognitive view and the automated view that experienced studio photographers habitually use Polaroid instant film while setting up a shot in order to find out what the world they can see clearly in three dimensions will look like in two. Evidently 2-space organization cannot be adequately predicted or controlled exclusively through control of the 3-space arrangement of objects. Nor, conversely, is it possible to guarantee plausibility with respect to physical dimensionality by concentrating exclusively on pictorial organization. In constructing visual representations the human artist appears to work under two simultaneously-active and mutually-exclusive constraint-sets. The "imaginational planning" that marks this mode is best evidenced by the artist's sketch, in which 2-D space is allocated less to the objects to be represented than to the space they occupy, and in which space is increasingly committed as those objects take form simultaneously within both their actual 2-D space and their referenced, implied 3-D world (figure 10).

Figure 10: Eugene Delacroix, 1862

At present, AARON uses only a crude, static model of this essentially dynamic process. While it organizes primarily in 2-space terms, it also maintains a floor plan of the "world" it is depicting. Space for an object is allocated initially on the plane of the representation.
It is then projected back into the "world," where it is adjusted to ensure valid 3-space placement, and then it is projected forward again into the representation. In doing this, perspective is used only to the degree of determining where the bases of objects -- the feet of figures -- will fall, and how high the objects will be, in the representation. It thus ensures that real-world objects are placed plausibly with respect to their ground-plane while doing very little about planning in 3-space terms. People no longer stand on each other's feet, but a genuinely dynamic model of this imaginational planning remains a goal for the program's future development.

3.3 Levels of Knowledge

During the expansion process, a second level of knowledge -- exemplary knowledge -- is invoked to provide fuller specification for the management procedures. The determination that this particular figure will have a large head and long arms, for example, involves applying the descriptors "large" and "long" plausibly to a set of prototypical dimensions held in table form within the program. The further determination that this figure will hold a particular posture, requiring its right arm to be extended horizontally and its right hand to be pointing, will require three further levels of amplification before an adequate specification can be generated. First: the figure is articulated, and AARON has to know where the articulations are (structural knowledge). Second: it must know what the legal range of movement is at each articulation (functional knowledge). Third, and most important, the figure must remain coherent in whatever posture, and from whatever position, it is viewed. With the completion of this stage AARON has provided itself with the conceptual core of its representation, similar functionally to the young child's scribble.
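The two-stage placement described earlier in this section -- allocate on the picture plane, project back to the floor plan, then project forward again -- can be sketched under an assumed camera geometry. The focal length and eye height below are invented constants for illustration; AARON's actual projection is not specified in the paper.

```c
/* Ground-plane placement sketch: the picture-plane position of a
   figure's feet (measured down from the horizon) fixes its depth on the
   floor plan, and that depth in turn fixes how large the figure may be
   drawn.  FOCAL and EYE_H are assumed, illustrative constants. */

static const double FOCAL = 1.0;   /* assumed focal length      */
static const double EYE_H = 1.6;   /* assumed viewer eye height */

typedef struct { double x_pic, y_feet, height_pic; } Placed;

/* Project back: feet position below the horizon -> depth on floor plan. */
double depth_from_feet(double y_feet)
{
    return FOCAL * EYE_H / y_feet;
}

/* Project forward: a figure of real height h standing at that depth has
   apparent height FOCAL * h / depth in the picture.  (A fuller model
   would first adjust the floor-plan position to avoid collisions -- so
   that people do not stand on each other's feet.) */
Placed place_figure(double x_pic, double y_feet, double real_height)
{
    double depth = depth_from_feet(y_feet);
    Placed p = { x_pic, y_feet, FOCAL * real_height / depth };
    return p;
}
```

The sketch captures only what the text claims for AARON: perspective determines where the bases of objects fall and how high the objects will be, and nothing more.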
And it is around this conceptual core, in the third and final stage, that the figure is embodied (Figure 12 shows an incomplete core taken from a current, fully 3-D version of the program).

Figure 12: partial core figure

Embodying involves generating a path around each of the parts of the conceptual core. These are taken, as the elements of the drawing are, in closest-first order. Part of the internal representation of the drawing that AARON maintains for itself consists of a matrix of cells onto which are mapped the lines and the enclosed spaces of the drawing. Thus the conceptual core is now recorded as a mass of marked cells, to develop a path around which AARON uses what is, in essence, a simple maze-running procedure. However, its implementation rests heavily upon the fact that AARON draws, as the human artist does, in feedback mode. No line is ever fully planned in advance: it is generated through the process of matching its current state to a desired end state. As with any feedback-controlled system, AARON's performance is characterized by its sampling rate and by how radically it corrects. This part of the program most intimately determines AARON's "style," and it has not changed greatly since the program's earliest versions.

Unlike the earlier versions, however, the strategy for "imagining" the intermediate destinations around its path depends upon two things. Firstly, upon its ability to recognize and to deal with a number of special-case configurations in the core figure (figure 13a, b). These -- and most particularly a configuration indicating a sharp concavity -- are intimately involved in the self-overlapping folding of outlines that convey so much about the appearance of complex three-dimensional forms. Secondly, AARON knows what it is drawing, and it associates some particular degree of carefulness with the delineation of any particular element.
This knowledge is expressed in the use of an additional feedback parameter: the distance from the core at which the path will be developed. Thus, for example, it will draw a thigh rather loosely -- that is, at some distance from the conceptual core and with a relatively low sampling rate -- while it will draw a hand close to the core and with a high sampling rate. Both of these are controlled by the placement and the frequency of the intermediate destinations around the marked-cell mass. AARON further adjusts its own sampling rate and correction with respect to the size of the element it is drawing relative to the size of the entire image.

Figure 13a: strategy for concave configuration
Figure 13b: Edouard Manet, 1862, study for a "Woman at her Toilet"

4 Conclusion

In practice AARON makes drawings of whatever it knows about without requiring any further instructions for the making of a particular drawing -- and, indeed, without possessing any mechanism through which it could take instructions. To the degree that it does nothing much more than enact what it knows, AARON provides an intuitively satisfying model of "visual imagining," in that it permits the expansion of relatively sparse real-world, object-specific knowledge into a convincing representation of a visual experience.

I have described AARON's knowledge as falling into two broad categories: what it knows about a small range of world objects and what it knows about building visual representations. And I have proposed that these two categories must be intimately inter-related in any satisfactory model of human knowledge-based performance. The conclusion is an obvious one; we can only represent what is representable in terms of available representational strategies. I have no doubt, for example, that the program's development has been profoundly determined by the fact that it has been written in 'C' rather than in LISP.
AARON's representational strategy, deriving as it does from the young child's relatively undifferentiated perceptions of the world, is well adapted to the representation of blob-like forms, or forms with a strong axis -- heads and limbs, for example. Yet AARON is unable to deal with cube-like objects, the perception of which rests upon high contrast edges in the center of a form as well as at its extremities. AARON will need new representational strategies, not merely more object-specific knowledge, before it can present a new view of the world, just as the young child is obliged to develop new strategies before it is able to put the sides on its representations of houses (figure 14).

Figure 14: children's drawings

Finally: I have claimed for AARON only that it makes plausible representations, and have left aside the consensus judgement that its drawings represent a high level of artistic accomplishment. Why have I had nothing to say about "aesthetic" principles like harmony and balance?

The short answer is that AARON is entirely unaware of the existence of those principles, and that since its drawings are aesthetically satisfactory, we must surely question the relevance of those principles to artistic production. This is not to say that AARON does not embody principles of its own, but that whether these are aesthetic principles is largely a matter of definition. I have to assume that the simple "find-enough-space" rule to which I referred earlier contributes to the aesthetic appeal of the outcome, but it is quite different in kind from the aesthetic rules commonly believed to guide the production of works of art.

The fuller answer is that I regard "style" -- surely the most difficult word in the entire vocabulary of art -- as the signature of a complex system.
I regard the aesthetics of AARON's performance as an emergent property arising from the interaction of so many interdependent processes, the result of so many decisions in the design of the program, that it becomes meaningless to ask how much any one of them is responsible for the outcome. If AARON has maintained a consistent aesthetic, a consistent identity, from its earliest endeavors, I have to assume it to reflect consistent patterns of my own in determining its development. If someone else wrote a similar program I would expect it to exhibit a different identity and a different aesthetic.

That answer would be begging the question, if the point of the question was to consider how an orthodox expert system might be built to generate objects of high artistic value. That isn't the point; given the orthodox separation of domain knowledge from representation knowledge, I do not believe it will be in the foreseeable future. This is one place where it seems not to be true that two heads are better than one.

Technical Note

While the earliest versions of AARON were built as production systems, all the more recent versions have been strongly object-oriented, as the above discussion might indicate. The program has about 14,000 lines of 'C' code and occupies almost a half-megabyte of compiled code, exclusive of external declarative knowledge structures and the internal representations of the developing drawing it makes for its own use. The most recent version was written under UNIX on a MicroVax-II, on which machine a single drawing takes about an hour of CPU-time, and has been ported to several other UNIX machines.

Cohen 855
Investigations Into a Theory of Knowledge Base Revision: Preliminary Report

Mukesh Dalal*†
Rutgers University
Department of Computer Science
New Brunswick, NJ 08903

Abstract

A fundamental problem in knowledge representation is how to revise knowledge when new, contradictory information is obtained. This paper formulates some desirable principles of knowledge revision, and investigates a new theory of knowledge revision that realizes these principles. This theory of revision can be explained at the knowledge level, in purely model-theoretic terms. A syntactic characterization of the proposed approach is also presented. We illustrate its application through examples and compare it with several other approaches.

1 Introduction

At the core of very many AI applications built in the past decade is a knowledge base - a system that maintains knowledge about the domain of interest. Knowledge bases need to be revised when new information is obtained. In many instances, this revision contradicts previous knowledge, so some previous beliefs must be abandoned in order to maintain consistency. As argued in [Ginsberg, 1986], such situations arise in diverse areas such as diagnosis, design, database updates, planning, and natural language understanding. In this paper, we investigate a new theory of knowledge revision.

In [Levesque, 1984a], Levesque presents formal foundations of a functional approach to knowledge representation, where knowledge bases (KBs) are characterized in terms of what they can be asked or told about some domain:

Tell : KB × L → KB
Ask : KB × L → {yes, no, unknown}

where L is some language to talk about the domain. Since Tell can be used to tell only information which is consistent with the knowledge base, it is not the appropriate operation for knowledge revision [Levesque, 1984a, page 182].
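The functional interface can be made concrete with a toy propositional KB. The sketch below is only illustrative: the paper gives just the signatures of Tell and Ask, so all function names, the two-atom language, and the use of Python boolean expressions as formulae are my own assumptions.

```python
from itertools import product

# Toy propositional KB over two atoms; formulae are Python boolean
# expressions such as "a and not b".  Names are mine, not the paper's.
ATOMS = ['a', 'b']

def holds(world, formula):
    return eval(formula, {}, dict(world))

def worlds_of(kb):
    """All truth assignments satisfying every formula in the KB."""
    out = []
    for values in product([False, True], repeat=len(ATOMS)):
        w = dict(zip(ATOMS, values))
        if all(holds(w, f) for f in kb):
            out.append(w)
    return out

def tell(kb, formula):
    """Tell : KB x L -> KB, defined only for consistent additions."""
    new_kb = set(kb) | {formula}
    if not worlds_of(new_kb):
        raise ValueError("contradicts the KB; Tell cannot revise")
    return new_kb

def ask(kb, formula):
    """Ask : KB x L -> {yes, no, unknown}."""
    ws = worlds_of(kb)
    if all(holds(w, formula) for w in ws):
        return "yes"
    if all(not holds(w, formula) for w in ws):
        return "no"
    return "unknown"

kb = tell(set(), "a")
print(ask(kb, "a"))        # yes
print(ask(kb, "b"))        # unknown
print(ask(kb, "a or b"))   # yes
```

Attempting `tell(kb, "not a")` raises an error, which is exactly the gap that motivates the Revise operation introduced next.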
For this purpose, we add an additional operation:

Revise : KB × L → KB

Levesque argues that one should define the operations on a KB at the knowledge level [Newell, 1981], independently of the particular symbols/sentences used to build up the KB. In this spirit, we define revision purely in terms of the models of the KB. We also give an equivalent symbol level description by presenting a syntactic method for revising knowledge bases.

We show the relation of our work to research in Philosophy on the formal aspects of the logic of theory change [Makinson, 1985] which has recently attracted attention in the AI community. For any revision scheme, it is desirable that it preserve as much as possible the beliefs held prior to revision. We provide one possible formalization of the notion of knowledge retained by a revision scheme. We also briefly discuss some applications of revision.

2 Principles of Knowledge Revision

For the purpose of this paper, we abstractly represent the knowledge in a knowledge base by a finite set of formulae in a propositional logic language L; this set describes the possible states of the world - its models. The revision is presented as a formula in L.

Let ψ ∘ μ denote the revised knowledge obtained by revising the old knowledge ψ by the new information μ, i.e., ψ ∘ μ = Revise(ψ, μ). The problem of knowledge revision is: given ψ and μ, define ψ ∘ μ. In [Dalal, 1988] we motivate certain principles that should be followed when characterizing the revised knowledge ψ ∘ μ. These are:

1. Adequacy of Representation: The revised knowledge should have the same representation as the old knowledge. Especially in a functional view of knowledge bases, this is essential since the same operations need to be performed on both.

*Net: dalal@aramis.rutgers.edu
†The author was supported by the CAIP center at Rutgers during this research. Currently on leave to IBM T.J. Watson Research Laboratory, Yorktown Heights, N.Y. for the summer.
By defining the range of Revise as KB, ψ ∘ μ is implicitly required to satisfy this principle.

2. Irrelevance of Syntax: The revised knowledge base should not depend on the syntax (or representation) of either the old knowledge or the new information. Thus, if ψ is logically equivalent to ψ′ and μ ≡ μ′, then ψ ∘ μ ≡ ψ′ ∘ μ′. This is essential in order to provide a model-theoretic semantics of the revision process. In view of this, we omit the distinction between a set of formulae (with an implicit conjunction) and a single formula, since one can be expressed in terms of the other such that the two are logically equivalent.

3. Maintenance of Consistency: If ψ and μ are both consistent, then so is ψ ∘ μ. If not for this, Revise and Tell could be identical.

4. Primacy of New Information: The revised knowledge of the system should conform to the new information. Thus, ψ ∘ μ ⊨ μ. This implies a complete reliance on the truth of the new information.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

5. Persistence of Prior Knowledge: As much old knowledge as possible should be retained in the revised knowledge. Thus ψ ∘ μ should be obtained by some form of minimal change in ψ. Note that there may be multiple notions of "minimality", but that if ψ ∪ {μ} is consistent then ψ ∘ μ ≡ ψ ∪ {μ}.

6. Fairness: If there are many candidates for the revised KB that satisfy the above principles then one of them should not be arbitrarily chosen. Arbitrariness is clearly undesirable; yet we wish to avoid non-reproducibility (hence non-determinism), and by the specification of Revise, we can rely only on the contents of the KB to choose. One possible solution is to define the revised KB as the "intersection" of all these candidates. Note that this involves compromising the principle of persistence of prior knowledge only to the extent of reflecting this ambiguity.

Note that we do not claim that the above principles are the best for every application. We only make explicit certain principles to characterize the intuitive notion of knowledge base revision.

3 Semantics of Revision

Let A be the set of atoms of the underlying language L. An interpretation w is a truth assignment to the atoms in A. An interpretation w is a model of a formula ψ if ψ evaluates to true in w. w is a model of a set of formulae if it is a model of every formula in the set. Let mod(ψ) denote the set of all models of ψ, where ψ could be a single formula or a set of formulae.

Consider the knowledge base Revise(ψ, μ). The possible states of the world consistent with ψ are the models of ψ, i.e., mod(ψ). If μ is inconsistent with ψ, μ does not hold in any of them. We can make changes in the models of ψ such that μ holds in (some or all of) these changed interpretations. What type of changes can we make? How do we quantify these changes so that we can formalize the notion of minimal change? We answer these questions in this section.

Consider changes first. The smallest change in an interpretation is a change in the truth value of a single atom. Since we do not wish to be biased in favor of any single atom, all changes in truth values of all possible single atoms will be our smallest unit of change in an interpretation.

Definition: If w is an interpretation over a set of atoms A, then define

g(w) = {w′ | w′ and w differ in the truth-value of at most one atom in A}

Note that w ∈ g(w). We can extend the definition of g to sets and formulae:

Definition: If Δ is a set of interpretations, define

g(Δ) = ∪_{w ∈ Δ} g(w)

If ψ is a formula or a set of formulae then G(ψ) is defined in terms of its models as¹

mod(G(ψ)) = g(mod(ψ))

Note that ψ ⊨ G(ψ) and that G(ψ) is consistent iff ψ is consistent. g can be read as a generalization operator which takes a set of interpretations and generalizes them to a larger set. G is also a generalization operator which can be thought of as taking a formula or a set of formulae and returning a subset of its logical closure.

Now we have a way to systematically characterize changes in models. We also have a quantitative measure of this change: gⁱ(Δ) is a smaller change in Δ than gʲ(Δ) iff i < j. The definition of gⁱ(Δ) is the obvious one: (1) Δ if i = 0, (2) g^(i−1)(g(Δ)) otherwise.

If ψ is inconsistent with μ, we change the models of ψ by applying the operator g. If we obtain at least one interpretation that makes μ true then we are done: we can define ψ ∘ μ to be G(ψ) ∪ {μ}. If not, we apply g again and keep on this way. Let k be the least value of i for which μ holds in some interpretation in the set gⁱ(mod(ψ)). It is clear that this is also the least value of i for which the set of formulae Gⁱ(ψ) ∪ {μ} is consistent.

Definition: ψ ∘ μ = G^k(ψ) ∪ {μ}, where k is the least value of i for which Gⁱ(ψ) ∪ {μ} is consistent.

At first sight it might appear that we are doing an overkill by generalizing ψ with respect to all the ground atoms in it, since the cause of inconsistency might be located in only a few of them. In [Dalal, 1988] we show that revised knowledge is the same (modulo logical equivalence) even if ψ is generalized with respect to only the conflicting atoms. Thus, if it is easier to find the set of conflicting atoms, then it would be advantageous to generalize ψ with respect to the conflicting atoms only.

Example: Let ψ = {a, ¬b} and μ = b. Then² mod(ψ) = {{a}}, mod(μ) = {{a, b}, {b}}. Since ψ ∪ {μ} is inconsistent, we generalize ψ:

mod(G(ψ)) = g(mod(ψ)) = {∅, {a}, {a, b}}

Since G(ψ) is consistent with μ, k = 1. Thus Revise(ψ, μ) is G(ψ) ∪ {b}, whose only model is {a, b}. Since we are not interested in exact syntactic representation of a formula, ψ ∘ μ can be expressed as any set of formulae whose only model is {a, b}.

¹Note that while g is a function on interpretation(s), G is a function on a formula or a set of formulae.
²We restrict the set of atoms to A(ψ ∪ μ), and we represent an interpretation by the set of atoms which are assigned true.

4 Syntactic Characterization of Revision

We present a technique to compute G(ψ) by syntactic transformation of ψ, without using models of ψ or invoking any model-theoretic constructions. Since ψ ∘ μ is defined in terms of G we would effectively have a syntactic transformation technique to compute ψ ∘ μ. For the purpose of this section, we represent a set of formulae by a conjunction of all the formulae in the set. We use the following lemma and definition from [Weber, 1987]:

Lemma: Let ψ be a formula and α be an atom. There exist formulae ψ_α⁺ and ψ_α⁻ such that (1) ψ_α⁺ and ψ_α⁻ do not contain α, and (2) ψ ≡ (α ∧ ψ_α⁺) ∨ (¬α ∧ ψ_α⁻).

We replace each α in ψ by true (or false) to obtain ψ_α⁺ (or ψ_α⁻). The resulting expressions can be simplified by evaluating subexpressions consisting of false's or true's, until all of these constants are eliminated.

Definition: Let ψ, α, ψ_α⁺ and ψ_α⁻ be as above; then res_α(ψ) = ψ_α⁺ ∨ ψ_α⁻ is called the resolvent of ψ with respect to α.

Theorem³: Let ψ be a formula and {α₁, ..., αₙ} be the set of atoms occurring in ψ; then

G(ψ) ≡ res_{α₁}(ψ) ∨ ... ∨ res_{αₙ}(ψ)

Thus we have a method to compute G(ψ) given any formula ψ. This method can be used to compute the revised knowledge ψ ∘ μ following the definition in the last section. A caveat in this characterization is that it requires checks of logical consistency in order to establish the minimum k for which G^k(ψ) needs to be computed. This problem is in general NP-complete for propositional logic. One such check is required for every step of generalization.

Example (continued): Consider the example of the last section. It is convenient to express ψ as a ∧ ¬b. ψ and μ conflict in the truth-value of the atom b. Thus, we need to resolve ψ with only b:

ψ_b⁺ = false
ψ_b⁻ = a
res_b(ψ) = ψ_b⁺ ∨ ψ_b⁻ = a

Thus, ψ ∘ μ = G(ψ) ∪ {μ} ≡ a ∧ b.

5 Retained Knowledge

Let ∘ be any revision scheme. The revised knowledge ψ ∘ μ is expected to represent the composition of the old knowledge ψ and the new information μ. In this section we will formalize the notion of how much knowledge represented by ψ and μ is retained in ψ ∘ μ.

Definition: For formulae ψ and μ and a revision scheme ∘, if there exist formulae σ_ψ, σ_μ and σ_{ψ∘μ} such that ψ ⊨ σ_ψ, μ ⊨ σ_μ, and ψ ∘ μ ≡ σ_ψ ∧ σ_μ ∧ σ_{ψ∘μ}, then σ_ψ is old-knowledge retained by ∘, σ_μ is new-knowledge retained by ∘, and σ_{ψ∘μ} is extra-knowledge added by ∘.

Theorem: If ∘ is a revision scheme then for every ψ and μ the following statements are equivalent:

1. ψ ∘ μ ⊨ μ and ψ ∧ μ ⊨ ψ ∘ μ;
2. there exists σ_ψ such that ψ ⊨ σ_ψ and ψ ∘ μ ≡ σ_ψ ∧ μ.

Since Principles 4 and 5 (section 2) entail condition 1, this theorem demonstrates that any acceptable revision scheme retains complete new-knowledge and adds no extra-knowledge. Such schemes differ only in the amount of old-knowledge retained. A scheme that retains maximum old-knowledge is more desirable.

Definition: Let ∘ and ∘′ be any two revision schemes that retain complete new knowledge and add no extra knowledge. ∘ is said to retain at least as much knowledge as ∘′, i.e., ∘′ ⪯ ∘, if for all ψ and μ there exist σ_ψ and σ′_ψ such that ψ ∘ μ ≡ σ_ψ ∧ μ, ψ ∘′ μ ≡ σ′_ψ ∧ μ, and σ_ψ ⊨ σ′_ψ. ∘ is said to retain more knowledge than ∘′ iff ∘′ ⪯ ∘ and ∘ ⋠ ∘′.

The following theorem gives a more direct way to determine whether one revision scheme retains more old-knowledge than the other.

Theorem: ∘ retains more old-knowledge than ∘′ iff for every ψ and μ, ψ ∘ μ ⊨ ψ ∘′ μ, while for some ψ and μ, ψ ∘′ μ ⊭ ψ ∘ μ.
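The model-theoretic definition of Section 3 is small enough to run directly. The following is a brute-force sketch over explicit model sets (all function names are mine, not the paper's); it reproduces both the worked example of Section 3 and the database-update example of Section 6.4:

```python
from itertools import product

# Formulas are Python boolean functions of a world dict; interpretations
# are frozensets of true atoms, as in the paper's footnote 2.
def models(formula, atoms):
    result = set()
    for values in product([False, True], repeat=len(atoms)):
        w = dict(zip(atoms, values))
        if formula(w):
            result.add(frozenset(a for a in atoms if w[a]))
    return result

def g(worlds, atoms):
    """g(Delta): interpretations differing from some member of Delta
    in the truth value of at most one atom."""
    out = set(worlds)
    for w in worlds:
        for a in atoms:
            out.add(w ^ {a})   # flip atom a
    return out

def revise(psi, mu, atoms):
    """mod(psi o mu): apply g to mod(psi) until it meets mod(mu);
    returns that intersection together with the least such k."""
    psi_worlds, mu_worlds = models(psi, atoms), models(mu, atoms)
    if not psi_worlds or not mu_worlds:
        raise ValueError("psi and mu must each be consistent")
    k = 0
    while not (psi_worlds & mu_worlds):
        psi_worlds = g(psi_worlds, atoms)
        k += 1
    return psi_worlds & mu_worlds, k

atoms = ['a', 'b']

# Section 3's example: psi = {a, not-b}, mu = b  =>  one model {a, b}, k = 1
worlds, k = revise(lambda w: w['a'] and not w['b'], lambda w: w['b'], atoms)
print(k, sorted(map(sorted, worlds)))    # 1 [['a', 'b']]

# Section 6.4's example: psi = a AND b, mu = (not a) OR (not b)
worlds2, k2 = revise(lambda w: w['a'] and w['b'],
                     lambda w: not w['a'] or not w['b'], atoms)
print(k2, sorted(map(sorted, worlds2)))  # 1 [['a'], ['b']]
```

The syntactic route of Section 4 (disjunction of resolvents) yields the same model sets; the enumeration here simply trades the consistency checks for an exponential sweep over interpretations.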
6 Related Work and Applications

6.1 A Logic of Theory Change

[Makinson, 1985] provides an excellent survey of the work by Gärdenfors, Alchourron and Makinson (GAM) on the formal aspects of a logic of theory change. A theory is defined as a set of propositions (formulae) closed under logical consequence, i.e., A is a theory iff Cn(A) = A, where Cn is a consequence operation. Three operations are defined on a set of propositions A: expansion, where a new proposition x is set-theoretically added to A; contraction (A ∸ x), where a proposition x which is in the theory Cn(A) is rejected; and revision (A ∔ x), where a proposition x inconsistent with the theory Cn(A) is added to it under the requirement that the revised theory be consistent. This operation of revision is very similar to the notion of revision introduced in this paper.

Gärdenfors developed some general postulates that seem desirable for contraction and revision. His postulates for revision can be expressed as:

(G1) A ∔ x is always a theory;
(G2) x ∈ A ∔ x;
(G3) If ¬x ∉ Cn(A) then A ∔ x = Cn(A ∪ {x});
(G4) If ¬x ∉ Cn(x) then A ∔ x is consistent;
(G5) If Cn(x) = Cn(y) then A ∔ x = A ∔ y;
(G6) A ∔ (x ∧ y) ⊆ Cn((A ∔ x) ∪ {y}) for any theory A;
(G7) Cn((A ∔ x) ∪ {y}) ⊆ A ∔ (x ∧ y) for any theory A, provided that ¬y ∉ A ∔ x.

For a theory A, contraction is then defined using the identity: A ∸ x = (A ∔ ¬x) ∩ A, and conversely (for any set of propositions A): A ∔ x = Cn((A ∸ ¬x) ∪ {x}).

How does our approach compare with that of GAM? A superficial difference between the approach of GAM and the one presented here is that theirs is defined in terms of the set of formulas expressing the KB. If the KB is taken however to be the logical closure of these formulas (as suggested by a knowledge-level approach) this difference disappears. In fact, in the expanded version of this paper we characterize the revision schemes ∔ satisfying G1-G7 in model-theoretic terms.
On the other hand, the GAM approach is more general since it applies to any logic for which a notion of logical closure Cn is defined, while ours currently applies to only standard propositional logic. We do however have:

Theorem: The revision scheme ∘ satisfies the Gärdenfors postulates G1-G7.

It is obvious that ∘ satisfies axioms G1-G5, and simple model-theoretic arguments establish conditions G6-G7.⁴ There are of course many other revision schemes that satisfy these conditions, including defining A ∔ x to be just Cn(x) whenever ¬x ∈ Cn(A), and Cn(A ∪ {x}) otherwise. Our scheme is more conservative than at least some of those satisfying G1-G7: it preserves more old-knowledge than the previous admittedly trivial revision, as will be shown in the example of section 6.4.

Secondly, not all revision schemes satisfying axioms G1-G7 satisfy our postulate of fairness. One form of fairness would be to require that the result of ∔ not depend on the accidents of naming propositions; i.e., if f is an isomorphism on A, then we would expect f(A ∔ x) = f(A) ∔ f(x). Even some of the revisions considered in [Makinson, 1985] are unfair in this sense: they pick arbitrary maximally consistent subsets of A which do not contain x.

Finally, GAM do not suggest any algorithm to implement their constructions, although they do have a theorem characterizing the acceptable revisions in terms of maximally consistent subsets of A which do not entail x. Such a definition would seem to be much more difficult to implement than that presented in Section 4; but then ∘ requires tests of consistency, so we cannot make any great claims to efficiency. Observe also that the definition of ∘ shows that, contrary to the intuitions voiced in [Makinson, 1985], contraction is not necessarily more primitive/basic than revision: defining ∘ does not involve contraction.

³Unless otherwise mentioned, proofs appear in [Dalal, 1988].
⁴There is also a proof involving the notion of "partial meet" functions introduced in [Alchourron et al., 1985].

6.2 Counterfactuals

A counterfactual is a statement like "if p, then q", where the premise p is either known or expected to be false. It is represented as p > q and is defined to be true in a world⁵ iff q is true in every most similar (possible) world in which the premise p holds. In an excellent paper [Ginsberg, 1986], Ginsberg presents a formal description of counterfactual implication and discusses the issues involved in implementing it. In our framework, p > q in a world ψ is defined to be true iff ψ ∘ p ⊨ q. There is only one most similar possible world: ψ ∘ p.

Given a world F, [Ginsberg, 1986] defines a partial order among the subsets of F based on set inclusion. The set of possible worlds for p in F is defined to be:

W(p, F) = {T ⊆ F | T ⊭ ¬p, ¬B(T), and for all U, T ⊂ U ⊆ F implies U ⊨ ¬p or B(U)}

The predicate B is called the bad-world predicate. Its purpose is to rule out certain worlds, say, which are completely meaningless. p > q is defined to be true in a world F iff for every T ∈ W(p, F), T ∪ {p} ⊨ q.

Because of certain examples involving counterfactual statements, Ginsberg opts for a definition of p > q which depends on the syntactic form of p and q. As such, his definition clearly differs from our semantic definition. In the full version of this paper, we plan to show the relationship of Ginsberg's definition to the work of GAM, and hence further relate it to our own definition of ∘.

⁵A world is a set of propositions, which are not necessarily atomic.

6.3 Diagnosis from First Principles

Assume one is first given a description of some system (say, a physical device) and then an observation of the system's behavior. If the two are inconsistent then one is confronted with a diagnostic problem, namely, to determine those system components whose abnormal behavior can account for this discrepancy. Suppose ψ is the system's description, where there are propositions asserting the normality of all components; and suppose μ is an observation that is inconsistent with ψ. By protecting all but the normality propositions, it is possible to view ψ ∘ μ as representing the revised description of the system.⁶ This revised description will implicitly contain information about all abnormal components: the ones for which normality propositions do not hold.

In [Reiter, 1987], Reiter proposes a theory of diagnosis from first principles (references to other work on diagnosis can be found in Reiter's paper) which starts from the same initial ψ and μ. He then suggests an algorithm which produces the set of abnormal components explicitly. Space limitations only permit us to state that our scheme would find only those diagnoses which involve the least number of abnormal components: thus if one diagnosis blamed component b, and the other components c, d and e, then using ∘ only the former would be reported, while [Reiter, 1987] would report both.

⁶A protected formula must hold even after the revision. It is like an integrity constraint in a database.

6.4 Updates in Logical Databases

A database can be considered as a set of formulae which models our knowledge about the real world. One can add new information to the database and query it about its current knowledge. Given new information, the update problem is to define and compute the revised state of the database. Notable approaches to solving this problem have been suggested by Fagin, Ullman and Vardi [Fagin et al., 1983], Borgida [Borgida, 1985], Winslett [Winslett, 1986] and Weber [Weber, 1987]. We suggest that the update should be considered as the revision operator ∘.

Example: Let ψ = (a ∧ b) and μ = ¬a ∨ ¬b. Since ψ ∪ {μ} is inconsistent, we generalize ψ with respect to both a and b:

G(ψ) = res_a(ψ) ∨ res_b(ψ) ≡ a ∨ b

Since G(ψ) is consistent with μ, we are done: the revised knowledge is G(ψ) ∪ {μ} ≡ (a ∧ ¬b) ∨ (¬a ∧ b). Given the model-theoretic nature of our revision mechanism, the result of the update will be the same whether the knowledge base is presented as above, or as {a, b} or even {a, b, a ∧ b}.

In contrast, all four of [Fagin et al., 1983], [Winslett, 1986], [Ginsberg, 1986] and [Weber, 1987] obtain (¬a ∨ ¬b) as the revised database. They are thus less "conservative", losing all the knowledge in the hypotheses set ψ. Moreover, [Ginsberg, 1986] and [Fagin et al., 1983] would report a different answer (the one produced by our mechanism) if the database was presented as {a, b}, but not as (a ∧ b)! It seems counter-intuitive that updates should produce different results under such relatively minor variations in the syntax of the database: even the limited logic of explicit beliefs in [Levesque, 1984b] considers these formulations equivalent!

In a more complete version of this paper, we relate the other update schemes to the Gärdenfors postulates. In [Dalal, 1988] we show that ∘ preserves more old knowledge than they do.

7 Conclusions

The major contribution of this paper is a semantic definition of revision in propositional knowledge bases, providing a new point in the spectrum of approaches to this long-standing problem. This definition is founded on a number of a priori principles (especially minimality of change and fairness) and is also given a syntactic characterization. The application of the approach in several domains is also discussed. The notion of old-knowledge retained is formalized, and the approach defined in this paper is shown to retain more old-knowledge than some previous proposals.
In addition to the results mentioned earlier, we also propose to investigate the extension of this work in several directions:

- Establish further criteria for fairness and preservation of old knowledge, and evaluate all the proposals against these.
- Extend the language of revisions to first order logic and epistemic languages like Levesque's FOPC.
- Extend the notion of KB to allow differential treatment of certain atoms, or even formulas (e.g., integrity constraints in a database), so that some beliefs are more easily given up. This of course relaxes the principle of fairness.

Acknowledgements

I wish to express my gratitude to Alex Borgida for his advice and help at every stage of this research. This paper has benefited from the discussions and comments of my colleagues, including T. Imielinski, J. Radhakrishnan, J. Chomicki, K. Williamson, A. Prieditis, T. Marlowe and J. Koeneman. In addition, I wish to thank D. Etherington, J.-F. Lamy, and the two anonymous referees for their extremely useful suggestions.

References

[Alchourron et al., 1985] C. Alchourron, P. Gärdenfors, and D. Makinson. On the logic of theory change: partial meet contraction and revision functions. The Journal of Symbolic Logic, 50(2):510-530, June 1985.

[Borgida, 1985] A. Borgida. Language features for flexible handling of exceptions in information systems. ACM Transactions on Database Systems, 10(4):565-603, 1985.

[Dalal, 1988] M. Dalal. Updates in Propositional Databases. Technical Report DCS-TR-222, Department of Computer Science, Rutgers University, February 1988.

[Fagin et al., 1983] R. Fagin, J.D. Ullman, and M.Y. Vardi. On the semantics of updates in databases. In Proc. of 2nd ACM Symp. on Principles of Database Systems, pages 352-365, Atlanta, March 1983.

[Ginsberg, 1986] M.L. Ginsberg. Counterfactuals. Artificial Intelligence, 30(1):35-79, 1986.

[Levesque, 1984a] H.J. Levesque. Foundations of a functional approach to knowledge representation.
Artificial Intelligence, 23(2):155-212, 1984.
[Levesque, 1984b] H.J. Levesque. A logic of explicit and implicit belief. In Proceedings of AAAI-84, pages 198-202, 1984.
[Makinson, 1985] D. Makinson. How to give it up: a survey of some formal aspects of the logic of theory change. Synthese, 62:347-363, 1985.
[Newell, 1981] A. Newell. The knowledge level. The AI Magazine, 2(2):1-20, 1981.
[Reiter, 1987] R. Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32(1):57-95, 1987.
[Weber, 1987] Andreas Weber. Updating propositional formulas. In L. Kerschberg, editor, Expert Database Systems, pages 487-500, The Benjamin/Cummings Publishing Company, Inc., 1987.
[Winslett, 1986] M. Winslett. Is belief revision harder than you thought? In Proceedings of AAAI-86, pages 421-427, 1986.
Adding Number Restrictions to a Four-Valued Terminological Logic
Peter F. Patel-Schneider
Schlumberger Palo Alto Research
3340 Hillview Avenue
Palo Alto, California 94304

Abstract
An intuitive four-valued semantics can be used to develop expressively powerful terminological logics which have tractable subsumption. If a four-valued identity is also used, number restrictions can be added to the logic while retaining tractability. The subsumptions supported by the logic are a type of "structural" subsumption, where each structural component of one concept must have an analogue in the other concept. Structural subsumption captures an important set of subsumptions, similar to the subsumptions computed in KL-ONE and NIKL. This shows that the trade-off between expressive power and computational tractability which plagues terminological logics based on standard, two-valued semantics can be defeated while still retaining a useful and semantically supported set of subsumptions.

1 Introduction
Terminological logics formalize the notion of frames, a notion present in many current knowledge representation systems, as structured types, often called concepts. These logics include a set of syntactic constructs that form concepts, and other, related, notions such as roles. Terminological logics are based on formal model-theoretic semantics which provide firm definitions for the syntactic constructs of the logic.
The allowable concepts vary between different terminological logics, but generally concepts are the conjunction of a set of more general concepts and a set of restrictions on the attributes of instances of the concept. Such concepts can be loosely rendered as noun phrases such as
a student and a female whose major is a computer science major, and who has at least three enrolled courses, each of which is a graduate course whose department is an engineering department.
Terminological logics are part of KL-ONE [Brachman and Schmolze, 1985], NIKL [Moser, 1983], KRYPTON [Brachman et al., 1983; Brachman et al., 1985], and KANDOR [Patel-Schneider, 1984], as well as several other knowledge representation systems.
The most important operation in terminological logics is determining if one concept subsumes, i.e., is more general than, another. A formal definition of subsumption is part of the semantics of terminological logics. Informally, one concept subsumes another if any object that satisfies the requirements of the second concept (i.e., is an instance of the second concept) must also satisfy the requirements of the first. For example, the concept person with at least two children subsumes the concept person with at least three children who are lawyers in standard terminological logics. This is so because, in the standard semantics for terminological logics, all instances of the second concept must also be instances of the first.
However, as shown by Levesque and Brachman [1987], computing subsumption is intractable in expressively powerful terminological logics based on standard semantics. This intractability is a severe problem, since terminological reasoners would be useful in many knowledge-based systems, and it is not desirable to have components of knowledge-based systems that may take an extremely long time to compute common operations. To achieve tractable subsumption, at least in the worst case, the logic must be expressively weak, too weak to be usable in knowledge-based systems.
The trade-off between expressive power and computational tractability can be defused by using a weak semantics for terminological logics, one that supports fewer subsumption relationships, resulting in tractable subsumption for expressively powerful logics.
This solution retains a firm semantic foundation for the system, as opposed to the more usual method of achieving tractability by providing a sound but not complete reasoner (as in KL-ONE and NIKL).
A tractable terminological logic using a weak semantics based on the four truth values of tautological entailment [Belnap, 1977; Levesque, 1984] has been developed [Patel-Schneider, 1986]. The logic is more expressive than the terminological logic that Levesque and Brachman discovered to be computationally intractable in the standard semantics. However, it is still expressively weak, as it does not include number restrictions, a construct appearing in many semantic networks and frame-based knowledge representation systems. A number restriction is a way of restricting the number of values that a role has. For example, "at least three children who are lawyers" is a number restriction. As number restrictions are useful in many domains, a terminological logic without number restrictions is lacking an extremely useful construct.
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.
Number restrictions introduce a new source of complexity to terminological logics. When they are added, new semantic ideas, such as identity and cardinality, have to be considered, thus complicating the semantics and the analysis of the logic.
The computational problem with number restrictions is that the standard two-valued notion of identity sanctions subsumptions that are hard to compute, even in an otherwise four-valued semantics. This paper presents a four-valued notion of identity that solves this computational problem, resulting in a terminological logic incorporating number restrictions that has tractable subsumption, a subsumption similar to the "structural" subsumption of KL-ONE and NIKL.
Of course, there is a price to be paid for using a four-valued identity.
A four-valued identity is yet another change from the standard semantics, resulting in less correspondence between the semantics of the terminological logic and the standard semantics. However, the change is not too unappealing, and incorporating number restrictions while still retaining tractable subsumption and a similarity to subsumption in KL-ONE and NIKL is useful.

2 Syntax and Intuitive Meaning
The terminological logic developed here has two major syntactic types, concepts and roles, corresponding to the frames and slots of most frame-based knowledge representation systems. Concepts represent collections of related individuals and roles describe relations between these individuals. The intuitive meanings of the various constructs in the language are derived from the intuitive meanings of constructs in frame-based knowledge representation systems. The logic is mostly an extension of the logic in [Patel-Schneider, 1986]. It is closely related to the terminological logics of KL-ONE, NIKL, KRYPTON, and KANDOR.
Concepts can be formed in the following ways:

<concept> ::= <atomic concept> |
              (and <concept>+) |
              (all <role> <concept>) |
              (atleast <minimum> <role>) |
              (atmost <maximum> <role>)
<minimum> ::= <positive integer>
<maximum> ::= <non-negative integer>

The construct (and <concept>+) is a conjunction construct. Informally, an individual belongs to (and C1 C2) if it belongs to both C1 and C2. The construct (all <role> <concept>) is a role restriction construct. Informally, an individual α belongs to (all R C) if, for every individual β, either α is not related to β by R, or β belongs to C. The constructs (atleast <minimum> <role>) and (atmost <maximum> <role>) are number restriction constructs. Informally, an individual belongs to (atleast n R) if it is related to at least n distinct individuals by R. If n is 1, this reduces to a role filler existence construct.
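The grammar above can be mirrored directly in code. The following is a hypothetical encoding (ours, not the paper's): concepts and roles are written as nested tuples shaped like the s-expressions of the grammar, with a small well-formedness check.

```python
# Illustrative encoding (not from the paper): concepts and roles as
# nested tuples mirroring the grammar in Section 2.
def is_concept(x):
    if isinstance(x, str):                       # <atomic concept>
        return True
    if not isinstance(x, tuple) or not x:
        return False
    op = x[0]
    if op == "and":                              # (and <concept>+)
        return len(x) >= 2 and all(is_concept(c) for c in x[1:])
    if op == "all":                              # (all <role> <concept>)
        return len(x) == 3 and is_role(x[1]) and is_concept(x[2])
    if op == "atleast":                          # (atleast <minimum> <role>)
        return len(x) == 3 and isinstance(x[1], int) and x[1] >= 1 and is_role(x[2])
    if op == "atmost":                           # (atmost <maximum> <role>)
        return len(x) == 3 and isinstance(x[1], int) and x[1] >= 0 and is_role(x[2])
    return False

def is_role(x):
    if isinstance(x, str):                       # <atomic role>
        return True
    if not isinstance(x, tuple) or not x:
        return False
    if x[0] == "and":                            # (and <role>+)
        return len(x) >= 2 and all(is_role(r) for r in x[1:])
    if x[0] == "restrict":                       # (restrict <role> <concept>)
        return len(x) == 3 and is_role(x[1]) and is_concept(x[2])
    return False

# The "person with at least two children who are lawyers" concept:
c = ("and", "person", ("atleast", 2, ("restrict", "child", "lawyer")))
print(is_concept(c))   # True
```

The encoding keeps the logic's own prefix syntax, so later manipulations (canonicalization, subsumption checks) can be written as ordinary tuple recursion.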
An individual belongs to (atmost n R) if it is related to at most n distinct individuals by R.
Roles can be formed in the following ways:

<role> ::= <atomic role> |
           (and <role>+) |
           (restrict <role> <concept>)

The construct (and <role>+) is a conjunction construct, similar to (and <concept>+). The construct (restrict <role> <concept>) is a restriction construct. Two individuals are related by (restrict R C) if they are related by R and the second is also an instance of C.
The addition of number restrictions is the major change between this language and the language of [Patel-Schneider, 1986]. This addition brings the expressive power of the language nearly up to par with the terminological languages of KL-ONE and NIKL.

3 Formal Semantics
The formal semantics of the logic is an extension of the semantics of [Patel-Schneider, 1986], supporting number restrictions via a notion of cardinality based on a four-valued identity. The basic ideas underlying the semantics are similar to the ideas underlying other denotational semantics. The semantics is based on semantic structures or possible worlds, each of which contains a set of individuals and a mapping from syntactic constructs (concepts and roles) into their meaning in the semantic structure.
The truth values of this semantics are {t} or true, {f} or false, {} or unknown, and {t, f} or contradictory.¹ Thus the set of truth values forms the powerset of {t, f}, written 2^{t,f}.
A semantic structure is a triple, (D, V, I), where D is a set of individuals, V is a function that takes concepts and roles into their extensions, and I is an identity relationship over D. The extension of a concept is a mapping from D to 2^{t,f}. The extension of a concept is thus a four-valued characteristic function, not a two-valued characteristic function. Similarly, the extension of a role is a mapping from D × D to 2^{t,f}.
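A concrete way to picture the four truth values is as Python sets, with the extension of a concept as a dictionary from individuals to such sets. This is our illustration (the individual names are invented), not notation from the paper:

```python
# The four truth values as subsets of {"t", "f"} (illustrative encoding):
TRUE, FALSE, UNKNOWN, CONTRADICTORY = {"t"}, {"f"}, set(), {"t", "f"}

# A concept extension is a four-valued characteristic function over
# hypothetical individuals:
ext = {"alice": TRUE, "bob": UNKNOWN, "carol": CONTRADICTORY}

# The derived positive extension (individuals known to belong) and
# negative extension (individuals known not to belong):
positive = {d for d, v in ext.items() if "t" in v}
negative = {d for d, v in ext.items() if "f" in v}
print(sorted(positive), sorted(negative))   # ['alice', 'carol'] ['carol']
```

Note that the two derived sets are neither disjoint nor exhaustive: carol is in both, bob in neither, which is exactly the flexibility the four-valued setting provides.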
The identity relationship is also a mapping from D × D to 2^{t,f}, which must satisfy
1. I(d, d) = {t},
2. I(d, e) = I(e, d), and
3. if t ∈ I(d, e) and t ∈ I(e, f) then t ∈ I(d, f),
for all d, e, f ∈ D. These restrictions make the identity relationship into a four-valued version of an equivalence relation.
Although this semantics is not too far distant from a standard two-valued extensional semantics, there are some differences that need explanation. One way of motivating these differences is to treat the extension of a concept, and also of a role, as two extensions, the positive extension and the negative extension. The positive extension of the concept C is the set of individuals that belong to the concept, defined as {d ∈ D : t ∈ V[C](d)}. The negative extension of the concept C is the set of individuals that definitely do not belong to the concept, defined as {d ∈ D : f ∈ V[C](d)}. Unlike the case in two-valued semantics, these two sets need not be complements of each other: there may be individuals that are members of neither of these sets, and also individuals that are members of both of these sets.
Individuals that are members of neither set are not known to belong to the concept and are also not known not to belong to the concept. This is a perfectly reasonable state for a system that is not a perfect reasoner or does not have complete information.

¹A slightly different set of truth values that could be used is the set {{t}, {t, f}, {}, {f}} used by Frisch [1985]. This set of truth values gives a slightly stronger logic, which may be tractable here, at the expense of removing a useful symmetry. Note that the set of truth values {{}, {t}, {f}}, used by some of the popular three-valued logics, is usually as intractable as two-valued logics.
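On a finite domain, the three restrictions on I can be checked mechanically. The following sketch is our illustration (domain elements and the dictionary encoding are assumptions, with truth values as subsets of {"t", "f"}):

```python
# Hypothetical finite check (not from the paper) that a four-valued
# identity I : D x D -> subsets of {"t","f"} satisfies the three
# restrictions of Section 3.
def is_identity(D, I):
    # 1. I(d,d) = {t}: each element is (only) known identical to itself
    if any(I[d, d] != {"t"} for d in D):
        return False
    # 2. I(d,e) = I(e,d): symmetry
    if any(I[d, e] != I[e, d] for d in D for e in D):
        return False
    # 3. t in I(d,e) and t in I(e,f) imply t in I(d,f):
    #    transitivity of known identity
    for d in D:
        for e in D:
            for f in D:
                if "t" in I[d, e] and "t" in I[e, f] and "t" not in I[d, f]:
                    return False
    return True

D = ["a", "b", "c"]
I = {(x, y): set() for x in D for y in D}
for x in D:
    I[x, x] = {"t"}
I["a", "b"] = I["b", "a"] = {"f"}     # a and b are known distinct
print(is_identity(D, I))              # True
```

Only positive identity (t) is required to be transitive; known distinctness (f) carries no such constraint, which is what later allows minimum and maximum cardinality to diverge.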
Individuals which are in both the positive and negative extension of a concept can be thought of as inconsistent with respect to that concept, in that there is evidence to indicate that they are in the extension of the concept and also (conflicting) evidence to indicate that they are not in the extension of the concept. (Such individuals need not be contradictory with respect to other concepts.) This is a slightly harder state to rationalize but can be considered a possibility in the light of inconsistent information.
The difference between this semantics and the one in [Patel-Schneider, 1986] is the presence of the four-valued identity relationship. This relationship is easier to understand if viewed in a manner similar to the positive and negative extension view of the extension function. Under this view, if t ∈ I(d, e) then d and e are known to be identical, and if f ∈ I(d, e) then d and e are known not to be identical, i.e., known to be distinct. As above, it is possible that two individuals are neither known to be identical nor known not to be identical, and it is also possible that two individuals are both known to be identical and known not to be identical. The reflexive, symmetric, and transitive nature of the identity relationship does, however, serve to make it similar to the standard two-valued notion of equality, and thus makes the change reasonably palatable.
A notion of cardinality can be derived from this four-valued identity. A set does not have a unique cardinality, but instead has a minimum cardinality, based on which of its members are known to be distinct, and a maximum cardinality, based on how many members it has which are not known to be identical.
The minimum cardinality of a set, X, is defined to be the size of its largest subset for which all elements are known to be distinct,

minc(X) = max { |Y| : Y ⊆ X and for all d, e ∈ Y with d ≠ e, f ∈ I(d, e) }.

Similarly, the maximum cardinality of a set, X, is defined to be the size of its largest subset for which no two elements are known to be identical,

maxc(X) = max { |Y| : Y ⊆ X and for all d, e ∈ Y with d ≠ e, t ∉ I(d, e) }.

It is possible for the maximum and minimum cardinality of a set to be different. For example, if no identity relationships, positive or negative, are known, then the maximum cardinality of a set is its standard cardinality and its minimum cardinality is 1. It is also possible for the maximum cardinality of a set to be less than the minimum cardinality of a set. For example, if the identity relationship is total (everything is both identical to and distinct from everything else), then the maximum cardinality of a set is 1 and its minimum cardinality is its standard cardinality.
The four-valued identity and the derived notions of minimum and maximum cardinality form another departure from the standard semantics. The main problem with this departure is not the change from a two-valued identity to a four-valued one, which is in keeping with the basic four-valued nature of the semantics, but the associated divorcing of identity from equality in the domain. The four-valued identity weakens the connection between elements of the domain and objects in the world, suggesting instead an interpretation where elements of the domain are more akin to descriptions. Going from individuals to descriptions is not a fatal problem, but requires some rethinking of how well the semantics corresponds to its desired role. The semantics can perhaps best be viewed as a semantics of belief, where the elements of the domain are descriptions in some agent's belief space. In this view of the semantics, if f ∈ I(d, e), then d and e are believed to be descriptions of distinct objects.
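On a finite set the two cardinalities can be computed by brute force over subsets. This sketch is our illustration (the dictionary encoding is an assumption, and I is assumed to be stored symmetrically, i.e., for both orders of each pair):

```python
from itertools import combinations

# Brute-force minc/maxc, following the definitions above.
# "Known distinct" means f in I(d,e); "known identical" means t in I(d,e).
def minc(X, I):
    # size of the largest subset whose elements are pairwise known distinct
    for k in range(len(X), 0, -1):
        for Y in combinations(X, k):
            if all("f" in I[d, e] for d, e in combinations(Y, 2)):
                return k
    return 0

def maxc(X, I):
    # size of the largest subset with no two elements known identical
    for k in range(len(X), 0, -1):
        for Y in combinations(X, k):
            if all("t" not in I[d, e] for d, e in combinations(Y, 2)):
                return k
    return 0

X = ["a", "b", "c"]
I = {(x, y): set() for x in X for y in X}   # nothing known about identity
print(minc(X, I), maxc(X, I))               # 1 3
```

With nothing known, maximum cardinality is the standard cardinality and minimum cardinality is 1, matching the example in the text; a total identity relationship reverses the two.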
Similarly, if t ∈ V[C](d), then d is believed to be a description of an object that belongs to the extension of C. Of course, this view does not change the underlying four-valued nature of the semantics, so it is possible to have incomplete and inconsistent beliefs about identity.
The extensions of non-atomic concepts and roles are specified in terms of conditions that they have to meet:

t ∈ V[(and C1 ... Cn)](d) iff for each i, t ∈ V[Ci](d)
f ∈ V[(and C1 ... Cn)](d) iff for some i, f ∈ V[Ci](d)
t ∈ V[(all R C)](d) iff ∀e, f ∈ V[R](d, e) or t ∈ V[C](e)
f ∈ V[(all R C)](d) iff ∃e, t ∈ V[R](d, e) and f ∈ V[C](e)
t ∈ V[(atleast m R)](d) iff minc{e : t ∈ V[R](d, e)} ≥ m
f ∈ V[(atleast m R)](d) iff maxc{e : f ∉ V[R](d, e)} < m
t ∈ V[(atmost m R)](d) iff maxc{e : f ∉ V[R](d, e)} ≤ m
f ∈ V[(atmost m R)](d) iff minc{e : t ∈ V[R](d, e)} > m
t ∈ V[(and R1 ... Rn)](d, e) iff for each i, t ∈ V[Ri](d, e)
f ∈ V[(and R1 ... Rn)](d, e) iff for some i, f ∈ V[Ri](d, e)
t ∈ V[(restrict R C)](d, e) iff t ∈ V[R](d, e) and t ∈ V[C](e)
f ∈ V[(restrict R C)](d, e) iff f ∈ V[R](d, e) or f ∈ V[C](e)

These conditions are designed so that the formal semantics corresponds closely to the previously-discussed informal meaning of concepts and roles.
For example, the positive extension of (and C1 C2) must be the intersection of the positive extensions of C1 and C2 and its negative extension must be the union of their negative extensions. In this way the intuitive notion of conjunction is made formal. Similarly, the conditions above require that if an element of the domain is in the positive extension of (atleast m R) then it must be related to at least m domain elements, known to be pairwise distinct, by the positive extension of R. Also, if an element of the domain, d, is in the positive extension of (atmost m R) then any set of domain elements, no two of which are known to be identical, that are not known to be related to d by the negative extension of R must have cardinality at most m.
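These conditions can be evaluated directly over a finite semantic structure. The following sketch is our illustration, not code from the paper: truth values are subsets of {"t", "f"}, VC and VR hold the extensions of atomic concepts and roles, and I is assumed stored for both orders of each pair.

```python
from itertools import combinations

def minc(X, I):   # largest subset of X with all elements pairwise known distinct
    return max((k for k in range(1, len(X) + 1)
                if any(all("f" in I[d, e] or "f" in I[e, d]
                           for d, e in combinations(Y, 2))
                       for Y in combinations(X, k))), default=0)

def maxc(X, I):   # largest subset of X with no two elements known identical
    return max((k for k in range(1, len(X) + 1)
                if any(all("t" not in I[d, e] and "t" not in I[e, d]
                           for d, e in combinations(Y, 2))
                       for Y in combinations(X, k))), default=0)

def vc(c, d, D, VC, VR, I):          # four-valued value of concept c at d
    if isinstance(c, str):
        return VC[c][d]
    op = c[0]
    if op == "and":
        vals = [vc(x, d, D, VC, VR, I) for x in c[1:]]
        return ({"t"} if all("t" in v for v in vals) else set()) | \
               ({"f"} if any("f" in v for v in vals) else set())
    if op == "all":
        _, r, c2 = c
        t = all("f" in vr(r, d, e, D, VC, VR, I) or
                "t" in vc(c2, e, D, VC, VR, I) for e in D)
        f = any("t" in vr(r, d, e, D, VC, VR, I) and
                "f" in vc(c2, e, D, VC, VR, I) for e in D)
        return ({"t"} if t else set()) | ({"f"} if f else set())
    if op == "atleast":
        _, m, r = c
        t = minc([e for e in D if "t" in vr(r, d, e, D, VC, VR, I)], I) >= m
        f = maxc([e for e in D if "f" not in vr(r, d, e, D, VC, VR, I)], I) < m
        return ({"t"} if t else set()) | ({"f"} if f else set())
    if op == "atmost":
        _, m, r = c
        t = maxc([e for e in D if "f" not in vr(r, d, e, D, VC, VR, I)], I) <= m
        f = minc([e for e in D if "t" in vr(r, d, e, D, VC, VR, I)], I) > m
        return ({"t"} if t else set()) | ({"f"} if f else set())

def vr(r, d, e, D, VC, VR, I):       # four-valued value of role r at (d, e)
    if isinstance(r, str):
        return VR[r][d, e]
    if r[0] == "and":
        vals = [vr(x, d, e, D, VC, VR, I) for x in r[1:]]
        return ({"t"} if all("t" in v for v in vals) else set()) | \
               ({"f"} if any("f" in v for v in vals) else set())
    _, r2, c = r                     # (restrict r2 c)
    rv, cv = vr(r2, d, e, D, VC, VR, I), vc(c, e, D, VC, VR, I)
    return ({"t"} if "t" in rv and "t" in cv else set()) | \
           ({"f"} if "f" in rv or "f" in cv else set())

# A person p with two children c1, c2, both lawyers, known distinct:
D = ["p", "c1", "c2"]
VC = {"lawyer": {"p": set(), "c1": {"t"}, "c2": {"t"}}}
VR = {"child": {(d, e): set() for d in D for e in D}}
VR["child"]["p", "c1"] = VR["child"]["p", "c2"] = {"t"}
I = {(d, e): set() for d in D for e in D}
for d in D:
    I[d, d] = {"t"}
I["c1", "c2"] = I["c2", "c1"] = {"f"}

c = ("atleast", 2, ("restrict", "child", "lawyer"))
print(vc(c, "p", D, VC, VR, I))      # {'t'}
```

Because c1 and c2 are known distinct, the minc side of the atleast condition is met; had the f entry in I been absent, minc would be 1 and the value would be unknown rather than true.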
In this way the intuitive semantics of number restrictions are captured in a four-valued framework.
The final part of the semantics is the definition of subsumption:

Definition 1 One concept or role is subsumed by another, written C ⇒ C', if the positive extension of the first is always a subset of the positive extension of the second and the negative extension of the second is always a subset of the negative extension of the first.

This definition again corresponds closely to the informal notion of one concept being more general than another.

4 Discussion
The semantics defined here has a close relationship to standard, two-valued semantics for terminological logics as defined by Levesque and Brachman [1987]. Define a model as a semantic structure where
1. for every concept C, the positive and negative extensions of C are disjoint and together exhaust the set of individuals of the model,
2. the positive and negative extensions of roles are also disjoint and exhaustive, and
3. the identity relationship is equality.
In such semantic structures the above semantics, including the definition of subsumption, reduces to a standard two-valued semantics for terminological logics. Because of this inclusion relationship, all reasoning in this logic is sound with respect to standard terminological logics.
The conditions for concepts and roles, and also the definitions of cardinality and subsumption, are just a reinterpretation, in a four-valued setting, of the standard two-valued conditions and definitions. There is nothing added besides what is needed to get from two truth values to four truth values. Thus the semantics is closely related to intuitions about the meanings of concepts and roles.
The changes in the semantics, going from two to four truth values and a four-valued identity, are reasonable for systems with limited reasoning power. Such systems do not have total information, thus the presence of truth-value gaps, and also cannot resolve inconsistencies, thus allowing for inconsistent situations. The four truth values of the logic have also been previously used to develop limited reasoners in other areas [Levesque, 1984; Patel-Schneider, to appear].
The set of subsumptions supported by this logic forms an interesting and useful set. Since subsumption is sound with respect to standard terminological logics, if one concept subsumes another in this logic then it will also do so in a standard, two-valued terminological logic. Soundness of subsumption is an important requirement if the semantics is to capture some of the intuitive ideas behind terminological logics.
The sorts of subsumption relationships that are valid in this logic are the simple ones, such as (and person (atleast 2 child)) subsuming (and person (atleast 3 (restrict child lawyer))), and (and person (atmost 4 (restrict child doctor))) subsuming (and person female (atmost 3 child)). As these examples show, the valid subsumption relationships are not trivial, and include at least some interesting subsumption relationships.
Subsumption relationships involving modus ponens are not valid here. For example, (and person (all friend doctor) (all (restrict friend doctor) (atleast 1 speciality))) is not subsumed by (and person (all friend (atleast 1 speciality))), because in four-valued semantic structures it is possible that some friend might both be a doctor and not be a doctor, as well as not specializing. Because the friend is a doctor, (all friend doctor) is not falsified; because the friend is not a doctor, (all (restrict friend doctor) (atleast 1 speciality)) is not falsified; however, because the friend does not specialize, (all friend (atleast 1 speciality)) is falsified, and thus the subsumption relationship does not hold. Also, (and person (atleast 2 friend) (all friend doctor)) is not subsumed by (and person (atleast 2 (restrict friend doctor))), because some individual might both be a friend and not be a friend.
Subsumption relationships that require reasoning from the law of the excluded middle for identity are also not valid here. For example², in a two-valued terminological logic (and (atleast 1 (restrict child lawyer)) (atleast 1 (restrict child doctor))) would be subsumed by (or (atleast 2 child) (atleast 1 (restrict child (and lawyer doctor)))), because either the child that is a lawyer is different from the child that is a doctor, in which case there are two children, or they are identical, in which case there is one child which is both a doctor and a lawyer. In the four-valued logic this is not a valid subsumption because it is possible to be uncertain about whether the doctor and the lawyer are identical. These subsumptions are hard to compute, which forms one of the reasons for the switch to a four-valued identity.
The subsumption relationships that are valid form a sort of "structural" subsumption³, where each structural component of one concept or role must have an analogue in the other, similar to the subsumption relationships computed by KL-ONE and NIKL. This close correspondence indicates that the subsumption relationships of this logic form a useful set, and, moreover, provides a way of semantically justifying the incomplete subsumption algorithm for KL-ONE and NIKL.

²This example cannot be expressed in the logic described here because it includes a disjunction operator. However, a more complicated example which embeds this one can be expressed in the logic.
³As will be shown in the next section.

5 Computing Subsumption
Subsumption in this logic is weaker than subsumption in logics using the standard semantics; however, this does not imply that subsumption is easy to compute here.
Even the fact that subsumption is easy in the logic of [Patel-Schneider, 1986] is no assurance that it will be easy here. The addition of number restrictions is a major extension and, as Levesque and Brachman have shown [1987], even small changes in the expressive power of a formal system can result in large changes in the computational tractability of its operations.
Fortunately, subsumption is tractable in this logic. The subsumption algorithm for the full form of the logic is too long to fit in this paper, so an indirect argument has to be used to show its tractability. This is done by converting concepts and roles to a canonical form, giving a subsumption algorithm for concepts and roles in this canonical form, and then showing how this algorithm can be converted into a tractable subsumption algorithm for concepts and roles in arbitrary form.
Concepts and roles in canonical form take the following form:

<concept> ::= (and <primary>*)
<primary> ::= <atomic concept> |
              ¬<atomic concept> |
              (atleast <minimum> <role>) |
              ¬(atleast <minimum> <role>)
<role> ::= (restrict (and <atomic role>+) <concept>)
<minimum> ::= <positive integer>

This canonical form introduces a new operator, ¬, which is a classical negation operator defined as t ∈ V[¬C](d) iff f ∈ V[C](d), and f ∈ V[¬C](d) iff t ∈ V[C](d).
The conversion can be done by using the following equivalences:
1. commutativity and associativity for conjunctions of concepts and roles
2. C ≡ (and C)
3. (all R (and C1 C2)) ≡ (and (all R C1) (all R C2))
4. (all R C) ≡ ¬(atleast 1 (restrict R ¬C))
5. (atmost n R) ≡ ¬(atleast n+1 R)
6. R ≡ (restrict R (and ))
7. (and (restrict R1 C) R2) ≡ (restrict (and R1 R2) C)
8. (restrict (restrict R C1) C2) ≡ (restrict R (and C1 C2))
A canonical form concept will often be viewed as a set of primaries. Similarly, the two parts of a canonical form role will often be viewed as a set of atomic roles and a set of primaries.
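Several of these equivalences can be applied mechanically. The sketch below is our illustration, not the paper's algorithm: it encodes ¬x as ("not", x) and implements rules 1-2 and 5-8 for concepts built without all, leaving the all-elimination rules 3-4 aside.

```python
def neg(p):
    # wrap a primary in the negation operator (double negations cancel)
    return p[1] if isinstance(p, tuple) and p[0] == "not" else ("not", p)

def canon_role(r):
    # rules 6-8: every role becomes (restrict (and <atomic role>+) <concept>)
    if isinstance(r, str):
        return ("restrict", ("and", r), ("and",))                  # rule 6
    if r[0] == "restrict":
        _, atoms, c1 = canon_role(r[1])                            # rule 8
        c2 = canon_concept(r[2])
        return ("restrict", atoms, ("and",) + c1[1:] + c2[1:])
    # r[0] == "and": rule 7, with rule 1 flattening
    parts = [canon_role(x) for x in r[1:]]
    atoms = sum((p[1][1:] for p in parts), ())
    prims = sum((p[2][1:] for p in parts), ())
    return ("restrict", ("and",) + atoms, ("and",) + prims)

def canon_concept(c):
    # returns (and <primary>*); inputs using all are not handled here
    if isinstance(c, str):
        return ("and", c)                                          # rule 2
    if c[0] == "and":
        return ("and",) + sum((canon_concept(x)[1:] for x in c[1:]), ())
    if c[0] == "atleast":
        return ("and", ("atleast", c[1], canon_role(c[2])))
    if c[0] == "atmost":                                           # rule 5
        return ("and", neg(("atleast", c[1] + 1, canon_role(c[2]))))
    raise ValueError("rules 3-4 (all-elimination) omitted in this sketch")

print(canon_concept(("and", "person", ("atmost", 3, "child"))))
```

For example, (atmost 3 child) becomes ¬(atleast 4 (restrict (and child) (and ))), as rules 5 and 6 dictate. Note that building the result as fresh tuples, as done here, is what can blow up the size; the paper's linear-time conversion relies on structure sharing instead.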
The conversion to canonical form does not change the extension of concepts or roles:

Theorem 1 Let C' be the canonical form of the concept or role C. Then for any semantic structure, V[C'] = V[C].

Proof: By simple structural induction on C.⁴

Once concepts and roles are in canonical form then the following characterization of subsumption is both sound and complete.

Theorem 2 Let C and C' be canonical form concepts. Then C ⇒ C' iff for all top-level conjuncts, D', in C', there exists a top-level conjunct, D, in C such that
1. if D' is an atomic concept or the negation of an atomic concept, then D = D',
2. if D' is of the form (atleast m R'), then D is of the form (atleast n R), with n ≥ m and R ⇒ R', and
3. if D' is of the form ¬(atleast m R'), then D is of the form ¬(atleast n R), with m ≥ n and R' ⇒ R.
Let R = (restrict S C) and R' = (restrict S' C') be canonical form roles. Then R ⇒ R' iff S' ⊆ S and C ⇒ C'.

This characterization confirms that subsumption in this logic is weak. Only "structural" subsumptions are valid, and inference rules that chain together separate pieces of a concept or role are not valid, except for those involving the conversion to canonical form. Thus subsumption in this logic is very close to the subsumption relationships computed by KL-ONE and NIKL.
Given this characterization of subsumption, it is simple to derive a subsumption algorithm that runs in time proportional to the product of the sizes of its arguments.

Theorem 3 Subsumption for canonical form concepts and roles can be performed in time proportional to the product of the sizes of the two concepts or roles involved.

The process of converting concepts and roles to canonical form can exponentially increase their size, and thus the tractability of subsumption on arbitrary form concepts and roles has not yet been demonstrated.

⁴Proofs of the theorems of this paper can be found in, or are very similar to, proofs in [Patel-Schneider, 1988].
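Theorem 2 translates directly into a checker over canonical-form tuples. The sketch below is our transcription (without the structure sharing and caching the paper uses for full tractability), encoding ¬x as ("not", x) and canonical roles as (restrict (and atoms...) (and primaries...)); it tests the paper's earlier lawyer/doctor examples.

```python
def subsumes(c2, c1):
    """True if c2 subsumes c1 (c1 => c2); both are canonical (and p*)."""
    return all(any(prim_implies(d, d2) for d in c1[1:]) for d2 in c2[1:])

def prim_implies(d, d2):
    # case 1: atomic concepts and their negations must match exactly
    if isinstance(d2, str) or (d2[0] == "not" and isinstance(d2[1], str)):
        return d == d2
    # case 2: (atleast n R) implies (atleast m R') when n >= m and R => R'
    if d2[0] == "atleast":
        return (isinstance(d, tuple) and d[0] == "atleast"
                and d[1] >= d2[1] and role_subsumes(d2[2], d[2]))
    # case 3: not(atleast n R) implies not(atleast m R')
    #         when m >= n and R' => R
    if d2[0] == "not":
        return (isinstance(d, tuple) and d[0] == "not"
                and d2[1][1] >= d[1][1]
                and role_subsumes(d[1][2], d2[1][2]))
    return False

def role_subsumes(r2, r1):
    # (restrict (and S1) C1) => (restrict (and S2) C2)
    # iff S2 is a subset of S1 and C1 => C2
    return set(r2[1][1:]) <= set(r1[1][1:]) and subsumes(r2[2], r1[2])

# (and person (atleast 2 child)) subsumes
# (and person (atleast 3 (restrict child lawyer))):
c_gen = ("and", "person",
         ("atleast", 2, ("restrict", ("and", "child"), ("and",))))
c_spec = ("and", "person",
          ("atleast", 3, ("restrict", ("and", "child"), ("and", "lawyer"))))
print(subsumes(c_gen, c_spec))   # True
```

The naive recursion here can repeat work on shared substructure; caching each pairwise test, as the paper describes in the next section, brings the total cost down to the product of the argument sizes.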
Two modifications are needed to produce a tractable algorithm for subsumption. First, the conversion of concepts and roles to canonical form must be done by means of structure sharing. If this is done the "size" of the canonical form of a concept or role (not the length of its printed form but the size of the data structure) will be proportional to the size of the original concept or role, and the canonicalization can be done in linear time. Second, the subsumption algorithm has to be changed so as not to redo computations. This can be done by storing previously performed subsumption tests at the appropriate places in the canonical form of the concept or role, and querying these results when applicable. The obvious method of storing and querying the cached subsumption tests results in a subsumption algorithm that runs in time proportional to the product of the "sizes" of its arguments. Thus the entire subsumption process can be done in time proportional to the product of the sizes of the two concepts or roles, resulting in

Theorem 4 Subsumption for arbitrary concepts and roles can be performed in time proportional to the product of the sizes of the two concepts or roles involved.

6 Summary
The extension of the four-valued semantics for terminological logics to encompass number restrictions shows that four-valued semantics can be of use in expressive terminological logics. The logic used here contains most of the constructs of the languages of KL-ONE and NIKL, and contains some useful constructs that do not occur in them. Several of the constructs in KL-ONE and NIKL, such as role-value maps and structural descriptions, that are not in this logic can be easily formulated in it, and, moreover, do not seem to provide any computational difficulties for subsumption.⁵ One extension that has been investigated is the ability to specify fillers (or values) for roles [Patel-Schneider, 1988].
This extension retains the desirable computational properties of subsumption.
The four-valued semantics used here is a reasonable semantics, especially when considering systems with limited reasoning capabilities. Subsumption in this semantics is easy to compute, at least for the language given here. The valid subsumption relationships form an interesting set, one that includes the easy subsumptions and leaves out the less obvious ones. This set corresponds closely to the set of subsumption relationships computed in KL-ONE and NIKL, lending a degree of credence to that set.
This extension is not without problems, however. It shares the problems of the semantics in [Patel-Schneider, 1986], being not as intuitive as the standard two-valued semantics and sanctioning a very limited set of subsumption relationships. These seem to be unavoidable problems if a uniform, simple semantics with a fast subsumption algorithm is required. The extension also weakens the relationship between the elements of the domain set and objects in the world.
This extension shows that there are even more trade-offs in the relationship between expressive power, deductive power, and computational tractability in terminological logics. It justifies a limited set of subsumption relationships for an expressively powerful terminological logic that is easy to compute and, moreover, captures an interesting subset of the standard subsumption relationships. This is not a total solution, because no total solutions are possible (unless P = NP), but it does demonstrate that it is possible to alleviate the computational problems of expressively powerful terminological logics by weakening deduction in a principled manner.

⁵Of course, statements like this are notoriously prone to error. Adding extra expressive power to the logic via adding more constructs is an area for further research.

Acknowledgments
Hector Levesque and Ron Brachman, through their investigation of the complexity of computing subsumption in standard semantics for frame-based description languages, provided the impetus for this research. Anonymous reviewers pointed out a problem with a previous version of this paper.

References
[Belnap, 1977] Nuel D. Belnap, Jr. A useful four-valued logic. In G. Epstein and J. M. Dunn, editors, Modern Uses of Multiple-Valued Logic, pages 8-37. Reidel, 1977.
[Brachman and Schmolze, 1985] Ronald J. Brachman and James G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9(2):171-216, April-June 1985.
[Brachman et al., 1983] Ronald J. Brachman, Richard E. Fikes, and Hector J. Levesque. KRYPTON: A functional approach to knowledge representation. IEEE Computer, 16(10):67-73, October 1983.
[Brachman et al., 1985] Ronald J. Brachman, Victoria Pigman Gilbert, and Hector J. Levesque. An essential hybrid reasoning system: Knowledge and symbol level accounts of KRYPTON. In IJCAI-85 [1985], pages 532-539.
[Frisch, 1985] Alan M. Frisch. Using model theory to specify AI programs. In IJCAI-85 [1985], pages 148-154.
[IJCAI-85, 1985] International Joint Committee on Artificial Intelligence. Proceedings IJCAI-85, August 1985.
[Levesque and Brachman, 1987] Hector J. Levesque and Ronald J. Brachman. Expressiveness and tractability in knowledge representation and reasoning. Computational Intelligence, 3(2):78-93, May 1987.
[Levesque, 1984] Hector J. Levesque. A logic of implicit and explicit belief. In Proceedings AAAI-84, pages 198-202, Austin, Texas, August 1984. American Association for Artificial Intelligence. A revised version available as FLAIR Technical Report Number 32, Fairchild Laboratory for Artificial Intelligence Research, August 1984.
[Moser, 1983] M. G. Moser. An overview of NIKL, the new implementation of KL-ONE. Technical Report 5421, BBN Laboratories, 1983. Part of a collection entitled "Research in Knowledge Representation for Natural Language Understanding, Annual Report, 1 September 1982-31 August 1983".
[Patel-Schneider, 1984] Peter F. Patel-Schneider. Small can be beautiful in knowledge representation. In Proceedings IEEE Workshop on Principles of Knowledge-Based Systems, pages 11-16. IEEE Computer Society, December 1984. A revised and extended version is available as AI Technical Report Number 37, Schlumberger Palo Alto Research, October 1984.
[Patel-Schneider, 1986] Peter F. Patel-Schneider. A four-valued semantics for frame-based description languages. In Proceedings AAAI-86, pages 344-348, Philadelphia, Pennsylvania, August 1986. American Association for Artificial Intelligence.
[Patel-Schneider, 1988] Peter F. Patel-Schneider. A four-valued semantics for terminological logics. Technical Report 72, Schlumberger Palo Alto Research, February 1988.
[Patel-Schneider, to appear] Peter F. Patel-Schneider. A decidable first-order logic for knowledge representation. Journal of Automated Reasoning, to appear.
Normal Multimodal Logics

Laurent Catach
IBM Paris Scientific Center
3-5 Place Vendôme, 75001 Paris, France

Abstract

This paper studies what we call normal multimodal logics, which are general modal systems with an arbitrary set of normal modal operators. We emphasize the importance of non-simple systems, for which some interaction axioms are considered. A list of such acceptable axioms is proposed, among which the induction axiom has a special behavior. The class of multimodal logics that can be built with these axioms generalizes many existing modal, temporal, dynamic and epistemic systems, and could also suggest new formalizations using modal logics. The main result is a general determination theorem for these multimodal systems, which establishes a correspondence between our axioms and conditions over Kripke frames; this should avoid the need for showing determination each time a new system is considered.

1 Introduction

1.1 Presentation

During the last decade, it has been widely shown how modal logics provide suitable tools for various theoretical formalizations in computer science. In fact, many modal systems can be found in the literature, and there are a number of areas where such logics are used. Most popular readings of the modal formula □α are, for example, "α is necessarily true" (standard modal logic), "α will always be true" (temporal logic), "X knows that α" or "X believes that α" (epistemic logic), or "after executing some program a, α will be true" (dynamic logic), etc. In general, only one type of modality is considered, i.e. only one aspect (time, knowledge, programs, ...) is treated at a time. But relatively few attempts have been made to employ all these systems simultaneously; on the other hand, if modal logics are to be of any practical interest, and especially in AI, it seems very natural to ask whether these different modelizations can be "put together", so we could talk about necessity,
time, knowledge, belief, actions, plans, deterministic programs, concurrent programs, obligations, conditionals, etc., within the same language.

Thus, attempting to define a rigorous and unified framework for such systems, which can be called multimodal logics (an abbreviation for multiple modal logics), is the initial motivation for our work. Therefore, our first task is to define syntactic, axiomatic and semantic bases for these systems. However, a very desirable feature of multimodal systems lies in their ability to represent some interrelations between the different aspects (i.e. between modalities □₁, □₂, ...), such as the well-known "If X knows that α, then X believes that α" of epistemic logic. Therefore, some questions that naturally arise in considering multimodal logics are:

1. Which combinations of modal systems should be examined?
2. What kinds of interactions between these systems make sense? Should they be specified semantically or axiomatically?
3. Can we develop a systematic approach to these multimodal systems, and extend standard techniques developed for traditional modal logic?

It is beyond the scope of this paper to provide appropriate answers to points (1) and (2), since it depends very much on the intended formalizations, and moreover these questions may be subject to philosophical discussions. The only thing we can say is that some particular combinations, such as knowledge and belief, or knowledge and time, are certainly of primary interest, especially to AI. So we will focus on point (3), and try to follow a systematic approach, as in [Chellas, 1980] for standard modal logic. Though multimodal logics could be entirely defined by their semantics, in a model-theoretic way, as in [Thomason, 1984] or [Halpern and Shoham, 1986] (and this approach seems particularly relevant when time is considered), we prefer a more axiomatic approach.
To begin with, we propose a first class of interaction axioms G^{a,b,c,d}, with some examples. Then, a general determination theorem is given for the normal multimodal systems generated by these axioms; the proof uses an extension of the canonical model method for modal logics. To handle induction, however, this method fails, and we have to use the Fischer-Ladner filtration method, as will be indicated.

1.2 Expressiveness: examples

One main feature of multimodal languages is their ability to express complex modalities, obtained by composing modal operators of different types or, more generally, by using formal operations over modalities. For example, "Bob knows it will be the case that" or "Bob knows it is impossible for Alice to believe that" are such complex modalities. To give a very simple example, let us consider a bi-modal epistemic system L, with two agents Alice and Bob, and two belief operators K₁ and K₂:

K₁α : "Alice believes α"
K₂α : "Bob believes α"

Suppose that Alice and Bob have, as in real life, different ways of reasoning about their beliefs; for example, Alice may be good at both positive and negative introspection, whereas Bob never performs any kind of introspection. With the traditional epistemic approach, K₁ is then a KD45 modal operator, whereas K₂ is simply a KD operator. Suppose also that the following assertion holds: "Alice believes everything Bob believes", for example if Alice is a little bit naive, or if she is deeply in love with Bob (despite his lack of introspection capabilities!). Then, we would like the axiom scheme ⊢ (K₂α → K₁α) to hold in our system L. In short, in our terminology the resulting bi-modal system will be not homogeneous (since K₁ is of type KD45 and K₂ of type KD) and with interactions (since the above axiom links K₁ and K₂).

Catach 491

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.
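Semantically (Section 2.5 below), the interaction axiom K₂α → K₁α corresponds to the frame condition R₁ ⊆ R₂ between Alice's and Bob's accessibility relations. This can be checked exhaustively on a small finite model; the three-world frame and the Python encoding here are illustrative assumptions, not taken from the paper:

```python
from itertools import product

W = [0, 1, 2]
R2 = {(0, 1), (0, 2), (1, 1), (2, 2)}   # Bob's accessibility relation
R1 = {(0, 1), (1, 1), (2, 2)}           # Alice's; note R1 is a subset of R2

def box(R, extension, w):
    """[i]p holds at w iff p holds at every R-successor of w."""
    return all(v in extension for (u, v) in R if u == w)

def axiom_holds(R1, R2):
    """Check K2 p -> K1 p at every world, under every valuation of p."""
    for bits in product([False, True], repeat=len(W)):
        ext = {w for w, b in zip(W, bits) if b}
        if any(box(R2, ext, w) and not box(R1, ext, w) for w in W):
            return False
    return True

print(axiom_holds(R1, R2))             # True: R1 ⊆ R2 validates the axiom
print(axiom_holds(R1 | {(0, 0)}, R2))  # False once the inclusion is broken
```

The second call breaks the inclusion by giving Alice an extra successor, and the axiom immediately fails at world 0 under the valuation where p holds exactly at worlds 1 and 2.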
Other examples of interaction principles can be given in considering:

knowledge and belief:
"If X knows that Y knows that α, then X knows that α"
"If X believes α, then X believes that he knows α"

knowledge and time ([Halpern and Vardi, 1986]):
"If X knows that in the next state α will be true, then in the next state he will know that α is true"

belief and time ([Lehmann and Kraus, 1986]):
"If X believes that tomorrow α will be true, then he believes that tomorrow he will still believe that α is true"

As we will see, our results apply to such interaction axioms.

1.3 Related work

Dynamic logic ([Parikh, 1981], [Harel, 1984]) and process logic ([Harel, Kozen and Parikh, 1981]) already use families of modal operators, denoted by [a], where a represents a program. Also, epistemic logics ([Halpern and Moses, 1985], [Halpern, 1986]) provide modal languages with several operators K₁, K₂, ..., Kₙ. Both are, in fact, multimodal logics; but both make the two following important restrictions:

- they are homogeneous systems, which means that every modal operator ([a] or Kᵢ) belongs to the same system of traditional modal logic (e.g. T, S4, S5, ...)
- they form systems without any interactions, which means that, roughly speaking, each modal operator [a] or Kᵢ is totally independent (axiomatically or semantically) of the others.

Temporal logics, in both their linear or branching-time versions, can also be viewed as special cases of multimodal logics, since many operators are involved in the language. In fact, these operators (some of which being not normal) are linked by very special connections, generally indicated by the semantics; for example, operators ○ ("next") and □ ("always") of linear-time logic simply interact by a transitive-closure correspondence.
Beside these well-known types of logics, some other multimodal systems have been explored; the reader is referred to [Cohen, 1960], [Rennie, 1970], [Fariñas, 1983], [Thomason, 1984], [Fariñas and Orlowska, 1985], [Lucas and Lavendhomme, 1985], [Lehmann and Kraus, 1986], [Halpern and Vardi, 1986], [Halpern and Shoham, 1986], [Fischer and Immerman, 1987]. Most of them fall within the scope of the multimodal systems we consider here, as the reader may verify.

2 Formal syntax and semantics

2.1 Language

A propositional multimodal language ℒ is determined by a set Φ₀ of propositional variables p, q, ..., a set Σ₀ of atomic parameters A, B, ..., "∪" (union) and ";" (composition) operations over parameters, the boolean connectives ¬ ∧ ∨ → ↔ and, finally, the "[ ]" construct for modal operators. We also distinguish an element λ in Σ₀ to be the neutral element for the composition of parameters, i.e. to be the identity parameter. The set Σ of all abstract parameters is built from Σ₀ and the "∪" and ";" operations, and the set Φ of all formulas is built from Φ₀, the boolean connectives, and the rule "if a is a parameter and α a formula, [a]α is a formula". As usual, ⟨a⟩α =def ¬[a]¬α. The set OPS₀ of atomic modal operators contains operators [A] and ⟨A⟩ for A ∈ Σ₀; in the following, □ ∈ OPS₀ means that □ = [A] with A ∈ Σ₀, and ℒ(□) will designate the sub-language of ℒ built from {A} instead of Σ₀. We also define [λ] = ⟨λ⟩ as being the identity operator. To capture finite sets OPS₀ = {□₁, □₂, ..., □ₙ}, we let Σ₀ = {1, 2, ..., n}, writing □ᵢ instead of [i]; usual epistemic systems can be obtained in this way. To get traditional modal logic, we simply let Σ₀ = {1}. To get dynamic logic, we just add the "*" (iteration) and "?" (test construction) operations to the language. Further extensions, as proposed in [Berman, 1979], could also be incorporated.
Depending on the desired interpretations, some subsets of parameters in Σ₀ can be used to represent various concepts such as rational agents, programs, actions or even space or time dimensions. Also, a fundamental remark is that the Kleene operations ";" and "∪" are always definable with regard to modal operators, using:

[a₁; ...; aₙ]α =def [a₁]...[aₙ]α
[a₁ ∪ ... ∪ aₙ]α =def [a₁]α ∧ ... ∧ [aₙ]α

Thus, we can always assume the presence of these operations in the language and it allows us, for example, to consider that formulas ◇₁◇₂α or (□₁□₂α ∧ □₃α ∧ α) can always be represented by ⟨1;2⟩α and [(1;2) ∪ 3 ∪ λ]α respectively.

492 Knowledge Representation

2.2 Systems of multimodal logics

If L denotes a multimodal language ℒ with an axiomatization, i.e. a set Ax of axioms and inference rules, then, for □ ∈ OPS₀, we define L(□) to be the set {α ∈ ℒ(□) / α is a theorem of L} and Ax(□) to be the subset of Ax containing the axioms and rules of Ax which are either non-modal or involve only formulas belonging to ℒ(□).

The first problem is to know whether L(□) can indeed be viewed as a "sub-system" of L, i.e. to know whether L(□) can be axiomatized, and whether Ax(□) can be used for this purpose. Conversely, can we just "put together" separate axiomatizations for each sub-system L(□) to get an axiomatization of L? A notion of separability is therefore needed:

Definition: An axiomatization Ax is said to be separable if, for each □ ∈ OPS₀, L(□) = TH(Ax(□)), where TH(Ax(□)) is the set of theorems generated by the axioms and inference rules of Ax(□).

An example of non-separable axiomatization is:

(1) □₁α → □₂α
(2) □₂α → α
(3) □₁(α → β) → (□₁α → □₁β)

since □₁α → α is in L(□₁) but is not derivable from Ax(□₁), which only contains axiom (3). This point will not be fully examined here (see [Catach, 1988]).
Using separable axiomatizations, we will take each sub-system L(□) as being at least normal, so we always have the axioms □α ↔ ¬◇¬α and K. □(α → β) → (□α → □β), and also the rule of necessitation RN. "if ⊢ α then ⊢ □α" for atomic modal operators. Using the definitions of ";" and "∪", it can easily be shown that this also holds for all operators [a], a ∈ Σ. Such multimodal systems can be called normal. Note that classical sub-systems ([Chellas, 1980]) can also be considered ([Catach, 1988]). The important point is that the sub-systems L(□) may be normal systems of different types, as in the Alice-and-Bob example. If all the sub-systems L(□) are identical to a given system L₀ of traditional modal logic, we say that L is a homogeneous multimodal system, based on L₀.

2.3 Axioms

In addition to the axioms for λ and the Kleene operations:

[λ]α ↔ α
[a;b]α ↔ [a][b]α
[a ∪ b]α ↔ ([a]α ∧ [b]α)

our class of multimodal logics is obtained by considering systems axiomatized by any finite number of axiom schemes of the following type:

⟨a⟩[b]α → [c]⟨d⟩α

where a, b, c, d denote arbitrary parameters. If we refer to the axiom G^{k,l,m,n}. ◇ᵏ□ˡα → □ᵐ◇ⁿα of modal logic ([Chellas, 1980]), our axiom will be noted G^{a,b,c,d} and called the "a,b,c,d-incestuality" axiom. Note that G^{a,b,c,d} is equivalent to G^{c,d,a,b}.

The fact that a, b, c, d may be complex parameters (i.e. built from atomic ones, using ";" and "∪") makes the axioms G^{a,b,c,d} very general. In particular, G^{a,b,c,d} covers G^{k,l,m,n}, and therefore covers the traditional D, T, B, 4, 5 axioms of modal logic ([Chellas, 1980]). For example, if a = b = λ and c = d = A, we get the symmetry axiom B for □ = [A]. Consequently, each normal sub-system L(□) can be any of the fifteen well-known modal systems generated by D, T, B, 4 and 5, e.g. KD, KT, KT4 (S4), KTB4 (S5), KD45, etc.
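Anticipating the frame correspondence formalized in Section 3 (where G^{a,b,c,d} is matched with the condition ρ(a)⁻¹ | ρ(c) ⊆ ρ(b) | ρ(d)⁻¹), the coverage of the traditional axioms can be checked mechanically on small finite relations. A sketch, with made-up relations chosen for illustration:

```python
def compose(R, S):
    """Relational composition R | S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def incestual(Ra, Rb, Rc, Rd):
    """a,b,c,d-incestuality: rho(a)^-1 | rho(c) included in rho(b) | rho(d)^-1."""
    return compose(converse(Ra), Rc) <= compose(Rb, converse(Rd))

W = {0, 1, 2}
I = {(w, w) for w in W}                                # interprets lambda
R = {(0, 1), (0, 2), (1, 1), (1, 2), (2, 1), (2, 2)}   # a euclidean relation
R_bad = {(0, 1), (0, 2)}                               # not euclidean

# Axiom 5, <A>a -> [A]<A>a, is G^{A,lambda,A,A}: holds iff R is euclidean.
print(incestual(R, I, R, R))              # True
print(incestual(R_bad, I, R_bad, R_bad))  # False
# Axiom B, a -> [A]<A>a, is G^{lambda,lambda,A,A}: holds iff R is symmetric.
print(incestual(I, I, R, R))              # False: R is euclidean, not symmetric
```

The instance for B reduces, as expected, to R ⊆ R⁻¹, i.e. symmetry, and the instance for 5 to the euclidean property; the helper names (`incestual`, `compose`) are ours, not the paper's.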
If the axiomatization Ax of L consists only in the superposition of all the axiomatizations Ax(□) of the sub-systems L(□), we say that L is a simple multimodal logic, and Ax is separable. If Σ₀ = {1, 2, ..., n}, examples of non-simple systems can be given by considering the following G^{a,b,c,d} interaction axioms:

- □₂α → □₁α (inclusion)
- □₃α → (□₂α → □₁α) (relative inclusion)
- □₂α ↔ □₁α (equivalence)
- α → □₁◇₂α (semi-adjunction)
- □₁α → ◇₂α (common seriality)
- □₁□₂α → □₂□₁α (semi-commutativity)
- □₃α ↔ (□₁α ∧ □₂α) (union)
- □₃α ↔ □₁□₂α (composition)

Finally, we also consider the following pair:

[b]α → ([a]α ∧ [a][b]α)
[b](α → [a]α) → ([a]α → [b]α)

called the a,b-induction axioms. Taking b = a*, we get the Segerberg axioms for PDL ([Kozen and Parikh, 1981], [Harel, 1984]). Taking a = 1 ∪ 2 ∪ ... ∪ n, we get the axiom for the common operators C or D of epistemic logics ([Halpern and Moses, 1985], [Lehmann and Kraus, 1986]). We can also get the induction axioms of temporal logics in this way. Note that the first one, written [b]α → [a ∪ (a;b)]α, is of type G^{a,b,c,d}.

Thus, the above axiom schemes cover many existing systems of modal or multimodal logic. Also, they make the generation of a large class of new ones possible; our Alice-and-Bob story provides such an example.

2.4 Notations

The standard notations of Lemmon can be easily extended to multimodal systems, by indexing the sub-systems L(□). For example, our Alice-and-Bob system will be noted (KD45)₁(KD)₂K₁,₂ if K₁,₂ denotes □₂α → □₁α. Similarly, (KD45)⁽ⁿ⁾(KD)⁽ᵐ⁾ would designate a simple multimodal system with n operators of type KD45 and m operators of type KD. Note that, for any integer n > 0, K⁽ⁿ⁾ is the smallest normal n-modal system (simple or not).

2.5 Models

Kripke semantics easily extends to multimodal logics.
A multi-(relational) frame is a pair F = ⟨W, ℛ⟩, where W is a set of possible worlds and ℛ is a set of binary relations over W; in that case, F is said to be the join of the frames ⟨W, R⟩ with R ∈ ℛ. If C₁, C₂, ..., Cₙ is a family of classes of frames, the join of these classes is the class C of multi-frames ⟨W, {R₁, R₂, ..., Rₙ}⟩ such that ⟨W, Rᵢ⟩ belongs to Cᵢ for each 1 ≤ i ≤ n.

If L is a normal multimodal system, F = ⟨W, ℛ⟩ is said to be a multi-frame for L if there exists a mapping ρ from Σ to ℛ satisfying:

- ρ(λ) = I
- ρ(a ∪ b) = ρ(a) ∪ ρ(b)
- ρ(a;b) = ρ(a) | ρ(b)

where I = {(w,w) / w ∈ W} is the identity (or diagonal) relation over W, and "∪" and "|" denote the usual union and composition of binary relations.

Multi-models M = ⟨W, ℛ, V⟩ are defined, as expected, by introducing an assignment function V from W × Φ₀ to {0, 1}. Truth of formulas in worlds of multi-models, written (M,w) ⊨ α, is defined inductively as usual; thus, for every parameter a and formula α, we have:

(M,w) ⊨ [a]α iff (M,w′) ⊨ α for every w′ such that (w,w′) ∈ ρ(a)

Satisfiability and validity in multi-models, multi-frames and classes of multi-frames, for formulas or sets of formulas, are defined in the usual way. We omit details. In the following, multi-frames and multi-models are defined directly as ⟨W, ρ⟩ and ⟨W, ρ, V⟩ respectively.

3 Determination

We use the usual operations "⊆" (inclusion), "⁻¹" (converse), "∪" (union), "|" (composition) and "⁺" (transitive closure) over binary relations. If ⟨W, ρ⟩ is a multi-frame for L, and if a, b, c, d are parameters in Σ, we define a,b,c,d-incestuality as being the following property:

if (w,w′) ∈ ρ(a) and (w,w″) ∈ ρ(c), then there exists w‴ such that (w′,w‴) ∈ ρ(b) and (w″,w‴) ∈ ρ(d).

Formally, this yields ρ(a)⁻¹ | ρ(c) ⊆ ρ(b) | ρ(d)⁻¹.

Theorem: Let L be a normal multimodal system built from a finite set of axioms G^{a,b,c,d}.
Then L is determined by the class of multi-frames having the corresponding a,b,c,d-incestual properties.

As usual, soundness is easily obtained. To show completeness, we use the proper canonical (multi-)frame Fᶜ = ⟨Wᶜ, ρᶜ⟩ of L, defined as follows:

1. Wᶜ is the set of all maximal consistent sets of L,
2. for every V, V′ ∈ Wᶜ and a ∈ Σ, ρᶜ(a) is defined by: (V,V′) ∈ ρᶜ(a) iff {α / [a]α ∈ V} ⊆ V′

and we show that (i) Fᶜ is a frame for L, and (ii) if G^{a,b,c,d} is an axiom of L, then Fᶜ is an a,b,c,d-incestual frame. See [Catach, 1988] for a detailed proof.

As an example, the above theorem shows that our Alice-and-Bob system (KD45)₁(KD)₂K₁,₂ is determined, as expected, by the class C of multi-frames ⟨W, {R₁, R₂}⟩ where R₁ is serial, transitive and euclidean, R₂ is serial, and R₁ ⊆ R₂. Note that, as far as soundness is concerned, we can give a very general result for all simple multimodal logics:

Theorem (Soundness): Let L be a simple multimodal system, such that for every □ ∈ OPS₀, L(□) is sound with respect to a class C□ of frames. Then L is sound with respect to the join C of the classes C□.

4 Induction

We expect that multimodal systems containing one or more pairs of a,b-induction axioms (see 2.3) should be determined by the classes of multi-frames ⟨W, ρ⟩ where ρ(b) = ρ(a)⁺. Soundness can indeed be stated for these multimodal systems, i.e. a,b-induction axioms are always valid in multi-frames ⟨W, ρ⟩ where ρ(b) = ρ(a)⁺. But completeness cannot be obtained using the proper canonical model; all we can show is that if an a,b-induction axiom holds in L, then ρᶜ(a)⁺ ⊆ ρᶜ(b), and the Segerberg axioms are not strong enough to capture transitive closure, i.e. to show the converse ρᶜ(b) ⊆ ρᶜ(a)⁺. In fact, there exist (infinite) sets of formulas which are consistent in a system L containing an a,b-induction axiom but which cannot be satisfied in any model for which ρ(b) ⊆ ρ(a)⁺.
This result was already known for dynamic logics with the "*" operator, using results from dynamic algebras ([Parikh, 1981], [Harel, 1984]).

The easiest way to handle induction is to extend the Fischer-Ladner filtrations method for dynamic logic ([Fischer and Ladner, 1979], [Harel, 1984]), which does yield completeness results (and also decidability and complexity ones at the same time) for some multimodal systems. But no general result can be stated easily, the problem being that a,b,c,d-incestuality is not always preserved during filtration, as in DPDL ([Ben-Ari, Halpern and Pnueli, 1982]). The reader is referred to [Catach, 1988] for a more careful study of multimodal systems with induction axioms.

5 Other topics

- As mentioned above, extending the filtrations method to multimodal logics yields several results, namely the finite model property, decidability, and even complexity. As for the last one, we expect that the complexity of the validity problem should be PSPACE-complete for multimodal systems without any induction axioms, and EXPTIME-complete if at least one induction axiom is considered. Complexity should also depend very much on the considered formulas.

- The Lindenbaum algebra associated with a multimodal logic is a boolean algebra with unary operators, in the sense of [Jónsson and Tarski, 1951]. Therefore, studying multimodal algebras yields many interesting results, such as an elegant proof of determination in some cases.

- Other types of frames and models can be considered for normal multimodal logics, namely multidimensional ones ⟨W₁ × W₂ × ... × Wₙ, {R₁, R₂, ..., Rₙ}⟩. Protocols, as defined in [Fischer and Immerman, 1987], are examples of such models.

Conclusion

This paper presents some formal developments of multimodal logics, which are general modal systems with arbitrary sets of modal operators.
A class of axioms, and especially of interaction axioms, has been proposed, generating a wide class of systems, for which a general determination theorem has been given. Problems when dealing with induction axioms have also been indicated. Many other aspects of multimodal logics remain to be investigated, as has already been done for standard modal logics; some of them are studied in [Catach, 1988].

Acknowledgements

I am very grateful to David Makinson for helpful remarks and suggestions. I also thank anonymous referees for correcting errors, and Rosalind Greenstein for her help in preparing the paper.

References

[Ben-Ari, Halpern and Pnueli, 1982] M. Ben-Ari, J. Halpern, A. Pnueli: "Deterministic Propositional Dynamic Logic", Journal of Computer and System Sciences 25 (1982), pp. 402-417.

[Berman, 1979] F. Berman: "A completeness technique for D-axiomatizable semantics", Proc. 11th ACM Symp. on Th. of Comp. (1979), pp. 160-166.

[Catach, 1988] L. Catach: "Multimodal logics", Doctorate Thesis (1988), IBM Scientific Center, Paris, France.

[Chellas, 1980] B. F. Chellas: "Modal Logic: An Introduction", Cambridge University Press, 1980.

[Cohen, 1960] L. J. Cohen: "A formalization of referentially opaque contexts", The Journal of Symbolic Logic 25 (3) (1960).

[Fariñas, 1983] L. Fariñas: "Space as time", Information Processing Letters 17, Oct. 1983, pp. 113-115.

[Fariñas and Orlowska, 1985] L. Fariñas, E. Orlowska: "DAL: a logic for data analysis", Theoretical Computer Science 36 (1985), pp. 251-264.

[Fischer and Immerman, 1987] M. J. Fischer, N. Immerman: "Interpreting Logics of Knowledge in Propositional Dynamic Logic with converse", Information Processing Letters 25 (1987), pp. 175-181.

[Fischer and Ladner, 1979] M. J. Fischer, R. E. Ladner: "Propositional Dynamic Logic of Regular Programs", Journal of Computer and System Sciences 18 (1979), pp. 194-211.

[Halpern, 1986] J. Y. Halpern (ed): "Reasoning about knowledge: an overview", Proc.
Conference on Theoretical Aspects of Reasoning about Knowledge (1986), Morgan Kaufmann.

[Halpern and Moses, 1985] J. Y. Halpern, Y. Moses: "A guide to the modal logics of knowledge and belief", Proc. IJCAI 1985, pp. 480-490.

[Halpern and Shoham, 1986] J. Y. Halpern, Y. Shoham: "A propositional modal logic of time intervals", Proc. 1st Ann. IEEE Symp. on Logic in Comp. Science (1986), pp. 279-292.

[Halpern and Vardi, 1986] J. Y. Halpern, M. Y. Vardi: "The complexity of reasoning about Knowledge and Time: extended abstract", Proc. 18th ACM Symp. on Th. of Comp. (1986), pp. 304-315.

[Handbook, 1984] "Extensions of classical logic", Handbook of Philosophical Logic II, D. Gabbay & F. Guenthner eds, Reidel, 1984.

[Harel, 1984] D. Harel: "Dynamic Logic", in [Handbook, 1984].

[Harel, Kozen and Parikh, 1981] D. Harel, D. Kozen, R. Parikh: "Process logic: expressiveness, decidability, completeness", Proc. 21st IEEE Symp. on Found. of Comp. Science 1980, pp. 129-142.

[Jónsson and Tarski, 1951] B. Jónsson, A. Tarski: "Boolean algebras with operators I", American Journal of Mathematics 73 (1951), pp. 891-939.

[Kozen and Parikh, 1981] D. Kozen, R. Parikh: "An elementary proof of the completeness of PDL", Theoretical Computer Science 14 (1981), pp. 113-118.

[Lehmann and Kraus, 1986] D. Lehmann, S. Kraus: "Knowledge, Belief and Time", Lecture Notes in Computer Science 226 (1986), pp. 186-195.

[Lucas and Lavendhomme, 1985] Th. Lucas, R. Lavendhomme: "Complétude de certaines logiques bimodales", Séminaire de l'Institut de Mathématiques de l'Université de Louvain (Belgium), Report 69, May 1985.

[Makinson, 1966] D. Makinson: "On some Completeness Theorems in Modal Logic", Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 12 (1966), pp. 379-384.

[Parikh, 1981] R. Parikh: "Propositional dynamic logics of programs: a survey", Lecture Notes in Computer Science 125 (1981), pp. 102-144.
[Rennie, 1970] M. K. Rennie: "Models for multiply modal systems", Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 16 (1970), pp. 175-186.

[Thomason, 1984] R. H. Thomason: "Combinations of tense and modality", in [Handbook, 1984].
A Logic for Hypothetical Reasoning

Anthony J. Bonner
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903

Abstract

This paper shows that classical logic is inappropriate for hypothetical reasoning and develops an alternative logic for this purpose. The paper focuses on a form of hypothetical reasoning which appears computationally tractable. Specifically, Horn-clause logic is augmented with rules, called embedded implications, which can hypothetically add atomic formulas to a rulebase. By introducing the notion of rulebase independence, it is shown that these rules can express hypothetical queries which classical logic cannot. By adopting methods from modal logic, these rules are then shown to be intuitionistic. In particular, they form a subset of intuitionistic logic having semantic properties similar to those of Horn-clause logic.

1 Introduction

Several researchers in the logic-programming community have pointed out the utility of augmenting Prolog with the ability to hypothetically add facts to a rulebase. Miller, for instance, has shown how such rules can structure the run-time environment of a logic program [Miller, 1986]. Warren and Manchanda have also proposed such logics for reasoning about database updates [Warren, 1984; Manchanda, 1988]. The legal domain, in particular, has inspired much work into this kind of hypothetical reasoning. Gabbay, for example, has reported a need to augment Prolog with hypothetical rules in order to encode the British Nationality Act. The act contains rules such as, "You are eligible for citizenship if your father would be eligible if he were still alive" [Gabbay and Reyle, 1984]. Also, McCarty has developed a wide class of hypothetical rules for the purpose of constructing computer-based legal consultation systems, especially systems for reasoning about contract law and corporate tax law [McCarty, 1988a; McCarty, 1981].
Although hypothetical reasoning is in general complex [Ginsberg, 1986], these systems focus on a form of hypothetical reasoning which appears computationally tractable. In particular, they augment Horn-clause logic with rules called embedded implications. These are rules of the form A ← (B ← C), which means, "A is true if adding C to the rulebase causes B to be true."

The formal properties of embedded implications are still being explored. Gabbay, for instance, has shown that hypothetical embedded-implications have an intuitionistic semantics [Gabbay, 1985], and Miller has developed an intuitionistic fixpoint semantics for the predicate case [Miller, 1986]. McCarty has extended this work to a larger class of formulas and established interesting semantic results [McCarty, 1988a]. Bonner has shown that query processing in such systems is PSPACE-complete in the function-free predicate case (EXPTIME-complete when hypothetical deletions are allowed) [Bonner, 1988a].

This paper continues this line of theoretical research in two ways. First, it formulates a precise sense in which classical logic is incapable of expressing hypothetical queries and rules. Specifically, queries are required to be rulebase independent; that is, a query should not have to be reformulated if the rulebase changes. An example is then given of a simple, hypothetical query which cannot be expressed in classical logic in a rulebase-independent way.

Second, this paper presents a new development of the intuitionistic semantics of embedded-implications. In particular, instead of developing fixpoint semantics, we apply techniques of modal logic to construct a canonical model. This provides a new perspective, and hopefully new insight, into the intuitionistic basis of hypothetical reasoning. It is shown, in particular, that hypothetical embedded-implications are a subset of intuitionistic logic with semantic properties similar to those of Horn clauses.
Indeed, classical Horn-clauses are a special case of intuitionistic embedded-implications both proof-theoretically and semantically. This paper is an overview of [Bonner, 1988b], to which the interested reader is referred for details and proofs.

2 Examples

This section gives examples of hypothetical queries and rules. They focus on a rulebase representing the policy and regulations of a university. For instance, the atomic formula take(s, c) means that student s has taken course c, and grad(s) means that s is eligible for graduation. The rulebase contains facts such as take(tony, cs250), and rules such as

grad(s) ← take(s, cs250), take(s, hist101)

The notation R ⊢ ψ means that query ψ is true when applied to rulebase R. For example, consider the query, "Retrieve those students who could graduate if they took (at most) one more course." This query can be formalized at the meta-level as follows:¹

"Retrieve those s such that ∃c [R ∪ {take(s, c)} ⊢ grad(s)]"   (1)

In our logic of hypotheticals, this query is represented by the expression ∃c [grad(s) ← take(s, c)]. This is an object-level expression ψ(s) such that R ⊢ ψ(s) iff condition (1) is satisfied.

¹See [Kowalski, 1979] for a description of meta-level and object-level reasoning.

Having introduced hypothetical queries, we can also use them in the premises of rules. Such rules turn our query language into a logic for building rulebases. For example, suppose the university wishes to enact the following rule:

"If a student is within one course of graduation, and he is not eligible for primary aid, then he is eligible for secondary aid."

This hypothetical rule can be represented by the following two rules:²

aid2(s) ← ∼aid1(s), grad1(s).
grad1(s) ← ∃c [grad(s) ← take(s, c)]

Here, aid1(s) and aid2(s) mean that student s is eligible for primary and secondary aid, resp.
grad1(s) means that s is within one course of graduation.

3 Expressibility

This section defines a new notion of expressibility which applies to rulebase systems. It centers on the idea that queries should be independent of the rulebase. Using this definition, a hypothetical query is constructed which cannot be expressed in classical logic.

3.1 Rulebase Independence

The term "rulebase query" is a generalization of "database query" and refers to a question that is posed to a system containing a large number of rules as well as facts. In general, there are two aspects to such a query: (i) a question that the user has in mind, and (ii) an expression which he constructs to represent it. "Query formulation" is the task of constructing this expression. This section argues that query formulation should be independent of the rulebase. In particular,

- A user should be able to construct an expression to represent a query without a detailed knowledge of the rulebase.
- If the rulebase is updated, the expression representing a query should not have to change.

Similar constraints exist in database systems; e.g., a user should be able to formulate a database query without knowing the contents of the database.

Rulebase independence is important for a variety of reasons. Firstly, it permits casual use of the rulebase. That is, one does not have to be an expert in the contents of the rulebase in order to formulate queries. Secondly, it increases reliability. If a user's knowledge of the rulebase is inaccurate, then his queries may be formulated incorrectly. Thirdly, it makes changes less expensive. If queries are not rulebase independent, then changes in the rulebase can propagate throughout the user community. For example, utility programs which query the rulebase and libraries of commonly used queries would have to be updated whenever the rulebase is changed.
Finally, the notion of rulebase independence is important not only to the formulation of queries but also to the construction of rulebases. Indeed, the premise of a rule is nothing more than a rulebase query itself. Thus, keeping rule premises independent of the rest of the rulebase has all of the advantages just listed for queries: it makes them easier to formulate and more reliable, and it prevents small changes from propagating throughout the rulebase.

Rulebase independence is captured in the following definition:

A rulebase query is expressible in a language if and only if it is possible to construct a single expression ψ which returns the correct answer for all rulebases R.

3.2 Classical Logic

Because of the properties of material implication, classical logic cannot represent some hypothetical queries in a rulebase-independent way. For example, consider the query: "If one of B1 or B2 were added to the rulebase, would C become true?" To represent this query, we need an expression ψ such that for all rulebases R,

R ⊨ ψ   iff   R ∪ {B1} ⊨ C or R ∪ {B2} ⊨ C

The obvious candidate for ψ is the expression (C ← B1) ∨ (C ← B2). Indeed, in intuitionistic logic, this expression does represent the query in a rulebase-independent way. In classical logic, however, material implication leads to the following equivalence:

ψ ≡ (C ← B1) ∨ (C ← B2)
  ≡ (C ∨ ¬B1) ∨ (C ∨ ¬B2)
  ≡ C ∨ ¬B1 ∨ ¬B2
  ≡ C ← B1, B2

Hence, R ⊨ ψ iff R ∪ {B1, B2} ⊨ C, by the deduction theorem. Classically, then, ψ does not represent the above query. Is there some other expression which does? The following theorem (proven in [Bonner, 1988b]) assures us that there is not.

Theorem 1 Classically, there is no expression ψ such that for any set R of propositional Horn clauses,³

R ⊨ ψ   iff   R ∪ {B1} ⊨ C or R ∪ {B2} ⊨ C

These ideas extend to the construction of rulebases.
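The equivalence chain above can be verified mechanically by enumerating all classical truth assignments; a minimal sketch:

```python
from itertools import product

# Check that (C <- B1) v (C <- B2) is classically equivalent to C <- B1, B2.
equivalent = all(
    ((not b1 or c) or (not b2 or c)) == (not (b1 and b2) or c)
    for c, b1, b2 in product([False, True], repeat=3)
)
print(equivalent)  # True
```

All eight assignments agree, confirming that material implication collapses the intended disjunctive query into the single rule C ← B1, B2.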
In particular, suppose that R is a rulebase not containing the atom A, and suppose we wish to add new rules R* to R so that for all R,

R ∪ R* ⊢ A   iff   R ∪ {B1} ⊢ C or R ∪ {B2} ⊢ C

Classically, there is no set of formulas R* (Horn or otherwise) which satisfies this condition. For if R* existed, then we could contradict Theorem 1 using ψ = A ← R*. As the next section shows, however, R* can be constructed using hypothetical embedded implications. In particular,

R* = {A ← (C ← B1), A ← (C ← B2)}

³This theorem remains true if R is restricted to Horn clauses built from a given set L of atoms.
²The first rule uses negation-by-failure [Kowalski, 1979].

4 Hypothetical Inference

Because classical logic is inadequate for hypothetical inference, new inference mechanisms are needed, i.e., proof procedures for embedded implications. Such procedures have been developed by several researchers [Gabbay and Reyle, 1984; Miller, 1986; McCarty, 1988b], and this section defines a simplified version of them. This propositional version retains the essential properties of the more elaborate systems while admitting a clean theoretical analysis.

Definition 1 A Horn rule is an expression of the form B ← B1, B2, ..., Bk, where k ≥ 0 and B and each Bi are atomic.

Definition 2 An embedded implication is an expression of the form B ← φ1, φ2, ..., φk, where k ≥ 0 and each φi is a Horn rule.

Note that Horn rules include atomic formulas as a special case, and embedded implications include Horn rules as a special case.

Definition 3 Suppose R is a set of embedded implications. If B and the Bi are atomic, then

1. R ⊢ B if B ∈ R
2. R ⊢ B if B ← φ1, ..., φk is a rule in R and R ⊢ φi for each i
3. R ⊢ B ← B1, ..., Bk if R ∪ {B1, ..., Bk} ⊢ B

If R is a set of Horn rules, then this inference system is equivalent to classical Horn-clause logic. However, if R contains arbitrary embedded implications, then these inference rules do not have a classical semantics.
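Definition 3 translates directly into a recursive procedure. In the sketch below an atom B is encoded as ("B", ()) and a rule B ← φ1, ..., φk as ("B", (φ1, ..., φk)); the cycle guard is an implementation convenience, not part of the definition.

```python
def proves(R, goal, pending=frozenset()):
    """Sketch of the three inference rules of Definition 3 (propositional)."""
    head, premises = goal
    if premises:
        # Rule 3: R |- B <- B1,...,Bk  if  R u {B1,...,Bk} |- B
        return proves(frozenset(R) | frozenset(premises), (head, ()), pending)
    key = (frozenset(R), head)
    if key in pending:                 # guard against circular derivations
        return False
    for h, body in R:
        # Rules 1 and 2: B is in R (empty body), or B <- phi1..phik is in R
        # and each phi_i is provable.
        if h == head and all(proves(R, phi, pending | {key}) for phi in body):
            return True
    return False

C_from_B1 = ("C", (("B1", ()),))
C_from_B2 = ("C", (("B2", ()),))
# R* from the text, {A <- (C <- B1), A <- (C <- B2)}, together with a
# rulebase containing C <- B1:
R = frozenset({("C", (("B1", ()),)), ("A", (C_from_B1,)), ("A", (C_from_B2,))})
print(proves(R, ("A", ())))  # True: hypothetically adding B1 yields C
```

The same procedure cannot derive D from {A ← (B ← C), D ← A, D ← C}, even though D follows classically, which illustrates the gap between these rules and classical semantics.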
That is, although they are clearly sound with respect to classical logic, they are not complete. To see this, consider the rulebase {A ← (B ← C), D ← A, D ← C}. Classically, D can be inferred from these three rules,⁴ but it is a simple exercise to see that D cannot be inferred using the above inference rules.⁵

5 Intuitionistic Logic

The rules of hypothetical inference defined above are non-classical. Indeed, they were introduced precisely to overcome the shortcomings of classical logic described in section 3.2. The question thus arises as to the nature of their semantics. In fact, they form a subset of intuitionistic logic. This section provides a brief development of intuitionistic logic adapted from [Fitting, 1969] and [McCarty, 1988a].

Definition 4 Suppose L is a finite or countably infinite set of propositional atoms. A substate is a subset of L, and an intuitionistic structure is a set of substates. Furthermore, if s1 and s2 are substates, then s1 ≤ s2 iff s1 ⊆ s2.

Note that an intuitionistic structure is really a Kripke structure (M, R, π), where M is the set of substates, the access relation R is the subset relation, and the truth-assignment function π is given by

π(A) = {s | s ∈ M and A ∈ s}

Since R and π are trivial, we do not make them explicit.

⁴To see this, note that the two Horn rules are equivalent to D ← (A ∨ C), and the embedded implication is equivalent to (A ∨ C) ∧ (A ∨ ¬B).
⁵Because there are no rules for inferring B or C.

Definition 5 (Satisfaction) Suppose ψ is a formula, M is an intuitionistic structure, and s is a substate of M.
Then s, M ⊨ ψ is read "M satisfies ψ at s," and is defined recursively as follows:

• If A is atomic, then s, M ⊨ A iff A ∈ s
• s, M ⊨ ψ1 ∧ ψ2 iff s, M ⊨ ψ1 and s, M ⊨ ψ2
• s, M ⊨ ψ1 ∨ ψ2 iff s, M ⊨ ψ1 or s, M ⊨ ψ2
• s, M ⊨ ¬ψ iff r, M ⊭ ψ for all r ≥ s in M
• s, M ⊨ ψ2 ← ψ1 iff r, M ⊨ ψ1 implies r, M ⊨ ψ2, for all r ≥ s in M

Note that unlike classical logic, intuitionistic implication is not defined in terms of disjunction and negation. Rather, it has an independent semantic definition. This is why intuitionistic logic does not give rise to the problems mentioned in section 3.2.

Definition 6 (Models) M ⊨ ψ iff s, M ⊨ ψ for all substates s of M. In this case we say, "M satisfies ψ," or "M is a model of ψ."

Definition 7 (Entailment) Suppose ψ1 and ψ2 are formulas. Then ψ1 ⊨ ψ2 iff every model of ψ1 is also a model of ψ2.

6 Semantics

Several researchers have developed fixpoint semantics for inference rules like those of section 4. Miller, for instance, has developed fixpoint semantics for such rules based on intuitionistic and minimal logic [Miller, 1986]. McCarty has considered a larger class of formulas involving negation and embedded universal quantifiers,⁶ developing an intuitionistic fixpoint semantics and establishing interesting semantic results [McCarty, 1988a]. Recently, Manchanda has considered hypothetical deletions as well as additions, developing a fixpoint semantics based on dynamic logic [Manchanda, 1988]. Indeed, the use of fixpoint semantics to establish completeness results has been common in logic programming since the seminal work of [Apt and Van Emden, 1982] and [Van Emden and Kowalski, 1976].

This section presents an alternative development based on completeness techniques used in modal logic. The aim is to add a new perspective, and hopefully gain new insight into the intuitionistic basis of hypothetical reasoning. To this end, we introduce an intuitionistic structure called the canonical Kripke model.
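For a finite structure, the satisfaction clauses above can be checked by direct recursion. The sketch below uses an ad hoc formula encoding and a hand-built structure; it shows that an implication can fail at small substates even where the classical conditional would hold.

```python
def sat(s, M, f):
    """Check s, M |= f over an intuitionistic structure M (set of frozensets)."""
    op = f[0]
    if op == "atom":
        return f[1] in s
    if op == "and":
        return sat(s, M, f[1]) and sat(s, M, f[2])
    if op == "or":
        return sat(s, M, f[1]) or sat(s, M, f[2])
    if op == "not":   # true at s iff f[1] fails at every r >= s
        return all(not sat(r, M, f[1]) for r in M if s <= r)
    if op == "imp":   # f = ("imp", f1, f2) encodes f2 <- f1
        return all(sat(r, M, f[2]) for r in M if s <= r and sat(r, M, f[1]))
    raise ValueError(op)

M = {frozenset(), frozenset({"A"}), frozenset({"A", "B"})}
b_from_a = ("imp", ("atom", "A"), ("atom", "B"))
# Classically A -> B holds wherever A is false; intuitionistically the
# implication must persist to all larger substates, so here it holds only
# at {A, B}.
print([sorted(s) for s in M if sat(s, M, b_from_a)])  # [['A', 'B']]
```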
This structure, defined proof-theoretically, provides the necessary link between inference and semantics. It also plays a central semantic role, analogous to that of the unique minimal model in Horn-clause logic.

Finally, we compare the intuitionistic semantics of embedded implications to the classical semantics of Horn clauses, outlining the close relationship that exists between them. In particular, some well-known properties of Horn clauses are shown to be a special case of properties of the canonical Kripke model.

⁶A quantifier is embedded if it appears in the premise of a rule, as in A(z) ← ∀y [P(z, y) ← C(z, y)]

6.1 Soundness and Completeness

To show that the hypothetical inference rules of section 4 are intuitionistic, one must prove that they are sound and complete with respect to intuitionistic semantics. In particular, one must prove the following two theorems:

Theorem 2 (Soundness) If R is a set of embedded implications and φ is a Horn rule, then if R ⊢ φ then R ⊨ φ.

Theorem 3 (Completeness) If R is a set of embedded implications and φ is a Horn rule, then if R ⊨ φ then R ⊢ φ.

Soundness is straightforward and follows from modus ponens and the deduction theorem. Proving completeness is more complex. The approach taken here is an adaptation of techniques used in modal logic and centers on the notion of a canonical model [Chellas, 1980]. In particular, given a set of embedded implications R,

1. Construct an intuitionistic structure M_R, called the canonical Kripke model of R.
2. Show that M_R ⊨ R.
3. Show that if M_R ⊨ A then R ⊢ A, for every atom A.

Thus, if R ⊨ A, then A is true in all models of R. In particular, it is true in M_R, and so by point 3, R ⊢ A. This establishes the completeness of atomic inference; i.e., R ⊨ A ⇒ R ⊢ A. By the deduction theorem, however, it follows that R ⊨ φ ⇒ R ⊢ φ for any Horn rule φ. This would prove Theorem 3.
The central question, however, is how to construct the canonical Kripke model M_R. We define it to be the range of a proof-theoretic operator cl_R. This operator thus provides the essential link between hypothetical inference and intuitionistic semantics.

Definition 8 If R is a set of embedded implications constructed from the atoms in L, and s is a (possibly infinite) subset of L, then cl_R(s) is the atomic closure of R and s, and is defined as follows:

cl_R(s) = {A ∈ L | R ∪ s ⊢ A}

Definition 9 If R is a set of embedded implications constructed from the atoms in L, then M_R is the canonical Kripke model of R, and is defined as follows:

M_R = {cl_R(s) | s ⊆ L}

This definition establishes point 1 above. Points 2 and 3 follow from these definitions in a straightforward way (details may be found in [Bonner, 1988b]). Thus, the rules of hypothetical inference defined in section 4 are sound and complete with respect to intuitionistic semantics. In other words, this restricted form of hypothetical reasoning is intuitionistic reasoning.

6.2 Semantic Properties

Although the inference system of section 4 is intuitionistic, it is not equivalent to the full intuitionistic logic. Disjunctions, for instance, cannot be expressed. This section describes the semantic properties of this subset of intuitionistic logic. In particular, it is shown that hypothetical embedded implications have properties similar to those of Horn clauses. These properties are sometimes cited as the basis for the computational attractiveness of Prolog [Makowsky, 1986; McCarty, 1988a], suggesting that embedded implications may also be attractive as a logic programming language.

The first results are fundamental properties of the canonical Kripke model.

Theorem 4 M_R has a unique minimal substate s*. That is, if s ∈ M_R then s* ≤ s.

Theorem 5 M_R is the unique maximal model of R. That is, if M ⊨ R then M ⊆ M_R.

Theorem 4 follows immediately from the definition of M_R by setting s* = cl_R(∅).
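For a finite propositional rulebase, Definitions 8 and 9 can be computed exhaustively. The sketch below restricts R to Horn rules, for which ⊢ is ordinary forward chaining; the rulebase itself is illustrative.

```python
from itertools import chain, combinations

def cl(R, s):
    """cl_R(s): atoms derivable from Horn rules R (head, body) plus facts s."""
    known = set(s)
    changed = True
    while changed:
        changed = False
        for head, body in R:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return frozenset(known)

def canonical_model(R, atoms):
    """M_R = { cl_R(s) : s a subset of the atoms } (finite case)."""
    atoms = sorted(atoms)
    subsets = chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))
    return {cl(R, s) for s in subsets}

R = [("C", ("B1",)), ("C", ("B2",))]          # C <- B1 and C <- B2
M_R = canonical_model(R, {"B1", "B2", "C"})
print(sorted(sorted(s) for s in M_R))
# [[], ['B1', 'B2', 'C'], ['B1', 'C'], ['B2', 'C'], ['C']]
```

The empty substate cl_R(∅) is the unique minimal one (Theorem 4), and intersecting any two substates, e.g. {B1, C} ∩ {B2, C} = {C}, again lands in M_R.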
It can be generalized, however, to the substate intersection property:⁷

Theorem 6 The intersection of a collection of substates in M_R is itself a substate in M_R. That is, if M ⊆ M_R then ∩M ∈ M_R.

The existence of a unique maximal model having the substate intersection property was first established by McCarty in [McCarty, 1988a]. Indeed, McCarty defines an intuitionistic structure called K* which he shows is the unique maximal model of a set of embedded implications. Theorem 5 assures us that McCarty's K* is identical to our M_R, at least in the propositional case. Conceptually, however, the main difference between them is that K* is defined semantically whereas M_R is defined proof-theoretically.

Theorems 4 and 5 have implications for inference. In particular, from Theorem 4 it follows that the canonical Kripke model contains all the information necessary to perform hypothetical inference. That is,

Corollary 1 If φ is a Horn rule, then R ⊢ φ iff M_R ⊨ φ.

This offers a semantic interpretation of negation-as-failure: φ cannot be inferred from R iff φ is not true in the maximal model of R. Note the similarity of this to the semantics of failure in Horn-clause logic: an atom cannot be inferred from a set of Horn clauses iff it is not true in the minimal model. In this sense, the canonical Kripke model is an intuitionistic analogue of the unique minimal model of Horn-clause logic.

This analogy can be taken one step further. Because of the unique minimal model, Horn clauses have the attractive property that they entail a disjunction of atoms iff they entail one of the atoms individually. Similarly, as the next corollary shows, a set of embedded implications entails a disjunction of Horn rules iff it entails one of the Horn rules individually.

Corollary 2 If φ1, ..., φm are Horn rules, then R ⊨ φ1 ∨ ... ∨ φm iff R ⊨ φi for some i.
This corollary means that a theorem prover for intuitionistic embedded implications does not need extensive modification to deal with disjunctive goals. Indeed, the disjuncts are non-interacting, and a theorem prover can work on each one separately. This suggests adding the following rule of hypothetical inference to those of section 4:

• For any collection of Horn rules φ1, ..., φm: R ⊢ φ1 ∨ ... ∨ φm if R ⊢ φi for some i.

⁷In particular, the intersection of all substates of M_R is the unique minimal substate.

References

[Apt and Van Emden, 1982] K.R. Apt and M.H. Van Emden. Contributions to the theory of logic programming. Journal of the ACM, 29(3):841-862, 1982.
[Bonner, 1988a] A.J. Bonner. Hypothetical Datalog: Complexity and Expressibility. Technical Report DCS-TR-231, Department of Computer Science, Rutgers University, New Brunswick, NJ 08903, 1988.
[Bonner, 1988b] A.J. Bonner. A Logic for Hypothetical Reasoning. Technical Report DCS-TR-230, Department of Computer Science, Rutgers University, New Brunswick, NJ 08903, 1988.
[Chellas, 1980] B.F. Chellas. Modal Logic: an Introduction. Cambridge University Press, 1980.
[Fitting, 1969] M.C. Fitting. Intuitionistic Logic, Model Theory and Forcing. North-Holland, 1969.
[Gabbay, 1985] D.M. Gabbay. N-Prolog: an extension of Prolog with hypothetical implications. II. Logical foundations and negation as failure. Journal of Logic Programming, 2(4):251-283, 1985.
[Gabbay and Reyle, 1984] D.M. Gabbay and U. Reyle. N-Prolog: an extension of Prolog with hypothetical implications. I. Journal of Logic Programming, 1(4):319-355, 1984.
[Ginsberg, 1986] M.L. Ginsberg. Counterfactuals. Artificial Intelligence, 30(1):35-79, 1986.
[Kowalski, 1979] R. Kowalski. Logic for Problem Solving. North-Holland, 1979.

6.3 Relationship to Horn Logic

The intuitionistic semantics of embedded implications appears to have little relation to the classical semantics of Horn clauses.
This would be surprising, since Horn rules are a special case of embedded implications both syntactically and proof-theoretically. This section resolves the apparent incompatibility, showing that when R is a set of Horn rules, its classical and intuitionistic semantics are closely related. Indeed, many of the well-known properties of classical Horn clauses, such as the existence of a unique minimal model, are special cases of the properties of canonical Kripke models.

Firstly, we note that it does not matter whether Horn rules are treated classically or intuitionistically. In both cases, the inference rules of section 4 form a sound and complete inference system. That is,⁸

Corollary 3 If R is a set of Horn rules, and φ is a Horn rule, then R ⊨_i φ iff R ⊨_c φ.

Secondly, we note that a single classical model can be interpreted as an intuitionistic substate, and that a collection of classical models can be interpreted as an intuitionistic model. This forms the basis of the following theorem.

Corollary 4 Suppose R is a set of Horn rules. Then M is an intuitionistic model of R iff M is a collection of classical Herbrand models of R.

In particular, the canonical Kripke model M_R is the collection of all classical Herbrand models of R. Many semantic properties of classical Horn clauses can now be seen as special cases of the properties of canonical Kripke models. For instance, the model intersection property of Horn clauses is a special case of the substate intersection property of canonical Kripke models; and the unique minimal model property is a special case of the unique minimal substate property. Consider also the property that a set of Horn clauses entails a disjunction of atoms iff it entails one of the atoms individually. This is a special case of Corollary 2. Finally, in Horn-clause logic, the unique minimal model plays a central semantic role: an atom is entailed by a set of Horn clauses iff it is in the unique minimal model.
This property too is a special case of a more general property of canonical Kripke models:

Corollary 5 If R is a set of embedded implications, and A is an atom, then R ⊨ A iff A is in the unique minimal substate of M_R.

Acknowledgements

The work of Thorne McCarty on the intuitionistic semantics of embedded implications was the stimulus for much of this work, and the work on rulebase independence would not have been possible without the many discussions with Tomasz Imielinski. Armand Prieditis provided helpful and timely comments on the presentation.

⁸Here, ⊨_i stands for intuitionistic entailment and ⊨_c stands for classical entailment.

[Makowsky, 1986] J.A. Makowsky. Why Horn clauses matter in computer science: initial structures and generic examples. Journal of Computer and System Sciences, 34(2/3):266-292, 1986.
[Manchanda, 1988] Sanjay Manchanda. A Dynamic Logic Programming Language for Relational Updates. PhD thesis, The University of Arizona, Tucson, Arizona 85721, January 1988.
[McCarty, 1981] L.T. McCarty. The representation of an evolving system of legal concepts. In Proceedings of the Seventh IJCAI, pages 246-253, 1981.
[McCarty, 1988a] L.T. McCarty. Clausal intuitionistic logic. I. Fixed-point semantics. Journal of Logic Programming, 5(1):1-31, 1988.
[McCarty, 1988b] L.T. McCarty. Clausal intuitionistic logic. II. Tableau proof procedures. Journal of Logic Programming, 1988. To appear.
[Miller, 1986] D. Miller. A logical analysis of modules in logic programming. In Proceedings of the Third IEEE Symposium on Logic Programming, Salt Lake City, Utah, September 1986.
[Van Emden and Kowalski, 1976] M.H. Van Emden and R.A. Kowalski. The semantics of predicate logic as a programming language. Journal of the ACM, 23(4):733-742, 1976.
[Warren, 1984] D.S. Warren. Database updates in pure Prolog. In Proceedings of the International Conference on Fifth Generation Computer Systems, pages 244-253, 1984.
Robert M. MacGregor
USC/Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292
macgreg@vaxa.isi.edu

Abstract

This paper describes the design of a pattern matcher for a knowledge representation system called LOOM. The pattern matcher has a very rich pattern-forming language, and is logic-based, with a deductive mechanism which includes a truth-maintenance component as an integral part of the pattern-matching logic. The technology behind the LOOM matcher uses an inference engine called a classifier to perform the matches. The LOOM matcher is more expressive and more complete than previous classification-based pattern matchers, and is expected to be significantly more efficient.

This paper describes the pattern-matching facility that has been designed for a knowledge representation system called LOOM [MacGregor and Bates, 1987]. LOOM belongs to the KL-ONE [Brachman and Schmolze, 1985] family of knowledge representation systems. A distinctive pattern-matching architecture has been developed for some of these systems: they use an analytic inference engine called a classifier to perform the match operation. As we shall see later, these pattern matchers apply the results of logical deductions to the pattern-matching process; hence, we refer to them as deductive pattern matchers.

In the classification-based approach to pattern matching, an instance is matched to a pattern by first abstracting it, and then classifying the abstraction. This strategy is employed by two recent systems, KL-TWO [Vilain, 1985] and BACK [Luck et al., 1987; Nebel and Luck, 1987]. In this paper, we describe an extension of this approach which (1) is deductively more powerful than, and (2) is expected to be more efficient than, the strategies used in these earlier systems.
In many KL-ONE-style knowledge representation systems, two languages are provided for expressing knowledge: a concept language and a fact language.¹ The concept language expresses knowledge about unary relations (which we call concepts) and binary relations (which we simply call relations). The fact language states facts about individuals. If the assertions about an individual I collectively satisfy the definition of some concept C, then I is an instance of C. In the classification-based approach, a concept P is associated with a pattern P(x); thus, matching an individual to a pattern corresponds to recognizing an instantiation relationship between the individual and the corresponding concept.

Section 2 describes LOOM's language for defining concepts/patterns; Section 3 introduces the notion of the type of a database individual, and illustrates how an individual's type can change as facts are asserted or retracted; Section 4 opens with an outline of the deductive architecture of the LOOM matcher, and then illustrates it with an extended example; Section 5 shows how the expressiveness of the pattern language increases when implications between concepts/patterns can influence the pattern semantics; Section 6 briefly suggests how LOOM's pattern matcher can be employed to drive a production-rule system; Section 7 contains a discussion of some of the practical implications of the LOOM architecture.

LOOM provides a relational algebra for creating definitions of concepts and relations. The operators defconcept and defrelation are invoked to bind a symbol to a relational algebra expression; binding a symbol to a concept (or relation) expression effectively defines a new predicate symbol. For example, after evaluating the defconcept for Person in Figure 1, we can employ Person in our fact language: (tell (Person Bill)) asserts that Bill is a Person, while (ask (Person Bill)) tests to see if Bill satisfies Person.
The language contains three classes of elementary concept expressions: (1) The term :primitive denotes a unique,² primitive³ concept; (2) A role-restriction quantifier can be applied to a relation to generate a role-restricting concept defined by the restriction placed on that relation, e.g., the expression (:at-least 2 child) denotes the concept such that the attached role child must have at least two role fillers. The language provides the numeric quantifiers :at-least, :at-most, and :exactly, and a universal quantifier :all; (3) A role-relating concept expression specifies a relationship which constrains

*This research was sponsored by the Defense Advanced Research Projects Agency under contract MDA903-81-C-0335.
¹We are referring here to hybrid knowledge representation systems; a hybrid system incorporates multiple reasoners which apply to separate partitions of the knowledge space (see [Vilain, 1985; Brachman et al., 1983]).
²Formally, each appearance of the term :primitive in a sequence of concept expressions denotes a different concept, i.e., the ith appearance of :primitive represents the ith primitive concept.
³A concept or relation is primitive if it cannot be completely characterized in terms of other concepts (relations).
the fillers of two or more of the concept's roles, e.g., the expression (= input-voltage output-voltage) specifies the concept such that the fillers of the roles input-voltage and output-voltage have the same value.

(defconcept Person :primitive)
(defconcept Male (:and Person :primitive))
(defconcept Female (:and Person :primitive))
(defconcept Married (:and Person :primitive))
(defrelation child :primitive
  (:implies (:domain Person) (:range Person))
  :closed-world)
(defrelation daughter (:and child (:range Female)))
(defconcept Father (:and Male (:at-least 1 child)))
(defconcept Successful-Father (:and Father (:all daughter Married)))

Figure 1: Concept and Relation Definitions

Compound expressions are built up from simple expressions by applying the operators :and, :or, and :not, which correspond to the operations of intersection, union, and relative complement, respectively. An elementary relation expression consists of either the term :primitive (which in this context denotes a unique relation instead of a concept) or the projection of a relation defined by restricting its domain or its range. For example, in Figure 1, the relation daughter is defined by intersecting the relation child with the relation defined by restricting the range of the universal binary relation to the concept Female. The language provides specialized operators other than those just mentioned (e.g., the unary operator :inverse which generates the inverse of a relation). It also contains a special syntax for defining concepts representing sets or intervals, e.g., the set of colors {Red, Blue, Yellow, ...} or the range of numbers greater than 4.

In Figure 1, Father is defined as "a Male with at least one child", while a Successful-Father is "a Father all of whose daughters are Married." These declarations define the predicates Father and Successful-Father. If we assert in our fact language (tell (Male Bill) (child Bill Mary)), then the query (ask (Father Bill)) returns TRUE.
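The Father example can be mimicked with a toy interpreter for a fragment of the concept language (:and, :at-least, :all). This sketch uses closed-world role fillers for simplicity, whereas LOOM's semantics is open-world and three-valued; the data and helper functions are illustrative, not LOOM's implementation.

```python
unary = {"Bill": {"Male"}, "Mary": set()}            # asserted unary predicates
roles = {("Bill", "child"): {"Mary"}}                # asserted role fillers

def fillers(ind, role):
    return roles.get((ind, role), set())

def holds(ind, concept, defs):
    if isinstance(concept, str):                     # named concept or primitive
        if concept in defs:
            return holds(ind, defs[concept], defs)
        return concept in unary.get(ind, set())
    op = concept[0]
    if op == ":and":
        return all(holds(ind, c, defs) for c in concept[1:])
    if op == ":at-least":
        return len(fillers(ind, concept[2])) >= concept[1]
    if op == ":all":                                 # closed-world reading
        return all(holds(f, concept[2], defs) for f in fillers(ind, concept[1]))
    raise ValueError(op)

defs = {"Father": (":and", "Male", (":at-least", 1, "child"))}
print(holds("Bill", "Father", defs))  # True
```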
Figure 3 provides a formal semantics for the expressions illustrated in Figure 1.

The defconcept and defrelation operators permit non-terminological knowledge to be asserted about a newly-defined concept or relation. The :implies clause in the definition of child (see Figure 1) asserts that its domain and range fillers must satisfy the predicate Person. Section 5 contains further examples illustrating the use of :implies. Covering and disjointness relationships can also be asserted between concepts and relations. We note that the ability to attach a name to a pattern, so that it can be referenced⁴ within other patterns, is absent in most rule-based languages.

⁴Note: LOOM requires that references between patterns not form a cycle.

Figure 2: Tree of Classified Concepts (Person at the root; Married, Male, and Female below it; Father below Male; Successful-Father below Father)

LOOM adopts an open-world semantics, and employs a three-valued (TRUE, UNKNOWN, FALSE) logic. The keyword :closed-world in the definition of child in Figure 1 indicates that closed-world semantics apply when determining membership in the relation child.

3 Types

LOOM allocates a database object to represent each individual about which one or more facts have been asserted. The primary function of the LOOM pattern matcher is to maintain an up-to-date record of all instantiates relationships between individuals in the database and concepts whose definitions they match. The intersection of all concepts matched by a particular individual is called the type of the individual. An encoding of the type, consisting of a list of the most-specific concepts belonging to the type, is attached to each database object. Figure 2 shows the hierarchy of concepts defined by the definitions in Figure 1. If a database object matches the concepts Married, Person, and Female, then its type is (:and Married Person Female), while the list (Married Female) represents an encoding of that type.
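Computing the encoded type from the set of matched concepts amounts to discarding any concept that strictly subsumes another matched concept. A sketch, with a subsumption table read off the Figure 2 hierarchy:

```python
def encode_type(matched, subsumers):
    """Keep only the most-specific concepts: drop c if c appears among the
    strict subsumers of some other matched concept d."""
    return {c for c in matched
            if not any(c in subsumers.get(d, set()) for d in matched)}

# subsumers[d] = strict subsumers of d, per the Figure 2 hierarchy
subsumers = {"Male": {"Person"}, "Female": {"Person"}, "Married": {"Person"},
             "Father": {"Person", "Male"},
             "Successful-Father": {"Person", "Male", "Father"}}
print(sorted(encode_type({"Married", "Person", "Female"}, subsumers)))
# ['Female', 'Married']
```

Person is dropped because it subsumes both Married and Female, reproducing the encoding (Married Female) from the text.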
As facts about database individuals are asserted or retracted, the types of those individuals will change. The left side of Figure 4 shows a sequence of assertions and retractions made to an (initially empty) database, while the right side of the figure shows the changes in the (encoded) types of the individuals that result from the updates.

The next section walks us through this sequence of updates, and discusses how the computation of these types is accomplished.

4 Deductive Cycle

This section outlines the deductive architecture of the pattern matcher, and illustrates how the deductive machinery works by tracing the deductions applied to the database individual Fred after each of the assertions shown in Figure 4.

A concept A subsumes a concept B if the definitions of A and B logically imply that members of B must also be members of A. For example, in our knowledge base, Male(X) implies Person(X) for all individuals X; hence, Person subsumes Male. An important feature of classification-based systems such as LOOM is their ability to compute subsumption relationships between concepts and relations.

Expression e                               Interpretation [[e]]
:primitive (concept)                       a unique primitive concept
:primitive (relation)                      a unique primitive relation
(:and C1 C2)                               λx. [[C1]](x) ∧ [[C2]](x)
(:and R1 R2)                               λxy. [[R1]](x, y) ∧ [[R2]](x, y)
(:at-least 1 R)                            λx. ∃y. [[R]](x, y)
(:exactly 1 R)                             λx. ∃y. [[R]](x, y) ∧ ∀yz. ([[R]](x, y) ∧ [[R]](x, z)) → y = z
(:all R C)                                 λx. ∀y. [[R]](x, y) → [[C]](y)
(:domain C)                                λxy. [[C]](x)
(:range C)                                 λxy. [[C]](y)
(defconcept C ... (:implies C1 C2))        ∀x. [[C]](x) → [[C1]](x) ∧ [[C2]](x)
(defrelation R ... (:implies R1 R2))       ∀xy. [[R]](x, y) → [[R1]](x, y) ∧ [[R2]](x, y)

Figure 3: Semantics of Some Term-Forming Expressions

A classifier [Schmolze and Lipkis, 1983]
This procedure is made efficient by organizing all concepts into a partial lattice in which more- general concepts (subsumers) are placed above less-general concepts (subsumees). The tree shown in Figure 2 shows the result of classifying each of the concept definitions from Figure 1.5 The search algorithm employed by the LOOM classifier for applying subsumption tests was developed by Tom Lip- kis for the NIKL ( see [Moser, 19831) classifier. Here we list three properties which serve as a partial characterization of the Lipkis algorithm. We will use the word test to mean Uperform a subsumption test on”. Let N stand for the concept being classified: 1. Don’t test a concept C if a descendant (subsumee) of C is known to subsume N; 2. Don’t test a concept C unless at least one parent (imme- diate subsumer) of C is known to subsume N; 3. Don’t test primitive concepts. When classifying large networks, the observed effect of these properties is that only a small percentage of all clas- sified concepts are tested during the classification of a new concept. Within the classification-based paradigm, a concept is treated as if it were a pattern. We will say an individual I matches a concept C if I is an instance of C. It follows that we equate the process of finding all concepts which I matches with the process of computing the type of I. A complete pattern-matching system requires the following facility-after every change to the database, we want to compute matches for alI database individuals against all concepts, i.e., we need to continuously maintain the type of each individual. The program that accomplishes this task is called a recognizer.6 5Note: The LOOM classifier also builds a partial lattice re- flecting all subsumption relationships computed between (bi- nary) relations. These are not shown in Figure 2. GThe term realizer is sometimes used in place of the term recognizer. 
We will call two database individuals I1 and I2 adjacent if for some relation r, the predication r(I1, I2) has been asserted. Three types of database modifications can cause the type of a database individual I to change: (i) asserting or retracting a unary predicate on I; (ii) changing the value of one of I's roles; (iii) changing the type of an individual adjacent to I. Applying the procedure adjust-individual-type to an individual I will

1. recompute I's type (if necessary), and
2. call adjust-individual-type recursively, if I's type has changed, to adjust the types of individuals adjacent to I.

The job of the recognizer is to apply the procedure adjust-individual-type after each database update to any individuals affected by a class (i) or class (ii) change.

A concept expression which is matched by an instance I is called an abstraction of I. The procedure adjust-individual-type employs a strategy called abstraction/classification (A/C):

"to find those concepts which are matched by an individual I, we form an abstraction A of I, and then classify A. I necessarily matches all concepts which subsume A."

The straightforward application of the A/C strategy represents an elegant but impractical method for computing the type of an individual: if the abstraction A chosen for I is not sufficiently complete, then only some of the concepts matching I will be found by classifying A. However, the abstraction-generating schemes used in KL-TWO and BACK are designed to match only a subset of the possible concept expressions, because the cost of generating a sufficiently-detailed abstraction is prohibitive.⁷

The solution is to abandon a purely forward-chaining A/C strategy in favor of one in which the classifier, while classifying an abstraction, can ask questions about the individual behind the abstraction being classified; the questions represent backward chaining. In the LOOM scheme,
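The propagation step of adjust-individual-type can be sketched as follows. The concept definitions (Parent, Grandparent), the data, and compute_type are all illustrative stand-ins for the full abstraction/classification machinery.

```python
def adjust_individual_type(ind, db):
    """Recompute ind's type; if it changed, recursively adjust the
    individuals adjacent to ind (assumes the dependency graph is acyclic)."""
    new_type = compute_type(ind, db)
    if new_type == db["types"].get(ind):
        return                                     # no change: stop propagating
    db["types"][ind] = new_type
    for other in db["adjacent"].get(ind, set()):
        adjust_individual_type(other, db)

def compute_type(ind, db):
    kids = db["children"].get(ind, set())
    t = {"Parent"} if kids else set()
    if any("Parent" in db["types"].get(k, set()) for k in kids):
        t.add("Grandparent")
    return frozenset(t)

db = {"types": {}, "children": {"ann": {"bob"}, "bob": set()},
      "adjacent": {"bob": {"ann"}}}               # bob's type affects ann
adjust_individual_type("ann", db)                 # ann: Parent
db["children"]["bob"].add("cara")                 # a class (ii) change to bob
adjust_individual_type("bob", db)                 # propagates: ann gains Grandparent
print(sorted(db["types"]["ann"]))  # ['Grandparent', 'Parent']
```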
In the LOOM scheme, the classifier, while classifying an abstraction A of I, can interrogate I directly about details missing in the abstraction A. Rather than generating a "complete" abstraction to begin with, a sufficiently-detailed abstraction for I is built up incrementally during the A/C process.

⁷In particular, the abstractions they generate do not include the role-relating concepts defined in section 2.

MacGregor 405

Step  Encoded Type(s)
A1. (tell (Married Fred))            Fred: (Married)
A2. (tell (child Fred Suzy))         Suzy: (Person)
A3. (tell (Male Fred))               Fred: (Father Married)
A4. (tell (Female Suzy))             Suzy: (Female)
A5. (tell (Married Suzy))            Fred: (Successful-Father Married); Suzy: (Married Female)
A6. (forget (Married Suzy))          Fred: (Father Married); Suzy: (Female)
A7. (tell (Successful-Father Fred))  Fred: (Successful-Father Married); Suzy: (Married Female)

Figure 4: Database Assertions and Retractions

The existing implementations of abstraction/classification pattern-matchers are relatively inefficient: they recompute a database object's type each time it is modified, i.e., a classification occurs once per database update. Also, there may be facts about an individual which get abstracted, but are not used during the classification step; these components of the abstraction represent wasted computation. The LOOM scheme avoids these problems. Initially, the abstraction of an individual consists only of the conjunction of all unary predicates (concepts) asserted for that individual. While computing the type of a database object, three lists are attached to the object: TYPE is a list of the most-specific concepts matched so far; HITS is a list of questions (phrased as algebraic expressions) which received positive answers; MISSES contains questions which received non-positive answers. Whenever the value of a role R of a database individual is modified, the expressions in the individual's HITS and MISSES lists are inspected.
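This inspection can be sketched as follows; an illustrative Python fragment in which the cached questions are modeled as predicates over the object (all names are hypothetical, not LOOM's API).

```python
def inspect_after_role_change(obj):
    """Re-ask the cached questions after one of obj's roles changes.
    obj['hits'] holds questions that previously answered positive,
    obj['misses'] those that did not. Only a flipped answer forces
    a reclassification (represented here by a flag)."""
    hits_still_positive = all(q(obj) for q in obj['hits'])
    misses_still_negative = all(not q(obj) for q in obj['misses'])
    if hits_still_positive and misses_still_negative:
        return False                  # cached TYPE remains valid
    obj['needs_reclassify'] = True    # stand-in for re-running the classifier
    return True
```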
If the answers to the HITS expressions are still positive, while the answers to MISSES questions are not positive, then no recomputation of the type is necessary. Each augmentation of the HITS list becomes an augmentation of the abstraction as well.

Summarizing, the LOOM pattern-matcher embodies three ideas which distinguish it from the earlier abstraction/classification pattern matchers:

1. The algorithm generates an abstraction incrementally rather than all at once; this is possible because
2. The classification step mixes backward chaining (the questions) with forward chaining (the normal mode of a classifier);
3. The addition of the HITS and MISSES lists significantly reduces the type-computation overhead, i.e., it reduces the frequency of classification.

Here we trace the list activity as Fred's type is maintained in the presence of the assertions in Figure 4. This trace illustrates the points just made.

A1. "Fred is Married": The initial abstraction of Fred is (defconcept Fred (:and Married :primitive)). The classifier makes no subsumption tests at all. Thus, the classifier does not match Fred against the "role-bearing" concepts Father and Successful-Father, and hence no questions about role values are posed. The resulting state of Fred is: TYPE: (Married); HITS: (); MISSES: ().

A2. "Suzy is Fred's child": Fred's child role has been modified (it is now a set containing Suzy), but there are still no questions in HITS and MISSES pertaining to the role child, so Fred's type is not recomputed.

A3. "Fred is Male": The abstraction for Fred now becomes (defconcept Fred (:and Male Married :primitive)). The abstraction is tested against the concept Father; the classifier asks Fred the question (:at-least 1 child), i.e., "Does Fred have at least one child?" The answer comes back TRUE, the test succeeds, and we test next against Successful-Father. The classifier asks (:all daughter Married).
Fred's child Suzy may or may not be Female, and may or may not be Married, so the answer is UNKNOWN. Fred's state is now: TYPE: (Father Married); HITS: ((:at-least 1 child)); MISSES: ((:all daughter Married)).

A4. "Suzy is Female": Suzy's type changes from (Person) to (Female). Suzy notifies all adjacent database objects, including Fred, that its type has changed. Fred determines that Suzy is now his daughter as well as his child. However, the answer to (:all daughter Married) is still UNKNOWN, so Fred's type is not recomputed.

A5. "Suzy is Married": Suzy recomputes its type, and again notifies Fred. This time the answer to (:all daughter Married) is TRUE. The abstraction (defconcept Fred (:and Male Married :primitive (:all daughter Married))) is classified; the resulting state is TYPE: (Successful-Father Married); HITS: ((:at-least 1 child) (:all daughter Married)); MISSES: ().

A6. Retract "Suzy is Married": Retracting the assertion (Married Suzy) causes Suzy's type to revert back to Female. Suzy once more notifies Fred that its type changed. Fred checks its HITS and MISSES lists and discovers that (:all daughter Married) is no longer true; Fred's type is recomputed from an abstraction built from those members of HITS which are still true: (defconcept Fred (:and Male Married :primitive (:at-least 1 child))). Fred's state now matches the previous state after the assertion A4.

A7. "Fred is a Successful-Father": Here we see an example of forward deduction rather than backward deduction: the assertion (Successful-Father Fred) not only causes Fred to recompute its type; Fred broadcasts to all fillers of the role daughter (in this case Suzy) that they now satisfy the predicate Married. This causes Suzy to revise its status. Hence, the types for Fred and Suzy now match the state after the assertion A5.
However, because Successful-Father was asserted directly, and because Successful-Father implies Father, during this pass the classifier did not ask Fred any questions about the roles child and daughter, and hence the HITS list is different than it was after assertion A5. Fred's state is now TYPE: (Successful-Father Married); HITS: ((:at-least 1 child)); MISSES: ().

A couple of points are worth noting. First, the kind of inference observed after assertion A7, which we have called forward inference, is not performed by most pattern matchers. To achieve this type of reasoning necessitates (i) that the pattern-defining language be rich enough to express such a logical dependency (most aren't); and (ii) that the pattern-matcher exhibit a facility for deductive inference as well as ordinary matching. Second, the communication which takes place between database objects accomplishes the task of truth maintenance; this is discussed further in section 7.

5 Implications

KL-ONE-style languages draw a distinction between terminological, or term-defining, knowledge and all other knowledge (called assertional knowledge, see [Brachman et al., 1983]). A classifier computes the subsumption relationship between a pair of concepts solely on the basis of the terminologically-specified definitions of the two concepts. Previous classification-based pattern matchers have equated the language used to express patterns with the terminological language used to define concepts. LOOM breaks this habit by permitting some classes of assertional knowledge to contribute to the pattern definitions: implications, covering relationships, and disjointness relationships.
This section provides an illustration of the additional deductive power that implications of the form "∀x. C(x) → D(x)" bring to our pattern matcher.⁸

⁸No collective agreement has been arrived at as to exactly where the boundary between terminological and assertional knowledge lies, but there seems to be general agreement that implications represent assertional knowledge.

L1. (defconcept List :primitive)
L2. (defconcept Null :primitive (:implies List))
L3. (defconcept Cons (:and :primitive (:exactly 1 car) (:exactly 1 cdr)))
L4. (defconcept Cons-List (:and Cons (:all cdr List)) (:implies List))

Figure 5: A Lisp List

To date, none of the terminological languages implemented for a KL-ONE-style system permit one to define recursive or self-referential concepts, such as the concept of a Lisp list. LOOM, however, is able to augment a terminological definition of the concept List with implications which permit its pattern matcher to recognize a List when one occurs in the database. Figure 5 illustrates how this can be done. The implication in line L2 provides the basis step, ∀x. Null(x) → List(x). The implication in line L4 supplies the inductive step, ∀x. (Cons(x) ∧ ∀y. (cdr(x, y) → List(y))) → List(x). Suppose we assert

(tell (Cons c1) (Cons c2) (cdr c1 c2) (cdr c2 nil) (Null nil))

First, the object nil will classify as an instance of Null; next the implication L2 will enable the deduction List(nil), so the type of nil is computed to be (Null List); next, the type of c2 will be computed as (Cons-List List), using the implication L4; finally, the type of c1 will become (Cons-List List). Hence, we have inferred (List c1).

To produce the inferences just illustrated, we added two extensions to the LOOM classifier. First, for each concept C in the classification hierarchy, LOOM computes a second concept representing the intersection of all concepts implied by C.
The intersection includes concepts found by inheriting implies relationships from subsumers of C, and concepts found by computing the reflexive-transitive-closure of the implies relationship at C. The second extension builds on the first: after computing the type of a database individual, the classifier intersects that type with all concepts implied by that type, and returns the intersection concept as the new type of the individual.

Although the LOOM architecture does not currently provide a production rule facility, it is designed for that possibility; [Yen et al., 1988] describes a production-rule language being built on top of LOOM. The scheme is straightforward: whenever the type of a database object changes, the old and new types are compared. Each concept missing in the old type but present in the new type corresponds to a newly-matched pattern. LOOM instantiates the database object with all production rules that have that pattern as a pre-condition (left-hand-side).

7 Discussion

In this section, we discuss some of the advantages, and one disadvantage, of LOOM's pattern-matching architecture. In particular, we contrast it with OPS5-style [Forgy, 1981] pattern matchers.

Expressiveness of the Pattern Language

LOOM's pattern language is both more and less expressive than the typical OPS5-style pattern language. The forte of KL-ONE-based languages is in representing and reasoning about roles: they provide a rich set of operators for describing set-valued roles, chains of roles, and relationships between roles. In addition to the subsumption lattice for concepts, they define a separate lattice for recording subsumption relationships between relations, which allows one to state, for example, that the role daughter is a specialization of the role child. LOOM's ability to express implication relationships between concepts permits the definition of recursive patterns; this was illustrated in section 5.
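That recursive-pattern capability can be illustrated with a small fixed-point sketch of the Lisp-list example of section 5. This is hypothetical Python, not LOOM's mechanism; it applies the two implications given for Null and Cons-List until no new List instances appear.

```python
def recognize_lists(cdr_of, nulls):
    """Apply Null(x) -> List(x) as the basis and
    Cons(x) & List(cdr(x)) -> List(x) as the inductive step,
    iterating to a fixed point. cdr_of maps each Cons cell to
    its cdr filler; nulls is the set of Null individuals."""
    lists = set(nulls)               # basis: every Null is a List
    changed = True
    while changed:
        changed = False
        for cell, cdr in cdr_of.items():
            if cell not in lists and cdr in lists:
                lists.add(cell)      # inductive step fires
                changed = True
    return lists
```

A circular cons chain never reaches a Null, so it is correctly left out of the fixed point.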
On the other hand, LOOM patterns correspond to logical expressions containing a single free variable, while OPS patterns can have multiple free variables. This feature permits the OPS pattern matcher to function as the sole control mechanism in an OPS program. The LOOM matcher is intended to be used in conjunction with a second programming language (as in, for example, [Yen et al., 1988]). The design of such a language (we currently use Lisp) is a topic for future research.

Truth Maintenance

LOOM's maintenance of types for all database objects effectively maintains truth-values for all unary predicates applied to all objects. To achieve this behavior, the pattern-matcher is augmented by a specialized truth-maintenance subsystem.

8 Conclusions

We have described the pattern-forming language and discussed the architecture of a classification-based deductive pattern matcher, one which performs deductive inferences during the course of the pattern matching process. Because in this scheme patterns are concept-based, they are not isolated entities; instead, patterns are connected to other patterns via direct reference, or by implication or other logical relationships. This results in a very rich pattern-forming language.

The pattern-matcher designed for the LOOM system represents an improvement over earlier matchers that have adopted the abstraction/classification approach. Most importantly, LOOM widens the scope of what kinds of patterns can be matched by (i) incorporating backward chaining into the abstraction/classification strategy, and (ii) inferring additional matches justified by reference to non-terminological knowledge, e.g., assertions of implications between concepts.
In addition, the LOOM matcher is expected to be more efficient than previous classification-based matchers because (i) its strategy of incrementally building up an abstraction avoids abstracting features which won't be referenced by any patterns, and (ii) it eliminates the necessity for recomputing the type of a database individual each time that that individual's attributes change.

Acknowledgements

In designing the classification and pattern-matching algorithms, I benefited greatly from discussions with Tom Lipkis, Ray Bates, Bernhard Nebel, Marc Vilain, and Kai von Luck. John Yen and Norm Sondheimer made significant contributions to the design of the LOOM language. I would like to thank Stuart Shapiro for his criticisms of an earlier draft of this paper, and Dave Brill for his help in preparing this paper.

References

[Brachman and Schmolze, 1985] R.J. Brachman and J.G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9:171-216, August 1985.

[Brachman et al., 1983] Ronald Brachman, Richard Fikes, and Hector Levesque. KRYPTON: a functional approach to knowledge representation. IEEE Computer, September 1983.

[Forgy, 1981] Charles L. Forgy. OPS5 Users Manual. Technical Report CMU-CS-81-135, Carnegie Mellon University, Pittsburgh, PA, 1981.

[Luck et al., 1987] K. von Luck, B. Nebel, C. Peltason, and A. Schmiedel. The Anatomy of the BACK System. Technical Report KIT Report 41, Technische Universität Berlin, January 1987.

[MacGregor and Bates, 1987] Robert MacGregor and Raymond Bates. The LOOM Knowledge Representation Language. Technical Report ISI/RS-87-188, USC/Information Sciences Institute, 1987.

[Moser, 1983] M. G. Moser. An overview of NIKL, the new implementation of KL-ONE. In Research in Natural Language Understanding, Bolt, Beranek, and Newman, Inc., Cambridge, MA, 1983. BBN Technical Report 5421.

[Nebel and Luck, 1987] B. Nebel and K. von Luck.
Issues of integration and balancing in hybrid knowledge representation systems. In K. Morik, editor, GWAI-87, pages 114-123, Springer, Berlin (Germany), 1987.

[Schmolze and Lipkis, 1983] James Schmolze and Thomas Lipkis. Classification in the KL-ONE knowledge representation system. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, 1983.

[Vilain, 1985] Marc Vilain. The restricted language architecture of a hybrid representation system. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 547-551, Los Angeles, CA, August 1985.

[Yen et al., 1988] John Yen, Robert Neches, and Robert MacGregor. Classification-based Programming: A Deep Integration of Frames and Rules. Technical Report ISI/RR-88-213, USC/Information Sciences Institute, March 1988.
Lin Padgham
LINKÖPING UNIVERSITY
Department of Computer and Information Science
S-581 83 Linköping, Sweden

Abstract

Typing schemes which allow inheritance from super to sub types are a common way of representing information about the world. There are various systems and theories which use such representations plus some inferencing rules to deduce properties of objects about which the system has only partial information. Many such systems have problems related to multiple inheritance, and have some difficulty in drawing conclusions which we as humans see as intuitively simple. We present a model of typing based on a lattice of feature descriptors. A type is represented by two important points in the lattice representing core and default information. The use of two points allows some information to be monotonic whilst other information is nonmonotonic. We give some operations which can be used in default reasoning about an object, based on knowledge about the relationships between the points in the lattice which are defined as types. We then work through some specific examples, showing the conclusions which we reach with this system. We compare the expressiveness of our system to some of the well known work in the area of default reasoning using inheritance.

Classification into categories and sub-categories, along with default reasoning about the properties of objects within these categories, is a common human activity. It allows for continuation of the reasoning process in situations where people could otherwise be paralyzed by lack of information, or by the overwhelming number of theoretical possibilities. There have been a number of systems built which attempt to capture the notion of inheritance from super to sub-types, and to simulate the human reasoning regarding characteristics of objects known to belong to a certain type. Some well known examples are FRL [Roberts and Goldstein, 77], NETL [Fahlman, 79] and TMOIS [Touretsky, 86].
These systems all have difficulties around issues to do with multiple inheritance combined with exceptions. They easily run into ambiguous situations where they cannot make a decision, or make an intuitively wrong decision. This is true even for questions which people would resolve easily and unambiguously.

We present a model of typing based on the splitting of a type into type default and type core, and on the formalism of a lattice, rather than the more usual notions of a tree or an acyclic directed graph ("tangled hierarchies" [Fahlman, 79]). Our model allows for representation of information which we believe people typically use in reasoning about defaults, and which is not representable in the above mentioned systems. This model plus the inference mechanism we suggest gives clear solutions to many of the problems which have previously been difficult. It also provides solutions in a consistent manner, without resorting to more complex inference mechanisms to deal with special situations.

We use the notion of a type being defined not as a node in a graph of super and subtypes, but rather as a collection of characteristics which we expect to see in an object of a particular type. It is our view that this is a natural description of what typing or classification is for humans. It is the association of certain characteristics together as a group. If we then look at the characteristics defining a type, we see that we are far more willing to override some of the characteristics than others. For this reason we identify two clusterings representing a type: the type core and the type default. The type core includes those characteristics which we regard as always present in objects of this type.¹ Only type descriptions which are "above"² this type core are considered to be subtypes of this type.
If we state that A is a subtype of B then we know unequivocally that the core descriptor for type A contains at least all the information that is contained in the core descriptor for B. The type default contains the information for typical objects of that type. The type default must of course contain at least the same information as is contained in the type core. An individual object can also be seen as a clustering of characteristics, which we can call the descriptor for that object.

¹Individual objects can always fall below even the type core. This is taken up further in an example. We are currently working on a suitable way of describing such objects in order to reason about them in the way intuitively desired.
²Note that specialization or more information is higher in the lattice, whereas it is lower in inheritance graphs.

Padgham 409
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

[Figure 1: Networks 1a and 1b (Clyde, Royal Elephant, African Elephant, Elephant, Grey Thing), marking Default and Core points; a crossed link indicates that the join of the linked points is inconsistent.]

Using this notion of two points to describe a type, let us look at a simple but classical problematic example.

2.1 Example 1

The network 1a with its redundant link (Clyde --> Elephant) is given by Touretsky [1987] as the sort of example that causes shortest path reasoning algorithms to fail, and is thus a justification for his shortest inferential distance (or onpath preemption) algorithm. Sandewall [1986] shows with network 1b that shortest inferential distance is also inadequate, and proposes an algorithm which Touretsky [1987] refers to as offpath preemption. Our approach gives the desired result in both examples, and is appealing in that the addition of extra information (links to and from African Elephant) does not require any change in reasoning strategy.
We can write the following relationships from our representation:

Core(Royal Elephant) ⊒ Core(Elephant)
Core(African Elephant) ⊒ Core(Elephant)
Default(Elephant) ⊒ Core(Grey thing)
Core(Royal Elephant) ⊔ Core(Grey thing) ⇒ K
Core(Royal Elephant) ∈ NOT(Core(Grey thing))
Core(Grey thing) ∈ NOT(Core(Royal Elephant))³
Desc(Clyde) ⊒ Core(African Elephant)
Desc(Clyde) ⊒ Core(Royal Elephant)

Our diagram defines a set of inheritance paths having a specific object (e.g. Clyde) as their first element, and which characterize properties the object may inherit. The inheritance paths are written in the form

n0 op n1 op n2 ... ni op ni+1 ...

where the operation op in the link "ni op ni+1" may be either of the following:

⊒  "ni ⊒ ni+1" may occur in the path if the corresponding relationship occurs in the diagram.
&  "Core(A) & Default(A)" may occur in the path, representing a default assumption that the object at the beginning of the path is not only an A, but is a typical A (in so far as A's typicality does not conflict with already known information).

Notice that path steps using ⊒ proceed downwards in the graph, and steps using & proceed upwards, thus adding extra information to n0 besides what is deductively available. The path may be terminated by "ni ∈ NOT(A)", indicating that ni is a member of the set of nodes NOT(A) = { x | x ⊔ A ⇒ K } (K is inconsistent in at least one feature value). A set of paths is said to be inconsistent if it contains both the step "⊒ A" and the step "∈ NOT(A)", and consistent otherwise. The full set of paths obtained by using the & operator wherever possible is often inconsistent. Consequently we only allow adding in of information following an & operation provided it is not inconsistent with information obtained at an earlier step.

³Note that we have represented the information that royal elephants must be non-grey, whereas only typical elephants are grey.
Abbreviating Core to Cr, Desc to Ds, and Default to Df, we can now infer the following about Clyde:

Ds(Clyde) ⊒ Cr(Af. Eleph.) ⊒ Cr(Eleph.)
... Cr(Af. Eleph.) ⊒ Cr(Eleph.) & Df(Eleph.) [ ⊒ Cr(Grey thing) ]⁴
Ds(Clyde) ⊒ Cr(Roy. Eleph.) ⊒ Cr(Eleph.)
... Cr(Roy. Eleph.) ∈ NOT(Cr(Grey thing))

⁴We show in [ ] the information which potentially could have been added in following the extension operation, but which is not added due to its being contradictory.

We see here that we have a single extension, with no default assumptions, giving the result that Clyde, in addition to being a Royal Elephant and an African Elephant, is an Elephant and NOT a Grey thing. If we had instead represented and reasoned about the information for the network 1a, we would simply have been missing the information:

Desc(Clyde) ⊒ Core(African Elephant)
Core(African Elephant) ⊒ Core(Elephant)

This would not have altered the inferred information. The missing initial information that Clyde is an African Elephant would of course also be missing in the conclusion.

3. Lattice Based Models of Typing

Taking the theoretical space of all possible combinations of all possible characteristics induces a lattice of descriptors. Any clustering of characteristics then belongs somewhere in this lattice. Typing can be seen as naming some of the points in this lattice as being relevant clusterings of characteristics. This view allows for both descriptive and prescriptive typing. We can deduce the type(s) of an object by having information concerning its characteristics, and we can deduce information about an object's characteristics by having information about its types. Human reasoning also uses typing and classification in both these ways.

Reasoning about the relation between individual objects and type descriptors can also be applied to relations between subtypes and their supertypes. A particular type, A, can be observed to be a specialization or subtype of some other type, B, if the type descriptor for A contains all the information in the type descriptor for B plus some extra information. Conclusions can be drawn about the type(s) of a given object by placing its descriptor in the lattice, and noting which types it falls above. However we do not always have complete information regarding an object, and can therefore not place its descriptor directly in the lattice. We introduce the notion of partial information in the descriptors, plus relations between the descriptors which constrain their positioning in the lattice without fully defining it. If we say the descriptor for A is above the descriptor for B (A ⊒ B), and we know that P is an A, then we can conclude (by transitivity of ⊒) that P is also a B, without knowing any of the actual characteristics of either descriptor.

By comparing two types with respect to cores and defaults, we can make the statement "A's are B's" with four different shades of meaning, ordered with respect to the strength of the statement:

Core(A) ⊒ Default(B)      A's are always typical B's.
Core(A) ⊒ Core(B)         A's are always B's, but not necessarily typical B's.
Default(A) ⊒ Default(B)   A's are usually typical B's.
Default(A) ⊒ Core(B)      A's are usually B's, but not necessarily typical B's.

The greater expressivity given by the split into core and default is often valuable in reasoning about characteristics of objects. The information is intuitive for humans (at least within some fuzzy boundaries), and needs to be represented if a reasoning system is to draw conclusions we intuitively wish it to draw. This is demonstrated in the examples.

4. Representation of negative information

Multiple inheritance reasoning systems often have negative links in the inheritance graph, which are important in the reasoning process (e.g. NETL [Fahlman, 79] and TMOIS [Touretsky, 86]). On examination, these negative links are used to express two different things.
These can be described as incompatibility and overriding. We define incompatibility within the lattice framework, and show overriding to be unnecessary.

4.1 Incompatibility

By incompatibility between A and B we mean that there is at least one characteristic of A that is incompatible with a characteristic of B. Consequently no object can be both an A and a B. This can be described as A ⊔ B ⇒ K, where A ⊔ B is the least upper bound (or join) of A and B, and K indicates that at least one feature value is inconsistent. Incompatibility can also be expressed as A ∈ NOT(B). As with positive relations between types, we can use the core and default for the type in order to say "A's are not B's" with different shades of meaning:

Core(A) ⊔ Core(B) ⇒ K          A's are never B's.
Core(A) ⊔ Default(B) ⇒ K       A's are never typical B's.
Default(A) ⊔ Core(B) ⇒ K       Typical A's are not B's.
Default(A) ⊔ Default(B) ⇒ K    Typical A's are not typical B's.

4.2 Overriding

Overriding refers to the situation where, in NETL [Fahlman, 79] and TMOIS [Touretsky, 86], negative links are used to override positive inherited information. The situation is not such that the negatively linked points are necessarily incompatible, but rather that it is necessary to block a possible chain of reasoning. Within our framework this situation occurs when there exists some known type, B, which is above Core(A) but not above Default(A), and B is incompatible with Default(A). One then wants to ensure that the chain of reasoning from Default(A) is disallowed. Because we make a distinction between core and default and can identify subtypes that are known to lie between the core and default, we do not need any special override mechanism.

5. Reasoning Using The Lattice

Having defined how to express information regarding types and their relations to each other, we use the inference mechanism as described in example one for reasoning with the information, to state what types a given object may have.
The basic relation ⊒ between lattice points is transitive, so if we know that an object P has a descriptor such that Desc(P) ⊒ Core(A), and that Core(A) ⊒ Core(B), we can conclude that Desc(P) ⊒ Core(B). This method allows us to state all definite (monotonic) positive conclusions regarding Desc(P). We can of course always add relations of the form Default(x) ⊒ Core(x) (by definition). Negative information of the form A ⊔ B ⇒ K gives us relations of the form "X ⊒ A --> X ∈ NOT(B)", as defined previously.

The extension operation & defined in example one allows us to make the nonmonotonic jump from the core of a given type to its default, Core(A) & Default(A). The extension operation can be repeated as long as it is possible to obtain more information by doing so. Each such operation implies an assumption. If there is more than one such assumption in the reasoning process, the ordering of the extension operations gives the different valid extensions. Preference for making this assumption at the most specific point possible (in terms of the ⊒ lattice relation) gives the intuitively desirable preferences between extensions. This preference for the most specific Core to Default specializations prefers the same extensions as those obtained by the shortest inferential distance algorithm of Touretsky, and the offpath preemption algorithm of Sandewall, but in a more clearly motivated and consistent manner.

6. Some Classical Examples

6.1 Example 2

Figure 2 shows the same network as figure 1 but with different labeling on the nodes. Touretsky argues that with the changed labeling it is less intuitively clear what conclusions we wish to draw. The lattice diagram shows the suggested representation within our model, which differs from the lattice representation given for example 1. This captures the differing strength in the information that Marines (and Chaplains) are Men, compared with the information that African Elephants (and Royal Elephants) are Elephants.
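Before working through the figures, the machinery these examples rely on (the join, the ⊒ test, and the extension operation with its most-specific-first preference) can be sketched executably. The Python below is illustrative, not the paper's formalism: descriptors are encoded as feature-to-value dicts, all feature names are assumptions, and only one deterministic extension is computed rather than the full set.

```python
K = object()   # the inconsistent descriptor, "K" in the text

def join(a, b):
    """Least upper bound of two descriptors (dicts: feature -> value).
    Disagreement on any feature makes the join inconsistent (K)."""
    out = dict(a)
    for f, v in b.items():
        if f in out and out[f] != v:
            return K
        out[f] = v
    return out

def above(a, b):
    """a ⊒ b in the lattice: a carries at least all of b's information."""
    return all(a.get(f) == v for f, v in b.items())

def extend(desc, cores, defaults, most_specific_first):
    """Extension sketch: wherever desc sits above a type's core, try to
    join in that type's default, skipping defaults that would be
    inconsistent with what is already known; trying the most specific
    types first realizes the stated preference."""
    desc = dict(desc)
    changed = True
    while changed:
        changed = False
        for t in most_specific_first:
            if above(desc, cores[t]):
                j = join(desc, defaults[t])
                if j is not K and j != desc:
                    desc, changed = j, True
    return desc
```

Enumerating all orderings of the extension operations, rather than fixing one, would yield the multiple extensions discussed for the later examples.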
[Figure 2: The network of example 2, shown both as an inheritance graph and in the lattice representation (Chaplain, Marine, Man, Beerdrinker).]

Extracting the formulas similarly to example 1, we reason as follows:

Ds(George) ⊒ Cr(Mar.) & Df(Mar.) ⊒ Cr(Man) & Df(Man) ⊒ Cr(Brdr.)
Ds(George) ⊒ Cr(Chap.) & Df(Chap.) ⊒ Cr(Man)
... Df(Chap.) ∈ NOT(Cr(Brdr.))

The extension to Default(Man), giving Core(Beerdrinker), conflicts with the extension to Default(Chaplain), which gives ∈ NOT(Beerdrinker). Ordering of the conflicting extension operations gives two extensions: one in which George is a Beerdrinker and one in which he is NOT a Beerdrinker. The previously discussed preference for extension operations from the highest lattice points, when there is a conflict, gives preference to the extension in which George is NOT a Beerdrinker (Core(Chaplain) ⊒ Core(Man)). The representation that allows us to differentiate between 'Typical Chaplains are Men' vs 'All Royal Elephants are Elephants' enables us to then use this information to reason in a more natural (and correct?) way than the IS-A network representation does.

6.2 Example 3

Figure 3 shows the canonical ambiguous net ('the Nixon diamond') concatenated with a net giving further ambiguities (from Touretsky [1987]). We work through this example with two different choices of representation, in order to bring out some properties of our model.

6.2.1 Quakers are always Pacifists (Figure 3b)

In the following representation we represent all Quakers as being Pacifists, but only default Republicans as being incompatible with Pacifists. All other choices between default and core of the types have no influence on the resulting conclusions. (They simply influence the certainty of the extensions. We have made choices here to reduce the number of extension operations needed.)

[Figure 3: The Nixon diamond (Quaker, Republican, Pacifist, Antimilitary, Football fan) in variants 3a, 3b and 3c.]
[⊒ NOT(Cr(Pac.))]

We obtain a single extension in which Nixon is a Quaker, Republican, Pacifist, Antimilitary and Football fan. This is disturbing to our intuitions, as Nixon is a particular individual who we know to have been non-pacifist and non-antimilitaristic. This serves to illustrate the point previously noted, that individuals must be allowed to fall even below the type core, where explicitly stated to do so. However, this does not justify the building of a hypothetical class of militaristic Quakers. We are currently working on an appropriate representation for individuals who fall below the core, that will enable us to continue the desired reasoning about such an individual.

6.2.2 Typical Quakers are Pacifists (Figure 3c)

In figure 3c we replace the original symmetry of the Nixon diamond by showing only typical Quakers to be Pacifists. By doing the reasoning as previously (not shown due to space limitations) we obtain two extensions, one the same as with the previous representation, and one in which Nixon is Quaker, Republican, Football fan, NOT Pacifist and NOT Antimilitary. If we had shown the link from Pacifist to Antimilitary as weaker than we did (i.e. as Default(Pacifist) ⊒ Core(Antimilitary)) we would also have generated an extension in which Nixon was Quaker, Republican, Football fan, Pacifist and NOT Antimilitary. The results here are consistent with Touretzky's description of a credulous reasoner, in that all possible extensions are found. In this case there is no simple preference between extensions based on assumption of typicality at the most specific point possible. However, all extensions are obtained if one wishes to then choose among extensions on the basis of various heuristics.

6.3 Example 4

Figure 4 shows an example from Fahlman et al. [Fahlman et al., 1981]. The figure shows both the representation as it is in NETL, and also the more informative representation of Etherington, who uses a wider variety of link types.
The meanings of Etherington's link types are given in figure 5. The lattice representation comes directly from Etherington's network and does not contain any extra information that is intuitively assumed in choosing between core and default. The information is already implicit in the graph.

Figure 4: NETL representation, Etherington's representation, and the lattice representation (Shellbearer, Cephalopod, Nautilus).

Padgham 413

Figure 5: Etherington's link types (strict IS-A, strict IS-NOT-A, default IS-A, default IS-NOT-A, exception).

Reasoning about the individual Nautilus, X, we get:

X ⊒ C(Nautilus) ⊒ C(Cephalopod) ⊒ C(Mollusc)
... C(Ceph.) ⇝ D(Ceph.) [⊒ NOT(C(Shell.))]
... C(Mollusc) ⇝ D(Mollusc) ⊒ C(Shellbearer)
... C(Nautilus) ⊒ C(Shellbearer)

We reach the same conclusion as both NETL and Etherington's system: that X is, without competing extensions or uncertain assumptions, a Cephalopod, a Mollusc and a Shellbearer.

Reasoning about the individual Cephalopod, Y, we get:

Y ⊒ C(Cephalopod) ⊒ C(Mollusc) ⇝ D(Mollusc) ⊒ C(Shellbearer)
... C(Ceph.) ⇝ D(Ceph.) ⊒ NOT(C(Shellbearer))

This corresponds to two extensions, one in which Y is a shellbearer, and one in which it is not. The extension in which it is not a shellbearer is preferred on the basis of its resulting from an extension operation at a more specific point. This conclusion is the same as neither NETL's nor Etherington's conclusions (which also differ from each other here). Etherington's method provides only one extension, the same as our preferred extension. NETL gives both extensions but does not discriminate at all between them. We feel that our solution, providing both extensions, but with a clear and intuitively reasonable preference, is the most desirable.
7. Discussion

The representation and reasoning methods proposed here appear to offer clear advantages when compared to systems such as TMOIS and NETL, which use network representations with two sorts of links (positive and negative), plus possibly exceptions. The largest gain results from the division of type descriptors into two parts, giving a similar effect to what Touretzky classifies as heterogeneous, bipolar systems. Heterogeneous refers to the ability to have some information which is certain (monotonic), while other information is uncertain (nonmonotonic). The inference mechanism for reasoning about objects within a lattice-based type schema appears to be cleaner and more consistent than the inference mechanisms developed for reasoning about network-based representations. Many of the examples which have proved problematic for network-based systems, and which have required new inference mechanisms, are simple and clear within the lattice model. The lattice formalism also gives greater clarity to the semantics of negative links than has been evident in network representations.

A comparison of the lattice-based model with Etherington's system is less definitive with respect to which is better. By increasing the number of link types he also specifies a heterogeneous system. This has similar advantages to our system in that he represents more information, which is necessary for achieving the desired reasoning. An essential difference between Etherington's model and ours is that he represents explicit information about exceptions, whereas we simply represent lack of typicality. Both approaches have their advantages. The advantage with our model is that if we know there are exceptions we generate both extensions, with a preference for the most likely. Etherington's method generates only the typical extension, unless one knows that the particular exception-causing case is present.
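The extension preference used throughout the examples (make the typicality assumption at the most specific point in the lattice) can be sketched as a simple ranking. The specificity ranks and names below are our own illustrative assumptions for the Example 2 conflict:

```python
# When two default extensions conflict, the paper prefers the assumption
# made at the more specific type (Core(Chaplain) >= Core(Man)).

SPECIFICITY = {"Chaplain": 2, "Man": 1}   # higher = more specific (assumed ranks)

conflicting_defaults = [
    ("Man", "Beerdrinker"),            # Default(Man) adds Core(Beerdrinker)
    ("Chaplain", "NOT(Beerdrinker)"),  # Default(Chaplain) adds the negation
]

# choose the conclusion contributed by the most specific default
preferred = max(conflicting_defaults, key=lambda d: SPECIFICITY[d[0]])
print(preferred[1])   # NOT(Beerdrinker): George the chaplain is presumed not to drink
```

Both extensions are still generated, as a credulous reasoner requires; the ranking only orders them.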
We plan to implement the described system and experiment with larger, more complex examples, to determine whether the described model does provide the reasoning and representation facilities to achieve intuitively desirable results.

Acknowledgements

We thank Erik Sandewall and Ralph Rönnquist for helpful discussions of the ideas presented here, and Jalal Maleki for careful proof reading and useful comments.

References

[Etherington, 1987] Etherington, D. W. Formalizing Nonmonotonic Reasoning Systems. Artificial Intelligence, vol. 31, 1987, pp. 41-85.
[Fahlman, 1979] Fahlman, S.E. NETL: A System for Representing and Using Real-World Knowledge. The MIT Press, Cambridge, MA, 1979.
[Fahlman et al., 1981] Fahlman, S.E., Touretzky, D.S. and van Roggen, W. Cancellation in a Parallel Semantic Network. Proceedings of IJCAI-81, 1981, pp. 257-263.
[Sandewall, 1986] Sandewall, E. Non-monotonic Inference Rules for Multiple Inheritance with Exceptions. Proceedings of the IEEE, vol. 74, 1986, pp. 1345-1353.
[Touretzky, 1986] Touretzky, D.S. The Mathematics of Inheritance Systems. Morgan Kaufmann Publishers, Los Altos, CA, 1986.
[Touretzky et al., 1987] Touretzky, D.S., Horty, J.F., Thomason, R.H. A Clash of Intuitions: The Current State of Nonmonotonic Multiple Inheritance Systems. Proceedings of IJCAI-87, Milan, August 23-28, 1987, vol. 1, pp. 476-482.
Tractable Theories of Multiple Defeasible Inheritance in Ordinary Nonmonotonic Logics

Brian A. Haugh
Martin Marietta Laboratories
1450 South Rolling Road
Baltimore, Maryland 21227

Abstract

A suggestion by John McCarthy for general formulations of multiple defeasible inheritance in ordinary nonmonotonic logic is examined and found to suffer from a variety of technical problems, including 1) its restriction to object/class/property networks, 2) unintuitive results in "Nixon diamond"-type networks, 3) unnecessary closed-world assumptions, and 4) susceptibility to unintended models when generalized. A family of theories is presented that substantially revises McCarthy's formulation to avoid these problems and restrictions. Finally, an inference control strategy for computing the theory is identified whose tractability is ensured by a variety of techniques including incremental computation of abnormalities and truth maintenance.

1. Introduction

Marvin Minsky's challenge to proponents of logic-based reasoning in AI to formalize the inheritance of prototypical properties and their exceptions [Minsky, 1975] has resulted in a whole new field of formalized nonmonotonic reasoning, including general purpose nonmonotonic logics [McCarthy, 1980; Reiter, 1980; McDermott and Doyle, 1980], applications of such logics to inheritance hierarchy problems [McCarthy, 1986; Etherington, 1987; Sandewall, 1986], and special formalisms just for inheritance reasoning [Touretzky, 1986; Horty et al., 1987]. While these formulations of inheritance have demonstrated the adequacy of formal systems for such commonsense reasoning problems, each suffers from some substantial deficiency, e.g., lack of computationally tractable implementation techniques, absence of any general formalization procedure, limited expressive power, or use of non-standard formalisms that are difficult to extend or modify.
In this paper, we develop essential revisions to a suggestion by McCarthy [1986], creating a new family of logical theories for formalizing general inheritance reasoning that suffers none of these deficiencies.

2. McCarthy's Formulation

2.1. Object/Class/Property

McCarthy [1986] has developed two fairly general methods for representing multiple inheritances with exceptions - a technique using prioritized circumscription (pp. 105-107) and a "class-level" approach using ordinary circumscription that reifies classes and properties (pp. 99-100). Our work has focused on the latter approach because it has an appealing simplicity, and because no general procedure for translating inheritance networks into the prioritized approach has yet appeared.

McCarthy reifies "classes"¹ and properties of objects by assigning first-order variables and constants to them. Inheritance relations between classes are expressed by wffs of the form c1 ≤ c2, stating that class c1 ordinarily inherits from class c2, while in(x, c) asserts that an object x is a member of a class c. Default properties are expressed by wffs of the form ordinarily(c, p), meaning that objects in class c ordinarily have property p, while ap(P, x) states that a predicate P applies to an object x. An abnormality predicate ab(aspect1(c1,c2,p)) expresses the abnormality of members of class c1 with respect to inheriting property p from class c2. Default inheritance of properties by classes is expressed by:

M1. [ordinarily(c2,p) & c1 ≤ c2 & ¬ab(aspect1(c1,c2,p))] ⊃ ordinarily(c1,p).

Cancellation of such inheritance is formulated by:

M2. [c1 ≤ c2 & c2 ≤ c3 & ordinarily(c2, not(p))] ⊃ ab(aspect1(c1,c3,p)).

Transitivity of inheritance of class membership is asserted by:

M3. [c1 ≤ c2 & c2 ≤ c3] ⊃ c1 ≤ c3.

Axiom M3 entails the fundamental limitation of McCarthy's theory: class membership relations are not defeasible, i.e., they cannot be cancelled.
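Axioms M1-M3 can be exercised with naive forward chaining on the standard penguin network. This is only an illustrative sketch under an assumed encoding: real minimization of ab is by circumscription, whereas here ab is populated only by instances of M2, which happens to coincide with the minimal model for this small network:

```python
# Penguin example: birds ordinarily fly, penguins are birds that ordinarily don't.
classes = {"penguin", "bird", "animal"}
le = {("penguin", "bird"), ("bird", "animal")} | {(c, c) for c in classes}
ordinarily = {("bird", "flyer"), ("penguin", "not_flyer")}
neg = {"flyer": "not_flyer", "not_flyer": "flyer"}

# M3: close <= under transitivity (one pass suffices for this small network)
for a in classes:
    for b in classes:
        for c in classes:
            if (a, b) in le and (b, c) in le:
                le.add((a, c))

# M2: [c1 <= c2 & c2 <= c3 & ordinarily(c2, not(p))] -> ab(aspect1(c1,c3,p))
ab = {(c1, c3, neg[p])
      for (c1, c2) in le for (c2b, c3) in le if c2 == c2b
      for (c, p) in ordinarily if c == c2}

# M1: inherit a default property downward unless an abnormality blocks it
derived = set(ordinarily)
changed = True
while changed:
    changed = False
    for (c1, c2) in le:
        for (c, p) in list(derived):
            if c == c2 and (c1, c2, p) not in ab and (c1, p) not in derived:
                derived.add((c1, p))
                changed = True

print(("penguin", "flyer") in derived)      # False: blocked via M2
print(("penguin", "not_flyer") in derived)  # True
```

The example also shows M3's limitation in miniature: the ≤ links themselves are closed monotonically and can never be cancelled.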
We refer to McCarthy's approach as an "object/class/property inheritance system," adapting Touretzky's classification scheme [Touretzky, 1986].² While McCarthy's treatment of inheritance of properties by classes provides foundations adequate for a broad range of inheritance systems, it cannot represent the cancellations of inter-class relations that are found in many more general systems (e.g., [Touretzky, 1986; Horty et al., 1987]).³

¹ He observes that these classes are not extensional. We may conceive of them either as intensional classes or as names of the predicates which pick out the members of these classes (taking a meta-level view).
² I thank Jeff Horty for pointing this out to me.
³ We should note that McCarthy presented this approach merely as a promising possibility, not as a comprehensive solution.

Haugh 421

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

2.2. Problems

2.2.1. A Natural Extension

The most natural extension of McCarthy's formalism to handle exceptions to class membership inheritance treats them in the same manner that he treats exceptions to property inheritance, by replacing his axiom M3 as follows:

M3a. [c1 ≤ c2 & c2 ≤ c3 & ¬ab(aspect1(c1,c2,c3))] ⊃ c1 ≤ c3
M3b. [c1 ≤ c2 & c2 ≤ c3 & c2 ≤ not(c4)] ⊃ ab(aspect1(c1,c3,c4))

This leads to the intended results in simple class inheritance cancellation cases, such as that shown in Figure 1, although it will admit unintended models in slightly more complex networks if we simply minimize abnormalities (as McCarthy does).

Figure 1. Simple class cancellation

2.2.2. Unintended Models With Gratuitous Links

In some inheritance networks, there will be models of this extended theory which are minimal in abnormalities but achieve that condition by admitting entirely new direct links that cancel the inheritance of intended abnormalities.
For example, the network of Figure 2 will have minimal models with a gratuitous explicit link A ≤ not(C), which creates the unintended abnormality ab(aspect1(A, B, C)) while blocking the intended inheritance of the dual abnormalities ab(aspect1(A, D, F)) and ab(aspect1(A, D, E)). Thus, simply minimizing abnormalities will block many of the intended results (e.g., A ≤ C) in such cases.

Figure 2. Unintended models

This problem is characteristic of the general scheme of directly minimizing abnormalities in abnormality-based meta-level general inheritance systems and is independent of our particular formulation of the axioms. When abnormalities can be inherited, and that inheritance can be cancelled, there will be models of many networks that will have fewer abnormalities if an unexplained cancellation of inheritance holds. Thus, we conclude that simple abnormality minimization will not provide the intended results in any such theories.

2.2.3. Excessive Closed-World Assumptions

McCarthy's approach of simply minimizing abnormalities cannot escape a wide variety of closed-world assumptions which violate common sense in many cases. Minimizing the abnormalities of individual objects, for example, entails that no objects with any abnormalities (e.g., penguins) exist unless they can be proven to exist. This pervasive problem arises in McCarthy's object-level theories of inheritance [McCarthy, 1986] as well as in his meta-level theories, and in all other previous logic-based object-level formulations (e.g., [Lifschitz, 1985; Etherington, 1987]).

2.2.4. Unintuitive Results In "Nixon Diamonds"

McCarthy's theory encounters a variety of difficulties when applied to certain kinds of inheritance networks which I call "Nixon diamonds," after the original example of this type developed by Reiter and illustrated by Figure 3.
More general "Nixon diamonds" consist of pairs of arbitrary-length, multi-link paths between two nodes, where the final links are contrary - also referred to as "conflicting multi-link paths." McCarthy's theory properly handles the original Nixon diamond, in which "Nixon" refers to an individual, provided the implicit axiom ap(p,x) ⊃ ¬ap(not(p),x) is assumed. Difficulties arise, however, in "generic Nixon diamonds," in which the root (Nixon) is a class (e.g., Nixon's family) instead of an individual, and in "extended Nixon diamonds," where the multi-link paths are longer than two links (as in Figure 4).

Figure 3. Original Nixon diamond

Figure 4. Extended Nixon diamond

In extended Nixon diamonds, McCarthy's theory entails that some abnormality holds, blocking one of the conflicting paths (as expected), but will be indifferent between all the possible link cancellations. Commonsense reasoning, however, preferring the least disruption to its default beliefs, tends to conclude that only the last links in conflicting multi-link paths are suspect, allowing the intermediate conclusions to stand.

3. Excluding Unintended Models

3.1. Alternative Methods

From the problems with unintended models that we identified for the natural extension of McCarthy's theory, it is apparent that common sense does not prefer a simple minimization of his abnormalities, but insists that no unexplained abnormalities should be admitted in place of expected ones. Commonsense use of default relations assumes that there are no abnormalities interfering with prima facie default conclusions unless they can be shown to follow from explicitly known relations using basic rules of default cancellation. Once this general principle is recognized, its formulation proceeds quite naturally by definition of an abnormality predicate ab in terms of the rules that generate abnormalities. Then, minimizing the explicit knowledge (direct links) required by those rules to generate abnormalities will restrict all such abnormalities appropriately. This solution comes at the prima facie cost of a general closed-world assumption that entails the falsehood of any general relations that are not provable from such theories. We will show, however, that such assumptions may be fully relaxed while retaining their benefits in excluding just the unintended abnormalities.

The other major alternative for avoiding unintended models within this type of theory involves minimizing what I call "potential abnormalities," i.e., those that would hold if there were no other abnormalities blocking them. We do not present our theory for minimizing potential abnormalities here because it is more complex than minimizing direct links, and offers no clear advantages for inheritance reasoning.

3.2. Comparison to Temporal Reasoning

The two identified alternatives for eliminating unintended models of general inheritance theories are examples of general techniques of nonmonotonic reasoning that have been applied previously in temporal-causal reasoning to exclude unintended models identified for temporal persistence theories [Hanks & McDermott, 1986]. Minimizing types of causal relations was used by Lifschitz [Lifschitz, 1987] to eliminate models with unexplained changes, just as minimizing types of explicit inheritance relations here eliminates unexplained inheritance cancellations. Minimizing "potential causes" was used by Haugh [Haugh, 1987] to eliminate spurious particular potential causes of change (which would be actual causes if their preconditions held), just as minimizing potential abnormalities in inheritance theories will exclude spurious inheritance cancellations. Thus, our new results here suggest a broad potential for application of these techniques to other nonmonotonic reasoning issues.

4. Closed-World Inheritance

4.1. General Notation

To enable minimization of explicit relations in our closed-world theories, we distinguish them from other, derived, relations by representing them with distinctive predicates. Explicit default network relations (or links) are of the form isa-x(x,p), stating that x's are normally p's, where x can refer to either an individual (or object-level term) or a class (or object-level predicate), and p refers only to either a class or property. General network relations, both explicit and derived, can be represented by isa(x,p).

A sorted logic is used in which upper case A, B, C, D, A1, B1, ... are variables referring to individual objects, while upper case M, N, O, P, Q, R, S, T, M1, N1 are variables for classes/predicates, and upper case letters from the end of the alphabet - U, V, W, X, Y, Z, U1, V1, ... - are variables ranging over objects and classes. Corresponding lower case terms are used for constants in the same categories, along with other mnemonic lower case constant names (e.g., elephant) whose category should be obvious in context.

To reason about exceptions to default links, we use an abnormality predicate modeled after McCarthy's use of a similar predicate, although our syntax is somewhat simplified. Where McCarthy uses a predication of the form ab(aspect1(X,P,Q)) [McCarthy, 1986] to represent abnormalities, we use ab(X,P,Q) to mean that object/class X is abnormal with respect to inheriting any existing default relation between P and Q. For example, ab(royal-elephant, elephant, gray) asserts that royal-elephants are abnormal with respect to inheriting any isa relation between elephant and gray, or, less formally, royal-elephants are abnormal elephants with respect to being gray. While McCarthy uses different aspects (aspect1 and aspect2) to distinguish between particular and generic abnormalities, we allow these distinctions to be determined by the sorts of terms appearing in ab predications.

4.2. Core Inheritance Axioms

4.2.1. Network Relations

Using the notation just presented, and axioms for generating derived relations from explicitly asserted ones, we can formulate a broad range of inheritance theories. We identify a general family of theories which share three core axioms and a simple minimization technique. The first axiom defines all of the default relations derivable from a network as:

A1. isa(X,Q) ≡ [isa-x(X,Q) ∨ (∃P)[isa(X,P) & isa-x(P,Q) & ¬ab(X,P,Q)]]

which states that X's are (normally) Q's if and only if either there is an explicit network link asserting this, or there is an intermediate node P in the network such that X's are (normally) P's, there is an explicit link asserting P's to be Q's (normally), and X's are not abnormal with respect to P's being Q's. The abnormalities referred to are restricted to four primitive types:

A2. ab(X,P,Q) ≡ [ab-d(X,P,Q) ∨ ab-i(X,P,Q) ∨ ab-c(X,P,Q) ∨ ab-x(X,P,Q)].

Explicit abnormalities (ab-x) are a type of cancellation link that are explicitly asserted, while the other abnormalities are derived from conflicting isa-x relations.

4.2.2. Direct Abnormalities

Direct abnormalities are created by the direct override by a single explicit link of a prima facie multi-link path in a network. The example network of Figure 1 illustrates the direct abnormality created by the cancellation of the path from penguin through bird to flyer by the direct contrary link from penguin to not(flyer).⁴ Our third core axiom characterizes the general conditions under which direct abnormalities arise as:

A3. ab-d(X,P,Q) ≡ [isa(X,P) & isa-x(P,Q) & isa-x(X,not(Q))].
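Axioms A1 and A3 can be run as a small fixpoint computation on the Figure 1 network. The sketch below is our own set-based encoding (not the paper's Prolog implementation); ab-c and ab-x are omitted, and inherited abnormalities are approximated just enough to handle opus:

```python
# Figure 1 network: opus the penguin; birds fly, penguins explicitly don't.
ISA_X = {("opus", "penguin"), ("penguin", "bird"),
         ("bird", "flyer"), ("penguin", "not(flyer)")}

def ab_d(isa, x, p, q):
    # A3: a direct contrary explicit link overrides the path through p
    return (x, p) in isa and (p, q) in ISA_X and (x, "not(" + q + ")") in ISA_X

def ab_i(isa, x, p, q):
    # simplified inherited abnormality: x inherits any direct abnormality
    # of a node it is (derivably) below
    return any(ab_d(isa, a, p, q) for (x2, a) in isa if x2 == x)

def closure():
    # A1 as a fixpoint: extend isa one explicit link at a time unless blocked
    isa = set(ISA_X)
    changed = True
    while changed:
        changed = False
        for (x, p) in list(isa):
            for (p2, q) in ISA_X:
                if (p == p2 and (x, q) not in isa
                        and not ab_d(isa, x, p, q) and not ab_i(isa, x, p, q)):
                    isa.add((x, q))
                    changed = True
    return isa

isa = closure()
print(("penguin", "flyer") in isa)    # False: cancelled directly (A3)
print(("opus", "flyer") in isa)       # False: abnormality inherited
print(("opus", "not(flyer)") in isa)  # True
```

The full theory replaces the ad hoc ab_i above with axiom A4 (next section) and adds the conflicting-path and explicit cases of A2.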
4.2.3. Inherited Abnormalities

Our other types of abnormalities admit of a broad range of alternative interpretations, none of which has achieved any consensus as yet. Here, we present the simplest version of inherited abnormalities. It allows what Touretzky calls "off-path preemptions" [Touretzky et al., 1987], in accord with the theories in [Sandewall, 1986] and [Horty et al., 1987], and is formulated as:

A4. ab-i(X,Q,R) ≡ (∃P)[isa(X,P) & [ab-d(P,Q,R) ∨ ab-c(P,Q,R) ∨ ab-x(P,Q,R)]].

Such inherited abnormalities exist for every descendant X of a node P that has some direct or conflicting abnormality (e.g., ab-d(P,Q,R)). An example is illustrated by Figure 1, wherein opus inherits an abnormality with respect to birds being flyers from penguin.

Under this conception of inherited abnormality, an abnormality requires only a path (isa(X,P)) from the inheriting node (X) to the original abnormality node (P) to be inherited. One plausible variation of this would require every inherited abnormality ab-i(X,Q,R) to have a path (path(X,P,Q)) all the way from the inheriting node X, through the original abnormality node P, to the base node Q of the conflict link. This stricter conception could be expressed by only a minor variation of our axiom A4:

A4'. ab-i(X,Q,R) ≡ (∃P)[path(X,P,Q) & [ab-d(P,Q,R) ∨ ab-c(P,Q,R) ∨ ab-x(P,Q,R)]]

where

A4''. path(X,P,Q) ≡ [isa(X,P) & [[isa-x(P,Q) & ¬ab(X,P,Q)] ∨ (∃R)[path(X,P,R) & isa-x(R,Q) & ¬ab(X,R,Q)]]].

This variation appears to correspond to the treatment in [Horty et al., 1987] when combined with ambiguity-blocking skepticism in conflicting paths as described below. Our intuitions in discriminating examples examined thus far favor our initial formulation, although further investigations are indicated before this issue can be considered settled.
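Ignoring the ab checks, the stricter A4'' path predicate is just a chain of explicit links from X through P to Q. A minimal recursive sketch under that simplification (acyclic networks assumed, encoding our own):

```python
# Explicit links of the Figure 1 network.
LINKS = {("opus", "penguin"), ("penguin", "bird"), ("bird", "flyer")}

def reaches(p, q):
    """A chain of explicit links from p to q (the recursive clause of A4'',
    with the abnormality guards dropped)."""
    if (p, q) in LINKS:
        return True
    return any(reaches(p, r) for (r, q2) in LINKS if q2 == q)

def path(x, p, q):
    """A4''-style: x lies below p, and the chain continues from p to q."""
    return reaches(x, p) and reaches(p, q)

print(path("opus", "penguin", "flyer"))  # True: opus -> penguin -> bird -> flyer
print(path("opus", "bird", "penguin"))   # False: no chain from bird to penguin
```

Restoring the ¬ab guards makes path depend on the abnormalities being derived, which is why the full theory computes them by fixpoint rather than simple recursion.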
4.2.4. Conflicting Path Abnormalities

A variety of incompatible alternatives for handling conflicting multi-link paths (or Nixon diamonds) have been proposed in the literature, e.g., [Touretzky et al., 1987; Horty et al., 1987].

⁴ Note that all the directed arcs of our illustrations correspond to isa-x links in our theories, while a slash through arcs represents relations to the complement class of the destination node, as isa-x(penguin, not(flyer)) is represented in Figure 1. Narrow line links correspond to isa-x links from individual objects, as isa-x(opus, penguin) is represented in Figure 1.

Approaches have
In this example, these theories block any conclusions about the relations between X and R in the nested dia- mond, thereby blocking any positive path from X to Q in the larger diamond, leaving the path from X through P to not(Q) unopposed. lated The “ambiguity-blocking” skepticism just formu- may be considered unin tu itive becau se the posi- Figure 5. Nested diamonds tive path through a nested diamond (e.g., X --> F --> R --> Q in Figure 5) seems to remain a possible conflicting path to its contrary alternative in the larger diamond (e.g., X --> P -/-> Q in Figure S), as dis- cussed in [Touretzky et al., 19871. Thus, it may seem that a reasonable skentic should not conclude isa(X,not(Q)) in our example, and should propagate the uncertaintv of the nested diamond to the larger dia- mond. Previous attemnts to formulate- such “ambiguity-propagating” ikepticism have not been successful, although it is quite straightforward within ’ Note that our stricter notion of abnormality inheri- tance would require a path(X,S,P) instead of the simple isa(X,S) in this axiom. 424 Knowledge Representation our abnormality theories, using the following axioms: abnormality ab(a,q,r) as itself. Ambiguity-Propagating Abnormality: A5’. ab-c(X,P,Q) = (SR) [isag(X,P) & isa-x(P,Q) & isag( X,R) & isa-x( R,not( Q) ) & 7 ab-dix( X,P,Q) & I ab-dix(X,R,not(Q))] where A7. isag(X,R) E [isa-x(X,R) v (gT)[isag(X,T) & isa-x(T,R) & I ab-dix(X,T,R)]]. These axioms function by defining potential isa rela- tions between nodes (e.g., isag(X,R) in Figure 5) that will be genuine relations if theyH;;c;ot cancelled *by a conflicting multi-link path. these axroms enable cancellation of all paths from’ roots to tips in embedded Nixon diamonds, and yield complete skepti- cism about all such relations (e.g., between X and R and between X and Q in Figure 5). 4.2.4.2. 
Credulous Theories Less skeptical theories are possible if we require only that one or the other of two conflicting paths is blocked, as formalized in [Haugh, 19881. 4.3. erforming the inimizations Minimization of explicit relations can be conveniently performed by parallel circumscription of our explicit link predicates isu x and ab-x. Alternatively, for non- disjunctive theories, we may define a single explicit link predicate in terms of them and circumscribe it, e.g.: link(X,P,Q) E [ab-x(X,P,Q) v (isa-x( X,P) & Q = isa# )] where isa# is a reserved constant that simply restricts the range of the extra variable. We can achieve the same effect as circumscription of link in a default logic [Reiter, 19801 with the single normal default rule: : I link( al,c2,c3) -, link(al,c2,c3) which asserts that whenever it is consistent for no explicit link to exist between nodes, we may infer that there is none. 4.4. Fulli Object-Level Interpretations It is possible to translate all of our meta-level default relations into object-level axioms using ordinary predi- cates. These translations, however, cannot occur in iso- lation, since all the relevant abnormalities can be deter- mined only from the structure of the network as a whole. Thus, if we wish an object-level translation, we must determine the relevant abnormalities based upon the structure of the whole net, using our meta-level axioms above, and combine these derived abnormali- ties with the individual translations of each link. After all the provable abnormalities are deter- mined from a meta-level theory, translation should proceed as follows: 2) 3) 4) 5) Translate every isa-x(a,p), where a is an indivi- z;y;;astant, as p(a), and every isa-x(a. not(p)) . Translate every explicit default relation isa-x(p,q) as ttp(W & - aW,p,q)l 3 4(X)1. Translate every explicit abnormality ab-x(X,p,q) as ab(X,p,q). Minimize specific abnormalities (ab(X,P,Q)) in the new theory. 5.1. Ill al redicate Glasses 5.1.1. 
MetsLevel Theories Our closed-world theories yield just the right results regarding abnormalities, but make more assumptions than necessary about what isn’t provable. Ideally, we would like to keep the abnormality results, yet no longer insist that all the excluded primitive links are false. This can be accomplished by dividing our predicates into two parallel classes, one in which the closed-world minimizations are performed, and another, general all-inclusive class, in which there are no explicit minimizations, but only some restrictions to prevent interference with the results of the closed- world minimizations. In particular, let us create new predicates by appending a “*” to all of our previous predicate names, e.g., isa x* instead of isa-x, and ab x* instead of ab-x. Then, we may rewrite our axiom< using the new predicates, creating a new theory H* whose gen- eral axioms correspond to to the old theory H. All of the links of particular inheritance theories will be expressed in H*, and will be minimized as above. But, now our minimized H* theory is only a subset of a larger theory H** which includes H and H*, wherein the predicates of H* are considered as instances of those of H, formalized: isa*(X,P) 2 isa( X,P) ab*(X,P,Q) z, ab(X,P,Q), etc. Thus, the minimizations of H* will no longer entail closed world assumptions with respect to H**, since explicit links that are absent from H* may still appear in H. Finally, H** will need one more axiom to exclude any new relations in H that could otherwise cancel conclusions of the closed-world theory H*. [isa*(X, Q) & I isa-x*(X, not(Q))] 2 I isa(X, not(Q)). This asserts that no unopposed network relation of the original closed-world theory H* is ever opposed by a contrary relation in the larger theory. This excludes unwanted models while allowing any other relations that don’t conflict with the original theory. 
We might also wish to restore the law of the excluded middle to our meta-level theory for properties of individual objects, i.e., isa(A,P) 1 l isa(A,not(P)). 1) Translate each general abnormality abtp,q,r) as MW = ab(X,q,r)] and each particular With this formalization of open-world inheri- tance, we may selectively specify any degree of closure Haugh 425 assumptions we like for particular predicates. 5.1.2. Object-Level Interpretations Object-level predicates would also come in pairs, and network relations would be translated into starred predicates (e.g., isa-x*(p,q) translates into [[p*(x) & l ab*(x,p,q)] =I q*(x)]). The additional axioms relating starred and unstarred predicates would also be required (e.g., [C;r “(x) 3 Cl(x)]) for all object-level predi- cates. After translation, object-level open-world theories would proceed to minimize the provable direct abnormalities (ab*), thereby permitting any other abnormalities that did not disagree with the positive conclusions of the closed-world theories. Additional closed-world assumptions may also be added for partic- ular classes, as desired, providing the fullest flexibility in specifying the intended assumptions. 5.2. Auto-Epidemic Interpretations Our use of dual predicates in open-world theories is highly suggestive of an interpretation in auto-epistemic theories. Our starred explicit relations are quite clearly just those that can be proven to hold, i.e., those that are “known” by the system. 
Thus, it would be natural to translate the entire theory into an auto-epistemic logic in which the starred relations are translated into statements of knowledge, according to the following type of schema:

isa-x*(X,P) => Knows isa-x(X,P)
ab-x*(X,P) => Knows ab-x(X,P)

Object-level translations would be analogous, and semantically revealing:

isa-x*(P,Q) => [[Knows P(x) & ¬Knows ab(x,P,Q)] ⊃ Knows Q(x)]

Thus, a default relation between P and Q can be interpreted as asserting that if something is known to be P and is not known to be abnormal with respect to being Q, then it is known to be Q. Although more work is required on the formal details of such auto-epistemic versions of our theories, they offer considerable promise in providing deeper semantic foundations and more coherent integration with general theories of an agent's knowledge and self-reflection.

6. Computing the Theory

Our formulations are very simple nonmonotonic theories (using a single normal default, or a single simple circumscription), which allow use of the simplest nonmonotonic technique of negation by failure to minimize explicit inheritance links, or to minimize abnormalities in the object-level theories. Furthermore, the skeptical versions of our theories have models that are provably unique with respect to abnormalities, so that alternative extensions needn't be examined. Our theories have also been formulated to avoid generating the unnecessary abnormalities found in McCarthy's original proposal. Thus, the difficulties of computing the consequences of general inheritance theories have already been minimized significantly by the form of our theories. Computational demands are further reduced in our implementation scheme by careful incremental derivation of abnormalities, truth maintenance on them, and inference algorithms tailored to query types. This scheme and its implementation in Prolog is described in depth in a longer report [Haugh, 1988].

A key technique in computing the consequences of our theories is an initial determination of abnormalities which avoids their repeated computation on every query. Truth maintenance is performed on all derived abnormalities, since subsequent changes may undermine the justifications for current abnormalities. With all the current abnormalities kept updated, queries are processed very efficiently by tracing the unblocked paths through a network.

Acknowledgements

I am especially indebted to Jeff Horty for assistance in identifying the limitations of previous work. I also thank Steve Barash, Steve Jameson, Stuart Pearlman, and Donald Perlis for helpful comments.

References

Etherington, D.W. 1987. "Formalizing Nonmonotonic Reasoning Systems," Artificial Intelligence, 31(1):41-86.
Hanks, S. and D. McDermott. 1986. "Default Reasoning, Nonmonotonic Logics, and the Frame Problem," in AAAI-86, pp. 328-333.
Haugh, B. 1987. "Simple Causal Minimizations for Temporal Persistence and Projection," in AAAI-87, pp. 218-223.
Haugh, B. 1988. "Tractable Logical Theories of Multiple Defeasible Inheritance," Martin Marietta Laboratories Technical Report (forthcoming).
Horty, J., R. Thomason, and D. Touretzky. 1987. "A Skeptical Theory of Nonmonotonic Semantic Networks," in AAAI-87, pp. 358-363.
Lifschitz, V. 1985. "Computing Circumscription," in IJCAI-85, pp. 121-127.
Lifschitz, V. 1987. "Formal Theories of Action (Preliminary Report)," in IJCAI-87, pp. 966-972.
McCarthy, J. 1980. "Circumscription - A Form of Non-Monotonic Reasoning," Artificial Intelligence, 13:27-39.
McCarthy, J. 1986. "Applications of Circumscription to Formalizing Common-Sense Knowledge," Artificial Intelligence, 28(1):89-116.
McDermott, D. and J. Doyle. 1980. "Non-Monotonic Logic I," Artificial Intelligence, 13:41-72.
Minsky, M. 1975. "A Framework for Representing Knowledge," in The Psychology of Computer Vision, P. Winston (ed.), New York: McGraw-Hill, pp. 211-277.
Reiter, R. 1980. "A Logic for Default Reasoning," Artificial Intelligence, 13:81-132.
Sandewall, E. 1986. "Nonmonotonic Inference Rules for Multiple Inheritance with Exceptions," Proceedings of the IEEE, 74(10):1345-1353.
Touretzky, D.S. 1986. The Mathematics of Inheritance Systems, Los Altos: Morgan Kaufmann.
Touretzky, D., J. Horty, and R. Thomason. 1987. "A Clash of Intuitions: The Current State of Nonmonotonic Inheritance Systems," in IJCAI-87, pp. 476-482.

426 Knowledge Representation
1988
79
825
Dominic II: Meta-Level Control in Iterative Redesign

Mark F. Orelup, Graduate Research Assistant
John R. Dixon, Professor
Mechanical Design Automation Laboratory
Department of Mechanical Engineering
University of Massachusetts
Amherst, MA 01003

Paul R. Cohen, Assistant Professor
Experimental Knowledge Systems Laboratory
Department of Computer and Information Science
University of Massachusetts

Melvin K. Simmons
Artificial Intelligence Branch
General Electric Corporate Research and Development
Schenectady, New York 12301

This paper describes the meta-level control system of a program (Dominic) for parametric design of mechanical components by iterative redesign. We view parametric design as search, and thus Dominic is a hill climbing algorithm. However, from experience with Dominic we concluded that modeling engineering design as hill climbing has several limitations. Therefore, a need for meta-level control knowledge exists. To implement meta-level control, we have taken the approach of dynamically modifying the way hill climbing is performed for this task, rather than requiring the addition of domain-specific control knowledge. We have identified the limitations of hill climbing, constructed various generic hill climbing strategies, and developed a meta-strategy to guide the application of the strategies. The program monitors its own performance for unproductive efforts and selects among different strategies to improve its performance as it designs. This meta-level control significantly improves the performance of the program over the performance of an earlier version.

Engineering design is widely recognized to be an iterative process; thus the control of iterative processes is critical to progress in developing computational models of design. This paper describes a meta-level control system for a program (Dominic) that performs parametric design of mechanical components by iterative redesign. We view parametric design as search through a design space where each point on a hill is a design. From this view, we have developed Dominic as a hill climbing algorithm for the task of parametric mechanical design. Dominic I [Howe 86a, Dixon 87] is a domain-independent program generalized from two expert systems [Dixon 84b, Kulkarni 85]. It solves that large class of parametric component design problems (i.e., the design variables are known, but their values are not) which require essentially no conceptual innovation (an initial trial design is readily obtainable, and all non-metric decisions have been made, such as manufacturing process and material). The inference engine of Dominic I implements the iterative redesign model of the design process [Dixon 85, Dixon 84a] shown in Figure 1, which is a hill climbing algorithm. Though redesign can follow a number of distinct strategies, only a single conservative strategy was employed in Dominic I. Dominic I demonstrates a first step towards domain-independence and can be characterized as a task-level architecture [Gruber 87b]. However, it failed to produce acceptable designs in approximately twenty percent of the cases on which it was tested. We believe this was primarily due to the limitations of modeling mechanical design as hill climbing.

Figure 1. Iterative redesign model

Rather than add domain-specific control knowledge to improve performance, we have implemented a domain-independent meta-level control system that controls the application of different generic hill climbing strategies. Thus, the second version of the program (Dominic II) has meta-level control as depicted in Figure 2. When an examination of a history of the design effort by the Performance Monitor reveals an unproductive effort, the Strategy Selector selects a new redesign strategy from the Library of Redesign Strategies to be used in the inference engine.
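The control structure just described, an inference engine wrapped by a Performance Monitor and Strategy Selector as in Figure 2, can be sketched as a loop. This is a hedged reconstruction: the function names, the quality scale, and the termination test are all invented for illustration, not taken from the paper.

```python
QUALITY = ["unacceptable", "poor", "fair", "good", "excellent"]

def run_dominic_ii(initial_design, redesign, evaluate, detect_effort,
                   select_strategy, strategy="vanilla", max_iters=50):
    """Iterative redesign with meta-level control (illustrative stub).

    redesign(design, strategy)       -> new design      (inference engine)
    evaluate(design)                 -> quality level   (analysis/evaluation)
    detect_effort(history)           -> effort name or None  (Performance Monitor)
    select_strategy(effort, history) -> strategy name   (Strategy Selector)
    """
    design, history = initial_design, []
    for _ in range(max_iters):
        design = redesign(design, strategy)
        quality = evaluate(design)
        history.append((design, quality, strategy))
        if quality == "excellent":
            break
        effort = detect_effort(history)
        if effort is not None:
            strategy = select_strategy(effort, history)
    # return the best design seen, not merely the last one
    return max(history, key=lambda h: QUALITY.index(h[1]))
```

The point of the sketch is the placement of the meta-level: strategy selection sits outside the redesign step, so the object-level search method can be swapped mid-run without any domain knowledge in the loop itself.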
Strategy selection is made by a meta-strategy, based in part on a history of previous strategies. With this structure, Dominic II can recognize when one hill climbing strategy is unproductive and will substitute another, more appropriate one. This implementation of meta-level control maintains the generality of Dominic's architecture since the meta-level control system is independent of any domain.

Orelup, Dixon, Cohen and Simmons 25
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

In the remainder of the paper, after a brief overview of Dominic's methodology, the details of Dominic II's meta-level control system are described, its performance on a variety of problems in five different domains is presented, and its relation to other research in meta-level control is discussed.

2

Before the Dominic II meta-level control system can be described, the basic features of the iterative redesign problem formulation embodied in Dominic must be explained (for more details see [Howe 86a]). In this section we describe how redesign problems are formulated and what constitutes a "redesign strategy."

2.1 Problem Formulation

Redesign class problems are structured in the following terms:
1) Problem Parameters define a problem instance in a domain;
2) Design Variables define a candidate solution. They are controlled and their values selected (within their limits) by the designer;
3) Performance Parameters assess the quality of a design;
4) Constraints are required relationships among design variables and problem parameters;
5) A Dependency expresses the expected effect a change in a design variable will have on a performance parameter.

2.2 Quality Assessment

For each design, levels of satisfaction (excellent, good, fair, unacceptable) determine how well a performance parameter meets the desired performance.
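These satisfaction levels combine with each parameter's importance (described next) to give per-parameter quality levels, and the overall design quality is the worst of these. A sketch follows; the paper gives the level names but not the exact combination rule, so the importance adjustment below is an invented stand-in:

```python
SATISFACTION = ["unacceptable", "fair", "good", "excellent"]
QUALITY = ["unacceptable", "poor", "fair", "good", "excellent"]

def quality_level(satisfaction, importance):
    """Combine a satisfaction level with a degree of importance.
    An unacceptable parameter stays unacceptable; otherwise a
    high-importance parameter is judged one level more harshly and a
    low-importance one a level more leniently (illustrative rule)."""
    base = {"unacceptable": 0, "fair": 2, "good": 3, "excellent": 4}[satisfaction]
    if base == 0:
        return "unacceptable"
    shift = {"high": -1, "moderate": 0, "low": 1}[importance]
    return QUALITY[max(1, min(4, base + shift))]

def overall_quality(parameters):
    """Overall design quality is the lowest quality level of all the
    performance parameters, as the text describes."""
    levels = [quality_level(s, i) for s, i in parameters]
    return min(levels, key=QUALITY.index)
```

For example, `overall_quality([("excellent", "high"), ("fair", "high")])` is dragged down to "poor" by the important but merely fair parameter, reflecting the emphasis placed on important performance parameters.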
A quality level (excellent, good, fair, poor, unacceptable) for a performance parameter is then determined by combining its level of satisfaction with its degree of importance (high, moderate, low). This method places more emphasis on the more important performance parameters. The lowest quality level of all the performance parameters becomes the overall design quality level.

2.3 Redesign Strategy

A redesign strategy specifies the set of methods that perform the following actions at each step of the iterative redesign phase:
1) Select a performance parameter for attention;
2) Determine a target amount to change the selected performance parameter;
3) Select the design variable to effect the desired change in the performance;
4) Determine the amount to change the selected design variable;
5) Decide whether or not to implement the selected design variable change.

Any combination of methods for each step forms a redesign strategy. For example, one redesign strategy could be replaced by another by simply changing the method that determines how much to change the selected performance parameter, say, from "increase one quality level" to "increase to the excellent quality level." Therefore, many redesign strategies are possible, and all search the design space differently. It should be noted that the redesign strategies are defined in generic terms.

Figure 2. Dominic II architecture

In this section the meta-level knowledge and control system implemented in Dominic II are described in terms of: (1) performance monitoring; (2) the unproductive design efforts; (3) the redesign strategies; and (4) the meta-strategy.

3.1 Overview

In Dominic, design is viewed as a search for solutions in a design space whose axes are the design variables and whose topology is determined by the varying degrees of quality of the performance parameters.
Dominic searches the design space via an operator (to change one or more design variables), and the redesign strategies control the application of the operator by determining which design variable(s) to change and by how much. Hence, a form of meta-level control for Dominic is the control of redesign strategies. Dominic II monitors its performance by checking if its designs are improving. If not, normally one of the unproductive design efforts (the problems of hill climbing quantified for this task) is present, and Dominic II selects a different redesign strategy to control its search based on the unproductive design effort present and the history of the strategies tried. With this structure, at the meta-level Dominic is controlling the application of redesign strategies, or the methods by which it designs. In essence, Dominic is adapting its problem solving approach to the current state of the problem, as the following paragraphs describe.

26 Automated Reasoning

The current history of the design effort is the information Dominic II uses to assess its performance. For each iteration in the redesign-analysis-evaluate loop Dominic II stores the following information to record the current design effort:
1) the design variable that was modified, and by how much;
2) the performance parameter attended to, and the consequent quality change;
3) any active constraints;
4) the current design;
5) the resulting overall design quality level.

A time record of the design effort is made by collecting the above information for a number of iterations. In practice, a sufficient number was twice the number of design variables. We found that all important information for performance monitoring could be found in a time window of this duration. Performance monitoring is done by making a performance assessment every third iteration, unless a strategy change has just been invoked, in which case a grace period equal to the duration of a time record is given.
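The time-record bookkeeping just described might look like the following sketch. The class and field names are invented; the window size (twice the number of design variables), the every-third-iteration check, and the grace period follow the text:

```python
from collections import deque

QUALITY = ["unacceptable", "poor", "fair", "good", "excellent"]

class PerformanceMonitor:
    """Keeps a time record of the last 2 * n_design_vars iterations."""

    def __init__(self, n_design_vars):
        self.window = 2 * n_design_vars
        self.record = deque(maxlen=self.window)  # old entries fall off
        self.iteration = 0

    def log(self, variable, delta, perf_param, active_constraints,
            design, quality):
        """Store the five items of information listed in the text."""
        self.iteration += 1
        self.record.append({"variable": variable, "delta": delta,
                            "perf_param": perf_param,
                            "constraints": active_constraints,
                            "design": design, "quality": quality})

    def on_strategy_change(self):
        # grace period: clear the record so a full window must refill
        self.record.clear()

    def assessment_due(self):
        # assess every third iteration, once the grace period is over
        return self.iteration % 3 == 0 and len(self.record) == self.window

    def improving(self):
        """Has overall quality risen at least one level over the record?"""
        first, last = self.record[0]["quality"], self.record[-1]["quality"]
        return QUALITY.index(last) >= QUALITY.index(first) + 1
```

When `assessment_due()` is true and `improving()` is false, the record would then be matched against the criteria of the unproductive design efforts described below.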
A performance assessment is made by first analyzing the sequence of overall design quality levels within a time record. If the quality has increased by at least one level since the beginning of the time record, the program continues with the current strategy. If not, Dominic II ascertains if any unproductive design efforts are present by comparing the information gathered in the time record to the criteria of each unproductive design effort described below. If no unproductive design effort is discovered, the program continues without a strategy change.

Six unproductive design efforts have been identified by observing failure cases in Dominic I. They are searched for in the time record when lack of overall progress is detected by a performance check. The unproductive design efforts are described below in terms of the design space.
1) Creeping: One or more design variables are being changed in steady, but ineffectual amounts. This is akin to designing along a ridge in the design space, and singular changes in the design variables lead off the ridge.
2) Cycling: The program is producing the same designs fairly frequently. This usually signifies that an optimum has been reached.
3) Floundering: The design variable changes improve the selected performance parameter, but not the overall design. This state can be described as a plateau relative to the overall design quality or as a highly non-linear design space.
4) Confined: The program is making ineffectual design variable changes. The proposed design variable changes are being limited by active constraints, or by mutually exclusive performance parameters.
5) Blocked: Constraints are active, or performance parameters conflict such that no design variable change is allowable. This normally signifies that an optimum has been reached. This is also where Dominic I must quit.
6) Constraint-bound: One or more constraints are active, which inhibits the program from moving into the part of the design space it wants to explore. Constraint-bound can be the cause of Confined, Blocked, Creeping and Cycling.

All these unproductive design efforts have operational definitions [Orelup 87]. For example, the operational definition for Creeping is: a design variable has been modified monotonically in at least 30% of the iterations in the time record, and all its increments/decrements are nearly equal. If an unproductive design effort is detected, the Strategy Selection module selects one of the strategies described below.

Dominic II has six redesign strategies, described below in terms of hill climbing. Operational definitions in terms of the components of a redesign strategy are given in [Orelup 87].
1) Vanilla: This is Dominic I's original strategy. It is a conservative one which seems to work well for many applications. It climbs a hill one quality level at a time and is never allowed to move down.
2) Aggressive: This strategy takes much larger steps than Vanilla, and does not consider effects on the overall design (i.e., it may climb down or even off the hill).
3) Semi-aggressive: It takes large steps and may move down, but not off, a hill.
4) Reorder-performance-parameters: This strategy is used to change the order in which the performance parameters are selected for attention.
5) Constraint-adapter: This strategy, used in conjunction with an existing strategy, allows Dominic II to change more than one design variable at a time so that an active constraint is not violated.
6) Big-jump: This strategy is used to achieve much larger design variable changes than previously allowed.

Another strategy used, New-initial-design, is not strictly a redesign strategy. It is invoked when the strategy selector concludes that Dominic II is in a space which is no longer useful.

3.4 Meta-Strategies

The mapping of unproductive design efforts to strategies is not one-to-one. Deciding which strategy to implement given an unproductive design effort depends on the strategy history, specifically on whether or not the unproductive design effort has been detected before and, if so, what strategy or strategies have been previously implemented. Therefore Dominic II not only monitors its performance in design, but also its performance in terms of strategy use. Heuristic rules of meta-strategy have been developed for selecting a strategy given an unproductive design effort. These rules are described in general terms below with the unproductive design efforts in which they are used.
1) Creeping: To help Dominic II converge more quickly, but avoid bouncing back and forth over the desired value, use Big-jump until the design variable is creeping in the other direction; at that time invoke Aggressive.
2) Cycling: To ensure the design found is an optimum before trying elsewhere, select for attention any performance parameters not previously selected in the current time record. If this does not break the cycle, use the New-initial-design strategy.
3) Floundering: From the information gathered in the time record it is very difficult to tell whether a plateau is present or the design space is very non-linear. To remedy the plateau case, larger steps should be taken (e.g., from Vanilla to Semi-aggressive). For the other case, smaller steps should be taken to be more sensitive to the design space (e.g., from Aggressive to Semi-aggressive). If trying both cases does not help, make a new initial design.
4) Confined: To handle conflicting performance parameters, the program should take larger (but not reckless) steps in hopes that the trade-offs will work out more quickly. If this is not the case, then Dominic II is better off in another part of the design space (by invoking New-initial-design).
5) Blocked: If a constraint prevented a design variable change, then Constraint-adapter should be used. If not, or if Constraint-adapter did not work, make a new initial design.
6) Constraint-bound: Activate Constraint-adapter for the active constraints unless Constraint-adapter was activated for the same constraints in the previous time record. In that case invoke New-initial-design.

As noted before, Constraint-bound can emulate other unproductive design efforts. When this occurs, the Constraint-bound response is run. If the Constraint-bound design variables are not the ones creating the other unproductive design effort, then the response for the other unproductive design effort is run as well.

Dominic II was tested and compared against Dominic I in twenty-seven test cases spanning five different domains: hydraulic cylinder (3 cases); I-beam (3 cases); post and beam (3 cases); v-belt (10 cases); and solar heating system (8 cases). Both programs started from the same initial design, in the same strategy, and were given the same number of iterations to work a problem. In time comparisons, Dominic II normally ran faster (up to twenty-four percent faster) than Dominic I, and in the worst case was within three percent of Dominic I's time. In each of the test cases for hydraulic cylinder design, Dominic I failed to produce an acceptable design. Dominic II, however, was able to find at least one acceptable design in two of the three cases. In I-beam design, in all cases Dominic II converged faster than Dominic I (from 15-75% fewer iterations to reach comparable designs), and Dominic II found an "excellent" design, Dominic I a "fair" design at best. In the post and beam domain, Dominic II performed much like Dominic I, but was able to find one more design in all cases. The post and beam domain is more readily solved by decomposition (into a post designer, a beam designer, and a manager) than by iterative redesign [Verrilli 87].
V-belt drive design is a good test domain because all its design variables are discrete, and the design space has many local maxima. In nine out of the ten cases run, Dominic II found two or more designs while Dominic I could only find one, became Blocked, and quit. In two cases, Dominic I failed while Dominic II was able to find a "fair" design. In six of the ten cases, Dominic II found better designs than Dominic I. The solar heating system domain consisted of designing a space heating system using the F-chart method [Beckman 77, Kreith 78]. This domain was chosen for its large number of design variables (eleven) and directly conflicting performance parameters (initial cost and annual savings). In five of the eight cases Dominic II found at least one more design than Dominic I, and in three cases Dominic II found better designs.

In summary, by adapting its approach to a design problem while designing, Dominic II clearly performs better than Dominic I. Dominic II finds more solutions, finds better solutions, converges more quickly, and can succeed where Dominic I may fail.

5

Work in task-level architectures includes [Clancey 85, 86, Chandrasekaran 86, Cohen 87, Gruber 87b, Marcus 85]. These are architectures that are more general than domain-specific problems but more specific than weak methods: shells with control knowledge for particular tasks. The power of these generic architectures lies in their trade-off between powerful problem solving and wide applicability, and their explicit representation of control knowledge. Dominic embodies a task-level architecture for performing parametric design of mechanical components [Gruber 87b]. Though the meta-level control system in Dominic II is by no means a generic architecture for monitoring and adapting to a problem space, it does express the power/applicability trade-off.
Rather than adding domain-specific control knowledge to improve performance, the meta-level control in Dominic II is independent of any domain the system is applied to.

SOAR [Laird 87] is a task-level architecture based on search. Depending on the search control knowledge added to the base system, SOAR realizes the weak methods such as hill climbing, means-ends analysis, alpha-beta search, etc. Since many of its subgoals address how to make control decisions, SOAR can reflect on its own problem solving behavior. SOAR also has a chunking mechanism for learning to add to its search control knowledge while it is running. The meta-level control system of Dominic II is in the same spirit as SOAR in that they are both trying to mold the search process to the search space. SOAR performs this through universal subgoaling, while Dominic II does it by monitoring its performance and changing its strategies for search accordingly.

Georgeff [Georgeff 83] demonstrates the utility of problem-specific strategies in heuristic search. He also discusses various methods for constructing strategies and how meta-level strategies can be used to guide the application of object-level strategies. In his discussion of meta-level strategies, Georgeff anticipates Dominic II: "More general meta-level strategies could take account of information derived during the search, and could allow for dynamically changing lines of reasoning. For example, the lack of success of a strategy may suggest a corrective strategy with which to continue."

An elegant approach to control is BB1 [Hayes-Roth 85], a domain-independent blackboard architecture for control. The total system consists of two blackboards, one for control and the other for the domain. The control blackboard has six levels of abstraction and controls the execution of the domain knowledge sources as well as its own.
The cost-effectiveness of control reasoning using BB1 has been demonstrated [Garvey 87], though in one domain and through the addition of domain-specific control knowledge. Dominic II has shown the viability of dynamically modifying the control aspects of a weak method (and therefore domain-independent control knowledge) and has demonstrated its cost-effectiveness in twenty-seven problems spanning five different domains.

Hudlicka and Lesser [Hudlicka 84] describe a system which monitors and corrects a program that performs vehicle monitoring through acoustic signals, called the DVMT [Lesser 83]. The system uses a causal model of the DVMT, which is based on a blackboard architecture, to guide diagnosis when the DVMT deviates from expected behavior. The aim of the diagnosis is to detect faulty control parameter settings or faulty hardware components. The meta-level control system in Dominic II is very similar to Hudlicka and Lesser's work, though their system is more sophisticated. Rather than using a causal model to diagnose, Dominic II simply follows a predefined decision tree, and the tuning of the system is not as flexible or adaptable, since individual control parameters are not adjusted but the entire problem solving strategy is changed. However, from what was learned during this research, we believe it is possible to develop a causal model between the general symptoms of unproductive design efforts and the elements of redesign. This would then lead to the fine tuning of strategies as well.

Domineering [Howe 86b] applies Dominic I to itself, that is, to design its own configuration. The five steps of redesign are the design variables, and the performance parameters are measures of performance for Dominic I, such as the time required to find a design or the best design found. Domineering converges on redesign strategies that produce good performance by Dominic I in a specific domain.
Unfortunately, no formal comparison has been made between Dominic II and Domineering. For the purposes of discussion, however, a few comparisons and speculations can be made. Dominic II has implicit some of the explicit features of Domineering: the performance parameters of Domineering are implicitly specified in the definitions of the unproductive design efforts in Dominic II; and the dependencies between the aspects of performance and the steps of redesign are explicit in Domineering whereas they are implicit in Dominic II in the form of the meta-strategy. Also note that Domineering attempts to converge on one redesign strategy that yields good performance in a domain, while Dominic II dynamically changes the strategies according to the characteristics of the space it is in. Though no experiments have been done, we speculate that Dominic II would perform better than Dominic I running the strategy that Domineering selects for a domain, because Dominic II is more flexible and adaptable to the design space than Dominic I. A single redesign strategy probably could not do well throughout the entire design space.

Dominic is a general hill-climbing algorithm that finds satisficing [Simon 81] solutions to design problems in the iterative redesign class. Since engineering design is not well represented as hill climbing, the original Dominic needed extra control knowledge to improve its performance. This could have been done by adding domain-specific knowledge, but to maintain the generality of Dominic's architecture for parametric design of mechanical components, we added meta-level control knowledge based on the idea that the problems that can occur in hill climbing can be solved by modifying how the hill climbing is performed. In other words, when Dominic determines it is not designing well, it changes the method by which it designs.
To implement this idea, we have modeled the limitations of hill climbing in the form of unproductive design efforts; identified numerous, generic strategies for hill-climbing in this task; and have constructed a meta-strategy to perform the mapping between the two. With the addition of this knowledge, Dominic II is able to monitor its performance for unproductive design efforts and select among different strategies to try to improve its performance as it designs. Thus Dominic II reacts and adapts to its environment and shows marked improvement over the performance of Dominic I by converging faster, finding more designs, finding better designs, and succeeding where Dominic I may fail.

The work reported in this paper was partially funded by grants from the General Electric Company and the National Science Foundation to the University of Massachusetts.

[Beckman 77] W. A. Beckman, S. A. Klein, and A. D. Duffie, Solar Heating Design by the F-Chart Method, Wiley-Interscience, 1977.
[Chandrasekaran 86] B. Chandrasekaran, "Generic Tasks in Knowledge-Based Reasoning: High-Level Building Blocks for Expert System Design", IEEE Expert, Fall 1986, pages 23-30.
[Clancey 85] W. J. Clancey, "Heuristic Classification", Artificial Intelligence, Vol. 27 (1985), pages 289-350.
[Clancey 86] W. J. Clancey, "From GUIDON to NEOMYCIN and HERACLES in Twenty Short Lessons: ONR Final Report 1979-1985", The AI Magazine, August 1986, pages 40-60.
[Cohen 87] P. R. Cohen, M. Greenberg, and J. DeLiso, "Mu: A Development Environment for Prospective Reasoning Systems", Proceedings of the National Conference on Artificial Intelligence, pages 783-788, August 1987.
[Dixon 84a] J. R. Dixon, M. K. Simmons, and P. R. Cohen, "An Architecture for Applying Artificial Intelligence to Design", Proceedings of the 21st ACM/IEEE Design Automation Conference, Albuquerque, NM, June 25-27, 1984.
[Dixon 84b] J. R. Dixon and M. K. Simmons, "Expert Systems for Design: Standard V-Belt Drive Design as an Example of the Design-Evaluate-Redesign Architecture", Proceedings of the ASME Computers in Engineering Conference, Las Vegas, NV, August 12-16, 1984.
[Dixon 85] J. R. Dixon and M. K. Simmons, "Expert Systems for Design: A Program of Research", ASME Paper No. 85-DET-78, presented at the ASME Design Engineering Conference, Cincinnati, OH, September 10-13, 1985.
[Dixon 87] J. R. Dixon, A. E. Howe, P. R. Cohen, and M. K. Simmons, "Dominic I: Progress Towards Domain Independence in Design by Iterative Redesign", Engineering with Computers, Vol. 2 (1987), pages 137-145.
[Garvey 87] A. Garvey, C. Cornelius, and B. Hayes-Roth, "Computational Costs versus Benefits of Control Reasoning", Proceedings of the National Conference on Artificial Intelligence, pages 110-115, August 1987.
[Georgeff 83] M. P. Georgeff, "Strategies in Heuristic Search", Artificial Intelligence, Vol. 20 (1983), pages 393-425.
[Gruber 87a] T. Gruber and P. Cohen, "Knowledge Engineering Tools at the Architecture Level", Proceedings of the Tenth International Joint Conference on Artificial Intelligence, pages 100-103, August 1987.
[Gruber 87b] T. R. Gruber and P. R. Cohen, "Design for Acquisition: Principles of Knowledge-System Design to Facilitate Knowledge Acquisition", International Journal of Man-Machine Studies, Vol. 26 (1987), pages 143-159.
[Hayes-Roth 85] B. Hayes-Roth, "A Blackboard Architecture for Control", Artificial Intelligence, Vol. 26 (1985), pages 251-321.
[Howe 86a] A. E. Howe, P. R. Cohen, J. R. Dixon, and M. K. Simmons, "Dominic: A Domain Independent Program for Mechanical Design", The International Journal for Artificial Intelligence in Engineering, Vol. 1, No. 1, 1986.
[Howe 86b] A. E. Howe, "Learning to Design Mechanical Engineering Problems", EKSL Working Paper 86-01, Department of Computer and Information Science, University of Massachusetts, Amherst, 1986.
[Hudlicka 84] E. Hudlicka and V. Lesser, "Meta-Level Control through Fault Detection and Diagnosis", Proceedings of the National Conference on Artificial Intelligence, August 1984.
[Kreith 78] F. Kreith and J. Kreider, Principles of Solar Engineering, Hemisphere Publishing Corp., 1978.
[Kulkarni 85] V. M. Kulkarni, J. R. Dixon, J. E. Sunderland, and M. K. Simmons, "Expert Systems for Design: The Design of Heat Fins as an Example of Conflicting Sub-goals and the Use of Dependencies", Proceedings of the ASME Computers in Engineering Conference, Boston, MA, August 4-8, 1985.
[Laird 87] J. E. Laird, A. Newell, and P. S. Rosenbloom, "SOAR: An Architecture for General Intelligence", Artificial Intelligence, Vol. 33 (1987), pages 1-64.
[Lesser 83] V. Lesser and D. D. Corkill, "The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks", AI Magazine, Vol. 4, No. 3, Fall 1983.
[Marcus 85] S. Marcus, J. McDermott, and T. Wang, "Knowledge Acquisition for Constructive Systems", Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 637-639, August 1985.
[Orelup 87] M. F. Orelup, "Meta-Level Control in Domain Independent Design by Iterative Redesign", Master's Thesis, Department of Mechanical Engineering, University of Massachusetts, Amherst, MA, 1987.
[Simon 81] H. A. Simon, The Sciences of the Artificial, 2nd edition, Cambridge, MA: The MIT Press, 1981.
[Verrilli 87] R. J. Verrilli, K. L. Meunier, J. R. Dixon, and M. K. Simmons, "A Model for Management of Problem-Solving Networks in Mechanical Design", Proceedings of the ASME Computers in Engineering Conference, New York, NY, August 1987.
1988
Mixing Strict and Defeasible Inheritance

John F. Horty (1,2) and Richmond H. Thomason (3)

(1) Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213
(2) Philosophy Department, University of Maryland, College Park, MD 20742
(3) Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15260

Abstract: Commonsense or expert knowledge of any rich domain involves an intricate mixture of strict and defeasible information. The importance of representing defeasible information in an inheritance system has been widely recognized, but it is not enough for a system to represent only defeasible information: without the ability to represent strict information as well, the system cannot represent definitional relations among concepts. As a response to this difficulty, we present a well-defined and intuitively attractive theory of inheritance for IS-A hierarchies containing strict and defeasible link types mixed together.

1 Introduction

It is apparent that commonsense or expert knowledge of any reasonably rich domain has to involve an intricate mixture of strict and defeasible information. The importance of representing defeasible information in a knowledge base, particularly in a frame- or network-based inheritance reasoner, has been widely recognized. Although several early systems designed to allow defeasible inheritance reasoning were subject to semantic difficulties in their treatment of cancellation, these problems by now are essentially solved; there exist today a number of well-defined and intuitively attractive theories of defeasible inheritance.
It has been argued, however, by Brachman [2] and Israel [6], that this intense concern with defeasible inheritance has obscured some of the more general problems exhibited by network formalisms capable of representing only defeasible information: without the ability to represent strict information as well, an inheritance system cannot express the important analytic or definitional relations among concepts. One way of responding to these difficulties, exemplified by work in the KL-ONE tradition [3], is to abandon the project of encoding defeasible information in an inheritance reasoner, concentrating instead on definitional relations among richly structured concepts. As an alternative strategy, we are exploring ways in which strict and defeasible taxonomic information can be combined in a single inheritance network. This paper presents a first step: a well-defined and intuitively attractive theory of inheritance for IS-A hierarchies containing strict and defeasible link types mixed together. The analysis of mixed inheritance described here is itself a mixture, combining the theory of strict inheritance from Thomason et al. [7] with the theory of defeasible inheritance provided by Horty et al. [5].

2 Basic concepts

Letters from the beginning of the alphabet (a through d) refer only to objects or individuals; letters from the middle of the alphabet (m through t) refer only to properties or kinds. Letters from the end of the alphabet (u through z) range over both objects and properties. Where y is a property, the link types x ⇒ y and x ⇏ y represent positive and negative strict relations. If x is itself a property, these positive and negative strict links are equivalent to certain quantified conditionals: the link p ⇒ q represents a statement of the form 'Every P is a Q'; the link p ⇏ q represents a statement of the form 'No P is a Q'.
If x is an object, these positive and negative strict links are equivalent to atomic and negated atomic statements from ordinary logic: a ⇒ p and a ⇏ p represent the statements Pa and ¬Pa. Where both x and y are properties, the link types x → y and x ↛ y represent defeasible relations. These positive and negative defeasible links are equivalent to ordinary generic statements: p → q and r ↛ q, for example, might represent the statements 'Birds fly' and 'Mammals don't fly'. There is nothing in classical logic very close in meaning to generic statements like these. In particular, 'Birds fly' doesn't mean that all birds fly, since it is true even in the presence of exceptions. Instead, it seems to mean that "typical birds" fly. Likewise, 'Mammals don't fly' does not mean that no mammals fly, but only that "typical mammals" don't. Capital Greek letters represent networks: finite graphs, with nodes and link types as described. Networks are themselves classified as strict if they contain only strict links, or defeasible if they contain no strict links emanating from property nodes. Mixed networks can contain both strict and defeasible links emanating from property nodes. Lower case Greek letters refer to paths, special sequences of links. Often, it is convenient to refer to an arbitrary path in a way that displays some of the nodes it passes through without displaying the particular link types connecting those nodes. For this purpose, we adopt a notation according to which 'π(x, σ, y)' refers to an arbitrary positive path, and 'π̄(x, σ, y)' likewise to an arbitrary negative path, from x through σ to y.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved. (Horty and Thomason, p. 427.)

As a convention governing this π-notation, we assume that adjacency of node symbols entails adjacency of nodes on the paths symbolized.
Thus, for example, 'π̄(x, u, σ, y)' refers to a negative path beginning with a direct link of any type from x to u, and then moving through σ to y. Paths are classified as simple or compound, strict or defeasible, positive or negative. The simple paths are just the direct links, classified as strict or defeasible, positive or negative, along with the links themselves. Compound paths are defined inductively, as follows. (1) If π(x, σ, p) is a strict positive path, then: π(x, σ, p) ⇒ q is a strict positive path; π(x, σ, p) ⇏ q is a strict negative path; π(x, σ, p) → q is a defeasible positive path; and π(x, σ, p) ↛ q is a defeasible negative path. (2) If π̄(x, σ, p) is a strict negative path, then: π̄(x, σ, p) ⇐ q is a strict negative path. (3) If π(x, σ, p) is a defeasible positive path, then: π(x, σ, p) ⇒ q is a defeasible positive path; π(x, σ, p) ⇏ q is a defeasible negative path; π(x, σ, p) → q is a defeasible positive path; and π(x, σ, p) ↛ q is a defeasible negative path. (4) If π̄(x, σ, p) is a defeasible negative path, then: π̄(x, σ, p) ⇐ q is a defeasible negative path. It follows from this definition that an individual can occur in a path only as its initial node. Let us define a negative segment as a strict or defeasible negative link, possibly followed by a reverse positive strict path, that is, as a link sequence either of the form x1 ↛ x2 ⇐ ... ⇐ xn or of the form x1 ⇏ x2 ⇐ ... ⇐ xn. Then it follows from this definition also that if a negative segment occurs in a path, it can occur only at the very end. Intuitively, paths represent arguments, which enable certain statements as their conclusions. A positive path of the form π(x, σ, y) enables the statement x ⇒ y if it is strict or x is an individual, and the statement x → y if it is defeasible and x is a kind; likewise, a negative path of the form π̄(x, σ, y) enables x ⇏ y if it is strict or x is an individual, and x ↛ y if it is defeasible and x is a kind.
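The four inductive clauses amount to a small transition table on path classifications. The following sketch is our own encoding (the ASCII tags for the five link types are ours): it classifies a link sequence as a strict/defeasible, positive/negative path, or rejects it when no clause applies.

```python
# Clauses (1)-(4) as a transition table: the classification of a path
# (strict/defeasible x positive/negative) after appending one more link.
# Link tags: '=>' strict positive, '=/>' strict negative,
#            '->' defeasible positive, '-/>' defeasible negative,
#            '<=' reverse strict positive (used inside negative segments).
EXTEND = {
    ('sp', '=>'): 'sp', ('sp', '=/>'): 'sn',   # clause (1)
    ('sp', '->'): 'dp', ('sp', '-/>'): 'dn',
    ('sn', '<='): 'sn',                        # clause (2)
    ('dp', '=>'): 'dp', ('dp', '=/>'): 'dn',   # clause (3)
    ('dp', '->'): 'dp', ('dp', '-/>'): 'dn',
    ('dn', '<='): 'dn',                        # clause (4)
}

FIRST = {'=>': 'sp', '=/>': 'sn', '->': 'dp', '-/>': 'dn'}

def classify(links):
    """Classify a link sequence, or return None if it is not a path."""
    kind = FIRST.get(links[0])
    for link in links[1:]:
        if kind is None:
            return None
        kind = EXTEND.get((kind, link))
    return kind
```

For instance, classify(['=>', '=/>', '<=']) yields 'sn', the shape of a negative segment ending a strict path, while classify(['=/>', '->']) is None: nothing may follow a negative link except reverse strict links, which is exactly why a negative segment can occur only at the end of a path.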
Given a network Γ, the purpose of an inheritance theory is to specify the set of statements supported by Γ, that is, the set of statements we can reasonably conclude from the statements contained in Γ. We arrive at this specification in a roundabout way, defining a statement as supported by Γ just in case it is enabled by a path that Γ permits. It remains only to define the paths permitted by Γ, intuitively, the arguments sanctioned in the context of Γ.

3 Motivation

Since our approach to mixed inheritance combines the theory of strict inheritance from [7] with the theory of defeasible inheritance from [5], we first summarize these two theories, and then explain how they fit together. For strict networks, our definition of permitted paths is very simple. According to the theory of [7], a strict network Γ permits exactly the paths it contains, that is, Γ permits σ iff σ is a path entirely composed of links contained in Γ.

[Figure 1: Γ1] [Figure 2: Γ2]

In the case of Γ1 (Figure 1), for example, the permitted paths include a ⇒ s ⇒ r and p ⇒ q ⇏ r ⇐ s. Suppose we interpret the nodes in this net so that p = starlings, q = birds, r = mammals, s = dogs, and a = Rover. Then the first of these paths shows us how Γ1 supports the conclusion that Rover is a mammal (a ⇒ r); the second shows how it supports the conclusion that no starlings are dogs (p ⇏ s). The net does not permit, for example, the path p ⇒ s ⇒ r, since the link p ⇒ s is not contained in Γ1. It is important to note that this analysis of strict inheritance is not the standard view. Strict networks contain only strict links, each of which is equivalent to a formula of classical logic. It may seem natural, then, to use classical logic itself to provide a semantics for such a network, by identifying the network with the set of formulas that translate its links, and then defining a statement as supported by the network just in case it belongs to the deductive closure of that set.
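On this path-based account, checking whether a strict network supports x ⇒ y or x ⇏ y reduces to graph search over the links actually present. A minimal sketch, using our own encoding of Γ1 (all function names are ours):

```python
def pos_reach(pos_links, x):
    """Nodes reachable from x via strict positive links (x itself included)."""
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for (s, t) in pos_links:
            if s == u and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def permits_strict_positive(pos_links, x, y):
    """Gamma contains a strict positive path x => ... => y."""
    return y in pos_reach(pos_links, x)

def permits_strict_negative(pos_links, neg_links, x, y):
    """Gamma contains a path x => ... => u =/> v <= ... <= y:
    a positive chain, one negative link, then a reversed positive chain."""
    return any(u in pos_reach(pos_links, x) and v in pos_reach(pos_links, y)
               for (u, v) in neg_links)

# Gamma_1: a = Rover, p = starlings, q = birds, r = mammals, s = dogs.
POS = {('a', 's'), ('s', 'r'), ('p', 'q')}
NEG = {('q', 'r')}
```

Here permits_strict_positive(POS, 'a', 'r') recovers "Rover is a mammal" via a ⇒ s ⇒ r, and permits_strict_negative(POS, NEG, 'p', 's') recovers "no starlings are dogs" via p ⇒ q ⇏ r ⇐ s, while p ⇒ s remains unsupported because no chain of contained links yields it.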
This idea, which we take to be the standard view, is due originally to Hayes [4]. To see that it is different from the theory of [7], consider, for example, the net Γ2 (Figure 2). This network would be translated into the set {Pa, ¬Pa, ¬Qa}. Since the set is inconsistent, any statement at all belongs to its classical deductive closure; so according to the standard view, Γ2 should be taken to support every statement, including, say, Qa. According to the analysis of [7], however, Γ2 does not support Qa, since it permits no positive path from a to q, and in fact provides uncontested evidence that ¬Qa. It is, in some ways, a delicate matter to decide between the analysis of strict inheritance provided by [7] and the traditional analysis of [4]. One is always free to regard a strict network simply as a notational variant of some classical theory, so that the analysis of [4] would be appropriate. Still, there seems to be some value in taking seriously the graph-based nature of inheritance reasoners, which derive conclusions corresponding only to actual paths. The problem is then to see whether we can make logical sense of such a reasoner by designing an appropriate logic, rather than by forcing the reasoner to conform to the standards of an already-existing logic. This task is carried out for strict networks in [7], which provides both a Gentzen-style proof theory for path-based inheritance reasoning and an interpretation of the resulting logic in a four-valued model based on that of Belnap [1].

428 Knowledge Representation

[Figure 3: Γ3] [Figure 4: Γ4]

Defeasible inheritance is more complicated than strict inheritance, primarily because defeasible networks, unlike their strict counterparts, do not permit all the paths they contain. The distinction derives from the different role played in the two kinds of networks by conflicting paths.
Any strict network containing conflicting paths is inconsistent, though, as the theory of [7] shows, the effects of the inconsistency can be localized. But defeasible networks can contain conflicting paths without even local inconsistency. Consider, for example, Γ3 (Figure 3). Although this net contains conflicting paths, it is not inconsistent: it admits interpretations under which all of its links represent true statements, including the well-known interpretation with a = Nixon, q = Quakers, r = Republicans, and p = pacifists. Since the net is consistent, no reasonable theory of defeasible inheritance would say that it permits both of the conflicting paths a ⇒ q → p and a ⇒ r ↛ p at once. Any such theory would allow us to draw inconsistent conclusions, for example, that Nixon both is a pacifist (a ⇒ p) and that he isn't (a ⇏ p), from consistent information. Theories of defeasible inheritance differ among themselves in their treatment of conflicting paths. One kind of theory associates with each network containing conflicting paths a number of different extensions, corresponding to different resolutions of the conflicts. Because each extension supports a maximal set of conclusions (subject to certain constraints) we describe these theories as credulous; an example is the theory of Touretzky [8]. The present paper is based on an alternative approach to defeasible inheritance, developed in [5], which has the advantage of associating with any given network only a single extension. We describe this alternative as a skeptical approach, since it embodies the broadly skeptical idea that conflicting arguments, represented in networks by conflicting paths, tend to neutralize each other. Applied to Γ3, for example, the skeptical approach tells us that neither of the conflicting paths should be permitted.
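The skeptical neutralization of Γ3's conflicting paths can be sketched with a toy path enumerator. This is our own simplified encoding: it handles only chains of positive defeasible links optionally closed by one negative link, assumes an acyclic net, and ignores both the special status of direct links and preemption, which the full theory adds.

```python
def all_paths(links, x):
    """links: dict node -> list of (target, 'pos' | 'neg') defeasible links.
    Enumerate (end node, polarity) conclusions of chains out of x;
    a negative link may only close a chain."""
    out = []
    def walk(node):
        for (t, pol) in links.get(node, []):
            out.append((t, pol))
            if pol == 'pos':        # negative links end a path
                walk(t)
    walk(x)
    return out

def skeptical_verdict(links, x, y):
    """Skeptical stance: conflicting paths to y neutralize each other."""
    pols = {pol for (t, pol) in all_paths(links, x) if t == y}
    if pols == {'pos'}:
        return 'yes'
    if pols == {'neg'}:
        return 'no'
    return 'neutralized' if pols else 'unknown'

# Gamma_3: a = Nixon, q = Quakers, r = Republicans, p = pacifists.
G3 = {'a': [('q', 'pos'), ('r', 'pos')],
      'q': [('p', 'pos')],
      'r': [('p', 'neg')]}
```

On this encoding, skeptical_verdict(G3, 'a', 'p') is 'neutralized': the Quaker path and the Republican path cancel, so neither "Nixon is a pacifist" nor its denial is supported, while skeptical_verdict(G3, 'a', 'q') is an uncontested 'yes'.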
In fact, the theory of [5] is not thoroughly skeptical: its skepticism is restricted to compound paths, and even compound paths can be neutralized only by those conflicting paths that are not themselves, as we say, preempted. The first of these restrictions has the effect that, even in the face of conflicts, any non-compound path contained in a network, that is, any direct link, will be permitted by that network. As explained in [5], this principle is well-motivated, particularly against the background of the four-valued logic; but it is not a crucial feature of the theory. The second restriction, however, is crucial. Preemption is the mechanism by which, in case of conflicts, arguments based on more specific information are allowed to override arguments based on less specific information. For example, consider Γ4 (Figure 4), with, say, a = Tweety, p = penguins, q = birds, and r = flying things. Since this net contains the two conflicting paths a ⇒ p → q → r and a ⇒ p ↛ r, an unrestricted skepticism would permit neither. However, it seems in this case that the latter of these paths should be permitted, because it represents an argument based on more specific information. The second restriction above reflects this intuition. We say that a path of the form π(x, τ, v) → y is preempted in a net Γ just in case there is a node z such that (i) Γ permits a path π(x, τ1, z, τ2, v), so that z provides "more specific" information than v about x, and (ii) z ↛ y ∈ Γ, so that z gives us "direct" information contrary to that provided by v. (The definition of preemption for negative paths is symmetrical.) According to the theory of [5], even a conflicted path will be permitted if the only paths with which it conflicts are themselves preempted; so, for example, a ⇒ p ↛ r will be permitted by Γ4, since a ⇒ p → q → r is preempted.
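Preemption on Γ4 can be illustrated with a small sketch. The encoding below is ours and deliberately simplified: it checks only the "direct contrary link from a more specific node" clause for positive chains, and omits the symmetric negative case and the full recursive permission test.

```python
# Gamma_4: a = Tweety, p = penguins, q = birds, r = flying things.
POS_DEF = {'p': ['q'], 'q': ['r']}   # defeasible positive links
NEG_DEF = {('p', 'r')}               # defeasible negative link p -/-> r
MEMBER  = {'a': 'p'}                 # strict membership link a => p

def positive_chains(start):
    """All node chains start, n1, ..., nk along defeasible positive links."""
    chains = [[start]]
    def walk(chain):
        for t in POS_DEF.get(chain[-1], []):
            c = chain + [t]
            chains.append(c)
            walk(c)
    walk([start])
    return chains

def verdict(individual, prop):
    start = MEMBER[individual]
    pos = [c for c in positive_chains(start) if c[-1] == prop and len(c) > 1]
    neg = [c for c in positive_chains(start) if (c[-1], prop) in NEG_DEF]
    # A positive chain ... v -> prop is preempted when some earlier node z
    # on it carries a direct contrary link z -/-> prop (clause (ii) above).
    live_pos = [c for c in pos
                if not any((z, prop) in NEG_DEF for z in c[:-1])]
    if neg and not live_pos:
        return 'no'
    if live_pos and not neg:
        return 'yes'
    return 'conflict' if (live_pos and neg) else 'unknown'
```

Here verdict('a', 'r') comes out 'no': the chain p → q → r is preempted by the direct link p ↛ r from the more specific node p, so the negative path wins and Tweety does not fly, while verdict('a', 'q') is an unopposed 'yes'.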
The theory of [7] tells us, then, that a strict network permits exactly the paths it contains; the theory of [5] tells us that a defeasible network permits a path it contains just in case that path is either a direct link, or any other path with which it conflicts is itself preempted. Now, to combine these two theories into an account of inheritance for mixed networks, we first carry over entirely the analysis of strict inheritance from [7], and then modify the analysis of defeasible inheritance from [5] to accommodate the presence of strict links. Since it incorporates the analysis of [7], the resulting theory tells us that a mixed network permits exactly the strict paths it contains. Likewise, since it is based on the analysis of [5], the resulting theory also embodies the skeptical idea that a compound defeasible path is neutralized by any conflicting path that is not itself preempted. However, in order to develop this idea in a mixed context, we need to modify slightly our conception of the kind of paths that represent conflicts, as well as our understanding of the preemption relation among conflicting paths. In defeasible networks, all conflicts share a simple form: they involve paths with identical initial nodes, identical end nodes, and opposite polarity. But the presence of strict links introduces the possibility of less direct conflicts, even among defeasible paths.

[Figure 5: Γ5] [Figure 6: Γ6]

As an illustration, consider Γ5 (Figure 5). Here it seems reasonable, in light of the strict segment r ⇒ s ⇒ t, to regard p → q → r and p → u → v ↛ t themselves as conflicting paths, even though they do not share an end node. Imagine, for example, that r = dogs, s = mammals, and t = animals, so that the strict segment tells us that all dogs are animals.
In the context of Γ5, then, the path p → q → r, which represents an argument to the effect that p's are dogs, carries with equal force the conclusion that p's are animals; so it conflicts with p → u → v ↛ t, which represents an argument that p's are not animals. What this example shows is that two defeasible paths can represent conflicting arguments, even if they have different end nodes, when one of the paths clashes with a strict consequence of the other. Of course, such strict consequences can themselves be classified as positive or negative. Let us define κΓ(x) = {x} ∪ {y : Γ contains a strict positive path from x to y} and κ̄Γ(x) = {y : Γ contains a strict negative path from x to y}, so that κΓ(x) and κ̄Γ(x) represent the positive and negative strict consequences attributed to x by Γ: the set of properties that x must possess, according to Γ, and the set of properties that it cannot possess. It is then natural to extend our conception of conflicting defeasible paths so that, in addition to the ordinary kinds of clashes, a path of the form π(x, σ, u) → y will be said to conflict in a net Γ with any path of the form π(x, τ, v) ↛ m where m ∈ κΓ(y), and also with any path of the form π(x, τ, v) ⇒ m where m ∈ κ̄Γ(y). Our general skeptical attitude regarding conflicting paths will then have to apply to these new kinds of conflicts as well. In Γ5, for instance, neither p → q → r nor p → u → v ↛ t will be permitted, since each is neutralized by its conflict with the other. Just as the presence of strict links allows for the possibility of new kinds of conflicts, however, it provides also for the possibility of new relations of preemption. To see this, consider the network Γ6 (Figure 6), supplied with an interpretation under which a = Hermann, p = persons born in America, q = native speakers of German, r = persons born in Pennsylvania, and s = native speakers of Pennsylvania Dutch.
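The positive strict-consequence set, and the extended conflict test it induces, are directly computable by closure over the strict links. A minimal sketch of the κΓ side (the encoding and names are ours; κ̄Γ would be computed analogously from strict negative paths):

```python
def kappa(strict_pos, x):
    """kappa_Gamma(x): x together with every strict positive consequence of x."""
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for (s, t) in strict_pos:
            if s == u and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def indirect_conflict(strict_pos, pos_end, neg_end):
    """A positive defeasible path ending at pos_end conflicts with a negative
    defeasible path ending at neg_end whenever neg_end is in kappa(pos_end)."""
    return neg_end in kappa(strict_pos, pos_end)

# Gamma_5 strict segment: r = dogs, s = mammals, t = animals.
STRICT_POS = {('r', 's'), ('s', 't')}
```

With this, the Γ5 clash is detected even though the two paths end at different nodes: the positive path ends at r, the negative one at t, and t ∈ κΓ(r), so the paths conflict and, skeptically, neutralize each other.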
Under this interpretation, Γ6 tells us that Hermann is a particular speaker of Pennsylvania Dutch, that every speaker of Pennsylvania Dutch speaks German (since Pennsylvania Dutch is a dialect of German), that German speakers tend not to be born in America, that speakers of Pennsylvania Dutch tend to be born in Pennsylvania, and that everyone born in Pennsylvania is born in America. According to our new, extended conception, the paths a ⇒ s → r and a ⇒ s ⇒ q ↛ p now represent conflicting arguments in the context of Γ6, since p ∈ κΓ(r). Of course, we would not want to remain skeptical in this case. The path a ⇒ s ⇒ q ↛ p, representing the argument that Hermann was not born in America since he is a native speaker of German, should be preempted in Γ6: the fact that his dialect is Pennsylvania Dutch provides a more specific argument to the contrary. Without modification, however, the treatment of preemption from [5] does not give us this result. A path can be preempted only if there is more specific and direct information to the contrary; and, although s does provide "more specific" information than q, the path s → r ⇒ p does not represent "direct" information to the contrary; at least, not according to the standards of [5], which holds that direct information can be carried only by direct links. Evidently, it is this last requirement concerning the nature of direct information that needs to be modified in the present context. In the context of defeasible networks, it makes good sense to say that direct information can be carried only by direct links: any compound path represents an argument that can itself be undermined. In the context of mixed nets, however, certain kinds of compound paths can legitimately be thought to carry direct information, namely, compound paths consisting of a single defeasible link followed by a strict end segment, of any length.
In Γ6, for example, the path s → r ⇒ p should be thought of as telling us directly that speakers of Pennsylvania Dutch are born in America: for even by the standards of [5], s → r counts as a direct statement of the fact that speakers of Pennsylvania Dutch are born in Pennsylvania, and the strict extension r ⇒ p simply tells us that everyone born in Pennsylvania is born in America. Adjusting our definition of preemption to account for this new notion of direct information, we say now that a negative path π(x, τ, v) ↛ m is preempted in a mixed network Γ if there exist nodes z and n such that Γ permits a path π(x, τ1, z, τ2, v) with z → n ∈ Γ and m ∈ κΓ(n). This new definition allows us to conclude, as it should, that a ⇒ s ⇒ q ↛ p is preempted in Γ6; so the net does end up supporting the conclusion that Hermann was born in America. It is a bit more complicated to formulate mixed preemption for positive paths, although no new ideas are involved, simply because direct information to the contrary can now take the form either of a positive defeasible link followed by a negative strict extension, or of a negative defeasible link followed by a reverse positive strict extension. Formally, we say that a positive path π(x, τ, v) → m is preempted in a mixed network Γ if there exist nodes z and n such that Γ permits a path π(x, τ1, z, τ2, v) with either (i) z → n ∈ Γ and m ∈ κ̄Γ(n) or (ii) z ↛ n ∈ Γ and n ∈ κΓ(m).

4 The definition

In this section, we assemble our motivational ideas into a definition of the permission relation for mixed networks; we use the symbol '⊩' to stand for the permission relation, so that 'Γ ⊩ σ' means that the net Γ permits the path σ. Like that of [5], the present definition is inductive.
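Clauses (i) and (ii) can be phrased as a small check, assuming the permitted intermediate nodes z and the strict-consequence sets are already available (computing those is the job of the full inductive definition below; the encoding and all names here are ours):

```python
def positive_preemption_holds(def_pos, def_neg, mids, m, kappa, kappa_bar):
    """True if a positive path pi(x, tau, v) -> m is preempted.
    def_pos / def_neg: sets of defeasible links (src, dst) in Gamma.
    mids: candidate "more specific" nodes z, i.e. those for which Gamma
          permits a path pi(x, tau1, z, tau2, v) (assumed precomputed).
    kappa / kappa_bar: dicts node -> set giving the strict consequence sets."""
    nodes = set(kappa) | set(kappa_bar) | {t for (_, t) in def_pos | def_neg}
    for z in mids:
        for n in nodes:
            if (z, n) in def_pos and m in kappa_bar.get(n, set()):
                return True  # (i): z -> n in Gamma and m in kappa_bar(n)
            if (z, n) in def_neg and n in kappa.get(m, set()):
                return True  # (ii): z -/-> n in Gamma and n in kappa(m)
    return False
```

The two clauses mirror the two shapes of "direct" contrary information: a positive defeasible link whose strict negative extension reaches m, or a negative defeasible link whose target m strictly entails.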
Our first step, then, is to assign a measure of "complexity" to each path σ in a net Γ in such a way that it can be decided whether Γ ⊩ σ once it is known whether Γ ⊩ σ′ for each path σ′ less complex in Γ than σ itself. In order to arrive at the appropriate notion of path complexity, we proceed through a number of auxiliary ideas. As we recall, a path is a joined sequence of links containing a negative segment, if at all, only at the very end. Let us say, then, that a generalized path is a sequence of links joined like an ordinary path, except that it can contain negative segments anywhere, and perhaps more than one. (Example: p ↛ q ⇐ r → s ↛ t is a generalized path, but it is not a path, since its negative segment p ↛ q ⇐ r is not an end segment.) Next, let us define the defeasible length of a generalized path as follows: if the generalized path does not contain a strict initial segment, then its defeasible length is simply the number of defeasible links in the path; if the generalized path does contain a strict initial segment, then its defeasible length is the number of defeasible links in the path augmented by one. (Example: the generalized path r → s ⇒ t → u has a defeasible length of two, since it contains two defeasible links and no strict initial segment; the generalized path p ⇒ q ⇏ r → s → t ⇐ u has a defeasible length of three, since it contains a strict initial segment along with two defeasible links.) Using these ideas, we can now define the degree of a path σ in a net Γ, written degΓ(σ), as the greatest defeasible length of any acyclic generalized path in Γ from the initial node of σ to its end node. (Example: degΓ5(p → q → r) = 3, since the acyclic generalized path from p to r in Γ5 whose defeasible length is greatest is p → u → v ↛ t ⇐ s ⇐ r, with a defeasible length of 3.)
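The defeasible-length rule is a two-line computation; the sketch below (our own encoding, where tags stand for the link kinds along a generalized path) checks it against the worked examples above.

```python
def defeasible_length(tags):
    """tags: the 'strict' / 'defeasible' labels of a generalized path's
    links, in order. Each defeasible link counts one; a strict initial
    segment adds one more."""
    n = sum(1 for t in tags if t == 'defeasible')
    if tags and tags[0] == 'strict':
        n += 1
    return n
```

For r → s ⇒ t → u the tags are ['defeasible', 'strict', 'defeasible'], giving two; prefixing a strict initial segment, as in the second example, adds one regardless of that segment's length, which is what makes degree a useful induction measure once strict end segments are factored out separately.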
In order to insure that the assignment of degree to the paths in a network has the appropriate properties, we need to restrict the application of our theory, as in [5], to paths free from certain kinds of defeasible cycles (a defeasible cycle is a cyclic generalized path containing at least one defeasible link); for the present, we limit our attention, even more severely than necessary, to networks which are either entirely acyclic, or which contain only strict cycles. The notion of degree defined here is a straightforward generalization of the notion defined in [5]. However, it is not quite appropriate as a measure of path complexity for an inductive definition of the permission relation; in the present context, the measure of complexity needs to carry just a bit more information. Basically, we want our measure of a path's complexity to tell us, in addition to its degree, whether or not the path possesses a strict end segment. Therefore, we define the mixed degree of a path σ in a net Γ as a pair (n, v). The first component of the pair tells us the degree of σ in Γ: n = degΓ(σ). The second component tells us, simply, whether or not σ possesses a strict end segment: by convention, we let v = 0 if σ does not possess a strict end segment, and v = 1 if it does. We define a lexical ordering on the mixed degrees by giving priority to the first component: (n, v) < (n', v') iff either n < n' or n = n' and v < v'. The idea behind this ordering is that degree is the primary measure of path complexity, but of two paths identical in degree, one with and one without a strict end segment, the path lacking the strict end segment is classified as less complex. Our definition of the permission relation has the overall structure of a definition by cases. Any path σ from a mixed network can be divided into the subpaths ρ(σ) and δ(σ), where ρ(σ) is the maximal strict end segment of σ, and δ(σ) is the result of truncating ρ(σ) from σ.
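Both the lexical ordering on mixed degrees and the δ/ρ split fall out of a few lines. The encoding is ours; conveniently, Python tuples compare in exactly the required lexical order, with the first component taking priority.

```python
def mixed_degree(degree, has_strict_end):
    """Mixed degree (n, v): v = 1 iff the path has a strict end segment."""
    return (degree, 1 if has_strict_end else 0)

def split_path(links):
    """Split a path, given as ('strict' | 'defeasible', src, dst) triples,
    into (delta, rho): rho is the maximal strict end segment, delta the rest."""
    i = len(links)
    while i > 0 and links[i - 1][0] == 'strict':
        i -= 1
    return links[:i], links[i:]
```

So (3, 0) < (3, 1) < (4, 0), matching the convention that of two paths of equal degree, the one lacking a strict end segment is the simpler; and splitting x → y → p ⇏ r ⇐ s yields δ(σ) = x → y → p and ρ(σ) = p ⇏ r ⇐ s, as in the example below.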
(Example: if σ is x → y → p ⇏ r ⇐ s, then ρ(σ) is p ⇏ r ⇐ s and δ(σ) is x → y → p.) Using this notation, then, we specify the conditions under which Γ ⊩ σ in three separate cases, depending on the form of σ. Our first case deals with defeasible paths possessing strict end segments.

Case A: σ ≠ δ(σ) and σ ≠ ρ(σ). Then Γ ⊩ σ iff Γ ⊩ δ(σ) and Γ ⊩ ρ(σ).

The next case deals with strict paths.

Case B: σ = ρ(σ). Then Γ ⊩ σ iff each link in σ is contained in Γ.

Finally, we deal with the case of paths ending in defeasible links, which itself divides into subcases, as such paths may be simple or compound.

Case C-I: σ = δ(σ) and σ is a direct link. Then Γ ⊩ σ iff σ ∈ Γ.

Case C-II: σ = δ(σ) and σ is a compound path. Two subcases to consider.

1. σ is a positive path, of the form π(x, σ1, u) → y. Then Γ ⊩ σ iff
(a) Γ ⊩ π(x, σ1, u);
(b) u → y ∈ Γ;
(c) For m ∈ κΓ(y), x ↛ m ∉ Γ and m ∉ κ̄Γ(x);
(d) For m ∈ κ̄Γ(y), x → m ∉ Γ and m ∉ κΓ(x);
(e) For all v, m, τ such that Γ ⊩ π(x, τ, v) with v ↛ m ∈ Γ and m ∈ κΓ(y), there exist z, n, τ1, τ2 such that Γ ⊩ π(x, τ1, z, τ2, v) with z → n ∈ Γ and m ∈ κΓ(n);
(f) For all v, m, τ such that Γ ⊩ π(x, τ, v) with v → m ∈ Γ and m ∈ κ̄Γ(y), there exist z, n, τ1, τ2 such that Γ ⊩ π(x, τ1, z, τ2, v) with either (i) z → n ∈ Γ and m ∈ κ̄Γ(n) or (ii) z ↛ n ∈ Γ and n ∈ κΓ(m).

2. σ is a negative path, of the form π(x, σ1, u) ↛ y. Then Γ ⊩ σ iff
Cases B and C-I are basis cases, defining permission respectively for paths of mixed degree (1,1) and (l,O). F inally, Case C-II defines permission for paths of mixed degree (n, 0) with n > 1 in terms of paths of mixed degree (n’, v’)-where v’ may be either 0 or 1, but n’ < n so that the overall measure of mixed degree is simpler. 5 Conclusion By combining the analysis of strict inheritance from [7] with the skeptical analysis of defeasible inheritance from [5], we have developed a well-defined and intuitively at- tractive theory of inheritance for semantic networks con- taining both strict and defeasible links. At this point, two topics stand out as the most important areas for further research. The first concerns the treatment of cyclic networks. A central advantage of the theories of defeasible inheri- tance presented in both [5] and [S] is the ease with which they handle relations of preemption among conflicting ar- guments, naturally preferring those arguments based on more specific information. Part of what makes this possi- ble is the restriction of these theories to acyclic networks, which allows us to define a partial ordering of “specificity” among the various argument paths. In the purely defea- sible case, it is not terribly unnatural to restrict ourselves to acyclic networks; however, it is almost impossible to introduce strict links into a network without also intro- ducing cyclic generalized paths. This paper limits itself to networks whose only cycles are entirely strict, but that limitation seems excessive. We need to discover the extent to which cyclic paths can be admitted into inheritance networks without destroying the partial ordering of speci- ficity among arguments that makes a natural treatment of preemption possible. 
The second research topic concerns the application of this work to the representation of complex concepts, such as Brachman’s “yellow elephant” or the traditional “un- married man.” In order to represent such concepts along with defeasible information in a taxonomic reasoner, it is necessary, first, to develop a theory of inheritance allowing for the expression of both strict and defeasible relations. The present paper presents such a theory-but it does not address the problem of handling complex concepts within the framework it sets out. A central accomplishment of the KL-ONE tradition has been the design and analysis of algorithms for handling complex defined concepts in an strict inheritance network. It is important, now, to begin exploring the degree to which this accomplishment can be duplicated in the context of a mixed inheritance reasoner. Acknowledgments This material is based on work supported by the National Science Foundation under Grant No. IRI-8700705. We thank Dave Touretzky for helpful discussions. References PI PI PI PI PI PI VI PI N. Belnap. A useful four-valued logic. In J . Dunn and G. Epstein (eds.), Modern Uses ofMultiple-valued Logic. D. Reidel (1977), pp. 8-37. R. Brachman. “I lied about the trees” or, defaults and definitions in knowledge representation. The AI Magaxine, vol. 6 (Fall, 1985), pp. 80-93. R. Brachman and J. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, vol. 9 (1985), pp. 171-216. P. Hayes. The logic of frames. In Frame Conceptions and Text Understanding, D. Metzing (ed.), Walter de Gruyter and Co. (1979), pp. 46-61. Reprinted in Readings in Knowledge Representation, R. Brachmsn and H. Levesque (eds.), Morgan Kaufmann (1985)) pp. 287-297. J. Horty, R. Thomason, and D. Touretzky. A skep- tical theory of inheritance in nonmonotonic semantic networks. In Proceedings of AAAI-87, Morgan Kauf- mann (1987), pp. 358-363. D. Israel and R. Brachman. 
Some remarks on the semantics of representation languages. In On Conceptual Modeling: Perspectives from Artificial Intelligence, Databases, and Programming Languages, M. Brodie, J. Mylopoulos, and J. Schmidt (eds.), Springer-Verlag (1984).

[7] R. Thomason, J. Horty, and D. Touretzky. A calculus for inheritance in monotonic semantic nets. In Proceedings of the Second International Symposium on Methodologies for Intelligent Systems, Z. Ras and M. Zemankova (eds.), North-Holland (1987), pp. 280-287.

[8] D. Touretzky. The Mathematics of Inheritance Systems. Morgan Kaufmann (1986).

432 Knowledge Representation
Nonmonotonic Inheritance and Generic Reflexives

David S. Touretzky
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract. Generic reflexive statements such as Elephants love themselves have traditionally been formalized using some variant of predicate logic, with variables to mark coreferentiality. We present a radically different semantics for reflexives, based on nonmonotonic inheritance and an extension to Touretzky's inferential distance ordering. Our system can derive new generic reflexive statements as well as statements about individuals. And unlike the leading predicate logic-based approaches, our formalism does not use variables; this brings it closer in structure to actual human languages. The significance of this work for AI is its demonstration of the benefits of a non-classical knowledge representation for analyzing commonsense reasoning phenomena.

1 Motivation

Reflexive constructions are common in the world's languages. Contemporary linguistic theories subsume reflexivization under anaphora, treating these constructions (along with expressions like each other) as expressions that have no independent meaning, but are bound in some way to other expressions, thereby contributing to larger units that are meaningful.

The leading current linguistic theories of the semantics of reflexives¹ use variables to interpret reflexives; so, since no human language uses variables at surface level, variables or indices marking "co-referentiality" must be introduced in the course of parsing a sentence. Though theories differ on how these variables are introduced, they agree in producing "logical forms" that contain them. Human languages that mark reflexives, however, generally do so either with special pronominal forms like the English themselves, or in the verb morphology, as in English This watch is self-winding.
It is hard to say whether the discrepancy between the logical form and the way in which human languages encode reflexives is a deep linguistic discovery or an artifact of our only having one semantic theory of reflexivization, a logical theory that was originally designed to explain mathematical notations rather than natural language.

¹See [Thomason 1976] and [Thomason 1983] for the Montague Grammar approach, and [May 1985] for the Government-Binding approach.

Richmond H. Thomason
Intelligent Systems Program
University of Pittsburgh
Pittsburgh, PA 15260

For purposes of comparison, it would be useful to have alternative semantic theories of reflexives. One such alternative is presented here.

The need for alternative theories is intensified by the limited ability of first-order logic to cope with the phenomena of natural language. Sentences like Most politicians are honest can't be formalized using forms such as

(Many x)(Politician x → Honest x)

and such difficulties have led to a theory of generalized quantifiers.² However, relatively conservative extensions such as a Many quantifier are totally inadequate for handling what linguists call generic plurals, and so can't deal with sentences like Elephants are gray.³

Nonmonotonic semantic networks can't be used as they stand as an alternative to logical formalisms in interpreting natural language because they are so limited in expressive power. This paper doesn't offer a solution to the general problem, but we do show that we can account for systematic interactions between generics and reflexives, using techniques from nonmonotonic inheritance theory. This suggests an alternative representation according to which reflexives, though still anaphoric because they must be bound to a relation by appearing in a path containing a single relational symbol R, seem to resemble individual nodes in many ways, and to have a greater measure of semantic independence than the variables of logical representations.
2 Structure of the Paper

Figures 1 and 2 contain all the network primitives that appear in the paper. Our graphical notation is a variant of NETL [Fahlman 1979]. There are several kinds of nodes, representing individuals (Clyde), classes (elephant), instances of the term "self" (denoted by ⊙), and instances of "other" (denoted by ⊕). There are also several kinds of links. IS-A and IS-NOT-A links (→ and ↛) express taxonomic information, such as that elephants are gray and Clyde is not a herbivore. Positive and negative relational links (→_R and ↛_R) represent binary relations between classes or individuals, such as Herbivores like gray things. The assertion in Figure 1 that royal elephants do not like themselves is expressed by a negative relational link (↛_L, with L = Likes) from the royal elephant node to a ⊙ (read "self") node. A fifth type of link, drawn as a dashed line in network notation, connects an "other" node to its referent. For example, Figure 2 indicates that opera stars admire other celebrities (i.e., celebrities other than themselves). In path notation the term "other celebrities" is written ⊕:c.

²For example, the work of Altham [1971], and van Benthem & ter Meulen [1985].
³See [Carlson 1982] for detailed arguments.

Touretzky and Thomason 433

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Figure 1: Herbivores like gray things, but royal elephants don't like themselves.

Figure 2: Opera stars admire other celebrities, but they don't admire rock stars.

The next few sections introduce a notation for inheritance paths and extensions (nonmonotonic theories), followed by axioms for nonmonotonic multiple inheritance with relations. We will then extend this system to handle reflexive and irreflexive statements.
The paper concludes with an evaluation of the inheritance-based approach to reflexives, and some linguistic observations.

3 Notation

Let → and ↛ denote nonmonotonic IS-A and IS-NOT-A links. Let →_R and ↛_R denote nonmonotonic positive and negative relational links. A network Γ is a collection of these four types of links, plus the links that bind "other" nodes to their referents.

Taxonomic paths are sequences of abutting IS-A and/or IS-NOT-A links, such as x1 → x2 → x3 ↛ x4. Positive paths are composed purely of IS-A links, while negative paths include an IS-NOT-A link at the end. Taxonomic paths contain only individual and class nodes; they contain no instances of ⊙ or ⊕. Lowercase Greek letters such as σ and τ will range over taxonomic paths, or, in the degenerate case, single individual or class nodes, or the null path. If σ is null then x → σ → y should be read as x → y, and x → σ ↛ y should be read as x ↛ y.

The notation x1 → ... → xn refers to a path of length n whose ith element is xi. Other occurrences of subscripted variables do not imply a continuous chain of xi's; for example, the path x1 → σ → xn denotes a path whose first and last elements are x1 and xn; it is not necessarily the case that the subsequence σ has any nodes in common with x2 → ... → x(n-1). We define τ = y1 → ... → ym and τ̄ = ym ← ... ← y1 to be the forward and backward notations, respectively, for the same path.

Relational paths are paths of form σ →_R τ̄ or σ ↛_R τ̄. They are often written in expanded form as x1 → ... → xn →_R ym ← ... ← y1, where the →_R link may be replaced by ↛_R. The components σ and τ must be positive taxonomic paths, and xn →_R ym (or xn ↛_R ym) must be in Γ.
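To fix intuitions, the primitives of Figures 1 and 2 and the path operations above can be written down concretely. The following is a minimal Python sketch of our own: the tuple encoding, the node names, and the helper functions are illustrative assumptions, not part of the paper's formalism.

```python
# The example networks of Figures 1 and 2 as sets of link tuples.
# Link kinds: "isa" / "isnota" (taxonomic) and "rel+" / "rel-"
# (relational, tagged with a relation name).  The string SELF plays the
# role of a "self" node and ("OTHER", c) that of an "other c" node.

SELF = "SELF"

figure1 = {
    ("clyde", "isa", "royal_elephant"),
    ("ernie", "isa", "royal_elephant"),
    ("royal_elephant", "isa", "elephant"),
    ("elephant", "isa", "herbivore"),
    ("elephant", "isa", "gray_thing"),
    ("herbivore", "rel+", "likes", "gray_thing"),
    ("royal_elephant", "rel-", "likes", SELF),
}

figure2 = {
    ("opera_star", "isa", "celebrity"),
    ("rock_star", "isa", "celebrity"),
    ("movie_star", "isa", "celebrity"),
    ("beverly", "isa", "opera_star"),
    ("frank", "isa", "movie_star"),
    ("opera_star", "rel+", "admires", ("OTHER", "celebrity")),
    ("opera_star", "rel-", "admires", "rock_star"),
}

def isa_links(net):
    """The IS-A links of a network, as (subclass, superclass) pairs."""
    return {(t[0], t[2]) for t in net if t[1] == "isa"}

# Taxonomic paths as node tuples; the path operations defined in the text.
def but_first(path):
    return path[1:]                  # drop the first link of the path

def but_last(path):
    return path[:-1]                 # drop the last link of the path

def backward(path):
    return tuple(reversed(path))     # the backward notation for the path

sigma = ("clyde", "royal_elephant", "elephant", "gray_thing")
assert but_first(sigma) == ("royal_elephant", "elephant", "gray_thing")
assert backward(sigma)[0] == "gray_thing"
assert ("clyde", "royal_elephant") in isa_links(figure1)
```

The encoding keeps each network as a flat set of tuples so that the inheritance definitions in the next sections can be read as simple membership tests over sets of paths.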
Figure 1 generates the relational path E → r → e → h →_L g ← e ← r ← C (E = Ernie, r = royal elephant, e = elephant, h = herbivore, g = gray thing, C = Clyde), which says that Ernie is a royal elephant, royal elephants are elephants, elephants are herbivores, herbivores like gray things, elephants are gray, royal elephants are elephants (again), and Clyde is a royal elephant, so Ernie likes Clyde.

The definition of inheritability governs the way paths may be extended to form new paths. A set of paths Φ is perfect iff every one of its elements is inheritable in Φ and no path not in Φ is inheritable in Φ. We define the extensions of a network Γ to be the minimal perfect supersets of Γ. Finally, if κ is a path, then ButFirst(κ) is κ with the first link removed, and ButLast(κ) is κ with the last link removed.

4 Taxonomic Inheritance

We now present a nonmonotonic multiple inheritance system for classes and individuals. The definition is similar to that of [Touretzky 1986]. In the terminology of [Touretzky et al. 1987], the system is a credulous, downward reasoner, with coupled extensions. But it uses off-
Preemption is what al- lows subclasses to override the properties they would in- herit from superclasses, even in the presence of redundant links. See [Touretzky et al. 19871 and [Horty et al. 19871 for more details and examples. 5 Relational Inheritance We next present inheritance axioms for binary relations. Again, this definition is similar to [Touretzky, 19861, ex- cept for the use of off-path preemption. David Ethering- ton, who with Ray Reiter was the first to translate tax- onomic inheritance into default logic [Etherington 1987a], recently produced a version of this system in default logic as well [Etherington 1987131. Basis step: x -% y is inheritable in Cp iff x 5 y E a. Induction step: The path K: = x1 + . . . + Z, -% ym t . . . + yr is inheritable in @ iff: Rl. ButFirst E @. R2. ButLast E @. R3. There is no path x1 ---f u $ 5; t y1 E <p. (Contra- diction) R4. There are no w, w’ such that x:1 ---f rl + w + 72 + X~ E ip, yr + ri --f 20’ + 74 --) ym E a’, and W F w’ E @‘, where 71 or r2 may be null and w may equal ~1 or xn, respectively, and similarly 71 or 74 may be null and w’ may equal yi or ym, respectively. (Preemption) An example of contradictory relational paths is: Fred likes animals but Fred dislikes gray things. In one extension Fred will like elephants because they’re animals; in the other he won’t like them because they’re gray. These two reasoning paths can never appear in the same extension because they contradict each other. An example of preemption of a relational path is Citi- zens dislike crooks, but gullible citizens don’t dislike elected crooks. If Fred is a gullible citizen and Dick an elected crook, there is only one extension, and in it Fred does not dislike Dick. 6 Reflexive Statements We are now ready to introduce reflexive statements. 
Readers who are not yet comfortable with the preceding definitions are advised to skim this and the following section the first time through, proceed to the discussion section, and then return here to study the definitions in greater detail.

There will be two kinds of reflexive paths in our system. Explicit reflexive paths are derived from statements that mention "self" directly, such as John is a philosopher, and philosophers confuse themselves. These paths are of form x1 → ... → xn →_R ⊙. Implicit reflexive paths, on the other hand, are derived from ordinary relational paths that double back on themselves. For example, in Figure 1, since elephants are herbivores and are gray, from Herbivores like gray things we can derive Elephants like elephants, and hence Elephants like themselves. The latter conclusion would be written e → h →_L g ← e ← ⊙. Implicit reflexive paths take the general form x1 → ... → xn →_R ym ← ... ← y1 ← ⊙, where the doubling back means y1 = xi for some i, 1 ≤ i ≤ n.

6.1 Implicit Reflexive Paths

The following rule creates a new implicit reflexive path which can be inherited by lower nodes:

Let κ be a relational path of form x1 → ... → xn →_R ym ← ... ← y1 where x1 = y1. Then the implicit reflexive path κ ← ⊙ is inheritable in Φ iff:

SR1. ButFirst(κ) ∈ Φ.
SR2. ButLast(κ) ∈ Φ.
SR3. There is no path x1 → σ ↛_R τ̄ ← y1 ∈ Φ. (Contradiction)
SR4. There is no path x1 → σ ↛_R τ̄ ← ⊙ ∈ Φ. (Contradiction)

Notice that the rule does not require κ itself to be present in Φ. The reason is that κ can be preempted by an "other" statement. For example, given Parrots like green things, Amazon parrots are parrots and are green, and Amazon parrots don't like other Amazon parrots, we can't infer Amazon parrots like Amazon parrots, but we can still infer Amazon parrots like themselves.

6.2 Inheritance of Reflexive Paths

Let κ be a reflexive path, i.e. a path of form x1 → ... → xn →_R ym ← ... ← y1 ← ⊙.
If m = 0 then κ is an explicit reflexive path; otherwise it is an implicit path. The rule for inheritability of reflexive paths appears below. To allow a reflexive path to be preempted by an ordinary relational link, we require the head and tail nodes of the preempting relational link to be on the same path x1 → σ → xn ∈ Φ. This is reflected in clause S5 below. See [Touretzky & Thomason, forthcoming] for an explanation of why this is necessary.

Basis step: x →_R ⊙ is inheritable in Φ iff x →_R ⊙ ∈ Φ.

Induction step: The path κ = x1 → ... → xn →_R ym ← ... ← y1 ← ⊙ (with m possibly 0, in which case there is a direct link xn →_R ⊙; otherwise, with x1 ≠ y1) is inheritable in Φ iff:

S1. ButFirst(κ) ∈ Φ.
S2. ButLast(κ) ∈ Φ.
S3. There is no x1 → σ ↛_R τ̄ ← ⊙ ∈ Φ. (Contradiction)
S4. There is no w such that x1 → τ1 → w → τ2 → xn ∈ Φ, x1 → τ1' → w → τ2' → ym ∈ Φ (or m = 0), and w ↛_R ⊙ ∈ Φ. (Preemption by explicit reflexive statement.)
S5. There are no w1, w2 such that x1 → τ1 → w1 → τ2 → w2 → τ3 → xn ∈ Φ, x1 → τ1' → w1 → τ2' → w2 → τ3' → ym ∈ Φ (or m = 0), and either w1 ↛_R w2 ∈ Φ or w2 ↛_R w1 ∈ Φ. (Preemption by more specific ordinary relation.)

6.3 Statements About Individuals

In order to make the individual a's statements about "self" agree with its statements about a, we add the following axiom. Note that it is an implication, not an equivalence:

SI. If a → σ →_R τ̄ ← ⊙ ∈ Φ, then a → σ →_R τ̄ ← a is inheritable in Φ.

In Figure 1, this axiom derives Clyde does not like Clyde from the inherited path Clyde does not like himself.

6.4 Modification to Ordinary Relations

A reflexive statement should block inheritance of a contradictory ordinary relation. Thus, if Herbivores like gray things but Royal elephants do not like themselves, we should not infer Royal elephants like royal elephants. To achieve this behavior we modify the rule for inheriting ordinary relations by adding an additional restriction, R5.
But we are still free to infer the slightly more restricted statement, Royal elephants like other royal elephants.

R5. There is no w such that x1 → τ1 → w → τ2 → xn ∈ Φ, x1 → τ1' → w → τ2' → ym ∈ Φ, and w ↛_R ⊙ ∈ Φ. (Preemption by explicit reflexive statement.)

7 Irreflexive Statements

Let x →_R ⊕:y mean "x's are in relation R to other y's." Node y must be a class, not an individual, for this construct to make sense. An explicit irreflexive path is of form x1 → ... → xn →_R ⊕:ym ← ... ← y1. For example, if opera stars admire other celebrities (o →_A ⊕:c), Beverly is an opera star, and Frank is a movie star (hence a celebrity), we may conclude that Beverly admires Frank (B → o →_A ⊕:c ← m ← F).

An implicit irreflexive path is generated when an ordinary relation doubles back on itself. Implicit paths take the form x1 → ... → xn →_R zp ← ... ← z1 ← ⊕:ym ← ... ← y1, with p > 0. In Figure 1, since herbivores like gray things, and elephants are gray herbivores, we generate implicit paths for both Elephants like themselves and Elephants like other elephants. The latter path is written e → h →_L g ← ⊕:e.

7.1 Implicit Irreflexive Paths

Let κ be a path of form x1 → ... → xn →_R zp ← ... ← z1 ← y, where y = x1 and y is a class rather than an individual. Then the implicit irreflexive path x1 → ... → xn →_R zp ← ... ← z1 ← ⊕:y is inheritable in Φ iff:

OR1. ButFirst(κ) ∈ Φ.
OR2. ButLast(κ) ∈ Φ.
OR3. There is no path x1 → σ ↛_R τ̄ ← y ∈ Φ. (Contradiction)
OR4. There is no path x1 → σ ↛_R τ̄1 ← ⊕:τ2 ← y ∈ Φ. (Contradiction)

As was the case with implicit reflexive paths, we do not require κ to be present in Φ; it could be preempted.

7.2 Inheritance of Irreflexive Paths

The ⊕ node never stands alone; it always appears connected to a node indicating the referent of the word "other." To simplify the definition below, we will treat the structure ⊕:y as a single node. In particular, ButLast(σ →_R τ̄ ← ⊕:y) is σ →_R τ̄.

Let κ be a path of form x1 → ... → xn →_R zp ←
... ← z1 ← ⊕:ym ← ... ← y1. (If p = 0 there is a direct link xn →_R ⊕:ym.) The path κ is inheritable in Φ iff:

O1. ButFirst(κ) ∈ Φ.
O2. ButLast(κ) ∈ Φ.
O3. There is no path x1 → σ ↛_R τ̄ ← y1 ∈ Φ. (Contradiction)
O4. There is no path x1 → σ ↛_R τ̄1 ← ⊕:τ2 ← y1 ∈ Φ. (Contradiction)
O5. There are no w, w' such that x1 → τ1 → w → τ2 → xn ∈ Φ, y1 → τ1' → w' → τ2' → zp ∈ Φ (or, if p = 0, then y1 → τ1' → w' → τ2' → ym ∈ Φ), and w ↛_R ⊕:w' ∈ Φ, where τ1 or τ2 may be null and w may equal x1 or xn, respectively, and similarly τ1' or τ2' may be null and w' may equal y1 or zp (or ym if p = 0), respectively. (Preemption by explicit irreflexive statement.)
O6. There are no w, w' such that x1 → τ1 → w → τ2 → xn ∈ Φ, y1 → τ1' → w' → τ2' → zp ∈ Φ (or, if p = 0, then y1 → τ1' → w' → τ2' → ym ∈ Φ), and w ↛_R w' ∈ Φ, where τ1 or τ2 may be null and w may equal x1 or xn, respectively, and similarly τ1' or τ2' may be null and w' may equal y1 or zp (or ym if p = 0), respectively. (Preemption by more specific ordinary relation.)
O7. If x1 and y1 are individuals then x1 ≠ y1. (Non-coreferentiality.)

Since opera stars do not admire rock stars, Beverly does not admire Mick; this is an instance of preemption due to O6. We do not derive Beverly admires Beverly because of the non-coreferentiality constraint, O7.

7.3 Modification to Ordinary Relations

Irreflexive statements can also block the inheritance of contradictory ordinary relations. For example, if Herbivores like gray things, but Wild elephants do not like other elephants, we want to block the inference that Wild elephants like elephants. This is accomplished by R6 below. We may still infer Wild elephants like themselves.

R6. There are no w, w' such that x1 → τ1 → w → τ2 → xn ∈ Φ, y1 → τ1' → w' → τ2' → ym ∈ Φ, and w ↛_R ⊕:w' ∈ Φ.

8 Discussion

Since generics admit exceptions, they cannot be expressed in classical first-order logic.
We therefore started with a nonmonotonic inheritance system that allowed us to represent generic statements such as Elephants are gray. We then extended the system by adding axioms for reflexive and irreflexive statements. Although there are some subtleties in the phrasing of the new axioms which space does not permit us to go into, the general nature of the extended system should be clear.

One thing we have not yet done is prove the constructibility (or at least the existence) of extensions. However, a constructibility proof for networks containing only ordinary relations was given in [Touretzky 1986]. We are confident that the addition of reflexive and irreflexive relations presents no obstacle to constructibility.

Our system can derive new statements about classes as well as about individuals. Inheritance systems based on default logic cannot. This difference becomes more apparent when reflexives are added to the language, because relational paths that double back on themselves can generate reflexive paths even when a network contains no explicit reflexive statements. From Herbivores like gray things, for example, we can derive the generic conclusion Elephants like themselves, even if there are no instances of elephants in the network.

Some researchers may still prefer to operate within a default logic framework, since default logic has greater expressive power than current semantic network formalisms. Our formulation will be valuable for them as well, since we have solved the problem of extending the inferential distance ordering (the determiner of preemption) to reflexive and irreflexive statements. Etherington's default logic formulation of ordinary relational inheritance, which replaced our path-based notation with default rules, still relied on our inferential distance definition to filter the set of extensions [Etherington 1987b]. This was necessary to ensure that subclasses did indeed override superclasses.
A similar translation of our new system into default logic would appear to be straightforward.

Another advantage of our path-based formulation is that it does not require the use of variables to constrain co-referentiality. There is a natural mapping between inheritance paths and surface structure which does not exist for predicate logic-based treatments of reflexives. Reflexive pronouns map to ⊙ nodes, and the phrase "other y's" maps to ⊕:y. Inheritance paths may be translated to English sentences by extracting the first node, the relation, and the last node, as when we read e → h →_L g ← e ← ⊙ as Elephants like themselves. The interior of the path serves as an argument or justification for the statement.

In conclusion, there is no a priori reason why the rich structure of human language should map conveniently to a predicate logic-based representation. Logic was originally developed to describe mathematics. One can increase the expressive power of classical logic by adding nonstandard quantifiers, modal operators, and extra truth values, but other formalisms may in some cases prove more natural. We find path-based formalisms convenient for inheritance reasoning, and their treatment of reflexives more natural than that of logic-based formalisms.

9 Some Linguistic Observations

There are two possible interpretations of the sentence Rock stars detest other celebrities, depending on the scope of the word "other." We have so far been using the narrow interpretation of "other," which is that each rock star detests all celebrities other than himself or herself. This is implemented by clause O7 in the definition of inheritability for irreflexive paths. The alternate, broad interpretation of Rock stars detest other celebrities is that rock stars detest celebrities other than rock stars. We will not formalize this second interpretation here, but it appears straightforward to handle.
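The contrast between the two conclusion forms, ⊙ ("themselves") and ⊕:y ("other y's"), can be made concrete with a small sketch. The code below is our own simplification: it applies IS-A closure around a single relational link and implements none of the preemption clauses, so, for instance, the royal-elephant exception of Figure 1 is not modeled here.

```python
# IS-A links and one relational link, as in Figure 1.
isa = {("royal_elephant", "elephant"), ("elephant", "herbivore"),
       ("elephant", "gray_thing")}
likes = {("herbivore", "gray_thing")}   # herbivores like gray things

def ancestors(x):
    """x together with all of its superclasses under the IS-A links."""
    out, frontier = {x}, {x}
    while frontier:
        frontier = {b for (a, b) in isa if a in frontier} - out
        out |= frontier
    return out

def doubling_back(x):
    """Both conclusion forms produced when a relational path from x
    bends back to x itself: 'x likes SELF' and 'x likes OTHER x's'."""
    for (a, b) in likes:
        if a in ancestors(x) and b in ancestors(x):
            # x -> ... -> a -likes-> b <- ... <- x doubles back on x
            return [("likes", "SELF", x), ("likes", "OTHER", x)]
    return []

assert doubling_back("elephant") == [("likes", "SELF", "elephant"),
                                     ("likes", "OTHER", "elephant")]
assert doubling_back("herbivore") == []   # herbivores aren't gray
```

Elephants, being both herbivores and gray, trigger both conclusions; herbivores in general do not, because the path never bends back. In the full system, clauses SR1-SR4 and OR1-OR4 would then filter these candidate conclusions.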
The two uses of "other" can even be intermixed by introducing a new node type to denote broadly scoped "other." Note that the distinction between narrow and broad scope disappears when the origin of the relational link is an individual; e.g., Harry is jealous of other musicians can only mean "musicians other than himself," while Trombone players are jealous of other musicians is ambiguous.

Similarly, for relational links whose head and tail reference the same class, only the narrow interpretation makes sense; e.g., e →_L ⊕:e could only mean that elephants love elephants other than themselves.

In English one can substitute the expression "each other" when the first and last nodes of a relational path are identical: compare Elephants love other mammals (e →_L ⊕:m) with Elephants love each other (e →_L ⊕:e). This substitution is mandatory for some speakers.

One aspect of the use of "other" in English that is not part of the formal system presented here is that it usually requires a subset membership. For example, Politicians intimidate other crooks cannot be true unless Politicians are crooks is true. With our current set of axioms, the link p →_I ⊕:c means "politicians intimidate crooks in general, but do not conclude from this that an individual politician who is also a crook intimidates himself." It doesn't imply that any politicians actually are crooks. We can get the true English semantics by imposing a restriction on networks to require that any link of form x →_R ⊕:y be accompanied by a link x → y, unless x = y.

Finally, we acknowledge that our account of "other" is far from complete. For example, "other" has an existential interpretation as well as the universal one we have been using. A sentence like Roger fools around with other women means some women other than his wife, not every woman who is not his wife. Semantics mainly determines which sense is appropriate, but there may also be syntactic cues.
For some speakers, Elephants love other elephants is preferentially understood as an existential because they expect the universal interpretation to be expressed Elephants love each other.

Acknowledgements

This research was supported by the National Science Foundation under grant number IRI-8700705. We are grateful to Jeff Horty for helpful discussions, and to Scott Fahlman, whose NETL system provided the original impetus for our investigation of reflexives.

References

[Altham 1971] Altham, J. E. J. The Logic of Plurality. Methuen and Company, Ltd., London, 1971.

[Aoun 1985] Aoun, J., A Grammar of Anaphora. MIT Press, Cambridge, Massachusetts, 1985.

[Carlson 1982] Carlson, G., "Generic terms and generic sentences." Journal of Philosophical Logic 11 (1982), pp. 145-182.

[Etherington 1987a] Etherington, D. W. "Formalizing nonmonotonic reasoning systems." Artificial Intelligence 31, pp. 41-85.

[Etherington 1987b] Etherington, D. W. "More on inheritance hierarchies with exceptions: default theories and inferential distance." Proc. AAAI-87, Seattle, pp. 352-357.

[Fahlman 1979] Fahlman, S. E., NETL: A System for Representing and Using Real-World Knowledge. MIT Press, Cambridge, Massachusetts, 1979.

[Horty et al. 1987] Horty, J. F., Thomason, R. H., and Touretzky, D. S. "A skeptical theory of inheritance in nonmonotonic semantic networks." Proc. AAAI-87, pp. 358-363.

[May 1985] May, R., Logical Form: Its Structure and Derivation. MIT Press, Cambridge, Massachusetts, 1985.

[Sandewall 1986] Sandewall, E. "Nonmonotonic inference rules for multiple inheritance with exceptions." Proc. IEEE, vol. 74, pp. 1345-1353.

[Thomason 1976] Thomason, R., "Some extensions of Montague grammar." In Montague Grammar, B. Partee, ed., Academic Press, New York, 1976, pp. 77-117.

[Thomason 1983] Thomason, R., "On the semantic interpretation of the Thomason 1972 fragment." Indiana University Linguistics Club, 1979.

[Touretzky 1986] Touretzky, D. S.
The Mathematics of Inheritance Systems. Morgan Kaufmann, 1986.

[Touretzky et al. 1987] Touretzky, D. S., Thomason, R. H., and Horty, J. F. "A clash of intuitions: the current state of nonmonotonic multiple inheritance systems." Proc. IJCAI-87, Milan, pp. 476-482.

[Touretzky & Thomason, forthcoming] Touretzky, D. S., and Thomason, R. H. "An inheritance-based theory of generic reflexives." Forthcoming technical report, Carnegie Mellon University.

[van Benthem & ter Meulen 1985] van Benthem, J. and A. ter Meulen, eds., Generalized Quantifiers in Natural Language. Foris Publications, Dordrecht.
Beyond ISA: Structures for Plausible Inference in Semantic Networks

Paul R. Cohen and Cynthia L. Loiselle
Experimental Knowledge Systems Laboratory
Department of Computer and Information Science
University of Massachusetts
Amherst, Massachusetts

Abstract

We present a method for automatically deriving plausible inference rules from relations in a knowledge base. We describe two empirical studies of these rules. First, we derived approximately 300 plausible inference rules, generated over 3000 specific inferences, and presented them to human subjects to discover which rules were plausible. The second study tested the hypothesis that the plausibility of these rules can be predicted by whether they obey a kind of transitivity. The paper discusses four sources of variance in subjects' judgments, and concludes that relatively little knowledge is needed to achieve moderately accurate predictions of these judgments.

1 Introduction

Can cough syrup make people drunk? Our favorite brand can, because it contains alcohol. If you didn't already know that cough syrup is intoxicating, you could infer it from two specific propositions, cough syrup contains alcohol and alcohol is intoxicating, and from a general plausible inference rule:

Rule 1
x CONTAINS y, and
y CAUSES z
-----------------
x CAUSES z

Other familiar rules of plausible inference include property inheritance (e.g., cats have five toes, Ginger is a cat, so Ginger has five toes) and causal abduction (e.g., fires cause smoke, so if you see smoke, look for a fire). Rules like these have two roles that we expect to become increasingly important in coming years.

First, they support graceful degradation of performance at the boundaries of our knowledge. A brittle knowledge system that doesn't know explicitly whether cough syrup makes you drunk won't offer a plausible answer; it simply won't answer the question [Lenat et al., 1986; Lenat and Feigenbaum, 1987; Collins et al., 1975].
Graceful degradation depends on general knowledge, which we formulate as plausible inference rules such as Rule 1, to make up for a lack of specific knowledge. Second, we expect plausible inference to reduce the effort of building knowledge bases, because knowledge engineers needn't state explicitly those propositions that can be plausibly inferred. Property inheritance, for example, relieves us from having to state explicitly that each member of a class has each property of that class [Brachman, 1985]. Rules like property inheritance and Rule 1 obviously are needed to build "mega-frame" knowledge bases [Lenat and Feigenbaum, 1987].

*We are indebted to Carole Beal, David Day, and Adele Howe for their comments on drafts of this paper, to Carole Beal for her help with the statistical analysis, and to Evan Smith for his assistance with this project. This research is funded by the Office of Naval Research, under a University Research Initiative Grant, Contract #N00014-86-K-0764, and by a gift from Tektronix.

Rule 1 has the same structure as property inheritance over ISA links, and can serve the same purposes, that is, supporting graceful degradation and knowledge engineering. We have developed a simple method for deriving such rules from the relations in a knowledge base, and we have shown how to differentiate plausible ones from implausible ones based on their underlying "deep structure."

This paper describes two empirical studies of these rules. Both depend on a moderately large knowledge base that we developed for the GRANT project [Cohen et al., 1985; Cohen and Kjeldsen, 1987]. The GRANT KB contains roughly 4500 nodes linked by 9 relations and their inverses. In the first study we derived approximately 300 plausible inference rules from these relations.
Then we generated over 3000 specific inferences by replacing the variables in the rules with concepts from the GRANT KB, and presented them to human subjects to discover which syntactically permissible rules were plausible (Sec. 2). The second study tested the hypothesis that the plausibility of these rules can be predicted by whether they obey a kind of transitivity (Sec. 3). We will begin by describing these studies, hypotheses, and results. Then we will discuss the role of knowledge in assessing the plausibility of inferences.

Cohen and Loiselle 415

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

2 Experiment 1: Identifying Plausible Rules

In this section we describe how to use the structure of property inheritance to produce many other plausible inference rules, and how we determined the plausibility of these rules.

2.1 Background

Property inheritance over ISA links can be written

n1 ISA n2, and
n2 R n3
--------------
n1 R n3

where the relation R between n2 and n3 is viewed as a property of n2. For example, if a canary is a bird and bird HAS-COMPONENT wings, then canary HAS-COMPONENT wings (Fig. 1.a). Here, R is HAS-COMPONENT and the inherited property is "HAS-COMPONENT wings." Many plausible inference rules have this structure, but inherit over links other than ISA. For example, in the "cough syrup" inference, above, cough syrup inherits the "CAUSES intoxication" property over the CONTAINS relation:

cough syrup HAS-COMPONENT alcohol, and
alcohol CAUSES intoxication
--------------------------------------
cough syrup CAUSES intoxication

Figure 1.b shows two other examples. They have the same premises but different conclusions. One premise is "storm HAS-COMPONENT cloud" (and, equivalently, "cloud COMPONENT-OF storm"); the other is "cloud MECHANISM-OF rain" (and, equivalently, "rain HAS-MECHANISM cloud"). But the conclusions are "storm MECHANISM-OF rain" and "rain COMPONENT-OF storm," respectively.
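The pairing construction just described, and the chaining of first generation conclusions into second generation ones, can be sketched in a few lines. The toy KB below uses the storm/cloud/rain example from the text; its contents, the INV table, and the helper function are illustrative assumptions, not the GRANT KB itself.

```python
# A toy KB of (node, relation, node) triples; INV maps each relation to
# its inverse.  For every chain n1 R1 n2, n2 R2 n3 we emit the two
# conclusions that share the structure of property inheritance.

KB = {
    ("storm", "HAS-COMPONENT", "cloud"),
    ("cloud", "MECHANISM-OF", "rain"),
    ("rain", "HAS-PRODUCT", "runoff"),
}
INV = {"HAS-COMPONENT": "COMPONENT-OF", "COMPONENT-OF": "HAS-COMPONENT",
       "MECHANISM-OF": "HAS-MECHANISM", "HAS-MECHANISM": "MECHANISM-OF",
       "HAS-PRODUCT": "PRODUCT-OF", "PRODUCT-OF": "HAS-PRODUCT"}

def conclusions(kb):
    """Both rule-shaped conclusions for every pair of abutting links."""
    out = set()
    for (n1, r1, n2) in kb:
        for (m2, r2, n3) in kb:
            if m2 == n2:
                out.add((n1, r2, n3))           # n1 R2 n3
                out.add((n3, INV[r1], n1))      # n3 R1-INV n1
    return out

gen1 = conclusions(KB)
assert ("storm", "MECHANISM-OF", "rain") in gen1
assert ("rain", "COMPONENT-OF", "storm") in gen1
assert ("storm", "HAS-PRODUCT", "runoff") not in gen1   # needs chaining
gen2 = conclusions(KB | gen1)                           # second generation
assert ("storm", "HAS-PRODUCT", "runoff") in gen2
```

Adding the first generation conclusions back into the KB before searching again is exactly what produces second generation inferences such as "storm HAS-PRODUCT runoff"; in the experiments, the resulting instances were then judged by human subjects rather than accepted automatically.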
Property inheritance over ISA links mechanism-of component-of cloud - rain cloud &storm : “component : : of storm .- rain * b. Inferences that have the structure of property inheritance Figure 1: Inheritance and structurally-identical rules This illustrates that each pair of relations can produce two plausible inference rules that have the same structure as prop- erty inheritance over ISA links. For relations ~1, Rz these rules are: nl Rl ?xz, tllld Rule 2 n2 ~2 nn n1 R2 n3 and m R2-INV R~, a11 d Rule 3 nz RI-INV n1 n3 RI-INV nj Figure 1.b shows these alternatives for ~1 = HAS-COMPONENT, R2 = MECHANISM-OF, n1 = storm, n2 = cloud, and n3 = rain. Figure 1 introduces the notation we will nse throughout. Rules are represented as triangles formed from three concepts and three relations. The legs of the triangle represent premises, and are always drawn as solid lines. The hypotenuse represents the conclusion and is always drawn as a dashed line. Rules can be chained by letting the conclusion of one serve as a premise for another. Figure 2 shows how the conclusion of a +first genernfion inference, “storm MECHANISM-OF rain,” serves as the premise of a. second generation inference, which has the conclusion “storm HAS-PRODUCT runoff.” cloud mechanism-of erain rain has-product * runoff .fl .-- . .-- .- -* .---has .- product .- .- storm. storm First generation Second generation inference inference Figure 2: Second-generation inference Since each pair of relations produces two rules, a knowledge base constructed from N relations will produce (N2 + N)/2 pairs of relations (including relations paired with themselves) and an equal number of rules. The GRANT KB is constructed from nine relations and their inverses, so (18’ + 18)/2 = 342 were generated. Experiment 1 had two goals. One was to generate all pos- sible rules for the GRANT KB and to determine which of them produce plausible conclusions. 
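The derivation of the two inheritance-shaped rules from a pair of relations can be sketched in code. This is an illustrative encoding of our own (the function name, tuple representation, and inverse table are assumptions, not the paper's implementation):

```python
# Hedged sketch: deriving Rule 2 and Rule 3 from a pair of relations
# R1, R2.  The INVERSE table covers only the relations in the
# storm/cloud/rain example.

INVERSE = {"HAS-COMPONENT": "COMPONENT-OF", "COMPONENT-OF": "HAS-COMPONENT",
           "MECHANISM-OF": "HAS-MECHANISM", "HAS-MECHANISM": "MECHANISM-OF"}

def rules_for_pair(r1, r2):
    """Return [Rule 2, Rule 3].  Each rule is (premise1, premise2,
    conclusion), and each part is a (subject, relation, object)
    pattern over the variables n1, n2, n3."""
    rule2 = (("n1", r1, "n2"), ("n2", r2, "n3"),
             ("n1", r2, "n3"))
    rule3 = (("n3", INVERSE[r2], "n2"), ("n2", INVERSE[r1], "n1"),
             ("n3", INVERSE[r1], "n1"))
    return [rule2, rule3]

for p1, p2, c in rules_for_pair("HAS-COMPONENT", "MECHANISM-OF"):
    print("%s %s %s, and %s %s %s => %s %s %s" % (p1 + p2 + c))
```

For the HAS-COMPONENT / MECHANISM-OF pair this reproduces the two conclusions of Figure 1.b: "n1 MECHANISM-OF n3" (storm MECHANISM-OF rain) and "n3 COMPONENT-OF n1" (rain COMPONENT-OF storm).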
The other was to find out how the plausibility of conclusions is affected by chaining these rules. Applying roughly 300 rules to the GRANT KB (as we describe below) produced thousands of first generation inferences and over 200,000 second-generation inferences. We expected very few of these to be plausible; but, if we could discover or predict the plausible ones, then we would have a powerful method to reduce the effort of constructing large knowledge bases.

2.2 Design

To determine whether the rules produce plausible conclusions, we first instantiate them with specific concepts, then present them to human subjects to judge. We derived 315 rules from the GRANT KB.¹ For each we produced 10 test items (five first generation items and five second generation items) by the following method:

Each rule is based on two relations. For each pair, say HAS-COMPONENT and MECHANISM-OF, we search the GRANT KB for triples of nodes n1, n2, n3 that are connected by these relations (i.e., n1 is connected to n2 by HAS-COMPONENT, and n2 is connected to n3 by MECHANISM-OF). Each triple represents a pair of premises from which two inferences can be drawn (see Rules 2 and 3, above). For instance, storm, cloud, and rain instantiate n1, n2, and n3, respectively, in Figure 1.b, yielding the conclusions "storm MECHANISM-OF rain" and "rain COMPONENT-OF storm." Most pairs of relations in the GRANT KB yield dozens of n1, n2, n3 triples. We randomly select five, and their conclusions, to be first generation test items. However, we add the conclusions of all the triples to the GRANT KB.

This procedure is repeated to generate second generation test items, with the added condition that one premise of each second generation item must be a conclusion that was produced during the previous search (though not necessarily the conclusion of a first generation test item).
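The triple search described above can be sketched as follows. The toy KB and all names are our own; the real study ran against the GRANT KB:

```python
# Illustrative sketch of the triple search: find n1 -r1-> n2 -r2-> n3
# and draw the Rule 2 conclusion (n1, r2, n3).  Tiny hand-made KB,
# not the GRANT KB.

kb = {("storm", "HAS-COMPONENT", "cloud"),
      ("cloud", "MECHANISM-OF", "rain")}

def instantiate(kb, r1, r2):
    """Return the set of Rule 2 conclusions derivable from kb."""
    conclusions = set()
    for (n1, ra, n2) in kb:
        if ra != r1:
            continue
        for (m, rb, n3) in kb:
            if m == n2 and rb == r2:
                conclusions.add((n1, r2, n3))
    return conclusions

first_gen = instantiate(kb, "HAS-COMPONENT", "MECHANISM-OF")
kb |= first_gen  # add conclusions back, enabling second-generation search
print(first_gen)  # {('storm', 'MECHANISM-OF', 'rain')}
```

Adding the conclusions back into the KB is what makes second-generation inferences possible: a later search can use "storm MECHANISM-OF rain" as a premise.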
In all, the 315 rules yield a data set of 3116 test items, of which roughly half are first generation and half are second generation items.²

2.3 Procedure

Items in the data set were presented to human subjects by a computer program. Subjects were asked first to indicate whether both premises were acceptable, one or both were unacceptable, or they did not understand one or both premises. Next, the conclusion was shown and subjects were asked to judge whether it followed or did not follow from the premises, or else to indicate that they did not understand the conclusion. Each item was seen by two subjects. Following a practice session with 20 items (none of which was in the data set), each subject judged approximately 700 items from the data set. This took about five hours, distributed over three or four self-paced sessions.

¹Pruning duplicates reduces the original 342 rules to 315.
²We don't have 3150 items because, for some rules, the GRANT KB yielded fewer than five first generation instances.

2.4 Results

Since the premises of the test items came from an existing knowledge base we expected that most would be judged acceptable. This is in fact the case: 82% of first generation premises and 63% of second generation premises were judged to be acceptable. The following results pertain only to those items.

Each rule is represented in the data set by five first generation items and five second generation items, and each item was seen by two subjects. Thus, 10 judgments are made of the items in each generation of each rule. Two plausibility scores for a rule, each ranging from 0 to 10, are computed by counting the judgments of plausible given to the items in each generation of the rule. The mean plausibility score, over the 315 rules, for first generation items is 4.18 (var. = 6.92), and the corresponding statistic for second generation items is 3.17 (var. = 4.88).
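The scoring scheme (five items per generation, two subjects per item, giving a count from 0 to 10) can be illustrated with invented judgment data:

```python
# Toy illustration of the per-generation plausibility score.
# The judgment data below is made up for the example.

judgments = {          # item -> [subject 1, subject 2]; True = "follows"
    "item1": [True, True],
    "item2": [True, False],
    "item3": [False, False],
    "item4": [True, True],
    "item5": [False, True],
}

score = sum(vote for votes in judgments.values() for vote in votes)
print(score)  # 6 of a possible 10
```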
Both are significantly different from chance and from each other at the p < .01 level. The fact that both are below chance means that most rules are not plausible. Given this, one would expect chaining of inferences to produce increasingly implausible conclusions. This is supported by the evidence that second generation inferences are significantly less plausible than first generation ones. Subjects judged approximately 50% of the rules to have plausibility scores between 3 and 7 (of a possible 10); they judged the rest of the rules to be predominantly plausible or predominantly implausible.

2.5 Discussion

While these results indicate that many rules generate predominantly plausible conclusions, and many others are predominantly implausible, they do not tell us how to predict which will be plausible and which will not. We wanted to find a small set of common characteristics of rules on which to base these predictions. Furthermore, we wanted these characteristics to depend only on the relations in the rules, not on the nodes or any exogenous factors.

We discovered two common aspects of relations. Some relations, such as HAS-COMPONENT, have a hierarchical interpretation. Others, such as CAUSES, can be interpreted as temporal relations. Lastly, relations such as MECHANISM-OF can have both hierarchical and temporal interpretations: in "n1 MECHANISM-OF n2," n2 may be a process that hierarchically subsumes the mechanism n1, or n1 may be an object or process that exists or is required prior to achieving n2. Table 1 lists the deep relations that correspond to all 18 surface relations. Each deep relation has an h (hierarchical) or t (temporal) interpretation, or both.

Expressing rules in terms of these deep relations reduces the set of 315 surface rules to 95 unique deep structures. More importantly, we identified a characteristic of deep structures, called transitivity, that seemed to explain why some rules were plausible and others implausible.
Figure 3 shows two transitive structures and two intransitive ones. The transitive deep structures represent the rules: "If n1 CAUSES n2, and n2 CAUSES n3, then n1 CAUSES n3," and "If n1 COMPONENT-OF n2, and n2 COMPONENT-OF n3, then n1 COMPONENT-OF n3." We call these structures transitive because the premises imply an ordering between n1 and n3 that, to be preserved, requires a particular ordering between n1 and n3 in the conclusion (n1 to n3 in one rule and n3 to n1 in the other).

Table 1: Surface relations and corresponding deep relations. Each deep relation has a hierarchical (h) interpretation, a temporal (t) interpretation, or both:
CAUSES, CAUSED-BY: t
COMPONENT-OF, HAS-COMPONENT: h
FOCUS-OF, HAS-FOCUS: h
MECHANISM-OF, HAS-MECHANISM: h and t
PRODUCT-OF, HAS-PRODUCT: t
PURPOSE-OF, HAS-PURPOSE: h and t
SETTING-OF, SETTING: h
SUBJECT-OF, SUBJECT: h
SUBFIELD-OF, HAS-SUBFIELD: h

In contrast, the intransitive structures do not require any ordering on nodes in the conclusion. In one, the premises indicate no hierarchical ordering between n1 and n3, only that n2 is hierarchically superior to both. Similarly, in the other intransitive rule, n1 and n3 are both temporally prior to n2, but no ordering is implied between them and, thus, required in the conclusion.

Figure 3: Transitive and intransitive deep structures. (a) Example transitive deep structures. (b) Example intransitive deep structures.

The mean plausibility score for transitive rules was 8.94 (out of 20; var. = 16.83), and for intransitive rules, 5.89 (var. = 14.46). Again, the preponderance of these rules are judged implausible, but these values are significantly different (p < .01), and provide strong post-hoc evidence that transitivity is a factor.

Transitivity is clear when surface relations map to deep relations whose h and t elements point in just one direction.
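One way to operationalize the transitivity test is to represent each deep relation as a kind (h or t) plus a direction. This encoding is our own assumption, not the paper's, and it covers only the four single-interpretation relations:

```python
# Hedged sketch: a deep relation as (kind, direction), where direction
# +1 means n1 is ordered before/above n2, and -1 the reverse.  This
# DEEP table and the test below are our own operationalization.

DEEP = {"CAUSES": ("t", +1), "CAUSED-BY": ("t", -1),
        "HAS-COMPONENT": ("h", +1), "COMPONENT-OF": ("h", -1)}

def transitive(r_premise1, r_premise2, r_conclusion):
    """The rule n1 r1 n2, n2 r2 n3 |- n1 rc n3 is transitive when both
    premises order their arguments the same way, so an ordering of n1
    and n3 is implied, and the conclusion carries that same ordering."""
    k1, d1 = DEEP[r_premise1]
    k2, d2 = DEEP[r_premise2]
    kc, dc = DEEP[r_conclusion]
    return k1 == k2 == kc and d1 == d2 == dc

# "n1 CAUSES n2, n2 CAUSES n3 |- n1 CAUSES n3" is transitive:
print(transitive("CAUSES", "CAUSES", "CAUSES"))      # True
# With a reversed first premise, no ordering of n1 and n3 is implied:
print(transitive("CAUSED-BY", "CAUSES", "CAUSES"))   # False
```

The COMPONENT-OF chain from Figure 3 behaves the same way: both premises and the conclusion share the kind h and the same direction, so the rule is transitive.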
But the surface relations HAS-MECHANISM and PURPOSE-OF have deep relations where t and h point in opposite directions. Therefore, rules that are transitive under one interpretation of these relations are necessarily intransitive under the other. For example, the structure in Figure 4.a may be transitive or intransitive. We call structures like this ambiguous.

Although our data suggested that transitivity predicts the plausibility of rules with unambiguous structures, the results were less clear for ambiguous ones. All ambiguous structures have transitive interpretations, but we knew from our data that not all the corresponding rules were plausible. We hypothesized a characteristic of interpretations, called consistency, that might discriminate plausible ambiguous rules from implausible ones. A structure has a consistent interpretation when its deep relations all have the same interpretation, either h or t. For example, Figure 4.a has a consistent interpretation in which all its deep links can be interpreted as h. Moreover, this h interpretation is transitive. Figure 4.b has a consistent t interpretation, but it is intransitive; and the interpretations of the deep relations that make Figure 4.b transitive are inconsistent (t, t, and h).

Figure 4: Ambiguous deep structures. (a) Consistent h interpretation. (b) Consistent t interpretation.

3 Experiment 2: Exploring Transitivity

At the end of Experiment 1, we had formed the hypotheses that transitivity predicts plausibility, and that consistency determines the interpretation (transitive or intransitive) of ambiguous structures. Experiment 2 tests these hypotheses.

3.1 Design

Experiment 2 focused on ten relations from Experiment 1: CAUSES, COMPONENT-OF, MECHANISM-OF, PRODUCT-OF, PURPOSE-OF and their inverses. (The other relations replicate deep relations and occurred relatively infrequently in the knowledge base.) Since each of these surface relations has a unique corresponding deep relation, the 95 rules they generate map to 95 different deep structures. From these, we chose 56 structures (and thus, rules) as a representative sample.³ We generated 10 first generation test items for each of the 56 rules, just as we did in Experiment 1.

3.2 Procedure

Fourteen subjects each viewed all the test items. Items were presented as in Experiment 1.

3.3 Results

Our hypothesis is that transitivity, as determined by the consistent interpretation of the deep structure, predicts plausibility. Eight rules are composed of surface relations that have just one deep interpretation (CAUSES, CAUSED-BY, HAS-COMPONENT, COMPONENT-OF; see Fig. 5). With these we can analyze the effects of transitivity and consistency on plausibility in rules with single interpretations.

Figure 5: Single interpretation deep structures

A two-way analysis of variance found a significant main effect of transitivity (p < .001) and a significant transitivity x consistency interaction (p < .001), but no main effect of consistency (p > .2), confirming that transitivity predicts the plausibility of these rules. A graph of the means (Fig. 6) suggests that we cannot predict the plausibility of rules that have no consistent interpretation, because the mean plausibility score for these rules is roughly five out of 10 (i.e., at chance) irrespective of whether the rule is transitive. Figure 7 compares the mean plausibility scores of transitive, intransitive, and inconsistent rules to chance performance; transitive and intransitive inconsistent items are collapsed into one category.

Figure 6: Transitivity x consistency analysis

Figure 7: Scores for rules with single interpretations

Analyzing all our rules in terms of these categories yields 18 that have consistent transitive interpretations, 20 consistent intransitive rules, 8 inconsistent rules, and 4 rules that have both transitive and intransitive consistent interpretations.⁴ The histogram for all rules (including the eight analyzed earlier) is presented in Figure 8.

³Rules generated from a single surface relation and its inverse always map to one transitive and two intransitive deep structures. Our sample included the transitive structure and one of the intransitive structures (chosen randomly). Pairs of non-identical relations and their inverses form four transitive and four intransitive rules. Our sample included two transitive and two intransitive rules from each of these sets.

⁴Unfortunately, the test items for the other six rules shared many common premises. This was an unavoidable consequence of our decision to generate test items randomly. Four had consistent transitive interpretations, two had consistent intransitive interpretations.

Figure 8: Scores for all rules

Although less clear-cut, Figure 8 echoes one of our earlier results: transitivity predicts the plausibility of rules with consistent interpretations. However, the mean plausibility score for inconsistent rules is higher than chance, and the mean plausibility score of consistent intransitive rules is much closer to chance than it was in Figure 7.

3.4 Discussion

While the predictive power of transitivity is high for rules that have only one interpretation, it becomes diluted in rules with multiple interpretations. It is not surprising that rules with consistent transitive and intransitive interpretations have a mean plausibility score roughly halfway between the scores for transitive and intransitive rules (Fig. 8). However, the mean plausibility score of inconsistent rules, which we expected to be at chance, was higher (61%); and the mean plausibility score of rules with consistent intransitive interpretations, which we expected to be implausible, was not as low as we expected (43%).

We hypothesize that both these effects are due to an unanticipated factor that is raising the plausibility of some but not all of these rules. Whereas all our surface rules have the same structure as property inheritance over ISA links, some but not all of the deep structures of both the intransitive and inconsistent rules have this form. For example, the deep structure for the rule n1 COMPONENT-OF n2, n2 HAS-MECHANISM n3 ⊢ n1 HAS-MECHANISM n3 is intransitive, but its conclusion is often plausible, as illustrated in Figure 9. In this instantiation, battle inherits "HAS-MECHANISM weapon" from war over a COMPONENT-OF relation. We expect rules with this structure to yield relatively high plausibility ratings even if they are intransitive, because property inheritance is a common and powerful plausible inference rule.

Generalized property inheritance (GPI) is a characteristic of a rule's deep structure, comparable with transitivity:

If n1 is related to n2 by h, and n2 is related to n3 by any relation i, then it is plausible to infer that n1 is related to n3 by i

Figure 9: An intransitive but plausible rule. (a) Surface structure. (b) Deep structure.

This definition does not restrict the direction of h; it can point "up" or "down" from n1 to n2, whereas in property inheritance over ISA links, n1 must be a subclass or instance of n2, that is, ISA must point "up." We relax this for GPI because it is often plausible to infer that a concept will have properties of those concepts hierarchically inferior to it.

GPI explains why some intransitive rules have higher-than-expected plausibility scores. Since some transitive rules are also GPI, we ran a post-hoc transitivity x GPI analysis of variance, and found main effects of transitivity (p < .001) and GPI (p < .05), with no interaction effect. Post-hoc tests on the means (Newman-Keuls) found a significant difference between GPI intransitive items and non-GPI intransitive items (p < .05), which means that among intransitive rules, GPI differentiates two statistically distinct classes: relatively plausible and relatively implausible rules. After removing GPI rules, the mean plausibility score of inconsistent rules decreases (Fig. 10). Therefore, GPI provides a post-hoc explanation of why intransitive and inconsistent rules have higher-than-expected plausibility scores. Among transitive items, GPI had no statistically discernible effect. And since there was no interaction between transitivity and GPI, we regard them as independent factors.

Figure 10: Post-hoc revision with GPI of Figure 8

4 Contributors to Plausibility

In this section we will discuss the factors that contribute to judgments of plausibility. (A more detailed analysis and presentation is given in [Cohen and Loiselle, 1988].) Recall that our goal is to find plausible inference rules that support graceful degradation and help knowledge engineers. Ideally, the agent who uses these rules should not need much knowledge to judge the plausibility of their conclusions.
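A minimal sketch of the GPI test, under our own simplifying assumption that GPI can be checked on surface relations whose deep interpretation is hierarchical:

```python
# Hedged sketch of the GPI criterion: a rule is (generalized) property
# inheritance when its first premise is hierarchical, in either
# direction, and its conclusion repeats the second premise's relation.
# The HIERARCHICAL set is a small subset chosen for this example.

HIERARCHICAL = {"HAS-COMPONENT", "COMPONENT-OF", "ISA"}

def is_gpi(r_premise1, r_premise2, r_conclusion):
    return r_premise1 in HIERARCHICAL and r_conclusion == r_premise2

# battle COMPONENT-OF war, war HAS-MECHANISM weapon
#   |- battle HAS-MECHANISM weapon  (Figure 9)
print(is_gpi("COMPONENT-OF", "HAS-MECHANISM", "HAS-MECHANISM"))  # True
# A temporal first premise does not license inheritance:
print(is_gpi("CAUSES", "HAS-MECHANISM", "HAS-MECHANISM"))        # False
```

Note that the first premise may be COMPONENT-OF as well as HAS-COMPONENT, reflecting the relaxation described above: h may point "up" or "down."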
For example, the plausibility of the conclusion of the rule

n1 CONTAINS n2, and n2 CAUSES n3 ⊢ n1 CAUSES n3

seems not to depend on the objects that instantiate n1, n2, and n3. In contrast, to judge the plausibility of a conclusion of the rule

n1 CAUSES n2, and n2 ⊢ n1

we need knowledge about n1 and n2 that can tell us how likely n1 is given n2.

What knowledge contributes to the plausibility of the conclusions of the rules in Experiments 1 and 2? Said differently, what factors account for the total variance in judgments of plausibility (T) among our subjects? We believe T has four additive components:

- subject variance: the proportion of T due only to individual differences in subjects' knowledge, experience, motivation, and so on.
- item variance: the proportion of T due only to differences in the concepts that instantiate n1, n2, n3 in the rule.
- between-rule variance: the proportion of T due only to differences in the surface structures of rules.
- deep structure variance: the proportion of T due only to whether deep structures are transitive, intransitive, or GPI structures.

Ideally, deep structure variance should account for the largest component of T. If 100% of T was due to deep structure variance, then transitivity and GPI would be perfect predictors of plausibility. In contrast, if a large fraction of T is due to item variance, then one needs to know the specific instantiation of a rule (the concepts in the test item) to predict its plausibility. Similarly, between-rule variance represents the effect of knowing the surface structure of test items on one's ability to predict their plausibility. Subject variance represents the limit of our ability to predict plausibility.

For transitive and intransitive rules, and to a lesser extent for GPI rules, deep structure variance accounts for a large fraction of T. For all test items with these structural characteristics, our predictions of plausibility will be correct for 77% of transitive items and 68% of GPI items; and our prediction of implausibility will be correct for 62% of intransitive items. Since these numbers are not 100%, the remaining variance in T must be due to the rule, item, and subject factors.

Preliminary estimates of between-rule variance (based on the ω² statistic [Hays, 1973, p. 485]) are 16% for transitive rules, 27% for intransitive rules, and 52% for GPI rules. That is, if a rule is transitive, then knowing which rule it is provides little additional information about the plausibility of items. However, this knowledge accounts for much of the variance in plausibility scores of intransitive and GPI items.

Estimates of item and subject variance show that item differences account for most of the remaining variance. Knowing a rule's instantiation improves our prediction of its plausibility far more than knowing which subject is making the judgement. Details of these analyses are given in [Cohen and Loiselle, 1988].

5 Conclusion

This paper suggests that we can automatically derive plausible inference rules from the relations in knowledge bases and predict judgments of plausibility for the conclusions of these rules. Two structural factors (transitivity and GPI) correctly predict plausibility 77% and 68% of the time. No knowledge is required to apply these criteria. Greater accuracy requires more knowledge, particularly knowledge about the specific rules and the concepts that instantiate them; but because we could not accurately estimate the contribution of individual differences among our subjects to T, we do not know the limit on the accuracy of our predictions.

Our experiments relied on the GRANT KB, which was built for a different purpose. Although our results are limited to this knowledge base, we believe they are more general, because the surface relations in the GRANT KB are common, because h and t are general semantic components, and because transitivity and GPI are common structural characteristics. But further work is required to prove the generality of our results.

Our goal was to develop methods to support graceful degradation and knowledge engineering. Clearly, these purposes are not met if plausible inference rules require masses of knowledge to judge their conclusions. We are very encouraged by the relatively high accuracy of criteria that require no knowledge, and by the fact that our accuracy is higher for plausible rules than for implausible ones.

References

[Brachman, 1985] Ronald J. Brachman. "I lied about the trees" or, defaults and definitions in knowledge representation. AI Magazine, 6(3):80-93, Fall 1985.

[Cohen and Kjeldsen, 1987] Paul R. Cohen and Rick Kjeldsen. Information retrieval by constrained spreading activation in semantic networks. Information Processing and Management, 23(4):255-268, 1987.

[Cohen and Loiselle, 1988] Paul R. Cohen and Cynthia L. Loiselle. Explorations in the Structure of Plausible Inference Rules for Large Knowledge Bases. Technical Report 88-54, University of Massachusetts, Amherst, MA, 1988.

[Cohen et al., 1985] Paul R. Cohen, Alvah Davis, David S. Day, Michael Greenberg, Rick Kjeldsen, Sue Lander, and Cindy Loiselle. Representativeness and uncertainty in classification systems. AI Magazine, 6(3):136-149, Fall 1985.

[Collins et al., 1975] A. Collins, E. Warnock, N. Aiello, and M. Miller. Reasoning from incomplete knowledge. In D. G. Bobrow and A. Collins, editors, Representation and Understanding, Academic Press, New York, 1975.

[Hays, 1973] William L. Hays. Statistics for the Social Sciences. Holt, Rinehart, and Winston, second edition, 1973.

[Lenat and Feigenbaum, 1987] D. B. Lenat and E. Feigenbaum. On the thresholds of knowledge. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, pages 1173-1182, Milan, Italy, 1987.

[Lenat et al., 1986] D. B. Lenat, M. Prakash, and M. Shepherd. CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI Magazine, 6(4):65-85, Winter 1986.
Brian C. Williams*
MIT Artificial Intelligence Lab
545 Technology Square
Cambridge, MA 02139

Abstract

The apparently weak properties of a qualitative algebra have led some to conclude that we must turn instead to extra-mathematical properties of physical systems. We propose instead that a more powerful qualitative algebra is needed, one that merges the algebras on signs and reals. We have invented a hybrid algebra, called Q1, that allows us to select abstractions intermediate between traditional qualitative and quantitative algebras. The power of our algebra is demonstrated in three ways: First, analysis of Q1 shows that the algebra is robust, sharing many properties of reals, but including several that are unique. Second, these properties enable symbolic manipulation techniques for canonicalization and factorization distinct from those applied to the reals. Finally, these manipulation techniques hold much promise for tasks like design and verification, as suggested by a simple design example.

1 Introduction

Many systems analyze the behavior of physical devices using qualitative equations derived from models and device structure. For tasks like explanation and diagnosis, a device's behavior is predicted using the equations as a network of conduits through which values are propagated. In contrast, the task of design is to construct a network of conduits (i.e., qualitative equations) that produces some desired behavior. Modeling this process requires a theory of composition: how qualitative equations combine to produce the design's aggregate behavior. Creating such a theory requires a careful analysis of the algebraic properties of qualitative equations together with a set of techniques for algebraic manipulation.

We have invented a powerful hybrid algebra called Q1 that captures qualitative and quantitative information about physical devices.
We have implemented a qualitative symbolic algebra system based on this hybrid algebra, called MINIMA, that provides facilities for combining, simplifying, canonicalizing and factoring qualitative equations. These facilities are demonstrated on a simple design/verification example and play a central role in the system for novel design described in [8].

Very little has been written about the properties of qualitative algebras, i.e., algebras defined on abstractions of the reals. The few existing studies have focused on the properties of "confluences": equations involving only addition and subtraction on the signs of quantities [2]. It has been known for some time [1] that confluences are weak; e.g., there is no additive inverse. Recently Struss [7] has performed a detailed analysis of confluences: in replacing signs with a variety of interval representations he came to some disheartening conclusions: "Since the uncovered drawbacks [of confluences] turn out to be very severe, this should motivate a search for additional concepts and approaches of a completely different nature." As a consequence of this weakness few qualitative reasoning systems manipulate qualitative equations symbolically.¹

Our analysis of a more extensive qualitative algebra (Q1) is much more optimistic. Focusing on both the interaction between qualitative and real expressions and a broad set of qualitative operators results in a powerful algebra, one that shares many important properties with the reals, as well as offering additional important properties (section 4). These additional properties allow efficient algebraic manipulation (e.g., canonicalization and factoring) of qualitative expressions without resorting to the expensive procedures needed to manipulate real expressions (section 6). Q1's algebraic properties are also sufficient to account for simple designs (sections 5, 7).

*©1988 Brian C. Williams. This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the author's artificial intelligence research is provided by an Analog Devices Fellowship, the Digital Equipment Corporation, Wang Corporation, and the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124.

¹The only current exceptions are the qualitative gauss rule [3] and the composition of the M operator [5].

264 Common Sense Reasoning

2 Example: Culinary Design

Our overall agenda is to develop a theory of design innovation that accounts for designs using technologies where little is known beyond the physics underlying a few simple devices. A key component of this theory is a qualitative algebra used to describe and compose behavioral relations of primitive devices at an abstract level. The designer uses this algebra to reformulate a desired behavioral relation by combining it with known relations, until he finds one that he knows how to produce (fully or in part) through additions to the physical structure (e.g., adding components or connections). To understand the requirements of this qualitative algebra we consider a simple design example. Since the focus of this paper is the Q1 algebra rather than design, neither the example nor the reasoning strategy presented are particularly sophisticated. [8] presents
Thus you would like the level of the punch bowl to be restored to the level of the vat automatically. That is, whenever there is a height difference between the bowl and vat, a device should automatically change the bowl height to meet that of the vat. You reason as follows: First, the height of the punch in the bowl is raised or lowered by having punch flow in or out of it. Second, the pressure at the bottom of a container is proportional to the height and density of the liquid in the container. Since the same type of liquid is in both the vat and bowl, a difference in height corresponds to a difference in pressure. Thus our goal is reformulated as having punch flow into the bowl whenever its pressure drops relative to the vat. Further we know for a pipe that fluid flows to the end that has the lower pressure. Thus our task is completed simply by attaching a pipe between the bottoms of the vat and bowl (and, for aesthetics, hiding the vat behind a tasteful and rare tapestry). VAT PUNCH BOWL The qualitative vocabulary used above is similar to that found in the literature, involving signs of quantities, differ- ences and their derivatives. The reasoning process, how- ever, is a bit different. First, the example does not involve reasoning about spe- cific numbers or qualitative values (e.g., positive or increas- ing). Instead we reasoned about the composition of quali- tative relations, using the process of reformulating an ini- tial goal relation with known relations until a goal is found that can be met by augmentations to physical structure (e.g., addition of the pipe). Second, the example at times requires the designer to reason about the precise relationship between quantities, rather than simply relating the signs of quantities as with confluences[2] (e.g., th e exact quantitative relationship be- tween fluid density, height and pressure must be known to relate height and pressure difference). 
Capturing this reasoning process requires a hybrid qualitative-quantitative algebra coupled with a theory of the designer's algebraic manipulation skills.

3 The Qualitative Algebra

The qualitative algebra explored here (Q1) is similar to those used elsewhere in the literature (e.g., [9],[2]), but differs in two important respects.

Most importantly, our equations combine qualitative and quantitative information by allowing a combination of qualitative and real operators. Traditionally, real quantities are immediately abstracted to qualitative values (e.g., the sign of the quantity), then operated on by the qualitative operators. As a consequence of this early abstraction the result of the qualitative operations is often ambiguous. In Q1 quantities may first be operated on using the standard real operators, the result abstracted to a qualitative value, and then operated on further using qualitative operators. This produces a result that is less ambiguous than that produced by qualitative operators alone. A "hybrid" algebra of qualitative and quantitative operators thus allows us to express constraints spanning the spectrum from weak constraints expressible by traditional qualitative algebras to quantitative constraints expressible by the standard algebra on the reals.

A second property of Q1 is that qualitative expressions include a full complement of operators on signs analogous to real addition, subtraction, multiplication, division and exponentiation. Although previous work has included many of these operators in qualitative simulation systems (e.g., [9],[4],[5]), algebraic analysis has focused only on the properties of sign addition and subtraction [2],[7],[3].

The remainder of this section defines the domain, operators and syntax of the algebra. Algebraic properties are explored in the next section.
3.1 Domain, Operators and Syntax

Qualitative descriptions operate on two sets, the reals ℜ and the set S = {−, 0, +} denoting the sign of real quantities. The relation between ℜ and S is defined by the mapping [ ] : ℜ → S where:

For any x ∈ ℜ,  [x] = + if x > 0
                      0 if x = 0
                      − if x < 0

Thus the operator [ ] partitions ℜ into three intervals, (0, +∞), [0, 0] and (−∞, 0), corresponding to +, 0 and −. The set S' = {−, 0, +, ?} extends S with the value ?, used to represent an undetermined sign (i.e., the value may lie in any one of the three intervals). Thus ? corresponds to the interval (−∞, +∞).

Qualitative expressions are composed from the standard operators on ℜ, (+, −, ×, /), an analogous set of operators on S' (⊕, ⊖, ⊗, ⊘) and the operator [ ]. The operator ⊕ : S' × S' → S' is the qualitative analog of real addition, answering the question: "What is the sign of x + y, given only the signs of x and y?". The operators (⊖, ⊗, ⊘) have similar analogs to their corresponding real operators (−, ×, /). These operators, called sign operators, are defined by the following tables (U denotes values for which an operator is undefined, i.e., division by an interval containing zero):

  ⊕ | −  0  +  ?        ⊗ | −  0  +  ?
  − | −  −  ?  ?        − | +  0  −  ?
  0 | −  0  +  ?        0 | 0  0  0  0
  + | ?  +  +  ?        + | −  0  +  ?
  ? | ?  ?  ?  ?        ? | ?  0  ?  ?

  ⊖ | −  0  +  ?        ⊘ | −  0  +  ?
    | +  0  −  ?        − | +  U  −  U
                        0 | 0  U  0  U
                        + | −  U  +  U
                        ? | ?  U  ?  U

Allowable qualitative expressions (qexp) and real expressions (rexp) are of the form:

qexp ::= [rexp] | sign | qexp sop qexp | ⊖ qexp | qexp^integer | (qexp)
rexp ::= symbol | real | rexp rop rexp | − rexp | rexp^integer | (rexp)
sop  ::= ⊕ | ⊖ | ⊗ | ⊘
rop  ::= + | − | × | /
sign ::= [+] | [0] | [−] | [?]

where square brackets are placed around the elements of S' to distinguish them from real operators.
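The mapping [ ] and the tables for the sign operators transcribe directly into code. The following is a minimal sketch in my own notation (MINIMA itself was a Lisp system; these function names are not from the paper), representing S' as the characters '-', '0', '+', '?':

```python
# Sketch of the sign domain S' and the operators [ ] (sgn), (+) (qadd),
# (x) (qmul), unary minus (qneg) and (/) (qdiv) from the tables above.

def sgn(x):
    """[x]: partition the reals into the three sign intervals."""
    return '+' if x > 0 else '-' if x < 0 else '0'

def qadd(s, t):
    """Qualitative addition: the sign of x + y given only [x] and [y]."""
    if s == '0': return t
    if t == '0': return s
    return s if s == t else '?'   # opposite or unknown signs: undetermined

def qmul(s, t):
    """Qualitative multiplication: the sign of x * y given only [x] and [y]."""
    if s == '0' or t == '0': return '0'
    if s == '?' or t == '?': return '?'
    return '+' if s == t else '-'

def qneg(s):
    """Unary minus on signs."""
    return {'+': '-', '-': '+', '0': '0', '?': '?'}[s]

def qdiv(s, t):
    """Qualitative division; undefined (U) when t's interval contains zero.
    For a definite nonzero t the (/) table coincides with the (x) table."""
    if t in ('0', '?'):
        raise ZeroDivisionError("division by an interval containing zero")
    return qmul(s, t)
```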
Constraints on a system's behavior are described by a combination of qualitative and real equations (qeqn and reqn, respectively):

qeqn ::= qexp = qexp
reqn ::= rexp = rexp

In most qualitative algebras (e.g., [2],[9],[7],[3], and [5]) equality takes on a non-standard meaning: a qualitative equation q1 = q2 is satisfied if q1 and q2 denote the same value or if either q1 or q2 = [?]. Thus the expression [f1] ⊕ [f2] = [0] is satisfied by [f1] = [+] and [f2] = [−], since [+] ⊕ [−] = [?]. This definition results in considerable confusion since qualitative equality is not an equivalence relation: it is not transitive (e.g., [+] = [?] and [?] = [−], yet [−] ≠ [+]). As a consequence the fundamental operation of substitution of equals for equals is not generally permitted. This significantly weakens the traditional qualitative algebra.

In Q1, equality has the standard meaning (i.e., qualitative or real expressions that are equated denote the same value). For example, the expression [f1] ⊕ [f2] = [0] is satisfied only if [f1] = [0] and [f2] = [0]. The equation is not satisfied by [f1] = [+], [f2] = [−] since [+] ⊕ [−] = [?] ≠ [0].

Q1 can be used to describe the relations in the design scenario. The desired relationship between vat and bowl heights (Hv, Hb) is captured by the hybrid equation:

[Hv − Hb] = [d/dt(Hb)]

(i.e., the height of the bowl should change in the direction of the height difference between the bowl and vat). The required relation between height, pressure (P), fluid density (d) and gravity (g) is a real equation, while the relations between height, volume (V) and flow (Q) can be real or qualitative:

Pi = d × g × Hi   for open container i
Qb = d/dt(Vb)
[Vb] = [Hb]

These equations combine to produce the desired relation:

[Pv − Pb] = [Qb]

The relation for a pipe with ends e1 and e2 is:

[Pe1 − Pe2] = ⊖[Qe2]

Thus the desired relation is achieved by connecting a pipe between the bottoms of the vat and bowl.
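The contrast between the two equality semantics is easy to demonstrate in code. In this sketch (`weak_eq` and `qadd` are my own names, not the paper's), the confluence-style equality accepts [f1] = [+], [f2] = [−] as satisfying [f1] ⊕ [f2] = [0], while Q1's standard equality rejects it:

```python
def qadd(s, t):
    # Qualitative addition over S' = {'-', '0', '+', '?'}.
    if s == '0': return t
    if t == '0': return s
    return s if s == t else '?'

def weak_eq(q1, q2):
    """Traditional qualitative equality: '?' on either side satisfies it."""
    return q1 == q2 or q1 == '?' or q2 == '?'

lhs = qadd('+', '-')            # [+] (+) [-] = '?'
assert weak_eq(lhs, '0')        # satisfied under the weak semantics
assert lhs != '0'               # but not under Q1's standard equality
print("weak equality accepts; Q1's standard equality rejects")
```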
² Q1 provides two additional relations, "contains" and "overlaps", that are important for expressing weaker constraints. [8] discusses their properties, together with techniques for determining if one constraint is an abstraction of another.

4 Properties of Q1

In this section we demonstrate the power of Q1 by examining its most important properties (see [8] for a more complete discussion). Q1 is defined as the structure (ℜ ∪ S', +, ×, ⊕, ⊗, [ ]), where −, / are defined in terms of +, ×, and ⊖, ⊘ are defined in terms of ⊗. To understand Q1 we explore the properties of the real and sign algebras ((ℜ, +, ×) and (S', ⊕, ⊗)) and then the interactions between them.

In the remainder of the paper s, t and u denote elements of S' and a, b, c denote elements of ℜ. The table below summarizes the basic properties of (ℜ, +, ×) (i.e., the field axioms) and the corresponding properties of (S', ⊕, ⊗).

                 (S', ⊕, ⊗)                          (ℜ, +, ×)
Associativity:   (s ⊕ t) ⊕ u = s ⊕ (t ⊕ u)          (a + b) + c = a + (b + c)
                 (s ⊗ t) ⊗ u = s ⊗ (t ⊗ u)          (a × b) × c = a × (b × c)
Identity:        s ⊕ [0] = s                         a + 0 = a
                 s ⊗ [+] = s                         a × 1 = a
Inverse:         none                                a − a = 0
                 s ⊘ s = [+] if s ≠ [0] or [?]       a/a = 1 if a ≠ 0
Commutativity:   s ⊕ t = t ⊕ s                       a + b = b + a
                 s ⊗ t = t ⊗ s                       a × b = b × a
Distributivity:  s ⊗ (t ⊕ u) = s ⊗ t ⊕ s ⊗ u        a × (b + c) = a × b + a × c

The discussion of these properties is broken into properties of (ℜ, +, ×) missing in (S', ⊕, ⊗), shared properties, and properties of (S', ⊕, ⊗) beyond those of (ℜ, +, ×). We focus on the consequences of these properties most relevant to symbolic manipulation.

4.1 Weaknesses of S'

(S', ⊕, ⊗) satisfies most of the field axioms. The major weakness is the lack of an additive inverse for any element of S' except [0] (i.e., if v ∈ {+, −, ?}, there is no w ∈ S' such that v ⊕ w = [0]). Thus, whatever ⊖ is, it can't be the inverse of ⊕. As a result the sign algebra does not meet any of the normal classifications of field, ring, or even group.
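Because S' is finite, the inverse structure can be confirmed by exhaustive check. This sketch (helper names are mine) verifies that no element of {+, −, ?} has an additive inverse, while every definite nonzero sign is its own multiplicative inverse:

```python
# Exhaustive check of the inverse claims above over S' = {'-', '0', '+', '?'}.

S = ['-', '0', '+', '?']

def qadd(s, t):
    if s == '0': return t
    if t == '0': return s
    return s if s == t else '?'

def qmul(s, t):
    if s == '0' or t == '0': return '0'
    if s == '?' or t == '?': return '?'
    return '+' if s == t else '-'

# No additive inverse: for v in {'-', '+', '?'} no w gives v (+) w = '0'.
for v in ['-', '+', '?']:
    assert all(qadd(v, w) != '0' for w in S)

# Multiplicative inverses do exist away from '0' and '?': s (x) s = '+'.
for s in ['-', '+']:
    assert qmul(s, s) == '+'
print("S' has multiplicative but not additive inverses")
```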
One major consequence is that there is no cancellation law for ⊕:

s ⊕ u = t ⊕ u  does not imply  s = t

Without it we cannot in general solve systems of sign equations by subtracting equations and canceling terms. Furthermore, addends cannot be moved between sides of an equation:

s ⊕ t = u  does not imply  s = u ⊖ t

Consequently we cannot always solve for a particular variable in a qualitative expression using standard techniques for real expressions. This is why an algebra based only on confluences (i.e., (S', ⊕)) is so impoverished.

4.2 Commonalities Between S' and ℜ

In spite of the missing inverse, (S', ⊕, ⊗) is still quite strong because it shares most of the remaining properties of (ℜ, +, ×). ⊕ has an identity ([0]), and is commutative and associative. In addition, ⊗ has essentially all the properties of × (an identity ([+]), an inverse operator (⊘), associativity and commutativity). Like ×, ⊗ has no inverse for [0], but ⊗ also has no inverse for [?] (since [?] contains [0]). This does not present a problem in practice since a subexpression denoting [?] provides no information.

Since ⊗ has an inverse, ⊗ has a cancellation rule analogous to ×, and multiplicands can be moved between sides of an equation:

For u ≠ [0], [?]:  s ⊗ u = t ⊗ u ⇒ s = t        For c ≠ 0:  a × c = b × c ⇒ a = b
For t ≠ [0], [?]:  s ⊗ t = u ⇒ s = u ⊘ t        For b ≠ 0:  a × b = c ⇒ a = c/b

The second property above allows us to solve for certain variables or subexpressions of qualitative equations in many situations. Also, ⊗ distributes over ⊕. This, combined with the commutativity and associativity of ⊕ and ⊗, allows us to represent expressions in a canonical form similar to polynomials on ℜ.

Earlier we pointed out that ⊖ cannot be defined as the inverse of ⊕.
However ⊖ is related to ⊗ in a manner similar to − and ×:

⊖s ≡ [−] ⊗ s          −a ≡ (−1) × a
s ⊖ t ≡ s ⊕ (⊖t)      a − b ≡ a + (−b)

As a result (S', ⊕, ⊗) and (ℜ, +, ×) share the following properties important for simplification:

⊖(⊖s) = s                                     −(−a) = a
⊖(s ⊕ t) = ⊖s ⊖ t                             −(a + b) = −a − b
[+] ⊘ ([+] ⊘ s) = s, if s ≠ [0], [?]          1/(1/a) = a, if a ≠ 0
[+] ⊘ (s ⊗ t) = ([+] ⊘ s) ⊗ ([+] ⊘ t),       1/(a × b) = (1/a) × (1/b),
    if s, t ≠ [0], [?]                            if a, b ≠ 0

4.3 Properties of S' not in ℜ

(S', ⊕, ⊗) has three important properties that allow simplifications not possible in (ℜ, +, ×), and that are fundamental to the canonicalization and factoring algorithms described in section 6.

First, since [+] ⊗ [+] = [+] and [−] ⊗ [−] = [+], ⊗ is its own multiplicative inverse:

s ⊘ t = s ⊗ t   for t ≠ [0], [?]    (1)

A major consequence is that all occurrences of ⊘ in an expression can be replaced with ⊗ (as long as the denominator doesn't contain [0]). In addition:

s ⊘ s = [+]   for s ≠ [0], [?]
s ⊘ t = u  ⇒  s = u ⊗ t   for t ≠ [0], [?]

The second property relates to exponentiation. For a qualitative expression s and integer n, let sⁿ denote s ⊗ s ⊗ ··· ⊗ s (|n| times) if n is positive or negative, and [+] if n = 0. Then the following holds:

s^(2i) ⊗ s = s   for i ∈ integers    (2)

Thus all expressions raised to a positive/negative odd power are equivalent; likewise for positive/negative even powers. This allows all exponents i to be reduced to 0 ≤ i ≤ 2. This is used later in section 6.1 to reduce all sign expressions to quadratics.

Third, there is a cancellation rule for addition:

s ⊕ s = s

As a result of these three properties, common subexpressions are often "absorbed" into a single expression during the simplification process. This results in expressions that are far simpler than their counterparts would be in ℜ. We return to this issue in section 6.1.

4.4 Relating S' and ℜ

The remaining task is to examine expressions that use [ ] to combine properties of S' and ℜ (i.e., (ℜ ∪ S', +, ×, ⊕, ⊗, [ ])). [ ] is a homomorphism of ℜ
onto S for the operations of multiplication, division, minus and exponentiation:

[a × b] ⇔ [a] ⊗ [b]
[a / b] ⇔ [a] ⊘ [b]
[−a] ⇔ ⊖[a]
[aⁿ] ⇔ [a]ⁿ

However, this is not the case for addition or subtraction. For example, expressing height difference as [Hv] ⊖ [Hb] is weaker than [Hv − Hb] (e.g., consider Hv = 8, Hb = 7). This sheds light on a crucial problem with the standard approaches to qualitative reasoning: they over-abstract. The mistake is that a qualitative equation is traditionally produced from a real equation by replacing each operator with its sign equivalent and each variable v with [v]. Thus, in the punch bowl example we would be forced to represent height difference as [Hv] ⊖ [Hb]. But this expression is useless: since height is never negative and rarely zero, its value will almost always be [?]. We solve this problem by allowing a hybridization of real and sign expressions.

5 Using the Qualitative Algebra in Design

The next step is to incorporate the above properties into a symbolic algebra system adequate to capture the reasoning steps in designs similar to our example. To do this we consider what design entails.

Given a desired behavior, a designer examines the behavioral constraints imposed by the design's existing structure, then uses the models of available components to determine where and what additional augmentations are necessary to meet the desired behavior. A good designer exploits constraints imposed by the existing structure to reduce the additions necessary and identify novel additions. Here we focus only on the algebraic manipulations used in this process; coordinating the overall design process coherently is a subtle task described in [8].

The types of behavior used in the above process can be expressed by equations in our qualitative algebra.
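The over-abstraction problem described above is concrete enough to demonstrate: abstracting before subtracting loses the answer, while abstracting after subtracting keeps it. A sketch (function names are mine, not the paper's):

```python
# Demonstrating why [Hv] (-) [Hb] over-abstracts while [Hv - Hb] does not.

def sgn(x):
    return '+' if x > 0 else '-' if x < 0 else '0'

def qsub(s, t):
    """[x] (-) [y]: qualitative subtraction over S' = {'-', '0', '+', '?'}."""
    neg = {'+': '-', '-': '+', '0': '0', '?': '?'}[t]
    if s == '0': return neg
    if neg == '0': return s
    return s if s == neg else '?'

hv, hb = 8.0, 7.0
# Hybrid form: subtract on the reals, then abstract -- fully determined.
assert sgn(hv - hb) == '+'
# Traditional form: abstract each height first; both are '+', result '?'.
assert qsub(sgn(hv), sgn(hb)) == '?'
print("[Hv - Hb] = '+' but [Hv] (-) [Hb] = '?'")
```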
The basic algebraic inference performed by the designer is to reformulate an equation describing a desired behavior (goal) by combining it with equations describing either existing physical structure or augmentations to that structure he is willing to make (constraints). This process is repeated until either a reformulated goal is met by an existing constraint or it is proven unachievable.

Combining a goal with a constraint involves 1) identifying shared variables (or subexpressions), 2) solving for a variable/subexpression in the constraint, 3) substituting the result into the goal, and 4) simplifying the combined result. In the punch bowl example the original goal [Hv − Hb] = [d/dt(Hb)] and the constraint Hb × Ab = Vb share the variable Hb. Solving for Hb in the constraint we get Hb = Vb/Ab. Substituting for Hb in the right hand side of the goal produces [Hv − Hb] = [d/dt(Vb/Ab)]; simplifying results in [Hv − Hb] = [d/dt(Vb)] ⊘ [Ab]. This example is completed in section 7.

Next the reformulated goal is checked for failure or success. Success occurs if the goal is a tautology (e.g., s ⊕ t = s ⊕ t) or equivalent to an existing constraint. Failure occurs if the equation is inconsistent (e.g., [+] = [−]).

6 MINIMA

MINIMA is a symbolic algebra system for Q1 that supports the operations identified above. MINIMA is a qualitative analog of the symbolic algebra system Macsyma [6], and in fact uses Macsyma to manipulate subexpressions in (ℜ, +, ×).

We discuss the two most important operations performed by MINIMA: simplification and equation solving (i.e., solving for a variable or subexpression). By making the simplifier sufficiently powerful (i.e., reducing expressions to a unique canonical form), identifying tautologies and equivalent equations is reduced to determining syntactic equivalence. Like Macsyma, MINIMA provides two approaches to simplification and equation solving.
The first approach is restricted to "obvious" transformations of the equations, using a subset of the properties mentioned above. For example, given c1 = [+] and c2 = 8, the equation ([a³] ⊕ [c2]) ⊗ [−a²/(b − 4) × (−c1)] simplifies to ([a] ⊕ [+]) ⊗ [a]² ⊗ [b − 4] by 1) substituting for constants with known values, 2) applying the homomorphisms for +, ×, −, 3) evaluating [ ] on known values, 4) cancelling identities and double negations, and 5) using associativity and commutativity to canonicalize the order of operator arguments. Completeness is traded for faster, more intuitive deductions. The approach is sufficient for many designs, including the punch bowl example.

The second approach performs less obvious transformations, using techniques for qualitative canonicalization and factorization. For example, ([a] ⊕ [+]) ⊗ [a]² ⊗ [b − 4] is further simplified to the multivariate quadratic³ [b − 4] ⊗ [a]² ⊕ [b − 4] ⊗ [a] for [b − 4] ≠ [0], [?]. Canonicalization and factorization are prohibitively expensive in traditional symbolic algebra systems on ℜ. However, this is not the case for a sign algebra: MINIMA exploits the properties of Q1 described in section 4.3 to make canonicalization and factorization very efficient in practice.

³ A multivariate polynomial is a polynomial in a variable v whose coefficients are polynomials not in v. A multivariate quadratic is a multivariate polynomial of degree 2.

6.1 Simplification and Canonicalization

The purpose of simplification is to eliminate irrelevant structure in the equations. This facilitates the process of both comparing and combining equations. The simplifier eliminates structure through a combination of cancellation (e.g., a/a ⇒ 1), evaluation ([+] ⊗ [−] ⇒ [−]), substitution of known constants, and reduction of subexpressions to a standard form (e.g., (b × a) × c ⇒ a × b × c). The operator definitions and properties described in sections 3 and 4 provide the tools to perform simplification.
Simplifying equations in MINIMA involves three steps. First, the real subexpressions of an equation (i.e., expressions contained within [ ]) are simplified using the properties described for (ℜ, +, ×). Next, real operators are transformed into sign equivalents whenever possible using the homomorphisms of section 4.4. Finally, the surrounding sign expressions are simplified using the properties described for (S', ⊕, ⊗). Mapping from real to sign operators has two advantages. First, the sign of a quantity is often known when the real value isn't. For example, we know density and gravity are positive, independent of substance and planet; thus [P] = [d × g × H] simplifies to [P] = [H]. Second, the properties of section 4.3 allow significant simplifications in S' not possible in ℜ.

The "obvious" simplification approach involves making local changes to an equation's structure. Most of the properties of section 4 are applied as simple rewrite rules during simplification. Commutativity and associativity are used together to convert binary expressions into n-ary expressions whose arguments are sorted lexicographically (as in (b × a) × c ⇒ a × b × c). The main property not used is distributivity, since expanding expressions using distribution can radically change an equation's structure.

The second simplification approach reduces an expression or equation to a pseudo-canonical form⁴ analogous to a multivariate polynomial. In traditional symbolic systems real expressions can be reduced to a unique rational form: a fraction consisting of two multivariate polynomials with common factors removed. Although constructing polynomials is fast, constructing rationals is expensive for large expressions. The cost is in factorization, which relies heavily on computing greatest common divisors (GCDs).

A similar approach is taken for Q1, but one that is significantly faster in practice. Operators in S' are distributive, commutative, and associative.
This is sufficient to construct polynomials from sign expressions. Furthermore, by equation (2) all exponents can be reduced to degree 1 or 2, thus the polynomials are at most quadratic. Finally, by equation (1) division can be replaced by multiplication, thus all expressions in (S', ⊕, ⊗) can be represented as quadratic, multivariate polynomials.⁵ Since ⊘ is eliminated, factorization and GCDs are unnecessary to perform canonicalization.

To canonicalize a hybrid equation we convert the real subexpressions to rationals, apply the homomorphisms, and then canonicalize the sign expressions as above.⁶

⁴ We use the modifier "pseudo" only because the canonical form has not yet been proven unique.

⁵ An additional stage involves eliminating irrelevant terms in the quadratic (e.g., [z]² ⊕ [z] ⊕ [+] ⇒ [z] ⊕ [+]). See [8] for a complete list of elimination rules.

⁶ A more general approach involves constructing the prime factorization of the real expression before applying the homomorphisms.

6.2 Equation Solving and Factoring

Given an equation in (ℜ, +, ×) it is possible to solve for any variable. This is not the case for (S', ⊕, ⊗): since there is no cancellation for addition, addends cannot be moved between the left and right sides of an equation. However, it is often possible to solve for certain subexpressions. Cancellation can be performed for multiplication (section 4.2), thus we can solve for any subexpression that is an argument of a top-level multiplication. For example, solving for s ⊗ t:

(s ⊗ t) ⊗ u = v  ⇒  s ⊗ t = v ⊘ u   for u ≠ [0], [?]

More generally, we can compute the prime factors of the top-level expressions in a qualitative equation, and then solve for any of the factors. Traditionally factorization requires computing GCDs, which is very expensive. However, factorization using standard GCD algorithms cannot be used for sign expressions since GCD algorithms rely on cancellation. Instead we use a much simpler approach.
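For sign quadratics whose coefficients are sign constants, the simpler approach amounts to a tiny search over candidate factors. A minimal sketch of the idea (not MINIMA's implementation; names are mine):

```python
# Generate-and-test factoring of a sign quadratic
#   s2 [z]^2 (+) s1 [z] (+) s0  =  (a [z] (+) c) (x) (b [z] (+) d)
# by searching over all sign assignments for a, b, c, d.

from itertools import product

S = ['-', '0', '+', '?']

def qadd(s, t):
    if s == '0': return t
    if t == '0': return s
    return s if s == t else '?'

def qmul(s, t):
    if s == '0' or t == '0': return '0'
    if s == '?' or t == '?': return '?'
    return '+' if s == t else '-'

def factor_quadratic(s2, s1, s0):
    """Find a, b, c, d with s2 = a(x)b, s0 = c(x)d and
    s1 = (a(x)d) (+) (b(x)c); return None if no factorization exists."""
    for a, b, c, d in product(S, repeat=4):
        if (qmul(a, b) == s2 and qmul(c, d) == s0
                and qadd(qmul(a, d), qmul(b, c)) == s1):
            return a, b, c, d
    return None

# [+][z]^2 (+) [?][z] (+) [-] is factorable; the search finds one solution.
a, b, c, d = factor_quadratic('+', '?', '-')
assert qmul(a, b) == '+' and qmul(c, d) == '-'
assert qadd(qmul(a, d), qmul(b, c)) == '?'
```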
Since sign expressions can be reduced to quadratics, it is relatively inexpensive to determine the factors by generate and test. The factorization of a quadratic is of the form:

s2 ⊗ [z]² ⊕ s1 ⊗ [z] ⊕ s0 = (a ⊗ [z] ⊕ c) ⊗ (b ⊗ [z] ⊕ d)

where s2, s1, s0, a, b, c and d are qualitative expressions and s2 = a ⊗ b, s1 = (a ⊗ d) ⊕ (b ⊗ c) and s0 = c ⊗ d. Thus to factor a quadratic we generate a, b, c and d by factoring the coefficients s2 and s0, and then distribute s2's factors between a and b, and s0's factors between c and d. To test, we compute the polynomial corresponding to (a ⊗ d) ⊕ (b ⊗ c) and compare it with s1. Quadratics are sufficiently infrequent that this strategy is quite acceptable in practice.

MINIMA's facilities for simplification, substitution and equation solving provide the algebraic tools necessary to walk through the punch bowl example which, as the trace below suggests, is more complex than our intuitions might at first suggest. The following is a simplified trace of the deductions going from the initial goal [Hv − Hb] = [dHb/dt] to the reformulation [Pv − Pb] = [Qb], which is the key to recognizing the solution involving a pipe. In the example, Gn, Fn, and Cn denote Goals, given Facts and Consequences, respectively.
G1)  [Hv − Hb] = [d/dt(Hb)]                  Original Design Goal
F2)  Hb × Ab = Vb                            Container Model
C3)  Hb = Vb/Ab                              Solve for Hb in F2
G4)  [Hv − Hb] = [d/dt(Vb/Ab)]               Substitute for Hb in G1 using C3
G7)  [Hv − Hb] = [d/dt(Vb)/Ab]               Differentiate G4
G8)  [Hv − Hb] = [d/dt(Vb)] ⊘ [Ab]           Simplify G7
F9)  [Ab] = [+]                              Container Model
G10) [Hv − Hb] = [d/dt(Vb)]                  Substitute for Ab in G8 using F9
F11) Qb = d/dt(Vb)                           Container Model
G12) [Hv − Hb] = [Qb]                        Substitute for d/dt(Vb) in G10 using F11
F13) Pv = d × g × Hv                         Container Model
C14) Hv = Pv/(d × g)                         Solve for Hv in F13
G15) [Pv/(d × g) − Hb] = [Qb]                Substitute for Hv in G12 using C14
F16) Pb = d × g × Hb                         Container Model
C17) Hb = Pb/(d × g)                         Solve for Hb in F16
G18) [Pv/(d × g) − Pb/(d × g)] = [Qb]        Substitute for Hb in G15 using C17
G19) [Pv − Pb] ⊘ ([d] ⊗ [g]) = [Qb]          Simplify G18
F20) [d] = [+]                               Property of fluids
F21) [g] = [+]                               Property of gravity
G22) [Pv − Pb] ⊘ ([+] ⊗ [+]) = [Qb]          Substitute g, d into G19 using F20, F21
G23) [Pv − Pb] = [Qb]                        Simplify G22

The reformulated goal G23 enables the crucial insight: the close similarity between G23 and a pipe's behavior (i.e., [Pt1 − Pt2] = ⊖[Qt2], where t1, t2 are the ends of a pipe) suggests connecting a pipe between the two containers.

The apparently weak properties of a qualitative algebra have led some to conclude that we must turn instead to extra-mathematical properties of physical systems. We have instead proposed a new qualitative algebra, Q1, that merges the algebras on signs and reals, allowing us to select abstractions intermediate between traditional qualitative and quantitative algebras.

The power of our algebra is demonstrated in three ways. First, Q1 is a robust algebra sharing many properties of the reals, as well as several that are unique. Second, these properties enable symbolic manipulation techniques for canonicalization and factorization, distinct from those applied to the reals. Finally, these manipulation techniques hold much promise for tasks like design and verification, as suggested by our example.
The qualitative symbolic algebra system MINIMA has been fully implemented and tested on a Symbolics 3600. A design system based on MINIMA is partially implemented.

Acknowledgments: I would like to thank Leah Williams, Dan Weld, Jeff Van Baalen, Paul Resnick, Walter Hamscher, Johan de Kleer and Randy Davis.

References

[1] J. de Kleer. Causal and Teleological Reasoning in Circuit Recognition. AI-TR-529, MIT AI Lab, Sep. 1979.
[2] J. de Kleer and J. Brown. A Qualitative Physics Based on Confluences. Artificial Intelligence, 24, Dec. 1984.
[3] J. Dormoy and O. Raiman. Assembling a Device. AI in Engineering, to appear 1988.
[4] K. Forbus. Qualitative Process Theory. Artificial Intelligence, 24, Dec. 1984.
[5] B. Kuipers. Commonsense Reasoning about Causality. Artificial Intelligence, Dec. 1984.
[6] J. Moses. Algebraic Simplification: A Guide for the Perplexed. Comm. ACM, 14(8), Aug. 1971.
[7] P. Struss. Mathematical Aspects of Qualitative Reasoning. AI in Engineering, to appear 1988.
[8] B. Williams. Principled Design Based on Topologies of Interaction. PhD Thesis, MIT AI Lab, to appear 1988.
[9] B. Williams. Qualitative Analysis of MOS Circuits. Artificial Intelligence, Dec. 1984.
A Qualitative Approach to Mechanical Constraint*

Paul Nielsen
Qualitative Reasoning Group
Department of Computer Science
University of Illinois at Urbana-Champaign
1304 W. Springfield Avenue, Urbana, Illinois 61801

Abstract

This paper provides a qualitative analysis of instantaneous, constrained motions in rigid bodies. We develop a symbolic spatial representation to describe the effects of configuration on the dynamic behavior of rigid objects. We also explore the way symbolic shape information may be used to reason about force transmission. This information may be used to provide a static analysis for a given configuration and is an important component of the calculation of behavioral transitions when envisioning device behavior. All results are based on an implementation.

1 Introduction

The goal of Qualitative Mechanics (QM) is to produce a commonsense theory of mechanical analysis sufficient to describe the behavior of rigid body devices. We want theories which describe both the behavior of common mechanisms such as gear trains, pistons, and ratchets, as well as mechanisms which contain unusual or variant devices such as clock escapements. These descriptions may be used to predict the behavior of an unknown mechanism, determine the suitability of a given device for a task, diagnose mechanical failures, and critically analyze new mechanisms.

There is a great deal of interest in developing AI tools to assist in mechanics, both by mechanical engineers [Dixon, 1986] and AI researchers [Davis, 1986; deKleer, 1975; deKleer and Brown, 1984; Forbus, 1981; Gelsey, 1987; Kuipers, 1986; Laughton, 1985; Shoham, 1985; Stanfill, 1985]. In addition, for a robot to interact with the physical world and manipulate its environment it must be able to accomplish tasks such as turning knobs, opening doors, lifting boxes, and stacking objects.
Except in a highly artificial environment these tasks all require deep knowledge of the basic underlying principles of statics discussed in this paper.

We assume as input a specification of the set of objects involved, a set of possible configurations of these objects, and the external forces acting on the device, if any. The results of this analysis are the instantaneous directions in which an object can and must move. For example, if a scape wheel is moving clockwise and the fore pallet is in contact with a tooth on the scape wheel, how might the scape wheel move? This result covers a wide range of statics problems and is an initial step in understanding the qualitative mechanics of motion. All of these ideas have been implemented in a program called ALEX, and the examples are taken from that program.

* This research was supported by the Office of Naval Research, Contract No. N00014-85-K-0225.

1.1 Overview

Section 2 describes the methods used to represent spatial descriptors and rigid body objects. Section 3 presents the theory of qualitatively constrained motion. This section is divided into three parts: motion prevented by contact with immovable objects, motions required by contact with moving objects, and motion allowed by forces external to the device. Section 4 briefly describes how these ideas are used in mechanism analysis. Section 5 provides a summary and discusses other research on QM.

2 Representations

2.1 Spatial Representations

2.1.1 Translational Direction

A concept of direction is essential to spatial reasoning. When people describe direction in space, without resorting to diagrams or mathematics, they typically use words such as "right, left"; "up, down"; and "front, back" relative to some frame of reference. We use this order and assign "+" to the first of each of these pairs and "−" to the second, with "0" meaning center.
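This sign-per-axis vocabulary can be written down directly. A sketch (function names are mine, not ALEX's) of abstracting a numeric direction vector to its qualitative direction:

```python
# Qualitative direction as the tuple of per-axis signs, following the
# "+ / 0 / -" assignment described above.

def sgn(x):
    return '+' if x > 0 else '-' if x < 0 else '0'

def trans_dir(v):
    """trans-dir: the ordered signs of a direction vector."""
    return tuple(sgn(c) for c in v)

assert trans_dir((3.0, -1.5, 0.0)) == ('+', '-', '0')  # right, down, centered
assert trans_dir((1.0, 0.0, 0.0)) == ('+', '0', '0')   # along the positive X axis
```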
This description corresponds to the signs of the numerical values in a Cartesian coordinate system or to the signs of the cosine and the sine in a polar coordinate system.

Directions in space are described by combining these values over several dimensions. A vector which corresponds to an axis may be represented exactly; for example, (+ 0 0) indicates the vector lies along the positive X axis. Vectors which do not correspond to axes are represented by continuous spatial regions. For example, any location to the lower left of some reference will be written (− −) or (− − 0), where 0 indicates there is no magnitude along the Z axis.

Definition 1 (Translational Direction) trans-dir(x) is defined only over vector quantities. It is the ordered set of the signs of the direction vector of x.

Definition 2 (Possible Directions) The set of all possible translational directions τ is the Cartesian product of the sets of all possible directions for each dimension. The set of all possible translational directions for a single dimension is {+, 0, −}.

2.1.2 Rotational Direction

People typically describe direction of rotation either by the direction of a tangent or using counter-clockwise and clockwise. We represent a counter-clockwise rotation as "+" and a clockwise rotation as "−" when looking along a positive axis toward the origin. Thus the way we normally perceive the movement of the hands of a clock is (−) in 2 dimensions or (0 0 −) in three dimensions (clockwise about the Z axis). Note that this representation of rotational directions corresponds to the signs of
Definition 3 (Rotational Direction) rot-dir(x) is the ordered set of the signs of x's rotation about each axis.

2.2 Vector Arithmetic

The calculations we perform on vector quantities include computing open half planes and ninety degree rotations of vectors which are represented in the manner specified above. These calculations make use of the vector dot product (a1a2 + b1b2 + c1c2) and the vector rotation formulas. In the vector rotation formulas a rotation of ninety degrees causes the cosines to become zeros and the sines to become ones, so that the only math we need consider is addition and multiplication of signs. The qualitative arithmetic (cf. Tables 1 and 2) [deKleer and Brown, 1984] provides these results.

Table 1: [X] + [Y]
      +  0  -
  +   +  +  ?
  0   +  0  -
  -   ?  -  -

Table 2: [X] * [Y]
      +  0  -
  +   +  0  -
  0   0  0  0
  -   -  0  +

Definition 4 (Half Plane) The predicate Half-Plane(x, y) is true iff x and y are both vector quantities, and the sign of the vector dot product of x and y is "+".

Definition 5 (Rotate-90) Rotate-90(x, y, r) is true iff y is the vector which is perpendicular to x by a rotation in the rotational direction r.

2.3 Objects

Rigid objects are represented by the set of their surfaces, and surfaces, in turn, are represented by both the qualitative direction of the surface normal and the direction from the surface to the center of rotation. For example, the bottom of a two dimensional block consists of three qualitatively distinct surfaces, all of which have a surface normal in the down direction but whose directions to the center of mass are up-right, up, and up-left. Places where the surface normal is not defined (corners) are represented by the set of adjacent surfaces. For tractability we assume the centers of rotation to be fixed.

Definition 6 (Surface) The predicate Surface(x, p) is true iff p is a point (or set of qualitatively equivalent points) on the perimeter of object x.

Definition 7 (Surface Normal) Surface-Normal(p, d) is true if d is the direction of the surface normal at the surface p.

Definition 8 (Origin Direction) Origin-Dir(p, d) is true iff d is the translational direction from a point or surface, p, on an object to the center of rotation of that object.

2.4 Contact

In order for an object to affect another object there must be contact between the two objects in some sense. (If we think of effects such as gravity and magnetism as a field, we can then reason about the contact between this field and an object.) The Contact relation shows which parts of objects are in contact.

Definition 9 (Contact) The predicate Contact(x, y) indicates that the distance from x to y is less than ε, where x and y are both surfaces of objects.

3 Mechanical Motion

3.1 Blocking

This subsection answers two questions. Given contact between an object and an obstacle:
1. How will the motion constraints of the obstacle block the object?
2. What motions of the obstacle must be constrained to block the object?

A constraint is a reaction force which absolutely prevents a body from moving a certain way. Constrained motion is essential to understanding mechanics because a machine is defined as "any device consisting of two or more resistant, relatively constrained parts which may serve to transmit and modify force and motion so as to do work" [Cowie, 1961]. The opposite of a constraint is a freedom. In three dimensions there are six degrees of freedom (an object can rotate about any of the three axes or translate along any of the axes), and in two dimensions there are three degrees of freedom (two translational and one rotational). In our analysis an object is assumed free to move in each direction unless it is specifically constrained.

Definition 10 (Motion) TransMotion(o, t) indicates o has instantaneous linear motion in direction t. RotMotion(o, r) indicates o has instantaneous rotational motion in direction r.
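The qualitative arithmetic of Tables 1 and 2 and the Half-Plane and Rotate-90 predicates (Definitions 4 and 5) can be prototyped as follows. This is a sketch with my own naming, not the paper's; '?' marks the ambiguous sum of '+' and '-'.

```python
# Sketch of the qualitative sign arithmetic of Tables 1 and 2 and of the
# Half-Plane and Rotate-90 predicates (Definitions 4-5).  Names mine.

def q_add(a, b):
    """Table 1: qualitative sum of two signs; '+' plus '-' is ambiguous."""
    if a == '0': return b
    if b == '0': return a
    return a if a == b else '?'

def q_mul(a, b):
    """Table 2: qualitative product of two signs."""
    if '0' in (a, b): return '0'
    return '+' if a == b else '-'

def half_plane(x, y):
    """Definition 4: the sign of the qualitative dot product is '+'.
    An ambiguous ('?') sum means the predicate cannot be confirmed."""
    total = '0'
    for a, b in zip(x, y):
        total = q_add(total, q_mul(a, b))
    return total == '+'

NEG = {'+': '-', '-': '+', '0': '0'}

def rotate_90(v, r):
    """Definition 5 in 2-D: rotate sign vector v by ninety degrees,
    counter-clockwise for r = '+', clockwise for r = '-'."""
    x, y = v
    return (NEG[y], x) if r == '+' else (y, NEG[x])

assert half_plane(('+', '0'), ('+', '+'))        # (+ 0) lies in the half plane about (+ +)
assert rotate_90(('+', '0'), '+') == ('0', '+')  # +X rotated ccw points along +Y
```

Because the rotation formulas reduce to sign negation and swapping, addition and multiplication of signs really is the only arithmetic needed.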
Definition 11 (Constraint) TransConstraint(o, t) is true when object o is absolutely prevented from moving translationally in direction t. RotConstraint(o, r) is true when object o is absolutely prevented from moving rotationally in direction r.

Definition 12 (Freedom) TransFreedom(o, t) is true when object o is not prevented from moving translationally in direction t. RotFreedom(o, r) is true when object o is not prevented from moving rotationally in direction r.

The constraints which may be imposed when two objects are in contact are given in figure 1. This says that if an obstacle is "sufficiently" constrained it will prevent the following motions of an object in contact:

- a translational motion into the open half plane centered on the object's surface normal at the point of contact,
- a rotational motion clockwise about any axis which lies in the open half plane centered ninety degrees clockwise from the object's surface normal at the point of contact, or
- a rotational motion counter-clockwise about any axis which lies in the open half plane centered ninety degrees counter-clockwise from the object's surface normal at the point of contact.

Nielsen 271

(∀ obj, obst, p, q, sn) {
  RigidBody(obst) ∧ RigidBody(obj) ∧ Surface(obst, p) ∧ Surface(obj, q) ∧
  Contact(p, q) ∧ Surface-Normal(p, sn) ∧ Origin-Dir(p, o1) ∧ Origin-Dir(q, o2) ∧
  { (∀d1) [Half-Plane(-sn, d1) ⇒ TransConstraint(obst, d1)] ∧
    (∀r1 ∃x1) [Rotate-90(-sn, x1, r1) ∧ Half-Plane(x1, o1) ⇒ RotConstraint(obst, r1)] } }
⇒
{ (∀d2) [Half-Plane(-sn, d2) ⇒ TransConstraint(obj, d2)] ∧
  (∀r2 ∃x2) [Rotate-90(-sn, x2, r2) ∧ Half-Plane(x2, o2) ⇒ RotConstraint(obj, r2)] }

Figure 1: The law of contact constraint
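A hedged two-dimensional sketch of the translational part of the law of contact constraint in figure 1. Directions here are plain integer vectors, and `obstacle_can_move` is an assumed test supplied by the caller, not part of the paper:

```python
# Translational part of the law of contact constraint (figure 1) in 2-D.
# The object is blocked in every direction in the open half plane about
# its surface normal, provided the obstacle is constrained there too.

DIRS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def in_open_half_plane(d, center):
    return d[0] * center[0] + d[1] * center[1] > 0

def blocked_translations(object_normal, obstacle_can_move):
    half = {d for d in DIRS if in_open_half_plane(d, object_normal)}
    if any(obstacle_can_move(d) for d in half):
        return set()      # obstacle not "sufficiently" constrained
    return half

# A block resting on a fixed table (object surface normal pointing down)
# is blocked in all three downward directions:
assert blocked_translations((0, -1), lambda d: False) == {(-1, -1), (0, -1), (1, -1)}
```

If the obstacle can move anywhere in that half plane, no translational constraint on the object follows, mirroring the sufficiency condition described next.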
An obstacle is "sufficiently" constrained if it is unable to move in any of the following ways:

- translational motion into the open half plane centered on the object's surface normal at the point of contact,
- rotational motion clockwise about any axis which lies in the open half plane centered ninety degrees clockwise from the object's surface normal at the point of contact, and
- rotational motion counter-clockwise about any axis which lies in the open half plane centered ninety degrees counter-clockwise from the object's surface normal at the point of contact.

These constrained motions of the obstacle are the minimum required to describe motion of a link relative to some fixed frame of reference, not just the adjacent link. This allows an obstacle to be only partially constrained yet still prevent another object from moving in some directions. Classical kinematics [Reuleaux, 1876] and related AI approaches [Davis, 1986; deKleer and Brown, 1984; Forbus, 1981; Kuipers, 1986; Laughton, 1985; Shoham, 1985] assume that only one object can move, i.e. all objects are fixed except the object of interest. Those approaches simplify analysis, but oversimplify the problem.

Figure 2 illustrates this law graphically for the two dimensional case. The surface normal of object B (inverse surface normal of the obstacle) at the point of contact is to the right. If the obstacle cannot move up-right, right, or down-right; cannot rotate counter-clockwise about an axis above the surface normal; and cannot rotate clockwise about an axis below the surface normal, then object B cannot move up-right, right, or down-right (Fig. 2 B); cannot rotate counter-clockwise about an axis above the surface normal (Fig. 2 C); and cannot rotate clockwise about an axis below the surface normal (Fig. 2 D).

Figure 2: Constraints imposed by surface contact

Figure 3: Block on wedge on ramp

A stack of blocks is a simple example of partially constrained motions.
When blocks are stacked, any block in the stack is prevented from moving in any downward direction because it has contact along a surface with the surface normal in the down direction, and the block (or table) it is in contact with is constrained in all downward directions. A block resting on a wedge on a ramp (Fig. 3) is free to move in any downward direction because the obstacle is not sufficiently constrained. The block may move downward by pushing the wedge down-right; but if the wedge could not be pushed to the side (perhaps a catch on the ramp), the wedge could not move in any of the directions required by the shape of the surface between the block and the wedge, and consequently the block would be constrained from moving downward.

3.2 Constraints Imposed at Corners

When one surface slides off another there will be an instant when contact between two convex corners can occur. In this case the surface tangent is not clearly defined at the point or line of contact.¹ To determine what motions cannot occur in this case, we must first determine the contacts which are possible between each adjacent surface and the corners.

Figure 4: Block pushing a boulder

Definition 13 (Convex Corner) A corner is convex if the angle between the two adjacent surfaces is greater than 180 degrees. The constraints imposed by contact between two convex corners are the intersection of those imposed by contact with the adjacent surfaces, provided each half plane of the obstacle corresponding to the regions to be constrained is itself constrained.

Definition 14 (Concave Corner) A corner is concave if the angle between the two adjacent surfaces is less than 180 degrees. The constraints imposed by contact between two concave corners are the union of those imposed by contact with the adjacent surfaces, provided each half plane of the obstacle corresponding to the regions to be constrained is itself constrained.
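The corner rules of Definitions 13 and 14 reduce to set intersection and union. A sketch with constraints modelled as plain sets of direction labels (the encoding is mine):

```python
# Definitions 13 and 14: combining the constraints contributed by the
# two surfaces adjacent to a corner.  Constraints are plain sets here.

def corner_constraints(constraints_a, constraints_b, angle_deg):
    if angle_deg > 180:                       # convex corner (Definition 13)
        return constraints_a & constraints_b
    return constraints_a | constraints_b      # concave corner (Definition 14)

a = {'down', 'down-left'}
b = {'down', 'down-right'}
assert corner_constraints(a, b, 270) == {'down'}                          # convex
assert corner_constraints(a, b, 90) == {'down', 'down-left', 'down-right'}  # concave
```

The sufficiency proviso of both definitions (each corresponding half plane of the obstacle must itself be constrained) is assumed to hold before these sets are combined.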
3.3 Pushing

We have seen how a fixed body will prevent motion. Now we will explore how a moving body will transfer motion. Again there are two considerations. Given contact between an object and a moving body:
1. How will the motion of the body affect the object?
2. What motions of the body will affect the object?

The law describing the motions an object must undergo when in contact with a moving body is given in figure 5. This says that if a body is moving "into" an object, the object must move in at least one of the following ways, and if none of these motions are possible the body cannot move:

- translational motion into the open half plane centered on the body's surface normal at the point of contact,
- rotational motion clockwise about any axis which lies in the open half plane centered ninety degrees clockwise from the body's surface normal at the point of contact, or
- rotational motion counter-clockwise about any axis which lies in the open half plane centered ninety degrees counter-clockwise from the body's surface normal at the point of contact.
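The translational part of the law of motion transfer can be sketched the same way as blocking (rotational cases omitted; names and 2-D integer-vector encoding mine):

```python
# Translational part of the law of motion transfer (figure 5) in 2-D:
# if the body's motion points into the open half plane about its surface
# normal at the contact, the object must take one of the translations in
# that half plane.

DIRS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def required_translations(body_normal, body_motion, candidate_dirs):
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    if dot(body_motion, body_normal) <= 0:
        return set()               # the body is not moving into the object
    return {d for d in candidate_dirs if dot(d, body_normal) > 0}

# The block of figure 4 moving right (surface normal (1, 0)) forces the
# boulder right, up-right, or down-right:
assert required_translations((1, 0), (1, 0), DIRS) == {(1, -1), (1, 0), (1, 1)}
```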
¹When a corner contacts a surface it is sufficient to know the surface normal of the surface, because the surface normal of the corner (for our purposes) is the negation of this.

(∀ body, obj, p, q, sn) {
  RigidBody(body) ∧ RigidBody(obj) ∧ Surface(body, p) ∧ Surface(obj, q) ∧
  Contact(p, q) ∧ Surface-Normal(q, sn) ∧ Origin-Dir(p, o1) ∧ Origin-Dir(q, o2) ∧
  { (∃d1) [Half-Plane(-sn, d1) ∧ TransMotion(body, d1)] ∨
    (∃r1 x1) [Rotate-90(-sn, x1, r1) ∧ Half-Plane(x1, o1) ∧ RotMotion(body, r1)] } }
⇒
{ (∃d2) [Half-Plane(sn, d2) ∧ TransMotion(obj, d2)] ∨
  (∃r2 x2) [Rotate-90(sn, x2, r2) ∧ Half-Plane(x2, o2) ∧ RotMotion(obj, r2)] }

Figure 5: The law of motion transfer

The body is moving into the object if it has any of the following motions:

- translational motion into the open half plane centered on the body's surface normal at the point of contact,
- rotational motion clockwise about any axis which lies in the open half plane centered ninety degrees clockwise from the body's surface normal at the point of contact, or
- rotational motion counter-clockwise about any axis which lies in the open half plane centered ninety degrees counter-clockwise from the body's surface normal at the point of contact.

Imagine the block moving right in figure 4. SN indicates the surface normal of the block. Because the direction of motion of the block is into the boulder, the boulder must move in one of the following directions:

- translationally right
- translationally down-right
- translationally up-right
- clockwise about an axis down from the contact
- clockwise about an axis down-left from the contact
- clockwise about an axis down-right from the contact
- counter-clockwise about an axis up from the contact
- counter-clockwise about an axis up-left from the contact
- counter-clockwise about an axis up-right from the contact

The way we handle external forces such as gravity, friction, and magnetism is to create an imaginary body pushing on a (possibly) imaginary surface of the object.
For example, a dropped brick would be pushed downward by gravity at its center of mass.

Figure 6: Recoil escapement

4 Sample Mechanism

The principles of blocking and pushing discussed here allow us to determine the instantaneous behavior of a mechanism.² Typically lower pair recognition [Gelsey, 1987] will leave at most one degree of freedom for each component of a mechanism, with the only significant forces transmitted through contact. To determine the instantaneous behavior of the mechanism from any given configuration we first compute the constrained motions. After the impossible motions have been eliminated, the intersection of the free directions and the motions transferred by pushing should yield a single set of consistent motions for each part of the mechanism. If there is no consistent set, the mechanism cannot move.

For example, consider the recoil escapement shown in figure 6. The direction of the surface normal of the pallet tooth is N, the direction to the center of rotation of the wheel is W, and the direction to the center of rotation of the pallet is P. In this type of escapement, when the pallet arm contacts the scape wheel, the arm is not sufficiently constrained by the wheel. (Counter-clockwise rotation is possible about an axis counter-clockwise of the pallet's surface normal.) As a result the pallet's continued swing drives the wheel backward (motion transferred in the counter-clockwise direction), causing the entire clock mechanism to move backward (recoil).

5 Discussion

We have presented three aspects of the kinematic analysis of a rigid body device: the constraint of motion, the transfer of motion, and the propagation of external forces. In order to do this we developed a logical theory of rigid body interactions, which provides a symbolic framework for geometric descriptions and laws describing mechanical constraint and motion transfer.
Other work [Nielsen, 1988a] combines this result with a set of all possible configurations of the objects in the form of a place vocabulary [Forbus, 1981; Faltings, 1987; Nielsen, 1988b] and dynamical information (as produced by [Forbus, 1984]) to provide an envisionment which qualitatively describes all possible behaviors of such devices as a mechanical clock.

6 Acknowledgements

I would like to thank Ken Forbus for his support, advice, and criticism. Thanks also to John Collins for providing mechanical engineering expertise.

References

[Cowie, 1961] Cowie, A.: Kinematics and Design of Mechanisms, International Textbook Co., Scranton, PA, 1961.

[Davis, 1986] Davis, Ernest: "A Logical Framework for Solid Object Physics," TR No. 245, New York University, Computer Science Department, 1986.

[deKleer, 1975] deKleer, Johan: "Qualitative and Quantitative Knowledge in Classical Mechanics," TR-352, MIT AI Lab, Cambridge, MA, 1975.

[deKleer and Brown, 1984] deKleer, Johan and Brown, J. S.: "A Qualitative Physics Based on Confluences," Artificial Intelligence 24 (7-83), 1984.

[Dixon, 1986] Dixon, J.: "Will Mechanical Engineers Survive Artificial Intelligence?," Mechanical Engineering, Vol. 108, No. 2 (8-10), 1986.

[Faltings, 1987] Faltings, Boi: "Qualitative Place Vocabularies for Mechanisms in Configuration Space," UIUCDCS-R-87-1360, University of Illinois, 1987.

[Forbus, 1981] Forbus, Ken: "A Study of Qualitative and Geometric Knowledge in Reasoning about Motion," TR-615, MIT AI Lab, Cambridge, MA, 1981.

[Forbus, 1984] Forbus, Ken: "Qualitative Process Theory," TR-789, MIT AI Lab, Cambridge, MA, 1984.

[Gelsey, 1987] Gelsey, Andrew: "Automated Reasoning about Machine Geometry and Kinematics," Third IEEE Conference on Artificial Intelligence Applications, 1987.

[Kuipers, 1986] Kuipers, Ben: "Qualitative Simulation," Artificial Intelligence 29 (289-338), 1986.
[Laughton, 1985] Laughton, S.: "Explanation of Mechanical Systems Through Qualitative Simulation," AITR85-19, University of Texas at Austin, 1985.

[Nielsen, 1988a] Nielsen, Paul: "A Qualitative Approach to Rigid Body Mechanics," PhD thesis, to appear 1988.

[Nielsen, 1988b] Nielsen, Paul: "The Role of Abstraction in Place Vocabularies," to appear 1988.

[Reuleaux, 1876] Reuleaux, Franz: The Kinematics of Machinery, Dover Publications, Inc., New York, 1876.

[Shoham, 1985] Shoham, Yoav: "Naive Kinematics: One Aspect of Shape," IJCAI 9 (436-442), 1985.

[Stanfill, 1985] Stanfill, Craig: "MACK, a Program which Deduces the Behavior of Machines from their Forms," SIGART Newsletter 93 (12-16), 1985.

²Any ambiguity arises from the qualitative representation of spatial directions. The other laws and definitions suffice for more detailed representations.
Global Filters for Qualitative Behaviors¹

Peter Struss
SIEMENS Corp., ZTI INF 22
Otto-Hahn-Ring 6, D-8000 Muenchen
Germany

Abstract

Current methods in qualitative physics sometimes predict behaviors of physical systems that do not correspond to any real-valued solution. One reason is that the merging of distinct behaviors cannot be avoided by local criteria. It is necessary to determine the possible continuations of a qualitative behavior taking into account its complete history. Such global criteria for the partial elimination of spurious solutions are developed for 2nd order differential equations. The application of these filters is shown to reduce the set of behaviors for the mass-spring system predicted by other qualitative physics systems.

1 Introduction

Current methods in qualitative physics (QP) sometimes predict behaviors of physical systems that do not correspond to any real-valued solution. The existence of spurious solutions and their origins have been analyzed for the treatment of differential equations [Kuipers, 86], [Schmid, 88] as well as of algebraic equations [Struss, 87], [Struss, 88a]. For the first case, the reason for the prediction of spurious behaviors is the local nature of the criteria for determining state changes. This paper attempts a continuation and a refinement of this analysis. The problem is: how can we determine the possible continuations of a qualitative behavior taking into account its complete history? Such global criteria for the elimination of spurious behaviors are developed for 2nd order differential equations.

2 Outline of the Paper

The following section presents two simple questions for demonstrating limits of the current QP approaches that are mainly used for inferring qualitative behaviors. In section 4, their common basis is formally described. A fundamental problem in behavior generation is discussed in section 5: the merging of different solutions.
For this purpose, a brief introduction to the analysis of the so-called phase portrait of 2nd order differential equations is given. These techniques are then used in the next section to construct some necessary conditions for filtering out spurious behaviors. The application of these methods is demonstrated by partially answering the questions of section 3. Due to limitations of space, this presentation is restricted to the key ideas and sample examples. For more details, formal definitions, proofs and solutions, see [Struss, 88b].

¹This research was supported by the Federal Government of Germany (ITW 8506 E4).

3 Two Simple Questions

We consider examples from the problem class "mass on a spring", which is the "Tweety of qualitative physics" (Fig. 1).

Figure 1: Mass on a spring

Question 1: If the mass is moved away from the equilibrium point, x = 0, given by the rest length of the spring, to x = x0 > 0, will the mass, after one oscillation, return to exactly x0, exceed it, or turn back before?

[Kuipers, 86] showed that qualitative simulation in the style of QSIM cannot answer this question for the frictionless case based merely on the corresponding differential equation. It derives 3 branches of possible behavior after one oscillation, and worse:

Question 2: What can we tell about the sequence of the maxima during the oscillation?

Since the above argument for the first oscillation applies to each oscillation, we get further branching in the course of the qualitative simulation (by more than a factor of 3, because new landmarks are introduced). The amplitude is allowed to change arbitrarily over time. This also applies to cases with friction.

4 Describing Qualitative Behavior

By (qualitative) behavior we mean the (qualitative) changes in characteristics of a physical system over time. We assume that a system is described by a finite number of characteristic parameters pi.

Struss 275
In the quantitative case, they are real-valued functions. Some of them may be the derivatives of others. In QP, these parameters take on qualitative values, i.e. essentially neighboring open intervals and the "landmarks" separating them, forming quantity spaces Qi which are possibly parameter-specific. A qualitative state is a tuple of qualitative values (q1, ..., qn) for the parameters. Let S denote the set of all states. State transitions are pairs of states forming a relation T ⊆ S × S, where (s, s) ∉ T. Finally, a behavior is a sequence of states b = (..., s-n, ..., s0, s1, ..., sn, ...) ∈ B, either finite or infinite on one or both sides. For a behavior b = (..., sn-1, sn), finite on the right hand side, a (forward) continuation is a behavior bc = (..., sn-1, sn, s'n+1).

Now we state our general problem: Given a description of a system S, and an admissible behavior b, determine the admissible continuations of b. ("Admissible" intuitively means "corresponding to a real solution".) Hence, QP systems have to solve three subtasks (which are not necessarily separate, subsequent steps):

- Filtering states: If b is the empty tuple, this is the task of determining the possible qualitative states of a system, i.e. sets of qualitative values that satisfy the equations of its description.
- Filtering state transitions: For b = (s1) we have the task of determining the possible state transitions, i.e. changes from one state to another that are in accordance with the derivative relations and continuity conditions. This reflects that admissible behaviors can only contain admissible states and state transitions: Ba ⊆ {b = (si) ∈ B | ∀i: si ∈ Sa ∧ (si-1, si) ∈ Ta}, where the subscript "a" denotes "admissible".
- Filtering behaviors: In this paper we are mainly interested in the remaining case, which has not yet been tackled in a satisfactory way: determine the possible behaviors, i.e. "correct" sequences of states (or transitions).
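As a toy illustration of why the first two subtasks alone are not enough (the states and transitions are invented for the example), note that continuations of a finite behavior are generated purely from its last state; this purely local generation is exactly what admits every path through the transition graph:

```python
# Toy sketch: one-step forward continuations (..., s_n) -> (..., s_n, s')
# of a finite behavior, using only admissible transitions.  The history
# before s_n plays no role, which is the source of spurious behaviors.

def continuations(behavior, transitions):
    last = behavior[-1]
    return [behavior + (nxt,) for cur, nxt in transitions if cur == last]

T = [('s0', 's1'), ('s1', 's2'), ('s2', 's3'), ('s2', 's4'), ('s4', 's5')]
b1 = ('s0', 's1', 's2')
# Two continuations survive transition filtering, although at most one
# can belong to the trajectory actually being followed:
assert continuations(b1, T) == [b1 + ('s3',), b1 + ('s4',)]
```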
The existing QP methods offer no criteria for checking the global correctness of these sequences, and, hence, they have to assume that each path through the state transition graph is an admissible behavior.

5 Filtering Behaviors

The generation of spurious behaviors has different sources [Struss, 88b]. One of them is the merging of different system instances or different behaviors of one system instance. In order to "see" this, we make the underlying equations visible.

5.1 Making It Visible: The Equations

For the subsequent analysis, we briefly introduce some basic ideas from a mathematical discipline called the qualitative theory of dynamic systems (see e.g. [Andronov, 66]). In this theory, qualitative results about the solution space of differential equations are gained by applying topological methods. This is possible because of a correspondence between sets of differential equations and vector fields. Consider again the mass on the spring. This system is described by some 2nd order differential equation

(5.1)  d²x/dt² = -M0+(x)

or the equivalent system of first order equations

(5.2)  dx/dt = v
       dv/dt = -M0+(x),

where M0+ is a monotonic function with M0+(0) = 0. (5.2) defines a vector field in the (x, v)-plane by mapping each point (x0, v0) of this plane to the vector of the derivatives in this point:

(dx/dt|(x0,v0), dv/dt|(x0,v0)) = (v0, -M0+(x0)).

Solutions of (5.2) then correspond to those curves ("trajectories") in the plane that in each point have a tangent in the direction of the respective vector. Fig. 2 shows the construction of this vector field and part of a sample trajectory, t0, which corresponds to a damped oscillation. The collection of these trajectories, the so-called phase portrait of the system, looks locally "in principle" like parallels (except for equilibrium points). This implies that trajectories do not intersect or branch etc. (see Fig. 3).
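A numerical sketch of system (5.2), under the assumed instance M0+(x) = x (any monotonic function through the origin would qualify), shows the trajectory circling the origin through all four quadrants of the (x, v) plane:

```python
# System (5.2) with the assumed instance M0+(x) = x, integrated with a
# semi-implicit Euler step (which keeps the undamped orbit nearly
# closed).  One full oscillation visits every quadrant of the plane.

def simulate(x, v, dt=0.001, steps=7000):
    path = [(x, v)]
    for _ in range(steps):
        v -= x * dt          # dv/dt = -M0+(x) with M0+(x) = x
        x += v * dt          # dx/dt = v
        path.append((x, v))
    return path

path = simulate(1.0, 0.0)
quadrants = {(x > 0, v > 0) for x, v in path}
assert len(quadrants) == 4   # the trajectory passes through all four quadrants
```

Adding a damping term, dv/dt = -x - c·v, turns the same loop into an inward spiral like the sample trajectory t0 of Fig. 2.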
Figure 2: Construction of the phase portrait

Figure 3: Impossible phase portraits

Some main characteristics of the phase portrait of system (5.2) are indicated by Fig. 4. It expresses the oscillatory behavior, but does not decide upon the question whether the system really exhibits a cyclic behavior. Starting at an arbitrary point (x0, 0) on the negative x-axis, the respective trajectory, t0, first stays in the quadrant x < 0, v > 0, then, intersecting the positive v-axis, continues in the quadrant x > 0, v > 0 and leaves it by reaching some point (x1, 0) on the x-axis.

How does this representation relate to the description of the qualitative behaviors derived by QP methods? The quantity spaces for x and v impose a grid on the plane. Each of the rectangles corresponds to 9 qualitative states: its corners represent the states where both x and v take on landmark values (e.g. s0 in Fig. 4), its interior is a state where x and v are between landmarks (e.g. s1), and the edges exclusive of their endpoints (the corners) are states with only one variable crossing a landmark (e.g. s2). Each trajectory defines a behavior in the sense of section 4, namely the sequence of states it crosses (Fig. 4).

Figure 4: Definition of a behavior by a trajectory (with the sequence of states s0, ..., s6 crossed by t0)

Figure 5: Merging different behaviors

5.2 Merging Different Solutions

Consider the pieces of the trajectories, t1 and t2, in Fig. 5. They introduce the admissible state transitions (s0, s1), (s1, s2), (s2, s3), (s2, s4), (s4, s5). Having constructed a behavior b1 = (s0, s1, s2) (which is admissible, since it represents a piece of t1), we have (at least) two possible continuations for b1, b2 = (s0, s1, s2, s3) and b3 = (s0, s1, s2, s4).
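How the landmark grid induces qualitative states can be sketched as follows (the encoding, with landmarks as floats, open intervals as pairs, and None marking an unbounded side, is mine):

```python
# Mapping a point of the (x, v) plane to its qualitative state on the
# landmark grid: each coordinate is either a landmark value or the open
# interval between two neighbouring landmarks (assumed sorted ascending).

def qualitative_value(q, landmarks):
    for i, lm in enumerate(landmarks):
        if q == lm:
            return lm                                  # exactly on a landmark
        if q < lm:
            return (landmarks[i - 1], lm) if i else (None, lm)
    return (landmarks[-1], None)

def qualitative_state(point, landmark_grid):
    return tuple(qualitative_value(q, lms) for q, lms in zip(point, landmark_grid))

# With landmarks x0 = -1, 0 for x and 0, v0 = 2 for v, the point (-1, 1)
# lies on an edge state: x on the landmark x0, v between landmarks.
assert qualitative_state((-1.0, 1.0), ([-1.0, 0.0], [0.0, 2.0])) == (-1.0, (0.0, 2.0))
```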
Neither of them can be ruled out by the step of state transition filtering, although only b2 corresponds to a trajectory of the specific system under consideration, whereas b3 merges two solutions of the system with different initial conditions. This is the reason why question 1 in section 3 is answered by QP with a branching of the behavior. It was correctly identified in [Kuipers, 86] as caused by the merely local nature of the state transition filtering. Nor can we unambiguously infer a damped behavior for the occurrence of friction: Fig. 5 indicates that if the damped trajectory t2 returns to state s4 as the arc t1 does, the existing filters do not forbid a "jump" back to t2 and, hence, a spurious cyclic behavior.

5.3 Merging Distinct Systems

Of course, we can argue that the behavior b3 = (s0, s1, s2, s4) may not be admissible for the system sketched in Fig. 5. However, the qualitative description covers a whole family of systems. Could there not be an appropriate choice of a parameter such that b3 is admissible for the corresponding system (and b2 is not)? Yes, this might happen, although it is hard to prove it for a specific case. But subsequent continuations of b3 may require choices between state transitions that imply a different choice for the range of the parameter. Hence, in combining admissible state transitions we are not prevented from jumping between different instances of a class of systems and merging their behaviors, thus potentially generating behaviors which are not admissible for any single system. In the following section, we construct some filters for sequences of state transitions.

6 Exploiting the Uniqueness of Solutions

We do so mainly by taking advantage of the property that trajectories cannot intersect, because otherwise we would get different solutions for the same initial conditions. The filters apply to 2nd order differential equations. We have to emphasize that this choice has not only been made for the sake of simplicity of the examples.
It mainly reflects the fact that only for this case (i.e. vector fields in the plane) do we expect strong results (with the 3rd dimension, chaos starts).

6.1 Avoidance of System Merging

Filters aiming at this goal need to identify behaviors that do not correspond to solutions of the same system. We introduce a symmetric binary relation exclusive ⊆ B × B. exclusive(b1, b2) means that there exists no system instance for which both behaviors, b1 and b2, are admissible. We know that different trajectories passing through one point of the plane cannot belong to the same system (Fig. 3). The problem is how to recognize the respective behaviors, which run through qualitative states instead of points. One case is easily solved, namely if states are involved that correspond to points. Hence we call a state s = (q1, q2) a landmark state if all the qi are landmarks. If exactly one of the qi is a landmark, we call it an edge state.

We define the relation convergent ⊆ B × B × S:

convergent(b, b', s0) :⇔ b = (..., s-1, s0) ∧ b' = (..., s'-1, s0) ∧ s-1 ≠ s'-1,

and analogously a relation divergent for b = (s0, s1, ...), b' = (s0, s'1, ...). Then we have the

Proposition 6.1 If b and b' are convergent to or divergent from a landmark state or an edge state, then exclusive(b, b') holds.

If two behaviors converge to a state that is not an edge or landmark state, it appears to be difficult to detect a case like the one shown in Fig. 3b. There is some hope, however, to catch situations like Fig. 3c on the qualitative level. If a behavior b approaches another one, b', "from the left" and leaves it "to the right", we have the relation crossing(b, b'). Assume we have ways to check whether this relation holds; then we can make use of it by the

Proposition 6.2 Crossing behaviors are exclusive: crossing(b, b') ⇒ exclusive(b, b').

We may detect spurious behaviors with the obvious

Proposition 6.3 exclusive(b, b) ⇒ spurious(b).

For example, a behavior crossing itself is spurious.
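A sketch of the convergent relation and of Proposition 6.1 restricted to landmark states (behaviors as tuples of states, a state as a tuple of qualitative values; the encoding and names are mine):

```python
# Convergence of two behaviors in a common final state, and the special
# case of Proposition 6.1 where that state is a landmark state.

def convergent(b1, b2, s0):
    """b1 and b2 both end in s0 but arrive from different predecessors."""
    return b1[-1] == s0 == b2[-1] and b1[-2] != b2[-2]

def landmark_state(s, landmarks):
    return all(q in landmarks for q in s)

def exclusive_by_prop_6_1(b1, b2, landmarks):
    """Convergence in a landmark state proves the behaviors exclusive."""
    s0 = b1[-1]
    return convergent(b1, b2, s0) and landmark_state(s0, landmarks)

LMS = {'0', 'x0', 'x1', 'v0'}
b1 = (('x0', '0'), ('i1', 'i2'), ('x1', '0'))   # i* stand for open intervals
b2 = (('x0', '0'), ('i3', 'i4'), ('x1', '0'))
assert exclusive_by_prop_6_1(b1, b2, LMS)        # two roads into (x1, 0)
```

A landmark state corresponds to a single point of the plane, so two behaviors entering it from different states would force two trajectories through one point.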
Exclusiveness (and crossing) is monotonic w.r.t. behavior inclusion: b ⊆ b' is true when there is a sequence of continuations of b establishing b'.

Proposition 6.4
b1 ⊆ b'1 ∧ exclusive(b1, b2) ⇒ exclusive(b'1, b2),
b1 ⊆ b'1 ∧ crossing(b1, b2) ⇒ crossing(b'1, b2),
and, hence,
b1 ⊆ b ∧ b2 ⊆ b ∧ exclusive(b1, b2) ⇒ spurious(b).

Sometimes we can infer the negation of exclusiveness, i.e. for some b, b' there exists a system allowing both b and b' (see section 6.3). We are allowed to combine behaviors only if they belong to the same solution. For b = (..., s-2, s-1, s0) and b' = (s0, s1, s2, ...), we define the behavior union b ∪ b' = (..., s-2, s-1, s0, s1, s2, ...).

Proposition 6.5 Let s0 be a landmark state and b = (..., s-2, s-1, s0) and b' = (s0, s1, s2, ...); then
¬exclusive(b, b') ⇒ ¬exclusive(b ∪ b', b) ∧ ¬exclusive(b ∪ b', b').

Before demonstrating the use of these criteria, we have to provide a way to check the crossing relation.

6.2 Detecting Crossing Behaviors

As stated above, we hope to check qualitatively whether a behavior approaches another one from one side and leaves it towards the other side. In between, they may share a sequence of states.

Proposition 6.6 Let b = (..., s(c-1), sc, ..., sd, s(d+1), ...) and b' = (..., s'(c-1), sc, ..., sd, s'(d+1), ...). Then
convergent-left(b, b', sc) ∧ divergent-right(b, b', sd) ⇒ crossing(b, b').

We will demonstrate the algorithm for checking the relation convergent-left, denoting that the first behavior joins the second from the left. The other check is then obvious. Because of Proposition 6.1 we need only consider the case where sc is the interior of a rectangle. Such a state can be entered from at most 8 neighboring states. An example is shown in Fig. 6. Fig. 6a shows the trajectories in the landmark grid. In Fig. 6b, the same situation is transferred to the state space under preservation of the topology of the plane. b' is assumed to proceed from sc to s3.
Figure 6: (a) Crossing trajectories; (b) crossing behaviors; (c) transitions and their Δ

We characterize each of the eight transitions (s, sc) by Δ(s, sc), which is the pair of the changes, Δ, of the respective qualitative values (x and v, in our case):

Δ(s, sc) = Δ((x, v), (x', v')) = (Δ(x, x'), Δ(v, v')),

where
Δ(x, x') = ↓ :⇔ x' < x
Δ(x, x') = → :⇔ x' = x
Δ(x, x') = ↑ :⇔ x' > x

In our example, we have Δ(s1, sc) = (↑, ↓) and Δ(s2, sc) = (↑, ↑). Running counterclockwise through the state transitions to sc (see Fig. 6c) corresponds to stepping through the circular list

LT = ((↓, ↑), (↓, →), (↓, ↓), (→, ↓), (↑, ↓), (↑, →), (↑, ↑), (→, ↑), (↓, ↑), ...).

Now we formulate our criterion: Let b = (s1, sc, ...) and b' = (s2, sc, s3) with convergent(b, b', sc). If Δ(s1, sc) is in the sublist of LT that is started by Δ(s3, sc) and ended by Δ(s2, sc), then convergent-left(b, b', sc).

In the example, we start at Δ(s3, sc) = (↓, →) and end in Δ(s2, sc) = (↑, ↑). Δ(s1, sc) = (↑, ↓) lies between these elements; hence, the criterion is satisfied. Since divergent-right can be checked in a similar way, we are now able to detect crossing behaviors. We demonstrate that some progress is achieved by answering the questions of section 3.

6.3 Symmetry - Inferring Cyclic Behavior

Consider again question 1 of section 3. We are now able to infer the cyclic behavior of all solutions to (5.2). The idea is the following: Looking at (5.2), we realize that the application of the transformations t' = -t and v' = -v leads to

(6.1)  dx/dt' = v'
       dv'/dt' = -M0+(x),

which is of the same form as (5.2). This means we are able to derive the phase portrait in the half plane v < 0 by merely mirroring the v > 0 half plane at the x-axis (and reversing the orientation of the trajectories). Hence the trajectory continuing beyond (x1, 0) is the mirror image of the curve we started with and therefore hits the x-axis again at (x0, 0), thus establishing a closed curve (Fig. 7) that corresponds to oscillation with a constant amplitude.
Figure 7: Symmetry of behavior    Figure 8: Damped oscillation

In the framework introduced in this paper, we can express the following general principle: Consider two systems sys1 and sys2 and the transformations τt : t → -t and τv : v → -v.

Proposition 6.7
Let sys2 = τv(τt(sys1)) and, for s = (x,v), let sv := (x,-v). Then
b = (..., s1, s2, ...) ∧ bτ = (..., s2v, s1v, ...) ⇒ ¬exclusive(b, bτ).

If we have generated a behavior b1 = (s0, s1, ..., sn) (see Fig. 7), we know for b1τ = (sn, s_{n-1},v, ..., s1v, s0) that ¬exclusive(b1, b1τ) holds. (Note that s0 = s0v and sn = snv, because v = 0.) Since sn is a landmark state, Proposition 6.5 then yields

Corollary 6.8
For the system (5.2), ¬exclusive(b1 ∪ b1τ, b1), and b1 ∪ b1τ is cyclic!

6.4 Inferring Steady Damping

Question 2 is concerned with the identification of a global tendency of behavior, namely with the problem of arbitrary changes in the subsequent maxima of the oscillation. This problem is solved for the frictionless case by the result of the previous section. However, it also occurs for the case with friction. Using our filter criteria, we are now able to deduce immediately that if the oscillation is damped in the first period, it will always be damped: Let s0 = (x0, 0) be one maximum, and s0' = (x0', 0) with x0' < x0 the next one (Fig. 8). Since the maximum could only be increased again if the solution crosses itself, Propositions 6.3 and 6.4 detect it to be spurious. The return to s0' is also excluded, because in this case

convergent((s0, ..., s0'), (s0', ..., s0'))
⇒ exclusive((s0, ..., s0'), (s0', ..., s0'))  by Prop. 6.1
⇒ exclusive((s0, ..., s0', ..., s0'), (s0, ..., s0', ..., s0'))  by Prop. 6.4
⇒ spurious((s0, ..., s0', ..., s0'))  by Prop. 6.3.

7 Summary

Our approach to expressing restrictions imposed by 2nd order differential equations is essentially based on the uniqueness of solutions for fixed initial conditions.
The criteria can be used to discriminate behaviors that belong to different system instances and to discover spurious behaviors. They enable us to derive cyclic behavior for the frictionless mass-spring system and the principle "once-damped-always-damped" for the case with friction. Similar methods can be used, for example, to infer damping for the mass with friction [Struss, 88b].

Acknowledgements: I would like to thank Egbert Brieskorn, who introduced me to the qualitative theory of dynamic systems, and Adam Farquhar and Hartmut Freitag for commenting on a draft of this paper.

References

[Andronov, 66] Andronov, A.A., Vitt, A.A., Khaikin, S.E., Theory of Oscillators; Oxford, 1966
[Kuipers, 86] Kuipers, B., Qualitative Simulation, Artificial Intelligence, Vol. 29, No. 3, 1986
[Schmid, 88] Schmid, L., Impediments to a Qualitative Physics Based on Confluences, In: Fruechtenicht et al. (eds), Technische Expertensysteme, Munich 1988
[Struss, 87] Struss, P., Problems of Interval-Based Qualitative Reasoning, In: Fruechtenicht et al. (eds), Technische Expertensysteme, Munich 1988; also SIEMENS Tech. Rep. INF 2 ARM-1-87
[Struss, 88a] Struss, P., Mathematical Aspects of Qualitative Reasoning; to appear in: International Journal for Artificial Intelligence in Engineering, July 1988
[Struss, 88b] Struss, P., Mathematical Aspects of Qualitative Reasoning - Part Two: Differential Equations; SIEMENS Tech. Rep. INF 2 ARM-7-88

Struss 279
Kenneth Man-Kam Yip
MIT Artificial Intelligence Laboratory
NE 43-438, 545 Technology Square, Cambridge, MA 02139

Abstract

Even with powerful numerical computers, exploring complex dynamical systems requires significant human effort and judgment to prepare simulations and to interpret numerical results. This paper describes one aspect of a computer program, KAM, that can autonomously prepare numerical simulations, and can automatically generate high-level, qualitative interpretations of the quantitative results. Given a dynamical system, KAM searches in the phase space for regions where the system exhibits qualitatively distinct behaviors: periodic, almost-periodic, and chaotic motion. KAM uses its knowledge of dynamics to constrain its searches. This knowledge is encoded by a grammar of dynamical behavior in which the primitives are geometric orbits, and in which the rules of combination are orbit adjacency constraints. A consistency analysis procedure analogous to Waltz's constraint satisfaction algorithm is implemented. The utility of the approach is demonstrated by exploring the dynamics of nonlinear conservative systems with two degrees of freedom.

1 Introduction

From the study of star and galaxy formation, to aircraft and wing design, to blood flow in the heart, and to microelectronics device modeling, scientists and engineers are confronted with nonlinear equations that are inaccessible to analytical methods. Although powerful numerical computers can painlessly generate quantitative particular solutions to the equations, understanding the qualitative content of these equations requires substantial human effort and judgment to prepare numerical simulations, and interpret numerical results in qualitative terms.
This paper demonstrates that by combining ideas from symbolic computation with methods incorporating deep knowledge of the underlying dynamics of the physical domain, one can build an effective program that autonomously explores the behaviors of a specified dynamical system, and generates high-level, qualitative interpretations of the quantitative data.

¹This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-86-K-0180.

Exploring a dynamical system involves two tasks: (1) generate all typical responses of the system by sampling a sufficient number of starting conditions, and (2) describe how these typical responses change their characters as the system parameters are varied.

Studying the swing of a simple pendulum provides a good illustration of these tasks. An experimenter observes how the pendulum, starting at various initial states (angular position and velocity of the bob), swings. He may repeat the experiments by controlling the surrounding conditions such as air resistance and gravity. From the experimental results, he would like to answer questions like the following: Is the motion stable? Under what conditions will the pendulum oscillate? In short, one wants to know the typical responses of the system without actually solving the equations governing the pendulum motion.

Blind exhaustive testing of every point of the phase space (the space of all possible starting conditions) to find out the system responses is out of the question because the number of possible starts is overwhelming. Moreover, interesting qualitative changes in behavior often occur in so small a region in the phase space that unguided experiments will likely miss them. One way to meet these difficulties is to exploit knowledge of the natural constraints in the physical problem.
Fluid flow provides a useful illustration. Fig. 1 depicts a flow pattern in some small region; it shows four flow lines. The flow pattern, as it stands, is not complete: some important feature is missing. Let us see why. Since the horizontal flow lines are going in opposite directions, there must be a line of fluid particles whose velocity vectors have zero horizontal components. A similar argument about the vertical flow lines shows there must be a line of fluid particles whose velocity vectors have zero vertical components. In general, these two lines of fluid particles will intersect transversely. The point of intersection, which has a zero velocity vector, is known as a stagnant point.¹ With the stagnant point added, the local flow pattern becomes consistent.

Now, we want to turn this physical insight around, and use knowledge about possible local flow patterns to derive an understanding of the global behaviors of a physical system. KAM² is an implemented program that embodies this type of knowledge. KAM describes dynamical behaviors with a geometric language.³ An orbit in the geometry world represents a particular response in the physical world. The totality of the system behavior is represented in KAM by an Orbit Adjacency Graph. A node of the Orbit Adjacency Graph represents an orbit; an arc linking two nodes stands for an adjacency relation between two orbits. KAM exploits two types of domain knowledge.

¹This argument can be made rigorous using the Index Theorem on singular points of vector fields [Arnold, 1973].
²The initials stand for three mathematicians: Kolmogorov, Arnold, and Moser.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Figure 1: Left: Inconsistent flow lines. Right: With additional flow lines and a stagnant point, the flow becomes consistent.
First, KAM classifies all possible orbits into a few geometric categories such as a periodic orbit, a closed orbit, an island chain, and a chaotic region. Second, KAM has a list of orbit consistency rules; these rules impose adjacency constraints on neighboring orbits.

One way to look at KAM is to contrast its consistency analysis with Waltz's constraint analysis in line drawings [Waltz, 1975]. In Waltz's problem, the junction labels are ambiguous, but the connections among neighboring junctions are known. In KAM's problem, the orbit interpretations are unambiguous, but the connections among neighboring orbits are not known. Whereas Waltz's constraint analysis seeks to eliminate impossible line interpretations, KAM's consistency procedure looks for new orbits to eliminate impossible orbit connections. Waltz's procedure starts with a complete network of connections. KAM, on the other hand, builds the connection network incrementally. A common theme in both works is the emphasis on finding the correct categorization of primitive objects that interact in a well-defined and tightly constrained way.

KAM has three parts: (1) an Orbit Recognition Module, (2) a Phase Space Searching Module to explore the space of initial states, and (3) a Parameter Space Searching Module to explore the space of parameter values. This paper describes only one aspect of KAM, namely, how it explores the phase space for a fixed parameter value. A previous paper [Yip, 1987] shows how orbit recognition is done by using minimal spanning trees. The problem of searching in the parameter space, and its solution, will be described in a forthcoming dissertation [Yip, 1988]. KAM is implemented in Zetalisp; all the examples in the rest of this paper are actual output of the KAM program.

The current work complements recent research in qualitative physics [Bobrow, 1985] in two directions.

³Describing dynamical phenomena geometrically is a great insight due to Henri Poincaré.
⁴The domain knowledge is based on deep mathematical facts; it does not come from ad hoc stipulations.
First, it studies a new domain, a nontrivial class of dynamical systems having essential nonlinearity, which brings up hard issues of representing complicated qualitative behaviors: multiple steady states, periodic, almost-periodic, and chaotic motion. Second, the complexity of behavior necessitates a style of qualitative reasoning that emphasizes the overall shape of the motion rather than the details of an individual quantity's variation with time. So, while almost all works in qualitative physics are based on some form of qualitative calculus and time-series representation of time-varying behaviors, this research employs a geometric representation of behavior that is based on the Dynamical Systems Theory [Smale, 1967]. An orbit representing a particular system behavior is a static, unchanging object which can be easily visualized, and be compared and contrasted with other orbits.

The paper is organized as follows. The next section defines some terminologies. Section 3 describes the target domain; it introduces a nonlinear discrete mapping which serves as an example throughout the paper. Section 4 examines the constraints imposed by the problem domain. Section 5 presents KAM's consistency analysis algorithm. Section 6 shows KAM's performance for a particular parameter value of the mapping. Finally, section 7 explains the performance and discusses the limitations of the current implementation.

2 Terminology

To begin with, we must understand something of dynamical systems. The state of a system at any time t0 is a minimum set of values of variables (x1, ..., xn) which, along with the input to the system for t ≥ t0, is sufficient to determine the behavior of the system for all time t ≥ t0. The variables (x1, ..., xn) are called the state variables. A dynamical system has two ingredients: (1) the state, and (2) the evolution law. As the system evolves with time, the state traces out a path in the phase space; the path is called an orbit or a trajectory. A phase portrait is a partition of the phase space into orbits.
The evolution law determines how the state evolves with time. In a discrete-time system, the evolution law is given by a system of difference equations, or, more abstractly, a function f : X → X where X is the phase space of the discrete system. The function f which defines a discrete dynamical system is called a mapping, or a map, for short. The multipliers of the map f are the eigenvalues of the Jacobian of f. An area-preserving map is a map whose Jacobian has a unit determinant. In the evolution law, variables that are not state variables are called parameters. A one-parameter family of maps (fa) is a class of maps indexed by the same parameter a.

The set of iterates of f, { f^n(x) | n ∈ Z }, is called the orbit of x relative to f. The point x is a fixed point of f if f(x) = x. A fixed point x is stable, or elliptic, if all the multipliers of f at x lie on the unit circle; it is unstable, or hyperbolic, otherwise. The point x is a periodic point of period n if f^n(x) = x. The least positive n for which f^n(x) = x is the period of x. The set of all iterates of a periodic point forms a periodic orbit.

Yip 281

Although differential equations are the most common tool for modeling continuous phenomena, I choose to study difference equations for two reasons. First, in practice, iterating maps is far easier than integrating differential equations; one can generate data a thousand times faster. Second, mathematically difference equations exhibit the same class of behavior as differential equations [Smale, 1967]; so there is no loss of generality. The price, however, is that it is sometimes not obvious what physical situation a given set of difference equations corresponds to.

3 The Task

KAM explores dynamical systems characterized by a one-parameter family of nonlinear area-preserving maps.
This class of problems is important because many problems in physics (the restricted 3-body problem, orbits of particles in accelerators, and two coupled nonlinear oscillators, for instance) can be reduced to the study of area-preserving maps.

KAM takes three inputs: (1) a one-parameter family of area-preserving maps, (2) the ranges of the state variables, and (3) the range of the parameter. The output is a family of phase portraits. Each phase portrait is partitioned into regions belonging to one of three types: first, a regular region containing periodic or almost-periodic orbits⁵; second, a chaotic region containing chaotic or unbounded orbits; third, an intermediate region between the first two consisting of large island chains. The family of phase portraits records the history of qualitative changes in the phase space structure as the parameter is varied. Let us focus on describing how one phase portrait is produced for a specified parameter value.

Consider a typical area-preserving map studied by Henon [Henon, 1969]:

x_{n+1} = x_n cos α − (y_n − x_n²) sin α
y_{n+1} = x_n sin α + (y_n − x_n²) cos α

where x_n ∈ (−1, 1), y_n ∈ (−1, 1), and α ∈ (0, 2.2). The Henon map is historically important because it shows how the addition of a simple quadratic nonlinearity (via x_n²) to a linear rotation can lead to dramatic changes in the behavior of the system.

Fig. 2, taken directly from Henon's paper, displays the phase portrait of the Henon map for a particular parameter value α = 1.3284305 (cos α = 0.24). Fig. 2 depicts nine representative orbits of the phase portrait. Near the elliptic fixed point at the origin is a regular region consisting of three closed orbits. Just inside the outermost closed orbit lie two chains of five smaller closed curves. Islands of the chains are separated by a separatrix, the closed curve with five loops.
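Henon's map is easy to experiment with directly. The sketch below iterates it and checks the area-preserving property numerically (the Jacobian determinant of the map is identically 1); the function and constant names are our own:

```python
import math

# Sketch for experimenting with Henon's quadratic area-preserving map;
# ALPHA matches the paper's example value, cos(alpha) = 0.24.
ALPHA = math.acos(0.24)

def henon(x, y, a=ALPHA):
    """One iteration of the map."""
    return (x * math.cos(a) - (y - x * x) * math.sin(a),
            x * math.sin(a) + (y - x * x) * math.cos(a))

def jacobian_det(x, y, a=ALPHA, h=1e-6):
    """Numerical Jacobian determinant; area preservation means it is 1."""
    fx1, fy1 = henon(x + h, y, a)
    fx0, fy0 = henon(x - h, y, a)
    gx1, gy1 = henon(x, y + h, a)
    gx0, gy0 = henon(x, y - h, a)
    dfx_dx = (fx1 - fx0) / (2 * h)
    dfy_dx = (fy1 - fy0) / (2 * h)
    dfx_dy = (gx1 - gx0) / (2 * h)
    dfy_dy = (gy1 - gy0) / (2 * h)
    return dfx_dx * dfy_dy - dfx_dy * dfy_dx
```

The origin is the elliptic fixed point mentioned above (henon(0, 0) returns (0, 0)), and the determinant evaluates to 1 at any point, independent of the quadratic term.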
Finally, outside the regular region is a chaotic orbit whose iterates no longer lie on a curve, but seem rather to fill a two-dimensional region in a chaotic manner.

KAM's task is to search the phase space for such representative orbits. Note in particular the 5-loop separatrix enclosing the island chain. Because the separatrix is confined to such a small region of the phase space, the chance of finding this orbit by random selection of initial states is almost zero. Later in the paper, we will see how KAM can discover the separatrix in a few trials.

⁵See section 4.1 for the relevant definitions.

Figure 2: A typical phase portrait of the Henon map (cos α = 0.24). Prominent features: an elliptic fixed point at the origin, two large 5-island chains, a separatrix, and a chaotic region.

4 Characteristics of the Task

4.1 Primitive Orbits

A fundamental property of conservative (Hamiltonian) systems with two degrees of freedom is the area-preserving property: a bundle of initial points covering a small region is mapped onto another region with the same area. The shape of the region may change, but the area is invariant under the map. This property severely limits the possible long-time behaviors of orbits. In the following, I enumerate the six possible ways in which the iterates of an area-preserving map can appear in the phase space⁶ (see Fig. 3):

1. Periodic Orbit. A finite number of N iterates are encountered repeatedly.
2. Almost-periodic Orbit. The iterates densely fill a smooth closed curve surrounding an elliptic fixed point, but they never repeat themselves.
3. Island Chain. The iterates form almost-periodic orbits around a stable periodic orbit. The number of islands (or loops) in the chain is equal to the period of the periodic orbit enclosed by the chain.
4. Separatrix. A separatrix is an orbit joining hyperbolic periodic points. The number of loops of a separatrix is equal to the period of its associated periodic orbit.
5. Chaotic Orbit.
The iterates form a random splatter of points that fills up some area of the phase space.
6. Escape Orbit. The iterates approach arbitrarily large values in the phase space; they form an unbounded orbit.

⁶The categorization depends on two mathematical results: (1) orbits in the phase space do not cross, and (2) the phase space of a Hamiltonian system with two degrees of freedom is diffeomorphic to a solid 2-torus. See [Arnold, 1978].

Figure 3: Primitive orbit types: (a) Periodic orbit (period 5 is shown), (b) Almost-periodic orbit, (c) Island chain (a 5-island chain is shown), (d) Separatrix (a 5-separatrix is shown), (e) Chaotic orbit (an escape orbit looks the same except it eventually becomes unbounded).

4.2 Continuity of Rotation Number

As suggested in the introduction, two neighboring flow lines must satisfy a continuity requirement: a continuous flow cannot have a sudden change in flow direction. There is a direct analog for discrete maps. An orbit of a discrete map is no longer a smooth line, but consists of points jumping from one place to another. The crucial quantity in the discrete case turns out to be the rotation number of an almost-periodic orbit.

The rotation number measures the asymptotic average of the angular distances between any two successive iterates, in units of 2π radians [MacKay, 1982]. A periodic orbit can be thought of as a degenerate almost-periodic orbit whose rotation number is rational. For example, the iterates of an orbit with rotation number 1/5 repeat themselves after every five times. An almost-periodic orbit always has an irrational rotation number.

The concept of rotation number is important because it can be shown [MacKay, 1982] that the rotation numbers of almost-periodic orbits change continuously. We can exploit the continuity of rotation number to locate periodic orbits and island chains as follows. Consider two nearby almost-periodic orbits having rotation numbers ρ1 and ρ2 respectively.
Suppose ρ1 is slightly smaller than 1/5, and ρ2 slightly larger. By continuity, there must exist a third, nearby orbit with rotation number exactly equal to 1/5. In other words, a periodic orbit of period 5 must exist between the two almost-periodic orbits.⁷

⁷Since there are infinitely many rationals between any two irrationals, infinitely many periodic orbits exist. However, the size of the island chain enclosing a periodic orbit decreases exponentially as the period of the periodic orbit increases [Arnold, 1978]. So, in practice, periodic orbits with large periods will be too tiny to be seen.

The rotation number of an almost-periodic orbit is represented by an open interval with rational endpoints. For example, ρ ∈ (p q) means the rotation number lies somewhere between the reduced rationals⁸ p and q. Two rotation numbers are said to be compatible if their associated open intervals are not disjoint. An integer k is compatible with a rotation number if k is equal to the denominator of one of the endpoints of the associated open interval.

5.1 Pairwise Orbit Consistency

The key strategy underlying KAM's search for orbits in the phase space is to focus the search in regions where neighboring orbits are inconsistent. This section describes KAM's rules for determining pairwise orbit consistency. As explained before, KAM recognizes six primitive orbit types. Considering consistency, we can make two simplifications. First, a periodic orbit is thought of as a degenerate almost-periodic orbit with rational rotation number. Second, neither a chaotic orbit nor an escape orbit constrains its neighboring orbits. With three orbit types (almost-periodic, island chain, separatrix) the number of possible pairwise combinations is 6. The inconsistent combinations are described by the rules below⁹:

RULE 1: Missing Islands. Two almost-periodic orbits with incompatible rotation numbers are not consistent.
RULE 2: Missing Separatrix. An island chain and an almost-periodic orbit are not consistent: a separatrix is missing.
RULE 3: Missing Orbit-1. Two island chains with different numbers of islands are not consistent.
RULE 4: Missing Orbit-2. A separatrix with k loops is inconsistent with an almost-periodic orbit having a rotation number incompatible with k.
RULE 5: Missing Orbit-3. A separatrix with k loops is inconsistent with an island chain with m islands unless k = m.
RULE 6: Missing Orbit-4. Two separatrices having different numbers of loops are not consistent.

5.2 Consistency Complaints

When two neighboring orbits are inconsistent, a complaint will be made. A consistency complaint is a data structure describing the nature of the complaint. Specifically, it records the type of complaint, and the identity of the orbits involved.
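The rotation number introduced in section 4.2 can be estimated directly from an orbit's iterates. The sketch below (our own illustration, not KAM's orbit-recognition code) measures the mean angular advance about a chosen center, in units of 2π:

```python
import math

# Estimate the rotation number of an orbit: the average angular distance
# between successive iterates, in units of 2*pi (section 4.2).
# This is an illustrative sketch, not KAM's actual implementation.
def rotation_number(orbit, center=(0.0, 0.0)):
    cx, cy = center
    angles = [math.atan2(y - cy, x - cx) for x, y in orbit]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        total += (a1 - a0) % (2 * math.pi)   # angular advance in [0, 2*pi)
    return total / (len(angles) - 1) / (2 * math.pi)
```

For a rigid rotation by an angle α (the linear part of Henon's map) every advance is exactly α, so the estimate is α/2π; near a 5-island chain the estimate approaches 1/5.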
An island chain and an almoe orbit are not consistent: a separa trix if4 missing. Q RULE 3: Missing A Sri0 Q island chains with different num i&n are not con&- tent. e RULE 4: Missing loops is inconsistent having rotation numb P- 8 RULE 5: Missing Orbit-3. A separatrix with loops is inconsistent with an island chain with m is- lands unless k = m. 0 RULE& g Orbit-& Two separatrices having different of loops are not consistent. onsistency Comg When two neighboring orbits are inconsistent, a complaint will be made. A consistency camqhht ie a data strut- ture describing the nature of the complaint. Specifically, it records the type of complaint, and the identity of the orbits involved. ask Algorithm The basic data structure is an Orbit Adjacency Graph. The graph has a single type of node, and a single type of ‘A reduced rational is a rational whose numerator and de- nominator have no prime factor in common. ‘These r&a have not been proven valid, but they are consistent with what is rigorously known about Hamiltonian dynamics. Yip 283 link. Each node in the graph represents an orbit. A link between two nodes is valid if the orbits in question are adjacent in the phase portrait. A link is inconsistent if the adjacency cannot be part of a legal flow pattern allowed by local dynamics. Consistency analysis is the process of updating adja- cency links: create new links, remove invalid links, and identify inconsistent links. The process has two purposes: (1) maintain correct adjacency relations between orbits as new orbits are added, and (2) create a complaint for each inconsistent link. The complaints are stored in a stack, the cornplaint-stack. A complaint is removed from the stack if either the adjacency link causing the complaint is no longer valid, or the complaint does not lead to useful new orbits. KAM continually searches for new orbits until the complaint-stack is emptied. The principal steps of consistency analysis are: 1. Initially, pick some random initial states. 
Create orbits corresponding to these states.
2. Add the newly created orbits to the Orbit Adjacency Graph.
3. Update adjacency links. Produce a list of invalid adjacency links to be removed, and a list of new adjacency links to be added.
4. For each new adjacency link to be added, run the inconsistency rules against it. If the link is inconsistent, a new complaint is made. Put the complaint on top of the complaint-stack. Add the new link to the graph.
5. For each invalid adjacency link to be removed, delete, if any, its associated complaint in the complaint-stack. Remove the link from the graph.
6. Handle the complaint on top of the complaint-stack. Examine the type of complaint, and propose a list of new initial states to try.
7. Create a new goal with the suggested initial states. Make new orbits starting from these initial states.
8. Repeat the process (steps 2 to 7) until the complaint-stack is empty.

5.4 Suggestion Rules

After a complaint is made, KAM searches for the missing orbits to restore consistency. The purpose of the suggestion rules is to propose new initial states. Currently, KAM has six suggestion rules corresponding to the six inconsistency situations. The basic idea here is to do a "bisection search" in the region delimited by the two offending orbits. For example, if two almost-periodic orbits are inconsistent, the midpoints between the two sets of iterates are promising initial states to try.

5.5 Single Orbit Consistency

Besides the pairwise orbit consistency rules, KAM has rules that act on single orbits. For example, the Boundary Circle Rule extends a regular region as far as possible until a boundary circle (an almost-periodic orbit that is isolated on at least one side from other almost-periodic orbits) is encountered. A second example is the Empty Circle Rule, which says: An almost-periodic orbit that encloses a large region, and does not enclose any other orbit, is not consistent.
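Sections 5.1 through 5.4 can be condensed into a small schematic: a pairwise rule checker and a driver loop that keeps adding suggested orbits until no complaints remain. Everything below is a simplified sketch with names of our own choosing (Rule 4 is omitted because its statement is garbled in this copy of the paper); it is not KAM's actual Zetalisp implementation.

```python
# Schematic of the consistency analysis: check_pair encodes a subset of the
# pairwise rules of section 5.1; explore is a stripped-down version of the
# eight-step loop (one complaint handled per round, links recomputed from
# scratch each round). The orbit representation is illustrative.

def compatible(i1, i2):
    """Rotation numbers as open intervals (lo, hi): compatible iff overlap."""
    return max(i1[0], i2[0]) < min(i1[1], i2[1])

def check_pair(o1, o2):
    """Return a complaint type for an inconsistent adjacent pair, else None."""
    t1, t2 = o1["type"], o2["type"]
    pair = {t1, t2}
    if pair == {"almost-periodic"}:
        if not compatible(o1["rho"], o2["rho"]):          # Rule 1
            return "missing-islands"
    elif pair == {"almost-periodic", "island-chain"}:     # Rule 2
        return "missing-separatrix"
    elif pair == {"island-chain"}:
        if o1["islands"] != o2["islands"]:                # Rule 3
            return "missing-orbit"
    elif pair == {"separatrix", "island-chain"}:
        sep = o1 if t1 == "separatrix" else o2
        chain = o2 if sep is o1 else o1
        if sep["loops"] != chain["islands"]:              # Rule 5
            return "missing-orbit"
    elif pair == {"separatrix"}:
        if o1["loops"] != o2["loops"]:                    # Rule 6
            return "missing-orbit"
    return None            # chaotic/escape orbits constrain nothing

def explore(initial_states, make_orbit, adjacent_pairs, rules, suggest,
            max_rounds=50):
    orbits = [make_orbit(s) for s in initial_states]      # steps 1-2
    for _ in range(max_rounds):
        complaints = [(rules(a, b), (a, b))               # steps 3-5
                      for a, b in adjacent_pairs(orbits)]
        complaints = [c for c in complaints if c[0] is not None]
        if not complaints:                                # step 8
            break
        for s in suggest(complaints[0]):                  # steps 6-7
            orbits.append(make_orbit(s))
    return orbits
```

With a toy one-dimensional "phase space" (orbits as integers, a complaint whenever two neighbors differ by more than one, and midpoint suggestions), explore performs exactly the bisection-style filling-in described in section 5.4.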
6 Experiments

6.1 An Example

To illustrate the algorithm, I give an example of how KAM explores the map with a specific parameter value cos α = 0.24. The total running time for this experiment is 3.5 hours.¹⁰ KAM explores 15 initial states. Ten of these orbits are useful: 5 almost-periodic orbits, 3 island chains, 1 separatrix, and 1 escape orbit. Fig. 4 displays these orbits; the figure is remarkably similar to Fig. 2 taken from Henon's paper. Five initial states result in no useful orbits.

6.2 Other Experiments

I have tested the consistency analysis algorithm with more values of the parameter α ∈ (0 2.2) of the Henon map and have been able to reproduce the phase portraits in Henon's paper. KAM has also been successfully tested on another map [MacKay, 1982] whose phase space is the Euclidean plane.

7 Discussion

7.1 What Works

KAM shows good performance in its ability to select "promising" initial states. This success can in part be attributed to the geometric representation of the orbits. When two neighboring orbits are inconsistent, the geometry of these orbits delimits a bounded region in the phase space for future search. The boundedness of the region allows the "bisection search" strategy to rapidly zero in on the desired orbits.

¹⁰On the average, KAM takes about 10 times longer to produce one phase portrait than a domain expert does. Approximately 80% of the running time, however, is spent on orbit recognition.

284 Common Sense Reasoning

Figure 5: How KAM picks initial conditions: (a) Randomly pick an initial state. (b) Find an almost-periodic orbit; try the centroid of the orbit. (c) Find a second almost-periodic orbit; try a midpoint. (d) Find a third almost-periodic orbit; try a midpoint. (e) Find a fourth almost-periodic orbit; try a midpoint. (f) Find a 5-island chain; try a midpoint between islands. (g) No useful orbit; try a midpoint. (h) Find a second 5-island chain; try a midpoint between islands. (i) Find a 5-separatrix; try a midpoint. (j) Find a third 5-island chain; try a point in the interior. (k) No useful orbit; extend the boundary circle. (l) Find an escape orbit.

7.2 What Does Not Work

Island chains that are embedded inside a chaotic (or escape) region are difficult to find. This is expected because a chaotic (or escape) orbit imposes no constraint on its neighboring orbits. A domain expert has less trouble with this problem largely because he can recognize the internal shape of a chaotic region. KAM, in contrast, cannot describe the number of holes and connected components that a given two-dimensional dot pattern may have.

8 Conclusion

This paper shows how a computer program, KAM, that exploits specific domain knowledge based on rigorous mathematical results is able to automatically generate phase portraits that are essentially the same as those appearing in published papers in the experimental dynamics literature. Knowledge of the six primitive types of long-time behavior, and the modes of transition between primitive types, allows a simple consistency analysis to draw global conclusions about the phase space dynamics without requiring sophisticated problem-solving mechanisms. Describing behavior geometrically makes high-level, qualitative interpretations of numerical data possible. I expect this method of analysis (defining the primitive categories, and cataloging possible transitions), pioneered by Waltz in the early seventies, and the emphasis on geometric structures, to be a powerful methodology for attacking hard computational problems in science and engineering.

References

[Arnold, 1973] V.I. Arnold. Ordinary Differential Equations. MIT Press, 1973.
[Arnold, 1978] V.I. Arnold. Mathematical Methods of Classical Mechanics. Springer-Verlag, 1978.
[Bobrow, 1985] Daniel G. Bobrow (ed.). Qualitative Reasoning about Physical Systems. MIT Press, 1985.
[Henon, 1969] M. Henon. Numerical study of quadratic area-preserving mappings. Quarterly of Applied Mathematics, 27, 1969.
[MacKay, 1982] Robert S. MacKay. Renormalisation in Area-Preserving Maps. PhD thesis, Princeton University, 1982.
[Smale, 1967] Stephen Smale. Differentiable dynamical systems. Bulletin of the American Mathematical Society, 73, 1967.
[Waltz, 1975] David Waltz. Understanding line drawings of scenes with shadows. In The Psychology of Computer Vision. McGraw-Hill, 1975.
[Yip, 1987] Kenneth Yip. Extracting qualitative dynamics from numerical experiments. In Proceedings AAAI-87, 1987.
[Yip, 1988] Kenneth Yip. KAM: A Computer Program that Autonomously Explores Dynamical Systems. PhD thesis, MIT, 1988. Forthcoming.
Exaggeration

Daniel S. Weld
Computer Science Department, FR-35
University of Washington
Seattle, WA 98195

Abstract

Exaggeration is a technique for solving comparative analysis problems by considering extreme perturbations to a system. For example, exaggeration answers the question "What happens to the output temperature of a heat exchanger if fluid flow rate increases?" by simulating the behavior of an exchanger with infinite flow rate. This paper explains the three phases of the exaggeration algorithm: transform, simulate, and scale. The transform phase takes a comparative analysis problem and generates the description of an exaggerated system. The simulate phase predicts the behavior of the transformed system. Finally, the scale phase compares the original and exaggerated behaviors to answer the original comparative analysis question.

1 Introduction

The symbolic analysis of real-world systems is central to many problems in artificial intelligence. In order to cope with a changing world one must be able to understand its behavior. Many types of analytic activities have been investigated, for example qualitative simulation [de Kleer and Brown, 1984; Forbus, 1984; Williams, 1984; Kuipers, 1986], measurement interpretation [Forbus, 1983], and diagnosis [Davis, 1984; de Kleer and Williams, 1987]. Recently, a new qualitative reasoning task has been isolated: comparative analysis [Forbus, 1984; Weld, 1988b]. Whereas qualitative simulation takes a description of a system and predicts its behavior, comparative analysis takes as input this behavior, the original description, and a perturbation, then describes how and why the perturbation changes the behavior. For example, given a description of a heat exchanger in which hot oil passes through a pipe surrounded by cold water, a qualitative simulator would say that the oil will exit from the pipe cooler than when it entered.
Comparative analysis, on the other hand, takes this description of cooling and evaluates the effects of perturbations. For example, it might be asked to deduce what would happen to the oil output temperature if the oil moved more quickly through the heat exchanger.*

*This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-85-K-0124. Portions of this research were done at Xerox Palo Alto Research Center and at Schlumberger CAS Palo Alto Research.

Previous discussions of comparative analysis have dealt with a solution method called differential qualitative (DQ) analysis [Forbus, 1984; Weld, 1988b]. This paper introduces exaggeration, a technique which solves a larger class of comparative analysis problems than DQ analysis [Weld, 1988a]. For example, DQ analysis generates the following answer to the heat exchanger question:

Since the rate of cooling is dependent only on the initial temperature and thermal conductivity and these are unchanged, the rate of cooling is unchanged as a function of time. Since the oil will spend less time in the pipe, it will exit with a higher temperature.

Exaggeration's approach to comparative analysis is very different from that of DQ analysis. Instead of tracing the effect of a perturbation through the causal structure of the system, exaggeration considers the behavior of a system in which the perturbation is taken to a limiting value. If this new system has a qualitatively different behavior from the original, then exaggeration postulates a general trend caused by the perturbation. Exaggeration produces the following explanation:

If the fluid flow rate was infinite, the oil would spend negligible time in the exchanger.
Since the rate of cooling is finite, the oil would lose negligible heat and exit hotter than oil moving at finite speed. Thus any increase in oil flow rate will cause a corresponding increase in output temperature.

Exaggeration changes a comparative analysis question into a simulation problem about a system with infinite or infinitesimal valued parameters. Figure 1 provides an overview of the program, EXAG, that implements the theory of exaggeration in three parts. Given a perturbation and a description of the system including initial values, the TRANSFORM PHASE produces a new model in which the perturbation has been taken to an extreme. The SIMULATE PHASE (denoted HR-QSIM in the figure) simulates this exaggerated model to produce an exaggerated behavior that is qualitatively different from the behavior QSIM [Kuipers, 1986] produces using the original model: in one case the heat has dropped a finite amount, in the other it has fallen negligibly. Finally, the SCALE PHASE compares the two behaviors and predicts the answer to the original comparative analysis question.

Weld 291
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Figure 1: Overview of the Exaggeration Algorithm

Although exaggeration handles a larger class of comparative analysis questions than DQ analysis, it does not always answer them correctly. If the system does not respond monotonically to the perturbation, then exaggeration may generate false predictions. For a comparison of the two techniques, including an explanation of exaggeration's limitations, see [Weld, 1988a]. This paper explains the details of the exaggeration algorithm; the transform, simulate, and scale phases are discussed in turn with an emphasis on the HR-QSIM implementation of the simulate phase.
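Before the phases are taken up in detail, the three-phase flow just outlined can be sketched in miniature. Everything below (the function names, the dictionary encoding of models and behaviors) is invented for illustration and merely stands in for EXAG's actual machinery:

```python
def transform(model, param, direction):
    """Amplify a perturbation to its limit: an increase becomes an
    infinite initial value, a decrease an infinitesimal one."""
    exaggerated = dict(model)
    exaggerated[param] = "inf" if direction == "increase" else "negl"
    return exaggerated

def simulate(model):
    """Stand-in for the simulate phase: with infinite flow rate the
    oil spends negligible time in the pipe and loses negligible heat."""
    return {"heat_lost": "negl" if model["V"] == "inf" else "fin"}

def scale(original, exaggerated):
    """Compare the two behaviors to answer the original question."""
    if original["heat_lost"] == "fin" and exaggerated["heat_lost"] == "negl":
        return "output temperature rises with flow rate"
    return "no trend inferred"

model = {"V": "fin", "K": "fin"}
answer = scale(simulate(model), simulate(transform(model, "V", "increase")))
```

The point of the sketch is only the control flow: transform feeds simulate, and scale compares the exaggerated behavior against the unperturbed one.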
In particular, HR-QSIM is critically dependent on two temporal reasoning innovations: predecessor-persistence and successor-arrival filtering.

2 Transform Phase

The transform phase converts a comparative analysis problem into a simulation problem by creating a model of the system that has an exaggerated initial value for some parameter. The trick is to produce a description which, when simulated, generates a behavior qualitatively different than the original's. To do this, the parametric perturbation in the comparative analysis question is amplified: an increasing perturbation is transformed into an infinite initial value while a decreasing change results in an infinitesimal value.

The critical requirement is a qualitative representation that can express infinitesimal and infinite values. The QUALITATIVE HYPERREAL REPRESENTATION [Weld, 1988c] meets the requirement by extending Kuipers' QSIM quantity space using the hyperreal numbers of nonstandard analysis [Robinson, 1966; Keisler, 1976]. As in QSIM, parameters are continuous functions from time into a value space, but both time and the value spaces are abstractions of the hyperreal numbers. In this extended representation, the qualitative value of a parameter has two parts. The HR-QVAL encodes magnitude information, and the HR-QDIR abstracts the parameter's derivative. Suppose a parameter P has landmark values p0 < ... < pk. For any time t, the following HR-QVALs are possible:

inf if P(t) is infinite and > 0; minf if P(t) is infinite and < 0
pi if P(t) = landmark pi
(HALO pi +) if P(t) - pi is infinitesimal and > 0
(HALO pi -) if P(t) - pi is infinitesimal and < 0
⟨pi, pi+1⟩ if P(t) - pi and pi+1 - P(t) are both non-infinitesimal and > 0
⟨pk, inf⟩ if P(t) is finite and P(t) - pk is non-infinitesimal and > 0

292 Common Sense Reasoning

Every finite landmark, p, has a halo of numbers that are infinitesimally close; the two halves of these halos
are denoted (HALO p +) and (HALO p -) respectively. The positive infinitesimals, for example, are represented (HALO 0 +). The QSIM expression for an open interval, (p1, p2), is not used since it overlaps with (HALO p1 +) and (HALO p2 -). This explains the definition of ⟨p1, p2⟩.

It also proves useful to extend the representation of qualitative derivatives. QSIM uses a simple description of the sign of the parameter's derivative: inc, dec, or std. The qualitative hyperreal representation supplements this representation with information on the order of magnitude of growth. A hyperreal number, z, has four possible orders of magnitude:

inf if |z| > every finite number
fin if |z| = a positive standard real number
negl if |z| is negligible, i.e. a positive infinitesimal
0 if z = 0

Qualitative derivatives are represented as a pair of the direction and order of magnitude of change. Thus (dec inf) denotes the HR-QDIR of a parameter that is decreasing infinitely fast. If a parameter's HR-QDIR is (std 0), then it may be abbreviated std since 0 is the only possible order of magnitude of std. Thus a parameter P may be qualitatively described at a point of time, t, by its HR-QVAL and HR-QDIR; square brackets denote this abstraction:

[P(t)] ≡ (HR-QVAL(P(t)), HR-QDIR(P(t)))

If the same qualitative description is valid for an interval, Δ, of time, then it can be written [P(Δ)].

As described in [Weld, 1988c], the transform phase uses this representation to describe an exaggerated system. Suppose that the original heat exchanger is described in terms of two independent parameters, thermal conductivity K, and fluid velocity through the pipe V (both assumed constant), and three dependent parameters: heat Q, heat flow F, and position of a unit volume of oil¹ X. The parameters obey the following constraints: V = dX/dt, F = dQ/dt, and F = KQ.
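As a concrete gloss on the notation above, qualitative descriptions can be encoded as plain tuples; this encoding is my own, not Weld's:

```python
ORDERS = ("0", "negl", "fin", "inf")   # orders of magnitude of a hyperreal

def hr_qdir(direction, order):
    """An HR-QDIR is a (direction, order-of-magnitude) pair; (std 0)
    is abbreviated to the atom 'std'."""
    assert direction in ("inc", "dec", "std") and order in ORDERS
    return "std" if direction == "std" else (direction, order)

def qualitative_state(hr_qval, qdir):
    """[P(t)] = (HR-QVAL(P(t)), HR-QDIR(P(t)))."""
    return (hr_qval, qdir)

# a parameter sitting in the positive halo of 0, decreasing infinitely fast
state = qualitative_state(("halo", 0, "+"), hr_qdir("dec", "inf"))
```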
The transform phase modifies the initial conditions to produce a description of a heat exchanger with infinite flow rate (x0, k0, and f0 are standard, finite negative landmark values, but q0 is positive):

[V(0)] = (inf, std)
[X(0)] = (x0, (inc inf))
[K(0)] = (k0, std)
[Q(0)] = (q0, (dec fin))
[F(0)] = (f0, (inc fin))

3 Simulate Phase

Since the advantage of exaggeration is that it reduces a comparative analysis problem to a problem of qualitatively simulating a transformed system, it should be no surprise that the simulate phase is the most difficult of the three. The trick is to demonstrate a qualitative simulation technique which can handle parameters with infinite and infinitesimal values. Because Kuipers' QSIM [Kuipers, 1986] is simple, precisely defined and widely available, I chose it as the basis for the simulate phase.

¹For simplicity, the 'liquid-individual' model of fluids is used here. In addition this model does not distinguish between temperature and heat.

The addition of infinite and infinitesimal values requires a number of modifications. The fundamental problem is the strong reliance that all qualitative simulation algorithms place on the order topology of the standard real numbers [Williams, 1984]; QSIM, for example, assumes that the value spaces of time and the various parameters alternate between open intervals and closed points. The presence of infinitesimals in the hyperreals results in a more complex topology where this is no longer the case.

I call my implementation of the simulate phase HR-QSIM, to acknowledge its ancestry. The next section explains its overall control. Then I present two of HR-QSIM's most interesting technical innovations: the predecessor-persistence filter and the successor-arrival filter.

3.1 HR-QSIM Control

HR-QSIM has essentially the same control structure as QSIM. They take as input a set of parameters, a set of constraints, and a set of initial qualitative values.
As output, they produce a tree of states; each path through the tree represents a possible behavior of the system. To generate a state's successors, they use continuity information to predict the possible next values of each parameter independently. Conceptually, the space of possible successor state values is the cross product of the parameter values. Waltz filtering efficiently prunes this space of states without explicitly representing it. After Waltz filtering, the states are constructed to represent the remaining tuples of parameter values. Global filters may prune some of these states; the rest are marked as successors to the original state and pushed on the control queue. Space considerations preclude treatment of the many differences between QSIM and HR-QSIM; see [Weld, 1988c] for a discussion of additional next-value tables used to generate parameter values, and of extended constraint filters used in Waltz filtering. Instead the next sections focus on two global filters based on predecessor-persistence and successor-arrival times.

3.2 Persistence and Arrival Times

QSIM's temporal representation is simple; states persist for either an instant (a closed point of time) or a finite open interval. Furthermore, QSIM can easily tell how long any state will last; if the predecessor state lasted for an instant, the successor will persist for an interval and vice versa.

For HR-QSIM the qualitative hyperreal representation allows derivatives to have a negligible order of magnitude, so a state might last for an infinite time before a parameter transitions to a new landmark value. If some parameter has an inf derivative, then the state might persist for only a negligible time.
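Returning briefly to the control structure of section 3.1, the successor-generation cycle (predict next values per parameter, prune the cross product, keep the survivors) can be sketched as follows. For brevity this sketch enumerates the cross product explicitly, whereas Waltz filtering avoids doing so; all names are invented:

```python
from itertools import product

def successors(state, next_values, consistent, global_filters):
    """Generate filtered successor states of `state`, a dict mapping
    parameter name -> qualitative value."""
    names = sorted(state)
    candidates = product(*(next_values(state[n]) for n in names))
    states = [dict(zip(names, c)) for c in candidates]
    states = [s for s in states if consistent(s)]   # constraint filtering
    for gf in global_filters:                       # global filters
        states = [s for s in states if gf(state, s)]
    return states

# toy instantiation: each parameter may either stay put or advance one step
step = lambda v: [v, v + 1]
succs = successors({"x": 0, "y": 5}, step, lambda s: True, [])
# 2 choices x 2 choices: four candidate successor states survive
```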
Since the original QSIM cases are also still possible, I distinguish between the following four qualitative lengths of time: 0, negl, fin, and inf. HR-QSIM uses two techniques, predecessor-persistence filtering and successor-arrival filtering (section 3.4), to deduce the temporal extent of qualitative states and to prune inconsistent successors.

The difference between the two techniques results from the following observation about transitions in the qualitative hyperreal representation:

It may take longer for a parameter to transition to a new qualitative value than it spends in its old value.

Lest this sound confusing, consider the following concrete example. Let I be a parameter, in other words a function from the hyperreals to the hyperreals, defined as the identity function I(i) = i. Consider the length of the interval, Δ, in which [I(Δ)] = ((HALO 0 +), (inc fin)), termed the PERSISTENCE of the qualitative value [Weld, 1988c]. I claim that I persists in (HALO 0 +) for negl time. For example, if I persisted in the halo for a standard finite time, t0, then that would imply that t0 ∈ (HALO 0 +), in other words that t0 is an infinitesimal. Since 0 and inf persistences also lead to contradictions, I persists in (HALO 0 +) for negl time.

Now consider the time it takes for I to reach the qualitative value ⟨0, inf⟩ (formalized as SUCCESSOR-ARRIVAL TIME [Weld, 1988c]). I argue that I's successor-arrival time is fin. By definition of ⟨0, inf⟩, when I reaches this qualitative value it must be greater than some standard real value, r0. Thus r0 time must have elapsed since I left 0. Since only negl time passed reaching (HALO 0 +) from 0 [Weld, 1988c], I takes fin - negl = fin time to arrive at its new qualitative value. In other words, even though there is no intervening hyperreal value sandwiched between (HALO 0 +) and ⟨0, inf⟩, I takes longer to reach its new qualitative value than it spends in its original value.
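The identity-function argument can be replayed with a small distance-rate-time lookup. The table below is a partial, illustrative slice consistent with the text, not a transcription of Figure 2:

```python
# qualitative time = distance / rate, in orders of magnitude
DRT = {
    ("0", "fin"): "0",        # zero width crossed at finite speed
    ("negl", "fin"): "negl",  # a halo crossed at finite speed
    ("fin", "fin"): "fin",    # a finite interval at finite speed
    ("inf", "fin"): "inf",
}

def qual_time(distance, rate):
    return DRT[(distance, rate)]

# I(i) = i moves at a fin rate.  (HALO 0 +) has width negl, so I
# persists there for negl time...
persistence = qual_time("negl", "fin")
# ...but the distance from (HALO 0 +) to the interval <0, inf> is fin,
# so the successor value only arrives after fin time.
arrival = qual_time("fin", "fin")
```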
Several benefits result from considering persistence and arrival measures separately. The unintuitive topology of the hyperreals is made clear, exposing the relationship between the time when one value ends and another starts. The result is a powerful algorithm for temporal reasoning in qualitative hyperreal simulation. Section 3.3 discusses the filtering of successor states based on persistence times while section 3.4 deals with the successor-arrival filter.

Both techniques use a common mechanism, the DISTANCE-RATE-TIME TABLE (figure 2), to compute temporal values. This table is indexed by rate and distance values and returns the time required to traverse the distance. In both cases, the rate values come directly from the parameter's qualitative derivative. The difference between persistence and arrival times comes from the distance used to index into the table.

To calculate the time a parameter can persist in a qualitative value, the 'width' of the value is used as a table index. Formally, the width of a qualitative value is the order of magnitude of the maximum distance between any two members of the set of hyperreal points that underlie the qualitative value [Weld, 1988c]. From this definition, the following characteristics can be derived. The width of a landmark point is 0, the width of a landmark's halo is negl, the width of a finite interval (e.g., ⟨pi, pi+1⟩ or ⟨pj, inf⟩) is fin, and the width of inf or minf is inf. By using these width values as an index to the distance-rate-time table, HR-QSIM calculates how long each parameter can persist in its current qualitative value. An entry of '?' in the table indicates that inf, fin, negl, or 0 time may elapse.

3.3 Predecessor-Persistence Filtering

HR-QSIM calculates persistence values for two reasons.
From the persistences of each parameter, one can determine how long a qualitative state is a valid description of a system. Secondly, by comparing the persistences of all the parameters in a system, one can often filter inconsistent transitions that were not eliminated by HR-QSIM's other techniques.

Figure 2: The Distance-Rate-Time Table

For example, suppose the two parameters, A and B, are both increasing at the same fin rate, and this rate is held constant. In state Si, A = (0, (inc fin)) and B = ((HALO 0 -), (inc fin)). After Waltz filtering, three sets of possible next values remain. Either A leaves 0 before B reaches 0, or B reaches 0 before A leaves 0, or they both transition at the same time. The question is, which of these successor states is possible? The answer comes from analyzing the persistence of the predecessor state, Si. The width of A's qualitative value is 0 and A is moving with fin speed, thus the distance-rate-time table lists A's persistence as 0. B has the same rate and has negl width, so B's persistence is negl. This means that B must persist in its qualitative value for longer than A. In other words, A must transition before B.

3.4 Successor-Arrival Filtering

Like persistence values, arrival times are useful as a means for eliminating inconsistent transitions. Calculating the time that a parameter takes to arrive at a new qualitative value from an old one requires a notion of the distance between the two different values. The distance between two qualitative values is defined as the order of magnitude of the minimum distance between any two points in the hyperreal sets underlying the two qualitative values [Weld, 1988c]. For example, the distance between a landmark and its halo is negl, the distance between a halo and a neighboring finite interval is fin, and the distance between inf and any different value is inf.

The predecessor-persistence and successor-arrival filters are implemented together. The inputs are parameter values for the predecessor and proposed successor states. Two variables, SP and SA, store successive approximations to the state's persistence and arrival values respectively. SP is initialized to the set {0, negl, fin, inf}, and SA to {negl, fin, inf}. For each parameter, X, let P be the set of possible persistence values, and let A be the set of possible arrival values. If X transitions to a new qualitative value in the successor state, set SP to SP intersect P and let SA be SA intersect A. Otherwise, if X has the same qualitative value in the predecessor and successor states, remove any time values from SP that are greater than the largest value in P, and remove any time values from SA that are greater than the largest value in P (not A!). Since this parameter is not changing, the next state must arrive while this parameter is still persisting. If SP or SA is empty, the successor state is inconsistent; otherwise SP and SA are the sets of possible persistences and arrivals respectively.

3.5 Heat Exchanger Example

Successor-arrival filtering is nicely illustrated by the heat exchanger. The initial state generated by the transform phase persists for 0 time because several parameters are moving from landmarks. Waltz filtering generates a single successor state which arrives in negl time and has new values for X, Q, and F:

[X(Δ1)] = ((HALO x0 +), (inc inf))
[Q(Δ1)] = ((HALO q0 -), (dec fin))
[F(Δ1)] = ((HALO f0 +), (inc fin))

Unfortunately, Waltz filtering does not predict a unique successor to this state. The question is whether X will transition from its halo before, after or at the same time as Q and F transition from their halos. Since each parameter is in a halo, each has a qualitative width of negl, and since each is moving towards a finite interval, each parameter must travel a fin distance before transitioning. Plugging these values into the distance-rate-time table leads to the conclusion that every parameter persists for negl time, so Δ1 represents a time interval of negl length. In addition, X takes negl to arrive, but Q and F take fin to arrive. Successor-arrival filtering uses these values to eliminate the two successor states that don't have X transitioning before Q and F. The only set of next values which pass the test are the following; they arrive in negl time:

[X(Δ2)] = (⟨x0, 0⟩, (inc inf))
[Q(Δ2)] = ((HALO q0 -), (dec fin))
[F(Δ2)] = ((HALO f0 +), (inc fin))

Since the distance to X's next value is still fin, similar reasoning holds again. Δ2 has negl length; next X transitions to (HALO 0 -) and then to 0 (always arriving in negl time) while Q and F remain in the halo of their original values. Without successor-arrival filtering, HR-QSIM could not be sure that negligible heat is lost when oil moves infinitely fast.

4 Scale Phase

The scale phase answers comparative analysis questions by comparing a standard QSIM behavior of the original system with the hyperreal behavior generated by HR-QSIM from the transformed initial conditions. For example, QSIM generates three possible behaviors for the heat exchanger: in one, thermal equilibrium (Q = 0) occurs before the oil leaves the pipe (X = 0), in one X transitions to 0 before Q reaches 0, and in the third they transition at the same time. Since Q drops a finite amount in all these standard behaviors but stays at (HALO q0 -) in the hyperreal simulation, the scale phase concludes that in general, output heat rises as oil velocity increases.

Although this is a correct answer for this problem, the scale phase can draw false conclusions. Since exaggeration approximates the sign of a partial derivative (e.g., the derivative of output temperature with respect to flow rate has positive sign) by evaluating at an infinite or infinitesimal asymptote and scaling, it may answer incorrectly if the system does not respond monotonically [Weld, 1988a].
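The combined filter of sections 3.3 and 3.4 can be written down almost directly from the description above; the state encoding is my own:

```python
ORDER = {"0": 0, "negl": 1, "fin": 2, "inf": 3}

def filter_successor(params):
    """params: one (transitions, P, A) triple per parameter, where P/A
    are the sets of possible persistence/arrival times.  Returns
    (SP, SA) or None if the proposed successor is inconsistent."""
    SP = {"0", "negl", "fin", "inf"}
    SA = {"negl", "fin", "inf"}
    for transitions, P, A in params:
        if transitions:
            SP &= P
            SA &= A
        else:
            # a non-changing parameter must outlast the state: drop any
            # time longer than its longest possible persistence
            cap = ORDER[max(P, key=ORDER.get)]
            SP = {t for t in SP if ORDER[t] <= cap}
            SA = {t for t in SA if ORDER[t] <= cap}
    return (SP, SA) if SP and SA else None

# Heat exchanger, state D1: X transitions (persists negl, arrives negl)
# while Q and F stay in their halos (persisting negl at most).
ok = filter_successor([(True, {"negl"}, {"negl"}),
                       (False, {"negl"}, set()),
                       (False, {"negl"}, set())])
# A successor in which Q also transitions, arriving only after fin
# time, is pruned: its arrival set cannot intersect X's.
bad = filter_successor([(True, {"negl"}, {"negl"}),
                        (True, {"negl"}, {"fin"})])
```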
5 Related Work

Like Raiman's FOG system [Raiman, 1986], HR-QSIM's qualitative hyperreal representation is grounded in the theory of nonstandard analysis [Robinson, 1966]. Unlike FOG, which only handles algebraic equations, HR-QSIM can simulate the time behavior of differential equations.

Davis' CHEPACHET program [Davis, 1987] is very similar to HR-QSIM. In fact, HR-QSIM's four next-value tables [Weld, 1988c] are derived from CHEPACHET's temporal topology rule. However, CHEPACHET's qualitative representation is less expressive than the qualitative hyperreal representation. For example, CHEPACHET cannot distinguish between q0, (HALO q0 -), and ⟨0, q0⟩; each value is MEDIUM. Of course, the values could be distinguished by introducing another parameter called HEAT-LOST, but how would the transform phase know when to do this? Until this question is addressed, exaggeration can solve more comparative analysis problems using HR-QSIM as the simulation phase.

DQ analysis [Weld, 1988b] also solves comparative analysis problems. Unlike exaggeration, DQ analysis only predicts correct answers [Weld, 1988b] to comparative analysis questions. However, exaggeration appears to solve more problems than DQ analysis [Weld, 1988a] and often generates simpler explanations [Weld, 1988c].

ACKNOWLEDGMENTS

Tomas Lozano-Perez and Johan de Kleer advised and made this research possible. Ernie Davis contributed technically in many ways. Mark Shirley straightened me out when confused and rescued me when imperiled. Ben Kuipers provided QSIM. Brian Williams, Jerry Roylance, David Jacobs, and Walter Hamscher read early drafts. Jeff Shrager, Paul Horwitz, Pat Hayes, Ken Forbus, Randy Davis and Steve Bagley provided interesting ideas.

References

[Davis, 1987] E. Davis. Order of Magnitude Reasoning in Qualitative Differential Equations. Technical Report 312, NYU Computer Science Department, August 1987.

[Davis, 1984] R. Davis.
Diagnostic Reasoning Based on Structure and Behavior. Artificial Intelligence, 24, 1984.

[de Kleer and Brown, 1984] J. de Kleer and J. Brown. A Qualitative Physics Based on Confluences. Artificial Intelligence, 24, December 1984.

[de Kleer and Williams, 1987] J. de Kleer and B. Williams. Diagnosing Multiple Faults. Artificial Intelligence, 32, April 1987.

[Forbus, 1983] K. Forbus. Measurement interpretation in qualitative process theory. In Proceedings of the Eighth IJCAI, pages 315-320, 1983.

[Forbus, 1984] K. Forbus. Qualitative Process Theory. Artificial Intelligence, 24, December 1984.

[Keisler, 1976] J. Keisler. Foundations of Infinitesimal Calculus. Prindle, Webber and Schmidt, Inc., Boston, 1976.

[Kuipers, 1986] B. Kuipers. Qualitative Simulation. Artificial Intelligence, 29, September 1986.

[Raiman, 1986] O. Raiman. Order of Magnitude Reasoning. In Proceedings of the Fifth National Conference on Artificial Intelligence, August 1986.

[Robinson, 1966] A. Robinson. Non-Standard Analysis. North-Holland Publishing Company, Amsterdam, 1966.

[Weld, 1988a] D. Weld. Choices of Comparative Analysis: DQ Analysis or Exaggeration? International Journal for Artificial Intelligence in Engineering, To Appear 1988.

[Weld, 1988b] D. Weld. Comparative Analysis. Artificial Intelligence, To Appear 1988.

[Weld, 1988c] D. Weld. Theories of Comparative Analysis. AI-TR-1035, MIT AI Lab, May 1988.

[Williams, 1984] B. Williams. Qualitative Analysis of MOS Circuits. Artificial Intelligence, December 1984.
Qualitative Reasoning at Multiple Resolutions

Seshashayee S. Murthy
IBM T. J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598

Abstract

In this paper we describe an approach to unify the various quantity spaces that have been proposed in qualitative reasoning with numbers. We work in the domain of physical devices, such as electrical circuits using lumped parameter models. We show how changing the quantity space can be achieved in the course of analysis and how this is similar to dynamically changing the resolution in analysis. We demonstrate the utility of this approach with two examples in the domain of circuit analysis.

1. Introduction

One of the chief aims of Qualitative Reasoning is to provide a broad picture of the functioning of the world by taking a step back from the details. In this paper we show that in reasoning with numbers the aim is to break the real number line into broad, qualitatively distinct classes and describe the working of a device in terms of these classes. [Joha85a] defines the qualitative values a variable can have, A0 ... An, as representing disjoint abutting intervals that cover the entire number line. I define the set of values {A0 ... An} as the Q-space.¹

The aim of Qualitative Reasoning is to reduce the cardinality of the Q-space while still retaining the information available from doing the analysis using quantitative values. This has two benefits.

• Complete quantitative information is not always available about the variables being analyzed. For example in design, one may not know the exact values of all parameters in the design. Yet one has to make decisions using this partial information. In this case the partial information can be used by representing the variables in a qualitative form. By using the smallest possible Q-space in which to perform the analysis we are able to deal better with incomplete information.
• By using a qualitative description of the variables we can form a description of the working of a device that has a smaller number of states. Thus one can get a better understanding of the workings of the device, at the desired level of detail.

¹I would have liked to use the term Quantity Space, but that has a different meaning [Forb84]. I am willing to accept suggestions for better names.

In essence using a small Q-space gives a broader picture of the workings of a device. It is therefore intuitively clear that the best approach is to use the smallest Q-space possible that will describe the working of the device. Unfortunately however, the expressive power of a Q-space depends on the number of elements it contains. This paper describes a scheme to carry out analysis in the smallest possible Q-space.

We show that depending on the problem at hand it is advantageous to perform the analysis in different Q-spaces. We propose a set of 4 Q-spaces which represent different resolutions on the number line. We show that with this judicious choice of Q-spaces we can switch dynamically between Q-spaces while performing the analysis. In the process we perform each operation in the analysis at the smallest resolution. We show how to switch to a Q-space with a higher resolution when the results of an operation are ambiguous. Different parts of the analysis can be carried out at different resolutions and the final result is a description of the device that is close to optimal. This is illustrated with the help of two examples in linear circuit design.

2. Q-spaces

The following set of Q-spaces are proposed:

1. (±)(0, non-zero)

This Q-space is identical to the one described in [Joha85a]. The following relationships between variables can be expressed in this Q-space.

a > b if [a - b] = +
a = b if a - b is 0

The converse of these relationships can also be expressed. In addition we can express relationships between quantities based on the relations =, > and <.
I am willing to accept suggestions for better 296 Common Sense Reasoning From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved. a is increasing if a(t2) > a(t1) and 12 > tl The rules for arithmetic are described in [.Toha85a] It is to be noted that if [a] f [b] then [a + b] is indeterminate. Magnitude information is also absent, This ambiguity can be resolved by moving to the next Q-space. 2. (5) (0, infinitesimal, large) This Q-space is identical to the one described in [Raim86] All relations that can be expressed in Q- space 1 can be expressed in this Q-space. In addition, the following relationships can be expressedCRaim86-J. a $- b if a is large and b is infinitesimal. Q g bifa= b(1 +E). a - b if a and b are both infinitesimal or large. [Mavr87] shows how to tie this Q-space to the real number line. This is done by choosing a value e that is the minim urn ratio between a large and a small number. Q-space 2 splits the positive half of the real number line into two halves that are separated by a threshold. The threshold is different for different types of vari- ables e.g. impedance and frequency. Even for the same type of variable the threshold depends on the particular comparison being made. For example when we say two places are far apart it depends on whether the journey is being made by car or on foot. In the following a, and b, are the thresholds for a and b. The rules for addition are2 described in [Raim86]. l[t is to be noted that these rules holds only if a, = b,. Multiplication in this Q-space retains the sign infor- mation. a x b is large if a is large and b is large a x b is small is a is small and b is small Here the threshold is a, x b, The product is ambiguous is all other cases. 2 We run into Zeno’s paradox here. This can be resolved by going to the next finer resolution if necessary. 3. 4. 3. It is significant that the threshold changes during multiplication. We show in the examples how this can result in ambiguity. 
These ambiguities can be resolved by using Q-space 3.

3. (±)(0, y^z) where y is the base (e.g. 2 or 10) and z is an integer. Here 1 is y^0.

If |a| = y^z, then log(a) = z. In this Q-space it is possible to express all the relations that can be expressed in Q-spaces 1 and 2. In addition it is possible to describe the logarithmic distance, LD, between two numbers:

LD(a, b) = log(a) - log(b)

For multiplication:

[a][b] = [ab]
log(a·b) = log(a) + log(b)

For addition the rules are:

If log(a) > log(b) or ([a] = [b] and log(a) = log(b)) then² log(a + b) = log(a) and [a + b] = [a].

If log(a) = log(b) and [a] ≠ [b] then log(a + b) ≤ log(a) and [a + b] is indeterminate. To resolve the ambiguity we need to go to a finer level of resolution, i.e. the next Q-space.

4. (±)(x · y^z), y and z as before, and x is a number with n significant digits. As n increases the veracity of the description increases till at n = infinity this Q-space approaches the real number line. The rules for addition and subtraction are similar to those in machine arithmetic with fixed precision.

3. Relationship to previous work

In this section we illustrate the use of the 4 Q-spaces, with two examples from the domain of circuit analysis.

Murthy 297
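The Q-space 3 rules above can be sketched directly; here a value is a (sign, exponent) pair with base 10, an encoding of my own devising:

```python
def q3_mul(a, b):
    (sa, za), (sb, zb) = a, b
    return (sa * sb, za + zb)      # log(a*b) = log(a) + log(b)

def q3_add(a, b):
    (sa, za), (sb, zb) = a, b
    if za != zb:                   # the larger magnitude dominates
        return a if za > zb else b
    if sa == sb:
        return (sa, za)            # same magnitude, same sign (Zeno caveat aside)
    return None                    # equal magnitudes, unlike signs: indeterminate

x = (+1, 3)    # roughly +10^3
y = (-1, -5)   # roughly -10^-5
```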
Other examples of reasoning in the (±)(0, non-zero) space can be found in [Joha85b, Will85].³ The main problem with reasoning in this space is that addition of two numbers of different signs results in ambiguity. Also it is not possible to neglect small influences w.r.t. big ones. This is a very important part of Qualitative Reasoning in humans. To achieve this capability we need to move to Q-space 2.

³ Using the signs of partials as the elements of an implicit Q-space is a common technique in economics.

If we started out with a more complicated model of a voltage source that includes an output resistance R_o as in Figure 1b, we can use the (±)(0, infinitesimal, large) Q-space [Raim86] to reason about the quantities. To determine the current flowing in the circuit we use Ohm's law to find

I = V/(R_o + R_L)

If R_o ≪ R_L then R_o can be neglected w.r.t. R_L, i.e.

R_o + R_L = R_L

Therefore I = V/R_L.

Reasoning in the (±)(0, infinitesimal, large) Q-space can bring about ambiguity if two quantities are multiplied. Consider the example of Figure 1c. Here we represent the load by a capacitor C in parallel with the load resistance R_L. The combination is in series with an inductance L.

Admittance(R_L ∥ C) = ωC + 1/R_L [Purc65]

Each type of variable in this equation has its own threshold. That is because different types of variables have different units. For example, it does not make sense to compare frequency and resistance. If we know that frequency has a threshold ω_t, resistance has a threshold R_t, and capacitance has a threshold C_t, it is not necessary that ω_t C_t = 1/R_t even though they have the same units. It is therefore not possible to compare ωC and 1/R in this Q-space. It is also not possible to compare R and ωL, the impedance of the inductance L. Hence it is not possible to know if any of the quantities in the admittance can be neglected. A threshold must be chosen for each comparison that is made. In order to do this we need to move to Q-space 3.
If we know that ω ~ 10^5 and C ~ 10^-12, then ωC ~ 10^-7. Similarly, if R_L ~ 10^3, then 1/R_L ~ 10^-3. If we set the threshold at 10^-4, we find that 1/R_L ≫ ωC, so

1/R_L + ωC = 1/R_L

The impedance of R_L ∥ C is R_L, and the capacitance C can be deleted from the model. Hence the current I flowing through the circuit is V/(R_o + ωL + R_L). Here again it is not possible to compare R_o and R_L. If we move back to Q-space 3 we find that L ~ 10^-10 and its impedance ωL ~ 10^-5. If R_o ~ 10^-3 then we can set the threshold at 10^0, and R_o ≪ R_L and ωL ≪ R_L. Hence these two quantities can be neglected w.r.t. R_L and I = V/R_L.

Let us now consider an example that has more components. Figure 2 shows the circuit for a positive voltage follower. The model for the operational amplifier has the following parameters:

Bias current I_b ~ 10^-10 A
Input resistance R_i ~ 10^12 Ω
Input capacitance C_i ~ 10^-12 F
Cutoff frequency ω_c ~ 10^7 Hertz
Output voltage V_o ~ 10^1 Volts
Gain K ~ 10^2
Output resistance R_o ~ 10^-2 Ω
Biasing resistors R_1 and R_2 ~ 10^5 Ω
Load resistor R_L ~ 10^3 Ω

The voltage source has a
Voltage V_i ~ 10^0 Volts
Output resistance R_in ~ 10^5 Ω
Frequency ω ~ 10^4 Hertz

On analyzing this circuit we find that Q-space 2 is not sufficient to remove ambiguities. We need to go to Q-space 3 as in the previous example. We find that 1/ωC_i ≪ R_i, therefore R_i can be dropped from the model. V_i ωC_i ≫ I_b, therefore I_b can be dropped from the model. R_in ≪ 1/ωC_i. With these simplifications to the model, the voltage at the input to the operational amplifier is the same as V_i. Similar reasoning reduces the circuit to the one shown in Figure 3.

Figure 2. A positive voltage follower. The top figure shows the circuit using an operational amplifier and the bottom is the model of the operational amplifier.

Figure 3. The circuit after simplifications reached by analysis at Q-spaces 2 and 3.
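The dynamic-threshold reasoning above can be mimicked numerically. In this sketch the threshold is chosen as the geometric mean of the two magnitudes being compared, which is one plausible choice rather than the paper's own rule:

```python
import math

def negligible(small, big, threshold):
    """True when 'small' falls below the threshold and 'big' does not,
    so 'small' can be dropped from a sum with 'big'."""
    return small < threshold <= big

# Values from the RL || C example: w ~ 1e5, C ~ 1e-12, RL ~ 1e3.
w, C, RL = 1e5, 1e-12, 1e3
wC, invRL = w * C, 1 / RL                # 1e-7 vs 1e-3
threshold = math.sqrt(wC * invRL)        # 1e-5, between the two magnitudes
print(negligible(wC, invRL, threshold))  # True: C is deleted from the model
```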
The equations for this circuit are

V− = V_o(R_1/(R_1 + R_2))
V+ = V_i
V_o = K(V+ − V−)

Hence V+ − V− = (V_o/K) ~ 10^-1 and V+ ≅ V−. If V+ and V− are represented in Q-space 3, or lower, then the difference is indeterminate.

Murthy 299

Therefore to simulate the circuit we need to represent all the variables in Figure 3 in Q-space 4 with at least 3 significant digits.

The qualitative values a variable can have, A_1 ... A_n, represent disjoint abutting intervals that cover the entire number line [Joha85a]. I define the set of values (A_1 ... A_n) as the Q-space. In this paper we have proposed a set of 4 Q-spaces that are useful in engineering problem solving. They allow us to represent the sort of relations that are useful in making engineering approximations.

The Q-spaces that we describe are chosen because relationships that hold between quantities in a Q-space with lower resolution hold in a Q-space with a higher resolution. If the results are indeterminate, going to a Q-space with a higher resolution may resolve the conflict. Thus >, < and = can be represented in all 4 spaces. ≫, ~ and ≅ can be expressed in Q-spaces 2, 3 and 4. In Q-spaces 3 and 4 the logarithmic distance between two numbers can be expressed. In Q-space 4 with n significant digits we can express the difference of two numbers q_1 and q_2 where q_1 − q_2 ~ 10^-n.

Q-space 4 has the advantage that it is similar to the way numbers are represented on machines. There is a calculus for obtaining error bounds with such arithmetic. As the number of significant digits increases this Q-space approximates the real line. It is possible to have a different break up of the number line, for example for temperature. We also advocate choosing the threshold in Q-space 2 dynamically. Each comparison involves different quantities, and by moving from Q-space 3 to 2 we are able to set our threshold dynamically. There is a many-one mapping from Q-space 4 to 3: one just ignores the significant digits.
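The mappings between Q-spaces can be sketched as successive projections (our own illustrative encoding, not the paper's):

```python
import math

def qspace3(x, base=10):
    """Q-space 4 -> 3: keep the sign and integer exponent, drop the digits."""
    if x == 0:
        return ('0', None)
    sign = '+' if x > 0 else '-'
    return (sign, math.floor(math.log(abs(x), base)))

def qspace2(x, threshold):
    """Q-space 3 -> 2: compare the magnitude against a threshold."""
    sign, _ = qspace3(x)
    if sign == '0':
        return (sign, None)
    return (sign, 'large' if abs(x) > threshold else 'infinitesimal')

def qspace1(x):
    """Q-space 2 -> 1: only the sign is retained."""
    return qspace3(x)[0]

print(qspace3(3.2e4))       # ('+', 4)
print(qspace2(3.2e4, 1.0))  # ('+', 'large')
print(qspace1(-3.2e4))      # '-'
```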
To go from Q-space 3 to 2 one needs to compare the variable to the appropriate threshold:

log(q) > log(threshold) implies q is large.
log(q) < log(threshold) implies q is infinitesimal.

Moving from Q-space 2 to 1 is trivial: only the sign is retained. A device is analyzed at the lowest possible resolution. If ambiguities result, we move to a higher resolution Q-space till the ambiguity is resolved. Using this technique we get as general a description of the device as possible.

5. Conclusions

We describe a scheme to analyze devices at multiple levels of resolution. We propose that 4 Q-spaces be used in qualitative analysis. These smoothly span the range from (±)(0, non-zero) to the real number line. Analysis is performed at the lowest possible resolution until ambiguities occur. To resolve ambiguities in a Q-space with a lower resolution, we move to a Q-space with a higher resolution. This paradigm allows us to obtain the most general description of the working of a device.

6. Acknowledgements

This paper has benefited greatly from discussions with Peter Blicher, R. Bhaskar and Ruud Bolle.

7. References

Forb85. Forbus, Kenneth D. Qualitative Process Theory, pages 85-168. In Bobrow, Daniel G., Qualitative Reasoning about Physical Systems. The MIT Press, 1985.

Joha85b. de Kleer, Johan. How Circuits Work, pages 205-281. In Bobrow, Daniel G., Qualitative Reasoning about Physical Systems. The MIT Press, 1985.

Joha85a. de Kleer, Johan and Brown, John Seely. A Qualitative Physics based on Confluences, pages 7-84. In Bobrow, Daniel G., Qualitative Reasoning about Physical Systems. The MIT Press, 1985.

Mavr87. Mavrovouniotis, M. L. and Stephanopoulos, G. Reasoning with Orders of Magnitude and Approximate Relations. Proceedings of the Sixth National Conference on Artificial Intelligence, 1, July 1987.

Purc65. Purcell, E. M. Electricity and Magnetism. McGraw-Hill Book Company, 1965.

Raim86. Raiman, Olivier. Order of Magnitude Reasoning.
Proceedings of the Fifth National Conference on Artificial Intelligence, 1, July 1986.

Will85. Williams, Brian C. Qualitative Analysis of MOS Circuits, pages 281-347. In Bobrow, Daniel G., Qualitative Reasoning about Physical Systems. The MIT Press, 1985.
Setting up Large-Scale Qualitative Models

Brian Falkenhainer and Kenneth D. Forbus
Qualitative Reasoning Group
Department of Computer Science
University of Illinois at Urbana-Champaign
1304 W. Springfield Avenue, Urbana, Illinois 61801

Abstract

A qualitative physics which captures the depth and breadth of an engineer's knowledge will be orders of magnitude larger than the models of today's qualitative physics. To build and use such models effectively requires explicit modeling assumptions to manage complexity. This, in turn, gives rise to the problem of selecting the right qualitative model for some purpose. This paper addresses these issues by describing a set of conventions for modeling assumptions. Simplifying assumptions decompose a domain into different grain sizes and perspectives which may be reasoned about separately. Operating assumptions reduce the complexity of qualitative simulation by focusing on particular behaviors of interest. We show how these assumptions can be directly represented in Qualitative Process theory, using a multi-grain, multi-slice model of a Navy propulsion plant for illustration. Importantly, we show that model selection can often be performed automatically via partial instantiation. We illustrate this technique with a simple explanation generation program that uses the propulsion plant model to answer questions about physical and functional characteristics of its operation.

1 Introduction

A long-range goal of qualitative physics is to develop systematic models that capture the breadth and depth of human reasoning about the physical world. Such models will be crucial for future intelligent computer-aided design and tutoring systems. Clearly, they will need to be orders of magnitude larger than today's models. Furthermore, they must capture phenomena at several levels of detail, and from a variety of perspectives. Building such models raises several new issues for qualitative modeling:

1.
Organization problem: How can we organize a model that captures phenomena at a variety of grain sizes and perspectives?

2. Relevance problem: Generating all possible states becomes intractable as the size of the system modeled grows. Even if we could generate them all, often we only care about a subset of the behavior. How can we use qualitative simulation in a more focused way to answer questions?

3. Selection problem: As models get larger, complete instantiation becomes both undesirable and impossible. No one understands spilling a cup of coffee via quantum mechanics. Furthermore, some of the perspectives in a model will be mutually incompatible. How can an appropriate subset of a model be selected for reasoning, given a particular question?

This paper addresses each of these issues. In particular, we claim the key idea in solving all of them is a set of conventions for explicitly representing modeling assumptions. We introduce explicit simplifying assumptions to solve the organization problem by providing "scoping", delimiting when descriptions are and are not applicable. We introduce operating assumptions to describe standard behaviors or default conditions. We illustrate how, using these conventions, the selection problem can in some cases be solved automatically via partial instantiation. These conventions are illustrated using a multi-grain, multiple perspective high-level model of a Navy propulsion plant. We demonstrate our solution to the model selection problem by showing how, in the context of a tutoring system, the form of a question can be analyzed so that the appropriate set of modeling assumptions can be automatically computed.

In the next section we outline our perspective on qualitative modeling, showing the need for explicit modeling assumptions to control model instantiation and use. Section 3 gives a brief tour of the steam plant and its qualitative model which provides our motivating example.
Section 4 describes our conventions for modeling assumptions, and Section 5 shows how they are used to organize the steam plant model. Section 6 describes our algorithm for automatically computing a minimal set of simplifying assumptions for a given query. Finally, we discuss directions for future research.

2 The Modeling Process

We call the system or situation being modeled the scenario, and its qualitative model the scenario model. The simplest way to build a scenario model is to create a model of that specific scenario for a particular purpose. While useful systems may be built this way, it is also easy to generate ad hoc models of dubious merit, where the model must be thrown away whenever the scenario or purpose changes slightly. An indirect route is more robust: build first a general-purpose domain model, which describes a class of related phenomena or systems. Ideally, a scenario model can be built by instantiating and composing descriptions from the domain model. Developing a domain model requires more initial work, but it simplifies generating models for a range of scenarios. Furthermore, ad hoc aspects of models are more likely to be discovered if the same descriptions are re-used in a variety of settings.

So far, we have stated the commonplace view of modeling in qualitative physics. Qualitative Process theory [3] organizes domain models around processes, which can be automatically instantiated to form scenario models. Device-centered ontologies [1; 14] provide catalogs of devices, which can be composed to build scenario models. (Kuipers' QSIM [8] does not provide any abstraction or organizing structure for domain models itself, but one could imagine using it with either ontology.) Unfortunately, as we have attempted to build more realistic models, we have discovered that this view is inadequate.

Falkenhainer and Forbus 301
This view breaks down in two ways for complex domain models. First, higher fidelity models are simply bigger than lower fidelity models. Representing fluids in detail, for instance, requires geometric information about the placement of portals, descriptions of head at every distinguishable place, models of fluid resistance in pipes, and so forth. We have built such models (which turn out to be several times larger than the models in [3]), and even on simple situations they swamp our machines.

Only part of the problem is technological. Even if our computers ran infinitely fast, for most purposes we simply don't need or want such detailed answers. When we do need the details, it is typically about a very narrow range of behaviors. This scaling problem becomes even more acute when faced with modeling the kind of propulsion plant studied in STEAMER [5], which used a numerical model that contained hundreds of parameters. The stock AI answer is "hierarchy", but how should it be done?

The second breakdown comes from the use of multiple perspectives. In some cases, a feed tank is best viewed as an infinite capacity liquid source. In other cases, it should be viewed as a container which may be emptied. One cannot consistently take both views at once (or one does so with dire consequences). One solution would be to create multiple, distinct models, one for each perspective and purpose. Doing so would significantly raise the difficulty of the selection problem, and make knowledge acquisition and maintenance nearly impossible. We must find ways for incompatible perspectives to peacefully coexist in a single domain model.

These issues have been addressed before in qualitative physics, albeit partially and informally. de Kleer and Brown, for instance, describe class-wide assumptions, which roughly correspond to our use of simplifying assumptions. However, this notion has never been formalized nor explicitly used as part of their programs or models [7].
So far, the device ontology in qualitative physics has inherited a limitation from System Dynamics [10] upon which it is based: the process of mapping from the "real-world" scenario to a device model lies outside the theory.

Qualitative Process theory was designed with such problems in mind. The descriptions of the domain model are automatically instantiated by a QP interpreter, thus - in theory - providing the means for modeling assumptions to be explicitly represented. This paper describes a set of conventions for exploiting this power.

3 A Steam Plant Model

Since steam plants are not everyday systems, we begin with a brief description of steam propulsion plants, and the highlights of our model. Figure 1 shows an abstract view of propulsion plants adapted directly from Navy training materials [9].

Figure 1: Simplified model of a navy steam-powered propulsion plant.

The primary components operate in the following fashion:

Boiler assembly. The boiler assembly takes in distilled water and fuel and produces superheated steam. Most surface ships use several boilers, but this can be ignored. The heat is supplied in most ships by an oil-burning furnace. The steam produced by the boiler is fed through the superheater, which increases its temperature in order to get more work out of it.

Turbines. The superheated steam then enters the turbines, which produce work (by driving the ship's propellers), resulting in the temperature, pressure, and kinetic energy of the steam dropping.

Condenser assembly. The steam exhausts from the turbine to the condenser, where it is cooled by circulating sea water and condensed again into liquid.

Feed pumps. A series of pumps transport the condensate back to the boiler assembly, where the cycle begins again.

Our model captures the first few "high-level" models of the steam plant, with various perspectives.
Some questions that can be answered with the model currently are illustrated in Figure 2. We have focused only on the main steam cycle, ignoring support systems such as lubrication and distillation. We only represent the highest levels of structural abstraction, throwing away all geometric information. Even so, we believe this is the largest qualitative model built to date. The domain model includes definitions of 8 object types, 23 views, and 14 processes. (Expanding these into Horn clauses yields 1566 "axiom-equivalents".) During the partial instantiation computation on the plant model, 21 processes, 55 views, and 79 quantities are created. (This works out to 8617 instantiated Horn clauses in the ATMS database.) A Symbolics machine has never lasted through a total envisionment of the full model. But using the techniques described in this paper, the envisionments typically take a few minutes.

4 Modeling Assumptions

Conceptually, we view setting up and using a scenario model as a process of filtering potential instantiations and behaviors. Ideally, the "raw input" takes the form of a true structural description, whose terms are physical objects such as pipes, tanks, sumps, butterfly valves, and so

Figure 2: Some questions the model can answer. Here are some answers generated by an implemented query system using the steam plant model. The questions were formulated in a specialized query language. The explanations are automatically generated by the program. The size of the subset of the model instantiated for the query is listed after each question.

Q: What affects the efficiency of the plant?
The efficiency of the plant is affected positively by the work rate of the turbine. It is also affected negatively by the energy input to the plant.
15 quantities, 41 inequalities, 9 processes, 9 views, and 9 situations.

Q: What is causing black smoke to rise from the furnace?
Black smoke is rising from the furnace because it is not the case that the fuel/air ratio of the furnace is less than the F/A saturation point for the furnace.
4 quantities, 65 inequalities, 8 processes, 15 views, and 21 situations.

Q: How many mass flows are there?
There are 4 mass flows:
1. A flow of water from the condenser to the feed pump.
2. A flow of steam from the turbine to the condenser.
3. A flow of steam from the boiler to the turbine.
4. A flow of water from the feed pump to the boiler.
25 quantities, 89 inequalities, 7 processes, b0 views, and 15 situations.

on. The output is a scenario model, which is then analyzed. The first filter in the modeling process is a mapping from the raw input into a set of structural abstractions, which capture the essentials of that system relevant to a particular analysis. For example, a collection of pipes and valves might be recast as an abstract fluid path, which may or may not be aligned.¹ If parts of the structure do not play a role in the behaviors of interest, then those parts may be thrown away. For example, in considering the thermodynamic properties of the main steam cycle in a propulsion plant, one ignores the multitude of drain valves and pipes, since they are only used during system startup and shutdown.

The next filter selects the relevant grain size and perspectives by specifying a set of simplifying assumptions. Answering a student's questions about the overall functioning of the plant, for instance, does not require instantiating a detailed model of lubrication flow. We take as our inspiration Sussman's slices notion [11], where results from multiple perspectives could be combined in synthesizing engineered systems. In Sussman's system the language for specifying perspectives was domain-dependent (i.e., electronic circuits), and instantiation decisions were made by hand.

¹ A fluid path is aligned exactly when all of its valves are open.
By contrast, our techniques should work for any phenomena expressible in QP theory, and we also address the problem of automatic perspective selection.

At this stage the model is ready for analysis. Often this analysis requires qualitative simulation, which itself can be tuned by imposing operating assumptions to filter out irrelevant behaviors. For teaching basic plant operation, for instance, the steady-state behavior is crucial, while the intermediate states between "cold iron" (i.e., completely off) and normal operation are irrelevant.

Here we describe how modeling assumptions can be organized for model and behavior selection. We assume structural abstractions as inputs, and ignore the problem of computing them from structural descriptions.

4.1 Simplifying assumptions

A common technique for managing complexity is to ignore irrelevant details. A prerequisite for ignoring details in setting up a model is some means to "turn off" certain aspects of it. Consequently, we require every description in the domain model to depend on explicit simplifying assumptions (except for those which are always to be instantiated). These take the form CONSIDER((specifier)). The collection of CONSIDER assumptions forms the groundwork of any particular analysis. For instance, in the steam plant model we provide the ability to selectively instantiate thermal properties with the following description:

(defView (Thermal-Physob ?physob)
  Individuals ((?physob :type Physob
                        :conditions (CONSIDER (Thermal-Properties ?physob))))
  Relations ((quantity (Temperature ?physob))
             (quantity (Tboil ?physob))
             (quantity (Tfreeze ?physob))
             (Greater-Than (A (Tboil ?physob)) (A (Tfreeze ?physob)))
             (not (Less-Than (A (Temperature ?physob)) zero))))

The thermal properties of an object will be instantiated exactly when this CONSIDER assumption is believed.
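The gating behavior can be pictured with a small Python sketch (our own rendering; QPE's actual instantiation machinery is of course different):

```python
# Sketch: a view is instantiated only when its CONSIDER assumption
# is believed in the current context.
def instantiate_thermal_physob(physob, beliefs):
    if ('CONSIDER', 'Thermal-Properties', physob) not in beliefs:
        return None  # description stays uninstantiated
    return {
        'quantities': [('Temperature', physob),
                       ('Tboil', physob),
                       ('Tfreeze', physob)],
        'relations': [('Greater-Than', ('Tboil', physob), ('Tfreeze', physob)),
                      ('Not', ('Less-Than', ('Temperature', physob), 'zero'))],
    }

beliefs = {('CONSIDER', 'Thermal-Properties', 'boiler-water')}
print(instantiate_thermal_physob('boiler-water', beliefs) is not None)  # True
print(instantiate_thermal_physob('drain-pipe', beliefs))                # None
```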
Representing simplifying assumptions imposes new responsibilities on the domain modeler. The model must be organized so that local decisions about relevance force a coherent subset of the model to be constructed. For instance, if thermal properties are considered in one part of a steam plant, they should also be considered in connected parts. Such coherence can be enforced by establishing logical dependencies between CONSIDER assumptions. For example, we divide our model into operating blocks and functional blocks to control granularity. An operating block corresponds to a system or subsystem which must be considered at a uniform level of detail. A functional block is like an operating block, but only has input-output behavior - its internal details are hidden at that resolution. If we are focusing on a particular level of a system, we want to treat its components as functional blocks. This is enforced by a rule in the model whose content is:

∀s ∀c [System(s) ∧ CONSIDER(Operating-Block(s)) ∧ Has-Part(s, c)
        ⇒ CONSIDER(Functional-Block(c))]

Simplifying assumptions can also control perspectives. For example, in some circumstances it is appropriate to consider the thermal properties of all contained stuffs at a given level of detail. In our model this is expressed by the assertion CONSIDER(thermal-properties), whose consequence is:

CONSIDER(thermal-properties) ⇒
  ∀st [Contained-Stuff(st) ⇒ CONSIDER(thermal-properties(st))]

In other cases we want to focus on just particular substances inside certain containers. We say this by CONSIDER(thermal-in(sub, can)), where sub is a substance and can is a container. The implication of this assumption is:

CONSIDER(thermal-in(sub, can)) ⇒
  ∀s [State(s) ⇒ CONSIDER(thermal-properties(C-S(sub, s, can)))]

That is, if we are thinking about water in the boiler, we must consider both liquids and steam.
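The coherence rules above amount to forward chaining over CONSIDER assumptions. A minimal sketch (the part structure here is invented for illustration):

```python
def close_considers(considers, systems, has_part):
    """Propagate: considering a system as an operating block forces each
    of its parts to be considered as a functional block, to a fixpoint."""
    agenda = list(considers)
    while agenda:
        kind, x = agenda.pop()
        if kind == 'Operating-Block' and x in systems:
            for part in has_part.get(x, ()):
                fact = ('Functional-Block', part)
                if fact not in considers:
                    considers.add(fact)
                    agenda.append(fact)
    return considers

parts = {'steam-plant': ['boiler', 'turbine', 'condenser']}
out = close_considers({('Operating-Block', 'steam-plant')},
                      {'steam-plant'}, parts)
print(('Functional-Block', 'turbine') in out)  # True
```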
4.2 Operating assumptions

Engineers constantly use default assumptions about behavior to manage complexity. For example, when trying to figure out how a system containing a heat exchanger works, engineers tend to assume that the fluid in the hot leg is hotter than the fluid in the cold leg. If the system is operating as intended, making this assumption saves effort because the other two alternatives (i.e., the temperatures being equal or the cold leg temperature being higher than the hot leg temperature) need not be considered. If the system is not operating as intended then the engineer's predictions will be wrong and the analysis must be re-performed to consider the other alternatives.

Several kinds of operating assumptions are useful. The simplest are local restrictions over the space of possible behaviors. For instance, one might assume that the temperature in the boiler is higher than that of the condenser. More typically, collections of restrictions are gathered to describe operating modes of the system. The collection of assumptions about heat exchangers above can be collected into an individual view to form the "normal mode" of the device.² A steam plant has several operating modes, starting from "cold iron" and ending in "full steam", and each subsystem has modes as well. Forcing a system to be in a particular mode dramatically reduces the number of predicted behaviors.

Not all operating assumptions are organized into modes. In analyzing a new thermal system, for instance, one may first focus on its steady-state behaviors. Our model defines the concept of a system s being in steady state with respect to a given thermodynamic property q as follows:

Steady-State(s, q) ⇒
  ∀p ∀st [Has-Part(s, p) ∧ Contained-Stuff(st) ∧ Container(st) = p
           ⇒ D[q(st)] = zero]

Two important caveats must be remembered when using operating assumptions. First, they must respect the simplifying assumptions in force.
For example, it is inconsistent to both force the boiler's temperature to be constant and to not consider the thermal properties of the boiler. The easiest way to ensure such consistency is to only include operating assumptions in descriptions which contain the appropriate simplifying assumptions as prerequisites. Second, care must be taken not to rule out possible behaviors which are actually important for the task. In the initial stages of a design, for instance, it may be useful to suppress fault models and concentrate on steady-state behavior, but it could be disastrous to continue making those assumptions in later stages. No modeling discipline can completely prevent such mistakes. The advantage of our conventions is that such assumptions are at least explicit in the analysis, rather than implicit (say, in the choice of one domain or scenario model over another).

² Discussions with engineers indicate that most process designers tend to have detailed models for only one or two operating modes of a system, hence normal mode makes sense in many cases. But for systems with many defined operational regions, the idea of a normal mode doesn't make much sense.

5 Organization of the Model

Here we return to the steam plant model, and show how these ideas are used in its organization.

5.1 Granularity

The model has three distinct levels of granularity (see Figure 3), which we describe here.

Unheated closed thermodynamic cycle: The propulsion cycle is treated as a black box, with heat flowing in and work flowing out. This level is useful for describing global properties of the system, such as efficiency. This level is predicated on CONSIDER(Operating-Block(Steam-Plant)).

Contained stuffs: Working fluids are explicitly represented using the contained stuffs ontology [6].
At minimum the volumetric properties (e.g. amount and pressure) are represented, but thermal properties (e.g. temperatures, thermal mixing) can also be included, according to the chosen perspective. This description is locally predicated on statements like CONSIDER(Operating-Block(boiler)) or globally established by CONSIDER(Volumetric-Properties).

Boiler assembly: The boiler is the heart of the plant, so we include an additional level of detail about it. This level represents the furnace explicitly, including the effects of fuel/air ratio on heat production rate and efficiency. This level is predicated on CONSIDER(Operating-Block(furnace)). The furnace may be examined independently or in the context of the rest of the plant. When being examined independently, idealized sources and sinks are instantiated to provide an "exterior" for the system.

Figure 3: Differing views of the propulsion plant.

5.2 Perspective

Perspectives allow irrelevant parts of a model to be turned off. Not all perspectives are consistent with every level of granularity. In our model, the following perspectives are supplied:

Volumetric properties: As mentioned above, this perspective is mandatory with the contained-stuff level. A special process describes the volumetric effects of phase changes without invoking thermal properties.

Thermal properties: Heats, temperatures, and thermal effects of mixing are considered in this perspective. A thermal perspective may be introduced for any component or system S by asserting CONSIDER(Thermal-in(water, S)).

Boiler fault models: In operating a plant it is important to keep the water level within a certain range. Too low, and the boiler can melt. Too high, and water
Since steam is moving through the superheater faster than sound, these water droplets can cause tremendous damage. Assert- ing Consider(Fault (fluid-level, boiler) > instantiates level as an explicit quantity, qualitatively proportional to amount, and installs these problems as a possibility. Furnace fault models: The fuel/air ratio in the furnace is also tightly controlled. If the mixture is too rich, black smoke comes out of the stacks, which is bad if you want not to be seen. If the mixture is too lean, white smoke appears. In either case, fuel efficiency drops dramati- cally. These problems are instantiated as possibilities by Consider(Fault(Exhaust-type,Furnace)). ode1 selection for question-answering Our conventions for modeling assumptions provide a framework for organizing large-scale qualitative models, but this effort is useless unless we can effectively select just the right aspects of a model to use for a particular task. The general model selection problem is extremely hard, and depends critically on the nature of the task. Conse- quently, we focus on a particular task, namely answering questions in an instructional setting. We only address the problem of selecting appropriate simplifying assumptions; the problem of ascertaining the right operating assump- tions is beyond the scope of this paper. An inteiligent tutoring system consists of several compo- nents, including a student model, Jialogue manager, and domain expert j13]. Given a question, our task is to find a subset of the model that (a) suffices to answer the question and (b) minimizes extraneous details. A simple question about whether or not phase changes happen in the boiler should not be answered with a soliloquy on the possibil- ity of black smoke rising from the stack. Furthermore, we would like to insulate the tutoring system from the internals of the model as much as possible. Ideally, we would like to create a set of question-answering routines that would work with any QP models. 
Such routines could form the core of a generic tutoring system which, given a QP model and appropriate nomenclature and display information, could produce reasonable explanations (in the manner of [2]).

We make only the plausible assumption that these routines can identify which parts of a query are descriptions which must be supplied by the qualitative model. These descriptions might be specifications of quantities, such as efficiency, or relationships, such as a liquid flow occurring.

Our algorithm assumes the qualitative simulator uses an ATMS. (It could be modified, at greatly reduced efficiency, to work with another kind of TMS.) In an ATMS, a fact can be asserted as true or false, with the usual meaning, except once asserted, such facts can never be retracted. A fact may also be assumed, which means it may or may not appear as part of some context (i.e., environment). Our qualitative simulator, QPE [4], exploits this distinction by not instantiating descriptions when their prerequisites (such as CONSIDER assumptions) are asserted false, since they could never hold in any consistent context.

Falkenhainer and Forbus 305

Given the structural description for a particular scenario, and a list of query descriptions, we find the minimal appropriate set of simplifying assumptions as follows:

1. Expand the structural description using the domain model. This involves finding instances of process and view instances, as well as creating theoretical entities such as contained-stuffs.
2. Assume (not assert!) every possible CONSIDER statement.
3. Create a new node, QUERY, justified by the conjunction of the query descriptions.
4. Find the environment in the label for QUERY which has the minimum number of CONSIDER assumptions.
5. Return these CONSIDER assumptions as the result.
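The label-minimization step of this algorithm can be sketched in Python. This is a simplified stand-in for an ATMS, not QPE's actual data structures: each description carries a label (a list of environments, i.e. sets of CONSIDER assumptions under which it holds), the QUERY node's label is formed by unioning one environment per conjunct, and the smallest resulting environment is returned. The assumption names below are illustrative.

```python
from itertools import product

def query_label(description_labels):
    """Combine ATMS labels: a conjunction's label is the set of
    subsumption-minimal unions of one environment per conjunct."""
    envs = {frozenset().union(*combo) for combo in product(*description_labels)}
    # Drop any environment that strictly contains another (non-minimal).
    return [e for e in envs if not any(o < e for o in envs)]

def minimal_consider_set(description_labels):
    """Pick the environment with the fewest CONSIDER assumptions."""
    return min(query_label(description_labels), key=len)

# Hypothetical labels: each query description holds in these environments.
labels = [
    [frozenset({"CONSIDER(Contained-Stuff(boiler))"})],
    [frozenset({"CONSIDER(Operating-Block(furnace))"}),
     frozenset({"CONSIDER(Operating-Block(furnace))",
                "CONSIDER(Thermal-in(water, boiler))"})],
]
best = minimal_consider_set(labels)
```

Here the two-assumption environment supports both descriptions, so the thermal perspective is never instantiated, which is exactly the "no soliloquy on black smoke" behavior the paper asks for.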
Envisionment can now proceed, beginning with the expansion process again, but with the minimal appropriate simplifying assumptions asserted as true, and any CONSIDER assumptions not believed as consequences of them asserted as false. The query system illustrated in Figure 2 used this algorithm to determine what aspects of the model to instantiate.

7 Discussion

The establishment of conventions for modeling assumptions is crucial for the organization and use of large-scale domain models. We introduced simplifying assumptions, in the form of CONSIDER statements, as a means of selecting grain size and perspective. We described how operating assumptions, such as steady-state, could be specified to filter possible behaviors. We have tested these techniques by building a multi-grain, multi-perspective model of a Navy propulsion plant which is significantly larger than any previous qualitative model. We further showed how a particular part of the model selection problem, finding a minimal appropriate set of simplifying assumptions, could be solved automatically by analyzing a partial instantiation of a model with respect to a particular question.

The issues we have addressed are relatively new, but we think we have made substantial progress on them. Much remains to be done, such as figuring out an automatic solution to selecting operating assumptions for an instructional context. We are currently extending our collection of generic query routines, with the long-range goal of providing a QP toolkit for building intelligent tutoring systems.

We are still a long way from building the kind of qualitative model we ultimately desire. We believe a qualitative model sufficient to support the full range of reasoning an intelligent tutor would need about the steam plant - the kind of model sought in the STEAMER project - would be about ten times larger than our current model.
For example, there are at least three levels of detail below the finest grain of our current plant model which would be useful in intelligent tutoring systems. These new levels of detail will in turn require introducing new perspectives. To explain how a jet pump works, for instance, requires substantial geometric reasoning. No existing qualitative physics can handle the mixture of dynamics and geometry involved, and more research is needed to extend the range of phenomena we can cover.

The kind of analysis we have focused on here, explanation generation, is one of the simpler uses for a qualitative model. We suspect these ideas will prove useful for other types of analyses as well (viz Slices), but this remains to be explored. For many analyses, the mapping from structural description to structural abstraction is the crucial step; doing it incorrectly can prevent consideration of important phenomena (such as ignoring resonance phenomena in the design of structures). The discipline of explicit modeling assumptions must be extended to this part of the modeling process, so that we can build engineering problem solvers whose analyses are trustworthy.

8 Acknowledgements

John Collins provided valuable commentary and technical assistance. This research was supported by an IBM Graduate Fellowship, by the National Aeronautics and Space Administration, Contract No. NASA NAG-9137, by the Office of Naval Research, Contract No. N00014-85-K-0225, and by an NSF Presidential Young Investigator Award.

References

[1] de Kleer, J. and Brown, J. "A qualitative physics based on confluences", Artificial Intelligence, 24, 1984.
[2] Forbus, K. and Stevens, A. "Using Qualitative Simulation to Generate Explanations", Proceedings of the Third Annual Conference of the Cognitive Science Society, August 1981.
[3] Forbus, K. "Qualitative process theory", Artificial Intelligence, 24, 1984.
[4] Forbus, K. "The Qualitative Process Engine", Technical Report No.
UIUCDCS-R-86-1288, December, 1986. To appear, International Journal of AI in Engineering, 1988.
[5] Hollan, J., Hutchins, E., and Weitzman, L. "STEAMER: An interactive inspectable simulation-based training system", AI Magazine, Summer, 1984.
[6] Hayes, P. "Naive Physics 1: Ontology for liquids" in Hobbs, R., Moore, R. (Eds.), Formal Theories of the Commonsense World, Ablex Publishing Corporation, Norwood, New Jersey, 1985.
[7] Iwasaki, Y. and Simon, H. "Causality in device behavior", Artificial Intelligence, 29, 1986.
[8] Kuipers, B. "Qualitative Simulation", Artificial Intelligence, 29, September, 1986.
[9] U.S. Navy, Principles of naval engineering, NAVPERS 10788-B, Prepared by the Bureau of Naval Personnel, 1970.
[10] Shearer, J., Murphy, A. and Richardson, H. Introduction to System Dynamics, Addison-Wesley, 1971.
[11] Stallman, R.M., and Sussman, G.J. "Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis", Artificial Intelligence 14 (1980) 1-39.
[12] Stevens, A., Roberts, B., Stead, L., Forbus, K., Steinberg, C., Smith, B. "STEAMER: Advanced computer-aided instruction in propulsion engineering", BBN Technical report, July, 1981.
[13] Wenger, Etienne, Artificial Intelligence and Tutoring Systems, Morgan Kaufmann Publishers, Inc., 1987.
[14] Williams, B. "Qualitative analysis of MOS circuits", Artificial Intelligence, 24, 1984.
Andrew Gelman, Susan Altman, Matt Pallakoff, Doshi, Manago, Rindfleisch, and Buchanan

Abstract

FRM is an experimental, knowledge-based system that assists in the judgmental aspects of budget planning and financial resource management. Problem solving in this domain requires many kinds of knowledge from many sources. We represent domain knowledge uniformly as constraints and view resource management and planning problems as constraint satisfaction and resolution tasks. We sketch here the financial resources management problem, our approach, and early results, concentrating on constraint representation and management issues in the system.

Preparing and managing budgets are knowledge-based activities that require substantial expertise to do well. These are constraint satisfaction tasks, in the abstract, where the constraints are symbolic as well as numeric, and are judgmental as well as definitional. They are large tasks in which the organization of knowledge is critical to their success.

The FRM system2 is a prototype working program that attempts to integrate many of the tasks an intelligent financial assistant should perform beyond the bookkeeping that a spreadsheet program does with numerical relations. It is an object-oriented system in which hierarchical organization among constraints, as well as among budget items and budgets themselves, is an important design principle. We use the same mechanisms to represent a hierarchy of perspectives under which to view the same financial information in different ways. Because of the nature of budgeting tasks, it is important also to represent temporal segments of budgets implicitly as sub-budgets and reason with them just as arbitrary collections of line items can be considered as sub-budgets. A uniform interface is provided by a form-filling system that is itself driven by constraints on how to present information under a perspective.

While constraints, perspectives, and hierarchies are the central themes of our work to date, we also include in FRM, and briefly report on, a replanning system that adjusts finished budgets in light of new information and an explanation system that presents audit trails or explanations under specified perspectives. FRM also includes a distributed database utility in its design, but not in its current implementation. Figure 1 shows the major components of the FRM system that are described in subsequent sections.

[Figure 1: The major components of the FRM system.]

2.1 User Interface: FORMAN

Form filling is a natural metaphor for the budgeting assistant, and a job that most managers will gladly turn over to an assistant. FORMAN is the FRM interface through which users create,

1 This work was funded in part by: DARPA under contract N00039-86-C-0033; Boeing Computer Services under contract W266875; a gift from Price Waterhouse Technology Centre; Lockheed Missiles and Space Company under gift l-72-LO31; NASA under cooperative agreement NCC2-274; and NIH under grant RR-00785.
2 FRM runs on Xerox 1186 machines and is implemented in the CLASS/HYPERCLASS object-oriented programming system [SmithR 86, Schoen 83]. CLASS and HYPERCLASS are trademarks of Schlumberger Technology Corporation and were formerly known as STROBE and IMPULSE respectively.

Gelman, Altman, Pallakoff, Doshi, Manago, Rindfleisch and Buchanan 31
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.
[Figure 2: User's view of a form during an FRM session. The screenshot shows a detailed NIH-style budget form with personnel, equipment, and other-expense sections, editing menus, and an explanation window reading "Ralston, A.: DirectCost = $26250.00 BECAUSE Employee Direct Cost = Yearly Salary X Level Of Effort".]

examine, and modify budgets. Users select items, with a mouse, on images of forms and invoke operations on the items by selecting commands from menus (see Figure 2). When a value on the form is changed, the system may change other values automatically or after consultation with the user as a result of applying domain knowledge. We have attempted to keep interactions simple and consistent by adopting menu-driven, object-oriented, and what-you-see-is-what-you-get (WYSIWYG) approaches to user interfaces.

A key design feature is the separation of data, stored in the CONFRM data managing module, from presentation information which is the domain of FORMAN.3 One datum may appear on several different forms concurrently. Conversely, a single form may be used repeatedly to view different budgets.

A form is defined as a collection of text, active cells, and sub-forms, all represented internally as objects. Sub-forms are forms themselves and may be displayed and edited accordingly. As an example, the form in Figure 2 has a sub-form labeled "PERSONNEL".

3 Ciccarelli's work [Ciccarelli 84] also emphasizes separating presentation information from data.

FORMAN has three main components: a form editor, a form data base, and a table that links items in the form data base to locations in the budget data base.
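The linking table just mentioned can be sketched as a small observer-style structure. This is a minimal stand-in, not CONFRM's actual interface; the class, slot, and location names are hypothetical. Each entry ties one database location to the form objects displaying its value, so a change flows through the table to every linked display.

```python
class LinkTable:
    """Sketch of a FORMAN-style link table: one database location
    maps to all the display callbacks that show its value."""
    def __init__(self):
        self.entries = {}   # location -> list of display callbacks
        self.database = {}  # location -> current value

    def link(self, location, display):
        self.entries.setdefault(location, []).append(display)

    def set_value(self, location, value):
        """The form asks the database to change a value; every linked
        display is then told to redisplay it."""
        self.database[location] = value
        for display in self.entries.get(location, []):
            display(location, value)

redrawn = []
table = LinkTable()
table.link("budget1.personnel.ralston.salary",
           lambda loc, val: redrawn.append((loc, val)))
table.set_value("budget1.personnel.ralston.salary", 35000.0)
```

Linking the same location to a second display callback would make one datum appear on several forms concurrently, as the paper describes.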
The form editor is built on the HYPERCLASS object editor and is responsible for creating and maintaining the graphic images of forms.4 Form structures are stored and classified hierarchically in the form database and can be specialized, copied and edited to create new form layouts. These layouts become views of budget data when a form's cells and sub-forms are linked to locations in the CONFRM database. A table object maintains these links. Each table entry points both to a CONFRM location and to all FORMAN objects that display the location's value. The table provides a means for FORMAN to instruct the database to change a value and for the database to tell FORMAN when a value needs to be redisplayed.

4 HYPERCLASS editors are hierarchies of CLASS objects that describe components of an editor (e.g., a window, command menus, main and sub-editors), along with message receivers and associated functions that perform the essential editing tasks.

32 Automated Reasoning

Early use of the system indicates that with flexibly defined forms and intuitive user interactions, FORMAN provides users of FRM with a powerful tool for creating views and using them to manipulate data. Further developments would increase FORMAN's utility. These include a database browser for linking forms to data, and improvements to the human interface of the copy and linking mechanisms.

2.2 Constraint Representation and Management

Spreadsheets operate with numerical constraints on the values of cells in a matrix. FRM extends the concept of constraints to include not only relations among numerical values, but also relations among names, titles, and other symbolic values. FRM encodes in constraints its knowledge of how to fill out or revise a form, and how to make substantive changes to budgets [Gelman 87]. The system recognizes that some constraints are strong and must be satisfied without exception, while others reflect weak preferences, with many judgmental considerations in between.
The language of constraints must be expressive enough to capture the following kinds of knowledge:

- Definitions -- the total cost of a budget is the sum of the costs of its sections;
- Rules & Policies -- a Principal Investigator must devote at least x% of his/her time to a project;
- Promises & Commitments -- if you support my student this quarter, I will support yours next quarter;
- Judgments & Preferences -- agency A is unlikely to support more than x% time for clerical support;
- Planning Heuristics -- try to support student researchers full time during the summer, giving preference to PhD candidates over MS candidates;
- Rebudgeting Strategies -- when reducing a budget's total cost, cut non-essential items before essential items.

The constraint whose syntax is illustrated in Figure 3 is a symbolic, preferential one that cannot be represented by a spreadsheet formula. When more than two part-time secretaries provide support in a budget, it may be desirable to create a view that combines the clerical components into a single "super-secretary" item. This constraint will detect such a situation and modify the structure of the current budget view, while retaining a detailed underlying representation for use when the extra detail is appropriate. Super-items are described in Section 2.3.

Still other kinds of constraints check on relationships between parts of a budget. For example, experience may show that telephone or supplies should be budgeted at a constant dollar amount times the number of full-time-equivalent employees. Such a constraint has a conditional corrective action. If no telephone expenses are yet budgeted, it creates a telephone expense with the indicated cost. If telephone costs are present but have a value inconsistent with the constraint, it updates the cost accordingly.

CONSTRAINT: SuperSecretary
  Arguments = ($Budget $Secretary $AllSecretaries)
  IF-Clause = (Type? $Secretary SECRETARY)
  THEN-Clause = (Less (Length $AllSecretaries) 3)
  CorrectiveActions = (CreateSuperItem $Budget $AllSecretaries)
  BindClause-1 = (BIND $Secretary (confrm PersonnelItems))
  BindClause-2 = (BIND $Budget (FindRoot $Secretary))
  BindClause-3 = (BIND $AllSecretaries (FindItems $Budget SECRETARY))
  Strength = 4
  Priority = 300
  ImposedBy = Agency A
  Source = Bittman
  Author = Ralston
  LastEdited = 1/01/88

Figure 3: Syntax for a typical symbolic constraint.

All of the FRM constraints have a common structure. A constraint is an object, created or edited through a specialized editor. The editor guides the input of slot values to ensure they are valid, and checks for consistency with pre-existing constraints [Altman 88]. A constraint may have any number of arguments, which will be bound to values at execution time. An individual clause is an expression consisting of arguments, constants, and the constraint language operators. The IF-clause corresponds to the preconditions of the constraint and is a logical expression made up of zero or more clauses. The THEN-clause is a conjunction of clauses that describe a desired state. Corrective actions are statements specifying database modifications to be invoked upon detection of a violation. Each argument has a binding clause that binds it to either a database location, the value stored at such a location, or to the result of a functional expression. The language allows bindings to be expressed in terms of other arguments in the same constraint. Arguments are bound dynamically during constraint evaluation as their values are needed. All bindings are generated from the initial binding of the enable argument of the constraint. The enable argument is the one corresponding to the datum whose changing value triggered the constraint; it may be a different argument each time the constraint is activated.

Links between budget data and the constraints are created at the time a constraint is loaded.
These links depend on binding the arguments to class objects in the database, and are used to enable the constraint when slot values are changed. Enabled constraints are added to a task agenda from whence they will be evaluated by the Constraint Manager/Scheduler. The scheduler decides which of the pending tasks has highest priority and executes it. The priority attribute of a constraint gives a default measure of the urgency of considering the constraint.

The evaluation process begins with the IF clauses of the constraint. The IF-Evaluator checks each of these clauses to see if the preconditions are met. If they are, the THEN-Evaluator is called to check for a violation of the desired relationship. If it is satisfied, no action is taken. Otherwise, corrective actions may be undertaken to force satisfaction. Possible actions include filling in or overwriting database values, creating or deleting budget items, calling the planner (see Section 2.3), or consulting the user about an unusual situation.

Our constraint language supports the specification of and reasoning about time intervals [Allen 84, Ladkin-A 86, Ladkin-B 86]. Temporal representation in constraints supports viewing time slices of budgets which are equivalent to sub-budgets along the temporal dimension. Constraints use appropriate rate computations that differentiate, for example, between yearly and monthly rates, and language operators implicitly handle variables whose values change over time. We provide a set of operators describing primitive temporal relations as well as higher level operators to manipulate intervals. Our extrapolation constraints provide a way to project a budget from one time interval to another using the time operators and methods that convert relative time intervals to absolute ones.

Constraint hierarchies allow users more control over the invocation of families of constraints.
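The IF/THEN/corrective-action evaluation cycle described above can be sketched as follows. This is purely illustrative: FRM's clauses are expressions in its own constraint language, not Python callables, and the class and field names here are invented. The example mirrors the SuperSecretary constraint of Figure 3.

```python
class Constraint:
    """Minimal sketch of an FRM-style constraint: preconditions (IF),
    a desired state (THEN), and a corrective action run on violation."""
    def __init__(self, name, if_clause, then_clause, corrective, priority=0):
        self.name = name
        self.if_clause = if_clause      # preconditions
        self.then_clause = then_clause  # desired relationship
        self.corrective = corrective    # fix applied on violation
        self.priority = priority

    def evaluate(self, bindings):
        if not self.if_clause(bindings):
            return "not-applicable"
        if self.then_clause(bindings):
            return "satisfied"
        self.corrective(bindings)
        return "corrected"

# A SuperSecretary-like check: with three or more part-time
# secretaries, fold them into one "super-secretary" view item.
budget = {"secretaries": ["Chandler", "Lee", "Park"], "super_items": []}

super_secretary = Constraint(
    name="SuperSecretary",
    if_clause=lambda b: len(b["secretaries"]) > 0,
    then_clause=lambda b: len(b["secretaries"]) < 3,
    corrective=lambda b: b["super_items"].append(tuple(b["secretaries"])),
    priority=300)

result = super_secretary.evaluate(budget)
```

A scheduler in this style would simply evaluate enabled constraints in descending `priority` order, consulting the user when a corrective action is judgmental rather than definitional.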
Constraints are indexed by several attributes, such as expert source or strength. The user can load and delete groups of constraints using these or user-defined indices and thus have the system use one expert's preferences or any other desired combination of constraints. Similarly, evaluation of some constraints may be deferred during hypothetical sessions or in early stages of budget preparation.

There may be times when a manager decides to violate constraints, or is forced to compromise because of conflicts between constraints. FRM currently provides a simple means to manage these situations. Each constraint has a strength attribute, which indicates the importance of satisfying its relationship. It provides a quantified measure of the hardness or softness of the constraint. We believe negotiation expertise [Lax 86] is relevant when considering conflicting constraints that have different criteria for importance, and are looking at ways of incorporating this knowledge into the FRM planner.

2.3 Recursive Sub-budgets

The design of CONFRM was guided by the need for a flexible and extensible representation that allows for multiple hierarchies. A budget is often part of a larger budget in an organizational framework, and conversely may itself represent the merger of smaller sub-budgets. The FRM system must be able to display budget information at an appropriate level of abstraction. Also, a budget may be organized quite differently for presentation to different agencies (e.g. NIH as opposed to NSF). In order to satisfy these needs we have implemented the concepts of recursive sub-budgets and perspectives.

Several object hierarchies exist in the CONFRM subsystem, the most central being the taxonomic Canonical Representation Hierarchy (CRH). CRH class objects contain definitions of all budget object attributes, including slots for costs, descriptions, codes, etc. Object types become increasingly specialized as one moves downward through the CRH, e.g.
the object PersonnelItems has slots for EmployeeName and Salary, while EquipmentItems has a UnitCost slot. There are two main subtrees in the CRH, one a hierarchy of budget items, indivisible budget expense entities, the other of sections, which represent mergers of sub-budgets. Another CONFRM hierarchy contains ItemTypes, a collection of several hundred objects, each describing a recognized type of budget expense, e.g. "Telephone Costs" or "Books and Publications." Budget items may be made instances of these objects, through which they may inherit various slot values and constraints.

The ability to maintain multiple presentations of a single set of data is achieved through the use of perspectives. A perspective is a collection of objects and constraints that define a particular view of the full set of budget items. Each perspective has a designated root object. The sibling perspective objects form a tree of arbitrary depth below the root, successively refining the budget organization into sub-budgets. The objects at the leaves of the perspective tree are sets (or sub-budgets) of actual budget items from the canonical hierarchy (see Figure 4).

A perspective constraint may be associated with any leaf perspective object, e.g., Domestic Travel in Figure 4. Such a constraint describes the conditions whereby a budget item could be a sub-budget of the perspective, and would be loaded automatically when the perspective is activated. Suppose, for example, the user preparing a budget under the NIH perspective adds an item to the "Supplies" section and enters "Furniture" as its description. The perspective constraints linked to the description field will be evaluated and the one governing membership in the Equipment section will fire. The constraint's corrective action removes the item from the Supplies section and adds it to Equipment.
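The Furniture example above can be sketched as a simple classification walk over a perspective's leaf sections. The predicates standing in for perspective constraints, and the section names, are illustrative; FRM evaluates real constraint objects, not Python lambdas.

```python
def classify_item(perspective, item):
    """Sketch: place an item in the first leaf section whose
    membership constraint accepts it (hypothetical predicates)."""
    for section, accepts in perspective:
        if accepts(item):
            return section
    return "Other Expenses"  # fallback section

# A toy NIH-style perspective: ordered (section, membership test) pairs.
nih_perspective = [
    ("Equipment", lambda it: it["description"] in {"Furniture", "Computer"}),
    ("Supplies",  lambda it: it["yearly_cost"] < 1000),
]

# The user typed "Furniture" into Supplies; the Equipment
# membership constraint fires and the item is reclassified.
section = classify_item(nih_perspective,
                        {"description": "Furniture", "yearly_cost": 450})
```

Ordering the sections encodes which constraint wins when several could accept the item, which is one plausible way to resolve the firing order.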
[Figure 4: The structure of the NIH perspective. Each node in the graph represents one perspective object. Nodes include Budget; Personnel; Consultant Costs; Equipment; Supplies; Travel (Foreign, Domestic); Patient Care Costs (Inpatient, Outpatient); Alterations and Renovations; Consortium/Contractual Costs; and Other Expenses.]

The sub-budgeting model extends from perspectives to other sets of budget items. New super-budgets can be created by combining two or more budget item sets into a super-set. Each set involved in such a merger maintains its identity and may be viewed individually as before. Sets to be combined may represent different tasks or sub-projects within a project or may represent different time-slices of a single budget. The combining process is recursive in that super-sets may themselves be merged into larger sets.

Returning to the "super-secretary" example discussed in Section 2.2, a leaf node of a perspective hierarchy may be a super-item which is the composition of two or more related items, but which we wish the system to treat for most purposes as a single item. The final product is a hybrid of a perspective object and a budget item. A super-item is a leaf node in the perspective tree to which it belongs, but is subject to constraints on perspectives as well as those on budget items.

Controlling how super-items are constrained may provide the key to manipulating budgets at a high level of abstraction. If a manager is working on an abstracted budget for an entire organization, the items s/he sees will generally be super-items. Normally changes to costs in super-items pose complex planning problems in trying to propagate corresponding changes down to the component items. But suppose the system is instructed to treat, for the interim, these super-items as items. They would thus be subject to item constraints rather than perspective constraints, and could be manipulated without resorting to planning processes.
The necessary downward propagation of these changes could be deferred until such time as the manager wishes to concentrate on lower budgetary levels. A similar mechanism operating on the root of the perspective could defer upward propagation. We are at present developing this functionality and believe it to be a feasible solution to potentially massive scoping and combinatorial explosion problems inherent in the budgeting process [Duda 87].

An explanation facility has been implemented for FRM that describes, on request, how a location acquired its current value, and if possible justifies the value. If the current value was set by the corrective action of a constraint, the explanation contains an automatically generated description of the constraint's clauses and the arguments used in calculating the value. Explanations are recursive in that the values of these supporting arguments may in turn be questioned. In the case of a user-entered value, the explanation tells when and by whom the value was entered.

The FRM planner is called by the constraint manager to determine a sequence of actions to fix a constraint violation. The current simple planner [Chan 87] proceeds hierarchically. The generation of the next sequence of actions is guided by the solution produced at a higher level and by planning heuristics. Some heuristics determine the set of corrective actions which can be chosen at each planning step, while others prune and order the search space (explicit control knowledge is defined in the form of meta-rules). Another type is used to gauge the relative importance of budget expenses. The hierarchical approach and the application of heuristics produce a first solution which minimizes constraint violations. However, there are always many possible alternative solutions.

FRM integrates all of the components we have described -- FORMAN, CONFRM, the Constraint Manager, the PLANNER, and the explanation module.
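The recursive explanation scheme described above can be sketched as follows. The record format is invented for illustration, and the values echo the Ralston example visible in Figure 2 ($35,000 salary at 75% effort gives a $26,250 direct cost); FRM's actual audit trail lives in constraint and database objects, not a dictionary.

```python
def explain(location, records, depth=0):
    """Sketch of a recursive explanation: constraint-set values describe
    the rule and recurse on its arguments; user-entered values report
    who entered them."""
    rec = records[location]
    pad = "  " * depth
    if rec["source"] == "constraint":
        lines = [f"{pad}{location} = {rec['value']} BECAUSE {rec['rule']}"]
        for arg in rec["arguments"]:
            lines += explain(arg, records, depth + 1)
        return lines
    return [f"{pad}{location} = {rec['value']} (entered by {rec['user']})"]

records = {
    "ralston.direct_cost": {"source": "constraint", "value": 26250.0,
        "rule": "DirectCost = Salary x LevelOfEffort",
        "arguments": ["ralston.salary", "ralston.level_of_effort"]},
    "ralston.salary": {"source": "user", "value": 35000.0,
                       "user": "Bittman"},
    "ralston.level_of_effort": {"source": "user", "value": 0.75,
                                "user": "Bittman"},
}
trace = explain("ralston.direct_cost", records)
```

Each supporting argument gets its own indented line, so the user can keep questioning values down the chain exactly as the text describes.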
The experimental system demonstrates the advantages of the approach reported, even though it runs with one-half second to 15 second delays on the Xerox 1186 and has not been put into full operational use. It duplicates and significantly exceeds the functionality of an earlier FORTRAN program that we used for budgeting and that had a knowledge of the rules for the Stanford environment built in procedurally. Aside from the obvious improvements of a graphics-based interface, FRM provides a declarative specification of the basic budgeting and presentation rules so these can be changed at will.

The most common budget preparation tools in use today are spreadsheet packages. While these commercial systems are more polished than our prototype system, FRM has a number of powerful capabilities not provided by spreadsheets, including:

- FRM can encode judgmental knowledge and provide suggestions. Constraints do not have to be rigid relationships.
- FRM can handle symbolic as well as numeric constraints, as exemplified by the "super-secretary" constraint. Constraints can produce structural changes to the budget by causing new items to be created or deleted as appropriate.
- Constraints can be expressed generically and need not be specifically connected to individual cells. The delayed binding mechanisms in FRM allow constraints to be linked and invoked automatically whenever the triggering situation is detected in the budget form.
- The recursive sub-budget capability allows a flexible partitioning or aggregation of budget elements without specifically having to program the relationships and combination actions cell by cell.
- Different user preferences and institutional requirements for budget formats and information presentation can be accommodated through the mechanism of perspectives.
- The FRM planner can take into account tolerances on budget values in order to jointly satisfy constraints.
Constraints can be overridden for specific cases and the planner can "reverse-engineer" line item changes.
- FRM has a simple explanation facility which allows the user to examine the chain of calculations or actions producing an observed value. This facility is not a model-based explanation at present as in [Kosy 84], but suffices for relatively tightly constrained budgeting situations.

In parallel with experimenting with the FRM system, we reproduced some of its functionality in a Microsoft EXCEL spreadsheet template using the macro facilities available. The EXCEL spreadsheet was extremely brittle in that it was not possible to protect users from overwriting formulas and still give them the ability to manipulate other items. The spreadsheet implementation tightly embeds the inter-element relationships with the data presentation, resulting in a rigid and opaque system. Trying to build in needed flexibility proved very frustrating because of the limited nature of the programming language provided to relate cells or manipulate them in macros. We believe that the FRM constraint-based model provides a much more powerful and flexible environment in which to express budgetary relationships and to support user interactions.

We thank Jean-Luc Bonnetain, Jean-Luc Brouillet, Dennis Chan, Craig Cornelius, Don Henager, and Carla Wong for their contributions to the FRM project. And we thank Reid Smith, Eric Schoen, and the Schlumberger Palo Alto Research center for their contribution and support of the CLASS/HYPERCLASS object-oriented system on which FRM is built.

References

[Allen 84] Allen, J.F. Towards a General Theory of Action and Time. Artificial Intelligence 23(2):123-154, July, 1984.
[Altman 88] Altman, S. Knowledge Acquisition and Representation in FRM. Internal Working Paper KSL 88-45, Stanford University, Knowledge Systems Laboratory, June, 1988.
[Chan 87] Chan, D. PLANNER: An Intelligent Budget Planner.
Internal Working Paper KSL 87-74, Stanford University, Knowledge Systems Laboratory, June, 1987.
[Ciccarelli 84] Ciccarelli, E. Presentation Based User Interfaces. Technical Report AI-TR-794, MIT Artificial Intelligence Laboratory, August, 1984.
[Duda 87] Duda, R.O., Hart, P.E., Reboh, R., Reiter, J. and Risch, T. Syntel: Using a Functional Language for Financial Risk Assessment. IEEE Expert 2(3):18-31, Fall, 1987.
[Gelman 87] Gelman, A. CONFRM: Managing Financial Resources with Constraints. Internal Working Paper KSL 87-14, Stanford University, Knowledge Systems Laboratory, February, 1987.
[Kosy 84] Kosy, D.W. and Wise, B.P. Self-Explanatory Financial Planning Models. In Proceedings of the National Conference on Artificial Intelligence, pages 176-181. American Association for Artificial Intelligence (AAAI), August, 1984.
[Ladkin-A 86] Ladkin, P. Time Representation: A Taxonomy of Interval Relations. In Proceedings of AAAI-86, pages 360-366. AAAI, 1986.
[Ladkin-B 86] Ladkin, P. Primitives and Units for Time Specification. In Proceedings of AAAI-86, pages 354-359. AAAI, 1986.
[Lax 86] Lax, D. and Sebenius, J. The Manager as Negotiator. The Free Press, 1986.
[Schoen 83] Schoen, E. and Smith, R.G. IMPULSE, A Display-Oriented Editor for STROBE. In Proceedings of the National Conference on Artificial Intelligence, pages 356-358. AAAI, August, 1983.
[SmithR 86] Smith, R.G. and Carando, P. Structured Object Programming in STROBE. Technical Report SYS-86-17, Schlumberger-Doll Research, October, 1986.

36 Automated Reasoning
Formal Basis for Commonsense Abstraction of Dynamic Systems(1)

Yumi Iwasaki(2)
Computer Science
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213

Abstract

Abstraction is an essential technique in reasoning about complex systems involving a large number of variables and interconnections. Aggregation of dynamic systems is an abstraction technique whose application is easily observable in everyday life. The basic intuition behind aggregation of variables may be summarized as follows: if variables in a large dynamic system can be partitioned into subsets such that variables in each subset are more strongly connected to each other than to variables in other subsets, one can describe the short-run behavior of each subsystem independently of other subsystems. Furthermore, one can describe the long-run behavior of the entire system in terms of these subsets instead of individual variables, treating each subset as a black box. This paper provides a formal justification for commonsense abstraction based on aggregation of a dynamic system and presents a procedure for doing so.

1. Introduction

Abstracting a detailed description to produce a simpler description is essential in reasoning about a complex system. Aggregation is one such abstraction mechanism whose application is easily observable in everyday life. The general problem is to compute the values of some variables of a complex, dynamic system. People confront such situations on a daily basis and are successful in quickly approximating the desired values with sufficient precision for their tasks. One such technique is to deal with aggregated variables rather than the original variables. An aggregated variable is one whose value depends on the values of some collection of variables. For example, it could be the sum of some variables.
Bhandari
Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213

(1) This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976 under contract F33615-87-C-1499.

If the number of aggregated variables is less than the original number of variables, then it is clearly a computational advantage to recast the problem in terms of the aggregated variables. Let us take some concrete examples.

1. A well-known example by Simon of aggregation of variables is about heat flow within a building [Simon 81]. Consider a building divided into a large number of rooms, which are in turn divided into a number of offices by partitions. The building walls provide perfect thermal insulation from the environment. The walls between rooms are good but not perfect insulators while the partitions are poor insulators. In this situation, the temperature equilibrium among offices within one room will be reached very rapidly while equilibrium among rooms will be reached only slowly. Therefore, as long as one is not interested in modeling rapid temperature fluctuations within one room, a useful aggregation will be to have one temperature variable for each room and to assume equilibrium within a room is reached instantaneously.

2. 'Common sense' aggregation is also useful to experts in making quick but reasonably accurate computations. Consider the domain of electrical circuits. Aggregation of variables is used in the simplest of circuit calculations. An example would be to determine the behavior of charge flow between two banks of capacitors connected in parallel.
If the banks are connected via a large resistor while the capacitors within a bank are connected in parallel using low resistance lines, a useful approximation would be to determine the expressions of the aggregated charge for a bank and assume instantaneous sharing of this charge within a bank. Hence, having calculated the aggregated charge for a bank, the approximate expressions for the individual charges are easily obtained. Aggregation of variables can also be used in really complex systems. For example, an engineer in charge of a sub-station that supplies electrical power to a city block is more likely to be using an aggregated load in his calculations rather than the average individual loads of the houses in the block.

Aggregation has been discussed by many researchers in Artificial Intelligence. Weld proposes an aggregation procedure by discovering a cycle in the simulated behavior of a model [Weld 86]. Fishwick also detects cyclic behavior for the purpose of aggregation, but he does so not by simulation but by static analysis of the data-flow graph of a process [Fishwick 87].

(2) The author's current address is: Knowledge Systems Laboratory, Department of Computer Science, Stanford University, 701 Welch Rd., Palo Alto, CA 94304.

Iwasaki and Bhandari 307
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Bylander describes consolidation, which combines the behavior description of components into a behavior description of the whole [Bylander 87]. The approach is to combine behavior descriptions of the components to generate a description of the behavior of the device as a whole by recognizing certain patterns, called causal patterns, in combinations of causal steps in component behavior descriptions and aggregating the steps into one abstract step. All these techniques are concerned with aggregating behavior consisting of a sequence of discrete steps.
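The capacitor-bank example above can be sketched numerically. The following is a minimal illustration, not taken from the paper: the component values are invented, and the sketch assumes the standard RC relaxation result that the aggregate charge of each bank approaches equilibrium with time constant R·C1·C2/(C1+C2), while each individual capacitor's charge is recovered as its capacitance-weighted share of the bank's aggregate charge.

```python
import math

# Hypothetical component values (not from the paper).
bank1 = [1e-6, 1e-6, 1e-6, 1e-6]   # capacitances in bank 1 (farads)
bank2 = [1e-6, 1e-6]               # capacitances in bank 2 (farads)
R = 1e6                            # coupling resistor between the banks (ohms)

C1, C2 = sum(bank1), sum(bank2)    # aggregate capacitance of each bank
q1_0, q2_0 = 8e-6, 0.0             # initial aggregate charges (coulombs)

# Aggregate dynamics: charge flows until the bank voltages q1/C1 and q2/C2
# equalize, with time constant tau = R * C1 * C2 / (C1 + C2).
tau = R * C1 * C2 / (C1 + C2)
q_total = q1_0 + q2_0
q1_eq = q_total * C1 / (C1 + C2)   # equilibrium aggregate charge of bank 1

def q1(t):
    """Aggregate charge of bank 1 at time t (closed-form exponential decay)."""
    return q1_eq + (q1_0 - q1_eq) * math.exp(-t / tau)

# Individual charges: instantaneous sharing within a bank gives each
# capacitor its capacitance-weighted share of the aggregate charge.
def charges_bank1(t):
    return [q1(t) * c / C1 for c in bank1]

print(tau, q1(10 * tau), charges_bank1(10 * tau))
```

After a few time constants the individual charges are read off the single aggregate variable, which is exactly the computational saving the example is after.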
In this paper, we discuss a rather different aggregation technique for aggregating a behavior model represented, not in terms of explicit causal steps, but in terms of functional relations among variables. Simon and Ando provided a formal basis for aggregation, namely aggregation of variables [Simon and Ando 61]. They proved that the above intuition was indeed true for the case of a nearly decomposable dynamic matrix with one significant characteristic root for each subsystem. Courtois [Courtois 77] specifies an aggregation procedure along with an error-of-approximation analysis for the special case of stochastic matrices that satisfy the Simon-Ando requirements. In this paper we extend the work by Simon, Ando, and Courtois by presenting an aggregation procedure for more general, non-stochastic dynamic systems that satisfy the Simon-Ando requirements. Though the procedure and example presented in this paper are numerical, the concepts of aggregation apply to qualitative models also. The aggregation technique presented here provides justifications and suggests procedures for qualitative abstractions.

This paper is organized as follows. Section 2 presents the concept of near decomposability of dynamic systems and its implication for their behaviors. Substantial background is presented to make this paper self-contained. Section 3, then, discusses construction of an aggregate system from a nearly decomposable system such that the behavior of the aggregate system is a reasonable approximation to the long-term behavior of the original system. Finally, we discuss the implications of this work for common sense aggregation in Artificial Intelligence.

2. Decomposable and nearly decomposable systems

This section formally introduces the concept of a nearly decomposable system and describes the theorems proven by Simon and Ando about the behavior of such a system [Simon and Ando 61].
Let M* be a self-contained dynamic system of n equations and P* be the matrix of coefficients in M*. Thus, M* consists of equations of the form

  $\dot{x}_i = a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n$,

and P* is a matrix of the form

  $P^* = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$,

where $a_{ij}$ is the coefficient of $x_j$ in the ith equation of M*. P* is called completely decomposable if, by simple rearrangement of rows and columns, P* can be put in the block diagonal form

  $P^* = \begin{pmatrix} P^*_1 & & \\ & \ddots & \\ & & P^*_N \end{pmatrix}$,

where the $P^*_I$'s are square submatrices, N is the number of such submatrices in P*, and all the elements of P* not in any of the submatrices are zero. If the matrix of coefficients is completely decomposable, the dynamic system consists of independent components which do not interact at all and which behave independently of each other. The submatrices represent the components.

Now, consider a slightly different, self-contained dynamic system M and its matrix P of the same size as M* and P*, such that P has the same diagonal submatrices as P* but the elements outside of the submatrices are either zero or very small, the magnitudes being less than $\epsilon$ for some given $\epsilon$. Then P can be expressed as $P = P^* + \epsilon C$, where C is an arbitrary $n \times n$ matrix. A matrix such as P that can be put in this form is called a nearly completely decomposable matrix or a nearly decomposable matrix. The system M whose matrix P is nearly completely decomposable consists of components such that variables within each component interact strongly, but variables from different components interact relatively weakly. The submatrices represent such components and the $\epsilon$ elements outside the submatrices represent weak links among components.

2.1. Behavior of a Nearly Decomposable System

For a dynamic system to be dynamically stable, the real parts of all its eigenvalues must be negative.
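The definition of near decomposability can be read off a matrix mechanically. Below is a small sketch (illustrative only; the matrix, block partition, and thresholds are invented, not from the paper) that checks whether every entry of P lying outside the diagonal blocks has magnitude at most ε:

```python
def is_nearly_decomposable(P, blocks, eps):
    """Return True if all entries of square matrix P (list of lists) outside
    the diagonal blocks have magnitude <= eps. `blocks` partitions the
    indices 0..n-1 into the subsystems."""
    block_of = {}
    for b, idxs in enumerate(blocks):
        for i in idxs:
            block_of[i] = b
    n = len(P)
    return all(abs(P[i][j]) <= eps
               for i in range(n) for j in range(n)
               if block_of[i] != block_of[j])

# A toy 4x4 matrix with two strongly coupled 2x2 blocks and weak cross-links.
P = [[-5.0,   2.0,   1e-3,  0.0],
     [ 1.0,  -3.0,   0.0,   2e-3],
     [ 0.0,   1e-3, -4.0,   1.5],
     [ 3e-3,  0.0,   0.5,  -2.0]]

print(is_nearly_decomposable(P, [[0, 1], [2, 3]], eps=5e-3))  # True
print(is_nearly_decomposable(P, [[0, 1], [2, 3]], eps=1e-4))  # False
```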
We will assume that this is in fact the case for all systems we will discuss in this chapter. Furthermore, we assume that all the roots of the system are distinct. When all the roots are distinct, the time paths of the variables can be expressed as

  $x_i(t) = \sum_{j=1}^{n} z_{ij} \exp(\lambda_j t)$,

where $\lambda_1, \ldots, \lambda_n$ are the characteristic roots, and $(z_{1j}, \ldots, z_{nj})$ is the eigenvector corresponding to the jth root.

If all the roots $\lambda_1, \ldots, \lambda_n$ are arranged in ascending order of the absolute values of their real parts, the contribution of the roots toward the end of the list to the dynamic behavior of the system will be damped rapidly, and will be of importance only to the short-term behavior of the system. For the long-term behavior, we can ignore these roots and treat the system as having a smaller number of degrees of freedom, corresponding to the number of roots we retain.

Let M be a dynamic structure with the corresponding matrix P that is nearly decomposable with N square submatrices. Let $m_I$ be the size of the Ith submatrix, and $x_{1_I}, \ldots, x_{m_I I}$ be the variables belonging to the Ith submatrix. Let $\lambda_{1_I}, \ldots, \lambda_{m_I I}$ be the characteristic roots belonging to the Ith submatrix. Without loss of generality, we will assume that the roots in each subsystem are arranged in ascending order of the absolute values of their real parts. The time path of each variable in the system can then be expressed as follows:

  $x_{j_K}(t) = \sum_{I=1}^{N} \sum_{i=1}^{m_I} z_{j_K, i_I} \exp(\lambda_{i_I} t)$.   (2)

Alternatively, the time path of the entire system can be expressed as

  $x(t) = Z \Lambda(t)$,

where $x(t)$ is the column vector of the variables, Z is the matrix whose columns are the eigenvectors, and $\Lambda(t)$ is the column vector of exponential terms corresponding to the eigenvalues:

  $\Lambda(t)^T = [\exp(\lambda_{1_1} t), \ldots, \exp(\lambda_{m_1 1} t),\ \exp(\lambda_{1_2} t), \ldots, \exp(\lambda_{m_2 2} t),\ \ldots,\ \exp(\lambda_{1_N} t), \ldots, \exp(\lambda_{m_N N} t)]$.

Since P is nearly decomposable, the eigenvector corresponding to the eigenvalue $\lambda_{j_I}$ (I = 1 to N and j = 1 to $m_I$) is such that its elements $z_{k_L, j_I}$ are very small for $L \neq I$.
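The eigenvalue expansion of the time paths can be checked numerically for a small system. The sketch below is illustrative (the 2x2 matrix and initial condition are invented): it builds $x(t) = c_1 z_1 e^{\lambda_1 t} + c_2 z_2 e^{\lambda_2 t}$ for a 2x2 system with distinct real roots, using the quadratic formula for the eigenvalues, and confirms by a central finite difference that the closed form satisfies $\dot{x} = Px$.

```python
import math

# Hypothetical 2x2 coefficient matrix with distinct, real, negative roots.
a, b = -5.0, 2.0
c, d = 1.0, -3.0
x0 = (1.0, 1.0)  # initial condition

# Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial.
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# Eigenvectors normalized so the second component is 1: from the first row,
# a*v + b*1 = lam*v  =>  v = b / (lam - a).
v1 = (b / (lam1 - a), 1.0)
v2 = (b / (lam2 - a), 1.0)

# Solve c1*v1 + c2*v2 = x0 by Cramer's rule.
det_v = v1[0] * v2[1] - v2[0] * v1[1]
c1 = (x0[0] * v2[1] - v2[0] * x0[1]) / det_v
c2 = (v1[0] * x0[1] - x0[0] * v1[1]) / det_v

def x(t):
    """Closed-form time path x(t) = c1*v1*exp(lam1*t) + c2*v2*exp(lam2*t)."""
    e1, e2 = math.exp(lam1 * t), math.exp(lam2 * t)
    return (c1 * v1[0] * e1 + c2 * v2[0] * e2,
            c1 * v1[1] * e1 + c2 * v2[1] * e2)

# Verify dx/dt = P x at t = 0.5 with a central finite difference.
t, h = 0.5, 1e-6
xp, xm, xt = x(t + h), x(t - h), x(t)
deriv = ((xp[0] - xm[0]) / (2 * h), (xp[1] - xm[1]) / (2 * h))
rhs = (a * xt[0] + b * xt[1], c * xt[0] + d * xt[1])
print(max(abs(deriv[0] - rhs[0]), abs(deriv[1] - rhs[1])))  # small residual
```

The slow root $\lambda_1$ dominates once $e^{\lambda_2 t}$ has decayed, which is the separation of time scales the four-stage description below relies on.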
In the above expression for $x_{j_K}(t)$, these small $z_{j_K, i_L}$'s ($L \neq K$) represent the effects of the variables outside the Kth subsystem on $x_{j_K}$. Simon and Ando show that the behavior of such a system may be approximately described in the following four stages [Simon and Ando 61]:

1. Short-run dynamics: Variables in each subsystem are moving towards their relative equilibrium independently of other subsystems.
2. Short-run equilibrium: The most significant root of each subsystem dominates the behavior of the subsystem.
3. Long-run dynamics: The variables in each subsystem move together towards over-all equilibrium while maintaining relative equilibrium in each subsystem.
4. Long-run equilibrium: Finally, the most significant root of the entire system dominates.

When the behavior of a large system is approximately described in four stages as above, the goodness of the approximation naturally depends on how small the $\epsilon$'s are and also how dominant the most significant root of each subsystem is compared to the rest of the roots.

3. Aggregation of Variables

We describe the procedure for producing an aggregated matrix from a nearly decomposable matrix. In the description below we often drop the argument (t) of variables that are functions of time to improve readability.

3.1. Procedure for aggregation

M is the nearly decomposable system defined in 2.1 with N submatrices and M* is the corresponding completely decomposable system. M and M* consist of n variables and equations. Each subsystem $M_i$ and $M^*_i$ (i = 1 to N) consists of $m_i$ variables $x_{1_i}, \ldots, x_{m_i i}$. $M_i$ also consists of equations of the following form:

  $\dot{x}_{j_i} = f_{j_i}(x_{1_1}, \ldots, x_{m_1 1},\ x_{1_2}, \ldots, x_{m_2 2},\ \ldots,\ x_{1_N}, \ldots, x_{m_N N})$  for j = 1 to $m_i$,   (3)

and $M^*_i$ of the following form:

  $\dot{x}_{j_i} = f^*_{j_i}(x_{1_i}, x_{2_i}, \ldots, x_{m_i i})$  for j = 1 to $m_i$,   (4)

where the $f_{j_i}$'s and $f^*_{j_i}$'s are linear functions of their arguments.

Let $\lambda_i$ be the most significant eigenvalue of $M_i$ and let $\lambda^*_i$ be the corresponding eigenvalue of $M^*_i$. Let $z^*_i = (z^*_{1_i}, z^*_{2_i}, \ldots$
$, z^*_{m_i i})$ be the eigenvector corresponding to $\lambda^*_i$. Simon and Ando show that

  $x_{j_i} / x_{k_i} \approx z^*_{j_i} / z^*_{k_i}$  for j, k = 1 to $m_i$.   (5)

We define the aggregate variable $y_i$ and another useful quantity $z_i$ for each subsystem $M_i$ as follows:

  $y_i = \sum_{j=1}^{m_i} x_{j_i}$,   (6)

  $z_i = \sum_{j=1}^{m_i} z^*_{j_i}$.   (7)

It follows from equation (5) that

  $x_{j_i} / y_i \approx z^*_{j_i} / z_i$  for j = 1 to $m_i$.   (8)

To prove this, it is sufficient to cross multiply and then approximately equate terms on both sides of the equation using the relations in (5). To derive the aggregate matrix we need expressions for all the $\dot{y}_i$ in terms of the aggregate variables. Observe that it follows from the definition of the aggregate variables in (6) that

  $\dot{y}_i = \sum_{j=1}^{m_i} \dot{x}_{j_i}$.   (9)

It is therefore possible to compute the aggregate matrix by doing the following for every subsystem $M_i$:

1. Add the $m_i$ equations in (3). Note that the left-hand side of the resultant equation is nothing but $\dot{y}_i$ while the right-hand side looks like the right-hand side of (3).

2. Using the relations in (8), replace every $x_{j_k}$ in the right-hand side by $(z^*_{j_k} / z_k)\, y_k$, for j = 1 to $m_k$ and for k = 1 to N.

The aggregate system consists of the variables $y_1$ to $y_N$ and the N differential equations thus generated.

3.2. Example of aggregation

We will give an example of model aggregation. Consider an environment where four species, a, b, c, and d, of organisms live. Assume that available resources and living space are fixed and limited, that the environment is isolated, and that there are no new resources added to the environment. The lives of the four species of organisms are coupled in the following manner: a mainly preys on b but occasionally preys on c; c mainly preys on d but also preys on b occasionally; also b preys on d and d preys on a very infrequently. Let M be a nearly decomposable dynamic system with four variables $x_{1_1}, x_{2_1}, x_{1_2}$, and $x_{2_2}$ representing the populations of the four species a, b, c, and d respectively.
Suppose that the following relations hold among the variables:

  $\dot{x}_{1_1} = -50.000\, x_{1_1} + 23.000\, x_{2_1} + (1.0000\mathrm{e}{-3})\, x_{1_2}$   (10)
  $\dot{x}_{2_1} = -1.0000\, x_{1_1} - 0.10000\, x_{2_1} + (2.0000\mathrm{e}{-3})\, x_{2_2}$   (11)
  $\dot{x}_{1_2} = (1.0000\mathrm{e}{-3})\, x_{2_1} - 47.000\, x_{1_2} + 17.000\, x_{2_2}$   (12)
  $\dot{x}_{2_2} = (3.0000\mathrm{e}{-3})\, x_{1_1} + 1.0000\, x_{1_2} - 3.0000\, x_{2_2}$   (13)

Let P be the following matrix of coefficients for the structure M:

        | -50.000      23.000      1.0000e-03   0.0000     |
  P  =  |  -1.0000     -0.10000    0.0000       2.0000e-03 |
        |   0.0000      1.0000e-03 -47.000     17.000      |
        |   3.0000e-03  0.0000      1.0000     -3.0000     |

P is nearly decomposable and has the following two 2x2 submatrices, $P^*_1$ and $P^*_2$:

  $P^*_1$ = | -50.000   23.000   |      $P^*_2$ = | -47.000   17.000  |
            |  -1.0000  -0.10000 |                |   1.0000  -3.0000 |

The eigenvalues of P associated with each submatrix are as follows:

  subsystem   eigenvalues
  $P_1$       $\lambda_{1_1} = -0.56526$,  $\lambda_{2_1} = -49.535$
  $P_2$       $\lambda_{1_2} = -2.6170$,   $\lambda_{2_2} = -47.383$

The eigenvectors of $P^*_1$ and $P^*_2$ corresponding to the eigenvalues $\lambda_{1_1}$ and $\lambda_{1_2}$ are as follows:

  $z^*_1 = (0.46526,\ 1)$,   $z^*_2 = (0.38303,\ 1)$.

We will let variables $y_1(t)$ and $y_2(t)$ be the aggregate variables for submatrices $M_1$ and $M_2$, defined as follows:

  $y_1 = x_{1_1} + x_{2_1}$,   $y_2 = x_{1_2} + x_{2_2}$.

Differentiating both sides of the above two equations with respect to time yields

  $\dot{y}_1 = \dot{x}_{1_1} + \dot{x}_{2_1}$,   (14)
  $\dot{y}_2 = \dot{x}_{1_2} + \dot{x}_{2_2}$.   (15)

$y_1$ and $y_2$ have the following approximate linear relations to the variables of the submatrices:

  $x_{1_1} = \frac{0.46526}{1 + 0.46526}\, y_1 = 0.317527\, y_1$,   $x_{2_1} = \frac{1}{1 + 0.46526}\, y_1 = 0.682473\, y_1$,

  $x_{1_2} = \frac{0.38303}{1 + 0.38303}\, y_2 = 0.276949\, y_2$,   $x_{2_2} = \frac{1}{1 + 0.38303}\, y_2 = 0.723051\, y_2$.

Substituting the derivatives on the right-hand side of (14) and (15) by the expressions on the right-hand side of equations (10) through (13), and further substituting the occurrences of the original variables by their approximations in terms of the $y_i$'s given above, yields the following aggregate system:

  $\dot{y}_1 = -0.565278\, y_1 + 1.723047\mathrm{e}{-3}\, y_2$   (16)
  $\dot{y}_2 = 1.63505\mathrm{e}{-3}\, y_1 - 2.616978\, y_2$   (17)

The variables $y_1$ and $y_2$ and equations (16) and (17) constitute the aggregate structure M'.
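The numbers in (16) and (17) can be reproduced by running steps 1 and 2 of the aggregation procedure directly on P. The sketch below (a minimal transcription of the example, using only the 2x2 quadratic formula for the eigenvalues) computes the primary root and eigenvector of each submatrix, forms the fractions $x_{j_i}/y_i$ of equation (8), sums the rows of P within each subsystem, and recovers the coefficients of the aggregate system.

```python
import math

# Coefficient matrix P of the four-species example (subsystems {0,1}, {2,3}).
P = [[-50.0,   23.0,    1.0e-3,  0.0   ],
     [ -1.0,   -0.1,    0.0,     2.0e-3],
     [  0.0,    1.0e-3, -47.0,  17.0   ],
     [  3.0e-3, 0.0,     1.0,   -3.0   ]]
blocks = [[0, 1], [2, 3]]

def primary_fractions(a, b, c, d):
    """Most significant eigenvalue of [[a, b], [c, d]] and the fractions
    z*_j / (z*_1 + z*_2) of its eigenvector (the ratios in equation (8))."""
    tr, det = a + d, a * d - b * c
    lam = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # root of smallest |Re|
    v = b / (lam - a)                                # eigenvector (v, 1)
    return lam, [v / (1 + v), 1 / (1 + v)]

lams, fracs = [], []
for i, j in blocks:
    lam, f = primary_fractions(P[i][i], P[i][j], P[j][i], P[j][j])
    lams.append(lam)
    fracs.append(f)

# Aggregate matrix: sum the rows of each subsystem (step 1), then replace
# each x_{j_k} by its fraction of y_k (step 2).
A = [[sum(P[r][col] * fracs[J][t] for r in blocks[I]
          for t, col in enumerate(blocks[J]))
      for J in range(2)] for I in range(2)]
print(lams)
print(A)
```

Up to the rounding of the published eigenvector components, `A` matches the coefficients of (16) and (17), and the diagonal entries coincide with the primary roots of the two subsystems, as the theory predicts.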
The time paths of the two aggregate variables are as follows:

  $y_1 = c_1 \exp(-0.565277\, t) + c_2 (0.839814\mathrm{e}{-3}) \exp(-2.61698\, t)$   (18)
  $y_2 = c_1 (-0.79692\mathrm{e}{-3}) \exp(-0.565277\, t) + c_2 \exp(-2.61698\, t)$   (19)

$c_1$ and $c_2$ are such constants that the relations (6) are satisfied at $t = 0$. Given the values of the aggregate variables, the values of the original variables can be estimated by equation (8).

3.3. Conditions for Aggregation

When the behavior of a nearly decomposable structure is approximated by an aggregate structure, the goodness of approximation will depend on the smallness of the matrix elements outside the submatrices and also on the degree of the dominance of the most significant root of each submatrix over other roots. This second condition can be stated more precisely as follows. For a nearly decomposable dynamic matrix to be aggregated as described in this section, the matrix must satisfy the following conditions:

1. There is one root in each submatrix such that the absolute value of its real part is smaller than that of any other root in the submatrix.

2. The absolute value of the real part of the primary root of any subsystem must be smaller than that of any non-primary root of any subsystem. In other words,

  $|Re(\lambda_{1_I})| < |Re(\lambda_{j_J})|$  for I = 1 to N, j = 2 to $m_J$, and J = 1 to N.

The necessity of condition (1) above is obvious if the movement of each subsystem is to be represented by one aggregate variable. The second condition is given by Courtois [Courtois 77]. Courtois showed that the second condition is necessary for the case of stochastic matrices; that it is also necessary for more general cases can be easily seen as follows. Given a dynamic structure, equation (2) expresses the time path of each variable. Without loss of generality, assume that all the characteristic roots $\lambda_{i_I}$ (i = 1 to $m_I$) associated with each submatrix $P_I$ (I = 1 to N) are arranged in increasing order of the magnitude of their real parts. Therefore, for each submatrix $P_I$, $\lambda_{1_I}$ is the most significant root.
Then, equation (2) can be expanded as follows:

  $x_{j_K}(t) = S_1 + S_2$,   (20)

where

  $S_1 = \sum_{I=1}^{N} z_{j_K, 1_I} \exp(\lambda_{1_I} t)$

and

  $S_2 = \sum_{I=1}^{N} \sum_{i=2}^{m_I} z_{j_K, i_I} \exp(\lambda_{i_I} t)$.

In aggregating a matrix, one discards $S_2$ in (20). Therefore, if aggregation is to produce a reasonable approximation, the exponential terms in the discarded term, $S_2$, must diminish before the exponential terms in $S_1$. In other words, the magnitudes of the real parts of the eigenvalues involved in $S_1$ must be smaller than those in $S_2$.

3.4. Subsystems with multiple significant roots

The two conditions discussed above make the applicability of the aggregation procedure somewhat limited. However, the concept of nearly decomposable systems and the discussion of the behavior of such systems in Section 2 are more generally applicable, and the aggregation procedure can be generalized to cases where the conditions are not necessarily satisfied; in particular, to cases where a submatrix has any number of non-negligible roots.

Let $\lambda_{1_I}, \ldots, \lambda_{m_I I}$ be the eigenvalues associated with the Ith submatrix. Without loss of generality, we will assume that these eigenvalues are arranged in ascending order of the absolute values of their real parts. For some given threshold value $\lambda_0 > 0$, we partition this set of eigenvalues for the Ith submatrix into two subsets, one containing those the absolute values of whose real parts are less than or equal to $\lambda_0$, and the other containing the rest. Let $s_I$ be the number of the eigenvalues belonging to the first subset. We will refer to the eigenvalues in the first subset as significant roots and those in the second subset as insignificant roots. Therefore, the set of the significant roots is $\{\lambda_{1_I}, \ldots, \lambda_{s_I I}\}$, where $|Re(\lambda_{i_I})| \leq \lambda_0$, and the set of insignificant roots is $\{\lambda_{s_I+1\, I}, \ldots, \lambda_{m_I I}\}$, where $|Re(\lambda_{i_I})| > \lambda_0$. After the contributions of the insignificant roots vanish, the long-run behavior of each subsystem $M_I$ can be described with $s_I$ ($s_I < m_I$) significant roots.
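Conditions (1) and (2) above are mechanical to check once the roots of each submatrix are known. A small sketch (the helper name and the use of real parts only are choices made here, not the authors' code) that takes the eigenvalue real parts of each submatrix and tests whether every primary root dominates every non-primary root:

```python
def satisfies_simon_ando(roots_by_subsystem):
    """roots_by_subsystem: list of lists of eigenvalue real parts, one list
    per submatrix. Returns True if the primary (smallest-magnitude) root of
    every subsystem is strictly smaller in magnitude than every non-primary
    root of every subsystem."""
    primaries, others = [], []
    for roots in roots_by_subsystem:
        ordered = sorted(roots, key=abs)
        primaries.append(ordered[0])   # primary root of this subsystem
        others.extend(ordered[1:])     # its non-primary roots
    return max(abs(p) for p in primaries) < min(abs(r) for r in others)

# Roots of the four-species example: each primary root (-0.56526, -2.6170)
# dominates each non-primary root (-49.535, -47.383).
print(satisfies_simon_ando([[-0.56526, -49.535], [-2.6170, -47.383]]))  # True

# Fails when one subsystem's second root is as slow as another's primary.
print(satisfies_simon_ando([[-0.5, -1.0], [-2.0, -50.0]]))  # False
```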
Therefore, one can define $s_I$ aggregate variables for each subsystem such that the values of the original variables $x_{1_I}, x_{2_I}, \ldots, x_{m_I I}$ after some time can be approximated as linear combinations of the aggregate variables. Then, M can be rewritten in terms of these aggregate variables to produce an aggregate system. The aggregate system will describe the long-term behavior and will have fewer degrees of freedom than the original one.

4. Discussion

This paper focuses on devising a formal model, namely aggregation of variables, for common sense abstraction. This work builds on existing work on nearly decomposable dynamic systems. The aggregation procedure aggregates a dynamic structure by defining one aggregate variable for each subsystem based on its most significant root and rewriting the entire structure in terms of these aggregate variables. For this procedure to be applicable, the most significant root of each subsystem must be such that the magnitude of its real part is smaller than that of any insignificant root of any subsystem. When a dynamic system is aggregated in this manner, each subsystem can be represented by one aggregate variable. We generalized this procedure to make it applicable to cases where each subsystem has more than one non-negligible root. In such cases, as many aggregate variables as there are significant roots (with respect to some threshold value) in each subsystem are defined for a subsystem.

Courtois performed an in-depth analysis of approximation error in aggregation for stochastic systems [Courtois 77]. A similar analysis of approximation error for the general aggregation procedure presented in this paper will be useful and represents immediate future work for this project. Note that the aggregate variables are defined as sums of the variables in a subsystem.
As the formal model treated here covers quite a large class of systems, we think that sums or simple linear functions of strongly connected variables will comprise a significant percentage of common sense aggregation examples. The examples in Section 1 seem to support this point. Though the procedure and example presented in this paper are numerical, the relevance of this work is not limited to cases where numerical information of functional relations among variables is available. Even when only a qualitative model exists, model aggregation is possible and is often performed based on such qualitative knowledge as relative strengths of interactions among variables and groups of variables and relative speeds at which groups of variables reach equilibrium through workings of causal mechanisms in the system. The work presented here provides justifications and suggests procedures for performing such qualitative aggregation. One of the authors discusses related qualitative aggregation techniques and their relations to the notion of causality in a separate document [Iwasaki 88].

Kuipers uses abstraction by time-scale in order to control the exponential growth of the number of possible courses of behavior in qualitative simulation [Kuipers 87]. Kuipers has a hierarchy of constraint networks of very fast to very slow mechanisms. When simulating a fast mechanism, variables controlled by slower mechanisms are considered constant, and when simulating a slow mechanism, equilibrium among variables controlled by faster mechanisms is considered to be reached instantaneously. This idea of abstraction by time-scale is very similar to the notion of abstraction discussed in this paper. However, Kuipers does not explore the issue of generating such a hierarchy of models from one original model. The aggregation technique discussed in this paper can be used to generate a hierarchy of models of different time-scales.

Acknowledgement

We thank Dr.
Herbert Simon for providing advice throughout this project. However, the responsibility for views presented in this document rests solely with the authors.

References

[Bylander 87] Bylander, T. Using Consolidation for Reasoning about Devices. Technical Report, Laboratory for Artificial Intelligence Research, Department of Computer and Information Science, The Ohio State University, 1987.
[Courtois 77] Courtois, P. J. Decomposability: Queueing and Computer System Applications. ACM Monograph Series. Academic Press, 1977.
[Fishwick 87] Fishwick, P. A. Inferring Causality and Cyclic Behavior through Data Flow Analysis. In Proceedings, The Methodologies for Intelligent Systems Symposium: Second International Symposium, Elsevier North-Holland, 1987.
[Iwasaki 88] Iwasaki, Y. Model-Based Reasoning of Device Behavior with Causal Ordering. PhD thesis, Department of Computer Science, Carnegie Mellon University, 1988.
[Kuipers 87] Kuipers, B. Abstraction by Time-Scale in Qualitative Simulation. In Proceedings, 6th National Conference on Artificial Intelligence, 1987.
[Simon 81] Simon, H. A. The Sciences of the Artificial. MIT Press, 1981.
[Simon and Ando 61] Simon, H. A. and Ando, A. Aggregation of Variables in Dynamic Systems. Econometrica 29, 1961.
[Weld 86] Weld, D. S. The Use of Aggregation in Causal Simulation. Artificial Intelligence 30(1), 1986.
Causal Ordering in a Mixed Structure(1)

Yumi Iwasaki(2)
Department of Computer Science
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213

Abstract

This paper describes a computational approach, based on the theory of causal ordering, for inferring causality from an acausal, formal description of a phenomenon. Causal ordering is an asymmetric relation among the variables in a self-contained equilibrium or dynamic structure, which seems to reflect people's intuitive notion of causal dependency relations among variables in a system. This paper extends the theory to cover models consisting of a mixture of dynamic and equilibrium equations. When people's intuitive causal understanding of a situation is based on a mixed description, the causal ordering produced by the extension reflects this intuitive understanding better than that of an equilibrium description. The paper also discusses the view of a mixed model as an approximation to a completely dynamic model.

1. Introduction

Much of science and engineering is concerned with characterizations of processes by equations that describe the relations that hold among parameters of objects and that govern their behavior over time. Formal treatments of the foundations of science have avoided notions of causation and speak only of functional relations among variables. Nevertheless, the notion of causality plays an important role in our understanding of phenomena. Even when a formal description of a situation is given in terms of acausal, mathematical relations, informal explanations of the form "A causes B" are exceedingly common. People are able to explain the behavior in causal terms while using an acausal formal description of a situation. This paper describes a computational approach, based on the theory of causal ordering, for inferring causality from an acausal, formal description of a phenomenon.
Causal ordering, first presented by Simon [Simon 52], is an asymmetric relation among the variables in a self-contained equilibrium or dynamic model, which seems to reflect people's intuitive notion of causal dependency relations among variables. This paper extends the theory to models consisting of a mixture of dynamic and equilibrium equations. When people's intuitive causal understanding of a situation is based on a mixed description, the causal ordering produced by the extension reflects this intuitive understanding better than that of an equilibrium description. The paper also discusses the view of a mixed model as an approximation to a completely dynamic model.

(1) This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976 under contract F33615-87-C-1499.

(2) The author's current address is: Knowledge Systems Laboratory, Department of Computer Science, Stanford University, 701 Welch Rd., Palo Alto, CA 94304.

2. Causal Ordering in an Equilibrium Structure

Causal ordering was originally defined for an equilibrium structure consisting of equilibrium equations [Simon 52]. First, we define an equilibrium structure.

Definition 1: Self-contained equilibrium structure
A self-contained equilibrium structure is a system of n equilibrium equations in n variables that possesses the following special properties:

1. That in any subset of k equations taken from the structure at least k different variables appear with nonzero coefficients in one or more of the equations of the subset.

2. That in any subset of k equations in which $m \geq k$ variables appear with nonzero coefficients, if the values of any (m - k) variables are chosen arbitrarily, then the equations can be solved for unique values of the remaining k variables.

Condition (1) above ensures that no part of the structure is over-determined. Condition (2) ensures that the equations are not dependent, because if they are, the equations cannot be solved for unique values of the variables. The idea of causal ordering in a self-contained equilibrium structure can be described roughly as follows.
A system of n equations is called self-contained if it has exactly n unknowns. Given a self-contained system S, if there is a proper subset s of S that is also self-contained and that does not contain a proper self-contained subset, s is called a minimal complete subset. Let $S_0$ be the union of all such minimal complete subsets of S; then $S_0$ is called the set of minimal complete subsets of zero order. Since $S_0$ is self-contained, the values of all the variables in $S_0$ can, in general, be obtained by solving the equations in $S_0$. By substituting these values for all the occurrences of these variables in the equations of the set $(S - S_0)$, one obtains a new self-contained structure, which is called the derived structure of first order. Let $S_1$ be the set of minimal complete subsets of this derived structure. It is called the set of complete subsets of first order. Repeat the above procedure until the derived structure of the highest order contains no proper subset that is self-contained. For each equation $e_i$ in S, let $V_i$ denote the set of variables appearing in $e_i$, and let $W_i$ denote the subset of $V_i$ containing the variables belonging to the complete subsets of the highest order among those in $V_i$. Then, the variables in $W_i$ are said to be directly causally dependent on the elements in $(V_i - W_i)$.

In order for the above procedure to produce causal relations in the model that agree with our intuitive understanding of the causal relations in the real situation, the equations comprising a model must come from an understanding of mechanisms. The term mechanism is used here in a general sense to refer to distinct conceptual parts in terms of whose functions the working of the whole system is to be explained. Mechanisms are such things as laws describing physical processes or local components that can be described as operating according to such laws.
An equation representing such a mechanism is called a structural equation, and every equation in the model should be a structural equation standing for a mechanism through which variables influence other variables. One thing to note about the method of causal ordering is that it does not require knowledge about the precise functional forms of equations. The only information that the method makes use of is what variables appear with a nonzero coefficient in what equations, which in terms of mechanisms translates to what variables are causally linked by each mechanism.

3. Example: Bathtub
Though the causal structure produced by the method of causal ordering usually agrees with people's intuitive notions of causal relations [Iwasaki and Simon 86, Iwasaki 87], sometimes cases arise where a causal structure produced does not agree with human intuition. We present one such case to motivate the extension of the method to dynamic and mixed structures. The device used as an example is a bathtub as shown in Figure 3-1 [Kuipers 87a]. There are five variables: the input and output flow rates, Qin and Qout; the amount of water in the tub, A; the valve opening, K; and the pressure at the bottom of the tub, P.

Figure 3-1: Bathtub

The situation can be characterized by the following five equations, where c1, c2, and c3 represent positive constants.

Qout = K·P (1)
The output flow rate is proportional to the pressure.
A = c1·P (2)
The pressure is proportional to the amount of water.
Qout = Qin (3)
When the system is in equilibrium, the input flow equals the output flow.
Qin = c2 (4)
K = c3 (5)
The input flow rate and the valve opening are exogenous (externally controlled).

The causal ordering produced for this bathtub model is shown in Figure 3-2. "x --> y" means that variable y is causally dependent on x.

Qin --> Qout --> P --> A
                 ^
                 |
                 K

Figure 3-2: Equilibrium Causal Ordering of Bathtub

The causal structure shown in Figure 3-2 may seem counter-intuitive.
It shows that the output flow rate directly depends on the input flow rate, the pressure depends on the output flow rate, and the amount depends on the pressure. However, intuitively speaking, adding water to the tub increases the amount (A), which increases the pressure (P), which in turn increases the output flow rate (Qout). Figure 3-3 shows this "intuitive" causal ordering.

Qin --> A --> P --> Qout
                    ^
                    |
                    K

Figure 3-3: Intuitive Causal Ordering of Bathtub

In what follows I will first show that the causal ordering in Figure 3-2 is in fact the correct ordering for an equilibrium model. In the next section, I will show that the "intuitive" causal ordering can be obtained by the extension of causal ordering to make it applicable to systems of dynamic equations as well as mixtures of dynamic and equilibrium equations.

In order to see that the ordering in Figure 3-2 is correct, one must realize that the model is an equilibrium one. In an equilibrium model, quantities represent the final values assumed by variables when equilibrium is attained, not transient values. In the bathtub example, if the input flow is decreased suddenly, it will cause immediate disturbances in the values of other variables. However, the entire system will be in a steady state only when the output flow again becomes equal to the input flow, which is the situation the equilibrium model depicts. Suppose that the value of K is changed by opening up the valve a little more; then an immediate reaction will be that Qout will increase. However, when equilibrium is restored eventually, assuming that it will be, the equilibrium value of Qout must be equal to Qin (otherwise, the system would not be in equilibrium). Thus, changing K only affects the equilibrium values of P and A but not Qout. Therefore, the equilibrium value of Qout cannot be dependent on P or A, a fact correctly reflected in the ordering of Figure 3-2 but not in Figure 3-3.
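The causal-ordering procedure described earlier can be mechanized directly from the variable-occurrence information. Below is a small brute-force Python sketch — my own illustration, not code from the paper — which reproduces the equilibrium ordering just discussed for the five bathtub equations:

```python
from itertools import combinations

def causal_ordering(equations):
    """Causal ordering of a self-contained equilibrium structure.

    `equations` maps an equation name to the set of variables that
    appear in it with nonzero coefficients.  Returns (strata, edges):
    strata[i] is the set of variables determined by the complete
    subsets of order i, and edges contains pairs (x, y) meaning
    "y is directly causally dependent on x".  Brute force, intended
    only for small hand-built models like the bathtub.
    """
    eqs = {n: set(v) for n, v in equations.items()}
    removed = {n: set() for n in eqs}      # variables substituted out so far
    strata, edges = [], set()
    while eqs:
        names = sorted(eqs)
        # complete subsets: k equations jointly involving exactly k variables
        complete = [s for k in range(1, len(names) + 1)
                    for s in combinations(names, k)
                    if len(set().union(*(eqs[n] for n in s))) == k]
        minimal = [s for s in complete
                   if not any(set(t) < set(s) for t in complete)]
        if not minimal:
            raise ValueError("structure is not self-contained")
        solved = set().union(*(eqs[n] for s in minimal for n in s))
        for s in minimal:                  # minimal complete subsets are disjoint
            for n in s:
                # eqs[n] plays the role of Wi, removed[n] that of Vi - Wi
                edges |= {(x, y) for x in removed[n] for y in eqs[n]}
                del eqs[n], removed[n]
        strata.append(solved)
        for n in eqs:                      # derive the next-order structure
            removed[n] |= eqs[n] & solved
            eqs[n] -= solved
    return strata, edges
```

Applied to the bathtub model (one variable set per equation), it yields the strata {Qin, K}, {Qout}, {P}, {A} and the edges Qin --> Qout, K --> P, Qout --> P, P --> A of Figure 3-2.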
Nevertheless, it is true that the ordering in Figure 3-3 seems to capture some intuitive notion of causality in the situation. This is because in this case our "intuitive" causal understanding is of a dynamic situation rather than of the equilibrium situation represented by the model above. The next section presents causal ordering in dynamic and mixed systems.

4. Causal ordering in dynamic and mixed structures
In this section, we define self-containment and causal ordering for structures consisting of differential equations and for mixtures of differential and equilibrium equations. We will then show that an "intuitive" causal ordering similar to that in Figure 3-3 emerges as the causal ordering in a mixed model of the bathtub.

4.1. Causal ordering in a dynamic structure
Dynamic causal ordering is defined for systems consisting of first-order differential equations. Given a self-contained dynamic structure, one can perform elementary row operations on the equations to solve them for the n derivatives. This operation produces an equivalent system of equations in canonical form. A differential equation is said to be in canonical form if and only if there is only one derivative in the equation, and the derivative is the only thing appearing on the left-hand side of the equation. A self-contained dynamic structure in n variables, x1, ..., xn, in canonical form consists of n equations of the following form, where the fi (1 ≤ i ≤ n) are functions of their arguments:

xi' = fi(x1, x2, ..., xn)

We interpret the equations of a structure in this form to be the mechanisms of the system. Therefore, the ith equation, the only one containing xi', is regarded as the mechanism determining the time path of xi. Furthermore, the variable xi, whose derivative appears in the ith equation, is said to be directly causally dependent on the variables that appear with a nonzero coefficient in the equation.
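As a side remark (a standard reduction, not spelled out in the paper): a higher-order equation is brought into this first-order canonical form by introducing a new variable for each derivative; for a second-order equation:

```latex
% second-order equation reduced to first-order canonical form
x'' = f(x, x')
\quad\Longrightarrow\quad
\begin{cases}
x_1' = x_2 \\
x_2' = f(x_1, x_2)
\end{cases}
\qquad\text{with } x_1 = x,\ x_2 = x'.
```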
Since a differential equation of higher order can be converted into a set of first-order equations by introducing new variables to stand for derivatives, the definition of causal ordering presented here applies to a very wide class of dynamic systems. Following is the definition of a self-contained dynamic structure [Simon and Rescher 66]:

Definition 2: A self-contained dynamic structure
A self-contained dynamic structure is a set of n first-order differential equations involving n variables such that:
1. In any subset of k equations of the structure, the first derivatives of at least k different variables appear.
2. In any subset of k equations in which r (r ≥ k) first derivatives appear, if the values of any (r − k) first derivatives are chosen arbitrarily, then the remaining k are determined uniquely as functions of the n variables.

The above definition of self-containment for a dynamic structure is analogous to that for an equilibrium structure. Condition (1) ensures that no part of the structure is over-determined, while condition (2) ensures that the structure is not under-constrained.

4.2. Causal ordering in a mixed model
Systems are in practice often described in terms of a combination of equilibrium and dynamic equations. Such a mixed structure is a natural extension of dynamic structures. Before defining self-containment for mixed structures, we must introduce some notation. Let M be a system of n equations in n variables such that some of the equations are equilibrium equations and others are first-order differential equations. Then, let Dynamic(M) be the subset of M consisting of all the differential equations in M, and let Static(M) be the set consisting of all the equilibrium equations in M and one constant equation for every variable v whose derivative appears in Dynamic(M). A constant equation for a variable is an equation of the form v = c, where c is a constant.
The intuitive meaning of the set Static(M) may be understood as follows: the equilibrium equations in a mixed set represent mechanisms that restore equilibrium so quickly that they can be considered to hold in 0 units of time within some time-frame (e.g. days if the time-frame is centuries). On the other hand, the dynamic equations represent slower mechanisms that require non-zero amounts of time for the variables on their right-hand sides to affect the variables on their left-hand sides. Therefore, in a very short period of time -- shorter than is required for the variables on the right-hand side of the differential equation of a slow mechanism to appreciably affect the variable on the left-hand side -- the variable on the left-hand side can be considered unchanging. Thus, the set Static(M) represents a snap-shot picture (i.e., a very short-term equilibrium description) of the dynamic behavior of the mixed structure M.

Let M be a system of n equations in n variables such that some of the equations are static equations and others are dynamic equations of the type defined in the previous section.

Definition 3: The set M of n equations in n variables is a self-contained mixed structure iff:
1. One or more of the n equations are first-order differential equations, and the rest are equilibrium equations.
2. In any subset of size k of Dynamic(M), the first derivatives of at least k different variables appear.
3. In any subset of size k of Dynamic(M) in which r (r ≥ k) first derivatives appear, if the values of any (r − k) first derivatives are chosen arbitrarily, then the remaining k are determined uniquely as functions of the n variables.
4. The first derivatives of exactly d different variables appear in Dynamic(M), where d is the size of the set Dynamic(M).
5. Static(M) is a self-contained equilibrium structure.
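The structural parts of Definition 3 can be checked mechanically from the variable-occurrence pattern alone. In the sketch below (my own illustration, not code from the paper), dynamic equations are assumed to be in canonical form -- one derivative on the left-hand side -- which makes conditions (2) and (4) hold by construction; condition (3) is numeric and is not checked; condition (5) is tested via the subset condition of Definition 1:

```python
from itertools import combinations

def hall(eqs):
    """Definition 1, condition (1): as many equations as variables,
    and every subset of k equations mentions at least k variables."""
    vs = set().union(*eqs) if eqs else set()
    if len(eqs) != len(vs):
        return False
    return all(len(set().union(*s)) >= k
               for k in range(1, len(eqs) + 1)
               for s in combinations(eqs, k))

def self_contained_mixed(dynamic, static):
    """Structural test of Definition 3.  `dynamic` maps the variable
    whose derivative is on the left-hand side of a canonical dynamic
    equation to the set of variables on its right-hand side; `static`
    is a list of variable sets, one per equilibrium equation."""
    variables = set(dynamic) | set().union(*static, *dynamic.values())
    if len(dynamic) + len(static) != len(variables):
        return False
    # Static(M): the equilibrium equations plus one constant
    # equation {v} for every integrated variable v
    return hall(list(static) + [{v} for v in dynamic])
```

The bathtub structures discussed in Sections 4.3 and 5.3 make handy test cases for this check.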
Given a self-contained mixed structure, as defined above, the causal ordering among its variables and derivatives follows the definitions of causal ordering in dynamic and static structures. In other words, the causal ordering in a mixed structure can be determined as follows:
1. The ordering among the variables and derivatives in the subset Dynamic(M) is given by the definition of causal ordering in a dynamic structure.
2. The ordering among variables (but not their derivatives) in Static(M) is given by the definition of causal ordering in an equilibrium structure.

4.3. Mixed model of the bathtub
Now we are ready to look at the bathtub example again. Let M be a mixed structure consisting of equations (1), (2), (4), (5) and the following differential equation in place of (3):

A' = Qin − Qout (3d)

M is a self-contained mixed structure according to the definition given above. Dynamic(M) consists of equation (3d) alone, and Static(M) consists of equations (1), (2), (4), (5) and the constant equation A = c, where c is a constant.

The causal ordering in M is shown in Figure 4-1. In the figure, an integration link, which is an edge connecting a derivative of a variable to the variable itself, is marked by i; causal links in the dynamic part of the model (Dynamic(M)) are indicated by arrows of broken lines. The causal structure indicates the existence of a feedback loop. The structure may be explained informally in English as follows: The output flow rate depends on the pressure, which depends on the amount of water in the tub. The rate of change of the amount of water is determined by the input and output flow rates.

Figure 4-1: Mixed Causal Ordering of Bathtub

The reason for mixing equilibrium equations (1) and (2) and differential equation (3d) in one model is that the equilibrium relations represented by (1) and (2) are restored much more quickly (in fact, almost instantaneously) when disturbed than the relation represented by (3).
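The mixed model can also be animated numerically to exhibit the feedback loop of Figure 4-1: impose the fast equilibrium mechanisms (1) and (2) at every step, and integrate only (3d). A small Euler sketch — the constants are illustrative choices of mine, not values from the paper:

```python
def simulate_mixed_bathtub(Qin=2.0, K=3.0, c1=0.5, A=0.0,
                           dt=0.01, steps=5000):
    """Euler integration of the mixed structure M.  At each step the
    fast mechanisms are imposed instantaneously:
        (1) Qout = K * P        (2) A = c1 * P  (used inverted)
    while the slow mechanism is integrated over time:
        (3d) A' = Qin - Qout
    Constants are illustrative, not values from the paper."""
    for _ in range(steps):
        P = A / c1              # equilibrium (2): A = c1 * P
        Qout = K * P            # equilibrium (1)
        A += dt * (Qin - Qout)  # dynamic (3d)
    return A, P, Qout
```

However the valve opening K is set, Qout settles to Qin — the equilibrium independence reflected in Figure 3-2 — while K only shifts the equilibrium values of P and A.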
Therefore, in a model of a medium temporal grain-size, it is reasonable to treat (3) as taking time but to treat the others as instantaneous.

5. A mixed structure as an approximation to a dynamic structure
A mixed structure can be viewed as an approximation to a dynamic structure. When a mechanism in a dynamic structure acts very quickly to restore relative equilibrium, one can regard it as acting instantaneously. Or, when a mechanism acts so much more slowly than other mechanisms in the system that its effect on the variable it controls is negligible, the variable may be considered constant. In these cases, the description of the system's dynamic behavior may be simplified by replacing the fast-acting mechanism by an equilibrium equation or the slow mechanism by a constant equation. This section discusses generating a mixed structure from a dynamic structure as an approximation to the latter through these two techniques.

5.1. Equilibrating Dynamic Equations
We will use the term equilibrating to refer to the operation of replacing a dynamic equation by its corresponding equilibrium equation. Since the differential equations are assumed to be in canonical form, equilibration is accomplished by replacing the left-hand side by 0. There is a whole range of mixed structures between the completely dynamic structure and the equilibrium structure, depending on the temporal grain size selected for the model. However, replacing an arbitrary subset of a dynamic self-contained structure by the corresponding static equations will not necessarily produce a self-contained mixed structure. Moreover, not every self-contained dynamic structure produces a self-contained equilibrium structure when every equation is replaced by the corresponding equilibrium equation. Let us call a variable self-regulating if its derivative is a function of the variable itself, and non-self-regulating otherwise.
Definition 4: Self-regulating variables and equations
A differential equation in canonical form is called self-regulating if the variable whose derivative is the left-hand side of the equation also appears on the right-hand side with a nonzero coefficient. Such a variable is also called a self-regulating variable.

It can be proved that equilibrating any number of self-regulating equations in a self-contained dynamic or mixed structure will always produce a self-contained mixed structure (or a self-contained equilibrium structure if no more dynamic equations are left). However, equilibrating a non-self-regulating equation may produce an over-constrained structure. The following theorem states this fact. The proof is given elsewhere [Iwasaki 88].

Theorem 5: Equilibrating any number of self-regulating equations in a self-contained dynamic or mixed structure always produces a self-contained mixed structure (or a self-contained equilibrium structure if all the dynamic equations in the original structure have been equilibrated).

5.2. Exogenizing Dynamic Equations
In contrast to variables that adjust to changes in other variables very quickly to restore relative equilibrium, some variables respond so slowly to changes in other variables that they can be regarded as independent of other variables. The equation corresponding to such a variable can be replaced by an exogenous variable equation, which amounts to deleting from the system under consideration the slow mechanism through which others influence this variable. We will call this operation of replacing a dynamic equation by an exogenous variable equation exogenizing.
There are two ways to exogenize a variable:
Case 1: If a variable xi is changing but the rate of change depends mostly on xi itself and very little on other variables, the latter can be deleted from the expression on the right-hand side of the differential equation to make it a function of xi alone.
Case 2: If a variable is not only unaffected by other variables but is hardly changing, the dynamic equation can be replaced by a constant equation for the variable.

Conceptually, exogenizing is the opposite of equilibrating, because exogenizing a variable assumes it is unaffected by other variables, while equilibrating a variable assumes it responds to changes in other variables extremely quickly to restore equilibrium. Exogenizing a variable amounts to deleting a mechanism from the system by placing the mechanism determining the value of the variable outside the scope of the system under consideration, and it is reasonable to do so only when the feedback to the variable from the variables inside the mechanism is negligible. Exogenizing a variable in a self-contained structure always produces a self-contained structure. The proof, given elsewhere, follows directly from the definition of self-containment of a mixed structure [Iwasaki 88].

Theorem 6: Exogenizing an equation in a self-contained dynamic or mixed structure always produces a self-contained structure.

5.3. Bathtub example revisited
Consider a totally dynamic model, D, of the bathtub example consisting of equation (3d) and the following equations, where c4, c5, c6, and c7 are constants:

Q'out = c4(KP − Qout) (1d)
P' = c5(A − c1P) (2d)
Q'in = c6 (4d)
K' = c7 (5d)

The causal ordering in this dynamic structure is shown in Figure 5-1.

Q'in -i-> Qin --> A' -i-> A --> P' -i-> P --> Q'out -i-> Qout
K' -i-> K --> Q'out        Qout --> A' (feedback)

Figure 5-1: Causal Ordering in D

If it is assumed that the mechanism represented by equation (1d) acts very quickly to restore equilibrium, one can replace the equation by the corresponding equilibrium equation (1). It can be easily verified that the resulting mixed structure is self-contained.
Likewise, replacing equation (2d) by the corresponding equilibrium equation also results in a self-contained mixed structure. If both equations (1d) and (2d) are equilibrated, the result is also a self-contained mixed structure. The mixed structure M in Section 4.3 is produced by assuming the mechanisms of (1d) and (2d) to act very quickly, and also assuming at the same time that Qin and K are hardly changing. However, if it were assumed that the mechanism represented by equation (3d) acted very quickly but that the mechanisms of (1d) and (2d) were slow, the resulting mixed structure, M', consisting of equations (1d), (2d), (4d), (5d) and (3), would not be self-contained, because Static(M') is not self-contained. This fact can be intuitively explained by examining the causal structure in Figure 5-1. Since the only causal path from Qin to Qout in the causal graph is <Qin, A', A, P', P, Q'out, Qout>, the equilibrium between Qin and Qout cannot be restored before A and P are restored to equilibrium. Therefore, it produces a contradiction to equilibrate equation (3d) without equilibrating (1d) and (2d) at the same time.

6. Discussion
We have extended the method of causal ordering to dynamic and mixed structures. Making assumptions about the relative speeds of adjustment of mechanisms in a dynamic structure amounts to classifying the variables into three categories:
1. Variables whose rates of change are influenced only very little by other variables;
2. Variables that adjust so quickly that they are always close to relative equilibrium with other variables;
3. All other variables.

This idea is closely related to that of aggregation of nearly decomposable dynamic systems by Simon and Ando. Nearly decomposable systems are those consisting of subsystems of variables such that the variables within a subsystem interact strongly while the interactions among subsystems are much weaker.
They showed that when a given dynamic system is nearly decomposable, and if one is only interested in the long-term dynamics of the system, then one can aggregate the subsystems, assuming them to be always in steady-state relative equilibrium, and consider only the movements of the aggregated systems [Simon and Ando 61]. Their work provides theoretical justification for the generation of mixed structures as abstractions of completely dynamic structures using the techniques discussed in Section 5.

The idea of abstraction by time-scale is used by Kuipers [Kuipers 87b] in order to control the exponential growth of the number of possible courses of behavior in qualitative simulation. The techniques discussed here can be used to generate models of different time-scales. The approaches described in this paper have been fully implemented as part of a computer program named CAOS for reasoning about system behavior in the domain of a coal power plant. The program consists of a collection of modules for generation of equation models, causal analysis of models, dynamic stability analysis, and qualitative prediction of the effects of external disturbance. The method of causal ordering in a mixed system has also been used in a program called YAKA to perform diagnosis of faults in an oil refinery plant [Lambert et al. 88].

Acknowledgment
The author would like to thank Professors Herb Simon and John McDermott for their guidance in conducting this research, and Professor Ben Kuipers for helpful discussions of the bathtub example.

References
[Iwasaki 87] Iwasaki, Y. Generating Behavior Equations from Explicit Representation of Mechanisms. Technical Report CMU-CS-87-131, Department of Computer Science, Carnegie Mellon University, 1987.
[Iwasaki 88] Iwasaki, Y. Model-Based Reasoning of Device Behavior with Causal Ordering. PhD thesis, Department of Computer Science, Carnegie Mellon University, 1988.
[Iwasaki and Simon 86] Iwasaki, Y., and Simon, H. A. Causality in Device Behavior. Artificial Intelligence 29, 1986.
[Kuipers 87a] Kuipers, B. The Bathtub Counterexample. Unpublished manuscript.
[Kuipers 87b] Kuipers, B. Abstraction by Time-Scale in Qualitative Simulation. In Proceedings, 6th National Conference on Artificial Intelligence, 1987.
[Lambert et al. 88] Lambert, H., Eshelman, L., and Iwasaki, Y. Using Qualitative Physics to Guide the Acquisition of Diagnostic Knowledge. In Proceedings of the Third International Conference on Applications of Artificial Intelligence in Engineering, 1988.
[Simon 52] Simon, H. A. On the definition of the causal relation. Journal of Philosophy 49:517-28, 1952.
[Simon and Ando 61] Simon, H. A. and Ando, A. Aggregation of Variables in Dynamic Systems. Econometrica 29, 1961.
[Simon and Rescher 66] Simon, H. A. and Rescher, N. Causes and Counterfactual. Philosophy of Science 33:323-40, 1966.
Controlling Qualitative Resolution

Jean-Luc Dormoy
Electricité de France Research Center, Clamart, France
LAFORIA - Pierre & Marie Curie Paris University

Abstract
We proposed earlier in [Dormoy & Raiman, 1988] a new way of reasoning about a device, which we called "Assembling a Device". Starting from a component description (namely confluences), the qualitative resolution rule provides task-oriented global relations which link the physical quantities involved in a device to some selected reference variables. This rule is complete: given any task to be performed (simulation, postdiction, ...), it discovers an assemblage, i.e. a set of relations reducing the task to a straightforward propagation. We might thus expect to apply qualitative reasoning to large-scale systems. Unfortunately, the number of potential applications of the resolution rule is likely to increase exponentially as it is being fired. This behavior has to be related to the NP-completeness of the problem which consists of solving a set of confluences. In this paper, we present a heuristic for controlling the resolution rule, i.e. for choosing between its potential applications, and a collection of simple rules for speeding it up. This heuristic has a combinatorial form, but it is based on a simple commonsense idea. At the same time, it is borne out by mathematical results. Theoretically, a qualitative model can be out of its scope, but we have not yet hit upon a physical system with this kind of pathology.

In [Dormoy & Raiman, 1988], we proposed a new way of reasoning about a device, called "Assembling a Device". Starting from a component description (namely confluences), the qualitative resolution rule provides task-oriented global relations which link the physical quantities involved in a device to some selected reference variables. This rule is complete: given any task to be performed (simulation, postdiction, ...), it discovers an assemblage, i.e.
a set of relations reducing the task to a straightforward propagation. We might thus expect to apply qualitative reasoning to large-scale systems. All this is developed in detail in the above-mentioned paper [Dormoy & Raiman, 1988] (this issue), and we strongly recommend that the reader consult it before reading the following.

Solving a set of confluences turns out to be an NP-complete problem [Dormoy, 1987]. Hence, the number of potential applications of the resolution rule is likely to increase exponentially as it is being fired. In practical terms, combinatorial explosion happens even when dealing with very simple models. In the second section, we present a heuristic, which we call the "joining rule", for controlling the resolution rule, i.e. for choosing between its potential applications. It is based on the simple commonsense idea of consolidation [Bylander, 1987]. At the same time, it is borne out by mathematical results. In theory, a qualitative model may be out of the scope of this heuristic. We justify why we have not yet hit upon a physical system with this kind of pathology. Though the joining heuristic is self-sufficient, some rules can be added to the basic machinery in order to speed up the assembling step. We present them in the third section, and we show how the whole system works through the use of a simple example. In conclusion, we think that the assembling technique, controlled by the joining heuristic, can assemble large artefacts. We are currently working on a model for a large-scale plant.

2.
Consider a component-based model of a device, and let C1, C2 and C3 be three mutually interacting components. If we denote by C12 the compound component C12 = {C1, C2}, the interactions between C1 and C2 define how C12 works. Indeed, they are of no interest to C3: from C3's point of view, the set made up of C1 and C2 is equivalent to C12. C3 cannot distinguish C1 and C2 from each other.
Hence, it should be possible to draw a model of C12 from models of C1 and C2 regarded by C3 as equivalent. Joining local models together in order to provide more global ones is what has been called consolidation [Bylander, 1987]. The problem lies in giving concrete expression to this idea. In particular, certain rules must be stated as regards the selection of the pair of components to be consolidated at each inference step: the pair certainly cannot be randomly selected.

2.2 The resolution rule under the microscope

Figure 1: Joining two components

In a confluence-based model, C1 and C2 interact through their common variables. Hence, building a model for C12 means providing confluences by eliminating them. Consider a variable involved in both the C1 and C2 models. If it is involved in some other component model, then it must appear in a model of C12 (like variable y in Fig. 1). But if it is not, then it must be completely eliminated (like variable x in Fig. 1). The resolution rule (Fig. 2) seems to tackle this problem, but we must examine closely what it accomplishes.

Let x, y, z, a, b be qualitative quantities such that
x + y = a and −x + z = b.
If x is different from ?, then
y + z = a + b.

Figure 2: The qualitative resolution rule

Consider a simple case (but this case happens more often than not), when both the C1 and C2 models are made up of a single confluence, say respectively E1 and E2. Let x be a variable involved in both, and assume that the resolution rule applies to E1 and E2 and so eliminates x. Then we get a new confluence, say E12, which is global to C12. Any other variable involved in E1 or E2, or both, will belong to E12 as well. But it is not true in general that E12 is equivalent to E1 and E2. For instance, if the model of another component C3 mentions x explicitly, then E12 is certainly not a proper model of C12.
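As a toy illustration (my own encoding, not the paper's representation), a confluence can be written homogeneously — all terms moved to the left-hand side — as a map from variable to qualitative coefficient +1/−1; the rule of Figure 2 then becomes:

```python
def q_resolve(e1, e2, x):
    """Qualitative resolution rule (Figure 2).  e1 and e2 are
    homogeneous confluences, maps variable -> +1/-1, and x must
    appear in them with opposite signs.  Returns the resolvent, or
    None when another shared variable carries opposite signs (its
    qualitative sum would be the ambiguous value '?')."""
    if x not in e1 or e1[x] != -e2.get(x, 0):
        raise ValueError("x must appear in e1 and e2 with opposite signs")
    out = {}
    for v in (set(e1) | set(e2)) - {x}:
        a, b = e1.get(v, 0), e2.get(v, 0)
        if a and b and a != b:
            return None            # +1 and -1 add to '?': rule blocked
        out[v] = a or b
    return out

# x + y = a  and  -x + z = b, written homogeneously:
e1 = {'x': 1, 'y': 1, 'a': -1}
e2 = {'x': -1, 'z': 1, 'b': -1}
```

Here q_resolve(e1, e2, 'x') yields {'y': 1, 'z': 1, 'a': -1, 'b': -1}, i.e. y + z = a + b.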
But the equivalence should hold whenever x is exclusively involved in E1 and E2.

2.3 Joining two components
The previous remarks provide a heuristic rule for choosing between the potential applications of the resolution rule at each inference step:

Joining rule: Let E be a set of confluences corresponding to a component-based description of a device. If the resolution rule applies to confluences E1 and E2 by eliminating variable x, and if x is exclusively involved in E1 and E2, then choose this application. An equivalent model (as far as variables different from x are concerned) is obtained by substituting the confluence E12 produced in this way for the confluences E1 and E2.

If E1 and E2 are the respective models of components C1 and C2, then E12 is a proper model for C12. C1 and C2 are joined. This rule can be applied recursively. Indeed, a variable y different from x and involved solely in E1, E2 and a third confluence belongs to exactly two confluences after the joining rule has been fired. Therefore, the joining rule might choose to eliminate it at a next step. This means that a compound component can be joined in turn to another atomic or equally compound component.

2.4 A mathematical justification
The choice heuristic contained in the joining rule conditions has been justified above by some commonsense arguments. It needs no mathematical proof. But the conclusion, which claims that substituting E12 for E1 and E2 provides an equivalent model for the variables different from x, does need one. We have proved that this is true for square systems, i.e. when the number of confluences is equal to the number of internal variables. Indeed, it can be proved in this case that, starting from task-oriented confluences, all the pieces of task-oriented assemblages (involving variables different from x) that can be drawn from the initial model can be drawn after the joining rule has been fired as well.
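The joining rule lends itself to a simple greedy controller. A sketch under a toy encoding (confluences as maps variable -> +1/−1; `internal` is the set of variables eligible for elimination — my own illustration, with no backtracking):

```python
def assemble(confluences, internal):
    """Greedy application of the joining rule: repeatedly pick a
    variable of `internal` occurring in exactly two confluences with
    opposite signs, and replace that pair by its resolvent.  A sketch
    only: an ambiguous resolvent ('?') is simply skipped."""
    def resolvent(e1, e2, x):
        out = {}
        for v in (set(e1) | set(e2)) - {x}:
            a, b = e1.get(v, 0), e2.get(v, 0)
            if a and b and a != b:
                return None        # signs clash: qualitative '?'
            out[v] = a or b
        return out

    cs, eliminated = list(confluences), []
    while True:
        for x in sorted(internal - set(eliminated)):
            hits = [i for i, e in enumerate(cs) if x in e]
            if len(hits) == 2 and cs[hits[0]][x] == -cs[hits[1]][x]:
                r = resolvent(cs[hits[0]], cs[hits[1]], x)
                if r is not None:
                    cs = [e for k, e in enumerate(cs)
                          if k not in hits] + [r]
                    eliminated.append(x)
                    break
        else:
            return cs, eliminated
```

For example, joining the two one-confluence components x + u = 0 and −u + y = 0 through their shared internal variable u yields the compound confluence x + y = 0.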
We do not give the proof here, because it is too long and requires mathematical notions which are beyond the scope of this paper. It can be found in [Dormoy, 1987]. We have proved further: Let E be a non-decomposable set of confluences, and x a variable involved in exactly two confluences, say E1 and E2. If the resolution rule does not apply to E1 and E2 by eliminating x, then no piece of assemblage involving a variable different from x can be drawn from E. A set of confluences E is said to be decomposable if it contains a subset E' involving variables that are not mentioned in E − E'. In practical terms, if E happens to be decomposable, then one considers E' first. This is what Iwasaki and Simon [1986] called causal ordering. The problem comes down to the study of non-decomposable sets of confluences. In concrete terms, a "loop of components" is not decomposable. Efficient algorithms have been described for decomposing a set of equations (see for example [Trave & Kaskurewicz, 1986]). This second property is important: it states what happens when two components are about to be joined, but ultimately cannot be so. The conclusion seems natural: finding a piece of assemblage for a variable different from x requires eliminating x at some step. This property can be viewed as the "negative part" of the joining rule (it states when joining is not possible). However, it must be pointed out that this second property never applies when the qualitative model is stationary. A stationary qualitative model based on confluences can be formally defined as one having a full qualitative rank (the qualitative rank of a system is defined as the maximum number of its column vectors which are qualitatively independent). This means that the only solution when all the reference variables are 0 is 0. In physical terms, this means that all the internal variables remain steady when the reference variables do. This is why we call it a stationary model.
It can be proved that an assemblage can be drawn from a non-decomposable model iff it is stationary. The model example presented in this paper is stationary.

Can the joining rule fail?
The system presented here has been tried on various examples, stemming from different physical areas: electronic circuits, thermodynamic systems (e.g., the pressurizer of a PWR nuclear power plant), etc. It never failed to yield an assemblage in a straightforward way. So, it is justified to ask whether this method is complete, i.e. always leads to an assemblage. If this is the case, then any model which can be assembled must involve at least one variable belonging to exactly two confluences. Indeed, the joining rule may fail. Some models can be assembled, but have no variable belonging to fewer than 3 confluences. We shall not discuss the underlying mathematics, but previous work related to this question has to be mentioned. Similar issues were studied more than twenty-five years ago by mathematical economists. They led to many mistakes. Lancaster [1962] claimed that the matrix of any square system having a determinate value turns out to be deducible from the form: [matrix illegible in this copy]. Now, a system having a determinate value can be assembled. This would imply that the joining rule is complete in the square case. Two years later, Gorman [1964] showed that this is wrong by producing the following counter-examples: [matrices illegible in this copy]. N1 and N2 are square matrices. They have a single line in common. They are themselves supposed to be Lancaster's or Gorman's matrices. Gorman claimed in a footnote that he had proved that all the determinate matrices are deducible from this generic form. Unfortunately, this is wrong, too, as shown by the counter-example: [matrix illegible in this copy]. It can be shown that Lancaster's and Gorman's forms, plus this last form, are the only generic forms of 4x4 matrices. There are 6 basic forms of 5x5 matrices, but we do not know how many there are for nxn matrices with n > 5.
A generalized control for qualitative resolution is strongly related to these topics. Let's go back to the real world. The fact that the joining rule works without trouble within a physical model can be justified by a commonsense argument: there must be a variable linking two components, but not involved in the interaction with any other component.

3 Implementation

Here follows a demonstration of how the joining rule is implemented. Though it is self-sufficient, some rules can be added in order to speed up the assembling step. They all turn a set of confluences into an equivalent one. Their advantage lies in the fact that they reduce the number of confluences or of variables. Some of them produce new confluences, as the resolution rule does; however, their complexity is polynomial, hence it is worth firing them first. The set of confluences considered at the current step will be denoted E throughout this section.

3.1 Basic machinery

Let E0 be the qualitative model to be assembled. Perform choice, step 0.

Choice, step i: select within the current set of confluences Ei a variable x such that: x is involved in exactly two confluences of Ei; x has not yet been selected at step i; there is a variable different from x involved in Ei which has not yet been assembled.

Joining rule (JR), step i: let x be the selected variable, and E1 and E2 the confluences involving x. Then eliminate x by means of the resolution rule. This produces the confluence E12. Set Ei+1 <- Ei - {E1, E2} U {E12}. Perform choice, step i+1.

Backtracking, step i: make a new choice, step i. If no such choice is possible, and if i is different from 0, then go back to step i-1.

In addition, as soon as a confluence involving a single variable is produced, the corresponding piece of assemblage is kept and the backtracking step is performed, until assembling is completed. The "negative part" of the joining rule may also be added. Dormoy 321

3.2 Simplification rules
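The choice/joining/backtracking machinery above can be sketched in code. The following fragment is our own illustrative reimplementation, not the author's BOOJUM rule system: the dict encoding of confluences and all function names are assumptions, qualitative coefficients are restricted to +1, -1 and '?', and backtracking is omitted for brevity.

```python
from collections import Counter

# A confluence "a1*x1 + a2*x2 + ... = 0" is a dict {var: coeff},
# with coefficients restricted to +1, -1 or '?'.

def qadd(a, b):
    """Qualitative sum of two sign coefficients."""
    if a == 0:
        return b
    if b == 0:
        return a
    if a == b and a != '?':
        return a
    return '?'  # opposite or unknown signs: the result is ambiguous

def resolve(e1, e2, x):
    """Resolution rule: eliminate x, whose coefficients in e1 and e2
    are opposite, producing the confluence E12."""
    out = {}
    for v in set(e1) | set(e2):
        if v != x:
            c = qadd(e1.get(v, 0), e2.get(v, 0))
            if c != 0:
                out[v] = c
    return out

def join(confluences):
    """Greedy joining rule: while some variable occurs in exactly two
    confluences, eliminate it (no backtracking in this sketch)."""
    cs = [dict(c) for c in confluences]
    while True:
        occ = Counter(v for c in cs for v in c)
        fired = False
        for x in [v for v, n in occ.items() if n == 2]:
            e1, e2 = [c for c in cs if x in c]
            if '?' in (e1[x], e2[x]):
                continue
            cs.remove(e1)
            cs.remove(e2)
            if e1[x] == e2[x]:  # flip e2 so the x-coefficients cancel
                e2 = {v: ('?' if c == '?' else -c) for v, c in e2.items()}
            cs.append(resolve(e1, e2, x))
            fired = True
            break
        if not fired:
            return cs
```

For example, joining x + a = 0 and -x + b = 0 eliminates x and yields a + b = 0, a confluence over the remaining variables only.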
3.2.1 Equality rule

Let ax + by = 0 (e) be a confluence in E such that a and b are both different from 0. Then x = -(b/a)y, and the expression -(b/a)y can be substituted for x in all the confluences of E different from (e). Discarding (e) then provides an equivalent set of confluences. Indeed, (e) is not a constraint upon the variables involved in (e) and different from x: whatever value they are assigned, the value of x is determined by them. This rule is of great practical importance: it discards a variable and at least one equation. At the same time, confluences having the pattern of (e) often occur in physical systems. Some examples are: the valve of the pressure regulator, the form of Ohm's law involving voltage drop, or a confluence of a component involving three variables and corresponding to a "connected-to-ground" component. Example (from CE-feedback, see Fig. 4 below): from [dvFP] - [dig2] = 0 one draws [dvFP] = [dig2], and [dig2] can be replaced by [dvFP].

3.2.2 Ritschard's rule

In the field of economics, Ritschard [1983] proposed a more constrained form of the resolution rule, but one leading to a more informative conclusion (the divergences from the resolution rule are underlined in the original): Let x + E1 = a (C1) and -x + E2 = b (C2) be two confluences, where x is a variable and E1 and E2 have no variable with opposite coefficients in common. Assume that all the variables involved in E2 are also involved in E1 (though the reverse may not be the case). Then E3 = a + b (C3) is a valid confluence, where E3 is the same expression as E1 + E2, but with no repeated variable. Moreover, if a + b = b, then substituting confluence (C3) for confluence (C1) provides an equivalent set of confluences. This rule eliminates the occurrence of a variable in an equation. Its complexity is polynomial, but it costs much more than the other rules presented here (including the joining rule).
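As a concrete sketch of the equality rule (our own illustrative code, not the paper's implementation; it assumes all coefficients are +1 or -1 and leaves '?' coefficients aside): find a two-variable confluence ax + by = 0, rewrite x as -(a*b)y in the remaining confluences, and discard the defining confluence.

```python
def apply_equality_rule(cs):
    """One pass of the equality rule over a list of confluences
    (dicts {var: coeff}, coefficients +1 or -1 only).  Returns
    (new confluence list, (x, s, y)) meaning x = s*y, or
    (cs, None) if no two-variable confluence was found."""
    for i, e in enumerate(cs):
        if len(e) == 2 and all(c in (1, -1) for c in e.values()):
            (x, a), (y, b) = e.items()
            s = -a * b                      # from ax + by = 0: x = s*y
            rest = []
            for c in cs[:i] + cs[i + 1:]:
                c = dict(c)
                if x in c:
                    cx = c.pop(x)
                    # Same variable: signs add exactly and may cancel.
                    val = c.get(y, 0) + cx * s
                    if val:
                        c[y] = 1 if val > 0 else -1
                    elif y in c:
                        del c[y]
                rest.append(c)
            return rest, (x, s, y)
    return cs, None
```

For instance, from [dvFP] - [dig2] = 0 the rule rewrites [dvFP] as [dig2] in the other confluences (the direction of the substitution is immaterial), discarding the defining confluence.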
Nevertheless, it is worth examining it at the beginning of the assembling step, for it may cause the application of the equality rule (see above) or the single-occurrence-elimination rule (see below). Example (from the pressure regulator, see [Dormoy & Raiman, 1988], this issue): this rule applies to the pressure regulator after the equality rule has substituted -[dP4] for [dA]. Let (C1) and (C2) be the two confluences:

[dP2] - [dP3] - [dP4] - [dQ] = 0 (6)
[dP3] - [dP4] - [dQ] = 0 (3)

Then [dP3] can be eliminated in confluence (6), and confluence (6) can be replaced by confluence (7):

[dP2] - [dP4] - [dQ] = 0 (7)

3.2.3 Single-occurrence-elimination rule

If a variable x occurs in a single confluence (e) involving at least two variables, then discard x and (e): an assignment to x that satisfies confluence (e) can always be found. Example: after the previous application of Ritschard's rule, [dP3] occurs only in confluence (3). Hence [dP3] and (3) can be discarded.

3.2.4 Assemblage propagation rules

These rules generalize the basic propagation rules in order to deal with task-oriented confluences. Let (e) be a confluence involving a single internal variable x. Then deduce the corresponding piece of assemblage. This rule simply achieves the goal of the assembling step. Let x = f(w1, ..., wp) be a piece of assemblage, and (e) a confluence involving x. Then replace x by f(w1, ..., wp), provided that this adds no new ? coefficient to any reference variable, or that (e) has been discarded by the single-occurrence-elimination rule. Any global relation deduced after this replacement will be a piece of assemblage under the usual conditions. Adding a new ? coefficient to some reference variable could make assemblage deduction impossible.
For instance, [dP2] should not be replaced by [dP1] + [dP5] as soon as the resolution rule produces the piece of assemblage:

[dP2] = [dP1] + [dP5] (A1)

This would lead to a new form of confluence (1): -[dQ] = ?[dP1] + ?[dP5]. Afterwards, no piece of assemblage could be deduced for [dQ]. Example (from the pressure regulator): this rule draws a piece of assemblage for [dP3] from confluence (3) and pieces of assemblage (A2) and (A3):

[dP3] = [dP1] + ?[dP5] (A5)

The last part of the rule makes sure that this is a proper piece of assemblage.

Figure 4: CE-Feedback and its loop model. The confluences of the model (some subscripts are not fully legible in the scan) are:
Transistor Q1: [dvI] - [diB] = 0; [dvI] - [diC1] = 0; [dvI] - [diE1] = 0
Transistor Q2: [dvC1E2] - [diC?] = 0; [dvC1E2] - [diB1] = 0
Ohm's law: [dvI] - [dvFP] - [diF] = 0 Ohm(I,FP); [dvE2] - [dvFP] - [dvC1E2] = 0 Ohm(E2,FP); [dvFP] - [dig2] = 0 Ohm(FP,Ground); [dvC1] - [diCC] = 0 Ohm(C1,CC)
KCL: [diI] - [diB] - [diF] = 0 KCL(I); [diC1] - [diCC1] - [di?] = 0 KCL(C1); [di?2] - [diF] - [di?1] = 0 KCL(FP)
Definition of drop of potential: [dvC1E2] - [dvC1] + [dvE2] = 0 PD(C1,E2)

We present here how the "loop" in CE-feedback (Fig. 4) [De Kleer, 1984] can be assembled using the joining rule and the simplification rules mentioned above. Some inference steps are illustrated by diagrams in the original; they are intended to show the similarity between the way the system joins the components and the way an engineer would. The equality rule applies first. It gives: [dvI] = [diB] = -[diC1] = [diE1]; a similar chain of equalities among Q2's variables (not legible in the scan); [dvFP] = [dig2]. After the replacements have been performed, we get:

[dvI] - [dvFP] - [diF] = 0 Ohm(I,FP)
[dvE2] - [dvFP] - [dvC1E2] = 0 Ohm(E2,FP)
-[dvI] - [diF] = -[diI] KCL(I)
-[dvI] - [dvC1] - [dvC1E2] = 0 KCL(C1)
[dvFP] - [diF] - [dvC1E2] = 0 KCL(FP)
[dvC1E2] - [dvC1] + [dvE2] = 0 PD(C1,E2)

The joining rule now applies. The steps are:
Choice, step 0: [dvC1] selected, KCL(C1)+PD(C1,E2)
JR, step 0: -[dvI] - [dvE2] - [dvC1E2] = 0 (14)
Choice, step 1: [dvE2] selected, Ohm(E2,FP)+(14)
JR, step 1: -[dvI] - [dvFP] - [dvC1E2] = 0 (15)

Let's sum up the situation.
The current model at step 2 is:

[dvI] - [dvFP] - [diF] = 0 Ohm(I,FP)
-[dvI] - [diF] = -[diI] KCL(I)
[dvFP] - [diF] - [dvC1E2] = 0 KCL(FP)
-[dvI] - [dvFP] - [dvC1E2] = 0 (15)

The joining rule goes on firing:
Choice, step 2: [dvC1E2] selected, KCL(FP)+(15)
JR, step 2: [dvI] + [dvFP] - [diF] = 0 (16)
Choice, step 3: [dvFP] selected, Ohm(I,FP)+(16)
JR, step 3: [dvI] - [diF] = 0 (17)

At this step, the equality rule applies, and deduces that [dvI] and [diF] are equal: [dvI] = [diF]. Propagating this equality in KCL(I) leads to the first pieces of assemblage: [dvI] = [diF] = [diI]. Backtracking to step 2, the second assemblage propagation rule applies. The set of confluences at step 2 reduces to:

[dvFP] - [dvC1E2] = [diI] KCL(FP)
-[dvFP] - [dvC1E2] = [diI] (15)

The joining rule applies again:
Choice, step 2: [dvFP] selected, KCL(FP)+(15)
JR, step 2: [dvC1E2] = [diF]

and gets a new piece of assemblage: [dvC1E2] = [diF]. One can check that no other informative piece of assemblage can be obtained.

If not controlled, qualitative resolution leads to combinatorial explosion. But the fact that qualitative models stem from real-world devices prevents qualitative resolution from meeting the fate of resolution in logic. The heuristic control presented here is strongly related to the structural properties of a sane device. We have tried our system on examples corresponding to different physical areas. However, these were all small devices. Nevertheless, we believe that the assembling technique, controlled by the joining heuristic, could assemble some larger artefacts. We are currently working on a model of a large-scale plant.

[Bylander, 1987] Tom Bylander. Using consolidation for reasoning about devices. Technical report, Ohio State University, March 1987.
[De Kleer, 1984] Johan de Kleer. How circuits work. Artificial Intelligence, Vol. 24, no. 1-3, December 1984.
[Dormoy, 1987] Jean-Luc Dormoy. Resolution qualitative: completude, interpretation physique et controle. Mise en oeuvre dans un langage a base de regles: BOOJUM.
Paris 6 University Doctoral Thesis, December 1987.
[Dormoy & Raiman, 1988] Jean-Luc Dormoy and Olivier Raiman. Assembling a device. AAAI-88.
[Gorman, 1964] W. M. Gorman. More scope for qualitative economics. Review of Economic Studies, XXXI, pp. 65-68, 1964.
[Iwasaki & Simon, 1986] Y. Iwasaki and H. A. Simon. Causality in device behavior. Artificial Intelligence, Vol. 29, no. 1, July 1986.
[Lancaster, 1962] K. J. Lancaster. The scope of qualitative economics. Review of Economic Studies, XXIX, no. 2, 1962.
[Ritschard, 1983] Gilbert Ritschard. Computable qualitative comparative statics techniques. Econometrica, Vol. 51, no. 4, July 1983.
[Trave & Kaskurewicz, 1986] Louise Trave and Eugenius Kaskurewicz. Qualitative solutions of linear homogeneous systems. Internal report of the LAAS-CNRS at Toulouse (France), October 1986.
Non-Intersection of Trajectories in Qualitative Phase Space: A Global Constraint for Qualitative Simulation* Wood W. Lee and Benjamin J. Kuipers, Department of Computer Sciences, University of Texas, Austin, Texas 78712

Abstract
The QSIM algorithm is useful for predicting the possible qualitative behaviors of a system, given a qualitative differential equation (QDE) describing its structure and an initial state. Although QSIM is guaranteed to predict all real possibilities, it may also predict spurious behaviors which, if uncontrolled, can lead to an intractably branching tree of behaviors. Prediction of spurious behaviors is due to an interaction between the qualitative level of description and the local state-to-state perspective on the behavior taken by the algorithm. In this paper, we describe the non-intersection constraint, which embodies the requirement that a trajectory in phase space cannot intersect itself. We develop a criterion for applying it to all second order systems. It eliminates a major source of spurious predictions. Using it with the curvature constraint tightens simulation to the point where system-specific constraints can be applied more effectively. We demonstrate this on damped oscillatory systems with potentially nonlinear monotonic restoring force and damping terms. Its introduction represents significant progress towards tightening QSIM simulation.

1 Introduction
QSIM [Kuipers, 1986] qualitatively reasons about systems of autonomous qualitative differential equations (QDEs). Although many well-known techniques already exist for solving systems of ordinary differential equations (ODEs), they are applicable only to ODEs of restricted forms. In real applications, however, such forms are rare. On one hand, incomplete knowledge often renders QDE models more realistic than exact ODEs. On the other hand, even when we do have exact ODEs, they are usually in unsolvable forms.
QSIM, always predicting all real solutions to a system of QDEs (in the form of qualitative descriptions of the temporal behavior of parameters), has the potential to deal with these cases. Taking a phase space view, mathematicians have been able to develop analyses that yield useful global characteristics (such as stability) of solutions to ODEs without explicitly solving them. However, in applications such as monitoring and control where thresholds are a main concern, such techniques are insufficient. Simulation-type techniques, such as QSIM, are necessary; in such cases, QSIM predictions exhaust all possible manners in which various thresholds might be crossed. (*This work is supported in part by the National Science Foundation under grant number IRI-8602665.) Though QSIM is a powerful algorithm, the combination of the local state-to-state perspective and the qualitative level of description makes it possible for QSIM to predict spurious solutions. In an analysis of the QDE for the damped spring, Lee et al. [1987] identified various new types of constraints (higher derivative, energy and system property) for tightening QSIM simulation. Using early versions of these constraints, they were able to arrive at all and only the correct predictions for the linear damped spring. However, success of these early versions with potentially nonlinear damped springs was not as complete. Kuipers and Chiu [1987] introduced a generalized higher derivative constraint in the form of curvature constraints. With it they successfully eliminated a major source of spurious predictions in QSIM, namely the lack of derivative information. Though curvature constraints are powerful and necessary for simulating systems of second order and higher, there are many cases where they alone do not suffice to make predictions tractable.
In this paper, we describe the non-intersection constraint (short for non-intersection-of-phase-space-trajectory constraint). It is not system-specific in the sense that its derivation does not depend on the specific system QSIM works on. It is derived from a mathematical theorem that governs all systems the current QSIM deals with, and applies equally to them. It specifies that phase space trajectories do not cross themselves, and it eliminates a major source of spurious predictions. We have developed a criterion for applying it to all second order systems. Using it with the curvature constraint tightens simulation to the point where system-specific constraints (such as energy and system property constraints) can be more effectively applied. This is demonstrated on damped oscillatory systems. In the rest of this paper, we first introduce the phase space framework and how QSIM predictions fit into the picture. Next, the non-intersection constraint is described. Then we describe our current implementation and the results of applying it to damped oscillatory systems. Its relationship to previously introduced constraints and other issues are discussed. Finally, related work by Sacks [1987] and Struss [1987] is described.

2 The Phase Space View
The non-intersection constraint is based on the standard phase space representation for systems of first-order differential equations. An nth order equation can always

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

Figure 1: Some phase portraits of oscillatory systems.

Figure 2: A QSIM prediction and its qualitative phase portrait. Part of a QSIM prediction:
Time  X            V
T0    (0, X190)    (0, INF)
T1    X190         0
T2    (0, X190)    V87
T3    0            (V87, 0)
T4    X191         0
T5    (X191, 0)    V88
T6    0            (0, V88)
T7    X194         0
T8    (0, X194)    V91
For ex- ample, the linear-damped spring, described by the second order equation ma = -Lx - ~21, is also described by the following system of two first order equations: Li = v Jc rl 6 = --x--v m m (1) (2) A phase space for a system is the Cartesian product of a set of independent variables (state variables) that fully describes the system. For second order systems, this cor- responds to a -phuse plane. A point in the phase space (phase point) represents a state of the system. Changes of the system over time define a trajectory through the phase space which tracks the state changes. Thus a trajectory is ageometrical representation of asolution to a systemI A phase portrait (or phase diagram) for a system depicts its phase space and trajectories and is a geometrical represen- tation of the qualitative behavior of the system. Figure 1 shows some phase portraits of oscillatory systems. -From left to right, they represent solutions of steady oscillations and diminishing oscillations, respectively. For a more thor- ough treatment of the phase space representation, please refer to an elementary differential equations book such as [Boyce and diPrima, 19771. A QSIM prediction is a qualitative description of the be- havior of a solution to a given system (Figure 2). Thus it also describes the class of trajectories in the phase space which has the corresponding qualitative description. Us- ing the Cartesian product of the quantity spaces of the Figure 3: constraint. a b C (4 (b) Intersection criterion for the non-intersection state variables as the qualitative phase space, the trajec- tory of a QSIM prediction may be obtained by plotting the qualitative states predicted in this qualitative phase space. 
3 The Non-Intersection Constraint
The mathematical foundation for the non-intersection constraint is a theorem about trajectories of autonomous systems which states that: A trajectory which passes through at least one point that is not a critical point cannot cross itself unless it is a closed curve. In this case the trajectory corresponds to a periodic solution of the system [Boyce and diPrima, 1977, pp. 379-380]. Its proof follows from the existence and uniqueness theorems for systems of first order differential equations and will not be given here. Autonomous systems are systems whose phase space representations do not explicitly involve the independent variable (time, in QSIM). Since QSIM deals with systems that do not involve explicit time functions, this theorem applies to the QSIM domain. The idea of the non-intersection constraint, then, is to implement the constraint imposed by this theorem on trajectories of QSIM predictions. The difficulty with applying this constraint within QSIM is that the qualitative description of behaviors only specifies values in terms of a discrete set of symbols, i.e. landmark values and the intervals between them. Therefore, we only know where the phase space trajectory is in a loose, qualitative sense. For example, in Figure 2, the precise trajectory from (X190, 0) to (X191, 0) is unknown. We only know that it reaches V87 before crossing the negative v axis. If a trajectory consists of a single critical point, it will be a quiescent initial state and we need not worry about constraining its simulation. If on the other hand the trajectory is a closed curve, it corresponds to cyclic behavior and an appropriate filter in QSIM takes care of the behavior. Thus, we need only concern ourselves with multi-state, non-cyclic behaviors. Lee and Kuipers 287 Given this, the problem then is to detect intersections between segments of a trajectory.
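One special case of this detection problem is handled directly: the qualitative trajectory returns to a phase point specified exactly by a pair of landmark values it has visited before. A sketch, under our own encoding (landmarks as strings, intervals as pairs; not the QSIM implementation):

```python
def revisits_landmark_point(qstates):
    """Detect the trajectory returning to a phase point given by a
    pair of landmark values (strings).  States where either variable
    is in an interval (encoded as a pair) do not pin down a point
    and are skipped."""
    seen = set()
    for x, v in qstates:
        if isinstance(x, str) and isinstance(v, str):  # both landmarks
            if (x, v) in seen:
                return True
            seen.add((x, v))
    return False
```

For the prediction of Figure 2 this test fails (each landmark pair is new), consistent with a spiral; a behavior that came back to (X190, 0) would be pruned.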
The simplest case occurs when a trajectory reaches a point (coordinates specified by a pair of landmark values) it passed through before. In the general case, however, the intersection point lies between landmark values. We prove its existence for second order systems by establishing a criterion for intersection, as described below. Pick a trajectory segment with end points defining a rectangle which encloses all points of the segment. Consider segment ac enclosed in rectangle abcd (Figure 3a). The segment partitions the edges of the rectangle into two sets, {ab, bc} and {ad, dc}. If the trajectory later enters this rectangle through one edge set, say {ab, bc} at b, and exits through the other, in this case {ad, dc} say at d, an intersection must occur, even if we don't know precisely where¹. Establishing this condition for a trajectory is thus a criterion to conclude that the trajectory intersects itself. It is general and applies to all second order systems QSIM deals with.

4 Implementation
The non-intersection constraint has been implemented using the criterion for intersection just described. An interesting source of complication is that phase 'points' can be points, intervals or areas depending on whether the state variables are at landmarks or in intervals. Consider the case of Figure 3b. The state variable x is in an interval at one end of a trajectory segment and at a landmark at the other end, and vice versa for the variable v. In this case, the edge sets satisfying the intersection criterion are {af, fe} and {bc, cd}, rather than {af, fe} and {ac, ce}. Other sources of complication are discussed in [Lee and Kuipers, 1988]. The non-intersection constraint is applied to all legitimate phase spaces of a system. This means that for the damped spring, the constraint is applied to each of the x-v, v-a and a-x phase spaces². This is necessary because of the local point of view of limit-analysis-based qualitative simulation methods.
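With exact coordinates the criterion can be checked by ordinary segment-intersection tests; the qualitative setting needs the rectangle/edge-set argument precisely because points are only located relative to landmarks. The following sketch is our own numerical stand-in, not the QSIM implementation: it detects a self-crossing polyline in the phase plane with the standard orientation (cross-product) test.

```python
def ccw(a, b, c):
    """Signed area test: >0 if a,b,c turn counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True iff open segments p1p2 and p3p4 properly intersect."""
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def self_intersects(traj):
    """Check a phase-plane polyline for a self-crossing, skipping
    adjacent segments (they always share an endpoint)."""
    segs = list(zip(traj, traj[1:]))
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):
            if segments_cross(*segs[i], *segs[j]):
                return True
    return False
```

A figure-eight-like polyline is flagged as violating the constraint; an inward spiral is not.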
Simply applying the constraint to, say, the x-v space would not ensure that the parameter a behaves properly.

5 An Example
We have chosen the damped spring as an example to illustrate the power of this constraint. The reason is that the damped spring is a representative second order system with versions of varying complexity (from linear to nonlinear):

linear damped spring: ma = -kx - qv
monotonic spring force: ma = -f(x) - qv
monotonic damping: ma = -kx - g(v)
general damped spring: ma = -f(x) - g(v)

(¹This is a direct consequence of the Jordan Curve Theorem, which says that a closed curve in a plane divides the plane into exactly two regions. Refer to [Christenson and Voxman, 1977] for details. ²Normally, the x-v space is considered the phase space for a damped spring. In fact, though, any collection of variables that is a linearly independent set and that fully describes the system can be the phase space.)

Figure 4: Correspondence between relative values of km and q² and behavior of the linear damped spring (overdamped, critically damped, underdamped; a lags x, 180° out of phase, a leads x).

These same equations also describe damped oscillatory systems in other domains (e.g. circuits and control). Damped spring systems have two types of behaviors, purely oscillatory and reaching quiescence. The division between these two types is, in the linear case, governed by the relationship between 4km and q² (Figure 4). The behavior is purely oscillatory (underdamped) if 4km > q², and reaches quiescence otherwise (overdamped and critically damped). For purely oscillatory behaviors, different phase relationships between x and a are possible and are, in the linear case, governed by the relationship between km and q². Using the non-intersection constraint together with a curvature constraint [Kuipers and Chiu, 1987] on the damped spring systems has made predictions tractable. Three sets of behaviors are predicted.
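The underdamped/overdamped division just described is simply the sign of the discriminant of the characteristic equation m·r² + q·r + k = 0; a minimal sketch (ours, not from the paper):

```python
def damping_regime(k, q, m):
    """Classify the linear damped spring m*a = -k*x - q*v by the
    sign of q^2 - 4*k*m (discriminant of m*r^2 + q*r + k = 0)."""
    d = q * q - 4.0 * k * m
    if d < 0:
        return "underdamped"       # purely oscillatory: 4km > q^2
    if d == 0:
        return "critically damped"
    return "overdamped"
```

This is exactly the numerical distinction that a purely qualitative model cannot draw, which is why filtering the quiescent behaviors below requires quantitative knowledge.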
One set consists of strictly expanding oscillations with varying phase relationship between a and x. Another consists of strictly diminishing oscillations with varying phase relationship between a and x. The third consists of behaviors reaching quiescence after an arbitrary number of diminishing oscillations. Among these three sets, the expanding set is eliminated when energy constraints are included [Lee et al., 1987]. The system property constraints impose consistent x-a phase relationships on the remaining two sets. Since behaviors with overdamped and critically damped approaches to quiescence correspond to 4km ≤ q², filtering the behaviors in the third set requires imposing constraints of a numerical nature. The quantitative reasoning methods of Kuipers and Berleant [1988] should make it possible to apply partial quantitative knowledge to filter these behaviors. The behaviors of the damped spring system that survive the combined curvature, non-intersection, energy and system-property constraints can be classified as follows:

1. Overdamped or critically damped approach to quiescence.
2. Diminishing oscillations, with one of three constant x-a phase relations.
3. Diminishing oscillations, with varying x-a phase relations.
4. Diminishing oscillations, reaching quiescence after an arbitrary finite number of oscillations.

Figure 5: The non-intersection constraint at work (program output reporting an intersection in the a-x portrait: the rectangle formed by two phase points, its two edge sets, and the edges through which the trajectory re-enters and exits the rectangle; the listing is not fully legible in the scan).

All behaviors can be accounted for for each version of the damped spring. For the general damped spring and the monotonic damping cases, behaviors from all four classes are possible. For the monotonic spring force and linear cases, behaviors from classes 1, 2 and 4 are predicted.
However, only classes 1 and 2 represent possible behaviors in the linear case. Spurious predictions are due to limitations on the current form of the system property constraint. Incorporating Kuipers and Berleant's [1988] quantitative reasoning methods should allow us to eliminate them. Output showing the non-intersection constraint at work is included in Figure 5.

6 Discussion
Although the M+ functional relationship is defined to be time invariant in QSIM, insufficient mechanisms are incorporated to ensure that QSIM treats each M+ function consistently. This is the reason why Lee et al. [1987] had limited success with nonlinear versions of the damped spring. For nonlinear versions of the damped spring, the envelopes derived for a from the corresponding energy equations are too weak to constrain a appropriately. Thus QSIM predicts that a can behave more or less arbitrarily. This, however, gives rise to behaviors with inconsistent M+ functions which violate the non-intersection constraint. Applying the non-intersection constraint eliminates these spurious predictions. In comparison with the previously introduced constraints (curvature, energy (Lyapunov) and system property), the non-intersection constraint is not system-specific, in that its derivation does not depend on the particular system QSIM works on. Its form remains the same and it applies equally regardless of the system. The curvature constraint is fundamental in the sense that it addresses QSIM's lack of higher derivative information for performing the local state-to-state predictions central to the algorithm. It is local in the sense that it does not address particular global system characteristics. In this sense, the non-intersection, energy and system property constraints are all global.
The non-intersection and curvature constraints together tighten simulation to the point where constraints addressing particular global system characteristics, such as energy and system property, can be applied more effectively. This represents significant progress towards tightening QSIM simulation. The non-intersection constraint can impose, for example, the requirement that a trajectory must spiral inwards, but it does not guarantee that the spiral converges to the origin. It remains possible that the spiral converges to a limit cycle. This ambiguity can be resolved using an appropriately chosen Lyapunov (energy) function. Another possible approach for resolving this ambiguity is to apply aggregation methods [Weld, 1986] to abstract the decreasing oscillation to an amplitude decreasing towards zero. This abstraction transforms the ambiguity between asymptotically stable behavior and limit cycle into a much simpler limit-analysis type ambiguity. We need only ask whether a changing value (the amplitude) moving towards a limit (zero) reaches it or stops before reaching it. In the current paper, we have discussed only the non-intersection constraint applied between two segments of the same trajectory. In fact, the non-intersection constraint applies more generally, prohibiting intersections between any two trajectories in the same phase portrait. This last condition raises an important subtlety. Two trajectories within the same phase portrait represent different possible initial conditions of the same system. However, since a set of QSIM predictions may have different presuppositions about the system properties of the system being simulated, it is not guaranteed that two arbitrarily chosen QSIM behaviors may be legitimately placed into the same phase portrait. Thus, in order to apply the non-intersection constraint between two trajectories, we must be able to determine whether their presuppositions about system properties are compatible.
We plan to address this problem in future work.

7 Related Work
Struss [1987] has made a significant contribution to the mathematical foundations of qualitative reasoning through a careful analysis of qualitative algebras in terms of interval algebras. Kuipers [1988] elaborates on some of Struss' points, and clarifies a misconception about QSIM. In his appendix, Struss makes an interesting analysis of the spring without friction (the simple spring) based on the phase space approach. Using purely qualitative arguments (symmetry) about trajectories of the simple spring, he arrives at the conclusion that the simple spring oscillates with constant amplitude. He then adds that this would make adding further equations like conservation of energy unnecessary. A point to note, however, is that the conservation of energy equation is not a further equation that needs to be added. It is derivable from the original description of the system. The process of deriving it would be akin to the process of his analysis. The difference is that knowledge of algebraic manipulation is needed rather than of phase space trajectory analysis. Sacks' work [1987] is impressive in automating the mathematician's analysis of precisely specified ODEs. Using a combination of numerical and analytical methods (notably piecewise linear approximations), his PLR program produces qualitative descriptions of solutions, in the form of phase diagrams, for nonlinear differential equations. His approach is to first make a simple piecewise linear approximation of the given equations and construct phase diagrams for them. Then he refines his approximation, constructs another set of diagrams and compares them with the previous ones to look for new qualitative properties. This process of refine-and-compare continues until no new properties are found. His program performs well on a variety of equations.
Our work addresses the problem of obtaining qualitative behaviors from an incompletely specified QDE. When key functional relations are known only to lie in the class of monotonic functions, piecewise linear approximation is impossible, and Sacks' powerful methods do not apply.

8 Conclusions
QSIM is a powerful inference mechanism for predicting qualitative solutions of QDEs. However, if unconstrained, it is possible for QSIM to predict an intractable number of spurious solutions. Kuipers and Chiu [1987] and Lee et al. [1987] have introduced various constraints to tighten the simulation process. They are useful, but are in general unable to tighten simulation to the point where predictions become tractable. We have introduced a global, non-system-specific constraint to eliminate a major source of spurious predictions. This is the non-intersection constraint for phase space trajectories, which specifies that a trajectory cannot intersect itself. Using it and the curvature constraint together tightens simulation to the point where other global and system-specific constraints can be applied more effectively. This is demonstrated on damped oscillatory systems. Introduction of the non-intersection constraint represents significant progress towards tightening QSIM simulation. The current implementation applies the constraint between two segments of the same trajectory. Future work includes generalizing the constraint to apply between trajectories, and automating the interpretation of behavior classes, for example by aggregation of repeated cycles [Weld, 1986], or by merging behaviors into families [Chiu, 1988].

Acknowledgments
Thanks to Charles Chiu, Xiang-Seng Lee, Jason See and Wing Wong for reading drafts of this paper.

References
[Boyce and diPrima, 1977] W. E. Boyce and R. C. DiPrima. Elementary Differential Equations. John Wiley & Sons, New York, 1977.
[Chiu, 1988] C. Chiu. Automatic Analysis of Qualitative Simulation Models. Unpublished, 1988.
[Christenson and Voxman, 1977] C. O. Christenson and W. L. Voxman. Aspects of Topology. Marcel Dekker, New York, 1977.

[Kuipers, 1986] B. J. Kuipers. Qualitative Simulation. Artificial Intelligence 29: 289-338, 1986.

[Kuipers and Chiu, 1987] B. J. Kuipers and C. Chiu. Taming Intractable Branching in Qualitative Simulation. IJCAI-87, 1987.

[Kuipers, 1988] B. J. Kuipers. The Qualitative Calculus is Sound but Incomplete: A Reply To Peter Struss. To appear in International Journal of AI in Engineering, 1988.

[Kuipers and Berleant, 1988] B. J. Kuipers and D. Berleant. Using Incomplete Quantitative Knowledge in Qualitative Reasoning. AAAI-88, 1988.

[Lee et al., 1987] W. W. Lee, C. Chiu and B. J. Kuipers. Developments Towards Constraining Qualitative Simulation. UT TR AI87-44. Also in AAAI-87 Qualitative Physics Workshop Abstracts, 1987.

[Lee and Kuipers, 1988] W. W. Lee and B. J. Kuipers. Non-Intersection of Trajectories in Qualitative Phase Space: A Global Constraint for Qualitative Simulation. TR forthcoming, 1988.

[Sacks, 1987] E. P. Sacks. Piecewise Linear Reasoning. AAAI-87, 1987.

[Struss, 1987] P. Struss. Problems of Interval-Based Qualitative Reasoning. Siemens Corp., ZTI INF, 1987.

[Weld, 1986] D. S. Weld. The Use of Aggregation in Causal Simulation. Artificial Intelligence 30: 1-34, 1986.
Using Incomplete Quantitative Knowledge in Qualitative Reasoning*

Benjamin Kuipers and Daniel Berleant
Department of Computer Sciences
University of Texas, Austin, Texas 78712
email: kuipers@cs.utexas.edu or berleant@sally.utexas.edu

Abstract

Incomplete knowledge of the structure of mechanisms is an important fact of life in reasoning, commonsense or expert, about the physical world. Qualitative simulation captures an important kind of incomplete, ordinal, knowledge, and predicts the set of qualitatively possible behaviors of a mechanism, given a qualitative description of its structure and initial state. However, one frequently has quantitative knowledge as well as qualitative, though seldom enough to specify a numerical simulation.

We present a method for incrementally exploiting incomplete quantitative knowledge, by using it to refine the predictions of a qualitative reasoner. Incomplete quantitative descriptions (currently ranges within which unknown values are assumed to lie) are asserted about some landmark values in the quantity spaces of qualitative parameters. Unknown monotonic function constraints may be bounded by numerically computable envelope functions. Implications are derived by local propagation across the constraints in the model.

When this refinement process produces a contradiction, a qualitatively plausible behavior is shown to conflict with the quantitative knowledge. When all predicted behaviors of a given model are contradicted, the model is refuted. If a behavior is not refuted, propagation of quantitative information results in a mixed quantitative/qualitative description of behavior that can be compared with other surviving predictions for differential diagnosis.

1 Introduction

A qualitative model of a device or system is an abstraction of a set of real systems. The behavior of these systems can vary greatly, yet purely qualitative descriptions of these behaviors are identical.
Quantitative knowledge about these systems can, however, allow them and their behaviors to be distinguished. Adding quantitative information to qualitative modeling allows more precise characterization of systems and their behaviors. This increased precision can help in diagnosis and prediction of behavior, even if there is much too little for numerical simulation.

*This research was supported in part by the National Science Foundation through grant number IRI-8602665.

Consider the example of the pressure regulator. A working pressure regulator of the type described in [de Kleer and Brown, 1985] has a fluid input of varying pressure. The regulator has an internal, continuously variable valve which regulates the pressure at the output port so that its variation is considerably smaller than the variation at the input port. It does this by sensing the output pressure and closing the valve to restrict the flow as the output pressure rises, or opening the valve when the output pressure falls.

The pressure regulator may malfunction by having the internal valve stuck in one position, so that it cannot control the output pressure. For both the working and stuck pressure regulators, an increase in input pressure leads to an increase in output pressure, but for the working regulator the increase is significantly less. Qualitative simulation of both the working and stuck regulators indicates correctly that an input pressure increase implies an output pressure increase. However, the qualitative description alone cannot distinguish between the working and stuck regulators on the basis of output pressure variation (or any other easily measured parameter). This problem may arise with any proportionally controlled, negative feedback system, an important class of real mechanisms.
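A toy numeric model makes the ambiguity concrete. All constants below are invented, not from the paper: both regulators raise their output when the input doubles, so the qualitative behaviors match, but only the quantitative magnitudes separate them.

```python
# Toy model (invented constants) of the working vs. stuck regulator.

def stuck_regulator(p_in, opening=0.5):
    # Valve frozen at a fixed opening: output tracks input proportionally.
    return opening * p_in

def working_regulator(p_in, setpoint=2.5, gain=20.0):
    # Proportional feedback: the valve opening shrinks as the output
    # exceeds the setpoint.  Damped fixed-point iteration on
    # p_out = opening(p_out) * p_in.
    p_out = setpoint
    for _ in range(200):
        opening = max(0.0, min(1.0, 0.5 - gain * (p_out - setpoint) / p_in))
        p_out = 0.99 * p_out + 0.01 * (opening * p_in)
    return p_out

low, high = 5.0, 10.0  # a doubling of the input pressure
d_stuck = stuck_regulator(high) - stuck_regulator(low)
d_work = working_regulator(high) - working_regulator(low)
print(d_stuck, d_work)  # the working regulator's change is much smaller
```

Qualitatively both deltas are positive; quantitatively the feedback regulator's output barely moves, which is exactly the distinction Q2 exploits.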
Quantitative information is necessary to resolve this ambiguity, but we wish to preserve our ability to reason reliably with incomplete knowledge of the structure and numerical values characterizing the physical system. Our mixed qualitative-quantitative reasoner, Q2, makes it possible to assert incomplete quantitative knowledge in the form of ranges1, about the landmark values in Kuipers' [1986] QSIM behavioral description, and propagate their consequences. Our method is applicable to other qualitative reasoning systems with limitations discussed in section 4.

In the case of the pressure regulator, we assumed plausible ranges2 of values for resistance and flow capacity, and simulated the response of the regulator to a doubling of the input pressure from [5, 5.1] to [10, 10.2]. Each of the two models (working and stuck) predicted a single qualitative behavior: Output pressure increased.

1 Fully specified quantitative values are expressed as ranges whose endpoints are identical.

2 The term range is used rather than interval, because the rules of interval arithmetic are not always valid in this application. In interval arithmetic, if XY=K for intervals X, Y and K, the width of Y decreases for increased width of X given K (cf. Alefeld & Herzberger [1983]). But if X and Y are ranges representing reals whose values are uncertain, then increased width for X represents greater uncertainty in X, hence greater uncertainty (i.e. increased width) in Y. Our ranges are to be interpreted as representing probability distribution functions. Range [A, B] thus represents any pdf whose value is positive from A to B, and zero otherwise.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.
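Footnote 2's distinction can be made concrete: treating X and Y in X*Y = K as uncertain ranges, Y is obtained by dividing K by X endpoint-wise, so widening X (more uncertainty) also widens Y. The numeric constants below are invented for illustration.

```python
# Range semantics for X*Y = K over positive ranges (invented numbers).

def divide(k, x):
    # Y = K / X for positive ranges, computed endpoint-wise.
    (klo, khi), (xlo, xhi) = k, x
    return (klo / xhi, khi / xlo)

def width(r):
    return r[1] - r[0]

K = (10.0, 10.0)        # K known exactly
narrow_x = (4.0, 5.0)
wide_x = (3.0, 6.0)     # greater uncertainty in X

y_narrow = divide(K, narrow_x)
y_wide = divide(K, wide_x)
print(y_narrow, y_wide)
assert width(y_wide) > width(y_narrow)  # wider X gives wider Y
```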
Augmenting the qualitative descriptions with quantitative ranges, the working model predicted the final value of the output pressure to be in [1.91, 2.98], while the stuck model predicted an output pressure in [3.8, 6.2]. This is precisely what is required for differential diagnosis between the two models.

2 Propagation of Quantitative Information

We will explain our quantitative propagation method in the context of a simple one-tank "bathtub" system; in this case one with a partially blocked drain, so that outflow increases only slowly with pressure.

There are three distinct qualitative behaviors for a bathtub which is being filled from empty with the drain left open: (1) equilibrium between inflow and outflow before amount reaches FULL, (2) overflow while inflow is greater than outflow, and (3) equilibrium between inflow and outflow exactly when amount reaches FULL.

In Q2, two types of quantitative information are provided as part of the initial description of the system:

- Quantitative ranges describing what is known about the values of certain landmark values, in this case the landmark IF* of the parameter inflow(t), and the landmark TOP of the parameter level(t).

- Numerically computable envelopes that bound the (unknown and possibly nonlinear) monotonic function constraints, such as outflow = M+(pressure).

Figure 1 shows the only quantitatively consistent behavior out of the three qualitative possibilities, given initial quantitative assertions about TOP, IF*, and envelopes constraining the relations between amount and level, level and pressure, and pressure and outflow. The two equilibrium behaviors were found to be inconsistent with the quantitative information given.

2.1 Types of Quantitative Propagation

Quantitative propagation occurs in different ways for the various qualitative constraints being propagated over.
As a notational convention, if the qualitative behavior has parameter(t) = L for a landmark L at a particular time-point t, we may use either parameter(t) = [lo, hi] or L = [lo, hi], to indicate that the quantitative range [lo, hi] must contain the (unknown) numerical value of L.

In Q2, each type of qualitative constraint is associated with a procedure for propagating partial quantitative information among its arguments. These procedures define a quantitative semantics for the constraint that must of course be consistent with the semantics already defined by the qualitative simulator. The four types of methods for propagating incomplete quantitative information are:

1. Propagation across arithmetic constraints: ADD, MULT, MINUS.

This is exemplified by an ADD constraint in a model of a bathtub, as shown in table 1. Note that divide and (binary) subtract constraints are trivially implemented with ADD and MULT.

Table 1:
(a) An ADD constraint: netflow = inflow - outflow
(b) Landmark values at time T1 (see Fig. 1): netflow(T1) = inflow(T1) - outflow(T1), i.e. NF-1 = IF* - OF-1
(c) In terms of known ranges: [0.051, 0.146] = [1, 1.01] - [0, 0.999]
(d) The ADD can narrow the range for outflow(T1): [0.051, 0.146] = [1, 1.01] - [0.864, 0.948]

2. Propagation across monotonic function constraints: M+, M-.

This is typified in the bathtub model by an "M+" monotonic constraint between amount of water and level in the tub, indicating that a change in either parameter implies a change in the other in the same direction. A qualitative monotonic function is a generalization of a large space of possible quantitative functions; indeed, all monotonic quantitative functions for which the monotonicity has the same sign as that of the corresponding qualitative function. There is a middle ground between purely qualitative and fully specified quantitative monotonic functions. We implement this middle ground by using upper and lower ENVELOPES (figure 2).
ENVELOPES are quantitative functions which bound the space of quantitative functions that could apply to a monotonic constraint to a greater extent than the sign of the monotonicity. For the bathtub example, a particular tub may be consistent with a bathtub model that is partly quantified by envelopes constraining the relation between amount and level if its function relating amount and level falls within those envelopes. Otherwise it is definitely not consistent (maybe it is a very funny-shaped tub, or perhaps not a tub at all but a sink or swimming pool). Propagation through a partially quantified M+ constraint occurs as described in figure 2.

3. Propagation across quantity spaces.

Consider the netflow of water into the tub. At time T1, the value of netflow is whatever quantitative value is associated with the landmark named "NF-1" (fig. 1). This value must be less than the value of NF-0, which may be as high as 1.01, but is greater than 0. Thus from the ordinal position of NF-1 and the quantitative information associated with its neighbors, we infer that netflow(T1) = [0, 1.01]. With the help of other sources of constraint, propagation eventually narrows it all the way to [0.051, 0.146].

4. Propagation across time-points: D/DT.

Finally, there is information flow from one state in the behavior of a model to another. This occurs via D/DT constraints, e.g., D/DT(amount) = netflow. By looking at quantitative information about the values
- - - p TOP=[O.9 0.911 0 I I TO Tl level of water in tub - INF P OF-1=[0.864 0.9481 outflow through drain - INF 0 c - MINF I netflow into tub kl lower envelope r(y) = a range for Y. The old range for X Figure 2: Range propagation across an M + constraint: Y = M +(X). Given r(y), a range for Y, find the new range for X: 1) Project r(y) across the envelopes. 2) Intersect the projection with the old range for x. ction = new range for X. 326 Common Sense Reasoning at adjacent time points of the integral, the derivative and the time, propagation can potentially constrain the ranges associated with each of these. For the bath- tub, the mean value theorem of calculus tells us that 3T* E (TO,Tl) such that netflow = amount - amount (Tl -TO) From figure 1 we see that netflow = [0.051,1.01]. We also see that amount started out at 0 and climbed to anywhere from 0.882 to 0.929. TO is known to have the value 0. Thus, T1 = 0 + [“*882’ o’g2g1 = [ 873 18 2161 [0.051’ 1.011 * ’ - These four kinds of constraint apply the quantitative information provided by the user to narrow the ranges as- sociated with each landmark of each parameter until ei- ther no further narrowing is possible, or an inconsistency is flagged. Inconsistency, of course, is relative to a behav- ior and means that the behavior is not compatible with the available quantitative information. If all behaviors of a model are inconsistent then an additional inference is possible: The model itself is incompatible with the quan- titative information, whether that information is known a priori or from observations. 2.2 The propagation algorithm The range propagator (cf. [Davis, 19871) is straightfor- ward, making no distinction between the various kinds of constraint for control purposes. It starts by setting each landmark of each model parameter to an initial range of [O+,oo], [-co, 01, or [O, 01, depending on whether the land- mark is above zero, is the “0” landmark, or is below zero. 
Then any quantitative information provided by the user is used to narrow the appropriate landmarks. For the bathtub, (inflow . IF*), the "IF*" landmark of the inflow of water from the faucet, is initialized to [1.0, 1.01]. In addition, (level . TOP) is initialized to [0.9, 0.91], meaning that we are dealing with bathtubs whose height falls between 0.9 and 0.91.

Narrowed landmarks can potentially enable narrowing of other landmarks. A constraint is attached to a landmark L if it and range r(L) can be used to try to narrow other landmarks. All constraints attached to the narrowed landmarks are added to an agenda. The propagation algorithm now takes the first constraint off the agenda and uses it to try to narrow the landmarks associated with it. If it fails it goes back to the agenda for the next constraint. If it succeeds it adds to the agenda all constraints attached to any landmarks it succeeded in narrowing, and returns to the agenda for a new constraint to process. The current implementation is depth first, and termination occurs when the agenda is empty. Our models run in on the order of 1 minute.

2.3 Reasoning with Models, Behaviors, and Values

The Q2 reasoner can make distinctions at three levels of granularity. The coarsest level deals with different systems and their MODELS. For the bathtub system, different models might include bathtubs with completely blocked drains, ones with rusted out bottoms, ones with partially blocked drains, and upside-down bathtubs.

Qualitative-quantitative reasoning can also discriminate among different qualitative BEHAVIORS of a model. Behaviors of a model are consistent or not with the incomplete quantitative knowledge a user has provided. We have previously shown how Q2 can infer, for a bathtub with partially blocked drain that satisfies the specifications of certain quantitative ranges, that the equilibrium behaviors are inconsistent and only the overflow behavior could occur.
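The agenda-driven control loop of section 2.2 can be sketched as follows. The constraint representation here is invented; Q2's actual data structures are not specified in the paper.

```python
# Sketch of the agenda-based propagator: constraints attached to
# narrowed landmarks are queued, and processing continues until the
# agenda is empty.  Constraints are callables returning the landmarks
# they managed to narrow (an invented interface).

def propagate(constraints, attached):
    """constraints: all constraints; attached: landmark -> constraints."""
    agenda = list(constraints)         # seed with every constraint
    while agenda:
        constraint = agenda.pop()      # depth first, as in Q2
        for landmark in constraint():  # try to narrow; yields successes
            for c in attached.get(landmark, ()):
                if c is not constraint and c not in agenda:
                    agenda.append(c)

# Tiny example: two landmarks linked by simple narrowing rules.
ranges = {"a": [0.0, 10.0], "b": [0.0, 3.0]}

def a_le_b():                          # enforce a <= b by lowering a's hi
    if ranges["a"][1] > ranges["b"][1]:
        ranges["a"][1] = ranges["b"][1]
        return ["a"]
    return []

def b_ge_1():                          # enforce b >= 1 by raising b's lo
    if ranges["b"][0] < 1.0:
        ranges["b"][0] = 1.0
        return ["b"]
    return []

propagate([a_le_b, b_ge_1], {"a": [a_le_b], "b": [a_le_b, b_ge_1]})
print(ranges)
```

Termination is guaranteed here because each rule only shrinks ranges; in Q2 the same quiescence condition (an empty agenda) ends propagation.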
Figure 3 illustrates the reasoning about models and behaviors that is one of the capabilities of Q2.

The finest level of granularity deals with VALUES of parameters. For the bathtub, quantitative data in the form of envelopes, and ranges that constrain uncertainty about the tub height and the faucet flow, imply quantitative predictions (figure 1) about other qualitative aspects of the bathtub and its behavior.

Figure 3: Quantitative reasoning eliminates behaviors, and sometimes an entire model; an X designates an inconsistent behavior. (The figure shows Model1, Model2 with behaviors such as Behavior21 and Behavior22, and Model3.)

3 A More Complex Example

Consider the more complex example of a one-tank equilibrium system with a proportional controller attempting to keep amount near a desired point by modifying inflow. One example of such a system is a heating system where the rate of heat inflow from the heater is proportional to the difference between the actual and desired temperature of the heated vessel. (The usual household thermostat does on-off control, not proportional control.) Linear proportionality is only a special case of such a controller: In general, restoring force may be a monotonic function of the measured error. Proportionally controlled systems are very common in the world, including physiological mechanisms, chemical systems, automobile cruise control, etc.

We created four distinct models for a hypothetical proportionally controlled heating system:

1. The properly working system;
2. Continuous maximum heating, regardless of temperature;
3. No heat at all, regardless of temperature;
4. Thermostat with faulty calibration, which acts as though the temperature is higher or lower than it really is and therefore causes an equilibrium temperature different from the thermostat setting.

Most models have more than one possible qualitative behavior.
For example, a properly working temperature controlling system may respond successfully to a demand for increased heating, or it may "max out" by delivering heat steadily at its maximum capacity despite increasing demand. When given a particular set of a priori and observed quantitative knowledge, Q2 generated eighteen qualitatively possible behaviors from the four models, and used the quantitative knowledge to eliminate all but two of them. The remaining two make identical predictions, since the fault model accounting for the uncalibrated thermostat includes the behaviors of the properly working thermostat as special cases.

4 Related Work

There has been considerable other work relevant to the integration of quantitative with qualitative knowledge. The measurement interpretation methods developed by Forbus [1983, 1986] are closest to our work in terms of the problem solved, though quite different in approach. We, like Forbus, are attempting to interpret quantitative measurements by matching the observed measurements against the predictions of a model. Where there are several candidate models, or several behaviors of a given model, failure to match refines the set of remaining viable candidates.

Our method differs from Forbus' approach in the handling of quantitative information. In the more complete formulation [Forbus, 1986], a continuous stream of quantitative data is mapped into a stream of qualitative descriptions; in his example, directions of change, or Ds values {+1, 0, -1}. In an example involving heating a container of mixed alcohol and water, the stream of temperature measurements is described qualitatively as [+1, 0, +1, 0, +1]. The total envisionment of a given situation can be regarded as a finite-state transition graph, which is used to "parse" the stream of Ds values from an acceptable initial state to an acceptable final state.
The path successfully taken through the envisionment describes the sequence of process structures the system goes through. Failure to parse presumably refutes the model. Notice that a significant amount of quantitative information is lost when fine-grained quantitative measurements are mapped to coarser-grained qualitative representations (in this case the Ds values), and the comparison with the model takes place with the measurements expressed in the same coarse qualitative terms as those used in the model.

Karp and Friedland [1987] also share the goal of integrating qualitative and quantitative constraints in reasoning about mechanisms. They create a frame for each parameter at each instant, capable of representing a rich variety of algebraic equations and inequalities involving that value, plus frames for interactions between constraints. While the expressive and inferential power of their approach is potentially very large, so is the potential for combinatorial explosion, since there is no clear structure on the types of constraints and the circumstances under which different types of constraints are applied. In Q2, ordinal relations between values and landmarks are used by QSIM to propose qualitatively possible behaviors, and quantitative ranges are then used to refine or refute each behavior. The use of distinct types of knowledge for distinct purposes supports conceptual clarity and implementational efficiency.

Simmons' [1986] quantity lattice, and Sacks' [1987] hierarchical inequality reasoner are more powerful methods of arithmetic reasoning than the package currently in Q2. We plan extensions along these lines.

As discussed above, our method depends on starting with a qualitative description of behavior in terms of landmark values which function as "names" for real numbers, and about which we can accumulate and refine quantitative descriptions.
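The Forbus-style mapping from a fine-grained measurement stream to a coarse Ds stream can be sketched as taking the sign of each successive difference and collapsing runs of equal signs. The threshold and data below are invented.

```python
# Sketch (not Forbus' code) of mapping measurements to Ds values
# {+1, 0, -1}: sign of successive differences, runs collapsed.

def sign(d, eps=1e-9):
    return 0 if abs(d) <= eps else (1 if d > 0 else -1)

def ds_stream(measurements):
    signs = [sign(b - a) for a, b in zip(measurements, measurements[1:])]
    collapsed = []
    for s in signs:
        if not collapsed or collapsed[-1] != s:
            collapsed.append(s)
    return collapsed

# A rising temperature trace with two plateaus, like the
# alcohol-and-water example.
temps = [20.0, 21.5, 23.0, 23.0, 23.0, 24.0, 24.0, 25.0]
print(ds_stream(temps))  # [1, 0, 1, 0, 1]
```

The information loss Q2 avoids is visible here: the exact plateau temperatures disappear once the stream is reduced to Ds values.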
Thus, our approach does not apply in any natural way to qualitative category representations such as {high, medium, low}, since these symbols refer to sets rather than values, and the boundaries between the sets are not distinctive values. Furthermore, qualitative category representations do not support a rigorous form of qualitative simulation, since limit analysis is not meaningful in that context. It is also relatively difficult to apply our approach to the de Kleer and Brown [1985] {+, 0, -} representation, since the quantity space contains no non-zero landmarks, and zero already has a precise value.

5 Conclusions and Directions for Future Work

For conceptual clarity during development, the current implementation of Q2 applies quantitative knowledge to individual, complete qualitative behaviors from the output of QSIM. We plan to interleave quantitative and qualitative processing, so that quantitative inferences can be applied to partially complete qualitative behaviors. Where a quantitative inconsistency can be identified at an early stage, an entire subtree of qualitative behaviors may be eliminated, greatly increasing the efficiency of the overall simulation.

Our current implementation represents incomplete quantitative knowledge as numerically bounded ranges. We believe that our approach will also be applicable to propagation of quantities described by probability distributions (i.e. mean and variance) [J. Pearl and P. Cheeseman, personal communication]. In this case, the result will not be to filter out certain behaviors as inconsistent, but to define a probability distribution across the set of possible behaviors. Reasoning with mean-variance descriptions of quantities is of obvious practical importance, given the probabilistic nature of most real-world measurements.
As we have discussed, after assimilating a set of quantitative observations, the refined quantitative descriptions of surviving behaviors are precisely what is needed for differential diagnosis, for example by selecting a quantity whose ranges in two different behaviors are non-overlapping, and testing for its value. The work on "diagnosis from first principles" by Davis [1984], Genesereth [1984] and Reiter [1987] provides methods for optimizing the selection of new tests.

It should also be possible to perform a sensitivity analysis [Raiffa, 1970] on the results of the propagation, to assess the sensitivity of Q2's conclusions to variations in the quantitative observations. This will provide a first step towards capturing second-order uncertainty in the descriptions of incomplete quantitative knowledge.

6 References

[Alefeld and Herzberger, 1983] Gotz Alefeld and Jurgen Herzberger. Introduction to Interval Computations. Academic Press, 1983, p.5.

[Davis, 1984] Randall Davis. Diagnostic reasoning based on structure and behavior. Artificial Intelligence, 1984, 24: pages 347-410.

[Davis, 1987] Ernest Davis. Constraint Propagation with Interval Labels. Artificial Intelligence, 1987, 32: pages 281-331.

[de Kleer and Brown, 1985] Johan de Kleer and John Seely Brown. A Qualitative Physics Based on Confluences. In Qualitative Reasoning about Physical Systems, ed. by Daniel G. Bobrow. MIT Press, 1985.

[Forbus, 1983] Kenneth D. Forbus. Measurement Interpretation in Qualitative Process Theory. In Proceedings IJCAI-83, 1983, pages 315-320.

[Forbus, 1986] Kenneth D. Forbus. Interpreting Measurements of Physical Systems. In Proceedings AAAI-86, 1986, pages 113-117.

[Genesereth, 1984] Michael R. Genesereth. The use of design descriptions in automated diagnosis. In Artificial Intelligence, 1984, 24: pages 411-436.

[Karp and Friedland, 1987] Peter D. Karp and Peter Friedland. Coordinating the Use of Qualitative and Quantitative Knowledge in Declarative Device Modeling.
Knowledge Systems Lab. Report KSL 87-09, Stanford Computer Science Dept., 1987.

[Kuipers, 1986] Benjamin J. Kuipers. Qualitative Simulation. In Artificial Intelligence, 1986, 29: pages 289-338.

[Raiffa, 1970] H. Raiffa. Decision Analysis: Introductory Lectures on Choices under Uncertainty. Addison-Wesley, Reading MA, 1970.

[Reiter, 1987] Raymond Reiter. A Theory of Diagnosis from First Principles. In Artificial Intelligence, 1987, 32: pages 57-95.

[Sacks, 1987] Elisha Sacks. Hierarchical reasoning about inequalities. In Proc. AAAI-87, pages 649-654.

[Simmons, 1986] Reid G. Simmons. "Commonsense" Arithmetic Reasoning. In Proc. AAAI-86, pages 118-124.

[Widman, 1986] Lawrence E. Widman. Representation Method for Dynamic Causal Knowledge using Semi-Quantitative Simulation. In Proceedings of the Fifth World Conference on Medical Informatics (MEDINFO-86), pages 180-184.
Function Sharing in Mechanical Design*

Karl T. Ulrich and Warren P. Seering
Massachusetts Institute of Technology
Artificial Intelligence Laboratory
Cambridge, Massachusetts, USA 02139
ulrich@hephaestus.ai.mit.edu, wps@hephaestus.ai.mit.edu

Abstract

Function sharing in mechanical design is the simultaneous implementation of several functions with a single structural element. If automobiles were designed without function sharing they would be relatively large, expensive and unreliable. But because elements like the sheet-metal body implement many functions (electrical ground, structural support, aerodynamic fairing, weather protection, and aesthetics among others) automobiles perform better and cost less than a non-function-sharing alternative. This paper describes how function sharing can be viewed as a computational design procedure that produces efficient designs from modular designs. The function sharing procedure consists of three steps: 1) a structural element is deleted from the design; 2) physical features that can provide alternative implementations of the function(s) of the deleted element are found; 3) modifications are made to the design to accentuate the desired properties of the features found in step 2. We have chosen mechanical devices that can be described functionally as a network of lumped-parameter idealized elements as a domain for exploring function sharing. Such devices include pressure gauges, accelerometers, and hydraulic cylinders.

1 Introduction

Function sharing in mechanical design is the simultaneous implementation of several functions by a single structural element. For example consider the difference between the devices shown in figure 1. The devices are functionally similar, yet the upper device is much more efficient because each structural element of the device implements several functions.
This paper describes how function sharing can be viewed as a computational design procedure that produces efficient designs from modular designs. The objective of this work is to develop ideas that will lead to enhanced design teaching, better computational tools for design, and increased understanding of general issues in machine intelligence. We present our ideas with five sections: the concept of function sharing, domain description, the function sharing procedure, an example, and discussion.

*This paper describes work performed at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's research is provided in part by the Defense Advanced Research Projects Agency of the United States Department of Defense under the Office of Naval Research contract N00014-85-K-0124, and the National Science Foundation under grant DMC 8618776.

Figure 1: Example of function sharing

2 The Concept of Function Sharing

Designers must somehow represent a design in terms of its constituent functional elements. We call this representation a schematic description. Designers must also represent the structure or physical properties of the design. We call this representation a physical description. Physical descriptions correspond in some way to schematic descriptions of the same design. Whatever language is employed for the representation of schematic and physical descriptions, we define function sharing as a correspondence between several elements in a device schematic description and a single element in a device physical description. In the case of the devices shown in figure 1, some of the functional elements in a possible schematic description language might be cutter, actuator, or finger interface. The structural elements in a possible physical description language might be the geometrical descriptions of each separate part. Whenever a single structural element maps to
Whenever a single structural element maps to 342 Common Sense Reasoning From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved. more than one functional function sharing. element, then the exhibits 2.1 Function sharing is important If automobiles were designed without function sharing they would be relatively large, expensive and unreliable. But because elements like the sheet-metal body perform many functions (electrical ground, structural support, aerody- namic faring, weather protection, and aesthetics among others) automobiles can be manufactured relatively in- expensively and can perform relatively well. As a gen- era1 rule, function sharing is a good design strategy for high-performance or mass-produced devices. On the other hand, function sharing is generally a poor design strategy for research devices and prototypes where debugging, ad- justment and diagnosis are important. Our work on function sharing is justified by at least two factors. First, modular, non-function-sharing devices are easier to design, understand and modify than func- tion sharing devices. For this reason a useful approach to design would be to generate initial device descriptions in a modular, decomposed way but then subsequently to process the designs to make them inexpensive and reliable through the use of function sharing. Second, we are in- terested in innovative design, and function sharing is part of the perception of novelty, simplicity, or cleverness with respect to a mechanical device. w bid resistance fluid capacitance I 1 I I Figure 2: Schematic description of design 2.2 The key idea The key idea that allows function sharing to be performed computationally is that most of the properties of a struc- tural element in a design description are secondary and in- cidental to the properties that allow that element to imple- ment its intended function. 
By recognizing and exploiting these secondary properties, neighboring elements can be eliminated from the design. For example, a modular non-function-sharing design of an automobile would include a ground wire running from the tail light to the battery. By recognizing that there is already an element (the automobile body) connecting the tail light to the battery, and that this element has the secondary property that it conducts electricity, the ground wire can be eliminated. Performing this reasoning requires a physical representation of the design, and an ability to recognize and exploit secondary properties of elements in this physical description.

3 Domain Description

As a domain for exploring function sharing, we have chosen mechanical devices whose schematic description can be expressed as a network of lumped-parameter idealized elements. The computer program that implements the function sharing procedure is further limited to devices that can be described with fluid-mechanical elements and mechanical-translational elements. Such devices include pressure gauges, accelerometers, force transducers, and pneumatic cylinders. We will call this domain dynamic systems. We have chosen the dynamic systems domain for four reasons. First, the schematic language for such devices is well-understood (generalized analog circuit language). Second, many devices in this domain can be described physically with two-dimensional geometrical representations. Third, the domain is of some engineering importance and interest. Fourth, the geometry of the devices is not generally constrained by packaging issues.

3.1 Describing devices schematically

In this paper, we focus on devices incorporating fluid-mechanical and translational-mechanical functional elements. These elements are fluid resistances, fluid capacitances, fluid inertances, translational inertias, translational compliances, translational dampers, and fluid-translational transformers. Figure 2 is a schematic description of a device to measure the rate of pressure change of the pressure source. The description consists of a network of five idealized functional elements. The pressure source specifies a pressure at the node labeled P(t). The fluid-translational transformer produces a force on the translational compliance proportional to the pressure drop across its other two terminals. The fluid resistance has a linear relationship between the pressure and flowrate across its terminals, and the fluid capacitance relates flow to change in pressure. The gross behavior of this network is that the displacement of the translational compliance will be proportional to the rate of change in the pressure source. This device can be thought of as an aircraft rate-of-climb indicator, if the pressure source is thought of as the atmospheric pressure surrounding the aircraft.

3.2 Describing devices physically

For a design to be realized, its schematic description in terms of idealized functional elements must somehow be implemented physically with structural elements. For our work on function sharing, we have chosen a two and one-half dimensional geometry to describe devices physically. In this representation, devices consist of a collection of structural elements (these can be thought of as design components like piston-cylinders or springs), which are in turn built from orthogonally connected rectangular-prismatic sections of material (these rectangular-prismatic sections are the primitive physical building blocks of the system). Figure 3 shows the physical description of a piston-cylinder structural element. Figure 4 shows the top view of the physical description of a rate-of-pressure indicator containing the piston-cylinder structural element. There is a structural element for each functional element in the schematic description in figure 2. Note that there are also two additional structural elements (connectors) that correspond to the connection nodes in the schematic description. In this two and one-half dimensional geometry, Newtonian physics applies as if the device were fully three dimensional if one imagines the device to be sandwiched between two infinitely stiff and strong, frictionless plates. This particular physical representation was chosen to simplify the computational geometry problems while still maintaining the applicability of three-dimensional physics. Note that in our rate-of-climb indicator example, the pressure source and the translational compliance are considered part of the input and output environment of the device and therefore are not the target of simplification.

Figure 3: An example structural element (front and right side views of a piston-cylinder) represented as a collection of orthogonally configured rectangular-prismatic sections

Figure 4: Physical description of a design approximating the behavior of the schematic description in figure 2

3.3 Problem definition

The input to the function sharing problem is both a physical and schematic description of a device with the correspondence between structural elements and functional elements specified. The output of the function sharing procedure is a physical description containing fewer structural elements. The objective of the procedure is to simplify the device.

4 Procedure

The function sharing procedure consists of three steps: 1) a structural element is deleted from the design; 2) physical features that can potentially implement the function of the deleted element are found; 3) modifications are made to the design to accentuate the desired properties of the features found in step 2. We describe each of these steps for the dynamic systems domain.
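The three-step cycle can be sketched in a few lines of Python. This is an illustration only: the data structures, the name `function_sharing_step`, and the toy knowledge-base entry below are our inventions, not the authors' program.

```python
# Illustrative sketch of one delete/recognize/modify cycle.
# All names and data structures here are invented for exposition.

def function_sharing_step(design, element_name, knowledge_base):
    """Step 1: delete the named element; steps 2-3: for each applicable
    recognizer/modifier pair, return a candidate modified design."""
    deleted = next(e for e in design["elements"] if e["name"] == element_name)
    reduced = [e for e in design["elements"] if e["name"] != element_name]
    candidates = []
    for recognize, modify in knowledge_base[deleted["function"]]:
        feature = recognize(reduced)                     # step 2: find a feature
        if feature is not None:
            candidates.append(modify(reduced, feature))  # step 3: accentuate it
    return candidates

# Toy knowledge-base entry for fluid resistance: a pair of adjacent detached
# edges becomes resistive if a clearance is established between them.
def find_adjacent_edges(elements):
    return next((e for e in elements if e.get("adjacent_edges")), None)

def establish_clearance(elements, feature):
    modified = dict(feature, clearance=0.1)  # hypothetical modification value
    return {"elements": [modified if e is feature else e for e in elements]}

kb = {"fluid-resistance": [(find_adjacent_edges, establish_clearance)]}

design = {"elements": [
    {"name": "R1", "function": "fluid-resistance"},
    {"name": "piston", "function": "transformer", "adjacent_edges": True},
]}

candidates = function_sharing_step(design, "R1", kb)
```

Each returned candidate is the reduced design with one recognized feature modified; with user-directed control, one candidate is chosen as the starting point for the next iteration.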
4.1 Deleting a structural element

The physical description of the design is represented as a collection of structural elements. The first step in the function sharing procedure is to remove one of these structural elements. Removal of a structural element from a design may cause some side effects. For example, removing a fluid element from a design may cause leakage. Removing a mechanical-translational element may cause parts of the design to become disconnected. Because of these side effects, the design must be repaired after the deletion step, requiring replacement of T-type connectors with straight or L-type connectors, replacement of L-type or straight connectors with plugs, and/or reconnection of disconnected regions of a design.

The deleted structural element corresponds to a functional element in the schematic description. This functional element can be thought of as being approximated in the physical description with respect to some geometrical reference point(s). For example, a fluid resistance is defined with respect to two points in the fluid flow. A translational inertia (mass) is defined with respect to a single point. So, for every deleted element there will be one or two corresponding reference points in the physical design description.

4.2 Recognizing alternative features

The second function sharing step is to find alternative features in the physical description, with respect to the appropriate reference point(s), that can potentially implement the function of the deleted structural element. We have approached this task as a computational geometry problem: that of identifying one of a set of known physical features that can approximate the relevant function. For example, fluid resistance can result from a narrow passage between two edges, a long narrow channel through a solid region, or an orifice in a plate.
For each of these ways of implementing resistance there is a physical feature that could be potentially modified to achieve the resistive function. For example, a path between two reference points that passes between two adjacent but detached edges could be resistive if a clearance were established between the edges. A path between two reference points that is obstructed by a solid wall could be resistive if a hole were punched in the wall. These relations constitute the function sharing knowledge base. The relations for fluid resistance are shown in figure 5.

4.3 Modifying features

Once the potentially useful features are found, the final step is to execute the modification operators associated with each feature. So, in the case of the fluid resistance, if the feature found by the recognition procedure were a path obstructed by a solid wall, the modification would be to punch a hole in the wall.

4.4 Control

Our implementation of the function sharing procedure leaves the control to the user. Specifically, the user selects an element to eliminate and chooses one of several modified designs as the starting point for another function sharing iteration. The control could also be automatic, although because there will in general be several design modifications for each deleted element, the worst-case number of designs generated by this procedure is exponential in the number of structural elements to be eliminated.

Figure 5: Organization of function sharing recognition and modification procedures (obstructing-feature to modification relations for fluid resistance)

5 Example

This section illustrates the function sharing procedure for the rate-of-climb indicator with top views of the physical device description. In this example, the fluid resistance and fluid capacitance are both eliminated through two three-step function sharing cycles. Figure 6 shows a single branch of the possible outcomes of the function sharing procedure.
First, the fluid resistance is deleted, and the leaks repaired. The repair is accomplished in this case by simply replacing the T-type connectors with L-type connectors. Second, the feature recognizer finds a path between reference points A and B that passes between adjacent edges. Third, a clearance is established between the edges. This clearance makes the piston leaky, thereby eliminating the separate fluid resistance. To eliminate the fluid capacitance, first the fluid capacitance is deleted and the leak repaired. Second, alternative capacitive features are found with respect to point A. In this case the relevant feature is a cavity adjacent to A. Finally, the cavity is expanded by extending the piston length. By executing the function sharing procedure, a very modular, inefficient design is transformed into a simple and compact design.

Figure 6: Simplifying the rate-of-climb indicator.

6 Discussion

This work on function sharing as a design procedure fits into a larger project aimed at the general problem of generating design concepts computationally. In this section, we discuss several issues relating to function sharing and to the larger project.

6.1 Interpreting physical descriptions

The physical descriptions produced by the function sharing procedure can be thought of as parameterizations of a design description. In the case of a piston-cylinder, there are many possible parameters in the physical description that may be relevant to the design. The function sharing procedure is a way of identifying those parameters that should be considered in the design. For example, a piston-cylinder is normally parameterized by the piston area and the stroke length.
There are, however, many other parameters that relate to the element, among them the thermal conductivity of the cylinder wall, the size of the input and output ports, or the mass of the piston. After the function sharing procedure operated on the rate-of-climb indicator example, two of these extra parameters were identified as important: the clearance between the piston and cylinder, and the volume of one end of the cylinder. The designer is alerted that the original design can be simplified if these parameters are considered when performing the detailed design and selecting dimensions.

6.2 Big picture

In other work [1], we have developed a technique for generating the schematic description of a device described by a relationship between input and output quantities. That work, paired with the function sharing procedure described in this paper, allows a computational system to generate an efficient physical description directly from a specified input-output relationship. This system would first generate a schematic description from an input-output specification and then, through the function sharing procedure, would generate an efficient physical description from the schematic description.

6.3 Implementation

The function sharing procedure has been implemented as a computer program.
The program can simplify devices whose schematic descriptions consist of fluid resistances, fluid capacitances, fluid inertances, translational masses, translational resistances, translational compliances, and fluid-translational transformers; and whose physical descriptions are represented with the two and one-half dimensional geometrical language we have developed. The program performs all of the element deletion, feature recognition, and design modifications automatically, with the user specifying which element to eliminate at each step.

6.4 Novelty

One result of the implementation is the discovery that the unbiased application of our physical-feature-based function sharing procedure can yield some surprising designs. Figure 7 shows screen dumps of one outcome of the function sharing procedure applied to the elimination of a fluid capacitance.

Figure 7: Novel design resulting from function sharing

7 Related Work

Several researchers have contributed to the work presented in this paper. Suh et al [2] propose a set of design axioms from which globally optimal designs should follow. These axioms suggest that functional elements be combined only when their properties can be controlled independently. Sussman [3] discusses the mapping between function and structure in mechanical devices, and points out the phenomenon of function sharing in a watch mechanism. Ishida et al [4] have developed procedures for detecting certain unanticipated functions (like leakage) in mechanical devices. This work was aimed at detecting harmful unanticipated functions, a task similar to detecting useful unanticipated functions. Hirschtick [5] developed a technique for recognizing features in a cross-section of an aluminum extrusion in order to suggest modifications to make the part more manufacturable. This work involved a feature recognition problem related to the one faced in function sharing.

8 References
1. Ulrich, K.T., Computation and Pre-Parametric Design, Massachusetts Institute of Technology Artificial Intelligence Laboratory Technical Report 1043, August 1988.

2. Suh, N.P., A.C. Bell and D.C. Gossard, "On an Axiomatic Approach to Manufacturing and Manufacturing Systems." Journal of Engineering for Industry, May 1978, Vol. 100, p. 127.

3. Sussman, G.J. and G.L. Steele Jr., "Constraints: A Language for Expressing Almost-Hierarchical Descriptions." Artificial Intelligence 14, 1980.

4. Ishida, T., H. Minowa, and N. Nakajima, "Detection of Unanticipated Functions of Machines," Proceedings of the International Symposium on Design and Synthesis, Tokyo 1984.

5. Hirschtick, J., Geometric Feature Extraction Using Production Rules, S.M. Thesis, Massachusetts Institute of Technology Department of Mechanical Engineering, 1986.
From Kinematics to Shape: An Approach to Innovative Design

Leo Joskowicz
Department of Computer Science
Courant Institute of Mathematical Sciences
New York University
251 Mercer Street, New York, NY 10012

Abstract

We address the problem of designing the shape of solid objects to satisfy a given set of functional specifications. In particular, we show how to design elementary components of mechanical devices (kinematic pairs) from a description of their desired behavior and a set of constraints. This is done using a backtracking algorithm that modifies (or creates) object shapes by adding and deleting line and arc segments to the objects' contours. These modifications are guided by the configuration space description of the desired behavior. The algorithm is extended to handle both qualitative and causal descriptions of desired behaviors. This work is based on the theory of shape and kinematics developed in [Joskowicz, 1988].

1 Introduction

The automatic design of mechanisms presents a number of interesting issues, not encountered in other domains [Dixon, 1986]. One of the key issues in mechanism design is the ability to reason explicitly about the relationship between the geometry of objects and their function in the mechanism. The motions of each object and the relationships between these motions (i.e., the mechanism's kinematic behavior) are directly determined by the shapes of the objects and the nature of the contacts between them. Unlike other domains, the basic building blocks of a mechanism are pairs of objects, rather than individual objects [Reuleaux, 1876]. Examples of elementary components (called kinematic pairs) are a screw and bolt, a pair of meshed gears, prismatic joints, etc. Complex mechanisms are designed by assembling kinematic pairs to achieve the desired behavior.

It is a common observation that in order to comply with a set of design requirements, new or modified shapes of objects in kinematic pairs need to be considered.
In most existing Computer-Aided Design (CAD) systems, the decision on the creation or modification of an object's shape is the task of the human designer; the CAD system is responsible for handling and verifying the consistency of the design decision. Other systems are capable of modifying the object's shape by varying the values of predefined parameters, such as the diameter, thickness, etc. (routine design) [Brown and Chandrasekaran, 1986], [Mittal et al., 1986], [Mitchell et al., 1985]. These systems configure their designs from a library of existing elementary components that have been parameterized to reflect the important aspects of the design problem. When the design specifications require the consideration of an additional parameter, or the introduction (or modification) of a new elementary component, the design process fails. In order to modify or introduce a new component, the system must be capable of reasoning about the structure and the function of the component (innovative design). A first approach to this problem is presented in [Murthy and Addanki, 1987] for the domain of structural beam design.

This paper presents a new method for designing shapes of objects, capable of handling both incomplete and qualitative functional specifications of the desired behavior. Our method is an extension of previous work on mechanism analysis showing that configuration spaces are an appropriate intermediate representation for relating kinematic behavior and object geometry [Faltings, 1986; 1987], [Forbus et al, 1987], [Joskowicz, 1987a; 1987b; 1988].

*This work was partially supported by an NSF grant under contract DCR-8603758 and by DARPA under contract N00014-85-K-0163 from the Office of Naval Research.

Sanjaya Addanki
IBM T. J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598

2 Representation of the Design Problem

Consider the following design scenario: we are given a rotating disc A and a translating rectangle B (Figure 1(a)).
Our design goal is to modify the shapes of the objects so that for two specific orientations of A, 0 and π/2, B prevents the rotation of A. For all other orientations, the motions of A and B must remain independent. A possible solution is to modify the shape of A by introducing two slots that allow B to create new contacts that prevent the rotation of A (Figure 1(b)).

In addition to kinematic requirements, design specifications contain other constraints that directly influence the final shape of the objects; they stem from practical and engineering considerations of the desired device. Examples of such constraints are minimum object thickness, simplicity, and manufacturability. The most important of these constraints is the physical feasibility constraint. For two-dimensional objects, it requires objects to be topologically equivalent to a disk with finitely many holes. It also rules out point objects. These constraints must be taken into account during the design process.

In the following, we assume that objects are two-dimensional, that their contours are formed by line segments and circular arcs, and that each object has at most one degree of freedom (either rotation or translation) along an axis fixed in the plane. We distinguish between five design spaces, corresponding to the degrees of freedom of each object in the pair: fixed-rotation, fixed-translation,

Figure 1: A Design Example ((a) initial shapes, (b) modified shapes)

Figure 2: Corresponding Configuration Spaces ((a) initial space CO(A, B), (b) space after modification R(A, B))
Both descriptions are functional since they specify motion relationships between objects without referring to their actual geometry. Functional descriptions come from the engineer’s analysis of the requirements of the desired device. A possible motions description specifies all the possi- ble motions that each object (represented by a reference point) can have, together with the relationships between these motions. Every degree of freedom is associated with a motion parameter. The relationships between motions are specified by a function relating motion parameters. Func- tions can be real-valued or qualitative, indicating whether the motion parameters’ ratio is increasing, decreasing or constant. Each motion parameter is bounded by intervals that define its legal range. Since we assumed that objects are two dimensional and move on fixed axes, an object A can only have one of the following three types of motions: o A is fixed at point p: f ized(A, p) o Possible rotation around axis 0: p,rotution(A, 0,8), 19 E [Bmin, Q,,,] g Possible translation along axis 0: p,translution(A, 0, X), X E [Xmin, X,,,] Kinematic behavior can be described as the union of sev- eral possible motion regions. For example, all the reachable behaviors of the pair in Figure l(b) are described as the union of three regions: &: p-rotation(A, 01, S), p&wnsZation(B, 02, X), for 8 E [O, 2r]rnod2n and X E [XO, 00) RI: fized(A, 0), p-trunslution(B, 02, X), for 8 = 0 and X E [X1,X0) &: fixed(A, t!9), ph=unslution(B, 02, X), for 8 = n/2 and X E [Xi, X0) In a previous paper, we showed that there is a direct, one- to-one correspondence between possible motion descrip- tions and configuration spaces’ [Joskowicz, 1987a]. Since each object has at most one degree of freedom, a two- dimensional configuration space fully describes the kine- matic behavior of a pair of objects. Figure 2 shows the configuration space of the pair (A, B) before and after the modification. 
Note the direct correspondence between the above description and the regions of free object place- ments, indicated by hatched areas. An alternative description of kinematic behavior is a causaZ description. This description states the effects that ‘The configuration sl>ace of a mechanism defines the set of free paacemerats (position and orientations) of objects in a mechanism so that no two objects overlap [Lozano-P&ez, 19831, [Schwartz and Sharir, 19831. 348 Common Sense Reasoning the motion of one object has upon the others (e.g., if A rotates clockwise then B rotates counter-clockwise). The kinematic behavior of a mechanism can then be described by the motions of its objects resulting from a sequence of input motions. Section 6 shows that causal descriptions can also be mapped into equivalent configuration spaces specifying the desired behavior. 4 s esign from Configuration Space We use configuration spaces as the basis of the design pro- cedure. In this section, we assume that the desired pairwise behavior is given as a two-dimensional configuration space with exact boundaries. Initially, we are given two objects, A, B (possibly empty), and a desired configuration space R(A, B), cor- responding to the desired kinematic behavior. The actual kinematic behavior of the objects corresponds to their ac- tual configuration space, CO(A, B). Comparing both the actual and desired behaviors amounts to comparing the two configuration spaces, CO(A, B) and R(A, B). The differences between them indicate where and how these behaviors differ. For example, in the previous design prob- lem, the desired configuration space R(A, B) contains two regions, RI and Ra, not present in CO(A, B) (Figure 2). The behavior of a kinematic pair can be modified by changing the boundaries of CO(A, B) so that they match with the boundaries of R(A, B). Boundaries of the con- figuration space are formed by the contact of two object features (a vertex, an edge, or an arc). 
Therefore, configuration space boundaries can be modified by removing contacts or introducing new ones. This in turn implies that the shape of the objects must be changed by adding and deleting edges and arcs to their contours. In the previous example, there are six configuration space boundaries, c2, c3, c4, c5, c7, c8, that must be added to CO(A, B), and two that must be deleted (c1 and c6) to allow transitions from R0 to R1 and R2². The design problem consists in finding a sequence of feature additions and deletions to the objects' contours so that the actual and the desired configuration space boundaries match and the design constraints are satisfied. Thus, design constraints (both kinematic and non-kinematic) are interpreted and enforced through configuration spaces.

4.1 Configuration Space Boundaries

The form of the configuration space boundaries is determined by the design space and by the features that come in contact to create it. For example, in the rotation-translation space (one object rotates, the other translates), a vertex-edge contact produces a configuration space boundary with the following equation

X_A = r[sin θ_B + cos θ_B tan ψ] − d tan ψ     (1)

where r is the distance from the rotating vertex to the rotation point of B, ψ is the angle of the edge of A with the translation axis, and d is the distance from the rotation axis to the translation axis. Arc-vertex or arc-edge contacts produce (when the center of the arc coincides with the center of rotation) a configuration space boundary that is a line, such as the boundary c0 in Figure 2(a) produced by the contact (a0, b0).

²Regions R1 and R2 are rectangles of width zero, and thus have four sides, two of which are of zero length.
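A direct transcription of equation (1) makes its non-linearity easy to check numerically. This is a sketch; the function name and the sample values are ours.

```python
import math

def vertex_edge_boundary(theta_b, r, psi, d):
    """X_A = r*[sin(theta_B) + cos(theta_B)*tan(psi)] - d*tan(psi), equation (1)."""
    return r * (math.sin(theta_b) + math.cos(theta_b) * math.tan(psi)) - d * math.tan(psi)

# With psi = 0 the boundary reduces to r*sin(theta_B): a sinusoid in theta_B,
# never a straight line for r != 0. This is the numerical counterpart of the
# claim below that a vertex-edge contact cannot produce a line boundary.
```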
The result is a table of elementary contacts that specifies, for each type of contact and design space, the type configuration space boundary produced, together with the set of equations that define it. Given a desired configuration space boundary, the de- sign task consists in finding a pair of object features that, when in contact, will create this boundary. Note that not every contact between features can produce a desired con- figuration space boundary. For example, in the rotation- translation space, a vertex-edge contact can never be used to produce a line boundary in CO(A, B), since for no val- ues of r, d and $, equation (1) represents a line. In this case, only a vertex-arc or an edge-arc contact can produce the desired boundary. This means that arc a0 cannot be substituted by a vertex and still produce the boundary CO when in contact with bo. Thus, the type of the configura- tion space boundary can be used to determine which pair of features can, in principle, produce the boundary. Waving determined the type of contact, we then find the precise coordinates of the features that create the boundary. 4.2 An Algorithm f’r Shape esign The design procedure starts by comparing the actual and the desired configuration spaces. The goal is to delete the configuration space boundaries of CO(A, B) that do not match boundaries of R(A, B) and to add to CO(A, B) the boundaries that appear in R(A, B) but not in CO(A, B). Two boundaries match iff their form is identical and the free object placements lie on the same neighborhood. For each boundary difference, a pair of object features to either delete or add the required boundary is selected. For a deletion, at least one of the features that contributed to the boundary creation must be deleted. For an addition, one or two new features must be created to produce the boundary. The type of features that produce the bound- ary in question is determined from the table of elementary contacts. 
For example, in order to delete c1, either a0 or b0 must be deleted. In order to add c2, it is sufficient to add the edge a2 (but not an arc) since its contact with edge b3 creates c2.

In both cases of addition and deletion, there might be more than one candidate feature pair and thus a (nondeterministic) choice must be made. For example, c3 can be created with the existing edge b0 and a new edge a4, or with a new arc and a new edge a4. In this case, the first choice is preferred since it introduces fewer new features. After every object contour change, the configuration space CO(A, B) is updated. If the new features violate a design constraint (except closed contour), the pair is rejected and a new candidate pair is selected. This guarantees that a bad choice is rejected as soon as a violation occurs, instead of waiting until the whole design process is completed. Note that the final designed objects might not be consistent, i.e., their contour might not be closed. For example, if we remove the edge b0 from B, and take A as shown in Figure 1(b), we still have that

Procedure DESIGN(A, B, R(A, B), CONST)
1. Compute CO(A, B).
2. DELETE := boundaries in CO(A, B) that do not match boundaries in R(A, B).
   ADD := boundaries in R(A, B) that do not match boundaries in CO(A, B).
3. While CO(A, B) ≠ R(A, B) do
   3.1 For a boundary ci in ADD, do
       a. Using the table of elementary contacts, determine the type of features that can produce the type of boundary of ci.
       b. Choose a pair of features (a, b) of the appropriate type that produce ci. Prefer pairs in which one of the features is already existing and is connected to the object boundary.
       c. Check whether the new feature(s) comply with the design constraints, CONST.
   3.2 Update CO(A, B), ADD and DELETE.
   3.3 For ci in DELETE, choose a feature from the pair that created it and delete it from the corresponding object. Do not delete new features.
   3.4 Update CO(A, B), ADD and DELETE.
4.
Complete the object without modifying CO(A, B). If this is not possible, return "FAIL".

Figure 3: Algorithm for Shape Design.

CO(A, B) = R(A, B), although B does not have a closed contour. An attempt to "fill in" the missing contours is made, without altering CO(A, B). If this attempt fails, the algorithm backtracks over its previous choice. The design process is successful when all the differences between CO(A, B) and R(A, B) have been eliminated, and both objects are consistent with the design constraints. Figure 3 shows a backtracking algorithm that is design-space independent.

The analysis of feature contacts reveals that the equations relating a configuration space boundary ci to the features that created it are underconstrained when only ci is given. Thus, there is, in principle, an infinite number of coordinate values for features to create a new configuration space boundary, leading to an infinite number of feature choices. Nevertheless, for most of the interesting design cases, the number of choices is finite. When one of the objects (B) is not allowed to change, the number of possible choices of features of B that can participate in the creation of the new boundary is bounded by B's total number of features. Also, if only one new object feature is introduced at a time (to either A or B, but not both), the number of choices is bounded by the number of features of A and B. The overall complexity of the algorithm is exponential in the number of choices. The algorithm can be improved by incorporating two heuristics for choosing candidate features based on the adjacency properties of local object convexity. Note that if a bad choice of initial object shapes is given as input (for example, taking the block B to be of height comparable to the diameter of A), the algorithm will eventually discard all their features one
Modified Shapes An acceptable configuration space Figure 4: An Example of Qualitative Boundary Match. by one and come up with a solution that has no relation to the initial shapes. Also, if edge-edge contacts are pre- ferred over edge-vertex contacts (less wear), the addition of edges can be considered before the addition of vertices (step 3.1.b). For many special design cases, we developed efficient de- sign algorithms. For example, if we assume that both ob- jects must be convex, the number of choices in each step is reduced to four, and the correct choice can be made in con- stant time. The result is a deterministic algorithm whose time complexity is linear in the size of R(A, B). For the translation-translation space, all the design algorithms, in- cluding those dealing with non-convex objects, have poly- nomial time complexity [Joskowicz and Addanki, 19881. 5 ualitative Shape esign Up to now, we assumed that we either have, or can pro- duce, an exact description of the desired configuration space. In some cases, such a precise description is not available, or not required. Consider the following example: we are given a disk A that can rotate around axis 01 and a rectangle B that can translate along axis 02. Let 8 and X be their rotation and translation parameters, respectively. Suppose we want, for a full rotation of A, B to slide up, then down, and then stay stationary. The precise relationship between X and 0 is not important. We only require X to increase when 0 increases for the intervals X E [O, X0] and 19 E [0, n/2], and X to decrease when 0 increases for X E [X0,0] and 6 E [n/2, n]. For ~9 E (r, 27r), X is to remain constant, X = 350 CommonSenseReasoning 0. This description is not sufficient to produce an exact configuration space since the type of configuration space boundary in the first two regions is unknown. Indeed, any boundary is satisfactory as long as the qualitative relations between the parameters hold continuously in each region. 
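Such a qualitative requirement on X(θ) can be checked mechanically against any candidate boundary curve. The following sketch is ours, not part of the paper's implementation; the function names, the sampling scheme, and the choice X0 = 1 are illustrative assumptions. It samples a curve and tests the sign of its change on each interval of the specification:

```python
import math

# Hypothetical encoding (not from the paper) of the qualitative specification:
# list of (theta_lo, theta_hi, expected_sign), where expected_sign is
# +1 (X increases with theta), -1 (decreases) or 0 (constant).
SPEC = [(0.0, math.pi / 2, +1), (math.pi / 2, math.pi, -1), (math.pi, 2 * math.pi, 0)]

def sign(v, eps=1e-9):
    return 0 if abs(v) < eps else (1 if v > 0 else -1)

def satisfies(spec, curve, samples=200):
    """curve: function theta -> X; check the qualitative relation on each interval."""
    for lo, hi, expected in spec:
        thetas = [lo + (hi - lo) * k / samples for k in range(samples + 1)]
        xs = [curve(t) for t in thetas]
        for x0, x1 in zip(xs, xs[1:]):
            if sign(x1 - x0) != expected:
                return False
    return True

# A curve like Figure 4's solution: rise, fall, then dwell (X0 = 1 assumed).
def cam_like(theta):
    return math.sin(theta) if theta <= math.pi else 0.0

print(satisfies(SPEC, cam_like))   # True
print(satisfies(SPEC, math.cos))   # False: decreasing on [0, pi/2]
```

Any curve passing this check matches the qualitative description, which is exactly the point of the relaxed match: infinitely many boundary shapes are acceptable.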
Figure 4 shows a solution that meets these requirements. The given boundary points are matched exactly, but also new boundary points are introduced. To design shapes from qualitative descriptions, we no longer require an exact boundary match between CO(A, B) and R(A, B). The matching requirement for qualitative boundaries is relaxed as follows: let S be a set of boundary segments of CO(A, B). S matches a qualitative boundary defined by two given points P1 and P2 of R(A, B) iff:

1. The boundary segments of S form a connected, piecewise differentiable boundary whose endpoints are P1 and P2.
2. Each boundary segment in S reflects the same qualitative change as the change from P1 to P2.

Qualitative boundaries broaden the number of choices for pairwise contacts in the backtracking algorithm. The elementary contact table is augmented with additional information, indicating the value range for which the configuration space boundary is monotonically increasing, decreasing, or constant. New boundary points are introduced only when all other choices fail. The boundary endpoints P1 and P2 must be matched precisely.

6 Causal Descriptions

In this section, we show how to map causal descriptions to their corresponding configuration space. A causal description is represented as a collection of state diagrams [DeKleer and Brown, 1984], [Forbus, 1984], where each state corresponds to a qualitatively different behavior. Two kinematic behaviors are qualitatively different when they specify different possible motions, when the axes of motion are different, when at least two motion parameter intervals are disjoint, or when the functions relating motion parameters are different. Causal descriptions are sometimes simpler and more intuitive than possible motions descriptions. While possible motion descriptions specify all the potential kinematic behaviors of a mechanism, causal descriptions might only specify a subset of these behaviors.
Indeed, a causal description can be interpreted as either being a partial or a complete description of the desired behavior. Both descriptions require the described behaviors to take place, but the partial description allows additional qualitatively different behaviors. A complete description requires that no other qualitatively different behaviors take place. In both cases, the design is considered successful when the input motion sequences applied to the objects produce exactly the original state diagrams.

Let S = {S1, ..., Sn} be a collection of state diagrams, where each state diagram Si is a triple [σi, {sij}, {<sij, sik>}]. σi is the input motion sequence, {sij} is the set of states describing the motion of each object, and {<sij, sik>} is the set of state transitions. The function apply(σ, CO(A, B)) produces the state diagram corresponding to the input sequence σ and the configuration space CO(A, B) (for a description of this procedure, see [Joskowicz, 1987a]).

[Figure 5: Causal Descriptions and their Corresponding Qualitative Configuration Space Regions — the eight combinations of dir(XA), dir(XB) and of the instigating relation (A ⇒ B or B ⇒ A), each with its qualitative boundary and region of free placements.]

The shapes of A and B satisfy a given collection S of state diagrams iff ∀Si ∈ S, apply(σi, CO(A, B)) = Si, i.e., the application of each input motion sequence to the actual configuration space produces the same state diagram as the one desired. A configuration space that satisfies the above property is acceptable. Given a set of state diagrams, the goal is to construct an acceptable desired configuration space, R(A, B). We construct R(A, B) by composing individual configuration spaces Ri(A, B) resulting from each Si.
The space Ri(A, B) is in turn constructed by composing configuration space regions rij resulting from each state sij. Each state sij is mapped into a region of the configuration space by using the information contained in the state about object motions and their relationships:

1. The type of motions determines the design space.
2. The intervals of the motion parameters determine the region of the configuration space in which the behavior takes place.
3. The boundary of the configuration space is determined either by an explicitly given relation (XB > l), or deduced from the causal description that defines the instigator of the movement and the direction of change for the motion parameters: motion(A) CAUSES motion(B), dir(XA), dir(XB).

The configuration space boundary resulting from a causal description is a qualitative boundary, whose endpoints are determined by the intervals of XA and XB. The region of free placements is determined by one of the eight possible combinations of values for dir(XA), dir(XB) and motion(A) CAUSES motion(B) (A ⇒ B), as shown in Figure 5. For example, in the first case, the qualitative configuration space boundary is defined by the endpoints (XA0, XB0) and (XA1, XB1). The set of free placements corresponds to the region XB ≤ f(XA), where f is the equation (possibly qualitative) of the boundary line.

The individual regions rij are combined by taking the union of their forbidden placements. Conceptually, composing two regions amounts to requiring two behaviors to take place in the common subregions, and preserving the behaviors in the disjoint subregions. The configuration spaces Ri(A, B) resulting from each Si are composed analogously. This method produces an acceptable configuration space R(A, B) with the least constraints on free placements. If the causal description is taken to be complete, we require a qualitative match between R(A, B) and CO(A, B).
Otherwise, we allow additional regions in CO(A, B) not appearing in R(A, B). Then, R(A, B) matches CO(A, B) iff there exists a set of regions r1, ..., rn ⊆ CO(A, B) such that R(A, B) matches r1 ∪ ... ∪ rn.

7 Conclusion

We have presented a new method for the innovative design of object shapes from a kinematic description of their desired behavior and a set of design constraints. The method is based on the use of configuration spaces, which provide a mechanism to explicitly reason about the relationship between the structure and the kinematic function of objects.

Our design method differs from existing cam design methods, but is also more general. Cam design consists in finding the shape of a single, continuous feature that defines the contour of the cam. Our design method is capable of modifying both objects and producing discontinuous boundaries formed by simple features (e.g., vertices, edges and circular arcs). We can incorporate cam design methods in our design framework by specifying the configuration space boundaries for which a new feature must be designed; the design of this feature can then be done with the existing cam methods.

We have started the implementation of the design algorithm for exact configuration spaces in the translation-translation space and plan to extend it to other spaces, incorporating both qualitative and causal descriptions of the desired behavior.

Acknowledgment

Many thanks to Ernest Davis for his valuable comments on an early draft, as well as his guidance.

References

[Brown and Chandrasekaran, 1986] D. Brown and B. Chandrasekaran, "Knowledge and Control for a Mechanical Design Expert System", Computer, July 1986.
[DeKleer and Brown, 1984] J. DeKleer and J. S. Brown, "A Qualitative Physics based on Confluences", Artificial Intelligence 24, 1984.
[Dixon, 1986] J. Dixon, "Artificial Intelligence and Design: A Mechanical Engineering View", Proc. of the 5th AAAI Conference, Philadelphia, 1986.
[Faltings, 1986] B. Faltings, "A Theory of Qualitative Kinematics in Mechanisms", Report UIUCDCS-R-86-1274, University of Illinois, May 1986.
[Faltings, 1987a] B. Faltings, "Qualitative Place Vocabularies for Mechanisms in Configuration Space", Tech. Rep. UIUCDCS-R-87-1360, University of Illinois, July 1987.
[Faltings, 1987b] B. Faltings, "Qualitative Kinematics in Mechanisms", Proc. of IJCAI-87, Milano, 1987.
[Forbus, 1984] K. Forbus, "Qualitative Process Theory", Artificial Intelligence 24, 1984.
[Forbus et al., 1987] K. Forbus, P. Nielsen and B. Faltings, "The Inferential Structure of Qualitative Kinematics", Proc. of IJCAI-87, Milano, 1987.
[Joskowicz, 1987a] L. Joskowicz, "A Framework for the Kinematic Analysis of Mechanical Devices", Tech. Rep. 313, Computer Science Dept., Courant Institute, New York University, August 1987.
[Joskowicz, 1987b] L. Joskowicz, "Shape and Function in Mechanical Devices", Proc. of the 6th AAAI Conference, Seattle, 1987.
[Joskowicz, 1988] L. Joskowicz, "Reasoning about the Kinematics of Mechanical Devices", to appear, Int. Journal of Artificial Intelligence in Engineering, 1988.
[Joskowicz and Addanki, 1988] L. Joskowicz and S. Addanki, "Innovative Shape Design for Kinematic Pairs", Tech. Rep. 399, Computer Science Dept., New York University, March 1988.
[Lozano-Pérez, 1983] T. Lozano-Pérez, "Spatial Planning: A Configuration Space Approach", IEEE Trans. on Computers, Vol. C-32, No. 2, 1983.
[Mitchell et al., 1985] T. Mitchell, L. Steinberg and J. Shulman, "A Knowledge-Based Approach to Design", IEEE Transactions on Pattern Analysis and Machine Intelligence, September 1985.
[Mittal et al., 1986] S. Mittal, C. Dym and M. Morjaria, "PRIDE: An Expert System for the Design of Paper Handling Systems", Computer, July 1986.
[Murthy and Addanki, 1987] S. Murthy and S. Addanki, "PROMPT: An Innovative Design Tool", Proc. of the 6th AAAI Conference, Seattle, 1987.
[Reuleaux, 1876] F.
Reuleaux, The Kinematics of Machinery: Outline of a Theory of Machines, 1876 (Reprinted by Dover Publications Inc., 1963).
[Schwartz and Sharir, 1983] J. T. Schwartz and M. Sharir, "On the Piano Movers II. General Techniques for Computing Topological Properties of Real Algebraic Manifolds", Advances in Applied Mathematics 4, 1983.
Assembling a Device

Jean-Luc Dormoy
Electricité de France Research Center, Clamart, France

Olivier Raiman
I.B.M. Scientific Center, Paris, France
LAFORIA, Pierre & Marie Curie Paris University

Abstract

We present here a new way of reasoning on a device based on structure, which we call assembling a device. It consists of a symbolic combination of local qualitative constraints (namely confluences) leading to more global relations. Some reference variables are selected according to the task to be performed (simulation, observation, postdiction, ...). The assembling step produces a set of equations expressing directly "internal" quantities as functions of the reference quantities. We call such a set a task-oriented assemblage. Then, determining the non ambiguous variables for a particular assignment of the reference quantities turns out to be straightforward. We can thus expect to perform qualitative reasoning on large systems. The assembling tool is a new rule, we call the qualitative resolution rule. It has agreeable properties: (1) interpretation: each application can be interpreted as joining local descriptions to more global ones; (2) completeness: an assemblage provides all the non ambiguous variables for any assignment of reference variables.

1 Introduction

Qualitative reasoning about a physical device is an attempt to make a computer focus on the device properties in the same way an engineer does. A typical problem is to capture key features of the device behavior. This has been the main concern for people working in the Qualitative Physics area. This work shows how a computer program can deduce global properties specific to a device by combining local physical laws. Essentially we attack the problem by defining a new task, which is not based on causality, but on the idea of assembling the components of the device. Technically speaking, this task is performed by a single rule, we call the qualitative resolution rule.
First we show on some simple and motivating examples how the resolution rule, by assembling the device, produces global laws. Thus, performing simulation or other tasks, such as observation, turns out to be straightforward. This enables us to produce very efficient task-oriented programs, even for large-scale plants.

330 Common Sense Reasoning

Then we describe precisely how the resolution rule must be applied. This leads us to prove some basic properties of the signs algebra. Then we state a completeness result: all the non ambiguous physical quantities can be drawn from global laws produced by the resolution rule. Such a set of global laws is called an assemblage. In practical terms, we specify the form of the global laws composing an assemblage. This enables us to stop firing the qualitative resolution rule as soon as it has provided an assemblage. We conclude by a comparison to De Kleer's and Brown's work.

2 Assembling some devices

2.1 Is the sum of two pipes a pipe?

Consider a very simple example, a qualitative model for two connected pipes (Fig. 1).

Figure 1: Two connected pipes (A, B and C mark the pipe ends).

For each pipe, there is a confluence describing the link between the sign of the pressure at the different ends of the pipe and the flow Q. The confluences (1) and (2) for pipe 1 and pipe 2, respectively, are the following:

[dPA] - [dPB] - [dQ] = 0   (1)
[dPB] - [dPC] - [dQ] = 0   (2)

This model describes separately the different parts of the physical device. It is obvious that two connected pipes behave like a single pipe. This means that the following confluence must hold:

[dPA] - [dPC] - [dQ] = 0   (3)

A system performing qualitative reasoning should be able to deduce (3) from (1) and (2).

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

2.2 The qualitative resolution rule

Deducing confluence (3) from confluences (1) and (2) requires eliminating variable [dPB]. The trouble is that gaussian elimination is not correct in general for confluences.
The following rule states under what conditions such an elimination can be performed:

Qualitative Resolution Rule: Let x, y, z, a, b be qualitative quantities such that x + y = a and -x + z = b. If x is different from ?, then y + z = a + b.

Detailed explanation, proof and related properties are given below. Take x = [dPB], y = -[dPC] - [dQ], z = [dPA] - [dQ], and a = b = 0. As [dPB] is a physical quantity, its range is {0, +, -}. Hence the conclusion can be drawn:

[dPA] - [dPC] - [dQ] = 0   (3)

Moreover, the qualitative resolution rule provides another confluence by "subtracting" confluences (1) and (2) and "eliminating" [dQ]:

[dPA] - [dPB] + [dPC] = 0   (4)

Initial confluences (1) and (2) describe links between the physical variables involved in the elementary components pipe 1 and pipe 2. The inferred confluences (3) and (4) describe the consequences of connecting the two pipes: they are specific properties of the composite device. The qualitative resolution rule discovers global relations starting from local ones.

2.3 Consequences for a simulation task

A classical task which a qualitative reasoner should be able to perform is simulation, that is predicting the behavior for a given input. For example, we would like to perform a simulation under the assumptions [dPA] = + and [dPC] = 0. We can use two obvious rules, we call in this paper propagation rules:

PR1: If the value of a variable x is known, then substitute x by its value in all the confluences mentioning x.
PR2: If an equation mentions exactly one variable, then deduce its value.

Consider the inferred confluences (3) and (4). It is obvious using the two propagation rules that [dPB] = + and [dQ] = +.
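The elimination step can be sketched on confluences written as variable-to-coefficient maps. This is our illustrative encoding, not the authors' program; the name `resolve` and the dictionary representation are assumptions:

```python
def resolve(c1, c2, x):
    """Eliminate x between two confluences (maps var -> coefficient, right-hand side 0)."""
    if c1.get(x, 0) * c2.get(x, 0) > 0:
        c2 = {v: -k for v, k in c2.items()}   # flip one confluence so the x's are opposite
    assert c1.get(x, 0) == -c2.get(x, 0) != 0, "x must occur in both confluences"
    out = {}
    for v in sorted(set(c1) | set(c2)):
        if v == x:
            continue
        k1, k2 = c1.get(v, 0), c2.get(v, 0)
        if k1 * k2 < 0:
            raise ValueError(f"pattern t - t on {v}: resolution not applicable")
        s = k1 + k2
        if s:
            out[v] = 1 if s > 0 else -1       # t + t = t: keep a single occurrence
    return out

pipe1 = {"dPA": 1, "dPB": -1, "dQ": -1}       # confluence (1)
pipe2 = {"dPB": 1, "dPC": -1, "dQ": -1}       # confluence (2)
print(resolve(pipe1, pipe2, "dPB"))           # (3): {'dPA': 1, 'dPC': -1, 'dQ': -1}
print(resolve(pipe1, pipe2, "dQ"))            # (4): {'dPA': 1, 'dPB': -1, 'dPC': 1}
```

The `t - t` guard anticipates the restriction discussed in Section 3.2: a second variable must not be eliminated by the same application of the rule.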
The reason for being able to draw these conclusions is that confluences (3) and (4) are global behavioral descriptions of the device as a whole, linking explicitly the internal variables [dPB] and [dQ] to the input [dPA] and [dPC]:

[dPB] = [dPA] + [dPC]   (A1)
[dQ] = [dPA] - [dPC]   (A2)

By assembling the two pipes and providing the global behavioral relations (A1) and (A2), we have reduced (in this case, based on the quasi-static assumption) any simulation task to simple propagation. For instance, it would have been as easy to predict the behavior of the device starting from other input values:

[dPA] = 0, [dPC] = + ==> [dPB] = +, [dQ] = -
[dPA] = +, [dPC] = + ==> [dPB] = +, [dQ] remains unknown

The last case is important. It is possible to compute the value of [dPB], but the propagation rules lead to an ambiguous value for [dQ]: [dQ] = [+] - [+]. It is often the case that some quantities are determinate while others remain ambiguous. The method introduced here is not responsible for the ambiguity of [dQ]; the "qualitativeness" of the model is. On the other hand, [dPB] is not ambiguous in the model, and its value is inferred.

Now, forget for a while that confluences (3) and (4) can be inferred, and apply propagation directly to the initial confluences (1) and (2). This gives: -[dPB] - [dQ] = - (1) and [dPB] - [dQ] = 0 (2). No other information can be gotten, except by using some kind of indirect proof. By itself, propagation is incomplete. This example shows intuitively the advantage and the meaning of the resolution rule: it reduces simulation to simple propagation, while propagation by itself is incomplete; at the same time, it assembles the parts of the device and provides global properties specific to the compound device. We will now give a deeper insight into the nature of what assembling a device is.

2.4 Assembling the device for simulation

Consider a general device with input variables i1, ..., ip, internal variables v1, ...
, vn, and a qualitative model based on confluences involving the qualitative derivatives of these quantities. Suppose we want to build a system which can answer quickly any simulation-like question: "How does the device react to input [di1] = a1, ..., [dip] = ap?" This can be done in two steps (Fig. 2):

1. Assembling the device, that is obtaining from the initial qualitative model global relations, for instance relations expressing directly the internal variables as functions of the input: [dvj] = fj([di1], ..., [dip]), 1 ≤ j ≤ n. These relations will hold whatever values are assigned to the input.
2. Then propagating input values into these global laws.

The second step relies only on the two basic propagation rules. In our first example, the first step is achieved using the resolution rule.

Figure 2: Simulation in two steps (assembling the device, then propagating).

Solving confluences happens to be an NP-complete problem [Dormoy, 1987].

Dormoy and Raiman 331

At first sight, if there are k simulations to be performed, then we can expect to be confronted k times with a (probably) exponential problem. Thus, splitting simulation into these two steps is fundamental. The first step is NP-complete too, but it is done once and for all. The second step will be performed k times, but it is known to be polynomial (in the worst case, O(n×p)). The first step can be viewed as compiling the device for simulation, and so avoids "re-interpreting" the initial set of confluences for each new simulation. The second step can be coded as a very simple and efficient program. This program is specific to the device, but this is why it is efficient. We may thus expect to perform on-line simulations on large-scale plants having multiple input variables.

2.5 The pressure regulator revisited

Consider a second example, the well known pressure regulator. The model used here (Fig.
3) is slightly different from De Kleer's and Brown's [1984]:

[dP1] - [dP2] - [dQ] = 0   (1)
[dP2] - [dP3] - [dQ] + [dA] = 0   (2)
[dP3] - [dP4] - [dQ] = 0   (3)
[dP4] - [dP5] - [dQ] = 0   (4)
[dP4] + [dA] = 0   (5)

Figure 3: The pressure regulator and its model.

P1 and P5 are the input variables; P2, P3, P4, Q and A are the internal variables. Assembling the pressure regulator for simulation using the resolution rule is possible. For instance, we can get the relation involving [dP2]:

[dP2] = [dP1] + [dP5]   (A1)

in four steps (Fig. 4):

[dP2] - [dP3] - [dQ] - [dP4] = 0   (6) = (2) - (5)
[dP2] - [dP4] - [dQ] = 0   (7) = (6) + (3)
[dP2] - [dP5] - [dQ] = 0   (8) = (7) + (4)
[dP1] - [dP2] + [dP5] = 0   (9) = (1) - (8)

Figure 4: Assembling the pressure regulator for simulation.

In the same way the resolution rule provides the following global laws ([dP3] will be given later on):

[dP4] = [dP1] + [dP5]   (A2)
[dQ] = [dP1] - [dP5]   (A3)
[dA] = -[dP1] - [dP5]   (A4)

As in the example of the two pipes, simulation is now reduced to propagation.

2.6 Assembling the device for postdiction

As expected, the resolution rule assembles the device for simulation. But this is not the only point. This example highlights other tasks that can be performed using resolution. Imagine that we cannot directly observe the input, but that we can measure the evolutions of [dA] and [dQ]. We are no longer interested in simulation, but in postdiction: "What input has caused the fact that [dA] = a and [dQ] = q?" Formally, this problem is very similar to simulation: solving it only requires expressing the other variables as functions of [dA] and [dQ]. The general task of assembling a device can still be performed, whatever set of reference variables is selected. The global laws of the pressure regulator for reference variables [dA] and [dQ] are:

[dP1] = [dQ] - [dA]
[dP2] = [dQ] - [dA]
[dP3] = [dQ] - [dA]
[dP4] = -[dA]
[dP5] = -[dA] - [dQ]

We can thus expect to observe a device with the same advantages as for simulation.
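Once global laws of the (A1)–(A4) kind are available, simulation is plain sign propagation. The following is a minimal sketch under our own encoding of signs as characters (the function names are assumptions, not the authors' code):

```python
# Signs: '+', '-', '0' and '?' (ambiguous). Qualitative addition and negation.
def add(a, b):
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'      # opposite signs (or any '?') give '?'

def neg(a):
    return {'+': '-', '-': '+'}.get(a, a)

def simulate(dP1, dP5):
    """Propagate an input assignment through the assembled global laws."""
    return {
        'dP2': add(dP1, dP5),        # (A1)
        'dP4': add(dP1, dP5),        # (A2)
        'dQ':  add(dP1, neg(dP5)),   # (A3)
        'dA':  neg(add(dP1, dP5)),   # (A4)
    }

print(simulate('+', '0'))   # {'dP2': '+', 'dP4': '+', 'dQ': '+', 'dA': '-'}
print(simulate('+', '+'))   # dQ comes out '?' (ambiguous), the rest determinate
```

Each call is a constant number of table lookups; this is the "very simple and efficient program" the two-step scheme compiles the device into.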
3 Scanning the qualitative resolution rule

Before discussing what the resolution rule can indeed produce, we need to pause and see exactly what it is. We start with the proof, for it may avoid possible confusion.

3.1 Proof

The qualitative resolution rule can be stated in several ways. We gave the shortest and the most general one:

Qualitative Resolution Rule: Let x, y, z, a, b be qualitative quantities such that x + y = a and -x + z = b. If x is different from ?, then y + z = a + b.

Before proving the rule, we need two basic qualitative calculus properties:

Quasi-transitivity of qualitative equality: If a = b and b = c and b ≠ ?, then a = c.

Compatibility of addition and qualitative equality: a + b = c is equivalent to a = c - b.

It is very easy to prove these properties provided that the relation =, called qualitative equality, or sign compatibility, is properly defined:

a = b iff a and b are identical signs, or a is ?, or b is ?

This relation is not the usual equality. Let F1 and F2 be two expressions, involving additions and products of physical quantities, such that F1 = F2, and E1 and E2 the corresponding qualitative expressions. E1 = E2 means that the resulting signs of the two expressions F1 and F2 must be compatible. Suppose we have assigned some values to the physical variables involved in both F1 and F2, and let s1 and s2 be the corresponding values of E1 and E2. If s1 and s2 are non ambiguous, i.e. both different from ?, they must be equal; but if one of them is ? (the sum of a + and a -), the underlying real expression may have any sign; hence, it may be compatible with any other sign. All this is obvious and well-known. The point is that as soon as we have formally defined the set S = {+, 0, -, ?}, its addition and product, as well as the qualitative equality, we may work within this structure and prove things while forgetting the initial motivation.
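Since S, its addition and the qualitative equality are finite structures, both calculus properties and the resolution rule itself can be verified by exhaustive enumeration. The encoding below is ours, not from the paper:

```python
from itertools import product

S = ['+', '0', '-', '?']

def add(a, b):                        # qualitative addition over S
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

def neg(a):
    return {'+': '-', '-': '+'}.get(a, a)

def qeq(a, b):                        # qualitative equality (sign compatibility)
    return a == b or '?' in (a, b)

# Quasi-transitivity: a = b and b = c and b != ?  implies  a = c.
for a, b, c in product(S, repeat=3):
    if qeq(a, b) and qeq(b, c) and b != '?':
        assert qeq(a, c)

# The resolution rule: x + y = a and -x + z = b and x != ?  implies  y + z = a + b.
for x, y, z, a, b in product(S, repeat=5):
    if x != '?' and qeq(add(x, y), a) and qeq(add(neg(x), z), b):
        assert qeq(add(y, z), add(a, b))

print("both properties verified over S")
```

Dropping the hypothesis x ≠ ? makes the second check fail, which is exactly why the rule restricts x to physical (non-ambiguous) quantities.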
For instance, the proof of the first property is: If a = ? or c = ?, then obviously a = c. Otherwise a = b = c, no matter what a, b or c are. □

The second property can be proved by case analysis. We can now give the proof of the rule. Let x, y, z, a and b be like in the first statement of the rule:

x + y = a
-x + z = b

We get, by applying the second property:

z - b = x and x = a - y

The assumption x ≠ ? allows us to apply the first property:

z - b = a - y

which can be rewritten using again the second property:

y + z = a + b   □

This proof needs a comment. Probably the most expressive way to state the resolution rule is: a variable may be eliminated by adding or subtracting two confluences, provided that no other variable is eliminated at the same time. But this may lead to confusion: it could be thought that we may add or subtract two confluences, and then eliminate a variable by the "elimination rule" x - x --> 0. This is clearly a wrong statement, since x - x is hardly ever 0, unless x itself is 0. But the resolution rule states that one can proceed as if this were true, provided that one applies the "elimination rule" only once. This is clearly not the way the rule is proved.

3.2 Using the resolution rule in the right way

We have shown in the examples above how one succeeds in firing the resolution rule. We show here how one can fail. In practical terms, the relations x + y = a and -x + z = b stand for some confluences, and x is a variable involved in both. The hypothesis "x is different from ?" is thus always verified, since x stands for a qualitative derivative of a physical quantity. In order to obtain the exact pattern of the rule, the second confluence may be multiplied by - if necessary. a and b are the respective right-hand sides of the confluences (until now 0). y and z are the remaining expressions of the respective left-hand sides after having removed x. y and z may involve a common variable.
There is a problem if y and z involve a variable t with opposite coefficients: when adding y + z, we get t - t, which cannot be simplified (it is not correct to substitute 0 for t - t, cf. the previous remark). Otherwise, we get t + t or -t - t, which can be simplified according to the rule t + t = t. All this is better illustrated by the following examples (borrowed from the pressure regulator). Consider first the two confluences:

[dP2] - [dP3] - [dQ] + [dA] = 0   (2)
[dP4] + [dA] = 0   (5)

They have a single variable in common, x = [dA]. We must consider the opposite of confluence (5). y and z have no variable in common: y = [dP2] - [dP3] - [dQ], z = -[dP4]. The resulting confluence is:

[dP2] - [dP3] - [dQ] - [dP4] = 0   (6) = (2) - (5)

Let's try now to combine this confluence and confluence:

[dP3] - [dP4] - [dQ] = 0   (3)

There are three possible choices for x: [dP3], [dP4] and [dQ]. Let's try [dP3] first. We have y = -[dP4] - [dQ] and z = [dP2] - [dP4] - [dQ]. y and z have two variables in common, and we are in the case t + t. Hence we get:

[dP2] - [dP4] - [dQ] = 0   (7) = (6) + (3)

Let's now try x = [dP4], starting with the same confluences (3) and (6) (choosing x = [dQ] would lead to a similar conclusion). We obtain y = [dP2] - [dP3] - [dQ] and z = -[dP3] + [dQ]. We are in the case t - t. The relation y + z = 0 is of no practical use. Such applications of the resolution rule must be avoided.

For functional purposes, the resolution rule must be stated as follows: Let x + E1 = a and -x + E2 = b be two confluences, where x is a variable and E1 and E2 have no variable with opposite coefficients in common. Then E3 = a + b is a valid confluence, where E3 is the same expression as E1 + E2, but with no repeated variable.

3.3 Why resolution?

We had called the qualitative resolution rule the qualitative Gauss rule, because of its similarity with gaussian
The qualitative resolution rule and the Resolution Rule in logics (weakened here to the propositional calculus) have a similar aspect: Let X, Y, Z be propositional variables (and x, y, z their boolean equivalents) such that XVY (x + y = 1) and 1x v z (-x + z = 1) Then YVZ (y + z = 1) Moreover, the two resolution rules have completeness properties (see below). It must be mentioned that there is a third resolution rule, valid in a model dealing with orders of magnitude (which embeds the standard signs model). We have proved no completeness result within this framework, but we guess that there is one. We are thus facing a situation with three similar rules and two completeness results (probably three) in models of increasing complexity: there is something fishy going on. But we have not caught it yet. 4 Completeness of qualitative resolution 4.1 Power of the resolution rule We have shown in the examples the advantage of performing the ‘task we have called assembling a device: the resolution rule provides relations, from which the basic propagation rules are powerful enough tools to determine the non ambiguous variables and their values. Efficient programs could be designed in this way. But are we sure that this works in all cases ? This is a completeness problem. For instance, in the two pipes case, the values for [ dP B] and [ dQ I , when not ambiguous, are imposed by the model, not by a particular method. The challenge, when proposing an effective method, is to know whether it can reach all that is embodied in the model. This is not true for the propagation rules. But we saw that these rules could deduce all the non ambiguous values from the global laws produced by resolution whatever the assignments of reference variables were. We suspect that the resolution rule is complete in this way. 4.2 Assemblages Which kind of global laws the propagation step needs depends on the task to be achieved, i.e. on the choice of reference variables. Suppose we have selected one. 
Then the resolution rule is requested to discover an assemblage: that is, a set of global laws from which the propagation rules deduce all the non ambiguous variables and their values for any assignment of the reference variables. More formally, an assemblage can be defined as follows:

Let C be a set of confluences, wi be selected reference variables and vj the remaining ones. A set of global laws A is called an assemblage for the reference variables wi iff for each assignment of the reference variables wi = ai, as soon as the model C imposes the value bj to the internal variable vj, then the basic propagation rules can deduce vj = bj from the assemblage.

The completeness problem comes down to obtaining assemblages for each possible choice of reference variables.

4.3 Partially proved

Indeed, though we think that it is true in any case, we have proved the completeness only in the square case, i.e. when the number of confluences and the number of internal variables are equal. The proof is difficult to show: it requires introducing the notions of qualitative determinant, qualitative rank, maximal matrices with full rank, ... Its total length exceeds twenty pages, and therefore it will not be given here (it can be found in [Dormoy, 1987]).

Incidentally, this completeness result also applies when the reference set is empty. This means for instance that, if we are performing a simulation for some particular input perturbations, then the resolution rule can find out all the non ambiguous variables from the initial set of confluences as well. But the advantages of the assembly step would be lost if the resolution rule were to be used in this way.
4.4 The general resolution rule is needed for completeness

Unexpectedly, we discovered after having written down the completeness proof that this work was not the first attempt to seek an effective and complete method for the unicity problem in confluences. In the field of economics, Ritschard proposed a more constrained form of the resolution rule, but one leading to a more informative conclusion (the divergences from the resolution rule are underlined) [Ritschard, 1983]: Let x+E1=a (C1) and -x+E2=b (C2) be two confluences, where x is a variable and E1 and E2 have no variable with opposite coefficients in common. Assume that all the variables involved in E2 are involved in E1. Then E3=a+b (C3) is a valid confluence, where E3 is the same expression as E1+E2, but with no repeated variable. Moreover, if a+b=b, then substituting confluence (C3) for confluence (C1) provides an equivalent set of confluences. For instance, this rule applies in the pressure regulator example to confluences (6) and (3): [dP2]-[dP3]-[dP4]-[dQ]=0 (6). [dP3] can be eliminated in confluence (6), giving confluence (7): [dP2]-[dP4]-[dQ]=0 (7). This deduction is made by the resolution rule as well, but the additional result is that confluence (6) can be discarded. Ritschard claimed a completeness result concerning this rule. Unfortunately, his claim is wrong, as shown by the counter-example: y+z+t=0, x-z+t=0, x+y-t=0, x-y+z=0. All the variables must be 0, but Ritschard's rule does not apply even once. It can be checked that the resolution rule works correctly here. The completeness result stated above proves theoretically that the resolution rule always provides an assemblage. But, in practical terms, we must describe precisely the form of the global laws composing an assemblage.
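The counter-example can be checked mechanically. In the sketch below (ours, not the paper's), a sign assignment satisfies a confluence iff its non-zero term signs could cancel in a real sum, and brute force over all 3^4 sign assignments confirms that only the all-zero assignment satisfies the four confluences:

```python
from itertools import product

def satisfies(confluence, assign):
    """confluence: dict var -> coefficient; asserts qualitative sum = 0.

    Satisfied iff either every term is zero, or both a positive and a
    negative term are present (so real values could cancel).
    """
    signs = {coef * assign[v] for v, coef in confluence.items()}
    signs.discard(0)
    return len(signs) != 1   # {} or {+1, -1} pass; {+1} or {-1} fail

system = [{'y': 1, 'z': 1, 't': 1},   # y + z + t = 0
          {'x': 1, 'z': -1, 't': 1},  # x - z + t = 0
          {'x': 1, 'y': 1, 't': -1},  # x + y - t = 0
          {'x': 1, 'y': -1, 'z': 1}]  # x - y + z = 0

solutions = [a for a in (dict(zip('xyzt', v))
                         for v in product((-1, 0, 1), repeat=4))
             if all(satisfies(c, a) for c in system)]
```

The search finds a single solution, with every variable 0, as the paper states.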
We saw in the examples that we could express an internal variable as a function of the reference ones: [dvj] = fj([dw1], ..., [dwp]). For instance: [dP2] = [dP1] + [dP5] (A1), drawn from the global law [dP1]-[dP2]+[dP5]=0 (9). But we saw too, when assembling the pressure regulator for simulation, that a global law mentioning [dP3] and the inputs [dP1] and [dP5] was missing. Completing a simulation-oriented assemblage for the pressure regulator requires extending the notion of a confluence. The following relation holds for [dP3]: [dP3] = [dP1] + ?[dP5] (A5). The use of ? coefficients in confluences must not cause confusion. This relation means that: if [dP5] is different from 0, then [dP3] cannot be determined from this relation; if [dP5] = 0, then [dP3] = [dP1] (since regular and qualitative equalities are equivalent for two qualitative quantities different from ?). Hence relation (A5), despite the ? coefficient, provides some information. Indeed it provides the best possible, since [dP3] is ambiguous as soon as [dP5] is different from 0. In general, the way we represent physical laws must not change: confluences are suitable. But the goal to be achieved for a particular task imposes changes to their form: the reference variables must be passed to the right-hand side. The resolution rule applies in the same way, but regardless of the right-hand side. This means that one can deduce relations involving a pattern ?w in their right-hand sides, where w is a reference variable. As usual, we run into ambiguity as soon as w is different from 0, but such a relation may provide some information when w = 0. We call such relations task-oriented confluences. The conclusion is that the resolution rule provides assemblages composed of task-oriented confluences.

6 Conclusion

De Kleer and Brown called RAA the chronological backtracking algorithm which determines all the solutions of a set of confluences. But RAA cannot capture the way an engineer discovers how a device works.
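The reading of a ?-coefficient can be captured directly. This tiny sketch (ours, not the paper's) evaluates the task-oriented confluence (A5) for sign values of the reference variables:

```python
AMBIGUOUS = '?'

def dP3(dP1, dP5):
    """[dP3] = [dP1] + ?[dP5]: informative only when [dP5] = 0."""
    return dP1 if dP5 == 0 else AMBIGUOUS
```

The relation yields a definite value exactly when the ?-weighted reference is zero, and reports ambiguity otherwise.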
Technically speaking, causal heuristics are designed to control RAA. But they are intended to express more: they are an attempt to set within the device-centered model based on confluences the engineer's notion of causal perturbations. Our work presents two aspects as well. From a technical point of view, the resolution rule avoids the incompleteness of propagation by discovering task-oriented assemblages. The completeness result makes this step safe. At the same time, efficient task-oriented programs are produced. But we intend more: in our opinion, assembling a device captures the idea of an engineer incorporating local laws into descriptions which are specific to the device. The work reported here clearly lies in the continuum of previous research in qualitative physics, but it relies on a different and new approach. As said above, propagation rules are incomplete by themselves, hence a kind of indirect proof is needed.

References

[De Kleer & Brown, 1984] Johan de Kleer, J.S. Brown. A qualitative physics based on confluences. Artificial Intelligence, Vol. 24, No. 1-3, December 1984.
[De Kleer, 1984] Johan de Kleer. How Circuits Work. Artificial Intelligence, Vol. 24, No. 1-3, December 1984.
[Raiman, 1986] Olivier Raiman. Order of Magnitude Reasoning. AAAI-86.
[Dague et al., 1987] P. Dague, P. Deves, O. Raiman. Troubleshooting: when modeling is the trouble. AAAI-87.
[Dormoy, 1987] Jean-Luc Dormoy. Résolution qualitative: complétude, interprétation physique et contrôle. Mise en oeuvre dans un langage à base de règles: BOOJUM. Paris 6 University Doctoral Thesis, December 1987.
[Dormoy, 1988] Jean-Luc Dormoy. Controlling Qualitative Resolution. AAAI-88.
[Ritschard, 1983] Gilbert Ritschard. Computable qualitative comparative statics techniques. Econometrica, Vol. 51, No. 4, July 1983.
[Weld, 1986] Dan Weld. The use of aggregation in causal simulation. Artificial Intelligence, Vol. 30.

Dormoy and Raiman 335
Sarosh Talukdar, Jim Rehg, Rob Woodbury, Alberto Elfes
Engineering Design Research Center
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

Design systems can have considerable embedded value. Improvements in such systems are better achieved through upgrades than through complete replacements. Determining how best to make these upgrades requires a systems view of design. Such a view is provided by what we call TAO (test-aspect-operator) graphs. Nodes in these graphs represent aspects of the artifacts being designed, while arcs represent operators (transforms between aspects) and tests (comparisons of aspects). Upgrades can be thought of as additions of nodes or arcs to an existing TAO graph. To illustrate these ideas we will briefly describe the upgrades that we are making to a system for designing certain automobile parts.

1 Introduction

The important properties of artifacts like cars, computers and microelectronic chips are determined by their designs. Manufacturers of such artifacts must maintain competitive design systems if they are to preserve their market shares. When design systems have considerable embedded value, as is often the case, it makes more sense to maintain their competitive margins through frequent and relatively small upgrades rather than through infrequent and massive changes. To understand where upgrades are most needed we must identify the principal weaknesses of existing design systems. Consider the three stages of virtually every large design project, namely: the decomposition of the project into loosely coupled, partially ordered tasks; the performance of the tasks; and the integration of the results. For the third stage to work, the second stage must produce results that are compatible. One way to do this is to establish a team of agents whose function is to monitor the tasks of the second stage and make sure their results are compatible.
We will refer to the activities of this team as "simultaneous engineering" or "concurrent design". We note that simultaneous engineering is closely related to distributed problem solving and to ideas of contingency theory as used in human organizations. These relationships are further discussed in [5].

1.1 Second Stage Weaknesses

Some of the more profound weaknesses of existing design systems stem from their second stages. These weaknesses include:
• closed design processes that inhibit the monitoring and mid-course adjustments needed for effective simultaneous engineering;
• limited vision and delayed feedback (The decisions made in upstream design tasks can have significant effects on downstream tasks. However, decision makers often cannot see these effects. When they can, it is through simultaneous engineering and tests that are usually so slow that it is difficult to do anything about the effects.);
• weak infrastructures that can support only a fraction of the representations needed (Complex design processes call for wide varieties of representations. Various geometric models, differential equations and block diagrams are some examples. Existing CAD systems, however, support only a very few of these representations.).

2 Design Processes

2.1 Process Architectures and TAO Graphs

The three main components of a design process are aspects, operators and tests. An aspect is a perspective or view of an artifact, from any point in its life cycle. For example, sketches on the back of an envelope, detailed blueprints, and full-size prototypes are all aspects of a car. Such aspects can be divided into three categories:
• input aspects (given data),
• output aspects (goal states), and
• intermediate aspects (subgoal states or stepping stones to the outputs).
Operators calculate the intermediate and output aspects and can be either manual or automatic. Large processes usually require some of each. Tests are special cases of operators.
The purpose of a test is to compare two or more aspects for consistency and post the results of the comparison in another aspect.

From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

A convenient way to describe the architecture of a design process (the arrangement of aspects, operators and tests) is by a directed graph whose nodes represent aspects and whose arcs represent operators and tests. We will call these graphs TAO (test-aspect-operator) graphs.

2.2 Control

The control problem can be stated as follows: given a TAO graph, select paths by which to calculate the output aspects from the input aspects. Some of the given inputs may be test results. That is, the paths may have to be selected so that certain tests are passed. Often this requires cycles through the graph (iterations). The functions of the control scheme are to produce an "initial design" by instantiating the empty aspects and then to reduce inconsistencies among the aspects, much as a servo control system acts to reduce errors. The human members of a design task force usually work in a distributed mode that allows for both spontaneous and preplanned collaboration. The computer tools in existing systems, however, are usually incapable of spontaneous or opportunistic action.

2.3 Eliminating Weaknesses

What are the causes of the second-stage weaknesses listed earlier, and how can they be alleviated? We suspect that the causes in many, if not most, industrial processes are architectural in nature. Factors to be considered in eliminating these causes are:
1. aspect placement (Do the aspects decompose the overall task into manageable subtasks? Do they provide the information and connection points needed for simultaneous engineering?);
2.
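As a minimal illustration of the control problem, the sketch below (ours; the aspect and operator names are invented, loosely echoing the window regulator process) encodes a tiny TAO graph as a dictionary of operators and selects an operator path from an input aspect to an output aspect by breadth-first search:

```python
from collections import deque

# Hypothetical miniature TAO graph: operator name -> (input aspect, output aspect)
operators = {
    'blueprint_generation': ('stick_model', 'blueprint'),
    'prototype_construction': ('blueprint', 'prototype'),
    'lab_test': ('prototype', 'test_report'),
}

def find_path(graph, start, goal):
    """Breadth-first search for a chain of operators from start to goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        aspect, path = frontier.popleft()
        if aspect == goal:
            return path
        for op, (src, dst) in graph.items():
            if src == aspect and dst not in seen:
                seen.add(dst)
                frontier.append((dst, path + [op]))
    return None
```

A real control scheme would also handle test arcs and iterate when tests fail; this sketch only shows path selection through the graph.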
aspect capability (Do the data abstractions and representation schemes used by each aspect adequately capture and present the information needed by the operators that deal with the aspect, and, just as important, hide the information that might confuse these operators?);
3. test adequacy (Do the tests make sense? Are they appropriately placed? Do they cover all the consequences the designers should know about? Can they be run fast enough to provide useful feedback to the designers?);
4. modularity and expandability (Can new aspects, operators and tests be readily added to the architecture?).

3 CASE: A Project to Upgrade Window Regulator Design

It is difficult to work on issues of upgrading processes in the abstract. To provide an instance of a real, working process, CMU and Fisher Guide have combined forces and selected the design of window regulators as a prototypical process. A window regulator is a device that raises and lowers the glass in an automobile door. The type of regulator considered here has three main parts: a lift arm to move the glass; a combination of a handle, sector and pinion to translate handle rotations into lift arm movement; and a backplate to attach the entire device to the inner door panel. A TAO graph for the existing window regulator design process is shown in Fig. 1. An examination of the graph reveals several typical flaws, including:
• Most of the operations are manual.
• Much of the generative reasoning is done in a single step (stick model to blueprints). This reasoning process is relatively inaccessible to outsiders, and even the designer can forget why he did things.
• There is little feedback of the consequences of design decisions to the designer. Where there is feedback (e.g. from the lab tests and finite element analysis), it is delayed by days or even weeks.
Figure 1: Test-Aspect-Operator (TAO) graph for the existing window regulator design process.

Our plan for upgrading the process involves two phases. The goals of the first phase are to automate much of the routine, well-understood parts of the existing process and to add some feedback mechanisms (tools that can quickly predict the consequences of design decisions) in important domains such as manufacturing cost. Thus, when the first phase is completed, the human designers will have a system that can quickly transform specifications into a finished design and also provide evaluations of the design in a number of critical domains.

Talukdar, Rehg, Woodbury and Elfes 337

The goals of the second phase are to integrate the human designers into the system by providing them with a powerful interface and the means to guide the automatic operators. Work on the first phase is nearing completion. A TAO graph for the system is shown in Fig. 2. The system involves upgrades in each of the four important areas of engineering design: synthesis, analysis, optimization and simultaneous engineering. In the rest of this paper we report on these components, dwelling most on synthesis because it provides the backbone of the system.

4 The Design Synthesis Task

Synthesis in CASE is based on a window regulator design scenario in which existing backplate and sector designs are chosen from a parts library to meet a given set of requirements, while the lift arm is designed "from scratch", along with some smaller components, to interface to the existing backplate and sector and meet the specifications. This scenario is analogous to actual design practice in many segments of the automotive industry. Thus, the window regulator design task falls into the class of problems often referred to as routine design, for which some solution methods have been proposed [1].
In addition, the lift arm itself is typically one of several predetermined types, making lift arm synthesis a problem of selecting the appropriate structure for the arm and then choosing the dimensional parameters to meet the specifications.

4.1 Design Aspects

In the paradigm described above, design synthesis consists of two activities: the selection of a group of primitives that meet the design performance requirements, and the instantiation of a set of parameter values that meet the design constraints. In routine design, the correct primitive group is assumed to be known in advance, and the synthesis activity essentially involves constraint satisfaction in multiple representations. Two of the representations employed in synthesis are described below.

4.1.1 Stick Model

The stick aspect corresponds roughly to the planar kinematic diagram commonly employed in mechanism design [2]. It captures the basic skeleton of the mechanism and those key parameters that determine its motion. Within this representation, the synthesis task consists of choosing the major link dimensions and gear ratios necessary to meet the design specifications. As a result of the choice of stick aspect primitives, essential device parameters can be determined without considering the full detail of a manufacturable part.

Figure 2: Test-Aspect-Operator (TAO) graph for the improved window regulator design process.

Figure 3: Stick Aspect Network ('-' = 'connected-to' relation). The above data structure describes the manner in which the stick aspect primitives are interconnected. The resulting graph can be traversed to generate design equations or to query the kinematic properties of the mechanism.

Following the representational paradigm described earlier, the stick model is composed of two groups of primitives: links and joints. Links define the skeletal structure of the mechanism, while the joints define the permissible relative motions of links.
There are three types of link primitives: binary, gear, and ground links; and four types of joint primitives: revolute, prismatic, rolling, and fixed joints. The stick model consists of a network of interconnected link and joint primitives. A sample stick aspect network is given in Fig. 3. The stick diagram described by the network is depicted in Fig. 4.

4.1.2 Parts Aspect

While the stick aspect captures the essential kinematic information about a design object, the parts aspect provides a description of the object at the level of detail necessary to manufacture it. Unlike the stick aspect, the choice of manufacturing process is important at the parts level, for it determines the types of primitives that will be employed. For example, because the window regulators are manufactured through a progressive die operation, the parts primitives consist mainly of formed sheet metal objects and rivet-type connectors.

Figure 4: Stick Diagram. The above sketch depicts the stick model of the window regulator described by the graph of Fig. 3. Note how unnecessary design detail has been suppressed.

Although the parts aspect is more domain-specific than the stick aspect, it is identical in form. The two classes of primitives it employs are parts, which consist of the manufacturable elements necessary to design the mechanism, and connections, which represent the specific fastening technologies employed in assembling the device. The part and connection primitives necessary to describe a manual window regulator are discussed in [3]. Like the stick aspect, the parts aspect has a network structure. Unlike the stick aspect, primitives in the parts aspect are not parameterized at the parts level. Instead, each part has a feature representation, which describes the part as a combination of more detailed primitive design elements. In the current implementation, the development of feature representations has been restricted to the lift arm.
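The stick and parts networks share the same form: typed nodes joined by 'connected-to' relations. A fragment of a stick-aspect network in the style of Fig. 3 could be encoded as follows; all names here are illustrative, not taken from the paper's actual model:

```python
# Hypothetical fragment of a stick-aspect network: typed links plus
# joints that connect pairs of links via 'connected-to' relations.
links = {'ground': 'ground-link',
         'lift_arm': 'binary-link',
         'sector': 'gear-link'}

# each joint is (name, joint type, link a, link b)
joints = [('pivot', 'revolute', 'ground', 'lift_arm'),
          ('mesh', 'rolling', 'sector', 'lift_arm')]

def neighbours(link):
    """Traverse the network: links one joint away from the given link."""
    out = []
    for _name, _kind, a, b in joints:
        if link == a:
            out.append(b)
        elif link == b:
            out.append(a)
    return out
```

Such a traversal is the kind of operation used to generate design equations or answer kinematic queries about the mechanism.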
Other parts, such as the backplate and sector, are characterized at the feature level by a single parameter list. The feature representation of the lift arm is based on three classes of primitives: slabs, formations, and seams. The body of the arm is composed of slab primitives, joined together by either flat or bend seams. Flat seams are used when two slabs that lie in the same plane are joined together, while bend seams are used where a change in vertical orientation occurs. Viewed from the side, the lift arm consists of alternating parallel and slanted sections connected by flat and bend seams, respectively. Each individual slab can in turn contain either of the two formation primitives: holes and slots. In a manner similar to the stick primitives, the feature primitives are arranged in a semantic network. There are two possible network connections: connected-to and contains. As before, the primitives are characterized by a parameter set. Fig. 5 shows the top view of the arrangement of primitives that form the lift arm.

Figure 5: Lift Arm Feature Diagram (I = flat seam, II = bend seam). The above illustration shows the manner in which the feature primitives are arranged to model a lift arm.

4.2 Operation of the Synthesis Architecture

In the current implementation of CASE, synthesis occurs in two stages. In the first stage, the stick model synthesis program solves the stick aspect constraints to obtain a set of stick aspect parameters that meet the design specifications (see Fig. 2). The stick synthesis module produces a skeletal window regulator design in which the major design decisions have been made. The design, however, lacks the detail that would make it a manufacturable part. In the second stage, the remaining design detail is added by the feature model synthesis program, which solves the feature model constraints to obtain the feature model parameters.
After the feature synthesis program has performed its task, the design is complete and ready for analysis. Note that within each design stage the problem-solving process is the same, but the design representations differ in the view of the design object they present. This separation permits the design process to proceed in a hierarchical fashion that strongly resembles actual design practice. While the current design process is sequential, we are presently developing a more flexible problem-solving approach in which the stick and feature synthesis modules can interact in producing a design. Located between the stick and feature aspects in Fig. 2 is a design translator module that maps the effects of design decisions made in the stick representation onto constraints on design decisions made in the feature representation. The presence of the translator is necessary to ensure, for example, that the length of the feature model lift arm is compatible with the length of the binary link that corresponds to the lift arm in the stick representation. A description of the operation of the translator module is given in [4].

4.3 The Synthesis Operator

The problem-solving technique employed by the individual synthesis modules is an adaptation of the agent hierarchy approach suggested in [1]. In CASE, groups of design parameters are "assigned" to problem-solving agents that contain the domain knowledge necessary to generate parameter values through the use of heuristic rules and constraint propagation. These agents are arranged in a hierarchy and communicate through message passing. There are two basic mechanisms for assigning values to parameters: instantiation from default values and constraint satisfaction. The constraint satisfaction algorithm employs breadth-first search and is guided by domain knowledge stored in the agents as plans.
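The two value-assignment mechanisms just described, default instantiation followed by constraint propagation, can be sketched in a few lines. All parameter names and formulas below are invented for illustration; they are not the paper's actual design knowledge:

```python
# invented defaults and constraints for a toy "synthesis" step
defaults = {'gear_ratio': 4.0}
constraints = [  # (output parameter, function of already-known parameters)
    ('arm_length', lambda p: p['travel'] / 2.0),
    ('handle_turns', lambda p: p['travel'] * p['gear_ratio'] / 100.0),
]

def synthesize(spec):
    """Instantiate defaults, then propagate constraints to a fixpoint."""
    params = dict(defaults, **spec)
    changed = True
    while changed:
        changed = False
        for name, fn in constraints:
            if name not in params:
                try:
                    params[name] = fn(params)
                    changed = True
                except KeyError:
                    pass  # this constraint's inputs are not yet available
    return params
```

The loop fires each constraint as soon as its inputs become known, which is the essence of propagation; the real system adds agent hierarchy, plans and search on top of this.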
This synthesis approach facilitates the incremental development of the system and provides a well-organized structure for the acquisition of domain knowledge. A more detailed description of the operation of the synthesis modules is given in [4].

5 Simultaneous Engineering

Simultaneous engineering addresses two fundamental issues in the design of any complex system. On one hand, the different, often conflicting requirements imposed on the design of a device from the points of view of the various design stages have to be addressed and harmonized. These include satisfaction of the original specifications, materials issues, manufacturing and assembly concerns, etc. On the other hand, a complex device will have many subsystems that are designed separately and have to be integrated into a working whole. In this section we briefly discuss two parts of our system which support simultaneous engineering: tolerance optimization and interference/clearance analysis.

5.1 Optimization of Tolerances

The goal of the tolerancing aspects and operators is to support the rational selection of tolerances based on considerations of cost, performance specification and sensitivity. When tolerance analyses are performed early in the design process, feedback from these analyses can be constructively used to guide design decisions. Tolerances may be optimized for cost, subject to performance constraints, but this requires information that is often not available until later in design. Part of the necessary information for tolerance optimization, the device geometry, is available quite early, and may be used to examine, in a qualitative sense, the sensitivity of performance to design decisions. We describe here the optimization problem and its component sub-problem, the computation of sensitivity information from design geometry. To obtain the optimal tolerance settings, three types of input information are required:
1.
The cost of holding tolerances, i.e. of controlling variations in dimensions during manufacturing.
2. The cost associated with the degradation of a design's performance measures.
3. The sensitivity of performance measures to variations in nominal part dimensions.

Once this input information is obtained, an optimization problem is formulated to compute estimates for the optimal tolerances. Individual dimensional variances are selected to minimize the total cost of controlling these variances plus the expected value of the quality loss function. The expected loss is transformed from a function of performances to a function of variances using the sensitivity relationships, the quadratic nature of the loss function, and the assumption of normal distributions for the dimensional deviations. The cost of holding tolerances and the cost of performance measure degradation are relatively independent of decisions made during the design process. These costs are derived from data about the manufacturing facility, including information about rework, scrap, repair and warranties. On the other hand, the sensitivity information is extracted from design representations generated during design synthesis. For these purposes, a device is represented abstractly as a linked series of homogeneous coordinate transformations and points. This abstract representation is automatically computed from a design representation by traversing a connection graph between parts. The rotation and translation parameters of the transforms and points describe the device dimensions. Each parameter is represented as a nominal value and an error distribution about that value. Distance functions between points are used to express performance measures. The sensitivities of performance measures to errors are computed at the nominal parameter values by a combination of symbolic differentiation and matrix concatenation.
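The trade-off can be illustrated in one dimension: a tight tolerance (small variance) is expensive to hold, while a loose one incurs expected quadratic quality loss through the sensitivity. The inverse-variance holding-cost model below is our simplification for illustration, not the paper's actual formulation:

```python
import math

def optimal_variance(hold_cost, loss_coeff, sensitivity):
    """Minimize total(v) = hold_cost / v + loss_coeff * sensitivity**2 * v.

    The first term models the cost of holding a tight tolerance; the
    second is the expected quadratic quality loss, which scales with the
    squared sensitivity.  Setting the derivative to zero gives the
    closed-form minimizer sqrt(hold_cost / (loss_coeff * sensitivity**2)).
    """
    return math.sqrt(hold_cost / (loss_coeff * sensitivity ** 2))
```

Note how a larger sensitivity pushes the optimum toward a tighter tolerance, which is exactly why early sensitivity information is valuable feedback.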
5.2 Interference and Clearance Analysis

The purpose of interference and clearance analysis is to verify that the separately designed components of a design do not interfere spatially, either as they are placed in the assembly or as they move through their paths in space. The facility consists of one test (for the dynamic interference analysis), two aspects (functional and 3D solid, which together comprise a model for the analysis) and two operators (which build the aspects from the feature aspect).

5.2.1 Functional and Solid Aspects

The functional aspect represents the kinematics of the window regulator design at the feature level. It is differentiated from the stick aspect described earlier by its inclusion of specific part geometry, which makes the spatial analysis meaningful. The three-dimensional solid aspect is used to test a particular configuration of solid representations for interference and proximity. An instantiation of this aspect consists of a set of (location, solid) pairs. The solids are generated by procedural calls to a solid modeller (VEGA), made at the time of creation of the functional aspect. The locations are specified dynamically from the state of the functional aspect. The 3D solid aspect serves two purposes in the system: precise and full representation of spatial position and geometric relationships, and visualization of designs through the user interface.

6 Conclusions

The work that we have done so far falls into two categories: identifying crucial issues in upgrading design systems, and assembling apparatus with which we can begin to study these issues. The system that has resulted is noteworthy for its openness, for its use of multiple representations, and for its ability to integrate diverse types of design tools. At present, the system's operational capabilities are as follows.
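In its simplest static form, an interference check reduces to testing solids for spatial overlap. The axis-aligned bounding-box test below is a crude stand-in (ours) for the solid modeller's actual interference test:

```python
def boxes_interfere(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)) axis-aligned boxes.

    Boxes overlap iff their extents overlap on every axis.
    """
    (alo, ahi), (blo, bhi) = a, b
    return all(lo1 < hi2 and lo2 < hi1
               for lo1, hi1, lo2, hi2 in zip(alo, ahi, blo, bhi))
```

A dynamic check would repeat such a test at sampled configurations along each component's path, with the locations supplied by the functional aspect.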
Given a set of specifications, the automatic parts of the system can synthesize a sketch (stick diagram), a parts/feature model (a more detailed representation than a sketch) and a solid model (a full description of the geometry) of a window regulator that meets the specifications. Through a combination of human and programmed activity, the system can also optimize dimensional tolerances (and thus solve a large part of the design-for-manufacturability problem for the regulator); point out incompatibilities that may be developing between the regulator and door geometries (the door is designed concurrently with the regulator); and draw attention to mechanical weaknesses in the regulator. Our plans for the immediate future include:
1. adding more automatic operators to the system so that most aspects in Fig. 2 can be reached by several different paths, and implementing these operators in a distributed network of computers;
2. studying ways of editing designs and rapidly attenuating the inconsistencies that tend to emanate from the site of a change;
3. studying ways to rapidly reconfigure design systems.

References

[1] Brown, D. and B. Chandrasekaran. Knowledge and Control for a Mechanical Design Expert System. Computer :92-100, July 1986.
[2] Erdman, A. and G. Sandor. Mechanism Design: Analysis and Synthesis. Prentice-Hall, Englewood Cliffs, 1984.
[3] Rehg, J., Elfes, A., Talukdar, S., Woodbury, R., Eisenberger, M. and Edahl, R. CASE: Computer-Aided Simultaneous Engineering. In Proceedings of the 1988 AI in Engineering Conference. Computational Mechanics, Stanford, August 1988.
[4] Rehg, James M. Computer-Aided Synthesis of Routine Designs. Master's thesis, Carnegie Mellon University, June 1988.
[5] Talukdar, S., Elfes, A. and Papanikolopoulos, N. Concurrent Design, Simultaneous Engineering and Distributed Problem Solving. In Proceedings of the 1988 AAAI AI in Design Workshop. AAAI, AAAI-88, Minneapolis, MN, August 1988. Submitted.
The Induction of Fault Diagnosis Systems from Qualitative Simulation*

D.A. Pearce
The Turing Institute
36 North Hanover St, Glasgow, Scotland

Abstract

This paper describes a methodology for the automatic construction of diagnostic expert systems, and its application to fault diagnosis of a satellite's electrical power subsystem. The synthesised knowledge base is compared with an existing expert system for the same application built using a commercial expert system shell. Both systems have been tested using a real-time satellite simulator which has the capability to fail components. A traditional knowledge-engineering approach involves building a prototype which is refined until satisfactory results are obtained. This process is error-ridden, as even in small systems rules can conflict, be irrelevant, or be missing. It is never clear when a system is complete, and validation is always difficult. As an alternative, a fault diagnosis knowledge base can be automatically synthesised from a qualitative model of the device. This is achieved by systematically simulating all component failures. Individual failures are used as examples. A learning algorithm is applied to the examples to output a set of diagnostic rules. The resulting rules are complete and consistent with the qualitative model and diagnose component failures in the model 100% accurately. Validation becomes the higher-level problem of ensuring that the qualitative simulation accurately models physical device behaviour.

Introduction

The traditional knowledge engineering approach to building an application in the fault diagnosis field is to choose a suitable expert system development tool and manually enter rules which cover all the possibilities the domain specialist can envisage. Even after a number of iterations the system is unlikely to be complete and consistent. Validation of the knowledge base can be a major problem if the system is to be used in a live environment.
For some systems an alternative approach is to use a qualitative model of a physical device and its possible failure modes as a specification. It is then possible to generate a knowledge base which accurately diagnoses failure of any single component in the model. The problem of validation becomes the higher-level issue of ensuring that the qualitative simulation accurately models the behaviour of the physical device. We have found that this is a task the domain specialist can accurately perform.

*Work has been carried out in collaboration with British Aerospace with funding from the European Space Agency.

We have developed a methodology for building fault diagnosis expert systems, and have tested it in an aerospace application. The application has driven development of software tools capable of being adapted to other problem domains.

The application chosen was fault diagnosis of the electrical power subsystem of an on-station satellite. A real-time numerical simulator for the satellite had previously been developed, and used for operator training. This allowed testing of the knowledge base on real data. In addition, a rule-based expert system using the commercial shell Envisage [Systems Designers, 1986] had previously been developed to analyse the simulator output for faults. Thus a direct comparison could be made with a separate expert system designed to perform the same task, but constructed via the traditional knowledge-engineering methodology.

It is our intention to build on the existing tools to develop a general-purpose software environment, capable of automatic generation of diagnostic knowledge bases.

Simulation of the satellite power subsystem has been achieved through constructing an executable model [Mozetic et al., 1988]. The model is deep as regards the distinction between deep causal knowledge and shallow, operational knowledge.
From: AAAI-88 Proceedings. Copyright ©1988, AAAI (www.aaai.org). All rights reserved.

We define shallow-level knowledge as knowledge that is sufficient for performing the task itself, but typically without any representation of the underlying causal mechanisms. Deep knowledge, on the other hand, captures an underlying causal structure and facilitates reasoning from first principles. When running the application, explanations and advice can be derived from the underlying model. Take, as an example, failure of any relay switch in the network of switches used to route solar array power to charge the batteries. This will not only be correctly detected, but advice is given in the form of an alternative (minimum change) relay configuration that will restore battery charge. This information is taken straight from the model's representation.

The model is qualitative in the sense that it does not deal with electrical components represented numerically as voltages and currents over time, but with components represented by symbolic descriptions that specify qualitative features. For example, in the model a voltage level may be considered low, normal or high. Such a qualitative modelling approach has several advantages over conventional numerical modelling:

- The qualitative view is closer to the domain specialist's descriptions of the reasoning about the operation of the device.
- To execute the model we do not have to know exact numerical values of the parameters in the model.
- The qualitative simulation is computationally less complex than the numerical simulation.
- The qualitative simulation can be used as a basis for constructing understandable explanations.

The model is used for automatic synthesis (through simulation) of a shallow, operational representation of the knowledge. This knowledge is bulky and is compressed using an inductive learning algorithm.
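As a toy illustration of the qualitative abstraction just described, a numeric telemetry reading can be mapped onto the symbolic space low/normal/high. The paper's model is written in Prolog and defines such states purely symbolically; this Python sketch, with invented thresholds, merely shows the idea:

```python
# Illustrative sketch of qualitative abstraction of a telemetry value.
# The thresholds (48.0, 52.0) are invented for illustration; the
# paper's Prolog model works with symbolic states directly.
def qualitative(voltage, low=48.0, high=52.0):
    """Map a numeric bus voltage onto a qualitative value."""
    if voltage < low:
        return "low"
    if voltage > high:
        return "high"
    return "normal"

print(qualitative(47.5))  # low
print(qualitative(50.0))  # normal
```

The advantage noted above follows directly: behaviour rules need only branch on three symbolic values, not on exact parameter values.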
Figure 1 illustrates the various levels of knowledge and transformations between their representations.

[Figure 1: Deep and shallow levels of knowledge. The deep-level causal model is transformed by qualitative simulation into bulky shallow operational knowledge, which a learning program compresses into compact shallow operational knowledge.]

3 Synthesising the Knowledge Base

The task of generating a knowledge base is split into the following sub-tasks:

- Development of a static, qualitative component model
- Development of heuristic knowledge about behaviour
- The simulation of all possible component failures to generate a set of examples
- Compression of the examples to generate a diagnostic rule base

The first two items jointly constitute the qualitative simulation. This model can be tested against the behaviour of the physical device by simulating failure of components and observing results. Satisfactory model behaviour can be established before compression takes place.

3.1 The qualitative component model

The component model defines all the basic components, their initial states and their relationship with each other. Indicators used in the model are also defined, stating under what circumstances they change value. In our application, the model of the power subsystem is based on Figure 2.

[Figure 2: Block Diagram of Satellite Power Subsystem, showing the solar array wings, Array Switching Regulator, Electrical Integration Unit, batteries, bus and payload.]

Power from the solar arrays is routed through the Array Switching Regulator (ASR) to supply power to the bus when the spacecraft is in sunlight. During eclipse the power comes from two batteries. The ASR contains a set of switches used to enable or disable solar array sections. During operation, a comparator detects a rise or fall in the bus voltage and automatically opens or closes ASR switches to restore the bus voltage to normal.
The Electrical Integration Unit contains relay switches necessary for main charging or trickle charging the two batteries.

The Prolog model contains a description of each component, and its relationship with upstream and downstream directly connected components. This is expressed using the comp predicate. The arguments are used to specify name, type, controlling device, input connections and output connections.

comp(array_3A, a, [], [],
     [or_switch_2, asr_switch_2]).
comp(comparator, c, [], [bus],
     [asr_switch_2, ..., asr_switch_10]).

These two comp clauses are used to define components array section 3A and the comparator. Here, array section 3A has no downstream (input) connections, and is directly connected upstream (output) to both ASR switch 2 and override switch 2. The comparator has direct input from the bus, and its output is connected to all the ASR switches.

In order to start the model, initial states for each device must be specified. This is achieved using the init_state predicate. The initial state may be conditional, depending on the state of some external factor such as the mission phase or the payload connected to the bus.

init_state(array_3A, no_power) :-
    init_state(mission_phase, eclipse).
init_state(array_3A, power) :-
    init_state(mission_phase, post_eclipse);
    init_state(mission_phase, solstice).
init_state(comparator, s000001111) :-
    init_state(load, 3).

3.2 The heuristic behaviour knowledge

Behaviour rules are used to specify the operation of the static component model. A set of behaviour rules exists for each component in the model, indicating how a change of state affects its neighbouring components. Simulation of faults in the power subsystem can be achieved through changing the state of the component to be failed, then firing the behaviour rules repeatedly until the system reaches a new stable state.
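One way to realize this fire-until-stable loop is with an active-component queue. The following Python sketch is only an approximation of the paper's Prolog machinery; the rule encodings and component names are invented for illustration:

```python
from collections import deque

def simulate(states, rules, failed, fail_state):
    """Fire behaviour rules until no component changes state.

    states: dict component -> state.  rules: list of functions that,
    given the current states and the active component, return a
    (component, new_state) effect or None.  All names are illustrative.
    """
    states = dict(states)
    states[failed] = fail_state           # inject the simulated failure
    queue = deque([failed])               # active-component queue
    while queue:
        comp = queue.popleft()
        for rule in rules:                # first applicable rule fires
            effect = rule(states, comp)
            if effect:
                target, new = effect
                if states.get(target) != new:
                    states[target] = new
                    queue.append(target)  # affected component becomes active
                break
    return states

# Tiny example: a low bus steps the comparator up, which closes a switch.
rules = [
    lambda s, c: ("comparator", "up") if c == "bus" and s["bus"] == "low" else None,
    lambda s, c: ("asr_switch_3", "closed") if c == "comparator" and s["comparator"] == "up" else None,
]
final = simulate({"bus": "normal", "comparator": "idle", "asr_switch_3": "open"},
                 rules, "bus", "low")
print(final["asr_switch_3"])  # closed
```

The loop terminates when the queue empties, i.e. when the model has reached a new stable state.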
The inference mechanism used is to form a queue of active components, and to fire a behaviour rule for the component at the head of the queue. Behaviour rules are tested until one is found which can successfully fire. This is a deterministic operation of finding the first rule that can fire from a linear search of the rule set. The result of a rule firing will typically be to change the state of another component, which then joins the active-component queue. The behaviour rules are repeatedly fired for the component at the head of the queue until the queue empties and the model reaches a static state. This complete process for any one component in the satellite model is typically executed in about a second.

As a result of components changing state, visible indicators may also change. Indicators are observable values which show the status of the system at various points. In the satellite application, indicators are the telemetry values which the spacecraft sends to earth at regular intervals. Following a simulated failure, a generated example makes use of the indicators as attributes, with the component failure as the class or decision. Two of the behaviour rules from the satellite application are listed:

rule 1 ::
    if component bus of_type b is low
    and component comparator of_type c is AnyState
    then comparator takes_next_up_state.

rule 9 ::
    if component comparator of_type c is s000111111
    and component asr_switch_3 of_type s is open
    then asr_switch_3 becomes closed.

3.3 Generation of the examples

To generate a set of failure examples, each component in the model is failed under every possible combination of external factors on the model. This gives a complete example set covering all possibilities. The example set is reduced by applying some common-sense constraints. An example is not considered if it is a duplicate of an existing one, or if an example describing a failure is identical to a normal state of operation with respect to the observable indicators.
The second constraint is necessary to prevent suggestions that faults have occurred under normal operation. For example, failing a solar array during an eclipse will go undetected until the eclipse is over. It would be unhelpful for the system to suggest that during normal operation any one of the solar arrays could have failed! In the satellite example, 61 functional components were failed under 8 payload values (0 through 7) and 3 solar phases (eclipse, post-eclipse and solstice). This resulted in 1464 (61 x 24) examples, which reduced to 708 under the above constraints.

3.4 Rule Induction

A version of the AQ rule induction algorithm called AQR has been used [Michalski and Larson, 1983; Clark and Niblett, 1987]. AQR induces a single decision rule for each failure in turn. It is able to deal with conflicting examples by giving a rule with a disjunction of failures. Two examples conflict if they contain identical attribute values, but have a different decision. This can happen in our application when identical telemetry indicator values are seen after failing different components.

Generally the process of rule induction takes a set of incomplete examples as input, and forms a general rule which covers the example set. This can best be described as induction. However, in the modelling case, the examples are complete and consistent and we generate performance rules by data compression. In the satellite application some 75 diagnostic rules are produced from over 700 examples, with an average of only 3 attribute tests to identify a fault, where each example contained over 30 attributes.

4 Applying the software

The software has been developed on a Sun 3 workstation using Quintus Prolog. Use has been made of Paneltools II [Hoff, 1987], an internally developed Prolog graphics package which provides an interface to Sun windows.
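The example-generation and reduction step described in Section 3.3 can be sketched as follows. This is an illustrative Python re-creation, not the actual Prolog system; simulate and normal_indicators stand in for the qualitative model:

```python
from itertools import product

def generate_examples(components, loads, phases, simulate, normal_indicators):
    """Fail every component under every combination of external factors,
    then drop exact duplicates and examples whose indicators are
    indistinguishable from normal operation (Section 3.3 constraints)."""
    seen, examples = set(), []
    for comp, load, phase in product(components, loads, phases):
        indicators = simulate(comp, load, phase)       # observable telemetry
        if indicators == normal_indicators(load, phase):
            continue                                   # looks like normal operation
        example = (indicators, comp)                   # attributes + decision
        if example not in seen:
            seen.add(example)
            examples.append(example)
    return examples

# The satellite application fails 61 components under 8 payload values
# and 3 solar phases: 61 * 8 * 3 = 1464 raw examples before reduction.
print(61 * 8 * 3)  # 1464

# Tiny demo: failing the array during an eclipse looks normal, so it is filtered.
demo = generate_examples(
    ["array_3A", "asr_switch_2"], [0, 1], ["eclipse"],
    simulate=lambda c, l, p: ("nominal", l) if c == "array_3A" and p == "eclipse" else (c, l),
    normal_indicators=lambda l, p: ("nominal", l))
print(len(demo))  # 2
```

Conflicting examples (identical indicators, different failed component) are deliberately kept here, since AQR resolves them with a disjunction of failures.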
Three separate processes have been developed:

- A non-interactive analysis program which systematically fails every component in the model for every external factor. The output of this program is a file of examples.
- Rule induction on the set of examples using a version of the AQ algorithm. Again this is not interactive, and its output is a file of diagnostic rules.
- A graphical, interactive diagnostic program which uses the output from the induction process to diagnose faults introduced in the model. This program combines simulation of component failure with expert system diagnosis using the knowledge base.

The user interface consists of a graphical qualitative simulation of the satellite power subsystem in which components such as switches and solar array panels are seen to change state. Using a mouse, the user of the system can fail components, view the graphical simulation and receive advice from the diagnostic knowledge base, which detects invalid patterns in the status indicators. The user may, in response to a piece of advice, perform an action such as closing a switch to verify the expert system's advice is correct.

5 Comparison with the hand-crafted system

The two knowledge-based systems, both designed to detect the same set of component failures, are compared in two different ways. Firstly, the real-time satellite simulator was used to create telemetry data files covering a range of different failure situations. Each KB was then run using this data as input. Secondly, the internal integrity of each KB is checked using the Knowledge Integrity Checker (KIC) [Pearce, 1987], a tool previously developed at the Turing Institute.

The Envisage system contains roughly 110 rules split fairly evenly between forward-chaining and backward-chaining rules.
The forward-chaining rules are used to control the execution from one KB area to another, while the backward-chaining rules are used to assign values to attributes. Total development time was of the order of 6 man-months.

Both knowledge bases required conversion into a Horn clause format before they could be suitable for input to the KIC. This was straightforward in both cases. The rule-induced knowledge base was automatically converted to change the form of each rule, to which manually constructed type information was added. The other knowledge base, developed using the Envisage shell from Systems Designers PLC, had to be manually converted. Again, type information was added. Although Envisage supports a fairly rich environment for developing applications, the underlying logic of the rules mapped easily onto Horn clause logic.

The rule-induced KB contains some 75 rules which forward chain from telemetry indicator values to reach a diagnosis. This is the executable set of rules used for comparison. However, for maintainability, the behaviour rules in the model are used. There are 64 high-level behaviour rules used to describe the operation of the power subsystem. Total development time for the modelling approach was of the order of 3 to 4 man-months.

Running the induced knowledge base through the integrity checker showed up no errors or inconsistencies. On the other hand, when the Envisage knowledge base was checked, the following problems were noted:

1. A type error which would prevent one rule from ever successfully firing.
2. Two unreachable clauses, which could be removed.
3. Four dead-end clauses, which indicated the definition of a particular predicate was incomplete.

Figure 3 summarises the results.

5.1 Testing KB on simulator data

At present the satellite has not been launched. Telemetry data is not available, therefore the simulator has been used.
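The dead-end check reported above is simple to state: a clause is a dead end if its body calls a predicate that no rule head and no fact defines. As a hedged illustration (the actual KIC is a Prolog tool; this Python sketch, with invented predicate names, only conveys the idea):

```python
def dead_end_clauses(rules, facts):
    """Illustrative version of one KIC check: a clause is a dead end
    if its body references a predicate defined by no rule head and no
    fact.  rules: list of (head_predicate, [body_predicates])."""
    defined = {head for head, _ in rules} | set(facts)
    return [(head, body) for head, body in rules
            if any(pred not in defined for pred in body)]

# 'threshold' is referenced but never defined, so 'classify' is a dead end.
kb = [("diagnose", ["telemetry", "classify"]),
      ("classify", ["threshold"])]
print(dead_end_clauses(kb, facts={"telemetry"}))
```

A dead-end report like this is exactly the kind of incompleteness signal the Envisage knowledge base produced four times.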
When the induced rules were tested on a small set (14) of failure situations created using the real-time satellite simulator, a 100% success rate was achieved in correctly diagnosing the faulty component. This compares favourably with the manually constructed expert system, which managed to detect 10 out of the 14 as failures, a 72% success rate. Although the sample size is too small to give accurate success rates, it is evident that the manually constructed rules contain omissions which are not present in the automatically synthesised rule base.

5.2 Checking KB rule integrity

Confidence in the internal integrity of a knowledge-based system can be increased through the use of the Knowledge Integrity Checker (KIC), which takes the knowledge base as an input, and produces as output a set of rules which have been identified as possible errors. The system indicates possible omissions, contradictions and errors, giving the system designer a powerful facility to help development and debugging of a knowledge-based application. The software has been developed in Quintus Prolog on a Sun workstation. KIC performs validation on knowledge bases represented in a Horn clause format, through detecting logical, structural and semantic inconsistencies in the rule base. The following inconsistencies are capable of being detected:

- Unreachable clauses
- Dead-end clauses
- Cyclic clauses
- Type checking
- Incompleteness
- Subsumption

[Figure 3: Comparison of results]

6 Conclusion of the study

For fault diagnostic applications, the study has shown that qualitative modelling techniques can be used to substantial benefit.

- Synthesised diagnostic rules are 100% accurate with respect to the model.
- Qualitative models are easily constructed using a logic programming representation, and are extensible.
- Applications can be constructed more cheaply, requiring less of the domain specialist's valuable time.
- Explanations during execution are improved through the ability to refer to the underlying structure of the device.
- Validation is easy for the domain specialist to perform, especially if the model simulation is interfaced to a graphical display.

7 Future Plans

The current plan is to develop and build on the techniques used to date to produce a general-purpose modelling environment for constructing diagnosis expert system applications. Research into current developments in qualitative modelling will be pursued to ensure the process of mapping the real world to a model can be achieved with elegance and efficiency.

7.1 Production of a general-purpose environment

Development of the modelling environment will be driven by one or more significant applications. Careful choice of application will ensure that the modelling environment will support similar types of application in a different problem domain. One of the chosen applications will be in the fault diagnosis field, although it is hoped other types of application can be identified which will fit the model. For example, it may be possible to tackle design or planning problems using the environment.

7.2 Multiple faults

The present modelling software assumes single-failure operation, rather than combinations of failures. In the satellite application this is sufficient, as the hardware is designed with single-fault tolerance and all operations are built around this premise. It is therefore reasonable that single component failure diagnosis is performed.

However, for many other diagnostic applications, single failure cannot be assumed. In these cases the model representation must be capable of handling double or multiple combinations of failures. It will be necessary to apply certain constraints on permissible multiple failures, otherwise exponential growth will occur in the systematic simulation and hence in the example set.
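The growth in question is easy to quantify: without constraints, allowing up to k simultaneous faults among n candidates yields the sum of binomial coefficients C(n, 1) + ... + C(n, k) candidate fault sets. A small illustration (the specific numbers are chosen only to show the scale):

```python
from math import comb

def fault_combinations(n_faults, max_simultaneous):
    """Number of candidate fault sets of size 1..max_simultaneous,
    before any domain constraints prune impossible combinations."""
    return sum(comb(n_faults, k) for k in range(1, max_simultaneous + 1))

# Single-fault assumption: 61 components -> just 61 candidate diagnoses.
print(fault_combinations(61, 1))  # 61
# Allowing up to 7 of 30 basic faults already yields millions of raw sets.
print(fault_combinations(30, 7))  # 2804011
```

This is why constraints on permissible multiple failures are essential before systematic simulation.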
In [Mozetic et al., 1988] a qualitative model of the heart is built to interpret ECG signals. Here multiple combinations of some 30 basic disorders are dealt with in the model. However, a maximum of seven of these were medically possible at one time, and further constraints, such as disregarding logically and physiologically impossible combinations, and also medically uninteresting states, further reduced the possibilities.

7.3 Hierarchical modelling

Hierarchical modelling could allow the division of a model into logically separate parts, with the ability to define sub-parts, and specify a relationship between the various levels and units on the same level. This would allow large models to be more easily maintained, and would also facilitate a graphical representation of the model on the screen with an ability to expand and hide sub-modules.

7.4 Time-based reasoning

Implementation of time-based reasoning can open up areas previously ill-suited to a qualitative modelling approach. The current system assumes that any component in the model will only change state as a direct result of some other component-based action. It could be possible to model time as a qualitative entity, and allow definitions of components against time. A time-based simulation could be achieved by stepping through time units, where units are defined according to the particular application. In the satellite application, this would allow elegant modelling of such functions as a battery losing its charge during an eclipse. QSIM, the qualitative simulation developed by Kuipers [Kuipers, 1986], allows such time-based simulation. In this system components may be defined as monotonically increasing, decreasing or remaining constant over time.

References

[Clark and Niblett, 1987] Peter Clark and Tim Niblett. Induction in noisy domains. In I. Bratko and N.
Lavrac, editors, Progress in Machine Learning (Proceedings of the 2nd European Working Session on Learning), Sigma, Wilmslow, UK, 1987.

[Systems Designers, 1986] Systems Designers. Envisage Reference Manual. Technical Report, Systems Designers PLC, Camberley, Surrey, 1986.

[Hoff, 1987] A.A. Van Hoff. Paneltools II Documentation and Reference Manual. Technical Report, The Turing Institute, Glasgow, 1987.

[Kuipers, 1986] B. Kuipers. Qualitative simulation. Artificial Intelligence, 29(3):289-338, 1986.

[Michalski and Larson, 1983] R.S. Michalski and J. Larson. Incremental generation of VL1 hypotheses: the underlying methodology and the description of program AQ11. Technical Report ISG 83-5, The University of Illinois at Urbana-Champaign, Department of Computer Science, Urbana, 1983.

[Mozetic et al., 1988] I. Mozetic, I. Bratko, and N. Lavrac. Automatic synthesis and compression of cardiological knowledge. Machine Intelligence 11, 1988.

[Mycroft and O'Keefe, 1984] A. Mycroft and R.A. O'Keefe. A polymorphic type system for PROLOG. Artificial Intelligence, 23:295-307, 1984.

[Pearce, 1987] D. Pearce. KIC: A Knowledge Integrity Checker. TIRM 87-025, The Turing Institute, Glasgow, 1987.
Distributed Truth Maintenance

David Murray Bridgeland and Michael N. Huhns
Microelectronics and Computer Technology Corporation
Artificial Intelligence Laboratory
3500 West Balcones Center Drive
Austin, TX 78759-6509
bridgeland@mcc.com

Abstract

In this paper we define the concept of logical consistency of belief among a group of computational agents that are able to reason nonmonotonically. We then provide an algorithm for truth maintenance that guarantees local consistency for each agent and global consistency for data shared by the agents. Furthermore, we show the algorithm to be complete, in the sense that if a consistent state exists, the algorithm will either find it or report failure. The algorithm has been implemented in the RAD distributed expert system shell.

Introduction

Two trends have recently become apparent out of the widespread use of knowledge-based systems: 1) systems are being developed for larger and more complicated domains, and 2) there are attempts to use several small systems in concert when their application domains overlap. Both of these trends argue for knowledge-based systems to be developed in a distributed fashion, where modules are constructed to interact productively. The individual modules then are characteristic of intelligent agents. The interconnected agents can cooperate in solving problems, share expertise, work in parallel on common problems, be developed modularly, be fault tolerant through redundancy, represent multiple viewpoints and the knowledge of multiple experts, and be reusable. Additional motivations are presented in (Huhns, 1987) and (Gasser and Huhns, 1989). But in order for these agents to coordinate their activities and cooperate in solving mutual problems, it is essential that they be able to communicate with each other.
Further, in order for them to interact intelligently and efficiently, we believe that the agents must be able to assess and maintain the integrity of the communicated information, as well as of their own knowledge.

Knowledge Base Integrity

There are many desirable properties for the knowledge base of an expert system or agent, such as completeness, conciseness, accuracy, and efficiency. For an agent that can reason nonmonotonically, there are additional properties used to describe the integrity of the agent's knowledge base: stability, well-foundedness, and logical consistency. A stable state of a knowledge base is one in which 1) each knowledge base element that has a valid justification is believed, and 2) each knowledge base element that lacks a valid justification is disbelieved. A well-founded knowledge base permits no set of its beliefs to be mutually dependent. A logically consistent knowledge base is one that is stable at the time that consistency is determined and in which no logical contradiction exists. Depending on how beliefs, justifications, and data are represented, a consistent knowledge base may be one in which no datum is both believed and disbelieved (or neither), or in which no datum and its negation are both believed. These concepts are often extended to include other types of contradictions.

In addition, any algorithm that attempts to maintain well-founded stable states of a knowledge base, such as one of the many algorithms for truth maintenance (Doyle, 1979; de Kleer, 1986; Martins and Shapiro, 1988; McAllester, 1980; Russinoff, 1985), should be complete, in the sense that if a well-founded stable state exists, the algorithm will either find it or report failure. In general, we desire each agent in a multiagent environment to have a complete algorithm for maintaining the integrity of its own knowledge base.
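The stability condition just defined can be made concrete. In the following Python sketch (illustrative only; the paper gives no code), a justification is a pair of an IN-list and an OUT-list, it is valid when every IN-list datum is believed and every OUT-list datum is disbelieved, and a labeling is stable when a datum is IN exactly when some justification for it is valid:

```python
def valid(just, label):
    """A justification is valid when all its IN-list antecedents are
    believed and all its OUT-list antecedents are disbelieved."""
    inlist, outlist = just
    return (all(label.get(d) == "IN" for d in inlist) and
            all(label.get(d) == "OUT" for d in outlist))

def stable(justifications, label):
    """Stability: a datum is IN iff it has at least one valid justification."""
    return all(
        (label[d] == "IN") == any(valid(j, label) for j in justs)
        for d, justs in justifications.items())

# q needs p to be IN; r is justified by p being OUT (nonmonotonic).
justs = {"p": [], "q": [(["p"], [])], "r": [([], ["p"])]}
print(stable(justs, {"p": "OUT", "q": "OUT", "r": "IN"}))   # True
print(stable(justs, {"p": "OUT", "q": "OUT", "r": "OUT"}))  # False
```

The second labeling is unstable because r has a valid (nonmonotonic) justification yet is disbelieved.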
However, the above definitions of properties for a single knowledge base are insufficient to characterize the multiple knowledge bases in such a multiagent environment. When agents that are nonmonotonic reasoners exchange beliefs and then make inferences based on the exchanged beliefs, then new concepts of knowledge-base integrity are needed. In addition, the relevant concept of global truth maintenance becomes especially problematic if agents must compute their beliefs locally, based on beliefs communicated and justified externally. The next sections extend the above definitions to the multiagent case.

The JTMS

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

We presume that each agent has a problem-solving component, separate from its knowledge base, that makes inferences and supplies the results to the knowledge base. Our discussion applies to the set of beliefs that are held and maintained in this knowledge base. In particular, we focus on the systems for maintaining beliefs known as truth-maintenance systems (TMS) (Doyle, 1979). TMSs are a common way to achieve knowledge base integrity in a single-agent system, because they deal with the frame problem, they deal with atomicity, and they lead to efficient search. Furthermore, the justification networks they create can be used for nonmonotonic reasoning, problem-solving explanations to a user, explanation-based learning, and multiagent negotiation. Our research is based on a justification-based TMS, in which every datum has a set of justifications and an associated status of IN (believed) or OUT (disbelieved).

In the example considered below, an initial state of a distributed knowledge base is given and presumed consistent. Our goal is to construct a consistent extension of this state or determine that no such extension exists.
The distributed TMS (DTMS) algorithm presented for this task is most often invoked to restore consistency when a consistent state is disrupted by altering the justification for a datum.

Consistent Beliefs among Agents

Consider a network of many agents, each with a partially independent system of beliefs. The agents interact by exchanging data, either unsolicited or in response to a query. Each agent has two kinds of data in its knowledge base:

Shared Data: Beliefs that the agent has shared with another agent sometime in the past.

Private Data: Beliefs that the agent has never shared with another agent.

A private datum might become a shared datum by being told to another agent, or by being the answer to some other agent's query. Once shared with other agents, a datum can never again be private. Each shared datum is shared by a subset of the agents in the network, precisely those that have either sent or received assertions about the datum.

We extend the concept of knowledge-base consistency stated above by defining four degrees of consistency and well-foundedness that are possible in a multiagent system.

Inconsistency: one or more agents are individually inconsistent, i.e., at least one agent has a private datum without a valid justification and labeled IN, or a private datum with a valid justification and labeled OUT.

Local Consistency: each agent is locally consistent, i.e., no private OUT datum has a valid justification, and each private IN datum has a valid justification. However, there may be global inconsistency among agents: there may be a shared datum that one agent believes to be IN and another believes to be OUT.

Local-and-Shared Consistency: each agent is locally consistent and each agent is mutually consistent about any data shared with another agent, i.e., each shared datum is either IN in all the agents that share it or OUT in those agents. There is, however, no global consistency.
Global Consistency: the agents are both individually and mutually consistent, i.e., their beliefs could be merged into one large knowledge base without the status of any datum necessarily changing.

In the absence of interagent communication, and presuming the local environment of each agent is consistent, Local Consistency should hold. The introduction of interagent communication, however, tends to drive the system towards Inconsistency, because the agents might receive data that conflict with their current beliefs. The mechanism for truth maintenance we describe below enables each agent then to strive for Local-and-Shared Consistency. The presumption here is that the shared data are the most important, because they affect the problem solving of another agent, and so special effort should be made to maintain their consistency.

Although our goal is to maintain Local-and-Shared Consistency, we at times allow the system to fall short of this goal in order to permit agents to have different viewpoints. In this case, one agent may hold a belief that is known to be contrary to the belief of a second agent. The agents do not attempt to resolve this dispute if resolution would result in their being individually inconsistent. A consequence of this is that these agents should then not believe any data originating from each other, unless that agent can prove that its belief for that data is independent of the disputed data.

The corresponding degrees of well-foundedness are:

Ill-Foundedness: individual agents have beliefs that are internally ill-founded.

Local Well-Foundedness: individual agents have beliefs that are internally well-founded; however, there may be shared data that are IN but have no valid justifications in any agent.

Local-and-Shared Well-Foundedness: individual agents have beliefs that are internally well-founded, and every IN shared datum has a valid justification in some agent; however, there may be ill-founded circularities of beliefs among groups of agents.
Global Well-Foundedness: every datum has a globally valid justification and no set of data, whether local to an agent or distributed among a group of agents, is mutually dependent.

A Multiagent TMS

In the classical TMS, a datum can be either IN or OUT. For the DTMS, we refine the IN status to two substatuses: INTERNAL and EXTERNAL. An INTERNAL datum is one that is believed to be true, and that has a valid justification. An EXTERNAL datum is believed to be true, but need not have a valid justification. Intuitively, the justification of an EXTERNAL datum is "so-and-so told me." Hence, only a shared datum can be EXTERNAL. For Local-and-Shared Well-Foundedness, a shared datum must be INTERNAL to at least one of the agents that shares it and either INTERNAL or EXTERNAL to the rest of the agents.

In the only complete justification-based TMS labeling algorithm (Russinoff, 1985), Russinoff takes a generate-and-test approach, first unlabeling a collection of data, then attempting to relabel that collection. On failure to relabel, a superset of the last unlabeled collection is unlabeled. We take a similar approach in the DTMS. Since new data in one agent can change not only the status of that agent's beliefs, but also that of other agents, our unlabeling and subsequent labeling will sometimes involve multiple agents.

The support status of a shared datum is jointly maintained by several agents. Hence, a single agent is generally not free to change the status of a shared datum of its own accord. It must coordinate with the other agents so that they are all consistent on the status of the datum. Central to the DTMS then is the single-agent operation of label-wrt. label-wrt is a variation of classical TMS labeling in which the statuses of some data, though unlabeled, are fixed by external requirements. More precisely, label-wrt is given a network of data. Some of the data have statuses of IN, OUT, INTERNAL, or EXTERNAL.
Other data are unlabeled. For each shared datum, there is a desired label of either OUT, INTERNAL, or EXTERNAL. label-wrt either finds a consistent, well-founded labeling of the network that satisfies the shared-data requirements, or it reports failure. Space prohibits a presentation of an algorithm to implement label-wrt. Our approach is a variation of Russinoff's well-founded and complete labeling algorithm (Russinoff, 1985).

Algorithm Schema

The DTMS is a double generate and test. Relabeling is invoked by the addition or removal of a justification. When invoked, the DTMS does the following three things:

1. Unlabel some data, including the newly justified datum and presumably its consequences. This unlabeled data set might be confined to a single agent or it might span several agents. If a shared datum is unlabeled in some agent, it must be unlabeled in all the agents that share it.

2. Choose labelings for all the unlabeled shared data, as defined above.

3. Label each of the affected agents with respect to the requirements imposed by the shared data, invoking label-wrt as described above. If any of the affected agents fails to label, then backtrack: either choose different labelings for the shared data (step 2), or unlabel a different set of data (step 1).

This schema will be refined later, but some nice properties emerge at this abstract level:

o Any labeling found by the DTMS will have Local-and-Shared Consistency and Well-Foundedness.

o If the two generate steps are exhaustive, the DTMS is complete: it will find a labeling should one exist.

Note that these properties are true both of the DTMS algorithm described in this paper and of any other algorithm that conforms to this schema.

Unlabeling

When the DTMS algorithm is invoked, it starts by unlabeling a collection of data. This collection may be confined to a single agent or it may span many agents. However, it must meet the following constraints:

1.
It must include the datum that originally acquired the new justification.

2. A shared datum that is unlabeled in one agent must be unlabeled in all the agents that share it.

3. On failure to label the collection, it must generate a new collection of unlabeled data. To guarantee completeness, the generation must be exhaustive: it must eventually generate a collection sufficiently large that failure to label it means the whole network cannot be labeled.

Using only these constraints, unlabeling is underconstrained: many algorithms satisfy them. For example, on any status change one could unlabel all data in all agents. This global unlabeling satisfies all the constraints and is quite simple, but it is too inefficient for practical use. The global unlabeling does, however, reveal two DTMS principles that motivate the more complex algorithm presented later:

Principle 1 Belief changes should be resolved with as few agents as possible.

Principle 2 Belief changes should be resolved by changing as few beliefs as possible.

Most belief changes can be resolved by changing things only "downstream" of the new justification, i.e., those data that directly or indirectly depend on the datum newly justified. It is sometimes necessary to move "upstream" as well, and relabel data that directly or indirectly support the status of the newly justified datum. Consider the knowledge base for a single agent shown in Figure 1 (Russinoff, 1985). Here, datum Q acquires the new justification shown in dotted lines. If only P and Q are reassigned, the system is forced to report an unsatisfiable circularity. In order to restore stability, the status of the data upstream from P must be changed: if the system makes S OUT and R IN, both P and Q can be OUT.

74 AUTOMATED REASONING

Figure 1: Relabeling upstream data to resolve an odd loop

Principle 3 Belief changes should be resolved downstream if possible; upstream relabeling should be minimized.
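The "downstream" notion used by these principles can be sketched concretely. The following is only an illustration under our own encoding (the names and representation are not from the paper): if each datum maps to the data it directly supports, the downstream set of a datum is everything reachable from it.

```python
from collections import deque

# Illustrative sketch (our own encoding, not the paper's): `supports` maps a
# datum to the data it directly supports; "downstream" of d is everything
# reachable from d, i.e., the data that directly or indirectly depend on it.
def downstream(supports, d):
    seen, queue = set(), deque([d])
    while queue:
        for y in supports.get(queue.popleft(), ()):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen

# In a chain R -> P -> Q, the downstream of P is {Q}; moving "upstream" from
# P would instead reconsider R, which supports it.
print(sorted(downstream({"R": ["P"], "P": ["Q"]}, "P")))
```

An "upstream" move corresponds to running the same traversal over the inverted map, which is why relabeling upstream drags in more data and is minimized by Principle 3.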
The above principles motivate the algorithm unlabel. It attempts to minimize both the involvement of other agents and the unlabeling of upstream data, but prefers the former to the latter. Unlabel is invoked on a list containing either the newly justified datum, when unlabel is first invoked, or the unlabeled data that could not be labeled on a previous invocation. Unlabel attempts to find yet-to-be-unlabeled private data downstream of those already unlabeled. If there are none, it looks for shared data downstream, unlabeling those in all the agents that share them, and also unlabeling private data downstream of the shared data. Finally, if there are no shared data downstream that are yet to be unlabeled, it unlabels data just upstream of all the downstream data, and all private data downstream of that. If there is nothing yet to be unlabeled upstream, unlabel fails and, in fact, the data admit no Local-and-Shared Consistent and Well-Founded labeling.

Consider the DTMS network in Figure 2. There are two agents, Agent 1 and Agent 2, and they share the datum T. As in Figure 1, the initial labeling shown in the diagram is perturbed by the addition of the new dotted justification. Agent 1 initially unlabels just the changed datum and private data downstream, P and Q, but there is no consistent relabeling. Hence, Agent 1 unlabels all shared data downstream of P and Q, and all private data downstream from there: P, Q, both Ts, and U. Again labeling fails. Since there are no further shared data downstream, Agent 1 and Agent 2 unlabel upstream and privately downstream from there: P, Q, both Ts, U, R, and S. Now labeling succeeds (with S and U IN and everything else OUT). Had labeling failed, unlabel would not be able to unlabel more data, and would report that the network is inconsistent.

Distributed System Issues

To be implemented, the unlabel algorithm needs to be distributed.
This is straightforward if each agent keeps track of which data are currently unlabeled and reports to other agents only whether yet-to-be-unlabeled data became unlabeled. Upstream and downstream messages mention only which shared datum is affected, and the corresponding acknowledgements report only whether new data were unlabeled.

Figure 2: A DTMS network before relabeling

When a group of agents are labeling, their beliefs are in a state not suitable for reasoning. Hence, queries from other agents must be queued until labeling is complete. However, if two agents share a datum and are involved in separate unlabeling tasks (i.e., tasks initiated by different changes to beliefs), deadlock could occur. Fortunately, the separate unlabeling tasks can be combined into a single one, with a corresponding combination of the subsequent labeling.

Labeling

Once an area of data is unlabeled, the DTMS must pick candidate labels for the shared data, such that each datum is either OUT in all the agents that share it, or INTERNAL in at least one agent and INTERNAL or EXTERNAL in the rest. Any exhaustive means of picking will guarantee completeness. The following Prolog code shows one means:

    label-shared([], Shared-labels, Shared-labels).
    label-shared([Agent|Agents], So-far, Final) <--
        label-one-shared(Agent, So-far, New),
        label-shared(Agents, New, Final).

label-shared relates its first argument, a list of agents, and its third argument, a list of shared data and their labels. The second argument is used to pass commitments about labels for shared data to recursive calls to label-shared. The relation calls label-one-shared, attempting to assign labels to a single agent that are consistent with those already assigned to others. If it finds such an assignment, it recursively attempts to find labels for the other agents. On failure, it backtracks and looks for alternatives to the previous single-agent labeling.
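The admissibility condition that the shared-data labeling enforces can be sketched outside Prolog. The following is a rough Python analogue under our own framing (the function name and flat enumeration are ours, not the paper's): for one datum shared by n agents, enumerate every joint labeling that is OUT everywhere, or INTERNAL in at least one agent and INTERNAL or EXTERNAL in the rest.

```python
from itertools import product

# Rough Python analogue of the generate step (our own framing, not the
# paper's code): enumerate every admissible joint labeling of one shared
# datum across n agents -- OUT in all of them, or INTERNAL in at least one
# agent and INTERNAL or EXTERNAL in the rest.
def shared_labelings(n_agents):
    for combo in product(("OUT", "INTERNAL", "EXTERNAL"), repeat=n_agents):
        if all(s == "OUT" for s in combo):
            yield combo
        elif "OUT" not in combo and "INTERNAL" in combo:
            yield combo

# For two agents there are 9 raw combinations, of which 4 are admissible:
# (OUT, OUT) plus the three all-believed combinations with an INTERNAL.
print(len(list(shared_labelings(2))))
```

Exhaustiveness of this candidate set is what makes the generate-and-test schema complete; label-one-shared produces the same candidates lazily, one agent at a time, under backtracking.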
This algorithm could be implemented on a fully connected multiagent system by having each agent responsible for generating labels that are consistent with those already generated, as in label-one-shared, and implementing the recursive nature of label-shared with a message passed depth-first from agent to agent. This message needs to contain a list of agents already visited, so that none are revisited, and a list of the already labeled shared data. Also, before an agent passes a message to another, it needs to record its state for future backtracking.

The shared-data labeling fits into the larger labeling process as follows:

    label(Agents) <--
        label-shared(Agents, [], Shared),
        label-private(Agents, Shared).

    label-private([], Shared).
    label-private([Agent|Agents], Shared) <--
        local-shared(Agent, Shared, Local-Shared),
        label-wrt(Agent, Local-Shared),
        label-private(Agents, Shared).

The private labeling follows the shared-data labeling. The private data are labeled one agent at a time. First the relation local-shared extracts the shared labels relevant to a single agent from the list of all the shared labels. Then label-wrt attempts to label the private data consistently with the shared data. Any failure to label causes backtracking. This algorithm will find a Local-and-Shared Consistent and Well-Founded labeling of an unlabeled area in a collection of agents.

Unfortunately, this algorithm has poor computational performance. If there is no consistent labeling of the agents, the DTMS will generate all shared-data labelings and attempt to label each privately. The performance can be improved by interleaving the labeling of the shared data and the private data. Failure in a single agent to find a private labeling consistent with the labels of the shared data will then cause earlier backtracking:

    label(Agents) <--
        label-internal(Agents, []).

    label-internal([], Shared).
    label-internal([Agent|Agents], So-far) <--
        label-one-shared(Agent, So-far, New),
        local-shared(Agent, New, Local-Shared),
        label-wrt(Agent, Local-Shared),
        label-internal(Agents, New).

label-internal could be implemented by a message passed depth-first from agent to agent. This message needs to contain a list of the agents visited so far, and a record of the labels given so far to the shared data.

Consider again Figure 2. R, S, P, Q, U, and both Ts have now been unlabeled. Agent 1 chooses labels for T and attempts to label his private data in a consistent manner. If Agent 1 chooses INTERNAL as T's label, he finds there is no labeling of his private data to make T internally justified. A next attempt with EXTERNAL is consistent (with S IN and everything else OUT), and Agent 1 passes his label of T to Agent 2. Agent 2 must then label T INTERNAL, but finds there is no way to label U. Agent 2 then backtracks and Agent 1 tries a final attempt to label T, this time as OUT. This succeeds with S IN and everything else OUT, and Agent 2 can also label T OUT by labeling U IN.

Optimizations

This DTMS algorithm admits several local optimizations:

o An agent can forgo the labeling of its unlabeled shared data by label-one-shared and instead label everything that is unlabeled with label-wrt. This requires a more sophisticated label-wrt that can generate INTERNAL and EXTERNAL labels for the shared data, as well as IN and OUT labels for the private data.

o An agent can keep a record of label attempts, caching for each combination of shared-data labels whether it succeeded or failed to find a private labeling. A call to label-wrt will first consult the cache, thus avoiding redundant work. Ordering the shared data and then indexing the combinations in a discrimination net seems to be a good implementation for this cache.

o In the above algorithm, only one agent is active at a time.
However, there is something productive that the other agents can do: fill in their label caches by attempting to find private labelings for shared-data combinations not yet examined. In fact, this effort could be guided by other agents. If agent 1 shares data with agents 2, 3, and 4, then when agent 1 passes a label-internal message to agent 2, it could advise agents 3 and 4 about its decisions on the labels of shared data. The other agents could then work only on the portion of their caches that is consistent with agent 1's decisions.

Discussion

There have been many other attempts to develop systems of cooperating agents or knowledge sources. Early attempts, based on the blackboard model, involved agents with independent knowledge bases. The independence was achieved by restricting agent interactions to modifications of a global data structure, a blackboard, and by minimizing overlap in the agents' knowledge. Later systems allowed richer agent interactions and overlapping knowledge, but the agents were required to have consistent knowledge and to reason monotonically. This led to representational problems, because different experts in the same domain often have different perspectives and conflicting knowledge, making it difficult to construct a coherent problem-solving system for that domain. Earlier solutions were to allow inconsistent knowledge bases; this enabled the conflicting knowledge to be represented, but it did not confront the problem of how to resolve the conflicts.

Other researchers have explored negotiation as a means to mediate among conflicting agents. These systems have involved either monotonic reasoners, such as (Sycara, 1989), or nonmonotonic but memoryless reasoners, such as (Zlotkin and Rosenschein, 1989), i.e., reasoners that simply discard old solutions and re-solve in the face of conflicts.
Another approach is to consider the research efforts in multiple-context truth-maintenance systems (de Kleer, 1986; Martins and Shapiro, 1988) from a distributed viewpoint. These systems manipulate belief spaces, or contexts, in order to remove inconsistent ones. One might imagine each belief space represented by a different agent, who then maintains it. However, in this model, the belief spaces themselves do not interact and, in fact, the belief revision system treats each space separately.

A notable exception to this is the work of (Mason and Johnson, 1989), who developed a distributed assumption-based TMS. In this system, agents interact by exchanging data, with their associated assumption sets, and NOGOODS, i.e., bad assumption sets. The agents maintain separate belief spaces and may disagree about an exchanged datum. The agents therefore have Local-and-Shared Well-Foundedness, but only Local Consistency.

The system presented herein, although an improvement in that it achieves Local-and-Shared Consistency, nevertheless suffers from several deficiencies:

o First, by not supporting some form of explicit negation or reasons for disbelief in a datum, the system allows an agent with less information to dominate an agent with more. For example, if two agents each have an identical justification for belief in a shared datum, and one agent learns a fact that invalidates its justification, the other agent's still-valid justification will be sufficient for both to continue believing in the datum.

o Second, our algorithm can involve significant computational overhead if the agents share a large amount of data, if the data have many connections to the rest of the agents' belief networks, and if the status of their beliefs changes frequently.

We are currently investigating the likelihood and severity of these deficiencies in real-world application domains.
We are also developing a mechanism for negotiation that uses the beliefs supplied by our DTMS.

The above algorithm has been implemented in the RAD distributed expert system shell, which includes a framework within which computational agents can be integrated. RAD is a first step toward cooperative distributed problem solving among multiple agents. It provides the low-level communication and reasoning primitives necessary for beneficial agent interactions, but it does not yet guarantee successful and efficient cooperation. The next steps will require increased intelligence and capabilities for each agent, resulting in more sophisticated agent interactions occurring at a higher level.

References

Johan de Kleer. An Assumption-Based TMS, Extending the ATMS, and Problem Solving with the ATMS. Artificial Intelligence, 28(2):127-224, March 1986.

Jon Doyle. A Truth Maintenance System. Artificial Intelligence, 12(3):231-272, 1979.

Les Gasser and Michael N. Huhns. Distributed Artificial Intelligence, Volume II. Pitman Publishing, London, 1989.

Michael N. Huhns. Distributed Artificial Intelligence. Pitman Publishing, London, 1987.

Joao P. Martins and Stuart C. Shapiro. A Model for Belief Revision. Artificial Intelligence, 35(1):25-79, May 1988.

Cindy L. Mason and R. R. Johnson. DATMS: A Framework for Distributed Assumption-Based Reasoning. In Les Gasser and Michael N. Huhns, editors, Distributed Artificial Intelligence, Volume II, pages 293-317. Pitman Publishing, London, 1989.

David A. McAllester. An Outlook on Truth Maintenance. AI Memo No. 551, Artificial Intelligence Laboratory, MIT, Cambridge, MA, August 1980.

David M. Russinoff. An Algorithm for Truth Maintenance. MCC Technical Report No. ACA-AI-062-85, Microelectronics and Computer Technology Corporation, Austin, TX, April 1985.

Katia Sycara. Multiagent Compromise Via Negotiation. In Les Gasser and Michael N. Huhns, editors, Distributed Artificial Intelligence, Volume II, pages 119-137.
Pitman Publishing, London, 1989.

Gilad Zlotkin and Jeffrey S. Rosenschein. Negotiation and Task Sharing among Autonomous Agents in Cooperative Domains. In Proceedings IJCAI-89, pages 912-917, Detroit, MI, August 1989.

o Third, we believe unsatisfiable circularities are more likely in a distributed system.
An Algebraic Approach to Conflict Resolution in Planning

Qiang Yang *
Department of Computer Science
University of Waterloo
Waterloo, Ont., Canada, N2L 3G1
qyang@watdragon.waterloo.edu

Abstract

This paper presents an algebra for conflict resolution in nonlinear planning. A set of conflicts in a plan is considered as a constraint network. Each node in the network represents a conflict, and is associated with a set of alternative ways for resolving it. Thus, resolving conflicts in a plan corresponds to selecting a set of consistent resolution methods so that, after they are applied to the plan, every conflict can be removed. The paper discusses the representational issues related to conflict resolution, presents an algebra for resolving conflicts, and illustrates that some modified algorithms for preprocessing networks of constraints can greatly enhance the efficiency of conflict resolution.

Introduction

Many planners can be considered as search in a space of possible plans [Chapman, 1985; Sacerdoti, 1977; Stefik, 1981; Tate, 1977; Wilkins, 1988]. A major contributing factor to the branching factor in this space is the number of alternative ways of resolving a set of conflicts in a plan. Since search efficiency is greatly affected by the branching factor in the search space, reducing the number of ways of resolving a set of conflicts is an important way of improving planning efficiency. Unfortunately, most existing planning systems [Chapman, 1985; Sacerdoti, 1977; Stefik, 1981; Tate, 1977; Wilkins, 1988] spend little or no effort in analyzing conflicts in an intermediate plan in order to reduce the number of ways of resolving them. Usually there is more than one conflict introduced into a given plan as a result of some planning activity.
These planners will simply generate a set of resolution methods for each conflict, and either commit to one of them in a depth-first manner, or generate the set of all possible successor states in a breadth-first way. However, as we will show later in the paper, some of the conflict resolution methods can be proven to be either not applicable to the current plan, or related to other methods in such a way that they are redundant. In most of the existing planning systems, a great deal of computational overhead can be spent on these "useless" branches. It would be desirable to reduce the number of alternative ways in many circumstances through an analysis of conflicts in a plan.

Doing no analysis at all of the inter-relations among the conflicts can be considered as one extreme on a spectrum of search control, while doing a complete analysis in order to minimize the number of possible alternatives can be considered as the other extreme on that spectrum. Certainly, a complete analysis may not be advantageous over doing no analysis in improving planning efficiency, since there may exist some middle points on the spectrum that are better than either extreme. However, without knowing both ends of the spectrum well, it is hard to make an intelligent decision on how much analysis is needed. It is the purpose of this paper to investigate various ways of minimizing the number of alternative methods for resolving a given set of conflicts.

In particular, this paper presents a conflict algebra for analyzing methods for resolving a set of conflicts in nonlinear plans. Modified versions of the preprocessing algorithms for network-based constraint satisfaction problems (CSPs) can be used for efficient application of the algebra.

* This work was supported in part by an interim research grant from the Faculty of Mathematics at the University of Waterloo.
The modified algorithms take into account a wider range of possible constraints, so that it is possible to prune from the constraint network not only values that are inconsistent, but also those that can be proven "redundant." Interestingly, these algorithms are applicable to large classes of problem domains rather than just the planning domain, and in this respect, part of the results in this paper should also be of interest to researchers in the CSP area.

Below we discuss how conflicts in a plan are represented. Then we consider the details of the conflict algebra, and show how to apply it to planning. This is followed by a discussion of the algorithms for preprocessing a constraint network.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Conflicts and Conflict Resolutions

Preliminaries

A plan consists of a set of operators, a set of precedence constraints on the operators, and a set of codesignation constraints on the bindings of variables of the operators. Each operator o is defined in terms of a set of preconditions, P_o, and a set of effects, E_o. Two special operators, start and finish, exist in any plan, representing the initial and goal situations, respectively. The operator start (finish) has an empty set of preconditions (effects), and has as its effects (preconditions) the set of conditions true in the initial (goal) situation.

Let P be a plan. We adopt the notational conventions of [Chapman, 1985] for precedence and codesignation constraints. Thus, a ≺ b denotes that operator a precedes operator b in P, and p ≈ q or p ≉ q denotes that p and q are constrained to codesignate or non-codesignate in P, respectively. If for two variables x and y, x = y, then x and y are constrained to be bound to the same constant. Moreover, we also assume both the definitions and the graphical notation of necessarily (□) and possibly (◊) in [Chapman, 1985]. Below we formally define conflicts.
To do this, we first define precondition establishment: operator a is said to establish a condition p for operator b, written Est(a, b, p), if and only if (i) p is a precondition of b, (ii) □(a ≺ b), (iii) ∃u ∈ E_a such that □(u ≈ p), and (iv) for every a′ such that □(a ≺ a′ ≺ b) and every u′ ∈ E_{a′}, ¬□(u′ ≈ p). That is, no other operator necessarily between a and b necessarily asserts p. In [Hertzberg and Horz, 1989], a is called a producer of p for b, while b is called a user of p. If for every operator b in a plan and for every precondition p of b there exists an operator a such that Est(a, b, p), then P is said to be well-formed [Hertzberg and Horz, 1989]. Note that a well-formed plan is not necessarily a correct plan, since some operators may exist that can possibly deny certain preconditions.

In this paper, we make the same assumptions as [Hertzberg and Horz, 1989]. That is,

(1) All plans are well-formed.

(2) The locality assumption holds. That is, every operator must specify all the domain conditions it may change, and every change is independent of the situation before the operator is applied.

(3) The STRIPS assumption holds. That is, conditions change only if mentioned in the effects of some operators in a plan.

Although the conflict resolution methods to be introduced below become inadequate without these assumptions, the preprocessing algorithms later in the paper are independent of them.

Conflicts in a Plan

Let P and U be operators in a plan such that Est(P, U, p). Suppose there is another operator N in the plan such that (i) N can possibly be between P and U, and (ii) ∃¬q ∈ E_N such that ◊(q ≈ p). Then N is called a clobberer of p for U, and the tuple (P, U, N, p, q) is called a conflict in the plan. To distinguish them from other forms of conflicts in a plan, such as those over consumable resources, we call the conflicts defined above deleted-condition conflicts. In this paper, we consider only deleted-condition conflicts. Hertzberg et al.
[Hertzberg and Horz, 1989] have shown that all deleted-condition conflicts in a well-formed plan are compositions of the four types of conflicts listed below:

(1) Linear Conflict (LN(P, U, N, p, q)) if N is between P and U.

(2) Left Fork (LF(P, U, N, p, q)) if U is after P and N, and P and N are unordered in the plan.

(3) Right Fork (RF(P, U, N, p, q)) if P precedes both U and N, and the latter two are unordered in the plan.

(4) Parallel Conflict (PR(P, U, N, p, q)) if P precedes U, and N is unordered with both operators.

A plan is correct if its set of operators is partially ordered by the precedence constraints, it is well-formed, and it is free of conflicts.

Conflict Resolution Methods

To resolve a conflict, a planner imposes various kinds of constraints. Chapman [Chapman, 1985] formulated a necessary and sufficient goal achievement criterion, known as the necessary modal truth criterion, or MTC. He also provided a set of sufficient procedural interpretations of the MTC, which includes "promotion," "demotion," "establishment," "separation," and "introducing white knights." These methods can be considered as various alternative constraints one can impose on a plan for resolving a given conflict. A simplified version of the above methods is listed below. Let (P, U, N, p, q) be a conflict. Then the following constraints are sufficient for resolving it:

(1) promotion of clobberer: U ≺ N,

(2) demotion of clobberer: N ≺ P,

(3) separation: p ≉ q,

(4) demotion and establishment: N ≺ W ≺ U and r ≈ p, for some operator W, either an existing operator in the plan or an inserted one, and for some r ∈ E_W.

Now consider how to resolve each type of conflict in a well-formed plan. We use + for logical disjunction and · for conjunction. Let r_de represent the establishment and demotion methods. That is,

    r_de = (N ≺ W ≺ U) · (r ≈ p),

YANG 41

Figure 1: An example plan with two conflicts.
The literals p(x), q(y), etc., are the effects of the operator immediately before them.

where r ∈ E_W and W is an operator. Let R_de be the disjunction of all r_de. Then

    LN(P, U, N, p, q) = (p ≉ q) + R_de,
    LF(P, U, N, p, q) = (N ≺ P) + (p ≉ q) + R_de,
    RF(P, U, N, p, q) = (U ≺ N) + (p ≉ q) + R_de,
    PR(P, U, N, p, q) = LF(P, U, N, p, q) + RF(P, U, N, p, q),

where '=' means "can be resolved by."

As an example, consider the plan shown in Figure 1. The set of precedence constraints in this plan is P = {a ≺ b, a ≺ c, b ≺ d, c ≺ d, a ≺ d}. Suppose that Est(a, b, p(x)) and Est(a, c, q(y)); then there are two conflicts in this plan, C1 and C2, where

    C1 = RF(a, b, c, p(x), p(y)) = r11 + r12 + R,

with r11 = (b ≺ c), r12 = (x ≉ y), and R = r13 · r14 · r15, where r13 = (c ≺ e), r14 = (e ≺ b), and r15 = (w ≈ x);

    C2 = RF(a, c, b, q(y), q(x)) = r21 + r22,

where r21 = (c ≺ b) and r22 = (x ≉ y).

Relations Among Conflicts

Above we have shown that all conflicts in a well-formed plan can be resolved using a set of resolution methods, represented in a concise form. If all the conflicts in a plan are found, then each conflict will be automatically associated with a set of alternative resolution methods, and the set of all conflicts can be represented as a conjunction of disjunctions. The purpose of representing the conflict resolution methods is to find one or all constraints that can resolve the conflicts in a plan. Each consistent set of constraints that can resolve all the conflicts is called a solution. Below we consider the relationships between different conflict resolution methods.

Let R1 and R2 be two precedence constraints. R1 subsumes R2, written S(R1, R2), if and only if {R1} ∪ B(P) ⊨ R2, where B(P) is the set of precedence constraints in P. Likewise for codesignation and non-codesignation constraints, with C(P), the set of codesignation constraints in P.

Intuitively, R1 subsumes R2 if imposing R1 guarantees that R2 is also imposed. Thus, R2 is considered to be weaker than R1.
For example, let r1 = (b ≺ c) and r2 = (a ≺ d). If {(a ≺ b), (c ≺ d)} ⊆ B(P), then S(r1, r2). As another example, let r1 = (x ≈ y) and r2 = (y ≉ z). If (x ≉ z) ∈ C(P), then S(r1, r2).

Imposing a set of constraints on a plan may result in an incorrect plan. With the assumptions in this paper, such incorrect plans are signaled by inconsistent constraints of the form a ≺ a or x ≉ x, for some operator a and variable x. Below, we use "Fail" for such situations. Two constraints R1 and R2 are inconsistent, written I(R1, R2), if and only if {R1, R2} ∪ C(P) ∪ B(P) ⊨ Fail. Intuitively, R1 and R2 are inconsistent if imposing R1 and R2 together will result in a contradiction in the plan. For example, if r1 = (a ≺ b) and r2 = (b ≺ a), then I(r1, r2). Also consider an example of inconsistent codesignation constraints: if r1 = (x ≈ y) and r2 = (u ≈ v), and {(x ≈ u), (y ≉ v)} ⊆ C(P), then I(r1, r2).

Having the above definitions, we can now prove the following theorems:

Theorem 1 If S(R1, R), S(R2, R′), and I(R, R′), then I(R1, R2).

Intuitively, this theorem says that if the weaker constraints implied by two constraints are inconsistent, then the two constraints are inconsistent themselves.

Theorem 2 If S(R1, R2) and S(R2, R3), then S(R1, R3).

This says the subsumption relation is transitive. We also require that S be reflexive. However, the inconsistency relation is not transitive.

Algebraic Rules

The disjunction and conjunction operations can be considered as algebraic operations with special semantics. For example, the meaning of a conjunction R1 · R2 in a disjunctive normal form representation of a conflict is that it is sufficient to impose both R1 and R2 for removing some conflicts. The meaning of a disjunction R1 + R2 is that either R1 or R2 is sufficient for resolving some conflicts. Given the precise meaning of these operations, we can prove that '·' and '+' satisfy the rules of Boolean algebra, as well as the following set of rules:

Rule 1 If S(R1, R2) then R1 · R2 = R1.
Rule 2 If S(R1, R2) and u is any constraint, then R1 · u + R2 = R2.

Rule 3 If S(R1, R2) and S(R1, R3), then S(R1, R2 · R3) and S(R1, R2 + R3).

Rule 4 If R = R1 · R2, then S(R, R1) and S(R, R2).

Consider the plan shown in Figure 1. Analysis of the conflicts establishes the following relationships: I(r11, r21), S(r12, r22), S(r22, r12), and S(R, r21). Expanding C1 · C2 we get

      r11·r21 + r11·r22 + r12·r21 + r12·r22 + R·r21 + R·r22
    = r11·r22 + r12·r21 + r12·r22 + R·r21 + R·r22    (I(r11, r21))
    = r11·r22 + r12·r21 + r12 + R·r21 + R·r22        (Rule 1)
    = r12 + R·r21 + R·r22                            (Rule 2)
    = r12 + R·r22 + R                                (Rule 1)
    = r12 + R                                        (Rule 2)

Thus, using the algebraic rules we are able to reduce the number of backtracking points from 6 to 2.

Pruning using CSP Techniques

A set of conflicts can be considered as a constraint network, where each conflict is a node, and the consistency relations between the conflicts are arcs in the network. Each node has a set of values to choose from, each value being an alternative conflict resolution method. Thus, a conflict resolution problem can be considered as a constraint satisfaction problem (CSP). In particular, the goal of a conflict resolution problem in planning is either to find the set of all consistent values, or to find just one value. As in CSP, we would like to enforce arc and path consistency of the network.

The arity of the constraints in such a network of conflicts can be as large as the size of the network. To see this, consider the following example. Each node N_i in the network contains a primitive precedence constraint a_i ≺ a_{i+1}, for i = 0, ..., n-1. Also, a node N_n contains the value a_n ≺ a_0. Thus, if no two a_i are identical, then no proper subset of the set of these values is inconsistent, while the set of values considered together is. This particular network is constrained by n-ary constraints. It follows that ensuring arc and path consistency is not sufficient for global consistency in general.
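The chain example can be made concrete. The following is a sketch under our own encoding (n = 3 here; operators are integers and precedence pairs are tuples, none of which is notation from the paper): every proper subset of the values is consistent, yet the full set closes a precedence cycle.

```python
# Sketch of the chain example (our own encoding): a set of precedence pairs
# is inconsistent exactly when its transitive closure contains some a < a.
def consistent(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return all(a != b for (a, b) in closure)

# Values a_0 < a_1, a_1 < a_2, a_2 < a_3, plus a_3 < a_0 (n = 3).
values = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(all(consistent(values[:i] + values[i + 1:]) for i in range(4)))  # every proper subset is consistent
print(consistent(values))                                              # the full set is not
```

Because only the full set of n+1 values fails, no pairwise (arc) or triple-wise (path) check alone can detect the inconsistency, which is the point of the example.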
An important difference exists between this particular CSP and a traditional one: the existence of subsumption relations between the different values of a node. Using this relation, redundant backtrack points can be quickly discovered and removed. The set of rules that enables this ability is what we call "pruning rules." Below, we consider path consistency and redundancy removal separately.

Path Consistency

The first type of pruning is the same as the traditional definition of arc- and path-consistency in CSP:

Let C1 and C2 be two nodes in a constraint network. If for some R ∈ C1, I(R, R') holds for every R' ∈ C2, then R can be pruned from C1.

If all the values of C1 are pruned, then the network has no solution. This type of pruning is called "inconsistency pruning." New relations concerning inconsistency can also be obtained when considering groups of more than two nodes. The most commonly known algorithms for establishing such new relations are the path-consistency algorithms. We present a modified path-consistency algorithm based on algorithm PC-2 in [Mackworth and Freuder, 1985]. As in PC-2, this algorithm returns an updated set of inconsistency relations, possibly implemented in matrix form. The difference here is that upon termination, a list Q of the arcs that were modified during the execution of the entire algorithm is returned. This list is used for further removal of redundant values or nodes in the network, as we discuss in detail in the next subsection. The function PC is listed below. In PC, Related-Paths(i, k, j) is a function which returns the set of length-2 paths that might have their consistency affected by a change in the consistency of (i, k, j). Likewise, REVISE((i, k, j)) returns true if I(i, j) is modified due to path inconsistency. Both functions are defined in [Mackworth, 1981].
Function PC(Q')
begin
  Q := emptyset;
  while Q' is not empty do
  begin
    select and delete a path (i, k, j) from Q';
    if REVISE((i, k, j)) then
    begin
      Q' := Q' ∪ Related-Paths(i, k, j);
      Q := Q ∪ {(i, j)}
    end; (if)
  end; (while)
  return Q
end

Redundancy Removal

More pruning can be achieved using subsumption relations among constraints. Note that the redundancy defined below is caused by the subsumption and inconsistency relations together, instead of the latter alone. Thus, our results here are fundamentally different from those in [Dechter and Dechter, 1987].

Theorem 3 Suppose ∃R2 ∈ C2 such that ∀R1 ∈ C1, S(R1, R2). Then C2 can be pruned from the network without affecting the set of solutions.

YANG 43

Note that this theorem is different from inconsistency pruning. It says that if some value of C2 is subsumed by all values of C1, then C2 is subsumed by C1, in the sense that any solution for C1 must also be a solution for C2. This type of pruning can be called "subsumption pruning." Subsumption relations also allow for the removal of individual values.

Theorem 4 If ∃R2 ∈ C2 such that ∀R1 ∈ C1, either
1. ∃R'2 ∈ C2 such that R2 ≠ R'2 and S(R1, R'2), or
2. I(R1, R2),
then R2 can be pruned from C2 without affecting the solutions of the network.

Removal of nodes, or of values in some node, is called redundancy removal. This is different from inconsistency removal. If a node becomes empty after applying the inconsistency pruning rule, then the whole network is inconsistent. In planning, this means that the current plan corresponds to a dead end in the search space. On the other hand, if a node is made empty by applying the subsumption theorems, then the removed node was simply redundant, and this has no direct relation to the consistency of the whole network. Algorithm RR (Redundancy Removal), listed below, is an implementation of the above two redundancy-pruning theorems.
In the function RR, the list Q, which is returned at the end, contains length-2 paths whose domains have changed because of the redundancy pruning. Q will be used to check for further inconsistency pruning.

Function REVISE-RR((i, j))
begin
  DELETE := false;
  for each R ∈ Ci do
    if for all R' ∈ Cj, either I(R', R) or S(R', R'') for some R'' ∈ Ci such that R ≠ R'', then
    begin
      delete R from Ci;
      DELETE := true
    end;
  if Ci becomes empty, then delete it from the network;
  return DELETE
end

Function RR(Q')
begin
  Q := emptyset;
  while Q' is not empty do
  begin
    select and delete an arc (i, j) from Q';
    INC := {(k, i) | k ≠ i, k ≠ j};
    if REVISE-RR((i, j)) then
    begin
      Q' := Q' ∪ INC;
      Q := Q ∪ Related-Paths((i, j, i))
    end; (if)
  end; (while)
  return Q
end

Combining Path Consistency and Redundancy Pruning

When both inconsistency and redundancy pruning are done, the outcome of modifying one relation can possibly affect the status of the other. For example, removing a redundant value from a node can produce further inconsistency; thus, the two types of pruning have to be used interchangeably. The basic idea is to interleave the two algorithms PC and RR until no changes can possibly be made. Algorithm PP (Pre-Processing), listed below, achieves this purpose. It can be shown that the algorithm PP has a worst-case complexity of O(k^5 n^3), where k is the number of values in a node, and n is the number of nodes in the network.

Procedure PP
begin
  Q := {(i, j, k) | ¬(i = j = k)};
  repeat
    Q := PC(Q);
    if any node is deleted, then return(fail);
    Q := RR(Q);
  until Q is empty
end

Example

Now consider again the example given in Figure 1. We start with the set of relations among the constraints: I(r11, r21), S(r12, r22), S(r22, r12), and S(R, r22). Now apply the pruning rules. By Theorem 4, r11 can be pruned from C1. Thus, after updating, C1 = r12 + R. By Theorem 3, the node C2 can be pruned. Thus, what is left is C1 = r12 + R.
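The example pruning can be sketched as follows. This is an illustrative Python sketch of the two redundancy-pruning theorems, not the paper's algorithm RR; `sub` holds subsumption pairs S (treated as reflexive) and `inc` holds inconsistent pairs I.

```python
# Sketch: apply Theorem 4 (value removal) and Theorem 3 (node removal)
# until the network stops changing.

def prune_redundant(nodes, sub, inc):
    """nodes: dict mapping node name -> set of values (resolution methods)."""
    S = lambda a, b: a == b or (a, b) in sub
    I = lambda a, b: (a, b) in inc or (b, a) in inc
    changed = True
    while changed:
        changed = False
        for ci in list(nodes):
            for cj in list(nodes):
                if ci == cj or ci not in nodes or cj not in nodes:
                    continue
                # Theorem 3: some value of Cj is subsumed by every value
                # of Ci, so any solution for Ci also solves Cj: drop Cj.
                if any(all(S(r1, r2) for r1 in nodes[ci])
                       for r2 in nodes[cj]):
                    del nodes[cj]
                    changed = True
                    continue
                # Theorem 4: r2 in Cj is redundant if every value of Ci
                # subsumes another value of Cj, or is inconsistent with r2.
                for r2 in list(nodes[cj]):
                    if all(any(rp != r2 and S(r1, rp) for rp in nodes[cj])
                           or I(r1, r2) for r1 in nodes[ci]):
                        nodes[cj].discard(r2)
                        changed = True
    return nodes

# The Figure-1 example: C1 = {r11, r12, R}, C2 = {r21, r22}. C2 is
# listed first so the pruning follows the order used in the text
# (r11 first, then C2); other orders give an equivalent reduced network.
nodes = {"C2": {"r21", "r22"}, "C1": {"r11", "r12", "R"}}
sub = {("r12", "r22"), ("r22", "r12"), ("R", "r22")}
inc = {("r11", "r21")}
print(prune_redundant(nodes, sub, inc))  # C2 pruned; C1 left with r12, R
```

With the relations of the example, Theorem 4 first removes r11 from C1, after which Theorem 3 removes the node C2, leaving C1 = {r12, R}.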
The number of backtracking points for planning is again reduced from six to two. Therefore, using preprocessing techniques, one can simplify the constraint network while avoiding an exponential number of algebraic expansions.

Applying the Algebraic and CSP Techniques to Planning

A typical planning session can be considered as iterations of several steps:

(1) Select a condition to be achieved and some operators for achieving it. Insert these operators into the plan, possibly with certain precedence and codesignation constraints.
(2) Activate a conflict detection routine, and compute the set of all conflicts introduced by the inserted operators and constraints.
(3) Impose a set of conflict resolution constraints for resolving one or more conflicts. Save the rest of the alternative conflict resolution methods as backtrack points.

After step 2 is done, the conflict algebra can be applied to simplify the conflict resolution methods. The resultant methods can be represented in disjunctive normal form. One or more disjuncts can then be chosen to be imposed on the plan, and the rest saved as backtrack points.

Conclusion

This paper proposes analyzing conflicts in a plan in order to reduce the number of backtracking points in a planner's search space. In particular, a set of algebraic rules, together with a set of preprocessing algorithms, is presented for simplifying a set of conflict resolution methods. Via subsumption relationships between the constraints to be imposed on a plan, the algorithms are able not only to remove inconsistent choices, but also those that are redundant. Furthermore, these algorithms are applicable to a class of constraint satisfaction problems in which subsumption relations are involved. With the theoretical foundation of this paper, we intend to further explore the amount of analysis of conflicts necessary for a planner's best performance.
Such exploration will have to involve a considerable amount of experimentation. We will also look for other kinds of relations in a constraint network, similar to subsumption, in order to allow more powerful preprocessing algorithms to be designed.

Acknowledgments. Thanks to Peter van Beek for many useful comments.

References

[Allen and Koomen, 1983] J. Allen and J. Koomen. Planning using a temporal world model. In Proceedings of the 8th IJCAI, pages 741-747, 1983.
[Allen, 1984] James F. Allen. Towards a general theory of action and time. Artificial Intelligence, 23(2):123-154, 1984.
[Chapman, 1985] David Chapman. Planning for conjunctive goals. AI Technical Report 802, Massachusetts Institute of Technology, 1985.
[Charniak and McDermott, 1985] Eugene Charniak and Drew McDermott. Introduction to Artificial Intelligence. Addison-Wesley Publishing Company, 1985.
[Dean and Boddy, 1988] T. Dean and M. Boddy. Reasoning about partially ordered events. Artificial Intelligence, 36:375-399, 1988.
[Dechter and Dechter, 1987] A. Dechter and R. Dechter. Removing redundancies in constraint networks. In Proceedings of the 6th AAAI, pages 105-109, 1987.
[Dechter and Pearl, 1987] R. Dechter and J. Pearl. Network-based heuristics for constraint-satisfaction problems. Artificial Intelligence, 34, 1987.
[Hertzberg and Horz, 1989] Hertzberg and Horz. Towards a theory of conflict detection and resolution in nonlinear plans. In Proceedings of the 11th IJCAI, pages 937-942, Detroit, Michigan, 1989.
[Mackworth and Freuder, 1985] A.K. Mackworth and E.C. Freuder. The complexity of some polynomial network consistency algorithms for constraint satisfaction problems. Artificial Intelligence, 25:65-74, 1985.
[Mackworth, 1981] A.K. Mackworth. Consistency in networks of relations. In Webber and Nilsson, editors, Readings in Artificial Intelligence, pages 69-78. Morgan Kaufmann Publishers Inc., 1981.
[Sacerdoti, 1977] Earl Sacerdoti. A Structure for Plans and Behavior.
American Elsevier, 1977.
[Stefik, 1981] Mark Stefik. Planning with constraints. Artificial Intelligence, 16(2):111-140, 1981.
[Tate, 1977] Austin Tate. Generating project networks. In Proceedings of the 5th IJCAI, pages 888-893, 1977.
[Van Beek and Cohen, 1989] P. van Beek and R. Cohen. Approximation algorithms for temporal reasoning. Technical Report CS-89-12, Department of Computer Science, University of Waterloo, 1989.
[Vilain and Kautz, 1986] M. Vilain and H. Kautz. Constraint propagation algorithms for temporal reasoning. In Proceedings of the 5th AAAI, pages 377-382, 1986.
[Wilkins, 1988] David Wilkins. Practical Planning: Extending the Classical AI Planning Paradigm. Morgan Kaufmann, CA, 1988.
[Yang and Tenenberg, 1990] Qiang Yang and Josh Tenenberg. ABTWEAK: Abstracting a nonlinear, least commitment planner. Technical Report CS-90-09, Department of Computer Science, University of Waterloo, 1990.
[Yang, 1990] Qiang Yang. Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6, 1990.
An Experiment in Direct Knowledge Acquisition

Peter W. Mullarkey
Schlumberger Laboratory for Computer Science
P.O. Box 200015, Austin, TX 78720-0015, U.S.A.
mullarkey@slcs.slb.com

Abstract

LQMS is a knowledge-based system that identifies and explains anomalies in data acquired from multiple sensors. The knowledge base was built by a sequence of domain experts. Its prototype performed with a high level of accuracy, and that performance has been incrementally and significantly improved during development and field testing. Several points are developed in this paper. (1) The combination of an intuitive model (sufficient for the task) and powerful, graphical development tools allowed the domain experts to build a large, high-performance system. (2) The Observation-Situation-Relation representation illustrates an intermediate point on the simplicity-expressiveness spectrum, which is understandable to the domain experts, while being expressive enough for the diagnostic task. (3) The system was designed as a workbench for the domain experts. This enticed them to become more directly involved, and resulted in a better system. (4) The use of an integrated knowledge base edit-tracking system was important to the project in several ways: it reassured computer-naive experts that they could not damage the overall system, which increased their productivity; and it also allowed experts located in various places around the world to compare, contrast, and integrate changes in a structured way.

Introduction

LQMS is a knowledge-based system that identifies and explains anomalies in data acquired from multiple sensors [O'Neill and Mullarkey, 1989]. This paper describes its knowledge representation and graphical development tools, which allowed direct involvement of several domain experts in the construction of its knowledge base.

Task Description

The task is to enhance the performance of well-trained field engineers in a demanding environment (oil field exploration).
Oil-well logs are made by lowering tools into the borehole and recording measurements made by sensors in the tools as they are raised to the surface. The resulting logs are sequences of values indexed by depth. Logging tools measure a variety of petrophysical properties. The field engineer's task involves data acquisition and interpretation, and is characterized by high data rates, with noisy and uncertain data. Earlier work on some aspects of the interpretation problem is discussed in [Smith, 1984]. In this paper, we concentrate on data acquisition. The quality of the data may be affected by many sources: problems with the tools, problems with the downhole environment (e.g., unusual borehole geometry and fluids, extreme formation conditions), and untoward interactions between the tools and the downhole environment. In normal circumstances, it takes three to five years for a field engineer (with an engineering degree) to become truly competent in this task. The goal of LQMS is to assure that the field engineers leave the field location with full knowledge of the quality of the data collected. Given the task complexity and the level of training required, it was clear from the outset that the knowledge-based system would need to encompass a large amount of knowledge in order to achieve high performance. This makes the acquisition of domain knowledge critical.

Overview of LQMS

LQMS has two main components: a signal-to-symbol module that identifies anomalous behavior in the signal data, and an analysis module that applies explicit domain models to arrive at causal explanations for the anomalies. These two modules are implemented in an object-oriented paradigm, and communicate via asynchronous message-passing. The system was implemented in the HyperClass environment [Smith et al., 1987; Schoen et al., 1988] and Common Lisp, and runs on both Sun and VAX workstations.
The overall process can be envisioned as a progression of transformations from the raw data, through a signal-to-symbol module, to the analysis module (which can direct the signal-to-symbol module to search for further evidence about features of interest). Our basic design philosophy is to have small, efficient computational agents (the signal-to-symbol module) whose responsibility is only to find anomalous segments of data, and then use more extensive domain knowledge in the analysis module to discern which anomalies are problems requiring action, and which are explainable in other ways. This decomposition allows the signal-to-symbol module to run in real time, while the analysis module works in the background.

498 KNOWLEDGE ACQUISITION
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

A research prototype was deployed over a two-year field test program in several locations (Houston, TX; New Orleans, LA; Cairo, Egypt; Midland, TX; and Paris, France). It has done an effective job of dealing with the task and providing a model that the users feel is intuitive.

Knowledge Representation Model

The analysis module consists of an inference engine that operates on domain knowledge represented as networks. These networks are composed of Situations that can be used to explain groups of Observations connected by Relations; they are referred to as OSR networks. We will explain via the example shown in Figure 1. In the example network, Situations are boxed nodes (e.g., LDT-PROBLEM), and Observations are unboxed leaf nodes (e.g., RHOB-UNUSUALLY-LOW). Relations are unboxed interior nodes, drawn from the set AND, OR, COMB (a weighted average), NOT, and ANY (essentially an OR that uses breadth-first search). The example network can be understood (and constructed) from either of two viewpoints.

1.
From a symptomatic view, the network has possible explanations for the anomaly RHOB-UNUSUALLY-LOW (highlighted in the middle right section of the network). This observation means that the sensor is measuring a bulk density of the surrounding material that is lower than normally expected, but not so low as to be out of the physical measurement range of the sensor. This anomaly can be explained by a misapplication of the sensor pad to the surrounding rock (LDT-PAD-STANDOFF), allowing lower-density material (e.g., drilling fluid) to exist between the sensor and the material of interest, and/or a problem with the tool that houses the sensor (LDT-PROBLEM).

2. From a "failure model" point of view, there are two subnetworks shown in Figure 1. One describes the observations that would be associated with an LDT sensor pad being out of contact with the borehole wall. The other describes the observations that would be associated with an LDT tool problem.

The various domain experts have found that a choice of different ways of visualizing (and building) the networks allows them to think about their knowledge in the way that is most natural for them. One domain expert prefers the symptomatic perspective, while another prefers the failure model view, and a third seems to be comfortable switching between these views. The basic components of the knowledge representation scheme were derived by direct observation of the domain experts at work. This recognition that the behavior of the experts is a real expression of the knowledge is discussed in [Musen, 1989] in regard to separating the knowledge level from the symbol level. The domain experts visually recognize an anomalous data pattern, and attempt to construct a set of conditions that could explain that anomaly. Their "heuristic associations" between anomalies and multiple possible explanations are not expressed through explicit relations, but such relations are implicit in the context.
For example, to support an LDT-PAD-STANDOFF there needs to be evidence of both borehole wall roughness (described by the upper subnetwork to the right of the AND in Figure 1) AND an indication of abnormal-density borehole fluid (described by the lower subnetwork to the right of the AND in Figure 1). In describing this situation, the AND relation would be obvious to a domain expert from the context. The OSR environment encourages the domain expert to make these implicit relations explicit, both for obvious computability and to make the networks less ambiguous for the next domain expert. The OSR framework was developed to organize and unify the various concepts that seemed intuitive to the domain experts. These experts clearly had more than a jumble of loosely-connected heuristics, but did not use a complete structural and behavioral model. Our early experience with a rule-based prototype demonstrated that the domain experts had difficulty encoding their knowledge as rules. Furthermore, the prototype did not clearly support the domain experts' model. This was also noted in [Davis and Hamscher, 1988]. We were also led to believe that a "deep model" approach was not appropriate because:

• No complete model exists for the domain. The lack of a strong domain model is a good reason for using a different approach [Hamscher, 1988, page 19].
• The problem solving must be done in a real-time environment, and "deep models" are typically computationally intensive.
• The users do not reason with a deep model; hence, a system using such a model would be less understandable to them.

The inherent simplicity of the OSR framework is a strong asset, since (after three years of successful use) it appears to be expressive enough to cover the domain of interest, while being intuitively comfortable to a set of five geographically distributed domain experts with different backgrounds. Thus, it illustrates an intermediate point on the simplicity-expressiveness spectrum.
MULLARKEY 499

[Figure 1: The OSR Network Editor, showing the editor's command menu (AddSituation, AddObservation, AddIncompleteNets, CreateRelation, CreateLink, ModifyLinkWeight, ReplaceRelation, DeleteLink, DeleteRelation, HideRight, HideLeft, HideNode, ExpandRight, ExpandLeft, MoveNode, Refresh, ShowAll, NewNetwork, StartParmEditor, Describe) alongside an example OSR network containing the LDT-PAD-STANDOFF and LDT-PROBLEM subnetworks; see text for legend.]

Inference

LQMS uses its knowledge base of OSR networks in a way similar to the "cover and differentiate" problem-solving method used in MOLE [Eshelman, 1988]. The signal-to-symbol module notices an anomaly. The analysis module retrieves all situations that can explain the anomaly. It gathers appropriate evidence for discrimination between the possible explanations by traversing the networks. It develops the best explanation(s) for the anomaly from the situations that have the highest belief, based on available evidence. LQMS combines evidence in the OSR networks using an algebra that combines beliefs for the relation types in a conventional manner [Reboh, 1981; Buchanan and Shortliffe, 1984]. Beliefs are represented as a five-valued range with an explicit representation for unknown values. The particular combination rule used is determined by the Relation type. A small set of relations has been found to work well in the LQMS domain.
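The paper states that beliefs are five-valued with an explicit "unknown" and that each Relation type has its own combination rule, but it does not give the algebra itself. The sketch below is therefore only one conventional reading (minimum for AND, maximum for OR/ANY, weighted average for COMB, negation for NOT), applied to a miniature network loosely modeled on the LDT-PROBLEM subnetwork; all rules here are assumptions, not the LQMS implementation.

```python
# Illustrative five-valued belief scale; None represents "unknown".
FALSE, UNLIKELY, NEUTRAL, LIKELY, TRUE = -2, -1, 0, 1, 2

def evaluate(node, beliefs):
    """node: an observation name, or ('AND'/'OR'/'ANY'/'NOT', [children]),
    or ('COMB', [(weight, child), ...])."""
    if isinstance(node, str):                 # observation leaf
        return beliefs.get(node)              # missing -> unknown
    op, args = node
    if op == "COMB":                          # weighted average of knowns
        known = [(w, evaluate(c, beliefs)) for w, c in args]
        known = [(w, b) for w, b in known if b is not None]
        if not known:
            return None
        total = sum(w for w, _ in known)
        return round(sum(w * b for w, b in known) / total)
    vals = [evaluate(c, beliefs) for c in args]
    if op == "NOT":
        return None if vals[0] is None else -vals[0]
    known = [v for v in vals if v is not None]  # unknowns ignored here
    if not known:
        return None
    return min(known) if op == "AND" else max(known)  # OR and ANY

# A miniature network in the spirit of the LDT-PROBLEM subnetwork.
net = ("OR", ["DRHO-TOO-LOW", "DRHO-TOO-HIGH",
              ("AND", ["PEF-UNUSUALLY-HIGH", "RHOB-UNUSUALLY-LOW"])])
print(evaluate(net, {"DRHO-TOO-LOW": LIKELY, "PEF-UNUSUALLY-HIGH": TRUE}))
```

A real system would need a more careful treatment of unknowns inside AND (here they are simply ignored), but the sketch shows how a single recursive traversal can both propagate beliefs and select the relation-specific combination rule.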
LQMS can show its reasoning with an animated display of the OSR networks it is processing (highlighting nodes currently being processed, and changing fonts to indicate current belief). The user can also direct the system to explain its conclusions after a session, using a mouse-sensitive debriefing report to access the appropriate OSR networks and signal-to-symbol agents. The system's explanation of its results is clear and meaningful to our domain experts. Over the three years of construction, refinement, and use, the main debugging tool has been animated OSR networks. In addition to the basic framework, the OSR environment offers the domain expert several ways of augmenting the networks with further experiential knowledge.

• search first - The domain expert can specify a partial order of situations that are most likely to explain a particular anomaly.
• relative link weights - The links between a relation and its operands can be assigned different weights; their evidence will then be combined based on the specified relative importance.
• tie breaking - The domain expert can designate that some situations be preferred. Then if a "preferred" situation is among the equally believed explanations for an anomaly, it is selected as the best explanation.

Knowledge-Acquisition Tools

LQMS has a rich knowledge engineering environment (based on the HyperClass system). These tools have made it possible for domain experts to enter, test, and refine the OSR network knowledge base (which contains 1880 classes) with minimal assistance from the system developers. Like the knowledge editors in ROGET [Bennett, 1985] and MOLE [Eshelman, 1988], the OSR Network Editor takes advantage of knowledge of the problem-solving method used in LQMS to assist the user in entering knowledge. This approach has been referred to as using a method-oriented conceptual model to support knowledge acquisition [Musen, 1989].
The "boxes and links" OSR Network Editor (Figure 1) is the main tool for constructing, maintaining, and refining OSR networks. It has an integrated, object-oriented, edit-tracking system to notice, save (and restore) incremental changes to the knowledge base. This allows multiple domain experts to work independently on the knowledge base, making incremental changes. It supports review, comparison, and conflict detection of the various changes proposed for the core knowledge base. Aside from indicating when they start and end editing sessions, the domain experts do not notice the presence of edit-tracking in their normal style of development. The edit-tracking capabilities enhanced both the speed and consistency of development and inter-developer communication (discussed in the subsequent section, Support for Distributed Development).

Experiences

This section describes some of our experiences during the construction of LQMS. We begin with a timeline that summarizes the main knowledge-acquisition phase of its development. We then describe two episodes from late in the project that illustrate the ease with which our domain experts could understand and modify the knowledge base.

Development

The initial domain expert (who will be referred to as DE1) had an engineering degree, over 15 years of varied field experience, and much exposure to programming. He worked on the project part-time, helped with the analysis of the domain, and prompted the development of the OSR idea. He was followed, with overlap, by a second domain expert (DE2), working full-time on the project. DE2 had an engineering degree, over ten years of varied field experience, and much exposure to programming. DE2, using the early knowledge-acquisition tools, built the first 80 networks, which represented behavior models for four basic tool types. The knowledge base by then included about 800 total classes.
DE2 transferred back to the field after over a year on the project, and there was a four-month gap before a third domain expert (DE3) was transferred in from the field to work on the project. During this time, we developed the OSR Network Editor, based on observing DE2 and seeing that while he was able to construct the networks using the object/slot/value linkage editors, he always kept a current graphic display that would show him the developing structure. The new OSR Network Editor allowed these activities to be unified, thus increasing the productivity of future network development. This process of knowledge-acquisition tools being developed contemporaneously with the system is discussed in [Smith, 1984]. DE3 had an engineering degree, over five years of field experience, and very little exposure to programming or workstations. DE3 reviewed the state of the system, revised fewer than 10% of the networks, and began adding models of three additional tools. These new tools were quite complex, and involved the addition of 200 networks (and tie-ins to many of the existing networks) over one and a half years. The system used during the final field tests and transferred from research to engineering had models of nine tools, with over 280 networks, including 480 Observations, 280 Situations, and 680 Relations.

Refinement

During the later development phase of the prototype, the system was field tested in five locations worldwide. Each of these field tests involved installing the system in an office environment where engineers would use it to replay previously run commercial data acquisition jobs and provide feedback on the utility, ease of use, and performance of the system. The office environment was selected since the commercial logging acquisition trucks work in a very high-cost, time-critical environment while the drilling rig is configured to allow logging; in the office, the systems and data are still available after normal work hours.
Some of the field test sites provided excellent feedback on how naive users might benefit from the system's analyses, while others provided feedback on how a complex, knowledge-based system would react to local modifications. One of these latter (more technically demanding) locations was Cairo, Egypt, where by that time DE2 was working. During the field test, in addition to the normal use and testing of the system, DE2 recognized a weakness in the system's coverage, understood the part of the knowledge base that required refinement, made several changes, and tested and validated his new (better) results. All this activity transpired without the support of the developers.

Support for Distributed Development

The next episode illustrates how the edit-tracking system acted as a communication vehicle. When LQMS was transferred to engineering, two more domain experts (DE4 and later DE5) became part of the team. After a period of electronic communication, DE3 met with DE4. They worked on understanding the current state of the system, and discussed refinements that might be made. After DE3 returned to his home location, he made several hundred changes (of various sizes), since he was still the principal domain expert. DE4 then asked if he could find out what had been changed. Since all the changes DE3 had made were recorded by the edit-tracking system, it was straightforward to generate a transcript of those changes. A single change summary is shown below that captures adding a link from a situation (MULTIPLEXER#1-FAILURE) to an AND relation (AND-99). Although the transcript is very low level, DE4 found it quite helpful.

Updating Object MULTIPLEXER#1-FAILURE
Slot ACCOMPANIEDBY to ADD AND-99

These transcripts were also used as a chronicle of knowledge base development.
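The transcript mechanism can be sketched as follows. This is a hypothetical Python sketch, not the LQMS (HyperClass) implementation; the class and method names are illustrative only. The idea is simply that every slot update is logged as it happens, so a change summary like the one above can be generated on demand.

```python
# Sketch: an object wrapper that records every slot update so a
# low-level transcript of knowledge-base edits can be produced later.

class TrackedObject:
    def __init__(self, name):
        self.name = name
        self.slots = {}      # slot name -> list of values
        self.log = []        # one line per recorded change

    def add_to_slot(self, slot, value):
        """Add a value to a slot and record the change."""
        self.slots.setdefault(slot, []).append(value)
        self.log.append(f"Updating Object {self.name} "
                        f"Slot {slot} to ADD {value}")

    def transcript(self):
        """Return the recorded changes, one per line."""
        return "\n".join(self.log)

# Reproducing the change summary shown in the text.
mux = TrackedObject("MULTIPLEXER#1-FAILURE")
mux.add_to_slot("ACCOMPANIEDBY", "AND-99")
print(mux.transcript())
# Updating Object MULTIPLEXER#1-FAILURE Slot ACCOMPANIEDBY to ADD AND-99
```

Because the log is a plain sequence of change records, it doubles as a replayable history, which is essentially what allowed DE4 to review DE3's several hundred changes.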
This automatic construction of a record of the construction and refinement of the knowledge base deals with one of the inherent problems in knowledge engineering: "interviewing without a record" [Forsythe and Buchanan, 1989].

Performance

The size of a knowledge base and the ease of its construction are interesting only when the system performs well. In this section, we summarize several studies of LQMS's performance. DE4 performed a set of case studies on the system. The first set was done in 1988, before he had any knowledge of the internals of the system, while the second set was done in 1989, after DE4 had become familiar with the system. He was not responsible for the knowledge base development until after these studies were completed. In the earlier studies he found that correct diagnoses of the causes of anomalies had been far from perfect (60% correct), but with the testing and refinement activities (done primarily by DE3), accuracy steadily improved. In June 1989, DE4 ran eight jobs and compiled the following estimated results. He judged approximately 85% of the explanations to be reasonable. In fact, the system found and explained some problems likely to be missed by a normal field engineer. He believed 15% were wrong, since the given explanation could easily be shown to be inappropriate by anyone with a reasonable level of experience in log interpretation. All of the grading was done on the basis of information that an engineer would normally have available post hoc, in the office environment. This level of information is not always available to the field engineer on location, so these results are very encouraging. Explanations classified as wrong were typically those that reasonably fit a subset of the observations, but conflicted with other available information. This additional information was missed for any of several reasons. It may have been omitted (or improperly included) in the OSR network.
It was sometimes inaccessible because no Observers had been designed to monitor that particular data. We believe that most of these wrong explanations can be corrected through further completion of the knowledge base. LQMS was estimated by DE4 to be performing at 85% of the level of an experienced field engineer, and had shown that its performance can be improved incrementally and substantially.

Current Status

The prototype LQMS was transferred to engineering in 1989 after three years of development and field testing. The system is being ported into the commercial environment of the data acquisition system.

Related Work

The idea of using networks to structure the association of observations and situations has some of its roots in Hendrix's partitioned semantic networks [Hendrix, 1979]. The basic architectural approach of abstraction of data by low-level modules with higher-level processing (sometimes directing the low-level modules) is very much like Hearsay-II [Erman et al., 1980]. The MOLE [Eshelman, 1988] system has many similarities to the OSR framework used in LQMS. MOLE has more powerful support for refinement of the knowledge base, although the OSR Network Editor has the capability to discover and display (partial) networks it considers incomplete (Figure 1, AddIncompleteNets command). The domain expert has played a more active role in refining the knowledge base in LQMS (with good results). Based on the examples shown in [Eshelman, 1988], MOLE has a less expressive knowledge representation, in that there is only one "relation", called "covering (explanatory)", which is visually encoded as a link. Additionally, there is a presumption that initial symptoms are different in a significant way from other symptoms and that only initial symptoms can be used to prompt a diagnosis.
In contrast, OSR has the concept of a trigger observation, but this is just an additional characteristic that any observation can have, although intuitively, some observations (e.g., value-out-of-range) are much more likely to be interesting triggers in a diagnosis task. We designed this uniformity into the observations since we believe that if a knowledge base describes a domain, there may be several uses for that knowledge (e.g., data quality diagnosis, pre-interpretation data analysis, training), and the trigger observations for the different uses may not be the same. There is a problem inherent in the development of ambitious systems: keeping the pace of development fast enough to maintain the domain experts' interest and contribution [Buchanan et al., 1983]. One approach, suggested by Reboh [Reboh, 1981, p. 94], is to use a team of developers: one to interview the domain expert and another to implement the knowledge in a prototype. The approach used in LQMS brings the domain expert directly to the center of the action and responsibility. This approach has the additional feature of making the domain expert a real part-owner of the system, with all the positive aspects that "ownership" brings.

Conclusions

LQMS is a system based primarily on knowledge encoded directly by domain experts. Its prototype performed with a high level of accuracy, and that performance has been incrementally and significantly improved during development and field testing. Several points have been developed in this paper. The combination of an intuitive model (sufficient for the task) and powerful, graphical development tools allowed the domain experts to build a large, high-performance system.
There was a significant transition from the prototype system based on rules (no direct interaction between domain expert and knowledge base) to the system based on OSR networks built with direct "boxes and links" editing (very productive interaction). The OSR representation illustrates an intermediate point on the simplicity-expressiveness spectrum, which is understandable to non-developers while being expressive enough for the domain. The system was a natural workbench for the domain experts, which enticed them to become more directly involved, resulting in a better system. The edit-tracking system served many useful purposes. It reassured computer-naive experts that they could not damage the overall system, which increased their productivity. It also allowed experts located in various places around the world to compare, contrast, and integrate changes in a structured way.

Acknowledgments

There have been many people associated with the successful history of the LQMS project without whom these lessons would not have been possible. With respect to the knowledge-acquisition aspects, I would like to acknowledge the efforts of our excellent domain experts: Alistair Cox, Ace Dumestre, Laurent Moinard, and Alan Sibbit. The core development team included Dennis O'Neill, Paul Gingrich, and the user interface work of Ruven Brooks. Although this paper primarily describes work on the research prototype, the engineering team at Schlumberger Austin Systems Center, involved in the technology transfer, provided valuable interactions. Bob Young was a critical resource during the original development of the OSR ideas, provided the spark of motivation to produce this paper, and gave excellent feedback to make it right. Eric Schoen and Reid Smith supplied insightful comments and helpful pointers. Stan Vestal and SLCS allowed this retrospective activity to occur.

References

[Bennett, 1985] James S. Bennett.
ROGET: A knowledge-based consultant for acquiring the conceptual structure of a diagnostic expert system. Journal of Automated Reasoning, 1:49-74, 1985.

[Buchanan and Shortliffe, 1984] B. G. Buchanan and E. H. Shortliffe. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, Reading, Mass., 1984.

[Buchanan et al., 1983] Bruce G. Buchanan, David Barstow, Robert Bechtal, James Bennett, William Clancey, Casimir Kulikowski, Tom Mitchell, and Donald A. Waterman. Constructing an expert system. In Frederick Hayes-Roth, Donald A. Waterman, and Douglas B. Lenat, editors, Building Expert Systems, chapter 5, pages 127-168. Addison-Wesley, Reading, Mass., 1983.

[Davis and Hamscher, 1988] Randall Davis and Walter Hamscher. Model-based reasoning: Troubleshooting. In H. E. Shrobe and AAAI, editors, Exploring Artificial Intelligence, pages 347-410. Morgan Kaufmann, 1988.

[Erman et al., 1980] L. D. Erman, F. Hayes-Roth, V. R. Lesser, and D. R. Reddy. The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, 12(2):213-253, June 1980.

[Eshelman, 1988] Larry Eshelman. MOLE: A knowledge-acquisition tool for cover-and-differentiate systems. In Sandra Marcus, editor, Automating Knowledge Acquisition for Expert Systems, chapter 3, pages 37-80. Kluwer Academic Publishers, Boston, Mass., 1988.

[Forsythe and Buchanan, 1989] Diana E. Forsythe and Bruce G. Buchanan. Knowledge acquisition for expert systems: Some pitfalls and problems. IEEE Transactions on Systems, Man and Cybernetics, 19(3):435-442, May/June 1989. Special issue on perspectives in knowledge engineering.

[Hamscher, 1988] Walter C. Hamscher. Model-based Troubleshooting of Digital Systems. PhD thesis, MIT AI Lab, 1988.

[Hendrix, 1979] G. G. Hendrix. Encoding knowledge in partitioned networks. In Associative Networks: The Representation and Use of Knowledge in Computers, pages 51-92.
Academic Press, New York, NY, 1979.

[Musen, 1989] Mark A. Musen. Automated Generation of Model-Based Knowledge-Acquisition Tools. Research Notes in Artificial Intelligence. Pitman Publishing, London, 1989. Revision of Stanford University PhD dissertation (STAN-CS-88-1194).

[O'Neill and Mullarkey, 1989] D. M. O'Neill and P. W. Mullarkey. A knowledge-based approach to real time signal monitoring. In Proceedings of the Fifth Conference on Artificial Intelligence Applications, pages 133-140, March 1989.

[Reboh, 1981] R. Reboh. Knowledge engineering techniques and tools in the Prospector environment. Technical Report 243, SRI International, Menlo Park, Calif., June 1981.

[Schoen et al., 1988] Eric Schoen, Reid G. Smith, and Bruce G. Buchanan. Design of knowledge-based systems with a knowledge-based assistant. IEEE Transactions on Software Engineering, 14(12):1771-1791, December 1988.

[Smith et al., 1987] R. G. Smith, P. S. Barth, and R. L. Young. A substrate for object-oriented interface design. In Research Directions in Object-Oriented Programming. MIT Press, Cambridge, MA, 1987.

[Smith, 1984] R. G. Smith. On the development of commercial expert systems. AI Magazine, 5(3):61-73, Fall 1984.
Establishing the Coherence of an Explanation to Improve Refinement of an Incomplete Knowledge Base

Young-Tack Park and David C. Wilkins
Computer Science Department
405 North Mathews Ave.
University of Illinois
Urbana, Illinois 61801

Abstract

The power of knowledge acquisition systems that employ failure-driven learning derives from two main sources: an effective global credit assignment process that determines when to acquire new knowledge by watching an expert's behavior, and an efficient local credit assignment process that determines what new knowledge will be created for completing a failed explanation of an expert's action. Because an input (e.g., an observed action) to a failure-driven learning system can generate multiple explanations, a learning opportunity to extend the incomplete domain theory can go unobserved. This paper describes failure-driven learning with a context analysis mechanism as a method to constrain explanations and thereby increase the number of learning opportunities. Experimentation using a synthetic expert system as the observed expert shows that the use of context analysis increases the number of learning opportunities by about 47%, and increases the overall amount of improvement to the expert system by around 10%.

Introduction

Knowledge acquisition is the major bottleneck in the development of expert systems. One promising method to overcome this difficulty is exemplified by the Learning Apprentice System [Mitchell et al., 1985] and other apprenticeship learning programs, which assimilate new problem solving knowledge by observing and analyzing a human expert's actions [Kodratoff and Tecuci, 1987] [Wilkins, 1988a]. Apprenticeship learning involves first recognizing a failure to interpret an expert's actions and then acquiring new knowledge to recover from the failure. The detection of a failure can be done either by outside human experts or by the apprenticeship learning system itself.
When outside human experts are used, the humans point out the expert system's failure and provide the apprenticeship learning system with a learning opportunity [Mitchell et al., 1985] [Kodratoff and Tecuci, 1987]. Hence, in such systems, the global credit assignment as defined in [Dietterich and Buchanan, 1981] is done by a human and is not addressed in the learning process. In contrast, when an apprenticeship learning system recognizes a failure by watching a human expert's problem solving steps, it must employ a global credit assignment process to determine when to learn. For example, the ODYSSEUS apprenticeship learning program [Wilkins, 1988a] for the HERACLES classification shell [Clancey, 1987] watches an expert and tries to explain the expert's observed actions. An explanation in ODYSSEUS is created by backward chaining the meta-level strategy rules. When ODYSSEUS fails to explain an action, it assumes that relevant facts are missing from its knowledge base. Apprenticeship learning systems use diverse means to acquire new knowledge from failures [Mitchell et al., 1985; Kodratoff and Tecuci, 1987; Wilkins, 1988a]. The most common approach is to construct an explanation of the failure. ODYSSEUS suggests new knowledge that can complete a failed meta-rule chain when this knowledge is added to the knowledge base. Therefore, the power of knowledge acquisition systems that employ an apprenticeship learning method derives from two main sources: an effective global credit assignment process that determines when to acquire new knowledge by watching an expert's behavior, and an efficient local credit assignment process that determines what new knowledge will be created for completing a failed explanation of an expert's action. Because a human's observed action can be explained in many different ways, a learning opportunity can go unnoticed.
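The failed-meta-rule-chain idea can be illustrated in miniature. The sketch below is a toy rendering, not ODYSSEUS's actual meta-rule machinery: a single hypothetical meta-rule maps a strategy goal, given some required facts, to the action it would produce; when a required fact is absent, the failed chain names the knowledge that would complete it.

```python
# Toy backward-chaining explainer (hypothetical rules and names). An action is
# explained when some meta-rule chain from a strategy goal to the action has
# all of its required facts; otherwise the missing facts are candidates for
# knowledge-base repair.

META_RULES = {
    # goal: (facts the chain requires, action the chain would produce)
    "clarify(headache)": ({"new_datum(headache)",
                           "clarified_by(headache, headache-duration)"},
                          "ask(headache-duration)"),
}

def explain(action, facts):
    for goal, (needed, produced) in META_RULES.items():
        if produced == action:
            missing = needed - facts
            if not missing:
                return ("explained", goal)
            # The chain fails; report which facts would complete it.
            return ("failed-chain", goal, sorted(missing))
    return ("unexplained", action)

facts = {"new_datum(headache)"}
print(explain("ask(headache-duration)", facts))
```

With only `new_datum(headache)` known, the chain fails and names `clarified_by(headache, headache-duration)` as the missing fact; adding that fact makes the same call return an explanation.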
If the learning system does not filter out implausible explanations, its performance may be seriously hampered. In order to profit fully from watching an expert, the learning system must be able to reason about explanations as well as generate them. In this paper we present an effective mechanism to avoid the masking of learning opportunities by multiple explanations in an apprenticeship learning system. The method reasons about the generated explanations and has been demonstrated to improve the performance of the learning system. We also offer an efficient method to suggest missing knowledge to recover from failures through a repair module that employs top-down and bottom-up approaches to infer new knowledge. The repair program reasons about failures, suggests new knowledge based on induction over solved cases, and verifies the suggested knowledge based on the context.

PARK AND WILKINS 511
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Explanation Generation

When a human expert takes an incomprehensible action and fails to do what a learning system expects him or her to do, failure-driven learning attempts to understand the action by generating plausible explanations of the expert's behavior. Understanding is a process that explains the incomprehensible action in a context-sensitive way. Hence, it is important to represent explanations and contexts declaratively. Experts usually employ problem solving strategy knowledge that is obtained through experience. Therefore, strategy knowledge is an important part of explaining an observed action. We implemented the MINERVA expert system shell [Park et al., 1989], which uses explicit and flexible strategy knowledge. MINERVA is a rule-based expert system shell that is a Prolog reimplementation and enhancement of the HERACLES expert system shell [Clancey, 1987].
The enhancement was guided by experience gained from the ODYSSEUS [Wilkins, 1988b] apprenticeship learning program for HERACLES. In MINERVA, the representation of the meta-level strategy knowledge is more explicit, declarative, and modular. This strategy knowledge is a Horn clause. The following is an example:

    goal(clarify_finding(Finding1)) :-
        new_datum(Finding1),
        not value(Finding1, no),
        clarified_by(Finding1, Finding2),
        not concluded(Finding2),
        goal(findout(Finding2)).

The head of this clause consists of the name and argument of this strategy knowledge source. The body consists of a sequence of premises followed by a subgoal. A strategy knowledge source is activated only by a new change of problem state and invokes a subgoal that changes problem states. When the premises of an activated knowledge source are satisfied, the subgoal is inserted into the agenda. MINERVA employs explicit schedule knowledge to determine which subgoal to perform next. Subgoals are of two types: actions and tasks. An action refers to a subgoal that tries to find out a symptom. A task refers to a subgoal that invokes another task. For example, a subgoal apply_rule(Rule) represents a task that fires a domain rule. It triggers a sequence of tasks that find the values of the antecedents of the domain rule, check if the values satisfy conditions, and evaluate the domain rule. The modular representation and the opportunistic control of strategy knowledge sources are suitable both for flexible problem solving and efficient knowledge acquisition. We view an expert's action as a coproduction of strategy knowledge, domain knowledge, and problem solving state. Finding relevant explanations of an expert's action hinges on the notion of differential modelling, whereby the expert's behavior is compared to the expert system's choices for the same circumstances. The modular knowledge sources predict all reasonable actions that an expert might take based on the new problem solving states. The problem solving states can be acquired from the expert or by inference from the expert's problem solving steps. Predicted actions and tasks are justified by explanations that describe why they are expected by the expert system. And we will say that the apprentice understands an observed action if it can construct an explanation of the same expected action or of the relevant task.

Failure Detection

If an apprenticeship learning system fails to generate an explanation of an expert's action, this suggests that the knowledge base is missing a piece of information that the expert knows. This kind of explanation failure is categorized as the expectation failure type. While the detection of expectation failures is important, it is also important to be able to recognize and learn from situations where explanations are not coherent with a problem solving context. When failure-driven learning encounters a situation where all the generated explanations fail to be consistent with the problem solving context, it discards them and tries to understand the action by finding a new explanation that is coherent with the context. This kind of explanation failure is categorized as the context failure type. For example, suppose a human physician asks a question on headache-duration. Perhaps she wants to clarify a finding headache, because headache-duration is a more specific fact. Or, she might intend to differentiate two diseases; the symptom of one disease hypothesis is long headache-duration and the other has short duration. Hence, a learning system cannot identify which explanation is a plausible intention of the question without using a bias. However, if the physician asked many questions on the detailed facts of headache just before, her intention is more likely to be clarification of headache.
In contrast, if she asked questions on both diseases, she is probably trying to differentiate between the hypotheses. We need meta-level reasoning about generated explanations to judge the quality of these multiple explanations. Our model of knowledge acquisition includes the detection of both types of failures and the generation of explanations for the failures (see Figure 1). The global and local credit assignment processes are guided by reasoning about strategy knowledge and context analysis. The scope of this work is limited by the assumption that the strategy knowledge of the expert system is complete, consistent, and correct. Thus, when a discrepancy arises between what the learning system expects and what the human expert apparently does, the learning system is justified in assuming that strategy knowledge is right and domain knowledge is wrong.

[Figure 1: A knowledge acquisition cycle]

Using expectations to generate explanations

We use an expect-explain method to understand observed actions (see Figure 1). As the learning program watches the human expert's actions, it builds a problem solving model which contains all the given facts, derived facts, and hypotheses. Whenever the learning program observes an action, it updates the problem solving model and predicts the next possible actions based on the problem solving model and its strategic knowledge (see section ).

Expectation failure detection

In order for the learning system to explain an expert's action, it tries to connect the action to one of its expected actions. If the observed action is one of those predicted, then the learning system understands the action. The justification of the expected action is a candidate explanation for the expert's action. An observed action may be related to an expected task. Suppose the system expects a task that executes a domain rule.
If an observed action is one of the antecedents of the domain rule, the explainer takes the task's explanation as a candidate explanation of the observed action. In order to find the path between the observed action and the task, the learning program runs meta rules backward. A declarative and explicit representation of meta rules makes it easy for the learning program to run the system backward. When the learning system fails to find a candidate explanation for the observed action, this suggests that the knowledge base is missing the piece of information that prompted the expert to take the observed action. This expectation failure triggers the repair module to refine the incomplete knowledge base by finding missing knowledge that will allow a coherent explanation of the expert's actions.

Context analysis

An explanation is relevant to an observed action if it coheres with the problem solving context as well as addresses the action. To understand an action by finding an explanation, it is important to recognize whether an explanation is relevant to the context as well as to the observed action. Even if a single goal may prompt the human expert to ask a question, when we try to find reverse mappings from the question to possible explanations, there will be many possible answers. To single out the correct explanation among them requires reasoning about the context where the action is taken. In general, experts tend to take an action which is coherent with their problem solving context. For example, physicians usually ask follow-up questions on current hypotheses or symptoms. When human apprentices encounter difficulties in understanding an expert's actions, they have an urge to find an explanation based on the human expert's problem solving context.

Constraining context failures

The learning program observes a human expert's actions one at a time and generates a set of explanations of each action.
An explanation consists of a strategy knowledge source name, such as clarify-finding, and a focus of attention, such as headache (see the knowledge source example in section ). The context analysis program maps an observed action onto the explanation plane (see Figure 2). The explanation plane has two axes. The strategy axis represents all the predefined strategy knowledge sources. The focus axis represents the human expert's actions, derived facts, and hypotheses. An explanation on this plane has a pointer to the set of actions that are explained by the explanation. The context analysis program also keeps track of observed actions until a context failure occurs.

[Figure 2: A context transition guided by a goal transition network]

Since an action can have multiple explanations, the mapping may be one-to-one or one-to-many. An explanation on the explanation plane becomes a sub-context that explains a subset of observed actions. When the context analysis program inputs the set of explanations of the first action, it creates as many sub-contexts as there are explanations in the set. As the context analysis program inputs a succession of explanation sets, it creates many sub-contexts, each of which explains a subset of the observed actions. Since the context analysis program keeps track of observed actions, it can find a sub-context that explains all the actions. Such a sub-context is considered the context of the observed actions. However, this simple method of building a context may produce many redundant context failures even if the human expert changes his or her goal only slightly. It is necessary for the context analysis program to understand such a change and not consider it a context failure. We use a goal transition network to distinguish a natural goal transition from a context failure (see Figure 2).
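The sub-context bookkeeping just described can be sketched as set intersection over successive explanation sets. This is a hypothetical Python rendering; the paper's program additionally consults the goal transition network before declaring a failure.

```python
# Sketch of sub-context tracking (hypothetical names). Each observed action
# arrives with its set of candidate explanations, i.e. (strategy, focus)
# pairs. A sub-context survives as long as it explains every action seen so
# far. An empty explanation set signals an expectation failure; an empty set
# of surviving sub-contexts signals a candidate context failure.

def track_context(explanation_sets):
    contexts = None                       # surviving sub-contexts so far
    for i, exps in enumerate(explanation_sets):
        if not exps:
            return ("expectation-failure", i)
        survivors = exps if contexts is None else contexts & exps
        if not survivors:
            return ("context-failure", i)
        contexts = survivors
    return ("context", contexts)

actions = [
    {("clarify-finding", "headache"), ("test-hypothesis", "migraine")},
    {("clarify-finding", "headache")},
    {("clarify-finding", "headache"), ("explore-hypothesis", "meningitis")},
]
print(track_context(actions))
```

On the three-action sequence above, the ambiguity of the first and third actions is resolved by intersection, leaving (clarify-finding, headache) as the single context that explains all observed actions.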
There are at least three factors which can be justified as rational bases for people to make causal descriptions: contiguity, similarity, and statistical factors [Anderson, 1987]. Our context analysis program currently considers the contiguity and similarity factors to constrain redundant context failures. By the similarity factor we mean similar contexts. Contexts are similar if their strategy axes are the same and their focus axes can be grouped by a known relation. For example, suppose context-1 and context-2 consist of (clarify-finding, surgery) and (clarify-finding, neurosurgery), respectively. Since surgery and neurosurgery are linked by a more-specific relation, a transition from context-1 to context-2 is considered a natural one. This transition describes a human expert's natural stream of diagnosis that attempts to find out more detailed facts progressively. By the contiguity factor we mean a contiguous transition. A transition from context-1 to context-2 is contiguous if their focus axes are the same and the transition between strategies is declared to be plausible. For example, (clarify-finding, X) and (process-finding, X)¹ are different contexts. When a human expert starts to ask a question that is explained by (process-finding, X) after he or she asked questions related to (clarify-finding, X), the observed action cannot be explained by the current context (clarify-finding, X). Hence, the simple context analysis program considers the explanation (process-finding, X) a context failure and tries to repair the failure. However, the transition from (clarify-finding, X) to (process-finding, X) is a stream of the diagnostic reasoning sequence. This transition describes a human expert's reasoning approach that predicts hypotheses related to a symptom after he or she finds out more detailed facts of the symptom. A transition on the goal transition network plane represents such allowable transitions to guide context transitions.
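The two tests above can be written down directly. The sketch below is a hypothetical encoding of the similarity and contiguity checks; the relation table and the plausible strategy pairs stand in for the hand-coded goal transition network.

```python
# Sketch of the natural-transition test (hypothetical encoding). A context
# change is "natural" if the contexts are similar (same strategy, foci linked
# by a known relation such as more-specific) or contiguous (same focus, and
# the strategy pair is declared plausible); otherwise it is a candidate
# context failure.

MORE_SPECIFIC = {("surgery", "neurosurgery")}
PLAUSIBLE_STRATEGY_PAIRS = {("clarify-finding", "process-finding")}

def natural_transition(ctx1, ctx2):
    s1, f1 = ctx1
    s2, f2 = ctx2
    similar = s1 == s2 and (f1, f2) in MORE_SPECIFIC
    contiguous = f1 == f2 and (s1, s2) in PLAUSIBLE_STRATEGY_PAIRS
    return similar or contiguous

print(natural_transition(("clarify-finding", "surgery"),
                         ("clarify-finding", "neurosurgery")))   # True
print(natural_transition(("clarify-finding", "headache"),
                         ("process-finding", "headache")))       # True
print(natural_transition(("clarify-finding", "headache"),
                         ("explore-hypothesis", "migraine")))    # False
```

The first call reproduces the surgery/neurosurgery example (similarity); the second reproduces the clarify-to-process example (contiguity); the last, changing both axes at once, is flagged as a candidate context failure.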
In the current system, the goal transition network is explicitly hand-coded.¹ In Figure 2, action1 and action2 are described by a context {s1,f1}. Because no transitions are known from {s2,f2} to {s3,f2} or from {s1,f1} to {s3,f2} in the goal transition network, only {s1,f1} becomes the context when action2 is observed. Then, the context analysis program projects explanations of action3 for context {s1,f1}. If the context {s1,f1} were unable to cover any explanation of action3, the transition model would guide the induction program in considering the transition from {s1,f1} to {s4,f1} as a natural one. Suppose the projected explanations of an action-n are too far from the current context on the explanation plane and no paths in the goal transition model can explain them. Then the context analysis program considers two possible cases: the expert may have changed his or her goal, or the system may be missing a piece of information. When the context analysis program encounters such a situation, where no explanation coheres with the context, it saves the current context, restarts the context analysis by posting sub-contexts of action-n, and waits to see the next actions. If some of the sub-contexts explain subsequent actions, the context analysis program assumes the expert changed his or her goal. However, if the subsequent actions are explained by the saved context instead of the sub-contexts of action-n, the explanations of action-n are not coherent with the expert's problem solving behavior. The context analysis program recognizes the explanations of action-n as a context failure. If a failure-driven learning system does not employ this context analysis, multiple explanations may mask a learning opportunity in this situation.

¹The first seeks more specific information and the second applies domain rules related to X.
Failure Recovery and Constraints

When the global credit assignment program encounters an explanation failure, it triggers a repair module. The repair module employs top-down and bottom-up approaches to infer new knowledge. While the top-down interpretation fires a set of knowledge sources to predict actions and tasks, the bottom-up interpretation accepts observed actions and finds tasks related to the actions by running the meta rules backward. We employ a combined method (see Figure 1) that first runs meta rules backward [Wilkins, 1988b] and then runs strategy knowledge sources forward. Both searches will suggest plausible paths which connect the observed action and the strategy knowledge sources, and will also post constraints on predicates in the paths. These constraints and the use of the meta-level strategy knowledge enable the repair module to reduce the search space.

Construction of explanations and constraints

When the learning system detects expectation failures and context failures, it invokes a bottom-up repair module that drives meta rules backward from an observed action to tasks (see Figure 1). Since in medical diagnosis the backward search may suffer from combinatorial explosion, the bottom-up repair module runs the meta rules backward from an observed action only to tasks that are used as subgoals of strategy knowledge sources. Hence the backward search is not deep and the number of paths from the action to tasks is small. The bottom-up repair module generates constraints on the tasks as well as finds tasks that are related to the observed action. For example, suppose the human expert asks about seizures and the learning system fails to find explanations of the action. The bottom-up module runs meta rules backward from the observed action to a task such as apply_rule(domain-rule1) and the constraint that domain-rule1 must have seizures in the antecedent. The tasks and the constraints on the argument generated by the bottom-up module are used to determine the subgoals of strategy knowledge sources. The top-down repair module is run after the bottom-up module and determines the heads of strategy knowledge sources based on the context and sub-contexts. This assumes the human expert diagnoses a case in a context-sensitive way. The top-down module instantiates a set of strategy knowledge sources using the subgoals, constraints, and instantiated heads generated by both repair modules. A strategy knowledge source in MINERVA is designed to be instantiated by the bindings of head and subgoal. Suppose the current context is (explore-hypothesis, primary-brain-tumor); then the top-down repair module instantiates the explore-hypothesis strategy knowledge source and adds the constraint that domain-rule1 must conclude primary-brain-tumor. In this example, domain-rule1 has one antecedent, seizures, and must conclude primary-brain-tumor. The constraints are used to remove an instantiated knowledge source from the candidate explanations of the unexplained action. The top-down repair module first removes an instantiated knowledge source whose constraints on a premise are contradictory.² Suppose an instantiated knowledge source has a constraint that domain-rule1 must have an antecedent value(headache, yes). However, if the patient is known not to have a headache, the knowledge source contains a contradictory condition and is removed from the candidate set. If a premise of an instantiated knowledge source is not contradictory and fails to be satisfied, it may be the missing knowledge that is responsible for the failed explanation. Suppose there is no rule that has the desired symptom in the antecedent and the desired disease in the consequent; this means the knowledge base is missing the domain rule. The repair module identifies that a domain rule is missing and induces the domain rule which has the desired disease in the consequent and the symptom in the antecedent, using the case library. Moreover, the repair module produces new domain rules which have more specific antecedents and also conclude the disease. If such a domain rule generates a coherent explanation and its belief value, calculated by Pearl's method [Pearl, 1986] over a library of 112 solved cases, exceeds the threshold, it will be added to the knowledge base.

²MINERVA uses a maintenance system to maintain the dependency structure of facts and hypotheses.

                         Learning   Method-1               Method-2
                         opportu-   expectation  multiple  expectation  context  multiple
Disease                  nities     failure      masking   failure      failure  masking
Bacterial Meningitis        40          11          29         11          8        21
Brain Abscess                3           0           3          0          1         2
Cluster Headache            16          16           0         16          0         0
Fungal Meningitis            0           0           0          0          0         0
Migraine                     5           0           5          0          2         3
Myco-TB Meningitis           6           6           0          6          0         0
Primary Brain Tumor          1           0           1          0          1         0
Subarach Hemorrhage         34          18          16         18         13         3
Tension Headache             3           0           3          0          3         0
Viral Meningitis            26          15          11         15          3         8
Totals                     134          66          68         66         31        37

Table 1: Comparison of failure detection with and without context analysis. Method-2 and Method-1 represent the failure detection method with and without context analysis, respectively.

Experimental Results

We have completed two experiments using the knowledge refinement method described in this paper. The first experiment tests the performance of the failure detection program. The second experiment tests the repair module, which generates new domain knowledge based on the failure, and also tests the diagnostic accuracy of the MINERVA expert system shell [Park et al., 1989] after learning has taken place. We used a collection of 112 solved medical cases that were obtained from records at Stanford Medical Hospital.
The synthetic agent method [Wilkins, 1988a] was used in the experiments: MINERVA with complete domain knowledge is used as a synthetic agent. We created ten incomplete knowledge bases. An incomplete knowledge base is missing all the domain rules which conclude a specific disease (see Table 1).

PARK AND WILKINS 515

Disease                  Incomplete KB   Refined KB
                         Perf. 1         Perf. 2   Perf. 3
Bacterial Meningitis     50              59        61
Brain Abscess            54              54        57
Cluster Headache         55              58        58
Fungal Meningitis        53              53        53
Migraine                 57              57        58
Myco-TB Meningitis       57              61        61
Primary Brain Tumor      60              60        77
Subarach Hemorrhage      53              59        60
Tension Headache         53              53        60
Viral Meningitis         60              62        67
All diseases                             64        75

Table 2: The performance of MINERVA after failure driven learning and context analysis. Each experiment is done over a library of 112 solved medical cases.

In Table 1, learning opportunity means that a missing rule is a part of the synthetic agent's explanation. Multiple masking represents a case where multiple explanations mask an explanation failure. When the failure detection method without context analysis is applied, it can detect 66 out of 134 missing domain rule applications (49.2% accuracy). When the failure detection method with context analysis is used, it can detect 97 out of 134 missing domain rule applications (72.4% accuracy). Hence, the proposed method increased the number of learning opportunities by 47%. In Table 2, under the performance 1 column, the performance of MINERVA with missing knowledge is reported. Each experiment is done over a library of 112 solved cases with an incomplete knowledge base that is missing all the rules that conclude one designated disease. Under the performance 2 column, the improved performance after failure driven learning without context analysis is presented. Each incomplete knowledge base is refined by the failure driven learning method, and each improved knowledge base is used to diagnose the 112-case library.
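The detection figures quoted above can be checked with a few lines of arithmetic; this is just a verification of the paper's reported percentages, not part of the method itself.

```python
# 66 of 134 missing-rule applications detected without context analysis,
# 97 of 134 with it; the paper reports 49.2%, 72.4%, and a 47% gain.
total = 134
without_context = 66 / total        # ~49.25%, reported as 49.2%
with_context = 97 / total           # ~72.4%
relative_gain = (97 - 66) / 66      # ~47% more learning opportunities
```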
Under the performance 3 column, further improved performance after failure driven learning and context analysis is reported. After the system learns new knowledge for each disease, it adds all the knowledge to the existing knowledge base. The results of experiments with these new knowledge bases are shown in the last rows of the performance 2 and performance 3 columns.

Conclusion

We have presented an effective and efficient knowledge acquisition method that employs failure driven learning. The failure detection process has been demonstrated to pinpoint the context failure as well as the expectation failure. It enables the learning program to avoid masking of learning opportunities due to the existence of multiple explanations, thereby improving the performance of the knowledge acquisition process.

516 KNOWLEDGE ACQUISITION

Our failure repair approach generates explanations and constraints using a bottom-up search followed by a top-down instantiation. A subject for future work is to develop a method which can reduce the number of redundant context changes by building a more robust goal transition model.

Acknowledgments

We would especially like to thank Marianne Winslett for providing invaluable comments on this paper. Chip Mayse also provided helpful comments. We would like to express our deep gratitude to Ziad Najem, Lawrence Chachere, and Kok-Wah Tan for their major role in the design and implementation of the MINERVA shell. This research was supported by ONR grant N00014-88K0124.

References

[Anderson, 1987] J.R. Anderson. Causal analysis and inductive learning. In Proceedings of the Fourth International Workshop on Machine Learning, Irvine, CA, 1987.

[Clancey, 1987] W.J. Clancey. Acquiring, representing, and evaluating a competence model of diagnostic strategy. In Contributions to the Nature of Expertise. Lawrence Erlbaum Press, 1987.

[Dietterich and Buchanan, 1981] T.G. Dietterich and B.G. Buchanan. The role of the critic in learning systems.
Technical Report STAN-CS-81-891, Stanford University, CA, 1981.

[Kodratoff and Tecuci, 1987] Y. Kodratoff and T. Tecuci. What is an explanation in DISCIPLE? In Proceedings of the Fourth International Machine Learning Workshop, pages 160-166, Irvine, CA, 1987.

[Mitchell et al., 1985] T.M. Mitchell, S. Mahadevan, and L.I. Steinberg. LEAP: a learning apprentice for VLSI design. In Proceedings of the National Conference on AI, pages 573-580, Los Angeles, CA, August 1985.

[Park et al., 1989] Y.T. Park, K.W. Tan, and David Wilkins. MINERVA: A Knowledge Based System with Declarative Representation and Flexible Control. Working Paper UIUC-KBS-89-01, Dept. of Computer Science, University of Illinois, Urbana, IL, 1989.

[Pearl, 1986] J. Pearl. On evidential reasoning in a hierarchy of hypotheses. Artificial Intelligence, 28:9-15, 1986.

[Wilkins, 1988a] D.C. Wilkins. Apprenticeship learning techniques for knowledge based systems. Technical Report STAN-CS-88-1242, Stanford University, CA, 1988.

[Wilkins, 1988b] D.C. Wilkins. Knowledge base refinement using apprenticeship learning techniques. In Proceedings of the National Conference on AI, pages 646-651, St. Paul, MN, August 1988.
A Design Based Approach to Constructing Computational Solutions to Diagnostic Problems

D. Volovik & I. A. Zualkernan, Department of Computer Science, University of Minnesota, Minneapolis, Minnesota USA
P. E. Johnson, Department of Information & Decision Sciences, University of Minnesota, Minneapolis, Minnesota USA
C. E. Matthews, IBM Application Business Systems, Rochester, Minnesota USA

Abstract

Troubleshooting problems in real manufacturing environments impose constraints on admissible solutions that make the computational solutions offered by "troubleshooting from first principles" and the conventional experience-based expert systems approaches infeasible. In this paper we present a computational theory for a solution to these problems that is based on the Principle of Locality and exploits the domain-specific weak methods of troubleshooters and the debugging knowledge of designers. The computational theory is evaluated by generating focus of attention heuristics for a moderately complex digital device.

1. Computational Problem

We are interested in finding computational solutions to diagnostic problems as they occur in real manufacturing environments [Johnson 89]. This class of problems introduces two constraints on any computational solution: 1) the devices are large and complex, which precludes any solution that relies on a complete simulation or on enumeration of the various fault propagation paths through one or more abstractions of the device (e.g., [Genesereth 84], [de Kleer 87], [Reiter 87]), and 2) the life-cycle of these devices is short, which makes any solution (e.g., [Freiling 85]) that relies on troubleshooting knowledge specific to a particular device infeasible. The first constraint suggests that any computational solution to this class of problems should have a uniform mechanism for handling the order of complexity of these devices.
The second constraint precludes the use of any troubleshooting knowledge that is compiled for a specific device as an adaptation to the task of troubleshooting that device.

2. Computational Theory

The task of troubleshooting complex large-scale devices can be viewed as a two-step process: 1) determine an appropriate search space in which the fault is "local" [Davis 84], and 2) apply weak troubleshooting methods on this space to locate the fault. A search space is defined by a specification of appropriate "pathways of interaction" that are exploited by a weak troubleshooting method. Although the definition of our computational problem precludes the use of device-specific troubleshooting knowledge, device-independent but domain-specific troubleshooting methods [Reed 88] developed by troubleshooters can be a valid component of a computational solution. However, a solution that relies solely on these methods, given the constraints of the computational problem, is not adequate. There are two difficulties with such a solution: 1) Domain-specific weak methods are used by troubleshooters only when the device-specific heuristics fail; this means that the best performance one can expect from a solution that relies solely on these methods would be that of a troubleshooter's performance on a new device. 2) The locus of search spaces that can be used by troubleshooters is restricted by the device representations available in their task environment. The representations of a device available to troubleshooters in real manufacturing environments, however, often consist of just the physical device itself (which leads to the use of a search space based on physical pathways of interaction) and at most one other level of abstraction (such as a schematic for digital designs).
This means that any solution that relies solely on weak methods of troubleshooting will have the restriction that it can only work for faults that are 'local' in search spaces that can be derived from these representations. A large class of interesting faults are not 'local' in a search space derived from the physical representation of a device [Davis 84]. Another source of knowledge that is admitted by the definition of the computational problem is that of designers conducting the task of debugging, which, though different from troubleshooting, has components that are similar to the troubleshooting task. The designers' task environment has a rich variety of representations of the device. In any design process, these consist of various by-products of the design. For example, in the case of digital hardware design, the designers may use requirements, functional specifications and block level descriptions in addition to the schematics and the physical representation of a device.

VOLOVIK ET AL. 517
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Our solution to the computational problem described above is to exploit the knowledge of how designers debug designs in addition to the device-independent methods and search spaces of the troubleshooters. This solution has the advantage that it can detect faults that are not only 'local' in the search spaces of troubleshooters, but ones that are 'local' in the various spaces that result from the combination of the search spaces used by designers for debugging a class of devices. This solution is based on a computational theory that can be stated as follows: 1) The task of troubleshooting is to determine the appropriate search space in which a fault is local and then apply a domain dependent weak troubleshooting method to the search space until the fault is found. 2) The source for domain dependent weak troubleshooting methods is the task of troubleshooting in the domain.
3) The source for appropriate search spaces is the task of debugging by designers in the domain during the design phase. The construction of a computational solution based on the computational theory consists of the following steps: 1) Determine weak troubleshooting methods used by troubleshooters in a domain. 2) Determine search spaces used by designers in debugging designs in the domain. 3) Use the "principle of locality" as an organizational principle to construct a solution that applies weak troubleshooting methods of troubleshooters to search spaces used by the designers.

3. Experimental Validation of the Computational Theory

To assess the validity of the computational theory described above, we chose the domain of small card digital design. This is an appropriate domain under the definition of the computational problem, as small card designs are fairly complex devices (10^2 to 10^3 components at the schematic level) and the life-cycles of these cards are short; typical life is ~2 years.

3.1 Constructing a Computational Solution for Digital Devices

Previous work done in the domain of troubleshooting digital devices indicates that troubleshooters use a variety of domain specific weak troubleshooting methods [Reed 89]. As a first test of the computational theory, we concentrate on a method called the initial focus of attention method. This is a method that troubleshooters use to initially focus on a subset of the device, given certain symptoms. The symptoms in the case of digital design typically consist of test failures on a specific device.
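The two-step process of the computational theory (pick a search space in which the fault is plausibly local, then apply a weak troubleshooting method over it) can be sketched in a few lines. Everything concrete here is a hypothetical illustration, not the paper's implementation: the `SearchSpace` encoding, the toy "path following" criterion, and the example spaces and symptoms are all invented for exposition.

```python
from dataclasses import dataclass

@dataclass
class SearchSpace:
    name: str
    # component -> set of symptoms that component can explain
    explains: dict

def path_following(space, symptoms):
    """Toy weak method: a fault is 'local' in this space if a single
    component explains every observed symptom."""
    for component, covered in space.explains.items():
        if symptoms <= covered:
            return component
    return None

def troubleshoot(symptoms, spaces):
    """Step 1: try candidate search spaces in order.
    Step 2: apply the weak method until the fault is localized."""
    for space in spaces:
        fault = path_following(space, symptoms)
        if fault is not None:
            return space.name, fault
    return None   # fault is not local in any available space

physical = SearchSpace("physical", {"gate-7": {"s1"}})
functional = SearchSpace("functional", {"module-c": {"s1", "s2"}})
result = troubleshoot({"s1", "s2"}, [physical, functional])
# the fault is not local in the physical space, but is in the functional one
```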
As a first step in the validation process, a designer (with 10 years of experience constructing small card digital designs) was asked to construct a design for an Interrupt Control Coprocessor with the following features:

- One level prioritized interrupts
- Subroutine handling capabilities
- LIFO stack for return address storage
- Interrupt masking by software and hardware
- Stack full flag

The resulting device consisted of 254 components (LSIs and gates) with approximately 104 signals. In addition to designing the device, the designer was also asked to write tests (see endnote #1) for the device (a normal part of the design process). The designer was asked to think aloud as he conducted the design, and these comments (verbal protocols) were tape-recorded along with the various by-products generated by the designer as he constructed the design. An analysis of the verbal protocols during the process of debugging the design indicated that the designer used the search space schema for initial focus of attention shown in Figure 1. This space is based on pathways of interaction that go across the various representations of the design available to the designers (e.g., functional specifications, workbook level design, and tests). Given the nature of the search space used by the designers and the domain specific weak troubleshooting methods of troubleshooters, the principle of locality was used to construct a computational model that applied the "path following" weak method to the search space. To apply the principle of locality we identify the types of faults that are local in component spaces of the search space. For example, in the space defined by test specifications, immediately obvious faults are patterns of test failures. These failures are detected in the space through observations of test outcomes generated by the testing equipment. Interactions of tests and specifications establish pathways of interaction between these two spaces.
If pathways of interaction map local faults in one space (e.g., specification faults) into local faults in another space (e.g., test failures), propagating the faults from one space into another along the pathway is desirable. This allows detection of specification failures in a specification space by observing test failures in the test space.

[Figure 1. Search Space Schema for Focus of Attention]

Similarly, failures of modules in the workbook-level specification space are detectable through pathways of interaction between functional specifications and workbook-level modules. Single-stepping from failing tests to failing specifications to failing modules is an example of a weak search method for traversing along the pathways of interaction. When local faults in one space cannot be mapped along pathways of interaction into local faults in another space, different pathways of interaction are required for successful diagnosis. The suitable pathway should preserve the locality of faults. An instance of the search space schema for the focus of attention method for the interrupt controller (see Figure 2) demonstrates how the principle of locality is used to identify appropriate pathways of interaction. For example, test T3 verifies specifications C and E, and test T8 verifies specifications A and C. Specification C is implemented by module c, specification E by modules d and e, and specification A by module a. If tests T3 and T8 both fail at the same time, specifications C and/or E are failing at the same time as specifications A and/or C are failing. Assuming singular faults and disjoint modules, one of modules c, d, or e is failing at the same time as one of the modules a or c is failing. Thus, module c is failing and specification C is failing. On the other hand, test T7 verifies specification B, which is implemented by module b.
If both T7 and T3 fail, b is failing at the same time as c, d, or e is failing, which is not possible under the assumptions (singular faults and disjoint modules). Fault locality is not preserved along the pathways. Different pathways of interaction that map test spaces into workbook-level spaces can be constructed by identifying signals (on which tests fail) with the modules that generate these signals. In workbook-level spaces, for example, failure of tests T7 or T3 can be due to any of the signals I, rd, halt, stf, mask, SEQ, pend0 or SAV failing. In test spaces, T7 failure is detected on rd and T3 on halt. In workbook-level spaces rd is generated by d or e and halt is generated by d or e; thus either d or e, for example, is faulty. An application of the path following method to the above search space results in a computational solution that is based on the following heuristics: (i) Start with a set of failing tests. (ii) Use interactions of tests and functional specifications to propose a set of failing high-level functional specifications, given a set of failing tests. (iii) Use interactions of specifications via aggregation and decomposition to propose a set of failing low-level functional specifications. (iv) Use an interaction across representations of low-level functional specifications and high-level workbook modules to propose a set of failing high-level workbook modules. And: (i) Start with a set of failing tests. (ii) Use the mapping between interactions of tests and the functional specifications they check, and among modules via signals/busses, to propose a set of failing high-level workbook modules, given a set of failing tests.
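The locality reasoning in the worked example can be sketched directly, under the stated assumptions (singular faults, disjoint modules). The test/spec/module and signal maps below are taken from the interrupt-controller instance in the text; the function names and the fallback encoding are illustrative, not the paper's implementation.

```python
# Pathways from the worked example:
test_checks = {"T3": {"C", "E"}, "T7": {"B"}, "T8": {"A", "C"}}
spec_modules = {"A": {"a"}, "B": {"b"}, "C": {"c"}, "E": {"d", "e"}}

# Fallback pathway: the signal each test fails on, and its generating modules.
test_signal = {"T7": "rd", "T3": "halt"}
generated_by = {"rd": {"d", "e"}, "halt": {"d", "e"}}

def candidate_modules(test):
    """Modules implementing any specification the test verifies."""
    return set().union(*(spec_modules[s] for s in test_checks[test]))

def localize(failing_tests):
    """Intersect per-test candidate sets. An empty intersection means
    fault locality is not preserved along the specification pathways,
    so fall back to the signal pathways."""
    common = set.intersection(*[candidate_modules(t) for t in failing_tests])
    if common:
        return common
    return set.intersection(*[generated_by[test_signal[t]]
                              for t in failing_tests])

# T3 and T8 failing together isolates module c (hence spec C); T7 and T3
# share no module via specifications, but their failing signals rd and
# halt both point to d or e.
```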
[Figure 2: Instance of Search Space Schema for Focus of Attention (signals/buses, tests, specs, modules)]

The above heuristics can be implemented by the following two rules:

IF a substantial number of tests fail and a substantial number of test failures are caused by a variety of different test signals
THEN among all the tests that fail and that point to a possible failure of a set of functional units (areas) on the board, choose an area or areas that the majority of tests point to
ACTION the areas most tests point to are likely to contain a fault.

IF a substantial number of tests fail and a substantial number of test failures are caused by a small number of different test signals
THEN among all the functional units or areas on the board that generate this small set of failing signals, choose the area that generates the most such failing signals
ACTION the areas that generate the most failing signals are likely to contain the fault, and the signals are good starting test points.

Although in this paper we discuss only the initial focus of attention method, the corresponding search spaces for other domain dependent weak troubleshooting methods are given in Appendix A.

3.2 Evaluation of the Computational Solution

The validation of the computational solution was carried out as summarized in Figure 3. A simulation model of the board as designed by the designer was constructed using a commercially available simulation environment running on an IBM PS/2 Model 70. A variety of faults such as bridge faults, stuck-at faults and general component failures were introduced in the simulation. Bridge faults arise from shorts (see [Davis 84] for a description). The stuck-at faults consist of an input or output of a component held to a constant value. Component failures were simulated by either eliminating the component or substituting one component for another. For example, a failed NOT gate behaves like a wire.
For each fault introduced, tests created by the designer were run on the simulation to produce test results for that fault. Test results for each fault were used as the input to the computational solution, which generated predictions about the area on the board where the fault exists (focus of attention).

[Figure 3. Process used to Evaluate the Computational Solution: apply initial focus of attention (test results as inputs); repeat the process for several possible faults in the design, for preselected common types of faults]

3.3 Results

The results from testing the computational solution were evaluated by using the following two criteria:

Effectiveness = Number of Correctly Predicted Faults / Total Number of Faults Introduced

Power = Number of Components in the Predicted Sub-set / Total Number of Components

Table 1 gives the distribution of the various types of faults introduced on the board.

[Table 1. Distribution of Faults Introduced to the Simulation Model (columns: Fault Type, Number of Faults Introduced, Number of Faults Caught; entries lost in extraction)]

Of the 27 faults introduced, the overall effectiveness was 77.8%. The effectiveness measure of the computational solution demonstrates that the generated troubleshooting heuristics correctly predict a functional area on the board where the fault is located for 77.8% of all faults that were introduced. The power of the focus of attention method is given in Table 2.

[Table 2. Power of Focus of Attention Method (entries lost in extraction)]

The reduction-in-search-space measure from Table 2 demonstrates that the focus of attention method (on average) reduces the search area on the board where the fault is located to 27.9% of all the components (area) of the board. Comparing the best, worst and average cases in Table 2 demonstrates that the computational solution might be imprecise, reducing the search only to 70.5%, or very accurate, reducing the search to 8.8% of all components.

4. Conclusions

We are attempting to construct solutions to an interesting class of computational problems that impose unique constraints on an admissible solution. In this paper we have proposed a computational theory that exploits the designer's knowledge obtained from the process of designing a device, and combines it with domain specific weak troubleshooting methods by using the principle of locality as the organizational principle. We have also presented results on the feasibility of the computational theory by applying it to the process of designing digital devices to generate focus of attention heuristics. The initial results, although limited, seem promising. We are in the process of extending the approach to incorporate additional weak methods.

Bibliography

[Davis 84] R. Davis, "Diagnostic Reasoning Based on Structure and Behavior," Artificial Intelligence, No. 24, Volumes 1-3, 1984, pp. 347-410.

[Freiling 85] M. Freiling, J. Alexander, S. Messick, S. Rehfuss, and S. Shulman, "Starting a Knowledge Engineering Project: A Step-by-step Approach," The AI Magazine, No. 6, Vol. 3, 1985, pp. 150-164.

[Genesereth 84] M. R. Genesereth, "The Use of Design Descriptions in Automated Diagnosis," Artificial Intelligence, No. 24, 1984, pp. 411-436.

[Johnson 89] P. E. Johnson, D. Volovik, I. A. Zualkernan and C. E. Matthews, "Design Knowledge for Discovering Troubleshooting Heuristics," in Proceedings of IASTED Symposium on Expert Systems Theory and Applications, June 26-28, 1989, Zurich, pp. 17-21.

[de Kleer 87] J. de Kleer and B. C. Williams, "Diagnosing Multiple Faults," Artificial Intelligence, No. 32, 1987, pp. 97-130.

[Reed 88] N. E. Reed, E. R. Stuck, and J. B.
Moen, "Specialized Strategies: An Alternative to First Principles in Diagnostic Problem Solving," Proceedings of the Seventh National Conference on Artificial Intelligence, Vol. 1, August 1988, pp. 364-368.

[Reiter 87] R. Reiter, "A Theory of Diagnosis from First Principles," Artificial Intelligence, No. 32, 1987, pp. 57-95.

Endnote #1. The typical tests written by designers are used by designers for validation and hence are not sufficient for troubleshooting, as they only check top level functional specifications.

Appendix A

[Appendix A figure: search spaces for other domain dependent weak troubleshooting methods, e.g., interaction to track interfaces on the schematic to a remote point along the sub-system functional level]
Niall Murtagh and Masamichi Shimura
Department of Computer Science, Tokyo Institute of Technology
2-12-1 O-okayama, Meguro-ku, Tokyo 152, JAPAN
email: niall@cs.titech.ac.jp

Abstract

Conventional methods for the parametric design of engineering structures rely on the iterative re-use of analysis programs in order to converge on a satisfactory solution. Since finite element and other analysis programs require considerable computer resources, this research proposes a general method to minimize their use, by utilizing constraint-based reasoning to carry out redesign. A problem-solver, consisting of constraint networks which express basic relationships between individual design parameters and variables, is attached to the analysis programs. Once an initial design description has been set out using the conventional analysis programs, the networks can then reason about required adjustments in order to find a consistent set of parameter values. We describe how global constraints representing standard design behavioral equations are decomposed to form binary constraint networks. The networks use approximate reasoning to determine dependencies between key parameters and, after an adjustment has been made, use exact relationship information to update only those parts of the design description that are affected by the adjustment. We illustrate the ideas by taking as an example the design of a continuous prestressed concrete beam.

1 Introduction

Parametric design refers to the design of engineering objects, in which the parameters and variables describing the object are known, and the problem is one of finding a consistent set of parameter values which conform to specified requirements. Conventional parametric design algorithms for civil and mechanical engineering structures use finite element and other structural analysis programs to propose an initial solution.
If the solution is infeasible, redesign is carried out in an iterative fashion in order to converge on a feasible solution. The analysis programs have to be treated as black boxes with fixed input and output, and hence each redesign iteration requires a complete re-processing, irrespective of how small the change made to the previous design description. Furthermore, these analysis programs are often large and require considerable computer resources [Murthy and Addanki, 1987]. Therefore, any reduction in their usage through the application of intelligent redesign methods is advantageous for design efficiency. Previous research into structural engineering design has dealt with meta-level control [Orelup et al., 1988], domain independence [Dixon et al., 1987], and innovative design [Murthy and Addanki, 1987], but has not applied intelligent technology to the fundamental analysis part of parametric design. In this research, we replace the analysis programs in redesign with a knowledge-based constraint reasoning process. This provides the double advantage of minimizing the use of expensive analysis programs, and of being able to update only those parameters or variables in the design description which require updating. Constraint-based techniques have been shown to improve problem-solving capabilities for applications such as combinatorial problems and vision [Montanari, 1974], [Mackworth, 1977], electrical circuit analysis [Sussman and Steele, 1980], and preliminary structural design [Sriram and Maher, 1986]. A constraint-based approach reflects the view that design is essentially a process of integrating constraints from a variety of sources [Mostow, 1985]. In the present system, however, the constraints considered are restricted to those that can be conveniently handled, i.e., numeric constraints on design parameter values.
We explain how constraint networks are formed for the type of application considered here, and we propose an architecture to accommodate a constraint-based problem solver together with structural analysis programs. The actual design process is then detailed, showing how backward reasoning enables dependencies between key parameters to be determined, and how forward propagation through the constraint network obviates the need to re-access analysis modules. We illustrate our techniques by referring to the design of a statically indeterminate prestressed concrete beam.

MURTAGH AND SHIMURA 505

2 Design Process Model

A conventional parametric design problem is described by parameters and variables, which can be classified according to the following three groups:

Initial Parameters: These include the user specification together with certain other unknown parameters whose values must be estimated before processing can be carried out to determine performance parameter values.

Intermediate Variables: These are determined as part of the processing of the initial parameters.

Design Performance Parameters: These are provided either by the user or by engineering codes, and they determine whether the solution proposed by the initial parameter values is adequate or not.

Since it is not possible to directly determine the unknown parameters from the specification and required performance, conventional design is carried out in an iterative generate-and-test manner, using analysis programs to generate a complete description before testing the performance parameters.
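The conventional generate-and-test loop can be sketched as follows, to make concrete the inefficiency the paper targets: every redesign iteration re-runs the full analysis black box, however small the parameter change. All names, values, and the naive adjustment rule are hypothetical placeholders, not part of the paper's system.

```python
def analyze(params):
    """Stand-in for a full finite-element / structural analysis run."""
    return {"stress": params["load"] / params["area"]}

def acceptable(performance, limit=10.0):
    """Stand-in performance check against an assumed stress limit."""
    return performance["stress"] <= limit

def generate_and_test(params, max_iters=20):
    runs = 0
    for _ in range(max_iters):
        performance = analyze(params)   # complete re-processing each time
        runs += 1
        if acceptable(performance):
            return params, runs
        params["area"] *= 1.1           # naive redesign adjustment
    raise RuntimeError("did not converge")

design, analysis_runs = generate_and_test({"load": 120.0, "area": 10.0})
# three full analysis runs are needed before the stress limit is met
```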
The analysis programs cannot be fully replaced by more flexible reasoning systems, such as constraint-based problem solvers, because the analysis programs have certain capabilities (for example, the handling of various data types) which constraint-based reasoning systems do not possess; e.g., analysis programs process both single and multi-valued parameters, such as arrays of loads on a multi-span beam, while constraint-based systems can only reason about single-valued parameters. Therefore, we use both conventional and constraint-based processing of parameters. Our system initially follows the conventional algorithm, using analysis programs to generate an initial complete design description. However, in subsequent redesign, only extreme values need be considered for each parameter, so that constraint-based reasoning alone can complete the design problem.

3 Constraint Network Model

A standard constraint-satisfaction problem is characterized by a set of variables or parameters, each of which has a domain, finite or infinite, of possible values. To this is added a set of domain constraints and a set of inter-parameter constraints.

Domain Constraints are unary constraints, c(p), where c is a function mapping a parameter, p, onto a particular domain of allowable values. In our model, these constraints act on the initial and performance parameters. Intermediate variables do not have unary constraints and will accept any value propagated to them. Two levels of unary constraint exist:

- Unary limit-constraints restrict the parameter to a multi-value domain within fixed limits, which cannot be adjusted during design. They may be obtained from a domain expert or textbook, or may be specified by the user.
- Unary value-constraints assign a single numerical value to each initial parameter, i.e., they further tighten the limit-constraints so that a single value is associated with the parameter prior to propagation.
These value-constraints may be supplied either by the user as part of the design specification, in which case the limit-constraints ensure that the value is in the allowable domain, or they may be supplied by the system, through default or other methods. System-supplied unary value-constraints may be automatically adjusted during redesign, whereas user-supplied constraints are considered as part of the specification and may not be altered.

Inter-Parameter Constraints are n-ary constraints, c(p1, p2, ..., pn), where c is a function relating parameters p1, p2, ..., pn, n >= 2. These constraints, representing the laws of statics, geometry, mathematics, etc., link all the parameters/variables in the design description. They are used to propagate the value-constraints on the initial parameters, i.e., they force an n-ary constraint between the initial and performance parameters. If the constraints cannot be satisfied, adjustments have to be made. This is where the manner of representation of the inter-parameter constraints can have a big influence on the efficiency of the design. We now describe the treatment of these constraints.

3.1 Constraint Decomposition

In conventional engineering design methods, global inter-parameter constraints are represented by behavioral equations contained in the analysis programs which process the input description and check the result against performance requirements. This can be regarded as applying an n-ary constraint to the initial and performance parameters, where n is the number of parameters involved. We convert these n-ary constraints into binary constraint networks for the following reasons: In conventional methods which use n-ary constraints as in Figure 1(a), specific or generalized dependency relations between initial and performance parameters must be explicitly stated, e.g., factor x imposed loading = tensile stress.
By linking all parameters together in a binary network, part of which is shown in Figure 1(c), approximate dependency relations can be worked out by the system.

With a few "n-ary" constraints linking many parameters, as in Figure 1(a), if one parameter is adjusted then a complete re-computation, involving all n parameters, is generally required to determine the adjustment necessary for the other parameters. The "n-ary" constraint gives no information regarding individual parameter-to-parameter relations, so that partial re-adjustment of the design description is not possible. However, a binary constraint network directly relates individual parameters, enabling local adjustments to be carried out and propagated only to those parameters affected by a change.

506 KNOWLEDGE ACQUISITION

[Figure 1: Constraint Decomposition. (a) Initial parameters linked to performance parameters by n-ary constraints (representing relations or equations involving n parameters). (b) A ternary constraint on intermediate variables: prestress force / section area = axial stress (represents a relation or equation involving three variables). (c) Binary constraints between section proportion, area, and axial stress (representing relations between pairs of variables or parameters).]

The "n-ary" constraints, as in Figure 1(a), are decomposed by firstly introducing intermediate variables as "stepping-stones" in the propagation process, i.e., we explicitly represent all steps in the evaluation of the behavioral equations as nodes in the constraint network. This reduces the "n-ary" constraint to simpler forms, e.g., the ternary constraint shown in Figure 1(b), which processes two variable-values to produce one new variable-value. Secondly, these simpler constraints are then further decomposed into binary constraints, or relations between pairs of nodes.
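The decomposition just described can be written out in a few lines. A minimal Python sketch, using the ternary relation of Figure 1(b), axial stress = prestress force / section area: holding one argument fixed turns the ternary constraint into a binary, one-to-one link between the remaining pair of nodes (function names and numbers are invented for illustration):

```python
# Ternary constraint: two variable-values in, one new variable-value out.
def forward(force, area):
    return force / area

# Binary links obtained by fixing one argument: each relates exactly
# two nodes, so an adjustment transfers node-to-node.
def back_to_force(stress, area):
    return stress * area       # stress -> force, with area held fixed

def back_to_area(stress, force):
    return force / stress      # stress -> area, with force held fixed

stress = forward(2000.0, 0.5)          # forward evaluation: 4000.0
new_force = back_to_force(3200.0, 0.5) # transfer a required stress change
                                       # to force alone: 1600.0
```

Only two nodes are ever involved in a transfer, which is exactly what permits the local adjustments described above.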
After the first step the constraints still represent equations, as in Figure 1(b), but after the second, we only have one-to-one relations between nodes as in Figure 1(c), and we can directly transfer an adjustment from one node to another, without having to consider any more than two nodes at a time. In our system binary relations similar to those shown in Figure 1(c) cater for addition, multiplication, etc., and power operators, as well as the reverse operators. Certain complications that occur in the adjustment transfer process are dealt with in section 4.1.

3.2 System Architecture

An expert system such as the one proposed here will generally be built on top of existing procedural programs, not the other way around. Therefore, it is necessary to adapt constraint-based ideas to suit the demands of conventional programs. As shown in Figure 2, the architecture consists of a 2-level structure, with the constraint network set out in the upper, shallow-knowledge level, and conventional analysis modules located in the lower, procedural knowledge level. The links between nodes in the upper level consist of simplified direct dependency relations, as explained in the previous section, while those which pass through the lower level utilize traditional software tools, such as finite element programs, etc. Unary constraints are shown acting vertically down on the upper level. This separation between top and bottom levels reflects the distinction in conventional engineering terminology between "design" and "analysis". Design concerns initial decisions, outline data, etc., and uses heuristics or shallow knowledge, while analysis concerns precise numerical values and uses procedural knowledge embedded in algorithmic programs. User communication and system output take place through the upper design level, which is also used in redesign to carry out propagation in forward and backward directions.
The lower level analysis modules are used in setting out a full initial description, and typically link several nodes together, i.e., they form "n-ary" constraints, which, for clarity, we illustrate by dotted-line arrows.

MURTAGH AND SHIMURA 507

[Figure 2: System Architecture. Unary constraints act vertically down on the upper level, where initial, intermediate, and performance parameters are linked by the binary constraint network; conventional program modules occupy the lower level; arrows show initial design propagation.]

4 Problem-solving Process

4.1 Application of Constraints

A design problem is commenced by the application of unary constraints. Allowable domains and specific values are assigned to the initial parameters, and allowable domains are assigned to the performance parameters. The user specification provides some of these unary constraints and the remainder are supplied by the system. This is done either through system-dependent default values, or through constraint derivation, which generalizes standard textbook heuristics linking certain parameters, e.g., span/depth ratios. (For details of the constraint derivation algorithm, see [Murtagh and Shimura, 1989].)

The values assigned to the initial parameters are then propagated using the "n-ary" constraints. This resembles standard propagation except that in the initial propagation the lower level algorithmic modules are used and multiple parameter values are simultaneously operated on. A complete description is thus obtained, and its validity is tested by the performance parameters. Generally this first complete description will be invalid due to conflict between the propagated constraint values and the allowable domains of the performance parameters, and a redesign phase is automatically commenced.

In the redesign phase, the system first determines the dependency information necessary for redesign, i.e., it back-propagates from the critical performance parameter to determine how much the initial parameters should be adjusted in order to achieve the required alteration in the critical performance parameter.
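The redesign cycle can be sketched as local propagation over a toy binary dependency graph. This Python fragment is purely illustrative (the actual networks are in Prolog with C analysis modules; the node names and formulas below are invented for the example): after an initial parameter is adjusted, only the nodes downstream of it are recomputed, rather than re-running a full analysis.

```python
# Toy dependency graph: load and span are initial parameters,
# moment is an intermediate variable, stress a performance parameter.
deps = {"load": [], "span": [], "moment": ["load", "span"], "stress": ["moment"]}
rules = {
    "moment": lambda v: v["load"] * v["span"] ** 2 / 8,  # invented formulas
    "stress": lambda v: v["moment"] / 0.05,
}

def propagate(values, changed):
    """Local forward propagation: recompute only nodes downstream of
    the changed parameter, visiting nodes in topological order."""
    updated = {changed}
    for node in ["moment", "stress"]:
        if any(d in updated for d in deps[node]):
            values[node] = rules[node](values)
            updated.add(node)
    return values

vals = propagate({"load": 5.0, "span": 10.0}, "load")
```

With `load = 5.0` and `span = 10.0`, this fills in `moment = 62.5` and `stress = 1250.0`; a parameter outside the changed node's downstream set would simply keep its current value, which is the point of local propagation.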
Thus, it obviates the need to set out parameter-to-parameter dependency information, although parameter ordering [Dixon et al., 1987], specifying which parameter to adjust first, is still required.

After the adjustments required in the initial parameters have been established, one initial parameter is selected for adjustment in accordance with the current redesign strategy. The adjustment is carried out, and forward propagation through the network updates the nodes related to the adjusted initial parameter.

While the initial forward propagation can be regarded as a global propagation, the redesign phase is based on local propagation. Local propagation has the advantage that it is fast because it uses only simple relationships and deals only with single-valued nodes. However, it relies on approximate reasoning in back-propagation, due to the presence of loops or cycles in the network. Referring to Figure 3(a), in back-propagation where more than one link leads out of the node N4, the required adjustment in N4 is propagated to both N2 and N3, an adjustment in either one being sufficient to produce the required adjustment in N4. Where the two paths subsequently lead into a single node N1, a cycle or loop occurs. In general, the two adjustments back-propagated to N1 along the two paths will not be the same. In order to combine the values, we propose an empirically derived approximate equivalence equation, supplemented with a self-updating heuristic correction factor, F:

    proposed adjustments: x, y
    executed adjustment = F x [xy / (x + y)]

The heuristic factor, initially set to 1, and updated after adjustment, is used to speed up the adjustment process, i.e., it attempts to move immediately to the top of the current hill.
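The combining rule is compact enough to state directly. A Python sketch of the approximate equivalence equation, with the heuristic factor F defaulting to its initial value of 1 (the adjustment values in the example are invented for illustration):

```python
def combine_adjustments(x, y, F=1.0):
    """Combine two adjustments back-propagated to the same node along
    different paths of a loop: executed adjustment = F * [x*y / (x + y)].
    F is the self-updating heuristic correction factor, initially 1."""
    return F * (x * y) / (x + y)

# Two paths propose different adjustments to node N1:
executed = combine_adjustments(0.3, 0.6)   # 0.3*0.6/0.9 = 0.2
```

After the adjustment is executed and its actual effect observed, F would be updated so that a subsequent adjustment of the same parameter converges more quickly.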
This factor is problem-specific and path-specific. Hence, although it has no usefulness in the first adjustment, since its value is 1, it performs better than over-generalized statistical factors and will enable convergence to be quickly obtained in a subsequent adjustment of the same parameter. In forward propagation, where multiple adjustments are passed to a single node, they are superimposed to give the exact required adjustment. The difference between forward and backward propagation, while not obvious from Figure 3(a), can be seen more clearly in Figure 3(b), which shows the links passing through operators, which act like ternary constraints.

[Figure 3: Loop in Binary Constraint Network. (a) Node relations represented by binary links; (b) node relations represented by operators. Arrows show direction of forward propagation.]

5 Implementation and Results

The system at present comprises four partially overlapping constraint networks, each network joining the initial parameters with one particular performance parameter. These networks are written in Prolog, while the structural analysis modules are in C. The analysis modules in the current system carry out bending moment and shear computations, prestress system computations, and ultimate strength verification. Table 1 shows the numeric data for the design of a 3-span prestressed concrete beam.

Table 1: Design Description for Prestressed Beam

    user specification
    parameter name             value
    no. of spans               3
    span lengths               10 m, 18 m, 10 m
    end fixities               cantilever, pinned
    dead load                  5 kN/m
    live load                  6 kN/m

    computed parameters        initial value    final value
    concrete strength          35 N/mm²         40 N/mm²
    section width              300 mm           450 mm
    section depth              650 mm           800 mm
    prestress steel strength   1950 N/mm²       1950 N/mm²
    cover to steel†            30 mm            103 mm
    balanced load factor‡      0.4              0.486

    †Steel cover determines max prestress tendon sag.
    ‡Proportion of load balanced by prestress.
The design was carried out using the constraint network for redesign and also using conventional techniques, i.e., using the analysis programs only. The same results were obtained for each case, since the constraint networks exactly reflect the processing of the analysis programs. However, the conventional design method necessitated accessing each of the analysis programs a total of 18 times, while the constraint-based method only required the use of the analysis programs once, in the initial design.

6 Comparison with Related Research

Relaxation [Leler, 1988] involves making an initial guess at the value of an object, estimating the resultant error, and making new guesses in order to converge on a rational value. We have adapted this technique to the demands of engineering design, focusing on how to represent relations between the objects, and how to improve the convergence methods.

In [Chan and Paulson, 1987], constraints are used for determining design descriptions as well as checking proposed descriptions for a structural engineering application. However, procedures are required for each intended use of the constraint. We also use constraints in different ways, but instead of setting out numerous procedures, we avail of simplified relationships represented by constraint networks. One of the ways in which we utilize the constraints, in addition to checking, is in establishing parameter-to-parameter dependency information, using local constraint relationships together with problem-specific heuristic factors. Thus we do not need to set out explicit dependency relationships as in more conventional systems [Dixon et al., 1987].

In [Mackworth, 1977] consistency algorithms are proposed for attaching feasible values to variables, and for filtering infeasible values in constraint networks.
Only unary and binary constraints are treated, and the inter-variable constraints are assumed to be inexpensive, unlike those represented by analysis programs in engineering design. Hence, our research focuses on the nature of the inter-variable constraints.

In [Dechter and Pearl, 1987], the variable-domains consist of a finite number of discrete values, enabling candidate-ordering and/or elimination to be used. These techniques cannot be used in engineering design where the parameters normally have continuous and therefore infinite domains. However, the continuous nature of the domain values permits movement from one candidate value to another, using only information concerning the extent to which the previous value was unsatisfactory, thus avoiding expensive re-computation procedures.

7 Conclusion

This system has shown how a constraint-based problem solver can improve the efficiency of parametric engineering design, by minimizing the use of large analysis programs and updating only the minimum number of parameters and variables in redesign. The fundamental difference between previous work and the present system is the following: in conventional systems the redesign process releases all intermediate variables from their currently assigned values, and after some alteration to the initial parameter(s), re-assigns these variables using the same methods as before. In our system, redesign does not release variables from their current values, but instead automatically adjusts the values of those variables related to the most critical performance parameter, by propagation through the constraint network. Forward propagation is exact, depending only on the binary constraints between the parameters and variables. Back propagation is generally approximate, due to loops or cycles in the constraint networks, and relies on ad hoc procedures.
At best this will give exact accuracy (for single path, or no-loop dependencies) and good approximations for other multi-path dependencies. At worst, it will rely on the heuristic correction factor to steer the adjustment in the right course. The system at present uses a general hill-climbing strategy. A parametric design system should ideally have a variety of strategies, as shown in [Orelup et al., 1988]. However, this is a separate issue to the one considered in the present research.

The strategy used in this work is applicable to parametric design problems in which basic relationships between parameters and variables can be obtained from the behavioral equations, and in which multi-valued parameters can be rationalized into controlling single values for reasoning purposes. Although the system has been tested in only one domain, parametric design does not differ radically from domain to domain, so that our strategy has the potential to be applied generally.

References

[Brown and Breau, 1986] Brown, D.C., Breau, R., Types of Constraints in Routine Design Problem-Solving, in Applications of Artificial Intelligence in Engineering Problems, Proceedings of the 1st International Conference, Southampton U.K., 1986, Springer-Verlag.

[Chan and Paulson, 1987] Chan, W.T., Paulson, B.C. Jr, Exploratory Design using Constraints, in Artificial Intelligence in Engineering Design, Analysis and Manufacturing 1, 1987, pp. 59-71.

[Dechter and Pearl, 1987] Dechter, R., Pearl, J., Network Based Heuristics for Constraint-Satisfaction Problems, in Artificial Intelligence 34, 1987, pp. 1-38.

[Dixon et al., 1987] Dixon, J.R., Howe, A., Cohen, P.R., Simmons, M.K., Dominic I: Progress Toward Domain Independence in Design by Iterative Redesign, in Engineering with Computers 2, 1987, pp. 137-145.

[Leler, 1988] Leler, Wm., Constraint Programming Languages, their Specification and Generation, 1988, Addison-Wesley Publishing Company.
[Mackworth, 1977] Mackworth, A., Consistency in Networks of Relations, in Artificial Intelligence 8, 1977, pp. 99-118.

[Montanari, 1974] Montanari, U., Networks of Constraints: Fundamental Properties and Applications to Picture Processing, in Information Sciences 7, 1974, pp. 95-132.

[Mostow, 1985] Mostow, J., Toward Better Models of the Design Process, in AI Magazine 6, 1985, Spring.

[Murtagh and Shimura, 1989] Murtagh, N., Shimura, M., A Constraint-Based Hybrid Engineering Design System, in Proceedings of The Third IFIP WG 5.2 Workshop on Intelligent CAD, 1989, Osaka, Japan, (to be published by North-Holland).

[Murthy and Addanki, 1987] Murthy, S., Addanki, S., PROMPT: An Innovative Design Tool, in Proceedings of AAAI, 1987, pp. 637-642.

[Orelup et al., 1988] Orelup, M.F., Dixon, J.R., Cohen, P.R., Simmons, M.K., Dominic II: Meta-Level Control in Iterative Redesign, in Proceedings of AAAI, 1988, pp. 25-29.

[Sriram and Maher, 1986] Sriram, D., Maher, M.L., The Representation and Use of Constraints in Structural Design, in Applications of Artificial Intelligence in Engineering Problems, Proceedings of the 1st International Conference, Southampton U.K., 1986, Springer-Verlag.

[Sussman and Steele, 1980] Sussman, G.J., Steele, G.L. Jr, Constraints: a Language for Expressing Almost Hierarchical Descriptions, in Artificial Intelligence 14, 1980, pp. 1-39.
Connectionism, Rule Following, and Symbolic Manipulation

Robert F. Hadley
School of Computing Science
Simon Fraser University
Burnaby, Canada V5A 1S6
hadley@cs.sfu.ca

Abstract

At present, the prevailing Connectionist methodology for representing rules is to implicitly embody rules in "neurally-wired" networks. That is, the methodology adopts the stance that rules must either be hard-wired or "trained into" neural structures, rather than represented via explicit symbolic structures. Even recent attempts to implement production systems within connectionist networks have assumed that condition-action rules (or rule schema) are to be embodied in the structure of individual networks. Such networks must be grown or trained over a significant span of time. However, arguments are presented herein that humans sometimes follow rules which are very rapidly assigned explicit internal representations, and that humans possess general mechanisms capable of interpreting and following such rules. In particular, arguments are presented that the speed with which humans are able to follow rules of novel structure demonstrates the existence of general-purpose rule following mechanisms. It is further argued that the existence of general-purpose rule following mechanisms strongly indicates that explicit rule following is not an isolated phenomenon, but may well be a pervasive aspect of cognition. The arguments presented here are pragmatic in nature, and are contrasted with the kind of arguments developed by Fodor and Pylyshyn in their recent, influential paper.

1. Introduction

In a recent and influential paper, Fodor and Pylyshyn present principled arguments to the effect that widespread methods of representation in connectionist networks are incapable of manifesting certain properties which occur as essential aspects of human cognition. These include compositionality, generalizability, and systematicity, all of which are prevalent in human thought and language.
The kernel of Fodor's and Pylyshyn's position is that these crucial properties presuppose the existence of structure-sensitive operations which, of necessity, occur at a higher level of abstraction than that which is typically associated with connectionist processing. Moreover, the required structure-sensitive operations presuppose structured representations which do not exist at the level of the local and distributed representation schemes prevalent in the connectionist literature (or so it is argued). Now, although some connectionists remain skeptical about Fodor and Pylyshyn's ultimate conclusions, many concede that compositionality, generalizability, and systematicity are indeed central aspects of human cognition which connectionism, as a general enterprise, must address.

Recently, Elman (1989) and St. John & McClelland (1989) have devised connectionist networks (hereafter, c-nets) which exhibit these crucial properties, while remaining faithful to conventional (distributed) methods of connectionist representation. Several intriguing issues are raised by these results, which I shall touch upon only obliquely. My primary concern shall be to describe certain human cognitive abilities which challenge the connectionist thesis on grounds different from those put forth by Fodor and Pylyshyn. In particular, I describe cognitive abilities which involve (virtually) instantaneous rule learning and application of these rules to data which are retained in short term memory. Such abilities cast doubt upon the widespread connectionist practice of modelling the acquisition of all general rules by the training (or hard-wiring) of c-nets.
While I do not question whether c-nets could eventually be trained to display the relevant cognitive behavior, I argue that the speed with which humans are able to acquire and follow rules of novel structure demonstrates both that humans sometimes represent rules in an explicit fashion (in a sense of 'explicit' defined below), and that they possess general-purpose mechanisms for applying such rules. Moreover, one of the examples presented here involves such conscious and explicit rule following and symbol manipulation that if essentially connectionist mechanisms are involved, we seem forced to conclude that sometimes, at least, c-nets merely provide the architectural foundation for conventional, structure-sensitive, symbol manipulation.

HADLEY 579
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

2. (Nearly) Instantaneous Rule Acquisition

In this section we examine a methodological principle that is frequently assumed (and sometimes fervently believed) by connectionists, namely, that cognitive processes which are ostensibly rule-governed ought to be modelled on the assumption that individual rules are embedded in the structure or weight distribution of particular c-nets, and should not be modelled as symbolic code which is applied to data sets on different occasions. Against this approach, I shall argue that if all (human) higher-level cognitive processes can be modelled by a system of c-nets, then we must suppose that at least some of these c-nets function as general purpose rule-interpreters which can apply rules to arbitrary sets of input. In particular, I argue that some rule-like behavior cannot be the product of "neurally-wired" rules whose structure is embedded in particular networks, for the simple reason that humans can often apply rules (with considerable accuracy) as soon as they are told the rules. The following example will help to illustrate this. Consider the phrase:

Example 1: "love ever keeps trying".
While holding this phrase in mind, try applying the rule described in the footnote indicated at this point.² If you succeeded in remembering the given phrase while you applied the indicated rule (which is italicized), then you discovered that this rule, applied to this datum, yields the word 'over'. (Otherwise, you were no doubt distracted by having to read the surrounding instructions, jumping to footnotes, etc.) Most people have no difficulty finding the correct answer when the experiment is verbally described, face to face.

²The phrase you have in mind contains four words. Proceeding from left to right, mentally extract the second letter from each word, and concatenate these letters in sequence. If the resulting string forms an English word, make a note of it.

Now, the immediate thing to note about this experiment is that we can promptly and correctly apply the rule to a mentally retained datum, even though we have never encountered the rule or the datum before. The fact that we can comprehend novel rules and phrases argues for the compositionality and systematicity of thought, but that is not my point here. (In any case, recent work by St. John & McClelland, 1989, demonstrates that, at least in some experimental conditions, c-nets can develop and represent "semantic interpretations" of novel sentences.) Rather, the point is that we have never been trained to respond either to this particular rule, or to this datum, or to their joint occurrence. Nevertheless, we are not only able to comprehend the rule, but to act in accordance with it. Given that we are able to follow the rule immediately, it would be wild to suppose that, in the short time available, the rule is somehow transformed into an appropriately trained network, which (implicitly) implements this particular rule. (Bear in mind that our example rule is general, in the sense that it may be applied to many distinct inputs.
Widespread experience has established that rules of this degree of generality can be trained into c-nets only via gradual tuning of weights, involving many small incremental changes.) It appears, therefore, that we can safely rule out the possibility that our brains contain a c-net which implicitly embodies this specific rule (via hard-wiring or a distributed set of weights, as the conventional connectionist paradigm would suggest).

The question remains, then, how could this novel rule be rapidly executed by a system of one or more c-nets? Well, if the rule is being executed by c-net(s), and no c-net is specifically trained for the rule, we can only suppose that once the rule has been processed as sensory input (and perhaps been assigned an internal representation (local or distributed)), the rule functions as data which causes some fairly general purpose c-net(s) to do just what the rule tells us to do. The c-net(s) which "react" to the rule being input must, in some sense, be fairly general, because we have already seen the implausibility of postulating c-nets which are specific to each novel rule that we can execute. However, in saying the c-nets are "fairly general", I do not preclude the possibility that very different kinds of c-nets might be required to process radically different kinds of rules. At this point, the degree of generality of c-nets which function as rule-interpreters and executors must be left partially indeterminate. It seems likely, however, that we could invent a whole family of rules of the same type as the one we have considered, and that each of these rules would be processed by the same set of c-nets (up to a point, at least. Ultimately, individual words, say 'two' vs. 'three', would presumably involve different subnets.)
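Stated mechanically, the rule of Example 1 is a trivial computation; the following Python sketch simply writes it out (purely illustrative — the paper, of course, makes no claim that the brain executes such code):

```python
def apply_rule(phrase):
    """The rule of Example 1: take the second letter of each word,
    proceeding from left to right, and concatenate the letters."""
    return "".join(word[1] for word in phrase.split())

apply_rule("love ever keeps trying")   # -> "over"
```

The triviality of the program is the point: people execute this computation on a novel rule and a novel datum within seconds, far faster than any plausible weight-tuning process could embed the rule in a network.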
Now, at this point the following objection may arise: Well, yes, given that we sometimes interpret and apply novel rules, and given that these rules cannot plausibly be supposed to be (innately, or by training) neurally-wired in our brains, then we must suppose that the brain contains sets of c-nets which function as (moderately) general rule interpreters, provided the brain is a collection of c-nets. But, note that the example you describe is atypical, and does not establish that rule interpretation and application are important or common processes in human cognition.

580 KNOWLEDGE REPRESENTATION

Before replying to the above objection, it will be helpful to introduce some terminology. Hereafter, we shall say that an explicit rule has been followed, if and only if a rule has been followed, but the rule is not embedded in the structure or weight distribution of some c-net. (I am here using 'explicit' as a technical term. No claim is made that this technical usage corresponds closely to common usage. Admittedly, the natural language distinction between 'implicit' and 'explicit' is slippery and difficult to unravel. However, our present concern is to distinguish rules which are embedded in the structure or weight-distribution of c-nets from those which are not.) Also, it is important to realize that example (1) establishes not only that we sometimes follow rules which are explicit in the sense just defined, but that some of these explicit rules are internally represented when we follow them.
To see this, consider that in example (1) an explicit (non-embedded) internal representation of the rule must be posited to explain the fact that subjects are able to follow the rule for several minutes after it is spoken, even when other, extraneous utterances intervene.³

³The objection may arise that we need not suppose that a representation of the rule is stored beyond the first few seconds, for it is logically possible that a c-net will rapidly be trained to implicitly embody the rule once the rule has been applied to the first input set. However, this objection presumes the existence of biological mechanisms which are able, very rapidly, to train up a network to perform a general task which has only once been comprehended and performed. Apart from the questionable existence of such biological mechanisms, the objection ignores the fact that c-nets can only be trained to acquire general rules by a gradual iterative process.

Now, to reply, first note that even if the interpretation and application of non-embedded (explicit) rules are uncommon cognitive events, they do occur, and their explanation seems to require a departure from the methodological principle described at the beginning of section 2. In addition, the example establishes that it is not only a theoretical possibility that neural structures could support the general application of explicit rules, but this is sometimes a reality. It is important that connectionists bear in mind that at least some of our neural structures are organized in ways that permit the use of rules that have not been trained (or grown) into a net. Moreover, we need to consider how it happens, if explicit rule use is such an atypical cognitive phenomenon, that we possess an assembly of c-nets capable of performing such feats. There appear to be only two possibilities: either the c-net(s) primarily responsible for the rapid interpretation and application of rules are innately present
in the brain (with the appropriate weights pre-set), or they are not. Let us consider these possibilities in turn.

Suppose the relevant set of c-nets to be innately hard-wired. Now, it is no doubt a very difficult problem to distinguish the relevant set of innately hard-wired c-nets (if they do exist) from other c-nets involved in language understanding (which must be trained during language acquisition), but fortunately we may sidestep that problem. For the point is that if these innate c-nets are primarily responsible for our general ability to rapidly apply novel rules, then the need for such general capacity c-nets must arise rather often. Otherwise, (a) it is unlikely that the relevant c-nets would have evolved in the first place, and (b) even supposing that these c-nets had not evolved specifically to handle rule application, it is not plausible that they should integrate so rapidly and accurately with our general language comprehension mechanisms that we would be able to apply novel rules with the facility that we exhibit. I conclude that if the relevant c-nets are innately hard-wired, this strongly suggests that explicit rule application is not a rare event, but is an important (and probably common) aspect of our cognitive life.

On the other hand, suppose the relevant c-nets are not innately given (with pre-set weights). In this case we must suppose the relevant c-nets are either specifically trained to perform the general task of rule application, or the c-nets possess these abilities as a side-effect of other abilities. In the former case we cannot suppose the c-nets in question would receive the required specialized training, unless the general task of applying rules to representations was frequently encountered. So, in this case the ability to apply explicit rules can hardly be regarded as an isolated phenomenon, as the objector implies.
Let us consider, therefore, the latter case, in which the ability to apply novel rules arises as a side-effect of other abilities. We should note at the outset that the ability to apply novel rules may often, and perhaps always, involve a series of sub-skills which have been acquired through slow learning. For example, the rule I presented earlier may involve the sub-skills of retrieving the spelling of a word, of selecting the n-th element in a list (in this case a list of letters), and of concatenating letters to form a word. For argument's sake, I concede that each of these skills may have been acquired by slow, iterative training of c-nets. However, the mere presence of these separate skills does not explain how we should be able, as a side-effect of these abilities, to create a coherent sequence of operations, in which each skill is invoked at the proper time, and applied to the proper object. By analogy, the mere presence of a set of primitive operations in a programming language does not cause a program to be written, assembled, or executed. In short, the existence of primitive, slowly learned skills in a neural system may be a necessary condition for the application of novel rules, but it is not a sufficient condition. If we are to preserve the "side-effect hypothesis", then the relevant side-effects must arise from c-nets other than (or in addition to) those responsible for executing the sub-skills involved. Now, while I know of no way to disprove this possibility, it does seem odd that such a complex ability as explicit rule following would arise as a mere side-effect of other neural processes. In any case, there is a deeper point to be made here. For, even if the side-effect hypothesis is correct, connectionism has not provided us with any reason for supposing that side-effects of this kind are limited in their scope.
On the contrary, if side-effects of collections of c-nets are capable of supporting rapid application of completely novel rules, why should we not suppose that much, or even most, higher-level cognition also involves the explicit application of rules which are acquired through direct observation or through explicit teaching? Why should we not suppose that many rules are stored in non-embedded form, and are interpreted as the need arises? Such a hypothesis would not exclude the further conjecture that when a rule enters long-term memory, some c-net will be trained to implicitly represent the rule by means of acquired weights. However, since neural mechanisms are clearly capable of applying explicit rules, we should await clear empirical evidence before judging the pervasiveness of this form of rule following. In sections 4 and 5, I present two examples which suggest that explicit rule following is more pervasive than a connectionist might suppose, but before passing to these examples, let's consider what more we can learn from the present example.

3. Symbolic Manipulation

Recall that in example (1) the subject is asked to keep a phrase in mind, in this case "love ever keeps trying". This phrase must be retained (presumably, in short term memory, or some other buffer region) while the subject listens to a rule. After hearing the rule the subject somehow retrieves the individual words of the given phrase, in sequence, in order to select the second letter from each word. At least, this is how it appears to us introspectively. However, from a purely logical standpoint, we need not suppose that individual words (or representations of words) are being reviewed in sequence.
We may choose to ignore introspective evidence (though such evidence seems to require some explanation), and suppose that the input phrase is assigned an internal representation which is not spatially composite.4 In what immediately follows, we will accept this supposition, since it appears to represent the "worst case" for what I wish to demonstrate, viz., that example (1) involves explicit (mental) symbol manipulation. Now, if our mental representation of the example phrase is not spatially composite, then we have two possibilities. Either the phrase is internally represented by a single node (i.e., it is assigned a local representation) or it is assigned a distributed representation whose spatial parts are not themselves meaningful representations. For simplicity's sake, and for reasons given by Fodor and Pylyshyn (1988), we shall not consider the localist approach.5 So, we assume that the input phrase is assigned a distributed representation. (An account of how phrases and sentences may be assigned distributed meaning representations is given in St. John & McClelland, 1989.) Now, although spatial sub-regions of this distributed representation are assumed not to be representations of any kind, it is still conceivable that some c-net exists which, given this distributed representation, and primed with the rule in question, could simply output a representation for the word 'over', i.e., the answer word. But while this is conceivable, it seems rather doubtful. For, since subjects receive no training at the particular task in question, it is not reasonable to suppose that any c-net contains information (tacit, or otherwise) about the specific letters occurring at specific positions in the particular phrase or sentence being represented. Moreover, the general task of retrieving letters from entire phrases at specified positions is not one that people are commonly trained for.
By contrast, the task of retrieving the spelling of individual words is one that we are trained for, as is the task of finding the n-th element in a series of objects (e.g., a series of letters). It is entirely plausible, therefore, that we should have c-nets capable of performing these sub-tasks. Now, given the complexity of the task of going from an arbitrary phrase representation to the spelling of the answer word, it would be strange indeed if the c-nets comprising the general rule interpreter (which we have already seen to be necessary) did not arrange for the relevant subtasks to be performed by c-nets which have already been specifically trained for those sub-tasks. Moreover, if we adopt this hypothesis -- that a series of sub-tasks are performed by c-nets specific to those tasks -- we need not abandon the idea that the input phrase receives a spatially non-composite distributed representation. For, it is plausible that we possess c-nets capable of taking this spatially non-composite representation as input, and yielding a spatially (or temporally) sequenced series of representations of the individual words in the sentence. In fact, c-nets which performed this transformation would simply embody a partial inverse of the process which produced the internal representation from the original input phrase.

4We will say that a representation is spatially composite if some of its spatial parts are themselves meaningful representations (just as the words of this sentence are meaningful spatial parts of the entire sentence). For more on this, see (van Gelder, 1989).

5As Fodor and Pylyshyn stress, we cannot suppose that each phrase is represented by a unique neuron, because the number of phrases we can comprehend exceeds the number of neurons available.

582 KNOWLEDGE REPRESENTATION
The kind of transformation just described illustrates one way of achieving functional compositionality (as opposed to spatially concatenative compositionality) in connectionist architectures. Both Smolensky (1987) and van Gelder (1989) have explored the feasibility of incorporating functional compositionality in connectionist networks. Van Gelder, in particular, argues that the potential for including this form of compositionality in c-nets removes a barrier to the thesis that c-nets can exhibit the kind of systematicity which Fodor and Pylyshyn persuasively argue to be necessary. He argues further that, if connectionists are to avoid the kind of high-level control algorithms associated with classical AI, they will need to develop connectionist mechanisms for exploiting functional compositionality. While I do not dispute this conclusion, I contend that the present example shows that the classical paradigm of symbol manipulation is the most appropriate for some cognitive processes. For, in the absence of any plausible, direct c-net transformation from our example phrase to the correct answer, I submit that we should conclude the following: at some stage in the solution of our exercise the spellings of individual words are retrieved, and letters in the second position of each spelling are identified (and in some sense selected). If the above conclusion is accepted, we seem committed to a process which is at least strongly analogous to classical symbol manipulation. For, consider the possibilities. Although the spellings of each of the four individual words need not be simultaneously present (in some buffer, say), still each of the four spellings must either be "consulted" in sequence or in parallel. In either case, the spellings must be examined to obtain the second letter of the given word.
Now, the (representations of) letters of a given spelling must either be examined sequentially until the second element is located, or the letters are present all at once and the second letter is identified (perhaps by parallel processing). In all of the above cases we either have a (spatially) concatenated series (of letters within words) or a temporally concatenated series.6 In either case, we have a concatenated series of representations, which taken collectively represent higher-level objects (words), and which are being processed to obtain the n-th element of the series. Moreover, once the n-th (2nd, in our case) element of each series is identified, it must somehow be marked, copied, or otherwise remembered, and its sequential position (relative to the original string of words) must be implicitly or explicitly remembered. Ultimately, these separate elements must be combined (or at least treated) as a concatenated series to obtain the representation of the external symbol 'over'. Now, the foregoing description involves several operations which are typical of classical symbol manipulation (e.g., searching a list, marking or copying, selection, concatenation), but it could be argued that in one respect this description departs from classical processing. That is, I have allowed that the symbolic elements of a representation may be temporally rather than spatially concatenated. This might happen, for example, if the c-net which functionally decomposes a representation into its parts does so by producing those parts in a temporal sequence. But, even this kind of processing does not violate the spirit of classical symbol manipulation. Indeed, temporal concatenation seems only a minor modification (if at all) of the classical symbol manipulation paradigm. (Recall, after all, that a computer sends its symbolic output to the printer character by character.)
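The sequence of sub-tasks just described (decompose the phrase, select the n-th letter of each spelling, concatenate the selections) is easy to state as a classical program. The following Python sketch is purely illustrative -- it makes no claim about neural implementation, and the function name is my own:

```python
# Classical symbol-manipulation reading of example (1):
# form a new word from the n-th letter of each word of a stored phrase.
def apply_letter_rule(phrase, n):
    words = phrase.split()                 # decompose the phrase into words
    selected = [w[n - 1] for w in words]   # select the n-th letter of each spelling
    return "".join(selected)               # concatenate the selections

print(apply_letter_rule("love ever keeps trying", 2))  # -> over
```

Each step corresponds to one of the sub-skills discussed above (traversal, indexed selection, concatenation); the point is only that the overall behaviour is a sequenced composition of those primitives.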
To be sure, I have not shown that the sequentially ordered letters which comprise the final answer are literally spatially concatenated to produce this answer, but we should not expect the micro-details of how the foregoing operations are performed to resemble the micro-details of a digital simulation of these operations. For, as Pylyshyn would put it, the operations we have been considering are classical symbol manipulations at the cognitive level of description. There is a literal isomorphism between the series of sub-tasks performed by the c-nets involved and the moderately high-level sub-tasks involved in a computer simulation of this example. Moreover, the kinds of arguments produced earlier (to the effect that cognitive mechanisms which support explicit rule manipulation are not likely to be isolated aberrations) also apply here. We are not yet in a position to say how pervasive classical symbol manipulation is within higher-level cognition. However, the following sections may shed light upon this issue.

6It might be argued that a third possibility exists -- that the spelling of each word is neither spatially nor temporally concatenated, but is an arbitrary (local or distributed) representation. It is conceivable that a c-net exists which takes such representations, and a parameter such as "position 2", and returns representations of individual letters. However, as we previously argued, since we rarely have training for tasks as specific as "give me the second letter of the word 'ever'", it is not plausible that such a specialized c-net would exist.

4. Condition-Action Rules

Consider the following simple rule:

Example 2. If I pause and say a word which ends with 'ly', then you repeat that word.

Humans are able to learn simple novel rules of this kind as soon as they are spoken.
Such rules are not remembered for long (without special motivation), but they are easily retained for 15 minutes, and can be applied immediately.7 Now, as in our previous example, we are confronted with a rule which can be immediately retained, interpreted, and applied. For reasons previously stated, we must suppose this novel rule is interpreted and applied by (at least) moderately general-purpose c-nets. Moreover, it is reasonable to believe that these c-nets receive a representation of the rule as input. (As in example (1), an internal representation must be posited to explain the fact that subjects can, for many minutes, reapply the rule to new data without being reminded of the rule.) Now, unlike our previous example, the present rule has the same general form as other rules which we do learn by training. For example, "If the light turns green, then you can go" also has this general form. It is possible, therefore, that the particular c-nets responsible for applying rule (2) have been trained just to recognize if-then, condition-action rules. Note, however, that condition-action rules (which usually lack truth values) structurally resemble propositional if-then rules (which have truth values). In light of this structural similarity, it would not be surprising if the same c-nets were responsible for interpreting and applying both condition-action rules (such as (2)) and for executing modus ponens in the presence of if-then propositions. In any case, it appears that the existence of some general c-nets, capable of recognizing and applying a variety of novel condition-action rules, should be conceded. These c-nets may or may not be specific only to condition-action rules, but once their existence is conceded the possibility certainly arises that these c-nets are involved in widespread condition-action (or production system) reasoning. Recognition of this fact may well have motivated two recent connectionist attempts to model production systems (cf. Touretzky & Hinton, 1988; Ajjanagadde & Shastri, 1989). Although both these systems lack the full generality of AI-type production systems, each represents an attempt to incorporate structure-sensitive rule firing mechanisms in a connectionist framework. It is also worth noting that each of these systems represents if-then rules as structured, spatially composite objects, where the antecedents and consequents of individual rules are spatially separate clusters. Although these implementations do not employ classical symbol manipulation techniques at the micro-level, there is a clear and appropriate level of description at which they are performing quantifier instantiation and modus ponens. Unfortunately, it is doubtful whether either implementation can account for the immediate application of novel if-then rules, since in both systems, distinct conditionals are represented by distinct neurally-wired networks. It is not possible that such networks could be "instantly grown" to represent a rule which has just been understood. Barnden (1988) describes a method by which constants (which replace variables in rule schemata) could be rapidly represented in connectionist matrices, but his approach also requires the hard-wiring of each rule schema, and no indication is given of how this could be accomplished "on the fly".

7I've tested rules of this type on different occasions in graduate courses. Although no controlled experiment was conducted, the results were so unambiguous as to leave little doubt. People are able to follow the rule for some time after hearing it, even when ordinary conversation intervenes between "testing moments".

5. Rules of Arbitrary Structure

As we have noted, the previous rule (2) shares a general form with other rules we typically encounter. For this reason it is at least plausible that rules having this form are applied by c-net(s) which respond only to rules of this form.
However, the following rule, like the rule in example (1), does not share a syntactic structure with rules we normally encounter. The background context for the following rule is this: The subject is told that she/he will be presented with a series of at most five integers, where the value of each integer is less than five, and the subject is to apply the rule (below) once the series is presented.

Example 3. Regard the second integer in the series as an exponent. Take the last integer in the series and raise it to the power indicated by the exponent you identified.

Now, college students have no difficulty understanding and applying this rule to series of integers which are presented to them. (At least the students in my sample had no difficulty. In any case, we only need a few successes to make the point.) Also, because the rule is novel, moderately complex, and unusual in structure we must suppose, as we did in example (1), that initial applications of the rule involve (at least) moderately general c-nets, capable of interpreting and applying a considerable range of possible rules. Moreover, as in the preceding examples, we must assume that (temporarily at least) the rule is stored as a representation, for the rule can be repeatedly applied without being re-presented to the subject. We have, then, a rule which embodies a moderately complex arithmetic procedure, and which is stored in explicit representational form before being applied. Now, given that humans are clearly capable of internally representing explicit, complex arithmetic rules before applying them, the question naturally arises whether children commonly learn algorithms such as long multiplication and division by storing explicit representations of these algorithms.
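The rule of example (3) itself shows how little machinery is needed once the sub-skills (indexing a series, exponentiation) are in place. A minimal Python sketch, offered only as an illustration (the function name and the sample series are my own):

```python
# Example (3): treat the second integer in the series as an exponent,
# and raise the last integer in the series to that power.
def apply_exponent_rule(series):
    exponent = series[1]   # "the second integer in the series"
    base = series[-1]      # "the last integer in the series"
    return base ** exponent

print(apply_exponent_rule([3, 2, 1, 4]))  # 4 raised to the power 2 -> 16
```

The program is a two-step composition of previously mastered operations; what requires explanation is how a subject sequences those operations correctly on first hearing the rule.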
While I have no conclusive answer to this question, the following considerations are suggestive: (i) Before children are taught long multiplication, they are intensively trained in simpler sub-skills, such as adding a column of digits, "carrying" a value to the next column, and multiplying pairs of single digits. (ii) When taught skills such as long multiplication, children are in fact told explicit rules (e.g., "after multiplying the entire number by the rightmost digit, move one digit to the left and multiply through again. But this time write down the answer on a new line. But start writing one place to the left . . ."). Of course, these explicit rules are not instantly remembered, and examples are required, but students of average intelligence learn long multiplication after hearing the rules several times and practicing on perhaps 20 or 30 problems. Given the comparatively small number of examples and "practice trials" required to train most children in long multiplication, it is difficult to believe that having the relevant sub-skills, and being given explicit instructions, do not have a dramatic effect on the learning of arithmetic algorithms. (Certainly, possessing the relevant sub-skills is crucial to our ability to apply, so rapidly, the rule in example (3).) This becomes more apparent when we compare human learning to recent connectionist attempts to teach c-nets relatively simple arithmetic algorithms. For example, recent work by Cottrell & Tsung (1989) on the addition of 3-digit numbers required on the order of 3000 distinct training examples, and several thousand iterations, to achieve a reasonable degree of generalization (even though back propagation of error was employed, and the numerals were restricted to base four).
In light of the examples we have considered thus far (each of which underscores the power of explicitly invoking prior sub-skills), it seems incumbent upon connectionists to address the issues implicit in points (i) and (ii) above, and to devise methods for rapidly controlling the sequence of sub-skills which are applied to a moderately complex problem. To date, scarcely any (published) connectionist research openly addresses these problems. I suggest that the reasons for this include: (a) connectionists are reluctant to integrate the classical paradigm (of having explicit representations control the sequencing of lower-level functions) into the existing connectionist paradigm, which treats all rules as implicit. (b) This reluctance arises (in part) because of the complexity of the task. It is very difficult to imagine how c-nets can support higher-level, representational control processes without resorting to more conventional (though possibly parallel) architectures. I submit, however, that examples (1) and (3) establish the following: if connectionism is to provide a model for all cognitive phenomena, it must include mechanisms for explicit rule representation and application. These mechanisms must be general enough to accommodate rules of novel structure. If connectionists can accept and meet the challenge of devising these mechanisms, they will have gone a long way towards integrating the prevailing classical and connectionist paradigms.

6. Summary

We have examined three examples of rule following in which the immediate representation and application of rules appears to require the presence of general rule application mechanisms. Two of these examples involve rules of novel structure, which argues for a high degree of flexibility in these application mechanisms.
Although the remaining example belongs to the more syntactically regular class of condition-action rules, it should be remembered that one prominent cognitive theory attempts to model most higher-level cognition in terms of (condition-action based) production systems (Anderson, 1976). Moreover, as I have argued, the existence of the kinds of general rule-application mechanisms considered here strongly suggests that explicit rule representation and rule following are not isolated exceptions, but are important features of human cognition. Furthermore, I have argued that at least some explicit rule following is best modelled by the paradigm of classical symbol manipulation. Admittedly, it is uncertain whether most high-level processes, such as planning and abstract reasoning, involve explicit rule following and/or symbolic manipulations, but, in light of the fact that neural mechanisms do in fact sometimes support these classical processes, we must regard it as a serious open question whether most higher-level cognition involves these classical processes. Of course, nothing I have said here would suggest that all rule following ought to be modelled on the classical paradigm. Indeed, I have suggested elsewhere (Hadley, 1989) that semantic grounding rules are best modelled by connectionist methods. However, I believe the arguments presented here seriously challenge the prevailing connectionist methodology of modelling all rules by means of implicit, neurally-wired networks. Moreover, our conclusions present the connectionist with a formidable scientific challenge, which is, to show how general purpose rule following mechanisms may be implemented in a connectionist architecture.

References

Anderson, J.R. (1976) Language, Memory and Thought, Lawrence Erlbaum Associates, Hillsdale, N.J.

Ajjanagadde, V. & Shastri, L.
(1989) "Efficient Inference with Multi-Place Predicates and Variables in a Connectionist System", Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, Ann Arbor, pp. 396-403.

Barnden, J.A. (1988) "The Right of Free Association: Relative-Position Encoding for Connectionist Data Structures", Proceedings of the Tenth Annual Conference of the Cognitive Science Society, Montreal, pp. 503-509.

Cottrell, G.W. & Tsung, F. (1989) "Learning Simple Arithmetic Procedures", Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, Ann Arbor, pp. 58-65.

Elman, J.L. (1989) "Structured Representations and Connectionist Models", Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, Ann Arbor, pp. 17-23.

Fodor, J.A. & Pylyshyn, Z.W. (1988) "Connectionism and Cognitive Architecture: A Critical Analysis", Cognition, Vol. 28, pp. 3-71.

Hadley, R.F. (1989) "A Default-Oriented Theory of Procedural Semantics", Cognitive Science, Vol. 13, pp. 107-138.

Pylyshyn, Z.W. (1984) Computation and Cognition, Bradford Books, MIT Press, Cambridge, MA.

Smolensky, P. (1987) "The Constituent Structure of Mental States: A Reply to Fodor and Pylyshyn", Southern Journal of Philosophy, Vol. 26, Supplement, pp. 137-160.

St. John, M.F. & McClelland, J.L. (1989, in press) "Learning and Applying Contextual Constraints in Sentence Comprehension", Artificial Intelligence.

Touretzky, D.S. & Hinton, G.E. (1988) "A Distributed Connectionist Production System", Cognitive Science, Vol. 12, pp. 423-466.

van Gelder, T. (1989) "Compositionality and the Explanation of Cognitive Processes", Proceedings of the Eleventh Annual Conference of the Cognitive Science Society, Ann Arbor, pp. 34-41.
A Structured Connectionist Unification Algorithm

Steffen Hölldobler*
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704, USA
Email: steffen@icsi.Berkeley.edu

Abstract

A connectionist unification algorithm is presented. It utilizes the fact that the most general unifier of two terms corresponds to a finest valid equivalence relation defined on an occurrence-label representation of the unification problem. The algorithm exploits the maximal parallelism inherent in the computation of such a finest valid equivalence relation while using only computational features of connectionism. It can easily be restricted to solve special forms of the unification problem such as the word problem, the matching problem, or the unification problem over infinite trees.

Introduction

Following Leibnitz's and Frege's idea of formalizing human thought, Herbrand, Gödel, and Skolem had developed predicate logic by 1930. Great efforts were made to find efficient proof procedures which can be used to mechanize human thought. A certain break-through was achieved when J. A. Robinson invented the resolution principle in 1965. In the meantime several other refutation techniques have been developed, like Kowalski's (1979) connection graphs or Bibel's (1987) connection method. At the heart of all these refutation methods is the unification procedure, a version of which can already be found in Herbrand's thesis (1930) and which was formally introduced by Robinson (1965). Despite their success, automatic theorem provers and logic programming languages are still plagued by several problems, like the lack of a clever proof strategy or the lack of common sense. The problem being tackled in this paper is that most theorem provers do not exploit the parallelism inherent in logic. They are still designed with a von-Neumann computer in mind.
The sequential characteristics of such a computer are fundamentally different from the characteristics of an animal - and presumably a human - brain. In the brain slow neural computing elements with a switching time of a few milliseconds are heavily interconnected. Nevertheless, the brain is capable of performing complex tasks which require millions of operations on a conventional computer, and this seems to be conclusive evidence that massive parallelism must take place in the brain.

*On leave from FG Intellektik, FB Informatik, TH Darmstadt, West Germany.

It is the goal of connectionist theories to utilize the understanding of the brain for building systems with interesting behaviour. The fundamental process in a connectionist model is the activation of associated units. However, Smolensky (1988) has emphasized that such a spreading of activation cannot be adequate for complex tasks such as question answering. And in an earlier paper Smolensky (1987) has argued that connectionist systems may well offer an opportunity to escape the brittleness of symbolic AI systems . . . if we can find ways of naturally instantiating the sources of power of symbolic computation within fully connectionist systems. Such symbolic systems are powerful because they provide a combinatorial syntax and semantics, and processes are structure sensitive (Fodor & Pylyshyn, 1988). In his response to (Smolensky, 1988) J. McCarthy observed that in connectionist models which he has seen the basic predicates are all unary and are even applied to a fixed object, and a concept is a propositional function of these predicates. It is the goal of this paper to show a way out of the propositional fixation of connectionist models. This is done by demonstrating how the unification computation, which is at the center of inference, can be modeled in a connectionist system.
Informally, the unification problem is the question of whether there exists a substitution for two terms s and t such that the respective instances of s and t are equal. Paterson & Wegman (1978) have shown that this problem can be solved sequentially in time linear in the size of the problem. Dwork et al. (1984) have proved that unification is logspace-complete and, thus, we should not expect that a parallel unification algorithm has a significantly better time complexity in the worst case unless P ⊆ NC. However, we can expect that a parallel algorithm improves the time-complexity for the average case. As we will show, the unification problem can be solved in 2 steps if it degenerates to a word or a matching problem, where a word (matching) problem is the question of whether s (an instance of s) is equal to t. This significantly improves results by Dwork et al., who have shown that the matching problem of size n can be solved in log²n parallel time. Such an improvement is important for many applications as, for example, a study by Citrin (1988) has shown that up to 50% of Prolog's execution time is consumed by the unification process and many of the unifications are easy.

HOLLDOBLER 587

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Distributed unification algorithms such as the one developed by Vitter & Simons (1986) make use of a dag1-representation of the unification problem. The parallelism exploited by these algorithms has its source in the decomposability axioms which characterize unification. These axioms state that two terms f(s1, ..., sn) and f(t1, ..., tn) are unifiable if all corresponding subterms si and ti are unifiable. The unifiability of these subterms is determined in parallel as far as this is possible. Most of the approaches towards a connectionist unification algorithm are parts of the design of larger inference systems.
Let us describe these approaches as far as unification is concerned. Ballard (1986) prewires all substitutions and selects a substitution which is consistent with a refutation proof. This is possible because terms can only be variables and constants, clauses are used at most once and, thus, the set of substitutions is finite. Touretzky & Hinton's (1988) DCPS is capable of matching a hypothesis of the form (x e b)(x c d) against the content of the working memory by searching for a minimum energy state, where x is a variable and the remaining symbols are constants. While it seems to be possible to relax the constraints on the occurrences of variables in the hypothesis, it is by no means obvious how their technique can be applied if n-ary function symbols are allowed and if the elements of the working memory may also contain variables. Mjolsness et al. (1989) match a dag with variables against a dag without variables by minimizing an objective function, which specifies the mismatch (or distance) between the dags. However, it is not obvious that they always find the most general solution and it seems to be impossible to extend their approach in order to deal with unification problems. Lange & Dyer (1989) and Ajjanagadde & Shastri (1989) assign a unique signature or phase (of a phased clock) to constants. Dynamic bindings are created by passing these signatures or phases. However, their systems do not guarantee in general that multiple occurrences of the same variable are bound to the same constant. It is also not easy to see how their proposals can be extended to cope with function embeddings which arise during the unification process if n-ary function symbols are allowed. The connectionist approaches mentioned so far have severe shortcomings as far as the unification computation is concerned. Only Stolcke (1989) has recently investigated unification from a connectionist point of view.
Stolcke represents terms as dags and - inspired by Paterson & Wegman (1978) - defines an equivalence relation on the nodes of a dag. He computes this relation via a connectionist network by explicitly coding the axioms defining the equivalence relation. However, Stolcke's algorithm will unify the terms x and f(x), since it does not check that the equivalence relation is acyclic or, in other words, it does not perform an occur check. Furthermore, the algorithm does not compute the most general unifier σ for two terms s and t, but the term σs. It is not obvious how the algorithm can be changed such that an occur check is performed and the most general unifier is returned. In this paper we present a connectionist unification algorithm. Terms and substitutions are represented as sets of occurrence-label pairs. Using Smolensky's (1987) terminology, the occurrences are the roles that are filled by the labels. This representation allows us to represent terms and substitutions in a finite net though the set of terms as well as the set of substitutions is infinite. We define a finest valid equivalence relation on this representation, which represents a solvable unification problem, and show how this relation can be computed using only simple threshold units. Moreover, we formally prove that our connectionist model solves the unification problem. In this way we solve the variable binding problem and, moreover, ensure that multiple occurrences of the same variable are consistently bound to the same term (see e.g. (Barnden, 1984)). Due to lack of space we had to omit some details and all the proofs. They can be found in (Hölldobler, 1990). As far as this paper is concerned we do not address the problem of how the connectionist net is recruited. We expect that the unification algorithm will be built into a larger system and that such a system will take care of this problem.

1directed acyclic graph
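The role of the occur check is easiest to see in a conventional sequential unification routine. The following Python sketch is the standard textbook algorithm (my own illustration, not the connectionist algorithm developed in this paper); it rejects (x = f(x)) precisely because of the occur check:

```python
# Standard sequential first-order unification with an occur check.
# Terms: ("f", t1, ..., tn) for applications; strings for variables/constants.
V = {"x", "y", "z"}   # variable names (an assumption for this sketch)

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in V and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occur check: does variable v occur in term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, u, s) for u in t[1:])

def unify(a, b, s=None):
    """Return an mgu (as a triangular substitution) or None."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a in V:
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str) and b in V:
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for u, v in zip(a[1:], b[1:]):
            s = unify(u, v, s)
            if s is None:
                return None
        return s
    return None

print(unify("x", ("f", "x")))   # None: the occur check rejects (x = f(x))
```

Dropping the `occurs` test would let `unify("x", ("f", "x"))` succeed with a cyclic binding, which is exactly the behaviour criticized in Stolcke's algorithm above (and which is sound only over the domain of infinite trees).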
The Unification Problem

We assume a finite alphabet consisting of a set F of graded function symbols and a set V of variables. Terms and substitutions are defined as usual. Throughout the paper a, b, ... denote function symbols, s, t, ... denote terms, and x, y, ... denote variables. A unification problem consists of two terms s and t and is denoted by ⟨s = t⟩. It is the problem of whether there exists a substitution σ such that σs = σt. If such a substitution σ exists, then σ is called a unifier of ⟨s = t⟩. A substitution σ is said to be a most general unifier, or mgu, for ⟨s = t⟩ iff for each unifier θ for ⟨s = t⟩ there exists a substitution λ such that for each variable x occurring in ⟨s = t⟩ we find θx = λσx. It is well known that the unification problem is decidable and that an mgu of two terms can effectively be computed whenever the terms are unifiable (Robinson, 1965). Such an mgu is unique modulo variable renaming (Fages & Huet, 1986) and, therefore, is often called the mgu.

To develop a unification algorithm we need an axiomatization of unification which is suitable for a connectionist implementation. We will essentially use Paterson's & Wegman's (1978) approach, but our algorithm is not based on a dag but on an occurrence-label representation of the unification problem.

The set of occurrences of a term t, O(t), is inductively defined by: Λ ∈ O(t), and π ∈ O(ti) implies i.π ∈ O(f(t1, ..., ti, ..., tn)) for all 1 ≤ i ≤ n.² On occurrences a partial ordering ≥ is defined by π1 ≥ π2 iff there exists a π such that π1.π = π2. Furthermore, π1 > π2 iff π1 ≥ π2 and π1 ≠ π2. The set of labels, l(π), for each occurrence π in the terms of a unification problem ⟨s = t⟩ is the set of symbols occurring at π in s and t.

²We omit Λ and . if this does not lead to confusion.

588 KNOWLEDGE REPRESENTATION
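The inductive definition of O(t) can be sketched directly. The encodings below are our own illustrative choices: a compound term f(t1, ..., tn) is a tuple, and an occurrence is the string of concatenated argument positions (with "" for Λ), which is unambiguous only for arities below 10:

```python
def occurrences(term):
    """Return the set of occurrences O(term).

    A compound term f(t1, ..., tn) is encoded as the tuple
    ("f", t1, ..., tn); variables and constants are plain strings.
    The empty string encodes the occurrence Lambda.
    """
    occs = {""}  # Lambda is an occurrence of every term
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], start=1):
            # pi in O(t_i) implies i.pi in O(f(..., t_i, ...))
            occs |= {str(i) + p for p in occurrences(sub)}
    return occs

# a term of the shape used in the paper's examples: f(g(y), g(g(z)), g(a))
t = ("f", ("g", "y"), ("g", ("g", "z")), ("g", "a"))
print(sorted(occurrences(t)))  # ['', '1', '11', '2', '21', '211', '3', '31']
```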
The set l(π) can be split into the disjoint subsets l_f(π) and l_v(π), which contain the function symbols and variables, respectively. For the unification problem ⟨f(x,x,y) = f(g(y), g(g(z)), g(a))⟩ we obtain

l(Λ) = {f}, l(1) = {x,g}, l(11) = {y}, l(2) = {x,g}, l(21) = {g}, l(211) = {z}, l(3) = {y,g}, l(31) = {a}.

In the sequel let ⟨s = t⟩ be a unification problem and O = O(s) ∪ O(t). An equivalence relation ∼ on O is said to be

decomposable³ iff ∀π1, π2 ∈ O: π1 ∼ π2 ∧ {π1.i, π2.i} ⊆ O ⟹ π1.i ∼ π2.i,
homogeneous iff ∀π1, π2 ∈ O: π1 ∼ π2 ⟹ |l_f(π1) ∪ l_f(π2)| ≤ 1,
singular iff ∀π1, π2 ∈ O: l_v(π1) ∩ l_v(π2) ≠ ∅ ⟹ π1 ∼ π2.

A decomposable and singular equivalence relation is called a DSE-relation. We denote a ∼-equivalence class C by [π1, ..., πn] whenever {π1, ..., πn} ⊆ C. For ∼-equivalence classes C1 and C2 we define C1 ≻ C2 iff there exist π1 ∈ C1 and π2 ∈ C2 such that π1 > π2. An equivalence relation ∼ is said to be acyclic iff the ∼-equivalence classes are partially ordered by ≻. A homogeneous and acyclic DSE-relation is said to be valid.

For ⟨f(x,x,y) = f(g(y), g(g(z)), g(a))⟩ a valid equivalence relation is defined by the equivalence classes [Λ] ≻ [1,2] ≻ [11,21,3] ≻ [211,31]. For ⟨f(x,y) = f(g(y), g(x))⟩ there exists a homogeneous DSE-relation with equivalence classes [1,21] and [11,2]. But this relation is cyclic. To gain efficiency, the check whether a DSE-relation is acyclic (also called the occur check) has been omitted in virtually all logic programming systems. Colmerauer (1984) corrected this bug by interpreting logic programs no longer over the Herbrand universe of finite trees, but over the domain of infinite trees. There, the latter problem is solvable by replacing x as well as y by the infinite tree g(g(g(...))). Proposition 1 is an immediate consequence of (Paterson & Wegman, 1978) and (MacQueen et al., 1984).

Proposition 1
1. ⟨s = t⟩ has a solution over the domain of infinite trees iff there is a homogeneous DSE-relation on O.
2. ⟨s = t⟩ has a solution iff there is a homogeneous and acyclic DSE-relation on O.

³Decomposability is often called correspondence (e.g. (Dwork et al., 1984)) and sometimes implies homogeneity (e.g. (Kirchner, 1984)).

Moreover, Paterson & Wegman have shown that the mgu of two terms can be constructed from the finest valid equivalence relation on the set of occurrences of the unification problem. For our running example we will briefly recall this technique. [Λ] is the largest class⁴ (wrt ≻). But since [Λ] is not labelled by a variable we can discard it. For the next class, [1,2], we find, by inspecting the labels as well as the ordering ≻, that x is bound to g(y), and we generate the binding {x ← g(y)}. Similarly, for [11,21,3] and [211,31] we obtain the bindings {y ← g(z)} and {z ← a}, respectively. Combining these bindings yields {z ← a, y ← g(a), x ← g(g(a))}, which is the mgu of the unification problem. Because of this technique, and since we regard our connectionist unification algorithm as part of a larger inference system utilizing the same representation as the unification algorithm itself, we are satisfied if the connectionist unification algorithm generates the finest valid equivalence relation for a unification problem.

A Connectionist Unification Algorithm

In the first step towards a connectionist unification algorithm we use the fact that a DSE-relation can be characterized by its equivalence classes, which in turn can be represented by the union of the labels of their members. As we show in the following subsection, these unions can be computed by two simple operations derived from the axioms of decomposability and singularity. Thereafter we demonstrate how the unification problem can be represented and how the finest DSE-relation can be computed by a connectionist model. It remains to be checked that the finest DSE-relation is homogeneous and acyclic. This can be done by a simple extension of the connectionist model developed until then.

Representing a DSE-Relation

Let ∼ be a DSE-relation for a unification problem ⟨s = t⟩.
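The final step, combining the per-class bindings into the mgu, is ordinary substitution composition and can be sketched as follows (the tuple encoding of terms and the function names are ours):

```python
def apply_subst(term, subst):
    """Apply a substitution (a dict variable -> term) to a term."""
    if isinstance(term, tuple):  # compound term ("f", t1, ..., tn)
        return (term[0],) + tuple(apply_subst(s, subst) for s in term[1:])
    return subst.get(term, term)

def compose(bindings):
    """Combine per-class bindings, ordered from the largest class downward,
    by applying the already-collected deeper bindings to each right-hand side."""
    mgu = {}
    for var, term in reversed(bindings):
        mgu[var] = apply_subst(term, mgu)
    return mgu

# bindings read off the classes [1,2], [11,21,3], [211,31] of the example
bindings = [("x", ("g", "y")), ("y", ("g", "z")), ("z", "a")]
print(compose(bindings))
# -> {'z': 'a', 'y': ('g', 'a'), 'x': ('g', ('g', 'a'))}
```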
For a ∼-equivalence class C the set of labels, l(C), is the union of the sets of labels of its elements. As before, the set of labels of C can be split into the disjoint subsets l_f(C) and l_v(C) containing the function symbols and variables, respectively. As an example consider ⟨f(x,x,y) = f(g(y), g(g(z)), g(a))⟩, and we find

l([Λ]) = {f}, l([1,2]) = {x,g}, l([11,21,3]) = {y,g}, l([211,31]) = {z,a}.

Since we intend to represent the ∼-equivalence classes by l([π]) we have to generate l([π]). Let A contain the axioms of reflexivity, symmetry, transitivity, decomposability, and singularity. By A ⊢ π1 ∼ π2 we denote that π1 ∼ π2 can be derived from A. It can now be shown that for π1 ≠ π2

A ⊢ π1 ∼ π2 ⟺ ∃π, π1', π2': A ⊢ π1' ∼ π2' ∧ π1 = π1'.π ∧ π2 = π2'.π ∧ l_v([π1', π2']) ≠ ∅

holds. This tells us that, if π1 and π2 are in the same DSE-equivalence class, then either there is a variable among the labels of [π1, π2] or we find occurrences π, π1', π2' such that π ≠ Λ, π1 = π1'.π, π2 = π2'.π, and there is a variable among the labels of [π1', π2']. In other words, the finest DSE-relation can be constructed entirely from occurrences which are labelled with the same variable. The key idea of the connectionist unification algorithm is to increase the set of labels of each occurrence π until l(π) = l([π]). When shall the set of labels be increased? By the singularity of ∼ we find that

l(π1) ← l(π1) ∪ l(π2) whenever l_v(π1) ∩ l_v(π2) ≠ ∅   (op_S)

and by the decomposability of ∼ and the previous result we find that

l(π1.π) ← l(π1.π) ∪ l(π2.π) whenever l_v(π1) ∩ l_v(π2) ≠ ∅   (op_D)

where ← denotes assignment and π ≠ Λ. In the sequel we show that these two operations can easily be performed by a connectionist model and that they indeed generate the finest DSE-relation.

⁴Called root class in (Paterson & Wegman, 1978).

HÖLLDOBLER 589

Representing a Unification Problem

The unification algorithm is based on Feldman's & Ballard's (1982) connectionist model. Units are characterized by a potential p, an output value v, and a vector of inputs i1, ..., in.
In particular we use so-called threshold units, whose potential and output values are determined via the rules

p ← Σ_k w_k i_k,    v ← if p ≥ Θ then 1 else 0,

where Θ is a constant called the threshold and the w_k are weights on the input values. The unique output value is spread along all connections from the unit, though these connections are not always drawn from the same location. For convenience we occasionally use a bidirectional link ↔ with weight w between two units u1 and u2 as an abbreviation for two links with weight w: one from u1 to u2 and another one from u2 to u1.

The sets of labels for a unification problem can be represented by threshold units M(π, j) for each occurrence π and each symbol j such that M(π, j) is active iff j is a label of π. A unification problem is specified by externally activating the units which define the problem. This external activation has to be maintained throughout the computation since threshold units do not memorize their potential. For our running example we obtain the units depicted in Figure 1(a) as a matrix.

[Activation matrices over the symbols a, f, g, x, y, z not reproduced.] Figure 1: The representation of ⟨f(x,x,y) = f(g(y), g(g(z)), g(a))⟩; (a) initially, (b) after 1 step, and (c) after 2 steps, where the most recently activated units are half-filled.

One should observe that this representation is not unique. The unification problem ⟨f(x, g(g(z)), y) = f(g(y), x, g(a))⟩ has the same representation as the running example. But this does not lead to a problem, since the finest valid equivalence relation is identical for both examples. If we update the labels according to the operations op_S and op_D we obtain Figures 1(b) and 1(c) after 1 and 2 steps, respectively. For example, since M(11, y), M(3, y), and M(3, g) are active in (a), M(11, g) has become active in (b) by op_S.
Since M(1, x), M(2, x), and M(11, y) are active in (a), M(21, y) has become active in (b) by op_D. The final matrix in (c) represents precisely the labels of the finest DSE-relation and is considered as the output of our unification algorithm.

What must a connectionist network look like in order to implement the operations op_S and op_D? We propose a network consisting of two layers, called the term and unification layers. Let n = |O(s) ∪ O(t)| be the size of ⟨s = t⟩, let m be the cardinality of the alphabet, and let w = ½mn(n−1) be an integer used to set up thresholds and weights.

The term layer contains the representation of a unification problem as an n×m matrix M of threshold units with threshold w. Each unit is connected via bidirectional links of weight w with n−1 units in the unification layer. The unification layer contains the units necessary to implement op_S and op_D. For both operations the algorithm has to determine whether two occurrences share a common variable and to update the sets of labels accordingly. This requires that for any two occurrences π1 and π2 and for any symbol j, M(π1, j) and M(π2, j) are connected. Since the unification problem has n different occurrences and m different symbols, we need ½n(n−1) units for each symbol, i.e. altogether w units. These units can be represented by a ½n(n−1)×m matrix U of threshold units with threshold w+1. An element of this matrix is denoted by U({π1, π2}, j), or U(π1, π2, j) for short,⁵ indicating that this unit is connected with M(π1, j) and M(π2, j). Each unification layer unit is also connected to other unification layer units with weight 1 such that there is a connection from U(π1, π2, x) to U(π1, π2, j) for all x ∈ V, j ∈ V ∪ F, j ≠ x, and from U(π1, π2, x) to U(π1.π, π2.π, j) for all π ≠ Λ, x ∈ V, j ∈ V ∪ F such that {π1.π, π2.π} ⊆ O(s) ∪ O(t).

⁵Note, U(π1, π2, j) and U(π2, π1, j) denote the same unit.
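A single threshold unit of the kind used in both layers can be sketched as follows. The function name and the example numbers are ours; the example mimics a unification layer unit with threshold w+1, which fires only when a term layer input of weight w is accompanied by at least one unification layer input of weight 1:

```python
def threshold_unit(inputs, weights, theta):
    """Potential p is the weighted input sum; output v is 1 iff p >= theta."""
    p = sum(w * i for w, i in zip(weights, inputs))
    return 1 if p >= theta else 0

w = 12  # illustrative value of the weight w
# term layer input active plus one unification layer input: fires
print(threshold_unit([1, 1], [w, 1], w + 1))  # -> 1
# term layer input alone does not reach threshold w+1
print(threshold_unit([1, 0], [w, 1], w + 1))  # -> 0
```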
The threshold of a unification layer unit is chosen such that active unification layer units can only activate another unification layer unit if this unit also receives activation from the term layer. Conversely, a term layer unit is activated as soon as a corresponding unification layer unit is active. To exemplify the network, Figure 2 shows the term layer together with the unification layer units and the connections needed to solve our running example. All externally activated term layer units are represented as full circles. The interested reader is encouraged to verify that the number i in a unit indicates that this unit will be activated after i steps.

[Network diagram not reproduced.] Figure 2: The cyclic term layer together with the triangular unification layer units and the connections needed for solving ⟨f(x,x,y) = f(g(y), g(g(z)), g(a))⟩. Recall that a unification layer unit U(π1, π2, j) is uniquely determined by its term layer units M(π1, j) and M(π2, j). Each connection between unification layer units has weight 1, and each connection between a unification and a term layer unit has weight w.

Computing the finest DSE-relation

We are of course interested in whether the connectionist model computes the finest DSE-relation for any solvable unification problem. To answer this question we define a function T on the units of the term and unification layers. Let N be a set of term and unification layer units and define

T(N) = N ∪ T1(N) ∪ T2(N), where
T1(N) = {M(π, j) | ∃π': U(π, π', j) ∈ N}
T2(N) = {U(π1, π2, j) | {M(π1, j), M(π2, j)} ⊆ N ∨ [(M(π1, j) ∈ N ∨ M(π2, j) ∈ N) ∧ D(π1, π2, N)]}
D(π1, π2, N) = ∃x ∈ V: ∃π, π1', π2': π1 = π1'.π ∧ π2 = π2'.π ∧ U(π1', π2', x) ∈ N.

By the definition of T and the fact that there are only finitely many units we find that T(N) ⊇ N, that T(N) = N implies T^i(N) = N for all i ≥ 0, and that ∃k: T^{k+1}(N) = T^k(N). In fact, we can show that k is bounded by the number n of occurrences in a unification problem.
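Abstracting from the network, the fixed point that T computes amounts to iterating op_S and op_D sequentially until the label sets stop growing. A rough sketch follows; the encodings and names are ours, with occurrences written as concatenated argument positions (unambiguous only for arities below 10):

```python
VARS = {"x", "y", "z"}  # the variables of the running example

def saturate(labels):
    """Iterate op_S and op_D until no label set grows any more.

    labels maps occurrence strings (Lambda is "") to sets of symbols.
    """
    occs = list(labels)
    changed = True
    while changed:
        changed = False
        for p1 in occs:
            for p2 in occs:
                if p1 == p2 or not (labels[p1] & labels[p2] & VARS):
                    continue
                # op_S: p1 and p2 share a variable -> merge their labels
                union = labels[p1] | labels[p2]
                if labels[p1] != union or labels[p2] != union:
                    labels[p1], labels[p2] = union, set(union)
                    changed = True
                # op_D: also merge the labels of corresponding subterms
                for q1 in occs:
                    if q1 != p1 and q1.startswith(p1):
                        q2 = p2 + q1[len(p1):]
                        if q2 in labels and not labels[q2] <= labels[q1]:
                            labels[q1] = labels[q1] | labels[q2]
                            changed = True
    return labels

# initial labels of <f(x,x,y) = f(g(y), g(g(z)), g(a))>
l = {"": {"f"}, "1": {"x", "g"}, "11": {"y"}, "2": {"x", "g"},
     "21": {"g"}, "211": {"z"}, "3": {"y", "g"}, "31": {"a"}}
saturate(l)
print(sorted(l["21"]), sorted(l["211"]))  # -> ['g', 'y'] ['a', 'z']
```

At the fixed point each occurrence carries the labels of its equivalence class, mirroring the matrix in Figure 1(c).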
Now let N be the set of active term layer units which represent a unification problem, and let N* = T^k(N) such that T(N*) = N*. The interested reader may check that for our running example N* = T^6(N). We can now answer the question raised at the beginning of this subsection.

Theorem 2  ∀π ∈ O: ∀j ∈ V ∪ F: M(π, j) ∈ N* ⟺ j ∈ l([π]).

This result ensures that a finest DSE-relation for a unification problem is generated by the connectionist model. That it is indeed the finest DSE-relation follows immediately from the fact that each activation was forced by either the axiom of singularity or the axiom of decomposability. The space complexity is bounded by the square of the size n of the unification problem, whereas the time complexity is bounded by n. We should not be disappointed by these results, as Dwork et al. (1984) have proved that unification is logspace complete. But look at a best-case scenario. This is the case if the unification problem degenerates to a word problem, i.e. to the question of whether two terms s and t are syntactically equal. If the word problem is solvable, then the finest DSE-relation is found after 1 step. Similarly, we can show that even for a matching problem, i.e. the problem of whether there exists a substitution σ for s and t such that σs = t, the finest DSE-relation is also found after 1 step. As an example consider s = f(x, h(x)) and t = f(h(a), h(h(a))). Then

N = {M(Λ, f), M(1, x), M(1, h), M(11, a), M(2, h), M(21, x), M(21, h), M(211, a)}

and T(N) as well as T²(N) are equal to

N ∪ {U(1, 21, h), U(11, 211, a), U(1, 21, x)}.

Hence, the matching problem is solvable with most general solution {x ← h(a)}.

Homogeneous DSE-relations

A DSE-relation need not be homogeneous. This can be seen if we alter our running example by replacing the z by a new constant b. The finest DSE-relation for ⟨f(x,x,y) = f(g(y), g(g(b)), g(a))⟩ can be computed as before, and we find that 211 ∼ 31 and l_f([211,31]) = {a,b}.
However, since ∼ is homogeneous iff we find for all π that |l_f([π])| ≤ 1, we conclude that ∼ is not homogeneous iff there is a row π in the term layer and at least two function symbols f1 and f2 such that M(π, f1) and M(π, f2) are active. This condition can be directly translated into a connectionist network. For each row π of the term layer we insert an additional threshold unit which receives activation from each unit M(π, f), where f is a function symbol, and becomes active as soon as it is excited by two term layer units.

Proposition 1(1) ensures that the algorithm developed so far decides a unification problem over infinite trees. One should observe that the matching problem is the same regardless of whether it is interpreted over the domain of finite or infinite trees. Thus, we find that whenever T²(N) ⊃ T(N) the matching problem is unsolvable and, hence, a matching problem can be decided in 2 steps. This improves a result by Dwork et al. (1984), who have shown that the matching problem of size n can be solved on a PRAM in log² n parallel time.

Valid Equivalence Relations

It remains to be checked whether the finest homogeneous DSE-relation is acyclic or, in other words, valid. Unfortunately, there is no unit in our connectionist model which represents a ∼-equivalence class. However, we know that two different occurrences π1 and π1' are equal under ∼ iff there are a variable x and occurrences π2, π2', and π such that U(π2, π2', x) is active, π1 = π2.π, and π1' = π2'.π. This fact will be exploited in order to determine whether ∼ is acyclic in an additional layer of our algorithm called the occur check layer. This layer consists of ½n(n−1) threshold units C({π, π'}), or C(π, π') for short, with threshold 1, where π ≠ π'. Let C(π1, π1') > C(π2, π2') iff there exist π ∈ {π1, π1'} and π̄ ∈ {π2, π2'} such that π > π̄.
The units in the occur check layer are connected by links with weight 1 from C(π1, π1') to C(π2, π2') iff C(π1, π1') > C(π2, π2'). Now assume that each element C(π, π') is initially activated if π ∼ π', whereas C(π, π') is externally inhibited if π ≁ π'. This inhibition must be strong enough such that an occur check layer unit cannot be activated by other occur check layer units. As a result, units representing a cycle will form a stable coalition, whereas units which are not part of a cycle will be deactivated after some time. As an example consider ⟨f(x, y) = f(g(y), g(x))⟩. A homogeneous DSE-relation exists and is depicted in Figure 3(a). Since 1 ∼ 21 and 2 ∼ 11, the units C(1, 21) and C(2, 11) will initially be active. Figure 3(b) shows the occur check layer, where all connections to and from a self-excitatory unit are not drawn. The units C(1, 21) and C(2, 11) are mutually excitatory and form a stable coalition. Notice that C(21, 11) is externally inhibited and, thus, cannot be activated by C(1, 21) and C(2, 11). For our running example ⟨f(x, x, y) = f(g(y), g(g(z)), g(a))⟩ the units C(1, 2), C(11, 21), C(11, 3), C(21, 3), and C(211, 31) are initially active, but are all deactivated after 3 steps.

The behaviour of the occur check layer can be formally verified. Let C be a set of active occur check layer units and S be a transformation on C such that

S(C) = C \ {C(π1, π1') | ¬∃C(π2, π2') ∈ C: C(π1, π1') < C(π2, π2')}.⁶

[Diagram not reproduced.] Figure 3: (a) the term layer representing the finest DSE-relation and (b) the occur check layer for ⟨f(x, y) = f(g(y), g(x))⟩.

With this definition we find that S(C) ⊆ C, that S(C) = C implies S^k(C) = C for all k ≥ 0, and that ∃k ≤ n: S^{k+1}(C) = S^k(C). Now let C = {C(π, π') | π ∼ π'} and C* = S^k(C) such that S(C*) = C*. In other words, C contains all occur check layer units which are initially active.

Theorem 3  C* = ∅ iff ∼ is acyclic.

It remains to show how the occur check layer units are initially activated.
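Abstracting from the layer mechanics, Theorem 3 says the occur check amounts to ordinary cycle detection on the ≻ relation between equivalence classes. A rough sequential sketch, with class names and the string encoding of occurrences chosen by us:

```python
from typing import Dict, Set

def above(c1: Set[str], c2: Set[str]) -> bool:
    # C1 > C2 iff some member of C1 is a proper prefix of a member of C2
    return any(p1 != p2 and p2.startswith(p1) for p1 in c1 for p2 in c2)

def occur_check_fails(classes: Dict[str, Set[str]]) -> bool:
    """True iff the > relation on the classes has a cycle (DFS coloring)."""
    names = list(classes)
    edges = {n: [m for m in names if m != n and above(classes[n], classes[m])]
             for n in names}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in names}
    def dfs(n):
        color[n] = GRAY
        for m in edges[n]:
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in names)

# <f(x,y) = f(g(y), g(x))>: the classes [1,21] and [11,2] form a cycle
print(occur_check_fails({"C1": {"1", "21"}, "C2": {"11", "2"}}))  # -> True
# the running example's chain of classes is acyclic
print(occur_check_fails({"A": {""}, "B": {"1", "2"},
                         "C": {"11", "21", "3"}, "D": {"211", "31"}}))  # -> False
```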
Therefore, assume that all occur check layer units are externally inhibited. Such an inhibition of a unit C(π1, π1') is blocked and the unit is excited iff there is an active unification layer unit of the form U(π2, π2', x) and an occurrence π such that π1 = π2.π and π1' = π2'.π, where x is a variable. Notice that a unit has to be excited only once, whereas the inhibition has to be blocked for the time the occur check layer needs to settle down.

Discussion

We have presented a connectionist unification algorithm and shown that it is correct and complete. The algorithm does not propagate potential non-unifiability as Stolcke's (1989) algorithm does. This propagation is vital for Stolcke's approach since his algorithm is initialized by activating the unit which represents the fact that the unification problem is solvable. We have tried to add the propagation of potential non-unifiability to our unification algorithm, but all examples suggested that there would be no considerable speedup.

Recall that an equivalence relation on a set of occurrences is homogeneous iff each occurrence is labelled with at most one function symbol. Consequently, if the equivalence relation is homogeneous, then in each row of the term layer at most one of the units which represent the function symbols is active. In this special case a coarse-coded representation of the function symbols is possible even without any cross-talk.

⁶C \ C' denotes the set C minus the set C'.

What is this unification algorithm good for besides showing that unification can be implemented in a connectionist system? We would like to apply it within term rewriting, logic programming, or theorem proving in order to tackle the problems mentioned in the introduction. But there are a lot of open problems. We have assumed that all units of our layered architecture are prewired. This is all right as long as the application requires only a bounded number of clauses.
If, however, an unbounded number of variants of clauses is needed, then the network has to be recruited dynamically. The unification problem admits a single solution. Unfortunately, as soon as we are interested in unification under certain equational theories, we have to deal with complete sets of unifiers. These sets may even be infinite. Similarly, a problem posed to a theorem prover may admit several solutions. How can we represent multiple solutions and how can we deal with potentially infinite computations in a connectionist theory?

Acknowledgement: I would like to thank Jerry Feldman for his guidance and support as well as Andreas Stolcke and Heinz Schmidt, whose comments on earlier versions of the unification algorithm helped to improve it considerably.

References

Ajjanagadde, V. & Shastri, L. (1989). Efficient Inference with Multi-Place Predicates and Variables in a Connectionist System. In Proceedings of the Annual Conference of the Cognitive Science Society, pp. 396-403.

Ballard, D. H. (1986). Parallel Logic Inference and Energy Minimization. In Proceedings of the AAAI National Conference on Artificial Intelligence, pp. 203-208.

Barnden, J. A. (1984). On Short Term Information Processing in Connectionist Theories. Cognition and Brain Theory, 7:25-59.

Bibel, W. (1987). Automated Theorem Proving. Vieweg Verlag, Braunschweig, second edition.

Citrin, W. V. (1988). Parallel Unification Scheduling in Prolog. Technical Report UCB/CSD 88/415, University of California, Berkeley.

Colmerauer, A. (1984). Equations and Inequations on Finite and Infinite Trees. In Proceedings of the International Conference on Fifth Generation Computer Systems, pp. 85-99.

Dwork, C., Kanellakis, P. C., & Mitchell, J. C. (1984). On the Sequential Nature of Unification. Journal of Logic Programming, 1:35-50.

Fages, F. & Huet, G. (1986). Complete Sets of Unifiers and Matchers in Equational Theories. Journal of Theoretical Computer Science, 43:189-200.

Feldman, J. A.
& Ballard, D. H. (1982). Connectionist Models and Their Properties. Cognitive Science, 6(3):205-254.

Fodor, J. A. & Pylyshyn, Z. W. (1988). Connectionism and Cognitive Architecture: A Critical Analysis. In Pinker & Mehler, eds., Connections and Symbols, pp. 3-71. MIT Press.

Herbrand, J. (1930). Sur la Théorie de la Démonstration. In Goldfarb, ed., Logical Writings (1971). Cambridge.

Hölldobler, S. (1990). A Connectionist Unification Algorithm. Technical Report TR-90-012, International Computer Science Institute, Berkeley, California.

Kirchner, C. (1984). A New Equational Unification Method: A Generalisation of Martelli-Montanari's Algorithm. In Proceedings of the Conference on Automated Deduction, pp. 224-247.

Kowalski, R. (1979). Logic for Problem Solving, vol. 7 of Artificial Intelligence. North Holland, New York/Oxford.

Lange, T. E. & Dyer, M. G. (1989). Frame Selection in a Connectionist Model of High-Level Inferencing. In Proceedings of the Annual Conference of the Cognitive Science Society, pp. 706-713.

MacQueen, D., Plotkin, G. D., & Sethi, R. (1984). An Ideal Model for Recursive Polymorphic Types. In Proceedings of the ACM Symposium on Principles of Programming Languages.

Mjolsness, E., Gindi, G., & Anandan, P. (1989). Optimization in Model Matching and Perceptual Organization. Neural Computation, 1:218-229.

Paterson, M. S. & Wegman, M. N. (1978). Linear Unification. Journal of Computer and System Sciences, 16:158-167.

Robinson, J. A. (1965). A Machine-Oriented Logic Based on the Resolution Principle. Journal of the ACM, 12:23-41.

Smolensky, P. (1987). On Variable Binding and the Representation of Symbolic Structures in Connectionist Systems. Technical Report CU-CS-355-87, Department of Computer Science & Institute of Cognitive Science, University of Colorado.

Smolensky, P. (1988). On the Proper Treatment of Connectionism. Behavioral and Brain Sciences, 11:1-74.

Stolcke, A. (1989).
Unification as Constraint Satisfaction in Structured Connectionist Networks. Neural Computation, 1(4):559-567.

Touretzky, D. S. & Hinton, G. E. (1988). A Distributed Connectionist Production System. Cognitive Science, 12:423-466.

Vitter, J. S. & Simons, R. A. (1986). New Classes for Parallel Complexity: A Study of Unification and Other Complete Problems for P. IEEE Transactions on Computers, pp. 403-418.
ON THE EXPRESSIVENESS OF NETWORKS WITH HIDDEN VARIABLES

Rina Dechter (1)
Computer Science Department
Technion -- Israel Institute of Technology
Haifa, Israel, 32000
e-mail: dechter@techsel.bitnet

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Abstract

This paper investigates design issues associated with representing relations in binary networks augmented with hidden variables. The trade-off between the number of variables required and the size of their domains is discussed. We show that if the number of values available to each variable is just two, then hidden variables cannot improve the expressional power of the network, regardless of their number. However, for k ≥ 3, we can always find a layered network using k-valued hidden variables that represents an arbitrary relation. We then provide a scheme for decomposing an arbitrary relation ρ using (|ρ|−2)/(k−2) hidden variables, each having k values (k ≥ 3).

1. Introduction

Hidden units play a central role in the connectionist model, without which the model could not represent many useful functions and relations. In the early days of the Perceptrons [Minsky 1969] it was noted that even simple functions like the XOR were not expressible in a single-layer perceptron; a realization that slowed research in the area until the notion of hidden units had emerged [Rumelhart 1988a, Hinton 1988]. Nevertheless, a formal treatment of the expressiveness gained by hidden units, and systematic schemes for designing systems with hidden units within the neural network paradigm, are still not available. Our intention is to investigate formally the role of hidden units and devise systematic schemes for designing systems incorporating hidden units. Specifically, we address the following task: given a relation on n variables, called visible, we wish to design a network having n+h units whose stable patterns (relative to the visible units) coincide with the original relation. This task is central to most applications of connectionist networks, in particular to its role as associative memory. The task will be investigated for a connectionist architecture which is different from classic connectionist networks in that it is based on constraint networks. The sequential constraint network model is defined next.

(1) This research was supported in part by NSF grant #IRI-8821444 and by an Air Force grant #AFOSR 88 0177 while the author was visiting the cognitive systems lab at UCLA.

A network of binary constraints involves a set of n variables X1, ..., Xn, each represented by its domain values D1, ..., Dn, and a set of constraints. A binary constraint Rij between two variables Xi and Xj is a subset of the Cartesian product Di × Dj that specifies which values of the variables are compatible with each other. A solution is an assignment of values to all the variables which satisfies all the constraints, and the constraint satisfaction problem (CSP) associated with these networks is to find one or all solutions. A binary CSP can be associated with a constraint graph in which nodes represent variables and arcs connect pairs of variables which are constrained explicitly. Figure 1a presents a constraint network where each node represents a variable having values {a, b, c} and each link is associated with a strict lexicographic order (where xi < xj iff i < j). (The domains and the constraints are explicitly indicated on some of the links.)

[Diagram not reproduced.] Figure 1: An example of a binary CN

Our constraint-based connectionist architecture assumes that each unit plays the role of a variable having k states, and that the links, representing the constraints, are quantified by compatibility relations between states of adjacent units. Each unit asynchronously updates its state
(i.e., assigns itself one of its values) using a decision function based only on the states of its neighboring units and its compatibility relations with them. In a companion paper we provide a communication protocol for this model which is guaranteed to converge to a globally consistent assignment of values [Collin 1990]. Although this constraint-based architecture differs from classical connectionist architectures, the resemblance is strong enough to shed interesting light on the latter architectures as well.

The paper is organized as follows. Section 2 continues with definitions and preliminaries. In Section 3 we show that bi-valued hidden variables add no expressional power, while in Section 4 we show that if the hidden variables have 3 values or more they can decompose any relation. Bounds on the trade-off between the number of hidden variables and the cardinality of their values are given as well. Section 5 extends the decomposition scheme to relations having some initial inner decomposition, Section 6 presents examples, and Section 7 provides concluding remarks. Due to space limitations most results are presented with sketchy or no proofs. For formal proofs see [Dechter 1990].

2. Definitions and preliminaries

Since communication links are pairwise, our constraint-based architecture is restricted to expressing binary constraint networks only. A general relation on n variables is not necessarily expressible as a binary network of constraints on the same set of variables. The question we pose, therefore, is how to express any relation in a binary constraint network with the aid of new hidden variables. Let rel(R) denote the relation associated with network R (i.e., rel(R) is the set of all solutions to R). Let ρ be an n-ary relation over variables X = {X1, ..., Xn}, each having k values. We now define the notion of decomposability with hidden variables.

Definition: Relation ρ is h-network-decomposable if there exist h additional variables Y = {Y1, ..., Yh}, having k values or less, for which there is a binary network R = R(X,Y) on X ∪ Y such that

ρ = Π_X rel(R(X,Y)).   (1)

Π_U(ρ) denotes the projection of relation ρ on the subset U of its variables. The projection of ρ on variables Xi1, ..., Xir is given by: Π_{Xi1,...,Xir}(ρ) = {(xi1, ..., xir) | ∃x̄ ∈ ρ s.t. ∀j: x̄ij = xij}. When no hidden variables are required for network decomposability we say that the relation is network decomposable. Any relation ρ can be associated with a unique binary network that is generated by projecting the relation on each pair of its variables. This network is called the minimal network [Montanari 1974] and it is known to provide the best approximation to ρ. Namely, ρ ⊆ rel(M), and if R is any other binary network on the original set of variables s.t. ρ ⊆ rel(R), then ρ ⊆ rel(M) ⊆ rel(R). It follows that the minimal network can determine whether a relation is network decomposable or not.

Theorem 1: A relation is network decomposable if and only if its minimal network represents it precisely, namely, if ρ = rel(M).

Every non-decomposable relation has a trivial star-decomposition using one hidden variable and an unrestricted number of values. In this decomposition the hidden variable Y needs t values, where t is the cardinality of the relation. Each value of Y is needed to "index" one tuple in the relation. This is achieved by constraining the hidden variable with each original variable as follows. The constraint between the hidden variable Y and an original variable Xi makes the i-th value of Y compatible with one and only one value of Xi: the value that appears in the i-th tuple of the relation. That way each value of Y is made consistent with exactly one tuple (Fig. 2). The resulting constraint network, which has a star shape (hence its name), clearly represents the target relation (i.e., projecting it on all original variables yields the original relation).
[Figure 2: A star decomposition]

Having the minimal network on one extreme (a potential decomposition with no hidden variables) and the star network (requiring one hidden variable and an unrestricted number of values) on the other, we are now interested in exploring the middle ground. In particular, we wish to establish how many hidden variables are required were we to restrict the size of their domains.

DECHTER 557

3. Using bi-valued hidden variables

When the hidden variables have only two values we get a surprising negative result:

Theorem 2: Relations which are not network decomposable cannot be decomposed by adding any number of 2-valued hidden variables.

Sketch of proof: Let ρ be a relation that is not network decomposable and let M be its minimal network. The minimal network, since it does not represent ρ, allows a tuple x = x_1, . . . , x_n which is not part of ρ. The task of any hidden variable is to disallow this tuple while at the same time allowing all tuples in ρ. Assume Y is such a bi-valued hidden variable that, when added to the network M, is inconsistent with x while consistent with every tuple in ρ. Y has to be consistent with each value of x (since x's values are generated from the minimal network). Namely, each value of x is consistent either with Y's "0" or with Y's "1". We claim, further, that all of x's values are consistent either with the "0" or with the "1". For if not, there is a value x_i not consistent with Y=0 and a value x_j not consistent with Y=1, and the pair (x_i, x_j) is not consistent with any value of Y. However, since this pair is allowed by the minimal network, excluding it must eliminate a legal tuple of ρ, which yields a contradiction. The argument can be extended by induction to any number of hidden variables [Dechter 1990]. □
4. Multi-valued hidden variables

4.1 A conditional decomposition scheme

This section investigates decomposition schemes utilizing multi-valued (i.e., more than 2 values) hidden variables. In particular we wish to explore the trade-off between r, the number of hidden variables, and k, their domain sizes, required for decomposing an arbitrary relation. We first restrict ourselves to r = 1. Clearly, a relation having t tuples is 1-decomposable by the star network. One may expect that, when the minimal constraints between the original variables are also used as part of the network decomposition, the number of values needed by the centered hidden variable can be reduced. It can be shown, however, that for some relations t is also the smallest number of values required for decomposition (when using one hidden variable). Let us define the unit relation, U_n, to be the "0-1" relation on n variables whose i-th tuple consists of the value "1" for variable X_i and the value "0" for all other variables (Fig. 3a).

[Figure 3: (a) the unit relation U_5, (b) the relation U_{10,3}]

It can be shown that the relation U_n cannot exploit the minimal constraints in order to reduce the value cardinality of the hidden variable:

Theorem 3: The smallest value k for which the unit relation U_n is 1-decomposable is k = n.

Sketch of proof: The minimal network of U_n allows the extra all-"0" tuple. To exclude it, every value of the hidden variable must be inconsistent with at least one X_i = 0 and therefore consistent with X_i = 1. As such it can "extend" only the i-th tuple of U_n; hence we need n values. □

Let us define H_ρ(h) to be the minimum number of values (per variable) required for an h-decomposition of relation ρ. We can conclude:

Corollary 1: For every ρ, H_ρ(1) ≤ |ρ|, and for some ρ's, H_ρ(1) = |ρ|.
□

A straightforward extension of the star decomposition to two hidden variables presents itself by expressing the |ρ| values of one hidden variable by |ρ| different pairs of values on the pair of hidden variables, as follows. All pairs of values are of the form (i, 0) or (0, i), where |ρ|/2 ≥ i > 0; the value (i, 0) is associated with the i-th tuple while the pair (0, i) is associated with the (|ρ|/2 + i)-th tuple. (For notational convenience we assume that all fractional expressions represent the ceiling integer of that fraction.) Since |ρ| different pairs can be expressed in this way by two (|ρ|/2)-valued hidden variables, we can infer that H_ρ(2) ≤ |ρ|/2. It can be shown, however [Dechter 1990], that this bound is tight. Namely:

Corollary 2: For every ρ, H_ρ(2) ≤ |ρ|/2, and for some ρ's (e.g., U_n), H_ρ(2) = |ρ|/2. □

The above property is extensible to any number of hidden variables, and we can show that, when decomposed with r hidden variables, the unit relation U_n needs hidden variables with at least n/r values. We conclude that:

Corollary 3: For some ρ's, H_ρ(r) ≥ ⌊|ρ|/r⌋. □

Our approach for systematically decomposing a relation is to start from a star decomposition using a |ρ|-valued hidden variable and then, if only k-valued variables are available, to simulate the star hidden variable by a relation that obeys the value restriction and that can be network decomposed. If the latter relation is not network decomposable we apply the same principle to it, and so on. This approach is detailed in the following paragraphs.

Let us extend the notion of the unit relation to a k-valued relation as follows: the k-valued unit relation, U_{n,k}, has n tuples and r = n/(k-1) variables such that the (i(k-1) + j)-th tuple, i ≤ r, j ≤ k-1, has zero everywhere except for the i-th variable, whose value is j (Fig. 3b). We focus, first, on the decomposition of U_n. The unit relation U_n can be conditionally star-decomposed via n/(k-1) k-valued hidden variables.
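The unit relations of Fig. 3 can be generated mechanically. A short Python sketch (function names are ours, not the paper's); U_{10,3} reproduces the shape of Fig. 3b, with 10 tuples over ceil(10/2) = 5 variables:

```python
def unit_relation(n):
    """U_n: the i-th tuple has value 1 for X_i and 0 elsewhere."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

def unit_relation_k(n, k):
    """U_{n,k}: n tuples over ceil(n/(k-1)) variables; the tuples assign,
    in turn, each value j (1 <= j <= k-1) to each variable, zero elsewhere."""
    r = -(-n // (k - 1))                      # ceiling division
    tuples = [tuple(j if col == i else 0 for col in range(r))
              for i in range(r) for j in range(1, k)]
    return tuples[:n]

print(unit_relation(5)[0])              # (1, 0, 0, 0, 0)
print(len(unit_relation_k(10, 3)[0]))   # 5 variables, as in Fig. 3b
```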
We first generate the concatenated relation U_n U_{n,k}, and then decompose it via a two-layered network. The first layer consists of the original variables X_1, . . . , X_n and the second layer has the hidden variables Y_1, . . . , Y_{n/(k-1)}. The only constraints in the decomposing network are those relating each hidden variable with an original variable (Fig. 4a). The constraints themselves are generated by projecting the concatenated relation U_n U_{n,k} on the corresponding pairs of variables (Fig. 4b). We say that U_{n,k} "conditionally" decomposes U_n in the following sense: for any instantiation of the hidden variables to a legal tuple in U_{n,k}, the network allows only legal tuples among the original variables, namely, those participating in U_n. However, the new appended relation U_{n,k} is not, by itself, network decomposable. Namely, even if we add all the minimal constraints between the hidden variables, they will not exclude the "0" tuple on the hidden variables, which in turn allows any combination of values on the original variables. As its name indicates, this is just a conditional decomposition, i.e., conditioned on our ability to further decompose U_{n,k}.

[Figure 4: Decomposing U_6 by U_{6,3}]

Let ρ_1 and ρ_2 be two relations having the same number of tuples on disjoint sets of variables. Let us denote by R(ρ_1, ρ_2) the two-layered network in which the top layer contains ρ_1's variables, the bottom layer contains ρ_2's variables, and there are constraints between any variable in ρ_1 and any variable in ρ_2. The constraints themselves are the projection of the concatenated relation ρ_1 ρ_2 on the corresponding pairs of variables. Using this notation we can say that R(U_n, U_{n,k}) is a conditional decomposition of U_n.

4.2 A general decomposition scheme

It seems as though we have not solved anything: we have just transferred the decomposability problem from one unit relation (U_n) to another (U_{n,k}).
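The layered construction R(ρ_1, ρ_2) can be spelled out concretely. Below is a brute-force Python sketch (helper names and encodings are ours) that builds R(U_6, U_{6,3}) as in Fig. 4 and checks the "conditional" property: every legal hidden tuple admits only tuples of U_6 on the first layer.

```python
from itertools import product

def layered_network(rho1, rho2):
    """R(rho1, rho2): a constraint between every variable of rho1 and every
    variable of rho2, obtained by projecting the concatenated relation
    rho1.rho2 (tuple i with tuple i) on the corresponding pairs."""
    n1 = len(rho1[0])
    concat = [t1 + t2 for t1, t2 in zip(rho1, rho2)]
    return {(i, n1 + j): {(t[i], t[n1 + j]) for t in concat}
            for i in range(n1) for j in range(len(rho2[0]))}

def conditionally_decomposes(rho1, rho2, net):
    """True if every legal hidden tuple of rho2 allows only tuples of rho1
    on the first layer (the 'conditional decomposition' property)."""
    n1 = len(rho1[0])
    doms = [sorted({t[i] for t in rho1}) for i in range(n1)]
    return all(
        {t1 for t1 in product(*doms)
         if all((t1[i], t2[j - n1]) in c for (i, j), c in net.items())}
        <= set(rho1)
        for t2 in rho2)

# U_6 and U_{6,3} encoded directly (6 tuples each):
U6  = [tuple(1 if j == i else 0 for j in range(6)) for i in range(6)]
U63 = [(1,0,0), (2,0,0), (0,1,0), (0,2,0), (0,0,1), (0,0,2)]
net = layered_network(U6, U63)
print(conditionally_decomposes(U6, U63, net))   # True, as in Fig. 4
```

Note that nothing here constrains the hidden layer itself; as the text observes, instantiating all Y's to zero would let any first-layer tuple through, which is exactly why the decomposition is only conditional.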
Nevertheless, since the number of variables in U_{n,k} is n/(k-1) < n (for k > 2), we can now decompose it with a new k-valued unit "relation", denoted U^1_{n,k}, having an even smaller number of variables. The unit relation U^1_{n,k} is a pseudo-relation, since it is a set of tuples which are not necessarily different. U^1_{n,k} has n/(k-1)^2 variables, and it is generated by taking the unit relation U_{n/(k-1),k} and duplicating each tuple in it (Fig. 5). The intention is that each tuple in U^1_{n,k} will not distinguish between tuples in U_{n,k} having non-zero values for the same variable. U^1_{n,k} can conditionally decompose U_{n,k} in the same manner that U_{n,k} conditionally decomposes U_n, using the layered network R(U_{n,k}, U^1_{n,k}). This results in a sequence of "pseudo" unit relations, each with a smaller number of variables and each disallowing a smaller tuple of "0"s in the preceding relation. The resulting relation is a concatenation of "inflated" unit relations, each having 1/(k-1) of the variables of the preceding relation. Let us denote the resulting relation by U*_n (Fig. 5). Clearly U_n = Π_X U*_n.

[Figure 5: U*_12; the relation generated from decomposing U_12]

The network that decomposes U*_n is a layered network where each set of new hidden variables is connected to all variables in the preceding layer, and the constraints are the projection of U*_n on the corresponding pairs of variables. The bottom layer consists of one or two variables whose allowed tuples can be controlled by a direct constraint. A schematic description of the network decomposition of U_12 is given in Fig. 6.
[Figure 6: A layered decomposition of U*_12]

To summarize, U_n can be decomposed by intersecting a sequence of two-layered networks. The resulting network, R*_n, is defined by ("∩" denotes the intersection operation):

    R*_n = R(U_n, U_{n,k}) ∩ R(U_{n,k}, U^1_{n,k}) ∩ R(U^1_{n,k}, U^2_{n,k}) ∩ . . . ∩ R(U^i_{n,k}, U^{i+1}_{n,k}) ∩ . . . ∩ R(U^{log n - 1}_{n,k}, U^{log n}_{n,k}).    (4)

We can now apply the same idea to an arbitrary relation. Namely, in order to decompose a given relation ρ using k-valued hidden variables we generate the network

    R*(ρ) = R(ρ, U_{|ρ|,k}) ∩ R(U_{|ρ|,k}, U^1_{|ρ|,k}) ∩ . . . ∩ R(U^{log|ρ| - 1}_{|ρ|,k}, U^{log|ρ|}_{|ρ|,k}).    (5)

Let H^{-1}_ρ(k) denote the minimum number of k-valued hidden variables needed for decomposing ρ. We get the following theorem:

Theorem 4: Any relation ρ is decomposable by (|ρ| - 2)/(k - 2) k-valued hidden variables, when k > 2. □

From Corollary 3 and from Theorem 4 it follows that the unit relation's decomposition cannot be substantially improved. Therefore, the decomposition scheme presented by (5) is optimal in the sense that for some relations (the U_n's) a better decomposition does not exist. Nevertheless, we still wish to find the minimum number of values needed for the star-decomposition, since it will provide a better bound for any general decomposition, namely:

    H^{-1}_ρ(k) ≤ (H_ρ(1) - 2)/(k - 2).    (7)

5. Decompositions of partially decomposable relations

One way of improving our scheme is hinted at by investigating the level of inner decomposition of the relation using only its original variables. We assumed that the relation is not binary-network decomposable; however, it may be losslessly decomposed into relations having arity greater than 2 yet smaller than n. Let a relation scheme R = R_1, . . . , R_l be a set of subsets of attributes of the original relation. We say that the scheme R is a lossless decomposition of ρ if:

    ρ = ρ_1 ⋈ ρ_2 ⋈ . . . ⋈ ρ_l,    (8)

where ⋈ is the relational database join operator.
We claim that if a lossless decomposition of the relation is available, and if we use hidden variables to decompose each component separately without introducing any conflicts between the components, the combined network is a decomposition of the target relation. The general scheme follows: given a lossless decomposition R = R_1, R_2, . . . , R_l of ρ, defined over variables X = X_1, . . . , X_n, and given a binary network decomposition R'_i for each subrelation ρ_i, utilizing a set of hidden variables Y_i (i.e., ρ_i = Π_X rel(R'_i)), and denoting ρ' = rel(R'_1) ⋈ . . . ⋈ rel(R'_l), it is always true that

    Π_X ρ' ⊆ (Π_X rel(R'_1)) ⋈ . . . ⋈ (Π_X rel(R'_l)) = ρ.    (9)

If we take special care to ensure that the hidden variables used in different components will not interfere with each other (by utilizing disjoint subsets, for instance) and will not eliminate a legal tuple of ρ, we will have equality in the left-hand side of (9). In that case the combined network ∩_i R'_i is a network decomposition of ρ utilizing the set of hidden variables ∪_i Y_i.

We can associate a star decomposition with each component separately. Namely, if l hidden variables are available, each devoted to a star decomposition of one subrelation, then the hidden variable Y_i of subrelation ρ_i will need |ρ_i| values. We therefore get:

    H^{-1}_ρ(max_i |ρ_i|) ≤ l.    (10)

If only k-valued hidden variables are available we can decompose each component subrelation using disjoint subsets of k-valued hidden variables. This way the non-interference property is maintained. Applying the bound of Theorem 4 to each component separately and summing over all components, we get:

    H^{-1}_ρ(k) ≤ Σ_{i=1..l} (|ρ_i| - 2)/(k - 2) = (Σ_{i=1..l} |ρ_i| - 2l)/(k - 2).    (11)

We see, therefore, that the "level" of inner decomposition can affect ρ's decomposability. We further conjecture that if ρ cannot be losslessly decomposed at all, then H_ρ(1) = |ρ|.
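Equation (8) uses the standard natural join of relational databases. A minimal Python sketch (relations encoded as lists of attribute-to-value dicts; the toy relation and all names are ours) checking losslessness of a two-component scheme:

```python
def join(rel1, rel2):
    """Natural join of two relations given as lists of attribute->value dicts."""
    shared = set(rel1[0]) & set(rel2[0])
    return [{**t1, **t2} for t1 in rel1 for t2 in rel2
            if all(t1[a] == t2[a] for a in shared)]

# A toy check that the scheme R1 = {A,B}, R2 = {B,C} is lossless for rho:
rho = [{'A': 0, 'B': 0, 'C': 1}, {'A': 1, 'B': 1, 'C': 0}]
r1 = [{k: t[k] for k in ('A', 'B')} for t in rho]
r2 = [{k: t[k] for k in ('B', 'C')} for t in rho]
print(join(r1, r2) == rho)   # True: joining the projections recovers rho
```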
Examples conforming with this conjecture are the unit relation and the parity relation [Dechter 1990].

6. Examples

The following two examples, taken from chapter 8 of [Rumelhart 1988b], demonstrate the use of inner decompositions once they are available.

Example 1: addition. Consider the problem of adding two two-digit numbers, where the digits are 0 or 1. Denote the first number by X1 Y1 and the second number by X2 Y2, let T stand for the carry, and let Z1 Z2 Z3 stand for the resulting sum. The add relation is given in Figure 7a.

[Figure 7: (a) The add relation, (b) A lossless decomposition of add]

A star decomposition of add with one hidden variable requires 16 values, namely H_add(1) ≤ 16. Using 3-valued variables our scheme requires 14 variables (Eq. (7)). Consider now the lossless decomposition of the add relation given by R = R1, R2, where R1 = {T, Y1, Y2, Z3} and R2 = {T, X1, X2, Z1, Z2} (see Figure 7b). By decomposing each component separately we get that one 8-valued variable is needed for rel(R2) and one 4-valued variable for rel(R1); each of the two can star-decompose its corresponding subrelation. In that case nothing is gained by the inner decomposition, since a decomposition with 8-valued variables can be applied directly to the overall relation using just two variables. However, if only 3-valued variables are permitted, rel(R1) requires two such hidden variables while rel(R2) requires 6, resulting in a total of 8 3-valued variables, as in Fig. 8.

[Figure 8: A 3-valued decomposition of add]

Example 2: the negation problem. Consider a situation where the input to a system consists of a pattern of n+1 bits and the output of n bits. One of the input bits
is called "the negation bit". When it is "0", the rest of the n input bits should be mapped directly to the output pattern; if it is "1", each bit should be mapped to its negation. Fig. 9a describes the relation defined over the negation variable N, the input variables X1, X2, X3 and the output variables Y1, Y2, Y3.

[Figure 9: (a) The neg_3 relation, (b) Subrelation R_i, (c) A network decomposition]

A direct decomposition of this relation requires 2^{n+1} values for a star decomposition, and when 3-valued variables are available, 2^{n+1} - 2 hidden variables are required. Consider now the following lossless decomposition:

    neg_n = rel(R_1) ⋈ . . . ⋈ rel(R_n),    (12)

where rel(R_i) is given in Figure 9b. Two 3-valued hidden variables can be used for each rel(R_i), resulting in a total of 2n 3-valued hidden variables. The constraint graph of this decomposition is given in Fig. 9c.

7. Conclusions

We have shown that any relation can be expressed as a network of binary relations if augmented by hidden variables having three values, while no expressive power is gained by hidden variables having only two values. Specifically, a constructive scheme is presented that decomposes any relation ρ into a layered network using (|ρ| - 2)/(k - 2) k-valued hidden variables, when k > 2. We also showed that the scheme is worst-case optimal, meaning that some relations require that many hidden variables. We extended the scheme to exploit an initial lossless decomposition of the relation, if one is available.

Comparing our decomposition scheme with current techniques used in the neural networks community, we should consider two systems: those based on the Hebbian rule and those using feedforward networks. The former are restricted to orthogonal vectors, and thus our scheme is more general. The latter have no established theoretical guarantees and often require a long time to converge. In contrast, our scheme is complete and works in time linear in the size of the initial relation.
Its drawback, however, is that it requires a-priori knowledge of the entire relation. Nonetheless, understanding the basic theoretical limitations of architectures using hidden variables should facilitate the development of effective generalizing schemes based on partial relations.

References

[Collin 1990] Collin, Z. and R. Dechter, "A distributed solution to the network consistency problem," Tech. Rep., Technion, Haifa, Israel, 1990.

[Dechter 1990] Dechter, R., "Design of networks with hidden variables," Tech. Rep., Technion, Haifa, Israel, 1990.

[Hinton 1988] Hinton, G.E. and T.J. Sejnowski, "Learning and relearning in Boltzmann machines," in Parallel Distributed Processing, Vol. 1, Ch. 7, Cambridge, Massachusetts, 1988.

[Minsky 1969] Minsky, M. and S. Papert, Perceptrons, Cambridge, MA: MIT Press, 1969.

[Montanari 1974] Montanari, U., "Networks of Constraints: Fundamental Properties and Applications to Picture Processing," Information Science, Vol. 7, 1974, pp. 95-132.

[Rumelhart 1988a] Rumelhart, D.E., G.E. Hinton, and R.J. Williams, "Learning Internal Representation by Error Propagation," in Parallel Distributed Processing, Vol. 1, Ch. 8, Cambridge, Massachusetts, 1988.

[Rumelhart 1988b] Rumelhart, D.E. and J.L. McClelland, Parallel Distributed Processing, Cambridge, Massachusetts and London, England: The MIT Press, 1988.
The Complexity of Closed World Reasoning and Circumscription

Marco Cadoli, Maurizio Lenzerini
Dipartimento di Informatica e Sistemistica
Università di Roma "La Sapienza"
via Buonarroti 12, 00185 Roma, Italia

Abstract

Closed world reasoning is a common nonmonotonic technique that allows for dealing with negative information in knowledge and data bases. We present a detailed analysis of the computational complexity of the different forms of closed world reasoning for various fragments of propositional logic. The analysis allows us to draw a complete picture of the tractability/intractability frontier for such a form of nonmonotonic reasoning. We also discuss how to use our results in order to characterize the computational complexity of other problems related to nonmonotonic inheritance, diagnosis, and default reasoning.

1 Introduction

Closed World Reasoning (CWR) is a common nonmonotonic technique that allows for dealing with negative information in knowledge and data bases. The simplest form of closed world reasoning is the (simple) closed world assumption (CWA), introduced by Reiter [13], which states that a negative ground fact of the form ¬p is inferred from a knowledge base T just in case the corresponding positive fact p cannot be deduced from T. This represents the idea that every positive fact that is not known to be true should be considered false. A semantic characterization of the CWA can be given in terms of minimal Herbrand models: reasoning with the CWA is equivalent to assuming that every ground fact which is not included in the intersection of all the Herbrand models of T is false.
Starting from the consideration that the simple CWA may lead to inconsistency when the knowledge base contains disjunctive positive assertions, such as (a ∨ b), Minker [10] proposed a new form of closed world reasoning, called the generalized closed world assumption (GCWA), which states that the negative facts to be inferred from a knowledge base T should be those which do not appear in any of the minimal models of T. This principle takes into account that a (non-Horn) theory may have, in general, more than one minimal model, and every such model should be considered in closed world inference. Gelfond and Przymusinska [4] extend the work of Minker by allowing the generalized closed world assumption to be applied to a specified set P of (not necessarily all) predicates of the knowledge base. The resulting form of closed world reasoning, called the careful closed world assumption (CCWA), is shown to be more expressive than the original generalized closed world assumption. In particular, it allows the derivation of new positive facts, which is impossible in both the CWA and the GCWA. Gelfond, Przymusinska, and Przymusinski [5] deal with an extension of the CCWA, proposing the so-called extended closed world assumption (ECWA), which is shown to be the most powerful formalization of closed world reasoning. In particular, they prove the equivalence (at least for propositional theories) between ECWA and circumscription [9].

Although the importance of the above forms of CWR has often been stressed both from a theoretical and a practical point of view, a complete analysis of the computational complexity of CWR is missing. Notice that, on the contrary, such an analysis has been provided for other forms of nonmonotonic reasoning, such as default reasoning [6], abduction [2], and path-based inheritance [17]. The aim of this paper is to present a detailed analysis of the computational complexity of closed world reasoning.
In particular, we are interested in exploring the trade-off between the tractability of the inference problem and the expressive power of the representation language. To this purpose, we concentrate our attention on propositional logic (although our results can be generalized in several ways, as mentioned in Section 4), which provides a representation language which is decidable and, at least in some cases, tractable, and we consider different subclasses of propositional formulae, each one characterized by some syntactic restriction.

In the analysis, we shall refer to the results concerning the computational complexity of CWR which have appeared in the literature so far, namely [1,7,8,16].

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The paper is organized as follows. In Section 2 we provide some definitions and results which will be used in the subsequent sections. In Section 3 we present the complexity analysis: the analysis is performed along two directions, one concerning the expressive power of the language, and one concerning the form of CWR. Finally, in Section 4 we discuss possible applications of our results to other reasoning problems.

2

We always refer to propositional formulae in conjunctive normal form, called CNF formulae or simply formulae. The set of propositional letters appearing in a formula is called its alphabet. A CNF formula is a conjunction of clauses, where a clause is a disjunction of literals, and a literal is either a propositional letter or its negation.

A closed world reasoning rule (CWR-rule) is a rule specifying a set of clauses to be conjoined to a formula T in order to obtain the closure of T according to some closed world reasoning criterion. Some CWR-rules require the alphabet of the formula to be partitioned into three sets, denoted P, Q, Z respectively.
P contains the letters to be minimized, Z contains the letters whose truth value can vary when minimizing the letters in P, and Q contains all the remaining letters. Given a set of letters R, we denote by R+ (resp. R-) the set of all positive (resp. negative) literals from R. All the forms of CWR we consider in this paper can be abstractly characterized as follows.

Definition 1. Let T be a propositional formula, and (P; Q; Z) be a partition of the letters of its alphabet. We define the closure of T with respect to the CWR-rule α as follows:

    α(T; P; Q; Z) = T ∪ { ¬K | K is free for negation in T with respect to α }

where K is a formula whose form depends on α. As a notational convenience, we say that the formula K is α-ffn to mean that it is free for negation with respect to the CWR-rule α in (T; P; Q; Z). In order to precisely characterize the different forms of CWR, we now consider every CWR-rule α, specifying what it means for a formula to be α-ffn.

- The CWA-rule corresponds to the (simple) closed world assumption [13]. K is CWA-ffn if K is a positive literal and T ⊭ K.

- The GCWA-rule corresponds to the generalized closed world assumption [10]. K is GCWA-ffn if K is a positive literal and, for each positive clause B such that T ⊭ B, it holds that T ⊭ B ∨ K.

- The EGCWA-rule corresponds to the extended generalized closed world assumption [18]. K is EGCWA-ffn if K is a conjunction of positive literals and, for each positive clause B such that T ⊭ B, it holds that T ⊭ B ∨ K.

- The CCWA-rule corresponds to the careful closed world assumption [4]. The letters of T are partitioned into (P; Q; Z). K is CCWA-ffn if K is a positive literal from P and, for each positive clause B whose literals belong to P+ ∪ Q+ ∪ Q- such that T ⊭ B, it holds that T ⊭ B ∨ K.

- The ECWA-rule corresponds to the extended closed world assumption [5]. The letters of T are partitioned into (P; Q; Z).
K is ECWA-ffn if K is an arbitrary formula not involving literals from Z and, for each positive clause B whose literals belong to P+ ∪ Q+ ∪ Q- such that T ⊭ B, it holds that T ⊭ B ∨ K.

Notice that in the CWA, GCWA, and EGCWA rules there is no need to partition the letters into P, Q, Z, and therefore we can simplify the notation and write CWA(T), GCWA(T), and EGCWA(T).

The CWR-rules can also be given a semantical characterization, which is based on the notion of minimal model. A model of a formula T is a truth assignment that satisfies T. For any two models M, N of T, we write M ≤ N if the set of letters of T which are assigned true by M is a subset of the analogous set for N. Moreover, if (P; Q; Z) is a partition of the letters of T, we write M ≤(P;Z) N if M and N assign the same truth value to the letters in Q, and the set of letters of P which are assigned true by M is a subset of the analogous set for N. We say that a model M is minimal for T if there exists no model N of T such that N ≤ M and M ≤ N does not hold. Analogously, we say that a model M is (P; Z)-minimal for T if there exists no model N of T such that N ≤(P;Z) M and M ≤(P;Z) N does not hold. Notice that (P; Z)-minimality reduces to minimality when Q = Z = ∅.

The semantical characterization of freeness for negation is now given by the following properties.

- K is CWA-ffn in T iff there exists a model M of T such that M ⊭ K.
- K is GCWA-ffn in T iff for each minimal model M of T it holds that M ⊭ K.
- K is EGCWA-ffn in T iff for each minimal model M of T it holds that M ⊭ K.
- K is CCWA-ffn in (T; P; Q; Z) iff for each (P; Z)-minimal model M of T it holds that M ⊭ K.
- K is ECWA-ffn in (T; P; Q; Z) iff for each (P; Z)-minimal model M of T it holds that M ⊭ K.

From the above properties, one can easily show that for any formula F, EGCWA(T) ⊨ F iff for each minimal model M of T it holds that M ⊨ F, and ECWA(T; P; Q; Z) ⊨ F iff for each (P; Z)-minimal model M of T it holds that M ⊨ F. Moreover, it is shown in [5] that for any formula F,
More- over, it is shown in [5] that for any formula F, CADOLI AND LENZERINI 55 1 ECWA(T; P; Q; 2) b F iff CIRC(T; P; Q; 2) j= F, where CIRC(T; P; Q; 2’) denotes the circumscription of P in T with variables 2. In other words, the ECWA is equivalent to circumscription, at least for propositional formulae. The different forms of CWR are not independent from each other. In the rest of this paper we shall make use of the following facts relating the various CWR-rules. Fact 1: for each formula F, if T is Horn, i.e. is constituted by clauses with at most one positive lit- eral, then CWA(T) j= F B GCWA(T) b F iff EGCWA(T) b F iff ECWA(T; P;@; 0) j= F iff CCWA(T; P; 8; 0) b F. Fact 2: for each formula F, EGCWA(T) j= F iff ECWA(T; P;&@) k F. Fact 3: for each formula F, GCWA(T) b F iff CCWA(T; P; 8; 0) j= F. Fact 4: for each literal L belonging to P+ U P- , CCWA(T; P; Q; 2) + L iff ECWA(T; P; Q; 2) + F. As we said in the introduction, in this paper we are concerned with CWR in the context of propositional logic (although our results can be generalized in several ways, see Section 4). Since we are interested in the tractability frontier for CWR, we focus our attention on those classes of formulae for which monotonic inference is a polyno- mial task. The results reported in [15] show that this holds for three classes of formulae, namely, Horn, dual-Horn, and Krom. In the rest of the paper, we shall refer to various subsets of such classes. A complete classification of all the classes of propositional formulae that we consider is as follows: o Horn: at most one positive literal per clause e dual-Horn: at most one negative literal per clause 0 Krom: or negative at most tW0 literals per clause - either positive o Definite: exactly one positive literal per clause ID HomKrom: Horn and Krom e dual-HomKrom: dual-Horn and Krom e HomKrom-: HomKrom with no negative clauses having 2 literals e 2-positive-Krom: exactly negative literal, per clause. 
two positive literals, and no The complexity analysis presented in the next section, will be performed along two directions, concerning the expressiveness of the language, and the form of CWR, respectively. For those CWR-rules requiring the letters of a formula T to be partitioned into the three sets P, Q, 2 (e.g. CCWA and ECWA), we will consider two special cases, namely Q = 0 or 2 = 0. 3 Complexity Analysis In this section we consider the different forms of closed world reasoning, and, for each form, we analyze the various classes of formulae, providing lower complexity bounds for the deduction problem (for the sake of brevity, most of the proofs are omitted). With regard to the upper complexity bounds, we notice that, from the analysis of Schlipf [16] it follows that per- forming deduction under the CCWA in propositional logic is both an NP-hard and a coNP-hard problem. Hence it is unlikely that the problem is either in NP or in coNP. Obviously, by Fact 4, this applies to the ECWA as well. In fact, we can go one step further, and prove that the deduction problem under the ECWA is in PSPACE, and in particular in the class II; of the polynomial hierarchy, although we do not know whether the problem is complete for such a class. A simple intuition supporting this fact is that, determining if ECWA(T; P; Q; 2) j= y can be done by checking if y is satisfied in all the truth assign- ments of T which are (P; Z)-minimal models of T, and the problem of checking if a truth assignment M of T is a (P; Z)-minimal model is in the class of coNP (it suffices to compare M with all the other truth assignments of T). A more detailed analysis of the upper complexity bounds for several classes of propositional formulae is described in [3]. 3.1 CWA The results reported in [13] show that the application of the CWA to Krom and dual-Horn formulae may lead to inconsistency. Moreover, Minker [lo] shows that, when the CWA is consistent, it is equivalent to GCWA (see Subsection 3.2). 
In [1], it is shown that the deduction problem under the CWA for Definite formulae is solvable in polynomial time. Since negative clauses do not contribute to the deduction (see [13]), this property extends to Horn formulae. By Fact 1, one can conclude that, for Horn formulae, the deduction problem under both the GCWA and the EGCWA is polynomial too.

3.2 GCWA

In [7], it is shown that the problem of determining whether GCWA(T) ⊨ L, where T is a dual-Horn formula and L is a literal, is coNP-hard. By Facts 1, 2, and 3, this holds also for EGCWA, CCWA and ECWA.

The deduction problem under the GCWA for Krom formulae is polynomial; this derives from Fact 3 and from a stronger result which will be presented in Subsection 3.4.

Recently, Rajasekar, Lobo and Minker [12] have proposed a weak form of GCWA, called Weak GCWA (WGCWA), which applies to disjunctive logic programs (i.e. logic programs whose rules have any number of positive literals in the head) and is defined in such a way that every deduction under such a form reduces polynomially to deduction under the CWA on definite logic programs. It is easy to see that the WGCWA yields polynomial time deduction algorithms in the propositional case.

3.3 EGCWA

We already mentioned that the deduction problem under the EGCWA is polynomial for Horn formulae. The following theorem shows that this is the only tractable case.

Theorem 1 Let T be a 2-positive-Krom formula, and γ any clause. Then determining if EGCWA(T) ⊨ γ is a coNP-hard problem.

PROOF (sketch): We reduce the unsatisfiability problem to our problem by exhibiting a mapping from any CNF formula π to a 2-positive-Krom formula π″ and a clause C such that π is unsatisfiable iff EGCWA(π″) ⊨ C. Let π be a CNF formula on the alphabet L. Let L′ be the alphabet L ∪ {ā | a ∈ L}. We define π′ on L′ as follows:

1. for each letter a of L, there is a clause a ∨ ā in π′;
2. for each clause ¬u1 ∨ ... ∨ ¬un ∨ un+1 ∨ ... ∨ un+m in π, there is a clause ū1 ∨ ... ∨ ūn ∨ un+1 ∨ ... ∨ un+m in π′.

Obviously, π′ is satisfiable iff π is satisfiable. Let L″ be the alphabet obtained from L′ by adding a new letter gi for each clause γi in π. We define π″ on L″ as follows:

1. for each letter a of L, there is a clause a ∨ ā in π″;
2. for each clause γi = ¬u1 ∨ ... ∨ ¬un ∨ wn+1 ∨ ... ∨ wn+m in π, there are n+m clauses gi ∨ ū1, ..., gi ∨ ūn, gi ∨ wn+1, ..., gi ∨ wn+m in π″.

Notice that the above mapping from π to π″ is clearly polynomial, and π″ is a 2-positive-Krom formula. Moreover, given a model M of π′, we can build a disjunction B such that π″ ⊨ B ∨ K and π″ ⊭ B, and, conversely, given a disjunction B such that π″ ⊨ B ∨ K and π″ ⊭ B, we can build a model M of π′. Therefore, the conjunction K = g1 ∧ ... ∧ gh, where g1, ..., gh are all the letters of L″ corresponding to the clauses in π, is not EGCWA-ffn in π″ iff π′ is satisfiable. Taking into account that ¬K is a disjunction, and EGCWA(π″) ⊨ ¬K iff K is EGCWA-ffn in π″, it follows that EGCWA(π″) ⊨ ¬K iff π is unsatisfiable. □

Taking into account Fact 2, we can conclude that the above result applies to the ECWA too.

3.4 CCWA

In [8] a detailed analysis of CCWA is presented. In particular it is shown that determining if CCWA(T; P; Q; ∅) ⊨ L, where L is a literal, is coNP-hard for Horn formulae. Moreover, two different problems are shown to be polynomial: CCWA(T; P; Q; Z) ⊨ γ, where γ is an arbitrary clause and T is HornKrom, and CCWA(T; P; ∅; Z) ⊨ γ, where γ is an arbitrary clause and T is Krom. We complete the analysis of [8] by providing the following result:

Theorem 2 Let T be a Krom formula, and γ be a clause. Then the problem of determining if CCWA(T; P; Q; Z) ⊨ γ is polynomial (see [3] for an O(|T|²) algorithm).

Notice that, by Fact 3, the above result on Krom formulae can be extended to the GCWA.
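The mapping π ↦ π″ used in the proof of Theorem 1 can be rendered concretely. The sketch below is illustrative only (the string names bar_a and g_i are invented here); it emits clauses as pairs of positive literals over the extended alphabet, so every output clause is 2-positive-Krom by construction:

```python
def to_2pos_krom(pi, letters):
    """pi: CNF clauses over integer literals (+a positive, -a negative).
    Returns the clauses of pi'' as pairs of positive literals (strings)."""
    bar = {a: f"bar_{a}" for a in letters}      # fresh letter bar(a) for each a
    pi2 = [(str(a), bar[a]) for a in letters]   # the clauses a v bar(a)
    for i, clause in enumerate(pi):
        gi = f"g_{i}"                           # fresh letter g_i per clause of pi
        for lit in clause:
            # a negative literal -a contributes g_i v bar(a); a positive a gives g_i v a
            pi2.append((gi, bar[-lit] if lit < 0 else str(lit)))
    return pi2
```

Running it on the one-clause formula a ∨ ¬b yields the clauses a ∨ ā, b ∨ b̄, g0 ∨ a and g0 ∨ b̄, all with exactly two positive literals.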
3.5 ECWA

We first present two intractability results, concerning HornKrom and Definite formulae, respectively.

Theorem 3 Let T be a HornKrom formula, γ be a clause and L be a literal. Then both determining if ECWA(T; P; Q; ∅) ⊨ γ and if ECWA(T; P; Q; Z) ⊨ L are coNP-hard problems.

Theorem 4 Let T be a Definite formula, and γ be a clause. Then the problem of determining if ECWA(T; P; Q; ∅) ⊨ γ is coNP-hard.

The above theorems can be proven similarly to Theorem 1. Moreover, we can strengthen the result of Theorem 1, by showing that, as far as 2-positive-Krom formulae are concerned, the problem of deducing literals under the ECWA is a coNP-hard problem, even if Q = ∅.

Theorem 5 Let T be a 2-positive-Krom formula, and L be a literal. Then the problem of determining if ECWA(T; P; ∅; Z) ⊨ L is coNP-hard.

The above theorem, together with Theorem 2 and Fact 4, allows us to conclude that determining whether a given literal L logically follows from ECWA(T; P; ∅; Z) can be characterized as follows: the problem is polynomial for L ∈ P+ ∪ P−, whereas it is coNP-hard for L ∈ Z+ ∪ Z−. Notice that the same holds for the case when Q ≠ ∅ (see [3]).

In the rest of this subsection, we discuss two tractable cases, concerning Horn and HornKrom- formulae, respectively.

Theorem 6 Let T be a Horn formula, and γ be a clause. Then ECWA(T; P; ∅; Z) ⊨ γ iff CCWA(T; P; ∅; Z) ⊨ γ. Hence determining if ECWA(T; P; ∅; Z) ⊨ γ is a polynomial problem (see [8] for a polynomial algorithm).

Theorem 7 Let T be a HornKrom- formula, and γ be a clause. Then the problem of determining if ECWA(T; P; Q; Z) ⊨ γ is polynomial.

In [3], we present an algorithm that, given a HornKrom- formula T and a clause γ, determines if ECWA(T; P; Q; Z) ⊨ γ. The algorithm uses a graph representation for T, and runs in O(|T|²) time.

The above theorem is probably the first tractability result concerning parallel circumscription.
For example, it can be easily shown that the algorithm in [11] for computing circumscription is exponential even for HornKrom- formulae. This fact shows that in order to develop efficient methods for computing circumscription, it is necessary to look for structural properties of the ECWA-ffn formulae which are relevant to the deduction.

All the complexity results about closed world reasoning for the different classes of propositional formulae are summarized in Table 1, where P means "polynomial time", coNP means coNP-hardness, and the symbol "-" means non-applicable. Each entry referring to a previously known result is marked with the appropriate reference. Entries without references refer to results presented in this paper.

Table 1: Summary of the complexity analysis (rows: CWA, GCWA, WGCWA, EGCWA, CCWA with Q = ∅, CCWA with Z = ∅, CCWA, ECWA with Z = ∅, ECWA; columns: Horn, Krom, Definite, HornKrom, HornKrom-, 2-positive-Krom, dual-Horn, dual-HornKrom)

4 Applications

In this section we briefly discuss some possible applications of the complexity analysis developed in Section 3. In general, our results can be used to characterize the complexity of any reasoning problem which can be formalized in terms of CWR on propositional formulae. We concentrate our attention on three such problems, namely nonmonotonic inheritance, default reasoning, and diagnosis.

As a first observation, notice that we can directly apply all the above results to the problem of performing CWR on a particular class of first order monadic theories, called Inheritance Networks [8].
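Several of the applications discussed next reduce to entailment under the ECWA. As a purely illustrative, exponential-time baseline (not an algorithm from the paper), the semantic characterization recalled in Section 3 (γ follows iff it holds in every (P; Z)-minimal model of T) can be prototyped by exhaustive enumeration:

```python
from itertools import product

def ecwa_entails(T, gamma, letters, P, Z):
    """T, gamma: Python predicates over models (dicts letter -> bool).
    Returns True iff gamma holds in every (P;Z)-minimal model of T."""
    Q = [x for x in letters if x not in P and x not in Z]
    models = [m for vals in product([False, True], repeat=len(letters))
              for m in [dict(zip(letters, vals))] if T(m)]
    def smaller(M, N):
        # M strictly preferred to N: same on Q, P-extension strictly shrinks
        return (all(M[q] == N[q] for q in Q)
                and all(N[p] or not M[p] for p in P)
                and any(N[p] and not M[p] for p in P))
    minimal = [M for M in models if not any(smaller(M2, M) for M2 in models)]
    return all(gamma(M) for M in minimal)
```

For T = a ∨ b with P = {a, b} and Z = ∅, the minimal models are {a} and {b}, so ¬(a ∧ b) is entailed while a alone is not.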
Moreover, it can be seen that the problem of reasoning under circumscription on an Inheritance Network N can be reduced to the one of performing deduction under the ECWA on a suitable propositional formula obtained from N by means of a polynomial transformation. By virtue of this fact, some of the results presented in Section 3 can be used to give lower bounds to the complexity of performing deduction on circumscriptive Inheritance Networks, i.e. Inheritance Networks with defeasible rules, where the meaning of defeasible rules is formalized in terms of minimization of abnormalities.

With regard to diagnosis, in [14] a general methodology for solving the problem of diagnosing a malfunctioning system is given. The system is described by means of a logical formula SD in which the distinguished unary predicate symbol ABNORMAL is used to model the malfunctioning of a component. Observations about the system behaviour are represented by a further formula OBS. A diagnosis is then defined as a minimal set Δ ⊆ COMPONENTS such that

SD ∪ OBS ∪ {¬ABNORMAL(c) | c ∈ COMPONENTS \ Δ}

is consistent, where COMPONENTS is the set of constant symbols of SD representing the system components. Reiter shows that there is a natural correspondence between the diagnoses of SD ∪ OBS and the extensions of the default theory whose first-order part is SD ∪ OBS, and whose default rules are {: ¬ABNORMAL(c) / ¬ABNORMAL(c) | c ∈ COMPONENTS}. It is easy to prove that, when SD ∪ OBS can be expressed as a propositional formula, for example when the system is a digital circuit, the diagnosis problem can also be formulated as a deduction problem under ECWA. This allows us to apply some of our results to the diagnosis problem. For example, consider a very simple digital circuit made up of only two kinds of components: the wire and the inverter.
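Reiter's definition of diagnosis just recalled can be prototyped by brute force. Everything below is a hypothetical illustration: `diagnoses` enumerates subset-minimal sets of abnormal components, and `consistent` models an invented two-inverter chain with input 1 but observed output 0 (a healthy chain would output 1):

```python
from itertools import combinations

def diagnoses(components, consistent):
    """Minimal sets D such that SD + OBS + {not AB(c) | c not in D} is consistent."""
    found = []
    for k in range(len(components) + 1):
        for D in combinations(components, k):
            if any(set(E) <= set(D) for E in found):
                continue        # not minimal: a smaller diagnosis is contained in D
            if consistent(set(D)):
                found.append(D)
    return found

def consistent(abnormal, inp=1, observed=0):
    # A normal inverter negates its input; an abnormal one may output anything.
    for o1 in ([0, 1] if "i1" in abnormal else [1 - inp]):      # output of i1
        for o2 in ([0, 1] if "i2" in abnormal else [1 - o1]):   # output of i2
            if o2 == observed:
                return True
    return False
```

Here `diagnoses(["i1", "i2"], consistent)` returns the two singleton diagnoses, one per inverter, and rejects both the empty set (inconsistent with the observation) and the non-minimal pair.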
A way to model (unidirectionally) the behaviour of these components is by means of the clauses

oninput(wirei) ∧ ¬ab(wirei) ⊃ onoutput(wirei)
oninput(inverteri) ∧ ¬ab(inverteri) ⊃ ¬onoutput(inverteri).

Now, based on the result stated in Theorem 1, it can be shown that determining whether a given fact is true in all the diagnoses of a system of the above form is coNP-hard.

Finally, we observe that our analysis can be used to obtain complexity results in the context of default logic. The basic observation is that skeptical reasoning in default theories (i.e. validity in all the extensions of a default theory, see [6]) can be reduced to ECWA, at least for some propositional default theories where defaults are of the form : ¬a / ¬a. In [6] it is shown that for a superclass of this class (namely Normal Unary default theories) skeptical reasoning is polynomial if the first-order part of the theory is a conjunction of literals. By exploiting the results of this paper, it is possible to prove that the problem is coNP-hard for a slight enhancement of the expressive power of the language, namely if the first-order part of the theory is a 2-positive-Krom propositional formula.

References

[1] Apt K.R., "Introduction to Logic Programming", Technical Report TR-87-35, University of Texas, Department of Computer Science, 1987.
[2] Bylander T., Allemang D., Tanner M.C., and Josephson J.R., "Some Results Concerning the Computational Complexity of Abduction", Proceedings of the 1st KR Conference, 1989.
[3] Cadoli M., and Lenzerini M., "The Complexity of Closed World Reasoning", Technical Report, Università di Roma "La Sapienza", Dipartimento di Informatica e Sistemistica, 1990.
[4] Gelfond M., and Przymusinska H., "Negation as Failure: Careful Closure Procedure", Artificial Intelligence, Vol. 30, 1986.
[5] Gelfond M., Przymusinska H., and Przymusinski T., "On the Relationship between Circumscription and Negation as Failure", Artificial Intelligence, Vol. 38, 1989.
[6] Kautz H.A., and Selman B., "Hard Problems for Simple Default Logics", Proceedings of the 1st KR Conference, 1989.
[7] Lenzerini M., "Covering and Disjointness Relationships in Hierarchy Networks", Technical Report RAP.17.88, Università di Roma "La Sapienza", Dipartimento di Informatica e Sistemistica, 1988.
[8] Lenzerini M., "Reasoning about Inheritance Networks", Technical Report RAP.22.88, Università di Roma "La Sapienza", Dipartimento di Informatica e Sistemistica, 1988.
[9] McCarthy J., "Circumscription - A Form of Non-Monotonic Reasoning", Artificial Intelligence, Vol. 13, 1980.
[10] Minker J., "On Indefinite Databases and the Closed World Assumption", Proceedings of the 6th CADE Conference, 1982.
[11] Przymusinski T., "An Algorithm to Compute Circumscription", Artificial Intelligence, Vol. 38, 1989.
[12] Rajasekar A., Lobo J., and Minker J., "Weak Generalized Closed World Assumption", Journal of Automated Reasoning, Vol. 5, 1989.
[13] Reiter R., "On Closed World Data Bases", in Gallaire H., and Minker J. (eds), Logic and Data Bases, Plenum, New York, 1978.
[14] Reiter R., "A Theory of Diagnosis from First Principles", Artificial Intelligence, Vol. 32, 1987.
[15] Schaefer T.J., "The Complexity of Satisfiability Problems", Proceedings of the 10th ACM STOC, 1978.
[16] Schlipf J.S., "When is Closed World Reasoning Tractable?", Proceedings of the 3rd ISMIS, 1988.
[17] Selman B., and Levesque H.J., "The Tractability of Path-Based Inheritance", Proceedings of the 11th IJCAI, 1989.
[18] Yahya A., and Henschen L.J., "Deduction in Non-Horn Databases", Journal of Automated Reasoning, Vol. 1, 1985.
Optimally Efficient Inference System

Lokendra Shastri and Venkat Ajjanagadde
Department of Computer & Information Science
University of Pennsylvania
Philadelphia, PA 19104, USA
shastri@cis.upenn.edu

Abstract

This paper describes a knowledge representation and reasoning system that performs a limited but interesting class of inferences over a restricted class of first-order sentences with optimal efficiency. The proposed system can answer yes-no as well as wh-queries in time that is only proportional to the length of the shortest derivation of the query and is independent of the size of the knowledge base. This work suggests that the expressiveness and the inferential ability of a representation and reasoning system may be limited in unusual ways to arrive at extremely efficient yet fairly powerful knowledge based systems.

Introduction

Research in artificial intelligence has made it abundantly clear that tremendous computational activity underlies even the most commonplace intelligent behavior. For example, language understanding, a task that we usually perform so effortlessly and effectively, depends upon the agent's ability to disambiguate word senses, recover the phrase structure of input sentences, resolve anaphoric references, impose selectional restrictions, recognize speaker's plans, perform numerous predictions, and generate explanations.

Within the knowledge representation and reasoning paradigm, most of the above computations are viewed as inferences. Such a view, however, leads to the following paradox: a generalized notion of inference is intractable, yet the human ability to perform cognitive tasks such as language understanding in real-time suggests that we are capable of performing a wide range of inferences with extreme efficiency. The success of AI critically depends on resolving the above paradox. Fortunately, one is only faced with an
Fortunately, one is only faced with an *This work was supported by NSF grants IRI 88-05465, MCS- 8219196-CER, MCS-83-05211, DARPA grants NOOO14-85-K-0018 and NOOO14-85-K-0807, and AR0 grant ARO-DAA29-84-9-0027. apparent paradox because humans are not general pur- pose reasoners - we can only perform a limited (but perhaps a fairly rich) class of inferences with extreme ef- ficiency. This suggests two alternate research strategies for addressing the paradox: One may attempt to char- acterize the class of inference that people perform with great efficiency and try and discover appropriate repre- sentations, algorithms, and computational architectures to perform these inferences with requisite efficiency. Unfor- tunately, a precise characterization of what humans can infer with ease does not appear to be in sight. An alter- nate strategy would be to develop a complexity theory of reasoning by identifying what classes of inference can be performed with what degree of efficiency. The work re- ported in this paper conforms to this latter strategy. Such a strategy has been pursued by several researchers (for example, [6, 10, 13, 9, 14, 7]), and their work has cov- ered a wide band of the complexity spectrum: Thus we have decidability results (e.g., [13]) as well as character- izations of simple knowledge representation systems that merely perform database retrieval [lo]. Most of the above results have been surprisingly neg- ative and have shown that even fairly restricted kinds of reasoning turns out to be intractable (for example, refer to [4, 71). Yet the human ability to understand language in real-time clearly suggests that there does exist a fairly rich class of reasoning that humans can perform effortlessly and within seconds. To appreciate the richness of such reasoning, consider the following sentence: “John seems to have suicidal tendencies, he has joined the Columbian drug enforcement agency”. 
Even though the reader would have understood the above sentence spontaneously, and perhaps within a few seconds, a careful analysis would reveal that doing so requires performing a number of fairly elaborate inferences. For convenience let us agree to refer to such spontaneous and fast reasoning as reflexive reasoning.¹

¹It is as if such reasoning is a reflex response of our cognitive apparatus. Clearly, all human reasoning is not reflexive.

SHASTRI AND AJJANAGADDE 563
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

In evaluating the complexity of reflexive reasoning it must be recognized that such reasoning is performed with reference to an extremely large body of knowledge. We believe that a conservative estimate of the number of 'rules' and 'facts' required to encode all relevant aspects of the domain of common sense will easily run into several million (perhaps even more).² In view of the above, it appears that the time complexity of an inference algorithm for reflexive reasoning should at worst be sublinear in |KB|, where |KB| is the size of the knowledge base, and perhaps even be independent of |KB|.

Reflexive reasoning introduces a very strong notion of effectiveness, one that seems formidable in view of the negative results cited above. In this paper we report an encouraging result. We identify a class of inference that is computable in time that is not only sublinear in |KB| but is also, in a sense, optimal. We describe a knowledge representation and reasoning system that can encode a restricted class of first-order sentences and answer a class of queries in time that is only proportional to the length of the shortest derivation of the query and is independent of |KB|. This work suggests that there exist interesting points in the tradeoff continuum between computational effectiveness and inferential/expressive power.
It also demonstrates that it is possible to arrive at extremely efficient yet fairly powerful knowledge representation systems by explicitly recognizing the symbiotic relationship that exists between expressiveness, effectiveness of inference, the choice of representation (data structures), and the underlying model of computation.

Functional Specification

The knowledge representation system encodes rules and facts of the following form:

∀x1, ..., xm [P1(...) ∧ P2(...) ∧ ... ∧ Pn(...) ⇒ ∃z1, ..., zl Q(...)]

The arguments of the Pi's are elements of {x1, x2, ..., xm}. An argument of Q is either an element of {x1, x2, ..., xm}, or an element of {z1, z2, ..., zl}, or a constant. It is required that any variable occurring in multiple argument positions in the antecedent of a rule must also appear in its consequent.

Facts are assumed to be partial or complete instantiations of predicates. Thus facts are atomic formulae of the form P(t1, t2, ..., tk) where the ti's are either constants or distinct existentially quantified variables. A query has the same form as a fact. A query all of whose arguments are bound to constants corresponds to a yes-no question. On the other hand, a query with existentially quantified variables corresponds to a wh-query, and answering such a query involves finding variable binding(s) for which the query follows from the rules and facts encoded in the system.

It can be shown that the knowledge representation system responds soundly provided a query satisfies the following conditions:³

1. The number of distinct constants specified in the query does not exceed Ω, where Ω is an implementation parameter (see Section 3).

2. Any rule that participates in the derivation of the query must obey the following constraint: any variable occurring in multiple argument positions in the antecedent of such a rule must get bound during the reasoning process (this, via backward chaining).

3. During the processing of the query, each predicate may only be instantiated with one set of argument bindings. This restriction, however, only applies to run-time or 'dynamic' instantiations of predicates and not to 'long-term' facts stored in the system. Recently, we have extended the system to allow it to represent up to k (where k is a system parameter) dynamic instantiations of each predicate during the processing of a query [11]. (This extension is not discussed in this paper.)

If the above conditions are met, the system answers yes to all yes-no queries that follow from the encoded rules and facts in time proportional to the length of the shortest derivation of the query. The system obeys the closed world assumption and produces a no answer in time proportional to d, where d equals the diameter of the inferential dependency graph associated with the rule base (see Section 3). wh-queries are also answered in time proportional to d. Finally, the space complexity of the system is just linear in |KB|.

Cognitive Significance

Reflexive reasoning takes place with reference to a large body of knowledge and, even though each reasoning episode may involve a large number of rules and facts, there is considerable psychological evidence to suggest that most cognitive tasks performed efficiently without props involve only a small number of distinct entities. A reasonable estimate of the maximum number of distinct entities we can deal with at a time is around seven [12]. It must be emphasized that the limit is on the number of distinct entities and not on the number of variable (i.e., role) bindings that these entities participate in during an episode of reasoning.

²Thorpe and Imbert [16] argue that even the number of visually identifiable entities is around a hundred thousand!
³The system is incomplete and the above conditions characterize its incompleteness.
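The rule format and the syntactic restriction of the Functional Specification can be captured in a few lines. The encoding below (predicate/argument tuples with '?'-prefixed variables) is a hypothetical notation chosen for illustration, not the paper's:

```python
from collections import Counter

def well_formed(antecedent, consequent):
    """antecedent: list of (pred, args); consequent: (pred, args).
    Enforces: a variable filling multiple antecedent argument positions
    must also appear in the consequent."""
    occ = Counter(a for _, args in antecedent for a in args if a.startswith("?"))
    repeated = {v for v, n in occ.items() if n > 1}
    return repeated <= set(consequent[1])

# forall x,y,z: give(x,y,z) => own(y,z)  -- admissible
ok = well_formed([("give", ["?x", "?y", "?z"])], ("own", ["?y", "?z"]))
# a repeated antecedent variable missing from the consequent is rejected
bad = well_formed([("likes", ["?x", "?x"])], ("popular", ["?y"]))
```

This mirrors the requirement stated above; the run-time counterpart (condition 2) additionally demands that such variables actually get bound during backward chaining.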
We believe that the first condition listed in the previous section is consistent with this aspect of human reasoning and a psychologically plausible value of Ω may be around seven. We also conjecture that any given episode of reflexive reasoning does not require the same predicate to be dynamically instantiated more than k times, where a psychologically plausible value of k may be as low as three to five (observe that k greater than 1 allows for bounded recursion). Thus the third limitation of the reasoning system is also well motivated.

In our view, the KB underlying reflexive reasoning primarily encodes an agent's long-term and stable knowledge about the world. Although new rules do get added to an agent's KB, the assimilation of a new rule in a form that allows its participation in reflexive reasoning takes time: not seconds, but perhaps minutes, days or even months. It is in this context that one must evaluate the realization of the reasoning system described below.

Finally, the realization of the reasoning system is also biologically plausible in that it strictly adheres to the core features of the connectionist model. At the same time, there exists neurophysiological evidence to suggest that the basic mechanisms used below, namely, the propagation of rhythmic patterns of activity, play a role in the representation and processing of information in the animal brain. For a detailed discussion, refer to [1].

Realization of the Reasoning System

The representation and reasoning system is realized as a massively parallel network of simple processing elements (nodes). A major technical problem that must be solved in realizing such a 'connectionist' reasoning system is the run-time (dynamic) creation and propagation of variable bindings. The proposed system solves this problem while still using extremely simple oscillatory nodes (see next section). Ballard [2] was the first to propose a massively parallel inference system.
He however, required that all possible variable bindings be explicitly pre-wired into a network. This requirement is too severe and unrealistic and greatly limits the generality of his system. Touretzky and Hinton's DCPS does represent dynamic bindings [17] but its ability to do so is very limited. First, DCPS can only deal with single variable rules. Second, even though DCPS is a parallel system, it allows only one rule to fire at a time, and hence, does not satisfy the efficiency requirements of reflexive reasoning. A detailed comparison of our system with other massively parallel reasoning systems (for example, ROBIN, CONPOSIT, and NETL [8, 3, 5]) may be found in [15].

Conceptually, the proposed encoding of the knowledge base amounts to creating a directed inferential dependency graph: each predicate argument is represented by a node in this graph and each rule is represented by links from nodes denoting the arguments of the consequent predicate to nodes denoting the arguments of the corresponding antecedent predicate. Facts are small networks attached to their respective predicate nodes. We describe the encoding with the help of an example. Fig. 1 illustrates the encoding of the following facts and rules:

1. ∀x, y, z [give(x, y, z) ⇒ own(y, z)]
2. ∀x, y [buy(x, y) ⇒ own(x, y)]
3. ∀x, y [own(x, y) ⇒ can-sell(x, y)]
4. give(John, Mary, Book1)
5. buy(John, x)
6. own(Mary, Ball1)

For simplicity we only describe the realization of single antecedent rules without constants and existentially quantified variables in the consequent. The encoding makes use of two types of nodes: p-btu nodes (depicted as circles) and τ-and nodes (depicted as pentagons). The computational behavior of these nodes is as follows. A p-btu node is a phase-sensitive binary threshold unit. When such a node becomes active, it produces an oscillatory output in the form of a pulse train that has a period π and pulse width ω.
The timing (or the phase) of the pulse train produced by a p-btu node is precisely governed by the phase of the input to the node. A τ-and node acts like a temporal and node. Such a node also oscillates with the same frequency as a p-btu node except that it becomes active only if it receives uninterrupted activation over a whole period of oscillation. Furthermore, the width of the pulses produced by a τ-and node equals π. The implementation parameter Ω that governs the maximum number of distinct entities that may participate in the derivation of a yes-no query equals π/ω (assume integer divide). The output pulse of a node propagates along every link emanating from the node.

The encoding also makes use of inhibitory modifiers. An inhibitory modifier is a link that impinges upon and inhibits another link. Thus a pulse propagating along an inhibitory modifier will block the propagation of a pulse propagating along the link it impinges upon. In Fig. 1, inhibitory modifiers are shown as links ending in dark blobs.

Each constant in the domain is encoded by a p-btu node. An n-ary predicate is encoded by a pair of τ-and nodes and n p-btu nodes, one for each of the n arguments. One of the τ-and nodes is referred to as the enabler and the other as the collector. As a matter of convention, an enabler always points upwards and is named e:[predicate-name]. A collector always points downwards and is named c:[predicate-name].

Figure 1: An example encoding of rules and facts

A rule is encoded by connecting the collector of the antecedent predicate to the collector of the consequent predicate, the enabler of the consequent predicate to the enabler of the antecedent predicate, and by connecting the argument nodes of the consequent predicate to the argument nodes of the antecedent predicate in accordance with the correspondence between these arguments specified in the rule (refer to Fig. 1).
A fact is encoded using a τ-and node that receives an input from the enabler of the associated predicate. This input is modified by inhibitory modifiers from the argument nodes of the associated predicate. If an argument is bound to a constant in the fact, then the modifier from such an argument node is in turn modified by an inhibitory modifier from the appropriate constant node. The output of the τ-and node is connected to the collector of the associated predicate (refer to the encoding of the facts give(John, Mary, Book1) and buy(John, x) in Fig. 1).

The number of nodes required to encode a knowledge base (i.e., the space complexity) is only linear in |KB|. Specifically, the number of nodes required is O(r + f + a + c), where r is the number of rules, f is the number of facts, a is the total number of predicate arguments and c is the number of constants in the domain. The number of links required is also only linear in |KB|. Specifically, the number of links required is O(rl + fl), where rl is the number of rules weighted by the number of predicate arguments occurring in each rule, and fl is the number of facts weighted by the number of arguments in the predicate associated with each fact.

Figure 2: Activation trace for the query can-sell(Mary, Book1)?

Inference Process

Reasoning in the proposed system is the transient but systematic flow of rhythmic patterns of activation, where each phase (or time-slice) in the rhythmic pattern corresponds to a distinct constant involved in the reasoning process and where variable bindings are represented as the in-phase (i.e., synchronous) firing of appropriate argument and constant nodes. A fact behaves as a temporal pattern matcher that becomes 'active' when it detects that the bindings corresponding to it are present in the system's pattern of activity.
Finally, rules are interconnection patterns that propagate and transform rhythmic patterns of activity. Below we describe the reasoning process in greater detail; complete details may be found in [15]. We first consider yes-no queries.

The inference process may be thought of as consisting of two overlapping but conceptually distinct stages. The first stage corresponds to a parallel breadth-first exploration of the potentially huge inferential dependency graph. During this stage, all the facts that are relevant to the proof of the query become active. In the second stage the actual proof is constructed: activation from the relevant facts flows downwards along collector nodes to produce an answer to the query. A yes answer corresponds to the activation of the collector node of the query predicate.

Posing a Query: Specifying Variable Bindings

Posing a query to the system involves specifying the query predicate and the argument bindings specified in the query. In the proposed system this is done by simply activating the relevant nodes in the manner described below. In particular, posing a query to the system does not involve any hidden time or space costs such as rewiring of the network, or addition of new nodes and links.

Let us choose an arbitrary point in time, say t0, as our point of reference for initiating the query. We assume that the system is in a quiescent state just prior to t0. The query predicate is specified by activating the enabler of the query predicate with a pulse train of width and periodicity π starting at time t0. The argument bindings specified in the query are communicated to the network as follows:

o Let the argument bindings in the query involve k distinct constants: c1, ..., ck. With each of these k constants, associate a delay δi such that no two delays are within ω of one another and the longest delay is less than π − ω.
Each of these delays may be viewed as a distinct phase within the period t0 to t0 + π.

• The argument bindings of a constant ci are indicated to the system by providing an oscillatory pulse train of pulse width ω and periodicity π, starting at t0 + δi, to ci and to all arguments to which ci is bound.

This is done for each constant ci (1 ≤ i ≤ k) and amounts to representing argument bindings by the in-phase or synchronous activation of the appropriate constant and argument nodes.

An Example

Once the query is posed, a parallel search for facts that are relevant to the proof of the query ensues. We illustrate this process with the help of an example (refer to Fig. 1). Consider the query can-sell(Mary, Book1). This query is posed by providing inputs to the constants Mary and Book1, the arguments p-seller and cs-obj, and the enabler e:can-sell as shown in Fig. 2. (In the illustration, π has been assumed to be 6.) Mary and p-seller receive in-phase activation, and so do Book1 and cs-obj. Let us refer to the phases of activation of Mary and Book1 as phase-1 and phase-2, respectively. As a result of these inputs, Mary and p-seller will fire synchronously in phase-1 of every period of oscillation, while Book1 and cs-obj will fire synchronously in phase-2 of every period of oscillation. The node e:can-sell will also oscillate and generate a pulse train of periodicity and pulse width π.

The activations from the arguments p-seller and cs-obj reach the arguments owner and o-obj of the predicate own, and consequently, starting with the second period of oscillation, owner and o-obj become active in phase-1 and phase-2, respectively. At the same time, the activation from e:can-sell activates e:own (refer to Fig. 2). The system has essentially created dynamic bindings for the arguments of the predicate own: Mary has been bound to the argument owner, and Book1 has been bound to the argument o-obj.
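The propagation just traced can be mimicked with a small script. This is a hedged sketch, not the network itself: the dictionaries, the link list, and the `fact_fires` helper are our own illustrative encoding of phases, argument-to-argument links, and τ-and matching:

```python
# Phase assignment for the query can-sell(Mary, Book1): each constant
# fires in its own phase, and the query's bound arguments share it.
phase = {"Mary": 1, "Book1": 2}
arg_phase = {"p-seller": 1, "cs-obj": 2}

# Argument-to-argument links induced by the rules (consequent -> antecedent).
links = [("p-seller", "owner"), ("cs-obj", "o-obj"),
         ("owner", "recip"), ("o-obj", "g-obj"),
         ("owner", "buyer"), ("o-obj", "b-obj")]

# Propagate phases breadth-first; each sweep stands in for one period
# of oscillation.
changed = True
while changed:
    changed = False
    for src, dst in links:
        if src in arg_phase and arg_phase.get(dst) != arg_phase[src]:
            arg_phase[dst] = arg_phase[src]
            changed = True

def fact_fires(bindings):
    """A fact's tau-and node stays active iff every constant the fact binds
    fires in the same phase as the argument it is bound to."""
    return all(arg_phase.get(arg) == phase.get(const)
               for arg, const in bindings.items())

# give(John, Mary, Book1): recip=Mary, g-obj=Book1 (giver unbound) -> matches
print(fact_fires({"recip": "Mary", "g-obj": "Book1"}))
# own(Mary, Ball1): o-obj bound to Ball1, which fires in no phase -> blocked
print(fact_fires({"owner": "Mary", "o-obj": "Ball1"}))
```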
These newly created bindings, in conjunction with the activation of e:own, can be thought of as encoding the query own(Mary, Book1) (i.e., 'Does Mary own Book1?'). The τ-and node associated with the fact own(Mary, Ball1) does not match the query and remains inactive. Observe that during phase-2, the activation from e:own going into the τ-and node is blocked by the inhibitory activation from the argument owner.

The activations from owner and o-obj reach the arguments recip and g-obj of give, and buyer and b-obj of buy, respectively. Thus beginning with the third period of oscillation, the arguments recip and buyer become active in phase-1, while the arguments g-obj and b-obj become active in phase-2. In essence, the system has created new bindings for the predicates give and buy that can be thought of as encoding two new queries: give(x, Mary, Book1) (i.e., 'Did someone give Mary Book1?') and buy(Mary, Book1) (i.e., 'Did Mary buy Book1?').

The τ-and node associated with the fact buy(John, x) does not become active, because the activation from e:buy is blocked by the inhibitory activations from the arguments buyer and b-obj. The τ-and node associated with the fact give(John, Mary, Book1) (this is the τ-and node labeled F1 in Fig. 1), however, does become active as a result of the uninterrupted activation from e:give; the inhibitory inputs from recip and g-obj are blocked by the in-phase inputs from Mary and Book1, respectively. The activation from this τ-and node causes c:give, the collector of give, to become active, and the output from c:give in turn causes c:own to become active and transmit an output to c:can-sell. Consequently, c:can-sell, the collector of the query predicate can-sell, becomes active, resulting in an affirmative answer to the query can-sell(Mary, Book1) (refer to Fig. 2).

Encoding Complex Rules

A rule with conjunctive predicates in the antecedent, i.e., a rule of the form

P1(...) ∧ P2(...) ∧ ... ∧ Pm(...) ⇒ Q(...),
is encoded using an additional τ-and node that has a threshold of m. The outputs of the collector nodes of P1, ..., Pm are connected to this node, which in turn is connected to the collector of Q. This additional node becomes active if and only if it receives inputs from the collector nodes of all the m antecedent predicates. The interconnections between the argument nodes of the antecedent and consequent predicates remain unchanged.

The encoding of rules and facts described in the previous section assumes that constants or existentially quantified variables do not appear in the consequent of a rule. It also assumes that the same variable does not occur in multiple argument positions in the consequent of a rule. The encoding of such rules can be carried out by very simple mechanisms that involve detecting whether appropriate nodes are firing in synchrony or not. A complete description may be found in [15].

Figure 3: Augmented encoding of a fact in order to support answer extraction.

Answering Wh-Queries

In this section, we discuss a way of extending the system to answer wh-queries. Consider the proof of the query can-sell(Mary, x) with respect to the network shown in Fig. 1. In the process of proving this query, the two relevant facts own(Mary, Ball1) and give(John, Mary, Book1) will become active. The answer to the wh-query 'What can Mary sell?' can be obtained by simply identifying the constants that are bound to the arguments o-obj and g-obj, respectively, of the two active facts. This is not a coincidence: these are precisely the arguments that map to the unbound argument cs-obj of can-sell via the rules encoded in the system. The system can easily extract this information by making use of the same binding propagation mechanism it uses to map arguments bound in the query.
A straightforward way of doing so is to posit a separate answer-extraction stage that is carried out after the yes-no query associated with the wh-query has produced a yes answer. For example, given the query 'What can Mary sell?', the system first computes the answer to the yes-no query 'Can Mary sell something?' and identifies the facts own(Mary, Ball1) and give(John, Mary, Book1) that lead to a yes answer. The answer-extraction stage follows and picks out the constants Ball1 and Book1 as the answers.

The representation of a fact is augmented as shown in Fig. 3 in order to support answer extraction. The representation of a fact involving an n-ary predicate is modified to include n + 1 additional nodes: for each of the n arguments of the associated predicate there exists a ρ-btu node with a threshold of two. For convenience we will refer to such a node as a binder node. The other node (shown as a filled-in pentagon) is like a binder node, except that once active, it remains so, even after the inputs are withdrawn. This node, which we will refer to as a latch node, receives an Answer input in addition to an input from the τ-and node of the associated fact.

At the end of the first stage, the outputs of the τ-and nodes of all the relevant facts will be active. The output of these τ-and nodes in conjunction with the Answer signal will turn on the associated latch nodes and provide one of the two inputs to the binder nodes. If the associated yes-no query results in a yes answer, the answer-extraction stage is initiated. Inputs relating to the first stage are withdrawn, and the relevant unbound argument ai of the query predicate is activated in a distinct phase. In addition, a network-wide Answer signal is propagated. The activation of unbound query arguments results in a phase-sensitive propagation of activation and eventually leads to the activation of arguments associated with facts relevant to the query.
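The net effect of the extraction stage can be sketched in a few lines (the set-based encoding is ours; the paper's mechanism is the latch and binder circuitry described above): the latched facts echo whatever constants sit at the arguments reached from the unbound query argument.

```python
# After the yes-no pass for can-sell(Mary, x), the two relevant facts are
# latched; we record only the bindings their latched arguments carry.
latched = {"own(Mary,Ball1)": {"o-obj": "Ball1"},
           "give(John,Mary,Book1)": {"g-obj": "Book1"}}

# Arguments reached by phase-sensitive propagation from the unbound
# argument cs-obj of the query predicate (via the encoded rules).
reached = {"cs-obj", "o-obj", "g-obj", "b-obj"}

# Binder nodes fire where a latched fact binds a constant at a reached
# argument; those constants fire in-phase with the query argument.
answers = {const for bindings in latched.values()
           for arg, const in bindings.items() if arg in reached}
print(sorted(answers))
```

This reproduces the example's outcome: Ball1 and Book1 emerge as the answers to 'What can Mary sell?'.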
The activation of these arguments provides an input to the appropriate binder nodes of these facts. As the binder nodes were already receiving an input from a latch node, they become active and produce a phase-sensitive output that in turn activates the associated constants in-phase with ai. The answer to the wh-query will be precisely those constants that are active in-phase with ai. The time taken by the answer-extraction step is bounded by the depth of the inferential dependency graph.

Extensions

The reasoning and expressive power of the system described in this paper can be enhanced by interfacing it with specialized reasoning modules such as a semantic network or an IS-A hierarchy. Such an interface allows terms in the rules and facts to be any concept (type/instance) in the IS-A hierarchy. Another important extension enhances the expressiveness and reasoning power of the system by allowing a limited use of function terms in rules [1].

Conclusion

The paper describes a knowledge representation and reasoning system that performs a limited but interesting class of inferences over a restricted class of first-order sentences with optimal efficiency. This work suggests that extremely efficient yet fairly powerful knowledge representation systems can be obtained by limiting the expressiveness and the inferential ability of a representation and reasoning system in unusual ways.

References

[1] V. Ajjanagadde. Reasoning with function symbols in a connectionist network. In Proceedings of the Cognitive Science Conference, 1990. Submitted for publication.
[2] D.H. Ballard. Parallel logic inference and energy minimization. In Proceedings of AAAI-86, Philadelphia, PA, July 1986.
[3] J. Barnden. Neural-net implementation of complex symbol processing in a mental model approach to syllogistic reasoning. In Proceedings of IJCAI-89, Detroit, MI, August 1989.
[4] R. Brachman and H. Levesque. The tractability of subsumption in frame-based description languages. In Proceedings of the 1984 Meeting of AAAI, pages 34-37, Austin, TX, 1984.
[5] S. Fahlman. NETL: A System for Representing Real-World Knowledge. MIT Press, Cambridge, MA, 1979.
[6] A.M. Frisch and J.F. Allen. Knowledge retrieval as limited inference. In D.W. Loveland, editor, Lecture Notes in Computer Science: Sixth Conference on Automated Deduction, Springer-Verlag, New York, 1982.
[7] H.A. Kautz and B. Selman. Hard problems for simple default logics. Artificial Intelligence, 1989. Submitted.
[8] T.E. Lange and M.G. Dyer. High-level inferencing in a connectionist network. Connection Science, 1(2):181-217, 1989.
[9] H.J. Levesque. Logic and the complexity of reasoning. Journal of Philosophical Logic, 17, 1988.
[10] H.J. Levesque. Making believers out of computers. Artificial Intelligence, 30(1), 1986.
[11] D.R. Mani and L. Shastri. Representing Multiple Dynamic Instantiations of a Predicate in a Connectionist System. Technical Report, Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, 1990. To appear.
[12] G.A. Miller. The magical number seven, plus or minus two: some limits on our capacity for processing information. The Psychological Review, 63(2):81-97, March 1956.
[13] P. Patel-Schneider. A decidable first-order logic for knowledge representation. In Proceedings of IJCAI-85, pages 455-458, Los Angeles, CA, 1985.
[14] L. Shastri. Semantic Networks: An Evidential Formulation and its Connectionist Realization. Pitman/Morgan Kaufmann, London/Los Altos, 1988.
[15] L. Shastri and V. Ajjanagadde. From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings. Technical Report MS-CIS-90-05, Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, January 1990.
[16] S.J. Thorpe and M. Imbert. Biological constraints on connectionist modeling. In R. Pfeiffer, editor, Connectionism in Perspective, Springer, 1988.
[17] D. Touretzky and G.E. Hinton. A distributed connectionist production system. Cognitive Science, 12(3):423-466, 1988.
It's Not My Default: The Complexity of Membership Problems in Restricted Propositional Default Logics

Jonathan Stillman
Artificial Intelligence Program
General Electric Research and Development Center
P.O. Box 8, Schenectady, N.Y. 12301
e-mail: stillman@crd.ge.com

Abstract

We investigate the computational complexity of membership problems in a number of propositional default logics. We introduce a hierarchy of classes of propositional default rules that extends that described in [Kautz and Selman 1989], and characterize the complexity of membership problems in these classes under various simplifying assumptions about the underlying propositional theory. Our work significantly extends both that presented in [Kautz and Selman 1989] and in [Stillman 1990a].

Introduction

One of the central concerns of artificial intelligence research involves developing useful models of how one might emulate on computers the 'common-sense' reasoning in the presence of incomplete information that people do as a matter of course. Traditional predicate logics, developed for reasoning about mathematics, are inadequate as a formal framework for such research in that they are inherently monotonic: if one can derive a conclusion from a set of formulae, then that same conclusion can also be derived from every superset of those formulae. It is argued that people simply don't reason this way: we are constantly making assumptions about the world and revising those assumptions as we obtain more information (see [McCarthy 1977] or [Minsky 1975], for instance). Many researchers have proposed modifications of traditional logic to model the ability to revise conclusions in the presence of additional information (see, for instance, [McCarthy 1986], [Moore 1983], [Poole 1986]). Such logics are called nonmonotonic. Informally, the common idea in all these approaches is that one may want to be able to "jump to conclusions" that might have to be retracted later.
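A toy illustration of that idea (our example, not from the paper): a conclusion drawn by default is withdrawn when new information arrives, so the set of conclusions does not grow monotonically with the set of facts.

```python
# Nonmonotonicity in miniature: the default "birds fly" is applied only
# when no exception (here, being a penguin) is known.
def conclusions(facts):
    beliefs = set(facts)
    if "bird" in beliefs and "penguin" not in beliefs:
        beliefs.add("flies")            # jump to the default conclusion
    return beliefs

print(conclusions({"bird"}))            # "flies" is concluded by default
print(conclusions({"bird", "penguin"})) # more facts, fewer conclusions
```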
While a detailed discussion of nonmonotonic logics is outside the scope of this paper, a good introduction to the topic can be found in [Etherington 1988], and a number of the most important papers in the field have been collected in [Ginsberg 1987].

One of the most prominent of the formal approaches to nonmonotonic reasoning, developed by Reiter ([Reiter 1980]), is based on default rules, which are used to model decisions made in prototypical situations when specific or complete information is lacking. Reiter's default logic is an extension of first-order logic that allows the specification of default rules, which we will summarize shortly. Unfortunately, the decision problem for Reiter's default logic is highly intractable: it relies heavily on consistency checking for processing default rules, and is thus not even semi-decidable (this is not a weakness of Reiter's logic alone; it is common to most nonmonotonic logics). This precludes the practical use of Reiter's default logic in most situations.

The motivation for searching for computationally tractable inference mechanisms for subclasses of propositional default reasoning is based on the need to reason about relatively large propositional knowledge bases in which the default structures may be quite simple. Recent research involving inheritance networks with exceptions is particularly relevant, and is explored in depth in [Touretzky 1986] and in Chapter 4 of [Etherington 1988], where the close relationship between default logic and inheritance networks with exceptions is examined.

In order to gain computational tractability of reasoning in default logic, one must restrict the expressiveness considerably. Simply restricting the logic to reasoning about arbitrary propositions results in decision problems that are at least as hard as deciding standard propositional logic, regardless of restrictions on the types of default rules allowed.
Since the satisfiability problem is intractable for propositional logic, one must consider further restrictions. Recently, Kautz and Selman [Kautz and Selman 1989] and Stillman [Stillman 1990a] have investigated default logics defined over subsets of propositional calculus with various restrictions on the syntactic form of the default rules allowed. A partial order of such restrictions is described in [Kautz and Selman 1989], together with a discussion of the complexity of several problems over this partial order when the propositional theory is restricted to a set of literals. Several of these restrictions were shown to result in polynomial-time tests for determining whether certain properties hold given such a restricted propositional theory. In particular, it was shown that one can decide in polynomial time whether there exists an extension that contains a given literal when the default rules are restricted to a class they called Horn default rules. They suggested that the ability to combine such default theories with non-default propositional Horn theories would be particularly useful, but left open the question of whether the membership problem (i.e., determining whether there exists an extension of a given default theory containing a specified literal) for such a combination of theories is tractable. In [Stillman 1990a], we showed that a restriction of this problem is NP-complete, and presented several related results.

STILLMAN 571
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The remainder of this paper is organized as follows: we begin with a brief description of Reiter's default logic, followed by a short overview of NP-completeness and a presentation of the restrictions considered by Kautz and Selman. Following this we introduce a hierarchy of classes of propositional default rules that significantly extends that presented in [Kautz and Selman 1989].
Next, we characterize the complexity of the membership problem for these classes. Finally, we summarize the results presented in this paper, and discuss related results and future work.

Preliminaries

Reiter's Default Logic

For a detailed discussion of Reiter's default logic the interested reader is referred to [Reiter 1980]. In this section we simply review some of the immediately pertinent ideas. A default theory is a pair (D, W), where W is a set of closed well-formed formulae (wffs) in a first-order language and D is a set of default rules. A default rule consists of a triple ⟨α, β, γ⟩: α is a formula called the prerequisite, β is a set of formulae called the justifications, and γ is a formula called the conclusion. Informally, a default rule denotes the statement "if the prerequisite is true, and the justifications are consistent with what is believed, then one may infer the conclusion." Default rules are written

α : β
-----
  γ

If the conclusion of a default rule occurs in the justifications, the default rule is said to be semi-normal; if the conclusion is identical to the justifications, the rule is said to be normal. A default rule is closed if it does not have any free occurrences of variables, and a default theory is closed if all of its rules are closed.

The maximally consistent sets of conclusions that can follow from a default theory are called extensions. An extension can be thought of informally as one way of "filling in the gaps about the world." Formally, an extension E of a closed default theory is defined as a fixpoint of an operator Γ, where Γ(T) is the smallest set satisfying:

• W ⊆ Γ(T),
• Γ(T) is deductively closed,
• for each default d ∈ D, if the prerequisite of d is in Γ(T), and T does not contain the negation of any of the justifications of d, then the conclusion of d is in Γ(T).

Thus E is an extension of (D, W) just in case Γ(E) = E. Since the operator Γ is not necessarily monotonic, a default theory may not have any extensions.
Normal default theories do not suffer from this, however (see [Reiter 1980]), and always have at least one extension.

There are several important properties that may hold for a default theory. Given a default theory (D, W), perhaps together with a literal q, one might want to determine the following about its extensions:

Existence: Does there exist any extension of (D, W)?

Membership: Does there exist an extension of (D, W) that contains q? (This is called goal-directed reasoning by Kautz and Selman.)

Entailment: Does every extension of (D, W) contain q? (This is closely related to skeptical reasoning, where a literal is believed if and only if it is included in all extensions.)

NP-complete Problems

NP is defined to be the class of languages accepted by a nondeterministic Turing machine in time polynomial in the size of the input string. The "hardest" languages¹ in NP are called NP-complete: all such languages share the property that all languages in NP can be transformed into them via some polynomial-time transformation. To show that a problem in NP is NP-complete, one must demonstrate a polynomial-time transformation of an instance of a known NP-complete problem to an instance of the problem under consideration, in such a way that a solution to one indicates a solution to the other. For a thorough discussion of the topic the interested reader is referred to [Garey and Johnson 1979]. The fastest known deterministic algorithms for NP-complete problems take time exponential in the problem size. It is not known whether this is necessary: one of the central open problems in computer science is whether P = NP. Most researchers believe that P ≠ NP, and that NP-complete problems really do need exponential time to solve. Thus these problems are considered intractable, since if P ≠ NP, we cannot hope to solve arbitrary instances of them with inputs of nontrivial size.
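To make the definitions above concrete, here is a brute-force sketch (exponential, for illustration only; the tuple encoding of defaults is ours) of extensions and the Membership question for normal, disjunction-free theories over literals:

```python
from itertools import permutations

def neg(l):
    """Negate a literal encoded as a string ('a' <-> '-a')."""
    return l[1:] if l.startswith("-") else "-" + l

def extensions(W, D):
    """For normal, disjunction-free theories over literals, extensions can
    be enumerated by trying every greedy application order of the defaults
    (pre, just, concl) and keeping only the fixpoints."""
    exts = set()
    for order in permutations(D):
        E = set(W)
        for pre, just, concl in order:
            if (pre is None or pre in E) and neg(just) not in E:
                E.add(concl)
        # fixpoint check: no default remains applicable
        if all(not ((p is None or p in E) and neg(j) not in E and c not in E)
               for p, j, c in D):
            exts.add(frozenset(E))
    return exts

# Empty W; two conflicting prerequisite-free normal defaults :a/a and :-a/-a
D = [(None, "a", "a"), (None, "-a", "-a")]
exts = extensions(set(), D)
print(len(exts))                      # two extensions: {a} and {-a}
print(any("a" in E for E in exts))    # Membership for q = a: yes
```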
Restricted Default Theories

If practical reasoning systems are to be developed, one cannot ignore computational complexity. Each of the questions mentioned above is at least as hard as deciding the underlying theory W. Thus, if W consists of arbitrary first-order formulae, none of these questions is even semi-decidable, and a practical system must consider stronger restrictions. If W is restricted to arbitrary propositional formulae, each of the questions requires deterministic time proportional to that needed to determine propositional satisfiability (approximately 2^n, where n is the number of atoms occurring in W, using the best algorithms currently known). It is unlikely that algorithms that perform significantly better will be developed in the future, under the assumption that P ≠ NP. Thus, a necessary condition that must be satisfied to guarantee efficient answers to the questions posed above is that we limit ourselves to even stronger restrictions on W. The propositional theories we will consider are described below.

Propositional literals: W consists of propositional atoms and their negations. In [Kautz and Selman 1989], this restriction is assumed throughout.

Horn clauses: W consists of a conjunction of propositional clauses, each of which contains at most one positive literal.

2-literal clauses: W consists of a conjunction of propositional clauses, each of which contains at most 2 literals. This restriction is assumed in network default theories, an important class of default theories described in detail in [Etherington 1988].

Each of these restricted propositional theories is known to be decidable in linear time. The first case is trivial.

¹NP-completeness is often discussed in terms of decision problems rather than languages, although the two are interchangeable.
For the second and third, see [Dowling & Gallier 1984] and [Aspvall, Plass, & Tarjan 1979], respectively. These theories provide us with a good starting point for building simple default theories. Note that while the first restriction forms a subset of each of the others, the second and third are incomparable with respect to the formulae they contain.

In subsequent sections we will examine the complexity of reasoning in a number of restricted default theories. We will consider default theories for which W falls into one of the three subclasses of propositional formulae presented above. For each of these, we will consider a number of restrictions on what classes of default rules are allowed. These restrictions are discussed below.

Prior Work on Restricted Default Theories

In [Kautz and Selman 1989], Kautz and Selman presented a taxonomy of propositional default theories. They restricted W to contain only propositional literals, and restricted default rules to be semi-normal, with the prerequisite, justifications, and conclusion of each default rule consisting of conjunctions of literals (this restriction makes consistency checking a simple task). They also considered the following further restrictions on the default rules allowed.

Unary: The prerequisite of each default must be a positive literal, and the conclusion must be a literal. If the consequence is positive, the justification must be the conjunction of the consequence and a single negative literal; otherwise, the justification must be the consequence.

Disjunction-Free Ordered: The interested reader is referred to [Etherington 1987] for a formal definition of ordered default theories, which we omit here. Intuitively, in an ordered semi-normal default theory the literals can be ordered in such a way that potentially unresolvable circular dependencies cannot occur.
Intuitively, in an ordered semi-normal de- fault theory the literals can be ordered in such a way Ordered Unary These combine the restrictions of the first two theories described above. Kautz and Selman remark that these theories appear to be the simplest nec- essary to represent inheritance hierarchies with excep- tions (see [Touretzky 1986; Etherington 19881). Disjunction-Free Normal These are disjunction-free or- dered theories in which the consequence of each default rule is identical to the justification. orn The prerequisite literals in these default rules must each be positive, and the justification and consequence are each a single literal. Normal Wary The prerequisite in each of these default rules consists of a single positive literal, the conclusion must be a literal, and the justification must be identical to the consequence. These form the most simple class of default rule that is considered in [Kautz and Selman 19891. These restricted theories are related in a partial order as shown in Figure 1 below. Kautz and Selman examined the extension existence, membership, and entailment questions for these theories in [Kautz and Selman 19891. . . . w A DF-Ordered Unary Normal Unary Figure 1: Kautz and Selman’s hierarchy of restricted de- fault theories. Prompted by a gap in the characterization of restricted default theories, we showed recently in [Stillman 199Oal that the following problem is NP-complete. Morn Clauses with Normal Unary Default Rules (HC- NW Instance: A finite set H of propositional Horn clauses, to- gether with a finite set D of normal, unary, propositional default rules, and a distinguished literal 4. STILLMAN 573 Question: Does contains 4 ? there exist an extension of (D, H) that These restricted theories are related in a partial order. The hierarchy is shown in Figure 2. 
This result subsumed an open question cited in [Kautz and Selman 19891: Kautz and Selman were interested in whether one could add Horn default rules to Horn proposi- tional theories without introducing intractability. Unfortu- nately, our result answers this question negatively. Among other related results, we showed that the entailment prob- lem is co-NE-complete for these default theories. We subsequently examined even stronger restrictions on the classes of default rules allowed, hoping to find a class of rules that could be combined with Horn clauses while retaining the tractability of propositional Horn clause rea- soning. We also examined the complexity of restricted default reasoning under other restrictions on the proposi- tional theories allowed, as described above. In the follow- ing sections, we report on the results of this work. Expanding the Horizons Our investigation suggested a richer hierarchy of default rules, most of which result from disallowing any prerequi- sites in rules. This corresponds to introducing a “context- free” element to the reasoning, and seems to constitute the most simple type of default rule that is not completely trivial. In this section, we explore the complexity of mem- bership problems in default theories in which W belongs to one of the classes of formulae listed above, and in which D belongs either to one of the classes of default rules dis- cussed above or to one of the following: Prerequisite-Free Disjunction-free default rules with no prerequisites. Prerequisite-Free Unary The prerequisite of each rule is empty, and the conclusion must be a literal. If the con- sequence is positive, the justification must be the con- junction of the consequence and a single negative literal; otherwise, the justification must be the consequence. 
Prerequisite-Free Ordered Again, the reader is referred to [Etherington 19881 for a formal definition of or- dered theories; A prerequisite-free ordered theories is a disjunction-free ordered theory in which the prerequi- site is empty. Prerequisite-Free Ordered Unary These combine the restrictions of the first two theories described above. Prerequisite-Free Normal These are prerequisite-free or- dered theories in which the consequence of each default rule is identical to the justification. Prerequisite-Free Normal Unary The prerequisite in each of these default rules is empty, the conclusion must be a literal, and the justification must be identical to the consequence. Prerequisite-Free Positive Normal Wary The prereq- uisite in each of these default rules is empty, the con- clusion must be a positive literal, and the justification must be identical to the consequence. Worn Clause Theories After showing that the problem HC-N?J was NE-complete, we looked for even tighter restrictions on the default rules allowed that would provide us with tractable default rea- soning where the propositional theory consisted of Horn clauses. The results reported here were somewhat surpris- ing. Unfortunately, they are also largely negative. The membership problem remains intractable under very tight restrictions. In particular, for the following problem Horn Clauses with Prerequisite-Free Positive Normal Unary Default Rules (HC-2) Instance: A finite set H of propositional Horn clauses, together with a finite set D of prerequisite-free positive normal, unary, propositional default rules, and a distin- guished literal 4. Question: Does there exist an extension of (D, H) that contains 4 ? we prove: Theorem 1 He-2 is NP-complete. 
Proof: It is not difficult to demonstrate membership in NP: although the extension may be too large to describe explicitly, it suffices to provide the original set of Horn clauses together with those default rules that were applied, and to verify that the default rules form a maximal set and can actually be applied consistently. Since these are disjunction-free, this can be done efficiently.

To demonstrate NP-hardness we transform an instance of NOT-ALL-EQUAL SATISFIABILITY to one of HC-2. NOT-ALL-EQUAL SATISFIABILITY can be stated as follows. Given sets S1, S2, ..., Sm, each having 3 members, can the members be colored with two colors so that no set is all one color? In [Schaefer 1978] it is shown that NOT-ALL-EQUAL SATISFIABILITY is NP-complete.

Given an instance I of NOT-ALL-EQUAL SATISFIABILITY, let Σ be the set of all elements appearing in any Si. For each such element σ, introduce a new propositional atom u_σ, and add the following default rule to D:

: u_σ
-----
 u_σ

Next, for each set Si = {σi1, σi2, σi3} in I introduce a new propositional atom s_i, and add the following clauses to W:

(¬u_σi1 ∨ ¬u_σi2 ∨ ¬u_σi3)
(¬u_σi1 ∨ s_i)
(¬u_σi2 ∨ s_i)
(¬u_σi3 ∨ s_i)

Finally, introduce a new propositional atom q and add the following clause to W:

(¬s_1 ∨ ¬s_2 ∨ ... ∨ ¬s_m ∨ q).
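The construction just described is mechanical enough to write out. A sketch (the string encoding of atoms, clauses, and defaults is ours) that builds W and D from the NAE-SAT sets:

```python
# Build the HC-2 instance from NAE-SAT sets. Literals are strings; a
# leading "-" marks negation. Defaults are prerequisite-free positive
# normal unary, so each is fully described by its conclusion u_e.
def transform(sets):
    elements = sorted({e for S in sets for e in S})
    D = [f"u_{e}" for e in elements]        # one default :u_e / u_e per element
    W = []
    for i, S in enumerate(sets, 1):
        e1, e2, e3 = S
        W.append([f"-u_{e1}", f"-u_{e2}", f"-u_{e3}"])  # not all three true
        for e in S:
            W.append([f"-u_{e}", f"s{i}"])              # any true element -> s_i
    W.append([f"-s{i}" for i in range(1, len(sets) + 1)] + ["q"])
    return W, D

W, D = transform([("a", "b", "c")])
print(len(W), len(D))   # a single set yields 5 Horn clauses and 3 defaults
```

The blow-up is visibly linear, as the proof requires, and each generated clause has at most one positive literal.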
Let us assume that the two colors correspond to the truth values true and false. There must exist a satisfying assignment to the elements of Σ in which a maximal number of the elements of Σ are colored true. We must show that we can, given such a maximal satisfying assignment α for I, construct an extension of (D, W) that contains q. We proceed as follows. Each of the sets in I must have had at least one of its elements assigned the value true. For each such element, assign the corresponding atom in the instance of HC-2 the value true. This can be done using the default rules that were added for each of the set elements. It is not hard to see that this can always be done consistently: the three-element clauses introduced into W will not be contradicted, since they correspond to at least one of the elements of each set being assigned the value false. We know that this can be done because we are given a solution to I. Since the assignment in I is maximal as described above, no other set elements can be made true without forcing at least one of the sets to have all its elements take the same value. Thus, none of the remaining default rules can be applied. Since each set has at least one of its members assigned the value true, each of the propositional atoms S_i is true in the extension we are constructing. Thus, due to the clause (¬S_1 ∨ ¬S_2 ∨ ... ∨ ¬S_m ∨ q) in W, the extension must contain the literal q. At this point it is easy to see that an extension containing q exists. (⇐). Suppose there exists an extension of (D, W) that contains q. We note that since it only contains non-unit Horn clauses, W is easily seen to be consistent. Thus (D, W) has only coherent extensions. It follows that each of the literals of the form S_i, 1 ≤ i ≤ m, must be true (this is the only way to force q to be true). Furthermore, it follows that for each such literal S_i, at least one of the literals in the set {u_i1, u_i2, u_i3} must be true.
The clause in W of the form (¬u_i1 ∨ ¬u_i2 ∨ ¬u_i3) forces at least one of these to be false as well. This provides us with at least one element of each set S_i, 1 ≤ i ≤ m, that is true, and at least one that is false. Given this, it is easy to construct a satisfying assignment for the instance I of NOT-ALL-EQUAL SATISFIABILITY. □
The implications of this result on the hierarchy above are summarized in Figure 3 below.
2-Literal Clauses
A second interesting subclass of propositional formulae is 2-literal clauses. The classes formed by combining theories consisting of 2-literal clauses with restricted default theories are assumed in network default theories, described in [Etherington 1988]. We have investigated the complexity of membership problems for this class given the hierarchy of restrictions on D shown above.
Figure 3: The complexity of membership problems with Horn theories.
For the problem
2-Literal Prerequisite-Free Normal
Instance: A finite set W of propositional 2-literal clauses, together with a finite set D of prerequisite-free normal propositional default rules, and a distinguished literal q.
Question: Does there exist an extension of (D, W) that contains q?
we have the following theorem:
Theorem 2 2-Literal Prerequisite-Free Normal can be solved in polynomial time.
We present an O(n^3) algorithm deciding the membership problem for this class in [Stillman 1990b]. The basic idea is to exploit the structural property of 2-literal clauses that they resemble binary relations. As a result, we can effectively compute an implicational "closure" of the underlying propositional theory. Once this is done, it is relatively easy to determine whether there is a default rule that can be used to force q to be included in the extension. For the problem
2-Literal Normal Unary
Instance: A finite set W of propositional 2-literal clauses, together with a finite set D of normal unary propositional default rules, and a distinguished literal q.
Question: Does there exist an extension of (D, W) that contains q?
we prove the following:
Theorem 3 2-Literal Normal Unary is NP-complete.
The proof is complex, and space restrictions do not allow its inclusion herein. A complete proof is available in [Stillman 1990b]. For the problem
2-Literal Prerequisite-Free Ordered Unary
Instance: A finite set W of propositional 2-literal clauses, together with a finite set D of prerequisite-free ordered unary propositional default rules, and a distinguished literal q.
Question: Does there exist an extension of (D, W) that contains q?
we have
Theorem 4 2-Literal Prerequisite-Free Ordered Unary is NP-complete.
This follows from the proof of Theorem 5 below. The proof is complex and thus omitted. These results are summarized in Figure 4 below.
Figure 4: The complexity of membership problems with 2-literal theories.
Single Literal Theories
As mentioned above, this is the class that was investigated in [Kautz and Selman 1989]. The complexity of reasoning in the theories they considered is described in [Kautz and Selman 1989]; their results, together with ours, are illustrated in Figure 5. Since these theories are contained in both of those considered above, problems easy for them are also easy for these. The new result we present for these theories is given below:
Figure 5: The complexity of membership problems with single literal theories.
Single Literal Prerequisite-Free Ordered Unary
Instance: A finite set W of propositional single literal clauses, together with a finite set D of prerequisite-free ordered unary propositional default rules, and a distinguished literal q.
Question: Does there exist an extension of (D, W) that contains q?
Theorem 5 Single Literal Prerequisite-Free Ordered Unary is NP-complete.
The complete proof appears in the full version of this paper. These results are summarized in Figure 5 below.
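Returning to Theorem 1, the transformation used in its proof is purely mechanical, and easy to sketch in code. The following Python fragment (the data representation and all names are our own, chosen for illustration) builds the defaults D and clauses W of an HC-2 instance from the sets S_i of a NOT-ALL-EQUAL SATISFIABILITY instance:

```python
def nae_sat_to_hc2(sets):
    """Build the HC-2 instance (D, W) from a NOT-ALL-EQUAL SAT
    instance given as a list of 3-element sets.  Clauses are lists
    of literals; '-x' denotes the negation of atom x."""
    elements = sorted({e for s in sets for e in s})
    # One prerequisite-free positive normal unary default :e / e per
    # element e; applying it corresponds to coloring e "true".
    D = [("", e, e) for e in elements]  # (prerequisite, justification, consequent)
    W = []
    for i, s in enumerate(sets, 1):
        a, b, c = sorted(s)
        W.append([f"-{a}", f"-{b}", f"-{c}"])   # not all three elements true
        for e in (a, b, c):
            W.append([f"-{e}", f"S{i}"])        # any true element forces S_i
    # q holds once every S_i holds.
    W.append([f"-S{i}" for i in range(1, len(sets) + 1)] + ["q"])
    return D, W

D, W = nae_sat_to_hc2([{"x", "y", "z"}, {"w", "x", "y"}])
```

As the proof requires, the output is linear in the size of the input (one default per element, four clauses per set, plus the final q clause), and every clause has at most one positive literal, i.e., is Horn.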
Conclusions and Future Research
We have presented a number of results that characterize the complexity of the membership problem for restricted default theories. This work significantly extends that presented in [Kautz and Selman 1989] and [Stillman 1990a]. Our work considers very tight restrictions on the expressiveness of default rules as well as the underlying propositional theory. Unfortunately, our results show that even under these restrictions, membership problems almost invariably remain intractable. This suggests that if practical default reasoning systems are desired, one must either consider extremely restricted expressiveness or work to identify subcases of otherwise intractable classes that yield feasible complexity. A number of related results pertaining to the complexity of extension existence and entailment over the classes we have considered can be answered easily given minor modifications of the proofs of the complexities of membership problems. These results are presented in the full version of this paper ([Stillman 1990b]).
Acknowledgements
The author is indebted to Deepak Kapur and Robert Mattheyses for several helpful discussions concerning this work.
References
Aspvall, B., Plass, M.F., and Tarjan, R.E. 1979. A linear-time algorithm for testing the truth of certain quantified Boolean formulas. Information Processing Letters 8(3).
Dowling, W.F., and Gallier, J.H. 1984. Linear time algorithms for testing the satisfiability of propositional Horn formulas. Journal of Logic Programming, 3:267-284.
Etherington, D.W. 1987. Formalizing Non-Monotonic Reasoning Systems. Artificial Intelligence 31:41-85.
Etherington, D.W. 1988. Reasoning with Incomplete Information. Pitman, London.
Garey, M.R., and Johnson, D.S. 1979. Computers and Intractability. W.H. Freeman, New York.
Ginsberg, M.L., editor. 1987. Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
Kautz, H.A., and Selman, B. 1989.
Hard problems for simple default logics. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, 189-197, Toronto, Ontario, Canada.
McCarthy, J. 1977. Epistemological problems of artificial intelligence. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1038-1044. International Joint Conferences on Artificial Intelligence, Inc.
McCarthy, J. 1986. Applications of circumscription to formalizing commonsense knowledge. Artificial Intelligence, 28:89-116.
Minsky, M. 1975. A framework for representing knowledge. In Patrick Winston, editor, The Psychology of Computer Vision, pages 211-277. McGraw-Hill, New York.
Moore, R.C. 1983. Semantical considerations on non-monotonic logic. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 272-279, Karlsruhe, West Germany. International Joint Conferences on Artificial Intelligence, Inc.
Poole, D.L. 1986. Default reasoning and diagnosis as theory formation. Technical Report CS-86-08, Dept. of Computer Science, University of Waterloo.
Reiter, R. 1980. A logic for default reasoning. Artificial Intelligence, 13:81-132.
Shaefer, T.J. 1978. The Complexity of Satisfiability Problems. In Proceedings of the 10th Annual ACM Symp. on Theory of Computing, Association for Computing Machinery, New York.
Stillman, J.P. 1990a. The Complexity of Horn Theories with Normal Unary Defaults. In Proceedings of the 8th Canadian Artificial Intelligence Conference (CSCSI/SCEIO-90), Ottawa, Canada.
Stillman, J.P. 1990b. The Complexity of Reasoning in Restricted Propositional Default Logics. In preparation.
Touretzky, D.S. 1986. The Mathematics of Inheritance Systems. Pitman, London.
Some Applications of Graph Bandwidth to Constraint Satisfaction Problems
Ramin Zabih
Computer Science Department
Stanford University
Stanford, California 94305
Abstract
Bandwidth is a fundamental concept in graph theory which has some surprising applications to a class of AI search problems. Graph bandwidth provides a link between the syntactic structure of a constraint satisfaction problem (CSP) and the complexity of the underlying search task. Bandwidth can be used to define a new class of easy CSP's, namely those that have limited constraint graph bandwidth. These CSP's can be solved in polynomial time, essentially by divide and conquer. This in turn suggests that bandwidth provides a mathematical measure of the decomposability of a search problem. In addition, bandwidth supplies a measure for comparing different search orderings for a given CSP. Statistical analysis suggests that backtracking with small bandwidth orderings leads to a more efficient search than that obtained under orderings with larger bandwidths. Small bandwidth orderings also limit the pruning that can be done by intelligent backtracking. If small bandwidth orderings are indeed advantageous, then a large number of heuristics developed in numerical analysis to find such orderings may find applicability to solving constraint satisfaction problems.
1 Introduction
The bandwidth of an ordering of a graph is the maximum distance between two adjacent vertices, and the bandwidth of a graph is its minimum bandwidth under any ordering. Bandwidth is one of the basic concepts in graph theory, and is related to almost every other mathematical property that graphs possess [Zinn et al., 1982]. It is nonetheless surprising to discover some strong connections between graph bandwidth and a class of search problems encountered in AI. The search problems known as constraint satisfaction problems (CSP's) have an associated constraint graph.
The vertices of the graph consist of the variables of the search problem, and there is an edge between two vertices if there is a (non-trivial) constraint between those variables. (These definitions will be made more precise in section 3.) The bandwidth of the constraint graph is strongly related to the complexity of the underlying constraint satisfaction problem. In particular, there is evidence for two claims.
- The bandwidth of the constraint graph of a constraint satisfaction problem serves as a measure of its decomposability.
- The bandwidth of a search ordering provides a measure of its quality, as backtracking with a small bandwidth ordering generally results in a smaller search tree.
The first claim is supported by a proof that any problem of limited bandwidth can be solved in polynomial time, essentially by divide and conquer. The second claim is supported by statistical analysis, and by evidence that small bandwidth orderings have additional important properties related to intelligent backtracking. If small bandwidth orderings are indeed useful, then the large body of heuristics that has been developed by numerical analysts for finding such orderings may prove to be useful for solving CSP's. After a brief review of constraint satisfaction problems, section 3 formally defines bandwidth and relates it to some other graph-theoretic notions in the CSP literature. Section 4 proves that any CSP whose bandwidth can be limited is solvable in polynomial time. Section 5 presents statistical data which suggests that small bandwidth orderings are generally superior, as well as describing some preliminary results about the interaction between such orderings and intelligent backtracking.
2 Constraint Satisfaction Problems
A constraint satisfaction problem (CSP) has a set of variables and a domain of values,
V = {v_1, v_2, ..., v_n} the set of variables, ||V|| = n,
D = {d_1, d_2, ..., d_m} the set of values, ||D|| = d.
Every variable v_i must be assigned a value d_k. A CSP also consists of some constraints saying which assignments are compatible. Most interesting problems are binary CSP's, where the constraints involve pairs of variables. Such a constraint is a (proper) subset of D × D consisting of the simultaneously permitted assignments. A binary CSP has an associated constraint graph G = (V, E), where E = {(v_i, v_j) | there is a constraint between v_i and v_j}. Note that the constraint graph hides a great deal of the information about the search problem, particularly the tightness of the constraints. The constraint graph G may be assumed to be connected, as otherwise each connected component can be considered separately. Many standard combinatorial problems are binary CSP's, including graph coloring and the n-queens problem.
(From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.)
3 Graph Definitions
Given a graph G with n vertices, an ordering h is a one-to-one map from the vertices of G to the set {1, ..., n}. The bandwidth of a vertex v under an ordering h is the maximum value of |h(v) − h(w)| over all vertices w connected to v. The bandwidth of a graph under an ordering is the maximum bandwidth of any vertex, and the bandwidth of a graph is its minimum bandwidth under any ordering. The bandwidth of G under h will be written as B(G, h).
3.1 Relating bandwidth and other graph properties
There are many notions akin to bandwidth which have been developed in the CSP literature. It is straightforward to determine the relationship between bandwidth and three graph properties: front length (introduced by [Seidel, 1981]), width [Freuder, 1982], and induced width [Dechter and Pearl, 1987]. Seidel [1981] defines an invasion of G to be a sequence of subgraphs G_1, ..., G_n where G_i is a subgraph of G_{i+1} with i vertices and where G_n = G.
For a given invasion, a front F_i consists of the vertices in G_i that are adjacent (in G) to vertices not in G_i. The front length of an invasion is the maximum size of F_i. There is an invasion associated with any ordering h, defined by G_1 = h^{-1}(1), G_{i+1} = G_i ∪ h^{-1}(i + 1). The front length of G under the invasion associated with h will be written as F(G, h). Freuder [1982] defines the width of a vertex v under an ordering h to be the number of vertices that are connected to v which occur earlier than v under h. The width of a graph under an ordering is the maximum width of any vertex. The width of G under h will be written as W(G, h). Dechter and Pearl [1987] construct the induced graph of G under an ordering h by processing G's vertices in the order {h^{-1}(n), ..., h^{-1}(1)}, and adding an edge between any two neighbors of a vertex v that precede v under h. The induced graph of G under h will be written as G^h. It is straightforward to analyze the relationships among these quantities. First, for a fixed ordering h, define P_i =def {h^{-1}(i − B(G, h)), ..., h^{-1}(i − 1)} (assuming h^{-1}(j) = h^{-1}(1) if j < 1, for simplicity). The important property of P_i is the following: if there is an edge between v and w, where v occurs before h^{-1}(i) and w does not, then v ∈ P_i.
Theorem 3.1 For any graph G and ordering h, B(G, h) ≥ F(G, h).
Proof: Consider a front F_i in the invasion defined by h. Clearly F_i ⊆ P_{i+1}. But ||P_{i+1}|| ≤ B(G, h). □
Theorem 3.2 For any graph G and ordering h, B(G, h) ≥ W(G, h).
Proof: All of the previous vertices that v = h^{-1}(i) is connected to must be in P_i. □
Theorem 3.3 For any graph G and ordering h, B(G, h) ≥ W(G^h, h).
Proof: The edges added when v = h^{-1}(i) is processed will be between two elements of P_i. Adding such edges will never increase the bandwidth, so B(G^h, h) = B(G, h). Applying theorem 3.2 with G replaced by G^h gives the desired result. □
Note that the above three inequalities cannot be made strict (i.e., it is invalid to replace '≥' by '>').
Theorem 3.4 There exists a graph G and an ordering h such that B(G, h) = F(G, h) = W(G, h) = W(G^h, h).
Proof: Consider a graph G that is a triangle. Then, for any ordering h, B(G, h) = F(G, h) = W(G, h) = W(G^h, h) = 2. □
4 Limited Bandwidth CSP's
There is no technical definition of what it means for a search problem to be "local" or "nearly decomposable". The intuition behind these terms, however, is that certain search problems can be solved piece by piece, without taking the entire problem into consideration at each point in the search. Coloring a long and narrow graph is an example of such a search problem. There are several reasons to believe that the bandwidth of a CSP's constraint graph reflects the locality of the search problem. First, note that edges in the constraint graph measure strong interactions between variables. If there is no edge between two variables, then assigning one a value can have no (direct) effect upon the other, so the lack of an edge reflects a sort of independence. If the CSP has limited bandwidth, each vertex in the constraint graph can have no more than a bounded number of neighbors. This suggests that small bandwidth graphs should be solvable by only worrying about a small subset of the variables at any instant. Additional anecdotal evidence to support this conclusion can be obtained by visual inspection of graphs of various bandwidths. Graphs with small bandwidths tend to look long and thin. The highest bandwidth graph, on the other hand, is the complete graph, where there is an edge between every pair of vertices. A stronger argument can be made on the basis of the claim that nearly decomposable search problems should be easy to solve. In particular, it should be possible to solve them efficiently by solving their subparts more or less independently (gluing the subparts back together into a solution by divide and conquer or by dynamic programming).
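To make the objects under discussion concrete, a binary CSP and the chronological backtracking procedure that searches it can be written directly from the definitions of section 2. This is a naive sketch of our own (not the dynamic programming method analyzed below); all names are illustrative:

```python
from itertools import product

def backtrack(variables, values, constraints):
    """Chronological backtracking over a binary CSP.  `constraints`
    maps an ordered pair of variables (u, w), with u assigned before
    w, to the set of simultaneously permitted value pairs."""
    def consistent(assignment, var, val):
        return all((assignment[u], val) in allowed
                   for (u, w), allowed in constraints.items()
                   if w == var and u in assignment)

    def extend(assignment, remaining):
        if not remaining:
            return assignment                 # every variable is labeled
        var, rest = remaining[0], remaining[1:]
        for val in values:
            if consistent(assignment, var, val):
                result = extend(dict(assignment, **{var: val}), rest)
                if result is not None:
                    return result
        return None                           # dead end: backtrack

    return extend({}, list(variables))

# 3-coloring a triangle plus a pendant vertex; the constraint on
# every edge is simply "the endpoints get different colors".
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
colors = (1, 2, 3)
different = {(x, y) for x, y in product(colors, colors) if x != y}
solution = backtrack("abcd", colors, {e: different for e in edges})
```

The edge list here is exactly the constraint graph of the problem; the variable ordering "abcd" plays the role of the ordering h studied throughout the paper.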
It turns out that any CSP of limited bandwidth can be solved in polynomial time by dynamic programming.
4.1 Solving bandwidth-limited CSP's in polynomial time
The basic strategy is to find an ordering with minimal bandwidth, and then to use this ordering to solve the CSP. In general, determining the bandwidth of a graph is an NP-complete problem [Papadimitriou, 1976]. However, if the graph's bandwidth is no larger than b, a minimal bandwidth ordering can be found in time O(n^b) and also space O(n^b). This can be done via an algorithm of Saxe [1980], as improved by Gurari and Sudborough [1984]. Given an ordering h with bandwidth no larger than b, the CSP can then be solved via dynamic programming. There are three possible algorithms, with slightly different bounds. Because of the relationships between bandwidth, front length and induced width that are proven in section 3, either Seidel's or Dechter and Pearl's method may be used. In addition, Fernandez-Baca has recently produced an algorithm that makes explicit use of bandwidth [Fernandez-Baca, 1988]. However, Dechter and Pearl's algorithm gives the bound that seems likely to be lowest in practice. Dechter and Pearl [1987] show how to solve a CSP under an ordering h with induced width w = W(G^h, h) in time O(d^{w+1}) and space O(d^w). Applying theorem 3.3, w ≤ b. Combining this with Saxe's algorithm gives the desired result.
Theorem 4.1 Any CSP whose constraint graph's bandwidth is no larger than b can be solved in time O(n^b + d^{b+1}) and space O(n^b + d^b).
One corollary of this theorem is a very simple proof of a result first discovered by Monien and Sudborough [1985].
Corollary 4.2 Any graph of limited bandwidth can be colored in polynomial time.
5 Small Bandwidth Orderings
Since small bandwidth CSP's are easy to solve, the obvious generalization would be to claim that small bandwidth orderings are generally efficient.
In fact, the method for solving small bandwidth CSP's described above first finds a small bandwidth ordering (by Saxe's algorithm), and next uses that ordering to solve the search problem. This suggests that there is something special about small bandwidth orderings. There is both empirical and theoretical evidence that small bandwidth orderings have special properties. Empirically, there is statistical evidence that using small bandwidth orderings for backtrack search is advantageous. There is also theoretical evidence that links small bandwidth orderings with intelligent backtracking.
Figure 1: Average tree size for a graph-coloring problem, with 99% confidence intervals.
Figure 2: Average tree size as a function of ordering bandwidth, with 99% confidence intervals.
5.1 Empirical results
Statistical analysis of some preliminary experimental results suggests that small bandwidth orderings are indeed advantageous. The problem that has been examined is the graph-coloring problem described in [Bruynooghe and Pereira, 1984]. This CSP has 1176 solutions. Depth-first search (sometimes referred to as backtracking, or chronological backtracking) has been run on the problem several thousand times, under orderings with different bandwidths. Figure 1 shows the average number of nodes in the search tree as a function of the bandwidth of the ordering, together with the 99% confidence intervals. The graph in figure 2 presents the same data in a slightly more readable form. The data supports the hypothesis
Furthermore each increment in bandwidth re- sults in a corresponding increase in search tree size which is statistically significant at the 99% confidence level. For this problem at least, it is not merely the case that the very smallest bandwidth orderings are better han the very largest. Of course, one would like to have evidence from more than one problem. The large number of trial runs nec- essary to obtain statistically significant results, how- ever, makes this something of a challenge. 5.2 Ordering bandwidth and intelligent backtracking Why should it be advantageous to use a small band- width ordering? Suppose that a CSP is solved by chronological backtracking under an ordering with bandwidth b. Such a process will construct a search tree in the standard manner: the nodes of the search tree will consist of labelings which assign values to some subset of the variables, the root node will consist of the empty labeling, and the children of a node will assign a value to one variable that was not assigned a value by the parent node. Assuming that the CSP is solved under a fixed ordering, each level in the search tree will correspond to a single variable which is as- signed a value at that depth in the tree.2 Some of the leaf nodes of the search tree will be solutions to the CSP; the remaining leaf nodes are failures, because all the possible values of some unassigned variable are in- compatible with the values of the variables that have been assigned values. At a given node of this search tree, the corresponding labeling will have assigned values to certain variables. However, only the most recent b variables to have been assigned values can have any effect on the remainder of the search. Any variables that were assigned values earlier will have no edges in the constraint graph con- necting them to variables that do not yet have values, and hence will have no effect. More precisely, consider two labelings at the same level in the search tree. 
If these two labelings assign the same values to the most recent b variables, then the subtrees underneath the two labelings will be isomorphic. This is the property that the dynamic programming schemes mentioned in section 4.1 exploit. This is also an advantageous property for doing chronological backtracking. (It is rather straightforward to extend all of the results in this paper to rearrangement search strategies such as [Purdom et al., 1981]. The bandwidth of a dynamic search ordering is simply the maximum of the bandwidths of the orderings used down any branch of the search tree.) In a small bandwidth ordering, at any point in the search the only variables that matter are a small number of recent ones. When using backtrack search, the decision actually responsible for a failure can occur significantly before the failure itself is detected. This is a well-known malady of backtracking which can lead to extremely poor performance. The greater the number of intervening decisions, the worse backtracking performs. Various schemes have been proposed to solve this problem. These schemes mostly work by invoking some sort of failure analysis to determine where responsibility for a failure lies. The results of this analysis are then used to discard portions of the search tree which can be shown to contain no solution. The two most popular such schemes are dependency-directed backtracking [Stallman and Sussman, 1977] and intelligent backtracking [Bruynooghe, 1981]. The major difference involves how much pruning of the search tree is done. Dependency-directed backtracking achieves a much greater reduction in search tree size, at the cost of a potentially exponential use of storage space. Intelligent backtracking uses very little space, but does less pruning. Researchers who have studied dependency-directed backtracking or intelligent backtracking have generally suspected that these schemes have some relationship with search order.
For instance, it is very difficult to construct an example of a problem where dependency-directed backtracking is helpful without requiring that the problem be solved in a particular order. There is usually some other ordering which solves the problem without running into this problem. One example is the sample problem described by [de Kleer, 1986, page 136]. Suppose that a search problem is being solved by chronological backtracking under an ordering with bandwidth b. At any point in the search, only the last b variables that have been assigned values matter. A failure occurs because all the values at a particular variable v_i have been eliminated. The responsible variables must be among the last b to be assigned values, as no other variables have an edge connecting them with v_i. This strongly suggests that the number of intervening decisions between the choice responsible for a failure and the failure itself should be bounded by b. It is possible to prove a theorem that supports this intuition. The theorem applies to intelligent backtracking schemes, which are in fairly wide use in the PROLOG community. Bruynooghe [1981] gives one intelligent backtracking algorithm for solving CSP's. There are numerous other very similar schemes, such as that given in [Bruynooghe and Pereira, 1984]. The statement of the theorem, which applies to all these schemes, is as follows. If a CSP is searched in a fixed order using intelligent backtracking, the bandwidth of that ordering provides a bound on the amount of the search tree that intelligent backtracking will prune. This bound holds at almost all nodes in the search tree; the only exception is when intelligent backtracking declares the problem to be insolvable. It is possible to prove restrictions on the nodes in the search tree where these exceptions can occur, in terms of the k-consistency [Freuder, 1978] of the original CSP.
This theorem and some related results will be discussed in detail in a forthcoming paper. They provide an interesting relationship between search ordering and intelligent backtracking. Intelligent backtracking and search order are clearly not orthogonal. Choosing a good (i.e., small bandwidth) ordering reduces the advantages of using intelligent backtracking. A reasonable explanation would be that choosing a small bandwidth ordering ameliorates the problem that intelligent backtracking is designed to solve. This explanation is consistent with the statistical evidence that small bandwidth orderings are useful for (chronological) backtracking.
5.3 Bandwidth and adjacency
One final interesting property of small bandwidth orderings is their relationship to the "adjacency" heuristic. This is one of the simplest and most standard ordering heuristics, which is especially common for graph-coloring problems. Adjacency simply means that the ordering starts at one edge of the graph and looks at new vertices that are adjacent (i.e., connected) to a vertex that has already been examined. The adjacency heuristic is clearly related to minimizing ordering bandwidth. A small bandwidth ordering, by definition, will examine all of a vertex v's neighbors soon after it examines v. It is not always the case that a small bandwidth ordering will next examine a vertex adjacent to those already examined.
Most of these have rel- atively low time overhead. [Zinn et al., 19821 surveys some of these algorithms; perhaps the most popular one is [Gibbs et al, 19761. These algorithms can be applied to CSP’s to yield small bandwidth orderings. The CSP may then be solved using chronological backtracking under that or- dering; or, as an alternative, it can be solved using the dynamic programming methods mentioned in sec- tion 4.1. 6 Conclusions Graph bandwidth has been defined and shown to have applicability to CSP’s. Bandwidth provides a new class of tractably solvable constraint satisfaction problems, and a possible measure of CSP decomposability. In addition, the bandwidth is an important characteristic of a search ordering. Small bandwidth orderings seem to result in a statistically significant increase in back- tracking efficiency, and these or nderings are also to intelligent backtracking and to adjacency. related 6.1 Acknowledgements Johan de Kleer, David McAllester and Joe Weening contributed to this research. The author is supported by a fellowship from the Fannie and John Hertz Foun- dation. References [Bruynooghe and Pereira, 19841 Maurice Bruynooghe and Luis Pereira. Deduction revision by intelligent backtracking. In J. A. Campbell, editor, Implemen- tations of Prolog, chapter 3, pages 196-215. Ellis Horwood, Chichester, 1984. [Bruynooghe, 19811 Maurice Bruynooghe. Solving combinatorial search problems by intelligent back- tracking. Information Processing Letters, 12(1):36- 39, 1981. [de Kleer, 19861 Johan de Kleer. An assumption-based TMS. Artificial Intelligence, 28(2):127-162, March 1986. [Dechter and Pearl, 19871 Rina Dechter and Judea Pearl. Network-based heuristics for constraint- satisfaction problems. Artificial Intelligence, 34(1):1-38, December 1987. [Fernandez-Baca, 19881 David Fernandez-Baca. Non- serial dynamic programming formulations of satisfi- ability. Information Processing Letters, 271323-326, May 1988. [Freuder , 19781 Eugene Freuder . 
Synthesizing constraint expressions. Communications of the Association for Computing Machinery, 21:958-966, 1978.

[Freuder, 1982] Eugene Freuder. A sufficient condition for backtrack-free search. Journal of the Association for Computing Machinery, 29(1):24-32, January 1982.

[Gibbs et al., 1976] Norman Gibbs, William Poole, and Paul Stockmeyer. An algorithm for reducing the bandwidth and profile of a sparse matrix. SIAM Journal of Numerical Analysis, 13(2), April 1976.

[Gurari and Sudborough, 1984] Eitan Gurari and Ivan Sudborough. Improved dynamic programming algorithms for bandwidth minimization and the mincut linear arrangement problem. Journal of Algorithms, 5:531-546, December 1984.

[Monien and Sudborough, 1985] Burkhard Monien and Ivan Sudborough. Bandwidth constrained NP-complete problems. Theoretical Computer Science, 41:141-167, 1985.

[Papadimitriou, 1976] Christos Papadimitriou. The NP-completeness of the bandwidth minimization problem. Computing, 16:263-270, 1976.

[Purdom et al., 1981] Paul Purdom, Cynthia Brown, and Edward Robertson. Backtracking with multi-level dynamic search rearrangement. Acta Informatica, 15:99-114, 1981.

[Saxe, 1980] James Saxe. Dynamic programming algorithms for recognizing small bandwidth graphs in polynomial time. SIAM Journal on Algebraic and Discrete Methods, 1(4):363-369, December 1980.

[Seidel, 1981] Raimund Seidel. A new method for solving constraint satisfaction problems. In Proceedings of IJCAI-81, Vancouver, BC, pages 338-342, August 1981.

[Stallman and Sussman, 1977] Richard Stallman and Gerald Jay Sussman. Forward reasoning and dependency directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence, 9:135-196, 1977.

[Zinn et al., 1982] P. Z. Zinn, J. Chvatalova, A. K. Dewdney, and N. E. Gibbs. The bandwidth problem for graphs and matrices - a survey. Journal of Graph Theory, 6(3):223-254, 1982.

ZABIH 51
Causal Theories for Nonmonotonic Reasoning

Hector Geffner
hector@ibm.com
T. J. Watson Research Center
P. O. Box 704, Room H1-K10
Yorktown Heights, N.Y. 10598

Abstract

Causal theories are default theories which explicitly accommodate a distinction between 'explained' and 'unexplained' propositions. This is accomplished by means of an operator 'C' in the language, for which propositions α are assumed explained when literals of the form Cα hold. The behavior of causal theories is determined by a preference relation on models based on the minimization of unexplained abnormality. We show that causal networks, general logic programs and theories for reasoning about change can all be naturally expressed as causal theories. We also develop a proof-theory for causal theories and discuss how they relate to autoepistemic theories, prioritized circumscription, and Pearl's C-E calculus.

Introduction

Preferential entailment has emerged as a powerful means for specifying non-monotonic behavior. An order on interpretations determines the preferred models of a given theory, and those models determine in turn the propositions that the theory non-monotonically entails. Shoham [1988], who most clearly articulated the framework, used a preference order on interpretations to specify the behavior of temporal theories dealing with change. Similar proposals have been advanced for logic programs with negation [Przymusinski, 1987] and defeasible inheritance hierarchies [Krishnaprasad et al., 1989], among others. In all these proposals, the preference order adopted is tailored to the particular task or domain addressed. Still, the appeal of preferential entailment as a unifying framework for non-monotonic inference could be enhanced if a general domain-independent criterion for inferring preferences from theories could be developed. A general proof-theory, for instance, if available, could then be imported by any individual domain as well.
Similarly, the resulting framework would subsume the expressive power of individual domains, enabling a user to express, say, theories which combine patterns of inference characteristic of theories for reasoning about change with those of general logic programs.

An attempt in this direction was recently reported in [Geffner, 1989b], where it was shown that non-monotonic inference in several domains of interest in AI could be understood in terms of a model preference criterion based on the minimization of unexplained abnormality. 'Explanations' were defined as logical derivations constrained by the form of the formulas in the knowledge base. A formula ¬p ⇒ q, for example, was assumed to permit an explanation of q in terms of an assumption ¬p, but to exclude an explanation of p in terms of an assumption ¬q. Thus, the different semantics associated with two logically equivalent logic programs q ← ¬p and p ← ¬q, for instance, could be accounted for.

In this paper we pursue the same goal and intuition as in [Geffner, 1989b] but proceed with a different formalization. Rather than appealing to the syntactic form of the rules in the knowledge base to distinguish between logically equivalent theories, we appeal to a richer language. Such a language includes a new operator 'C' for which propositions α are assumed explained when literals of the form Cα hold. Default theories over the new language are called causal theories. As we will show, not only do causal theories abide by the principle that equal models imply equal behavior, but they also provide significant additional expressive power. Causal networks, general logic programs and theories about change can all be naturally expressed as causal theories. Moreover, causal theories lend themselves to a simple sound and complete proof-theory.

524 KNOWLEDGE REPRESENTATION
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Causal Theories

A causal theory is a default theory augmented with a 'causal' operator 'C.' Default theories are essentially what McCarthy [1986] refers to as "abnormality" theories: classical first order theories in which certain literals, abnormalities (resp. assumptions), are expected to be false (resp. true).¹ The language of causal theories is closed under all standard connectives, while excluding expressions with embedded causal operators. We usually denote abnormalities by atoms of the form ab_i(a), and use the symbol α, possibly indexed, as a variable ranging over abnormalities. Additionally, we often partition the formulas in a causal theory T into two sets: a background context K containing the formulas which express generic knowledge (e.g. "penguins are birds"), and an evidence set E containing the formulas which express knowledge specific to the situation at hand (e.g. "Tweety is a bird"; see [Geffner and Pearl, 1987]).

¹ Note that unlike McCarthy, we will be concerned with abnormality literals as opposed to abnormal individuals. The trade-offs involved in that choice are discussed in [Geffner, 1989a, section 4.3].

The operator C is most commonly used to encode causal or explanatory rules of the form "if a then b" as sentences of the form a ⇒ Cb (see [Pearl, 1988a]). A rule such as "rain causes the grass to be wet" may thus be expressed as a sentence rain ⇒ C grass_wet, which can then be read as saying that if rain is true, grass_wet is explained. We assume that the operator C obeys certain minimal constraints which correspond to the postulates of system T in modal logic [Hughes and Cresswell, 1968]:

[C1] Cα ⇒ α
[C2] C(α ⊃ β) ⇒ (Cα ⊃ Cβ)
[C3] If ⊢ α then Cα

[C1] forces every explained proposition to be true, while [C2] and [C3] guarantee that the set of explained propositions is closed under deduction.
The expression '⊢ α' in [C3] is an abbreviation of K ⊢ α, which says that in any causal theory every proposition that logically follows from its background context is explained. An interpretation M that satisfies a causal theory T and the constraints [C1]-[C3] will be said to be a model of T.

Moreover, we will refer to the set of abnormalities rendered true by an interpretation M as the gap of the interpretation and denote it as Δ[M]. Since the preference relation on models will be an exclusive function of the model gaps, we will find it useful to group models into classes of models. Formally, the class C of T with a gap Δ[C] will represent the non-empty collection of models M of T with a gap Δ[M] ⊆ Δ[C]. Intuitively, since the negations of abnormalities are assumptions expected to hold, a class C with a gap Δ[C] represents the collection of models which validate all assumptions logically compatible with Δ[C]. Models and classes of T with minimal gaps will be said to be minimal. Moreover, we will say that a proposition p holds in a class C of T when p holds in every model in C. Proof-theoretically this is equivalent to requiring that p be derivable from T and a set of assumptions compatible with Δ[C].²

² The notions of derivability and consistency are to be understood relative to the postulates [C1]-[C3].

The operator C is used as a device to order the classes of models of the theories T of interest. As in [Geffner, 1989b], such a preference order is defined as a function of the abnormalities and the explained abnormalities in the different classes. An abnormality α is explained in a class C when the literal Cα holds in C. If we denote the set of explained abnormalities in a class C by Δc[C], then the preference relation on classes of a theory T can be described as follows:

Definition 1 A class C is as preferred as a class C' iff Δ[C] − Δc[C] ⊆ Δ[C']. C is preferred to C' iff C is as preferred as C' but C' is not as preferred as C.
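Since Definition 1 compares classes purely through their gaps and sets of explained abnormalities, it can be sketched directly as set operations. An illustrative sketch (not the paper's machinery; the class data below anticipates Example 1):

```python
def as_preferred(gap, explained, other_gap):
    """Definition 1: C is as preferred as C' iff every abnormality that C
    sanctions but leaves unexplained also appears in C'."""
    return (gap - explained) <= other_gap

def preferred(gap, explained, other_gap, other_explained):
    """C is preferred to C' iff C is as preferred as C' but not vice versa."""
    return (as_preferred(gap, explained, other_gap)
            and not as_preferred(other_gap, other_explained, gap))

# Theory {~ab1 => C ab2}: class C1 sanctions ab1 and explains nothing;
# class C2 sanctions ab2, which is explained via the assumption ~ab1.
gap1, exp1 = {"ab1"}, set()
gap2, exp2 = {"ab2"}, {"ab2"}
print(preferred(gap2, exp2, gap1, exp1))  # True: C2 is preferred to C1
print(preferred(gap1, exp1, gap2, exp2))  # False
```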
In words, a class C is preferred to a class C' when every abnormality in C but not in C' has an explanation, but not vice versa. Notice that this preference relation on classes is not necessarily transitive, and thus certain care is required for defining the propositions which a given theory causally entails. Let us say that a collection B of classes constitutes a basis for a theory T if for every class C of T not in B, there is a class C' in B such that C' is preferred to C. Moreover, let us say that a basis B supports a proposition p if p holds in every class in B. Then, we will say that a proposition p is causally entailed by a causal theory T when there is a basis for T which supports p.

Most theories T of interest will be well-founded in the sense that for any non-minimal model M of T there will be a minimal model M' of T such that Δ[M'] ⊂ Δ[M].³ In such cases, causal entailment can be computed by considering the minimal classes of T only. Indeed, B will be a basis for a well-founded theory T iff for every minimal class C of T not in B, B includes a minimal class C' preferred to C. Moreover, T will often possess a single minimal basis B. In such cases, we will refer to the classes in B as the preferred classes of T. For such theories causal entailment can be cast in the more familiar form in which a proposition is causally entailed when it holds in all the preferred classes.

Example 1 Let us consider first a simple causal theory T given by the single sentence ¬ab1 ⇒ Cab2, where ab1 and ab2 are two different abnormalities. Such a theory admits two minimal classes: a class C1, comprised of the models of T which only sanction the abnormality ab1, and a class C2, comprised of the models which only sanction the abnormality ab2. Thus C1 has an associated gap Δ[C1] = {ab1}, while C2 has an associated gap Δ[C2] = {ab2}.
Both classes represent the minimal classes of T, as there is no model of T that satisfies both ¬ab1 and ¬ab2 together with the restriction Cα ⇒ α. The abnormalities α explained in each class C can be determined by testing which literals Cα hold in C. As we said, this amounts to checking whether there is a set of assumptions legitimized by C which together with T implies Cα. Thus, in the class C2, the abnormality ab2 is explained, as the literal Cab2 logically follows from T and the assumption ¬ab1. On the other hand, the abnormality ab1 is not explained in C1, as there is no set of assumptions validated by C1 which supports the literal Cab1. It follows then that the class C2 is preferred to C1, as Δ[C2] − Δc[C2] = ∅ ⊆ Δ[C1], but Δ[C1] − Δc[C1] = {ab1} ⊄ Δ[C2] = {ab2}. Furthermore, since the theory T is well-founded, and C1 and C2 are the only minimal classes of T, it follows that B = {C2} is the single minimal basis of T, and thus that C2 is the single preferred class of T. As a result, the propositions ¬ab1 and ab2 which hold in C2 are (causally) entailed by T.

³ A sufficient condition for T to be well-founded is that T gives rise to a finite number of bound assumptions, where an assumption is bound when it is in conflict with other assumptions (see [Geffner, 1989a]).

GEFFNER 525

Applications

In this section we consider the use of causal theories for reasoning about change and for specifying and extending the semantics of general logic programs. For the use of causal theories for inheritance and abductive reasoning see [Geffner, 1989a].

Reasoning about Change

Theories for reasoning about change need to represent the effects of actions, the conditions which can prevent actions from achieving their normal effects, and the tendency of certain aspects of the world (fluents) to remain stable (see [McDermott, 1982]). Here we will refer to the first type of rules as change rules, to the second type as cancellation rules, and to the third type as persistence rules.

Change, cancellation and persistence rules can interact in various ways. The Yale shooting scenario [Hanks and McDermott, 1987] illustrates a problem that results from spurious interactions between change and persistence rules. We now present general guidelines to locally map general theories for reasoning about change into causal theories which avoid those problems. The guidelines are uncommitted about the particular temporal notation used. For simplicity, we use a simple reified temporal language sufficient to illustrate the relevant issues. Other notations could be used as well. The notation p(x)_t below, where p is a predicate and t is a time point, is used as an abbreviation of the sentence Holds(p(x), t), to read "fluent p(x) holds at time t." We also assume for simplicity a discrete time where t precedes t + 1.

First we specify the encoding of rules about change. A rule describing the effect e(x) of an action a(x) with precondition p(x) is encoded as a causal rule of the form:

p(x)_t ∧ a(x)_t ⇒ Ce(x)_t+1

where x is a tuple of variables and both x and t are universally quantified. Such a rule can be read as stating that given the precondition p(x), a(x) causes or explains e(x). Often, however, rules about change are defeasible. Defeasible rules about change are encoded by means of a unique abnormality predicate ab_i and a pair of causal rules:

p(x)_t ∧ a(x)_t ∧ ¬ab_i(x)_t ⇒ Ce(x)_t+1
p(x)_t ∧ a(x)_t ∧ C¬e(x)_t+1 ⇒ Cab_i(x)_t

where the second rule simply asserts that the violation of an expected change is explained when there is an explanation for the negation of the expected effect (a similar rule is needed for modeling inheritance hierarchies [Geffner, 1989a]). The persistence of a fluent f (e.g.
on(a, b)), on the other hand, is encoded by the expressions:

f_t ∧ ¬ab_i(f)_t ⇒ f_t+1
C¬f_t+1 ⇒ Cab_i(f)_t

where ab_i(f)_t is an abbreviation of the atom ab_i(f, t), read "the persistence of f holds at time t." Thus, while the first rule expresses the tendency of fluents to remain stable, the second rule expresses that changes are explained when the negation of the projected fluent is explained. For the causal encoding of a version of the Yale Shooting problem, see [Geffner, 1989a]. Here we will consider a slightly richer example due to Ginsberg and Smith [1988].

Example 2 Let us assume that there is a room with some ducts that maintain the room ventilated. Moreover, an object sitting on a duct blocks the duct, and if all ducts get blocked, the room becomes stuffy. This information is encoded in a causal theory with background:

duct(x) ∧ ∃y. on(y, x)_t ⇒ Cblocked(x)_t
[∀x. duct(x) ⊃ blocked(x)_t] ⇒ Cstuffy_t+1
move_to(x, y)_t ∧ ¬ab1(x, y)_t ⇒ Con(x, y)_t+1
move_to(x, y)_t ∧ C¬on(x, y)_t+1 ⇒ Cab1(x, y)_t

The persistence of the fluents on(x, y), stuffy, and blocked(x), and their negations, is expressed as stipulated above. To keep in mind that all these fluents are really terms,⁴ we use the notation ~f to denote the fluent which is the complement of f. Thus, for instance, ~on(a, b) stands for the 'negation' of on(a, b). Namely, if on(a, b) holds at time t, ~on(a, b) will not, and vice versa. This is expressed by a constraint f_t ≡ ¬(~f)_t which renders f and ~f incompatible, provided that the complement of ~f is f itself. Finally, we need to express that an object cannot be on two different places at the same time:

on(x, y)_t ∧ on(x, z)_t ⇒ y = z

⁴ Recall that blocked(x)_t is an abbreviation of the atom Holds(blocked(x), t).

Given this background K, we consider a theory T = (K, E) describing a room with two ducts d1 and d2. Furthermore, at time t = 0 it is known that the room
is not stuffy, that a block a is sitting on top of duct d1, and that a block b is sitting on a place different from d2. Namely, E = {duct(z) ≡ (z = d1 ∨ z = d2), ~stuffy_0, on(a, d1)_0, ~on(b, d2)_0}.

In the context T, the fluents ~stuffy_0, on(a, d1)_0, and ~on(b, d2)_0 project both forward and backward in time. If block b is moved to duct d2 at time t = 0, however, conflicts among these projections arise, resulting in three classes of minimal models: the intended class C, where the action is successful and, as a result, the two ducts get blocked and the room becomes stuffy; the class C', where the action is successful but somehow the block a has been removed from duct d1; and the class C'', where the action is unsuccessful and the block b remains in a place different from d2. Nonetheless, the interpretation of causal theories singles out the intended class C as the only preferred class, capturing the intuition that block a stays on d1 and that the room becomes stuffy. Note that such a behavior arises without the presence of explicit cancellation axioms.

Logic Programming

While the adequacy of the framework presented for reasoning about change rests mainly on empirical grounds (how natural it is to express knowledge about these domains and how closely the resulting behavior resembles the behavior intended by the user), a growing body of work on the semantics of general logic programs will permit us to assess the expressivity and semantics of causal theories on more formal grounds.

As is standard, we consider only the Herbrand models of programs. Moreover, since for answering existential queries a program involving variables can be shown to be equivalent to a program without variables, we will be dealing mainly with variable-free logic programs. More precisely, we will analyze the semantics of general logic programs in terms of two mappings C_i[·], i = 1, 2, each converting a program P into a causal theory C_i[P].
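Before turning to the two mappings, it may help to see the change-and-persistence scheme of the previous subsection in operation. The sketch below is only a naive forward projection, assumed and illustrative: it applies change rules and lets every other fluent persist, ignoring the model-preference machinery that resolves genuine conflicts such as those of Example 2.

```python
def project(state, actions_at, change_rules, horizon):
    """Fluents persist unless a change rule fires; a firing rule adds its
    effect and drops the complementary fluent (written '~f')."""
    history = [set(state)]
    for t in range(horizon):
        cur = history[-1]
        nxt = set(cur)                       # persistence: f_t => f_t+1
        for pre, act, eff in change_rules:   # p(x)_t & a(x)_t => C e(x)_t+1
            if pre <= cur and act in actions_at.get(t, set()):
                comp = eff[1:] if eff.startswith("~") else "~" + eff
                nxt.discard(comp)            # the change explains the flip
                nxt.add(eff)
        history.append(nxt)
    return history

# Yale-shooting flavor: load causes loaded; shoot, when loaded, causes ~alive.
rules = [(set(), "load", "loaded"), ({"loaded"}, "shoot", "~alive")]
final = project({"alive"}, {0: {"load"}, 2: {"shoot"}}, rules, 3)[-1]
print(sorted(final))  # ['loaded', '~alive']
```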
Each mapping associates a different "meaning" with P. For the purposes of logic programming, C1[P] is the most relevant. The mapping C2[·] will be used mainly to illustrate the relation between the interpretation of general logic programs and the semantics of causal theories. We assume every atom in a logic program to be an "abnormality" and write C_A to represent the class of Herbrand models M whose "abnormalities" are among those of A. Namely, if L denotes the formulas not involving the causal operator, C_A will stand for the collection of models M such that M ∩ L ⊆ A.

We consider first the mapping C1[·], which converts each rule

γ ← α1, ..., αn, ¬β1, ..., ¬βm

in P, where n ≥ 0 and m ≥ 0, and the α's, β's and γ are atoms, into a causal rule of the form

Cα1 ∧ ... ∧ Cαn ∧ ¬β1 ∧ ... ∧ ¬βm ⇒ Cγ.

Example 3 Consider a program P given by the following rules:

c ← a, ¬b
d ← ¬c
a ←

P gives rise to two minimal models: M1 = {a, c} and M2 = {a, b, d}, the former of which is the single canonical or perfect model of P [Apt et al., 1987; Przymusinski, 1987]. The mapping C1[·] maps P into the causal theory C1[P]:

Ca ∧ ¬b ⇒ Cc
¬c ⇒ Cd
true ⇒ Ca

C1[P] gives rise to two minimal classes C_M1 and C_M2, with gaps M1 and M2 as above. Furthermore, in the former class the atoms a and c are both explained, as C1[P], ¬b ⊢ Ca ∧ Cc holds, and ¬b is a legitimate assumption in C_M1. On the other hand, only the atom a is explained in C_M2. Thus, the class C_M1 is the single preferred class of C1[P]. As a result, the canonical model M1 of P and the preferred class C_M1 of C1[P] sanction the same non-causal (free from 'C') literals.

As the example suggests, for stratified programs the following correspondence between the canonical model of P and the single preferred class of the theory C1[P] can be established:⁵
Theorem 1 M is the canonical model of a stratified program P if and only if C_M is the single preferred class of C1[P].

⁵ Proofs can be found in [Geffner, 1989a].

Moreover, if we say that a class of models is perfectly coherent when it explains every abnormality that it sanctions, we obtain a correspondence between the stable models of a program P [Gelfond and Lifschitz, 1988; Fine, 1989] and the perfectly coherent classes of the causal theory C1[P], even when P is not stratified.

Theorem 2 M is a stable model of an arbitrary program P if and only if C_M is a perfectly coherent class of the causal theory C1[P].

In spite of this correspondence, however, the semantics of causal theories C1[P] and the stable semantics of logic programs P diverge outside the family of stratified programs. On the one hand, programs may lack stable models (e.g. {p ← ¬p}); on the other, causal theories C1[P] may give rise to multiple preferred classes even when P accepts a single stable model (e.g. P = {a ← ¬b, b ← ¬a, p ← ¬b, p ← ¬p}).

Logic Programs and Causal Networks

We now investigate the semantics associated with a second mapping C2[·] of logic programs into causal theories. For a logic program P, C2[P] represents the collection of rules which result from mapping each rule in P,

γ ← α1, ..., αn, ¬β1, ..., ¬βm,

into a causal rule of the form:

α1 ∧ ... ∧ αn ∧ ¬β1 ∧ ... ∧ ¬βm ⇒ Cγ

The difference with the previous translation is that the positive antecedents of the resulting causal rules do not need to be "causally" established. This renders the models of the causal theory C2[P] models of C1[P], though not the other way around. As a result, the semantics of causal theories of the form C2[P] differs from that of C1[P] even within the family of stratified programs. For instance, the stratified program P = {q ← ¬p, p ← r, r ← p} possesses a single canonical model M = {q}, and thus C_M is the single causally preferred class of C1[P].
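The stable-model condition invoked in Theorem 2 can be made concrete for small propositional programs: M is stable when it equals the least model of the Gelfond-Lifschitz reduct of P with respect to M. A brute-force illustrative sketch (not the paper's machinery):

```python
from itertools import combinations

def least_model(definite_rules):
    """Least Herbrand model of a negation-free program (forward chaining)."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_rules:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(rules, atoms):
    """Enumerate stable models of a propositional program whose rules are
    (head, positive_body, negative_body) triples of atoms."""
    atoms = sorted(atoms)
    result = []
    for r in range(len(atoms) + 1):
        for cand in map(set, combinations(atoms, r)):
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # meets the candidate, strip negative bodies from the rest.
            reduct = [(h, pos) for h, pos, neg in rules if not (neg & cand)]
            if least_model(reduct) == cand:
                result.append(cand)
    return result

print(stable_models([("p", set(), {"p"})], {"p"}))       # []: p <- not p
print(stable_models([("q", set(), {"p"})], {"p", "q"}))  # [{'q'}]
```

The first call reproduces the observation above that {p ← ¬p} has no stable model.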
On the other hand, the mapping C2[·] renders the causal theory C2[P] = {¬p ⇒ Cq, r ⇒ Cp, p ⇒ Cr}, which accepts two preferred classes C_M and C_M', with M = {q} and M' = {p, r}.

In this example, the 'anomalous' behavior of the theory C2[P] is a consequence of the circularity relating the atoms p and r. 'Circular' explanations are precluded in C1[P] but not in C2[P]. What is interesting, however, is that once these circularities are removed, the 'anomalous' behavior is guaranteed to disappear. Let us say that a program P is acyclic when its dependency graph does not contain cycles. Acyclic programs are thus stratified. Moreover, acyclic programs preclude not only 'recursion through negation,' but every type of recursion. For acyclic programs, the following result applies.

Theorem 3 Let P be an acyclic program. Then the class C_M, where M is the canonical model of P, is the unique preferred class of the theories C1[P] and C2[P].

In other words, once recursion is removed the two mappings examined result in identical behavior, in correspondence with the received semantics of logic programs. While the requirement of acyclicity is unacceptably strong in the domain of programming, it is common among network representational languages, such as inheritance hierarchies [Touretzky, 1986] and Bayesian networks [Pearl, 1988b]. Indeed, causal theories of the form C2[P] for acyclic programs P, possibly augmented by integrity constraints, provide a sufficiently expressive language for reasoning in causal networks. [Geffner, 1989a] discusses the use of such representations for abductive reasoning as well as some of their advantages over the representations resulting from the mapping C1[·].

Proof-Theory

The proof-theory of causal theories is structured in the form of an argument-based system (e.g. [Loui, 1987]). We assume the theories of interest are well-founded in the sense defined above.
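The acyclicity condition of Theorem 3 is a property of the program's dependency graph and is cheap to test; a depth-first-search sketch (illustrative, with an assumed rule representation):

```python
def is_acyclic(rules):
    """Cycle check on a program's dependency graph. Each rule is
    (head, body_atoms); positive and negative body atoms are pooled,
    since acyclicity rules out every kind of recursion."""
    deps = {}
    for head, body in rules:
        deps.setdefault(head, set()).update(body)
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    def visit(a):
        color[a] = GREY                     # on the current DFS path
        for b in deps.get(a, ()):
            c = color.get(b, WHITE)
            if c == GREY or (c == WHITE and not visit(b)):
                return False                # back edge: a cycle
        color[a] = BLACK                    # fully explored
        return True
    return all(color.get(a, WHITE) != WHITE or visit(a) for a in deps)

# P = {q <- not p, p <- r, r <- p} from the text is cyclic through p and r:
print(is_acyclic([("q", {"p"}), ("p", {"r"}), ("r", {"p"})]))  # False
print(is_acyclic([("q", {"p"}), ("p", set())]))                # True
```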
The notions of consistency and derivation are to be understood relative to postulates [C1]-[C3]. We start with some terminology. Assumptions are the complements of abnormalities. We denote the complement of a proposition p as ¬p; thus, if p is an abnormality, ¬p is an assumption, and vice versa. Furthermore, in a context T, a set A of assumptions constitutes an argument if the set A is consistent with T. Moreover, A is an argument for q if q is derivable from the causal theory T + A, and an argument against q if ¬q is derivable from T + A. When the context T is understood, we also say that A supports q and ¬q respectively. Now, if A is not an argument in T, then A is said to be a conflict set. Two arguments are in conflict when their union is a conflict set. In such a case we also say that one argument refutes the other.

For instance, in a theory T consisting of the expressions ¬ab1 ⇒ Cab2 and ¬ab3 ⇒ Cq, the arguments A1 = {¬ab1} and A2 = {¬ab2} are in conflict. On the other hand, no argument is in conflict with A3 = {¬ab3}. By the minimality of preferred classes, it easily follows that propositions supported by unrefuted arguments are entailed. Thus, for example, we can safely conclude that the propositions Cq and q above are entailed, as they are supported by the unrefuted argument A3.

Often, however, refuted arguments may also provide legitimate support. One such example is the proposition ab2, which is entailed by T in spite of having a single minimal supporting argument A1 which is refuted by A2. Intuitively, what is going on is that A1 not only refutes A2, but also explains its negation. We will say in that case that A1 is protected from A2. As we will see, propositions supported by protected arguments may also be entailed. Formally, let us say that an argument A explains a proposition p when A is an argument for Cp.
Then the notion of protection can be defined as follows:

Definition 2 An argument A is protected from a conflicting argument A' iff there is a set A'' ⊆ A' − A such that A + A' − A'' is not a conflict set, and every abnormality ¬α, α ∈ A'', is explained by A + A' − A''.

Similarly, we will say that an argument A is stronger than a conflicting argument A' when A is protected from A' but A' is not protected from A. In the example above, for instance, A1 is stronger than the conflicting argument A2. If we say that an argument is stable when it is stronger than any conflicting argument, we obtain the following sufficient conditions for a proposition to be causally entailed:

Theorem 4 If a proposition p is supported by a stable argument, then p is causally entailed.

Yet, Theorem 4 does not provide necessary conditions. For instance, the proposition ab3 is causally entailed by the theory comprised of the formulas ¬ab1 ∨ ¬ab2 ⇒ Cab3 and ab1 ∨ ab2, and yet ab3 is not supported by any stable argument. A simple extension of the definitions above takes care of such cases.

Let us refer to a collection of arguments as a cover, and let us say that a cover supports a proposition p if every argument in the cover supports p. Furthermore, let us say that an argument is in conflict with a cover when the argument is in conflict with every argument in the cover, and that a cover is stronger than a conflicting argument A when it contains an argument stronger than A. Moreover, let us also say that a cover is stable when it is stronger than every conflicting argument. Then, the following sound and complete characterization of causal theories results:

Theorem 5 (Main) A proposition p is causally entailed if and only if it is supported by a stable cover.

For the theory comprised of the sentences ¬ab1 ∨ ¬ab2 ⇒ Cab3 and ab1 ∨ ab2, it is easy to show that the pair of arguments A1 = {¬ab1} and A2 = {¬ab2} constitute a stable cover.
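The basic notions underlying this proof-theory (consistency of an assumption set with T, and support for a literal) reduce to satisfiability checks, which for small propositional theories can be brute-forced. An illustrative sketch (assumed encoding: C-literals are treated as extra atoms constrained by [C1]; '~a' writes the negation of atom a):

```python
from itertools import product

def models(atoms, constraints, assumptions):
    """Truth assignments over `atoms` satisfying the theory's constraints
    plus a set of assumption literals."""
    def holds(m, lit):
        return not m[lit[1:]] if lit.startswith("~") else m[lit]
    for values in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, values))
        if all(c(m) for c in constraints) and all(holds(m, l) for l in assumptions):
            yield m

def is_argument(atoms, constraints, A):
    """A is an argument iff T + A is consistent."""
    return any(True for _ in models(atoms, constraints, set(A)))

def supports(atoms, constraints, A, lit):
    """A supports lit iff T + A is consistent and entails lit."""
    comp = lit[1:] if lit.startswith("~") else "~" + lit
    return is_argument(atoms, constraints, A) and \
           not is_argument(atoms, constraints, set(A) | {comp})

# Theory from the text: ~ab1 => C ab2, with [C1] giving C ab2 => ab2.
atoms = ["ab1", "ab2", "Cab2"]
T = [lambda m: m["ab1"] or m["Cab2"],      # ~ab1 => C ab2
     lambda m: not m["Cab2"] or m["ab2"]]  # [C1]: C ab2 => ab2
print(supports(atoms, T, {"~ab1"}, "ab2"))      # True: A1 supports ab2
print(is_argument(atoms, T, {"~ab1", "~ab2"}))  # False: A1, A2 in conflict
```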
Since such a cover supports the proposition ab3, it follows that ab3 is entailed by the theory.

Related Work

Causal theories are an elaboration of ideas in [Geffner, 1989b], where the notions of explanations, classes, and coherence were originally presented. The adoption here of a causal operator as part of the object-level language, however, has simplified matters considerably, providing additional expressive power and permitting the construction of a proof-theory. Part of the motivation for the move came from a proposal due to Pearl to explicitly incorporate a causal language into default theories. Pearl's proposal [Pearl, 1988a] draws on work in causal probabilistic networks to suggest a distinction between defaults which encode explanations (e.g. fire → smoke) and defaults which trigger explanations (e.g. smoke → fire). He calls the former defaults causal and the latter evidential. He argues that the language of default theories should accommodate such a distinction, and in particular, that explanation 'giving' defaults should be prevented from triggering explanation 'seeking' defaults. Pearl's proposal to preclude such chains consists of three parts. First, he labels every default as either causal, e.g. rain →_C grass_wet, or evidential, e.g. grass_wet →_E sprinkler_on; second, he distinguishes the status of propositions p established on causal grounds, Cp, from those established on evidential grounds, Ep; and third, he introduces a calculus for reasoning with causal and evidential rules which purposely precludes deriving q from Cp and an evidential rule p →_E q.

Though differing in detail and goals, the reading of the operator 'C' in causal theories follows Pearl's intuitions. Pearl, however, focuses on evidential reasoning, while we focus on default reasoning. A proposal for performing evidential reasoning in causal theories is discussed in [Geffner, 1989a].
Causal theories are also related to Moore's [1985] autoepistemic theories. The autoepistemic encoding L[P] of a stratified logic program P [Gelfond, 1987], for instance, turns out to be the "dual" of the causal encoding C1[P] (i.e. in L[P] every negated atom is preceded by the autoepistemic operator 'L', while in C1[P] every non-negated atom is preceded by the causal operator 'C'), and they both legitimize the same behavior. Indeed, it is possible to understand the autoepistemic operator L as an evidential operator, with Lα meaning "there is evidence for α." Namely, instead of using the causal operator C under the conventions that ¬α is an assumption, that Cα ⇒ α must hold for every (plain) sentence α, and that α is explained in a class when Cα holds, we could have used an evidential operator E under the conventions that ¬Eα is an assumption, that α ⇒ Eα must hold for every (plain) sentence α, and that Eα is explained in a class when α holds. Under such an approach the evidential encoding of a logic program would be identical to the autoepistemic encoding, except for the presence of E's instead of L's. Moreover, both encodings would sanction an equivalent semantics for stratified programs. For non-stratified programs, however, as for most default theories, the duality between causal and autoepistemic disappears. First, default theories may lack stable models; second, the prefix ¬L, as no causal prefix does, "generates" the assumptions needed.

The fact that the operator 'C' establishes a preference for the abnormality q over the abnormality p in a theory like ¬p ⇒ Cq raises the question of whether the semantics of causal theories can be understood in terms of prioritized circumscription [McCarthy, 1986]. The answer is a qualified no: there are causal theories for which no priority order on the abnormalities will render an equivalent behavior. The causal theory {¬a ∧ ¬b ⇒ Cc ∧ Cd, ¬c ∧ ¬d ⇒ Ca ∧ Cb}, for abnormalities a, b, c, and d, is one such example.
Still, the semantics of such theories could in principle be captured by defining priorities for non-atomic formulas.

Finally, another family of theories related to those treated in this paper is Shoham's [1988] causal theories. Shoham's causal theories are epistemic theories designed for efficient reasoning about change. They are interpreted by a preference criterion which rewards models in which "as little is known for as long as possible." While there is no direct correspondence between our causal theories and Shoham's, it seems possible to understand the intuition behind Shoham's chronological minimization in terms of the ideas of explanation and coherence. If we recall that we regard an abnormality α as explained in a class C when C validates a set of assumptions A which supports the truth of Cα, chronological minimization assumes α explained by A instead, when A supports the truth of α, without involving assumptions about times past α.

Acknowledgements. For the topic of this paper I have benefited from conversations with many people. I'd like to thank here K. Fine, M. Gelfond, M. Ginsberg, B. Grosof, K. Konolige, J. Pearl and H. Przymusinska.

References

[Apt et al., 1987] K. Apt, H. Blair, and A. Walker. Towards a theory of declarative knowledge. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 89-148. Morgan Kaufmann, Los Altos, CA, 1987.

[Fine, 1989] K. Fine. The justification of negation as failure. Proceedings of the 8th International Congress of Logic, Methodology and Philosophy of Science. North Holland, 1989.

[Geffner and Pearl, 1987] H. Geffner and J. Pearl. A framework for reasoning with defaults. Technical Report TR-94, Cognitive Systems Laboratory, UCLA, Los Angeles, CA, August 1987. To appear in Knowledge Representation and Defeasible Inference, H. Kyburg, R. Loui and G. Carlson (Eds), Kluwer, 1989.

[Geffner, 1989a] H. Geffner.
Default Reasoning: Causal and Conditional Theories. PhD thesis, UCLA, Los Angeles, CA, November 1989.

[Geffner, 1989b] H. Geffner. Default reasoning, minimality and coherence. Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, pages 137-148, Toronto, Ontario, 1989.

[Gelfond and Lifschitz, 1988] M. Gelfond and V. Lifschitz. The stable model semantics for logic programming. Proceedings 1988 Symposium on Logic Programming, pages 1070-1080, Cambridge, Mass., 1988. MIT Press.

[Gelfond, 1987] M. Gelfond. On stratified autoepistemic theories. Proceedings AAAI-87, pages 207-211, Seattle, Washington, 1987.

[Ginsberg and Smith, 1988] M. Ginsberg and D. Smith. Reasoning about action I: A possible worlds approach. Artificial Intelligence, 35:165-195, 1988.

[Hanks and McDermott, 1987] S. Hanks and D. McDermott. Non-monotonic logics and temporal projection. Artificial Intelligence, 33:379-412, 1987.

[Hughes and Cresswell, 1968] G. Hughes and M. Cresswell. An Introduction to Modal Logic. Methuen and Co. Ltd, London, Great Britain, 1968.

[Krishnaprasad et al., 1989] T. Krishnaprasad, M. Kifer, and D. Warren. On the circumscriptive semantics of inheritance networks. In Z. Ras and L. Saitta, editors, Methodologies for Intelligent Systems 4. North Holland, New York, N.Y., 1989.

[Loui, 1987] R. Loui. Defeat among arguments: A system of defeasible inference. Computational Intelligence, 1987.

[McCarthy, 1986] J. McCarthy. Applications of circumscription to formalizing commonsense knowledge. Artificial Intelligence, 28:89-116, 1986.

[McDermott, 1982] D. McDermott. A temporal logic for reasoning about processes and plans. Cognitive Science, 6:101-155, 1982.

[Moore, 1985] R. Moore. Semantical considerations on non-monotonic logics. Artificial Intelligence, 25:75-94, 1985.

[Pearl, 1988a] J. Pearl. Embracing causality in default reasoning. Artificial Intelligence, 35:259-271, 1988.

[Pearl, 1988b] J. Pearl.
Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, Los Altos, CA, 1988.

[Przymusinski, 1987] T. Przymusinski. On the declarative semantics of stratified deductive databases and logic programs. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 193-216. Morgan Kaufmann, Los Altos, CA, 1987.

[Shoham, 1988] Y. Shoham. Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence. MIT Press, Cambridge, Mass., 1988.

[Touretzky, 1986] D. Touretzky. The Mathematics of Inheritance Systems. Pitman, London, 1986.
Decidable Reasoning in First-Order Knowledge Bases with Perfect Introspection

Gerhard Lakemeyer
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada, M5S 1A4
e-mail: gerhard@ai.toronto.edu

Abstract

Since knowledge bases (KBs) are usually incomplete, they should be able to provide information regarding their own incompleteness, which requires them to introspect on what they know and do not know. An important area of research is to devise models of introspective reasoning that take into account resource limitations. Under the view that a KB is completely characterized by the set of beliefs it represents (its epistemic state), it seems natural to model KBs in terms of belief. Reasoning can then be understood as the problem of computing membership in the epistemic state of a KB. The best understood models of belief are based on possible-world semantics. However, their computational properties are unacceptable. In particular, they render reasoning in first-order KBs undecidable. In this paper, we propose a novel model of belief, which preserves many of the advantages of possible-world semantics yet, at the same time, guarantees reasoning to be decidable, where a KB may contain sentences in full first-order logic. Moreover, such KBs have perfect knowledge about their own beliefs even though their beliefs about the world are limited.

Introduction

Since the information contained in a knowledge base (KB) is usually incomplete, a KB should not only be able to answer queries about the domain in question but also about its own state of incompleteness. In other words, a KB should possess self-knowledge, which it gains by introspection. For the purposes of this paper, we assume that a KB is capable of perfect introspection, that is, it knows exactly what it knows¹ and does not know. Perfect introspection is not uncontroversial.
For example, one may want to restrict knowledge of what is not known to concepts that one is at least aware of (see [14] for a formalization of this idea). However, we have adopted perfect introspection here simply because it comes at no additional computational cost.

¹Although this paper is concerned with belief rather than knowledge, we nevertheless use both terms interchangeably.

An important area of research is to devise models of introspective reasoning that take into account resource limitations [9]. Under the view that a KB is completely characterized by the set of beliefs it represents (its epistemic state), it seems natural to model KBs in terms of belief. Reasoning can then be understood as the problem of computing membership in the epistemic state of a KB. The best understood models of belief are based on possible-world semantics [10, 8]. Most autoepistemic logics, a favorite framework to study introspection (e.g. [20, 17, 18]), specify epistemic states that have possible-world models, as shown in [21, 5, 17].

A big advantage of possible-world models is their simplicity. Unfortunately, they also make the assumption that a KB's (or agent's) beliefs are closed under logical consequence, a property often referred to as logical omniscience, which renders reasoning undecidable in first-order KBs. An important problem then is to find models of belief with better computational properties.

While there has been some progress in finding computationally attractive models of belief in the propositional case [14], adding quantifiers to the framework in the obvious way leads to undecidability. In this paper, we remedy this situation and propose a new model of belief that preserves much of the simplicity of possible-world semantics yet, at the same time, guarantees that reasoning in first-order KBs is computable.² As in [14, 17], we use an approach that allows us to model the beliefs of a KB directly within the logic.
Intuitively, a KB's epistemic state can be characterized as the set of all sentences that are believed given that the sentences in the KB are all that is believed or, as we will say for short, only-believed. This idea is formalized in a modal logic with two modal operators B and O for belief and only-believing, respectively. The epistemic state of a KB is characterized by the set of sentences α for which OKB ⊃ Bα is valid.³ The complexity of reasoning then reduces to the complexity of determining whether Bα follows logically from OKB for a given KB and α.

²In section "A Decidable KR Service," we also discuss restrictions under which reasoning is in fact tractable.

³Whenever KB occurs within a logical sentence, we mean the conjunction of all the sentences in the KB.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The main contribution of this paper lies in the novel model-theoretic account of belief and only-believing. In particular, KBs under this model of belief may contain arbitrary first-order sentences, and they have perfect knowledge about what they do and do not believe even though their beliefs about the world are limited. Most importantly, the beliefs of such KBs are computable.

In related work, Konolige [9] also addresses the issue of modeling introspection under resource limitations. However, rather than proposing an actual instance of a computationally attractive reasoner, he presents a general framework in which one can be formalized. Work regarding decidable forms of first-order entailment (without introspection) is also relevant [7, 2, 22, 4], since it is a useful starting point when considering limited forms of belief. In particular, Patel-Schneider's [22] and Frisch's [4] notions of entailment, which have a model-theoretic semantics, are good candidates as a basis for belief. Indeed, the logic developed in this paper⁴ is shown to subsume Patel-Schneider's t-entailment (Theorem 1).
By using a more expressive modal language, this paper goes beyond t-entailment by formalizing first-order beliefs about beliefs and agents introspecting on their own beliefs. At the same time, we are able to preserve the computational benefits of t-entailment. Finally, belief is formalized in fairly intuitive model-theoretic terms.

The rest of the paper is organized as follows. We begin by defining the syntax and semantics of OBL, the logic of belief and only-believing. This is followed by a brief discussion of the properties of belief. The next section formally establishes the decidability result regarding which beliefs follow logically from only-believing a KB. After that, we apply these ideas to the specification of KR service routines ASK and TELL in the sense of [15]. Finally, we summarize the results and outline extensions of the current framework and future directions.

The Logic OBL

The key aspects of this logic are the semantics and properties of belief and only-believing, which ultimately give us the specification of decidable, introspective reasoning in first-order KBs. Before turning to the technical definitions, we first introduce the two notions of belief informally. (The other logical connectives and quantifiers have the usual meaning.)

⁴A preliminary model which addressed neither self-knowledge nor only-believing appeared in [11]. However, ultimately it proved to be too complex and was abandoned in favor of the current, much simpler model.

Belief

Belief is defined in a possible-world fashion. Roughly, an agent is assumed to imagine a set of states of affairs or situations M. The agent is then said to believe a sentence α just in case α (or, as we will see below, a slightly modified α) holds in all situations in M. Except for the definition of situations, which are described in more detail below, this framework is very similar to one that defines the logic weak S5 [6].
In fact, as in weak S5, our approach results in agents capable of perfect introspection with respect to what they know and don't know.

In order to avoid the logical omniscience problem, we limit an agent's ability to reason by cases in the following two ways, which follow from the way situations are defined and used. For one, we allow beliefs not to be closed under modus ponens. For example, if p, q, and r are atomic facts, we allow an agent to believe (p ∨ q) ∧ (¬q ∨ r), yet fail to believe (p ∨ r). Another way reasoning by cases is limited is by weakening the link between disjunction and existential quantification, in the sense that an agent may believe P(a) ∨ P(b) for a predicate P and distinct terms a and b, yet fail to believe ∃xP(x).

In OBL, closure under modus ponens is avoided by decoupling the notions of truth and falsity within situations. Instead of assigning either true or false to atomic facts (predicates applied to individuals), situations are allowed to assign independent true-support and false-support to atoms. This corresponds to using four truth values {}, {true}, {false}, and {true, false}, an idea originally proposed to provide a semantics for a fragment of relevance logic called tautological entailment [1, 3].⁵ Note that the classical worlds of possible-world semantics are a special kind of situations, namely those where each atomic fact has either true- or false-support but not both. In OBL, classical worlds are used to provide the standard notions of truth and validity. Non-classical situations are only allowed to be part of what agents imagine (defining their beliefs). The fact that an agent's imagination can be incomplete and inconsistent provides some intuition for situations that assign neither true- nor false-support to an atom, or both true- and false-support.

In order to weaken the link between disjunction and existential quantification, OBL restricts the interpretation of existential quantifiers within belief.
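Before turning to quantifiers, the four-valued situations just described can be made concrete with a small propositional checker. This is only an illustrative sketch (the formula encoding and function names are ours, not the paper's): a situation assigns each atom a subset of {t, f}, giving independent true- and false-support, and an agent believes a formula when every imagined situation gives it true-support.

```python
from itertools import product

# Toy four-valued checker for the propositional fragment (names are ours).
# A situation maps each atom to a subset of {"t", "f"}: independent
# true-support and false-support, as in the Belnap/Dunn semantics.

def sup_t(phi, s):
    op = phi[0]
    if op == "atom": return "t" in s[phi[1]]
    if op == "not":  return sup_f(phi[1], s)
    if op == "or":   return sup_t(phi[1], s) or sup_t(phi[2], s)
    if op == "and":  return sup_t(phi[1], s) and sup_t(phi[2], s)

def sup_f(phi, s):
    op = phi[0]
    if op == "atom": return "f" in s[phi[1]]
    if op == "not":  return sup_t(phi[1], s)
    if op == "or":   return sup_f(phi[1], s) and sup_f(phi[2], s)
    if op == "and":  return sup_f(phi[1], s) or sup_f(phi[2], s)

def believes(phi, M):
    """Belief: true-support in every imagined situation."""
    return all(sup_t(phi, s) for s in M)

atoms = ["p", "q", "r"]
vals = [frozenset(), {"t"}, {"f"}, {"t", "f"}]
situations = [dict(zip(atoms, v)) for v in product(vals, repeat=3)]

p, q, r = ("atom", "p"), ("atom", "q"), ("atom", "r")
kb = ("and", ("or", p, q), ("or", ("not", q), r))
M = [s for s in situations if sup_t(kb, s)]   # largest set believing kb

assert believes(kb, M)                        # (p or q) and (not q or r)
assert not believes(("or", p, r), M)          # but p or r is not believed
```

The situation assigning q both t and f, and p, r nothing, supports the conjunction without supporting p ∨ r, which is exactly why belief here is not closed under resolution.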
Roughly, we require that an agent who believes the existence of an individual with a certain property must be able to name or give a description of that individual, although we do not require the agent to know who the individual is. More concretely, for ∃xP(x) to be believed there must be a closed term t (e.g. father(john)) such that P(t) is true in all accessible situations. In general, if the existential appears in the scope of universal quantifiers, the corresponding universals may (but need not) occur in the description chosen for the existential.

⁵Levesque [16] was the first to introduce the notion of four-valued situations to model a limited form of belief in a propositional framework.

Only-Believing

An agent who only-believes a sentence α believes α and, intuitively, believes as little else as possible. In other words, the agent is maximally ignorant while still believing α. As demonstrated in [14, 17], if belief is modeled by a set of situations, independent of whether they are four-valued or two-valued as in classical possible-world semantics, only-believing has a particularly simple characterization: an agent only-believes a sentence α if he or she believes α and the set of situations M the agent imagines is as large as possible, i.e., if we were to add any other world to M, the agent would no longer believe α.⁶

With the special treatment of existential quantification as outlined above, there is, however, one complication that needs to be addressed. Consider the sentence α = ∃xP(x). What should it mean for an agent, whose beliefs are modeled by a set of situations M, to only-believe α? Since a necessary requirement is that the agent believes α, there must be some closed term a such that P(a) is believed. It may be tempting to let M be the set of all situations where P(a) holds for some fixed closed term a. But that seems too strong.
For example, to say that all the detective believes is that someone is the murderer conveys a lot less information than all the detective believes is that the driver of the red car is the murderer.

One way around this problem is to require the terms that are used when only-believing an existentially quantified sentence to convey no information about the world. In other words, the terms should behave like skolem functions or internal identifiers. For that reason, we introduce a special set of function symbols which must be used when substituting existentials in the context of only-believing. Making sure that these function symbols carry no information about the world is treated as a pragmatic issue when applying the logic to knowledge bases. As we will see in section "A Decidable KR Service," a simple way of dealing with the issue is to enforce that a user interacting with a KB is prevented from asking queries or telling the system sentences containing these special function symbols.

⁶M need not be unique for the same reasons as there are multiple extensions in autoepistemic logic (see [14, 17]).

The Language L

The language is a modal first-order dialect with function symbols, which are partitioned into two countably infinite sets F_REG and F_SK of every arity. The latter contains the function symbols that will be used to interpret existential quantifiers in the context of only-believing. The members of F_SK are also referred to as Sk-functions. The language also contains a countably infinite set N of standard names, which are syntactically treated like constants. Their meaning is explained below.

Given the usual definitions of terms and atomic formulas, a primitive term (formula) is a term (atomic formula) with only standard names as arguments. We also distinguish a special primitive formula true (with the obvious intended meaning).
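The skolem-function role of the special symbols can be sketched with a miniature skolemizer over tuple-encoded formulas. This is a hedged illustration (the encoding, `skolemize`, and the `sk0, sk1, ...` names are ours, not the paper's): it merely shows an existential being replaced by a term built from a fresh function symbol applied to the enclosing universals, and it relies on no variable being bound twice, so naive substitution suffices.

```python
import itertools

# Sketch (hypothetical names): replace each existential by a fresh "Sk-term"
# f(U(x)), where U(x) are the universals in whose scope x is bound.
# Formulas are tuples like ("forall", "x", ("exists", "y", ("P", "x", "y"))).

_fresh = itertools.count()

def subst(phi, v, t):
    """Naive substitution; safe when no variable is bound twice."""
    if phi == v:
        return t
    if isinstance(phi, tuple):
        return tuple(subst(a, v, t) for a in phi)
    return phi

def skolemize(phi, universals=()):
    op = phi[0] if isinstance(phi, tuple) else None
    if op == "forall":
        _, v, body = phi
        return ("forall", v, skolemize(body, universals + (v,)))
    if op == "exists":
        _, v, body = phi
        sk = ("sk%d" % next(_fresh),) + universals   # the Sk-term f(U(x))
        return skolemize(subst(body, v, sk), universals)
    return phi

res = skolemize(("forall", "x", ("exists", "y", ("P", "x", "y"))))
assert res[0] == "forall" and res[2][0] == "P"
assert res[2][2][0].startswith("sk") and res[2][2][1:] == ("x",)  # y -> f(x)
```

Because each `sk` symbol is fresh and used nowhere else, it names the believed individual without conveying any information about the world, which is the pragmatic requirement stated above.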
The formulas of L are constructed in the usual way from the atomic formulas, the connectives ¬ and ∨, the quantifier ∃,⁷ and the modal operators B and O, with the restriction that formulas of the form Bα or Oα may not contain free variables. In other words, we ignore the issue of quantifying into belief (quantifying-in) as in ∃x(P(x) ∧ ¬BP(x)). This restriction will be lifted in a forthcoming paper [13]. To simplify the technical presentation below, we also require that no variable is bound more than once in a formula. Formulas without any occurrences of B or O are called objective, formulas without occurrences of O are called basic, and formulas whose predicate symbols all occur within the scope of a modal operator are called subjective. Sentences are, as usual, formulas without free variables.

Notation: Sequences of terms or variables are sometimes written in vector notation, e.g., a sequence of variables (x1, ..., xk) is abbreviated as x̄. Also, ∃x̄ stands for ∃x1...∃xk. If a formula α contains the free variables x1, ..., xk, then α[x1/t1, ..., xk/tk] (sometimes abbreviated as α[x̄/t̄]) denotes α with every occurrence of xi replaced by ti.

Definition 1 A quantifier within a formula α occurs at the objective level of α if it does not occur within the scope of a modal operator.

Definition 2 A formula α is existential-free iff α contains no existential quantifiers at the objective level within the scope of an even number of ¬-operators.

A Formal Semantics

The semantics of OBL relies on the notion of situations, which are essentially four-valued extensions of classical (two-valued) Kripke worlds [10]. All situations are defined over a fixed universe of discourse, which we take to be the standard names of the language. None of the results in this paper depends on this choice,⁸ but it greatly simplifies the technical presentation.
For example, the true- and false-support of predicates can be described by the true- and false-support of primitive formulas. Also, at each situation, the closed terms of the language are interpreted by mapping them into the standard names.

⁷Other logical connectives like ∧, ⊃, and ≡ and the quantifier ∀ are used freely and are defined in the usual way in terms of ¬, ∨, and ∃.

⁸The use of standard names as the universe of discourse becomes much more significant in the context of quantifying-in [15, 13].

Definition 3 Denotation Functions
A denotation function d is a mapping from closed
A term t is said to be an admissible substitution for x with respect to cx ifl every variable y in t is universally quantified in cx and x is bound within the scope of y. If the context is clear, we often say t is admissible for x or t is admissible. Definition 8 Sk-terms Let a be a sentence and x an existentially quantified variable bound at the objective level of a. Let U(x) be a sequence of the universally quantified variables in whose scope x is bound. Let f E J& be a function sym- bol of arity IU(x)l occurring nowhere else in CY. Then f(U(x)) is called an Sk-term (for x). Note that Sk-terms are also admissible. Definition 9 Let Q be a sentence and let Z = (ml..., xk) be a sequence of the existentially quanti- &ed variables bound at the objective level of a. Let t = (t1 , . . . , tk) be a sequence of terms s.t. ti is admis- sible for xi for all i. c$[Z/q denotes cx with all 3xi removed and with all occurrences of xi replaced by t;. To illustrate the previous definition, let (Y = ;(;~Y(P(x,Y) V -3zQ(z))), tl a, and t2 Then both tl and t2 ar: admissible an: a’[x;tl, +a] = (+‘Y(+, Y) V lQ(f ($>))a We are now in a position to define the semantic rules 534 KNOWLEDGERJ~PRESENTA~ON for the sentences of G. The rules except for B and 0 are no different from those in classical logic except that they are somewhat more longwinded because the true- and false-support of sentences have to be considered separately. Let s be a situation and M a set of situations. The true- ( +r ) and false-support ( +r ) relations for sen- tences in L are defined as: (Let P(o be an atomic sen- tence. cx and /3 are sentences except in rule 4., where o! 1. 2. 3. 4. 5. 6. may contain the free variable x.) M, s b=T ~(9 ++ P(d(g) E T, where s = (T, F, d) M,s l==~ p(g - P@(g) E F M,S+T~CY w M,++ M,s +=F icy c M,s +=T (Y M&T~VP e M,s~+y-M,skd M,sbFaVp a M,sbFaandM,sb& M,s~~ZIX~ _ forsomenEN M,sb~az M,s~~~xcY. -forallnEN M,sbFag For the following rules, let d = (xi, . . . 
, xk) be a sequence of the existentially quantified variables bound at the objective level of cr. M,s+rBa _ for all s’, ifs’ E M then M, s’ k=T a3[Z/fl, where t”is a sequence of admissible terms. M,sbFBa c--“, M,s#TB~ M,sbTO~ _ for all s’, s’ E M iff M, s’ b* J’[Z/&K], where <sk is a sequence of distinct Sk-terms. M,s~~OCX _ M,~~TOLY Note that, in the definition of B and 0, the same t’or & must be chosen for all s’ E M. Also note how the definition of only-believing differs only in two places from that of belief. For one, the terms that can be sub- stituted for existentials are restricted to mention ex- actly one Sk-function (the “internal identifiers”). The only other change involves replacing the “if” in the def- inition of belief by an “iff”. This ensures that the set of situations M is as large as possible. The notions of truth, logical consequence, validity, and satisfiability are defined with respect to worlds and non-empty sets of situations. A formula cx is true at a non-empty set of situa- tions M and a world w if M, w FT a. a is false if M, w Fr CY. A formula CY is valid (F(Y) iff cx is true at every world w and every non-empty set of situations M. a is satisfiable iff lo is not valid.g ‘In [17, 141 ‘t 1 is shown that using arbitrary sets of situ- ations has the unintuitive effect that what is only-believed at an epistemic state (represented by a set of situations) is not completely determined by the basic beliefs at that state. This flaw can be overcome by using so-called maximal sets of situations [17, 141. S ince this issue is independent from Finally, if cx is objective, we often write s +r o in- disjunction are interpreted in the usual way and all the stead of M, s br cy, since nothing in the interpretation non-standard aspects are pushed into the semantics of of a depends on M (similar for s FF a). B and 0. Properties of Belief Apart from belief and only-believing, the logic behaves much like a classical first-order logic. 
For example, all the substitution instances of sentences that are valid in classical FOL are also valid in OBJ. In the rest of this section, we present important properties of belief. The following four examples of invalid sentences il- lustrate in what ways belief is not closed under classi- cal logical implication. Let P(a) and Q(b) be distinct atomic formulas. #W(4 v -p(a)> believed Valid sentences need not be #BP(u) A B(p(a) 3 Q(b)) 3 BQ(b) Ponens No Modus #BP(u) A BlP(u) > BQ(b) Inconsistent beliefs do not imply believing everything #B@(a) v P(b)) > B3xP(x) No existential generalization from disjunctions While the first three examples are a direct consequence of the four-valued situations, the fourth is a result of the special treatment of existential quantifiers within belief. Next we list some of the valid sentences concerning belief, which give an indication of what can be con- cluded from a given belief. Let cy and ,6 be arbitrary sentences. +Btrue A lB+,rue bB(cy A p) E BCY A B/3. /=B0B(oVp)AB(PVo) /=BVzcu > BCY: for any closed term t. k=BaF > B3zcu, where x is free in o and t is any closed term. +Ocr > Ba Believing objective sentences is strongly related to Patel-Schneider’s t-entailment: Theorem 1 OBL subsumes t-entailment Let CY and /3 be objective sentences containing neither standard names nor occurrences of true.” Then j==Ba > BP iff a bt ,8. This result is significant in itself, since it can be viewed as providing a new semantics for t-entailment. The original semantics of t-entailment has the peculiar property of interpreting disjunction in a non-standard way. In OBL, all the classical connectives including the main concern of this paper, we have chosen to ignore it here. “These restrictions are necessary in order to match the language of t-entailment. We conclude this section with a list of properties concerning self-knowledge. Let cy and /3 be arbitrary sentences and let p and cr be subjective sentences. 
Perfect Introspection: ⊨ Bα ⊃ BBα and ⊨ ¬Bα ⊃ B¬Bα
Self-Knowledge is Accurate: ⊨ Bσ ⊃ σ
Self-Knowledge is Complete: ⊨ σ ⊃ Bσ
Self-Knowledge is Consistent: ⊨ Bσ ⊃ ¬B¬σ
Self-Knowledge is Closed Under MP: ⊨ Bρ ∧ B(¬ρ ∨ σ) ⊃ Bσ

The above results show that an agent with this model of belief has perfect knowledge about her own beliefs even if her beliefs about the world are limited.

Computing What an Objective KB Knows

The intuition behind only-believing the sentences in a knowledge base has been to capture what a KB knows or what epistemic state the KB represents. Ideally, one would like OKB to pick out a unique epistemic state from the range of states defined by the logic (in the form of sets of situations). Unfortunately, this is not the case for arbitrary KBs. For one, if the KB is not objective, OKB may be satisfied in multiple epistemic states for the same reason as there are multiple extensions in other autoepistemic logics such as [20]. Unlike other autoepistemic logics, OKB does not represent a unique epistemic state even if KB is objective. For example, O∃xP(x) is satisfied by {s | s ⊨T P(a)} for any constant a ∈ F_SK. On the other hand, all those states are isomorphic up to renaming of Sk-functions. Moreover, they agree on all beliefs not mentioning Sk-functions. In general, we obtain:

Theorem 2 Let KB be an objective sentence. Then for any sentence α not containing function symbols from F_SK, exactly one of ⊨ OKB ⊃ Bα or ⊨ OKB ⊃ ¬Bα holds.

From a KB user's point of view, this result can be explained as follows: the user is not sure which internal identifiers (Sk-terms) the KB has chosen for its existentially quantified variables, thus allowing for multiple possible epistemic states. However, the beliefs of the KB that matter to a user are those that are free of Sk-terms, and those, according to the theorem, are uniquely determined by the KB. In the next section, this view will be made explicit by defining routines that allow a user to interact with a KB.
In the rest of this section, we prove that it is in fact decidable whether a belief without Sk-functions follows from only-believing a KB. Although the decidability result holds for beliefs containing O's, we restrict our attention to basic beliefs in order to simplify the presentation.

The idea behind the procedure for deciding whether a KB believes α, i.e., whether OKB ⊃ Bα is valid, is as follows. First we replace all occurrences of subsentences of the form Bγ in α by true or ¬true depending on whether γ is believed or not. This evaluation proceeds from the innermost occurrence of B to the outermost, so that at each step we are asking whether an objective sentence is believed, which can be computed using Patel-Schneider's decidable t-entailment [22].

To perform the reduction, we need the following definitions.

Definition 10 Let KB and α be objective, α without Sk-functions.
RES[KB, α] = true if ⊨ OKB ⊃ Bα, and ¬true if ⊨ OKB ⊃ ¬Bα.

Definition 11 Let KB be objective and α basic.
‖α‖KB = α, for objective α
‖¬α‖KB = ¬‖α‖KB
‖α ∨ β‖KB = ‖α‖KB ∨ ‖β‖KB
‖∃xα‖KB = ∃x ‖α‖KB
‖Bα‖KB = RES[KB, ‖α‖KB]

The following three results are key to establishing decidability.

Lemma 1 If KB is objective and α basic without Sk-functions, then ⊨ OKB ⊃ Bα iff ⊨ OKB ⊃ B‖α‖KB.

Lemma 2 If KB and α are objective, then ⊨ OKB ⊃ Bα iff ⊨ BKB ⊃ Bα.

Theorem 3 (Patel-Schneider) t-entailment is decidable.

With these intermediate results, the main theorem can be proven.

Theorem 4 The validity problem for sentences of the form OKB ⊃ Bα is decidable, assuming that KB is an objective sentence and α is a basic sentence not containing Sk-functions.

Proof: Lemma 1 implies that deciding whether a KB believes an arbitrary basic sentence reduces to deciding whether it believes an objective sentence. Thus let us assume that both KB and α are objective.
Next, without loss of generality, we replace every standard name in KB and α by a new constant occurring nowhere else. In addition, we simplify both sentences in case they contain occurrences of true (e.g. γ ∨ ¬true reduces to γ). Then ⊨ OKB ⊃ Bα iff ⊨ BKB ⊃ Bα (by Lemma 2) iff (a) KB = false or (b) α = true or (c) KB ⊨t α (by Theorem 1), which is decidable (Theorem 3).

A Decidable KR Service

In this section, we apply the results of this paper to the specification of a KR service in the sense of [15]. The idea is that a KB can be defined in purely functional terms by two operations ASK and TELL that allow a user to ask the KB queries and to add new information to it. All a user has to know about is an interaction language in which to phrase queries and updates. By defining the interaction language to consist of the basic sentences of L that do not contain Sk-functions, the results of the previous section can be readily applied to define ASK and TELL. Note that, from a user's point of view, the absence of Sk-functions is of no concern, since there is an infinite supply of other function symbols (F_REG) at hand.

Definition 12 ASK and TELL
Let KB be an objective sentence and α a basic sentence without Sk-functions.
ASK[KB, α] = YES if ⊨ OKB ⊃ Bα, and NO if ⊨ OKB ⊃ ¬Bα.
TELL[KB, α] = KB ∧ ‖α‖KB.

Note the way TELLing a sentence α to a KB is handled. Any occurrence of a Bγ within α is first evaluated with respect to the old KB, with the effect that an objective KB is always transformed into another objective KB. ASK and TELL are also implementable, which follows easily from the last section.

Corollary 1 ASK and TELL are decidable.

Apart from being decidable, are these routines also efficient? To answer this question, note that the complexity for both operations is dominated by the complexity of t-entailment, which follows easily from the way queries are evaluated using Definitions 10 and 11.
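Definitions 10 and 11 suggest a direct implementation for the propositional fragment. The sketch below uses our own names throughout, and a naive enumeration of four-valued situations stands in for Patel-Schneider's actual t-entailment procedure (so it is exponential rather than tractable); it reduces a basic query to an objective one inside-out and then tests t-entailment:

```python
from itertools import product

# Sketch of Definitions 10-11, ASK, and TELL for the propositional fragment.
# Formulas: ("atom",p), ("true",), ("not",f), ("or",f,g), ("and",f,g), ("B",f).
# t-entailment is decided naively over four-valued situations (names ours).

def sup(phi, s, pol=True):
    """True-support (pol=True) or false-support (pol=False) of phi at s."""
    op = phi[0]
    if op == "atom": return ("t" if pol else "f") in s[phi[1]]
    if op == "true": return pol        # true in T, never in F
    if op == "not":  return sup(phi[1], s, not pol)
    a = sup(phi[1], s, pol); b = sup(phi[2], s, pol)
    if op == "or":   return (a or b) if pol else (a and b)
    if op == "and":  return (a and b) if pol else (a or b)

def atoms(phi):
    if phi[0] == "atom": return {phi[1]}
    return set().union(*map(atoms, phi[1:])) if len(phi) > 1 else set()

def t_entails(kb, alpha):
    avs = sorted(atoms(kb) | atoms(alpha))
    vals = [frozenset(), {"t"}, {"f"}, {"t", "f"}]
    return all(sup(alpha, dict(zip(avs, v)))
               for v in product(vals, repeat=len(avs))
               if sup(kb, dict(zip(avs, v))))

def res(kb, alpha):                    # Definition 10: RES[KB, alpha]
    return ("true",) if t_entails(kb, alpha) else ("not", ("true",))

def reduce_(kb, phi):                  # Definition 11: ||phi||_KB
    if phi[0] == "B":
        return res(kb, reduce_(kb, phi[1]))
    if phi[0] in ("not", "or", "and"):
        return (phi[0],) + tuple(reduce_(kb, a) for a in phi[1:])
    return phi

def ask(kb, alpha):                    # ASK via Lemma 1 and t-entailment
    return "YES" if t_entails(kb, reduce_(kb, alpha)) else "NO"

def tell(kb, alpha):                   # TELL[KB, alpha] = KB and ||alpha||_KB
    return ("and", kb, reduce_(kb, alpha))

p, q = ("atom", "p"), ("atom", "q")
kb = ("and", p, ("or", ("not", p), q))    # believes p and (not p or q)
assert ask(kb, p) == "YES"
assert ask(kb, q) == "NO"                 # no modus ponens: q is not believed
assert ask(kb, ("B", p)) == "YES"         # introspection via the reduction
```

Note how TELLing first pushes the reduction through the new sentence, so an objective KB always stays objective, mirroring the remark on Definition 12.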
Patel-Schneider [22] shows that, while t-entailment is intractable in general, it is indeed tractable under the following assumptions: the KB is in conjunctive normal form (CNF); queries, when converted into CNF, are of size at most log(|KB|); individual clauses are of constant size; and finally, only log(|KB|) clauses in the KB subsume a given clause in the query. From a KR point of view, these assumptions seem quite reasonable. The last condition, for example, can be satisfied if the KB uses many different predicates.

Conclusions

In this paper, we have developed a new model of belief and only-believing with perfect introspection for a full first-order language with function symbols. Most importantly, the model of belief has attractive computational properties in that it specifies first-order knowledge bases whose epistemic states are computable and, under certain assumptions, efficiently computable.

There are several ways this framework can be extended. In a forthcoming paper [13], we show how an equality predicate and quantifying-in can be incorporated, which allows us to make important distinctions between "knowing that" and "knowing what."

The deductive component of the current framework is rather weak. One way of increasing its power is by using a sorted logic approach as in [4]. Also, the work by McAllester et al. [19] seems applicable in this context.

Finally, the logic developed here captures aspects of nonmonotonic reasoning similar to other autoepistemic logics. For example, the default assumption that Tweety flies unless known otherwise is captured by the valid¹¹ sentence

O[¬B¬Fly(tweety) ⊃ Fly(tweety)] ⊃ BFly(tweety).

In the first-order case, it has so far been very difficult to investigate how default reasoning affects the overall complexity of reasoning because the underlying deductive component is already undecidable.
Our framework allows us, for the first time, to investigate this issue with a decidable deductive component in hand.

Acknowledgements

I would like to thank Hector Levesque for many stimulating discussions on the subject of modeling belief. His comments on the paper and those of the anonymous referees are greatly appreciated.

References

[1] Belnap, N. D., A Useful Four-Valued Logic, in G. Epstein and J. M. Dunn (eds.), Modern Uses of Multiple-Valued Logic, Reidel, 1977.
[2] Davis, M., Obvious Logical Inferences, in Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, B.C., 1981, pp. 530-531.
[3] Dunn, J. M., Intuitive Semantics for First-Degree Entailments and Coupled Trees, Philosophical Studies 29, 1976, pp. 149-168.
[4] Frisch, A. M., Knowledge Retrieval as Specialized Inference, Ph.D. Thesis, University of Rochester, Department of Computer Science, 1986.
[5] Halpern, J. Y. and Moses, Y. O., Towards a Theory of Knowledge and Ignorance: Preliminary Report, in Proceedings of The Non-Monotonic Workshop, New Paltz, NY, 1984, pp. 125-143.
[6] Halpern, J. Y. and Moses, Y. O., A Guide to the Modal Logics of Knowledge and Belief, in Proc. of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, 1985, pp. 480-490.
[7] Ketonen, J. and Weyhrauch, R., A Decidable Fragment of Predicate Calculus, Theoretical Computer Science 32, 1984, pp. 297-307.
[8] Hintikka, J., Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press, 1962.
[9] Konolige, K., A Computational Theory of Belief Introspection, in Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, 1985, pp. 502-508.
[10] Kripke, S. A., Semantical Considerations on Modal Logic, Acta Philosophica Fennica 16, 1963, pp. 83-94.
[11] Lakemeyer, G., Steps Towards a First-Order Logic of Explicit and Implicit Belief, in Proc.
of the Conference on Theoretical Aspects of Reasoning about Knowledge, Asilomar, California, 1986, pp. 325-340.
[12] Lakemeyer, G., Decidable Reasoning in First-Order Knowledge Bases with Perfect Introspection, Technical Report, Department of Computer Science, University of Toronto, in preparation.
[13] Lakemeyer, G., A Model of Decidable, Introspective Reasoning with Quantifying-In, in preparation.
[14] Lakemeyer, G. and Levesque, H. J., A Tractable Knowledge Representation Service with Full Introspection, in Proc. of the Second Conference on Theoretical Aspects of Reasoning about Knowledge, Asilomar, California, 1988, pp. 145-159.
[15] Levesque, H. J., Foundations of a Functional Approach to Knowledge Representation, Artificial Intelligence 23, 1984, pp. 155-212.
[16] Levesque, H. J., A Logic of Implicit and Explicit Belief, Tech. Rep. No. 32, Fairchild Lab. for AI Research, Palo Alto, 1984.
[17] Levesque, H. J., All I Know: A Study in Autoepistemic Logic, Artificial Intelligence 42, 1990, pp. 263-309.
[18] Marek, W. and Truszczyński, M., Relating Autoepistemic and Default Logics, in Proc. of the First International Conference on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, CA, 1989, pp. 276-288.
[19] McAllester, D., Givan, B., and Fatima, T., Taxonomic Syntax for First Order Inference, in Proc. of the First Int. Conf. on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, 1989, pp. 289-300.
[20] Moore, R. C., Semantical Considerations on Nonmonotonic Logic, in Proc. of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, FRG, 1983, pp. 272-279.
[21] Moore, R. C., Possible World Semantics for Autoepistemic Logic, in The Non-Monotonic Reasoning Workshop, New Paltz, NY, 1984, pp. 344-354.
[22] Patel-Schneider, P. F., Decidable, Logic-Based Knowledge Representation, Ph.D. thesis, University of Toronto, 1987.
¹¹Note that, even though OBL itself is monotonic, the epistemic states of KBs, which are specified in terms of only-believing, are nonmonotonic.
A Theory of Multiple Agent Nonmonotonic Reasoning

Leora Morgenstern
leora@ibm.com
IBM T.J. Watson Research
P.O. Box 704, Mail Stop H1N08
Yorktown Heights, N.Y. 10598
(914) 784-7151

Abstract

This paper presents a formal theory of multiple agent non-monotonic reasoning. We introduce the subject of multiple agent non-monotonic reasoning as a field of inquiry and motivate it in terms of its applications to commonsense reasoning. We extend Moore's [1985] Autoepistemic Logic to the multiple agent case, and show that the resulting logic is too weak for most applications of commonsense reasoning. We then suggest some possible sets of principles for a logic of multiple-agent non-monotonic reasoning, based on the concept of an agent's arrogance towards his knowledge of another agent's ignorance. While the principles of arrogance are in general too strong, we demonstrate that restricted versions of these principles can work quite well for commonsense reasoning. In particular, we show that a restricted form of the principle of arrogance yields results that are equivalent to EMAT [Morgenstern, 1989], a non-monotonic logic which was designed to reason about temporal projection in epistemic contexts.

1. Introduction and Motivation

Past research in non-monotonic reasoning (cf. [Ginsberg, 1987]) has concentrated almost exclusively on the single-agent case.¹ The focus has been either on how a single agent uses default rules of reasoning ([Reiter, 1980], [McDermott and Doyle, 1980]), or how an agent reasons about his own beliefs [Moore, 1985]. In fact, much practical non-monotonic reasoning involves reasoning about multiple agents. An agent who wishes to predict what another agent believes must reason about how the second agent's reasoning is non-monotonic. If I tell Susan that I have just bought a bird, I should be able to predict that Susan will believe that the bird flies.
In order to do that, I will have to understand how Susan reasons with default rules and to know the default rules that Susan uses.

¹A notable exception is the work of [Perrault, 1987] and [Appelt and Konolige, 1988] on speech acts. The emphasis there, however, is on the default assumptions that the speaker [resp. hearer] of a speech act must make about the hearer's [resp. speaker's] beliefs. There is no discussion of the ways in which agents reason about other agents' abilities to reason non-monotonically, the focus of this paper.

The ability to reason about how other agents reason non-monotonically is particularly crucial for multi-agent planning domains. An agent in a multi-agent domain who constructs any complex plan will have to reason about the ways in which other agents' actions interact with his plan. That is, he must be able to predict how other agents will act. This, in turn, entails having some set of beliefs about other agents' beliefs and goals.

For example, suppose that I plan to meet Carol at a restaurant five blocks from my home at 8 P.M. Carol lives 20 miles away, and I know that she assumes that it typically takes 40 minutes to travel that particular stretch of road. I can thus reason that Carol leaves her home at approximately 7:20 P.M. If at 7:40, I turn on the radio and discover that there is a major traffic jam on the route that Carol takes, I can conclude that Carol will be quite late. Most likely, I will not go down to the restaurant at 8, but will postpone going down until sometime later. In order to engage in this sort of reasoning, it is necessary for me to reason about Carol's default beliefs regarding the time it takes her to travel.

Other more complex examples involve the frame problem and temporal projection. Agents who plan must reason about what stays the same when actions are performed. Typically, agents perform temporal projection by doing some sort of non-monotonic reasoning.
If I count on someone else to perform an action in my plan, I must be able to reason about how he performs temporal projection. Thus, I must be able to reason about how he reasons non-monotonically. A specific example of this sort of reasoning is given in Section 3.

Another rich source of examples comes from speech acts theory. Gricean theory [Grice, 1957] is best modelled in terms of mutual default assumptions about the other agents' beliefs regarding the conventions that they both hold. Russell [1987] has argued that communication is enabled by a mutual absence of doubt on the part of the agents that they have different conventions. This mutual absence of doubt is likewise best modelled by multi-agent non-monotonic reasoning.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

2. Extending AEL to MANML

We begin our formalization of a Multiple-Agent Non-Monotonic Logic (which we will call MANML) by extending a standard theory of plausible reasoning to the multi-agent case. There are several candidates for such a theory: Circumscription [McCarthy, 1980], Default Logic [Reiter, 1980], Non-monotonic Logic [McDermott and Doyle, 1980], and Autoepistemic Logic (AEL) [Moore, 1985]. We choose to extend Moore's AEL to the multiple-agent case.² This is primarily because the semantics underlying AEL is that of belief, and when we talk about agents engaging in multiple-agent non-monotonic reasoning, we most often talk in terms of knowledge and belief. I believe that Carol believes that it takes 40 minutes to get from her home to my neighborhood; Susan believes that my bird can fly. It is therefore reasonable to formulate such a system of reasoning within a logic that makes explicit reference to an agent's beliefs.

AEL was designed to formalize how an agent reasons about his own beliefs.
Sentences of AEL are defined by the following rules: (1) if φ is a formula of the predicate calculus, φ ∈ AEL; (2) if φ ∈ AEL, then Lφ ∈ AEL, where L is the standard belief operator; (3) if φ and ψ are sentences of AEL, so are φ ∧ ψ and ¬φ.

We say a theory T of AEL is a stable set if it obeys the following three rules:
[1] T is closed under logical consequence
[2] if P ∈ T, then LP ∈ T
[3] if P ∉ T, then ¬LP ∈ T.

That is, AEL models a perfect reasoner who has perfect positive and negative introspection.

Since Moore only considered single agents, there was no need to index the belief operator L. Since we are modelling a multiple agent world, we do. We thus state the formation rules of MANML as follows: (1) if φ is a sentence of the predicate calculus, φ is a sentence of MANML; (2) if φ is a sentence of MANML, L_a φ is a sentence of MANML, where a is a constant of the language that represents an agent; (3) if φ and ψ are sentences of MANML, so are φ ∧ ψ and ¬φ.

Once we introduce multiple agents into the theory, the stable set formation rules of AEL should no longer hold. If P is in T, we do not necessarily want to say that L_a P is in T, for any a. Just because a formula is true in a theory, we would not want to say that any agent believes that formula.

²Moore's Autoepistemic Logic has in some sense been superseded by Konolige's extensions to it [Konolige, 1987]. We use here Konolige's first extension: his move from propositional to predicate logic. In subsequent extensions, Konolige also gave a stronger notion of groundedness that eliminates circular reasoning. We choose the first extension as a starting point because of its simplicity.

Nevertheless, we wish to get the effect of these stable set formation rules, so that agents can reason autoepistemically, and so that agents can reason about other agents reasoning autoepistemically. The simplest way to do this is to alter the stable set formation rules by adding an explicit level of indexing in the obvious way.
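The three stable-set conditions can be mimicked for a small finite fragment. The following sketch is illustrative only: the objective part is a finite set of atoms, and membership stands in for full closure under logical consequence; all names are invented here.

```python
# Illustrative fragment of the stable-set conditions [1]-[3]: beliefs
# are strings (atoms), ('L', phi) for L phi, ('not', ('L', phi)) for
# ~L phi. The objective part is a finite set of atoms, and membership
# approximates closure under logical consequence -- an assumption made
# only to keep the sketch small.

def in_stable_set(query, objective):
    if isinstance(query, str):                  # objective atom
        return query in objective
    op = query[0]
    if op == 'L':                               # [2]: P in T  =>  LP in T
        return in_stable_set(query[1], objective)
    if op == 'not' and query[1][0] == 'L':      # [3]: P not in T  =>  ~LP in T
        return not in_stable_set(query[1][1], objective)
    raise ValueError("fragment handles atoms, L, and ~L only")

T0 = {'bird_tweety'}
assert in_stable_set(('L', 'bird_tweety'), T0)            # positive introspection
assert in_stable_set(('L', ('L', 'bird_tweety')), T0)     # iterated beliefs
assert in_stable_set(('not', ('L', 'flies_tweety')), T0)  # negative introspection
```

Note how rule [3] turns *absence* from the objective part into a positive membership fact for ¬LP, which is exactly the behavior the indexed MANML rules below must reproduce per agent.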
This yields the following set of rules:
0. if P1, ..., Pn ∈ T and P1, ..., Pn ⊢ Q, then Q ∈ T
1. if L_a P1, ..., L_a Pn ∈ T and P1, ..., Pn ⊢ Q, then L_a Q ∈ T
2. if L_a P ∈ T, then L_a L_a P ∈ T
3. if L_a P ∉ T, then L_a ¬L_a P ∈ T

Note that rule 3. is close, but not identical, to the principle of negative introspection in theories of belief: ¬L_a P ⊃ L_a ¬L_a P.

If we also assume that agents never believe contradictions, we get the following consequences:
4. if L_a L_a P ∈ T, then L_a P ∈ T
5. if L_a ¬L_a P ∈ T, then L_a P ∉ T

Default rules must also be indexed appropriately. Bill's belief that he would know if he had an older brother is represented as L_Bill(P ⊃ L_Bill P), where P stands for the sentence: Bill has an older brother. Suppose L_Bill P ∉ T. By 1., L_Bill(¬L_Bill P ⊃ ¬P). But, since L_Bill P ∉ T, by 3., L_Bill ¬L_Bill P ∈ T. Thus, by 1., L_Bill ¬P. Thus, we can see that the MANML stable set formation rules allow an agent to reason from his lack of belief in a particular proposition to his belief that the proposition is false.

The MANML stable set formation rules 1.-3. were chosen so that agents in MANML could perform autoepistemic reasoning. To show that this is the case, we must prove a formal connection between AEL's and MANML's stable set formation rules. We begin with some definitions:

Let T be a set of sentences of MANML. T is single-indexed if all occurrences of L are indexed by the same constant. T is epistemically closed if all sentences in T are of the form L_a P for some a and P. For example, the theory {L_b(P ⊃ L_b P)} is single-indexed and epistemically closed; the theory {P ∧ Q ⊃ L_a P} is single-indexed but not epistemically closed; and the theory {L_a P, L_b Q} is not single-indexed, but is epistemically closed.

We define the following function on single-indexed theories: Strip(T) replaces every instance of L_a in T by L. We then have the following theorem:

Theorem 1: Let T be a single-indexed and epistemically closed collection of sentences of MANML. Let T_a = {φ | L_a φ ∈ T}. Then L_a P is a MANML stable-set consequence of T iff P is an AE stable-set consequence of Strip(T_a).

The theorem follows directly from the definitions.

Note that, in restricted cases, MANML seems to permit other agents to reason about an individual agent's autoepistemic reasoning. Assume that the principle of negative introspection, ¬L_a P ⊃ L_a ¬L_a P, holds in T. Let Q stand for the sentence: Alex has an older brother. Now suppose that T contains the following axioms:

L_Bill L_Alex(Q ⊃ L_Alex Q) (Bill believes that Alex believes that if Alex had an older brother, Alex would know about it)

and

L_Bill(¬L_Alex Q)

By negative introspection and rule 1. of MANML, L_Bill L_Alex ¬L_Alex Q, and thus, by rule 1., we get L_Bill L_Alex ¬Q.

It should be noticed that in the foregoing example, Bill did not really reason about Alex's autoepistemic reasoning abilities at all. He started with two beliefs about Alex: Alex's default belief that if he had an older brother, he would know about it, and that Alex didn't believe he had an older brother. Given Bill's explicit belief that Alex did not have a belief about having an older brother, he was able to conclude that Alex believed that he did not have an older brother, using only the principles of negative introspection and consequential closure. But this goes against the spirit of autoepistemic reasoning. The point is to start out from the positive beliefs in one's data base, use the stable set principles to conclude that there are beliefs that one doesn't have, and to go from there to negative beliefs. One should not have to explicitly assume the lack of positive beliefs in order to conclude that one has negative beliefs.

Similarly, suppose Susan believes that birds typically fly. Following Konolige's [1987] suggestion for representing default rules in autoepistemic logic, adding the appropriate indexing, and doing universal instantiation, we get:

L_Susan(L_Susan Bird(X) ∧ ¬L_Susan ¬Fly(X) ⊃ Fly(X)), where X stands for Tweety.

Suppose also that Susan knows that Tweety is a bird: L_Susan Bird(X), and that James knows that Susan has these beliefs:

L_James L_Susan(L_Susan Bird(X) ∧ ¬L_Susan ¬Fly(X) ⊃ Fly(X))
L_James L_Susan Bird(X)

In order for James to conclude that Susan believes that Tweety flies, James must also believe that Susan does not believe that Tweety cannot fly. That is, there must exist in T the belief: L_James ¬L_Susan ¬Fly(X). Then the desired conclusion, L_James L_Susan Fly(X), follows by negative introspection and consequential closure.

Again, these constraints go directly against the spirit of non-monotonic reasoning. The whole point is that agents need not have explicit knowledge of the conditions that are assumed to be true by default.

The question, then, at the heart of a system of multiple-agent non-monotonic reasoning is this: How is one agent to reason about a second agent's non-monotonic reasoning abilities? What can any agent coherently conclude about the beliefs that another agent does not have?

It is crucial to note that the multi-agent case is not at all symmetric with the single-agent case. In the single-agent case the given theory was a complete description of the mind of some agent. In the multi-agent case, agents have at best a partial description of other agents' beliefs.

The core of our approach to modelling multi-agent non-monotonic reasoning is this: Agents reason about how other agents reason non-monotonically by making default assumptions about what these agents do not believe. We make two important observations: Firstly, one agent may incorrectly assume that a second agent does not believe some statement P. Thus, the first agent's default assumptions are defeasible. In this sense, MANML is very different from AEL, which, as Moore pointed out, is not defeasible at all. Secondly, this strategy embodies a certain amount of arrogance. An agent who reasons about a second agent's non-monotonic reasoning process must be arrogant with respect to his beliefs about the limitations of the second agent's beliefs. That is, he must in some sense believe that he knows all that is important to know about the second agent's beliefs.

Even if necessary, arrogance is not an attractive quality, and in too large doses it will certainly lead to wrong conclusions. Our aim, therefore, is to limit this arrogance as much as possible. For any default rule of the form L_a α ∧ ¬L_a β ⊃ γ, let us call ¬L_a β the negative part of the rule, since it deals with an agent's negative beliefs. To enable multi-agent non-monotonic reasoning, we need only assume that agents are arrogant with respect to the negative parts of the default rules.

A first step towards a principle of inference for MANML might therefore be: If an agent X believes that a second agent Y believes some default rule L_Y α ∧ ¬L_Y β ⊃ γ, and X believes that Y believes α and has no reason to believe that Y believes β, X can conclude that Y believes γ. Formally:

Suppose L_X L_Y(L_Y α ∧ ¬L_Y β ⊃ γ) ∈ T, L_X L_Y α ∈ T, and L_X L_Y β ∉ T. Then L_X L_Y γ ∈ T.

Note that the L_Y α part of the rule may be empty; the rule thus covers autoepistemic rules of the form: P ⊃ L_X P.

We will call the above principle the Principle of Moderate Arrogance (PMA). If Q can be inferred from a theory T using the MANML stable set principles and PMA, we say Q is a PMA-stable-set consequence of T. It can easily be seen that the Principle of Moderate Arrogance allows us to model in a rational manner how Bill comes to conclude that Alex believes that he has no older brother. In particular, it is an instance of the following theorem:

Theorem 2: Let T = {L_X L_Y(P ⊃ L_Y P)} be a theory of MANML + PMA. Then L_X L_Y ¬P is a PMA-stable-set consequence of T.

Often, even an arrogant agent finds it worthwhile to be more circumspect about ascribing the absence of beliefs to other agents.
This is particularly the case when the arrogant agent does believe the negative part of some default rule. It is very often difficult to believe that someone knows less than you do; so if you have stumbled across some unexpected circumstance, you would not want to assume that other agents are ignorant of it. This is especially true if the first agent has any reason to mistrust, or fear the actions of, the second agent (e.g., in cases where agents have conflicting goals, such as enemy agents in wartime).

We call this rule of inference the Principle of Cautious Arrogance (PCA). It is formalized as follows:

Suppose L_X L_Y(L_Y α ∧ ¬L_Y β ⊃ γ) ∈ T, L_X L_Y α ∈ T, L_X L_Y β ∉ T, and L_X β ∉ T. Then L_X L_Y γ ∈ T.

The PCA may be too cautious at times. There are certainly cases in which one agent knows that another agent is more ignorant than he (as in the restaurant example in Section 1), and in these cases one would rather adopt PMA than PCA. In general, however, both principles are much too permissive. For example, it is reasonable to assume that virtually all agents believe that if they had an older brother, they would know about it. Nevertheless, I would not want to conclude of every new person that I meet that they do not have an older brother, simply because I do not know that they believe that they have an older brother! Yet these conclusions would be sanctioned by both the PMA and PCA.

Clearly, we need restricted versions of the PMA and PCA for different situations. There are two ways to go about formalizing these restrictions: (1) posit that arrogance is a binary relation between agents. Bill may be arrogant about Alex's knowledge but not about Susan's, if he knows Alex very well and Susan only slightly. (2) restrict the types of defaults about which agents are arrogant. In particular, it may be the case that agents do some sorts of default reasoning more readily than others. The restrictions should capture these tendencies.
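The difference between the two principles is just one extra proviso, which can be made concrete with a small sketch. The encoding below is invented for illustration (nested tuples for belief formulas, a triple for a default rule); it is a syntactic check over a finite belief set, not a full MANML reasoner.

```python
# Sketch of PMA vs. PCA as syntactic checks over a finite set T of
# belief formulas. A default rule of Y is encoded as a triple
# (alpha, beta, gamma) standing for L_Y alpha ∧ ~L_Y beta ⊃ gamma.
# All names and encodings here are invented for illustration.

def L(agent, phi):
    """Belief formula L_agent phi, encoded as a tagged tuple."""
    return ('L', agent, phi)

def pma(T, X, Y, rule):
    """Moderate Arrogance: may X conclude L_X L_Y gamma?"""
    alpha, beta, gamma = rule
    return (L(X, L(Y, ('rule', rule))) in T    # X believes Y holds the rule
            and L(X, L(Y, alpha)) in T         # X believes Y believes alpha
            and L(X, L(Y, beta)) not in T)     # no reason to think Y believes beta

def pca(T, X, Y, rule):
    """Cautious Arrogance: PMA plus the proviso that X not believe beta."""
    alpha, beta, gamma = rule
    return pma(T, X, Y, rule) and L(X, beta) not in T

rule = ('bird_tweety', 'not_flies_tweety', 'flies_tweety')
T = {L('james', L('susan', ('rule', rule))),
     L('james', L('susan', 'bird_tweety'))}
assert pma(T, 'james', 'susan', rule)      # James: Susan believes Tweety flies
assert pca(T, 'james', 'susan', rule)

# If James himself believes Tweety cannot fly, PCA blocks the inference:
T2 = T | {L('james', 'not_flies_tweety')}
assert pma(T2, 'james', 'susan', rule)
assert not pca(T2, 'james', 'susan', rule)
```

The final pair of assertions isolates the point made above: when X has stumbled across the unexpected circumstance β himself, PCA refuses to assume Y is ignorant of it, while PMA still goes through.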
Much research needs to be done on both these fronts in order to develop a realistic system of multiple-agent non-monotonic reasoning. In particular, we believe that it will be instructive to look at specific domains of multiple-agent non-monotonic reasoning in which restricted versions of PMA or PCA do seem reasonable. These exercises should give us insight into the reasons the arrogant rules of inference yield intuitive results in many cases, and ought to point us toward modifying these principles into truly reasonable rules of inference.

3. Epistemic Motivated Action Theory in MANML

A promising domain for the formalization of a restricted version of PMA is that of temporal reasoning, and specifically, of temporal projection. Agents who operate in multi-agent domains must reason about other agents' abilities to predict which facts about the world stay the same, and which change. We examine below a difficult problem in temporal projection, discuss its solution in a model-preference based theory known as EMAT [Morgenstern, 1989], and show that the principle embodied in EMAT can be recast as a restricted form of PMA.

Consider, then, the following problem, which we will call the Chain Request Frame Problem:³ Suppose Alice wants to open a safe. She knows that the combination of the safe is a sequence of three two-digit numbers, but she does not know which. That is, she knows a finite disjunction of the form: Comb = N1 or Comb = N2 or ... The combination of the safe is a fluent; various authorized individuals may change the combination of the safe. However, typically, the combination does not change; this is a fluent with a long persistence. Given the large number of possible combinations, it would not be wise to attempt all of them. Alice knows that Jim knows the combination of the safe, but she is not on good enough terms with him to ask him. However, she knows Susan quite well, and Susan is on good terms with Jim. Alice therefore constructs the following 4-step, multi-agent plan:

1. Alice asks Susan to ask Jim for the combination
2. Susan asks Jim for the combination

³The Chain Request Frame Problem is a synthesis and simplification of two variant frame problems, the Third Agent Frame Problem and the Vicarious Planning Frame Problem, which are discussed in [Morgenstern, 1989]. These frame problems, and their solutions in EMAT, were developed in a rich logical language that allowed quantification into epistemic contexts. AEL (even Konolige's extended version) and therefore MANML do not allow quantification into epistemic contexts.
Alice therefore constructs the following &step, multi-agent plan: 1. Alice asks Susan to ask Jim for the combination 2. Susan asks Jim for the combination 3The Chain Request Frame Problem is a synthesis and simplification of two variant frame problems, the Third Agent Frame Problem, and the Vicarious Planning Frame Problem, which are discussed in [Morgenstern, 19891. These frame problems and their solutions in EMAT, were developed in a rich logical language that allowed quantifi- cation into epistemic contexts. AEL (even Konolige’s ex- tended version) and therefore MANML do not allow quan- tification into epistemic contexts. MORGENSTERN 541 3. Jim tells Susan the combination 4. Susan tells Alice the combination Unfortunately, Alice cannot prove that this plan will work. The reason is that although Jim knows the com- bination at the time when Alice begins her plan, Alice does not know that he will still know the combination by the time Susan asks him for it. Frame axioms will not work: since Jim is not involved in the initial stage of the plan, he may not know what happens, and there- fore will not be able to apply the frame axioms. For the same reason, neither do standard non-monotonic temporal logics ([Lifschitz, 19871, [Haugh, 19871, and [Baker and Ginsberg, 19881). Very briefly, the reason these logics will not work is that they are based on the situation calculus, and are therefore dense and/or complete in the following sense: A theory is dense if there are no gaps; for any interval in the past, one always knows of some action or subaction that has oc- curred during that subinterval. A theory is complete if all actions that have occurred are known to have occurred. In cases where a particular chronicle is not dense and/or not complete for a particular agent - as in the case of Jim, above - such logics cannot be used. Nevertheless, a system capable of commonsense rea- soning should be able to conclude that Alice’s plan will probably work. 
Most likely, the combination of the safe will not change. Jim knows this. Therefore, as long as Jim does not know of anything that would indicate that the combination has changed, he will assume that it has not changed; i.e., he will still know the combination. This is true regardless of whether Jim knows what has happened in the initial stage of Alice's plan. Alice does not know of anything that Jim knows that would indicate a change; therefore she reasons that he will know the combination when Susan asks him, and will be able to tell her.

The basic principle underlying the foregoing reasoning is that actions happen only if they have to happen, or are motivated. This principle has been formalized in a non-monotonic logic called Motivated Action Theory (MAT) [Morgenstern and Stein, 1988] and is given in terms of a preference criterion over models. We assume a theory of causal and persistence rules TH, and a collection of statements giving a partial description of a chronicle, called a chronicle description CD. CD ∪ TH = TI, a particular theory instantiation. All rules are monotonic; non-monotonicity is achieved through the preference criterion. A statement is said to be motivated if it has to be in all models of TI (is a theorem of TI); a statement is said to be motivated with respect to a particular model if it has to be true, given rules and boundary conditions, within that particular model. More specifically, suppose TH contains a rule of the form α ∧ β ⊃ γ, where α is a conjunction of sentences of the form True(t, Occurs(act)), i.e., the triggering events of the causal rule, β contains the conditions which must be true for the triggering events to have an effect, and γ is the result of the triggering events. If α is motivated, and β is true with respect to some particular model, then γ is motivated with respect to that model.
We prefer models which minimize statements of the form True(t, Occurs(act)); i.e., models in which as few unmotivated actions as possible occur. Note that this is not the same as minimizing occurrences; in particular, the two concepts are different for causal chains of events in which the triggering event is motivated. We have demonstrated that MAT models both forward and backward temporal reasoning, and in particular have shown that it gives intuitive results for the Yale Shooting Problem and a host of related problems.⁴

EMAT extends MAT by parameterizing theory instantiations with respect to agents and times. For example, TI(a, t1) describes a's beliefs at t1 with regard to the causal theory and the description of the chronicle that he is in; TI(a, t1, b, t2) = TI(a, t1)(b, t2) describes what a at t1 believes b at t2 believes. Motivation within a parameterized theory instantiation is analogous to motivation within a standard theory instantiation; similarly, the preference criterion over models of a parameterized theory instantiation is analogous to the preference criterion over models of a standard theory instantiation. The net result is that agents always assume that other agents reason using MAT on the theory instantiations which they ascribe to them.

In the above example, EMAT allows Alice to prove that her 4-step plan will work. The theory instantiation TI(Alice, 1, Jim, 3) contains the statement that the combination at time 3 is identical to the combination at time 1; thus, Jim knows the combination.

EMAT provides a simple, intuitive solution to the problem of temporal projection in epistemic contexts. It is interesting to note, however, that a very basic assumption of arrogance lies at the foundation of EMAT.
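The preference criterion just described, preferring models in which as few unmotivated occurrences as possible hold, can be sketched under a deliberately simplified encoding. Everything below (models as sets of occurrence facts, rules as trigger/result pairs, the example events) is invented for illustration and is not MAT's formal language.

```python
# Sketch of MAT's model preference under a deliberately simplified
# encoding (invented here, not the paper's formal language): a model is
# a set of (action, time) occurrence facts, causal rules are
# (trigger, result) pairs, and boundary facts are motivated outright.
# Preferred models are those with the fewest unmotivated occurrences.

def motivated(model, rules, boundary):
    """Close the boundary facts under the causal rules within `model`."""
    m = set(boundary)
    changed = True
    while changed:
        changed = False
        for trigger, result in rules:
            if trigger in m and result in model and result not in m:
                m.add(result)
                changed = True
    return m

def preferred(models, rules, boundary):
    """Keep the models that minimize unmotivated occurrences."""
    def cost(model):
        return len(model - motivated(model, rules, boundary))
    best = min(cost(m) for m in models)
    return [m for m in models if cost(m) == best]

# Yale-shooting flavour: loading at 1 motivates shooting at 2; a model
# with an unexplained extra event is dispreferred.
rules = [(('load', 1), ('shoot', 2))]
boundary = {('load', 1)}
m1 = {('load', 1), ('shoot', 2)}       # every occurrence motivated
m2 = {('load', 1), ('unload', 2)}      # one unmotivated occurrence
assert preferred([m1, m2], rules, boundary) == [m1]
```

The fixpoint in `motivated` reflects the remark above that motivated triggering events propagate motivation down causal chains, which is why the criterion differs from simply minimizing occurrences.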
By using the inference rules of MAT within a parameterized theory instantiation, agents in EMAT implicitly assume that the partial characterization that they have of the other agents' theory instantiations is sufficient for their purposes. It is implicitly assumed that if TI(b, t2) contained some unexpected action, then TI(a, t1, b, t2) would contain this action as well. That is, agents are arrogant with respect to what they know regarding other agents' beliefs about the course of events.

In fact, it is straightforward to model the basic principle of EMAT as a restricted form of PMA. To see this, note that the intuition underlying MAT (actions happen only if they have to happen) can be captured by the following axiom schema of MANML:

L_a(¬L_a Occurs(act) ⊃ ¬Occurs(act))

Equivalently,

L_a(Occurs(act) ⊃ L_a Occurs(act))

That is, a believes that if it is consistent for him to believe that an action has not occurred, then the action has not occurred. In other words, it is assumed by default that unmotivated actions do not occur.

This assumption can be made explicit in the following restricted form of the PMA, which is limited to default rules of causal reasoning. This restricted form of the PMA (EMAT-PMA) can be stated as follows:⁵

Suppose L_X L_Y(L_Y α ∧ ¬L_Y True(t, Occurs(act)) ⊃ γ) ∈ T, L_X L_Y α ∈ T, and L_X L_Y True(t, Occurs(act)) ∉ T. Then L_X L_Y γ ∈ T.

This gives us a powerful, but not overly permissive, inference rule for non-monotonic temporal reasoning. Thus far, we have shown that MANML + EMAT-PMA gives identical results to EMAT for the Chain Request Frame Problem and the class of Yale Shooting Problems.

⁴Specifically, MAT yields the desired results for the Bloodless Yale Shooting Problem [Morgenstern and Stein, 1988], the Stanford Murder Mystery [Baker and Ginsberg, 1988], the Waiting Can Kill You problem [unnamed problem in [Baker and Ginsberg, 1988], p. 5], and the Message Passing Problem [Morgenstern and Stein, 1988].
We are currently working on a proof of the claim that MANML + EMAT-PMA is equivalent to EMAT, modulo quantification into epistemic contexts.

Naturally, EMAT-PMA models only some of the reasoning power that a genuine theory of commonsense reasoning must have. Nevertheless, the reasonableness of this inference rule suggests the possibility that a group of rules of this sort, each expressing a restriction of PMA for some sort of reasoning, is a good first step toward building a general-purpose theory of multi-agent non-monotonic reasoning.

4. Conclusions and Future Work

We have argued that a theory of multiple-agent non-monotonic reasoning is crucial for a realistic theory of commonsense reasoning. We have presented MANML, a logic which is capable of expressing such reasoning, and have suggested two inference rules to allow this type of non-monotonic reasoning, based on the concept of an agent's arrogance towards his knowledge of another agent's ignorance. While they are good first approximations, these rules were shown to be overly permissive. It was suggested that domain-specific restrictions of the principles of arrogance would give a more realistic theory. Finally, we demonstrated that an existing theory of temporal reasoning, which allowed for limited multiple-agent non-monotonic reasoning, could be duplicated by a restricted form of one of the principles of arrogance.

Future work includes investigating further restrictions of the PMA or PCA for specific domains of commonsense reasoning. One promising domain seems to be that of speech acts theory.

⁵To ensure that theorems of MAT are also theorems of MANML + EMAT-PMA, we must also add the axiom of privileged access [Davis, 1990]: Lx Lx P ⊃ Lx P. This is of course a consequence of the stable set principles of MANML if we assume that agents do not believe contradictions.
Gricean theory [Grice, 1957] has argued that a mutual knowledge of convention is a prerequisite for successful communication; Russell [1987] has argued that a more realistic theory would be based on the concept of a mutual absence of doubt on the part of the speakers that they have different conventions. That is, it should be reasonable to assume that the agent with whom you are communicating shares your conventions unless you can prove otherwise. This assumption is predicated on some amount of arrogance towards the other agent; you believe that if he had different conventions (i.e., did not believe the accepted conventions), you would know about it. Let us introduce the operator LCL, which we define to be equivalent to the BMB (believe that it is mutually believed) operator of [Cohen and Levesque, 1987]. Then, a first pass at modelling this sort of inference rule might be:

Suppose Lx(¬LCL(X, Y, Convention_i)) ∉ T.
Then    Lx LCL(X, Y, Convention_i) ∈ T.

This is clearly a restricted form of PMA. The accuracy and usefulness of this inference rule for formalizing speech acts theory is a topic for further investigation.

Finally, we plan to integrate MANML with various existing theories of multiple-agent commonsense reasoning, starting with a robust theory of planning and action. We can then test the utility of MANML on AI commonsense reasoning problems that thus far have been solvable only within a monotonic logic.

Acknowledgements

This work has benefited from the criticisms, suggestions and comments of Francisco Corrella, Ernie Davis, Hector Geffner, Benjamin Grosof, Daniel Lehmann, Lynn Stein, and Wlodek Zadrozny.
References

[Appelt and Konolige, 1988] Appelt, Douglas and Kurt Konolige: "A Practical Nonmonotonic Theory for Reasoning About Speech Acts," Proceedings of the 26th Conference of the ACL, 1988

[Baker, 1989] Baker, Andrew: "A Simple Solution to the Yale Shooting Problem," Proceedings, First Conference on Principles of Knowledge Representation and Reasoning, 1989

[Baker and Ginsberg, 1988] Baker, Andrew and Matthew Ginsberg: "Some Problems in Temporal Reasoning," manuscript, 1988

[Cohen and Levesque, 1987] Cohen, Philip and Hector Levesque: "Rational Interaction as the Basis for Communication," Proceedings, Symposium on Plans and Intentions in Communication and Discourse, Monterey, 1987

[Davis, 1990] Davis, Ernest: Representations of Commonsense Knowledge, Morgan Kaufmann, 1990

[Ginsberg, 1987] Ginsberg, Matthew, ed.: Readings in Nonmonotonic Reasoning, Morgan Kaufmann, Los Altos, 1987

[Grice, 1957] Grice, H.P.: "Meaning," Philosophical Review, 1957

[Haugh, 1987] Haugh, Brian: "Simple Causal Minimizations for Temporal Persistence and Projection," Proceedings, AAAI 1987

[Konolige, 1987] Konolige, Kurt: "On the Relation Between Default Theories and Autoepistemic Logic," Proceedings, IJCAI 1987

[Lifschitz, 1987] Lifschitz, Vladimir: "Formal Theories of Action," Proceedings, IJCAI 1987

[McCarthy, 1980] McCarthy, John: "Circumscription," Artificial Intelligence, Vol. 13, 1980

[McDermott and Doyle, 1980] McDermott, Drew and Jon Doyle: "Non-monotonic Logic I," Artificial Intelligence, Vol. 13, 1980

[Moore, 1985] Moore, Robert: "Semantical Considerations on Nonmonotonic Logic," Artificial Intelligence, Vol. 25, 1985

[Morgenstern, 1989] Morgenstern, Leora: "Knowledge and the Frame Problem," Workshop on the Frame Problem, Pensacola, 1989. To appear in: Kenneth Ford and Patrick Hayes, eds.: Advances in Human and Machine Cognition, Vol. I: The Frame Problem in Artificial Intelligence, JAI Press, Greenwich, 1990

[Morgenstern and Stein, 1988] Morgenstern, Leora and Lynn Andrea Stein: "Why Things Go Wrong: A Formal Theory of Causal Reasoning," Proceedings, AAAI 1988

[Perrault, 1987] Perrault, Ray: "An Application of Default Logic to Speech Act Theory," Proceedings, Symposium on Intentions and Plans in Communication and Discourse, Monterey, 1987

[Reiter, 1980] Reiter, Ray: "A Logic for Default Reasoning," Artificial Intelligence, Vol. 13, 1980

[Russell, 1987] Russell, Stuart: "Rationality as an Explanation of Language?," Behavioral and Brain Sciences, Vol. 10, 1987

MORGENSTERN 543
A Circumscriptive Theory for Causal and Evidential Support

Eunok Paek
Department of Computer Science
Stanford University
Stanford, California 94305
paek@cs.stanford.edu

Abstract

Reasoning about causality is an interesting application area of formal nonmonotonic theories. Here we focus our attention on a certain aspect of causal reasoning, namely causal asymmetry. In order to provide a qualitative account of causal asymmetry, we present a justification-based approach that uses circumscription to obtain the minimality of causes. We define the notion of causal and evidential support in terms of a justification change with respect to a circumscriptive theory and show how the definition provides desirable interactions between causal and evidential support.

1 Introduction

Commonsense reasoning has provided a rich application area for nonmonotonic logic. As some researchers have noted, however, certain aspects of causal reasoning which are prevalent in ordinary discourse have not received due treatment by nonmonotonic logic [Pearl, 1988a]. In this paper, we will focus on the phenomenon of causal asymmetry, which results because two causes of an observation interact differently than two consequences of a common cause. In [Pearl, 1988a], Pearl presents a causal reasoning system as an attempt to provide a qualitative account of the causal directionality shown in his Bayesian analysis [Pearl, 1988b]. His system, however, generates certain counterintuitive conclusions, as Section 3 will illustrate.

Here we introduce a better qualitative account of causal directionality that overcomes the difficulty mentioned in Section 3. Our approach is not as detailed as the probabilistic account of causal directionality, but a qualitative approach has its advantages: it is simpler and more intuitive than the probabilistic approach.

In Section 2, we will briefly describe Pearl's causal reasoning system and clarify a few implicit assumptions made by the system.
In the following section, we will look into some difficulty with his framework. In Section 4, we present a simple circumscriptive theory [McCarthy, 1986] that allows us to draw explanatory conclusions. Finally we will give a justification-based account of causal directionality using this circumscriptive theory and also discuss how our approach improves on Pearl's.

2 Pearl's Causal Reasoning System

In [Pearl, 1988a], Pearl states that it is necessary to know whether a given proposition was established by causal or evidential considerations, and that there is a causal asymmetry stemming from the fact that two causes of an observation interact differently than two consequences of a common cause; in the absence of a direct relation between the two, the former compete with each other, while the latter support each other. For example, when we observe a rash we are more likely to observe fever as well, because measles always involves both fever and rash. In contrast, when we know that the cause of a rash is measles, it is less likely that the patient is also suffering from an allergy. In order to produce such an asymmetry, Pearl proposes a reasoning system in which every proposition is labeled as either causally or evidentially established, every rule is labeled as either causal or evidential, and inference rules are defined according to these labels. That is, each rule in the system is labeled as C (connoting "causal") or E (connoting "evidential"), and P →C Q means "P causes Q" and Q →E P means "Q is evidence for P". Similarly each proposition is labeled either as E or C, where E(P) means that P is believed for evidential reasons and C(Q) means that Q is believed for causal reasons.
The semantics of the C-E distinction are defined by the following three inference rules

P →C Q      P →C Q      P →E Q
C(P)        E(P)        E(P)
------      ------      ------
C(Q)        C(Q)        E(Q)

while purposely precluding the inference rule

P →E Q
C(P)
------
Q

PAEK 545
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Before we explain how these inference rules deal with causal asymmetry, we want to clarify a few assumptions made by Pearl's reasoning system. First, in his reasoning system it is not specified how we initially obtain propositional labels. When we have some labels for certain propositions initially, we can deduce other labeled propositions by using the initially labeled propositions and rules according to the inference rules sanctioned by the system. Without any initial labeling, however, we cannot use the inference rules at all. But from the fact that we can use a causal rule P →C Q together with a proposition P regardless of its label, we can obtain labels for certain propositions without any initial labeling for P. From now on, whenever we have a rule P →C Q and a proposition P without any label, we will assume that C(Q) follows. Secondly, there is an implicit correspondence between causal and evidential rules. That is, the database must have Q →E P whenever it has P →C Q. If we can represent evidential rules without assuming the existence of their causal counterparts, some inference rules can be counterintuitive. If there is a strong correlation between two events without any causal connection, sometimes we should allow the inference prohibited by Pearl's reasoning system. For example, suppose that the database has the sentence ∀x.Take_cs221(x) →E BrownHair(x) together with C(Take_cs221(John)); we can easily imagine a situation in which John is taking cs221 to fulfill his course requirements, hence Take_cs221(John) is "causally established." It is apparent that we should conclude BrownHair(John).
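As a concrete reading of the three sanctioned inference rules and the precluded one, the following sketch propagates C and E labels to a fixed point. This is our own illustrative encoding, not code from Pearl's system; the triple representation of rules and the first-label-wins policy are assumptions made here.

```python
def close_labels(rules, facts):
    """Propagate Pearl-style C/E labels to a fixed point.

    rules: triples (P, kind, Q) where kind is 'C' (P causes Q) or
    'E' (P is evidence for Q).  facts: initial labels {prop: 'C'|'E'}.
    A causal rule fires on a premise with either label; an evidential
    rule fires only on an E-labelled premise (the precluded case).
    """
    labels = dict(facts)
    changed = True
    while changed:
        changed = False
        for p, kind, q in rules:
            lp = labels.get(p)
            if lp is None or q in labels:   # no premise, or q already labelled
                continue
            if kind == 'C':
                labels[q] = 'C'
            elif lp == 'E':
                labels[q] = 'E'
            else:
                continue  # evidential rule with a C-labelled premise: blocked
            changed = True
    return labels

RULES = [('Rain', 'C', 'WetGrass'), ('Sprinkler', 'C', 'WetGrass'),
         ('Rain', 'C', 'SlipperyRoad'), ('WetGrass', 'E', 'Rain'),
         ('WetGrass', 'E', 'Sprinkler'), ('SlipperyRoad', 'E', 'Rain')]

# Causally established rain labels WetGrass with C, which blocks the
# evidential rule toward Sprinkler: the two causes compete.
print(close_labels(RULES, {'Rain': 'C'}))
```

With the evidentially established SlipperyRoad instead, the same function derives E(Rain) and then C(WetGrass): two consequences of a common cause support each other.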
We can justify the reasoning system presented above only when we assume the coexistence of both causal and evidential rules. With these assumptions in mind, let us look at the following example, which is slightly modified from [Pearl, 1988a].

Example 1
Suppose we have the knowledge about causation that rain causes grass to be wet, that a sprinkler also causes wet grass, and that rain causes slippery roads. Its translation into Pearl's representation will be as follows:

Rain →C WetGrass; Sprinkler →C WetGrass; Rain →C SlipperyRoad;
WetGrass →E Rain; WetGrass →E Sprinkler; SlipperyRoad →E Rain.

If we came to know WetGrass because of Rain, WetGrass would be labeled C according to the assumption mentioned above. When WetGrass is labeled C, it cannot be used to infer Sprinkler together with the evidential rule WetGrass →E Sprinkler, because that inference is not sanctioned by the system. This shows how two causes of a single effect work against each other. On the other hand, if SlipperyRoad were labeled E, we would deduce E(Rain) using the evidential rule SlipperyRoad →E Rain, and C(WetGrass) would follow from E(Rain) and Rain →C WetGrass. In this case, two consequences of a single cause work for each other.

3 Difficulty with Pearl's System

With the assumptions mentioned in the previous section in mind, we will now see what can go wrong with Pearl's reasoning system. For instance, when WetGrass is established evidentially, it can be used to infer both Rain and Sprinkler, which is rather counterintuitive. Instead, what we would like to conclude from WetGrass is Rain or Sprinkler.

One way to overcome this counterintuitive result is to weaken the meaning of the labels (C and E) from that of acceptance to that of support. That is, we reinterpret the meaning of E(Sprinkler) as "there is evidential support for Sprinkler," not as "Sprinkler holds for an evidential reason."
Once we reinterpret the E label as support, concluding E(Rain) and E(Sprinkler) from WetGrass is no longer counterintuitive. We have reason to believe that it might have rained, and also that the sprinkler might have been on.

Once we weaken the meaning of the E label from acceptance to support, we must also weaken the meaning of the C label. Since one of the three inference rules allows us to infer C(Q) from E(P) and P →C Q, and E(P) means only that P is evidentially supported, we are no longer justified in saying that Q holds for a causal reason. Instead, we can say that Q is causally supported.

Having a notion of support and making a distinction between causal and evidential support may be useful. However, we still need a mechanism to draw conclusions. If we know it rains, it is clear that we should conclude that the grass is wet, in addition to concluding that there is causal support for the wet grass. In the following sections, we will show how a simple circumscriptive theory can be used to draw conclusions and then see how we can use this circumscriptive theory to define the notion of support.

4 Circumscription for Minimization of Causes

In this section, we propose a simple circumscriptive theory which allows us to draw explanatory conclusions by minimizing causes. In order to minimize causes, we will reify causal information by using the predicate causes(P, Q). The intended meaning of causes(P, Q) is that P causes Q to hold. We will also use the predicates holds(P) and holds_acausally(Q), meaning that P is true and that Q is true without any cause being known, respectively. Let the causal theory be divided into two parts, T = <R, F>. R consists of instances of the causes predicate together with the following axiom:

∀x. holds(x) ≡ [∃y. causes(y, x) ∧ holds(y)] ∨ holds_acausally(x)    (1)

F consists of instances of the holds predicate.¹
Given a two-part background theory <R, F>, we will circumscribe it by minimizing causes and holds_acausally, with causes given higher priority. causes is given higher priority because we would like to say that something holds acausally only when we cannot find a cause for it from all we know. That is, we are justified in saying that an event holds acausally only when we do not have any information about its cause.

Example 2
Let our background theory be as follows:

R : {causes(Rain, WetGrass), causes(Rain, SlipperyRoad), causes(Sprinkler, WetGrass)} with Axiom (1)
F : {holds(WetGrass)}

If we circumscribe the background theory in this example, we can conclude holds(Rain) ∨ holds(Sprinkler). Consider all the minimal models that satisfy the result of circumscription. In all minimal models, nothing holds acausally. By Axiom (1), we know that holds(WetGrass) is true if and only if any of its causes holds. In this example, the only causes for WetGrass are Rain and Sprinkler. Hence holds(Rain) ∨ holds(Sprinkler) is true in all minimal models. However, we will not have holds(Rain) ∧ holds(Sprinkler).

Using the circumscribed background theory, we will define the notion of support in the following section.

5 Supports

As we saw in the previous section, circumscription allows us to draw explanatory conclusions without any unintuitive behavior, and without the burden of specifying propositional labels initially. However, the notion of causal/evidential support may be useful for certain problems. For instance, we may want to know how one event causally or evidentially affects another even if this event does not logically follow from the other. As a logical abstraction of probabilistic analysis, we will use a justification-based notion of support. First, we will define what a justification is.
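Example 2's minimal-model argument can be checked by brute force on this small vocabulary. The sketch below is our own encoding and makes one loud simplification: a proposition with listed causes is never allowed to hold acausally, which stands in for the prioritized minimization of holds_acausally in this example. All function and variable names are ours.

```python
from itertools import product

CAUSES = {('Rain', 'WetGrass'), ('Sprinkler', 'WetGrass'),
          ('Rain', 'SlipperyRoad')}
PROPS = ['Rain', 'Sprinkler', 'WetGrass', 'SlipperyRoad']
CAUSED = {effect for _, effect in CAUSES}

def models(facts):
    """All truth assignments satisfying axiom (1) and the facts F,
    where only cause-less propositions may hold acausally."""
    result = []
    for bits in product([False, True], repeat=len(PROPS)):
        holds = dict(zip(PROPS, bits))
        if not all(holds[f] for f in facts):
            continue
        # Propositions that hold with no holding cause are acausal.
        acausal = {p for p in PROPS if holds[p] and
                   not any(holds[c] for c, e in CAUSES if e == p)}
        if acausal & CAUSED:
            continue  # simplification: caused props need a holding cause
        # Axiom (1), right-to-left direction: a holding cause forces
        # its effect to hold.
        if all(holds[p] == ((p in acausal) or
                            any(holds[c] for c, e in CAUSES if e == p))
               for p in PROPS):
            result.append((holds, frozenset(acausal)))
    return result

def minimal(ms):
    """Keep models whose set of acausal propositions is subset-minimal."""
    return [(h, a) for h, a in ms if not any(b < a for _, b in ms)]

mins = minimal(models({'WetGrass'}))
print(all(h['Rain'] or h['Sprinkler'] for h, _ in mins))   # the disjunction
print(any(h['Rain'] and h['Sprinkler'] for h, _ in mins))  # not the conjunction
```

The two minimal models make exactly one of Rain and Sprinkler hold (acausally), so the disjunction is true in all of them while the conjunction is true in none, matching the conclusion of Example 2.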
Then we will define causal and evidential support in terms of justification change, and compare our notion of support with the propositional labels in Pearl's C-E system.

5.1 Justification

Informally, a justification for a certain proposition is a reason to believe that proposition. Let T be our background theory. We will define J_T to be a mapping from a well-formed formula to a well-formed formula. If J_T(α) = β, then α will be true whenever β (a justification for α) is true in the models of the theory T. That is, T ∪ {β} ⊨ α. A formal definition for a justification follows.

Definition 5.1.1 (Justification): Given a set of first-order sentences T and a well-formed formula α, J_T(α) = β if and only if β is the disjunction of all β_j such that (1) T ∪ {β_j} is satisfiable; (2) T ∪ {β_j} ⊨ α; and (3) β_j is a conjunction of literals.

Our definition of justification is closely connected with the definition of minimal support in the Clause Maintenance System (CMS) by Reiter and de Kleer [Reiter and de Kleer, 1987]. They define a support as a set of literals which satisfies conditions (1) and (2) in Definition 5.1.1, and a minimal support as a minimal such set. Here we obtain minimality of justification by taking a disjunction of all the β_j's rather than requiring each β_j to be minimal. That is, we would like to think of a justification as a well-formed formula in disjunctive normal form. Viewing a justification as a formula allows us to handle disjunctive explanation easily, thus giving us much more flexibility in defining support.

The third condition deserves some attention. Let us see what happens if we don't have this condition.

¹How to axiomatize causality is an important problem in and of itself, but it is not what we are interested in. We are interested in how to obtain the proper interaction between causal and evidential support. We believe that the results in the following sections apply independently of the axiomatization used for a causal theory.
Given an empty background theory, the justification for P will be P itself. That is, P can only be self-justified, because we do not know anything about P. Once we add Q, which may have nothing to do with P, the justification for P will change to ¬Q ∨ P. This is equivalent to Q ⊃ P. This is undesirable because Q can be a random proposition which may have no relevance to P. If we don't have condition (3), τ ⊃ α will be a valid β_j for the justification of α for any τ in T.

The addition of condition (3) creates another interesting effect. Suppose our initial background theory was R ≡ (¬Q ∨ P), i.e., we explicitly name Q ⊃ P as R. Now J_T(P) = R ∧ Q. Simply giving a name to Q ⊃ P causes it to become a part of the justification for P. At first glance this looks rather strange, but in a sense we have provided the possibility of using certain literals, in this example R and Q, to express justifications, by mentioning them in the background theory. Once we note this feature, we can use it as a guide for characterizing the terms in which we should express justifications.

Now we will define a partial order on justifications in terms of entailment.

Definition 5.1.2 (Ordering on Justifications): Given two well-formed formulas β1 and β2, (1) β1 ≤ β2 if and only if β1 ⊨ β2, and (2) β1 < β2 if and only if β1 ≤ β2 and β2 ⊭ β1.

Given two different justifications for α, β1 and β2, we will say that β2 is better than β1 if β1 < β2; that is, we will say that a semantically weaker justification is better. Essentially a justification for α is a formula we have to add to the background theory so that we can deduce α. If one justification is semantically weaker than the others, it means that what we must have in addition to the background theory is weaker. That is why we prefer a semantically weaker justification. We will say that a proposition α is supported when its justification gets better due to the addition of some other proposition β.²
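The ordering of Definition 5.1.2 can be checked by brute-force propositional entailment over a small vocabulary. The sketch below is our own encoding (formulas as Python predicates over truth assignments), not notation from the paper.

```python
from itertools import product

def entails(f1, f2, atoms):
    """f1 |= f2: every assignment satisfying f1 also satisfies f2."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if f1(v) and not f2(v):
            return False
    return True

def strictly_better(b1, b2, atoms):
    """b1 < b2 per Definition 5.1.2: b1 |= b2 but not b2 |= b1,
    i.e. b2 is a strictly (semantically) weaker justification."""
    return entails(b1, b2, atoms) and not entails(b2, b1, atoms)

atoms = ['Rain', 'Sprinkler']
just1 = lambda v: v['Rain']                     # "Rain"
just2 = lambda v: v['Rain'] or v['Sprinkler']   # "Rain v Sprinkler"
print(strictly_better(just1, just2, atoms))     # the weaker disjunction is better
```

The disjunctive justification "Rain ∨ Sprinkler" is strictly better than "Rain" alone, since it is entailed by it but not conversely.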
In the next section, we will define support in terms of an increase in justification, and see how our notion of support is different from Pearl's propositional labels.

5.2 Causal and Evidential Supports

Now we will define causal and evidential supports. In the following, circumscription minimizes causes and holds_acausally, with causes given higher priority.

Definition 5.2.1 (Evidential Support): A proposition α is evidentially supported with respect to a background theory T = <R, F> if and only if there is a formula β such that
(1) F ⊨ β;
(2) J_R(α) = J_R∪{β}(α);
(3) J_CIRC(R∪F−{β})(α) < J_CIRC(R∪F)(α).

Definition 5.2.1 says that β evidentially supports α when there is no causal relation between β and α (the second clause), but rather there is an evidential relation between them, so that the justification for α increases due to circumscription, but not due to the initial background theory.

Definition 5.2.2 (Causal Support): A proposition α is causally supported with respect to a background theory T = <R, F> if and only if there is a formula β such that
(1) β is supported (either evidentially or causally) or F ⊨ β;
(2) J_R(α) < J_R∪{β}(α);
(3) β ⊭ α.

β causally supports α when β itself is supported and there is a causal relation between β and α. The third clause is to prevent a circular definition of support. Without it, α might be causally supported whenever it is either evidentially or causally supported.

Let us work through some examples. Let our background theory have the same R as in Example 2. When F is {holds(SlipperyRoad)}, holds(Rain) is evidentially supported. In order to see this, we have to compare the justifications for holds(Rain) with and without holds(SlipperyRoad), given a circumscribed background theory.

²We were inspired by Gardenfors' work on explanation [Gardenfors, 1988] in that we require an increase in justification, hence a decrease in surprise, for a proposition to be supported.
The following relation holds between the two justifications:

J_CIRC(R∪F−{holds(SlipperyRoad)})(holds(Rain)) < J_CIRC(R∪F)(holds(Rain))

Without holds(SlipperyRoad) in the background theory, the justification for holds(Rain) with respect to the circumscribed theory is unknown.³ With holds(SlipperyRoad) in the background theory, the result of circumscription makes the justification for holds(Rain) True. Hence we can say that holds(SlipperyRoad) makes the justification for holds(Rain) increase. Also with this background theory, holds(WetGrass) is causally supported, because holds(Rain) is evidentially supported and causes(Rain, WetGrass). Here two consequences of a common cause, holds(Rain), support each other.

³When a proposition can only be self-justified, we will refer to its justification as unknown.

When F is {holds(Rain)}, holds(Sprinkler) is not evidentially supported, because its justification does not increase but rather decreases. The justifications for holds(Sprinkler) with respect to the circumscribed background theory with and without holds(Rain) are as follows:

J_CIRC(R∪F−{holds(Rain)})(holds(Sprinkler)) = holds(WetGrass) ∧ ¬holds(Rain)
J_CIRC(R∪F)(holds(Sprinkler)) = unknown

This situation arises because if we apply circumscription to the background theory without holds(Rain), we can infer holds(WetGrass) ⊃ holds(Rain) ∨ holds(Sprinkler). With holds(Rain) in the background theory, however, the justification for holds(Sprinkler) becomes unknown. Hence, the justification for holds(Sprinkler) decreases due to the addition of holds(Rain) to the background theory. In this case, two causes of an observation do not support
That is, when a conjunction of multiple events El, E2, and E3 causes another event E4, any combination of El, E2, E3 will causally sup- port E4; this differs from Pearl’s system, in which all three events must happen in order to causally sup- port E4. As for evidential support, Pearl’s system is already partial but there still is a subtle differ- ence. To illustrate this, let R be as in Example 2 and F be (holds(WetGrass), holds(SlipperyRoad)). According to definition 5.2.1, hoZds(Rain) is eviden- tially supported, but hoZds(SprinkZer) is not. In Pearl’s system E(SprinkZer) will follow if WetGrass and SlipperyRoad are evidentially supported. We be- lieve that this is another aspect of causal reasoning that a logical framework should capture (i.e., when a certain cause must hold in order to explain mul- tiple evidence, it explains away others). There is one more difference in the way we define support for events in F. For instance, when our background the- ory has R : (causes(P,Q)}, F : {holds(Q)}, holds(P) is evidentially supported and holds(Q) is causally sup- ported. However, in Pearl’s system, neither E(P) nor C(Q) follows from the theory, {P -+c Q, Q} . This again shows the advantage of our approach. 6 Conclusion We have presented a very simple circumscriptive the- ory to draw explanatory conclusions from a causal background theory. We have also showed that we can define the notion of causal/evidential support using cir- cumscription and changes in justification. Not only does our approach avoid certain counterintuitive re- sults, but it also serves as a better logical abstraction of the probabilistic account of causality in the sense that various desirable interactions between causal and evidential supports fall out naturally from their seman- tic definitions. Our approach provides a notion of both acceptance and support of a proposition. It also al- lows us to define support without any initial labeling of propositions in the background theory. 
Finally, it handles the phenomenon of causal asymmetry in a more sophisticated way, in that one cause explains away others not only when it is directly known, but also when it can be inferred indirectly from some other propositions. Our approach is not as detailed as a probabilistic analysis, but it is an improvement over Pearl's system.

Acknowledgement

The author would like to thank Matt Ginsberg for his interest, criticism and technical help. Thanks also to Kurt Konolige, Karen Myers, Nils Nilsson, and Martha Pollack for valuable discussions. This work has been supported by NSF under grant IRI-89-12188, by NASA under research contract NCC2-494, and by General Dynamics.

References

[Gardenfors, 1988] Peter Gardenfors. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, Cambridge, Massachusetts, 1988.

[McCarthy, 1986] John McCarthy. Applications of circumscription to formalizing common sense knowledge. Artificial Intelligence, 28:89-116, 1986.

[Pearl, 1988a] Judea Pearl. Embracing causality in default reasoning. Artificial Intelligence, 35:259-271, 1988.

[Pearl, 1988b] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, California, 1988.

[Reiter and de Kleer, 1987] Raymond Reiter and Johan de Kleer. Foundations of assumption-based truth maintenance systems: Preliminary report. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 183-188, 1987.
Conditional Logics of Normality as Modal Systems

Craig Boutilier
Department of Computer Science
University of Toronto
Toronto, Canada M5S 1A4

Abstract

Recently, conditional logics have been developed for application to problems in default reasoning. We present a uniform framework for the development and investigation of conditional logics to represent and reason with "normality", and demonstrate these logics to be equivalent to extensions of the modal system S4. We also show that two conditional logics, recently proposed to reason with default knowledge, are equivalent to fragments of two logics developed in this framework.

Introduction

It is widely acknowledged that commonsense reasoning is nonmonotonic, or defeasible. Given a certain body of knowledge, the facts one infers may not be accepted if this knowledge is augmented with new information. One reason for this defeasibility is that we often reason by default, or jump to conclusions, by assuming that the state of affairs represented by our knowledge is in some sense typical or normal. Default reasoning lies at the heart of a theory of AI, and much effort has been expended in developing formalisms to represent and reason with "default" knowledge (see (Reiter 1987) for a survey).

Recently, the use of conditional logics in nonmonotonic reasoning has been explored (see, e.g., (Bell 1990; Boutilier 1988; Delgrande 1987; Delgrande 1988; Lehmann 1989; Kraus, Lehmann and Magidor 1990; Nute 1984)). Conditional logics were originally developed to account for properties of conditional statements in natural language. These logics consist of the classical propositional logic (CPL) augmented with a conditional connective, often written >. This additional connective is necessitated as it is generally agreed that the material conditional does not adequately reflect the linguistic usage of "if-then" constructs. Logics for subjunctive conditionals have been widely studied (e.g.
(Stalnaker 1968; Lewis 1973)) and are frequently based on possible worlds semantics which follow (roughly) the suggestion of Stalnaker (1968): determine the truth of a conditional in a certain situation by evaluating the truth of the consequent in the most similar situation in which the antecedent is true. These logics possess a number of properties which are not only intuitively valid of subjunctives, but also reasonable for an account of "default rules". For instance, strengthening and transitivity are not generally valid for the conditional connective:

Str  From B > C, infer (A ∧ B) > C
Tran From A > B and B > C, infer A > C.

One cannot infer that a wet match would light, given that a match would; neither can one infer that penguins fly, given that penguins are birds and birds fly.

Writing the conditional connective as ⇒ (to distinguish it from the subjunctive interpretation), we will interpret a sentence A ⇒ B as meaning "In the most normal course of events in which A holds, B holds as well", or "A normally implies B". Rather than evaluating the truth of the consequent in the most similar situation where the antecedent holds, we intend to evaluate it in the most normal situation. Arguably, much of our default knowledge can be interpreted as being of this form.

In (Boutilier 1988), the conditional logic E was presented as an extension of Delgrande's (1987) logic N, and was investigated as a basis for default reasoning. There a connection was shown to exist between E and the modal system S4.3. In this note, we will develop this connection further, between conditional logics of normality (CLNs) and modal logic. In the next section, we will provide a uniform framework for exploring CLNs and discuss several such logics. In particular, we will show these logics to be equivalent to extensions of the modal system S4 (KT4).
This contrasts with the analysis of subjunctives discussed above, for as Lewis (1973) points out, > cannot be defined in terms of the standard unary modal operator □ and truth-functional connectives. In the following section, we will show that two conditional logics for default reasoning recently presented in the literature are equivalent to fragments of logics developed here (those fragments without nested occurrences of the conditional connective), and hence to fragments of S4-systems. We will conclude by discussing some advantages of viewing CLNs in the manner proposed. Complete proofs of theorems can be found in (Boutilier 1989).

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Conditional Logics of Normality
In this section, we will present a possible worlds semantics for CLNs. The sentence A ⇒ B is intended to represent "A normally implies B". We will take this to mean that B holds at the most normal (or least exceptional) worlds at which A holds. The concept of normality will be represented by an accessibility relation R between possible worlds. World v will be accessible to world w if v represents a state of affairs which is at least as unexceptional (or normal) as that represented by w. There are some restrictions which should be placed on any R intended to be interpreted in this manner. We will insist that R be reflexive, so that a state of affairs is no more exceptional than itself, and transitive. Given these restrictions on R, we can be more precise about the meaning of A ⇒ B. If A ⇒ B is true at some world w, then in those least exceptional worlds (as seen by w) where A holds, B holds as well. This means that at any more normal state of affairs, either A is necessarily false at all less exceptional worlds, or there exists a less exceptional world where A and B hold, and A ⊃ B holds at all worlds more normal than that one.
In the language of modal logic, A ⇒ B holds iff the following does: □(□¬A ∨ ◇(A ∧ □(A ⊃ B))). This seems to capture the notion of least-exceptional A-worlds. We will now formalize these ideas.

The language of CLNs (denoted LC) is formed from a denumerable set P of propositional variables, together with the connectives ¬, ⊃ and ⇒. The connectives ∧, ∨ and ≡ are defined in terms of these in the usual way, and we define α ⇏ β as ¬(α ⇒ β). As is customary (e.g. (Stalnaker 1968)), □α is defined to be ¬α ⇒ α, and ◇α is ¬(α ⇒ ¬α).

Definition A CT4-model is a triple M = (W, R, φ), where W is a set (of possible worlds), R is a reflexive, transitive binary relation on W (the accessibility relation), and φ maps P into 2^W (φ(A) is the set of worlds where A holds).

Definition Let M = (W, R, φ) be a CT4-model, with w ∈ W. The truth of a formula α at w in M (where M ⊨w α means α is true at w) is defined inductively as:
1. M ⊨w α iff w ∈ φ(α) for atomic sentence α.
2. M ⊨w ¬α iff M ⊭w α.
3. M ⊨w α ⊃ β iff M ⊨w β or M ⊭w α.
4. M ⊨w α ⇒ β iff for each w₁ such that wRw₁ either (a) there is some w₂ such that w₁Rw₂, M ⊨w₂ α, and for each w₃ such that w₂Rw₃, M ⊭w₃ α or M ⊨w₃ β; or (b) for every w₂ such that w₁Rw₂, M ⊭w₂ α.

It is easy to verify that the connectives □ and ◇, introduced by definition, have the following familiar truth conditions:
1. M ⊨w □α iff M ⊨w₁ α for each w₁ such that wRw₁.
2. M ⊨w ◇α iff M ⊨w₁ α for some w₁ such that wRw₁.

□α can be interpreted as "In all less exceptional worlds α holds", and ◇α as "In some less exceptional world α holds".

Definition A CT4-model M = (W, R, φ) satisfies a sentence α (written M ⊨ α) iff M ⊨w α for each w ∈ W. A sentence α is CT4-valid (written ⊨CT4 α) just when M ⊨ α for every CT4-model M.

We will now define the logic CT4. Since the modal system S4 is characterized by the class of reflexive, transitive models, we will base our axiomatization on a standard one for S4¹.
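To make truth condition 4 concrete, here is a minimal brute-force sketch (not from the paper) of a checker for the CT4 clause for A ⇒ B, i.e. □(□¬A ∨ ◇(A ∧ □(A ⊃ B))), over a hand-built three-world model. The worlds, accessibility relation and valuation below are invented for illustration only.

```python
W = ["w0", "w1", "w2"]                        # w0 least normal, w2 most normal
R = {("w0", "w0"), ("w0", "w1"), ("w0", "w2"),  # reflexive and transitive
     ("w1", "w1"), ("w1", "w2"), ("w2", "w2")}
val = {"w0": {"Bird"}, "w1": {"Bird", "Fly"}, "w2": set()}  # hypothetical valuation

def succ(w):
    """Worlds at least as normal as w (R-accessible)."""
    return [v for v in W if (w, v) in R]

def conditional(w, A, B):
    """True at w iff at every accessible w1, either no world accessible from
    w1 satisfies A, or some accessible A-world w2 has A ⊃ B at all its successors."""
    for w1 in succ(w):
        vacuous = all(A not in val[w2] for w2 in succ(w1))
        witnessed = any(A in val[w2] and
                        all(A not in val[w3] or B in val[w3] for w3 in succ(w2))
                        for w2 in succ(w1))
        if not (vacuous or witnessed):
            return False
    return True

# Bird => Fly holds everywhere even though w0 is a flightless-bird world,
# illustrating that the connective is exception-allowing:
print(all(conditional(w, "Bird", "Fly") for w in W))   # True
print("Fly" in val["w0"])                              # False
```

The exceptional world w0 does not falsify Bird ⇒ Fly because a more normal Bird-world (w1) witnesses the conditional.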
Completeness will follow quite easily from the completeness of S4 and the interdefinability of ⇒ and □.

Definition The conditional logic CT4 is the smallest S ⊆ LC such that S contains CPL and the following axioms, and is closed under the following rules of inference:
M □(A ⊃ B) ⊃ (□A ⊃ □B)
T □A ⊃ A
4 □A ⊃ □□A
C (A ⇒ B) ≡ □(□¬A ∨ ◇(A ∧ □(A ⊃ B)))
Nec From A infer ¬A ⇒ A.
MP From A ⊃ B and A infer B.
US From A infer A′, where A′ is a substitution instance of A.

Definition A sentence α is provable in CT4 (written ⊢CT4 α) iff α ∈ CT4. α is derivable from a set Γ ⊆ LC (written Γ ⊢CT4 α) if there is some finite subset {α₁, ..., αₙ} of Γ such that ⊢CT4 (α₁ ∧ ... ∧ αₙ) ⊃ α.

Theorem 1 The system CT4 is characterized by the class of CT4-models; that is, ⊢CT4 α iff ⊨CT4 α.

The connection between CT4 and S4 is now quite clear. The semantics of CT4 is based on the same class of models as that of S4, and the axiomatic basis of CT4 is merely an adaptation of one for S4 (plus the "characteristic" conditional axiom C). In fact, in a very strong sense, these two logics are equivalent. We can translate sentences from LC into the language of modal systems, LM, and conversely, as follows:

Definition For α ∈ LC, the translation of α into LM (denoted α°) is defined inductively as follows:
1. α, if α is atomic.
2. ¬β°, if α has the form ¬β.
3. β° ⊃ γ°, if α has the form β ⊃ γ.
4. □(□¬β° ∨ ◇(β° ∧ □(β° ⊃ γ°))), if α has the form β ⇒ γ.

Definition For α ∈ LM, the translation of α into LC (denoted α*) is defined inductively as follows:
1. α, if α is atomic.
2. ¬β*, if α has the form ¬β.
3. β* ⊃ γ*, if α has the form β ⊃ γ.
4. ¬β* ⇒ β*, if α has the form □β.

These mappings induce isomorphisms between the Lindenbaum algebras of the logics CT4 and S4, and each induces the inverse of the other (see (Boutilier 1989)).

¹See, e.g., (Hughes and Cresswell 1984). We use the abbreviation □ in the axiomatization for clarity.
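The mapping α° can be sketched as a recursive function over formula trees. The tuple-based AST encoding below ("atom", "not", "imp" for ⊃, "cond" for ⇒, plus "box"/"dia" on the modal side) is an assumption of this sketch, not anything prescribed by the paper.

```python
def to_modal(f):
    """The translation α ↦ α°: each A ⇒ B becomes □(□¬A° ∨ ◇(A° ∧ □(A° ⊃ B°)))."""
    tag = f[0]
    if tag == "atom":
        return f
    if tag == "not":
        return ("not", to_modal(f[1]))
    if tag in ("imp", "and", "or"):       # binary truth-functional connectives
        return (tag, to_modal(f[1]), to_modal(f[2]))
    if tag == "cond":                     # clause 4 of the definition
        a, b = to_modal(f[1]), to_modal(f[2])
        return ("box", ("or",
                        ("box", ("not", a)),
                        ("dia", ("and", a, ("box", ("imp", a, b))))))
    raise ValueError("unknown connective: %r" % (tag,))

# A ⇒ B translates to the modal formula of axiom C:
print(to_modal(("cond", ("atom", "A"), ("atom", "B"))))
```

Running the translation on a conditional-free formula returns it unchanged, so the mapping is the identity on the purely propositional fragment, as the definition requires.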
So reasoning done with one logic can just as easily be done with the other. In this sense, the logics are equivalent. In fact, they can be viewed as definitional variants of each other.

Theorem 2 ⊢CT4 α ≡ (α°)* and ⊢S4 α ≡ (α*)°. Also, ⊢CT4 α ⊃ β iff ⊢S4 α° ⊃ β°. In other words, CT4 and S4 are equivalent.

CT4 does capture many of the properties expected of a logic of normality. For instance, each of the following sentences or rules (see, e.g., (Delgrande 1987; Lehmann 1989)) is valid in CT4:
ID A ⇒ A
CC ((A ⇒ B) ∧ (A ⇒ C)) ⊃ (A ⇒ (B ∧ C))
RT (A ⇒ B) ⊃ (((A ∧ B) ⇒ C) ⊃ (A ⇒ C))
CC′ ((A ⇒ C) ∧ (B ⇒ C)) ⊃ ((A ∨ B) ⇒ C)
RCM From ⊢ (B ⊃ C), infer (A ⇒ B) ⊃ (A ⇒ C)
CM From A ⇒ B and A ⇒ C, infer (A ∧ B) ⇒ C

Furthermore, neither of Str or Tran is valid, and the connective ⇒ is exception-allowing, as the following set is consistent in CT4:
{Bird, Bird ⇒ Fly, ¬Fly}

As well, the sentence ◇A ∧ (A ⇏ B) ∧ (A ⇏ ¬B) is satisfiable², showing that A need not normally indicate an attribute B or its negation; but the sentence ((A ⇒ B) ∧ (A ⇒ ¬B)) ⊃ ¬◇A is valid, meaning A cannot normally imply both an attribute and its negation (unless A is not possible). An interesting fact is that A ⇒ B ≡ □(A ⇒ B) is a theorem of CT4; if A normally implies B in some state of affairs, then in all less exceptional states, it should continue to hold.

While CT4 captures many aspects of normal implication, there are some theorems, intuitively valid in many circumstances, which fail to hold. For instance, the rule of Rational Monotony (Lehmann 1989), and the related axiom CV (Delgrande 1987), are not valid in CT4:
RM From A ⇒ C and A ∧ B ⇏ C, infer A ⇒ ¬B
CV (A ⇏ B) ⊃ ((A ⇒ C) ⊃ ((A ∧ ¬B) ⇒ C))

In order to discover conditional logics which validate these schemata, we will allow any system in the language LC which extends CT4 to be called a conditional logic of normality.

²Contrast this with Stalnaker's (1968) semantics, which validates the Conditional Law of Excluded Middle: (A > B) ∨ (A > ¬B).
Definition A conditional logic of normality (CLN) is any system S ⊆ LC closed under the inference rules Nec, MP and US, such that CT4 ⊆ S.

Corollary 1 Any modal system which extends S4 is equivalent to some CLN, and any CLN is equivalent to some modal system which extends S4.

Of particular interest is the system CT4D, or CT4 plus the axiom D:
D □(□A ⊃ B) ∨ □(□B ⊃ A).

CT4D was first studied as the logic E in (Boutilier 1988), and was presented there as an extension of Delgrande's (1987) system N³. Of course, the axiom D is one used to extend the modal system S4 into S4.3 (or KT4D), so it is not surprising that CT4D is characterized by the class of connected CT4-models, or that CT4D is equivalent to S4.3.

Definition M = (W, R, φ) is a CT4D-model iff M is a CT4-model and R is a connected relation (that is, if uRv and uRw, then either vRw or wRv).

Theorem 3 ⊨CT4D α iff ⊢CT4D α.

Theorem 4 CT4D and S4.3 are equivalent.

Theorem 3 is also proven in (Boutilier 1988), as is Theorem 4, which uses the mappings * and °. By insisting on a connected relation, we require that any (accessible) situations be comparable. If neither of w₁ or w₂ is more normal than the other, then they must be equally normal (rather than incomparable). CT4D validates both RM and CV, and enjoys the other properties described above as belonging to CT4. It seems to be a very suitable logic for reasoning with default and prototypical properties. Of course, it is not the only extension of CT4 which merits examination. Many other CLNs extending CT4 (and hence modal systems extending S4) may prove interesting and useful as logics of normality. For instance, consider CT4G, equivalent to S4.2, which is CT4 plus the axiom G:
G ◇□A ⊃ □◇A.
This logic, contained in CT4D, fails to validate RM but does include a weaker version of it:⁴

³CT4D extends N in that it treats sentences properly which have nested occurrences of the conditional connective, and it validates the rule CM, which N does not.
⁴T is the identically true proposition (e.g., any truth-functional tautology). T ⇒ A is interpreted as "Normally A".

WRM From A ⇒ C and A ∧ B ⇏ C, infer (A ⇒ ¬B) ∨ (T ⇒ ¬A)

The logic CT45 has as an additional axiom 5:
5 ◇A ⊃ □◇A.
Equivalent to S5, its application as a CLN appears to be somewhat limited in that normal implication reduces to strict implication in this system.

Preferential and Rational Consequence Relations
In this section, we will examine the nonmonotonic consequence relations of Kraus, Lehmann and Magidor (Lehmann 1989; Kraus, Lehmann and Magidor 1990), and show their relationship to particular CLNs. Gabbay (1985) has proposed studying nonmonotonic reasoning systems as consequence relations⁵, and this approach has been developed by Besnard (1988) and Lehmann et al. (1989; 1990), among others. The language of this approach is that of CPL together with a binary relation symbol |~. For any propositional formulae α and β, α |~ β is called a conditional assertion and is intended to mean that if α is known, one may sensibly conclude β. In (Lehmann 1989; Kraus, Lehmann and Magidor 1990), a consequence relation is defined as any binary relation R between propositional formulae for which certain properties hold. If the pair (α, β) is in R, then using this notion of consequence, one may sensibly conclude β given α, and we write α |~ β. α |≁ β means (α, β) is not in R. In particular, two types of consequence are studied in (Lehmann 1989; Kraus, Lehmann and Magidor 1990).

Families of models are proposed to characterize these notions of consequence. These models only determine the truth of conditional assertions.

Definition (Lehmann 1989) Let (X, ≺) be a poset. V ⊆ X is smooth iff for each v ∈ V, either v is minimal in V (that is, there is no z ∈ V such that z ≺ v) or there is some element w minimal in V such that w ≺ v.
Definition (Lehmann 1989) A preferential consequence relation is a consequence relation which satisfies the following rules of inference (some of which are renamed):
LLE From ⊢CPL α ≡ β and α |~ γ, infer β |~ γ
RCM From ⊢CPL α ⊃ β and γ |~ α, infer γ |~ β
ID α |~ α
And From α |~ β and α |~ γ, infer α |~ β ∧ γ
Or From α |~ γ and β |~ γ, infer α ∨ β |~ γ
CM From α |~ β and α |~ γ, infer α ∧ β |~ γ

Definition (Lehmann 1989) A rational consequence relation is a preferential consequence relation which satisfies the following rule of inference:
RM From α |~ γ and α ∧ β |≁ γ, infer α |~ ¬β

Definition (Lehmann 1989) A preferential model (P-model) M is a triple (S, φ, ≺) where S is a set (of possible worlds), φ maps propositional formulae into 2^S so as to respect the interpretation of connectives (φ(α) is the set of worlds where α holds, denoted ||α||M), and ≺ is a strict partial order on S such that for all propositional formulae α, ||α||M is smooth.

Definition (Lehmann 1989) A ranked model (R-model) is a preferential model M = (S, φ, ≺) where the relation ≺ is such that there exists a totally ordered set (T, <) and a function f : S → T, where s ≺ t iff f(s) < f(t).

Definition (Lehmann 1989) A P-model or R-model M = (S, φ, ≺) satisfies a conditional assertion α |~ β (written α |~M β) iff for any ≺-minimal s in ||α||M, s ∈ ||β||M. |~M is the consequence relation defined by M.

The following completeness results are also obtained.

Theorem 5 (Lehmann 1989) |~ is a preferential consequence relation iff it is the consequence relation defined by some P-model. |~ is a rational consequence relation iff it is the consequence relation defined by some R-model.

⁵While the original notions of consequence studied by Tarski and Scott (cf. (Czelakowski and Malinowski 1985)) require relations to be monotonic, Gabbay's (1985) proposal is to generalize these ideas by allowing nonmonotonic consequence relations, and thus provide logical foundations for nonmonotonic reasoning.
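The satisfaction clause for conditional assertions (β must hold at every ≺-minimal α-world) can be prototyped directly for finite models. The three-world model below is a made-up illustration, not one from the paper; atoms are modeled as sets of propositions true at each world.

```python
S = ["s0", "s1", "s2"]
prec = {("s0", "s1"), ("s0", "s2")}          # s0 strictly preferred (more normal)
val = {"s0": {"bird", "fly"},
       "s1": {"bird", "penguin"},
       "s2": set()}                          # hypothetical valuation

def entails(antecedent, consequent):
    """alpha |~ beta in this model: beta true at every minimal alpha-world."""
    worlds = [s for s in S if antecedent <= val[s]]          # ||alpha||
    minimal = [s for s in worlds
               if not any((t, s) in prec for t in worlds)]
    return all(consequent <= val[s] for s in minimal)

print(entails({"bird"}, {"fly"}))             # True: s0 is the minimal bird-world
print(entails({"bird", "penguin"}, {"fly"}))  # False: the minimal world is s1
```

The two calls show the failure of strengthening for |~: birds normally fly, yet penguin-birds do not, in the same model.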
The logic defined by preferential consequence relations is denoted P in (Lehmann 1989; Kraus, Lehmann and Magidor 1990). We will denote the logic of rational relations by R. An apparently discouraging result presented in (Lehmann 1989) states that an assertion α follows in P from a set of assertions KB iff α follows in R from that set. This result, however, is due to the limited language in which reasoning is done. KB ∪ {α} must contain only sentences of the form β |~ γ. In particular, one cannot assert as a premise, nor derive as a consequence, propositions or boolean combinations of assertions such as ¬(α |~ β) or (α |~ β) ∨ (α |~ γ). However, P and R can be extended in an obvious way to include this enriched language: we will allow as well-formed formulae any propositional formula, any conditional assertion formed from propositional formulae, and any boolean combination of these. In particular, only nested conditional assertions, of the form, say, α |~ (β |~ γ), are disqualified. Such well-formed formulae will be called extended conditional assertions. In order to capture reasonable inferences using this language of extended assertions, we must enhance the systems R and P to reason with propositions. P* and R* will denote the systems obtained by augmenting P and R with the axiom and rule schemata of CPL together with the axiom (¬A |~ A) ⊃ A⁶. The notions of satisfiability and validity in P-models will be adjusted as follows:

Definition Let M = (S, φ, ≺) be a P-model, and let s ∈ S. The truth of an extended conditional assertion α at s (M ⊨s α means α is true at s) is defined inductively as follows:
1. M ⊨s α iff s ∈ φ(α) for atomic sentence α.
2. M ⊨s ¬α iff M ⊭s α.
3. M ⊨s α ⊃ β iff M ⊨s β or M ⊭s α.
4. M ⊨s α |~ β iff α |~M β.

M satisfies α (M ⊨ α) iff M ⊨s α for each s ∈ S. α is P*-valid (⊨P* α) iff M ⊨ α for each P-model M. α is R*-valid (⊨R* α) iff M ⊨ α for each R-model M.
It is not hard to see that P* and R* correspond to the classes of P-models and R-models, respectively, using this extended notion of validity, and that these logics extend P and R in a very natural way. In fact, P* and R* are not much more interesting than P and R, except that they will allow us to show a correspondence between the notions of consequence described in (Lehmann 1989; Kraus, Lehmann and Magidor 1990) and CLNs. The language LC allows nested occurrences of the conditional connective, something which is forbidden in the language of extended assertions, so we will define LE to be the conditional language without such nesting. For any CLN S, S⁻ will denote S restricted to sentences of LE. To show the connection between CLNs and the notions of preferential and rational consequence, we will consider the logics P* and R* to be as before with the relation symbol |~ replaced by the connective ⇒ in every sentence of the language of extended assertions.

Theorem 6 Let α ∈ LE. ⊨P* α iff ⊢CT4⁻ α.

Theorem 7 Let α ∈ LE. ⊨R* α iff ⊢CT4D⁻ α.

The "only if" half of these theorems is easy to show by demonstrating the validity of the inference rules of P* (R*) in CT4 (CT4D). The "if" portion is proven by showing any P*-satisfiable (R*-satisfiable) sentence is satisfiable in CT4 (CT4D). The interesting case is for conditional sentences and proceeds by constructing a CT4-model (CT4D-model) which satisfies the same extended assertions (or unnested conditionals) as a particular P-model (R-model).

These theorems show that P* and R* are equivalent to the "flat" portions of CT4 and CT4D, respectively, and hence are equivalent to the "flat" portions of the modal systems S4 and S4.3⁷. This is somewhat surprising. That two independently motivated and developed

⁶CPL allows propositional reasoning and the new axiom captures the interaction between propositions and assertions.
⁷A fortiori, P and R are equivalent to even smaller fragments of these logics, those which only allow "conditional assertions" of the form α ⇒ β.

conditional logics for default reasoning should turn out to be equivalent to standard modal systems is rather unexpected.

Concluding Remarks
The framework presented for conditional logics of normality seems very general and intuitively appealing. However, its generality and applicability is reinforced by the fact that logics within the literature, while independently motivated, turn out to be equivalent to the "unnested" fragments of logics developed in this framework.

Viewing CLNs as extensions of CT4 provides a number of conceptual and practical advantages from the standpoint of default reasoning research. This perspective suggests a wide variety of conditional logics, which may determine useful interpretations of "normality". The correspondence with standard modal systems provides a widely-studied, well-developed and understood semantics for such logics. Furthermore, this relationship allows the appropriation of a host of ready-made results for these logics, results regarding axiomatizability, axiomatic bases, decision procedures and their complexity, and the like. For example, Lehmann (1989) showed that deciding whether K ⊨P α |~ β is a problem in co-NP when K is a finite set of assertions. Using the correspondence between R* and CT4D, and the fact that the problem of deciding S4.3-satisfiability is NP-complete (Ono and Nakamura 1980), we can state the following stronger result.

Corollary 2 For a finite set of extended assertions K ∪ {α}, deciding whether K ⊨R* α is in co-NP. As well, the validity problem for CT4D⁻ is in co-NP and that of CT4D is co-NP-hard.

Regarding conditional logics as CLNs not only provides a uniform basis for comparison of such logics, but also extends the sort of reasoning that can be performed using conditional logics as they typically appear in the literature.
More specifically, conditional logics, including those of (Delgrande 1987; Delgrande 1988; Lehmann 1989; Kraus, Lehmann and Magidor 1990; Nute 1984), do not allow nested occurrences of the conditional connective in the language, or do not provide an adequate semantic account of such sentences. CLNs, on the other hand, do allow such sentences, which are of some value. For example, the following sentences are theorems of CT4 and its extensions:
(A ∧ (A ⇒ B)) ⇒ B,
(A ⇒ C) ⇒ ((A ∧ B) ⇒ C).

The first sentence appears to embody a rough version of the probabilistic principle of direct inference (cf. (Bacchus 1988)), whereby the degree of belief associated with a sentence B, given that A holds, is equal to the conditional probability P(B|A). Here we do not deal with degrees of belief or numerical probabilities, but rather with acceptance or rejection of facts, assuming "normality". So when A and A ⇒ B hold, we are willing to conclude B (in normal circumstances).

The latter sentence is important when dealing with a "principle of irrelevance" (see (Delgrande 1988; Lehmann 1989; Pearl 1988)), which states that unless otherwise informed, one should assume that attributes are irrelevant to or independent of one another. This principle allows one to conclude, for instance, that yellow birds normally fly, given that birds normally fly. This inference is problematic for most logics of normality (and probabilistic logics (Pearl 1988)) and requires the meta-inference of irrelevance. This theorem of CT4 can be seen as justifying this principle as being true in the normal state of affairs, and therefore "irrelevance" (or "independence" in probabilistic terms) is just another default inference. This idea has been used to develop an account of default reasoning with conditionals based on the notion of minimal or preferred models (Boutilier 1988).
This system is similar in spirit to those of (Delgrande 1988; Lehmann 1989), except that the "supported" sentences which characterize independence are derivable from the more standard assumption of normality, whereby one concludes A ⊃ B from A ⇒ B. Thus, independence derives from theorems in the form of the second sentence above.

Several avenues for future study of CLNs remain open. One concerns weaker notions of normal implication. These may be investigated by studying logics weaker than CT4, or by allowing weaker definitions of the connective ⇒. For instance, in CT4D, A ⇒ B is equivalent to
□¬A ∨ ◇(A ∧ □(A ⊃ B)),
which is weaker than its definition for CLNs in general. This weaker notion of normal implication may be interesting in subsystems of CT4D. In CT4, for example, this definition allows A ⇒ B and A ⇒ ¬B to be consistent with ◇A, and has possibly useful interpretations. Also, the connection between the logic P and the notion of probabilistic entailment (Adams 1975; Pearl 1988) has been discussed by Lehmann (1989). These results show that probabilistic entailment is reducible to "flat" S4. This suggests a deeper, as yet unexplored, connection between probabilistic entailment and certain modal systems.

Acknowledgements
I would like to thank Ray Reiter for suggesting this avenue of research and anonymous referees for helpful advice. This work was supported by a Natural Sciences and Engineering Research Council Postgraduate Scholarship.

References
Adams, E. W. 1975. The Logic of Conditionals. D. Reidel, Dordrecht.
Bacchus, F. 1988. Representing and reasoning with uncertainty. Research Report CS-88-31, University of Waterloo.
Bell, J. 1990. The logic of nonmonotonicity. Artificial Intelligence, 41:365-374.
Besnard, P. 1988. Axiomatizations in the metatheory of nonmonotonic inference systems. In Proceedings of the Canadian Society for Computational Studies of Intelligence Conference, pages 117-124, Edmonton.
Boutilier, C. 1988.
Default reasoning with the conditional logic E. Master's thesis, University of Toronto, Toronto.
Boutilier, C. 1989. Viewing conditional logics of normality as extensions of the modal system S4. Technical report (forthcoming), University of Toronto.
Czelakowski, J. and Malinowski, G. 1985. Key notions of Tarski's methodology of deductive systems. Studia Logica, 44(4):321-351.
Delgrande, J. P. 1987. A first-order logic for prototypical properties. Artificial Intelligence, 33:105-130.
Delgrande, J. P. 1988. An approach to default reasoning based on a first-order conditional logic: Revised report. Artificial Intelligence, 36:63-90.
Gabbay, D. 1985. Theoretical foundations for non-monotonic reasoning in expert systems. In Apt, K. R., editor, Logics and Models of Concurrent Systems, pages 439-457. Springer-Verlag, Berlin.
Hughes, G. E. and Cresswell, M. J. 1984. A Companion to Modal Logic. Methuen, London.
Kraus, S., Lehmann, D., and Magidor, M. 1990. Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence. To appear.
Lehmann, D. 1989. What does a conditional knowledge base entail? In Proceedings of the Conference on Principles of Knowledge Representation and Reasoning, pages 212-222, Toronto.
Lewis, D. 1973. Counterfactuals. Blackwell, Oxford.
Nute, D. 1984. A non-monotonic logic based on conditional logic. Research Report 01-0007, University of Georgia, Athens.
Ono, H. and Nakamura, A. 1980. On the size of refutation Kripke models for some linear modal and tense logics. Studia Logica, 39(4):325-333.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo.
Reiter, R. 1987. Nonmonotonic reasoning. Annual Reviews of Computer Science, 2:147-186.
Stalnaker, R. C. 1968. A theory of conditionals. In Harper, W., Stalnaker, R., and Pearce, G., editors, Ifs, pages 41-55. D. Reidel, Dordrecht, 1981.
the Scope of Reasoning: Preliminary Report
David W. Etherington
Artificial Intelligence Principles Research Department, AT&T Bell Laboratories, Murray Hill, NJ 07974, ether@research.att.com
Sarit Kraus*
Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland, College Park, MD 20742, sarit@umiacs.umd.edu

Abstract
Existing formalisms for default reasoning capture some aspects of the nonmonotonicity of human commonsense reasoning. However, Perlis has shown that one of these formalisms, circumscription, is subject to certain counterintuitive limitations. Kraus and Perlis suggested a partial solution, but significant problems remain. In this paper, we observe that the unfortunate limitations of circumscription are even broader than Perlis originally pointed out. Moreover, these problems are not confined to circumscription; they appear to be endemic in current nonmonotonic reasoning formalisms. We develop a much more general solution than that of Kraus and Perlis, involving restricting the scope of nonmonotonic reasoning, and show that it remedies these problems in a variety of formalisms.

Introduction
The search for theories of nonmonotonic reasoning, that is, theories of how to reach reasonable conclusions that are not strictly entailed by what is known, and hence are subject to retraction, has yielded many promising formal systems. While these formalisms provide many useful insights, each has some persistent problems that have, thus far, resisted solution. In many naturally-occurring cases, the straightforward encoding of a situation either leads these commonsense-reasoning formalisms to quite unintuitive conclusions or prevents the derivation of intuitively-obvious conclusions.

We discuss several such significant problems, and show their manifestations in each of the major formalisms. We argue that these problems are actually aspects of a single, more general, problem, having more

*Supported in part by ARO research contract no.
DAAL03-88-K0087.
Donald Perlis*
Department of Computer Science, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, perlis@cs.umd.edu

to do with the underlying understanding of the function of nonmonotonic reasoning than with the particular details of existing frameworks. We then show that a simple idea, simple in its realization, solves these problems. This not only greatly enhances the usefulness of the theories, but seems to bring them into much closer harmony with an intuitive understanding of commonsense reasoning.

"Paradoxes" of Nonmonotonic Reasoning
A study of the problems with existing theories of nonmonotonic reasoning (Default Logic [Reiter 1980b], Circumscription [McCarthy 1980; 1986], and Autoepistemic Logic [Moore 1985]) presupposes at least some familiarity with those formalisms. Space limitations preclude reintroducing the formalisms here; the unfamiliar reader is referred to [Etherington 1988] for a detailed introduction. Familiarity with the basics of nonmonotonic reasoning should suffice for most purposes in this paper.

Different variants of these formalisms have been studied for many years. For most of that time it was believed that they (or at least some of them) captured the essential ideas of nonmonotonic reasoning and that it would only be a matter of time before they could be adapted to practical reasoning systems. Recently, problems have been noticed that seem to shake these optimistic projections. Some of these, such as the "Yale Shooting Problem" [Hanks and McDermott 1986], seem more indicative of the difficulty of adequately axiomatizing even a relatively simple world; others seem more paradoxical, since the formalisms' basic mechanisms block the conclusions they were, intuitively, designed to produce.

We briefly recount four such "paradoxes" of nonmonotonic reasoning, and show how they affect the various formalisms. We then argue that the observed problems can be viewed as stemming from a common root: a misapprehension, common to all the approaches, of the principles underlying this type of reasoning. Once identified, this deficiency is readily corrected with simple tools whose benefits, we believe, easily outweigh their cost.

The Lottery Paradox
The first problematic example is the "Lottery Paradox" [Kyburg 1961; Perlis 1986]. The lottery paradox arises in situations in which the conjunction of a set of assumptions, each reasonable individually, is inconsistent with what is known about the world. For example, in the paradigmatic case, it is usually safe to assume that any particular ticket in a lottery will not win, given the overwhelming odds against it. Assuming the lottery is "fair", however, the conjunction of such an assumption for each ticket with the fact that some ticket must win is inconsistent.¹

To maintain consistency, some (or all) of the assumptions about tickets not winning must be foregone. Since there is no basis for determining which assumptions to forego, however, any is as good as any other, and none are unequivocally sanctioned. There are as many preferred models (or extensions) as there are tickets, each with a different ticket chosen as the winner. Since nonmonotonic formalisms generally license conjectures based on what is true in all preferred models (extensions), nothing can be assumed about the individual tickets. The most that can be assumed is that if some particular ticket wins, it will be the only one.

¹We assume the set of tickets is fixed and finite. Other, related, problems arise if not.

Counterexample Axioms
Problems also occur when there are counterexample axioms [Perlis 1986] that assert that there are exceptions to defaults. Counterexample axioms specify the existence of individuals lacking some default property, without specifying their identities. For example, given the "birds fly" default, a counterexample axiom might look like ∃x. Bird(x) ∧ ¬Flies(x). Circumscription has trouble with such axioms because it stipulates that there are as few exceptions as possible, without necessarily determining which individuals are exceptional. Thus, any of a number of individuals might be exceptional without changing the number of exceptions. For example, if we minimize the set of flightless birds in the theory {Bird(Tweety), ∃x. Bird(x) ∧ ¬Flies(x)}, we cannot conjecture that Tweety flies, since there is a minimal model in which Tweety is the only bird, and hence the flightless bird stipulated by the counterexample axiom. Even if we posit the existence of other birds different from Tweety,² circumscription has no way to prefer Tweety's flying to that of any other bird.

The obvious patch is to try to somehow distinguish Tweety from the existentially-specified flightless bird, for instance by naming the latter (say Opus), and replacing the original counterexample axiom by a Skolemized version such as: Bird(Opus) ∧ ¬Flies(Opus). However, Flies(Tweety) still does not follow by circumscription unless the further axiom that Tweety ≠ Opus is adopted. But this amounts to assuming that Tweety is not the exceptional bird, which seems to obviate the circumscription.

Default logic and autoepistemic logic are less susceptible to counterexample axioms, since their conclusions can affect the ontology, but they are not immune. The peculiar conclusions that sometimes arise, especially in the context of domain closure axioms³ or axioms restricting the reference class (e.g., Bird), are discussed in [Etherington et al. 1990].

Everything is Abnormal
Yet another inappropriate result occurs when there are defaults describing the typical values of a variety of (possibly orthogonal) properties for some class.
If that class consists of several subclasses, each but one of which is atypical with respect to a different property, then current nonmonotonic formalisms will conjecture that individuals known to belong to the class must belong to the completely typical subclass [Poole 1989]. For example, imagine that birds typically fly, sing, are drab, and build nests, except that penguins don't fly, swans don't sing, and mynahs don't build nests. Now if birds must be penguins, swans, mynahs, or canaries, default reasoners of the type envisioned in the literature will assume that arbitrary birds are canaries, in order to minimize the violation of defaults!

Even more counterintuitively, if it turns out that all subclasses are atypical (e.g., canaries are found to be abnormal by virtue of being brightly coloured), then nonmonotonic formalisms will suddenly no longer be able to make any normality conjectures: different atypicalities will hold in different minimal models (extensions); the theory entails that some abnormality holds in each. Thus, e.g., learning that canaries are not drab blocks the assumption that Tweety flies.

Poole [1989] and others have noticed that situations in which everything is abnormal in some way occur frequently in practice. This suggests that the problem is not an isolated baroque instance where the formalisms do not perform well but is, rather, symptomatic of fundamental difficulties.

² Since circumscription cannot generate new equality facts without resorting to variable terms [Etherington et al. 1985], explicit inequalities are needed to rule out models where only Tweety is a bird, but she goes by various aliases.

³ A domain-closure axiom (DCA) [Reiter 1980a] is a formula of the form ∀x. x = t1 ∨ ... ∨ x = tn, for some set of ground terms, t1, ..., tn.

ETHERINGTON ET AL.
There's Nobody Here But Us Chickens⁴

Another counterintuitive aspect of some nonmonotonic formalisms is that, in their efforts to maximize typicality, they conjecture that exceptional classes are empty. Since belonging to an exceptional class entails violating a default, they naturally infer that exceptional classes have as few members as possible. This is both reasonable and nonsensical: reasonable because default reasoning does seem to involve assuming things are as normal as possible; nonsensical because the assumption that some object of interest is typical should not necessarily rest on the absence of atypicality elsewhere in the world.

Circumscription is particularly susceptible, explicitly stating that there is no less exceptional world than this; its semantics explicitly prefers those models where all exceptions are forced. For example, if we are told that penguins are flightless birds, that birds normally fly, and that Tweety is a bird:

∀x. Penguin(x) ⊃ Bird(x) ∧ ¬Flies(x)
∀x. Bird(x) ∧ ¬Abnormal(x) ⊃ Flies(x)
Bird(Tweety)

and minimize the set of abnormal individuals, we conclude that Tweety flies, and hence is not a penguin, but also that there are no penguins! Conversely, if the objection to this conclusion is made explicit, by asserting ∃x. Penguin(x), the enriched theory implies the counterexample axiom, ∃x. Bird(x) ∧ ¬Flies(x), and Flies(Tweety) is no longer conjectured.

The obvious answer, including Penguin in the set of fixed predicates, prevents the conclusion that there are no penguins, at the expense of the ability to conclude that Tweety flies. With Penguin fixed, the strongest conjecture that can be made about Tweety is that she flies unless she is a penguin, which seems unsatisfactory.

The problem is more subtle in default logic, since the effects of default reasoning are conditioned by the provability of the prerequisites of defaults, and the form of the default plays a greater role.
For example, the default

Bird(x) : Flies(x) / Flies(x)

will sanction the conjecture that none of the known birds are penguins, but not that there are no penguins at all. The former seems more innocuous, although perhaps less so as the number of known birds becomes very large. If all birds are known, the conclusion that there are no penguins follows. Other popular default representations (e.g., "abnormality" theories) can exaggerate the problem. This is discussed in detail in [Etherington et al. 1990], where similar problems with autoepistemic logic are also outlined.

⁴ Or whatever class of birds is quintessentially prototypical.

A Common Thread

Each of the above difficulties with existing theories of nonmonotonic reasoning can be attributed to a single cause: overzealousness. In the attempt to capture default reasoning, a subtle twist has been introduced. The commonsense notion that such reasoning is essentially the elimination of unforced abnormalities has become the notion of the introduction of forced normalities.

Assumptions are necessary in everyday reasoning because what follows from what we know about the world leaves too many questions undecided. Paradoxically, the mechanisms developed to redress this shortcoming leave too few questions undecided. Using such tools to decide whether Tweety flies is akin to cracking walnuts with a cannon: not only are there likely to be undesired side-effects, but the meat of the matter may be much harder to find among the irrelevant fragments.

We frequently know that there are exceptional individuals without knowing who they are. If defaults are applied injudiciously, paradoxes are bound to arise, yet paradoxes rarely arise in people's default reasoning. It seems clear that defaults are usually not broadly applied. The directed nature of reasoning seems to have been ignored. We contend that the intention of default reasoning is generally not to determine the properties of every individual in the domain, but rather those of some particular individual(s) of interest. Incorporating uncertain beliefs into a belief system when those beliefs are not of direct interest is likely to be counterproductive, simply increasing the probability that some beliefs will have to be retracted.

Reconsider the paradoxes discussed above. In each case, problems arise because something atypical must exist and default reasoning might encompass it. In the case of the lottery paradox, by considering the fate of every ticket, we face the problem that some ticket must win, giving rise to numerous "preferred" models. If we could reason about only the small set of tickets we might consider buying, there would be no problem with assuming that none of them would win, and we would find ourselves safely past the lottery vendor. Similarly, faced with a counterexample axiom, so long as there was no expectation that the posited counterexample was among the individuals of interest, one could make assumptions about the interesting cases without wrestling with the identity of the counterexample. Analogously, when everything is abnormal in some aspect or other, it should be possible to reason about a few aspects of interest, and ignore all the others. Finally, when the scope of interest does not cover whole domains, conjectures to the effect that atypical classes are empty would not arise.

The risk associated with making any particular conjecture on the basis that it is supported by all extensions of a scoped theory is generally higher than the corresponding risk for standard default reasoning.

602 KNOWLEDGE REPRESENTATION

However, provided the scope of interest is sufficiently narrow vis-à-vis the antecedent class(es) for the defaults, the risk does not seem disproportionate to that of doing default reasoning in the first place.
Intuitively, since fewer substantive default conclusions are made, it is reasonable to believe that the net result is more probable. Of course, if the scope is too broad, or there is evidence that exceptional cases are within the scope, the advisability of making assumptions decreases proportionally.

Scope in Nonmonotonic Reasoning

At the conceptual level, then, it is clear that making the default reasoning processes dependent on the scope of interest enables intuitively-desirable conclusions in otherwise intransigent cases. We next show that this can be done easily for the existing formalisms, that more powerful conjectures obtain, and that appropriate notions of consistency are preserved.

As a methodological point, we require that the scope of reasoning be narrow. We do not attempt to define or enforce this, beyond noting that the scope of interest should not include a "significant fraction" of whatever reference class we are drawing default conclusions about. Our approach to limiting the scope of reasoning ensures, however, that even when this requirement is violated, performance and consistency will be at least as good as that of the unscoped approaches.

The technical requirements for limiting the scope of default reasoning are methodological rather than structural. The contribution of this work is not sophisticated new versions of the formalisms; developing yet another nonmonotonic formalism is unnecessary. The important result is that a simple, uniform, representational technique provides significant leverage on a variety of problems across a variety of formalisms.

Scoped Circumscription

Circumscription can accommodate scope by minimizing only within the extent of a predicate representing the scope of interest.
Specifically, we minimize W[P, y] ∧ Scope(y) rather than just W[P, y], resulting in the scoped circumscription schema, CIRC_Scope:⁵

A[P'] ∧ [∀y. W[P', y] ∧ Scope'(y) → W[P, y] ∧ Scope(y)]
    → [∀y. W[P, y] ∧ Scope(y) → W[P', y] ∧ Scope'(y)].

Scoped circumscription overcomes many of the limitations of its unscoped counterpart. For example, it provides a solution to the counterexample problem. Given a nontrivial domain with Tweety in the scope of concern, it is possible to conclude that Tweety flies from Bird(Tweety), despite the presence of a counterexample axiom. To see this, consider the following axioms, A[Scope, Bird, Flies]:

Bird(Tweety)
∃x. Bird(x) ∧ ¬Flies(x)
Charlie ≠ Tweety
Scope(Tweety).

⁵ Notice that this is not a new form of circumscription. Rather, the circumscription is made relative to the Scope predicate. This approach can be used independently of which major variant of circumscription is chosen.

We introduce Charlie here to ensure an ontology rich enough to allow the formation of various interpretations. In particular, we need an object other than Tweety that we can at least imagine to be a potential flightless bird, to let Tweety off the hook. However, Charlie's role as "scapebird" is quite limited: we do not conclude ¬Flies(Charlie) nor even Bird(Charlie). It would even suffice to have simply ∃x. x ≠ Tweety instead of Charlie ≠ Tweety. Since, in general, we expect any realistic ontology to provide many individuals, this requirement presents no particular hardship.

From A and CIRC_Scope, with W[Bird, Flies, y] being ¬Flies(y), Flies(Tweety) follows. The necessary substitutions are x = x for Bird'(x), and x = Tweety for Scope'(x) and Flies'(x). We get ∀x. Flies(x) ∨ ¬Scope(x) (all non-fliers are outside the scope of reasoning) and so Flies(Tweety), since we have Scope(Tweety).

More generally, even given some known scoped exceptions, scoped circumscription can frequently preclude unknown exceptions in the scope, as the following theorem shows.

Theorem 1 If A ⊢ W(P, αi) ∧ Scope(αi), for ground terms αi ∈ {α1, ..., αn}, and no consistent extension of A by ground (in)equalities entails ∃x. x ≠ α1 ∧ ... ∧ x ≠ αn ∧ W(P, x) ∧ Scope(x), then CIRC_Scope[A] ⊢ ∀x. [x ≠ α1 ∧ ... ∧ x ≠ αn ∧ Scope(x)] ⊃ ¬W(P, x), provided all predicates are variable, and A entails a domain-closure axiom.

It is easily seen that scoped circumscription is similarly effective in the other paradoxical cases.

As the example just above shows, the restrictions in Theorem 1 are stronger than necessary. Essentially, what is required is an ontology with "enough" distinct individuals, but in which exceptions and the scope do not depend on the ontology of the model. Thus, for example, the result cannot be generalized to cover theories such as:

a ≠ b
[∀x. x = a ∨ x = b] ⊃ P(a) ∧ Scope(a),

where P is to be minimized, since, in models with domain {a, b}, a must be a scoped exception, even though a need not be exceptional (nor scoped) in general. The need for "domain independence", captured in the conditions imposed on equality in the theorem, is a consequence of circumscription's inability (without use of variable terms) to produce conjectures entailing new facts about the ontology [Etherington 1988]. It may be possible to relax this requirement by allowing variable terms, or using "Equality Circumscription" [Rathmann and Winslett 1989]. This remains to be investigated.

Although the necessary conditions for effective scoped circumscription are difficult to make precise, the problematic cases do not seem particularly troublesome. It seems likely that a realistic theory of a reasonably-complex problem domain will have an abundance of individuals known to be distinct from those known to be in the scope. Similarly, predicating exceptionalness on what exists or what things are identical seems inappropriate for commonsense theories.
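Under a fixed finite domain, the effect of the scoped minimization can likewise be illustrated by brute force. The following Python sketch is a minimal-model approximation of the schema (an assumption, not the schema itself), over the hypothetical {Tweety, Charlie} ontology: it minimizes the scoped exceptions ¬Flies(y) ∧ Scope(y) with Bird, Flies, and Scope all variable, and confirms that Flies(Tweety) now holds in every minimal model.

```python
from itertools import combinations

DOMAIN = ["Tweety", "Charlie"]  # hypothetical two-object ontology

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Models of: Bird(Tweety), Ex. Bird(x) & ~Flies(x), Scope(Tweety).
models = [(bird, flies, scope)
          for bird in subsets(DOMAIN)
          for flies in subsets(DOMAIN)
          for scope in subsets(DOMAIN)
          if "Tweety" in bird
          and any(x not in flies for x in bird)
          and "Tweety" in scope]

def scoped_exceptions(m):
    # the minimized set: individuals satisfying ~Flies(y) & Scope(y)
    _, flies, scope = m
    return frozenset(x for x in scope if x not in flies)

minimal = [m for m in models
           if not any(scoped_exceptions(n) < scoped_exceptions(m)
                      for n in models)]

# Flies(Tweety) holds in every minimal model, despite the counterexample axiom.
print(all("Tweety" in flies for _, flies, _ in minimal))  # True
```

Charlie plays exactly the "scapebird" role described above: the minimal models push the stipulated flightless bird outside the scope, so no scoped individual is exceptional.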
It is crucial that the theory not entail that the unknown exceptional individuals claimed to exist are also in the scope; otherwise the problem resurfaces. We argue that it is unreasonable for an agent to use a default while believing that an anonymous object of concern is a counterexample to that default. Notice that there is no problem, however, in believing that there are known exceptions in the scope (e.g., Bird(Opus) ∧ ¬Flies(Opus) ∧ Scope(Opus)).

Is scoped circumscription consistent, however? This question is important because inconsistency has plagued certain applications of circumscription [Etherington et al. 1985]. Etherington [1988] shows that theories without existential quantifiers have consistent circumscriptions, but counterexample axioms take us out from under this umbrella of safety. Nevertheless, scoped circumscription is consistent, regardless of the form of the original theory, provided the scope is finite.

Theorem 2 If A has a model in which Scope is finite, then CIRC_Scope[A] is consistent.

We consider other cases that are "well-behaved", and what can be said about them, in [Etherington et al. 1990].

Scoped Default Logic

The greater expressive power of default logic [Etherington 1987] means there are many more candidate methods for restricting the scope of reasoning in default logic than were available in circumscription. In [Etherington et al. 1990], we study a variety of possibilities, and compare their representational power. Here, we restrict our attention to one particular representation, and say the scoped representation of a normal default, αx : βx / βx, is

αx ∧ Scope(x) : βx / βx.⁶

The latter default says that individuals known to be α's in the scope can be assumed to be β's.

⁶ α and β may be arbitrary formulae in which x occurs free.

The introduction of scope to default logic is sufficient to circumvent the lottery paradox, as the following example shows.
Imagine a lottery with 10,000 tickets, t1, ..., t10,000, and imagine we are considering buying one of the tickets t100, ..., t175, available at the corner store. This corresponds to the theory with the axioms:

∀t. Ticket(t) ≡ t = t1 ∨ ... ∨ t = t10,000
Scope(t100), ..., Scope(t175)
∃t. Ticket(t) ∧ Wins(t)

and the default:

Ticket(t) ∧ Scope(t) : ¬Wins(t) / ¬Wins(t).

This theory has a unique extension in which ¬Wins(t100), ..., ¬Wins(t175), but the fate of the remaining tickets is undecided. Conversely, the unscoped theory has 10,000 extensions, including 76 in which one of the tickets of interest wins.

It is no accident that the desired result holds; we have proved that ground terms in the scope are conjectured to be unexceptional whenever possible.

Theorem 3 If D = { Φx ∧ Scope(x) : Ψx / Ψx } and W ⊬ ∃x. Φx ∧ ¬Ψx ∧ Scope(x), then any extension, E, for Δ = (D, W) has no scoped exceptions. Specifically, if E ⊢ Φα ∧ Scope(α) then E ⊢ Ψα, for any ground term, α. ∎

Analogous results hold for the other representations, and the results generalize to cases where there are known exceptions, and/or multiple defaults. It can be shown, in many cases, that every extension of the scope-limited theory is a subset of an extension of the unscoped version. These results are comforting, since they mean that narrowly-scoped reasoning does not lead in directions that would be rejected as unreasonable if the scope of reasoning were broader.

Theorem 4 Let Δ = (D, W) be a normal default theory, and let D' be the result of replacing each default in D with its scoped counterpart. Then every extension of Δ' = (D', W) is contained in an extension of Δ.

Scoped Autoepistemic Logic

Unscoped reasoning also presents problems in autoepistemic logic which are ameliorated by restricting the scope of reasoning.
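For normal defaults of this simple form, each extension corresponds to a maximal set of ¬Wins(t) assumptions consistent with the fact that some ticket wins. A scaled-down Python sketch (10 tickets rather than 10,000; the ticket numbering and scope are illustrative assumptions) shows the scoped theory has one extension while the unscoped theory has one per ticket.

```python
from itertools import combinations

N = 10                      # scaled-down lottery; the paper's example uses 10,000
TICKETS = list(range(N))
SCOPE = set(range(2, 5))    # the few tickets we might actually buy

def consistent(assumed_losers):
    # "Some ticket wins" is violated iff we assume EVERY ticket loses.
    return set(assumed_losers) != set(TICKETS)

def extensions(candidates):
    """Maximal consistent sets of ~Wins(t) assumptions drawn from candidates."""
    for r in range(len(candidates), -1, -1):
        exts = [set(s) for s in combinations(sorted(candidates), r)
                if consistent(s)]
        if exts:
            return exts     # all maximal sets have the same (largest) size here

unscoped = extensions(TICKETS)  # one extension per possible winner
scoped = extensions(SCOPE)      # a single extension

print(len(unscoped))        # 10
print(len(scoped))          # 1
print(scoped[0] == SCOPE)   # True
```

The unscoped theory leaves the agent paralyzed (any candidate ticket wins in some extension); the scoped theory sanctions ¬Wins(t) for exactly the tickets of interest.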
However, since a fully-quantificational first-order autoepistemic logic has not yet been formalized (but see [Konolige 1988; Levesque 1987] for suggestions), we restrict our discussion to a propositional version that approximates quantification by grounding variables over a closed domain.

First, consider the Lottery Paradox again. As in the previous section, suppose we have 10,000 lottery tickets, and wish to buy one among the 76 from t100, ..., t175. The only change required for autoepistemic logic is that, instead of a default rule, we use the schema:

LTicket(t) ∧ LScope(t) ∧ ¬LWins(t) → ¬Wins(t)

where t ranges over the 10,000 ticket constants. In such a case, as with default logic, we get only one extension, in which we have ¬Wins(t) for the 76 scoped tickets but not for the rest.

To see how multiple, orthogonal, properties can be handled in this framework, suppose there are only three kinds of bird, Canary, Mynah, and Penguin, and that canaries are typical but mynahs and penguins are not, since mynahs do not build nests and penguins do not fly, as in [Poole 1989]:

∀x. Mynah(x) ⊃ ¬Nests(x)
∀x. Penguin(x) ⊃ ¬Flies(x)
∀x. Bird(x) ≡ Mynah(x) ∨ Penguin(x) ∨ Canary(x).

We certainly do not want to conclude that all birds are canaries, although that is the result of straightforward application of autoepistemic logic. Specifically, from the above axioms and the defaults that birds typically fly and build nests:

LBird(b) ∧ ¬L¬Flies(b) → Flies(b)
LBird(b) ∧ ¬L¬Nests(b) → Nests(b)

(where again b ranges over the finite set of constants), we get that there are no mynahs or penguins, i.e., all birds are canaries. Scope can help if we employ the two scope-limited schemata:

LBird(b) ∧ LScope(b) ∧ LScope(flying) ∧ ¬L¬Flies(b) → Flies(b)
LBird(b) ∧ LScope(b) ∧ LScope(nesting) ∧ ¬L¬Nests(b) → Nests(b).
The new constants, flying and nesting, represent particular aspects of the descriptions of birds that might be of interest at a particular time (see, for example, [McCarthy 1986]). Provided scope is narrow and includes nesting and flying, there will be only one extension, in which all scoped birds are canaries, but unscoped birds are indeterminate as to species (as well as flying and nesting behaviours). If Scope only includes flying, we conclude that birds in the scope fly and are not penguins, but remain agnostic on their nesting behaviour.

The examples suggest that scoped autoepistemic reasoning provides an intuitively-plausible solution to the paradoxes. Obviously, general results would be better, even if based on strong restrictions. For the case of strongly-grounded autoepistemic extensions (see [Konolige 1988]), we can provide such results. We begin with a sufficiency result.

Theorem 5 If W entails a domain closure axiom, W ⊬ ∃x. Φx ∧ ¬Ψx ∧ Scope(x), and D = {LΦ(c) ∧ LScope(c) ∧ ¬L¬Ψ(c) ⊃ Ψ(c)} is a schema over all the constants, c, of W, then no strongly grounded autoepistemic extension of W ∪ D contains any scoped exceptions.

The obvious generalizations to n-ary predicates and multiple scope terms follow directly. Similarly, we get a consistency result analogous to Theorem 4.

Theorem 6 Suppose W entails a DCA and is L-free, and D consists of schemata of the form LΦ(c) ∧ ¬L¬Ψ(c) ⊃ Ψ(c). Let D' be the result of replacing each schema in D by LΦ(c) ∧ LScope(c) ∧ ¬L¬Ψ(c) ⊃ Ψ(c). Then the L-free subtheory of any strongly-grounded autoepistemic extension of W ∪ D' is contained in the L-free subtheory of a strongly-grounded autoepistemic extension of W ∪ D.

These results are not as broad as those above for circumscription or default logic; however, they suggest the same trend, indicating that a Scope predicate can be useful in treating the "paradoxes" of overzealousness (forced normalities) surveyed above.
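The behaviour of the two scope-limited schemata can be mimicked with a toy computation. The Python sketch below is a simplification under assumed semantics, not a genuine stable-expansion construction: for a single scoped bird b it adds each default property whenever doing so stays jointly satisfiable with the hard facts, then reports which species remain possible.

```python
SPECIES = ["Canary", "Mynah", "Penguin"]          # the closed set of bird kinds
ASPECTS = {"flying": "Flies", "nesting": "Nests"}  # aspect constants -> properties

def holds(prop, species):
    # hard facts from the example: penguins don't fly, mynahs don't build nests
    if prop == "Flies":
        return species != "Penguin"
    return species != "Mynah"   # prop == "Nests"

def expansion(scope):
    """Default conclusions about one scoped bird b, aspect by aspect."""
    assumed = []
    for aspect, prop in ASPECTS.items():
        # LScope(aspect) gates the schema; ~L~prop(b) requires joint satisfiability
        if aspect in scope and any(all(holds(p, s) for p in assumed + [prop])
                                   for s in SPECIES):
            assumed.append(prop)
    possible = [s for s in SPECIES if all(holds(p, s) for p in assumed)]
    return assumed, possible

print(expansion({"flying", "nesting"}))   # (['Flies', 'Nests'], ['Canary'])
print(expansion({"flying"}))              # (['Flies'], ['Canary', 'Mynah'])
```

With both aspects in scope, b is forced to be a canary; with only flying in scope, b is known not to be a penguin but its nesting behaviour, and hence its species, stays open, matching the text above.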
Related Work

Kraus and Perlis [1988] suggest restricting default reasoning to "named" individuals (individuals for whom the reasoner has a standard name) in order to solve the counterexample problem in a particular variant of circumscription. This approach does not seem to generalize to the other problems we addressed here, nor has it been worked out for the other formalisms we treat. Furthermore, the notion of limiting the scope of reasoning seems to be more flexible and intuitive than that of restricting reasoning to named individuals.

Poole's [1988] THEORIST system provides for goal-directed default reasoning by searching for explanations for goals. An explanation consists of a set of defaults which are mutually consistent with the known facts and jointly entail the goal. In paradoxical situations such as those we have discussed, however, THEORIST can generally explain both a goal (e.g., ¬Wins(ticket1)) and its negation (Wins(ticket1)), depending on which defaults it chooses to apply. Based on the correspondence between THEORIST's defaults and those of default logic [Poole 1988], it appears that our notion of scope can be added directly to THEORIST, providing both more tightly focused reasoning and an alternative to paradox. Similarly, Ginsberg's [1988] circumscriptive theorem prover provides facilities for goal-directed nonmonotonic reasoning, but the conclusions it reaches are circumscriptively sound and hence subject to paradox. It seems, therefore, that Ginsberg's system might also benefit from our approach.

Conclusions and Future Work

We have pointed out common roots underlying four significant problems with existing approaches to nonmonotonic reasoning. We showed that these problems visit all the major current approaches, and argued that they were real impediments to using these formalisms for commonsense reasoning.
We then introduced the idea of restricting the scope of reasoning, providing powerful leverage on the problems. This idea has direct application in the variants of circumscription, in default logic, and in autoepistemic logic; it is similarly effective in each. Even more satisfying, we showed that what is required to achieve these benefits involves simple methodological changes, rather than development of new formalisms or new variants of existing formalisms.

We outlined how restricting the scope of nonmonotonic reasoning provides acceptable, commonsensical, solutions to the problems in question. These include the lottery paradox, the problem of anonymous exceptions to defaults, the problems arising when almost everything is atypical in some respect, and the tendency to conjecture typicality by rejecting the existence of atypicality.

For the formalisms in question, we showed that the conclusions sanctioned by our strengthened, scope-limited, approach are generally in accord with (some subset of) the preferred models of the original theory. This is comforting, since it means that we have strengthened the theories, rather than simply subverting them. We also showed that appropriate notions of consistency are preserved.

Our framework not only avoids paradox, but also adapts naturally to goal-directed reasoning. Assumptions are sanctioned only about objects of interest; this appears to be much more natural than current maximal-consistent-set approaches. This focussing offers promise for the development of practical nonmonotonic reasoning systems.

The most obvious outstanding question concerns the nature of the scope theory. Ideally, it should be possible to determine scope from the current context, attention, and goals of the agent, although we have not yet worked on this. Among other things, we imagine that the individuals mentioned in a query or goal statement, or attended to as the result of recent discourse or experience, will be scoped.
We suspect, too, that work such as [Halpern and Rabin 1987], [Halpern and McAllester 1989], [Halpern and Moses 1984], [Drapkin et al. 1987], and [Nutter 1983] will be relevant. In particular, the notion of an awareness set seems to have a similar spirit. We imagine "scope" to be slightly different, however: more like "of concern" or "relevant to making a decision". In this respect, it is encouraging that the approach seems robust enough to tolerate fairly gross determinations of scope.

In this paper, we have skirted some of the difficult issues of equality and domain closure that face theories of nonmonotonic reasoning. Some of these are taken up in [Etherington et al. 1990]; in particular, we consider the effects of various ontological assumptions on the relationship between scoped and unscoped nonmonotonic reasoning. Much work, however, remains to be done in pursuit of a commonsense theory of ontology.

Acknowledgements

We thank Kurt Konolige and Matt Ginsberg for helpful discussions and useful comments about this work. Kurt independently observed that changes to the circumscription schema employed in an early draft were unnecessary.

References

Drapkin, J., Miller, M., and Perlis, D. 1987. Life on a desert island. In Proc. Workshop on The Frame Problem in Artificial Intelligence, pages 349-357. American Association for Artificial Intelligence.

Etherington, D. W., Mercer, R. E., and Reiter, R. 1985. On the adequacy of predicate circumscription for closed-world reasoning. Computational Intelligence, 1:11-15.

Etherington, D. W., Kraus, S., and Perlis, D. 1990. Nonmonotonicity and the scope of reasoning. Technical Report UMIACS TR-90-56 and CS-TR-2457, University of Maryland. Submitted for publication.

Etherington, D. W. 1987. Relating default logic and circumscription. In Proc. Tenth Int'l Joint Conf. on Artificial Intelligence, pages 489-494, Milan, Italy. Int'l Joint Conferences on Artificial Intelligence, Inc.

Etherington, D. W. 1988.
Reasoning with Incomplete Information. Research Notes in Artificial Intelligence. Pitman Publishing, London, GB.

Ginsberg, M. L. 1988. A circumscriptive theorem prover: Preliminary report. In Proc. Seventh National Conference on Artificial Intelligence, pages 470-474, St. Paul, MN. American Association for Artificial Intelligence.

Halpern, J. Y. and McAllester, D. A. 1989. Likelihood, probability and knowledge. Computational Intelligence, 5:151-160.

Halpern, J. Y. and Moses, Y. 1984. Towards a theory of knowledge and ignorance: preliminary report. In Proc. Workshop on Nonmonotonic Reasoning, pages 125-143, New Paltz, NY. American Association for Artificial Intelligence.

Halpern, J. Y. and Rabin, M. O. 1987. A logic to reason about likelihood. Artificial Intelligence, 32:379-405.

Hanks, S. and McDermott, D. 1986. Default reasoning, nonmonotonic logic, and the frame problem. In Proc. Fifth National Conference on Artificial Intelligence, pages 328-333, Philadelphia, PA. American Association for Artificial Intelligence.

Konolige, K. 1988. On the relation between default and autoepistemic logic. Artificial Intelligence, 35:343-382.

Kraus, S. and Perlis, D. 1988. Names and nonmonotonicity. Technical Report UMIACS TR-88-84 and CS-TR-2140, University of Maryland.

Kyburg, H. 1961. Probability and the Logic of Rational Belief. Wesleyan University Press, Middletown, CT.

Levesque, H. 1987. All I know: An abridged report. In Proc. Sixth National Conference on Artificial Intelligence, pages 426-431, Seattle, WA. American Association for Artificial Intelligence.

McCarthy, J. 1980. Circumscription: a form of non-monotonic reasoning. Artificial Intelligence, 13:27-39.

McCarthy, J. 1986. Applications of circumscription to formalizing common-sense knowledge. Artificial Intelligence, 28:89-118.

Moore, R. 1985. Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25:75-94.

Nutter, J. T. 1983.
Default reasoning using monotonic logic: a modest proposal. In Proc. Third National Conference on Artificial Intelligence, pages 297-300. American Association for Artificial Intelligence.

Perlis, D. 1986. On the consistency of commonsense reasoning. Computational Intelligence, 2:180-190.

Poole, D. 1988. A logical framework for default reasoning. Artificial Intelligence, 36:27-47.

Poole, D. 1989. What the lottery paradox tells us about nonmonotonic reasoning. In Proc. First Int'l Conf. on Principles of Knowledge Representation and Reasoning, pages 333-340.

Rathmann, P. K. and Winslett, M. 1989. Circumscribing equality. In Proc. Eleventh Int'l Joint Conf. on Artificial Intelligence, pages 468-473, Detroit, MI. Int'l Joint Conferences on Artificial Intelligence, Inc.

Reiter, R. 1980a. Equality and domain closure in first-order databases. J. ACM, 27:235-249.

Reiter, R. 1980b. A logic for default reasoning. Artificial Intelligence, 13:81-132.
The Representation of Defaults in Cyc

Ramanathan V. Guha
MCC
3500 West Balcones Center Drive
Austin, Texas, 78759-6509
guha@mcc.com

Abstract

This paper provides an account of the representation of defaults in Cyc and their semantics in terms of first order logic with reification. Default reasoning is a complex thing, and we have found it beneficial to separate various complex issues whose "current best solution" is likely to change now and then (such as deciding between extensions, preferring one default to another, etc.) and deal with them explicitly in the knowledge base, thus allowing us to adopt a simple (and hopefully fixed) logical mechanism to handle the basic non-monotonicity itself. We also briefly describe how this default reasoning scheme is implemented in Cyc.

Background

The Cyc project (3) is an effort aimed at building a large common sense knowledge base. CycL is the language in which the Cyc KB is encoded. Since much of common sense knowledge is default in nature, it is important for CycL to provide facilities for expressing defaults. This paper describes the scheme used to do default reasoning in Cyc.

We are trying to build a Knowledge Base (KB) that can be used by a number of programs, and it is important for us to be able to provide an account of the contents of the KB in a language with clean and simple semantics. We are also interested in providing certain inferential services with the KB and would like these to be efficient. Various special purpose constructs for dealing with common cases and special inference procedures (with associated special representations, etc.) are used to improve the efficiency of the inference mechanism. However, special constructs and domain specific inference procedures make the task of giving an account of the contents of the KB in a simple language very difficult.
Since these two requirements, simplicity and effi- ciency, are hard to obtain in a single language, we have divided CycL into two levels, one for obtaining each of these goals. The Epistemological Level (EL) is meant for communicating the contents of the KB to humans and other programs and attempts to use a simple lan- guage, while the Heuristic Level (HL) has a variety of special representations and inference procedures to help speed up inference. This distinction follows (7). A translator, the TA (l), is capable of translating expres- sions from the EL to the HL and vice versa. The EL uses first order predicate calculus with reifi- cation, and defaults are stated using these. This paper provides an account of the defaults in CycL largely at the EL. Guha and Lenat (9)) (3) and a forthcoming pa- per go into the full details of the HL, so this paper limits its description to some of the more important issues re- lated to implementing the scheme presented here. Research on default reasoning is not our primary goal and the only reason we are building CycL is so we can encode Cyc in it. The representation of defaults is an active area of research and there is no commonly ac- cepted standard. This makes it very hard for a person to build a representation language with the intention of using it to build a KB over a number of years. Though we expect the MB to keep changing and growing, it would be most inconvenient for us to have to change the logic underlying our language as we encounter dif- ficulties in default reasoning. Default reasoning is complex and it would be ben- eficial to separate logic level issues such as nonmono- tonicity from other issues such as deciding between ex- tensions, preferring one default to another, etc. We therefore use only the simplest logical mechanisms to obtain the basic nonmonotonicity, and we deal with the other issues in the KB rather than in the logic. 
The next section describes the intuition behind our scheme, and the following section provides a more formal discussion. We then discuss certain crucial issues in default reasoning, such as preferring one of a set of possible (mutually contradictory) default conclusions over another. Then comes the discussion of how this is currently implemented in CycL, and the last section is a sketch of some promising directions for our future research.

608 KNOWLEDGE REPRESENTATION
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The Intuition
The basic idea is as follows. When determining whether some proposition P is true, one constructs arguments for and against that proposition and decides one way or the other after comparing these arguments. The addition of information can change the availability of arguments for or against P, and this is what is responsible for the nonmonotonicity. The comparison of arguments is a complex issue, and an arbitrary amount of knowledge could potentially be used in this comparison process. This suggests that this comparison be done using a knowledge based approach, through axioms in the KB explicitly for this purpose. Such axioms (which enable us to determine which argument to prefer) are called preference axioms and are as much a part of the KB as are axioms about phenomena such as eating and sleeping. The aim is that all the mechanisms available to do common sense reasoning and expert reasoning should be available to deal with default reasoning as well. In order to be able to state the preference axioms as regular axioms in the KB, we need to be able to treat arguments as first class objects. Since arguments are sequences (or possibly richer structures) of sentences, we use reification [5], allowing arbitrary sentences to be reified. These reifications can be treated as first class objects. Axioms that refer to arguments for sentences will need to use reified forms of these sentences.
Since we need to relate arguments for a sentence to the truth-value of the sentence, a single axiom might refer to both a sentence and its reification; i.e., we are going to need mixed level statements.

Structure Of Default Statements
Intuitively, the main difference between "normal" axioms (i.e. ones that are not defaults) and defaults is that the defaults are weaker and incorporate some scheme by means of which they may be "beaten". The concept of abnormality predicates [6] is an ideal candidate for expressing this weakening and is the one used to encode the defaults in Cyc. So if we consider the canonical example of birds flying as a default and penguins being an exception, the syntactic structure of the defaults as they would be stated in Cyc is as suggested by McCarthy [6] and is as follows:¹

¬ab(x, Aspect1) ∧ bird(x) ⊃ flies(x)
penguin(x) ⊃ ab(x, Aspect1)

However, unlike circumscription, we don't minimize the extent of ab to conclude ¬ab(Tweety, Aspect1). Instead we use the concept of arguments to conclude, from the above default, that Tweety flies. The 'ab-literals' (literals such as ¬ab(Tweety, Aspect1)) have a special status in that we assume they can be distinguished from other formulae. This special status is quite easily represented by defining a unary predicate abLiteral such that (abLiteral x) is true if and only if x is the reification of a negated literal involving an abnormality predicate. The next section describes what exactly an argument is and how these are used to derive conclusions.

¹The symbols x, a1, a2, etc. are, unless otherwise mentioned, all universally quantified.

Arguments and their Use
If there is a proof for a sentence, then that sentence is logically entailed by our KB (provided we are using sound inference rules), and therefore proofs are the primary mechanism for obtaining conclusions from a KB. The role of arguments in our default reasoning scheme is analogous to the role of proofs in monotonic theories.
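As an illustration only (this is not CycL syntax), the bird/penguin default above and the abLiteral test might be sketched in Python as follows; the Rule encoding and the string form of the literals are hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    body: tuple                 # literals that must already hold
    head: str                   # the concluded literal
    ab_assumptions: tuple = ()  # negated ab-literals the rule asks to assume

RULES = [
    # ~ab(x, Aspect1) & bird(x) => flies(x): a default, weakened by an ab-literal
    Rule(body=("bird",), head="flies", ab_assumptions=("~ab_Aspect1",)),
    # penguin(x) => ab(x, Aspect1): the exception, with no assumptions
    Rule(body=("penguin",), head="ab_Aspect1"),
]

def ab_literal(reified: str) -> bool:
    """abLiteral: true of reifications of negated abnormality literals."""
    return reified.startswith("~ab_")
```

The point of the sketch is only that a default differs from a strict rule by carrying ab-assumptions that the reasoner may be allowed to make.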
Since an argument is very similar to a proof both in structure and use, we give a description of an argument by comparing it to a proof. A proof is a finite sequence of sentences such that the last sentence is the sentence being proved and each sentence is either a given axiom, an instantiation of a given axiom schema, or follows by the application of an inference rule to some set of sentences earlier in the proof. Therefore each sentence in the proof is also a theorem that follows from the KB. An argument is similar, but we weaken it a bit. For a given sentence P, an argument for P is a finite sequence of sentences (ending with P) such that each sentence is either a given axiom, an instantiation of an axiom schema, follows by the application of an inference rule to previous sentences in the argument, or is a negative abnormality literal (i.e., the reification of the sentence satisfies the predicate abLiteral). [8] gives the complete set of axioms that defines arguments (and the predicate argumentFor mentioned below). Intuitively, an argument is a weakening of a proof in the sense that we are 'asking' to be allowed to make a certain set of assumptions, and these assumptions are nothing but the abnormality literals (ab-literals). The concept of an ab-literal is closely related to that of an assumption, and so the predicate abLiteral can easily be generalized to a predicate such as assumable, using which any sentence can be specified to be assumable. Another difference between arguments and proofs is that while proofs are (usually) only objects in the meta-theory of the logic, arguments are objects in the domain of discourse (i.e. are "things" in the KB). How does the existence of an argument for P relate to whether P is a theorem? Intuitively, if we have an 'acceptable' argument for P, and if there isn't any 'better' argument for ¬P, then we would like to accept P as a theorem.
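The weakened notion of proof can be sketched as a checker: a sequence qualifies as an argument if each member is an axiom, follows from earlier members, or is merely assumable (an ab-literal). The string encoding of sentences and the single hard-coded inference step below are hypothetical:

```python
def is_argument(seq, axioms, step_follows, assumable):
    """A proof-like sequence in which assumable sentences (here:
    negated ab-literals) may appear without justification."""
    earlier = []
    for s in seq:
        if not (s in axioms or assumable(s) or step_follows(s, earlier)):
            return False
        earlier.append(s)
    return bool(seq)  # must be non-empty; the last element is the conclusion

# Tiny Tweety illustration:
AXIOMS = {"bird(Tweety)"}

def assumable(s):
    return s.startswith("~ab(")  # only negated ab-literals are assumable

def step_follows(s, earlier):
    # one hard-coded inference step: ~ab(x,Aspect1) & bird(x) => flies(x)
    return (s == "flies(Tweety)"
            and "bird(Tweety)" in earlier
            and "~ab(Tweety,Aspect1)" in earlier)
```

The sequence [bird(Tweety), ~ab(Tweety,Aspect1), flies(Tweety)] passes this check even though it would fail as a strict proof, because the ab-literal is assumed rather than derived.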
This notion is captured by the following axiom, the Argumentation Axiom (which is also an axiom in the KB, along with axioms that characterize what an argument is).

∀(a1, 'P) ((argumentFor(a1, 'P) ∧ ¬invalidArg(a1) ∧
  ∀(a2) (argumentFor(a2, '¬P) ⊃ (invalidArg(a2) ∨ preferred(a1, a2))))
 ⊃ True('P))

A few comments about this axiom are in order. (Assume we want to know if P is a theorem.)

a. If we had to prove that there does not exist a preferred argument for ¬P, we might never conclude anything. We therefore make a closed world assumption for the predicates argumentFor and invalidArg. This can be done by using any of the existing formalisms for obtaining nonmonotonicity. This closed world assumption for these two predicates is the only mechanism that is used to obtain nonmonotonicity. An alternative to making the closed world assumption would be to add an axiom schema that minimized the extents of argumentFor and invalidArg (in that order).

b. The presence of a truth predicate in the language leads to the possibility of paradoxes. To avoid this, our truth predicate is a weak one; i.e. given a reified sentence P, (True('P) ∨ True('¬P)) is not a theorem. The truth predicate True differs from the truth-values of formulae on those formulae capable of leading to paradoxes.

c. Given an argument A for P, another argument A1 can be obtained by simply adding more true sentences to A. Though it is tempting to restrict our attention to minimal arguments, we resist doing this for the following reason. There are times when we would like to allow extra information in an argument that makes it easier to compare it to other arguments. This turns out to be useful for solving problems such as the Yale Shooting Problem [10]. Later examples of preference criteria can be seen to make use of this. However, as a default, if an argument A1 for P subsumes another argument A for P, we prefer A to A1.
This is a case of using default reasoning to determine the preference ordering between arguments itself.

d. While it is acceptable to make assumptions about the truth-value of ab-literals, we don't want to assume that a particular ab-literal is true if we can prove that it is not; i.e., in such cases, we would like to consider the argument as being invalid. This is captured by the following axiom schema:

sentenceInArg(a, 'q) ∧ ¬q ⊃ invalidArg(a)

e. As mentioned earlier, we determine whether or not one argument is preferred to another using axioms in the KB. Some examples of these axioms (the preference axioms) are given in the next section. One can imagine a rich theory of argument types and dialectic patterns being used to do default reasoning.² Sample categories of argument types include reductio ad absurdum arguments, inductive arguments, arguments for the truth of a sentence that provide a possible sequence of events that could have caused it, etc. Associated with each of these could be ways of countering the argument, reinforcing it, etc. We later present one of these cliched argument patterns, called a narration. The scheme presented here seems to provide an adequate framework for capturing all this.

f. One of the aspects of a proof (which carries over to arguments) is that it is finite in length. Since we can't axiomatize the notion of finite, we need to place some restriction on the length of our arguments. Also, given that we can expend only a finite amount of resources on searching for arguments and on ensuring that they are not invalid, we need to incorporate some means for specifying the resources that may be spent in doing this. We do this by saying that any argument that requires more than a certain amount of resources to compute is invalid.

²We use 'P to refer to the reification of the formula P.
Since attempts to prove arguments invalid are themselves likely to involve arguments, this also has the effect of limiting the resources spent on trying to prove arguments invalid. This notion is captured by the following axiom.

(> (resourcesRequired a1) (resources-available)) ⊃ invalidArg(a1)

In this axiom we use the indexical function resources-available to compute the resources available for a given problem. There are two ways in which this axiom can be used. If we have used up the resources available for generating arguments, then this axiom tells us that we are justified in giving up, since even if we could carry on (overusing the available resources), any argument we generated would be invalid and hence useless. Alternately, if we had some means of estimating the resources that would be required to generate an argument, and this turns out to be greater than the resources available, then this axiom gives us a justification for ignoring such arguments. So if there doesn't seem to be any obvious argument, but some reasoning suggests the possibility of some highly contrived arguments, this axiom provides the justification for ignoring such arguments.

Preference Criteria
Clearly, one of the central issues in this whole scheme is the task of coming up with and axiomatizing criteria for comparing arguments and deciding which to prefer. In this section we describe a few sample criteria and, as an example, show how one of these can be axiomatized. Some sample preference criteria include the following:

o Using Inferential Distance: Inheriting properties from classes is a common use of default reasoning. Sometimes, the properties inherited from two different classes could be contradictory. In such cases, if one class is a subset of the other, we can prefer the value inherited from the smaller set [2]. (For example, consider the average IQ of Mammals in general, and Humans in particular.) This criterion is currently used in Cyc.
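A minimal sketch of this inferential-distance preference, under the paper's subClass(Animal, Human) reading (superclass first) and with a simplified dictionary representation of arguments that is my own invention:

```python
SUBCLASS = {("Animal", "Human")}  # subClass(super, sub): Human is a subset of Animal

def strictly_more_specific(c1, c2):
    """c1 is a proper subclass of c2."""
    return (c2, c1) in SUBCLASS

def preferred(arg1, arg2):
    """arg1 beats arg2 when every class an ab-assumption of arg2 ranges
    over has a strictly more specific counterpart among arg1's."""
    return all(any(strictly_more_specific(c1, c2) for c1 in arg1["ab_classes"])
               for c2 in arg2["ab_classes"])

# Anticipating the Fred example formalized later in the paper:
smart = {"conclusion": "iq(Fred,High)", "ab_classes": ["Human"]}
stupid = {"conclusion": "iq(Fred,Low)", "ab_classes": ["Animal"]}
```

Since Human is more specific than Animal, the argument assuming ¬ab(Fred, Human) wins, and only it wins; the comparison is not symmetric.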
o Causal Arguments: A strong case has been made elsewhere [4] that arguments with a 'causal flavor' are preferred to arguments that use reductio ad absurdum and other non-causal and non-constructive methods. This intuition can be made more precise as follows. Certain sentences which include the material implication operator are labeled as being 'causal'.³ Once we have this labeling, we can prefer an argument that uses one of these causal sentences to one that does not. This criterion is used with certain specialized kinds of arguments, and an example of this is presented later. One has to be careful about introducing redundant causal sentences to support arguments, and this is taken care of by the default that we prefer subsumed arguments. There are many useful specializations of this criterion and some of these are currently used by Cyc.

o Bias From Desires: Consider reasoning about the beliefs of an agent. It is well known that given evidence for and against some fact, there is a bias towards believing in the position that is favourable to one's own goals and desires. Though this might not be the most rational thing to do in general, it is something to be taken into account when reasoning about the beliefs of other agents. This can be formalized quite easily in this scheme. Given arguments for beliefs(A,P) and for beliefs(A,¬P) (which implies ¬beliefs(A,P)), if P also happens to be one of the desires of A (i.e. desires(A,P) is true), then the argument for beliefs(A,P) is preferred over that for beliefs(A,¬P) (and ¬beliefs(A,P)). More elaborate versions of this can be obtained by incorporating notions of the objectivity of the agent involved in this preference criterion. We are planning on including this preference criterion in Cyc.
o Avoiding Ignorance: Tversky [12] describes experiments where human subjects were found to be more willing to retract some belief if they were given an alternative, as opposed to simply retracting the belief without substituting another belief in its place. For example, in reading a murder mystery, the reader often hypothesizes that some particular suspect (Fred) was the murderer, even if there is some contradictory evidence, rather than remaining 'uncommitted' about who the murderer was. However, faced with the same counter evidence but supplied with suggestions of a particular alternative suspect (Jane), the same reader might decide to switch his 'running hypothesis' of the guilty party from Fred to Jane. We can formalize this somewhat irrational tendency of humans to cling onto tenuous beliefs in the absence of alternate beliefs in certain cases as follows. Given a sentence of the form s(u,v1), we prefer an argument for this over an argument for its negation if there exists no other v2 such that s(u,v2). So if we have an argument for s(u,v1) and one for ¬s(u,v1) and these two arguments are incomparable (without using this heuristic) then, if there exists no v2 such that s(u,v2), we prefer the argument for s(u,v1) over that for ¬s(u,v1). If there does exist such a v2, we might prefer the argument for ¬s(u,v1) or we might leave the situation unresolved (i.e. neither s(u,v1) nor ¬s(u,v1) is a theorem). This preference criterion is included in the current version of Cyc.

We now describe how the first criterion can be formalized.

³This can be easily done using reification. Of course, if the sentence (p ⊃ q) is labeled causal, this does not in itself imply that (¬q ⊃ ¬p) is causal. The sentence (¬q ⊃ ¬p) will not be labeled causal just because (p ⊃ q) has been, since labeling is on the reified forms of the sentences. Remember we reify sentences, not propositions; the reification of (p ⊃ q) is not equal to the reification of (¬q ⊃ ¬p).
This exercise is meant largely to provide a flavor for these axioms. After that we provide an informal description of the second criterion and show how this can be used to solve the Yale Shooting Problem.

Example 1: Animals are, as a default, quite stupid, but humans (who are animals) are quite smart. Also, nothing can be both stupid and smart. Given a human (Fred), we have one argument that he is smart and another argument that he is stupid. We would like to conclude that he is smart. The axiomatization of this example follows.

(R1) isa(x, Human) ∧ ¬ab(x, Human) ⊃ iq(x, High)
(R2) isa(x, Animal) ∧ ¬ab(x, Animal) ⊃ iq(x, Low)
(R3) ¬(iq(x, High) ∧ iq(x, Low))
(R4) subClass(Animal, Human)
(H) isa(Fred, Human)
(A) isa(Fred, Animal)  ;;; from H and R4

Given that Fred is an Animal, either iq(Fred, High) (denoted as P) holds or iq(Fred, Low) (denoted as Q) holds, but not both. We want to write a preference axiom that will enable us to conclude P (because Human is a subset of, hence more specific than, Animal). This will happen if we can somehow defeat the arguments for ¬P and Q. The preference axiom that gives us this result is as follows:

∀(a1, 'p, a2) ((argumentFor(a1, 'p) ∧ argumentFor(a2, '¬p) ∧
  ∀(ablit2) (abLitOfArg(a2, ablit2) ∧ classOfAbLit(ablit2, c2) ∧
    ∃(ablit1) (abLitOfArg(a1, ablit1) ∧ classOfAbLit(ablit1, c1) ∧
      subClass(c2, c1))))
 ⊃ preferred(a1, a2))

When determining whether P is true, we have an argument for it based on Fred being a Human. The counter argument to this is that since he is an Animal, Q must be true, and since P and Q can't both be true, P must be false. This counter argument is however defeated, since the counter argument assumes that ab(Fred, Animal) is false while the argument for P assumes that ab(Fred, Human) is false. But since Human is a subclass of Animal, the preference axiom applies and P ∧ ¬Q follows.
The actual arguments for P and ¬P are:

For P: [H, ¬ab(Fred, Human), R1, P]
For ¬P: [H, R4, A, ¬ab(Fred, Animal), R2, Q, R3, ¬P]

It is easy to see that using the preference axiom, we can conclude that the argument for P is preferred over the argument for ¬P, from which (and R3) ¬Q follows. Actually this is just one of the axioms for capturing the preference criterion based on inferential distance. Also, these heuristics can be made more intuitive by using a class of arguments corresponding to using inheritance to conclude default properties, etc.

Example 2: Let us take a look at an example of the use of a specialized kind of argument. A standard kind of argument for the truth of a temporal fact is to provide a plausible sequence of events (along with the changes they caused) to explain how some fact came to be true. We call such an argument a narration argument. We make this notion a little more precise and show how this can be used to solve the Yale Shooting Problem. A narration argument has the following structure. The argument is divided into a number of subsequences, with each corresponding to one step/increment in time. Each subsequence consists of three parts:

o The sentences describing the world before that step,
o The sentences describing actions that took place during the event, the sentences describing the effects of these actions (most of which are likely to be causal sentences), and possibly certain assumptions such as those made by the frame axiom,
o The state of the world after that step.

Sentences describing the intra-state constraints, the frame axiom, etc. could be included in a header to the argument. The first block of one subsequence can be the last block of a preceding subsequence. The same set of descriptors is supposed to be used in describing all the situations. This set of descriptors is not a complete description of the world, but only includes those relevant to P.
The exact form of these descriptors depends on the formalism being used for time. So if one were using situation calculus, these descriptors would be of the form holds(f,s) and the same set of fluents should be used in the first and third block of every subsequence. The last sentence of the third block of the last subsequence is the sentence we are trying to prove.⁴

⁴It should be noted that all we have done here is to add some sentences to the argument and give it more structure. All that was said about arguments holds good for this kind of argument.

Given two such narrations, one can exploit the notion of causation to compare them. As we mentioned earlier, we can label certain sentences as being causal in nature, and we label changes (each change in the truth-value of a state descriptor is a change) deduced by using causal sentences as being causal changes. Given two narrations, we prefer the one with fewer non-causal changes. We could go one step further and prefer narrative proofs with no non-causal changes to a non-narrative proof. Let us now see how this approach can be used to solve the Yale Shooting Problem. Stated informally, the Yale Shooting Problem is as follows. At time s0 we have a loaded gun and Fred is alive. We wait for a step, and at time s1 we shoot Fred. We want to know whether Fred is dead in s2. We are given a background theory which says that if a person is shot with a loaded gun, the person dies. The frame problem comes into play since we need to deduce that if the gun became loaded in s0, it "must" still be loaded at time s1. One way to solve this quite generally is to use the frame axiom, which is as follows. Unless a fact is abnormal with respect to some action in a situation (i.e. some action capable of changing this takes place in the situation), it remains true in the next situation. As a result, the gun can remain loaded at time s1, and therefore Fred dies. However this is not the only possibility.
The other is to start with the fact that Fred is alive at time s1, decide that this remains true, which means that the gun was somehow unloaded at time s0. Note that the second possibility involves the same number of abnormalities as the first one (in the first one the fact that Fred lives changes, and in the second one the gun being loaded changes). How does one eliminate the second possibility?⁵ Intuitively, the second possibility seems flawed, since the gun somehow miraculously became unloaded in s0. Let's see how this intuition can be captured in our framework. We label the axiom that says a person dies when shot with a loaded gun as being causal. Then, using the heuristic about preferring causal narrative arguments, we get the right answer as follows. While the first possibility has a straightforward narrative argument (using only causal rules to explain changes), there does not seem to be any simple narrative for the second possibility. Though one can add enough statements to the header to obtain a narrative argument for the second possibility, the change in the gun being loaded from s0 to s1 (when a wait is performed) does not have any causal sentence associated with it. Because of this, we prefer the first argument to the second, from which it follows that Fred dies. However, this solution suffers from the following defect.⁶ Consider the following extension to the problem. We are told that there are two waits performed (i.e. we wait for two steps in time) and then shoot. We are also told that Fred is alive after the shooting. The only way in which this could have happened is if the gun got unloaded.

⁵We would like to keep as much of the original axiomatization as possible. Completely redoing it would be cheating!
⁶This was pointed out by John McCarthy and turned out to be a good exercise in correcting undesired behaviour by just adding axioms about arguments.
So we would like to conclude that the gun got unloaded either during the first wait or during the second wait. However, the above scheme will insist that the unload took place during the second wait and not during the first one (i.e. that loaded was true in s1 and unloaded was true in s2). Insisting that the gun was unloaded during the second wait and not during the first is unintuitive. The problem is that our narrations only swept forward in time, and any scheme that is biased towards one direction, forward or backward, is likely to exhibit such a behaviour. This can be corrected as follows. We introduce a notion of a backward narration, where each subsequence of the narration temporally follows (as opposed to preceding, as in the earlier case) the next subsequence. Of course, since this kind of narrative is not likely to have sentences describing the effects of actions, it is unlikely that there will be causal sentences in it. So, when being compared to forward narrations, causality should not be a criterion used in the comparison. The other constraints we specified for forward narrations hold. A narration can be a forward or backward narration. We also add a 'reverse inertia' axiom that says that if a fact p is true in some situation si, unless it is abnormal in some action performed in si-1, it must have been true in si-1. With these two constructs, we get one argument (from a forward narration) for the gun being loaded in s1 and one for it being unloaded in s1 (from a backward narration), and since we don't prefer one over the other, we are prevented from drawing the unintuitive conclusion that the gun was loaded in s1. Though this may seem to be a complicated scheme, we have to remember that all the intuition was captured directly in the axioms in our KB without changing any of our original representation (an axiomatization of this works on the original formulation of the problem given by Hanks and McDermott [10]), and without changing the logic.
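The two narration heuristics just described - prefer fewer non-causal changes among comparable narrations, and draw no conclusion when a forward and a backward narration tie - can be sketched as follows. The dictionary encoding and the acceptance test are illustrative stand-ins, not CycL:

```python
def noncausal_changes(narration):
    return sum(1 for ch in narration["changes"] if not ch["causal"])

def prefer_narration(n1, n2):
    """Prefer fewer non-causal changes; causality is not used when
    comparing a forward narration with a backward one."""
    if n1["direction"] != n2["direction"]:
        return False  # no preference either way
    return noncausal_changes(n1) < noncausal_changes(n2)

def accepted(pro, con):
    """A conclusion is drawn only if some supporting narration is
    preferred to every opposing one (cf. the Argumentation Axiom)."""
    return any(all(prefer_narration(a, b) for b in con) for a in pro)

# One-wait YSP: shooting causally explains Fred's death; the
# 'mysterious unload' story explains its change non-causally.
shoot = {"direction": "forward", "changes": [{"fluent": "alive", "causal": True}]}
unload = {"direction": "forward", "changes": [{"fluent": "loaded", "causal": False}]}

# Two-wait extension: forward narration (loaded in s1) vs backward
# narration (unloaded in s1, via reverse inertia) - neither is preferred.
fwd = {"direction": "forward", "changes": []}
bwd = {"direction": "backward", "changes": []}
```

With these toy values, accepted([shoot], [unload]) holds (Fred dies), while neither accepted([fwd], [bwd]) nor accepted([bwd], [fwd]) holds, so nothing is concluded about loaded in s1.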
And once we took the pains to identify the class of narrative arguments, this can be used for any number of other examples.

Implementation
We strongly believe that the best way to actually test the feasibility of any proposed scheme for reasoning is to implement it and try to obtain the desired results from it. In fact the scheme described above evolved through a cycle of implementing something, obtaining an abstraction of it, improving it at the abstract level, implementing the new abstraction, and so on.⁷ The task of using the defaults to obtain conclusions is the task of the Heuristic Level (HL), which is not really the topic of this paper. However, this section provides a short description of the approach used. The HL does not provide a complete inference mechanism (it can't, since with reification and mixed level statements the language becomes undecidable), but it does cover part of what can be done using the above formalism in an efficient fashion. We are currently in the process of redoing part of the HL to make it cover more of the above formalism. The concept of generating arguments and comparing them lends itself very conveniently to an implementation. The HL is largely organized around default reasoning and is divided into the following modules.

a. Argument Generator: Given a sentence P, this module generates arguments for it.
b. Argument Validator and Comparator: Given two arguments, this module checks for the validity of the arguments and compares them.
c. Contradiction Detector: This module tries to detect when there is a contradiction, detects the wrong assumptions underlying this, and tries to fix them.
d. Conclusion Retractor: When a fact ceases to be true, this module retracts conclusions that were made based on it.

⁷We are currently on our fifth cycle, having tried a number of things including numeric certainty factors and other probability-like schemes over the previous five years.
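The interplay of the argument generator and the validator/comparator on a query might be sketched as below; the module behaviors are stubbed out, and all names are illustrative rather than the actual Cyc implementation:

```python
class HeuristicLevel:
    """Sketch of the HL query flow: generate arguments for P and ~P,
    validate and compare them, and record the winner in the KB."""

    def __init__(self, generate, invalid, preferred):
        self.generate = generate    # module [a]: argument generator
        self.invalid = invalid      # part of module [b]: validator
        self.preferred = preferred  # part of module [b]: comparator
        self.kb = set()

    def query(self, p):
        pro = [a for a in self.generate(p) if not self.invalid(a)]
        con = [a for a in self.generate("~" + p) if not self.invalid(a)]
        for a1 in pro:
            if all(self.preferred(a1, a2) for a2 in con):
                self.kb.add(p)  # the sentence enters the KB with its winning argument
                return True
        return False            # no winner, or an unresolved tie
```

A contradiction detector and a conclusion retractor would sit alongside this loop, watching the recorded conclusions as the KB changes.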
Given a query P, module [a] is called twice, to generate arguments for P and for ¬P. These arguments are then handed to module [b]. It checks their validity, compares them, decides on one of them (or none of them, if there is an unresolved tie), and adds the sentence with the winning argument to the KB. Though this module might itself call on the problem solver (since the preference axioms are just like other axioms), we have proceduralized many of the EL preference axioms at the HL for the sake of efficiency. At the HL, the representation of defaults is quite different than that at the EL. The abnormality literals are stripped from the axioms, and classes of axioms with the same abnormality literal are formed and labeled with the literal. The argument validator keeps track of the truth-values for these labels (actually, instantiations of these) and uses them in checking for the validity of arguments. Of course, the argument comparator makes heavy use of these labels (and the other sentences used in the arguments for P and ¬P) to compare arguments. The argument generator is not concerned with the labels and deals only with the versions of axioms that are stripped of them. Certain precautions have to be taken against an apparently contradictory KB since the ab-literals have been stripped away. Overall, though, this significantly reduces the complexity of generating arguments. Further details on the HL can be found in [3], [9].

Discussion and Future Work
In this paper we presented some of the salient aspects of the architecture of CycL with an emphasis on default reasoning. We gave a description of a scheme for doing default reasoning based on the notion of arguments. The basic idea was to tackle some of the hardest issues in default reasoning in the KB as opposed to dealing with them in the logic.
A clear separation was made between the mechanism used to incorporate the nonmonotonicity in the logic and the other issues in default reasoning. The nonmonotonicity is incorporated using just the closed world assumption and can be easily formalized using any of the available formalisms. This lack of dependence on the non-monotonic formalism is desirable for anyone keen on actually encoding information using the logic, since it makes it less likely that their work is going to be undermined by subsequent changes in the logic. The assorted (known and unknown) problems related to default reasoning are dealt with not in the logic, but by using axioms in the KB (they are as much a part of the KB as are the domain axioms). This not only gives us greater control over the conclusions drawn but also enables us to control what is concluded by changing the KB, something that is vastly easier than changing the logic. Since it is unlikely that the basic axioms (such as the Argumentation Axiom) are going to be removed from the KB, it is worth building faster inference schemes for using them. For example, one could provide procedural attachments for predicates such as argumentFor, etc., and exploit the fact that the basic structure of this formulation of default reasoning tries to mimic a reasoning process. There are two main topics for our future work. The first is to obtain an efficient implementation capable of the full scheme presented here. The second and more important task is to develop a rich theory of arguments, their basic types and properties, and interesting preference criteria. We expect that as the Cyc group tries to axiomatize new domains, we will need new preference criteria. This should provide both a test for the existing framework and also give us better insights into the nature of default reasoning.

Acknowledgments
I would like to thank Doug Lenat and Mark Derthick for comments and suggestions, and them and David V.
Wallace for helping me in the implementation. I thank Mary Shepherd, Karen Pittman, and the other members of the Cyc group for testing all this and putting up with all the bugs in the implementation. I thank John McCarthy, Doug Lenat and Mark Derthick for comments on an earlier draft and Mary Shepherd for making this paper more readable.

References
[1] M. Derthick. An epistemological level interface for Cyc. Technical Report ACT-CYC-084-90, MCC, February 1990.
[2] D. S. Touretzky. Implicit ordering of defaults in inheritance systems. In M. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
[3] D. B. Lenat and R. Guha. Building Large Knowledge-Based Systems. Addison Wesley, Reading, Mass., 1990.
[4] V. Lifschitz. Formal theories of action. In M. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
[5] J. McCarthy. First order theories of individual concepts and propositions. In H. Levesque and R. Brachman, editors, Readings in Knowledge Representation. Morgan Kaufmann, Los Altos, CA, 1986.
[6] J. McCarthy. Applications of circumscription to formalizing common sense knowledge. In M. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
[7] J. McCarthy and P. J. Hayes. Some philosophical problems from the standpoint of AI. In M. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
[8] R. V. Guha. Default reasoning using arguments. In preparation.
[9] R. V. Guha and D. B. Lenat. CycL: The Cyc representation language, part 3. Technical Report ACT-CYC-454-89, MCC, December 1989.
[10] S. Hanks and D. McDermott. Default reasoning, nonmonotonic logics and the frame problem. In M. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
[11] Y. Shoham. Chronological ignorance. In M. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, CA, 1987.
[12] A.
Tversky and D.Kahneman. Judgement un- der uncertainity: Heuristics and biases. Science, 185:1124-1131,1974. 614 KNOWLEDGEREPRESENTATION
The Generalized Theory of Model Preference (Preliminary Report)

Piotr Rychlik
Institute of Computer Science
Polish Academy of Sciences
PKiN, 00-901 Warsaw, POLAND

Abstract

In this paper we present a purely semantic view of non-monotonic reasoning. We follow the direction pointed out in [16] and claim that any non-monotonic logic can be viewed as the result of transforming some base standard logic by a selection strategy defined on models. The generalized theory of model preference is briefly outlined here, together with its use in modeling non-monotonic beliefs.

Introduction

One of the most serious and, at the same time, most common problems encountered in implementing knowledge-based systems is that it is usually infeasible to provide complete knowledge on which the system is supposed to operate. The ability to fill in the gaps in incomplete information is one of the factors characterizing common sense reasoning. It is common sense which, in many circumstances, enables people to "jump to conclusions" and solve problems that could never be solved by a perfect, but purely deductively reasoning, agent.

Deductive reasoning is monotonic, because with a larger set of premises it is possible to conclude more facts. Common sense reasoning is non-monotonic - new facts may cause previously derived beliefs to be withdrawn. Many formalizations of non-monotonic reasoning have been proposed: non-monotonic logic, autoepistemic logic, default logic, circumscription, and many others. The landmark papers in the discipline of non-monotonic reasoning can be found in [6].

Any non-monotonic logic can be viewed as the result of transforming some base standard logic by a selection strategy defined on models. The selection strategy is supposed to choose those models that, possibly, best explain a theory described in the base logic. In other words, it selects models that are, in some sense, more "preferred" than others.
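Operationally, such a selection strategy amounts to picking out the models over which nothing else is preferred. A minimal Python sketch (the model names and the relation are illustrative, not from the paper; `(X, Y)` in `P` reads "Y is better than X"):

```python
def preferred_models(models, P):
    """Pick the maximal ("best") elements of a preference relation P:
    those M for which no model N satisfies (M, N) in P, i.e. nothing
    in the universe is strictly better than M."""
    return [m for m in models
            if not any((m, n) in P for n in models)]

# Toy universe of three models, with M3 at the top of the relation.
models = ["M1", "M2", "M3"]
P = {("M1", "M2"), ("M2", "M3"), ("M1", "M3")}
print(preferred_models(models, P))  # prints ['M3']
```

With an empty relation every model is maximal, so the strategy degenerates to ordinary satisfiability.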
The selection strategy can be defined in terms of a binary relation P defined on interpretations of the base logic. It is generally assumed that P is an ordering relation, either a strict partial order [16] or a quasi-order [3]. In the case of a strict partial order P, the intuitive meaning of (M1, M2) ∈ P is that M2 is more preferred than M1, or that it is better than M1. If P is a quasi-order, then (M1, M2) ∈ P is read as: M2 is at least as good as M1. The selection strategy simply picks out the maximal (which here means "the best") elements of P, if such elements exist, that is, those models M* for which there is no model M such that (M*, M) ∈ P.

We claim that in many circumstances P need not be an ordering relation in order to capture the idea of preference. In the next section we will examine one example and show that a preference relation which is not assumed to be transitive, and hence is not an ordering relation, may still make sense.

Motivations

Our example is related to temporal reasoning. We will present a variation of the famous case of temporal projection discussed in [7] and commonly known as the Yale shooting problem. However, it seems that the problem we are going to present here is very general and does not characterize only domains making reference to temporal information. The temporal projection problem arises whenever, given an initial description of the world, a reasoning agent tries to determine which facts are true and which are false after some sequence of events has occurred.

To describe the problem we will adopt the situational calculus [12], which was also chosen in [7]. There are three types of objects that we will be considering: situations, facts (also called propositional fluents) and events. A situation is meant to denote an interval of time during which none of the facts changes its truth value. Events (or actions) may change the world, assigning new values to fluents.
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

We will write Holds(f, s) if a fact f is true in a situation s, and result(e, s) to denote the new situation that results from an event e taking place in a situation s. In order to be able to effectively represent the "inertia" of the world, and get rid of the frame problem [12], we will use a technique proposed in [13]. We introduce a special abnormality predicate ab that, for a given triple (f, e, s), will be satisfied if and only if an event e occurring in a situation s changes the value of a fact f; in this case the fact f is said to be abnormal with respect to the event e in the situation s. We also accept a non-monotonic inference rule which allows us to conclude that a given fact is not abnormal unless its abnormality can be deduced (monotonically) from the currently available data. With these assumptions the single frame axiom

∀f, e, s. Holds(f, s) ∧ ¬ab(f, e, s) ⊃ Holds(f, result(e, s))   (1)

will suffice to express what otherwise would usually require a great many axioms.

The Yale shooting scenario, only slightly modified, is as follows. First, in the initial situation S0, someone loads the gun aimed at Fred (a LOAD event). This brings about the situation S1. Subsequently, after a sequence of WAIT events, where nothing interesting happens, in a situation Sn-1 the gun is fired (a SHOOT event), which yields the new situation Sn. Suppose that in the situation S0 Fred is alive (ALIVE), loading a gun causes it to be loaded (LOADED), and firing a loaded gun at someone causes that person to become dead (DEAD). The knowledge about our domain can be characterized in a natural way by the following set of axioms [7]:

Holds(ALIVE, S0),   (2)
∀s. Holds(LOADED, result(LOAD, s)),   (3)
∀s. Holds(LOADED, s) ⊃ ab(ALIVE, SHOOT, s) ∧ Holds(DEAD, result(SHOOT, s)).
(4)

The question is whether the non-monotonic mechanism described above allows us to conclude from (1)-(4) that in the situation Sn Fred is dead. Surprisingly, the answer to this question is negative. Let us recall that our non-monotonic inference rule tries to minimize the interpretation of the abnormality predicate. The answer that Fred is dead after shooting can be obtained if we chronologically minimize the abnormality predicate. The LOAD event causes the gun to be loaded in S1 (3), and, since waiting has no particular effect, the gun remains loaded in the situation Sn-1. Hence, by (4), Fred becomes dead in Sn, which agrees with our intuition. But it is possible that we have first applied the rule of minimization in the situation Sn-1, assuming ¬ab(ALIVE, SHOOT, Sn-1), which, together with (1), supports the conclusion that Fred is alive in Sn. However, we know from (4) that the loaded gun causes the predicate ALIVE to be abnormal with respect to the SHOOT event in any situation. The only explanation of Fred being alive in Sn is that the gun somehow has become unloaded as a result of a WAIT event. We must, therefore, assume ab(LOADED, WAIT, Si) for some situation Si, where 0 < i < n - 1.

A great many solutions to the Yale shooting problem have appeared [1,2,5,8,9,10,11,14,16, and many others]. They either reformulate the domain description so that the intended prioritization of models can be captured by the properties of some well-known standard non-monotonic mechanisms, or they leave the description unchanged, introducing new formalisms with some other preference criteria.

Our non-monotonic mechanism, which was unable to deal properly with the problem of temporal projection, imposed a certain ordering on models of the domain description. We expected to find a correct answer in our example by picking out the maximal elements of this ordering, namely, the models that minimized the extension of the abnormality predicate.
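The two competing minimizations can be reproduced by brute force. The following Python sketch is a toy encoding (the three-event scenario and the atom names are illustrative, not the paper's formalism): it enumerates the possible extensions of the abnormality predicate for S0 --LOAD--> S1 --WAIT--> S2 --SHOOT--> S3 and keeps the set-inclusion-minimal ones that are consistent with axiom (4).

```python
from itertools import chain, combinations

# Candidate abnormality atoms for the three-event scenario.
atoms = [("LOADED", "WAIT", "S1"), ("ALIVE", "SHOOT", "S2")]

def consistent(ab):
    """Axiom (4), toy form: if the gun is still loaded when SHOOT
    occurs, ALIVE must be abnormal w.r.t. SHOOT.  The gun is loaded
    at S2 unless WAIT made LOADED abnormal."""
    loaded_at_s2 = ("LOADED", "WAIT", "S1") not in ab
    return not (loaded_at_s2 and ("ALIVE", "SHOOT", "S2") not in ab)

candidates = [frozenset(c) for c in chain.from_iterable(
    combinations(atoms, r) for r in range(len(atoms) + 1))]
models = [ab for ab in candidates if consistent(ab)]
# Set-inclusion minimization leaves two incomparable minimal models:
# Fred dies, or the gun mysteriously became "abnormal" during WAIT.
minimal = [ab for ab in models if not any(other < ab for other in models)]
for ab in minimal:
    print(sorted(ab))
```

Both minimal models survive, which is exactly the anomaly: plain minimization of ab cannot prefer "Fred dies" over "the gun unloads itself".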
In all of the above solutions, certain model ordering methods are also used to fix the most preferred interpretations. However, it is not difficult to imagine a situation in which there is not enough information provided to construct a reasonable ordering of models, except, perhaps, the one that would make every model the most preferred interpretation.

Let us suppose that there is some process going on which may cause the effect of the gun being unloaded (without actually unloading it). For example, it might be continuous corrosion of the metal parts of the gun, or graceful degradation of the explosive material in the cartridge, etc. If we had no idea when, more or less, this process started and/or how fast it is progressing, then the conclusion that Fred remains alive in Sn would seem to be as justified as the one postulating his death after shooting. Suppose we know that, besides the process which eventually might prevent killing Fred, nothing unexpected can happen between S0 and Sn, that is, no action is performed in parallel with those that we have already mentioned. It might be the case that our restricted knowledge about the process allows us to think that if it is possible to kill Fred in a situation Si, then it can be believed that this would also be possible in Si+k, provided that the time elapsed between Si and Si+k is not too long. For example, one can safely assume that if it is known that in the situation S0 it is possible to shoot Fred, then after five waiting events, each lasting a second, shooting will also be successful in the situation S5.

Translating these intuitions into a preference relation on models means that at any moment we prefer those models which allow the effects of the process to appear as late as is possible within certain limits.
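This "as late as possible, but only within limits" policy is precisely what makes the resulting relation intransitive. A minimal Python sketch (the window constant and the indexing are illustrative; read M_i as a model in which the process's effects first appear in time interval i):

```python
K = 2  # effects may be deferred at most K intervals at a time

def prefers(i, j, k=K):
    """(M_i, M_j) in P: model M_j lets the process's effects show up
    a little later than M_i does, but only within a window of size k."""
    return i < j <= i + k

# Locally transitive inside one window...
assert prefers(0, 1) and prefers(1, 2) and prefers(0, 2)
# ...but not transitive globally: (0, 2) and (2, 4) hold while (0, 4) fails.
assert prefers(2, 4) and not prefers(0, 4)
print("globally intransitive:", not prefers(0, 4))
```

Chaining two window-sized preference steps can thus reach a model that is not directly preferred, which is the behavior formalized below.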
Considering the time interval corresponding to a situation Si, we may prefer the effects of the process to appear no sooner than in the time interval corresponding to the situation Si+k+1. (Here, the silent, simplifying assumption has been made that the time intervals corresponding to situations are of equal length.)

In accordance with the above observations, the preference relation P can be defined in the following way. Let us take (M1, M2) ∈ P if and only if P satisfies the following condition. The extension of the abnormality predicate for the events WAIT and LOAD in M2 is a subset of such an extension in M1. Otherwise, if these extensions are equal, then for every i and j such that i < j ≤ i+k, M1 ∈ M(ti) and M2 ∈ M(tj), where M(t) denotes the class of models that satisfy the effects of the process in the time interval t, and ti and tj denote the time intervals corresponding to situations Si and Sj, respectively. It is easy to see that the preference relation so defined is not transitive, although it is locally transitive within the limits set by the constant k.

We may wonder whether the preference relation defined above can be expressed using any of the existing non-monotonic formalisms such as circumscription or default logic. It seems that the answer to this question is negative.

All versions of circumscription limit the instances which satisfy a selected predicate (or a set of predicates) in a given theory only to those that are necessary in light of this theory. The preference criterion is therefore defined by the set inclusion relation, so as to minimize the extensions of the chosen set of predicates. The inference mechanism in default logic can be explained in a similar way.¹ Suppose Δ = (T, D) is a default theory.
A default rule [α : β / γ] ∈ D, informally speaking, allows us to assume that the conclusion γ is true, if the prerequisite α is assumed to be true, and the justification β is consistent with all facts that have already been assumed.² Applying any default δ1 = [α : β / γ] ∈ D whose conclusion does not follow directly from T causes the class M{} of all models of T to be narrowed to the class M{δ1} of models of T that satisfy β and γ. If we further apply another default rule δ2 ∈ D, the class M{δ1} will be narrowed to the class M{δ1,δ2}, which contains the models that additionally satisfy the justification and the conclusion of δ2. And so on. The preference criterion is, again, defined in terms of the set inclusion relation. We have: M{} ⊃ M{δ1} ⊃ M{δ1,δ2} ⊃ .... This, however, is a strict partial ordering relation on models. Since the preference relation we defined for the modified version of the Yale shooting problem is not even transitive, the existing schemes of non-monotonic reasoning cannot express it.

In [4] it is claimed that permitting a preference relation to be intransitive implies irrationality and will "wreak havoc on the semantics of the resulting non-monotonic logic". It seems, however, that this is not the case. Suppose that an agent is willing to admit that M2 is better than M1 and M3 is better than M2, but considers M3 to be the most preferred of these three models only when some new information - which is not a non-monotonic conclusion - is provided falsifying M1, but still satisfying M2 and M3.

¹ A detailed discussion of these issues can be found in [16].
² Actually, we should talk about some instances of α, β and γ, if these formulae contain free variables. We may, however, make the simplifying assumption that every default in D is closed, that is, it does not contain a formula with free variables. Generalization to the case where defaults are allowed to contain free variables is obvious.
Such behavior does not suggest irrationality of the agent, although the preference criteria are intransitive.

General approach to non-monotonic reasoning³

Given some base standard logic L, we can use a preference relation P to modify the notions of logical satisfiability, validity and entailment in L, defining in this way a new logic LP. The semantics of LP we are going to establish is very similar to (actually, is inspired by) the one proposed in [16] and then generalized in [3]. The main departure from the above mentioned formalism is that a preference relation is not assumed to have any particular property, and that it heavily depends on some consistent theory written in the base logic L, and can be understood only in the context of this theory.

First-order predicate calculus is assumed as the standard logic in the following definitions. However, this choice is not crucial. It is easy to adapt these definitions to other formalisms such as modal or higher-order logics.

DEFINITION 1. Let T be a theory and PT a binary relation defined over the set of models of T. We say, then, that PT is a preference relation depending on T.

We will usually drop the index denoting the theory on which a preference relation depends when this is not confusing.

DEFINITION 2. Let P be a preference relation depending on a theory T. Any non-empty (possibly infinite) sequence s = (M1, M2, ...) of models of T is called a P-sequence over T if and only if for every Mi and Mj, where i < j, (Mi, Mj) ∈ P, and there is no model M ∉ s of T such that for every Mk ∈ s, (Mk, M) ∈ P. A P-sequence s is bounded if, additionally, there is a model M* ∈ s such that for every Mk ∈ s, Mk = M* or (Mk, M*) ∈ P. In this case we say that M* is an upper bound of s. A P-sequence with no upper bound is unbounded.

DEFINITION 3. Let P be a preference relation depending on a theory T.

³ The proofs of all cited theorems can be found in [15].
A model M of T P-satisfies a formula α, written M ⊨P α, if and only if there is a P-sequence over T such that M is its upper bound and M ⊨ α. In this case we say that M is a P-model of α.

If M is a P-model, then, of course, M is a model of the theory T on which the preference relation P depends. Moreover, M is an upper bound of some P-sequence over T. Hence, M is also a P-model of T.

DEFINITION 4. Let P be a preference relation depending on a theory T. We say that a formula α is P-satisfiable if and only if there is a P-model of α.

Clearly, P-satisfiable formulae are also satisfiable. The converse might not be true. If a formula is satisfiable, it does not automatically mean that it has to be satisfied by some of the models of the theory T and, in particular, by some P-model of T (if such a P-model exists at all).

Our modified definition of satisfiability, unlike Shoham's definition of preferential satisfiability [16], makes an explicit reference to some theory T. Shoham considers an interpretation M a preferred model of some formula α if and only if there is no other model M' of α that would be strictly better than M in the sense determined by an ordering relation P defined over the set of all interpretations of some given logical language. Let P{α} be the restriction of P to the models of α. With this assumption, M is a preferred model of α if and only if M is a P{α}-model of α. In other words, a formula α is PT-satisfiable if and only if there is a preferential model of T ∪ {α}. The notion of preferential satisfiability is a special case of the satisfiability introduced in Definition 3 and Definition 4.

DEFINITION 5. Let P be a preference relation depending on a theory T. We say that a formula α is P-valid, written ⊨P α, if and only if T has a P-model and α is satisfied by every P-model of T.

Shoham defines his notion of preferential validity in a roundabout way.
He considers a formula α to be preferentially valid if and only if the formula ¬α is not preferentially satisfiable. This makes possible situations in which a formula α is preferentially valid although it is not even preferentially satisfiable, and α and ¬α are both preferentially valid. Our notion of P-validity does not suffer from this drawback, since we can talk about P-validity only if the preference relation distinguishes at least one model of the theory on which it depends as the most preferred, that is, a P-model of this theory. This approach has a very simple and intuitive explanation. If the relation P is not sufficient to express the preferences according to which formulae should be assigned truth values, considering the formulae that are satisfied by all preferred models does not make much sense.

PROPOSITION 1. If a formula α is P-valid, then ¬α is not P-satisfiable.

Unfortunately, the converse is not true. It may happen that every P-sequence is unbounded. In this case, both α and ¬α are not P-satisfiable. It is clear that a P-valid formula need not be valid, because it has to be satisfied only by models that are upper bounds of P-sequences. If there are no bounded P-sequences, a valid formula is not P-valid, although it is satisfied by all interpretations. With the additional restriction on a preference relation P that there must exist at least one bounded P-sequence, the following propositions hold.

PROPOSITION 2. Let P be a preference relation depending on a theory T, and let there be a bounded P-sequence over T. Then a formula α is P-valid if and only if ¬α is not P-satisfiable.

PROPOSITION 3. Let P be a preference relation depending on a theory T, and let there be a bounded P-sequence over T. Then a formula α is P-valid if α is valid.

DEFINITION 6. Let P be a preference relation depending on a theory T.
We say that P is complete if, for every formula α consistent with T, every model of α is in some bounded P-sequence over T whose upper bound satisfies α.

Completeness is a very strong notion. Together with the emptiness of the theory on which a preference relation P depends, it implies that the resulting logic LP is monotonic.

PROPOSITION 4. Let T be an empty theory, and P a complete preference relation depending on T. Then a formula α is satisfiable if and only if α is P-satisfiable.

PROPOSITION 5. Let T be an empty theory, and P a complete preference relation depending on T. Then a formula α is valid if and only if α is P-valid.

DEFINITION 7. Let P be a preference relation depending on a theory T. We say that α P-entails β, written α ⊨P β, if and only if every P-model of α is also a P-model of β.

The above definition corresponds to Shoham's definition of preferential entailment, which says that α preferentially entails β if and only if every preferential model of α is also a model of β. He does not require β to be preferentially satisfied by the preferential models of α. We could also substitute the requirement that every P-model of α is a P-model of β by the requirement that every P-model of α is a model of β. Let us note, however, that this substitution does not change the notion of P-entailment. If β has a model M and this model is a P-model of α, then M is a P-model of β. In fact, the notion of preferential entailment is a special case of P-entailment. If P is an ordering relation defined over the set of all interpretations of some given logical language and P{α} is its restriction to the models of α, then α preferentially entails β (in the sense determined by P) if and only if α P{α}-entails β.

Not surprisingly, P-entailment, just as the preferential entailment introduced by Shoham, satisfies the following propositions:

PROPOSITION 6. Let LP be a P-logic, and α, β and γ three formulae in it. Then, if α ∧ β ⊨P γ, then also α ⊨P β ⊃ γ.

PROPOSITION 7. If LP is a P-logic, then LP is monotonic if and only if for all formulae α, β, γ ∈ LP: if α ⊨P β ⊃ γ, then also α ∧ β ⊨P γ.

Writing down the axioms of some theory, we usually have in mind one particular (real or imaginary) world that we want to formalize. This world is the intended model of the theory. Of course, our theory, if it is consistent, also has other models (actually, infinitely many of them). All the models that agree with the intended model on how every formula is assigned a truth value are considered the most preferred. In fact, we will recognize a model as the most preferred if it only agrees with the intended model on a fixed subset of formulae that we think is important for some reason. The preference relation must, therefore, be defined with respect to some partial interpretation. Any two models will be compared according to how well they match the requirements about the valuation of the formulae determined by this partial interpretation. These requirements can be viewed as a function, which we will call a preference rule, that for a given formula and a valuation of variables states which logical value is preferred. Since it would be much easier to compare models if they had the same domains, we will be comparing them through the corresponding Herbrand interpretations.⁴

⁴ A Herbrand interpretation is any interpretation with a Herbrand universe as its domain. A Herbrand universe HT of a theory T is the set of all well-formed expressions that can be built using the function and constant symbols that appear in the skolemized axioms of T. If there is no constant symbol, one such symbol is introduced. We say that two interpretations correspond to each other if and only if every formula is assigned
the same truth value in these interpretations.

DEFINITION 8. Let A be a set of formulae and As(HT) a set of valuations of variables over the Herbrand universe of some theory T. A preference rule for A over T is any function π : A × As(HT) → {true, false}.

DEFINITION 9. Let M1 and M2 be models of a theory T, and H1 and H2 Herbrand interpretations of T corresponding to M1 and M2, respectively. M1 is preferred over M2 with respect to a preference rule π : A × As(HT) → {true, false} if and only if for every formula α ∈ A and every valuation v ∈ As(HT), if valH2(α, v) = π(α, v), then also valH1(α, v) = π(α, v). M1 is strictly preferred over M2 with respect to π if, additionally, there are β ∈ A and v ∈ As(HT) such that valH1(β, v) = π(β, v) and valH2(β, v) ≠ π(β, v). Here valM(α, v) stands for the value of a formula α in an interpretation M under a valuation v.

DEFINITION 10. M1 is strongly preferred over M2 with respect to a set Π of preference rules if and only if for every π ∈ Π, M1 is preferred over M2 with respect to π.

DEFINITION 11. M1 is weakly preferred over M2 with respect to a set Π of preference rules if and only if there is a subset X ⊆ Π such that M1 is strongly preferred over M2 with respect to X.

The notion of weak preference enables us to define a preference relation which is not transitive.

Let us return to the Yale shooting example. The preference relation for this example can be defined using the notion of weak preference. First, however, let us rewrite axiom (4) to reflect the changes in our domain description:

∀s, t. Holds(LOADED, s) ∧ time(s) = t ∧ Noeffects(t) ⊃ ab(ALIVE, SHOOT, s) ∧ Holds(DEAD, result(SHOOT, s))   (5)

If Noeffects(t) is true, it means that the effects of the process have not emerged in the time interval t. The function time maps situations into corresponding time intervals. We also need the following axioms to tie time intervals to situations:

time(S0) = t0,   (6)
∀e, s. time(result(e, s)) = time(s) + 1.   (7)

Let A denote the set of axioms (1)-(3) and (5)-(7), and HA the Herbrand universe of A. Assume that for any time interval t, Π contains a preference rule
Assume that for any time interval t, II contains a preference rule RYCHLIK 6 19 7rt whose value is true for any formula in Bt and any valuation in As(HA), where Bt = {wb(f, WAIT, s), lub(f, LOAD, s)}u { NoeRects( . . . , Noeflects(t + k)}U { lNoe$ects(t + k + l), lNoefiects(t + k + 2), . . . }. It is easy to see that the weak preference over H re- flects the intuition that among the models in which effects of the process have not emerged in a time in- terval t, those models should be preferred that al- low the effects to appear after t + k, but not sooner. If we know nothing about the time the process has started, lNoeffects(time(s)) should be true in ev- ery situation s and, therefore, the conclusion that Fred is dead after shooting is blocked. Knowing that Noeflects( time(&)) t is rue, we are allowed to conclude that lNoeflects(time(So) + k) is true. If k is suffi- ciently big, that is k 2 n, Fred can be believed to die in the final situation. Conclusion In this paper we outlined a generalized theory of model preference, following the idea that any non- monotonic logic can be viewed as a result of transform- ing some base standard logic by a selection strategy defined on models. Our approach is more general than the one presented in [16], b ecause a preference relation is parameterized by some theory written in the base logic. Moreover, we are not making any assumption about properties of preference relations. This allows us to model the situations in which our knowledge is too restricted to build a reasonable and justified order- ing of models of the domain description. It seems also that the notions of P-satisfiability and P-validity are defined more clearly than the corresponding notions of preferential satisfiability and preferential validity introduced by Shoham. References [ 1 ] Baker, A.B. 1989. A simple solution to the Yale shooting problem. Proc. First Interna- tional Conference on Principles of Knowledge Representation and Reasoning, pp. 1 l-20. 
[2] Baker, A.B. & Ginsberg, M.L. 1989. Temporal projection and explanation. Proc. IJCAI, 906-911.
[3] Brown, A.L., Jr. & Shoham, Y. 1989. New results on semantical nonmonotonic reasoning. Lecture Notes in Artificial Intelligence 346:19-26.
[4] Doyle, J. & Wellman, M.P. 1989. Impediments to universal preference-based default theories. Proc. First International Conference on Principles of Knowledge Representation and Reasoning, 94-102.
[5] Gelfond, M. 1988. Autoepistemic logic and formalization of commonsense reasoning: preliminary report. Lecture Notes in Artificial Intelligence 346:176-186.
[6] Ginsberg, M.L. 1988. Readings in Nonmonotonic Reasoning. Los Altos, CA: Morgan Kaufmann.
[7] Hanks, S. & McDermott, D. 1987. Nonmonotonic logic and temporal projection. Artificial Intelligence 33(3):379-412.
[8] Haugh, B.A. 1987. Simple causal minimizations for temporal persistence and projection. Proc. AAAI, 218-223.
[9] Kautz, H.A. 1986. The logic of persistence. Proc. AAAI, 401-405.
[10] Lifschitz, V. 1987. Pointwise circumscription. In Readings in Nonmonotonic Reasoning (ed. M.L. Ginsberg), 179-193. Los Altos, CA: Morgan Kaufmann.
[11] Lifschitz, V. 1987. Formal theories of action (preliminary report). Proc. IJCAI, 966-972.
[12] McCarthy, J. & Hayes, P.J. 1969. Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence 4:463-502.
[13] McCarthy, J. 1986. Applications of circumscription to formalizing common sense knowledge. Artificial Intelligence 28(1):89-116.
[14] Morgenstern, L. & Stein, L.A. 1988. Why things go wrong: A formal theory of causal reasoning. Proc. AAAI, 518-523.
[15] Rychlik, P. 1989. Semantic considerations on non-monotonic reasoning. Technical Report 674, Institute of Computer Science, Polish Academy of Sciences.
[16] Shoham, Y. 1987. Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence. Cambridge, MA: MIT Press.
Terminological Cycles in KL-ONE-based Knowledge Representation Languages¹

Franz Baader
German Research Center for Artificial Intelligence
Projektgruppe WINO, Postfach 2080
D-6750 Kaiserslautern, West Germany
baade@uklirb.uucp

Abstract

Cyclic definitions are often prohibited in terminological knowledge representation languages because, from a theoretical point of view, their semantics is not clear and, from a practical point of view, existing inference algorithms may go astray in the presence of cycles. In this paper, we shall consider terminological cycles in a very small KL-ONE-based language. For this language, the effect of the three types of semantics introduced by (Nebel 1987, 1989, 1989a) can be completely described with the help of finite automata. These descriptions provide a rather intuitive understanding of terminologies with cyclic definitions and give insight into the essential features of the respective semantics. In addition, one obtains algorithms and complexity results for subsumption determination. As it stands, the greatest fixed-point semantics comes off best. The characterization of this semantics is easy and has an obvious intuitive interpretation. Furthermore, important constructs - such as value-restriction with respect to the transitive or reflexive-transitive closure of a role - can easily be expressed.

1. Introduction

Cyclic definitions are prohibited in most terminological knowledge representation languages (e.g., in KRYPTON (Brachman et al. 1985), NIKL (Kaczmarek et al. 1986), or LOOM (MacGregor & Bates 1987)) for the following reasons. From a theoretical point of view, it is not obvious how to define the semantics of terminological cycles. But even if we have fixed a semantics, it is not easy to obtain the corresponding inference algorithms. On the other hand, cyclic definitions may be very useful and intuitive, e.g., if we want to express the transitive closure of roles (i.e., binary relations).
For a role child, value-restrictions with respect to its transitive closure off- spring can be expressed by cyclic concept definitions if we take the appropriate semantics. For the same reason, recur- sive axioms are considered in database research (see e.g., (Aho & Ullman 1979), (Immerman 1982), (Vardi 1982), and (Vielle 1989)). Aho and Ullman have shown that the transitive closure of relations cannot be expressed in the relational calculus, which is a standard relational query 1. This work was supported by the German “Bundesministe- rium ftir Forschung und Technologie” under Grant ITW 8903 0. language. They proposed to add cyclic definitions which are interpreted by least fixed-point semantics. This was also the starting point for an extensive study of fixed-point extensions of first-order logic (see e.g., (Gurevich & S helah 1986)). A thorough investigation of cycles in terminological knowledge representation languages can be found in (Nebel 1987,1989,1989a). Nebel considered three different kinds of semantics - namely, least fixed-point semantics, greatest fixed-point semantics, and what he called descrip- tive semantics - for cyclic definitions in his language 5@” But, due to the fact that this language is relatively strong2, he does not provide a deep insight into the mcan- ing of cycles with respect to these three types of semantics. For the two fixed-point semantics, Nebel ex- plicates his point just with a few examples. The meaning of descriptive semantics - which, in Nebel’s opinion, comes “closest to the intuitive understanding of termino- logical cycles” ((Nebel 1989a), p. 124) - is treated more thoroughly. But even in this case the results are not quite satisfactory. For example, the de&ability of subsumption determination is proved by an argument3 which cannot be used to derive a practical algorithm, and which does not give insight into the reason why one concept defined by some cyclic definition subsumes another one. 
Before we can determine what kind of semantics is most appropriate for terminological cycles, we should get a better understanding of their meaning. The same argument applies to the decision whether to allow or disallow cycles. Even if cycles are prohibited, this should not just be done because one does not know what they mean and how they can be handled.

In this paper, we shall consider terminological cycles in a very small KL-ONE-based language which allows only concept conjunction and value-restrictions. For this language the effect of the three above mentioned types of semantics can be completely described with the help of finite automata. These descriptions provide a rather intuitive understanding of terminologies with cyclic definitions and give insight into the essential features of the respective semantics. In addition, subsumption determination for each type of semantics can be reduced to a (more or less) well-known decision problem for finite automata. Hence, existing algorithms can be used to decide subsumption and known complexity results yield the complexity of subsumption determination.

2. The language allows concept and role conjunction, value-restrictions, number-restrictions and negation of primitive concepts.
3. Roughly speaking, the argument says that it is sufficient to consider only finite interpretations to determine subsumption relations.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Syntax and (descriptive) semantics of our small terminological language is introduced in the next section. In Section 3, alternative types of semantics - namely least and greatest fixed-point semantics - are considered, which may be more appropriate in the presence of terminological cycles. We shall see that, from a constructive point of view, the greatest fixed-point semantics should be preferred since greatest fixed-point models can be obtained by a single limit process.
In Section 4, the three types of semantics are characterized with the help of finite automata. The characterization of the greatest fixed-point semantics is easy and intuitively clear. Subsumption with respect to greatest fixed-point semantics, and - after some modifications of the automaton - also with respect to least fixed-point semantics, can be reduced to inclusion of regular languages. For descriptive semantics, we have to consider inclusion of certain languages of infinite words which are defined by the automaton. Fortunately, these languages have already been investigated in the context of monadic second-order logic (see (Büchi 1960)).

2. A Small KL-ONE-based KR-language

In KL-ONE-based languages we start with atomic concepts and roles and can use the language formalism to define new concepts and roles. Concepts can be considered as unary predicates which are interpreted as sets of individuals, whereas roles are binary predicates which are interpreted as binary relations between individuals. The languages differ in what kind of constructs are allowed for the definition of concepts and roles. The language considered in this paper has only two constructs, which can be used to define concepts: concept conjunction and value-restriction.

Definition 2.1. (concept terms and terminologies) Let C be a set of concept names and R be a set of role names. The set of concept terms is inductively defined. As a starting point of the induction,
(1) any element of C is a concept term. (atomic terms)
Let C and D be concept terms and let R be a role name.
(2) Then C ⊓ D is a concept term. (concept conjunction)
(3) Then ∀R:C is a concept term. (value-restriction)
Let A be a concept name and let D be a concept term. Then A = D is a terminological axiom. A terminology (T-box) is a finite set of terminological axioms with the additional restriction that no concept name may appear more than once as a left hand side of a definition. □
A T-box contains two different kinds of concept names. Defined concepts occur on the left hand side of a terminological axiom. The other concepts are called primitive concepts. For our language, roles are always primitive since we do not have role definitions.

The following is an example of a T-box in this formalism: Let Man, Human, Male and Mos (for "man who has only sons") be concept names and let child be a role name. The T-box consists of the following axioms:

Man = Human ⊓ Male
Mos = Man ⊓ ∀child:Man

That means that a man is human and male. A man who has only sons is a man such that all his children are male humans. Male and Human are primitive concepts while Man and Mos are defined concepts. Assume that we want to express a concept "man who has only male off-springs", for short Momo. We can't just introduce a new role name off-spring because there would be no connection between the two primitive roles child and off-spring. But the intended meaning of off-spring is that it is the transitive closure of child. It seems quite natural to use a cyclic definition for Momo: A man who has only male off-springs is himself a man and all his children are men who have only male off-springs, i.e.,

Momo = Man ⊓ ∀child:Momo.

This is a very simple cyclic definition. In general, cycles in terminologies are defined as follows. Let A, B be concept names and let T be a T-box. We say that A directly uses B in T iff B appears on the right hand side of the definition of A. Let uses denote the transitive closure of the relation directly uses. Then T contains a terminological cycle iff there exists a concept name A in T such that A uses A.

The next definition gives a model-theoretic semantics for the language introduced in Definition 2.1.

Definition 2.2.
(interpretations and models) An interpretation I consists of a set dom(I), the domain of the interpretation, and an interpretation function which associates with each concept name A a subset A^I of dom(I), and with each role name R a binary relation R^I on dom(I), i.e., a subset of dom(I) × dom(I). The sets A^I, R^I are called extensions of A, R with respect to I. The interpretation function - which gives an interpretation for atomic terms - can be extended to arbitrary terms as follows: Let C, D be concept terms and R be a role name. Assume that C^I and D^I are already defined. Then

(C ⊓ D)^I := C^I ∩ D^I,
(∀R:C)^I := { x ∈ dom(I); for all y such that (x,y) ∈ R^I we have y ∈ C^I }.

An interpretation I is a model of the T-box T iff it satisfies A^I = D^I for all terminological axioms A = D in T. □

The semantics we have just defined is not restricted to non-cyclic terminologies. But for cyclic terminologies this kind of semantics - which will be called "descriptive semantics" in the following - may seem unsatisfactory. One might think that the extension of a defined concept should be completely determined by the extensions of the primitive concepts and roles. This is the case for non-cyclic terminologies. More precisely, let T be a T-box containing the primitive concepts P1, ..., Pn and the roles R1, ..., Rm. If T doesn't contain cycles, then any interpretation P1^I, ..., Pn^I, R1^I, ..., Rm^I of the primitive concepts and roles can uniquely be extended to a model of T (see e.g., (Nebel 1989a), Section 3.2.4). If T contains cycles, a given interpretation of all primitive concepts and roles⁴ may have different extensions to models of T.

Example 2.3. Let R be a role name and B, P be concept names.⁵ The terminology T consists of the single axiom

B = P ⊓ ∀R:B.

We consider the following primitive interpretation: dom(I) := { a, b, c, d } =: P^I and R^I := { (a,b), (c,d), (d,d) }.
It is easy to see that this interpretation has two different extensions to models of T. The defined concept B may be interpreted as { a, b } or as { a, b, c, d }. Please note that individuals without R^I-successors are in the extension (∀R:C)^I of the term ∀R:C, no matter how C may be interpreted. □

The example also demonstrates that, with respect to descriptive semantics, the above construction B = P ⊓ ∀R:B does not express the value-restriction B = ∀R*:P for the reflexive-transitive closure R* of R. This implies that our definition of the concept Momo from above is not correct, if we use descriptive semantics. For these reasons we shall now consider alternative types of semantics for terminological cycles.

3. Fixed-point Semantics for Terminological Cycles

A terminology may be considered as a parallel assignment where the defined concepts are the variables and the primitive concepts and roles are parameters.

Example 3.1. Let R, S be role names and A, B, P, Q be concept names, and let T be the terminology

A = Q ⊓ ∀S:B,
B = P ⊓ ∀R:B.

We consider the following primitive interpretation I, which fixes the values of the parameters P, Q, R, S: dom(I) := { a0, a1, a2, ... }, P^I := { a1, a2, a3, ... }, Q^I := { a0 }, R^I := { (a_{i+1}, a_i); i ≥ 1 }, and S^I := { (a0, a_i); i ≥ 1 }. For given values of the variables A, B, the parallel assignment A := Q ⊓ ∀S:B, B := P ⊓ ∀R:B yields new values for A, B. If A and B are interpreted as the empty set, an application of the assignment T yields the values ∅ for A and { a1 } for B. If we reapply the assignment to these values we obtain ∅ for A and { a1, a2 } for B. □

In the general case, a terminology T together with a primitive interpretation I defines a function T_I: (2^dom(I))^n → (2^dom(I))^n, where 2^dom(I) denotes the set of all subsets of dom(I) and n is the number of defined concepts in T. For the above example we have seen that T_I(∅,∅) = (∅, { a1 }) and T_I(∅, { a1 }) = (∅, { a1, a2 }). A primitive interpretation together with an element A of (2^dom(I))^n, which gives the extensions of the defined concepts, yields an interpretation of T. Obviously, this interpretation is a model of T if and only if A is a fixed-point of the function T_I, i.e., if and only if T_I(A) = A. In our example, the element ({ a0 }, { a1, a2, a3, ... }) of (2^dom(I))^2 is a fixed-point of T_I. If we extend I by defining A^I := { a0 }, B^I := { a1, a2, a3, ... }, we obtain a model of T.

One may now ask whether any primitive interpretation I can be extended to a model of T, or equivalently, whether any function T_I has a fixed-point. The answer is yes, because (2^dom(I))^n, ordered componentwise by inclusion, is a complete lattice (i.e., a partially ordered set where any subset has a least upper bound) and the functions T_I are monotonic (i.e., A ⊆ B implies T_I(A) ⊆ T_I(B)). More precisely, this implies that T_I has a least fixed-point lfp(T_I), a greatest fixed-point gfp(T_I), and possibly other fixed-points which lie between the least and the greatest fixed-point (see e.g., (Schmidt 1986), Chapter 6, and (Lloyd 1987), Chapter 1, §5).

Definition 3.2. Let T be a terminology, possibly containing terminological cycles.
(1) The descriptive semantics allows all models of T as admissible models.
(2) The least fixed-point semantics (lfp-semantics) allows only those models of T which come from the least fixed-point of a function T_I (lfp-models).

4. In the following such a partial interpretation will be called primitive interpretation.
5. We shall no longer use intuitive names for concepts and roles since I agree with (Brachman & Schmolze 1985), p. 176, that "suggestive names can do more harm than good in semantic networks and other representation schemes." Suggestive names may seemingly exclude models which are admissible with respect to the formal semantics.
(3) The greatest fixed-point semantics (gfp-semantics) allows only those models of T which come from the greatest fixed-point of a function T_I (gfp-models). □

Any primitive interpretation I can uniquely be extended to an lfp-model (gfp-model) of T. In Example 2.3, the extension of I which interprets B as { a, b } is an lfp-model of T and the extension which interprets B as { a, b, c, d } is a gfp-model of T. It is easy to see that, for cycle-free terminologies, lfp-, gfp- and descriptive semantics coincide (see (Nebel 1989a), p. 137-138).

The next question is how lfp-models (gfp-models) can be constructed from a given primitive interpretation. (Nebel 1987, 1989, 1989a) claimed that the functions T_I are even ω-continuous (i.e., for any chain A(0) ⊆ A(1) ⊆ ..., one has ∪_{i≥0} T_I(A(i)) = T_I(∪_{i≥0} A(i))), and that thus lfp(T_I) = ∪_{i≥0} T_I^i(bottom), where bottom denotes the least element of (2^dom(I))^n, namely the n-tuple (∅, ..., ∅). Unfortunately, this is not true.

Proposition 3.3. In general, we may have lfp(T_I) ≠ ∪_{i≥0} T_I^i(bottom).

PROOF. We consider Example 3.1. It is easy to see that T_I^i(∅,∅) = (∅, { a1, a2, ..., a_i }). Thus ∪_{i≥0} T_I^i(∅,∅) = (∅, { a_i; i ≥ 1 }), which is not a fixed-point since T_I(∅, { a_i; i ≥ 1 }) = ({ a0 }, { a_i; i ≥ 1 }). □

In this example, the least fixed-point is reached by applying T_I once more after building the limit. In general, one may need several limit processes to obtain the least fixed-point (see (Lloyd 1987), p. 29). On the other hand, I was able to show that the greatest fixed-point can always be reached by a single limit process.

Proposition 3.4. The functions T_I are always downward ω-continuous, i.e., for any descending chain A(0) ⊇ A(1) ⊇ ... we have ∩_{i≥0} T_I(A(i)) = T_I(∩_{i≥0} A(i)). Consequently, the greatest fixed-point may be obtained as gfp(T_I) = ∩_{i≥0} T_I^i(top), where top := (dom(I), ..., dom(I)).

PROOF. See (Baader 1990). □

The two propositions show that, from a constructive point of view, the gfp-semantics should be preferred.
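On a finite domain, both fixed points of T_I can be computed by plain iteration. The following sketch (Python; the set-based encoding and function names are ours, not from the paper) replays Example 2.3, where the terminology is B = P ⊓ ∀R:B:

```python
# Example 2.3: dom(I) = {a,b,c,d} = P^I, R^I = {(a,b), (c,d), (d,d)}.
DOM = {"a", "b", "c", "d"}
P = set(DOM)
R = {("a", "b"), ("c", "d"), ("d", "d")}

def value_restriction(rel, concept):
    """(forall R:C)^I: individuals all of whose R-successors lie in C."""
    return {x for x in DOM if all(y in concept for (x2, y) in rel if x2 == x)}

def T(b):
    """One application of the parallel assignment B := P and (forall R:B)."""
    return P & value_restriction(R, b)

def iterate(start):
    """Apply T until a fixed point is reached (terminates on finite domains)."""
    cur = start
    while T(cur) != cur:
        cur = T(cur)
    return cur

lfp = iterate(set())   # iteration from bottom yields the least fixed point
gfp = iterate(DOM)     # iteration from top yields the greatest fixed point
```

Starting from bottom, the iteration yields { a, b } (the lfp-model mentioned above); starting from top, it yields { a, b, c, d } (the gfp-model). The individual c is dropped from the lfp because its only R-successor d sits on an R-cycle and can never be established "from below".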
However, if dom(I) is finite, the greatest and the least fixed-point can be reached after a finite number of applications of T_I.

An important service terminological representation systems provide is computing the subsumption hierarchy.

Definition 3.5. (subsumption of concepts) Let T be a terminology and let A, B be concept names.
A ⊑_T B iff A^I ⊆ B^I for all models I of T,
A ⊑_lfp,T B iff A^I ⊆ B^I for all lfp-models I of T,
A ⊑_gfp,T B iff A^I ⊆ B^I for all gfp-models I of T.
In this case we say that B subsumes A in T w.r.t. descriptive semantics (resp. lfp-semantics, gfp-semantics). □

4. Characterization of the Semantics Using Finite Automata

Before we can associate a finite automaton A_T to a terminology T we must transform T into some kind of normal form. It is easy to see that the concept terms ∀R:(B ⊓ C) and (∀R:B) ⊓ (∀R:C) are equivalent. Hence any concept term can be transformed into a finite conjunction of terms of the form ∀R1:∀R2:...∀Rn:A, where A is a concept name. We shall abbreviate the prefix "∀R1:∀R2:...∀Rn" by "∀W" where W = R1R2...Rn is a word over R_T, the set of role names occurring in T. In the case n = 0 we also write "∀ε:A"⁶ instead of simply "A". For an interpretation I and a word W = R1R2...Rn, let W^I denote the composition R1^I ∘ R2^I ∘ ... ∘ Rn^I of the binary relations R1^I, ..., Rn^I. The term ε^I denotes the identity relation, i.e., ε^I = { (d,d); d ∈ dom(I) }. Let T be a terminology where all terms are normalized as described above.

Definition 4.1. The generalized (nondeterministic) automaton A_T is defined as follows: The alphabet of A_T is the set R_T of all role names occurring in T; the states of A_T are the concept names occurring in T; a terminological axiom of the form A = ∀W1:A1 ⊓ ... ⊓ ∀Wk:Ak gives rise to k transitions, where the transition from A to Ai is labeled by the word Wi. □

The automaton A_T is called "generalized" because transitions are labeled by words over the alphabet and not only by symbols of the alphabet.
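The construction of Definition 4.1 is easy to mechanize. The sketch below (Python; encoding ours) builds the generalized automaton for the normalized Momo terminology from Section 2, Momo = ∀ε:Man ⊓ ∀child:Momo and Man = ∀ε:Human ⊓ ∀ε:Male, and enumerates path labels L(p,q) up to a bound. Note that bounded enumeration is only an illustration of the languages involved, not a decision procedure - deciding inclusion of the (in general infinite) regular languages requires standard automata algorithms:

```python
from collections import deque

# States are concept names; each transition is (word over roles, target state).
# "" plays the role of the empty word epsilon.
TRANSITIONS = {
    "Momo": [("", "Man"), ("child", "Momo")],
    "Man": [("", "Human"), ("", "Male")],
    "Human": [],
    "Male": [],
}

def labels(start, goal, max_len):
    """All path labels W from `start` to `goal` with at most `max_len`
    characters - a bounded approximation of the language L(start, goal)."""
    found, seen = set(), set()
    queue = deque([(start, "")])
    while queue:
        state, word = queue.popleft()
        if (state, word) in seen:
            continue
        seen.add((state, word))
        if state == goal:
            found.add(word)
        for w, nxt in TRANSITIONS[state]:
            if len(word) + len(w) <= max_len:
                queue.append((nxt, word + w))
    return found

# Here L(Momo, Male) = child* while L(Man, Male) = {epsilon}, so the
# inclusion L(Man, P) within L(Momo, P) holds for every primitive P:
# Man gfp-subsumes Momo, as one would expect.
for p in ("Human", "Male"):
    assert labels("Man", p, 10) <= labels("Momo", p, 10)
```

The subset check in the last loop is exactly the criterion that Section 4 establishes for gfp-subsumption, restricted here to words of bounded length.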
However, it is well-known that any generalized finite automaton can be transformed into an equivalent finite automaton (see (Manna 1974), p. 9). Definition 4.1 will now be illustrated by an example.

Example 4.2. (A normalized terminology and the corresponding automaton)

A = ∀R:A ⊓ ∀S:D
B = ∀RS:D ⊓ ∀S:C
C = ∀R:C
D = ∀S:D ⊓ ∀ε:P

A finite path (resp. infinite path) in a generalized automaton will be described by the initial state of the path and the finite sequence of transition labels followed by the terminal state of the path (resp. by the initial state and the infinite sequence of transition labels). For example, A; R, R, S, S, S, ε; P is a finite path in the above automaton and B; RS, S, S, S, ... is an infinite path. The label of the finite path is the finite word RRSSS while the label of the infinite path is the infinite word RSSSS... . For two states p, q of an automaton let L(p,q) denote the set of all finite words which are labels of paths from p to q and let U(p) be the set of all finite⁷ and infinite words which are labels of infinite paths with initial state p. In the example, L(A,P) = R*SS* = { RⁿSSᵐ; n,m ≥ 0 } and U(A) = { RRR... } ∪ { RⁿSSS...; n ≥ 0 }. We are now ready to characterize the gfp-semantics.

Theorem 4.3. Let T be a terminology and let A_T be the corresponding automaton. Let I be a gfp-model of T and let A, B be concept names occurring in T.
(1) For any d ∈ dom(I) we have d ∈ A^I iff for all primitive concepts P, all words W ∈ L(A,P), and all individuals e ∈ dom(I), (d,e) ∈ W^I implies e ∈ P^I.
(2) Subsumption in T can be reduced to inclusion of regular languages defined by A_T. More precisely, A ⊑_gfp,T B iff L(B,P) ⊆ L(A,P) for all primitive concepts P.

PROOF. The "if" direction of (1) is proved by induction on the positive integer n such that d is not in the A-component of T_I^n(top). The "only if" direction is proved by induction on the length of a path from A to P with label W. (2) is a relatively easy consequence of (1).
See (Baader 1990) for a complete proof. □

The theorem can intuitively be understood as follows: The language L(A,P) stands for the possibly infinite number of constraints of the form ∀W:P which the terminology imposes on A. The more constraints are imposed the smaller the concept is. In the example, B subsumes A w.r.t. gfp-semantics since L(B,P) = RSS* is a subset of L(A,P) = R*SS*. For the terminology B = P ⊓ ∀R:B of Example 2.3, L(B,P) = R* = { Rⁿ; n ≥ 0 }. Hence it is an immediate consequence of part (1) of the theorem that this terminology - if interpreted with gfp-semantics - expresses value-restriction with respect to the reflexive-transitive closure of R.

Corollary 4.4. The problem of determining subsumption w.r.t. gfp-semantics is PSPACE-complete.

PROOF. We have already seen that subsumption can be reduced to inclusion of regular languages. On the other hand, it is relatively easy to show that inclusion of regular languages can be reduced to subsumption determination w.r.t. gfp-semantics. It is well-known (see (Garey & Johnson 1979)) that inclusion of regular languages is PSPACE-complete.⁸ □

This shows that, even for our very small language, subsumption determination w.r.t. gfp-semantics is rather hard from a computational point of view. On the other hand, (Nebel 1989b) has shown that, even without cycles, this language has a co-NP-complete subsumption problem.

We shall now consider the lfp-semantics.

Proposition 4.5. Let T be a terminology and let A_T be the corresponding automaton. Let I be an lfp-model of T and let A be a concept name occurring in T. For any d0 ∈ dom(I) we have d0 ∈ A^I iff the following two properties hold:
(P1) For all primitive concepts P, all words W ∈ L(A,P), and all individuals e ∈ dom(I), (d0,e) ∈ W^I implies e ∈ P^I.

6. "ε" denotes the empty word.
7. An infinite path of the form p; W1, ..., Wn, ε, ε, ε, ... has the finite word W1...Wn as label. This word is also an element of U(p).
(P2) For all infinite paths A; W1, W2, W3, ... and all individuals d1, d2, d3, ..., there exists n ≥ 1 such that (d_{n-1}, d_n) ∉ W_n^I.

PROOF. The proof is more difficult than the proof of part (1) of Theorem 4.3 since it requires transfinite induction (see (Baader 1990)). □

As a consequence of (P2) of the proposition, ε-cycles in A_T - i.e., non-empty paths of the form B; ε, ..., ε; B - are important for the lfp-semantics. In particular, it is easy to show that the concept A is inconsistent w.r.t. lfp-semantics - i.e., it has the empty extension in all lfp-models of T - if and only if there exists a path A; ε, ..., ε; B such that B is the initial state of an ε-cycle. Because of this phenomenon, the automaton A_T has to be modified before we can express subsumption w.r.t. lfp-semantics. We add a new state Q_loop to A_T, a transition with label ε from Q_loop to Q_loop, and for each role R in T a transition with label R from Q_loop to Q_loop. For any state B of A_T lying on an ε-cycle we add a transition with label ε from B to Q_loop, and for any primitive concept P we add a transition with label ε from Q_loop to P. This modified automaton will be called A'_T.

Theorem 4.6. Let T be a terminology and let A'_T be the corresponding modified automaton. Then A ⊑_lfp,T B iff U(B) ⊆ U(A) and L(B,P) ⊆ L(A,P) for all primitive concepts P.

PROOF. See (Baader 1990). □

In Example 4.2, B does not subsume A w.r.t. lfp-semantics since U(B) contains the infinite word SRRR... which is not in U(A). It is not hard to show that the inclusion problem "U(B) ⊆ U(A)", which refers to languages of finite and infinite words, can be reduced to an ordinary inclusion problem for regular languages. On the other hand, inclusion of regular languages can also be reduced to subsumption w.r.t. lfp-semantics (see (Baader 1990)).

Corollary 4.7. The problem of determining subsumption w.r.t. lfp-semantics is PSPACE-complete.
□

For the descriptive semantics, the characterization of subsumption as well as the proof of its correctness is more involved. Infinite paths are still important but it is not enough to consider just their labels. The states which are reached infinitely often by the path are also significant. An infinite path which has initial state A and reaches the state C infinitely often will be represented in the form A, U0, C, U1, C, U2, C, ..., where the Ui are labels of non-empty paths⁹ from A to C for i = 0 and from C to C for i > 0.

Theorem 4.8. Let T be a terminology and let A_T be the corresponding automaton. Then A ⊑_T B iff the following two properties hold:
(P1) For all primitive concepts P, L(B,P) ⊆ L(A,P).
(P2) For all defined concepts C and all infinite paths of the form B, U0, C, U1, C, U2, C, ..., there exists k ≥ 0 such that U0U1...Uk is the label of a path from A to C.

PROOF. See (Baader 1990). □

It is not at all obvious how to decide (P2) for given states A, B, C of a generalized nondeterministic automaton. Fortunately, this problem can be reduced (see (Baader 1990)) to an inclusion problem for a certain class of languages of infinite words, and this class has already been considered in the context of monadic second-order logic (see (Büchi 1960)). A solution to the inclusion problem for this class is also not obvious, but there is a theorem due to Büchi and McNaughton (see (Eilenberg 1974), p. 382¹⁰) which implies that the class is closed under intersection and complement. But then the inclusion problem can be reduced to the emptiness problem for these languages as follows: L1 ⊆ L2 iff L̄2 ∩ L1 = ∅. Finally, it is rather easy to solve the emptiness problem. More precisely, it follows from a result in (Sistla, Vardi & Wolper 1987) that the inclusion problem for languages accepted by Büchi automata is PSPACE-complete. This shows that the problem of subsumption determination

8. This is only true if the languages are given by nondeterministic automata.
With respect to the size of deterministic automata the problem could be solved in quadratic time.
9. Non-empty means that the path uses at least one transition. Nevertheless, Ui can be empty if all transitions of the path are labeled with ε.
10. The proof is constructive; but it takes eight pages, which shows that we are dealing with a hard problem.

w.r.t. descriptive semantics is decidable with polynomial space (see also (Nebel 1990)).

5. Conclusion

We have considered a small terminological language, because for this language the meaning of terminological cycles with respect to different kinds of semantics and, in particular, the important subsumption relation could be characterized with the help of finite automata. These results may help to decide what kind of semantics is most appropriate for cyclic definitions, not only for this small language, but also for suitably extended languages. As it stands, the gfp-semantics comes off best. The characterization given in Theorem 4.3 is easy and has an obvious intuitive interpretation. Furthermore, important constructs - such as value-restriction with respect to the reflexive-transitive closure of a role - can easily be expressed. The lfp-semantics is less constructive and the modifications of the automaton which are necessary to characterize subsumption are not obvious. For the descriptive semantics one has to consider certain languages of infinite words which are more difficult and less intuitive than the regular languages which occur in the context of gfp-semantics.

This research can be continued in two directions. Firstly, one may try to extend the results to cyclic definitions in larger languages. As a first step in this direction, the results for gfp-semantics were extended to cycles in the language FL⁻ of (Levesque & Brachman 1987). Hybrid inferences such as realization can also be handled in this context (see (Baader 1990)).
Secondly, one can use a larger language, but restrict cycles to the small language. One idea in this direction is to extend a given language by value-restrictions of the form ∀L:P where L is a regular language over the alphabet of role names. In accordance with part (1) of Theorem 4.3, the semantics of this construct should be defined as (∀L:P)^I := { d ∈ dom(I); for all words W ∈ L and all individuals e ∈ dom(I), (d,e) ∈ W^I implies e ∈ P^I }. For example, ∀RR*:P would express value-restriction with respect to the transitive closure of the role R (RR* is the regular language { Rⁿ; n ≥ 1 }).

6. References

Aho, A.V., and Ullman, J.D. 1979. Universality of Data Retrieval Languages. In Proceedings of the 6th ACM Symposium on Principles of Programming Languages, 110-120.
Baader, F. 1990. Terminological Cycles in KL-ONE-based KR-languages. Research Report RR-90-01, DFKI, Kaiserslautern.
Brachman, R.J., and Schmolze, J.G. 1985. An Overview of the KL-ONE Knowledge Representation System. Cognitive Science 9: 171-216.
Brachman, R.J., Pigman-Gilbert, V., and Levesque, H.J. 1985. An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of KRYPTON. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, 532-539, Los Angeles.
Büchi, J.R. 1960. On a Decision Method in Restricted Second Order Arithmetic. In Proceedings of the 1960 Congress on Logic, Methodology and Philosophy of Science, 1-11, Stanford.
Eilenberg, S. 1974. Automata, Languages and Machines, Vol. A. New York/London: Academic Press.
Garey, M.R., and Johnson, D.S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco: Freeman.
Gurevich, Y., and Shelah, S. 1986. Fixed-point Extensions of First-Order Logic. Annals of Pure and Applied Logic 32: 265-280.
Immerman, N. 1982. Relational Queries Computable in Polynomial Time. In Proceedings of the 14th ACM Symposium on the Theory of Computing, 147-152.
Kaczmarek, T.S., Bates, R., and Robins, G. 1986. Recent Developments in NIKL. In Proceedings of the 5th National Conference of the American Association for Artificial Intelligence, 978-987, Philadelphia.
Levesque, H.J., and Brachman, R.J. 1987. Expressiveness and Tractability in Knowledge Representation and Reasoning. Computational Intelligence 3: 78-93.
Lloyd, J.W. 1987. Logic Programming, Second, Extended Edition. Berlin: Springer Verlag.
McNaughton, R. 1966. Testing and Generating Infinite Sequences by a Finite Automaton. Information and Control 9: 521-530.
MacGregor, R., and Bates, R. 1987. The Loom Knowledge Representation Language. Technical Report ISI/RS-87-188, Information Sciences Institute, Univ. of Southern California.
Manna, Z. 1974. Mathematical Theory of Computation. New York: McGraw-Hill.
Nebel, B. 1987. On Terminological Cycles. KIT Report 58, Technische Universität Berlin.
Nebel, B. 1989. On Terminological Cycles. In Preprints of the Workshop on Formal Aspects of Semantic Networks, Two Harbors.
Nebel, B. 1989a. Reasoning and Revision in Hybrid Representation Systems. PhD diss., Universität des Saarlandes, Saarbrücken.
Nebel, B. 1989b. Terminological Reasoning is Inherently Intractable. IWBS Report 82, IBM Deutschland.
Nebel, B. 1990. Terminological Cycles: Semantics and Computational Properties. To appear in Sowa, J. ed. 1990. Formal Aspects of Semantic Networks.
Schmidt, D.A. 1986. Denotational Semantics: A Methodology for Language Development. Boston: Allyn and Bacon.
Sistla, A.P., Vardi, M.Y., and Wolper, P. 1987. The Complementation Problem for Büchi Automata with Applications to Temporal Logic. Theoretical Computer Science 49: 217-237.
Vardi, M. 1982. Complexity of Relational Query Languages. In Proceedings of the 14th ACM Symposium on the Theory of Computing, 137-146.
Vielle, L. 1989. Recursive Query Processing: The Power of Logic. Theoretical Computer Science 69: 1-53.
On the Complexity of Monotonic Inheritance with Roles

Ramiro A. de T. Guerreiro, Rio Scientific Center, IBM Brazil, P.O. Box 4624, 20.001, Rio de Janeiro, RJ
Andrea S. Hemerly, Rio Scientific Center, IBM Brazil, P.O. Box 4624, 20.001, Rio de Janeiro, RJ
Yoav Shoham, Computer Science Dept., Stanford University, Stanford, CA 94305

ABSTRACT

We investigate the complexity of reasoning with monotonic inheritance hierarchies that contain, beside ISA edges, also ROLE (or FUNCTION) edges. A ROLE edge is an edge labelled with a name such as spouse-of or brother-of. We call such networks ISAR networks. Given a network with n vertices and m edges, we consider two problems: (P1) determining whether the network implies an isa relation between two particular nodes, and (P2) determining all isa relations implied by the network. As is well known, without ROLE edges the time complexity of P1 is O(m), and the time complexity of P2 is O(n³). Unfortunately, the results do not extend naturally to ISAR networks, except in a very restricted case. For general ISAR networks we first give a polynomial algorithm by an easy reduction to propositional Horn theory. As the degree of the polynomial is quite high (O(mn⁴) for P1, O(mn') for P2), we then develop a more direct algorithm. For both P1 and P2 its complexity is O(n³ + m²). Actually, a finer analysis of the algorithm reveals a complexity of O(nr(log r) + n²r + n³), where r is the number of different ROLE labels. One corollary is that if we fix the number of ROLE labels, the complexity of our algorithm drops back to O(n³).

1. INTRODUCTION

Inheritance systems are a common framework for representing knowledge, in both AI and the database community. In these systems objects are organized hierarchically, and properties of objects are inherited by those below them in the hierarchy.
For example, if it is recorded in this knowledge base that mothers are parents and that parents are responsible people, it may be concluded that mothers too are responsible. As is well known, an inheritance system may be represented by a directed graph. The vertices in the graph are all of the same kind, and they each represent a class of objects. Arcs, on the other hand, come in several varieties, and there has been less uniformity among the various inheritance schemes in this respect. Beside the basic ISA type of arc, denoting class inclusion and common to all systems, other types that have been mentioned are ROLEs (or FUNCTIONs), RELATIONs, and IDENTITYs.

In the past few years much attention has been paid to the issue of cancellation of inheritance, that is, to systems which allow an object to override some property that it would otherwise inherit from another object higher in the hierarchy. These systems have been called nonmonotonic (since the set of properties does not increase monotonically as one descends the hierarchy); in contrast, systems without cancellation have been called monotonic. Most recent research in inheritance systems has been concerned with the semantics of inheritance. In particular there have been several results relating cancellation to nonmonotonic logics (Etherington, 1987), (Touretzky, 1986), (Touretzky et al., 1987). Our concern in this paper is different, as we look at the complexity of reasoning with inheritance networks.

Consider a network with vertices V and edges E, and let |V| = n and |E| = m. As is well known, if all the edges are ISA edges (such simple networks have been called taxonomic) then in time O(m) one can determine whether the network implies an ISA relation between two particular nodes, and in time O(nm) (and therefore in time O(n³)) one can find all the implied ISA relations in the graph. If E contains other types of edge or if cancellation is allowed then the problem becomes harder.
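For the taxonomic case, the O(m) bound for the first problem is just graph reachability over the ISA edges. A minimal sketch (Python; the edge encoding is ours), using the mothers/parents example from the introduction:

```python
def implies_isa(isa_edges, a, b):
    """Does the taxonomic network entail `a` ISA `b`?  A depth-first
    search over ISA edges visits each edge at most once, hence O(m)."""
    succ = {}
    for (x, y) in isa_edges:
        succ.setdefault(x, []).append(y)
    stack, seen = [a], {a}
    while stack:
        node = stack.pop()
        if node == b:
            return True
        for nxt in succ.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

edges = [("mother", "parent"), ("parent", "responsible-person")]
# Mothers are parents, parents are responsible, so mothers are responsible.
assert implies_isa(edges, "mother", "responsible-person")
```

Running this search once from every vertex gives all implied ISA relations in O(nm), which is the second bound quoted above.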
We know of relatively few results in this direction, including ones by Touretzky (1986) and Borgida (1989). Some relevant results involving negative and positive links are found in Thomason (1986). There are also results involving RELATIONs and IDENTITYs in Thomason (1989). We know of no results on the particular problem we consider, which is to allow E to contain ROLEs as well as ISA edges, and to prohibit cancellation; we call these ISAR networks. We preclude cancellation not because we consider it unimportant, but because we would like to understand the monotonic case first. As will be seen, it is by no means straightforward.

GUERREIRO ET AL. 627
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The problem lies in a new closure rule that is provided by the interaction between ROLEs and ISA edges. Consider for example the following graph:

[Figure omitted: an ISA edge from mother to parent, with a spouse-of ROLE edge leaving each of the two nodes.]

Intuitively, since mothers are parents, spouses of mothers are also spouses of parents. In other words, an ISA relation is implied about the two right nodes. Of course, this intuitive claim needs to be formalized, and we will indeed do that. We will then consider the complexity of determining the implied ISA relations in such a network. In a very restricted type of ISAR network we will be able to salvage the O(m) and O(nm) results from the simple taxonomic case. For general ISAR networks we will offer a slightly costlier O(n³ + m²) algorithm to find all the implied ISA relations. Actually, a finer analysis of our general algorithm reveals a complexity of O(nr(log r) + n²r + n³), where r is the number of ROLE labels in the network (we distinguish ROLE labels, which are distinct, like brother-of and spouse-of, from actual ROLE edges, in which ROLE labels may be repeated). Note that we have r ≤ m, but we do not have m ≤ n², since, unlike ISA edges, we may have multiple ROLE edges between two nodes (the spouses of mothers are exactly the joint-tax-payers of mothers).
Among other things, this finer analysis takes us back to O(n³) for an r bounded by a constant. As this is close to the best known algorithm for simple taxonomic networks, it seems unlikely that this result can be significantly improved.

The remainder of the article is organized as follows. In section 2, we briefly define the semantics of ISAs and ROLEs, and based on these we provide provably complete conditions for determining all the implicit ISAs entailed by a given ISAR network. In section 3, we formally define the graph theoretic problem. In section 4, we briefly recall the results on taxonomic hierarchies, all well known. In section 5, we finally turn to the complexity of reasoning with ISAR networks. In section 5.1, we extend the results of section 4 to a restricted kind of ISAR networks which we call "equi-multiple inheritance" ISAR (EMI-ISAR) networks. We then turn to the general case. First, in section 5.2, we provide a polynomial algorithm which reduces the problem to that of determining entailment by a propositional Horn theory. The degree of the polynomial turns out to be quite high, and so, in section 5.3, we give another, more direct algorithm, whose complexity was discussed above. Finally, in section 6 we summarize our results, compare them to previous results of which we are aware, and point to some open questions.

2. SYNTAX AND SEMANTICS OF ISAR NETWORKS

In order to be able to define our problem we first present the syntax and semantics for monotonic ISAR networks. Their syntax is defined as follows.

Definition 1: Let V and L be two disjoint sets. An ISAR network is a triple <V, E_I, E_R> where E_I ⊆ V × V, E_R ⊆ V × V × L, and it satisfies:
1) If (a,b,p) ∈ E_R and (a,c,p) ∈ E_R then b = c;
2) If p ∈ L then there are a ∈ V and b ∈ V such that (a,b,p) ∈ E_R.
V is the set of vertices, L is the set of ROLE labels, E_I is the set of ISA edges and E_R is the set of ROLE edges.
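As a concrete illustration of this syntax (our own sketch, with invented names, not the paper's algorithm), the derivable isa pairs can be computed by naively closing a network under transitivity and the ISA/ROLE interaction rule, which are formalized below as Rule 1 and Rule 2 of Definition 4:

```python
def isa_closure(V, E_I, E_R):
    """Naive fixpoint over the closure rules: E_I is a set of (a, b) ISA
    edges, E_R a set of (a, b, p) ROLE edges. Returns all derivable isa
    pairs. For illustration only -- far slower than the section 5 algorithms."""
    isa = set(E_I) | {(a, a) for a in V}
    changed = True
    while changed:
        changed = False
        new = set()
        # Rule 1: transitivity of isa.
        for (a, b) in isa:
            for (b2, c) in isa:
                if b == b2 and (a, c) not in isa:
                    new.add((a, c))
        # Rule 2: isa(a,b) plus ROLE edges a --p--> c and b --p--> d yields isa(c,d).
        for (a, b) in isa:
            for (a2, c, p) in E_R:
                if a2 != a:
                    continue
                for (b2, d, q) in E_R:
                    if b2 == b and q == p and (c, d) not in isa:
                        new.add((c, d))
        if new:
            isa |= new
            changed = True
    return isa

V = {"mother", "parent", "spouse-of-mother", "spouse-of-parent"}
E_I = {("mother", "parent")}
E_R = {("mother", "spouse-of-mother", "spouse-of"),
       ("parent", "spouse-of-parent", "spouse-of")}
print(("spouse-of-mother", "spouse-of-parent") in isa_closure(V, E_I, E_R))  # True
```

On the introduction's example, the fixpoint derives exactly the intuitive conclusion that spouses of mothers are spouses of parents.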
The second condition in the above definition is not essential, but it guarantees that any ROLE label indeed labels at least one ROLE edge, which is convenient. We now define their semantics.

Definition 2: Let N = <V, E_I, E_R> be an ISAR network and L the set of ROLE labels of N. A model for N is a pair <D, ψ> where D is a set and ψ is a (total) function on V ∪ L such that:
1) If a ∈ V then ψ(a) ⊆ D;
2) If p ∈ L then ψ(p) is a partial function from D to D;
3) If (a,b) ∈ E_I then ψ(a) ⊆ ψ(b);
4) If (a,b,p) ∈ E_R then ψ(b) = ψ(p)(ψ(a)).

Next we define two isa relations, one semantic and one syntactic.

Definition 3: Let N = <V, E_I, E_R> be an ISAR network. The binary relation isa_m on V is defined by: isa_m(a,b) iff for every model <D, ψ> for N, it is the case that ψ(a) ⊆ ψ(b). We will denote the fact that isa_m(a,b) holds by N ⊨ isa(a,b).

Definition 4: Let N = <V, E_I, E_R> be an ISAR network. The binary relation isa_s on V is the smallest set satisfying:
1) If (a,b) ∈ E_I or a = b then (a,b) ∈ isa_s;
2) (Rule 1) If (a,b) ∈ isa_s and (b,c) ∈ isa_s then (a,c) ∈ isa_s;
3) (Rule 2) If (a,b) ∈ isa_s, (a,c,p) ∈ E_R and (b,d,p) ∈ E_R then (c,d) ∈ isa_s.
We will denote the fact that isa_s(a,b) holds by N ⊢ isa(a,b).

The next theorem establishes that isa_m and isa_s are actually the same relation.

Theorem 1: (Soundness and Completeness) Let N = <V, E_I, E_R> be an ISAR network. For every a ∈ V and b ∈ V, N ⊢ isa(a,b) iff N ⊨ isa(a,b).
Proof. (→) Note that if (a,b) ∈ E_I then N ⊨ isa(a,b); and also that Rule 1 and Rule 2 are sound with respect to our semantics. (←) We omit this part of the proof; it will be included in the long version of this paper.

Note that if E_R is empty then the ISAR network reduces to a simple taxonomic inheritance network.

3. FORMAL PROBLEM DEFINITION

Given the syntax and semantics of ISAR networks, we now formally define the two problems we will be addressing.

P₁.
Input: an ISAR network N = <V, E_I, E_R> and a pair of vertices x, y in V.
Output: 'yes' if N ⊨ isa(x,y), 'no' otherwise.

P₂.
Input: an ISAR network N = <V, E_I, E_R>.
Output: an ISAR network N' = <V, E_I', E_R> such that E_I' = {(x,y): N ⊨ isa(x,y)}.

If |V| = n and COMP_i is the time complexity of P_i (i = 1,2), then clearly we have COMP₂ ≤ n²·COMP₁, since we solve P₂ by solving P₁ for each pair of nodes. In the rest of this paper, the number of vertices, |V|, will be n, the number of edges, |E_I| + |E_R|, will be m, and the number of ROLE labels, |L|, will be r. Note that r ≤ m and m ≤ rn².

4. SIMPLE TAXONOMIC HIERARCHIES: A REVIEW

In this section we briefly review the well-known results for the case in which the network contains only ISA edges.

Theorem 2: There exists an O(m) algorithm for P₁.
Proof. Use, e.g., the depth-first search (DFS) algorithm for directed graphs (Aho et al., 1974).

In fact, DFS may be used to find in O(m) time all the nodes reachable from a given node. We therefore have the following:

Corollary 1: There exists an O(nm) algorithm for P₂.
Proof. Run a DFS from each node.

Theorem 3: There exists an O(n³) algorithm for P₂.
Proof. Use the dynamic programming algorithm of, e.g., (Aho et al., 1974).

In fact, there exists a theoretically even better algorithm for P₂, whose complexity is about O(n^2.4). However, this theoretical result has not been translated to a practical advantage.

We mention these results for two reasons. First, as these are the best known results for taxonomic networks (and of course the linear result for P₁ is provably optimal), they form a lower bound for what we might expect for ISAR networks, and are good reference points against which to test our results. Second, the details of the algorithms mentioned above provide good insight into the qualitative increase in difficulty of ISAR networks. In the next section we discuss the DFS algorithm, and why it can be extended only to a limited class of ISAR networks.
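These taxonomic procedures are small enough to sketch directly. The following illustration is ours (the function names are invented): it gives Theorem 2's decision procedure, and running the reachability pass from every vertex yields Corollary 1's O(nm) bound.

```python
def reachable(E_I, x):
    """All vertices reachable from x along ISA edges -- one O(m) DFS."""
    adj = {}
    for (a, b) in E_I:
        adj.setdefault(a, []).append(b)
    stack, seen = [x], {x}
    while stack:
        v = stack.pop()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def isa_taxonomic(E_I, x, y):
    """Theorem 2: decide P1 in a purely taxonomic network."""
    return y in reachable(E_I, x)

E_I = {("mother", "parent"), ("parent", "person")}
print(isa_taxonomic(E_I, "mother", "person"))  # True
print(isa_taxonomic(E_I, "person", "mother"))  # False
```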
The dynamic programming algorithm, on the other hand, does not extend at all as far as we can see. Briefly, it relies on the property that if a path is decomposed at any vertex then each component is itself a path; that is true for simple taxonomic hierarchies, but not for general ISAR networks.

5. ALGORITHMS FOR ISAR NETWORKS

We now address the two problems defined in section 3, P₁ and P₂, in the context of general ISAR networks. We start with a very efficient algorithm for a restricted class of ISAR networks. We then give an easy algorithm for the general case whose complexity, though polynomial, is uncomfortably high. Finally, we give a low polynomial algorithm for the general case.

5.1 EMI-ISAR networks

The DFS algorithm for taxonomic hierarchies extends paths into the graph, backtracks chronologically when a path is blocked, and never traverses the same edge twice. In this section we extend the algorithm to ISAR networks, introducing two major modifications. First, paths are extended in a way that is more complicated than simply following ISA edges. Second, in order to guarantee that we do not lose completeness by not traversing edges more than once (which guarantees linearity), we will need to impose a strong restriction on the network. Given the space limitations on this paper, we will only illustrate the algorithm through an example. Consider the simple network in Figure 1a.

[Figure 1 omitted: an example ISAR network over Z (the integers), Z⁺ (the nonnegative integers), Q (the rationals), R (the reals), and R⁺ (the nonnegative reals), with ROLE labels |x| (the absolute value function) and x² (the squaring function).]

Now consider the query isa({289}, R⁺). This query should succeed due to the path shown in Figure 1b, which consists of three types of edge traversal: going back on ROLE edges (e.g., Z⁺ to Z), going up ISA edges (e.g., Z to Q), and going forward on ROLE edges (e.g., R to R⁺). We will call these respectively left, up and right traversal.
Left and up traversals have no preconditions. Right traversal has a precondition that it not immediately follow a left traversal, and that the last left traversal to precede it was along a ROLE with the same label. To implement this we maintain a stack as we develop a path: up traversal does not affect the stack, left traversal pushes the ROLE label onto the stack, right traversal pops the stack (and has the precondition mentioned above). Figures 1c–1j illustrate the stack at all the vertices along the path in Figure 1b.

Lemma 1: Let N be an ISAR network. Then N ⊨ isa(x,y) iff there is a path of the sort described above that starts at x with an empty stack and ends at y with an empty stack.

The only question that remains is how to determine efficiently whether such a path exists. Unfortunately, in ISAR networks with multiple inheritance we will in general need to traverse some edges many times. A simple example exists already in Figure 1a: if the first path developed is {289}{17}Z⁺QR, then at that point backtracking must occur. If we are not allowed to traverse the edge QR twice, then we will not discover the path {289}{17}Z⁺ZQRR⁺R⁺, and thus miss a solution. In a special case, however, it is safe to not traverse an edge twice:

Definition 5: The label of a path is the sequence of ROLE labels appearing in it, ignoring all ISA edges.

Definition 6: An ISAR network is an equi-multiple inheritance ISAR network (EMI-ISAR network) if for any two nodes x and y, all paths from x to y have the same label.

Theorem 4: In the case of EMI-ISAR networks there exists an O(m) algorithm for P₁.
Proof. Develop paths of the sort described above in a depth-first fashion, backtracking chronologically, never traversing an edge twice.

In fact, just as in the simple taxonomic case, this extended DFS can be used to discover all nodes to which a path exists from a given node. We thus get the following:

Corollary 2: In the case of EMI-ISAR networks there exists an O(nm) algorithm for P₂.
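The left/up/right discipline can be made concrete as follows. This sketch is our own reformulation: instead of the traverse-each-edge-once rule that makes the search O(m) on EMI-ISAR networks, it memoizes (vertex, stack, just-did-a-left) states and caps the stack depth, so it decides these queries correctly but without the linear-time guarantee. The example network is our reading of Figure 1 and is an assumption of the sketch.

```python
from collections import deque

def emi_isa(E_I, E_R, x, y, max_depth=None):
    """Decide whether a left/up/right path with matching pushes and pops
    runs from x to y. States are (vertex, label stack, just-did-a-left);
    memoizing them keeps the search terminating on cyclic inputs but
    forgoes the edge-once discipline of Theorem 4."""
    if max_depth is None:
        max_depth = len(E_R)          # crude cap on stack depth
    start = (x, (), False)
    seen, queue = {start}, deque([start])
    while queue:
        v, stack, just_left = queue.popleft()
        if v == y and not stack:
            return True
        moves = []
        # up: follow an ISA edge, stack untouched.
        moves += [(b, stack, False) for (a, b) in E_I if a == v]
        # left: go back over a ROLE edge, pushing its label.
        if len(stack) < max_depth:
            moves += [(a, stack + (p,), True) for (a, b, p) in E_R if b == v]
        # right: go forward over a ROLE edge whose label tops the stack,
        # but never immediately after a left traversal.
        if stack and not just_left:
            moves += [(b, stack[:-1], False)
                      for (a, b, p) in E_R if a == v and p == stack[-1]]
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

# Our reading of the Figure 1 network (an assumption of this sketch):
E_I = {("{17}", "Z+"), ("Z+", "Z"), ("Z", "Q"), ("Q", "R")}
E_R = {("{17}", "{289}", "x^2"), ("Z", "Z+", "|x|"),
       ("R", "R+", "|x|"), ("R+", "R+", "x^2")}
print(emi_isa(E_I, E_R, "{289}", "R+"))  # True
```

The successful run retraces the Figure 1b path: back over x², up to Z⁺, back over |x|, up through Q and R, then forward over |x| and x² with the stack emptying in reverse push order.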
Note that our results hold also when the network contains cycles.

5.2 Reducing general ISAR networks to propositional Horn theory

We now start to look at the general case of ISAR networks. In this section we pursue an easy way out, namely to reduce the graph theoretic problem to the problem of deciding a query about a propositional Horn theory, which is known to be decidable in linear time (Dowling and Gallier, 1984). Unfortunately, the resulting Horn theory will not be linear in the size of the ISAR network.

Let N = <V, E_I, E_R> be an ISAR network. We construct a Horn theory Th(N) as follows. First, for each three vertices x, y, z in N, we construct a clause
isa(x,y) ← isa(x,z) ∧ isa(z,y).
Then, for each four vertices v, x, y, z in N and each ROLE label l, we construct a clause
isa(x,y) ← isa(v,z) ∧ role(l,v,x) ∧ role(l,z,y).
Finally, for every pair (a,b) in E_I we add a fact isa(a,b), and for every triple (a,b,p) in E_R we add a fact role(p,a,b).

Theorem 5: There exists an O(rn⁴) (and thus O(mn⁴)) algorithm for P₁.
Proof. From Theorem 1 we have that N ⊨ isa(x,y) iff Th(N) ⊢ isa(x,y). The latter can be decided in time linear in Th(N). The number of clauses in Th(N) is O(n³ + rn⁴) = O(rn⁴).

Corollary 3: There exists an O(rn⁶) (and thus O(mn⁶)) algorithm for P₂.

5.3 An efficient algorithm for general ISAR networks

The degree of the polynomial in the previous algorithm is a bit too high for comfort. We now offer a more direct algorithm whose complexity is much lower.

Definition 7: A directed AND/OR graph is one in which the set of edges emanating from each node is partitioned into sets, each set called an AND-set of that node (single edges are viewed as singleton sets). A path in such a graph is a rooted tree such that the set of edges in the tree emanating from each vertex forms an AND-set of that vertex in the AND/OR graph.
Searching an AND/OR graph from a given vertex means starting with a path consisting of the node itself, and iteratively extending it.

Definition 8: Let N = <V, E_I, E_R> be an ISAR network. The evidence graph of N is the directed AND/OR graph EVID(N) = <V², E'> where
E' = {((k,l),(i,j)): for some p, (i,k,p) and (j,l,p) are both in E_R} ∪ {{((i,k),(i,j)), ((i,k),(j,k))}: i, j, k in V}.

The first type of edge is shown pictorially below:

[Diagram omitted: ROLE edges i --p--> k and j --p--> l in N induce the edge from (k,l) to (i,j) in EVID(N).]

The intuition behind the construction is the following: an AND-set of a vertex (i,j) in the evidence graph is evidence that (i,j) is in the isa relation. More precisely, we have the following:

Definition 9: Let N = <V, E_I, E_R> be an ISAR network. A path rooted at (a,b) in EVID(N) is said to be grounded if a = b or for all terminal nodes (k,l) in that path it is the case that (k,l) is in E_I.

Lemma 2: Let N = <V, E_I, E_R> be an ISAR network and i, j in V. Then N ⊨ isa(i,j) iff there is a grounded path in EVID(N) rooted at (i,j).
Proof. (outline) By Theorem 1, N ⊨ isa(i,j) if and only if N ⊢ isa(i,j). By induction on the number of applications of Rule 1 and Rule 2 (Definition 4) we have that if N ⊢ isa(i,j) then there is a grounded path rooted at (i,j) in EVID(N). By induction on the size of the path we may prove that if there is a grounded path rooted at (i,j) in EVID(N) then N ⊢ isa(i,j).

Lemma 3: It can be determined in time O(m') simultaneously for all vertices in EVID(N) whether there is a grounded path rooted at them, where m' is the number of edges in EVID(N).
Proof. (outline) Conduct a breadth-first search (BFS) starting from all nodes (i,j) such that (i,j) is in E_I, moving backwards on edges, and extend a path beyond a vertex only when at least one of its AND-sets has all its members originate in previously-reached nodes.

The last lemma points to the reason for constructing the evidence graph. We now note that m' is bounded by the complexity of generating EVID(N).
To complete the story, then, it remains to estimate this complexity. We first show an easy bound, and then look more closely at the algorithm to improve the complexity.

Theorem 6: There exists an O(n³ + m²) algorithm for P₂.
Proof. The construction of the edges in EVID(N) that are due to the transitive closure is done in time O(n³). To construct the other edges, we look at all pairs of ROLE edges (i,j) and (k,l), and, if their ROLE labels agree, add to EVID(N) the edges ((i,k),(j,l)) and ((j,l),(i,k)). The total number of edge-pairs is O(m²). Thus the total complexity of the algorithm is O(n³ + m²).

Recall that in ISAR networks there is no necessary relation between the number of vertices and the number of edges. However, if it happens that m = O(n²), we have that the algorithm is of complexity O(n⁴). We now improve on this by a more careful construction of the evidence graph.

Theorem 7: There exists an O(nr(log r) + n²r + n³) algorithm for P₂, where r is the number of different ROLE labels.
Proof. We create the first n³ edges as before. Then, rather than blindly compare all pairs of edges, we do the following.
1) Create a list for each vertex of all the ROLE edges emanating from it and their associated label. A typical list will have the form i: (l₁,i₁), (l₂,i₂), ... (where i, i₁ and i₂ are vertices, and l₁ and l₂ are ROLE labels);
2) Sort each of these lists by the label component;
3) For each pair of vertices i, j, scan their lists in parallel to see which ROLE labels they share. If you encounter the pair (p,k) in i's list and the pair (p,l) in j's list, add the edges ((k,l),(i,j)) and ((l,k),(j,i)).

Complexity of the steps:
1) O(m);
2) O(nr(log r)) (note that each list is of length r at most);
3) O(n²r) (scanning the sorted lists is linear in their length, r, and there are n² pairs of vertices).
We also note that we have m ≤ n²r, and so the total complexity of creating the evidence graph is O(nr(log r) + n²r + n³).
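Assembled into code, the construction and the backwards sweep of Lemma 3 look roughly as follows. This is our sketch with invented names: it keeps the naive O(n³) enumeration of transitivity AND-sets for brevity and, as in the steps above, groups ROLE edges by label.

```python
from collections import defaultdict, deque

def all_isa(V, E_I, E_R):
    """Solve P2 on a general ISAR network: build the evidence graph of
    Definition 8 and return every pair (i, j) with N |= isa(i, j)."""
    and_sets = defaultdict(list)
    # Transitivity: (i,k) is supported jointly by (i,j) and (j,k).
    for i in V:
        for j in V:
            for k in V:
                and_sets[(i, k)].append([(i, j), (j, k)])
    # ROLE interaction: i --p--> k and j --p--> l make (i,j) evidence for (k,l).
    by_label = defaultdict(list)
    for (a, b, p) in E_R:
        by_label[p].append((a, b))
    for edges in by_label.values():
        for (i, k) in edges:
            for (j, l) in edges:
                and_sets[(k, l)].append([(i, j)])
    base = {(i, i) for i in V} | set(E_I)       # trivially grounded vertices
    # Backwards breadth-first sweep (Lemma 3): a vertex is grounded once
    # some AND-set of it has every member already grounded.
    remaining, watchers = {}, defaultdict(list)
    for v, sets in and_sets.items():
        for idx, s in enumerate(sets):
            remaining[(v, idx)] = len(s)
            for member in s:
                watchers[member].append((v, idx))
    marked, queue = set(base), deque(base)
    while queue:
        u = queue.popleft()
        for key in watchers[u]:
            remaining[key] -= 1
            if remaining[key] == 0 and key[0] not in marked:
                marked.add(key[0])
                queue.append(key[0])
    return marked

V = {"mother", "parent", "s-mother", "s-parent"}
E_I = {("mother", "parent")}
E_R = {("mother", "s-mother", "spouse-of"), ("parent", "s-parent", "spouse-of")}
print(("s-mother", "s-parent") in all_isa(V, E_I, E_R))  # True
```

Each AND-set's unmet-premise counter is decremented at most once per grounded premise, so the sweep itself is linear in the size of the evidence graph, as Lemma 3 requires.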
Corollary 4: If the number of ROLE labels is bounded by a constant, there is an O(n³) algorithm for P₂.

We note that as this is realistically the lowest complexity known for transitive closure, we should not hope to improve on this.

6. SUMMARY AND DISCUSSION

We have offered new results on the complexity of reasoning with inheritance hierarchies with ROLEs, or ISAR networks. We defined two problems, P₁ (determining whether an ISAR network implies an isa relation on two nodes) and P₂ (finding the closure of the isa relation). Let n be the number of vertices of an ISAR network, m the number of edges, and r the number of distinct ROLE labels. To somewhat crudely summarize our results, we have the following.

[Summary table omitted. The results established in the paper are: taxonomic networks, O(m) for P₁ and O(nm) (or O(n³)) for P₂; EMI-ISAR networks, O(m) for P₁ and O(nm) for P₂; general ISAR networks, O(n³ + m²), or more finely O(nr(log r) + n²r + n³), for both P₁ and P₂.]

The only results bearing directly on ISAR networks with which we are familiar are due to Borgida (1989). His results include NP-hardness for networks with cancellation, and polynomial results for two other problems. We do not yet understand well the relation between his results and ours. There appear to be few other complexity results. We are aware of Touretzky's (1986) polynomial algorithm for parallel networks with RELATIONs, but do not see an interaction with our work.

Our results leave open some interesting questions. Our general result for P₂ is somewhat worse than the O(nm) of transitive closure; can it be improved? Another striking feature of our result is that in the general case we have identical results for P₁ and P₂, although at first glance it seems that P₁ is much easier. Actually, our experience with the problem leads us to conjecture that P₁ is not any easier, but it would be nice to have a result on that. Then there is a question about other ways to salvage the O(m) and O(nm) results from the simple taxonomic case: do there exist interesting classes of networks which permit that, other than EMI-ISAR networks? Finally, what happens when we add other features to the network, such as RELATIONs or cancellation?
We conjecture that at least in the latter case the problem in general becomes intractable, which seems to agree with Borgida's result mentioned above.

Acknowledgments. The authors are grateful to Richmond H. Thomason, who described the problem, and for helpful suggestions on a first version of this paper.

REFERENCES

Aho, A. V., Hopcroft, J. E., and Ullman, J. D. 1974. The Design and Analysis of Computer Algorithms. Addison-Wesley.

Borgida, A. 1989. Type Systems for Querying Class Hierarchies with Non-Strict Inheritance. In Proceedings of PODS-89.

Dowling, W. and Gallier, J. 1984. Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae. Journal of Logic Programming, 1(3):267-284.

Etherington, D. W. 1987. More on Inheritance Hierarchies with Exceptions. In Proceedings of the American Association for Artificial Intelligence, 352-357.

Thomason, R. H., Horty, J. F., and Touretzky, D. S. 1986. A Calculus for Inheritance in Monotonic Semantic Nets. Technical Report CMU-CS-86-138, Computer Science Department, Carnegie-Mellon University.

Thomason, R. H. 1989. Completeness Proofs for Monotonic Nets with Relations and Identity. In Proceedings of the Fourth International Symposium on Methodologies for Intelligent Systems, 523-532.

Touretzky, D. S. 1986. The Mathematics of Inheritance Systems. Morgan Kaufmann.

Touretzky, D. S., Horty, J. F., and Thomason, R. H. 1987. A Clash of Intuitions: The Current State of Nonmonotonic Multiple Inheritance. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, 476-482.
Dynamic Constraint Satisfaction

Sanjay Mittal and Brian Falkenhainer
System Sciences Laboratory
Xerox Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto CA 94304

Abstract

Constraint satisfaction (CSP) is a powerful and extensively used framework for describing search problems. A CSP is typically defined as the problem of finding a consistent assignment of values to a fixed set of variables given some constraints over these variables. However, for many synthesis tasks such as configuration and model composition, the set of variables that are relevant to a solution and must be assigned values changes dynamically in response to decisions made during the course of problem solving. In this paper, we formalize this notion as a dynamic constraint satisfaction problem that uses two types of constraints. Compatibility constraints correspond to those traditionally found in CSPs, namely, constraints over the values of variables. Activity constraints describe conditions under which a variable may or may not be actively considered as a part of a final solution. We present a language for expressing four types of activity constraints in terms of variable values and variables being considered. We then describe an implemented algorithm that enables tight interaction between constraints about variable activity and constraints about variable values. The utility of this approach is demonstrated for configuration and model composition tasks.

1 Introduction

Constraint satisfaction is a powerful and extensively used framework for expressing and solving search problems. A variety of general techniques have been developed for finding a consistent assignment of values to a predefined set of variables [3; 8; 9; 12; 11]. The variables typically have preenumerated domains of discrete values, and a set of constraints over subsets of these variables limits their possible values.
In contrast, for synthesis tasks such as configuration, design, or model composition, the constraint problem is of a more dynamic nature, where any of the elements of the constraint problem (i.e., variables, domains, constraints) might change as the search progresses. For example, selecting the type of hard disk controller for a computer configuration is only relevant when a hard disk has been chosen as the form of secondary storage. Installing a floppy disk drive would require solving for a different set of variables and constraints.

A common approach in such cases is to embed general constraint satisfaction mechanisms within a larger task-specific problem solving architecture. In this coupled mode of problem-solving, the problem solver makes decisions about which variables and constraints are relevant in order to define a problem solvable by some constraint satisfaction mechanism. For example, in the Cossack expert system for configuration [6], the problem solver created variables in response to requirements specifying some functional aspects of a desired configuration. Consistent choices were assigned to these variables by selecting a component and propagating constraints associated with the selected component. Separate mechanisms were used for creating variables and processing constraints on them. Similarly, the Mapsee programs for understanding sketch maps [15] have a control cycle that alternates between creating a new variable for each image element (a chain of edges in the Mapsee terminology) and propagating constraints between existing variables to find consistent interpretations for the image elements. However, experience with Cossack and Mapsee suggests that this separation becomes cumbersome and inefficient when decisions about which variables are relevant interact closely with decisions about consistent value assignments.
In this paper, we identify a specialized but useful class of dynamic problems that we call dynamic constraint satisfaction problems (DCSP). For this class of problems, we propose a more integrated approach that extends the notion of constraint satisfaction problems to include constraints about the variables considered in each solution. When a variable must be included, we say that it is active. By expressing the conditions under which variables are and are not active, standard CSP methods can be extended to make inferences about variable activity as well as their possible value assignments.

The key characteristic of such dynamic constraint problems is that constraints on introducing or removing variables from a potential solution closely interact with constraints on consistent assignment of values to some already identified set of variables. This property distinguishes our model of DCSP from other "dynamic" formalisms such as incremental constraint satisfaction methods that can handle a changing set of variables [17] or constraints [7], but cannot themselves reason about the activity of a variable. Similarly, constraint logic programming [10] naturally allows variables to be dynamically introduced but provides no special support for efficiently reasoning about variable activity.

We begin by considering how to formalize configuration as a constraint satisfaction problem. This provides the motivation for the formal definition of dynamic CSPs. We describe a simple language for stating constraints about the activity of problem variables in a form that is usable within an extended constraint satisfaction framework. This enables introduction and removal of variables and constraints during search. Further, it provides a useful framework in which to express certain kinds of knowledge for synthesis problems.
Examples from configuration and model composition tasks are used to illustrate the use of our language. Finally, we describe an implemented search algorithm for solving DCSPs that is tuned to this specialized language. We close by describing some extensions in progress.

2 Modeling configuration as constraint satisfaction

A constraint satisfaction problem (CSP) is typically specified by a set of variables V = {v_1, ..., v_n} and a set of constraints C on subsets of V limiting the values that may be assigned in a consistent manner [11; 12]. Each variable v_i has an associated domain D_i = {d_i1, ..., d_in_i} which identifies its set of possible values. The constraint satisfaction task is to find assignments of values for {v_1, ..., v_n} that simultaneously satisfy all the constraints C.

We shall motivate the ideas behind this paper by considering how to formalize a configuration task as defined in [14] as a CSP. In that paper, it was shown that many configuration domains satisfy two important properties. One, the functional roles played by a component in a configured artifact are known ahead of time. Two, for each functional role, one can identify a set of components as the "key" for each role, i.e., one of these "key components" is always needed for implementing the corresponding functional role. For example, in a computer domain, the functional roles include instruction execution, program memory, secondary storage, display, and printing. The key components for these functions respectively include microprocessor chip, memory chip, hard disk or floppy disk, display terminal, and printer.
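The classical formulation just described admits a very small backtracking solver. The sketch below is our own baseline illustration (the constraint representation is invented), against which the dynamic extension of section 3 can be compared:

```python
def csp_solutions(variables, domains, constraints):
    """Minimal backtracking search over a classical (static) CSP.
    `constraints` maps a tuple of variable names to a predicate over
    their values -- a hypothetical interface chosen for this sketch."""
    solutions = []
    def consistent(assignment):
        for vars_, pred in constraints.items():
            if all(v in assignment for v in vars_):
                if not pred(*(assignment[v] for v in vars_)):
                    return False
        return True
    def extend(assignment, rest):
        if not rest:
            solutions.append(dict(assignment))
            return
        v, tail = rest[0], rest[1:]
        for d in domains[v]:
            assignment[v] = d
            if consistent(assignment):      # prune inconsistent branches
                extend(assignment, tail)
            del assignment[v]               # backtrack
    extend({}, list(variables))
    return solutions

doms = {"x": [1, 2], "y": [1, 2]}
cons = {("x", "y"): lambda a, b: a != b}
print(csp_solutions(["x", "y"], doms, cons))  # [{'x': 1, 'y': 2}, {'x': 2, 'y': 1}]
```

Note that every variable receives a value in every solution; this is exactly the assumption the synthesis tasks below violate.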
One can model such configuration tasks as a CSP by making additional simplifying assumptions such as: (1) there is a fixed set of functional roles to be decided in a configured artifact; (2) there is a one-to-one association between functional roles and the components that function in that role; and (3) there is a fixed set of components for each functional role. In the CSP model, we create a variable for each functional role, and the components associated with that functional role constitute the domain of these variables. Constraints are used to represent compatibility and selection knowledge.

The model presented above is rather simplistic, since only assumption (3) above is valid. Assumption (2) is generally invalid, since the mapping between functional roles and actual available components is typically many-to-many. For example, in order to provide secondary storage one not only needs a disk drive but also a drive controller, bus connection, and some associated driver software. Here the crucial problem is not just the additional components that are needed but the fact that different components for the same functional role often need non-identical sets of additional components. Similarly, some of the available components often provide more than one function. For example, the motherboard on a PC often implements many functions such as the microprocessor, co-processor, program memory, basic I/O system, and assorted other functions. The problem is not just the multiplicity of functions but the fact that such multi-function components often provide non-identical sets of functions.
One can express these addi- tional requirements by dynamically introducing “new” variables and constraints in the solution as a result of selecting a partially satisfying component C for a func- tional role R. The model of dynamic CSP presented in this paper formalizes this insight by extending CSP to al- low constraints on both the values of a variable and “rel- evance” to a solution. It is important to point out that the language presented here is more general than the ini- tial motivation. In particular, as will become apparent from the examples discussed later in the paper, the same constraint type can be used to represent many different kinds of domain knowledge. Furthermore, the CSP ex- tensions require new methods for solving the problem and also create opportunities for additional heuristics for controlling search. 3 ynamic constraint satisfaction problems In CSPs, the sets V, Oi, and C are fixed and known be- forehand. Each solution must contain an assignment for every variable in V. In DCSPs, there is also some set of variables V. However, unlike for a CSP, not all variables have to be assigned a value to solve the problem. Some variables and their value assignments render other vari- ables irrelevant, thus indicating that effort should not be spent considering values and constraints for these ir- relevant variables. Because the set of variables requiring assignment is not fixed by the problem definition, we distinguish be- tween variables that appear in the solution and those that do not. A variable is called active when it must be part of the solution. The statement that vi is active is represented by the proposition active:v;; the statement that vi is not active is represented by lactive:vi. A vari- able vi is assigned one of its possible values { di,, . . ., din, } if and only if it is active: active:vi t-t vi = dil V . -. V vi = d;,, 26 AUTOMATEDREASONING DCSPs can themselves post constraints about which variables are active. 
Thus, a DCSP explicitly constructs each active:vi, and every solution to a DCSP only assigns values to variables that are also active. A dynamic CSP specifies a non-empty set of initial variables VI that must appear in all solutions (i.e., for all vi in VI, active:vi always holds). The set of active variables appearing in each solution will always be a superset of VI. A problem not requiring additional variables reduces to a conventional CSP with V = VI.

To concisely represent the conditions under which a variable and its associated domain become relevant to forming a solution, we extend the notion of constraint to include a set of activity constraints CA affecting variable activity. These constrain a variable to be active or not active based on other variables' activity and value assignments. There are several activity constraint types, described in section 4. The simplest of these is logically equivalent to:

P(v1, ..., vn) → active:vj    where vj ∉ {v1, ..., vn}

and P is a predicate over variables and their possible values.

To distinguish between activity constraints and traditional constraints over variable values, we call the standard set of CSP constraints compatibility constraints, represented by CC. Due to the active variable distinction, we must also revise the conditions under which a compatibility constraint is satisfied. Take P(vi, vj, vk) to represent a compatibility constraint over variables vi, vj, and vk. It is treated as being logically equivalent to:

active:vi ∧ active:vj ∧ active:vk → P(vi, vj, vk)

Thus, if one of the variables in a constraint is not active, the constraint is trivially satisfied; if all of the variables in a constraint are active, it has the standard meaning and we say that the constraint is active. For efficiency reasons, we make these two cases explicit within the problem solver so that only active constraints need be checked.
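These two conditions can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: an assignment obeys the activity biconditional when exactly the active variables carry values from their domains, and a compatibility constraint is checked only when all of its variables are active:

```python
# Illustrative sketch (not the paper's solver) of the DCSP semantics.

def obeys_activity(assignment, active, domains):
    """active:vi <-> vi is assigned one of its domain values."""
    return (set(assignment) == set(active)
            and all(assignment[v] in domains[v] for v in assignment))

def compat_satisfied(pred, variables, assignment, active):
    """A compatibility constraint is trivially satisfied unless all of
    its variables are active; otherwise it has its standard meaning."""
    if not all(v in active for v in variables):
        return True
    return pred(*(assignment[v] for v in variables))
```

For example, a constraint over {v1, v2} is vacuously true whenever v2 is inactive, which is what lets the solver skip inactive constraints entirely.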
We can now define a dynamic constraint satisfaction problem as follows:

Given:
- A set of variables V representing all variables that may potentially become active and appear in a solution. (This list need not be explicitly preenumerated, although our algorithm does not currently handle that case.)
- A non-empty set of initial variables VI = {v1, ..., vk}, which is a subset of V.
- A set of discrete, finite domains D1, ..., Dk, with each domain Di = {di1, ..., diki} representing the set of possible values for each variable vi in V.
- A set of compatibility constraints CC on subsets of V limiting the values they may take on.
- A set of activity constraints CA on subsets of V specifying constraints between the activity and possible values of problem variables.

Find:
- All solutions, where a solution is an assignment A which meets two criteria:
  1. The variables and assignments in A satisfy CC ∪ CA.
  2. No subset of A is a solution.

This definition extends the standard definition of CSP by introducing activity constraints and identifying a subset of the possible variables as initial variables. Note that the choice of initial variables is important for defining the problem. Keeping everything else the same, we can create different problems simply by changing VI.¹ Also note that because activity may be affected by value assignments, each possible solution may contain a different set of variables.

Consider the following simple dynamic CSP. We are given the variables v1, v2, v3, v4 with domains D1 = {a, b}, D2 = {c, d}, D3 = {e, f}, and D4 = {g, h}.
Further, we are given initial variables VI = {v1, v2} and constraints:

v1 = a → v2 = d
v1 = b → v2 = c
v2 = c ∧ v3 = e → v4 = h
v1 = b → active:v3
v3 = e → active:v4

This problem has three solutions:

v1 = a, v2 = d
v1 = b, v2 = c, v3 = f
v1 = b, v2 = c, v3 = e, v4 = h

4 Activity constraints

In applying the dynamic CSP framework to several application tasks, we have found it useful to define a specialized language for expressing four types of activity constraints. This language is useful for compactly representing selection and composition constraints in synthesis tasks. Additionally, separating these different activity constraints into distinct types also improves search efficiency by adding finer control over constraint propagation within our algorithm.

4.1 Require variable constraint

The most fundamental activity constraint is the require variable (RV) constraint, which establishes a variable's activity based on an assignment of values to a set of active variables. RV constraints have the form:

P(v1, ..., vj) ⇒RV vk    (where vk ∉ {v1, ..., vj})

This notation is logically equivalent to:

P(v1, ..., vj) → active:vk

¹This flexibility in reusing the knowledge base is often quite useful in the class of problems that have provided the motivation for the ideas presented here.

MITTAL AND FALKENHAINER 27

where P(v1, ..., vj) is a predicate over the possible values of variables. When P(v1, ..., vj) is both active (i.e., v1, ..., vj are all active) and satisfied, vk must be active. A contradiction results if it is inconsistent for vk to be active. One use of RV constraints is to represent conditional requirements in synthesis tasks.

4.2 Always Require

The always require variable (ARV) constraint extends the basic notion of a require constraint to require a variable's activity based on the activity of other variables, independent of their current value. ARV constraints have the form:

v1 ∧ ... ∧ vj ⇒ARV vk    (where vk ∉ {v1, ..., vj})

This form handles the special case where all choices for v1 through vj require vk. While logically equivalent to a set of RV constraints, the special form is more compact and can lead to more efficient search. The search method presented in section 6 takes advantage of the form of ARV constraints to make vk active as soon as v1 through vj become active, without waiting for particular value assignments to v1 through vj. Note that the initial variables in the definition of a DCSP could alternatively be expressed as a set of ARV constraints whose conditions are "true". Thus, a given DCSP can be easily modified by simply changing such top-level ARV constraints.

4.3 Require Not

In addition to stating when variables must be in the problem space, some tasks require the ability to state when variables must not be in the problem space. For example, selecting a convertible body frame for a car is inconsistent with any type of sunroof. The require not (RN) constraint states an inconsistency between an assignment of values to a set of active variables and another variable being active. RN constraints have the form:

P(v1, ..., vn) ⇒RN vj    (where vj ∉ {v1, ..., vn})

They are logically equivalent to:

P(v1, ..., vn) → ¬active:vj

For some applications, this constraint is more efficiently reexpressed as an inconsistency (cf. ATMS NoGood [1]):

P(v1, ..., vn) ∧ active:vj → ⊥

where ⊥ represents false. RN constraints are particularly useful for expressing an incompatibility between variables without knowing if the variables will ever be active. This situation often arises when a given DCSP can be incrementally modified by simply adding a few other variables to the initial set.

4.4 Always Require Not

Analogous to the always require constraint, the always require not (ARN) constraint extends the basic notion of a require not constraint to rule out a variable's activity based on the activity of other variables, independent of their current value.
ARN constraints have the form:

v1 ∧ ... ∧ vn ⇒ARN vj    (where vj ∉ {v1, ..., vn})

and are logically equivalent to:

active:v1 ∧ ... ∧ active:vn → ¬active:vj

or the corresponding NoGood.

5 Examples

The DCSP framework is being applied to two independent research problems. The first is concerned with configuration and automated design tasks [6; 14]. The second is concerned with composing the most appropriate model of a physical system given some query [4; 5]. In this section, we demonstrate our approach on some simplified examples taken from each.

5.1 Configuration

The following simple example of a DCSP has been formulated from a car configuration task (adapted from [13]). There are eight variables, of which three are in the initial variable set. Notice that the activity constraints have been used to represent many different kinds of domain knowledge such as packaging, functional requirements, structural decomposition, and incompatibilities. Similarly, compatibility constraints represent functional, structural, and packaging concerns.

Variable          Domain
Package           {luxury, deluxe, standard}       (initial variable)
Frame             {convertible, sedan, hatchBack}  (initial variable)
Engine            {small, med, large}              (initial variable)
Battery           {small, med, large}
Sunroof           {sr1, sr2}
AirConditioner    {ac1, ac2}
Glass             {tinted, not-tinted}
Opener            {auto, manual}

Activity constraints
1. Package=luxury ⇒RV Sunroof
2. Package=luxury ⇒RV AirConditioner
3. Package=deluxe ⇒RV Sunroof
4. Sunroof=sr2 ⇒RV Opener
5. Sunroof=sr1 ⇒RV AirConditioner
6. Sunroof ⇒ARV Glass
7. Engine ⇒ARV Battery
8. Opener ⇒ARV Sunroof
9. Glass ⇒ARV Sunroof
10. Sunroof=sr1 ⇒RN Opener
11. Frame=convertible ⇒RN Sunroof
12. Battery=small ∧ Engine=small ⇒RN AirConditioner

Compatibility constraints
13. Package=standard → AirConditioner ≠ ac2
14. Package=luxury → AirConditioner ≠ ac1
15. Package=standard → Frame ≠ convertible
16. Opener=auto ∧ AirConditioner=ac1 → Battery=med
17. Opener=auto ∧ AirConditioner=ac2 → Battery=large
18. Sunroof=sr1 ∧ AirConditioner=ac2 → Glass ≠ tinted

For this problem the smallest solutions have only four variables (Package, Frame, Engine, and Battery) and the largest have all eight. The description of a car configuration knowledge base given above would be augmented by additional constraints that represent user requirements before solving the problem. Thus, notice that changing the set of initial variables (e.g., by making Sunroof another initial variable) creates a somewhat different problem with different solutions, which may not simply be subsets of the solution set of the original problem.

5.2 Model composition

Compositional modeling is a method for reasoning about modeling assumptions and automatically composing the most appropriate model of a physical system for a given analytic query [4; 5]. The model composition problem is to synthesize the simplest model that is relevant to the needs of the task and consistent with the operating conditions of the system. This model must contain the parameters of interest, be able to show specified information about those parameters at a specified degree of accuracy, and minimize "cost" by reducing extraneous detail and computational effort. Many choices must be made, including the parts of the system to consider, their structural decomposition, the phenomena to consider, and how they should be modeled (e.g., what approximations can be applied).

In the compositional modeling framework, the system's models of various domains consist of a set of elementary domain models, each describing some fundamental piece of the domain's physics, such as processes (e.g., liquid flow), devices (e.g., transistor), and objects (e.g., container). Each elementary model is conditioned on a set of modeling assumptions stating their range of applicability and underlying approximations.
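The idea of elementary models gated by modeling assumptions can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation; the model names and conditioning sets are invented, borrowing assumption-class names from the example that follows:

```python
# Hypothetical sketch: elementary domain models conditioned on
# modeling assumptions. A model is enabled exactly when all of its
# conditioning assumptions hold in the chosen assignment.
ELEMENTARY_MODELS = {
    'liquid-flow':  {'Ontology': 'contained-fluids'},
    'particle-sim': {'Ontology': 'particle-dynamics'},
}

def enabled_models(chosen):
    """Return models whose conditioning assumptions all hold in
    `chosen`, a mapping from assumption classes to assumptions."""
    return [name for name, conds in ELEMENTARY_MODELS.items()
            if all(chosen.get(cls) == val for cls, val in conds.items())]
```

Reasoning then operates over the assumption choices rather than over each elementary model individually, which is the point made in the next paragraph.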
Reasoning focuses on choosing among the set of possible modeling assumptions, which enable a corresponding set of elementary models, rather than reasoning about each elementary model individually. Model composition consists of four steps:

1. Query analysis. A query specifies a set of parameters of interest. Thus, a minimal requirement is that all of these parameters be modeled. Additionally, these parameters directly suggest further constraints. For example, a query about the level of liquid in a container indicates that a macroscopic, "contained fluids" view is called for as opposed to a microscopic "particle dynamics" view.

2. Object expansion. The query directly indicates a set of objects to consider, but additional objects may need to be considered to capture all relevant interactions.

3. Candidate completion. Some choices of simplifying assumptions raise new choices in turn. For example, considering liquid flowing through a pipe requires a decision about whether to model the fluid as compressible or incompressible.

4. Candidate evaluation and selection. Finally, each candidate is evaluated and the "best" candidate is selected.

Choices made during the first two stages are fully determined by the query. The DCSP framework is being used to express and solve the third stage, in which additional choices made relevant by the query must be made. In describing the possible modeling assumptions for a domain, some collections of assumptions represent mutually exclusive, alternative ways to model the same aspect of an object or phenomenon. To represent this important relationship, some assumptions are organized into sets called assumption classes. Each assumption class has a name representing a DCSP variable; its domain is the set of assumptions in that class. The analytic query entails a set of minimal required modeling assumptions, which identify VI, a DCSP's initial variables.
The dynamic constraint satisfaction task is to extend this initial set to identify a set of modeling assumptions corresponding to a coherent and parsimonious model.

For example, consider the task of determining an appropriate set of modeling assumptions for analyzing the flow of water through a pipe connecting two containers. There are 7 variables, with the initial variable set consisting of VI = {Precision, Can-Geometry}.²

Assumption Class   Domain
Precision          {qualitative, quantitative}
Can-Geometry       {quantity(level), ¬quantity(level)}
Can-Capacity       {finite-cans, infinite-cans}
Ontology           {energy-flows, contained-fluids, particle-dynamics}
Fluid-Density      {incompressible, compressible}
Fluid-Viscosity    {inviscid, viscous, non-newtonian}
Flow-Structure     {laminar, turbulent}

Activity constraints
Can-Geometry ⇒ARV Can-Capacity
Fluid-Viscosity=viscous ⇒RV Flow-Structure
Fluid-Viscosity=inviscid ⇒RN Flow-Structure
Precision=quantitative ∧ Ontology=contained-fluids ⇒RV Flow-Structure
Ontology=contained-fluids ⇒RN Fluid-Density
Ontology=particle-dynamics ⇒RV Fluid-Density
Ontology=particle-dynamics ⇒RN Can-Capacity

Compatibility constraints
Can-Geometry=quantity(level) → Ontology=contained-fluids
Flow-Structure=turbulent → Fluid-Viscosity ≠ inviscid
Can-Capacity=finite-cans → Can-Geometry=quantity(level)

²A more sophisticated set of representations is used in the modeling work than shown here. We have highly simplified the representation descriptions in an effort to gain clarity.

Given a query about the changing levels of the two containers, these constraints elaborate the query as needed to ensure that the resulting model is coherent. For example, asking about the changing fluid levels requires a model that represents a macroscopic view of liquids and their containers.
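Before turning to the implementation, the small four-variable DCSP from section 3 is simple enough to solve with a brute-force sketch of the semantics. This is illustrative only and bears no resemblance to the authors' ATMS-based solver; activity is computed by firing the two value-triggered activity constraints from the initial variables:

```python
# Brute-force sketch (illustrative only) of the four-variable DCSP
# from section 3.
domains = {'v1': ['a', 'b'], 'v2': ['c', 'd'],
           'v3': ['e', 'f'], 'v4': ['g', 'h']}

def active_set(assign):
    """Well-founded activation: start from the initial variables
    {v1, v2} and fire the two activity constraints."""
    active = {'v1', 'v2'}
    if assign.get('v1') == 'b':          # v1=b -> active:v3
        active.add('v3')
    if assign.get('v3') == 'e':          # v3=e -> active:v4
        active.add('v4')
    return active

def compatible(a):
    """The three compatibility constraints, read as implications
    over whatever is assigned so far."""
    if a.get('v1') == 'a' and 'v2' in a and a['v2'] != 'd':
        return False
    if a.get('v1') == 'b' and 'v2' in a and a['v2'] != 'c':
        return False
    if (a.get('v2') == 'c' and a.get('v3') == 'e'
            and 'v4' in a and a['v4'] != 'h'):
        return False
    return True

def solve():
    solutions = []
    def extend(assign):
        if not compatible(assign):
            return
        unassigned = sorted(active_set(assign) - set(assign))
        if not unassigned:
            solutions.append(dict(assign))
            return
        v = unassigned[0]
        for value in domains[v]:
            assign[v] = value
            extend(assign)
            del assign[v]
    extend({})
    return solutions
```

Each returned assignment is minimal by construction: a variable carries a value only if it was activated, with well-founded support, from the initial set, matching the three solutions listed in section 3.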
6 Algorithm and Implementation

6.1 ATMS-based Implementation

We have implemented the dynamic CSP framework as a specialized problem solver integrated with an assumption-based truth maintenance system (ATMS) [1]. The algorithm is designed to use our specialized language and is summarized in Figure 1. It enables constraints about variable activity (the problem space) to interact with constraints about variable values (the solution space), producing the two problem solving levels shown in Figure 2.

The algorithm consists of a main choose/propagate cycle based largely on [2]. Each choose step selects an active, unassigned variable and assigns it a value that has not been previously ruled out. Each propagate step checks the constraints relevant to the new variable value assignment and propagates their consequences and dependencies. Constraint checking is ordered to take advantage of the differing scope of each constraint type. ARV and ARN constraints are checked first because they have the largest scope (i.e., they apply to variable activity, encompassing all their possible value assignments). RV and RN constraints are checked second, because they affect a variable's activity rather than its possible values. Finally, compatibility constraints are examined to see if the new variable value assignment is consistent. A constraint is "run" when it becomes active. Due to ATMS caching, each constraint need only be run once. Constraints that never become active during the course of problem solving are never checked. Search for each solution terminates when all active variables have been consistently assigned a value (i.e., there are no more variables for the choose step). Other variables do not appear in the solution.

Here we informally show that the algorithm is sound, which requires showing that each solution generated meets the criteria specified in section 3.
The first criterion, that each solution satisfies all the constraints in CC ∪ CA, is easy to show, since each time a value is assigned to an active variable, all the constraints activated by that assignment are checked before the next choose.

The second criterion states that each solution must be minimal in the sense that no subset of it is itself a solution. Since a solution is described by a set of variables and their associated value assignments, a proof of minimality requires proving two properties for each solution. One, no variable is assigned a value unless it is active. Two, a variable is made active if and only if it has well-founded support from the initial set of active variables. It is easy to see that the first property holds since the only variables that are assigned values are those that have been made active by a previous step in the main loop.

Figure 1: Dynamic constraint satisfaction algorithm.

procedure DCSP(VI)
  V <- VI
  SOLUTION <- empty
  Check all applicable activity and compatibility constraints
  if CONTRADICTION(V, SOLUTION), then return fail
    (initial problem statement is inconsistent)
  BACKUP? <- false
  while V is not empty
    if BACKUP? or CONTRADICTION(V, SOLUTION)
      then backtrack and change V and SOLUTION.
           if backtracking fails, then return fail.
           BACKUP? <- false
    else if there is an active ARV constraint ci
      then run ci and add newly activated variables to V
    else if there is an active ARN constraint ci
      then run ci
    else if there is an active RV constraint ci
      then run ci and add newly activated variables to V
    else if there is an active RN constraint ci
      then run ci
    else if there is an active compatibility constraint ci
      then run ci
    else
      vi <- SELECT&DELETE(V)      {choose next variable}
      value(vi) <- CHOOSE(vi)     {choose an assignment}
      if value(vi) is NULL, then BACKUP? <- true
      else add value(vi) to SOLUTION
  return SOLUTION

[Figure 2: A dynamic constraint satisfaction network.]
The second property, i.e., well-founded support for active variables, can be seen to hold for our algorithm based on the following observations. One, at any point in the loop of the algorithm, a new variable v is made active if and only if there are some active activity constraints that justify it and there is no active activity constraint that contradicts v's activity. This ensures that a variable becomes active if and only if it has well-founded support. In other words, all active variables in a solution are required either by the initial variables or by explicitly chosen value assignments. Two, the only place where a choice is made (i.e., a branch in the search space) is in the assignment of a value to a variable. Each such choice generates a potentially different solution. Notice that in the language described so far, there is no disjunction over required variable constraints. While such disjunctive activity constraints will be necessary for expressing some types of domain knowledge, finding minimal solutions in the presence of such constraints can be very expensive. We are exploring heuristics that will work in some special cases. Also see [15] for an alternative formulation of disjunctive require constraints in some cases.

6.2 Example Trace

Here we briefly show a partial trace of our algorithm solving the car configuration example described in section 5.1. The algorithm does not commit to any heuristics for selecting the next variable or choosing a value for the selected variable. For this trace, assume that a "smallest domain size" heuristic is used to select variables and the "first value" heuristic is used to choose variable values. We show the active variables and value assignments in bold font, constraint propagation and value choices in italics, and explanatory comments in roman. We have also shortened the names of the variables to their first letter.
VI = {P, F, E}
C7 runs (ARV making B active)
V = {P, F, E, B}
Choose P=luxury
C1 runs (RV making S active)
V = {P=luxury, F, E, B, S}
C6 runs (ARV making G active)
V = {P=luxury, F, E, B, S, G}
C2 runs (RV making A active)
V = {P=luxury, F, E, B, S, G, A}
C9 runs (ARV on S - no change to the active variables)
V = {P=luxury, F, E, B, S, G, A}
C14 runs (constraint between P and A, creating nogood{P=luxury, A=ac1})
Choose A=ac2
Choose F=convertible
V = {P=luxury, F=convertible, E, B, S, G, A=ac2}
C11 runs (RN on S, leading to a contradiction (C1 & C11))
Backup to the previous choice point
Choose F=sedan
Choose E=small
V = {P=luxury, F=sedan, E=small, B, S, G, A=ac2}
Choose B=small
V = {P=luxury, F=sedan, E=small, B=small, S, G, A=ac2}
C12 runs (RN on A, leading to a contradiction on AirConditioner activity (C2 & C12))
Backup to the previous choice point
Choose B=medium
V = {P=luxury, F=sedan, E=small, B=medium, S, G, A=ac2}
Choose S=sr1
V = {P=luxury, F=sedan, E=small, B=medium, S=sr1, G, A=ac2}
C5 runs (RV on A - no change to the active variables)
C10 runs (RN on O - no immediate effect)
C18 runs, creating nogood{S=sr1, A=ac2, G=tinted}
Choose G=not-tinted
V = {P=luxury, F=sedan, E=small, B=medium, S=sr1, G=not-tinted, A=ac2}

This is one of the possible solutions. Others can be found by backtracking.

6.3 Backtracking Implementation

A subset of our language (only the RV and compatibility constraints) has also been implemented by extending a conventional backtrack-search CSP framework [13]. The basic algorithm is similar to the one shown in Figure 1, with the major difference that each time an active variable is assigned a value, we use forward checking to propagate all active compatibility constraints.

This implementation was also used for making some comparisons between solving a DCSP directly using the method presented here versus solving a logically equivalent "static" CSP.
The latter was obtained by introducing a new distinguished value called "null" in the domains of all variables and by appropriate transformations of all constraints such that they are trivially satisfied if one or more variables have a "null" value. In the examples that we tried, the DCSP version outperformed the "static" CSP version on all the performance metrics we measured (total time, number of backtracks, constraint checks, and total variable assignments). Even on simple problems the difference in constraint checks, the metric most commonly used in the literature, was quite significant. A more detailed comparison will be presented in an extended version of this paper.

7 Discussion

In this paper we showed how to extend a constraint satisfaction framework to include constraints on the activity of variables. With these extensions, we described a core language for compactly representing selection and composition constraints in synthesis tasks. We also described an algorithm that efficiently finds minimal (non-redundant) solutions to such dynamic CSPs.

We believe that the combination of the DCSP version of a problem and our method for solving such problems is effective for two reasons. One, by creating distinct kinds of constraints, we can tune a search engine to take advantage of these distinctions in focussing the search. Two, in the DCSP version most of the variables and constraints are initially "not active". Thus, the constraint graph for the problem starts out by being very sparsely connected. It is filled in only as the result of choices made by the search engine. Theoretical and empirical results [3; 8; 9] based on analyses of constraint graphs clearly show a close correspondence between the search effort and the connectivity (e.g., width) of variables in the problem. The DCSP version (when relevant) "hides" many of the variables and constraints.
Clearly more work is needed here, especially in terms of precise mathematical analysis that can quantify this intuition. We are also developing improved heuristics for choosing the next active variable (e.g., most supported) and for choosing a value from the domain of these variables (e.g., based on how many variables are activated by a choice). We are also looking at heuristics that help with disjunctive require constraints.

The language we have presented is only a subset of what will eventually be needed for modelling the relationships that are important in synthesis tasks. For example, referring back to the discussion in section 2, note that we need to express constraints of the form that a component may also provide additional functional roles or that it cannot be part of a configured solution by itself, i.e., without some other parent component. We have already alluded to the need for expressing disjunctive activity constraints, the simplest examples arising in cases where a component's requirements can be satisfied by disjunctive functional roles. Our methodology has been to identify language primitives for expressing some domain relationships and then examine their implications for our search method both in terms of efficiency of search and minimality of solutions found. We expect to report progress on developing a more complete language in the next paper.

8 Acknowledgments

Felix Frayman and Harley Davis made important contributions in defining the basic notion of a dynamic CSP in the context of formalizing configuration tasks. We are grateful to Dan Bobrow, Johan de Kleer, Vijay Saraswat, and Mark Shirley for in-depth discussions and comments on earlier drafts of this paper.

References

[1] de Kleer, J. An assumption-based TMS. Artificial Intelligence, 28(2), March 1986.
[2] de Kleer, J. and Williams, B. Back to backtracking: Controlling the ATMS. In Proceedings of AAAI-86, August 1986.
[3] Dechter, R. and Pearl, J. Network-based heuristics for constraint-satisfaction problems. Artificial Intelligence, 34(1):1-38, December 1987.
[4] Falkenhainer, B. and Forbus, K. D. Setting up large-scale qualitative models. In Proceedings of AAAI-88, August 1988.
[5] Falkenhainer, B. and Forbus, K. D. Compositional modeling: Finding the right model for the job. (submitted for publication), January 1990.
[6] Frayman, F. and Mittal, S. Cossack: A constraints-based expert system for configuration tasks. In Sriram, D. and Adey, R. A., editors, Knowledge Based Expert Systems in Engineering: Planning and Design, pp. 143-166. Computational Mechanics Publications, August 1987.
[7] Freeman-Benson, B. N., Maloney, J., and Borning, A. An incremental constraint solver. Communications of the ACM, 33(1):54-63, January 1990.
[8] Freuder, E. C. A sufficient condition for backtrack-free search. Journal of the ACM, 29(1):24-32, 1982.
[9] Freuder, E. C. A sufficient condition for backtrack-bounded search. Journal of the ACM, 32(4):755-761, 1985.
[10] Jaffar, J. and Lassez, J.-L. Constraint logic programming. Technical report, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, October 1986.
[11] Mackworth, A. K. Consistency in networks of relations. Artificial Intelligence, 8:99-118, 1977.
[12] Mackworth, A. K. Constraint satisfaction. In Shapiro, S. C., editor, Encyclopedia of Artificial Intelligence, Vol. I, 2nd edition. John Wiley & Sons, New York, 1990.
[13] Mittal, S. and Davis, H. Representing and solving hierarchical constraint problems. SSL Technical Report, Xerox PARC, 1989.
[14] Mittal, S. and Frayman, F. Towards a generic model of configuration tasks. In Proceedings of IJCAI-89, Detroit, Michigan, August 1989.
[15] Mittal, S. Reasoning about resource constraints in configuration tasks. SSL Technical Report, Xerox PARC, 1990.
[16] Mulder, J., Mackworth, A., and Havens, W. S. Knowledge structuring and constraint satisfaction: The Mapsee approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6):866-879, November 1988.
[17] Seidel, R. A new method for solving constraint satisfaction problems. In Proceedings of IJCAI-81, pages 338-342, Vancouver, Canada, August 1981.
Boolean Extensions of Inheritance Networks

John F. Horty
Philosophy Department and Institute for Advanced Computer Studies
University of Maryland
College Park, MD 20742
horty@umiacs.umd.edu

Richmond H. Thomason
Intelligent Systems Program
University of Pittsburgh
Pittsburgh, PA 15260
thomason+@cad.cs.cmu.edu

Abstract

Much of the theoretical research on nonmonotonic inheritance has concentrated on formalisms involving only IS-A links between primitive nodes. However, it is hard to imagine a useful network representation of commonsense or expert knowledge that would not involve nodes representing negative, conjunctive, or disjunctive properties. Certain nodes of this kind were included in some of the earliest formalisms for defeasible inheritance, but were omitted in later work, either to secure tractability or to simplify the task of theoretical analysis. The purpose of the present paper is to extend the theoretical analysis of defeasible inheritance to networks incorporating these expressive enhancements.

1 Introduction

Much of the theoretical research on nonmonotonic inheritance has concentrated on formalisms involving only positive and negative IS-A links between primitive nodes. However, even setting aside the need for relational reasoning, many knowledge representation applications call for extensions of this austere collection of network primitives. The following are typical instances of mechanisms that provide useful expressive power.

- Negative antecedents. We may wish to say that if a seat is unoccupied it is available.
- Conjunction. We may wish to say that purple mushrooms are poisonous.
- Disjunction. We may wish to say that members of congress are (by definition) either senators or representatives, and (by default) either democrats or republicans.

It is hard to imagine a useful representation of commonsense or expert knowledge that would not call for such definitions and assertions. Also, vital tasks like recognition of concept instances require expressive extensions. If, for instance, we enter three subtypes of sentences (declarative, imperative, and interrogative) along with information about the constituent structure of these three types, a recognizer must somehow know that a string of words that is not classifiable under any of these three types is not a sentence.

What all these expressive extensions have in common is the introduction of boolean node types into network formalisms. Such types were included in some of the earliest formalisms, such as that of Fahlman [4], but were omitted or restricted in later work, either to secure tractability or to simplify the task of theoretical analysis. If extensions of this sort are to be developed piecemeal, there would inevitably be much duplication of effort. In a system providing for negative antecedents, for instance, as well as in one introducing limited disjunctions, it would be necessary to provide separate inheritance definitions, and this would require, for example, a separate account in each case of when one reason for a conclusion should preempt another.

The purpose of the present paper is to present a framework providing criteria that apply generally to the design of boolean extensions of nonmonotonic inheritance networks: we develop a theory of defeasible inheritance for networks containing, in addition to nodes representing primitive kinds or properties, also nodes representing arbitrary boolean combinations of properties. As in [9], we allow for both strict and defeasible links. However, the treatment of reasoning conflicts here is credulous, rather than skeptical. This is for the sake of presentation only: the credulous theory is somewhat simpler, and the other ideas involved in boolean inheritance are already complicated enough.
It is hoped that the theory presented here will help to meet the challenges posed by Brachman [2] and Israel [10] for defeasible inheritance reasoners. These papers raise two serious problems for knowledge representation services employing defeasible inheritance reasoning: illicit cancellation and the need for classification. The former problem, which is simply that definitions create strict connections that should not be subject to exceptions, is solved by providing a mixed theory like that of [9], in which some conclusions cannot be canceled without introducing inconsistency into the network.

HORTY AND THOMASON 633
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The second problem is that, when defeasible links are present, it is not clear how a defined concept should be placed in an inheritance hierarchy. In view of the usefulness of the classifier in KL-ONE style systems, this challenge has to be taken very seriously.

The theory of [9] does not deal with this classification issue, because the mixed system of that paper does not provide for complex concepts. Since the present work, however, gives an account of the conclusions derivable from a defeasible network containing boolean-defined concepts, it provides also a specification for a classification algorithm in this boolean case. Admittedly, implementing such an algorithm would entangle us in intractability problems; but such entanglements also arise in purely monotonic settings. As far as we know at present, intractability problems are not necessarily worsened by the presence of defeasible links.

2 Basic concepts

2.1 Nodes and arguments

Nodes representing objects or individuals are depicted, as usual, by lowercase letters from the beginning of the alphabet (a through d). Nodes representing kinds or properties of individuals, however, may now be compound.
Simple nodes, representing primitive properties, are depicted by lowercase letters from the middle of the alphabet (f through t). Compound property nodes are then obtained by closing the simple property nodes under the boolean operations of ∧, ∨, and ¬; the nodes belonging to this closure, both simple and compound, will be depicted by uppercase letters from the middle of the alphabet (F through T). Capital letters from the end of the alphabet (U through Z) range over nodes in general, both individual nodes and property nodes.

Although we generally use infix notation in examples, we take the boolean operations of ∧ and ∨ officially to operate on sets of nodes; thus, for instance, F ∧ G ∧ H is the result of applying conjunction to the set {F, G, H}. (It will be convenient to take the conjunction and disjunction of a unit set to refer to that set's only member.) The compound property nodes should be interpreted in the obvious way: if F and G are properties, then the nodes F ∧ G, F ∨ G, and ¬F represent the property of being both an F and a G, the property of being either an F or a G, and the property of not being an F, respectively.

We allow for both strict and defeasible links. A strict link has the form X ⇒ F. If X is itself a property node, such a link is equivalent in meaning to a universally quantified material conditional; for example, the link p ∧ q ⇒ r might represent the statement 'All purple mushrooms are poisonous'. If X is an individual node, a strict link will mean that this individual possesses the property F; the link a ⇒ ¬(p ∧ q), for example, might represent the statement 'Albert is not a purple mushroom'. A defeasible link has the form X → F. If X is a property node, such a link is to be interpreted as a defeasible generic statement; the link p ∧ q → r might represent the statement 'Purple mushrooms tend to be poisonous'.
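The node vocabulary and the two link types just described can be encoded directly. The following sketch is ours, not the paper's; it treats conjunction and disjunction as operations on *sets* of nodes, so that the conjunction of a unit set is just that set's only member.

```python
from dataclasses import dataclass
from typing import FrozenSet, Union

# Simple property nodes are plain strings; compound nodes apply a
# boolean operation to a frozenset of nodes, mirroring the paper's
# set-based reading of conjunction and disjunction.

@dataclass(frozen=True)
class And:
    parts: FrozenSet["Node"]

@dataclass(frozen=True)
class Or:
    parts: FrozenSet["Node"]

@dataclass(frozen=True)
class Not:
    part: "Node"

Node = Union[str, And, Or, Not]

def conj(*nodes: Node) -> Node:
    """Conjunction of a set of nodes; a unit set denotes its only member."""
    s = frozenset(nodes)
    return next(iter(s)) if len(s) == 1 else And(s)

@dataclass(frozen=True)
class Link:
    src: Node
    dst: Node
    strict: bool   # True for =>, False for ->

# 'All purple mushrooms are poisonous': p ∧ q => r
example = Link(conj("p", "q"), "r", strict=True)
```

Because the operands form a set, syntactically different orderings of the same conjuncts denote the same node.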
It is harder to find a natural reading for these defeasible links when X is an object, but we can assume that they represent defeasible assertions about the properties of individuals; the link a → ¬(p ∧ q), then, might represent a statement like 'It is most natural to suppose that Albert is not a purple mushroom'.

In theories of inheritance that allow only primitive nodes, reasoning processes can be represented by paths, or linear sequences of links. With the addition of compound nodes, however, inheritance networks are able to support more complicated patterns of reasoning. We represent these patterns using certain kinds of proof trees, called arguments; we refer to them by lowercase Greek letters (α, β, γ, ...).

Arguments will be classified as strict or defeasible, simple or compound; and with each argument α there is associated a premise node P(α) and a conclusion node C(α). These are analogous to the two nodes, in ordinary inheritance, standing at the beginning and the end of a reasoning path. An argument supports a conditional statement constructed out of its premise and conclusion. If α is a strict argument, it supports the statement P(α) ⇒ C(α); if α is a defeasible argument, it supports the statement P(α) → C(α).

The simple arguments are those containing only one inference. If X₁ ∧ ··· ∧ Xₙ is a node, then the tree

    α =  X₁ ··· Xₙ
         ═════════
             F

is a simple strict argument, with P(α) = X₁ ∧ ··· ∧ Xₙ and C(α) = F. Likewise, for any node X, the tree

    α =  X
         ─
         F

is a simple defeasible argument, with P(α) = X and C(α) = F. Compound arguments are defined inductively by the following clauses, in which the bracketed components are optional.

1. If α₁, ..., αₙ are strict arguments and ∧{P(α₁), ..., P(αₙ), [X₁, ..., Xₖ]} is a node, then

       α =  α₁ ··· αₙ [X₁ ··· Xₖ]
            ═════════════════════
                      F

   is a compound strict argument, with P(α) = ∧{P(α₁), ..., P(αₙ), [X₁, ..., Xₖ]} and C(α) = F.

2.
If α₁, ..., αₙ are arguments at least one of which is defeasible, and P(αᵢ) = X for each αᵢ, then

       α =  α₁ ··· αₙ [X]
            ─────────────
                  F

   is a compound defeasible argument, with P(α) = X and C(α) = F.¹

3. If α₁ is any argument, strict or defeasible, with P(α₁) = X, then

       α =  α₁
            ──
            F

   is a compound defeasible argument, with P(α) = X and C(α) = F.

In addition to specifying the arguments themselves, this definition allows us also to classify inferences contained in an argument as strict or defeasible: a double inference bar, analogous to the double-arrow link, indicates that the inference is strict; a single inference bar, analogous to the single-arrow link, indicates that it is defeasible.

The arguments defined here are intended as a generalization of the standard inheritance paths; a path is simply an argument that does not branch. To save vertical space, we will use ordinary path notation from [9] to refer to those arguments that can be identified with paths. For the same reason, we will occasionally write those compound arguments defined by the third clause above horizontally, as (α₁/F).

2.2 Nets, theories, and extensions

Capital Greek letters from the beginning of the alphabet (Γ, Δ, Θ, ...) stand for networks, which are finite sets of links; those from the end of the alphabet (Φ, Ψ, Ω, ...) stand for sets of arguments. Intuitively, the statements belonging to a network are supposed to represent the information provided as hypotheses to some reasoning agent. We imagine this agent developing a body of accepted arguments in stages, by a process of argument formation and ratification. Since arguments are a means of constructing other arguments, at any stage in this process there will be an argument set, consisting of patterns of reasoning that have been explicitly carried out and accepted.
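The argument trees of Section 2.1 can be sketched as a small data structure. The encoding below is hypothetical (the paper does not give code): each argument carries its premise P(α), its conclusion C(α), and a strictness flag, and a compound argument is strict only when every inference in it is strict.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Arg:
    premise: str                    # P(alpha)
    conclusion: str                 # C(alpha)
    strict: bool                    # is every inference in the tree strict?
    subargs: Tuple["Arg", ...] = ()

def extend(sub: Arg, conclusion: str, strict_step: bool) -> Arg:
    """Chain one more inference onto an argument (as in clause 3 and its
    strict analogue): the premise is inherited, the conclusion moves, and
    one defeasible step makes the whole argument defeasible."""
    return Arg(sub.premise, conclusion,
               strict=sub.strict and strict_step,
               subargs=(sub,))

# The path a -> q -> r, built from a one-step defeasible argument:
a_q = Arg("a", "q", strict=False)
a_q_r = extend(a_q, "r", strict_step=False)
```

A path, in this encoding, is just an argument whose tree never branches (each `subargs` tuple has at most one element).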
The relation of support already defined between arguments and statements can be extended in the obvious way to a relation between argument sets and statement sets: an argument set Φ will be said to support a statement set A just in case A is the set of statements supported by the arguments in Φ.

¹The reason why a common premise is required when defeasible arguments are combined is that defeasible arguments can be compared and combined only with respect to the same total background evidence. This common evidence is represented here by the shared premise.

The primary task for a proof-theoretic account of inheritance networks is to specify the theories associated with each network: the statement sets that an ideal reasoner could arrive at, given the information in that network as hypotheses. Following the strategy developed in previous work on inheritance networks, we approach this task here in a roundabout way. We first define the relation between a network and certain argument sets known as the extensions of that network; intuitively, these represent alternative argument sets that an ideal reasoner would be able to accept, based on the initial information contained in the network. Once this relation has been defined, it is then a simple matter to specify the theories associated with a network: A is a theory of the net Γ just in case there is an extension of Γ that supports A.

3 Inheritability

If Γ is a network and Φ is some set of arguments, we describe the pair (Γ, Φ) as an epistemic context. Although, formally, any such pairing of a net and an argument set counts as a context, it is part of the intuitive picture that the argument set should arise out of the net. In any given context, certain arguments can be classified as inheritable, that is, forcible or persuasive. We use the symbol '⊩' to stand for this relation of inheritability, so that '(Γ, Φ) ⊩ α' means that the argument α is inheritable in the context (Γ, Φ).
This notion of inheritability is the central concept in our proof-theoretic account of inheritance networks. In the present section, we set out an appropriate notion of inheritability for arguments. This notion will then be used in the following section to provide a definition of credulous extensions for inheritance networks containing compound nodes.

3.1 Motivation

By appealing to logic, it is easy to specify the conditions under which strict arguments, at least, should be classified as inheritable in a context. Suppose we have chosen some background logic for the boolean connectives; this logic will determine a consequence relation ⊢. In fact, the account of inheritance presented in this paper can be cast against a number of background logics, but for reasons described in Thomason et al. [12], the most promising candidate seems to be the four-valued logic of Belnap [1]. Now imagine, for a moment, that we interpret the nodes as propositions rather than properties; and that we supplement our background logic with the strict statements belonging to some net Γ as additional rules of inference. This leads to a new logic, with a new consequence relation ⊢_Γ. We say that a strict argument is Γ-valid if the conclusion of each inference in the argument, regarded as a proposition, follows from its premises in the logic ⊢_Γ; and we will say that such an argument is inheritable in the context (Γ, Φ) if it is Γ-valid.

The intuitive idea behind this treatment of the strict arguments can be described as follows. Initially, we construe all nodes as properties (this includes individuals, since we can interpret the node a as the property of being a). Each strict link in the net then induces a new logical rule of inference involving a single free variable, say x: the link p ⇒ q, for instance, leads to the rule of inference p(x)/q(x); the link a ⇒ p leads to x = a/p(x).
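Restricted to conjunctive strict links over atomic nodes, the induced consequence relation ⊢_Γ can be sketched as a forward-chaining closure. This is a toy simplification of our own (the paper's background logic is Belnap's four-valued logic, which is not implemented here):

```python
def strict_closure(facts, strict_links):
    """Close a set of atomic facts under strict links of the form
    p1 ∧ ... ∧ pn => q, represented as (frozenset_of_atoms, atom) pairs.
    Repeatedly fire any link whose body is satisfied until fixpoint."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in strict_links:
            if body <= closed and head not in closed:
                closed.add(head)
                changed = True
    return closed

# 'Purple mushrooms are poisonous' as a strict link p ∧ q => r:
links = [(frozenset({"p", "q"}), "r")]
```

Under this sketch, a strict inference is Γ-valid exactly when its conclusion lies in the closure of its premises; since the net is finite, the loop terminates after at most one pass per derived atom.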
We want to evaluate strict arguments against this new predicate logic determined by the net. However, since we are reasoning about statements containing only a single free variable, it suffices to treat the logic as propositional, and the nodes as propositions. Notice that, for the four-valued logic, and for any other reasonable logic of the boolean connectives, this treatment already provides, even for entirely strict arguments, a polynomial reduction of inheritability to an NP-complete problem.

We turn now to the matter of inheritability for defeasible arguments; and here, we focus on the special case of compound defeasible arguments ending in a defeasible inference. This is really the most interesting case, since it forces us to isolate the conditions under which an inference can be drawn using defeasible information. Once the case is understood, it is then a simple matter to embed it in a general definition of inheritability. For arguments of this kind, the account we provide is modeled on that of Touretzky [13]: such an argument will be classified as inheritable in a context if it is constructible, but neither conflicted nor preempted. These key concepts (constructibility, conflict, and preemption) are defined by Touretzky for paths. Our task here is to generalize them in such a way that they will apply to arguments as well.

The generalization is unproblematic in the case of constructibility. We adopt the following definition, which captures the idea that the arguments constructible in a context are those that can be assembled by chaining together, in a certain way, the arguments and links already present in that context.

Definition 1 (Constructibility) Let α be an argument of the form (α₁/Y), with C(α₁) = U. Then α is constructible in the context (Γ, Φ) iff α₁ ∈ Φ and U → Y ∈ Γ.

It is helpful to think of the final inference, or link, in a constructible argument as a reason for accepting that argument.
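Definition 1 can be sketched directly. In this hypothetical encoding (ours, for illustration) an accepted argument is represented only by its conclusion node, and the net only by its set of defeasible links:

```python
def constructible(sub_conclusion, target, net_links, accepted_conclusions):
    """(alpha1 / Y) is constructible in (Gamma, Phi) iff alpha1 is already
    accepted and the defeasible link C(alpha1) -> Y belongs to the net."""
    return (sub_conclusion in accepted_conclusions
            and (sub_conclusion, target) in net_links)

# Defeasible links of a purple-mushroom style net, as (src, dst) pairs:
net = {("a", "p"), ("a", "q"), ("p∧q", "r")}

# After accepting an argument concluding p∧q about a:
accepted = {"p∧q"}
```

With `accepted` as above, extending by the link from `p∧q` to `r` is constructible; with an empty accepted set it is not.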
Figure 1: Γ₁
Figure 2: Γ₂

Suppose, for example, that the agent is given the net Γ₁ = {a → p, a → q, p ∧ q → r} as his initial information, where p = purple things, q = mushrooms, and r = poisonous things. This net is shown in Figure 1, in which the compound conjunctive node p ∧ q is depicted as a conjunction sign with double-barbed arrows pointing at its conjuncts (disjunctive and negative nodes will be depicted in a similar fashion). Now let α be the compound defeasible argument that combines the links a → p and a → q to conclude p ∧ q, and let β = (α/r); and suppose that the reasoning agent has already accepted the argument α, so that he is in the epistemic context (Γ₁, Φ) with Φ = Γ₁ ∪ {α}. In this new context, the argument β is constructible. Since the agent has already accepted the argument that a is a purple mushroom, the link p ∧ q → r, telling him that purple mushrooms are poisonous, gives him a reason to accept the argument that a is poisonous.

Constructibility is a necessary condition that an argument of this kind must satisfy in order to be classified as inheritable, but it is not sufficient. Even if an argument is constructible in some context, it will not be classified as inheritable if it is either conflicted or preempted.

The intuitive force of the requirement that a conflicted argument should not be inheritable is that the agent must check for consistency before accepting arguments. An argument cannot be classified as persuasive, even if there is some reason for accepting it, whenever the adoption of that argument would introduce a conflict into an epistemic context. The need for this requirement is usually illustrated by the familiar Nixon Diamond. Once an agent has embraced, say, the argument that Nixon is a pacifist, he can no longer accept the conflicting argument that Nixon is not a pacifist.

The generalization of the notion of conflictedness to nets with compound nodes is nontrivial, because the more powerful strict consequence relation provided by boolean nodes can make conflicts indirect.
To illustrate this possibility, suppose that the reasoner is given as his initial information the net Γ₂ = {a ⇒ p, p → s, a ⇒ q, q → t, a ⇒ ¬(s ∧ t)}, depicted in Figure 2; and suppose also that he has already accepted the argument a ⇒ q → t, so that his epistemic context is (Γ₂, Φ) with Φ = Γ₂ ∪ {a ⇒ q → t}. Here, the argument a ⇒ p → s would introduce a conflict into the context, even though the context does not contain an explicit argument to the contrary.

In order to generalize the notion of conflict to handle cases like this, we need to introduce a few preliminary ideas. First, where Φ is an argument set and X is a node, we let Φ(X) = {C(α) : α ∈ Φ and P(α) = X}. This set can be thought of as the projection of Φ onto X; it tells us what an agent who has accepted all the arguments in Φ has concluded in particular about X. Next, we say in the obvious way that a set of nodes F is a conflict set in Γ iff F ⊢_Γ Z and F ⊢_Γ ¬Z, for some node Z; and that F is a minimal conflict set in Γ (a Γ-mcs) iff F but none of its proper subsets is a conflict set in Γ. Using these ideas, the notion of a conflicted argument can be defined as follows.

Definition 2 (Conflict) Let α be an argument of the form (α₁/Y), with P(α₁) = X. Then α is conflicted in the context (Γ, Φ) iff there is a Γ-mcs F such that Y ∈ F and F − {Y} ⊆ Φ(X).

In the example above, it can now be seen that the argument a ⇒ p → s is conflicted in the context (Γ₂, Φ), since F = {s, t, ¬(s ∧ t)} is a Γ₂-mcs and F − {s} ⊆ Φ(a).

The appeal to minimal conflict sets in the definition above is one way of capturing the idea that a path is conflicted only if it is somehow responsible for a conflict.² Why not simply say in the definition above that (α₁/Y) is conflicted in (Γ, Φ) if Φ(X) ∪ {Y} is a conflict set in Γ?
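Minimal conflict sets as just defined can be checked by brute force over subsets. The sketch below is ours; rather than implementing ⊢_Γ, it takes the conflict-set test as a predicate, and illustrates the Γ₂ example where {s, t, ¬(s ∧ t)} is the minimal conflict set:

```python
from itertools import combinations

def is_minimal_conflict_set(nodes, is_conflict_set):
    """F is a Gamma-mcs iff F is a conflict set but no proper subset
    of F (including the empty set) is."""
    if not is_conflict_set(frozenset(nodes)):
        return False
    return not any(is_conflict_set(frozenset(sub))
                   for r in range(len(nodes))
                   for sub in combinations(nodes, r))

# In Gamma_2, a set strictly entails both s∧t and ¬(s∧t) exactly when it
# contains all of s, t, and ¬(s∧t) (a stand-in for the real |-_Gamma):
def gamma2_conflict(f):
    return {"s", "t", "not(s∧t)"} <= f
```

Checking all subsets is exponential in the size of F, which is consistent with the paper's remark that generality is bought at the price of tractability.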
The problem with this simpler formulation is that it does not properly capture the idea that a path is conflicted only if it introduces a conflict into a context; and as a result, it classifies too many paths as conflicted. Consider, for example, the net Γ₃ = {a → p, a → ¬p, a ⇒ q, q → r}, depicted in Figure 3. Suppose the agent has drawn no inferences, so that he is in the context (Γ₃, Φ), with Φ = Γ₃. According to the simpler analysis, the argument a ⇒ q → r would have to count as conflicted, since Φ(a) ∪ {r} is a conflict set. But according to the official analysis contained in our definition, a ⇒ q → r does not count as conflicted, since it does not introduce a conflict into the context. It seems that this official analysis of conflictedness is preferable: Γ₃ contains a little bit of inconsistency elsewhere, but it gives us no real reason not to conclude that a is an r.

²There may be other, more natural mechanisms for capturing this notion of blame for a conflict.

Figure 3: Γ₃
Figure 4: Γ₄

Like a conflicted argument, an argument that is preempted cannot be classified as inheritable. An ideal reasoner should not view an argument as persuasive whenever his context provides him with a more specific reason for accepting a conflicting argument. The notion of specificity appealed to here is carried over directly from simple inheritance: we say that V ≤_Φ U iff U ∈ Φ(V); and that V <_Φ U, meaning that V is a specific kind of U, iff V ≤_Φ U but it is not the case that U ≤_Φ V. Using this idea of specificity, the notion of preemption can be defined as follows.

Definition 3 (Preemption) Let α be an argument of the form (α₁/Y), with P(α₁) = X and C(α₁) = U. Then α is preempted in the context (Γ, Φ) iff there are nodes V and W such that

1. X ≤_Φ V and V <_Φ U,
2. V → W ∈ Γ and there is a Γ-mcs F such that W, Y ∈ F and F − {W, Y} ⊆ Φ(X).
The first clause of this definition tells us that V represents a better reason than U for drawing conclusions about X; the second clause tells us that V suggests a conclusion about X that conflicts in the context with the conclusion suggested by U. Again, it would be simpler to replace the second clause with a stipulation to the effect that V → W ∈ Γ and Φ(X) ∪ {W, Y} is a conflict set; but this would often lead to the wrong results. For example, with the simpler stipulation, the path x ⇒ u → y would be improperly preempted in the context (Γ, Γ), where Γ = {x ⇒ u, u → y, x ⇒ v, v ⇒ u, v → w, x ⇒ ¬w}.

To illustrate the notion of preemption, let us suppose that the reasoner is given as his initial information the net Γ₄ = {a ⇒ m, m ⇒ n, a ⇒ o, n → p, o → q, p ∧ q ⇒ r, m → ¬r}, illustrated in Figure 4. Just to give the net some concreteness, we adopt the following interpretation: p = people with Disease-1, q = people with Disease-2, r = people exhibiting Symptom-3, n = people who have spent time on Island-1, o = people who have spent time on Island-2, m = people who have spent time in some particular swamp on Island-1. Under this interpretation, what Γ₄ tells us is that the Island-1 people tend to acquire Disease-1, that the Island-2 people tend to acquire Disease-2, that anyone with both of these diseases must exhibit Symptom-3, that the swamp people tend not to exhibit this symptom, and that the individual a has spent time both on Island-2 and in the Island-1 swamp. Now suppose that the agent has reasoned his way to the epistemic context (Γ₄, Φ), where Φ = Γ₄ ∪ {a ⇒ m ⇒ n, a ⇒ o → q}; that is, he has already decided that a has Disease-2. In this context, he will find that the argument a ⇒ m ⇒ n → p is preempted by the more specific argument a ⇒ m → ¬r.

3.2 The definition

At this point, we can embed the special cases of inheritability that we have considered into a general definition.
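Before the general definition, the specificity preorder used in Definition 3 can be sketched in a few lines. The encoding is ours: accepted arguments are represented as (premise, conclusion) pairs, so the projection Φ(X) is just the set of conclusions drawn from X.

```python
def projection(accepted, x):
    """Phi(X): the conclusions of accepted arguments with premise X."""
    return {c for p, c in accepted if p == x}

def strictly_more_specific(v, u, accepted):
    """V <_Phi U: U is in Phi(V) but V is not in Phi(U)."""
    return u in projection(accepted, v) and v not in projection(accepted, u)

# In the Gamma_4 example, m (swamp people) is a specific kind of
# n (Island-1 people), and a is known to be an m:
accepted = {("m", "n"), ("a", "m")}
```

The strict ordering is what blocks symmetric pairs: when each of two nodes supports a conclusion about the other, neither counts as more specific.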
It is convenient to begin by introducing some notation, analogous to that of [9], for analyzing the structure of arguments. Where α is an argument tree, we let σ(α) be the maximal strict subtree of α beginning with its root inference; and we let δ(α) be the set of argument trees that remains when σ(α) is truncated from α. These concepts are easier to illustrate than to define precisely.

Using this notation, we can classify the arguments according to their structure as follows. An argument α might end in a strict inference without being entirely strict, in which case we would have σ(α) ≠ α and δ(α) ≠ {α}. Alternatively, the argument might be entirely strict, in which case we would have σ(α) = α, or it might end with a defeasible inference, in which case we would have δ(α) = {α}. We use this classification of arguments to define inheritability for argument trees.

Definition 4 (Inheritability)
Case A: σ(α) ≠ α and δ(α) ≠ {α}. Then (Γ, Φ) ⊩ α iff σ(α) ∈ Φ and δ(α) ⊆ Φ.
Case B: σ(α) = α. Then (Γ, Φ) ⊩ α iff α is Γ-valid.
Case C-I: δ(α) = {α} and α is a direct link. Then (Γ, Φ) ⊩ α iff α ∈ Γ.
Case C-II: δ(α) = {α} and α is a compound path. Then (Γ, Φ) ⊩ α iff
1. α is constructible in (Γ, Φ),
2. α is not conflicted in (Γ, Φ),
3. α is not preempted in (Γ, Φ).

Here, Case A reduces the question of inheritability for a defeasible argument ending in a strict inference to two questions: inheritability for a strict argument and inheritability for arguments ending in a defeasible inference. These are treated in Cases B and C.

4 Credulous extensions

With the relation of defeasible inheritability secured, it is straightforward to define the credulous extensions of the inheritance networks containing compound nodes.
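Treating inheritability as a black box, the extension condition, that Φ equals the set of arguments inheritable in the context (Γ, Φ), can be checked by brute force over a finite argument universe. The sketch below is ours, and illustrates the credulous behavior on a Nixon-diamond style conflict:

```python
from itertools import chain, combinations

def is_credulous_extension(phi, universe, inheritable):
    """inheritable(arg, phi) models (Gamma, Phi) |~ arg for a fixed net."""
    return phi == {a for a in universe if inheritable(a, phi)}

def extensions(universe, inheritable):
    """All credulous extensions, by enumerating every subset of the
    (finite) argument universe and keeping the fixed points."""
    subsets = chain.from_iterable(combinations(universe, r)
                                  for r in range(len(universe) + 1))
    return [set(s) for s in subsets
            if is_credulous_extension(set(s), universe, inheritable)]

# Two conflicting arguments x and y: each is inheritable exactly when
# the other has not been accepted, so each extension picks one of them.
def inh(arg, phi):
    other = "y" if arg == "x" else "x"
    return other not in phi

exts = extensions({"x", "y"}, inh)
```

The empty set is not an extension here (both arguments would be inheritable in it), and neither is {x, y} (each blocks the other): the only fixed points are the two one-element sets, which is the credulous multiple-extension behavior the paper describes.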
Intuitively, an extension is supposed to represent some total set of arguments that an ideal reasoner would be able to accept, based on the initial information in some network. Therefore, we need only ask: what could prevent a path set Φ from representing such an ideal set of arguments determined by the net Γ? There are two possibilities. First, Φ might contain too few arguments; there might be some argument inheritable in the context (Γ, Φ) that does not actually belong to Φ. Or second, Φ might contain too many arguments; some argument actually belonging to Φ might turn out not to be inheritable in the context. The credulous extensions of a net Γ can be defined as those path sets exhibiting neither of these defects.

Definition 5 The argument set Φ is a credulous extension of the net Γ iff Φ = {α : (Γ, Φ) ⊩ α}.

One requirement of the theory of skeptical mixed inheritance from [9] was that it should specialize to previously formulated skeptical theories of strict and defeasible inheritance. In the same way, it can be shown that the theory of boolean inheritance presented here specializes to the mixed theory of [7], a forward-chaining variant of the original credulous theory of inheritance from Touretzky [13]. Criteria guaranteeing the existence of extensions for boolean nets, and also the properties of soundness and stability, are established in [8], which describes a defeasible logic from which the present treatment of inheritance is descended.

5 Conclusion

The main result of the paper is a generalization of Touretzky's familiar inheritance definition to networks containing boolean-defined nodes. We have concentrated on motivating this definition, and have not discussed the independent problem of providing a model theoretic semantics for the theory; this problem is complex and somewhat problematic even in the case of simple inheritance.
It seems, however, that certain model theoretic studies of simple inheritance networks, such as those of Gelfond and Przymusinska [5] or Ginsberg [6], could be extended to the boolean system of this paper.

In obtaining generality, our inheritance formalism also becomes intractable; as we have pointed out, there is a polynomial reduction of inheritance in our system to the problem of boolean satisfiability. We have tried to keep an open mind on whether it is best to secure a polynomial inheritance algorithm at all costs, or to provide expressive adequacy even if this requires intractable algorithms. It remains to be established that special-purpose representation problems can be fully supported by tractable algorithms, so that the representation tasks can be served without user-defined calls to the programming language that underlies the representation system. In the case of medical applications, for example, Doyle and Patil [3] argue forcefully that expressive extensions are needed for KL-ONE style systems. The special-purpose problem of representing syntactic structures in natural languages provides another case in point. So far, the formalisms that have been developed for this purpose have been highly intractable, though in this application one would hope for tractability (see Shieber [11] for background).

Whether we seek tractable fragments of the full theory or begin with a powerful theorem-proving implementation of the full system, it seems impossible to avoid the need for careful testing of the system under field conditions. Both sorts of systems need to be tested for expressive adequacy and for practical efficiency, but with different emphases in the testing. The theory of this paper serves as a first step in designing implementations of both sorts.

Acknowledgments

This material is based on work supported by the National Science Foundation under Grant No. IRI-8700705, and by the Army Research Office under Grant No. DAAL-03-88-K0087.
References

[1] N. Belnap. A useful four-valued logic. In Modern Uses of Multiple-Valued Logic, J. Dunn and G. Epstein (eds.), D. Reidel (1977), pp. 8-37.

[2] R. Brachman. "I lied about the trees" or, defaults and definitions in knowledge representation. The AI Magazine, vol. 6 (Fall, 1985), pp. 80-93.

[3] J. Doyle and R. Patil. Two dogmas of knowledge representation: language restrictions, taxonomic classification, and the utility of representation services. MIT/LCS Technical Report 387b (1989).

[4] S. Fahlman. NETL: A System for Representing and Using Real-World Knowledge. The MIT Press (1979).

[5] M. Gelfond and H. Przymusinska. Formalization of inheritance reasoning in autoepistemic logic. Unpublished manuscript, University of Texas at El Paso (1990).

[6] M. Ginsberg. A local formalization of inheritance: preliminary report. Unpublished manuscript, Stanford University (1990).

[7] J. Horty. A credulous theory of mixed inheritance. Forthcoming in Inheritance Hierarchies in Knowledge Representation, M. Lenzerini, D. Nardi, and M. Simi (eds.), John Wiley and Sons (1990).

[8] J. Horty. Defeasible arguments: an extension of path-based reasoning. Technical Report, Institute for Advanced Computer Studies, University of Maryland (1990).

[9] J. Horty and R. Thomason. Mixing strict and defeasible inheritance. Proceedings of AAAI-88, Morgan Kaufmann (1988), pp. 427-432.

[10] D. Israel and R. Brachman. Some remarks on the semantics of representation languages. In On Conceptual Modeling: Perspectives from Artificial Intelligence, Databases, and Programming Languages, M. Brodie, J. Mylopoulos, and J. Schmidt (eds.), Springer-Verlag (1984).

[11] S. Shieber. An Introduction to Unification-Based Approaches to Grammar. Center for the Study of Language and Information, Stanford, CA (1986).

[12] R. Thomason, J. Horty, and D. Touretzky. A calculus for inheritance in monotonic semantic nets. In Methodologies for Intelligent Systems, Z. Ras and M.
Zemankova (eds.), North-Holland (1987), pp. 280-287.

[13] D. Touretzky. The Mathematics of Inheritance Systems. Morgan Kaufmann (1986), 220 pp.
A Temporal Terminological Logic*

Albrecht Schmiedel
Technische Universität Berlin
Sekr. FR 5-12, Projekt KIT-BACK
Franklinstraße 28/29
D-1000 Berlin 10, West-Germany
schmied@dbOtuill.bitnet

Abstract

An attempt is made to integrate three well-known formalisms of knowledge representation: terminological logic in the tradition of KL-ONE, the temporal logic of Shoham, and Allen's interval calculus. Drawing on each of these sources, a temporal terminological logic is proposed which combines structural with temporal abstraction. A formal semantics is provided, and some hints are given for exploring the computational properties of reasoning in the formalism.

Introduction

Terminological logics in the tradition of KL-ONE [Brachman & Schmolze 1985] as well as temporal reasoning have both received considerable attention within the knowledge representation community in the last years. However, there has never been a serious attempt to integrate these two fields. Although in a number of projects using terminological logic (e.g. [Poesio 1988]) the problem of representing time has arisen, notably in the context of tense in natural language understanding, the approaches taken there and the partial solutions found have not culminated in a syntactically and semantically well-defined temporal variant of a terminological logic. The theoretical framework described in the following is the foundation of a (future) temporal extension of BACK, a knowledge representation system based on terminological logic being developed in our project [Peltason et al. 89].

The approach for integrating time into a terminological formalism which I am proposing here draws on three ingredients. First of all on terminological logic itself, the appealing features of which are completely preserved in the temporal variant. The temporal capabilities come straightforwardly by adding some new term-forming constructs.
The model-theoretic semantics are accordingly amended, and remain unchanged for non-temporal terms. Compositionality is unaffected; there are no restrictions on the combination of temporal and non-temporal terms.

*This work was supported by the Commission of the European Communities as part of ESPRIT Project 311.

The second ingredient is concerned with the elementary combination of temporal and non-temporal objects. Following [Shoham 1987] I will keep temporal and non-temporal components of the language separate, giving time a special status in the formalism. The role of 'TRUE' in his logic is played by a new term-forming construct '(at interval concept)' which denotes the set of all individuals that are in the denotation of concept at the time interval. Denotations of concepts are interpreted at intervals, not at points. Also as in Shoham's approach there is no commitment with respect to the property/event/process trichotomy in the basic framework. The denotation of a concept at one interval is essentially unrelated to its denotation at other intervals.

Thirdly, for expressing temporal relationships and constraints, I rely on Allen's interval calculus [Allen 83] extended by some additional constraint types for dealing with durations, absolute times, and the granularity of intervals. This restricts the range of expressible temporal constraints compared with a full-fledged temporal logic, but for this subset specialized algorithms are available, making an efficient treatment at least for a broad range of 'non-puzzle-mode' cases conceivable, which is a prerequisite for a knowledge representation (KR) service.

Syntax and Introductory Examples

Figure 1 shows the syntax of some of the basic concept- and role-forming constructs common to most terminological logics of the KL-ONE family: and, all, atleast, atmost for concepts, and and, domain, range for roles. Note that the (restrict role concept) construct found in some systems is equivalent to (and role (range concept)) in our syntax. The new term-forming operators involving time are at, sometime, and alltime.¹ The syntax for time intervals and time

¹I have deliberately included a fairly expressive set of role-forming constructs to demonstrate the expressive potential of the formalism, disregarding for the time being various possible trade-offs between expressivity and the
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Note that the (restrict role concept) construct found in some systems is equivalent to (and role (range concept)) in our syntax. The new term-forming operators involving time are at, sometime, and alltime.¹ The syntax for time intervals and time nets is found in Figure 2.

¹I have deliberately included a fairly expressive set of role-forming constructs to demonstrate the expressive potential of the formalism, disregarding for the time being various possible trade-offs between expressivity and the computational complexity of classification algorithms. I could also have included negation and disjunction (as some terminological logics do), but that contributes nothing to the problem at hand of integrating time.

concept ::= atomic-concept
          | (and concept+)
          | (all role concept)
          | (atleast min role)
          | (atmost max role)
          | (at interval concept)
          | (sometime (interval-variable+) time-net concept)
          | (alltime (interval-variable+) time-net concept)

role ::= atomic-role
       | (and role+)
       | (domain concept)
       | (range concept)
       | (at interval role)
       | (sometime (interval-variable+) time-net role)
       | (alltime (interval-variable+) time-net role)

Figure 1: Syntax for Concepts and Roles with Temporal Structure

In standard terminological logics, using the non-temporal constructs of the syntax, concepts can be formed such as
(and man (atleast 1 (and child (range female))) (all child grown-up)),
with the intuitive meaning a man with at least one child which is female, and whose children are all grown-up. According to the model-theoretic semantics for the term-forming constructs, the extension of a term is strictly a function of the extensions of its subterms and of the extensions of the atomic concepts or roles it contains, in this example the concepts man, female, grown-up, and the role child.

An important feature that all KR systems based on terminological logics provide is the possibility of introducing new atomic concepts and roles by definition, for example
daughter := (and child (range female)).
After this definition, the new name can be used in other terms and definitions as an abbreviation for the defining term.

Before looking at some examples using the new constructs involving time, let us consider the meaning of non-temporal terms. For example, if we define
car-owner := (atleast 1 (and own (range car))),
what is the meaning of car-owner within the temporal framework? The answer is that all terms have to be evaluated with respect to a particular interval of time. Consequently, car-owner no longer denotes a set of individuals fulfilling the definition, but in fact a function that assigns such a set of individuals to every interval. Using the at-construct, we can bind the time of evaluation of a term to a specific interval. For example,
(at 'August 1990' car-owner)
denotes the set of car-owners at the interval denoted by the interval constant 'August 1990'.

Every term can be rewritten as (at NOW term). NOW is the special reference interval which represents the index at which the term is evaluated. Thus, the index implicit in every non-temporal term can be made explicit. Every at-term creates an evaluation environment whereby every (explicit and implicit) occurrence of NOW in the embedded term is bound to the interval specified in the at-term. Taking the last example, if we expand the definition of car-owner and bind the implicit NOW's, we get
(atleast 1 (and (at 'August 1990' own) (range (at 'August 1990' car)))).
Note that the outer at-terms are redundant (and have therefore been deleted in the example), since there are no more embedded NOW's that the time interval could be bound to.
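To make the interval-indexed reading concrete, here is a small sketch (not from the paper) that treats concepts and roles as functions from a time interval to an extension, with the at-construct overriding the evaluation index; the domain, the interval names 'aug90'/'sep90', and the data are invented for illustration:

```python
# Illustrative sketch only: terms as extension functions indexed by a
# time interval.  The constructs mirror (atleast ...), (and role (range c)),
# and (at interval term) from the syntax above.

DOMAIN = {"tom", "amy"}

def atleast(m, role):
    # (atleast m role): individuals with at least m role-fillers at index t
    return lambda t: {d for d in DOMAIN if len(role(t).get(d, set())) >= m}

def range_restrict(role, concept):
    # (and role (range concept)): keep only fillers that are in concept at t
    return lambda t: {d: {e for e in role(t).get(d, set()) if e in concept(t)}
                      for d in DOMAIN}

def at(interval, term):
    # (at interval term): bind the evaluation index (NOW) to interval
    return lambda t: term(interval)

# Primitive role `own` and primitive concept `car`, given per interval
own = lambda t: {"aug90": {"tom": {"beetle"}}}.get(t, {})
car = lambda t: {"aug90": {"beetle"}}.get(t, set())

# car-owner := (atleast 1 (and own (range car)))
car_owner = atleast(1, range_restrict(own, car))

print(car_owner("aug90"))                # tom owns a car in 'aug90'
print(car_owner("sep90"))                # nobody in 'sep90'
print(at("aug90", car_owner)("sep90"))   # the at-term overrides the index
```

The last call shows why an at-term makes a term universal: the outer evaluation index is simply ignored.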
We can identify two classes of terms: time-indexed terms, whose denotation depends on a particular value of NOW, and universal terms, whose denotation is the same for all values of NOW. Obviously, all terms that contain no unbound (implicit or explicit) occurrences of NOW are necessarily universal, and as a consequence an at-term embedding a universal term is redundant and can be replaced by the term itself.

With the at-construct and temporal constants alone, expressivity is very restricted. The potential for temporal abstraction comes with the ability to express abstract temporal patterns, which classify objects in terms of their pattern of change. In order to achieve this, temporal variables and means of expressing constraints over these variables are necessary.² Temporal variables are introduced by the temporal quantifiers sometime and alltime together with a set of constraints, a time net, over these variables. Three kinds of constraints are allowed by the syntax for time nets according to Figure 2: relations between pairs of intervals using Allen's basic interval relations and disjunctions of these, metric constraints on single intervals, and granularity constraints requiring an interval to take values that are multiples of some time unit.

SCHMIEDEL 641

time-net ::= time-constraint | (and time-constraint+)
time-constraint ::= (interval-relation interval interval)
                  | (comparison interval duration-constant)
                  | (granularity interval)
interval-relation ::= equal | before | after | meets | met-by
                    | overlaps | overlapped-by | starts | started-by
                    | finishes | finished-by | during | contains
                    | (or interval-relation+)
comparison ::= < | <= | = | >= | >
granularity ::= sec | min | hour | day | month | year | ...
interval ::= interval-variable | interval-constant | NOW
interval-variable ::= x | y | z | x1 | x2 | x3 | ...
interval-constant ::= '1987' | '3/12/1987 14:04:12' | ...
duration-constant ::= '5d 2h 4min 33sec' | ...

Figure 2: Syntax for Time Nets
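The interval relations of Figure 2 are Allen's thirteen basic relations [Allen 83]. Over intervals represented as integer pairs (i1, i2) with i1 < i2 (the discrete model adopted later in the paper), they can be sketched as simple endpoint comparisons; the encoding is standard, but the code itself is only an illustration:

```python
# Illustrative sketch: Allen's thirteen basic interval relations over
# intervals encoded as integer pairs (i1, i2) with i1 < i2.

def equal(a, b):         return a == b
def before(a, b):        return a[1] < b[0]
def after(a, b):         return before(b, a)
def meets(a, b):         return a[1] == b[0]
def met_by(a, b):        return meets(b, a)
def overlaps(a, b):      return a[0] < b[0] < a[1] < b[1]
def overlapped_by(a, b): return overlaps(b, a)
def starts(a, b):        return a[0] == b[0] and a[1] < b[1]
def started_by(a, b):    return starts(b, a)
def finishes(a, b):      return a[1] == b[1] and a[0] > b[0]
def finished_by(a, b):   return finishes(b, a)
def during(a, b):        return b[0] < a[0] and a[1] < b[1]
def contains(a, b):      return during(b, a)

RELATIONS = [equal, before, after, meets, met_by, overlaps,
             overlapped_by, starts, started_by, finishes,
             finished_by, during, contains]

# The thirteen relations are mutually exclusive and jointly exhaustive:
ivs = [(i, j) for i in range(4) for j in range(i + 1, 5)]
assert all(sum(r(a, b) for r in RELATIONS) == 1 for a in ivs for b in ivs)
print(meets((0, 3), (3, 5)), during((2, 3), (0, 4)))
```

A disjunctive relation of Figure 2 such as (or starts finishes during equal) then corresponds to an any() over the listed predicates.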
Absolute bounds can be imposed on an interval by using interval constants in interval relations.

The following term denotes the set of individuals that were car-owners at an interval sometime before NOW:
former-car-owner := (sometime (x) (before x NOW) (at x car-owner)).
former-car-owner can be equivalently expressed using a temporal role:
former-car-owner := (atleast 1 have-owned-a-car),
have-owned-a-car := (sometime (x) (before x NOW) (at x (and own (range car)))).
Two individuals are related by have-owned-a-car if they were related by own at some interval before NOW and if at that time the second was an instance of car.

²The introduction of variables in terms seems counter to the spirit of KL-ONE-like languages; but the only variable-free temporal language I could think of is equivalent to the subset of the one presented here where there is only one variable per sometime- or alltime-term. That appears overly restrictive; not even a covering of NOW by a sequence of meeting intervals can be expressed.

The following term correctly applies to an individual NOW if at a point during NOW he ceases to be a car-owner and becomes a bike-owner:
(and (sometime (x y) (and (starts x NOW) (finishes y NOW) (meets x y))
                     (and (at x car-owner) (at y bike-owner)))
     (alltime (x) (during x NOW)
              (at x (atmost 1 (and own (range vehicle)))))).
It is important to realize that in the sometime-term there is nothing that expresses 'ceases to be' or 'becomes'; being car-owner and bike-owner at the same time throughout NOW would be perfectly consistent. It is only in conjunction with the alltime-term, which restricts the number of role-fillers for the own-a-vehicle role to at most one for all times during NOW, and with the assumption that car and bike are subsumed by vehicle, that this interpretation is ruled out.

In the previous examples, time nets only used interval relations to constrain temporal variables.
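Those interval-relation-only examples can be evaluated by brute force over a small discrete timeline; in the sketch below the horizon and the ownership history are invented for illustration:

```python
# Illustrative sketch: evaluating
#   former-car-owner := (sometime (x) (before x NOW) (at x car-owner))
# by enumerating candidate intervals over a small integer timeline.

def intervals(horizon):
    return [(i, j) for i in range(horizon) for j in range(i + 1, horizon + 1)]

def before(a, b):
    return a[1] < b[0]

history = {(0, 2): {"tom"}}         # tom was a car-owner over (0, 2)

def car_owner(t):
    return history.get(t, set())

def former_car_owner(now, horizon=6):
    # existentially quantify x over intervals satisfying (before x NOW)
    return {d for x in intervals(horizon) if before(x, now)
              for d in car_owner(x)}

print(former_car_owner((4, 6)))     # (0, 2) lies before (4, 6)
print(former_car_owner((1, 3)))     # no owning interval ends before 1
```

The set comprehension plays the role of the sometime-quantifier: an individual qualifies as soon as one admissible interval witnesses the embedded concept.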
The following time net additionally uses metric constraints on the duration of intervals, and granularity predicates:
(and (day x) (= x '24h') (day y) (= y '24h') (meets x y)
     ((or starts finishes during equal) x NOW)
     ((or starts finishes during equal) y NOW))
It constrains x and y to be consecutive days within NOW. The granularity constraint (day x) above restricts x to take only values that are started and finished by a day. Without this constraint, x could be any interval with a duration of 24 hours, due to the metric constraint (= x '24h'), but not necessarily coinciding with a full day of the calendar. On the other hand, leaving away the duration constraint, x could be any interval starting and ending with a full day.

Semantics

In terminological logics in the tradition of KL-ONE it has become customary to provide a model-theoretic account of the semantics (e.g. [Schmolze 1989, Nebel 1989]). I will follow this tradition. Before spelling out the semantics for concepts and roles, the semantics associated with temporal constraints must be clarified. For the present purpose, I will assume a discrete time model and interpret all time intervals as pairs of integers, and define the domain of time intervals as follows: $\mathcal{T} \stackrel{\mathrm{def}}{=} \{(i_1, i_2) \mid i_1 < i_2,\ i_1, i_2 \in \mathrm{Integer}\}$. Thus, consecutive integers form the smallest, non-decomposable intervals, the moments in the sense of [Allen & Hayes 1985].

$\mathcal{E}[(\mathrm{and}\ c_1 \ldots c_n)]_t = \bigcap_{i=1}^{n} \mathcal{E}[c_i]_t$
$\mathcal{E}[(\mathrm{all}\ r\ c)]_t = \{d \in \mathcal{D} : \mathcal{E}[r]_t(d) \subseteq \mathcal{E}[c]_t\}$
$\mathcal{E}[(\mathrm{atleast}\ m\ r)]_t = \{d \in \mathcal{D} : |\mathcal{E}[r]_t(d)| \geq m\}$
$\mathcal{E}[(\mathrm{atmost}\ m\ r)]_t = \{d \in \mathcal{D} : |\mathcal{E}[r]_t(d)| \leq m\}$
$\mathcal{E}[(\mathrm{at}\ x\ c)]_t = \mathcal{E}[c]_{\mathcal{I}(x)}$ if $x$ is a variable; $\mathcal{E}[c]_t$ if $x = \mathrm{NOW}$; $\mathcal{E}[c]_{\mathcal{M}(x)}$ if $x$ is a constant
$\mathcal{E}[(\mathrm{sometime}\ X\ TC\ c)]_t = \{d \in \mathcal{D} : \exists \mathcal{I} \in \mathcal{I}^*((TC, X \cup \{\mathrm{NOW}\}))_{\mathrm{NOW} \mapsto t}\ \text{such that}\ d \in \mathcal{E}_{\mathcal{I}}[c]_t\}$
$\mathcal{E}[(\mathrm{alltime}\ X\ TC\ c)]_t = \{d \in \mathcal{D} : \forall \mathcal{I} \in \mathcal{I}^*((TC, X \cup \{\mathrm{NOW}\}))_{\mathrm{NOW} \mapsto t},\ d \in \mathcal{E}_{\mathcal{I}}[c]_t\}$
$\mathcal{E}[(\mathrm{domain}\ c)]_t = \mathcal{E}[c]_t \times \mathcal{D}$
$\mathcal{E}[(\mathrm{range}\ c)]_t = \mathcal{D} \times \mathcal{E}[c]_t$

Figure 3: Semantic Conditions for Extension Functions
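Under this discrete model, a candidate interpretation of the consecutive-days net above can be checked directly; reading the integers as hours and taking days to be intervals aligned to multiples of 24 is an illustrative assumption, not part of the formalism:

```python
# Illustrative sketch: checking one variable assignment against the
# consecutive-days time net, over integer-pair intervals read as hours.

def meets(a, b):    return a[1] == b[0]
def starts(a, b):   return a[0] == b[0] and a[1] < b[1]
def finishes(a, b): return a[1] == b[1] and a[0] > b[0]
def during(a, b):   return a[0] > b[0] and a[1] < b[1]
def equal(a, b):    return a == b

def duration_eq(a, d):  return a[1] - a[0] == d
def day(a):             return a[0] % 24 == 0 and a[1] % 24 == 0

def within(a, now):
    # ((or starts finishes during equal) a NOW)
    return starts(a, now) or finishes(a, now) or during(a, now) or equal(a, now)

def consecutive_days_in(x, y, now):
    # (and (day x) (= x '24h') (day y) (= y '24h') (meets x y) ...)
    return (day(x) and duration_eq(x, 24) and day(y) and duration_eq(y, 24)
            and meets(x, y) and within(x, now) and within(y, now))

now = (0, 72)                                         # a three-day reference interval
print(consecutive_days_in((24, 48), (48, 72), now))   # two full, adjacent days
print(consecutive_days_in((30, 54), (54, 78), now))   # 24h long, but not calendar days
```

The second call fails exactly for the reason explained in the text: the intervals satisfy the metric constraint but violate the granularity constraint.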
For time constraints according to Figure 2, I will assume a fixed model $\mathcal{M}$ which maps interval constants to elements of $\mathcal{T}$, duration constants to subsets of $\mathcal{T}$, comparison operators and interval relations to sets of pairs of elements of $\mathcal{T}$, and granularity predicates to subsets of $\mathcal{T}$, such that the intuitive meaning of these constructs is adequately mirrored (for example, that $(\mathcal{M}[\text{'August 1990'}], \mathcal{M}[\text{'September 1990'}]) \in \mathcal{M}[\mathrm{meets}]$, $\mathcal{M}[\text{'3/12/1990'}] \in \mathcal{M}[\mathrm{day}]$, etc.).

An interpretation³ of a time net (TC, X), where TC is a set of constraints and X a set of variables, is a function $\mathcal{I} : X \to \mathcal{T}$ which satisfies TC (for example, if (meets x y) ∈ TC, then $(\mathcal{I}(x), \mathcal{I}(y)) \in \mathcal{M}[\mathrm{meets}]$). The set of all interpretations of the time net (TC, X) is denoted by $\mathcal{I}^*((TC, X))$. The set of all interpretations of a time net in all of which x is mapped to the same value v is denoted by $\mathcal{I}^*((TC, X))_{x \mapsto v}$.

³The notation for time nets and their interpretations owes much to [Nökel 1988].

A model for a set of terms with temporal structure defined by the syntax in Figures 1 and 2 is a triple $(\mathcal{D}, \mathcal{T}, \mathcal{E})$ where $\mathcal{D}$ is a set of individuals, $\mathcal{T}$ is the set of time intervals, and $\mathcal{E}$ is a function
$\mathcal{E} : \mathcal{C} \cup \mathcal{R} \to (\mathcal{T} \to 2^{\mathcal{D}}) \cup (\mathcal{T} \to 2^{\mathcal{D} \times \mathcal{D}})$
where $\mathcal{C}$ are the concept terms and $\mathcal{R}$ are the role terms without free variables and after all definitions have been expanded. Thus, each concept (each role) is mapped to a function that assigns sets of individuals (sets of pairs of individuals) to each time interval. For $(\mathcal{D}, \mathcal{T}, \mathcal{E})$ to be a model, the conditions in Figure 3⁴ that define the meaning of the syntactic constructs must be met for all $t \in \mathcal{T}$.

Taking into account the extension at each time interval, subsumption can now be defined in the usual way: For all concepts and roles, $c_1$ subsumes $c_2$ iff for all extension functions $\mathcal{E}$ (models) and all time intervals $t \in \mathcal{T}$, $\mathcal{E}[c_2]_t \subseteq \mathcal{E}[c_1]_t$.
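On a small horizon of the discrete model, the set of interpretations $\mathcal{I}^*((TC, X))$ can be enumerated by brute force; encoding each constraint as a predicate over a variable assignment is an illustrative choice, not the paper's data structure:

```python
# Illustrative sketch: enumerating the interpretations I*((TC, X)) of a
# time net over intervals (i, j) with 0 <= i < j <= horizon.

from itertools import product

def interpretations(X, TC, horizon):
    ivs = [(i, j) for i in range(horizon) for j in range(i + 1, horizon + 1)]
    for values in product(ivs, repeat=len(X)):
        asg = dict(zip(X, values))
        if all(c(asg) for c in TC):
            yield asg

meets = lambda a, b: a[1] == b[0]

# time net: (and (meets x y) (meets y z)) over the horizon 0..3
TC = [lambda a: meets(a["x"], a["y"]), lambda a: meets(a["y"], a["z"])]
sols = list(interpretations(["x", "y", "z"], TC, 3))
print(sols)   # the single chain (0,1), (1,2), (2,3)
```

Such an enumerator is exponential in |X| and only serves to make the semantics concrete; the constraint-propagation algorithms cited later are the realistic route.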
Kinds of Time Dependency

As already mentioned in the introduction, in the basic semantic framework as spelt out in the last section there are no built-in restrictions on the extensions of primitive concepts or roles. For example, if a pair of individuals are related by the primitive role own at one interval, they may or may not be related by that role in any subinterval. As a consequence, according to the semantics so far, (at 'June 1990' own) does not subsume (at '1990' own), and (sometime (x) (> x '35 days') (at x own)) does not subsume (and (at 'June 1990' own) (at 'July 1990' own)), although intuitively if an own relation holds in an interval it should also be valid in all subintervals, and if it holds over two consecutive intervals it should hold over their union (which, in the second example, is then an interval longer than 35 days). Extra restrictions on extension functions are needed in order to legitimize the subsumptions above. Using the terminology in [Shoham 1987], the possible extensions of the primitive role own should be restricted to being downward-hereditary: for all $t, t' \in \mathcal{T}$, $t'$ subinterval of $t$, $\mathcal{E}[\mathrm{own}]_t \subseteq \mathcal{E}[\mathrm{own}]_{t'}$, and concatenable: for all $t, t', t'' \in \mathcal{T}$, $t$ meets $t'$, $t$ starts $t''$, $t'$ finishes $t''$, $\mathcal{E}[\mathrm{own}]_t \cap \mathcal{E}[\mathrm{own}]_{t'} \subseteq \mathcal{E}[\mathrm{own}]_{t''}$. Of course, for other kinds of concepts and roles these restrictions are quite inadequate.

⁴Notation: $r(d) \stackrel{\mathrm{def}}{=} \{d' \in \mathcal{D} : (d, d') \in r\}$, and $\mathcal{E}_{\mathcal{I}}[c]_t \stackrel{\mathrm{def}}{=} \mathcal{E}(c)(t)$ where all free variables in $c$ are evaluated using $\mathcal{I}$. Note that $\mathcal{I}$ is only needed to evaluate at-terms nested within sometime- and alltime-terms. The conditions for and, at, sometime, and alltime for roles have been omitted since they are completely analogous to those for concepts.
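On a small discrete timeline, the effect of declaring a primitive role downward-hereditary and concatenable can be sketched as a closure computation over its extension; the role data and the horizon below are invented:

```python
# Illustrative sketch: closing a primitive role's extension under the
# downward-hereditary and concatenable restrictions.  Extensions map
# intervals (i1, i2), 0 <= i1 < i2 <= horizon, to sets of pairs.

def close(ext, horizon):
    ivs = [(i, j) for i in range(horizon) for j in range(i + 1, horizon + 1)]
    ext = {t: set(ext.get(t, set())) for t in ivs}
    changed = True
    while changed:
        changed = False
        for t in ivs:
            for u in ivs:
                # downward-hereditary: a subinterval u of t inherits ext[t]
                if t[0] <= u[0] and u[1] <= t[1] and not ext[u] >= ext[t]:
                    ext[u] |= ext[t]
                    changed = True
                # concatenable: t meets u, so their union (t[0], u[1])
                # receives every pair related throughout both pieces
                if t[1] == u[0]:
                    common = ext[t] & ext[u]
                    if not ext[(t[0], u[1])] >= common:
                        ext[(t[0], u[1])] |= common
                        changed = True
    return ext

own = {(0, 2): {("tom", "beetle")}, (2, 4): {("tom", "beetle")}}
closed = close(own, 4)
print(closed[(0, 4)])   # obtained by concatenating (0, 2) and (2, 4)
print(closed[(1, 3)])   # inherited downward from (0, 4)
```

The fixpoint shows both rules interacting: concatenation first extends the pair to the union interval, and downward heredity then pushes it into every subinterval.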
For example, for a role average-temperature that relates an object to its average temperature in a certain time interval, the first subsumption relation above should not hold, since average temperatures are not inherited to subintervals; the second may be correct though, because an average that remains identical for two consecutive intervals also applies for their union. The latter is certainly not true for other aggregate values which depend on time, such as e.g. sales of a department per month, or annual salary of an employee. For these, another kind of restriction might be more appropriate: a role sales-per-month, for example, should take values only for intervals that correspond to full months, and be empty for all other intervals.

How can these different kinds of restrictions be integrated into our framework? The situation is quite similar to the problem of integrating disjointness restrictions into terminological logics. Should the disjointness of e.g. male and female be treated as definitional, and hence be used for validating certain subsumptions, or rather as assertional? On pragmatic grounds, the former alternative is generally chosen. Terminological systems allow disjointness restrictions for primitive concepts and use these for computing subsumption. The same approach could be adopted for restrictions on temporal extensions of primitive concepts or roles. So, in the example above, after declaring the primitive role own as downward-hereditary and concatenable, the subsumptions would be valid.

Computing Subsumption

Subsumption is the central semantic notion in terminological logics; designing sound (and possibly complete) algorithms that compute subsumption is the central issue for providing practical KR services. Until now, no algorithms are available for the temporal variant presented here, but at least some preliminary hints as to what is involved can be given. Assume CT′ is of the form (sometime X′ TC′ (and … (at x_j c_j) …
)), j ∈ J, x_j ∈ X′, c_j ∈ C′, and CT is of the form (sometime X TC (and … (at x_i c_i) …)), i ∈ I, x_i ∈ X, c_i ∈ C. Under what conditions does CT′ subsume⁵ CT? Intuitively, CT′ is a more general concept than CT if its temporal variables are less constrained than those of CT, and for each of its temporal variables there is a corresponding variable in CT such that the associated concept of that variable subsumes the associated concept in CT. Obviously, CT can have additional temporal variables and associated concepts, which specialize it further.

To formalize this notion, let S : J → I be a function from indices of CT′ to indices of CT. $S_X : X' \to X$ is defined as $S_X(x_j) = x_{S(j)}$, and $S_C : C' \to C$ is defined as $S_C(c_j) = c_{S(j)}$. S must always map NOW to NOW, i.e. $S_X(\mathrm{NOW}) = \mathrm{NOW}$. The notion of 'less constrained temporal variables' is captured by the following definition: A time net (TC′, X′) subsumes another time net (TC, X) wrt a variable mapping $S_X : X' \to X$ iff for all $\mathcal{I} \in \mathcal{I}^*((TC, X))$ there exists an $\mathcal{I}' \in \mathcal{I}^*((TC', X'))$ such that for all $x \in X'$, $\mathcal{I}'(x) = \mathcal{I}(S_X(x))$.

We can now express the condition above more formally: CT′ subsumes CT iff there exists a mapping S : J → I such that (TC′, X′) subsumes (TC, X) wrt $S_X$, and $c_j$ subsumes $S_C(c_j)$ for all j ∈ J. The subsumption of time nets guarantees that for every interpretation of TC there is a corresponding interpretation of TC′, and the second condition states that for each interval of that interpretation the corresponding concepts are in the subsumption relation. Thus, all instances of CT must also be instances of CT′.

Assuming for the moment that the $c_i$ and $c_j$ are all non-temporal concepts, and we can therefore apply known subsumption algorithms, we are left with the problem of determining subsumption between two time nets. Here we can utilize known algorithms for constraint propagation in temporal networks (e.g. [Schmiedel 1988, Valdéz-Pérez 1987]).
These algorithms compute the most specific constraints derivable from the given ones, and at the same time check whether the network is consistent. Subsumption can easily be checked (modulo incompleteness of constraint propagation) relative to a given variable mapping: all constraints in the subsuming network must have corresponding ones (via the mapping) in the subsumed network which are at least as restrictive.

⁵Additional restrictions as mentioned in the last section are not taken into account in the following.

Conclusion

Of course, complete and tractable subsumption algorithms for the whole language and for the standard semantics presented here cannot be expected. In Allen's interval calculus on its own, which is a subset of our temporal constraint language, determining all consequences of a set of constraints is NP-hard [Vilain & Kautz 1986]. And even for moderately expressive languages the non-temporal part is intractable [Nebel 1988]. That does not render these formalisms useless. On the one hand, it remains to be seen to what extent normal cases in practical applications can be handled even by complete algorithms. On the other hand, algorithms for computing subsumption in terminological logics that are incomplete with respect to standard semantics are increasingly being characterized as complete with respect to a weakened semantics [Schild 1989, Patel-Schneider 1988]; approximative algorithms are also studied in the field of temporal reasoning [van Beek 1989]. These developments are a reasonable starting point for developing subsumption algorithms for temporal terminological logics.

References

[Allen 83] James F. Allen, Maintaining Knowledge About Temporal Intervals, Communications of the ACM 26(11), 832-843, 1983.

[Allen & Hayes 1985] James F. Allen, Patrick J. Hayes, A Common-Sense Theory of Time, Proc. 9th IJCAI, Los Angeles (Cal.), 528-531, 1985.
[van Beek 1989] Peter van Beek, Approximation Algorithms for Temporal Reasoning, Proc. 11th IJCAI, Detroit (Mich.), 1291-1296, 1989.

[Brachman & Schmolze 1985] Ronald J. Brachman and James G. Schmolze, An Overview of the KL-ONE Knowledge Representation System, Cognitive Science 9(2), 171-216, 1985.

[Nebel 1988] Bernhard Nebel, Computational Complexity of Terminological Reasoning in BACK, Artificial Intelligence 34, 371-383, 1988.

[Nebel 1989] Bernhard Nebel, Reasoning and Revision in Hybrid Representation Systems, Ph.D. Thesis, Universität des Saarlandes, Saarbrücken, 1989. To appear in Lecture Notes in Artificial Intelligence, Springer, Berlin.

[Nökel 1988] Klaus Nökel, Convex Relations Between Time Intervals, SEKI Report SR-88-17, Universität Kaiserslautern, 1988.

[Patel-Schneider 1988] Peter F. Patel-Schneider, Adding Number Restrictions to a Four-Valued Terminological Logic, Proc. AAAI-88, Saint Paul (Minn.), 485-490, 1988.

[Peltason et al. 89] Christof Peltason, Albrecht Schmiedel, Carsten Kindermann, and Joachim Quantz, The BACK System Revisited, KIT Report 75, Fachbereich Informatik, Technische Universität Berlin, 1989.

[Poesio 1988] Massimo Poesio, Towards a Hybrid Representation of Time, Proc. ECAI-88, Munich, 1988.

[Schild 1989] Klaus Schild, Towards a Theory of Frames and Rules, KIT Report 76, Fachbereich Informatik, Technische Universität Berlin, 1989.

[Schmolze 1989] James G. Schmolze, The Language and Semantics of NIKL, Technical Report 89-4, Tufts University, Medford (Mass.), September 1989.

[Schmiedel 1988] Albrecht Schmiedel, Temporal Constraint Networks, KIT Report 69, Fachbereich Informatik, Technische Universität Berlin, 1988.

[Shoham 1987] Yoav Shoham, Temporal Logics in AI: Semantical and Ontological Considerations, Artificial Intelligence 33(1), 89-104, 1987.

[Vilain & Kautz 1986] Marc B. Vilain, Henry Kautz, Constraint Propagation Algorithms for Temporal Reasoning, Proc.
AAAI-86, Philadelphia (Penn.), 377-382, 1986.

[Valdéz-Pérez 1987] Raúl E. Valdéz-Pérez, The Satisfiability of Temporal Constraint Networks, Proc. AAAI-87, Seattle (Wash.), 256-260, 1987.
A Maximum Entropy Approach to Nonmonotonic Reasoning*

Moisés Goldszmidt (moises@cs.ucla.edu), Cognitive Systems Lab., University of California, Los Angeles, CA 90024
Paul Morris (morris@intellicorp.com), Intellicorp, 1975 El Camino Real West, Mountain View, CA 94040
Judea Pearl (judea@cs.ucla.edu), Cognitive Systems Lab., University of California, Los Angeles, CA 90024

Abstract

This paper describes a probabilistic approach to nonmonotonic reasoning which combines the principle of infinitesimal probabilities with that of maximum entropy, and which sanctions inferences similar to those produced by the principle of minimizing abnormalities. The paper provides a precise formalization of the consequences entailed by a defeasible knowledge base, develops the computational machinery necessary for deriving these consequences, and compares the behavior of the maximum entropy approach to those of ε-semantics ([Pearl 89a]) and rational closure ([Lehmann 89]).

1 Introduction

An approach to defeasible reasoning based on probabilities arbitrarily close to 1 (see [Geffner & Pearl 88], [Pearl 88]) produces a semi-monotonic logic that properly respects specificity-based preferences but often appears to be too conservative. This system, called ε-semantics, was proposed as a common core for all nonmonotonic formalisms, but, in itself, turns out too weak to capture many aspects of common sense reasoning such as chaining, contraposition, and respecting irrelevancies ([Pearl 89a]). Pearl has proposed to increase the inferential power of ε-semantics using the independence assumptions embedded in distributions of maximum entropy, and has shown that when applied to knowledge bases containing a small number of rules, maximum entropy yields patterns of reasoning which are rather pervasive in common discourse (see [Pearl 88], chapter 10).

This paper explores a system based on infinitesimal probabilities augmented by maximum entropy considerations. Given a set R of rules and a set
Given a set R of rules and a set *The participation of the second author was funded by a joint project of NASA and DARPA under contract F30602- 88-C-0045. The work of the first and third authors was sup- ported in part by National Science Foundation grant #IRI- 88-21444 and Naval Research Laboratory grant #N00014- 89-J-2007. 646 KNOWLEDGEREPRESENTATION P,,, of probability distributions that satisfy each of the rules in R to within E, we single out a distin- guished distribution P,*,a having the greatest entropy: - C, P(w) log P(w). We then define the notion of a plausible conclusion of R in terms of a collection of such distributions, parameterized by e. This system is related to those based on minimizing abnormalities (e.g. circumscription [McCarthy SS]), in that inferences are sanctioned if they hold in a model that minimizes a weighted count of rule violation. We discuss the computational and behavioral aspects of the ME approach, indicating improvements over E- semantics and the rational closure of [Lehmann 891. The paper is organized as follows: Section 2 in- troduces the language and some basic definitions and theorems. Section 3 is concerned with the formalism of parameterized probability distributions (PPDs). In section 4 the necessary machinery is de- veloped for computing the maximum entropy distri- bution and deciding whether an arbitrary conditional sentence is a plausible conclusion of a given knowl- edge base. Section 5 provides a summary and ex- amples, and Section 6 evaluates the main results. Proofs to all theorems can be found in the full pa- per [Goldszmidt, Morris & Pearl 901. 2 Notation and Preliminaries. Let L be a closed set of well formed propositional for- mulas, built in the usual way from a finite set of propo- sitional variables and the connectives “V” and “1”. The letters A, B, C, D will be used to denote formulas in L. A world ‘w is an assignment of truth values to the propositional variables in Z. 
The satisfaction of a formula by a world is defined as usual, and will be written as $w \models A$. Note that if there are n propositional variables in $\mathcal{L}$ there will be $2^n$ worlds. Let U stand for the set of worlds.

A more complete treatment of the concepts summarized below can be found in [Goldszmidt & Pearl 89]. Using the binary connective '→' and two formulas A and B from $\mathcal{L}$ we can construct the defeasible rule A → B. We will use R to denote a set of such rules. A rule A → B is said to be verified by w if $w \models A \wedge B$. The same rule is said to be falsified or violated by w if $w \models A \wedge \neg B$. If $w \not\models A$, the rule is considered as neither verified nor falsified. A rule r is tolerated by a set R if we can find a world w that verifies r while no other rule in R is falsified by w. We will say that a non-empty set R of rules is confirmable if we can find a rule r ∈ R that is tolerated by R.

Given a positive real number ε, we will say that a probability measure P ε-satisfies the rule A → B if $P(B|A) \geq 1 - \varepsilon$. Given a set R of rules, we will use $P_{\varepsilon,R}$ to denote the set of probability distributions that ε-satisfy R. We will say that a probability measure P is proper for R if P(A) > 0 for all A such that A → B ∈ R. A rule will be considered proper if its antecedent is satisfiable. A set R is probabilistically consistent if, for every ε > 0, there is a proper probability assignment P such that P ε-satisfies every rule A → B ∈ R. Intuitively, consistency means that it is possible for all rules to be as close to absolute certainty as desired. Alternatively, it means that $P_{\varepsilon,R}$ is nonempty for all ε > 0, and hence the existence of $P^*_{\varepsilon,R}$ is guaranteed for consistent rule sets. Moreover, if $P_{\varepsilon,R}$ is a convex set, $P^*_{\varepsilon,R}$ will be unique. The next theorem constitutes the basis of a simple procedure for testing consistency:

Theorem 1 (Consistency)¹
A set R is probabilistically consistent if and only if every nonempty subset R′ of R is confirmable.

¹This theorem appears initially in [Adams 75] and is extended to mixtures of defeasible and strict rules in [Goldszmidt & Pearl 89].

In other words, R is consistent iff we can find a rule tolerated by R′ in every subset R′ of R.

Corollary 1 ([Goldszmidt & Pearl 89]) Given a set R, consistency can be tested in $|R|^2/2$ satisfiability tests by the following simple labeling procedure: construct the set $R_1$ with all rules tolerated by R, then construct the set $R_2$ with all rules tolerated by $R - R_1$, and so on. If a partition of R is obtained, the set is consistent; otherwise R is inconsistent.

Although propositional satisfiability is in general NP-complete, for the case of Horn clauses it is linear in the number of occurrences of literals in R [Dowling & Gallier 84].

3 Parameterized Probability Distributions

Among the general laws that a common sense consequence relation (denoted by |~)² might be expected to obey, the following have been proposed ([Geffner & Pearl 88], [Kraus et al. 88], [Makinson 89], [Pearl 89a]):

(Logic) If P ⊢ Q, then P |~ Q.
(Cumulativity) If P |~ Q, then P |~ R iff P ∧ Q |~ R.
(Cases) If P |~ R and Q |~ R, then P ∨ Q |~ R.

²We reserve the symbol ⊢ for classical derivability.

Kraus, Lehmann and Magidor [Kraus et al. 88] introduce the class of preferential models and show that each preferential model satisfies the three laws given above. Moreover, they show every consequence relation satisfying those laws can be represented as a preferential model. (Kraus et al. actually use a slightly different set of laws, but they are easily shown to be equivalent to those above.) Equivalent results were shown in [Lehmann & Magidor 88] with respect to the class of ranked preferential models and the set of rules above augmented by Rational Monotony:

If P |~ R and it is not the case that P |~ ¬Q, then P ∧ Q |~ R.

As it stands, ε-semantics does not quite fit within the same framework as preferential models.
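The labeling procedure of Corollary 1 above can be sketched with worlds enumerated by brute force in place of satisfiability tests; the rule encoding and the bird/penguin example are illustrative, not from the text:

```python
# Illustrative sketch of the labeling procedure of Corollary 1.
# A rule (a, b) stands for the defeasible rule a -> b, where a and b
# are boolean functions of a world (a dict of truth values).

from itertools import product

def worlds(variables):
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))

def tolerated(rule, rules, variables):
    a, b = rule
    # some world verifies `rule` while falsifying no rule in `rules`
    return any(a(w) and b(w)
               and all(not (a2(w) and not b2(w)) for a2, b2 in rules)
               for w in worlds(variables))

def consistent(rules, variables):
    remaining = list(rules)
    while remaining:
        layer = [r for r in remaining if tolerated(r, remaining, variables)]
        if not layer:
            return False          # some nonempty subset is not confirmable
        remaining = [r for r in remaining if r not in layer]
    return True

V = ["bird", "penguin", "flies"]
R = [(lambda w: w["bird"],    lambda w: w["flies"]),        # birds fly
     (lambda w: w["penguin"], lambda w: not w["flies"]),    # penguins do not
     (lambda w: w["penguin"], lambda w: w["bird"])]         # penguins are birds

print(consistent(R, V))   # the penguin set is probabilistically consistent
```

The brute-force toleration test stands in for the satisfiability calls of the corollary; a SAT procedure would replace the world enumeration in any serious use.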
The basic notion behind ε-entailment is: Given a set R, a new rule A → B is ε-entailed if for all δ > 0 there exists an ε > 0 such that for all P in $P_{\varepsilon,R}$ we have $P(B|A) \geq 1 - \delta$. Thus, ε-semantics defines an entailment relation which is essentially that induced by the class of preferential models [Lehmann & Magidor 88], but it presents no direct counterpart to the notion of an individual preferential model. Furthermore, in general, ε-semantics does not satisfy rational monotony. This motivates the following reformulation of the idea of ε-semantics:

Definition 1 A parameterized probability distribution (PPD) is a collection $\{P_\varepsilon\}$ of probability measures over a space of worlds, indexed by a parameter ε that ranges over positive real numbers in a neighborhood of zero.

Definition 2 Every parameterized probability distribution $\{P_\varepsilon\}$ induces a consequence relation on formulas as follows: A |~ B iff $\lim_{\varepsilon \to 0} P_\varepsilon(B|A) = 1$.

To avoid having to treat some cases separately in the proofs and definitions, it is convenient for the purposes of this section to define P(B|A) = 1 when P(A) = 0 (thus extending Definition 2 to non-proper distributions). Under this convention, a PPD consequence relation can now contain instances of the form A |~ false even when A is logically consistent (see [Adams 66]). It is easy to show from elementary probability equivalences that each such consequence relation satisfies the Logic, Cumulativity, and Cases laws discussed earlier. Also, as might be expected, there is a close relation between PPDs and ε-semantics:

Theorem 2 A proper rule is a consequence of a finite probabilistically consistent set of rules with respect to the class of PPDs iff it is ε-entailed.

We now identify a subclass of PPDs that is of special interest. We will say a PPD $\{P_\varepsilon\}$ is convergent if $P_\varepsilon(B|A)$ converges (as ε → 0) for each pair of sentences A and B. The following is an important sufficient condition for PPD convergence. We define a PPD to be
We define a PPD to be GOLDSZMIDTETAL. 647 analytic if, for every event E, P,(E) has an extension to a function over complex values of e that is analytic in a neighborhood of 0. (This implies that it possesses derivatives of all orders, all of which converge as E ap- proaches 0.) Theorem 3 Every analytic PPD is a convergent PPD. The proof is a direct consequence of the fact that any given analytic PPD can be expanded as a Taylor series about zero. Either Pc( E) is identically zero, or at least one of the coefficients must be non-zero. In the latter case, as c approaches 0, the series is dominated by the first term whose coefficient is non-zero. Besides the three laws considered earlier, a con- vergent PPD consequence relation satisfies Ratio- nal Monotony. The following theorem is an f=JY consequence of the results and methods in [Lehmann & Magidor 881. A similar result has been independently obtained by Satoh [Satoh 901. Theorem 4 Every convergent PPD entailment rela- tion can be represented as a ranked preferential model,3 and every ranked preferential model with a finite non- empty state space can be represented as a convergent PPD entailment relation. There is also a connection between PPDs and preferential models: since the entailment relation of a PPD satisfies the laws of Logic, Cumulativity, and Cases, it can be represented as a preferential model [Kraus et.al. 881. The following result shows that the converse is also true, for finite systems. Theorem 5 Every PPD entailment relation may be represented as a preferential model, and every prefer- ential model with finite non-empty state space may be represented as a PPD theory. The basic idea in the converse part of the proof is to consider the total order extensions of the partial order that determines the preferential model. Each of these corresponds to a ranked preferential model, which by theorem 4 can be expressed as a convergent PPD. 
Interleaving the individual PPDs then gives a single PPD that represents the original preferential model.

Preference logics were originally introduced as a generalization of circumscription. One might ask where circumscriptive theories fit in the framework discussed above. The simplest form of circumscription is one that minimizes a single finite abnormality predicate, letting everything else vary. Thus, it is characterized by a preference for worlds that satisfy minimal subsets of a finite set of abnormality propositions. We will call a system of axioms together with such a preference a finite abnormality model. Clearly, every such system can be represented as a preferential model. The following is a partial converse.

³ We remark that the proof of Theorem 3 shows that for analytic PPDs, the ranks are well-ordered. This stands in contrast to example 4.1 in [Lehmann & Magidor 88].

Theorem 6 Every preferential model with a finite propositional language and finite state space can be represented as a finite abnormality model.

4 Maximizing the Entropy

As mentioned earlier, given any ruleset R, there is a distinguished PPD {P*_{ε,R}} where P*_{ε,R} is defined as the distribution of maximum entropy that ε-satisfies each rule in R. This suggests the following definition (we assume proper probability distributions and proper rules):

Definition 3 (ME-plausible conclusions.) Given a consistent set R, we say that A → B is an ME-plausible conclusion of R iff lim_{ε→0} P*_{ε,R}(B|A) = 1. Equivalently, we say that A |~ B is in the ME-consequence relation of R, denoted C_ME(R).

While plausible conclusions in ε-semantics are required to attain arbitrarily high probabilities in all probability distributions in P_{ε,R}, the requirement for ME-plausible conclusions concerns only one distinguished distribution, that having the maximum entropy among those in P_{ε,R}.
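To make the limit-based consequence relation of Definition 2 concrete, here is a toy PPD over two atoms; the particular numbers are our own illustration, not from the paper:

```python
def cond_prob_flies_given_bird(eps):
    # One member P_eps of a toy PPD over worlds (bird, flies); the weight
    # of the abnormal world (bird, not flies) vanishes as eps -> 0.
    p = {(True, True): 0.5 - eps, (True, False): eps,
         (False, True): 0.25, (False, False): 0.25}
    birds = p[(True, True)] + p[(True, False)]
    return p[(True, True)] / birds

# lim_{eps -> 0} P_eps(flies | bird) = 1, so  bird |~ flies  in this PPD.
for eps in (0.1, 0.01, 0.001):
    print(round(cond_prob_flies_given_bird(eps), 4))  # 0.8, 0.98, 0.998
```

Each P_ε here is a proper distribution (the four weights sum to 1 for ε < 0.5), and P_ε(flies|bird) is a rational function of ε, so this PPD is analytic, hence convergent, in the sense just defined.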
In this section we develop the machinery for deciding whether a rule A → B is an ME-plausible conclusion of R in accordance with the definition above. Let R be a set of defeasible rules r_i: A_i → B_i, 1 ≤ i ≤ n. Each of these rules imposes the constraint

P(B_i|A_i) ≥ 1 − ε   (1)

on the space of distributions. Using elementary probability theory we can rewrite Eq. (1) as:

(ε/(1−ε)) × P(B_i, A_i) ≥ P(¬B_i, A_i)   (2)

where ¬B_i denotes the complement of B_i. Note that the term P(B_i, A_i) equals the sum of the probabilities of the worlds in which A_i → B_i is verified, and similarly P(¬B_i, A_i) equals the sum of the probabilities of the worlds in which A_i → B_i is falsified. Writing W_i^+ as a shorthand for the set of worlds in which r_i is verified, and W_i^− for the set of worlds that falsify r_i, Eq. (2) can be written as:

P(W_i^−) − (ε/(1−ε)) × P(W_i^+) ≤ 0   (3)

where P(W_i^+) = Σ_{w∈W_i^+} P(w) and P(W_i^−) = Σ_{w∈W_i^−} P(w).

The entropy associated with a probability distribution P is defined as:

H[P] = − Σ_w P(w) log P(w)   (4)

The problem of computing the maximum entropy distribution P*_{ε,R} reduces to the problem of maximizing the entropy expression (Eq. (4)) subject to the set of constraints Eq. (3) and the normalization constraint Σ_w P(w) = 1.

One of the more powerful techniques for solving such optimization problems is that of Lagrange multipliers [Aoki 71]. This technique associates a factor α with each constraint (rule), and yields a distribution P*(w) that is expressible as a product of these factors ([Cheeseman 83]). We will show that, under the infinitesimal approximation, P*(w) will be proportional to the product of the factors (α) associated only with the rules falsified in w.⁴

At the point of maximum entropy, the status of a constraint such as (3) can be one of two types: active, when the constraint is satisfied as an equality, and passive, when the constraint is satisfied as a strict inequality.
Passive constraints do not affect the point of maximum entropy and can be ignored (see [Aoki 71]). The task of identifying the set of active constraints is discussed at the end of this section. We will first assume that all constraints are active.

An application of the Lagrange multiplier technique on a set of n active constraints yields the following expression for each term P(w) (see the appendix in [Goldszmidt, Morris & Pearl 90] for a step by step derivation):⁵

P(w) = α_0 × Π_{r_i∈R_w^−} α_i × Π_{r_j∈R_w^+} α_j^{−ε/(1−ε)}   (5)

where R_w^− denotes the set of rules falsified in w and R_w^+ denotes the set of rules verified in w. Motivated by Theorem 3, we look for an asymptotic solution where each α_i is proportional to ε^{κ_i} for some non-negative integer κ_i;⁶ namely, each term of the form α_j^{−ε/(1−ε)} will tend to 1 as ε tends to 0. The term α_0 is a normalization constant that will be present in each term of the distribution and thus can be safely ignored. Using P′ to denote the unnormalized probability function, and taking the limit as ε goes to 0, equation (5) yields:

P′(w) ≈ Π_{r_i∈R_w^−} α_i  if R_w^− ≠ ∅;  P′(w) ≈ 1 otherwise   (6)

Thus, the probability of a given world w depends only on the rules that are falsified in that world. Once the α-factors are computed, we can construct the desired probability distribution and determine which new rules are plausible conclusions of R. In order to compute the α-factors we substitute the expression for each P′(w) (Eq. (6)) into each of the active constraint equations (Eq. (3)), and obtain:

Σ_{w∈W_i^−} Π_{r_j∈R_w^−} α_j = (ε/(1−ε)) × Σ_{w∈W_i^+} Π_{r_j∈R_w^−} α_j   (7)

where 1 ≤ i ≤ n.

⁴ We drop the subscript "ε, R" for notational clarity.
⁵ In equation (5), α_0 = e^{λ_0+1} and α_k = e^{λ_k}, where λ_0 and λ_k are the actual Lagrange multipliers.
⁶ We use a "bootstrapping" approach: if this assumption yields a solution, then the uniqueness of P′ will justify this assumption. Note that this amounts to the assumption that there is no world whose probability depends exponentially on ε.

A few observations are in order: First, Eq.
(7) constitutes a system of n equations (one for each active rule) with n unknowns (the α-factors, one for each active rule). Unfortunately, each summation might range over an exponentially large number of worlds. Second, by our assumption, α_i ≈ a_i ε^{κ_i} where κ_i is a non-negative integer. This implies log α_i = log a_i + κ_i log ε ≈ κ_i log ε. Thus, each probability term P′(w) is determined once the values of the κ's are computed (see Eq. (6)). We can rewrite Eq. (7) in terms of the κ's, by replacing the summations in Eq. (7) by the min operation, since the highest order term (the term with minimum κ) will be the most significant one as ε approaches 0. Taking the log on both sides of Eq. (7) yields:

min_{w∈W_i^−} [Σ_{r_k∈R_w^−} κ_k] = 1 + min_{w∈W_i^+} [Σ_{r_j∈R_w^−} κ_j],  1 ≤ i ≤ n   (8)

Each κ_i can be regarded as the cost added to a world w that violates rule r_i, since such a violation causes log(P′(w)) to decrease by κ_i. Since rule r_i is falsified in each world on the left-hand side of equation (8), κ_i will appear in each one of the Σ-terms inside the min operation and can be isolated:

κ_i + min_{w∈W_i^−} [Σ_{r_k∈R_w^−, k≠i} κ_k] = 1 + min_{w∈W_i^+} [Σ_{r_j∈R_w^−} κ_j]   (9)

Even with these simplifications, it is not clear how to compute the values for the κ's in the most general case. We now introduce a class of rule sets R for which a simple greedy strategy can be used to solve the set of equations above:

Definition 4 (Minimal Core Sets.) A set R is a minimal core (MC) set iff for each rule r_i: A_i → B_i ∈ R, its negation A_i → ¬B_i is tolerated by R − {r_i}. Equivalently, for each rule r_i there is a world that falsifies r_i and no other rule in R.

Clearly, to decide whether a set R is an MC set takes |R| satisfiability tests. Note also that the MC property excludes sets R that contain redundant rules, namely, rules r that are already ε-entailed by R − {r}.
This is so because the toleration requirement of MC sets guarantees that the negation of each rule r_i is consistent with respect to the rest of the rules in R, and it is known (see [Goldszmidt & Pearl 89]) that a rule r_i is ε-entailed by R − {r_i} if and only if its negation is inconsistent with R − {r_i}. For example, consider the rule set R_scb = {s → c, s → b, s → c∧b}.⁷ This set is redundant because the third rule is ε-entailed by the first two, and vice versa. Indeed, R_scb does not meet the MC requirements (Def. 4); any world falsifying s → c (or s → b) must also falsify s → c∧b.

⁷ A possible interpretation for this set could be: "typically, Swedes are civilized", "typically, Swedes are blond", "typically, Swedes are civilized and blond".

The MC property guarantees that for each rule r_i ∈ R there is a world w_i in which only that rule is falsified. Thus, from Eq. (6), P′(w_i) ≈ α_i ≈ ε^{κ_i}. Note that in the equation representing the constraint imposed by rule r_i (Eq. (9)), the min operation on the left-hand side ranges over all worlds w in which r_i is falsified. Clearly, the minimum over such worlds is attained at w_i, and the constraint equations for an MC set can be further simplified to:

κ_i = 1 + min_{w∈W_i^+} [Σ_{r_j∈R_w^−} κ_j]   (10)

We now describe a greedy strategy for solving Eq. (10). Let us assume that we are given a consistent MC set R and let {R_1, R_2, R_3, ...} be the partition of R that results from applying the labeling procedure described in Section 2 (Corollary 1). For every rule r_i ∈ R_1 there is a world w_i^+ for which the set R_{w_i^+}^− is empty: no rule is falsified by w_i^+ and consequently Σ_{r_j∈R_{w_i^+}^−} κ_j = 0. It follows that for every r_i ∈ R_1 we must have κ_i = 1 (see Eq. (10)). We can now use these values to compute an initial upper bound for the rest of the κ's. We set to infinity all κ's associated with rules in R − R_1, and evaluate the right-hand side of Eq. (10) associated with rules in R_2.
These evaluations will produce an upper bound for the κ's associated with these rules. Using these upper bounds we repeat this process and compute upper bounds for the κ's associated with rules in R_3, and so on. By following the ordering induced by the labeling procedure, we are assured that none of these upper bounds for the κ's will be infinity. By definition, for each rule r_i ∈ R there is at least one world w_i in which r_i is verified and the rules that are falsified belong to lower-ranked sets R_m, 1 ≤ m < n. Thus, the min operator ranging over w_i must produce a κ-term which is no greater than that associated with that particular world w_i, and this term is clearly finite.

Once these initial upper bounds are computed we can divide the κ's into two disjoint sets: the first set includes the κ's for which a precise value is known (initially only the κ's associated with rules in R_1), and the second set includes those for which only an upper bound is known. Let FINAL denote the first set, and let BOUNDED denote the second set. The objective is to compute precise values for the κ's in BOUNDED and transfer them into FINAL. Thus, until BOUNDED is empty, we repeatedly perform the following steps: (1) identify those κ's in BOUNDED with minimal upper bound, (2) remove them from BOUNDED, (3) include them in FINAL and update the bounds of the remaining κ's in BOUNDED.

Theorem 7 Given a consistent MC set R, the above procedure computes a solution to Eq. (10), and requires no more than |R| − |R_1| iterations.

Once the κ-values are computed we have a complete model, and a new rule r: A → B is ratified as an ME-plausible conclusion if and only if the following equation is satisfied:

min_{w⊨A∧B} [Σ_{r_j∈R_w^−} κ_j] < min_{w⊨A∧¬B} [Σ_{r_j∈R_w^−} κ_j]   (11)

Note that under the approximations described, the satisfaction of this equation will guarantee the satisfaction of Def. 3. We conclude this section with a discussion of the issue of recognizing the active constraints in non-MC knowledge bases.
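As a sketch of this computation, Eq. (10) can be solved for small examples by naive fixed-point iteration from below; this is a simplified stand-in for the FINAL/BOUNDED procedure above, run here on the rule set R_pw = {p → ¬f, p → b, b → f, b → w} that the paper analyzes in Section 5:

```python
from itertools import product

# Rules as (antecedent, consequent) predicates over valuations.
RULES = [
    (lambda v: v["p"], lambda v: not v["f"]),  # p -> ~f
    (lambda v: v["p"], lambda v: v["b"]),      # p -> b
    (lambda v: v["b"], lambda v: v["f"]),      # b -> f
    (lambda v: v["b"], lambda v: v["w"]),      # b -> w
]
ATOMS = ["p", "b", "f", "w"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=4)]

def solve_kappas(rules, worlds):
    """Iterate kappa_i = 1 + min over worlds verifying r_i of the summed
    weights of the other rules falsified there, until a fixed point."""
    kappa = [0] * len(rules)
    while True:
        new = [1 + min(sum(kappa[j] for j, (a, c) in enumerate(rules)
                           if j != i and a(w) and not c(w))
                       for w in worlds if ante(w) and cons(w))
               for i, (ante, cons) in enumerate(rules)]
        if new == kappa:
            return kappa
        kappa = new

print(solve_kappas(RULES, WORLDS))  # [2, 2, 1, 1]
```

For consistent MC sets the iteration grows monotonically and stabilizes; here it recovers κ_1 = κ_2 = 2, κ_3 = κ_4 = 1, matching the values derived in Section 5.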
The Lagrange multipliers method treats all constraints as equalities, and finds local maxima on the boundaries defined by these constraints. The problem with blindly assuming that all rules in a set R are active is that the system may become overspecified, and the technique we have been using might find spurious solutions which do not satisfy all the constraints. Such constraint violations cannot be detected by the infinitesimal analysis presented here, since the coefficients of ε were ignored. Some passive constraints could, in principle, be detected before the maximization process begins, since they do not participate in delimiting the feasible region. For example, consider:⁸

P(c|s) ≥ 1 − ε   (12)
P(b|s) ≥ 1 − ε   (13)
P(c∧b|s) ≥ 1 − ε   (14)

Since the third statement implies the first two, the point of maximum entropy must lie in the region defined solely by Eq. (14). The first two constraints are completely irrelevant (and will be satisfied by strict inequalities). The main problem is posed by those constraints that do constrict the feasible region, but do not influence the maximum value of the entropy. These constraints represent rules which already belong to the maximum entropy closure of the active set of rules. We know of no effective method of identifying these passive constraints in advance, and are currently exploring ways of detecting these constraints within the κ equations since, being passive, they should obtain a κ value of 0.

We remark that the task of identifying the passive constraints will be performed only once, during the construction of the model from R, and can be amortized over many queries as long as R remains fixed. This optimistic note, however, should be further qualified by the fact that the minimization required by Eq. (11) is NP-complete even for Horn expressions.⁹
5 Summary and Illustration

The proposed method of infinitesimal ME-analysis computes a ranking function θ on worlds, where θ(w) = log_ε(P′(w)) corresponds to the lowest exponent of ε in the expansion of P′_{ε,R}(w) into a power series in ε. This ranking function is encoded parsimoniously by assigning an integer weight κ to each rule r ∈ R, and letting θ(w) be the sum of the weights associated with the rules falsified by w. Thus, worlds of lower θ are considered more "normal" than those of higher θ. The weight κ, in turn, reflects the "cost" we must add to each w that falsifies the associated rule A → B, so that the resulting ranking function would satisfy the constraint conveyed by R, namely,

min{θ(w) | w ⊨ A_i ∧ ¬B_i} > min{θ(w) | w ⊨ A_i ∧ B_i}  ∀r_i ∈ R   (15)

⁸ Note that these are the probability constraints imposed by R_scb above.
⁹ Rachel Ben-Eliyahu, personal communication.

These considerations led to a set of n nonlinear equations for the weights κ which under certain conditions can be solved by iterative methods. The criterion for deciding whether an arbitrary rule P → Q is an ME-plausible conclusion of R is:

δ(P ∧ Q) < δ(P ∧ ¬Q)   (16)

where for any formula E, δ(E) is defined as δ(E) = min{θ(w) | w ⊨ E}. In other words, a rule is ME-plausible iff the ranking associated with the minimal world falsifying the rule is higher than the ranking associated with the minimal world verifying the rule.

As an example consider the MC set R_pw = {r_1: p → ¬f, r_2: p → b, r_3: b → f, r_4: b → w}.¹⁰ Both r_3 and r_4 are tolerated by R_pw (they belong to the first set in the partition of R_pw), hence κ_3 = κ_4 = 1. The equations for κ_1 and κ_2 are:

κ_1 = 1 + min[κ_3, κ_2, (κ_3 + κ_4)]
κ_2 = 1 + min[κ_3, κ_1, (κ_3 + κ_4), (κ_1 + κ_4)]

giving κ_1 = κ_2 = 2. These κ_i-values, 1 ≤ i ≤ 4, completely specify the ranking θ. Assume we wish to inquire whether "green birds fly", i.e. whether (b ∧ g) |~ f ∈ C_ME(R_pw).
Since the propositional variable g¹¹ does not appear in any rule of R_pw, its truth value does not constrain the ranking function θ (see Eq. (15)). Thus, it must be the case that δ(g∧b∧f) = δ(b∧f) and δ(g∧b∧¬f) = δ(b∧¬f). Since b → f ∈ R_pw, δ(b∧¬f) > δ(b∧f), and (b∧g) |~ f is indeed in the closure. In general it follows that the ME formalism complies with the intuition that, if nothing is specified in R about some property g, and A → B can be concluded from R, then (A∧g) → B should also follow from R.

Now consider whether penguins, despite being an exceptional class of birds (with respect to flying), can inherit other properties of birds. In particular, we wish to test whether R_pw sanctions that penguins are winged-animals. It is easy to verify that δ(p∧w) = 1 while δ(p∧¬w) = 2, and in accordance with Eq. (16), p → w is an ME-plausible conclusion of R_pw. Such conclusions, representing property inheritance across exceptional classes, are not sanctioned by ε-semantics nor by the rational closure of [Lehmann 89].

¹⁰ The literals p, b, f, and w can be taken to mean penguin, bird, fly, and winged-animal, respectively.
¹¹ We are slightly abusing the language by using g both as a propositional variable denoting the property "green" and the proposition that "green" is true. The correct meaning of g, however, should be clear from the context.

6 Discussion

As we saw in the previous section, ME overcomes some of the deficiencies of ε-semantics as well as rational monotony. In particular it properly handles irrelevant properties (a deficiency of ε-semantics), and sanctions property inheritance across exceptional subclasses (a deficiency of both ε-semantics and rational monotony). In fact, maximum entropy can be viewed as an extension of these two systems. Like ε-semantics, ME is based on infinitesimal probability analysis, and like rational monotony, ME is based on optimal rankings of models subject to constraints, and sanctions inferences on the basis of more normal worlds.
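The R_pw computations above can be checked mechanically. A minimal sketch, with the κ-weights κ_1 = κ_2 = 2, κ_3 = κ_4 = 1 from the example hard-coded:

```python
from itertools import product

# R_pw = {r1: p -> ~f, r2: p -> b, r3: b -> f, r4: b -> w}, each rule
# stored as (antecedent, consequent, kappa-weight).
RULES = [
    (lambda v: v["p"], lambda v: not v["f"], 2),  # penguins don't fly
    (lambda v: v["p"], lambda v: v["b"],     2),  # penguins are birds
    (lambda v: v["b"], lambda v: v["f"],     1),  # birds fly
    (lambda v: v["b"], lambda v: v["w"],     1),  # birds are winged
]
ATOMS = ["p", "b", "f", "w"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=4)]

def theta(world):
    """theta(w): sum of the kappa-weights of the rules falsified in w."""
    return sum(k for ante, cons, k in RULES if ante(world) and not cons(world))

def delta(formula):
    """delta(E) = min{theta(w) | w satisfies E}."""
    return min(theta(w) for w in WORLDS if formula(w))

def me_plausible(ante, cons):
    """P |~ Q iff delta(P & Q) < delta(P & ~Q)  (Eq. 16)."""
    return (delta(lambda v: ante(v) and cons(v))
            < delta(lambda v: ante(v) and not cons(v)))

# Penguins inherit wings: delta(p & w) = 1 < 2 = delta(p & ~w).
print(me_plausible(lambda v: v["p"], lambda v: v["w"]))  # True
```

The same `me_plausible` call confirms b |~ f (birds fly) and, per the irrelevance argument, adding an unconstrained atom g to ATOMS would leave all δ-values unchanged.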
Rational monotony, however, is driven by a different ranking, uniquely determined by the relation of tolerance (see Section 2). In this ranking, called Z-ranking in [Pearl 90], worlds are ranked according to the most crucial rule violated in each world, while the rules are ranked according to the partition formed by the consistency test (see Section 2). In contrast, ME ranks worlds according to the weighted sum of rule violations, and it is this difference that explains the ability of ME to conclude that "penguins are winged-animals" in the example from the previous section.

Another instance where the ME ranking proves beneficial is in answering the following question, posed in [Lifschitz 89]: can the fact that we derive ¬p ∨ ¬q from p ∨ q when p, q are jointly circumscribed be explained in terms of probabilities close to 0 or 1? Translated to the ME formalism, we have R_pq = {True → ¬p, True → ¬q}, and we wish to inquire whether (p∨q) |~ (¬p∨¬q) is in fact in C_ME(R_pq). Since the minimal world verifying (p∨q) → (¬p∨¬q) violates a subset of the rules violated by the world verifying (p∨q) → (p∧q), we see that δ((p∨q) ∧ (p∧q)) > δ((p∨q) ∧ ¬(p∧q)), which verifies the conclusion. This conclusion is not sanctioned by rational monotony since, given that the two initial rules belong to the same rank (first level of the consistency partition), the ranking on worlds violating one or two rules will be the same.

Note, however, that had we encoded the information slightly differently, e.g. R′_pq = {True → ¬(p∧q)}, ME would not yield the expected conclusion. This sensitivity to the format in which rules are expressed seems at odds with one of the basic conventions of traditional logic, where a → (b∧c) is regarded as a "shorthand" for a → b and a → c. However, it might be useful for distinguishing fine nuances in natural discourse, treating q and p as two independent properties if expressed by two rules, and related properties if expressed together. Another pattern of reasoning sanctioned by maximum entropy is contraposition. For example, from R_pw we could conclude that animals with no wings are not birds (¬w |~ ¬b), but penguins with no wings are "ambiguous": they may or may not be birds.

The main weakness of the ME approach is the failure to respond to causal information (see [Pearl 88], pp. 463, 519, and [Hunter 89]). This prevents this formalism from properly handling tasks such as the Yale shooting problem [Hanks & McDermott 86], where rules of causal character are given priority over other rules. This weakness may perhaps be overcome by introducing causal operators into the ME formulation, similar to the way causal operators are incorporated within other formalisms of nonmonotonic reasoning (e.g., [Shoham 86], [Geffner 89]).

References

[Adams 66] Adams, E., Probability and The Logic of Conditionals, in Aspects of Inductive Logic, ed. J. Hintikka and P. Suppes, Amsterdam: North Holland.
[Adams 75] Adams, E., The Logic of Conditionals, chapter II, Dordrecht, Netherlands: D. Reidel.
[Aoki 71] Aoki, M., Introduction to Optimization Techniques, Chapter 5, The Macmillan Company, New York, 1971.
[Cheeseman 83] Cheeseman, P., A Method of Computing Generalized Bayesian Probability Values for Expert Systems, Proc. of Intl. Joint Conf. on AI (IJCAI-83), Karlsruhe, W. Germany, 198-202.
[Dowling & Gallier 84] Dowling, W. and J. Gallier, Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae, Journal of Logic Programming, 3:267-284, 1984.
[Geffner & Pearl 88] Geffner, H. and J. Pearl, A Framework for Reasoning with Defaults, to appear in Defeasible Reasoning and Knowledge Representation, H. Kyburg et al. (eds.), Kluwer Publishers, 1990.
[Geffner 89] Geffner, H., Default Reasoning: Causal and Conditional Theories, UCLA Cognitive Systems Lab. TR-137, PhD dissertation, December 1989.
[Goldszmidt & Pearl 89] Goldszmidt, M. and J. Pearl, Deciding Consistency of Databases Containing Defeasible and Strict Information, Proceedings of the 5th Workshop on Uncertainty in AI, Windsor, Canada, August 1989, pp. 134-141.
[Goldszmidt & Pearl 90] Goldszmidt, M. and J. Pearl, On the Relation Between System Z and the Rational Closure, to appear in Proceedings of 3rd Intl. Workshop on Nonmonotonic Reasoning, 1990.
[Goldszmidt, Morris & Pearl 90] Goldszmidt, M., P. Morris and J. Pearl, A Maximum Entropy Approach to Nonmonotonic Reasoning, Technical Report, Cognitive Systems Lab., UCLA.
[Hanks & McDermott 86] Hanks, S., and D. McDermott, Default Reasoning, Nonmonotonic Logics, and the Frame Problem, Proc. 5th National Conference on AI (AAAI-86), Philadelphia, pp. 328-333.
[Hunter 89] Hunter, D., Causality and Maximum Entropy Updating, Intl. Journal of Approximate Reasoning, 3 (no. 1), pp. 87-114.
[Jaynes 79] Jaynes, E., Where Do We Stand on Maximum Entropy?, in The Maximum Entropy Formalism, eds. R. Levine and M. Tribus, Cambridge: MIT Press, 1979.
[Kraus et al. 88] Kraus, S., D. Lehmann and M. Magidor, Preferential Models and Cumulative Logics, Technical Report TR 88-15, Dept. of Computer Science, Hebrew University, Jerusalem, Israel, November 1988.
[Lehmann & Magidor 88] Lehmann, D. and M. Magidor, Rational Logics and their Models: A Study in Cumulative Logics, TR-8816, Dept. of Computer Science, Hebrew Univ., Jerusalem, Israel.
[Lehmann 89] Lehmann, D., What Does a Knowledge Base Entail?, Proceedings of First International Conference on Knowledge Representation, Toronto, Canada, 1989, pp. 212-222.
[Lifschitz 89] Lifschitz, V., Open Problems on the Border of Logic and Artificial Intelligence, unpublished manuscript, 1989.
[Makinson 89] Makinson, D., General Theory of Cumulative Inference, Second International Workshop on Non-Monotonic Reasoning, Springer-Verlag, 1989.
[McCarthy 86] McCarthy, J., Applications of Circumscription to Formalizing Common-Sense Knowledge, Artificial Intelligence, 28 (no. 1):89-116.
[Pearl 88] Pearl, J., Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo: Morgan Kaufmann Publishers.
[Pearl 89a] Pearl, J., Probabilistic Semantics for Nonmonotonic Reasoning: A Survey, in Proceedings of the First Intl. Conf. on Principles of Knowledge Representation and Reasoning, Toronto, Canada, May 1989, pp. 505-516.
[Pearl 90] Pearl, J., System Z: A Natural Ordering of Defaults with Tractable Applications to Nonmonotonic Reasoning, in Theoretical Aspects of Reasoning About Knowledge, M. Vardi (ed.), Morgan Kaufmann Publishers, 1990, pp. 121-135.
[Satoh 90] Satoh, K., A Probabilistic Interpretation for Lazy Nonmonotonic Reasoning, to appear as ICOT-TR-525, Institute for New Generation Computer Technology, 1990.
[Shoham 86] Shoham, Y., Chronological Ignorance: Time, Necessity, and Causal Theories, Proc. 5th Natl. Conf. on AI (AAAI-86), Philadelphia, pp. 389-393.
A Hybrid Framework for Representing Uncertain Knowledge

Alessandro Saffiotti*
IRIDIA - Université Libre de Bruxelles
Av. F. Roosevelt 50 - CP 194/6
B-1050 Bruxelles - Belgium
E-mail: rO1507@bbrbfuOl.bitnet

Abstract

This paper addresses the problem of bridging the gap between the fields of Knowledge Representation (KR) and Uncertain Reasoning (UR). The proposed solution consists of a framework for representing uncertain knowledge in which two components, one dealing with (categorical) knowledge and one dealing with uncertainty about this knowledge, are singled out. In this sense, the framework is "hybrid". This framework is characterized in both model-theoretic and proof-theoretic terms. State of belief is represented by "belief sets", defined in terms of the "functional approach to knowledge representation" suggested by Levesque. Examples are given, using first order logic and (a minimal subset of) M-Krypton for the KR side, and a yes/no trivial case and Dempster-Shafer theory for the UR side.

1. Introduction

An impressive amount of work has been carried out over the last two decades in the fields of Knowledge Representation (KR) and of Uncertain Reasoning (UR), resulting in a number of concepts being investigated, a number of problems being identified, and a number of solutions being developed (see e.g. Israel & Brachman, 1981; Brachman & Levesque, 1985; Saffiotti, 1987; Henkind & Harrison, 1988). Yet, curiously enough, these two fields, which we would expect to be tightly related, apparently suffered from a lack of cross-fertilization, or even of communication, during their development. As a matter of fact, the literature in each field seems to have scarcely taken into account the problems and the results that emerged in the other one.
As an example, consider the assertions "Birds are animals", "Typically birds fly", "Most of my friends like music" and "Smoke suggests fire": a person working in KR would recognize them as pertaining to qualitatively different types of knowledge, and would call for different mechanisms to represent (part of) them. Though, a Bayesian would probably code all of them by the single structure P(A|B) = x.

* This research has been partially supported by the ARCHON project, funded by grants from the Commission of the European Communities under the ESPRIT-II Program, P-2256. The partners in the ARCHON project are: Krupp Atlas Elektronik, Amber, CERN, CNRG-NTUA Athens, ECRC, Framentec, FWI Univ. of Amsterdam, Iberduero, Univ. Libre de Bruxelles, JRC Ispra, Labein, Volmac, Univ. of Porto, Queen Mary College.

In this paper we present an attempt at bridging the gap between the UR and the KR fields, by describing a formal unifying framework for representing uncertain knowledge. It is of basic importance here to make clear what we mean by "uncertain knowledge". We postulate that uncertain knowledge is composed of categorical knowledge accompanied by information regarding the uncertainty about it; for instance, the fragment of uncertain knowledge expressed by "Smoke suggests fire" can be seen as a piece of categorical knowledge (e.g. a "symptom-of" link between the concepts "Smoke" and "Fire" in a semantic net), plus information about its uncertainty (e.g. a probability value for that link). From an AI viewpoint, this corresponds to seeing uncertainty as metaknowledge. Throughout this paper, we will use the expressions "knowledge component" and "uncertainty component" to refer to the two components of uncertain knowledge. Two hypotheses are hidden in this notion of uncertain knowledge.
First, knowledge, and the reasoning processes based on it, is categorical: it is the validity of knowledge (and that of the conclusions drawn from it) that is a matter of degree, not the knowledge itself. Second, the only uncertainty we talk about is epistemic uncertainty: uncertainty about the validity of our knowledge with respect to an intrinsically certain reality. An interesting problem is whether other types of uncertainty (vagueness, for instance) are captured by our notion or not. For the case of vagueness, the answer is affirmative, if we accept the interpretation of the vagueness of the sentence "Enzo is rich" as pertaining to the adequacy (in our mind) of the description "rich" to the individual "Enzo", rather than to the ontological fuzziness of the predicate "rich" (cf. Schefe, 1980).

The formal framework that we define in this paper deals with uncertain knowledge by singling out the knowledge component and the uncertainty component. Both the differences and the relationships between these two components are accounted for. Under the hypothesis of considering uncertainty as a kind of knowledge (about our knowledge), the proposed framework fits the "hybrid knowledge representation" paradigm (Brachman & Levesque, 1982). Apart from its theoretical interest, this framework is meant to form a basis for defining -given a KR system and a UR system- a combined uncertain knowledge representation system able to perform uncertain reasoning on structured knowledge.

SAFFIOTTI 653
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The rest of this paper is organized as follows: section 2 presents the conceptual background and the formal definition of our framework. Section 3 gives a model-theoretic account of this framework in terms of possible worlds, and shows three examples. Section 4 mimics section 3, but uses a proof-theoretic perspective.
Some hints for building a real hybrid system for uncertain knowledge are also given. Finally, section 5 concludes.

2. Hybrid Belief Structures

Once we have clear the notion of "uncertain knowledge" that we want to model, we can state two essential requirements for a general framework in which this uncertain knowledge can be represented and dealt with:
1. it must account for both the distinction and the relationship between "knowledge" and "uncertainty", as the two basic components of uncertain knowledge;
2. it must be general enough to accommodate a number of KR languages and of UR calculi: the framework should not make strong hypotheses on the form in which knowledge and uncertainty are represented.

In order to guarantee ourselves a general formalization, we adhere to what Levesque (1984) has called the "functional approach to Knowledge Representation". In this approach, knowledge is represented by abstract data structures ("Knowledge Bases") characterized in terms of the operations that can be made on them. Typical operations will be a query operation "Ask", an updating operation "Tell", and an initialization operation "Empty". Correspondingly, we model uncertain knowledge in terms of abstract "Belief Sets", the uncertain correspondent of Levesque's Knowledge Bases¹. In them, knowledge and uncertainty about this knowledge are represented according to a specific KR language and a specific UR calculus, respectively. We will write BS to refer to the set of belief sets. Belief sets will be characterized in terms of the following operations:

Ask: L × BS → Γ
Tell: L × Γ × BS → BS
Empty: {} → BS

where L is the language used to represent knowledge (a KR language), and Γ is the set of belief judgements used to represent uncertainty about this knowledge (numbers, "true/false", tokens, etc.).
Intuitively, "Ask[α,K]" returns an element of Γ representing the extent to which the belief expressed by formula α of L may be considered true² in the belief set K, while "Tell[α,x,K]" returns the new belief set obtained by assimilating in K the belief expressed by α with belief judgement x. "Empty[]" simply returns a knowledge base without any knowledge.

¹ Here "belief", as opposed to "knowledge", is not related to what is true in the real world, and is a matter of degree. We later use "belief judgement" instead of "truth value" in a similar way.
² Notice that we are only considering yes-no queries; this assumption is not restrictive (Levesque, 1984).

Unfortunately, the operations proposed by Levesque do not account for the dichotomous nature of uncertain knowledge. We want to find a decomposition of these operations, in which the KR component and the UR component are singled out. In order to do this, we will borrow from another well-known dichotomy: the distinction between extension and intension. By intension of a sentence we mean its "meaning", i.e. the idea expressed by it. By extension of a sentence we mean the particular object designated by it (i.e. a truth value). E.g. the two sentences "17 is a prime" and "A=A" have the same extension (true), but different intensions. For our goals, we accept the following characterization of intensions: the intension of a sentence is a function from possible states of affairs (contexts) to the truth value of the sentence in each context. In this light, we can state the problem addressed by the Ask operation as that of judging how much the context (partially and doubtfully) specified by a belief set K is likely to be one in which the asked formula is true. We decompose this problem into two steps:
1. Given formula α, evaluate its intension (written ‖α‖);
2. Given ‖α‖, evaluate its belief judgement in the context (partially and doubtfully) specified by K.
Intuitively, step 1 corresponds to a process of meaning attribution, i.e. a mapping between the structures we use to represent knowledge and the knowledge which is represented. This process is typically performed by a KR system. So step 1 identifies the KR component in the Ask process. On the other hand, in step 2 we completely disregard linguistic structures: rather, we evaluate belief judgements of abstract propositions (intensions) in a given context. This corresponds to the entailment part of what has been traditionally called "the evidential reasoning problem" in the UR literature (e.g. Thompson, 1985). Thus, step 2 identifies the UR component of the Ask process. An analogous decomposition of "Tell[α, x, K]" can be given as: 1) evaluate ||α|| (meaning attribution), and 2) update K by assimilating ||α|| (with its belief judgement x) in it (updating part of the evidential reasoning problem). The above decompositions are graphically summarized below:

  Ask:   ℒ × BS      --(Φ × I)-->    ℐ × BS      --(Ψ¹)-->  Γ
  Tell:  ℒ × Γ × BS  --(Φ × I × I)-->  ℐ × Γ × BS  --(Ψ²)-->  BS

where ℐ is the set of intensions, Φ is a function responsible for attributing a meaning to formulae, Ψ is a function responsible for solving the (two facets³ of the) evidential reasoning problem, and I is the identity.

³ Ψ will actually be considered as a pair of functions.

654 KNOWLEDGE REPRESENTATION

We are now in a position to formally define hybrid belief structures, the basic ingredient of our framework. As a notational convention, we use α, β, ... to denote formulae of ℒ; x, y, ... for belief judgements; p, q, ... for intensions; K1, K2, ... for belief sets; and ℘Ω for the power set of a set Ω.

Def. 1. Let ℒ be a language, and Γ and ℐ be non-empty sets. Let 𝔅 stand for (ℐ × Γ). Let also Φ be a function from ℒ to ℐ, and Ψ be a pair of functions (Ψ¹, Ψ²) such that Ψ¹: ℐ × ℘𝔅 → Γ and Ψ²: 𝔅 × ℘𝔅 → ℘𝔅. Then we call the tuple ℋ = ⟨ℒ, Φ, ℐ, Γ, Ψ⟩ a hybrid belief structure.
In words, we are given a KR language ℒ, and a function Φ for it which maps each formula of ℒ to an intension (its meaning). ℒ and Φ together constitute the KR component of ℋ. On the other hand, we are given the set Γ of belief judgements of an UR calculus, and a pair of functions (Ψ¹, Ψ²) which define its dynamic behaviour on the space of the intensions. Ψ¹ returns a belief judgement given an intension and a belief set: this is the belief judgement for the given intension with respect to the belief judgements of the other beliefs in the belief set. Ψ² returns a (new) belief set given a (new belief judgement for a) belief and an (old) belief set: this is the updated belief set in which the belief judgements of all beliefs have been modified in order to account for the new one. Γ and Ψ together constitute the UR component of ℋ. Notice that the only hypothesis made on the KR and the UR components is that their semantics can be described in terms of the functions Φ and Ψ above. These components act as independent and autonomous systems, which cooperate via elements of ℐ. Intensions are used as abstract objects which represent our knowledge independently from the particular linguistic structures we use (in ℒ) to express it: it is to these objects that uncertainty is allocated. We now define belief sets through the operations which characterize their behaviour:

Def. 2. Let ℋ = ⟨ℒ, Φ, ℐ, Γ, Ψ⟩ be a hybrid belief structure. Then belief sets on ℋ are defined by the following operations:

  Empty_ℋ[]       = { ⟨Φ(⊤_ℒ), true_Γ⟩ }
  Ask_ℋ[α, K]     = Ψ¹(Φ(α), K)
  Tell_ℋ[α, x, K] = Ψ²(⟨Φ(α), x⟩, K)

where ⊤_ℒ represents the tautology of ℒ, and true_Γ ∈ Γ represents total confidence.

Operationally speaking, belief sets are built starting from Empty_ℋ[], and then by performing successive Tell_ℋ operations on it. Thus, belief sets are sets of pairs ⟨p, x⟩, where p is an intension and x ∈ Γ.
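Def. 2 composes the two components mechanically: Ask is Ψ¹ after Φ, and Tell is Ψ² after Φ. This composition can be sketched generically; the field names and the tiny possible-worlds instantiation in the test are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import Any, Callable, FrozenSet, Tuple

Belief = Tuple[Any, Any]        # a pair <intension, judgement>
BeliefSet = FrozenSet[Belief]

@dataclass(frozen=True)
class HybridBeliefStructure:
    phi: Callable[[Any], Any]                       # KR component: formula -> intension
    psi1: Callable[[Any, BeliefSet], Any]           # UR component: evaluation
    psi2: Callable[[Belief, BeliefSet], BeliefSet]  # UR component: updating
    top: Any                                        # Phi(tautology of L)
    true_j: Any                                     # total confidence in Gamma

    def empty(self) -> BeliefSet:
        # Empty[] = { <Phi(T), true> }
        return frozenset({(self.top, self.true_j)})

    def ask(self, alpha, k):
        # Ask[alpha, K] = Psi1(Phi(alpha), K)
        return self.psi1(self.phi(alpha), k)

    def tell(self, alpha, x, k):
        # Tell[alpha, x, K] = Psi2(<Phi(alpha), x>, K)
        return self.psi2((self.phi(alpha), x), k)
```

The KR and UR components never see each other's internals: they cooperate only through intensions, exactly as the text prescribes.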
Notice that, while we use formulae of ℒ to interact with belief sets via the Ask and Tell operations, belief judgements are actually associated with the intensions connoted by these formulae (and not with the formulae themselves).

3. The Semantic Perspective

We now want to see how the belief sets defined above can actually perform uncertain reasoning on structured knowledge, possibly merging together already existing KR and UR systems. We will make a two-step descent from the abstract level we occupied before. As a first step, we will consider particular choices for the set of intensions ℐ. This will of course pose more constraints on the form of the Φ and Ψ functions, and hence on the set of KR and UR calculi we can capture. As a second step, we will consider full instantiations, where the elements of the framework will be completely specified for a particular choice of KR and UR calculi. In this section, we stick to a model-theoretic approach to both KR and UR, by using possible worlds to represent intensions. In the next section, we will use (sets of sets of) formulae to represent intensions, so switching to a syntactic viewpoint.

Let M = ⟨S, D, V, (ℛᵢ)ᵢ≥0⟩ be a Kripke structure⁴, where as usual S is a set of states, D is a domain of individuals, V is a mapping from symbols of ℒ and states s ∈ S to elements (and sets) of D, and the ℛᵢ's are binary relations over S. A Kripke world is a pair ⟨M, s⟩, with s ∈ S. We then let

  ℐ ⊆ ℘{ ⟨M, s⟩ | ⟨M, s⟩ is a Kripke world }

Given a KR language ℒ and a formula α of it, Φ(α) consists of the set of worlds where α holds according to the semantics given to ℒ. Thus, according to this semantic perspective, we must be given a model-theoretic account of any KR system we want to fit in a hybrid belief structure, where the notion of truth in a world is defined. A belief is a set of worlds together with a belief judgement, and belief sets are composed of these beliefs.
The pair of functions Ψ must consider sets of worlds as its basic objects. This means that a description of an UR technique in terms of possible worlds must be available in order to use this technique in a hybrid belief structure.

Example 1: First Order Logic + {yes, no}

Only for the sake of familiarizing the reader with the use of hybrid belief structures and of Ask and Tell operations to define belief sets, we present a very simple example, where knowledge is represented in standard first order logic (FOL), and uncertainty is represented by yes/no values. We need to define

⁴ Other mathematical structures which are used to give semantics to KR languages could have been employed. Kripke structures have been chosen here mainly because of their wide use within the AI community (e.g. Halpern & Moses, 1985).

SAFFIOTTI 655

the elements of ℋ₁ = ⟨ℒ_FOL, Φ_FOL, ℐ, Γ_TF, Ψ_TF⟩. ℒ_FOL is a standard first order language, and Γ_TF = {yes, no}. ℐ is composed of sets of worlds ⟨M, s₀⟩ such that M = ⟨{s₀}, D, V⟩ is a standard FOL interpretation structure. We indicate by W the set of all these worlds. We then define

  Φ_FOL(α) = { ⟨M, s₀⟩ | ⟨M, s₀⟩ ⊨_FOL α }

where ⊨_FOL is the standard truth relation for FOL. In words, Φ_FOL(α) returns the set of all the first order models of α. The definition of the Ψ_TF pair of functions is given by:

  Ψ¹_TF(p, K)      = yes if q ⊆ p for every ⟨q, yes⟩ ∈ K; no otherwise
  Ψ²_TF(⟨p, x⟩, K) = { ⟨q ∩ p, yes⟩ | ⟨q, yes⟩ ∈ K } if x = yes; K otherwise

The condition "q ⊆ p" can be read in terms of logical entailment: given two formulae α and β of ℒ, respectively connoting p and q, q ⊆ p is true whenever {w | w ⊨ β} ⊆ {w | w ⊨ α}, that is whenever β ⊨ α. As for the updating side, the belief set obtained from K by assigning the value yes to the intension p is made by retaining those sets of worlds in K which are consistent with p.
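The Ψ_TF pair can be exercised directly on a tiny propositional fragment. This is a sketch only: I restrict the FOL formulas dog(Alex) and animal(Alex) to two propositional stand-in atoms, and all function names are mine.

```python
from itertools import combinations

ATOMS = ("dog", "animal")  # propositional stand-ins for dog(Alex), animal(Alex)

# A world is the set of atoms true in it; W is the set of all worlds.
W = frozenset(frozenset(c) for r in range(len(ATOMS) + 1)
              for c in combinations(ATOMS, r))

def phi(formula):
    # Phi restricted to propositional formulas: the set of models of the formula
    return frozenset(w for w in W if formula(w))

def ask_tf(p, k):
    # Psi1_TF: yes iff every believed intension entails p
    return "yes" if all(q <= p for (q, x) in k if x == "yes") else "no"

def tell_tf(p, x, k):
    # Psi2_TF: intersect believed world sets with p (only "yes" is assimilated)
    return frozenset((q & p, "yes") for (q, _) in k) if x == "yes" else k

empty = frozenset({(W, "yes")})

dog = phi(lambda w: "dog" in w)
dog_implies_animal = phi(lambda w: "dog" not in w or "animal" in w)
animal = phi(lambda w: "animal" in w)

k = tell_tf(dog, "yes", tell_tf(dog_implies_animal, "yes", empty))
print(ask_tf(animal, k))      # -> yes
print(ask_tf(animal, empty))  # -> no (incompleteness of Empty[])
```

The second query shows the incompleteness the text mentions: an empty belief set answers "no" both to a non-valid formula and to its negation.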
Belief set operations are then easily defined in terms of Φ_FOL and Ψ_TF⁵:

  Empty₁[]       = { ⟨W, yes⟩ }
  Ask₁[α, K]     = Ψ¹_TF(Φ_FOL(α), K)
  Tell₁[α, x, K] = { ⟨q ∩ Φ_FOL(α), yes⟩ | ⟨q, yes⟩ ∈ K } if x = yes; K otherwise

As required, knowledge is expressed in these belief sets through FOL formulae, and uncertainty about it is expressed by "yes/no" values. For instance, if the belief set K is built by

  Tell₁[dog(Alex), yes, Tell₁[∀x.(dog(x) ⊃ animal(x)), yes, Empty₁[]]]

then we have Ask₁[animal(Alex), K] = yes. Notice that the definition of Ask accounts for the incompleteness of belief sets: e.g. Ask₁[α, Empty₁[]] = no and Ask₁[¬α, Empty₁[]] = no for every non-valid formula α.

⁵ Not surprisingly, the definitions of Ask and Tell for this "collapsed" case basically correspond to those in (Levesque, 1984).

Example 2: FOL + Dempster-Shafer Theory

In the next example we want to define belief sets in which knowledge is represented in FOL, while uncertainty is dealt with according to the Dempster-Shafer (DS) theory of evidence (Shafer, 1976; Smets, 1988). As you could expect, only the UR component of the ℋ₁ defined in the previous example needs to be modified. Our new hybrid belief structure is ℋ₂ = ⟨ℒ_FOL, Φ_FOL, ℐ, Γ_DS, Ψ_DS⟩, where ℒ_FOL, Φ_FOL and ℐ are as in ℋ₁. Γ_DS is now the [0,1] interval. A belief set on ℋ₂, then, is a set of pairs ⟨p, x⟩, with p being a set of worlds and x ∈ [0,1]. In the terminology of the DS theory, we can see a belief set K as a "basic probability assignment" (bpa) on the set W of possible (FOL) worlds, i.e. a distribution of a unitary mass among the subsets of W.

  Ψ¹_DS(p, K) = Σ { x | ⟨q, x⟩ ∈ K and q ⊆ p }

returns an element of [0,1] that corresponds, in DS terminology, to the value of Bel(p) with respect to the bpa represented by K. Intuitively, our confidence in the truth of the knowledge represented by p is just the sum of the mass values attributed to any intension in K whose truth entails the truth of p.
As for the updating half of Ψ_DS, we notice that updating is typically performed in DS theory by combining, according to the so-called Dempster's rule of combination, the bpa representing the present state of belief with that representing the new evidence. So, we combine the old bpa K with a bpa which allocates the desired amount of belief to the new intension⁶, and the rest to W:

  Ψ²_DS(⟨p, x⟩, K) = K ⊕ { ⟨p, x⟩, ⟨W, 1−x⟩ }

where ⊕ stands for Dempster's rule of combination, recast in terms of belief sets (see full paper). The definitions of Empty₂, Ask₂ and Tell₂ are then given by:

  Empty₂[]       = { ⟨W, 1⟩ }
  Ask₂[α, K]     = Σ { x | ⟨q, x⟩ ∈ K and q ⊆ Φ_FOL(α) }
  Tell₂[α, x, K] = K ⊕ { ⟨Φ_FOL(α), x⟩, ⟨W, 1−x⟩ }

We show the behaviour of these belief bases by a simple example. Consider the set of worlds W = {b, d, a, da}, where D contains the only individual alex, and V is such that in the world "d", alex is in the extension of Dog but not in that of Animal; in "da", he is in the extension of both Dog and Animal; etc. We then have the following belief sets:

  κ₀ = Empty₂[] = { ⟨W, 1⟩ }
  κ₁ = Tell₂[(isa Dog Animal), 0.9, κ₀] = { ⟨W, 0.1⟩, ⟨{b, a, da}, 0.9⟩ }
  κ₂ = Tell₂[(Dog alex), 0.7, κ₁]
     = { ⟨W, 0.03⟩, ⟨{d, da}, 0.07⟩, ⟨{b, a, da}, 0.27⟩, ⟨{da}, 0.63⟩ }

From this we get, for instance: Ask₂[(Animal alex), κ₂] = 0.63.

Example 3: "ZERO" KR System + DS Theory

In the last example of this section we use a KR system, called ZERO, which, though almost unreasonably simple, presents some interesting characteristics. First, ZERO incorporates concepts -like the distinction between "facts" and "definitions"- that are traditionally addressed in the KR field, but which are not even taken into consideration in the UR tradition. The resulting belief sets will therefore exhibit conceptually non-trivial (though minimal) KR capabilities, together with the power of the DS approach in dealing with uncertainty.
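The numeric run from Example 2 (κ₀ through κ₂) can be reproduced with a few lines of Dempster combination over mass dictionaries. This is a sketch under my own encoding of the four worlds; no normalization step is shown because no empty intersections arise in this example.

```python
from collections import defaultdict

# Worlds describe which of Dog/Animal alex belongs to:
# b = neither, d = Dog only, a = Animal only, da = both.
W = frozenset({"b", "d", "a", "da"})

def dempster(m1, m2):
    # Dempster's rule of combination on mass dicts: multiply masses of
    # every pair of focal elements and assign the product to the intersection.
    out = defaultdict(float)
    for p, x in m1.items():
        for q, y in m2.items():
            out[p & q] += x * y
    return dict(out)

def tell_ds(p, x, k):
    # Tell_2[alpha, x, K] = K (+) { <Phi(alpha), x>, <W, 1-x> }
    return dempster(k, {p: x, W: 1.0 - x})

def ask_ds(p, k):
    # Ask_2: sum the masses of intensions that entail Phi(alpha)
    return sum(x for q, x in k.items() if q <= p)

isa_dog_animal = frozenset({"b", "a", "da"})  # worlds where Dog <= Animal
dog_alex       = frozenset({"d", "da"})
animal_alex    = frozenset({"a", "da"})

k0 = {W: 1.0}
k1 = tell_ds(isa_dog_animal, 0.9, k0)
k2 = tell_ds(dog_alex, 0.7, k1)
print(round(ask_ds(animal_alex, k2), 2))  # -> 0.63
```

The four focal elements of k2 carry masses 0.03, 0.07, 0.27 and 0.63, matching the κ₂ of the example.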
Second, ZERO constitutes the very core of the M-KRYPTON KR language⁷: we could use the same procedure described in this example to define belief sets with M-KRYPTON as KR component (by just replacing the truth relation ⊨_Z of ZERO by that of M-KRYPTON). Third, ZERO has been implemented and used as a test-bed to experiment with the ideas presented here. Formulae of ZERO are of two kinds⁸: definitions, of the form (isa Cᵢ Cⱼ), to be read as "Cᵢ is a sub-concept of Cⱼ", and facts, of the form (Cᵢ aⱼ), to be read as "aⱼ is an instance of concept Cᵢ". Semantics is given to ZERO in terms of Kripke structures in the following way:

  ⟨M, s⟩ ⊨_Z (Cᵢ aⱼ)      iff V(aⱼ, s) ∈ V(Cᵢ, s)
  ⟨M, s⟩ ⊨_Z (isa Cᵢ Cⱼ)  iff for all s′ s.t. s ℛ s′, V(Cᵢ, s′) ⊆ V(Cⱼ, s′)

The use of the ℛ relation guarantees the one-way dependence between facts and definitions: if (isa C₁ C₂) holds in a world, all instances of C₁ in that world will necessarily be instances of C₂ as well; yet, the opposite implication does not hold. Our new hybrid belief structure is ℋ₃ = ⟨ℒ_Z, Φ_Z, ℐ, Γ_DS, Ψ_DS⟩, where Γ_DS, Ψ_DS and ℐ are as in ℋ₂, and ℒ_Z is the language of ZERO. The definition of Φ_Z is plainly:

  Φ_Z(α) = { ⟨M, s⟩ | ⟨M, s⟩ ⊨_Z α }

Empty₃, Ask₃ and Tell₃ are exactly the same as in the preceding example, with Φ_Z replacing Φ_FOL. As a result, the modelled belief sets will represent knowledge using the ZERO language and mechanisms, and uncertainty using DS.

⁶ Notice that we are using in the Tell primitive a single value x to specify confidence in the truth of α. We can easily generalize x to pairs (xₜ, x_f), where xₜ and x_f measure our confidence in the truth of α, and in its falseness, respectively. The full formalization of Dempster-Shafer belief sets is given in the full paper.
⁷ M-Krypton (Saffiotti & Sebastiani, 1988) is a KR language that extends Krypton (Brachman et al., 1985) with belief operators for multiple agents. More important here, it provides a Kripke semantics for Krypton in the same style of that given to ZERO.
⁸ A full description of ZERO is given in the full paper.
The following example illustrates the behaviour of these belief sets:

  κ₁ = Tell₃[(isa Researcher Brontosaurus), 0.7, Empty₃[]]
  κ₂ = Tell₃[(isa Brontosaurus Animal), 0.9, κ₁]
  κ₃ = Tell₃[(Researcher alex), 1, κ₂]

From this we get, for instance:

  Ask₃[(Brontosaurus alex), κ₃] = 0.9
  Ask₃[(Animal alex), κ₃] = 0.63

4. The Syntactic Perspective

Though convenient from a formal point of view, the above description in terms of manipulation of abstract possible worlds does not shed much light on the practical side of our enterprise. The hybrid belief structure formalism, however, is not constrained to use sets of possible worlds to represent intensions. As proof-theoretic descriptions are normally more convenient than model-theoretic ones whenever concreteness is at issue, we move now to a proof-theoretic approach, and consider representing intensions in terms of more "tangible" syntactical structures, rather than semantical ones. Given a KR language ℒ equipped with a proof theory, we define a possible argument in ℒ to be a consistent set of formulae of ℒ. We let intensions be sets of possible arguments:

  ℐ ⊆ ℘{ π | π is a possible argument in ℒ }

Given a KR language ℒ and a formula α of it, we focus on all those possible arguments in ℒ such that α is deducible from them according to the given proof theory. We then let Φ(α) consist of the collection of all these sets of formulae⁹. Therefore, according to this syntactic perspective, we must be given a proof-theoretic account of any KR system we want to fit in a hybrid belief structure, where the notion of deduction is defined (but notice that we do not need a truth relation for ℒ any more). A belief is then a set of possible arguments together with a belief judgement, and belief sets are composed of these beliefs.
Example 4: "ZERO" KR System + DS

We restate example 3 in proof-theoretic terms. ZERO provides us with a proof theory (and hence a deduction operator ⊢_Z) consisting of one axiom schema and two inference rules:

  A1. (isa Cᵢ Cᵢ)
  R1. From (Cᵢ aₖ) and (isa Cᵢ Cⱼ) infer (Cⱼ aₖ)
  R2. From (isa Cᵢ Cⱼ) and (isa Cⱼ Cₖ) infer (isa Cᵢ Cₖ)

Our new hybrid belief structure is ℋ₄ = ⟨ℒ_Z, Φ′_Z, ℐ′, Γ_DS, Ψ_DS⟩. Apart from ℐ′, the only difference with ℋ₃ is in the definition of Φ′_Z, given now in terms of deduction:

  Φ′_Z(α) = { π ∈ P | π ⊢_Z α }

where P is the set of all possible arguments for ZERO. As for the uncertainty component of ℋ₄, Γ_DS and Ψ_DS are exactly the same as in ℋ₃. The intuitive interpretation is however different. The condition q ⊆ p in Ψ¹_DS should now be read in terms of deduction: {π | π ⊢ β} ⊆ {π | π ⊢ α} is true when, whatever argument is valid among those proving β, it proves α as well, i.e. whenever β ⊢ α. The Ψ²_DS function, by considering the intersection of possible arguments (via Dempster's rule), extends the set of focal elements to include the intension corresponding to the conjunction of all the knowledge already in K with the new one. Ask₄ and Tell₄ are the same as in example 3, with Φ′_Z replacing Φ_Z, while Empty₄[] = { ⟨P, 1⟩ }. The behaviour of the resulting belief sets on ℋ₄ will be the same as in the ℋ₃ case: using a model-theoretic or a proof-theoretic approach is transparent to the user of the resulting belief sets.

Going down to the syntactic level gets us closer to the implementation side. Consider the architecture below:

⁹ When ℒ comprises a conjunction operator ∧ satisfying "{α, β} ⊢ γ iff {α∧β} ⊢ γ", and a disjunction operator ∨ satisfying "{α} ⊢ γ or {β} ⊢ γ iff {α∨β} ⊢ γ", a (finite) set of possible arguments {πᵢ | πᵢ = {βᵢ₁, ..., βᵢNᵢ}} is equivalent to the formula of ℒ "∨ᵢ(∧ⱼ βᵢⱼ)". In such cases, we could represent intensions by formulae of ℒ. However, this is not true in general (e.g. ZERO does not fit this case).
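A minimal forward-chaining reading of ZERO's proof theory reproduces the entailments behind example 3's queries. This is a sketch: the closure routine and the tuple encoding are mine, and axiom A1 is left implicit since it adds nothing to these particular queries.

```python
def zero_closure(formulas):
    """Close a set of ZERO formulas under rules R1 and R2.

    Encoding: a definition (isa Ci Cj) is the triple ("isa", Ci, Cj);
    a fact (Ci aj) is the pair (Ci, aj).
    """
    db = set(formulas)
    while True:
        new = set()
        for f in db:
            for g in db:
                # R2: (isa Ci Cj), (isa Cj Ck) |- (isa Ci Ck)
                if len(f) == 3 and len(g) == 3 and f[2] == g[1]:
                    new.add(("isa", f[1], g[2]))
                # R1: (Ci ak), (isa Ci Cj) |- (Cj ak)
                if len(f) == 2 and len(g) == 3 and f[0] == g[1]:
                    new.add((g[2], f[1]))
        if new <= db:   # fixpoint reached
            return db
        db |= new

kb = {("isa", "Researcher", "Brontosaurus"),
      ("isa", "Brontosaurus", "Animal"),
      ("Researcher", "alex")}
closed = zero_closure(kb)
print(("Animal", "alex") in closed)  # -> True
```

In the syntactic perspective, an argument π proves exactly the formulas in its closure, so this routine is one concrete way to decide the membership tests π ⊢_Z α that Φ′_Z requires.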
The role of the KR module (possibly an already existing system) is to compute Φ(α) for each formula α of the KR language. When Φ(α) is a set of possible arguments, this role is not so far from a plausible behaviour for a KR system. However, this set includes all the possible arguments for α according to the deduction theory of ℒ, and this is in general a computationally intractable object. Two steps may be undertaken at this stage: 1) to only consider possible arguments which are "reasonable" with respect to what is actually believed in the belief sets; and 2) to let the KR module provide "fragments" of possible arguments, generated while performing inferences; the reconstruction of full possible arguments from these fragments is then performed by the UR module. Step 1 means that the KR module must access the content of the belief set to decide which inferences to draw. Step 2 greatly weakens the demands on the KR module. Moreover, if the fragments above correspond to reports of single inference steps, they can be seen as ATMS justifications (de Kleer, 1986). We can then use an ATMS inside the UR module to reconstruct the full possible arguments (ATMS "environments") given these justifications¹⁰. The discussed architecture, and the two steps above, are detailed in (Saffiotti, 1990), where a possible algorithm based on an ATMS is also proposed.

5. Conclusions

We have presented an approach to link an arbitrary model for UR to an arbitrary KR system (provided that suitable formalizations are available for both). This approach has been formalized by defining in a functional way belief sets, abstract data types where uncertain knowledge is represented as knowledge (dealt with by the KR system) plus uncertainty (dealt with by the UR calculus). Belief sets are peculiar in that they associate uncertainty with the knowledge itself rather than with the linguistic structures used to represent it.
From the point of view of KR, belief sets are a tool for attaching an arbitrary treatment of uncertainty to a KR system. From the point of view of UR, they constitute a tool for extending the applicability of an UR technique to kinds of knowledge that would otherwise be difficult to express in the language of the mathematical model of the UR calculus. E.g. consider expressing in a standard DS formalism

  Tell[∀x.(∃y.child(x,y) ⊃ married(x)), 0.9, Empty[]]

The idea of marrying uncertain reasoning with knowledge representation seems to be fairly new in the literature. Some symptoms of this tendency may be found in (Zadeh, 1989). However, Zadeh's solution consists in proposing Fuzzy Logic as a KR tool, while we suggest combining an UR tool with a KR tool. On a different side, the possible world account given here to DS theory is strongly related to other possible world based accounts given to DS theory or to probability theory (e.g. Gaifman, 1986; Ruspini, 1986; Fagin and Halpern, 1989). However, our focus is the decomposition of the uncertain knowledge representation task into its KR and UR components, using intensions as a formal bridge: possible worlds are just one of the possible choices for representing intensions. Moreover, while the accounts above are normally restricted to the propositional case, we attach ours to an arbitrary KR language.

¹⁰ This suggestion is similar to some recent proposals to use an ATMS for implementing DS theory (e.g. Laskey & Lehner, 1989).

Acknowledgements. This research has benefited from worthy discussions with (and comments from) Yen-Teh Hsia, Robert Kennes, Bruno Marchal, Philippe Smets, and Nic Wilson. Fabrizio Sebastiani has an early responsibility in my all-hybridizing impetus.

References
Brachman, R.J. and Levesque, H.J. (1982) "Competence in Knowledge Representation", Proc. of AAAI-82: 189-192.
Brachman, R.J. and Levesque, H.J. (Eds.) (1985) Readings in Knowledge Representation (Morgan Kaufmann, CA).
Brachman, R.J., Pigman Gilbert, V., and Levesque, H.J. (1985) "An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of Krypton", Proc. of IJCAI-85.
de Kleer, J. (1986) "An Assumption-Based Truth Maintenance System", Artificial Intelligence 28: 127-162.
Fagin, R. and Halpern, J.Y. (1989) "Uncertainty, Belief and Probability", Proc. of IJCAI-89: 1161-1167.
Gaifman, H. (1986) "A Theory of Higher Order Probabilities", Proc. of the Conf. on Theoretical Aspects of Reasoning about Knowledge: 275-292.
Halpern, J.Y. and Moses, Y. (1985) "A Guide to the Modal Logics of Knowledge and Belief", Proc. of IJCAI-85.
Henkind, S.J. and Harrison, M.C. (1988) "An Analysis of Four Uncertainty Calculi", IEEE Trans. on Systems, Man and Cybernetics 18(5): 700-714.
Israel, D.J. and Brachman, R.J. (1981) "Distinctions and Confusions: a Catalogue Raisonné", Proc. of IJCAI-81.
Laskey, K.B. and Lehner, P.E. (1989) "Assumptions, Beliefs and Probabilities", Artificial Intelligence 41(1): 65-77.
Levesque, H.J. (1984) "Foundations of a Functional Approach to Knowledge Representation", Artificial Intelligence 23.
Ruspini, E.H. (1986) "The Logical Foundations of Evidential Reasoning", Tech. Note 408, SRI Int. (Menlo Park, CA).
Saffiotti, A. (1987) "An AI view of the treatment of uncertainty", The Knowledge Engineering Review 2(2).
Saffiotti, A. (1990) "A Hybrid Belief System for Doubtful Agents", to appear in: Proc. of the Third IPMU Conference (Paris, France, 2-6 July 1990).
Saffiotti, A. and Sebastiani, F. (1988) "Dialogue Modelling in M-Krypton, a Hybrid Language for Multiple Believers", Proc. of the 4th IEEE Conf. on AI Applications: 56-60.
Schefe, P. (1980) "On Foundations of Reasoning with Uncertain Facts and Vague Concepts", Int. J. Man-Machine Studies 12: 35-62.
Shafer, G. (1976) A Mathematical Theory of Evidence (Princeton University Press, Princeton).
Smets, Ph. (1988) "Belief Functions", in: Smets, Mamdani, Dubois, and Prade (Eds.)
Non-Standard Logics for Automated Reasoning (Academic Press, London).
Thompson, T.R. (1985) "Parallel Formulation of Evidential Reasoning Theories", Proc. of IJCAI-85: 321-327.
Zadeh, L.A. (1989) "Knowledge Representation in Fuzzy Logic", IEEE Trans. on Knowledge and Data Eng. 1(1).
A Probabilistic Interpretation for Lazy Nonmonotonic Reasoning

Ken Satoh
Institute for New Generation Computer Technology
1-4-28 Mita, Minato-ku, Tokyo 108, Japan
ksatoh@icot.jp

Abstract

This paper presents a formal relationship between probability theory and a class of nonmonotonic reasoning which we call lazy nonmonotonic reasoning. In lazy nonmonotonic reasoning, nonmonotonicity emerges only when newly added knowledge is contradictory to the previous belief. In this paper, we consider nonmonotonic reasoning in terms of consequence relation. A consequence relation is a binary relation over formulas which expresses that a formula is derivable from another formula under inference rules of a considered system. A consequence relation which has lazy nonmonotonicity is called a rational consequence relation, studied by Lehmann and Magidor (1988). We provide a probabilistic semantics which characterizes a rational consequence relation exactly. Then, we show a relationship between propositional circumscription and consequence relation, and apply this semantics to a consequence relation defined by propositional circumscription which has lazy nonmonotonicity.

Introduction

This paper is concerned with a formal relationship between nonmonotonic reasoning and probability theory. Nonmonotonic reasoning is a formalization of reasoning when information is incomplete. If someone is forced to make a decision under incomplete information, he uses commonsense to supplement the lack of information. Commonsense can be regarded as a collection of normal results. Those normal results are obtained because their probability is very near to certainty. So commonsense has a statistical or probabilistic property.
Although there is a lot of research which simulates the behavior of nonmonotonic reasoning based on probability theory (see [Pearl 1989] for example), there is no formal relationship between nonmonotonic reasoning and probability theory, as Lifschitz (1989) pointed out. In this paper, we consider nonmonotonic reasoning in terms of consequence relation (Gabbay 1985; Kraus, Lehmann, and Magidor 1988; Lehmann and Magidor 1988; Lehmann 1989). A consequence relation is a binary relation over formulas and expresses that a formula is derivable from another formula under inference rules of the considered system. The researchers consider desired properties in a consequence relation for nonmonotonic reasoning. Gabbay (1985) was the first to consider nonmonotonic reasoning by a consequence relation, and Kraus, Lehmann and Magidor (1988) give a semantics for a consequence relation of nonmonotonic reasoning called preferential consequence relation. The semantics is based on an order over possible states which is similar to an order over interpretations in circumscription (McCarthy 1980) or Shoham's preference logic (Shoham 1988). Lehmann and Magidor (1988) define a more restricted consequence relation called rational consequence relation and show that a consequence relation is rational if and only if it is defined by some ranked model. A model is ranked if the set of possible states is partitioned into a hierarchical structure, and in a rational consequence relation, the previous belief will be kept as long as the new knowledge does not contradict the previous belief. This nonmonotonicity can be said to be lazy because only contradictory knowledge can cause a belief revision. Moreover, they investigate a relationship between Adams' logic (Adams 1975) (or equivalently, ε-semantics [Pearl 1988]) and rational entailment, in which a conditional assertion follows from a set of conditional assertions.
Although Adams' logic is based on probabilistic semantics, it only considers consistency and entailment for a set of conditional assertions and does not consider probabilistic semantics for a consequence relation. To give a probabilistic semantics to nonmonotonic reasoning, we have to go beyond Adams' logic, because most nonmonotonic reasoning systems define a consequence relation in the sense that the systems can define a derived result from a given set of axioms by the inference rules of those systems.

SATOH 659
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

In this paper, we provide a probabilistic semantics which characterizes a rational consequence relation exactly. To do so, we define a consequence relation closed in the limit. This property means that there exists a probability function with positive parameter x such that the conditional probability of a pair of formulas in the consequence relation approaches 1, and the conditional probability of a pair of formulas not in the relation approaches some value other than 1, as x approaches 0. Then, we can show that a consequence relation is closed in the limit if and only if the consequence relation is rational. We apply this result to giving a probabilistic semantics for circumscription (McCarthy 1980), because circumscription has a semantics similar to that of a rational or preferential consequence relation, and circumscription can define a consequence relation each pair of which consists of an original axiom and a derived result. Although we can show that every consequence relation defined by circumscription is a preferential consequence relation, it is not always rational. Especially, we can show that if there are some fixed propositions or if we minimize more than three propositions in parallel, then a consequence relation defined by this circumscription is always non-rational.
However, in some cases, we can separate a set of interpretations into a hierarchy, and so we can provide a probability function such that a consequence relation defined by the circumscription in those cases is equivalent to a consequence relation defined by the probability function.

Consequence Relations and Their Models

In this section, we briefly review the work on consequence relation by Lehmann, Kraus and Magidor (Kraus, Lehmann, and Magidor 1988; Lehmann and Magidor 1988). A summary of the work is found in (Lehmann 1989). We use a propositional language L and consider a binary relation |~ over formulas in L, called a consequence relation, which has some desired property in a considered reasoning system. Intuitively speaking, A |~ B means that if a state of knowledge is A, then B is derived from A as a belief by inference rules defined in a considered reasoning system.

Definition 1 A consequence relation that satisfies all seven properties below is called a rational consequence relation.

  (1) If A ≡ B is a truth-functional tautology and A |~ C, then B |~ C.
  (2) If A ⊃ B is a truth-functional tautology and C |~ A, then C |~ B.
  (3) A |~ A.
  (4) If A |~ B and A |~ C, then A |~ B ∧ C.
  (5) If A |~ C and B |~ C, then A ∨ B |~ C.
  (6) If A |~ B and A |~ C, then A ∧ B |~ C.
  (7) If A |~ C and not A |~ ¬B, then A ∧ B |~ C.

A consequence relation that satisfies the first six properties is called a preferential consequence relation. Property (7) is called rational monotony; it was proposed by Makinson as a desired property for a nonmonotonic reasoning system (Lehmann and Magidor 1988), and corresponds with one of the fundamental conditions for minimal change of belief proposed by Gardenfors (1988). An intuitive meaning of rational monotony is that the previous conclusion stays in the new belief if the negation of the added information is not in the previous belief. A semantics for a rational consequence relation called ranked model is also studied by Lehmann and Magidor (1988).
This semantics is a restricted semantics for preferential consequence relation studied by Kraus, Lehmann and Magidor (1988), called preferential model.

Definition 2 A preferential model W is a triple ⟨S, l, ≺⟩ where S is a set, the elements of which are called states, l assigns a logical interpretation of formulas to each state, and ≺ is a strict partial order on S (an irreflexive and transitive relation) satisfying the following smoothness condition: for all A ∈ L, for all t ∈ Â (where Â =def {s | s ∈ S, l(s) ⊨ A}), either there exists s minimal in Â such that s ≺ t, or t is itself minimal in Â.

Definition 3 A ranked model W is a preferential model ⟨S, l, ≺⟩ for which the strict partial order ≺ satisfies the following condition: for all s, t and u in S, if s ≺ t then either s ≺ u or u ≺ t.

This definition is different from the original definition but actually equivalent. In a ranked model, the set of states is divided into a hierarchy so that if s ≺ t then s and t belong to different ranks, and if neither s ≺ t nor t ≺ s then s and t belong to the same rank. We can define consequence relations by the above models as follows.

Definition 4 Let W be a preferential (or ranked) model ⟨S, l, ≺⟩ and A, B be formulas in L. The consequence relation defined by W will be denoted by |~_W and is defined by: A |~_W B if and only if for any s minimal in Â, l(s) ⊨ B.

Kraus, Lehmann and Magidor (1988) show that a consequence relation is preferential if and only if it is the consequence relation defined by some preferential model, and Lehmann and Magidor (1988) show that a consequence relation is rational if and only if it is the consequence relation defined by some ranked model.

Probabilistic Semantics for Rational Consequence Relation

From this point, we assume the set of propositional symbols in L is always finite.

Definition 5 Let L be a propositional language.
Then a probability function P_x on L with positive parameter x is a function from the set of formulas in L and the positive real numbers to the real numbers which satisfies the following conditions.

1. For any A ∈ L and for any x > 0, 0 ≤ P_x(A) ≤ 1.
2. For any x > 0, P_x(⊤) = 1.
3. For any A ∈ L and B ∈ L and for any x > 0, if A ∧ B is logically false then P_x(A ∨ B) = P_x(A) + P_x(B).

If we ignore the parameter x, the above definition becomes the standard formulation of a probability function on L (Gardenfors 1988, p. 37). We introduce the parameter x to express the weight of the probability of every state. Spohn (1988) uses a similar probability function to relate his Natural Conditional Functions to probability theory.

Definition 6 Let A, B ∈ L. We define the conditional probability of B under A, P_x(B|A), as follows:

P_x(B|A) = 1 if P_x(A) = 0, and P_x(B|A) = P_x(A ∧ B) / P_x(A) otherwise.

Definition 7 A probability function P_x on L with positive parameter x is said to be convergent if and only if for any A ∈ L, there exists α such that lim_{x→0} P_x(A) = α.

Now, we define a consequence relation in terms of the above probability function P_x.

Definition 8 A consequence relation |~ is said to be closed in the limit if and only if there exists a convergent probability function P_x on L with positive parameter x such that for all A ∈ L and B ∈ L, A |~ B if and only if lim_{x→0} P_x(B|A) = 1.

Intuitively speaking, if a pair (A, B) is included in a consequence relation closed in the limit, then we can let the conditional probability of B under A approach 1 as much as possible, and if not, the conditional probability will approach some value other than 1. This intuitive meaning will be justified later. We can show the following relationship between consequence relations closed in the limit and rational consequence relations.

Theorem 1 |~ is closed in the limit if and only if |~ is rational¹.
Proof: We can show the only-if half by checking that every consequence relation closed in the limit satisfies all properties of a rational consequence relation.

We show the if-half. If |~ is rational, then there exists some ranked model W = (S, l, ≺) such that for every pair of formulas A and B, A |~ B if and only if A |~_W B (Lehmann and Magidor 1988). Since the language is logically finite, there exists a finite ranked model with a finite number of ranks. Let the number of ranks be n (n ≥ 1). Let q_i be the number of states at the i-th rank (states which are higher in ≺ are in a higher rank). Let a function P_x on L with positive parameter x be defined as follows:²

P_x(A) ≝ (Σ_{i=1}^{n} η_i^A · x^{i-1}) / (Σ_{i=1}^{n} q_i · x^{i-1}),

where η_i^A is the number of states at the i-th rank that satisfy A. Then P_x is convergent and we can show that the following consequence relation |~' is equivalent to |~_W: A |~' B if and only if lim_{x→0} P_x(B|A) = 1. □

There is another probabilistic characterization of a rational consequence relation.

Definition 9 Let L be a finite propositional language and |~ be a consequence relation. |~ is said to be ε-definable if and only if there exists a function λ : L² → [0, 1] such that

1. for all A, B ∈ L, A |~ B if and only if λ(A, B) = 1.
2. for all ε > 0, there exists a proper probability function P such that for all A, B ∈ L, |P(B|A) − λ(A, B)| < ε.

An ε-definable consequence relation fits our intuitive meaning stated above and is actually equivalent to a consequence relation closed in the limit and, therefore, equivalent to a rational consequence relation.

Consequence Relation and Circumscription

Preferential Consequence Relation and Circumscription

Here, we take circumscription to mean the following definition. This is a slightly modified version of generalized

¹Independently, Morris, Pearl and Goldszmidt have obtained a similar result to this theorem, as have Lehmann and Magidor.
²This assignment is suggested in (Lehmann and Magidor 1988).
circumscription (Lifschitz 1984), as we use < instead of ≤.

Definition 10 Let A be a propositional formula, P be a tuple of propositions, and p be a tuple of propositional variables. Then Circum(A; <_P) is defined as follows:

A(P) ∧ ¬∃p(A(p) ∧ p <_P P),

where A(p) is obtained by replacing every proposition of P in A(P) by the corresponding propositional variable, and p <_P P is a binary relation over formulas which satisfies the following two conditions:

1. For any P, ¬(P <_P P).
2. For any P, Q and R, if P <_P Q and Q <_P R, then P <_P R.

Then we can define a consequence relation |~_{<_P} as follows: A |~_{<_P} B if and only if Circum(A; <_P) ⊨ B.

The semantics for the above circumscription is based on the following order over interpretations: I1 <_P I2 if and only if for every proposition P not in P, I1[P] = I2[P], and p <_P Q holds when we substitute I1[P] (for each P in P) for p and I2[P] (for each P in P) for Q. Then we can consider the preferential model W = (S, l, ≺) where S is the set of logical interpretations, l is the identity function, and ≺ is the strict partial order <_P over those interpretations. We say this preferential model is defined by <_P. As Kraus, Lehmann and Magidor (1988) pointed out, if S is finite, the smoothness condition is always satisfied. Here we consider a finite set of interpretations, so the smoothness condition is always satisfied.

However, there are some differences between preferential consequence relations and circumscription. In propositional circumscription, for any satisfiable formula A, A |/~_{<_P} F³ (we say |~ is proper), but in a preferential consequence relation this is not always the case. And since we use the identity function for l in circumscription, there is a preferential consequence relation in a language which cannot be represented by circumscription in the same language.
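For the propositional case, Definition 10 can be prototyped directly: compute the models of A, keep those minimal under the pointwise order on the minimized propositions, and test entailment against them. A minimal sketch (all names illustrative, not from the paper):

```python
from itertools import product

PROPS = ["P", "Q"]

def models(formula):
    """All valuations (dicts) satisfying `formula`, a predicate on a valuation."""
    out = []
    for bits in product([False, True], repeat=len(PROPS)):
        v = dict(zip(PROPS, bits))
        if formula(v):
            out.append(v)
    return out

def less(i1, i2, minimized):
    """I1 <_P I2: equal outside `minimized`, pointwise below and unequal on it."""
    if any(i1[p] != i2[p] for p in PROPS if p not in minimized):
        return False
    below = all(i2[p] or not i1[p] for p in minimized)   # i1 <= i2 pointwise
    strict = any(i2[p] and not i1[p] for p in minimized)
    return below and strict

def circum_entails(a, b, minimized):
    """Circum(A; <_P) |= B: every <_P-minimal model of A satisfies B."""
    ms = models(a)
    minimal = [m for m in ms if not any(less(n, m, minimized) for n in ms)]
    return all(b(m) for m in minimal)

pvq = lambda v: v["P"] or v["Q"]
# Minimizing P and Q in parallel, the minimal models of P v Q are
# (P, ~Q) and (~P, Q), so P v Q |~ ~P v ~Q but P v Q |/~ P.
assert circum_entails(pvq, lambda v: not v["P"] or not v["Q"], {"P", "Q"})
assert not circum_entails(pvq, lambda v: v["P"], {"P", "Q"})
```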
For example, if L contains only two propositions P and Q, then there is a proper preferential consequence relation such that P ∨ Q |~ (¬P ∧ Q) ∨ (P ∧ ¬Q) and P |~ P ∧ ¬Q and Q |~ ¬P ∧ Q, but there is no consequence relation defined by circumscription that is equivalent to this preferential consequence relation. This is because two or more states are mapped to the same interpretation in a corresponding preferential model.

We say a formula A is complete if for every formula B in L, A ⊨ B or A ⊨ ¬B. A complete formula corresponds with an interpretation. Then, the following property excludes preferential consequence relations such that two or more states are mapped to the same interpretation in a corresponding preferential model:

If C is complete and A ∨ B |~ ¬C, then A |~ ¬C or B |~ ¬C.⁴

Theorem 2 |~ is a proper preferential consequence relation satisfying the above property if and only if there is some <_P such that |~_{<_P} = |~.

Proof: We can easily show that every consequence relation defined by circumscription is a proper preferential consequence relation and satisfies the above property.

We show the converse. Suppose |~ is a proper preferential consequence relation and satisfies the above property. Let α(P) and β(P) be complete formulas. Define α(P) ≺ β(P) if and only if α(P) ∨ β(P) |~ ¬β(P) and α(P) ≠ β(P). Then ≺ is an irreflexive and transitive relation. Suppose we collect all pairs in ≺: α₁(P) ≺ β₁(P), ..., α_n(P) ≺ β_n(P). Then p <_P P is defined as follows: (α₁(p) ∧ β₁(P)) ∨ ... ∨ (α_n(p) ∧ β_n(P)). Then we can show |~_{<_P} = |~. □

Rational Consequence Relation and Circumscription

Unfortunately, although a consequence relation defined by circumscription is always preferential, it is not always rational.

³F is falsity.
Theorem 3

1. If a tuple of propositions P does not contain all propositions in L, then for any non-trivial partial order <_P (one for which there are some interpretations I and J such that J <_P I), the consequence relation defined by <_P is always non-rational.

2. If P contains all propositions in L, then a consequence relation defined by minimizing one or two propositions in parallel is rational.

3. Even if P contains all propositions in L, a consequence relation defined by minimizing three or more propositions in parallel is always non-rational.

Proof:

1. Since <_P is non-trivial, there exist some interpretations I and J such that J <_P I. And there exists some proposition P which is not in P. Let K be the truth assignment which is the same as J except for the assignment of P. Since J <_P I, the assignment of P in I is the same as in J, from the definition of <_P. Then K differs from both J and I in the assignment of P. We can show ¬(J <_P K) and ¬(K <_P I), and so the preferential model defined by <_P is not ranked, by Definition 3. Therefore, the consequence relation defined by <_P is not rational.

2. We can easily check that a preferential model defined by minimizing one or two propositions is ranked.

3. Let P contain the minimized propositions P, Q and R, and let three interpretations I, J and K satisfy the following conditions:

(a) All assignments are the same except the assignments of P, Q and R.
(b) I ⊨ ¬P ∧ ¬Q ∧ R, J ⊨ P ∧ Q ∧ ¬R and K ⊨ ¬P ∧ Q ∧ R.

Then I <_P K, but ¬(I <_P J) and ¬(J <_P K), and so the preferential model defined by <_P is not ranked. Therefore, a consequence relation defined by minimizing three or more propositions is not rational.

⁴This property corresponds with (R8) in (Katsuno and Mendelzon 1990).
□

Although rational monotony corresponds with one of the fundamental conditions for minimal change of belief proposed by Gardenfors (1988), there are several examples in commonsense reasoning which correspond with the third case of the above theorem. A notable example is the closed world assumption (CWA). In the CWA we minimize all propositions, and so we do not have rational monotony if the number of propositions is three or more.

So one may argue that a rational consequence relation is not practically rational in commonsense reasoning. However, what we would like to say here is not whether it is rational or not, but that circumscription in general does not have the probabilistic semantics which we have defined so far, and that if an order defined by circumscription is ranked, then it has a probabilistic rationale.

Probabilistic Interpretation for Lazy Circumscription

In this subsection, we consider the following kind of circumscription.

Definition 11 A circumscription <_P is lazy if the preferential model defined by <_P is ranked.

We can show that a consequence relation |~ is proper and rational if and only if there is some <_P of a lazy circumscription such that |~_{<_P} = |~. And if a circumscription is lazy, we can attach a probability function which defines a consequence relation equivalent to the consequence relation defined by <_P.

For example, let the set of propositions be {P, Q}. Then there are the following four interpretations: (¬P, ¬Q), (P, ¬Q), (¬P, Q), (P, Q). Suppose we minimize P and Q in parallel. We denote the strict partial order defined by this minimization by <_(P,Q). Then the consequence relation defined by <_(P,Q) is as follows: A(P, Q) |~_{<_(P,Q)} B(P, Q) if and only if

A(P, Q) ∧ ¬∃p∃q(A(p, q) ∧ ((p, q) < (P, Q))) ⊨ B(P, Q),

where (p, q) < (P, Q) is the following abbreviation:

(p, q) < (P, Q) ≝ (p ⊃ P) ∧ (q ⊃ Q) ∧ ¬((P ⊃ p) ∧ (Q ⊃ q)).

The preferential model defined by <_(P,Q) is ranked (Figure 1).

Figure 1: Partial Order by Minimizing P and Q.
In the figure, a lower interpretation is preferred to an upper interpretation. In the probabilistic semantics, we regard this order as an order of probability: a lower interpretation is more probable than an upper interpretation. Moreover, we make the probability of an interpretation in the (i+1)-th rank x times that of an interpretation in the i-th rank, so that we can ignore the less probable interpretations as x approaches 0. In this example, we attach the following probability function P_x with positive parameter x to the interpretations:

P_x((¬P, ¬Q)) ≝ 1 / (1 + 2x + x²)
P_x((P, ¬Q)) ≝ x / (1 + 2x + x²)
P_x((¬P, Q)) ≝ x / (1 + 2x + x²)
P_x((P, Q)) ≝ x² / (1 + 2x + x²)

Then the probability of a formula A is defined as the sum over the interpretations which satisfy A:

P_x(A) ≝ Σ_{I ⊨ A} P_x(I).

Let |~ be the consequence relation defined as follows: A |~ B if and only if lim_{x→0} P_x(B|A) = 1.

Intuitively, making x approach 0 means that we consider only the most probable interpretations which satisfy A, and the fact that P_x(B|A) approaches 1 means that in all the most probable interpretations which satisfy A, B is extremely probable. This is the probabilistic semantics for lazy circumscription.

Let us check whether P ∨ Q |~ ¬P ∨ ¬Q. Since (P, ¬Q), (¬P, Q) and (P, Q) satisfy P ∨ Q,

P_x(P ∨ Q) = P_x((P, ¬Q)) + P_x((¬P, Q)) + P_x((P, Q)) = (2x + x²) / (1 + 2x + x²).

Similarly,

P_x((P ∨ Q) ∧ (¬P ∨ ¬Q)) = 2x / (1 + 2x + x²).

Then,

lim_{x→0} P_x(¬P ∨ ¬Q | P ∨ Q) = lim_{x→0} P_x((P ∨ Q) ∧ (¬P ∨ ¬Q)) / P_x(P ∨ Q) = lim_{x→0} 2x / (2x + x²) = 1.

Therefore, P ∨ Q |~ ¬P ∨ ¬Q. This means that in all the most probable interpretations which satisfy P ∨ Q, ¬P ∨ ¬Q is extremely probable, and this corresponds with the result P ∨ Q |~_{<_(P,Q)} ¬P ∨ ¬Q.

Now suppose we check whether P ∨ Q |~ P ∧ ¬Q:

lim_{x→0} P_x(P ∧ ¬Q | P ∨ Q) = lim_{x→0} P_x((P ∨ Q) ∧ (P ∧ ¬Q)) / P_x(P ∨ Q) = lim_{x→0} x / (2x + x²) = 1/2 ≠ 1.

Therefore, P ∨ Q |/~ P ∧ ¬Q, and this corresponds with the result P ∨ Q |/~_{<_(P,Q)} P ∧ ¬Q.
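These limits can be computed mechanically: each interpretation contributes x^rank to the unnormalized probability, and the limit of a conditional probability as x → 0 is the ratio of the coefficients at the lowest power of x appearing in the denominator. A sketch (illustrative code, assuming the ranking of Figure 1):

```python
from fractions import Fraction

# Ranks under minimizing P and Q in parallel: (~P,~Q) at rank 0,
# (P,~Q) and (~P,Q) at rank 1, (P,Q) at rank 2.
RANK = {(False, False): 0, (True, False): 1, (False, True): 1, (True, True): 2}

def poly(formula):
    """Coefficients {degree: count} of sum over models I of formula of x^rank(I)."""
    c = {}
    for interp, r in RANK.items():
        if formula(*interp):
            c[r] = c.get(r, 0) + 1
    return c

def limit_conditional(b, a):
    """lim_{x->0} Px(B|A): ratio of the coefficients at the minimal degree of Px(A).
    (The numerator's minimal degree is at least that of the denominator, since
    the models of A ^ B are a subset of the models of A.)"""
    num = poly(lambda p, q: a(p, q) and b(p, q))
    den = poly(a)
    d = min(den)                        # lowest power of x with nonzero coefficient
    return Fraction(num.get(d, 0), den[d])

a = lambda p, q: p or q
assert limit_conditional(lambda p, q: (not p) or (not q), a) == 1   # P v Q |~ ~P v ~Q
assert limit_conditional(lambda p, q: p and not q, a) == Fraction(1, 2)  # |/~ P ^ ~Q
```

The two assertions reproduce the 2x/(2x + x²) → 1 and x/(2x + x²) → 1/2 computations above without manipulating the normalizing factor, which cancels in the conditional.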
In the same way, we can show that for every A, B ∈ L, A |~ B if and only if A |~_{<_(P,Q)} B.

Conclusion

We have proposed a probabilistic semantics, called closed consequence relations in the limit, for lazy nonmonotonic reasoning, and shown that a consequence relation is closed in the limit if and only if it is rational. We then applied this result to give a probabilistic semantics for a class of circumscription which has lazy nonmonotonicity.

We think the following research remains to be done.

1. We would like to find a probabilistic semantics which exactly characterizes the consequence relations defined by the whole class of circumscription.

2. We cannot apply our result to Default Logic (Reiter 1980) or Autoepistemic Logic (Moore 1985) because a consequence relation defined by those logics is not even preferential. We must extend our result to apply to those logics.

Acknowledgments

I would like to thank Kurt Konolige, Moises Goldszmidt and Johan van Benthem for instructive comments on an earlier version of this paper, and especially David Poole for pointing out an error in Definition 9 and Daniel Lehmann for pointing out an error in Definition 6 in an earlier version of the paper.

References

Gabbay, D. 1985. Theoretical Foundations for Non-monotonic Reasoning in Expert Systems. In Logics and Models of Concurrent Systems, K. R. Apt ed., Springer-Verlag: 439 - 457.

Gardenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press.

Katsuno, H., and Mendelzon, A. O. 1990. Technical Report, KRR-TR-90-3, Dept. of Computer Science, University of Toronto, Canada.

Kraus, S., Lehmann, D., and Magidor, M. 1988. Preferential Models and Cumulative Logic. Technical Report, #TR-88-15, Dept. of Computer Science, Hebrew University, Jerusalem, Israel.

Lehmann, D. 1989. What Does a Conditional Knowledge Base Entail? Proc. of KR-89, 212 - 222.

Lehmann, D., and Magidor, M. 1988. Rational Logics and Their Models: a Study in Cumulative Logics. Technical Report, #TR-88-16, Dept.
of Computer Science, Hebrew University, Jerusalem, Israel.

Lifschitz, V. 1984. Some Results on Circumscription. Proc. of Non-Monotonic Reasoning Workshop, 151 - 164.

Lifschitz, V. 1989. Open Problems on the Border of Logic and AI. Unpublished Manuscript.

McCarthy, J. 1980. Circumscription - a Form of Non-Monotonic Reasoning. Artificial Intelligence, 13: 27 - 39.

Moore, R. C. 1985. Semantical Considerations on Nonmonotonic Logic. Artificial Intelligence, 25: 75 - 94.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.

Pearl, J. 1989. Probabilistic Semantics for Nonmonotonic Reasoning: A Survey. Proc. of KR-89, 505 - 516.

Reiter, R. 1980. A Logic for Default Reasoning. Artificial Intelligence, 13: 81 - 132.

Satoh, K. 1989. A Probabilistic Interpretation for Lazy Nonmonotonic Reasoning. ICOT Technical Report, TR-525, ICOT, Japan.

Shoham, Y. 1988. Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence. MIT Press.

Spohn, W. 1988. A General Non-Probabilistic Theory of Inductive Reasoning. Proc. of 4th AAAI Workshop on Uncertainty in AI, 315 - 322.

Adams, E. 1975. The Logic of Conditionals. D. Reidel, Dordrecht.
PROBABILITIES THAT IMPLY CERTAINTIES

Haim Shvaytser (Schweitzer)
SRI, David Sarnoff Research Center
CN5300 Princeton, NJ 08543-5300
haim@vision.sarnoff.com

Abstract

A method is described for deriving rules of inference from relations between probabilities of sentences in Nilsson's probabilistic logic.

Introduction

One intuitive interpretation of probability is as a measure of uncertainty. In many of the application areas of artificial intelligence it is important to be able to reason with uncertain information; this has motivated research in developing methods for probabilistic inference. See, for example, [Nilsson, 1986; Fagin and Halpern, 1988; Pitt, 1989].

A precise model for dealing with probabilities of sentences in predicate calculus was suggested by Nilsson in [Nilsson, 1986]. In Nilsson's probabilistic logic the probability of a sentence is its average truth value in possible worlds. Consider the following example: Let R and C be two sentences; in a specific world a sentence is either TRUE or FALSE. The truth table of all worlds of R and C is given in Table 1.

Table 1: the four worlds of R and C

world  R      C      probability
w1     FALSE  FALSE  p1
w2     FALSE  TRUE   p2
w3     TRUE   FALSE  p3
w4     TRUE   TRUE   p4

The probabilities of worlds are determined by an arbitrary probability distribution, i.e., four values p1, p2, p3, p4, such that p_i ≥ 0 for i = 1, ..., 4, and:

p1 + p2 + p3 + p4 = 1.

From Table 1 we see that R is true in the worlds w3 and w4, so that its average truth value is p3 + p4, while C is true in the worlds w2 and w4, and its average truth value is p2 + p4. Therefore, Prob(R) = p3 + p4 and Prob(C) = p2 + p4.

The probability of other formulae involving R and C can also be computed from Table 1. Thus, since R → C is true in w1, w2, w4, we have:

Prob(R → C) = p1 + p2 + p4,

and from similar arguments:

Prob(C → R) = p1 + p3 + p4.

Now if R stands for the sentence "it rains" and C for the sentence "it is cloudy", the world w3 is impossible. In this case the value of p3 in Table 1 is 0, Prob(R → C) = 1, and Prob(C → R) = p1 + p4.
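Nilsson's definition is easy to reproduce in code. The sketch below (illustrative names; the distribution values are one concrete choice consistent with the running example) computes sentence probabilities as average truth values over the worlds of Table 1:

```python
# Worlds of R and C with a concrete distribution; p3 = 0 makes the
# "rains but not cloudy" world impossible.
WORLDS = {  # (R, C): probability
    (False, False): 0.1,
    (False, True):  0.2,
    (True,  False): 0.0,   # impossible world w3
    (True,  True):  0.7,
}

def prob(sentence):
    """Average truth value of `sentence` (a predicate on (R, C)) over worlds."""
    return sum(p for w, p in WORLDS.items() if sentence(*w))

R = lambda r, c: r
C = lambda r, c: c
assert abs(prob(R) - 0.7) < 1e-12 and abs(prob(C) - 0.9) < 1e-12

# Prob(X -> Y) = Prob(X ^ Y) + 1 - Prob(X); with Prob(R) = Prob(R ^ C),
# the implication R -> C gets probability 1.
implies = prob(lambda r, c: (not r) or c)
assert abs(implies - (prob(lambda r, c: r and c) + 1 - prob(R))) < 1e-12
assert abs(implies - 1.0) < 1e-12
```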
In the process of reasoning with probabilistic information we are given probabilities of sentences, and either reason about probabilities of other sentences or learn new information about a specific world. Thus, since Prob(R → C) = 1, we can deduce that it is cloudy in a world w' if we know that it rains in w'. On the other hand, R cannot be deduced from C without the additional information that in a specific world p2 = 0, because if p2 ≠ 0 then Prob(C → R) < 1.

In this paper we describe a method for identifying sentences that are true with probability 1 (i.e., in all possible worlds) from probabilities of sentences that are not necessarily true in all possible worlds. As an example, notice that for any two sentences X, Y:

X → Y ≡ (X ∧ Y) ∨ (¬X),

so that:

Prob(X → Y) = Prob(X ∧ Y) + 1 − Prob(X).

SHVAYTSER 665
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Therefore, Prob(X → Y) = 1 if and only if Prob(X ∧ Y) = Prob(X). Specifically, if it is known that, say, Prob(R) = 0.7 and Prob(R ∧ C) = 0.7, then it must be that R → C in all possible worlds. This is a special case of the results that are described in the paper.

Definitions

The following definitions of possible worlds and probabilities of sentences are the same as those in [Nilsson, 1986].

Let φ1, ..., φn be n sentences in predicate calculus. A world is an assignment of truth values to φ1, ..., φn. There are 2^n worlds; some of these worlds are possible worlds and the others are impossible worlds. A world is impossible if and only if the truth assignment to φ1, ..., φn is logically inconsistent. For example, if φ2 = ¬φ1 then all worlds with both φ1 = TRUE and φ2 = TRUE are impossible.

We denote by PW the set of possible worlds. An arbitrary probability distribution D is associated with PW such that a world w ∈ PW has probability D(w) ≥ 0, and:

Σ_{w ∈ PW} D(w) = 1.

The truth value of a formula φ in the primitive variables φ1, ...
, φn is well defined in all possible worlds. The probability of φ is defined as:

Prob(φ) = Σ_{w ∈ PW, φ true in w} D(w).    (1)

Random variables

A random variable X_w is a function that has a well-defined (real) value in each possible world. With a formula φ we associate the random variable w(φ) that has the value 1 in worlds where φ is true and the value 0 in worlds where φ is false. Equation (1) can now be written as:

Prob(φ) = Σ_{w ∈ PW} w(φ) · D(w).    (2)

Definition: The expected value of the random variable X_w is:

E(X_w) = Σ_{w ∈ PW} X_w · D(w).    (3)

From Equation (2) we see that for any formula φ:

Prob(φ) = E(w(φ)).    (4)

Rules of inference

We consider (deterministic) rules of inference of the following type: Let X_w be a random variable and φ a formula. If

w(φ) = X_w in possible worlds,

then from X_w = 1 infer φ, and from X_w = 0 infer ¬φ.

We investigate only a restricted case of these rules in which X_w can be expressed as a linear combination of the variables w(φi): If there are coefficients a_ij such that

w(φj) = Σ_{i≠j} a_ij w(φi) in possible worlds,

then from Σ_{i≠j} a_ij w(φi) = 1 infer φj, and from Σ_{i≠j} a_ij w(φi) = 0 infer ¬φj.

We call rules of inference of this type linear rules of inference. The main result of this paper is a method for deriving a complete set of linear rules of inference. By this we mean a finite set of linear rules of inference RI such that if there is a set of linear rules of inference that can infer a formula ψ, then ψ can also be inferred from RI.

Algebraic structure

Linear rules of inference can be expressed as:

w(φj) − Σ_{i≠j} a_ij w(φi) = 0 in possible worlds.

The left hand side is a linear combination of the random variables w(φi), i = 1, ..., n, that vanishes in all possible worlds. A complete set of linear rules of this type can be obtained by observing that these rules are all elements of a finite dimensional vector space, and therefore any basis of this vector space is a complete set of linear rules of inference.

In order to determine a basis for the vector space of linear rules of inference we consider three vector spaces:
In order to determine a ba.sis to the vector space of linear rules of inference we consider three vector spaces: 666 KNOWLEDGE REPRJBENTATI~N v = SPan{w(h), l l * 3 u=&J}* An element v E V is a random variable that can be expressed as v = xi u;w(+i). W={vEV:v= 0 in possible worlds}. W is the vector space of elements of V that vanish in all possible worlds. Therefore, each element of W can be used as a linear rule of inference. u = v/w. U is the quotient space of V by W. See Chapter 4 in [Herstein, 19751 ( or any other basic text on Algebra) for the exact definition. Its elements are subsets of V in the form of v + W, where v E v. There is a natural homomorphism of V onto U with the kernel W. The elements of U are the equivalence classes of V, where two elements or, 212 E V are equivalent if and only if vr = 02 in all possible worlds. We use the notation v (mod W) for the equivalence class (element of U) of v. Thus, if vr = v2 in all possible worlds we write vr = 212 (mod W). The bases of the vector spaces V, W, U are re- lated in a simple way. If vr, l . . , vt is a basis of V, and the equivalence classes of vr , l l . , vd form a basis of U (d 5 t), then there are coefficients bij for i= l,..., t - d such that: d vd+i = x bdjvj (mod W). (5) j=l Furthermore, the t - . . d random variables wi, z = 1 Y”‘? t - d, that are given by: d w; = v&i - c bijvj j=l form a basis of W. (See Chapter 4 in [Herstein, 19751.) We conclude that a basis for W is a complete set of linear rules of inference which can be found by computing the linear dependencies in the vector space U that are given by Equation (5). Example: Let R and C be the two sentences from the example that was discussed in the in- troduction, where R = TRUE, C = FALSE is an impossible world. Let 41 = R, 42 = C, and 4s = R A C. The corresponding random variables are: Xl = ‘1u(41), x2 = w(&), and x3 = x1.x2 = ~(43). 
If we take V as Span{x1, x2, x3} then {x1, x2, x3} is a basis of V, and the equivalence classes of x1 and x2 form a basis of U. The formula R → C can be expressed in terms of x1, x2, x3 as x3 = x1, which is a linear rule of inference, so that:

x3 = x1 (mod W), and x3 − x1 ∈ W.

It can be shown that x3 − x1 is a basis of W.

Correlations and the correlation matrix

Let E be the expected value operator as defined in Equation (3). The following observations enable easy computation of linear dependencies in U by standard statistical techniques. For any two random variables x, y ∈ V:

• x = y (mod W) ⟹ E(x) = E(y).
• x = 0 (mod W) ⟺ E(x²) = 0.

Based on these observations we show that linear dependencies in the vector space U can be computed by applying standard statistical techniques. The correlation of two random variables x, y is defined in the standard way as E(x · y). Let {x1, ..., xt} be t random variables from V. Their correlation matrix is the t × t matrix R = (r_ij), where r_ij is the correlation value of x_i and x_j. The matrix R depends on the probability distribution D, but the following properties of R hold for all probability distributions. (For proofs see Chapter 8 in [Papoulis, 1984].)

a) If the equivalence classes of x1, ..., xt are linearly independent in U then R is nonsingular.

b) If the equivalence classes of x1, ..., xt are linearly dependent in U then R is singular.

c) If the equivalence classes of x1, ..., x_{t−1} are linearly independent in U, but the equivalence classes of x1, ..., xt are linearly dependent in U, then

x_t = a_1 x_1 + ... + a_{t−1} x_{t−1} (mod W)    (6)

and a_1, ..., a_{t−1} can be obtained from the system of linear equations

R (a_1, ..., a_{t−1})^T = (r_{1t}, ..., r_{t−1,t})^T,    (7)

where the matrix R is the correlation matrix of x_1, ..., x_{t−1}.
The correlation matrix of {x1, x2, x3} is:

( 0.7  0.7  0.7 )
( 0.7  0.9  0.7 )
( 0.7  0.7  0.7 )

The correlation matrix of {x1, x2} is non-singular, but the correlation matrix of {x1, x2, x3} is singular, and the system of equations (7) gives:

( 0.7  0.7 ) ( a1 )   ( 0.7 )
( 0.7  0.9 ) ( a2 ) = ( 0.7 )

The solution is a1 = 1 and a2 = 0, which gives the rule of inference x3 = x1, i.e., w(R ∧ C) = w(R) in possible worlds, which is equivalent to R → C in possible worlds.

The following algorithm uses the properties of the correlation matrix to generate a basis for W in the form of linear rules of inference. Its input is the correlation values of a set of random variables S = {x1, ..., xn}. In the algorithm we denote by I ⊂ S a set of random variables that are linearly independent modulo W (i.e., their equivalence classes are linearly independent in U), and R is their correlation matrix.

Algorithm: Initially, let I = {x1}, so that R is (r11), a matrix of size 1 × 1. For each x_t ∈ S:

1. Let R' be the correlation matrix of the random variables in I ∪ {x_t}.
2. If R' is singular, solve the system of equations (7) and output the linear rule of inference (6); otherwise, I ← I ∪ {x_t}, and R ← R'.

Since the algorithm computes the linear dependencies that are given by Equation (6), it generates a basis for W, which is a complete set of linear rules of inference.

Example: Let x1, x2, x3 be the random variables from the example that was given at the end of the previous section, with x1 = w(R), x2 = w(C), and x3 = w(R ∧ C). If the four worlds of R and C appear with probabilities as given in Table 2, we have:

Table 2: specific world probabilities

world  R      C      probability
w1     FALSE  FALSE  0.1
w2     FALSE  TRUE   0.2
w3     TRUE   FALSE  0.0
w4     TRUE   TRUE   0.7

r11 = E(x1 · x1) = Prob(R) = 0.7
r12 = r21 = E(x1 · x2) = Prob(R ∧ C) = 0.7
r13 = r31 = E(x1 · x3) = Prob(R ∧ C) = 0.7
r22 = E(x2 · x2) = Prob(C) = 0.9
r23 = r32 = E(x2 · x3) = Prob(R ∧ C) = 0.7
r33 = E(x3 · x3) = Prob(R ∧ C) = 0.7

The correlation matrix of {x1, x2} is:

( 0.7  0.7 )
( 0.7  0.9 )

Notice that this result was obtained from the probabilities of the sentences R, C, and R ∧ C, and not from Table 2.
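The algorithm amounts to a few lines of linear algebra. Below is a pure-Python sketch (illustrative code: exact rational arithmetic via fractions, Gaussian elimination as the singularity test), run on the correlations of the R, C example:

```python
from fractions import Fraction as F

def solve(mat, rhs):
    """Gauss-Jordan elimination with exact arithmetic; returns the solution of
    mat * a = rhs, or None if mat is singular."""
    n = len(mat)
    m = [list(row) + [r] for row, r in zip(mat, rhs)]
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return None                       # no pivot: singular matrix
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def linear_rules(corr):
    """corr[i][j] = E(x_i * x_j).  One pass of the algorithm: returns pairs
    (t, coeffs) meaning x_t = sum_k coeffs[k] * x_{I[k]} (mod W)."""
    independent, rules = [0], []
    for t in range(1, len(corr)):
        cand = independent + [t]
        big = [[corr[i][j] for j in cand] for i in cand]
        if solve(big, [corr[i][t] for i in cand]) is None:   # R' singular
            small = [[corr[i][j] for j in independent] for i in independent]
            coeffs = solve(small, [corr[i][t] for i in independent])  # system (7)
            rules.append((t, coeffs))
        else:
            independent = cand
    return rules

c7, c9 = F(7, 10), F(9, 10)
corr = [[c7, c7, c7],        # x1 = w(R), x2 = w(C), x3 = w(R ^ C)
        [c7, c9, c7],
        [c7, c7, c7]]
rules = linear_rules(corr)
assert rules == [(2, [F(1), F(0)])]   # x3 = x1, i.e. R -> C in possible worlds
```

Exact rationals are used instead of floats so that the singular/non-singular decision is not blurred by rounding, which matters because the method hinges on detecting exact rank deficiency.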
Inference rules as CNF formulae

Our algorithm for deriving rules of inference from probabilities can be used only when linear rules of inference exist. In this section we show how the algorithm can be applied to derive other types of rules of inference.

We consider rules of inference that are variations of modus ponens: Let X, Y be two sentences such that X → Y in all possible worlds. Then in a world where X = TRUE, infer Y = TRUE.

Let φ1, ..., φn be n sentences. We would like to derive rules of the type:

Ψ → φi    (8)

where Ψ is a formula in the sentences φj, j ≠ i. Notice that Equation (8) can also be written as:

Ψ = Ψ ∧ φi in possible worlds.

Therefore, using the random variables w(Ψ) and w(Ψ ∧ φi), we can write Equation (8) in the equivalent form:

w(Ψ) = w(Ψ ∧ φi) in possible worlds.    (9)

The reason that the results of previous sections cannot be applied directly to derive rules of the type of (9) is that Equation (9) is not a linear rule of inference for the sentences φ1, ..., φn.
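The equivalence between (8) and (9) is easy to sanity-check by enumeration. A small sketch (illustrative, treating every truth assignment to three sentences as a possible world):

```python
from itertools import product

# Compare the two conditions world by world:
# "w(Psi) = w(Psi ^ phi) in every world"  versus  "Psi -> phi in every world".
WORLDS = list(product([False, True], repeat=3))

def eq9_holds(psi, phi):
    return all(psi(*w) == (psi(*w) and phi(*w)) for w in WORLDS)

def implication_valid(psi, phi):
    return all((not psi(*w)) or phi(*w) for w in WORLDS)

psi = lambda a, b, c: a and b
phi_ok = lambda a, b, c: a          # (phi1 ^ phi2) -> phi1 is valid
phi_bad = lambda a, b, c: c         # (phi1 ^ phi2) -> phi3 is not

assert eq9_holds(psi, phi_ok) and implication_valid(psi, phi_ok)
assert not eq9_holds(psi, phi_bad) and not implication_valid(psi, phi_bad)
```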
are expressed in conjunctive normal form and show that if they have a small size of clauses then the number of formulae that need to be added to $I,... , & is polynomial in 12. A formula in conjunctive normal form (CM F) of 4 1,--s, +n is a conjunction pl A - - - A p, of clauses, where each clause pi is a disjunction ~1 V - l - V Qj; of literals. A literal is either a sentence 4 or the negation 4 of a sentence. A k-CNF is a CNF expression with clauses that are disjunctions of at most k laterals. For example, (41 V $2) A ($1 V ~$2 V 43) is a 3-CNF. Theorem: Let 0 be the set of sentences that can be obtained from disjunctions of at most k + 1 sentences from 41, . . . , @n. A formula of the type Q -+ hi where !P is a k-CN F of 41, . . . , & can be expressed as a linear rule of inference of sentences from 0. Proof: Let cl,. . . , c, be the clauses of !I!: q = cl A .--A c,. Application The ability to derive crisp informa,tion from prob- abilities is most useful in cases where probabili- ties can be computed easily. We have shown in [Shvaytser, 19881 how similar ideas enable learn- ing from examples in the sense of Valiant. (The probabilities were obtained from samples of ex- amples that correspond to possible worlds.) How- ever, there seem to be cases in which it is more natural to have information a.s probabilities and not as examples. Consider a system of 72 computers that are con- nected in a parallel architecture. From time to time the system is required to handle a problem which is distributed among 12/2 of the computers. Let 4; be the sentence: “Computer i is busy work- ing on the problem”. In this case a possible world is a world in which exactly half of the n computers are busy working on the problem. Let Xi = w(4i). By introducing an additional sentence, 40, which is always TRU E, and its corre- sponding random variable ~0 z 1, there are linear rules of inference since: n x; = 24x0 - 2 c Xj. 
(11) j=l ,j#.; Now consider the case in lvhich the system mal functions, and we suspect that there are problems with the distribution of tasks among the comput- ers. This can be verified by checking the condi- tion: This means that @=TRUE e Fw(c,)=m, cu=l and !P -+ #i if and only if n c 2; = n/2, (12) i=l (2 W(h) - m>(W(h) - 1) = (W(h) - 1). (lo) cV=l Each clause c,, for CL! = 1, . . . , m is a Boolean for- mula of at most k variables 4;) i < n, so that w(ccy), can be expressed as a multilinear form of many computers. Therefore, verifying the above condition for each instance of the problem may cause long delays and may not allow a verifica- tion in real time. but verifying this condition takes time propor- In this case we are not interested in a probabilis- tic answer such as that the condition holds “with tional to n when checked by a single computer, and at least time proportiona, to logn even with SHVAYTSER 669 Table 3: distribution of instances x0 xl 22 x3 x4 x5 26 # instances 1 1 1 1 0 0 0 500,000 1 1 0 1 0 1 0 400,000 1 0 1 1 1 0 0 100,000 high probability”. We would like to verify that for al2 instances of the problem condition (12) holds. Since Equation (12) can be expressed as a linear rule of inference it can be inferred from probabil- ities that can be computed in real time. By as- signing a processor to each pair of computers, the number of times in which they are both activated can be computed in a constant time. For the pair i and j this is equivalent to the probability of the formula 4; A 4j when scaled properly. As a numerical example, consider the case in which n = 6, the number of instances is l,OOO,OOO, and they are given in Table 3. The COrrehtion matrix of 20, ’ ’ - , xs, is: 10 9 6 10 1 1 0 1 6 162 100 10 1011100 110 1010 0 00 0 000 Applying the algorithm we get three linear rules of inference: x3 = x0 x5 = 2x0 - Xl - x2 - x4 x6=0 , and one can easily verify that they can infer any- thing that can be inferred from Equation (11). 
Furthermore, they imply Equation (12).

Conclusions

We have shown that relations between probabilities of sentences can always be used to determine linear rules of inference, whenever such rules exist. This shows that in many cases probabilities can be used to infer crisp (non-probabilistic) knowledge.

References

[Fagin and Halpern, 1988] R. Fagin and J. Y. Halpern. Reasoning about knowledge and probability. In Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, pages 277-293. Morgan Kaufmann, 1988.
[Herstein, 1975] I. N. Herstein. Topics in Algebra. John Wiley & Sons, second edition, 1975.
[Nilsson, 1986] N. J. Nilsson. Probabilistic logic. Artificial Intelligence, 28:71-87, 1986.
[Papoulis, 1984] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, second edition, 1984.
[Pitt, 1989] L. Pitt. Probabilistic inductive inference. Journal of the ACM, 36(2):383-433, April 1989.
[Shvaytser, 1988] H. Shvaytser. Representing knowledge in learning systems by pseudo boolean functions. In Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, pages 245-259. Morgan Kaufmann, 1988.

670 KNOWLEDGE REPRESENTATION
Very Fast Decision Table Execution of Propositional Expert Systems

Robert M. Colomb and Charles Y.C. Chung
CSIRO Division of Information Technology
Box 1599, North Ryde NSW 2113, Australia
colomb@syd.dit.csiro.au

Abstract

A formal equivalence between propositional expert systems and decision tables is proved, and a practicable procedure given to perform the transformation between propositional expert systems and decision tables. The method gave an order of magnitude speed increase for a well-known expert system in routine use. The method is very general: adaptations are shown for forward and backward chaining inferencing engines, inexact reasoning, and systems where some facts have a high cost and must be determined only if necessary. A particular application for the decision table representation is in real-time expert systems, since a simple hardware implementation is available which gives further orders of magnitude increase in performance. Finally, the decision table representation greatly simplifies the problem of completeness and consistency checking.

Introduction

Expert systems which rely on the propositional calculus are very common, and the standard implementations are very computationally expensive, both in time and memory. For example, the COLOSSUS system (Beinat & Smart 1989) has 6500 rules, and runs on a very large mainframe computer. It requires 20 megabytes of real memory per user, and has a response time measured in minutes. In this paper, we show that a propositional expert system can be mechanically transformed into a decision table. Decision tables are very simple computational structures which can be executed very quickly and require little memory. With very simple hardware assist, it is possible to build systems with hundreds of rules with execution times of a few tens of microseconds, which could greatly expand the useful domain of expert system technology, especially in real time applications.
In addition, the decision table representation greatly simplifies the problem of checking rules for completeness and consistency.

After a few definitions, we present a general proof of the equivalence of propositional expert systems and decision tables. We then describe a computationally practicable algorithm for performing the transformation, and describe its application to a real system. The algorithm can be easily adapted to a much more space efficient form, shown in the next section. We describe generalizations to inexact reasoning systems and systems where some facts have a high cost and must be obtained only if necessary. We show how the decision table form is particularly adapted to real time applications, and finally consider consistency and completeness checking in the decision table representation. The paper closes with a conclusion section.

This restriction to propositional systems differs from the main stream of parallel production system research (Gupta et al. 1986, Stolfo 1985), which concentrates on systems like OPS-5, which are based on the first order predicate calculus.

Definitions

A propositional expert system is a set of propositions in Horn clause form. Each clause consists of an antecedent, which is a conjunction of elementary propositions, and a consequent, consisting of a single elementary proposition. A clause will be called a rule in the following. Note that a proposition with disjunctions in its antecedent or conjunctions in its conclusion can be easily transformed into clausal form. Elementary propositions can be classified into three mutually exclusive groups: facts, which appear only in antecedents; conclusions, which appear only as consequents; and assertions, which appear both in antecedents and consequents. We designate the set of facts as F = {fi}, conclusions as C = {ci}, assertions as A = {ai}, and rules as R = {ri}. It is convenient to consider a fact as an assignment to a variable of one of a small number of possible values.
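As an illustration of these definitions (the encoding below is ours, not the paper's), rules can be held as (antecedent, consequent) pairs and the fact/conclusion/assertion partition read off from where each proposition occurs:

```python
# Classify elementary propositions by where they occur in the rule set:
# facts appear only in antecedents, conclusions only as consequents,
# and assertions on both sides.
def classify(rules):
    antecedents = set().union(*(ante for ante, _ in rules))
    consequents = {cons for _, cons in rules}
    return (antecedents - consequents,      # facts
            consequents - antecedents,      # conclusions
            antecedents & consequents)      # assertions

rules = [({"f1", "f2"}, "a1"),   # f1 & f2 -> a1
         ({"a1", "f3"}, "c1")]   # a1 & f3 -> c1
facts, conclusions, assertions = classify(rules)
print(facts, conclusions, assertions)
```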
We thus obtain a set X of variables xi, each of which has a set of possible values Vi, and every fact is a proposition of the form xi = v for v in Vi. An input I is a conjunction of assignments to a subset of the variables. The assignment is incomplete if the subset is proper. (An incomplete input corresponds to the values of some of the variables being unknown.) In applying an expert system R to an input I, we obtain a propositional system P consisting of the conjunction of I with the disjunction of the rules in R, and consider the conclusions C. A conclusion is determined by the input if it is a logical consequence of P, inconsistent with the input if its negation is a logical consequence of P, and consistent with the input if its negation is not a logical consequence of P. A conclusion consistent with an input may be determined by it, but not necessarily.

A decision table is a table with one column for each variable and an additional column for an action. Each row of the table contains an assignment of values to variables and an associated action. An incomplete assignment is equivalent to a don't care condition for the variables without values. A decision table is executed by presenting it with an assignment I. If the assignment of any row is a subset of I, its action is executed.

COLOMB AND CHUNG 671
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Equivalence Between Propositional Expert Systems and Decision Tables

A decision table is a propositional expert system. The assignment in a row is a conjunction of propositions, which is the antecedent of a rule. The action of a row is a conclusion. A row is therefore equivalent to a rule, and a collection of rules is a propositional expert system by definition. Note that the propositional system has only facts and conclusions, but no assertions. Such a system will be called a flat expert system and is clearly equivalent to a decision table.

A propositional expert system is a decision table.
Consider an expert system R with an input I. Associate with I the subset of conclusions determined from the propositional system P given by the conjunction of I with the disjunction of rules in R. Call this subset R(I). R(I) can always be computed since the propositional calculus is decidable. R can thereby be seen as a function mapping the set of assignments into the set of subsets of conclusions. The number of possible assignments is finite. The function can in principle be expressed in an extensional form by writing down each assignment I as a row in a table and appending to it the subset of conclusions R(I). This form is by definition a decision table.

This proof is constructive, but unfortunately not practicable, since the number of assignments is exponential in the number of variables. It is, however, general. In particular, it is independent of the inference engine used and of the details of any rules involving assertions.

A Practicable Transformation Algorithm

This section presents a practicable transformation algorithm, first by making some restrictive assumptions. It is then shown how the algorithm can be modified to remove many of the restrictions. The restrictive assumptions are that the system uses a forward chaining inference engine, and that no assertion is negated in either an antecedent or consequent of a rule.

We consider the rules ri in the expert system R as nodes in a graph. An arc is drawn between ri and rj if there is an assertion a which appears as the consequent of ri and in the antecedent of rj. We assume that this graph is acyclic: no proposition is a consequent of a rule of which it is an antecedent, or more generally, no proposition is a consequent of a rule which has an antecedent necessarily logically dependent on it. This graph is essentially the dependency graph of Nguyen, et al. (1985). We partition the set of rules into B, those all of whose antecedents are facts; K, those whose consequents are conclusions; and M, the others. (B and K are assumed
(B and K are assumed 672 KNOWLEDGEREPRESENTATION disjoint. Any rules in B intersect K are flat by definition, so can be removed and added to the flat expert system produced by the algorithm after its completion.) A node r can be labelled by the maximum number of arcs between it and a member of B, as shown in Figure 1. B is the set with label 0, and M can be partitioned into Ml, M2, etc. each Mi consisting of the nodes with label i. The label of K is defined as the maximum depth of reasoning of the system R. As shown in the figure, we also label each arc with the assertion from which the arc is derived and each assertion with the maximum label of a node with that assertion as a consequent. This labelling can be done in the course of verifying that the graph has no cycles using an algorithm successively removing nodes with no input arcs. 3 Figure 1 Graphical Representation of Rules Labelled by Distance From Base Lemma 1: Every assertion in an antecedent of a rule of label i must have a label less than i. Proof follows directly from the definitions. Lemma 2: There is at least one rule with every label up to the maximum depth of reasoning. Proof: if a level i were missing, then in the acyclic verification process when step i were reached there would be no nodes without input arcs and the graph would therefore be cyclic. The algorithm proceeds by replacing each assertion a in the antecedents of rules labelled i with the disjunction of the antecedents of rules in labelled i-l of which Vassertion a is a consequent, beginning with label 1 . This process in effect collapses the rules onto the conclusions by successively replacing intermediate assertions with expressions which imply them. The resulting propositions have only facts in their antecedents and only conclusions as consequents, can be expressed in clausal form, therefore form a flat expert system and therefore are equivalent to a decision table. 
A detailed presentation of the algorithm with a worked example is given in Colomb (1989). The assumptions can be relaxed.

Analysis of Algorithm

The algorithm can be divided into two parts: construction of a labelled dependency graph, and production of the decision table from the labelled graph. We begin with a table of rules. An auxiliary data structure is required: an alternatives count, which will contain for each assertion the number of rules having that assertion as consequent. The dimensions of the problem will be:

    r  the number of rules
    a  the average number of assertions per rule
    d  the maximum depth of reasoning

To construct the dependency graph, we first count the number of alternatives for each assertion. This requires one step per rule, therefore is O(r). Facts will be taken as having zero alternatives. We then proceed to identify the rules with label 1, which are those rules all of whose antecedents have zero alternatives. When a rule is labelled, we decrement the number of alternatives for its consequent assertion, and also record a pointer to the rule in a data structure associated with the assertion. This step requires examination of each antecedent in each rule, and is therefore O(ar). At the end of the step, additional assertions have a zero alternative count (follows from lemma 2). The graph can be completely constructed and labelled in one step for each possible label, bounded by the maximum depth of reasoning. Construction of the labelled dependency graph is therefore O(ard).

Production of the decision table is done by repeated traversal of sections of the dependency graph. The cost of a single traversal is highly dependent on the details of data representation, but requires at most examination of each antecedent in each rule, therefore is at most O(ar). One traversal is required for each row in the resulting decision table. It is therefore necessary to estimate the number of rows.
There will certainly be one row in the decision table for each rule whose consequent is a conclusion. Additional rows will arise from alternative paths for satisfying the antecedents of one of these terminal rules. An alternative arises if one of its antecedents is the consequent of more than one rule. It follows that the number of rows is equal to the number of terminal rules if the alternatives count is 1 for each assertion, and that the number of rows increases in a complex multiplicative way as the alternatives counts become greater than 1. The problem is the same as converting an arbitrary boolean expression into disjunctive normal form. For example,

    (a + b & (c + d)) & (e + f)

converts to

    a&e + b&c&e + b&d&e + a&f + b&c&f + b&d&f

We can compute the number of rows during the construction and labelling of the dependency graph. We need an additional data structure, full alternative counts, paralleling the alternative counts, having one entry for each assertion, and also for each conclusion. For a particular assertion, the latter is the number of rules having that assertion as consequent, while the former will be the number of disjuncts in the disjunctive normal form expression which implies that assertion starting from facts only. Full alternative count is a sort of transitive closure of alternative count, and is initially 0. A fact has a full alternative count of 1. When a rule is labelled, the full alternative count associated with its consequent is increased by the product of the full alternative counts of its antecedents. The total number of rows in the decision table is the total of the full alternative counts of all the conclusions. If N is the number of rows, then the production of the decision table is at most O(Nar). This step will tend to dominate the computation time.

Backward Chaining

The basic algorithm given above is based on a forward chaining inference engine, which starts with known facts and derives conclusions as the antecedents of rules become known.
If the rules labelled n are identified as layer n, this corresponds to a traversal of the dependency graph starting from layer 1. This traversal will tend to be layer by layer, necessarily so if negation by failure is employed. We will call the labels and layers obtained in the forward chaining approach forward labels and layers, respectively.

An alternative way to inference is to start from the rules with conclusions as consequents and work backwards, called backward chaining. If the inference engine is backward chaining, we note that a graph can be verified acyclic by successively removing nodes with no output arc. The dependency graph can therefore be labelled with minimum distance from K rather than maximum distance from B, and the algorithm modified accordingly. The maximum depth of reasoning d is clearly unchanged. These labels will be called backward labels. Note that if a node has backward label i it must have forward label less than d − i. This follows from the observation that step d of the backward algorithm removes nodes with no input arc.

The backward collapse therefore has the same result as the forward collapse, since it can be performed by the forward collapse algorithm on the graph with the backward labelling. Under the assumptions made so far, forward and backward chaining have exactly the same result, obtained in the forward chaining case by collapsing the dependency graph from the facts onto the conclusions through the assertions. In the backward chaining case, the assertions are subgoals, and the algorithm collapses the graph from the conclusions onto the facts through the subgoals. The two strategies can be viewed as alternative ways of constructing the function mapping the set of assignments into the set of subsets of conclusions.
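For reference, the full alternative counts row estimate from the analysis above can be sketched as follows (our rendering; an acyclic rule graph is assumed):

```python
# A fact counts 1; each labelled rule adds the product of its antecedents'
# full alternative counts to its consequent; the decision-table size is
# the total over the conclusions.
def row_estimate(rules, facts, conclusions):
    pending = {}                           # rules not yet counted, per consequent
    for _, cons in rules:
        pending[cons] = pending.get(cons, 0) + 1
    full = {f: 1 for f in facts}           # a fact has full alternative count 1
    ready = lambda p: p in full and pending.get(p, 0) == 0
    remaining = list(rules)
    while remaining:
        rest = []
        for ante, cons in remaining:
            if all(ready(p) for p in ante):
                prod = 1
                for p in ante:
                    prod *= full[p]
                full[cons] = full.get(cons, 0) + prod
                pending[cons] -= 1
            else:
                rest.append((ante, cons))
        if len(rest) == len(remaining):
            raise ValueError("cyclic rule graph")
        remaining = rest
    return sum(full.get(c, 0) for c in conclusions)

# Two alternative ways to establish a1 give a two-row table for c1:
rules = [({"f1"}, "a1"), ({"f2"}, "a1"), ({"a1", "f3"}, "c1")]
print(row_estimate(rules, facts={"f1", "f2", "f3"}, conclusions={"c1"}))  # 2
```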
Negated Assertions in Antecedents

We assume that the inference engine does not evaluate a rule with a negated assertion in its antecedent until it has evaluated all rules with that assertion as consequent. Since no rule has a negated assertion as a consequent, the inferencing must rely on the closed world assumption for negation. Let r be such a rule and let i be the maximum forward label of any negated assertion in its antecedent. Rule r is labelled with the maximum of i and the label of any un-negated assertion in its antecedent. When rule r is reached in the forward collapse algorithm, any negated assertion is replaced by the negation of the expression implying that assertion.

Negated Assertions as Consequents

In a system where negated assertions are allowed as consequents, the closed world assumption is not needed. An assertion and its negation are in most respects separate propositions and can be treated as such, with two exceptions. First, the negation of an assertion cannot label any arc leading from the base set to a rule for which that assertion is a consequent. Second, any input which would imply both the assertion and its negation is forbidden. The algorithm in its course identifies an expression in facts which implies each proposition. If we have for assertion a

    E1 -> a;
    E2 -> not a;

then a valid input must be consistent with not(E1 and E2).

Test Case

The algorithm has been successfully applied to the Garvan ES1 thyroid assay system (Buchanan 1986), which has been in routine use since 1984. It has 600 rules, and an average depth of reasoning of about 4. Some of the rules have negated assertions in their premises, but no rule asserts a negation. The system normally runs on a PDP-11 with a specialized inference engine. For purposes of comparison, it was translated into OPS-5 and run on a Microvax II, where it operates at about 18 rule firings per second, taking about 220 milliseconds to generate a conclusion.
It was transformed into a decision table with 5300 rows. There are 34 variables with a total of 93 possible values, so the decision table requires 5300 x 93 bits, or about 62k bytes of memory.

There are a number of ways to process a decision table. One is to convert it into a decision tree, using methods like ID3 (Quinlan 1982). This approach is presently under investigation. A balanced decision tree with N leaves identifies a particular leaf in log2(N) decisions, so that a table with 4096 rows would be computed in 12 decisions, each of which is a simple if...then statement. This approach would clearly be extremely fast on standard hardware. In addition, there is evidence that the decision table can be considerably reduced, also the subject of continuing research.

Execution results presented here are from a method using an inexpensive bit-serial content-addressable memory (Colomb & Allen 1989) acting as a co-processor on a Sun 3/160. It is capable of processing a decision table at a rate of about 100 million bits per second, and can compute a decision from the transformed Garvan ES1 in about 2 milliseconds. The processor used has a programming model similar to the MasPar, Distributed Array Processor, and the Connection Machine, all of which are commercially available fine-grained parallel machines. The MasPar, for example, would be able to execute the system in about 20 microseconds.

We can conclude from this that it is possible to transform a general propositional expert system into a form that is capable of execution in a time sufficiently short that it opens many possibilities for the use of expert systems in real time applications.

Generalizations

There are a number of practical issues in expert systems engineering which are not addressed by the preceding results. These include explanation capability, obtaining expensive facts only when necessary, and inexact reasoning. The results can be generalized to deal with all of these issues.
Explanation Capability

An important feature of expert systems is the ability to explain a conclusion or the reasons for asking a particular question. Most approaches to explanation follow the chain of assertions between the base facts and the conclusion, so are derived from the trace of the traversal of the dependency graph. In practice, many expert systems do not use explanations in their normal execution. (Garvan ES1, for example, is a batch program.) Jansen & Compton (1988) make a strong case for the separation of the normal execution environment, where explanations are not available, from a maintenance environment where a very complete explanation environment is provided. In any case, it is possible to adapt the main results to give an efficient computation structure which permits a complete explanation capability.

Rather than a decision table, this approach relies on executing the rules in sequence in a single pass. Recall that the algorithm labels each rule with the maximum number of inference steps between it and the base facts. From lemma 1, all antecedents of rules at level i are determined by rules of level less than i. In addition, rules at the same level can be evaluated in any order. Clearly, if the rules are sorted by level, it is possible to execute them in a single pass. It is only necessary to keep a table of the value of each of the assertions (initialized to false if the closed world assumption is used) which is updated by any rule firing whose consequent makes that assertion. Since no assertion found in the antecedent of a rule can be changed by any subsequent rule, a complete explanation capability is available. For example, if the question is "why did a particular rule not fire?", the table of assertions will contain the values of all the antecedents of that rule at the time it was considered.
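A minimal sketch of this single-pass scheme (our rendering, with the rule encoding used earlier and the closed world assumption):

```python
# Rules sorted by forward label fire once each; the set of established
# propositions doubles as the table of values used for explanation.
def single_pass(sorted_rules, input_facts):
    """sorted_rules: (antecedent set, consequent) pairs in label order."""
    known = set(input_facts)     # closed world: anything absent is false
    trace = []                   # (antecedent, consequent) firings, for "why"
    for ante, cons in sorted_rules:
        if ante <= known:        # all antecedents held when considered
            known.add(cons)
            trace.append((ante, cons))
    return known, trace

known, trace = single_pass(
    [({"f1", "f2"}, "a1"), ({"a1", "f3"}, "c1")], {"f1", "f2", "f3"})
print(sorted(known - {"f1", "f2", "f3"}))   # ['a1', 'c1']
```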
A further explanation can be obtained in a similar way by examining the rules in earlier layers which have a particular assertion as consequent.

One way to represent this system is as a decision table with one row per rule and one column for each possible value of each fact, augmented by one column for each possible value of each assertion. The Garvan system noted above can be represented as a decision table with 600 rows. There are 52 assertions, so 104 columns are needed besides the 93 required for the 34 fact variables, so that 600 x (104 + 93) bits, or about 15k bytes, are needed for its storage, considerably less than the 62k bytes needed for the fully expanded decision table. The Knowledge Dictionary of Jansen & Compton (1988) has been re-implemented with an inference engine employing the method of this section (Lee, 1990). Note that the Garvan ES1 inference engine (Horn et al. 1985) takes essentially this approach to get fast execution on a PDP-11 with limited memory. Their approach can now be seen to be quite general.

Expensive Facts

The previous results make the implicit assumption that all facts are available at the beginning of inference, and all facts have equal cost. In practice, some facts may have a high cost, perhaps because they require database access or questioning the user. In this case, it is usual to first make use of the inexpensive facts available at the beginning, obtaining the expensive facts only if necessary. There will usually be rules whose consequent is not an assertion, but a command to assign values to a group of variables. The set of facts can be labelled in the same way as the assertions, with facts available immediately labelled zero. In the decision table representation, the column headings can be sorted in increasing order of label. If the table is processed left to right, by the time a column labelled one or more is reached, the conditions under which that column is needed would be able to be evaluated.
In the single-pass representation, the rule whose consequent is to obtain these facts will be in its correct place in the sequence. Note that in this case the choice of forward or backward chaining affects the sequence in which expensive facts are obtained, since the dependency graph is traversed in a different order. This order is preserved in the sequence of column headings in the resulting decision table, or in the sequence of rules in the single pass version.

Inexact Reasoning

Some expert systems use one or another form of inexact reasoning. The result can be adapted to this situation, although insufficient research has been conducted to determine the practicality of the method. First, an uncertainty measure can be appended to each proposition. An assignment of values to variables would also assign an uncertainty measure. The subset of conclusions would also have uncertainty measures. The main theorem still holds. Second, in the forward chaining algorithm, the uncertainty measure can be propagated as a tree of function composition. For example, if u(x) is the uncertainty of proposition x, we might have

    a & b -> c    u(c) = f(u(a), u(b))
    c & d -> e    u(e) = f(u(c), u(d))

then we would have

    u(e) = f(f(u(a), u(b)), u(d))

If the uncertainty propagation function is associative, it is not necessary to record the tree of inferences by which the assertions are eliminated, and the uncertainty of a conclusion can be computed directly from the uncertainties of the base facts in its antecedent. In particular, the commonly employed Bayesian measure of uncertainty is a priori independent of the intermediate assertions, since the joint probability of conclusions and base facts is known in principle independently of the reasoning system.

Advantages of Decision Table Representation

Representation of an expert system as a decision table has advantages apart from the possibility of faster execution.
Real Time

The main impact of these results on real-time systems is that execution time is not only much faster than conventional implementations, but it is also bounded. If the decision table is converted into a decision tree, the maximum number of decisions is known. If the decision table is processed directly by a fine-grained parallel processor, the maximum number of column operations needed is known.

A secondary benefit, particularly when the decision table is processed directly using a fine-grained parallel processor, comes from the fact that in real time situations facts are sometimes available asynchronously. In the decision table representation, it is very simple to compute the set of conclusions consistent with any assignment, no matter how incomplete. If we collect facts as they become available into what can be called a current assignment, we can always associate with the current assignment the set of conclusions consistent with it. Of course, we can in particular note the conclusions determined by the current assignment. There might be a subset of conclusions considered to be important for some reason. It would be easy to monitor whether any important conclusions were consistent with the current assignment, for example. The possibility of one of these might trigger some additional measurements.

When a measurement is made which conflicts with a fact in the current assignment, the associated conclusions of the new current assignment can be computed quickly, perhaps making complex truth maintenance algorithms less necessary.

Rule Induction

Since propositional expert systems are equivalent to decision tables, it is more plausible that rule induction methods which build decision trees from examples (e.g. Quinlan 1986) are generally applicable.

Consistency and Completeness

The decision table representation is much easier to test for consistency and completeness. The methods advocated by e.g.
Cragun & Steudel (1987) are seen to be generally applicable.

Conclusion

The equivalence of propositional expert systems and decision tables has been shown, and a practicable algorithm presented for transforming an expert system into a decision table. The algorithm has been successfully tested on a substantial system of real utility. The method is capable of generalization to accommodate many of the practical problems encountered in practice, and makes consistency and completeness checking much easier. A particular consequence is that the computation time for these systems can be reduced by orders of magnitude, potentially greatly increasing the applicability of expert systems technology, especially for real time problems.

Acknowledgements

Thanks to the referees for their helpful suggestions.

References

Beinat, P. & Smart, R. (1989) COLOSSUS: Expert Assessor of Third Party Claims. Fifth Australian Conference on Applications of Expert Systems, Sydney, Australia, pp. 70-85.
Buchanan, B. (1986) Expert Systems: Working Systems and Research Literature. Expert Systems 3(1):32-51.
Colomb, R.M. (1989) Representation of Propositional Expert Systems as Decision Tables. Third Joint Australian Artificial Intelligence Conference (AI'89), Melbourne, Victoria, Australia.
Colomb, R.M. & Allen, M.W. (1989) Architecture of the Column Computer. Conference on Computing Systems and Information Technology, Institution of Engineers, Australia.
Cragun, B.J. & Steudel, H.J. (1987) A Decision-Table-Based Processor for Checking Completeness and Consistency in Rule-Based Expert Systems. International Journal of Man-Machine Studies 26(5):633-648.
Gupta, A., Forgy, C., Newell, A., & Wedig, R. (1986) Parallel Algorithms and Architectures for Rule-Based Systems. 13th Annual International Symposium on Computer Architecture, IEEE, 28-37.
Horn, K.A., Compton, P., Lazarus, L., & Quinlan, J.R.
(1985) An Expert Computer System for the Interpretation of Thyroid Assays in a Clinical Laboratory. Australian Computer Journal 17(1):7-11.
Jansen, R. & Compton, P. (1988) The Knowledge Dictionary: An Application of Software Engineering Techniques to the Design and Maintenance of Expert Systems. AAAI-88 Workshop on Integration of Knowledge Acquisition and Performance Systems, Minnesota, USA.
Lee, M. R.-Y. (1990) The Implementation of a Knowledge Dictionary in SQL. Technical Report TR-FD-90-02, CSIRO Division of Information Technology, Sydney, Australia.
Nguyen, T.A., Perkins, W.A., Laffey, T.J., & Pecora, D. (1985) Checking an Expert System Knowledge Base for Consistency and Completeness. IJCAI-85, Morgan Kaufmann.
Quinlan, J.R. (1982) Semi-Autonomous Acquisition of Pattern Based Knowledge, in Hayes, J.E., Michie, D., and Pao, Y-H, eds, Machine Intelligence 10, Ellis Horwood, 159-172.
Quinlan, J.R. (1986) Induction of Decision Trees. Machine Learning 1(1):81-106.
Stolfo, S.J. (1985) On the Design of Parallel Production System Machines: What's in a LIP? Proceedings 18th Hawaii International Conference on System Science.
The Intelligent Database Interface: Integrating AI and Database Systems

Donald P. McKay and Timothy W. Finin and Anthony O'Hare*
Unisys Center for Advanced Information Technology
Paoli, Pennsylvania
mckay@prc.unisys.com and finin@prc.unisys.com

Abstract

The Intelligent Database Interface (IDI) is a cache-based interface that is designed to provide Artificial Intelligence systems with efficient access to one or more databases on one or more remote database management systems (DBMSs). It can be used to interface with a wide variety of different DBMSs with little or no modification since SQL is used to communicate with remote DBMSs and the implementation of the IDI provides a high degree of portability. The query language of the IDI is a restricted subset of function-free Horn clauses which is translated into SQL. Results from the IDI are returned one tuple at a time and the IDI manages a cache of result relations to improve efficiency. The IDI is one of the key components of the Intelligent System Server (ISS) knowledge representation and reasoning system and is also being used to provide database services for the Unisys spoken language systems program.

Introduction

The Intelligent Database Interface (IDI) is a portable, cache-based interface designed to provide artificial intelligence systems in general and expert systems in particular with efficient access to one or more databases on one or more remote database management systems (DBMS) which support SQL [Chamberlin, et. al., 1976]. The query language of the IDI is the Intelligent Database Interface Language (IDIL) [O'Hare, 1989] and is based on a restricted subset of function-free Horn clauses where the head of a clause represents the target list (i.e., the form of the result relation) and the body is a conjunction of literals which denote database relations or operations on the relations and/or their attributes (e.g., negation, aggregation, and arithmetic operations).
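As a rough illustration of the kind of translation involved (the function and the column naming scheme below are invented for exposition; they are not the IDI's actual interface), a conjunctive Horn-clause query maps naturally onto a SQL SELECT:

```python
# Translate a function-free Horn-clause body into a SQL conjunctive query.
# Shared variables become equi-join conditions; constants become filters.
def horn_to_sql(head_vars, body):
    """body: list of (table, terms); capitalised terms are variables."""
    binding, conds, froms = {}, [], []
    for i, (table, terms) in enumerate(body):
        alias = f"t{i}"
        froms.append(f"{table} {alias}")
        for col, term in enumerate(terms):
            ref = f"{alias}.c{col}"            # assume columns named c0, c1, ...
            if term[:1].isupper():             # variable term
                if term in binding:
                    conds.append(f"{ref} = {binding[term]}")   # equi-join
                else:
                    binding[term] = ref
            else:                              # constant term
                conds.append(f"{ref} = '{term}'")
    sql = f"SELECT {', '.join(binding[v] for v in head_vars)} FROM {', '.join(froms)}"
    return sql + (f" WHERE {' AND '.join(conds)}" if conds else "")

# answer(F) :- flight(F, 'bos', 'dfw')
print(horn_to_sql(["F"], [("flight", ["F", "bos", "dfw"])]))
# SELECT t0.c0 FROM flight t0 WHERE t0.c1 = 'bos' AND t0.c2 = 'dfw'
```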
The IDI is one of the key components of the Intelligent System Server (ISS) [Finin, et al., 1989], which is based on Protem [Fritzson and Finin, 1988] and provides a combined logic-based and frame-based knowledge representation system supporting forward-chaining, backward-chaining, and truth maintenance. The IDI was designed to be compatible with the logic-based knowledge representation scheme of the ISS and its tuple-at-a-time inference mechanisms. The IDI has also been used to implement a query server supporting a database used for an Air Travel Information System which is accessed by a spoken language system implemented in Prolog [Dahl, et al., 1990].

*Current address: IBM, Research Triangle Park, North Carolina.

In addition to providing efficient access to remote DBMSs, the IDI offers several other distinct advantages. It can be used to interface with a wide variety of different DBMSs with little or no modification since SQL is used to communicate with the remote DBMS. Also, several connections to the same or different DBMSs can exist simultaneously and can be kept active across any number of queries because connections to remote DBMSs are abstract objects that are managed as resources by the IDI. Finally, accessing schema information is handled automatically by the IDI, i.e., the application is not required to maintain up-to-date schema information for the IDI. This significantly reduces the potential for errors introduced by stale schema information or by hand-entered data.

The IDI can be viewed as a stand-alone DBMS interface which accepts queries in the form of IDIL clauses and returns the result relation as a set of tuples (i.e., a list of Lisp atoms and/or strings). IDIL queries are translated into SQL and sent to the appropriate DBMS for execution. The results from the DBMS are then transformed by the IDI into tuples of Lisp objects.
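To make the tuple-at-a-time result interface concrete, here is a minimal Python sketch (hypothetical: the actual IDI is written in Common Lisp, and the class and method names below are invented for illustration). Raw rows from the DBMS result stream are converted to tuples only as the consumer asks for them, so unconsumed rows cost nothing:

```python
# Hypothetical sketch of the IDI's demand-driven result interface.
# A raw DBMS result stream is wrapped so that rows are converted to
# tuples of atoms one at a time, on demand.

class ResultGenerator:
    def __init__(self, raw_stream):
        self._stream = iter(raw_stream)   # stand-in for a DBMS result stream
        self._done = False

    def next_tuple(self):
        """Fetch and convert one tuple; return None when exhausted."""
        if self._done:
            return None
        try:
            return tuple(next(self._stream))
        except StopIteration:
            self._done = True
            return None

    def terminate(self):
        """Discard any remaining tuples without converting them."""
        self._done = True
```

A consumer that needs only the first few tuples simply stops calling `next_tuple` (or calls `terminate`), and the remainder of the stream is never materialized.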
Although the IDI was not designed to be used directly by a user, the following descriptions will be couched in terms of using the IDI as a stand-alone system so that we may avoid complicating our discussions with the details of an AI system such as the ISS.

The design of the IDI was heavily influenced by previous research in the area of AI/DB integration [Kellog, et al., 1986, O'Hare, 1987, O'Hare and Travis, 1989, O'Hare and Sheth, 1989]. One of the more significant design criteria that this led to is the support of non-trivial queries in IDIL, that is, queries involving more than just a single database relation. This capability allows the AI system to off-load computations that are more efficiently processed by the DBMS instead of the AI system (e.g., join operations). In many cases, this also has the effect of reducing the size of the data set that is returned by the DBMS.

MCKAY ET AL. 677 From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Figure 1: Of the four alternative approaches to AI/DB integration, the Intelligent Database Interface is an example of an enhanced AI/DB interface.

While the IDI is to some small degree system dependent, it does offer a high degree of portability because it is implemented in Common Lisp, communicates with remote DBMSs using SQL and standard UNIX pipes, and represents IDIL queries and their results as Common Lisp objects.

In the following sections we present a brief overview of the area of AI/DB integration, which represents a large part of the motivation for the IDI; a discussion of some of the more significant features of the IDI; the organization and major components of the IDI; and finally an example of how the IDI is being used in two applications.

AI/DB Integration

The integration of AI and DBMS technologies promises to play a significant role in shaping the future of computing.
As noted in [Brodie, 1988], AI/DB integration is crucial not only for next generation computing but also for the continued development of DBMS technology and for the effective application of much of AI technology.

While both DBMS and AI systems, particularly expert systems, represent well established technologies, research and development in the area of AI/DB integration is comparatively new. The motivations driving the integration of these two technologies include the need for (a) access to large amounts of shared data for knowledge processing, (b) efficient management of data as well as knowledge, and (c) intelligent processing of data. In addition to these motivations, the design of the IDI was also motivated by the desire to preserve the substantial investment represented by most existing databases. To that end, a key design criterion for the IDI was that it support the use of existing DBMSs as independent system components. As illustrated in Figure 1 and described below, several general approaches to AI/DB integration have been investigated and reported in the literature (e.g., [Bocca, 1986, Chakravarthy, et al., 1982, Chang, 1978, Chang and Walker, 1984, Jarke, et al., 1984, Li, 1984, Minker, 1978, Morris, 1988, Naish and Thom, 1983, Reiter, 1978, Van Buer, et al., 1985]).

Extending the AI System: In this approach, the AI system is extended with DBMS capabilities to provide efficient access to, and management of, large amounts of stored data. In general, such systems do not incorporate full DBMS technology. Rather, the emphasis is on the AI system and the DBMS capabilities are added in an ad hoc and limited manner, e.g., [Ceri, et al., 1986] implements only the data access layer. Alternatively, a new generation knowledge-based system such as LDL [Chimenti, et al., 1987] may be constructed. In either case, this approach effectively involves "re-inventing" some or all of DBMS technology.
While such systems typically provide sophisticated tools and environments for the development of applications such as expert systems, they cannot readily make use of existing databases. Thus, the development of AI applications which must access existing databases will be exceedingly difficult if not impossible (e.g., when the database is routinely accessed and updated via more traditional kinds of applications).

Extending the DBMS System: This approach extends a DBMS to provide knowledge representation and reasoning capabilities, e.g., POSTGRES [Stonebraker, et al., 1987]. Here, the DBMS capabilities are the central concern and the AI capabilities are added in an ad hoc manner. The knowledge representation and reasoning capabilities are generally quite limited, and they lack the sophisticated tools and environments of most AI systems. Such systems do not directly support the use of existing DBMSs, nor can they directly support existing AI applications (e.g., expert systems) without substantial effort on the part of the application developer. In some sense, this is the opposite of the previous approach.

Loose Coupling: The loose coupling approach to AI/DB integration uses a simple interface between the two types of systems to provide the AI system with access to existing databases, e.g., KEE-connection [Abarbanel and Williams, 1986]. While this approach has the distinct advantage of integrating existing AI systems and existing DBMSs, the relatively low level of integration results in poor performance and limited use of the DBMS by the AI system. In addition, access to data from the database, as well as the data itself, is poorly integrated into the representational scheme of the AI system. Reconciling the highly divergent methods of representing data (e.g., relational data models vs. frames) is generally left to the application developer or knowledge engineer with only minimal support from the AI/DB interface.
Enhanced AI/DB Interface: The last approach to AI/DB integration represents a substantial enhancement of the loosely coupled approach and provides a more powerful and efficient interface between the two types of systems. As with the previous approach, this method of AI/DB integration allows immediate advantage to be taken of existing AI and DB technologies as well as future advances in them. The problems of performance and under-utilization of the DBMS by the AI system are handled with differing degrees of success, first by increasing the functionality of the interface itself, and then if necessary, by enhancing either the AI system or the DBMS. For example, the BERMUDA system [Ioannidis, et al., 1988] uses a form of result caching to improve efficiency and performs some simple pre-analysis of the AI application to identify join operations that can be performed by the DBMS rather than the AI system. The BrAID system [O'Hare and Sheth, 1989] is similar except that it supports more general caching and pre-analysis capabilities and allows for experimentation with different inference strategies.

The IDI is an interface that can be used to facilitate the development of AI/DB systems using this last approach. That is, the IDI is a cache-based interface to DBMSs and is designed to be easily integrated into various types of AI systems. The design of the IDI also allows it to be used as an interface between DBMSs and other types of applications such as database browsers and general query processors.

Design Features

The IDI supports several features which simplify its use in an AI system. These include:

- Connections to a DBMS are managed transparently, so that there can be multiple active queries to the same database using a single open connection.
- Connections to a given database are opened upon demand, i.e., at first use, instead of requiring an explicit database open request.
- Database schema information is loaded from the database either when the database is opened or when queries require schema information, based upon user declarations.
- The query interface is a logic-based language but uses user-supplied functions to declare and recognize logic variables.
- Results of queries to a DBMS are cached, improving the overall performance of the system, and the cache is accessed transparently by a query manager.

All but the last of these features are described in this section. The cache system and initial performance results are described in subsequent sections.

Making Connections

As suggested above, there are numerous approaches to interfacing an AI system with existing DBMSs. However, the basic alternatives involve balancing the costs of creating the connection to the DBMS and of processing the result relations from a DBMS query. Deciding which alternative is the best requires knowledge about the typical behavior of the AI system as well as other, more obvious factors, such as communication overhead (cf. [O'Hare and Sheth, 1989]). Consider the following two modes of interaction between an AI system and a DBMS:

- The AI system generates a few DBMS queries that tend to yield very large results, and the AI system uses only a fraction of each result.
- The AI system generates many DBMS queries that tend to yield, on average, small results, and the AI system uses most or all of each result.

In the first case, it would be best to avoid the cost of processing the entire result by using demand-driven techniques to produce only one tuple at a time from the result stream of the DBMS. However, this requires that separate connections be created for each DBMS query. Thus the overhead of creating such connections must be less than the cost of processing the entire result relation. In the second case, it would be best to avoid the cost of creating numerous connections to the DBMS by using a single connection for multiple queries.
However, this requires that the entire result of each query be processed so that successive queries can be run using the same connection. The cost of processing DBMS results (i.e., reading the entire result stream and storing it locally) must be less than the cost of creating a new connection for each DBMS query.

For most systems, it seems reasonable to assume that the total cost of creating a new DBMS connection will be relatively high. Thus, using the same connection for different DBMS queries would result in a net savings. While specific break-even points could be estimated, it is not clear one need go that far, since there are other reasons for minimizing the number of DBMS connections that are open at the same time. Foremost among these is the limit that most operating systems place on the number of streams that can be open simultaneously. This can severely limit the number of DBMS connections that can exist at one time. If one is also interested in allowing connections to different databases, on either the same or a different DBMS, then it is important to minimize the number of open connections for a single database. Yet another consideration is the use of caching for DBMS results. That is, if DBMS results can be cached locally by the AI system or an agent serving it, then all of the DBMS results will probably be processed by the caching mechanism. Thus, the first alternative (where it is assumed that the DBMS results will not, in general, be totally consumed) is no longer applicable.

In light of these constraints and requirements, it seems best to minimize the number of DBMS connections that can be open simultaneously. Briefly, the approach taken in the IDI is to open a connection when a DBMS query is encountered against a database for which no connection exists, and to process the result stream one tuple at a time until and unless another DBMS query on the same database is encountered.
At that point, the new query is sent to the DBMS, the remainder of the result stream for the previous query is consumed and stored locally, and then the new result stream is processed one tuple at a time as before.

Automating Access to Schema Information

One of the key features of the IDI is the automatic management of database schema information. The user or application program is not required to provide any schema information for those database relations that are accessed via IDIL queries. The IDI assumes the responsibility for obtaining the relevant schema information from the appropriate DBMS. This provides several significant advantages over interfaces which rely on the user to provide schema information. Most importantly, the schema information will necessarily be consistent with that stored in the DBMS, and thus any errors introduced by hand-coding the schema information are eliminated. The only exception to this occurs when the schema on the DBMS is modified after the IDI has accessed it, since the IDI caches the schema information and thus maintains a private copy of it. While this stale data problem exists for any system which maintains a separate copy of the schema information, the IDI provides a simple mechanism for forcing the schema information to be updated. In addition, this approach greatly facilitates the implementation of database browsers, since users need not know the names or structure of relations stored in a particular database.

Logical Glue

Another significant feature of the IDI is the relative ease with which it can be integrated with different AI systems. Aside from the use of Common Lisp as the implementation language for the IDI, this is achieved by employing a logic-based language as the query language for the IDI.
The language, IDIL, may be used as a totally independent query language or, more importantly, it may be more closely integrated with the knowledge representation language of a logic-based AI system. In the latter case, the key is to allow the IDI to share the same definition of a logic variable as the host AI system. This is accomplished by simply redefining a small set of functions within the IDI which are used to recognize and create instances of logic variables.

The IDIL query language (pronounced "idle", and not to be confused with "idyll") is a restricted subset of function-free Horn clauses where the head of a clause represents the target list (i.e., the form of the result relation) and the body is a conjunction of literals which denote database relations or operations on the relations and/or their attributes (e.g., negation, aggregation, and arithmetic operations). Figure 2 shows some example queries.

Get supplier names for suppliers who do not supply part p2.

    ((ans ?Sname) <-
      (supplier ?Sno ?Sname ?Status ?City)
      (not (supplier_part ?Sno "p2" ?Qty)))

Get supplier names and quantity supplied for suppliers that supply more than 300 units of part p2.

    ((ans ?Sname ?Qty) <-
      (supplier ?Sno ?Sname ?Status ?City)
      (supplier_part ?Sno "p2" ?Qty)
      (> ?Qty 300))

Figure 2: Two example IDIL queries using the "suppliers" database. Symbols beginning with a "?" character have been declared to be logic variables.

IDI Organization

As Figure 3 illustrates, there are four main components which comprise the IDI: the Schema Manager, the DBMS Connection Manager, the Query Manager, and the Cache Manager. There are three principal types of inputs or requests to the IDI: (a) a database declaration; (b) an IDIL query and subsequent retrieval requests against the result of an IDIL query; and (c) advice to the Cache Manager.
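To illustrate the kind of translation involved, here is a hypothetical Python sketch (the paper gives no translator code; the schema, function names, and the "?"-prefix variable convention below are invented for illustration) that maps a simple conjunctive IDIL-style query onto a single SQL SELECT. Shared logic variables become join conditions and constants become selection conditions; negation and aggregation are omitted:

```python
# Hypothetical sketch of IDIL-to-SQL translation for simple conjunctive
# queries.  Body literals are tuples: (relation_name, arg1, arg2, ...);
# strings starting with "?" are logic variables, anything else is a
# constant.  A toy schema stands in for the Schema Manager.

SCHEMA = {
    "supplier": ["sno", "sname", "status", "city"],
    "supplier_part": ["sno", "pno", "qty"],
}

def is_var(arg):
    return isinstance(arg, str) and arg.startswith("?")

def translate(head_vars, body):
    """Translate a conjunctive IDIL-style query into one SQL string."""
    var_cols = {}      # logic variable -> first column it binds
    conditions = []
    tables = []
    for i, (rel, *args) in enumerate(body):
        alias = f"t{i}"
        tables.append(f"{rel} {alias}")
        for col, arg in zip(SCHEMA[rel], args):
            qcol = f"{alias}.{col}"
            if is_var(arg):
                if arg in var_cols:          # shared variable -> join condition
                    conditions.append(f"{qcol} = {var_cols[arg]}")
                else:
                    var_cols[arg] = qcol
            else:                            # constant -> selection condition
                conditions.append(f"{qcol} = '{arg}'")
    select = ", ".join(var_cols[v] for v in head_vars)
    sql = f"SELECT {select} FROM {', '.join(tables)}"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql
```

Applied to the second Figure 2 query (minus the arithmetic comparison), this yields a two-table SELECT whose WHERE clause joins the two relations on the supplier number and selects the "p2" tuples.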
Database declarations convey simple information about a given database, e.g., the type of the DBMS on which the database resides and the host machine for the DBMS. For each IDIL query, the IDI returns a generator which can be used to retrieve the result relation of an IDIL query one tuple at a time. The IDI also supports other types of requests, e.g., access to schema information, which are described elsewhere [O'Hare, 1989].

The Schema Manager (SM) is responsible for managing the schema information for all declared databases, and it supplies the Query Manager with schema information for individual database relations. This entails processing database declarations, accessing and storing schema information for declared databases, and managing relation name aliases, which are used when two or more databases contain relations with identical names. Whenever a connection to a database is created, the SM automatically accesses the list of relation names that are contained within the database. This list is then cached for later access in the event that the connection is closed and re-opened at some later time. In this event the SM will only access the DBMS schema information if it is explicitly directed to do so; otherwise the cached list of relation names will be used.

The DBMS Connection Manager (DCM) manages all database connections to remote DBMSs. This includes processing requests to open and close database connections as well as performing all the low-level I/O operations associated with the connections. Within the IDI, each database has at most one active connection associated with it, and each connection has zero or more query result streams or generators associated with it, but only one generator may be active.

The Query Manager (QM) is responsible for processing IDIL queries and managing their results. IDIL queries are processed by translating them into SQL
Figure 3: The IDI includes four main components: the Schema Manager manages the schema information for all declared databases; the DBMS Connection Manager handles connections to remote DBMSs; the Query Manager is responsible for processing IDIL queries and their results; and the Cache Manager controls the cache of query results in accordance with the advice supplied by the application.

which is then sent to the appropriate DBMS by the DCM. If the query is successfully executed by the DBMS, then the QM returns a generator for the result relation. A generator is simply an abstract data type used to represent the result of an IDIL query. There are two basic types of operations which may be performed on a generator: (a) get the next tuple from the result relation and (b) terminate the generator (i.e., discard any remaining tuples). Generators are actually created and managed by the DCM, since there is more than one possible representation for a result relation, e.g., it may be a result stream from a DBMS or a cache element. The QM merely passes generators to the DCM along with requests for the next tuple or termination.

The Cache Manager is responsible for managing the cache of query results. This includes identifying IDIL queries for which the results exist in the cache, caching query results, and replacing cache elements. In addition, our design allows the AI system to provide the cache manager with advice to help it decide how to manage its cache and make the following kinds of critical decisions:

- pre-fetching: which relations should be fetched (and when) in anticipation of needing them? This can yield a significant increase in speed since the database server is running as a separate process. This can also be used to advantage in an environment in which databases are accessed over a network in which links are unreliable: critical database relations can be accessed in advance to ensure their availability when needed.
- result caching: which query results should be saved in the cache? Both base and derived relations vary in their general utility. Some will definitely be worth caching since they are likely to be accessed soon, and others not.
- query generalization: which queries can be usefully generalized before submitting them to the DBMS? Query generalization is a useful technique for reducing the number of queries which must be made against the database in many constraint satisfaction expert systems. It is also a general technique for handling expected subsequent queries after a "null answer" [Motro, 1986].
- replacement: which relations should be removed when the cache becomes full?

Additional kinds of advice and examples can be found in [O'Hare and Travis, 1989, O'Hare and Sheth, 1989].

As with any type of cache-based system, one of the more difficult design issues involves the problem of cache validation, that is, determining when to invalidate cache entries because of updates to the relevant data in the DBMS. Our current implementation does not attempt cache validation, which will be a focus of future research. This still leaves a large class of applications for which cache validation is not a problem. These include access to databases that are write-protected and updated infrequently, such as the Official Airline Guide database, and databases that are static relative to the time scale of the AI application accessing them. Moreover, this problem is common to any AI system which gets some of its data from an external source and stores it in its knowledge base. Most current interfaces between AI systems and databases (e.g., KEE Connection [Intellicorp, 1987]) simply do not worry about this problem at all. Our approach attempts to minimize the AI system's copying of database data in two ways. First, by providing convenient and efficient access to the information in the database, the AI system developers will have less need to make a local copy of the data.
Second, all of the database information that is copied (i.e., in the cache) is isolated in one place and can therefore be more easily "managed", reducing the problem to one of cache validation.

There are a number of approaches to the validation problem which vary in completeness and ease of implementation. Examples of possible components of a cache validation system include: only caching relations declared to be non-volatile, only caching data between scheduled DB updates, using heuristics (e.g., a decay curve) to estimate data validity, and implementing a "snoopy cache" which monitors the database transactions for updates which might invalidate the cached data.

Figure 4: The aggregate processing time is broken down in terms of the three main stages of processing: translation, execution, and collection. For each processing stage, the minimum, mean, and maximum processing times are shown.

Current Status

Performance

The IDI, as described here, has been implemented in Common Lisp and tested as a stand-alone query processor against two different databases running on RTI INGRES, and is also being used as a query server for the Unisys spoken language project. The performance results obtained thus far are, at best, preliminary, since the size of the test suite was comparatively small and the IDI is just now being integrated with an AI system. However, the results are encouraging and indicate the potential for efficient database access afforded by the IDI. The following summarize some of the more interesting of these performance results.

One test set of IDIL queries consisted of 48 queries, of which 22 were unique, i.e., each query was repeated at least once in the test set. The queries ranged from simple (i.e., only project and select operations were required) to complex (i.e., a four-way join with two aggregation operations as well as projects and selects). The size of the result relations varied from zero to 17 tuples.
The statistics presented here are based on the mean processing times for 20 repetitions of the test set of queries.

Figure 4 shows a breakdown of the aggregate processing time in terms of the three main stages of processing: translation (i.e., the time to translate an IDIL query into SQL), execution (i.e., the elapsed time between sending the SQL query to the DBMS and obtaining the first tuple of the result relation), and collection (i.e., the time required to collect all the tuples in the result relation and convert them into internal form). For each processing stage, the minimum, mean, and maximum processing times are shown. The cache was disabled for these measurements so that a more accurate picture of the relative processing times for each stage could be established.

Figure 5: Cache performance is measured for three cases: (a) the without caching or base-line case where caching was disabled, (b) the empty cache case where caching was enabled but the cache was cleared before each repetition of the test set, and (c) the non-empty cache case where the cache contained the results for all queries in the test set.

The differences in translation time reflect a dependence on the number of relations in the IDIL query. Similarly, the collection time is a function of the number of tuples in the result relation. In both cases, the processing times are significantly less than the execution time, which is affected by the complexity of the SQL query, the communication overhead, and the load on the remote DBMS host (since only elapsed time was recorded).

Figure 5 indicates the effects of result caching on performance. The results represent the mean processing times (in seconds) for all queries.
Three different cases are represented: (a) the without caching or base-line case where caching was disabled, (b) the empty cache case where caching was enabled but the cache was cleared before each repetition of the test set, and (c) the non-empty cache case where the cache contained the results for all queries in the test set. The difference between the base-line and empty cache cases is due to the number of repeated queries (i.e., 26 out of 48 were repeated). The fact that the base-line case is more than twice the empty cache case indicates that the overhead required for result caching is not significant. The non-empty cache case indicates the maximum potential benefit of result caching, i.e., nearly two orders of magnitude improvement in performance. Clearly this could only occur when the cache is "stacked" as in the test. However, it does help to establish an upper limit on the possible performance improvement afforded by result caching. Obviously, as the number of repeated queries increases, so will the gain in performance.

Application of the IDI

Clearly, more detailed performance results need to be obtained using more exhaustive test sets. It will be particularly important to integrate the IDI with an AI system and measure its performance with a variety of different applications. We are currently using the IDI to provide a database server for the Unisys spoken language understanding system and are investigating the integration of the IDI with the Intelligent System Server and its Protem representation and reasoning engine.

The IDI and the ISS. Protem is a hybrid system containing both a frame-based representation system and a logic-based reasoning component. The integration of a frame-based representation system with a relational database management system is not straightforward. Our current approach labels some of the classes in the frame system as "database classes".
Any knowledge base activity which searches for the instances of such a class will be handed a stream of "database instances" which are the result of a query sent to the database via the IDI. In order to avoid filling the knowledge base memory with database information, these instances are not installed as persistent knowledge base objects but exist as "light weight objects" which are garbage collected as soon as active processes stop examining them. They are also not "fully instantiated"; that is, the values for the frame's roles are not necessarily installed. Instead, if an attempt is made to access their roles, additional database queries to retrieve the information will be generated automatically. Once again, this information is not added as permanent knowledge base data, but lasts only as long as the currently active process is using it.

This approach has three advantages: it is relatively simple to implement, it is transparent to the user, and it is the key to isolating the data copy problem to cache validation, as stated earlier. Once the relationship between a database class and its database tables is declared, the class and its instances can be treated as any other knowledge base objects. However, without the IDI cache implementation, it would be prohibitively slow.

The IDI ATIS Server. The second AI system that the IDI is being used to support is a spoken language interface to an Air Travel Information System database. In this project, spoken queries are processed by a speech recognition system and interpreted by the Unisys Pundit natural language system [Hirschman, et al., 1989]. The resulting interpretation is translated into an IDIL query which is then sent to the ATIS Server for evaluation. This server is a separate process running the IDI which, in turn, submits SQL queries to an INGRES database server.
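The "light weight object" scheme described above for the ISS integration might be sketched as follows in Python (hypothetical: the ISS is implemented in Common Lisp, and the class and helper names here are invented). Role values are fetched from the database only when touched, and nothing is stored on the instance itself, so the IDI's result cache is what absorbs repeated access:

```python
# Hypothetical sketch of "light weight" database instances: role (slot)
# values are not installed when the instance is created; touching a role
# fires a follow-up query through a stand-in for the IDI, and the value
# is kept only for the duration of that access.

class DatabaseInstance:
    def __init__(self, db_class, key, query_fn):
        self._db_class = db_class
        self._key = key            # primary key identifying the DB tuple
        self._query_fn = query_fn  # stand-in for an IDIL query via the IDI

    def role(self, name):
        """Fetch a single role value on demand.  Nothing is cached here,
        so repeated access re-queries (the real IDI cache absorbs this)."""
        return self._query_fn(self._db_class, self._key, name)

def fake_idi_query(db_class, key, attr):
    # Toy data standing in for a remote DBMS relation.
    table = {("supplier", "s1"): {"sname": "Smith", "city": "Paris"}}
    return table[(db_class, key)][attr]
```

An instance created as `DatabaseInstance("supplier", "s1", fake_idi_query)` carries only its class and key; each `role` access goes back through the interface.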
The "travel agent" domain is one in which there is a rich source of pragmatic information that can be used to infer the user's intentions underlying their queries. These intentions can be used to generate advice to the cache manager to allow it to make intelligent choices about query generalizations, pre-fetching, and replacement. We currently have an initial ATIS server running and will be collecting statistics on its transactions which can then be used to define an effective advice strategy.

Conclusion

Although the implementation of the IDI is not complete, it does provide a solid foundation for easily creating a sophisticated interface to existing DBMSs. The key characteristics of the IDI are efficiency, simplicity of use, and a high degree of portability, which make it an ideal choice for supporting a variety of AI and related applications which require access to remote DBMSs.

Among the various extensions to the IDI that have been planned for the future, most involve the Cache Manager. At present, the implementation of the CM has been focused on efficient result caching, and most other cache management functions have not been implemented. One of the first steps will be to impose a parameterized limit on the size of the cache and to implement a cache replacement strategy. Other extensions to the CM include cache validation and the ability to perform DBMS-like operations on cache elements [O'Hare and Sheth, 1989].
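As one concrete possibility for those first steps, the following Python sketch (hypothetical: the paper specifies neither the replacement policy nor the validation mechanism, and least-recently-used replacement is our choice for illustration) combines a parameterized size limit, LRU replacement, and a per-relation invalidation hook:

```python
# Hypothetical sketch of a bounded result cache in the spirit of the
# IDI Cache Manager: results are cached under a canonical query key,
# a parameterized LRU policy stands in for the (unspecified) replacement
# strategy, and invalidate() models a cache-validation hook for
# relations that have been updated on the DBMS.
from collections import OrderedDict

class ResultCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.entries = OrderedDict()   # query key -> (relations, tuples)

    def lookup(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)      # refresh LRU position
            return self.entries[key][1]
        return None

    def store(self, key, relations, tuples):
        self.entries[key] = (frozenset(relations), list(tuples))
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:  # evict least recently used
            self.entries.popitem(last=False)

    def invalidate(self, relation):
        """Drop every cached result derived from an updated relation."""
        stale = [k for k, (rels, _) in self.entries.items() if relation in rels]
        for k in stale:
            del self.entries[k]
```

Recording, with each cached result, the set of base relations it was derived from is what makes per-relation invalidation possible; the same bookkeeping would support a "snoopy cache" that watches DBMS transactions.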
If the IDI is extended so that it is capable of performing DBMS-like operations on the contents of its cache then, given an IDIL query, it will have three general courses of action which it may take to produce the results: (a) the entire IDIL query can be translated into SQL and sent to the remote DBMS for execution; (b) the entire IDIL query can be executed locally by the IDI (including simple retrieval from the cache); and (c) the IDIL query can be decomposed so that part of it is executed on the remote DBMS and part of it is executed locally by the IDI. The decision of which action to take would depend on a number of factors including the current contents of the cache and the estimated costs for each alternative.

References

[Abarbanel and Williams, 1986] R. Abarbanel and M. Williams, "A Relational Representation for Knowledge Bases," Technical Report, Intellicorp, Mountain View, CA, April 1986.

[Bocca, 1986] J. Bocca, "EDUCE a Marriage of Convenience: Prolog and a Relational DBMS," Third Symposium on Logic Programming, Salt Lake City, Sept. 1986, pp. 36-45.

[Brodie, 1988] M. Brodie, "Future Intelligent Information Systems: AI and Database Technologies Working Together," in Readings in Artificial Intelligence and Databases, Morgan Kaufmann, San Mateo, CA, 1988.

[Ceri et al., 1986] S. Ceri, G. Gottlob, and G. Wiederhold, "Interfacing Relational Databases and Prolog Efficiently," Proc. of the 1st Intl. Conf. on Expert Database Systems, South Carolina, April 1986.

MCKAY ET AL. 683

[Chakravarthy et al., 1982] U. Chakravarthy, J. Minker, and D. Tran, "Interfacing Predicate Logic Languages and Relational Databases," in Proceedings of the First International Logic Programming Conference, pp. 91-98, September 1982.

[Chamberlin et al., 1976] D. Chamberlin, et al., "SEQUEL 2: A Unified Approach to Data Definition, Manipulation, and Control," IBM Journal of R&D, 20, 560-575, 1976.

[Chang, 1978] C.
Chang, "DEDUCE 2: Further Investigations of Deduction in Relational Databases," in Logic and Databases, ed. H. Gallaire, pp. 201-236, New York, 1978.

[Chang and Walker, 1984] C. Chang and A. Walker, "PROSQL: A Prolog Programming Interface with SQL/DS," Proc. of the 1st Intl. Workshop on Expert Database Systems, Kiawah Island, South Carolina, October 1984.

[Chimenti et al., 1987] D. Chimenti, A. O'Hare, R. Krishnamurthy, S. Naqvi, S. Tsur, C. West, and C. Zaniolo, "An Overview of the LDL System," IEEE Data Engineering, vol. 10, no. 4, December 1987, pp. 52-62.

[Dahl et al., 1990] D. Dahl, L. Norton, D. McKay, M. Linebarger, and L. Hirschman, "Management and Evaluation of Interactive Dialogue in the Air Travel Information System Domain," submitted to The DARPA Workshop on Speech and Natural Language, June 24-27, 1990, Hidden Valley, PA.

[Finin et al., 1989] Tim Finin, Rich Fritzson, Don McKay, Robin McEntire, and Tony O'Hare, "The Intelligent System Server - Delivering AI to Complex Systems," Proceedings of the IEEE International Workshop on Tools for Artificial Intelligence - Architectures, Languages and Algorithms, March 1990.

[Fritzson and Finin, 1988] Rich Fritzson and Tim Finin, "Protem - An Integrated Expert Systems Tool," Technical Report LBS Technical Memo Number 84, Unisys Paoli Research Center, May 1988.

[Hirschman et al., 1989] Lynette Hirschman, Martha Palmer, John Dowding, Deborah Dahl, Marcia Linebarger, Rebecca Passonneau, François-Michel Lang, Catherine Ball, and Carl Weir, "The Pundit Natural-Language Processing System," in AI Systems in Government Conference, Computer Society of the IEEE, March 1989.

[Intellicorp, 1987] Intellicorp, "KEEConnection: A Bridge Between Databases and Knowledge Bases," An Intellicorp Technical Article, 1987.

[Ioannidis et al., 1988] Y. Ioannidis, J. Chen, M. Friedman, and M.
Tsangaris, "BERMUDA - An Architectural Perspective on Interfacing Prolog to a Database Machine," Proceedings of the Second International Conference on Expert Database Systems, April 1988.

[Jarke et al., 1984] M. Jarke, J. Clifford, and Y. Vassiliou, "An Optimizing Prolog Front-End to a Relational Query System," Proceedings of the 1984 ACM-SIGMOD Conference on the Management of Data, Boston, MA, June 1984.

[Kellogg et al., 1986] C. Kellogg, A. O'Hare, and L. Travis, "Optimizing the Rule/Data Interface in a Knowledge Management System," in Proceedings of the 12th International Conference on Very Large Databases, Kyoto, Japan, 1986.

[Li, 1984] D. Li, A Prolog Database System, Research Studies Press, Letchworth, 1984.

[Minker, 1978] J. Minker, "An Experimental Relational Data Base System Based on Logic," in Logic and Databases, ed. J. Minker, Plenum Press, New York, 1978.

[Morris, 1988] K. Morris, J. Naughton, Y. Saraiya, J. Ullman, and A. Van Gelder, "YAWN! (Yet Another Window on NAIL!)," IEEE Data Engineering, vol. 10, no. 4, December 1987, pp. 28-43.

[Motro, 1986] Amihai Motro, "Query Generalization: A Method for Interpreting Null Answers," in Expert Database Systems, ed. L. Kerschberg, Benjamin/Cummings, Menlo Park, CA, 1986.

[Naish and Thom, 1983] L. Naish and J. A. Thom, "The MU-Prolog Deductive Database," Technical Report 83-10, Department of Computer Science, University of Melbourne, Australia, 1983.

[O'Hare, 1987] A. O'Hare, Towards Declarative Control of Computational Deduction, PhD Thesis, University of Wisconsin-Madison, June 1987.

[O'Hare, 1989] A. O'Hare, "The Intelligent Database Interface Language," Technical Report, Unisys Paoli Research Center, June 1989.

[O'Hare and Sheth, 1989] A. O'Hare and A. Sheth, "The Interpreted-Compiled Range of AI/DB Systems," SIGMOD Record, 18(1), March 1989.

[O'Hare and Travis, 1989] A. O'Hare and L.
Travis, "The KMS Inference Engine: Rationale and Design Objectives," Technical Report TM-8484/003/00, Unisys West Coast Research Center, 1989.

[O'Hare and Sheth, 1989] Anthony B. O'Hare and Amit Sheth, "The Architecture of BrAID: A System for Efficient AI/DB Integration," Technical Report PRC-LBS-8907, Unisys Paoli Research Center, June 1989.

[Reiter, 1978] R. Reiter, "Deductive Question-Answering on Relational Data Bases," in Logic and Databases, ed. J. Minker, Plenum Press, New York, 1978.

[Sheth et al., 1988] A. Sheth, D. van Buer, S. Russell, and S. Dao, "Cache Management System: Preliminary Design and Evaluation Criteria," Unisys Technical Report TM-8484/000/00, October 1988.

[Stonebraker et al., 1987] M. Stonebraker, E. Hanson, and S. Potamianos, "A Rule Manager for Relational Database Systems," in The Postgres Papers, M. Stonebraker and L. Rowe (eds), Memo UCB/ERL M86/85, Univ. of California, Berkeley, 1987.

[Van Buer et al., 1985] D. Van Buer, D. McKay, D. Kogan, L. Hirschman, M. Heineman, and L. Travis, "The Flexible Deductive Engine: An Environment for Prototyping Knowledge Based Systems," Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985.

684 KNOWLEDGE REPRESENTATION
On the Performance of Lazy Matching in Production Systems

Daniel P. Miranker*, David A. Brant**, Bernie Lofaso**, David Gadbois*

*Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712

ABSTRACT

Production systems are an established method for encoding knowledge in an expert system. The semantics of production system languages and the concomitant algorithms for their evaluation, RETE and TREAT, enumerate the set of rule instantiations and then apply a strategy that selects a single instantiation for firing. Often rule instantiations are calculated and never fired. In a sense, the time and space required to eagerly compute these unfired instantiations is wasted. This paper presents preliminary results about a new match technique, lazy matching. The lazy match algorithm folds the selection strategy into the search for instantiations, such that only one instantiation is computed per cycle. The algorithm improves the worst-case asymptotic space complexity of incremental matching. Moreover, empirical and analytic results demonstrate that lazy matching can substantially improve the execution time of production system programs.

1.0 Introduction

There is a large and growing body of research directed toward the integration of relational database and expert system technologies (Kerschberg 1987, Kerschberg 1988). Our work focuses on the problem of using the production system paradigm as the deductive component of an expert database system. The use of simple rules in databases is well known for enforcing integrity constraints and spontaneously triggering daemons if certain patterns appear in the data (Buneman 1979, Astrahan 1976). The database problem of maintaining a view in the presence of updates to a database is very similar to the problem of incrementally evaluating the rules in a production system (Blakeley 1986).
**Applied Research Laboratories, The University of Texas at Austin, P.O. Box 8029, Austin, TX 78713

Even though some database systems incorporate a portion of the power of pattern directed inference systems, the number and form of the rules that can be effectively included in these systems is very limited. On the other side of the problem, expert systems that require information from existing databases do not access the data directly but maintain a small separate subset of the data by periodically issuing queries. Rule systems on the scale of an accredited expert system have not been tightly integrated with large databases. This is due to the extraordinary time and space demands one can expect from inferencing on large databases. One of the fundamental issues is the exponential worst-case time and space requirements inherent in existing production system match algorithms (Raschid 1988, Miranker 1987, Forgy 1982). The worst-case asymptotic time and space requirements of both the RETE (Forgy 1982) and TREAT (Miranker 1987) match algorithms are O(wm^c), where wm is the size of the working memory and c is the maximum number of condition elements. While the average space requirements do not approach the worst case, the variance in both time and space demonstrated over the life of a system is very volatile. Figure 1 shows rule firings (x-axis) versus number of instantiations (y-axis) for four OPS5 test applications.¹ These test programs have previously appeared in the literature (Gupta, Forgy & Newell 1989, Miranker 1987, Lofaso 1989). Some summary statistics about these systems are presented in Table 1. The erratic behavior seen in these graphs illustrates the time and space wasted in the eager evaluation of rules. Although the worst case is rarely achieved, it is clear from the graphs that very bad behavior may appear at any time.
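The O(wm^c) blow-up is easy to reproduce: with c condition elements that place no constraints on the data, an eager matcher must enumerate every combination of WMEs before anything fires. The toy enumeration below (invented data, not one of the benchmark programs) makes the growth concrete.

```python
from itertools import product

# Worst case for eager matching: a rule with c unconstrained condition
# elements over wm working-memory elements yields wm**c instantiations,
# all of which are enumerated into the conflict set before one is fired.
def eager_conflict_set(wm_elements, c):
    return list(product(wm_elements, repeat=c))

wm = list(range(10))            # wm = 10 working-memory elements
for c in (1, 2, 3):
    cs = eager_conflict_set(wm, c)
    print(c, len(cs))           # grows as wm**c: 10, 100, 1000
```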
For database applications it is entirely possible for such algorithms to unexpectedly exhaust all of the available storage in large virtual memory computer systems (Bein, King & Kamel 1987).

[TABLE 1. OPS5 Program Statistics - columns: Program, Rules, Avg. WM Size, Instantiations, Rule Firings, Unused Instantiations, % Unused; rows for WALTZ, TOURNEY, JIG25, and WEAVER. The numeric entries are garbled in the source scan.]

¹ The applications used in our study are: (a) JIG25 - solves a simple jigsaw puzzle, (b) TOURNEY - schedules a bridge tournament, (c) WALTZ - interprets three-dimensional line drawings, and (d) WEAVER - routes a VLSI channel.

MIRANKER ET AL. 685

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

[Figure 1. Conflict Set Instability - four panels (WALTZ, TOURNEY, JIG25, WEAVER) plotting rule firings against the number of instantiations; the axis data are not recoverable from the source scan.]

Therefore, we have developed an algorithmic basis for matching that is fundamentally better than current match algorithms in its space requirements. The first obstacle we observed is that all presently used algorithms for evaluating production systems enumerate the entire conflict set. The conflict set consists of rule instantiations, where an instantiation is a rule name and an ordered set of working memory elements that satisfy that rule. The conflict set by itself has worst-case space complexity of O(wm^c). Thus, to asymptotically improve the space complexity of the matching problem, it is necessary to avoid enumerating the conflict set. We have developed a new incremental match algorithm, the lazy match, that computes a single rule instantiation per cycle, yet may maintain the present execution semantics of existing production system languages. The lazy match is described in section 3 and has worst-case space complexity that is O(max(ts)*c).
Here ts is a timestamp, and max(ts) is therefore bounded by the total number of updates to working memory. Further, it is often the case that instantiations are computed and placed in the conflict set and never fired (see Table 1). The lazy match never computes these "wasted" instantiations. Time is wasted not only in the eager computation of unused instantiations; recent results have also shown that memory management is a dominant factor in the performance of these systems (Lofaso 1989). If memory requirements can be reduced then we might also expect performance to improve. Section 4 presents preliminary results of an OPS5 implementation based on the lazy match. These results show that the lazy match may substantially improve the performance of a production system program and eliminate the need to avoid certain troublesome constructs when writing rules. In section 2 we define production systems. Throughout the paper we will draw examples using the OPS5 production system language. It is assumed that the reader is familiar with either the RETE or TREAT incremental match algorithms.

2.0 Production Systems and Eager Matching

In general, a production system is defined by a set of rules, or productions, that form the production memory, together with a database of current assertions, called the working memory (WM). Each production has two parts, the left-hand side (LHS) and the right-hand side (RHS). The LHS contains a conjunction of pattern elements, or condition elements (CEs), that are matched against the working memory. The RHS contains directives that update the working memory by adding or deleting facts, and directives that carry out external side effects such as I/O. In operation, a production system interpreter repeats the following recognize-act cycle:

1. Match. For each rule, compare the LHS against the current WM. Each subset of WM elements satisfying a rule's LHS is called an instantiation.
All instantiations are enumerated to form the conflict set.

2. Select. From the conflict set, choose a subset of instantiations according to some predefined criteria. In practice a single instantiation is selected from the conflict set on the basis of the recency, specificity, and/or rule priority of the matched data in the WM.

3. Act. Execute the actions in the RHS of the rules indicated by the selected instantiations.

An OPS5 working memory element (WME) forms the user's conceptual view of an object and consists of a class name followed by a list of attribute-value pairs (Forgy 1981). A class name identifies an object and the attribute-value pairs describe a particular instance of that object. Each WME has a unique identifier (ID) associated with it. IDs are often implemented as a strictly increasing sequence of integers assigned when the WME was created or last modified. They may be construed as timestamps or as logical pointers to individual WMEs. In most production systems, IDs are used in the conflict set resolution criteria. Consider the WME, shown below, used to describe a red cube named c-1, with a mass of 100, and having a length of 10 (attribute names are distinguished by a preceding ^ operator).

(cube ^name c-1 ^color red ^mass 100 ^len 10)

A production's LHS consists of a conjunction of CEs. It contains one or more non-negated CEs and zero or more negated CEs. Negated CEs are distinguished by a preceding negative sign. The LHS is said to be satisfied when:

1. for each non-negated CE, there exists at least one matching WME, and,
2. for all negated CEs, there do not exist any matching WMEs.

Each CE consists of a class name and one or more terms. Each term specifies an attribute within the class and a predicate to be evaluated against the values of that attribute. A CE need not reference all of the attributes contained in its corresponding class. The class is projected onto the named attributes in the CE.
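The two satisfaction conditions above can be sketched directly. The dict-based WME and CE encodings below are invented for illustration, and consistent variable binding across CEs (the join aspect, discussed shortly) is omitted.

```python
# Sketch of LHS satisfaction: a WME is an attribute->value dict; a CE is
# (class_name, negated?, {attribute: predicate}). Attributes a CE does
# not name are ignored, mirroring the projection described in the text.
def matches(wme, ce_class, terms):
    return wme["class"] == ce_class and all(
        pred(wme[attr]) for attr, pred in terms.items())

def lhs_satisfied(wm, ces):
    for ce_class, negated, terms in ces:
        any_match = any(matches(w, ce_class, terms) for w in wm)
        if negated and any_match:          # a negated CE must match nothing
            return False
        if not negated and not any_match:  # a non-negated CE needs a match
            return False
    return True

wm = [{"class": "cube", "name": "c-1", "mass": 6, "len": 8},
      {"class": "cube", "name": "c-2", "mass": 11, "len": 5}]
ces = [("cube", False, {"mass": lambda v: v < 10}),   # (cube ^mass < 10)
       ("cube", True,  {"len":  lambda v: v > 9})]    # -(cube ^len > 9)
print(lhs_satisfied(wm, ces))
```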
Those not named do not affect the match criteria. Predicates consist of a comparison operator (<, >, =, <=, >=, or <>) followed by a constant or variable. A predicate containing a constant is true with respect to a WME if the corresponding attribute value in the WME matches the predicate. For example, consider the CEs and corresponding WMEs shown in Fig. 2. CE (a) matches WMEs (1) and (3), while CE (b) matches only WME (1).

CEs                                 "cube" WMEs
                                       name  color  mass  len
a) (cube ^mass < 10)                1) c-1   red    6     8
b) (cube ^mass < 10 ^len > 5)       2) c-2   blue   11    5
                                    3) c-3   red    1     3

Figure 2. Predicate Matching

The scope of a variable is the production in which it appears, and, therefore, all occurrences of a variable within a given LHS must be bound to the same value in working memory for the LHS to be satisfied. For condition elements containing variables, a mapping can be made to a relational join operation. The join operator will ensure that a given variable is consistently bound for all of its occurrences within a LHS.

2.1 Eager Matching

A critical component of any match algorithm for production systems is the computation of the instantiations. Currently used match algorithms are eager in nature. If a new WME is entered into the system they will perform a search for all instantiations containing that WME. These instantiations are then added to the conflict set (CS). Thus, from one rule firing to the next, the set of all valid instantiations is preserved by making incremental changes to the CS. In order to analyze the behavior of production systems and their match algorithms we have characterized them in terms of events that can change the conflict set and the operations performed by the system in response to those events. There are five events that may result in changes to the conflict set. Two may add instantiations. They are:²

1. make(WME+) add a WME to a class corresponding to a non-negated CE
2.
remove(WME-) remove a WME from a class corresponding to a negated CE

Three events may remove instantiations from the CS. They are:

3. make(WME-) add a WME to a class corresponding to a negated CE
4. remove(WME+) remove a WME from a class corresponding to a non-negated CE
5. fire(I) fire instantiation I.

Current implementations of production systems perform several basic operations in response to these five events. For events that involve the computation of new instantiations, i.e., (1) and (2), the match algorithm effectively computes a relational database join for each rule containing the class associated with the specified WME. The WME is used as a seed to root the join, and the join path branches out from the seed to the other classes of the CEs for that rule based on its join query graph. Nodes in the graph represent classes and arcs prescribe the join order. A failed search results in a backtrack to the previous class. The searches that succeed at the lowest level (leaf nodes of the query graph) indicate that an instantiation was found. A given instantiation can be represented by the timestamps of the WMEs along the path leading from the root to the leaf. As new instantiations are produced the conflict set must be resorted according to the resolution strategy. Event (3) also results in a seed join, the results of which are removed from the CS. Event (4) produces a search of the conflict set for instantiations containing the specified WME, which are then removed. Event (5) simply removes the fired instantiation from the CS.

3.0 A Lazy Matching Algorithm

The following describes a method for computing production instantiations in a lazy manner. This is accomplished by executing a best-first search for instantiations. After one instantiation is found, the search pauses to allow the corresponding rule to be fired. Since the rule firing may change WM, the best-first search must be capable of responding to a dynamic search space.
This is accomplished by maintaining a stack of best-first search pointers. As searches are superseded by changes to the WM, their state is pushed onto the stack. When a search is exhausted, the next set of pointers is removed from the stack. The top of stack always contains the state information for the next search.

² Note that these events do not necessarily have a one-to-one correspondence to the makes and removes specified in an OPS5 rule's RHS, e.g., a given WME may be applicable to many CEs and therefore an OPS5 "make" could result in numerous make(WME+) events.

The correctness of lazy match is dependent upon being able to enforce a total ordering in the generation of instantiations. If this is not done, duplicate instantiations may be computed and fired. If the total ordering is by timestamp (i.e., ID), a search heuristic based upon firing the production with the most recent instantiation (McDermott & Forgy 1978) can be employed. However, it is important to note that any total ordering of instantiations for a given rule will work. Adding additional criteria for instantiations of different rules, such as specificity and/or rule priority, can also be accommodated in a straightforward manner.³

3.1 Conflict Set Resolution and Lazy Matching

The challenge of the lazy matching algorithm is in controlling a best-first search for instantiations through a WM that may change after each instantiation is found and fired. The criterion for "best" in this case is based upon the conflict set resolution strategies. Lazy matching uses the selection strategy as an evaluating function to direct the search for a fireable instance. This is done by using that criterion to direct the search for matching WMEs from the alpha-memories.⁴ On any given cycle, the search for an instantiation will stop after the first one is found, with the search being conducted so as to preserve recency.
Of course, additions to, and deletions from, the WM will affect the search, and we must ensure that a given instantiation is fired at most once. To do so, state information is saved on a stack in order to continue the correct computation of instantiations.

3.2 Computing Instantiations Using Lazy Matching

Elements of the stack consist of sets of pointers representing the state of a best-first search for instantiations. For convenience we use the timestamps of the WMEs to represent both an instantiation and the search state. For simplicity of presentation we assume a single rule system. We define an instantiation as a tuple containing one timestamp from each non-negated CE. Thus for a rule containing n-1 non-negated CEs, stack entries and instantiations have the form <<ts1, ..., tsn-1>>, where tsi is a timestamp. As each WME is entered into an alpha-memory, a corresponding initial search state is pushed onto the stack. The initial state is <<tsi-1, ..., tsi, ..., tsi-1>>, where tsi is the timestamp of the newly added WME and the entries for the other CEs are initialized to tsi-1.

The concept of a dominant timestamp (DT) is introduced to control the lazy computation of instantiations. For any stack entry, the DT is the most recent timestamp. Figure 3(a) shows the initial system state for the production appearing at the top of the figure. Note that the timestamp is denoted as the attribute "ts".

³ In general, the only form of conflict set resolution strategies that cannot be done lazily are those that demand an enumeration of the conflict set, e.g., fire the rule having the most instantiations.

⁴ Same as the alpha-memories described by Forgy and used in both RETE and TREAT.

The computation of an instantiation begins with popping the top of stack and selecting the DT. This is followed by a best-first search for an instantiation rooted at the WME referenced by the DT.
To ensure that instantiations are produced only once, alpha-memories have a fixed ordering (by timestamp in this example), and the best-first search computation restricts the WMEs joining with DT to those having timestamps less than DT. As soon as a matching set of WMEs (i.e., an instantiation) for DT is found, the computation pauses and the result is fired. If an instantiation containing DT cannot be found, then the next stack element is popped, and a new best-first search is begun. When an attempt is made to pop from an empty stack the system halts.

[Figure 3. Lazy Computations of Instantiations - panels (a) through (e) showing the contents of alpha-memories C0 (ts,A), C1 (ts,A,B), and C2 (ts,B) and the stack of search states at successive steps; the initial stack, top to bottom, reads 6,7,6; 5,5,6; 4,5,4; 3,3,4; 3,2,2; 1,2,1; 1,0,0. The remaining entries are garbled in the source scan.]

Figure 3(b) shows the initial state of the best-first search pointers (pi) after the top stack entry has been popped and the corresponding search has found an instantiation. The best-first search is rooted at ts=7 in alpha-memory C1 and proceeds outward in join order, most recent to least recent WME in each alpha-memory. Thus, for Fig. 3(b) the search state is <<3,7,6>>. These WMEs satisfy the production and become the first instantiation. Next, the rule is fired, and, assuming for now that no WMEs are added to, or removed from, the WM by firing the rule, the search resumes to find the next instantiation. Figure 3(c) shows the state of the search after finding the next instantiation - <<3,7,4>>. Before finding <<3,7,4>> the search would have tried <<1,7,6>>, failed, backtracked, advanced the p2 pointer, and succeeded (we arbitrarily chose to search the left alpha-memory first). The next time the search is performed <<1,7,4>> will be tried and will fail.
That will exhaust the search rooted at the DT with ts=7. At that time the next stack entry is popped. In this case it is the WME with ts=6. The shaded area in Fig. 3(d) contains WMEs that have timestamps greater than that of the DT and therefore are not considered in the search. The next instantiation to be found is <<1,2,6>>, after unsuccessfully trying <<3,5,6>>, <<1,5,6>>, and <<3,2,6>>. After that, the stack is again popped with DT=5. Since no instantiations can be found for DT=5, another pop is performed and the WME with ts=4 is chosen as DT. The instantiation <<1,2,4>> is found (Fig. 3(e)) after trying <<3,2,4>>. <<1,2,4>> is the final instantiation that can be produced. After it is fired, all the remaining stack entries are popped and their searches exhausted. Finally an attempt is made to pop from the empty stack and the system halts.

We now consider the effects of adding and deleting WMEs after each rule firing. When a new element is added to the WM a set of initial pointers is pushed onto the stack, but first, the current search is suspended and its state pushed onto the stack. That search may be resumed at a later time when it is popped off the top of stack. Since deletions may affect the state of a suspended search by removing WMEs that have pointers to them on the stack, each time a search state is popped from the stack, its pointers must be verified by the best-first search, backtracking if necessary. Figure 4(a) is the same as Fig. 3(b). Assume that the instantiation referenced by <<3,7,6>> fires and adds the WME <8,d> to C2. This causes:

1. the search state <<3,7,6>> to be pushed to the stack,
2. <<7,7,8>> is pushed onto the stack and subsequently popped,
3. a best-first search is started with DT=8, and
4. the next instantiation is found, i.e., <<1,5,8>> (Fig. 4(b)).
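The timestamp discipline traced above can be exercised in miniature. The sketch below handles only a rule with two positive CEs joined on an invented "value" field: the dominant timestamp roots each search and is allowed to join only with strictly older WMEs, so no instantiation is produced twice. Suspension and resumption of searches under WM changes are omitted, and the data are invented for illustration.

```python
# Minimal lazy-match sketch for one rule with two positive CEs joined on
# equal 'value'. Each alpha-memory holds (timestamp, value) WMEs. The
# dominant timestamp (DT) roots a best-first search restricted to older
# partners, so each instantiation is generated exactly once, in recency
# order of its most recent WME.
def lazy_instantiations(c0, c1):
    all_wmes = {ts: (mem, val) for mem, wmes in (("c0", c0), ("c1", c1))
                for ts, val in wmes}
    stack = sorted(all_wmes)          # oldest at bottom, most recent on top
    while stack:
        dt = stack.pop()              # dominant timestamp roots the search
        mem, val = all_wmes[dt]
        partners = c1 if mem == "c0" else c0
        # best-first: consider only older partners, most recent first
        for ts, pval in sorted(partners, reverse=True):
            if ts < dt and pval == val:
                yield (dt, ts) if mem == "c0" else (ts, dt)

c0 = [(1, "a"), (3, "b")]
c1 = [(2, "a"), (4, "b"), (5, "a")]
print(list(lazy_instantiations(c0, c1)))  # [(1, 5), (3, 4), (1, 2)]
```

Each pair of consistent timestamps is yielded exactly when its larger timestamp is the DT, which is the total-ordering guarantee the text relies on.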
[Figure 4. Dynamic Search Space - panels (a) through (c) showing the alpha-memories C0 (ts,A), C1 (ts,A,B), and C2 (ts,B) and the stack as the search space changes; the detailed entries are garbled in the source scan.]

Assume that firing <<1,5,8>> does not change the WM. On the next cycle the search rooted at DT=8 will be exhausted and the top of stack popped. Thus the search that was suspended, <<3,7,6>>, is resumed. The next instantiation found will be <<3,7,4>>. The pseudocode in Fig. 5 should help elucidate the algorithm.

program Lazy Match;
  p1, ..., pn-1 : WME timestamps;
begin
  initialize stack;  {The top level makes that form the initial WM
                      are added here. An entry for each is placed
                      on the stack.}
  loop while stack not empty
    pop-stack(p1, ..., pn-1, empty);
    if not empty then
      best-first(p1, ..., pn-1, found);
      if found then
        push-stack(p1, ..., pn-1);
        fire(p1, ..., pn-1);
  end loop;
end Lazy Match;

function pop-stack(p1, ..., pn-1, empty);
  {If the stack is not empty it returns the top element and sets
   empty=FALSE, else empty=TRUE.}

function best-first(p1, ..., pn-1, found);
  {Performs a best-first search for an instantiation by working
   backwards from an ordered list of timestamps. The search first
   validates the pointers, then searches using the DT as the root.
   If an instantiation is found then found=TRUE and the new
   instantiation is returned in p1, ..., pn-1, else found=FALSE.}

function push-stack(p1, ..., pn-1);
  {Pushes the pointers onto the stack.}

function fire(p1, ..., pn-1);
  {Fires the instantiation referenced by p1, ..., pn-1. This may
   alter the WM. A make will place an entry on top of the stack.
   A remove may delete an entry from the stack.}

Figure 5. Lazy Match Pseudocode

There are pathological cases where the best-first search strategy will not produce the identical sequence of instantiations as OPS5.
It is possible to avoid these cases by imposing a strict LEX ordering on lazy match, but doing so is computationally expensive. Nevertheless, the criterion used in lazy matching is in keeping with the general concept of recency as presented in (McDermott & Forgy 1978), and has not yet posed a problem.

3.3 Handling Negated Condition Elements

We have discovered three different methods of lazily handling negated condition elements (NCEs). Only one will be described here. The methods for dealing with NCEs are closely related to the method developed for the TREAT match algorithm (Miranker 1987). If a search for an instantiation consistently binds with a WME that matches an NCE, then the search fails at that point and must backtrack. We say that that WME blocked the search. When a blocking WME is removed from the system, some instantiations may become unblocked and allowed to compete for firing. Those instantiations that become unblocked are those that would have been computed had the condition element been positive instead of negative, and had the WME been added to the system instead of removed. To handle NCEs, for each negated condition add a second alpha-memory which will shadow the first; call the shadow alpha-memory Ci^s. When a WME that has blocked a search is removed from the original alpha-memory Ci, it is inserted into Ci^s, given the next available timestamp, and an entry is pushed onto the stack. Note that this requires the stack to accommodate another timestamp in its elements, one for each shadow alpha-memory. The WMEs newly added to Ci^s can then be allowed to root a best-first search for those instantiations that they had blocked. A problem arises when a search leads to an instantiation that has already been derived from a search rooted by a WME in Ci^s.
This is solved by requiring the best-first search to examine all of Ci and the portion of Ci^s such that a search that starts with a DT = ts_t and binds consistently with a WME in Ci^s with timestamp ts_s > ts_t fails. The idea is that once a WME enters Ci^s, only it may generate instantiations with older WMEs. Such a WME will be able to root the search for all instantiations older than itself, whether they were blocked or not. We can now summarize the operations performed by lazy match and TREAT in response to the five conflict set events (see Table 2).

TABLE 2. Events and Operations

Event          TREAT Operation              Lazy Operation
make(WME+)     Seed Join                    Stack Push
remove(WME-)   Seed Join                    Stack Push
make(WME-)     Seed Join & Delete from CS   None
remove(WME+)   Delete from CS               None
fire(I)        Delete from CS               Best-first Search

4.0 Preliminary Results

4.1 Space and Time Complexity

Each alpha-memory is proportional to the size of the WM. In the most adversarial scenario, every WME can be added to a shadow-memory and never removed. Thus, in the worst case the shadow-memories are bounded by the maximum timestamp, and the worst-case space complexity of the lazy match is O(max(ts)*c). Although the worst-case space requirements for a nonterminating program based on this version of the lazy match are unbounded, the worst case is very unlikely and the space requirements of the lazy match are not at all volatile. We have identified several techniques that filter and reduce the size of the shadow-memories. The most aggressive of these filtering techniques bounds the size of the shadow memories to O(wm^v), where v is the number of distinct positive condition elements needed to bind the variables in the shadow memory. This filter results in a worst-case space complexity of O(Min(wm^v, Max(ts))*c). A simple filter, invoked when a rule becomes inactive, completely purges a rule's shadow memory. A rule is active when each of its positive alpha-memories contains at least one entry.
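The simple purging filter can be sketched as a check on rule activity run whenever working memory changes; the data structures below are illustrative, not the implementation's.

```python
# Sketch of the simple shadow-memory purging filter: a rule is "active"
# only while every positive alpha-memory has at least one entry; when a
# rule goes inactive, its shadow memories can be purged wholesale.
def rule_active(positive_memories):
    return all(len(mem) > 0 for mem in positive_memories.values())

def purge_if_inactive(positive_memories, shadow_memories):
    if not rule_active(positive_memories):
        for name in shadow_memories:
            shadow_memories[name].clear()

pos = {"C0": [(1, "a")], "C1": []}        # C1 empty -> rule is inactive
shadow = {"C2s": [(7, "x"), (9, "y")]}
purge_if_inactive(pos, shadow)
print(shadow)  # shadow memory purged
```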
The first filter is expensive, but effective. The second is very inexpensive, but for some rules in a nonterminating program it may never be invoked. Since the shadow memories must be searched as well as the negated alpha-memories, and since there is no analog of shadow memories in either RETE or TREAT, the actual execution time of the lazy match must be evaluated empirically.

4.2 Implementation

To evaluate the effectiveness of, and the trade-offs among, the variants of the lazy match, we have reworked the back end of the OPS5c compiler to use the lazy match. OPS5c is a portable C/Unix-based OPS5 compiler originally based on the TREAT match algorithm (Miranker et al. 1990). OPS5c produces in-line matching code for each rule. Its target is C code, which must then be compiled for the target machine. The recently completed current version implements only the simple purging filter on the shadow memories.

The above presentation of the lazy match considered the generation of an instantiation by the best-first search as a computation involving a single rule. The current implementation was extended to multiple rules by first selecting the DT and then considering each rule/alpha-memory that contained that DT in the order determined by the remaining conditions of the OPS5 LEX strategy.

690 KNOWLEDGE REPRESENTATION

Table 3 shows the performance of the lazy match implementation with respect to OPS5c tests for three test programs. Lazy matching generally resulted in 2-3 times fewer WME tests.

TABLE 3. WME Tests

  Program    TREAT      LAZY
  JIG25      35,780     11,113
  TOURNEY    1,107,259  513,600
  WALTZ      23,890     14,967

OPS5c has already reduced the match time for these programs to below the oft-quoted 90% of execution time. Therefore, speed-up may not be as high as the WME-test counts might indicate. The test programs in Table 3 execute very quickly and do not provide a good measure of execution time. However, the WALTZ program can be scaled up by inputting larger line drawings.
The original data describe a drawing consisting of 18 line segments. To demonstrate scaling and the effectiveness of the algorithm on large problems, we gave it a 10,000-WME waltz problem. This resulted in a 4-fold reduction in the number of WME tests and reduced the run time by more than 50%.

These results are much better than we expected, especially when compared to the table of unused instantiations (Table 1), which we had thought was an optimistic measure of pruning. Detailed examination of the programs and their performance has revealed that lazy evaluation of certain programming constructs commonly used in rule systems can result in improved time complexity for the evaluation of those constructs.

We start with an illustrative example (see Fig. 6). The rule represents a naive one-rule solution to a jigsaw puzzle problem. This rule is typical in structure of many of the rules in all the systems we tested. It says, "compare all edges to all other edges and if two have the same shape place them next to each other". If there are n edges, then the TREAT algorithm will perform n^2 operations.

(p one-rule-jigsaw-solution
    (edge ^piece-id <pid1> ^edge-id <eid1> ^shape <s> ^matched F)
    (edge ^piece-id {<> <pid1> <pid2>} ^edge-id <eid2> ^shape <s> ^matched F)
  -->
    (write "Place puzzle piece" <pid1> "next to piece" <pid2>)
    (modify 1 ^matched T)
    (modify 2 ^matched T))

Figure 6. Jigsaw Rule

The execution of the lazy match first picks an edge matching the first condition element and then takes an average of n/2 operations to find the matching piece. The rule then fires and these two pieces are removed from consideration. On the next cycle the lazy match again picks an edge matching the first condition element and then takes an average of (n-2)/2 operations to find the matching piece. Computing the sum, from n until all pieces are exhausted, shows that the lazy match executes (n^2 + 2n)/8 operations.
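These per-cycle averages can be checked numerically. The sketch below is illustrative Python (not code from OPS5c): it sums the lazy match's average search costs over the rule firings and compares the totals against the closed forms quoted in the text.

```python
# Operation counts for the jigsaw rule with n edges (illustrative only):
# an eager (TREAT-style) evaluation compares all edges to all edges,
# while the lazy match pays an average of (n - 2i)/2 tests on cycle i.

def eager_ops(n):
    return n * n                                # n^2 comparisons

def lazy_ops(n):
    # n/2 rule firings; two pieces leave consideration after each firing
    return sum((n - 2 * i) / 2 for i in range(n // 2))

n = 100
assert eager_ops(n) == n ** 2
assert lazy_ops(n) == (n ** 2 + 2 * n) / 8      # the (n^2 + 2n)/8 of the text
```

For n = 100 this gives 10,000 eager operations against 1,275 lazy ones, in line with the reductions reported in Table 3.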
We can generalize this type of problem to rules that pick loosely restricted subsets of size k from n objects, where loosely restricted means that once j < k objects are chosen it will always be possible to fill the requirements for the (j+1)th object without backtracking. An eager evaluation of such a rule requires O(n^k) time. A lazy evaluation will take O(n) time to pick each of the k objects, or O(n*k). Many systems, as in the jigsaw puzzle, will fire such a rule until all n of the objects have been chosen, forming n/k subsets.

Theorem: Given n WM elements, to choose n/k disjoint subsets of size k by executing n/k cycles, an eager evaluation will take O(n^k) operations. A lazy evaluation will take O(n^2).

Proof: Let σ be the join selectivity. Join selectivity is the probability that the values tested for a WME will be consistent with the values bound up to that point. For example, for the jigsaw puzzle rule, if the shape of each edge is unique then σ = 1/n. Eagerly evaluating a rule with k conjuncts, with each alpha-memory having n elements, takes

    Σ_{i=1}^{k} σ^(i-1) · n^i = O(n^k)            (EQ 1)

A lazy evaluation takes

    Σ_{i=0}^{(n/k)-1} n · σ · (k-1) = O(n^2)      (EQ 2)

Notice the constants greatly favor lazy evaluation.

How common are such rules? Rules that are entirely of this type are probably not that common. But there are rules in nearly all systems in which pairs or triples of CEs represent one of the above constructs. Any rule in any program that refers to the same class in more than one CE is a candidate for this reduction. We have found rules of this form in WEAVER, TOURNEY, and WALTZ. In WEAVER there are many rules where 5 or more condition elements refer to the same class. Our conjecture is that, using lazy matching, there are many rules in most systems whose time complexity will improve by one or more degrees.

5.0 Conclusions and Current Work

The idea of lazy matching is necessary to improve the asymptotic space complexity of the incremental match problem.
Preliminary results show that for several application programs lazy matching substantially improves execution time as well as the space requirements. Investigation of the applications revealed that lazily matching certain commonly used and expensive rule constructs leads to asymptotic improvement in the execution time of those rules. In the near future we will consider rule-parallel implementations that compute one instantiation per rule. We will also investigate other filtering techniques for the shadow memories, including a technique that eliminates shadow memories completely but whose worst-case space complexity is O(max(wm^v, ts)*c), and whose average space requirements are potentially volatile.

MIRANKER ET AL. 691

Our current goals include the development of an integrated expert-database system. By an integrated expert-database system we mean a system where working memory encompasses a large disk-resident database and conventional database transactions may occur concurrently with the inferencing tasks. As a prototype, we are integrating the OPS5c compiler and the Genesis extensible database management system (Batory et al. 1988). We are exploring the use of appropriate data structures and memory hierarchy to support this type of system. This research is being conducted in the context of the behavior of the lazy match as the size of working memory is scaled to the size typical of existing commercial databases.

REFERENCES

Astrahan, M., et al., "System R: A Relational Approach to Data," ACM-TODS, June 1976.

Batory, D., et al., "GENESIS: An Extensible Database Management System," IEEE Transactions on Software Engineering, Nov. 1988.

Bein, J., R. King, and N. Kamel, "MOBY: An Architecture for Distributed Expert Database Systems," Proceedings of the 13th VLDB Conference, Brighton, 1987.

Blakeley, J. A., et al., "Efficiently Updating Materialized Views," Proceedings of the 1986 ACM-SIGMOD International Conference on Management of Data, Washington, DC, June 1986.

Buneman, P., and E. Clemons, "Efficiently Monitoring Relational Data Bases," ACM-TODS, Sept. 1979.

Forgy, C., "OPS5 User's Manual," Tech. Report CMU-CS-81-135, Carnegie-Mellon University, 1981.

Forgy, C., "RETE: A Fast Match Algorithm for the Many Pattern/Many Object Pattern Match Problem," Artificial Intelligence, no. 19, pp. 17-37, 1982.

Gupta, A., C. Forgy, and A. Newell, "High-Speed Implementations of Rule-Based Systems," ACM TOCS, June 1989.

Kerschberg, L., "Proceedings of the First International Conference on Expert Database Systems," Benjamin/Cummings Publishing Company, Inc., Menlo Park, CA, 1987.

Kerschberg, L., "Proceedings of the Second International Conference on Expert Database Systems," Benjamin/Cummings Publishing Company, Inc., Menlo Park, CA, 1988.

Lofaso, B. J., "Join Optimization in a Compiled OPS5 Environment," Tech. Report ARL-TR-89-19, Applied Research Laboratories, The University of Texas at Austin, April 1989.

McDermott, J., and C. Forgy, "Production System Conflict Resolution Strategies," in Pattern-Directed Inference Systems, D. Waterman and F. Hayes-Roth (eds.), Academic Press, 1978.

Miranker, D., "TREAT: A Better Match Algorithm for AI Production Systems," Proceedings of the 1987 National Conference on Artificial Intelligence, Seattle, 1987.

Miranker, D., "TREAT: A New and Efficient Match Algorithm for AI Production Systems," Pitman/Morgan Kaufmann, 1989.

Miranker, D., B. J. Lofaso, G. Farmer, A. Chandra, and D. Brant, "On a TREAT-Based Production System Compiler," Proceedings of the 10th International Conference on Expert Systems, Avignon, France, 1990.

Raschid, L., T. Sellis, and C-C Lin, "Exploiting Concurrency in a DBMS Implementation for Production Systems," Proceedings of the International Symposium on Databases in Parallel and Distributed Systems, 1988.
A Framework for Investigating Production System Formulations with Polynomially Bounded Match

Milind Tambe
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

Real-time constraints on AI systems require guaranteeing bounds on these systems' performance. However, in the presence of sources of uncontrolled combinatorics, it is extremely difficult to guarantee such bounds on their performance. In production systems, the primary source of uncontrolled combinatorics is the production match. To eliminate these combinatorics, the unique-attribute formulation was introduced in (Tambe and Rosenbloom, 1989), which achieved a linear bound on the production match. This formulation leads to several questions: is this unique-attribute formulation the best conceivable production system formulation? In fact, are there other alternative production system formulations? If there are other formulations, how should these alternatives be compared with the unique-attribute formulation? This paper attempts to address these questions in the context of Soar. It identifies independent dimensions along which alternative production system formulations can be specified. These dimensions are based on the fixed class of match algorithms currently employed in production systems. These dimensions create a framework for systematically generating alternative formulations. Using this framework we show that the unique-attribute formulation is the best one within the dimensions investigated. However, if a new class of match algorithms is admitted, by relaxing certain constraints, other competitor formulations emerge. The paper indicates which competitor formulations are promising and why. Although some of the concepts, such as unique-attributes, are introduced in the context of Soar, they should also be relevant to other rule-based systems.¹

1. Introduction

Soar is an architecture for a system that is intended to be capable of general intelligence.
It is based on formulating all symbolic goal-oriented behavior as search in problem spaces (Laird, Newell, and Rosenbloom, 1987). The primitive acts of the system, called decisions, are those required to pursue this search: the selection of problem spaces, states, and operators, plus the application of operators to states to generate new states. The information necessary for the performance of these primitive acts can be provided in one of two ways: from Soar's knowledge base, which is implemented as a production system, or by the recursive use of problem space search in subgoals. Both can result in adding new working memory elements (wmes) to the system's existing working memory. Soar learns by converting subgoal-based search into productions that generate comparable results under similar conditions (Laird, Rosenbloom, and Newell, 1986). The actions of the new productions are based on the results of the subgoals. The conditions are based on those wmes in parent goals upon which the results depended. This chunking process is a form of explanation-based learning (Rosenbloom and Laird, 1986).

¹This research was sponsored by the Defense Advanced Research Projects Agency (DOD) under contract numbers F33615-87-C-1499 and N00039-86-C-0033 (via subcontract from the Knowledge Systems Laboratory, Stanford University), and by the National Aeronautics and Space Administration under cooperative agreement number NCC 2-538.

Paul S. Rosenbloom
Information Sciences Institute
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292

This paper is focused on providing Soar with an efficient and non-combinatorial (polynomially bounded) production match, particularly in the presence of continuous chunking. Production match in Soar is a key performance bottleneck. Soar's current production match is OPS5-based (Forgy, 1981), i.e., it is NP-hard (Tambe and Newell, 1988).
Production match occurs at every decision in problem solving in Soar, and unpredictable combinatorial processing can occur in the match at any such decision. This combinatorial match leads to various problems in Soar: (1) It prevents Soar from operating in real time (Newell, 1989). (2) It leads to the problem of expensive chunks, i.e., productions learned in the course of problem solving that can cause a severe degradation in Soar's performance (Tambe and Newell, 1988). (3) It is problematical for Soar's quest of modeling human cognition (Newell, 1990). (4) It leads to load-balancing problems in parallelization (Acharya and Tambe, 1989; Gupta et al., 1989; Tambe and Acharya, 1989). An efficient and bounded production match could alleviate all these problems.

In (Tambe and Rosenbloom, 1989), the unique-attribute formulation was introduced to guarantee an efficient and bounded production match. This formulation eliminated combinatorics from the match by trading off some expressive power. It leads to several questions: is this unique-attribute formulation the best conceivable alternative to the current production system formulation? Are there any other alternative production system formulations? If there are, how should they be compared with the unique-attribute formulation? This paper attempts to address these questions in the context of Soar.

The paper identifies different dimensions along which alternative production system formulations can be specified. These dimensions are based on the fixed class of match algorithms currently employed in production systems. These dimensions create a framework for systematically identifying the different production system formulations. This paper shows how the unique-attribute formulation fits into the framework presented. Using this framework the paper shows that unique-attributes are the best possible formulation within the dimensions investigated.
However, if a new class of match algorithms is admitted, other competitor formulations emerge. The paper indicates which competitor formulations are promising and why. Although these results are introduced in the context of Soar, Section 9 discusses their relevance to other systems. All the alternative formulations investigated in this paper trade off expressive power for production match complexity, thus providing some additional data points toward understanding the general tradeoff in knowledge representation and reasoning (Levesque and Brachman, 1985; Patel-Schneider, 1989). Additionally, Soar's chunking provides a unique opportunity to gain a better understanding of how learning interacts with this tradeoff.

TAMBE AND ROSENBLOOM 693
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The paper is organized as follows: Section 2 provides an implementation-independent model of the production match, since this paper depends on a deeper understanding of production match. Section 3 presents the unique-attribute formulation. Section 4 discusses issues in evaluating alternative production system formulations. Section 5 introduces the framework for generating alternative formulations and shows how unique-attributes fit in the framework. Section 6 introduces tokenless match, the basis for the new class of match algorithms. Section 7 shows how promising new alternative formulations emerge with the introduction of tokenless match. Section 8 provides some evidence about the generality of the formulations presented here, by showing how marker passing - a formulation quite different from the standard production match - maps onto it. Section 9 outlines the contributions of this paper and its relevance to other research.

2. Modeling Soar's Production Match

The k-search model (Tambe and Newell, 1988) of production match covers match algorithms that find all possible solutions, without the aid of heuristics.
This includes widely used match algorithms such as Rete (Forgy, 1982) and Treat (Miranker, 1987). The k-search model is based on the notion of tokens, or partial instantiations. Consider the (simplified) production Length-3 shown in Figure 2-1-a. In the figure, a caret (^) indicates an attribute, and angle brackets (<>) indicate a variable. Figure 2-1-b shows the working memory of the production system, which describes the graph in Figure 2-1-c. On the creation of the wme (current-position A), the production Length-3 will match the working memory, generating tokens, e.g., (2; <x> = A, <z> = B). The first number in the token indicates the number of conditions matched, and the other elements indicate the bindings for the variables. Thus, tokens indicate which conditions have matched and under what variable bindings. These tokens can be represented in the form of a k-search tree, as shown in Figure 2-1-d. This k-search tree represents the search conducted by the matcher, using tokens, to match the production. The result of the k-search is the four tokens (instantiations) shown at the leaves of the k-search tree. The time per token is approximately constant (Tambe and Newell, 1988). Therefore, for Soar productions, the number of tokens in the k-search tree is a reasonable estimate of the time spent in match.

The causes of Soar's combinatorial production match can now be explained. All the wmes in Soar's production system are a priori candidates to match a condition, leading to a k-search tree whose number of tokens can grow on the order of wmes^conditions. However, variables bound in the conditions prior to the current condition, and constants, can provide a strong filter on the match. Soar conditions have four fields: (class identifier attribute value). The class and attribute fields are constant (almost always), the identifier field is prebound (almost always), and the value field can be a constant, a prebound variable, or an unbound variable.
Thus, an unbound variable should only occur in the value field, and multiplicity only occurs in matching a Soar condition if there are multiple possible values corresponding to the three already-fixed fields, i.e., there is more than one value for an attribute. This is referred to as a multi-attribute. For instance, in Figure 2-1-b, connected-to is a multi-attribute of the objects A and D - point A is connected-to both B and C, while D is connected-to both E and F. As shown in Figure 2-1-d, the k-search tree branches out in matching the multi-attributes with conditions 2 and 4. Thus, combinatorics can occur in Soar's production match only in the presence of multi-attributes. (Preferences - special control elements in Soar - can also contribute to the combinatorics. However, their impact is much smaller than that of multi-attributes, and they are ignored here. See (Tambe, Newell and Rosenbloom, 1990) for details.)

(Production Length-3
    (current-position <x>)
    (point <x> ^connected-to <z>)
    (point <z> ^connected-to <y>)
    (point <y> ^connected-to <w>)
  -->
    (write path of length 3 from <x> to <w>))
                  (a)

(point A ^connected-to B)    (point A ^connected-to C)
(point B ^connected-to D)    (point C ^connected-to D)
(point D ^connected-to E)    (point D ^connected-to F)
                  (b)

[(c) shows the graph described by this working memory: A is connected to B and C, both of which are connected to D, which is in turn connected to E and F. (d) shows the k-search tree; its four leaves are the instantiations, e.g., the token (3; <x> = A, <z> = C, <y> = D) extending to (4; <x> = A, <z> = C, <y> = D, <w> = F).]

Figure 2-1: An example production system.

3. Unique-attributes

If multi-attributes are eliminated completely, the branching of the k-search tree is also eliminated. This will then limit the number of tokens in the k-search tree to the number of conditions in the production. Thus, the cost of a production will be linear in the number of conditions, i.e., the match cost will be bounded linearly rather than being NP-hard. For productions containing variables, this O(conditions) match bound is optimal, since all the conditions of a production must be examined for the match in any event. The new formulation is referred to as the unique-attribute formulation (Tambe and Rosenbloom, 1989).
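The k-search of Figure 2-1 can be replayed directly. The following sketch is hypothetical Python (plain dictionaries stand in for the working memory) that enumerates the tokens generated for Length-3; all of the branching comes from the multi-attribute ^connected-to at A and D, and under the unique-attribute restriction each condition could contribute at most one new token.

```python
# Enumerate the k-search tokens for production Length-3 on the working
# memory of Figure 2-1-b. Each token records (conditions matched, bindings).
connected_to = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E", "F"]}
x = "A"                                   # the wme (current-position A) was just added

tokens = [(1, x)]                         # condition 1: (current-position <x>)
for z in connected_to.get(x, []):         # condition 2: (point <x> ^connected-to <z>)
    tokens.append((2, x, z))
    for y in connected_to.get(z, []):     # condition 3: (point <z> ^connected-to <y>)
        tokens.append((3, x, z, y))
        for w in connected_to.get(y, []): # condition 4: (point <y> ^connected-to <w>)
            tokens.append((4, x, z, y, w))

instantiations = [t for t in tokens if t[0] == 4]
# The multi-attributes at A and D yield 4 instantiations (the four leaves
# of the k-search tree) out of 9 tokens in total.
assert len(tokens) == 9 and len(instantiations) == 4
```

With the multi-attributes removed (one value per attribute), each loop body would run at most once, giving exactly one token per condition: the linear bound described above.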
In contrast, Soar's current production system formulation will henceforth be referred to as the unrestricted formulation, i.e., a formulation with no representational restrictions. (This unrestricted formulation was referred to as the multi-attribute formulation in (Tambe and Rosenbloom, 1989; Tambe, Newell and Rosenbloom, 1990).) The principal computational impact of going with unique-attributes is the removal of the combinatorial k-search from the match - all combinatorics will now occur as search in problem spaces. In (Tambe and Rosenbloom, 1989), detailed experiments on unique-attributes demonstrated the ability of unique-attributes to eliminate the combinatorics from the production match, and to outperform the unrestricted Soar formulation in various tasks.

Basically, the unique-attribute formulation trades off some expressive power to gain efficiency in the production match. This loss in expressive power is not inconsequential. It manifests itself in three issues. First, multi-attributes are used in encoding unstructured sets in working memory. For instance, in Figure 2-1-b, points B and C are an unstructured set of points connected to point A. With unique-attributes, all sets in working memory have to be structured (e.g., lists), which may not always be easy (Tambe and Rosenbloom, 1989). Second, the loss of multi-attributes causes a loss in chunk generality. A much larger number of unique-attribute chunks may be required to gain the same amount of coverage as an unrestricted chunk. Until such coverage is obtained, the unique-attributes have to perform potentially expensive problem-space search. So far, in the tasks encoded in the unique-attribute formulation, these two issues have not been a major problem (Tambe and Rosenbloom, 1989).
The third implication of unique-attributes is the inability to process arbitrary sets in a single decision, which is important in cognitive modeling tasks such as syllogisms (Polk, Newell, and Lewis, 1989). The unrestricted formulation allows this type of processing. With unique-attributes, sets are structured and have to be processed in multiple decisions (or a large number of chunks have to be learned to allow the processing to occur in a single decision) (Tambe and Rosenbloom, 1989). These three issues imply that the unique-attribute formulation is not the best conceivable alternative to the current formulation - its expressive power can potentially be improved.

4. Evaluating Alternative Formulations

The unique-attribute formulation is not the best conceivable formulation. But, in practice, is it the best possible? That is, is the tradeoff in unique-attributes the best tradeoff? This is a difficult question. As shown later, different formulations, which polynomially bound the match, restrict the production system in different ways and engage in different tradeoffs. The concept of the best tradeoff is imprecise. Thus, it is impossible to devise an independent test to determine if the unique-attribute, or any other, formulation is the best possible in practice, without a comparison with other formulations. That is, to determine the best, different formulations have to be compared and ranked. Even this comparison is quite difficult. However, we can enumerate a set of requirements which will help in comparing different formulations. These requirements can be divided into absolute and relative. Absolute requirements allow formulations to be evaluated independent of each other, while relative requirements only enable comparative evaluations.

The absolute requirements are:

1. Polynomial bound on match complexity: The match complexity of the desired representation should be polynomially bounded (in the number of conditions in a production and wmes in the system).

2. Closure under chunking: If the productions and the working memory meet the restrictions imposed by a particular representation before chunking, the chunks should also meet the restriction. If chunking violates the restrictions and creates expensive chunks, that clearly defeats the purpose of this exercise.

3. Correctness of match: The match should provide correct results.

The relative requirements are:

1. Expressive adequacy: How easy/difficult is it to encode various existing Soar tasks in a given representation? The previous section discussed this issue with respect to unique-attributes. Schemes that allow easier encoding of various Soar tasks are preferred.

2. Relative efficiency: Within the space of polynomially bounded formulations, schemes with smaller polynomial bounds are preferred.

3. Chunking generality: This refers to the number of chunks required in a particular representation to cover a given task. A formulation providing higher chunking generality is preferred.

4. Principle of uniformity: Formulations that do not introduce arbitrary divisions in productions or working memory are preferred. Soar strongly adheres to the principle of uniformity in its various architectural mechanisms (Newell, 1990). This requirement extends that principle to the production system formulation.

These are diverse requirements, possibly conflicting with each other. Hence these requirements have to be prioritized: the requirements are listed above in their order of priority. Given the goals of the current research, we adopt the position that the absolute requirements must be met by any candidate formulation. Even with this, the candidate formulations are not well ordered. First, the relative requirements are quite subjective. Second, there are interesting tradeoffs/interactions among these requirements, e.g., it may be possible that a formulation with an O(n^2) match bound may provide better chunking generality than a formulation with an O(n) match bound.
Even with these shortcomings, the requirements and priorities outlined above are of great help in the search for alternative formulations. As an example of the use of these requirements, consider a combination of unique- and multi-attributes that attempts to introduce limited amounts of multi-attributes to gain expressibility without sacrificing efficiency. First, suppose this formulation adheres to the principle of uniformity and does not explicitly separate out multi- and unique-attributes from each other. Then, it has no way of controlling the number of multi-attributes matching a production - it is exactly like the unrestricted Soar system. The match cost of productions becomes unpredictable and the polynomial match bound requirement is violated. Now, suppose the violation of the principle of uniformity is accepted: the system explicitly labels and separates unique- and multi-attributes. It can then bound the number of multi-attribute-matching conditions in any single production - thus controlling the match cost. However, it is possible for such a system to create chunks where the number of multi-attribute-matching conditions in the production exceeds the specified bound - generating expensive chunks. This violates the requirement of closure under chunking. Thus, this unique- and multi-attribute combination does not work.

Note that in this paper, schemes that impose arbitrary time-based cutoffs on the match, or those that require Soar to chunk selectively, are rejected as possibilities. Such schemes do not integrate well with the rest of the Soar architecture, i.e., they do not conform with the assumptions underlying the Soar architecture (Tambe, Newell and Rosenbloom, 1990).

5. Formulations Based on Existing Match Algorithms

The previous section showed that the combination of unique- plus multi-attributes does not meet all the absolute and relative requirements.
Generating a formulation that satisfies all the requirements is difficult - there is no method that, given the requirements, will directly provide a formulation satisfying them. Combined with the need for comparing different formulations to determine the best among them, this situation dictates a strategy of exhaustively searching the space of alternative formulations; then, for each formulation, testing if it meets the requirements, and then comparing its properties with other formulations to test if it is better. Although part of the search has already been conducted (Sections 3 and 4), how should the exhaustive search proceed further? The problem here is the absence of a device to systematically generate the required alternatives. Without such a device, we cannot understand and usefully exploit the structure of the space of alternative formulations. Therefore, this paper introduces a framework for systematically generating alternative production system formulations.

To specify the framework, it is necessary to first identify its independent dimensions. Roughly, these dimensions can be divided into two categories: (1) those that do not require any modification to the current set of token-based match algorithms (introduced in Section 2), and (2) those that require some modification. Section 6 introduces dimensions from the second category. This section introduces dimensions only from the first category. These dimensions are based on restrictions on Soar's working memory format, (class identifier attribute value). They are:

1. Forward-attribute: Given a fixed identifier, this dimension imposes restrictions on the number of attributes for that identifier. For example, suppose the number of attributes is restricted to one. If B1 is an identifier, then (class1 B1 attribute-1 val1) and (class1 B1 attribute-2 val2) are not allowed to exist simultaneously in working memory.

2. Values-per-attribute: Given a fixed identifier, this dimension imposes restrictions on the number of values-per-attribute for that identifier. If the number of values-per-attribute is restricted to one, the unique-attribute formulation is obtained. Multi-attributes refer to multiple values-per-attribute.

3. Reverse-attribute: Given a fixed value, this dimension imposes restrictions on the number of attributes for that value. This is symmetrical to the forward-attribute dimension. Thus, if val1 is a value, then (class1 B1 attribute-1 val1) and (class1 B2 attribute-2 val1) are not allowed to exist simultaneously in working memory, as attribute-1 and attribute-2 are two different attributes with the same value val1.

4. Identifiers-per-attribute: Given a fixed value, what are the restrictions on the number of identifiers-per-attribute for that value? This is symmetrical to the values-per-attribute dimension.

(The class slot in Soar's wmes does not serve any semantic role; hence the paper does not refer to the class slot.)

Figure 5-1 shows the four dimensions in a tabular format. There are two co-ordinates along each of the four dimensions: 1 and *. Here, an asterisk (*) refers to an arbitrary number of values, i.e., no restrictions on the values. In the figure, to lay out these four dimensions in two dimensions, the forward-attribute and values-per-attribute dimensions are paired. Similarly, the reverse-attribute and identifiers-per-attribute dimensions are paired. The figure shows that different combinations of co-ordinates along these dimensions identify different working memory representations.

The choice of 1 and * for the co-ordinates is an interesting issue. This choice of co-ordinates covers both the unique-attribute and the unrestricted working memory, and yields semantically meaningful representations like the Tree representation (introduced below).
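The four dimensions can be measured mechanically on a working memory of (class, identifier, attribute, value) tuples. The helper below is an illustrative sketch (not part of Soar): it reports the maximum count along each dimension, so a working memory satisfies the unique-attribute restriction exactly when its values-per-attribute co-ordinate is 1.

```python
# Measure the four dimensions of Figure 5-1 on a working memory of
# (class, identifier, attribute, value) tuples. Illustrative helper only.
from collections import defaultdict

def dimensions(wmes):
    fwd_attrs = defaultdict(set)   # identifier -> attributes (forward-attribute)
    vals = defaultdict(set)        # (identifier, attribute) -> values
    rev_attrs = defaultdict(set)   # value -> attributes (reverse-attribute)
    idents = defaultdict(set)      # (value, attribute) -> identifiers
    for (_cls, i, a, v) in wmes:
        fwd_attrs[i].add(a)
        vals[(i, a)].add(v)
        rev_attrs[v].add(a)
        idents[(v, a)].add(i)
    return (max(len(s) for s in fwd_attrs.values()),
            max(len(s) for s in vals.values()),
            max(len(s) for s in rev_attrs.values()),
            max(len(s) for s in idents.values()))

wm = [("point", "A", "connected-to", "B"),
      ("point", "A", "connected-to", "C"),
      ("point", "B", "connected-to", "D"),
      ("point", "C", "connected-to", "D"),
      ("point", "D", "connected-to", "E"),
      ("point", "D", "connected-to", "F")]

# values-per-attribute is 2 (a multi-attribute at A and D), and
# identifiers-per-attribute is 2 (both B and C point to D)
assert dimensions(wm) == (1, 2, 1, 2)
```

On the Figure 2-1-b memory this reports (1, 2, 1, 2), so it violates both the unique-attribute restriction (values-per-attribute > 1) and the unique-identifier restriction (identifiers-per-attribute > 1).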
Furthermore, if the number of values-per-attribute is increased to two or more, match becomes combinatoric and does not admit the linear bound of unique-attributes. However, whether any co-ordinates besides 1 and * are useful remains unclear. In the unrestricted working memory representation, with a fixed identifier, it is possible to have an arbitrary number of forward-attributes and values-per-attribute. Similarly, with a fixed value, it is possible to have an arbitrary number of reverse-attributes and identifiers-per-attribute. Therefore, the unrestricted working memory occupies the square in the center of the table, where all four dimensions have a value of *. For unique-attributes, the restriction is only on the values-per-attribute: for a fixed identifier, there can be only one value-per-attribute. Its other dimensions are unrestricted and take the value of *.

[Figure 5-1: Dimensions of alternative representations. A table indexed by (reverse-attribute, identifiers-per-attribute) co-ordinates along one axis and (forward-attribute, values-per-attribute) co-ordinates along the other, each co-ordinate being 1 or * (an arbitrary number); cells are marked where a representation is more restrictive than unique-attributes (<ua), the tree-structures (<ts), or unique-identifiers (<ui), with restrictiveness increasing toward the corners.]

In Figure 5-1, the square marked as Tree restricts the reverse-attributes and identifiers-per-attribute to one and leaves the other dimensions unrestricted. This representation implies that two different working memory elements cannot have the same value in their value fields. More semantically, this representation corresponds to a tree-structured organization of the working memory. The square marked as unique-identifiers restricts the identifiers-per-attribute to one, but does not restrict any other dimension.
The table shows that the Tree and unique-identifiers allow an arbitrary number of values-per-attribute, i.e., they allow multi-attributes. In fact, all the working memory representations in two of the columns allow multi-attributes; these columns are labeled with [Multi-attr]. The table shows the unrestricted working memory as the least restrictive form of representation. It occupies the center square in the table. The other representations form a restriction lattice. The first column and the first row are repeated to show the symmetry in this lattice. The most restrictive working memory representation is the one where all the dimensions have the value 1. The table also shows that the Tree structures are more restrictive than the unique-identifiers; but unique-attributes are unrelated in terms of restrictions to either one of these.

The important conclusion that can be made from the table is: unique-attributes provide the best possible formulation within the four dimensions investigated. That is, the unique-attributes provide the best fit to all the absolute and relative requirements. All other formulations are either combinatoric, so that they violate the absolute requirement of a polynomial match bound, or they are more restrictive than the unique-attributes, so that unique-attributes fit the relative requirements better. First, consider the formulations in the third and fourth columns of the table. They include the Tree and unique-identifiers, which restrict the identifiers-per-attribute. These representations allow multi-attributes. As described in Section 2, the match in the presence of multi-attributes is combinatoric; thus, all these formulations violate the polynomial match bound requirement. Second, consider the formulations in the first and second columns of the table. This includes the unique-attributes and the formulations more restrictive than the unique-attributes. The formulations that are more restrictive than the unique-attributes do not reduce the match bound: it is optimal to begin with. Furthermore, these formulations are guaranteed not to be better than unique-attributes along the expressive adequacy or chunking generality requirements, since they are more restrictive. More generally, we can also conclude that if a given formulation meets all the absolute requirements, then more restrictive formulations, which do not reduce the match bound, need not be investigated. If the more restrictive formulations possess the same match bound as the given formulation, only the remaining relative requirements, like expressive adequacy, can choose the better among them. However, the more restrictive formulations are guaranteed not to meet these remaining relative requirements better than the given formulation. Thus, the unique-attribute formulation is the best possible one in practice, within the four dimensions investigated. The general conclusions drawn here illustrate the power of the framework: the conclusions exploit the structure of the search space made explicit by the framework. Thus, the need for detailed evaluation of each formulation is eliminated.

6. Tokenless Match: A New Match Scheme

The formulations introduced in the previous section were based on the token-based match scheme (henceforth called token match). The conclusion about unique-attributes being the best among these formulations is based on this token match. In a token match, a single token indicates what variable bindings go together. For example, the token (2; <x> = A, <z> = C) from Figure 2-1-d indicates that the binding A for <x> and C for <z> go together. The outcome of the match is a set of instantiations, indicating which bindings go together. If a tokenless match is allowed, then new formulations emerge: the unique-attribute formulation is no longer guaranteed to be the best. In a tokenless match, each variable obtains a list of bindings, independent of the bindings of other variables. The outcome of this match is a set of bindings for each variable, rather than separate instantiations. If the production Length-3 in Figure 2-1-a is matched with the working memory from Figure 2-1-b, the result is a set of bindings for the variables as follows: <x> = A; <z> = B, C; <y> = D; <w> = E, F. The matcher guarantees that these bindings are consistent with each other; however, it does not explicitly create tokens and instantiations. This consistency requirement is explained in the subsection below with the help of a mapping. This mapping also provides ready-made algorithms to perform tokenless match.

6.1. Tokenless Match and Constraint Satisfaction

A constraint-satisfaction problem is defined as follows: given a set of N variables each with an associated domain and a set of binary constraining relations between the variables, find all possible N-tuples such that each N-tuple is an instantiation of the N variables satisfying the constraining relations (Mackworth and Freuder, 1985). This problem can be represented as a constraint-graph where the variables are represented by nodes and the constraints by arcs. Each constraint specifies the set of permitted pairs of values for the two variables involved. Thus, if Xi and Xj are two variables with domains Di and Dj respectively, then the constraint Rij between Xi and Xj is a subset of the Cartesian product of their domains, i.e., Rij ⊆ Di × Dj.

The match for a single Soar production maps onto the constraint satisfaction problem as follows. The variables in the conditions form the variables in the constraint satisfaction problem. For example, the production in Figure 2-1-a can be represented as the constraint network in Figure 6-1. The symbols in working memory, i.e., symbols occupying the identifier and value fields of working memory elements, form the domains of the variables. A condition containing two variables is a constraint between the two variables. The condition specifies (or selects) the wmes with its attribute, so that each wme represents a permitted pair of values for the variables linked by the condition. If a condition contains A1 as an attribute, it specifies all the wmes with the attribute A1. Thus, the conditions from Figure 6-1 with the attribute connected-to specify all the wmes from Figure 2-1-b. Finding all possible solutions of the constraint satisfaction problem formed by a production will result in finding all possible instantiations of the production.

[Figure 6-1: Mapping tokenless match to constraint satisfaction. A constraint network linking the variables <x>, <z>, <y>, and <w> via the current-position and connected-to constraints.]

The constraint satisfaction literature distinguishes between obtaining consistent bindings and forming an instantiation (Dechter and Pearl, 1988). Once consistent bindings are obtained, individual instantiations from the bindings are obtained separately. The tokenless match achieves a similar consistent set of bindings, without forming any instantiations. Thus, there is a mapping between the tokenless match and obtaining consistent bindings in constraint satisfaction problems.

6.2. Implications of the Mapping

The mapping between the tokenless match and constraint satisfaction problems provides an important notion: the relation between the structure of a production and the effort required in obtaining consistent bindings for that production. The structure of the production refers to the equality tests across the value fields of the production's conditions. For instance, if a Soar production admits no equality tests between two variables in the value fields of its conditions, then the production has a tree structure.
(To understand why such productions are tree-structured, consider a graph where the variables in the identifier and value fields of the production's conditions are nodes, and the attributes in the conditions are links between the nodes. This graph is tree-structured.) Figure 2-1-a presents one such production without any equality tests across its value fields. To demonstrate how the production structure can be exploited, we need the concept of arc consistency in constraint-satisfaction problems. In constraint-satisfaction problems, arc-consistency is a form of local consistency. Arc-consistency does not solve the general constraint-satisfaction problem. A constraint graph is arc-consistent if each of its arcs is arc-consistent. An arc between variables Xi and Xj is arc-consistent iff for any value x ∈ Di there is a value y ∈ Dj such that Rij(x, y) (Mackworth, 1977). Here Rij(x, y) stands for the assertion that (x, y) is permitted by the explicit constraint Rij. We add the following small symmetric requirement to the situation above: if there is a value y ∈ Dj, then there is a value x ∈ Di such that Rij(x, y). In terms of the production match mapping, the following has to be satisfied for a condition with an attribute A1 to be arc consistent: If there is a binding x for the variable <x> in its identifier field, then there is a binding y for its variable <y> in its value field, such that there exists a wme with attribute A1, identifier x and value y.
Symmetrically, if there is a binding y for its variable <y> in the condition's value field, then there exists a binding x for the variable <x> in its identifier field such that there exists a wme with attribute A1, identifier x and value y. Arc consistency is used along with the production structure in the following result, based on an important result from (Dechter and Pearl, 1988): If a production is tree structured, and it is made arc-consistent, then the bindings obtained for its variables are consistent. Furthermore, this arc-consistency can be achieved in O(wmes*conditions). Here, wmes refers to the number of working memory elements. Thus, if productions are tree-structured, then a tokenless match can be obtained in polynomial time, i.e., the match guarantees consistent bindings for the variables in the production using arc consistency. Note that the tokenless match is important in achieving this bound. A token match, despite tree-structured productions, is still exponential: O(wmes^conditions) (see Section 4 of (Tambe and Rosenbloom, 1989) for a demonstration of this effect). Note, however, that if a production is not tree structured, i.e., it has an equality test across the value fields of its conditions, arc consistency may provide a wrong result. That is, it might provide a binding for a variable, when a token match for the same production would have provided none (Tambe and Rosenbloom, 1990). Thus, there is a tradeoff between the complexity of the tokenless match and the restrictions on the equality tests. By restricting the structure of the production in specific ways, different (and increasingly complex) bounds on the tokenless match can be obtained (Dechter and Pearl, 1988). With this result, the structure of the productions emerges as an important dimension. Thus, two new dimensions are now available in the framework: tokenless match and production structure.
7.
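The arc-consistency match on a tree-structured production can be sketched as follows (an illustration, not the paper's implementation; the tuple encodings of conditions and wmes are assumptions). Each condition's attribute selects a set of permitted (identifier, value) pairs from working memory, and domains are repeatedly revised in both directions until nothing changes; on the Length-3 example this recovers the bindings stated above:

```python
from collections import defaultdict

# Tree-structured production (Figure 2-1-a) and working memory (2-1-b):
# each condition is (identifier-variable, attribute, value-variable).
conditions = [("<x>", "connected-to", "<z>"),
              ("<z>", "connected-to", "<y>"),
              ("<y>", "connected-to", "<w>")]
wmes = [("A", "connected-to", "B"), ("A", "connected-to", "C"),
        ("B", "connected-to", "D"), ("C", "connected-to", "D"),
        ("D", "connected-to", "E"), ("D", "connected-to", "F")]

pairs = defaultdict(set)           # attribute -> permitted (id, value) pairs
for i, a, v in wmes:
    pairs[a].add((i, v))

domains = {"<x>": {"A"}}           # (current-position A) binds <x> to A
for i, a, v in conditions:
    domains.setdefault(i, {s for (s, _) in pairs[a]})
    domains.setdefault(v, {s for (_, s) in pairs[a]})

def revise():
    """One bidirectional revision pass; returns True if any domain shrank."""
    changed = False
    for i, a, v in conditions:
        ok = {(x, y) for (x, y) in pairs[a]
              if x in domains[i] and y in domains[v]}
        for var, idx in ((i, 0), (v, 1)):
            new = {p[idx] for p in ok}
            if new != domains[var]:
                domains[var] = new
                changed = True
    return changed

while revise():                    # iterate to a fixpoint
    pass

print({v: sorted(d) for v, d in domains.items()})
# bindings: <x> = A; <z> = B, C; <y> = D; <w> = E, F
```

The fixpoint loop is the simplest correct scheme for a small example; the O(wmes*conditions) bound quoted above corresponds to the smarter leaves-to-root and root-to-leaves ordering on a tree.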
Tokenless Match: New Formulations Emerge

The token/tokenless match and production structure dimensions, along with the four dimensions introduced previously (see Figure 5-1), provide us with a total of six dimensions. From this expanded space, new formulations emerge, and unique-attributes are no longer guaranteed to be the best fit to the absolute and relative requirements. To illustrate this, consider a specific formulation, called the unrestricted-tree formulation, that works with an unrestricted working memory representation (from Figure 5-1) and a tokenless match, but restricts productions to be tree-structured. This formulation satisfies all of the absolute requirements. First, a polynomial match bound is guaranteed because of the use of a tokenless match and tree-structured productions. As noted in Section 6.2, this combination yields a polynomial match bound, O(wmes*conditions), irrespective of the representation used in working memory. Second, with a small modification to chunking (disallowing introduction of new equality tests across value fields of conditions), this formulation provides closure under chunking. That is, only tree-structured productions will be chunked. Third, due to the consistency requirements imposed, this formulation provides correct results. Therefore, all the absolute requirements are satisfied. When the relative requirements are compared for the unrestricted-tree and unique-attribute formulations, it becomes clear that neither formulation dominates the other, and hence neither can be pruned a priori. In particular, in terms of expressive adequacy, in the unrestricted-tree formulation, working memory restrictions are absent (compared to unique-attributes); however, since equality tests across value fields of conditions are not available, productions have to be written in a fairly different manner. (If such tests become absolutely necessary, productions can directly test constants.)
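The tree-structure restriction on productions can be checked mechanically. The sketch below (an illustration under assumed encodings, not the paper's algorithm) builds the graph described in Section 6.2, with the condition variables as nodes and the attributes as links, and rejects productions whose graph is not a tree, i.e., those with an implicit equality test across value fields or a cycle:

```python
def is_tree_structured(conditions):
    """True iff the variable graph of the production is a tree:
    no variable is the value of two conditions (an implicit equality
    test across value fields), no cycles, and the graph is connected."""
    values = [v for (_i, _a, v) in conditions]
    if len(values) != len(set(values)):
        return False                      # shared value field
    parent = {}                           # union-find for cycle detection
    def find(v):
        while parent.setdefault(v, v) != v:
            v = parent[v]
        return v
    nodes = set()
    for ident, _attr, value in conditions:
        nodes |= {ident, value}
        ri, rv = find(ident), find(value)
        if ri == rv:
            return False                  # this link closes a cycle
        parent[ri] = rv
    return len(conditions) == len(nodes) - 1   # connected

length_3 = [("<x>", "connected-to", "<z>"),
            ("<z>", "connected-to", "<y>"),
            ("<y>", "connected-to", "<w>")]
print(is_tree_structured(length_3))                                 # True
print(is_tree_structured(length_3 + [("<x>", "reaches", "<w>")]))   # False
```

A check of this kind is also where the chunking modification mentioned above would hook in, rejecting chunks that introduce new equality tests across value fields.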
Therefore, we need to implement a set of tasks with this new formulation, and compare its performance with unique-attributes. Initial analysis seems to indicate some promising results for the unrestricted-tree formulation. For example, in the grid task (Tambe and Rosenbloom, 1989), the unrestricted-tree formulation should be able to outperform the unique-attribute and unrestricted formulations. The unrestricted-tree formulation would provide chunks with polynomial match cost, where similar chunks in the unrestricted formulation require an exponential match cost (see Section 4 of (Tambe and Rosenbloom, 1989)); but simultaneously it would avoid the loss in chunk generality that afflicts the unique-attributes (Tambe and Rosenbloom, 1990). The work of implementing tasks in this new formulation is currently in progress and remains a key issue for future work. Besides the unrestricted-tree formulation, other formulations based on tokenless match are also possible (Tambe and Rosenbloom, 1990). For instance, by adopting consistency algorithms more constraining than arc consistency (Dechter and Pearl, 1988), it is possible to introduce limited equality tests across value fields of conditions of productions. Such new formulations may perform better along the expressive adequacy dimension than the unrestricted-tree formulation, but require a higher polynomial bound. Other formulations that are based on the tree and unique-identifier working memory representations from Figure 5-1 are also possible (Tambe and Rosenbloom, 1990). All these formulations are competitors for the unique-attribute formulation. Further investigation of these new formulations is another key issue for future work.

8. Relationship to Marker Passing Systems

This section develops a mapping between the unrestricted-tree formulation and marker passing systems.
This mapping illustrates the generality of the framework presented in this paper for alternative production system formulations: it manages to capture a fairly different production system formulation. The mapping is illustrated using the NETL parallel marker passing system (Fahlman, 1979). Conceptually, NETL is composed of a large collection of object nodes, like semantic nets, and a large collection of bi-directional links or arcs organized so that each object node can be linked arbitrarily to any number of other nodes in the system. Markers (signals) may be propagated along different links in parallel. Queries are posed to the NETL system, and answers are retrieved by passing markers between nodes. (No cancellation links are assumed; hence the problems from (Touretzky, 1986) associated with marker passing systems are not relevant here.) The mapping onto NETL relies on describing each symbol in the working memory as an object node of NETL. The attributes in working memory are links between nodes. The production to be matched is the query to the NETL system. Match is performed by passing markers between symbols. Matching a single condition is equivalent to sending a marker from a symbol bound to the identifier field of the condition (since the identifier of a condition is bound before matching the condition) to the symbols linked via the attribute in the condition. The result of passing this marker is a set of bindings in the value field of the condition. This allows bindings to be obtained for the next condition via marker passing. Again, the result of the match is a set of variable bindings, without the token information.

[Figure 8-1: Mapping onto marker passing. The network of Figure 2-1-b, with current-position A added, showing markers M1 through M4 propagated along the current-position and connected-to links.]

Figure 8-1 shows this mapping with the help of the simple example from Figure 2-1.
The figure shows the structure described by the working memory in Figure 2-1-b, with the addition of the wme (current-position A). This working memory is to be matched with the production in Figure 2-1-a. Matching the first condition results in the marker M1 being sent to the symbol A. This binds the variable <x> to A. Matching the second condition results in marker M2 being sent to the symbols B and C, which become the bindings for the variable <z> in the value field of the second condition. Matching the third condition results in passing marker M3 and obtaining binding D for the variable <y>. Note that when two M3 markers from B and C reach D, they are ORed together. Finally, passing marker M4 obtains bindings for variable <w>. The result of the match is the same as the result of the arc consistency match: <x> = A; <z> = B, C; <y> = D; <w> = E, F. As in the previous modification to the definition of arc consistency, markers actually have to be passed in a bidirectional manner to achieve complete consistency in tree-structured productions. That is, once the markers have reached the leaves of the production, they are transmitted back toward the root. In this example, the backward marker propagation phase would begin by sending marker M4 back and then continuing onwards with M3, M2 and M1. (In an actual implementation, correctness requires that the backward marker propagation be done first. However, for expository purposes, we have reversed the order of marker propagation.) The bidirectional marker passing results in the following: suppose there is a binding x for a variable <x> in the identifier field of a condition with attribute A1. Now, with marker passing in the forward direction along the A1 link, bindings from the value fields of wmes with attribute A1 are obtained for the variable <y> in the value field of the condition.
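The forward marker propagation phase described above can be sketched in a few lines (an illustration only; NETL propagates markers in parallel hardware, and the link encoding here is an assumption). Each propagation step sends a marker from the currently bound symbols along the condition's attribute links, and markers arriving at the same node are ORed together:

```python
# The network of Figure 8-1: (source, attribute, destination) links.
links = {("A", "connected-to", "B"), ("A", "connected-to", "C"),
         ("B", "connected-to", "D"), ("C", "connected-to", "D"),
         ("D", "connected-to", "E"), ("D", "connected-to", "F")}

def propagate(sources, attribute):
    """Send a marker from each source along `attribute` links; markers
    reaching the same destination are ORed together (a set union)."""
    return {dst for (src, a, dst) in links
            if a == attribute and src in sources}

x = {"A"}                                  # M1: (current-position A)
z = propagate(x, "connected-to")           # M2 reaches B and C
y = propagate(z, "connected-to")           # M3: two markers ORed at D
w = propagate(y, "connected-to")           # M4 reaches E and F
print(x, z, y, w)   # bindings: <x>=A; <z>=B,C; <y>=D; <w>=E,F
```

As the text notes, this forward pass alone is not sufficient in general: a symmetric backward pass (from <w> back toward <x>) is needed to prune bindings that have no consistent continuation, mirroring bidirectional arc consistency.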
Symmetrically, if there is a binding y for its variable <y> in the condition's value field, then with marker passing in the reverse direction along the A1 link, bindings from the identifier fields of the wmes with attribute A1 are obtained for the variable <x> in the identifier field of the condition. This is exactly what the arc consistency match achieves. Why are tree-structured productions (queries) needed in this mapping? In parallel marker passing systems, the relevant equality test issues are well known as the copy confusion problems. That is, only tree-structured productions can be matched accurately.

9. Summary and Relevance to Other Work

A combinatorial production match is problematical for Soar for several reasons. This paper focused on eliminating these combinatorics by introducing alternative production system formulations. The contributions of this paper can be summarized as follows: (1) it introduces absolute and relative requirements for evaluating alternative formulations; (2) it introduces a framework for generating alternative formulations; (3) using the framework, it shows that the unique-attribute formulation is the best within the dimensions investigated, assuming a token match; (4) it introduces the tokenless match and maps it onto constraint satisfaction; (5) it shows that with tokenless match, other formulations may fit the absolute and relative requirements better than the unique-attributes; (6) it shows how an entirely different formulation, the marker-passing formulation, maps onto a formulation generated via the dimensions introduced in this paper, providing some evidence for the generality of the formulations considered here. An important question is the relevance of this research for the non-Soar community. The combinatorial production match is not a Soar speciality. It is observed in various other systems: OPS5-based systems (Brownston, Farrell, Kant, and Martin, 1985) and rule-based systems like Prodigy (Minton, 1988a).
In all these systems, there is continuing research on achieving real-time performance (Barachini and Theuretzbacher, 1988; Parson and Blank, 1989), on eliminating expensive learned rules (Chase et al., 1989; Minton, 1988b), and on load balancing schemes for attaining high parallelism (Acharya and Tambe, 1989; Miranker, 1987; Tambe and Acharya, 1989). However, in general, the area of alternative production system formulations for solving the problems facing these systems has not been investigated. The representations in these systems are based on attribute-values, very similar to Soar's representation, allowing a mapping between the results based on Soar's representation and those systems. For instance, the utility of the distinction between unique- and multi-attributes for Prodigy is shown in (Etzioni, 1990). Furthermore, representations in frame-based systems like Theo (Mitchell et al., 1989) map quite well into attribute-values; in fact, the version used by Theo already corresponds to the unique-attributes (Tambe and Rosenbloom, 1989). Thus, the results derived in this paper would appear to be relevant to all these systems. In particular, the idea of a tokenless match, which would eliminate a significant amount of combinatorics, appears to be very relevant to these systems. We hope that our research in Soar and related research in these other systems will allow us to gain a better understanding of the tradeoffs in knowledge representation, efficiency and learning.

Acknowledgements

We thank Allen Newell, Brian Milnes, Anurag Acharya, Dirk Kalp and Gary Pelton for useful comments on earlier drafts of this paper. We also thank Kathy Swedlow for technical editing.

References

Acharya, A. and Tambe, M. (1989). Production systems on message passing computers: Simulation results and analysis. Proceedings of the International Conference on Parallel Processing, pp. 246-254.

Barachini, F. and Theuretzbacher, N. (1988).
The challenge of real-time process control for production systems. Proceedings of the National Conference on Artificial Intelligence, pp. 705-709.

Brownston, L., Farrell, R., Kant, E. and Martin, N. (1985). Programming Expert Systems in OPS5: An Introduction to Rule-Based Programming. Reading, Massachusetts: Addison-Wesley.

Chase, M. P., Zweben, M., Piazza, R. L., Burger, J. D., Maglio, P. P. and Hirsh, H. (1989). Approximating learned search control knowledge. Proceedings of the International Workshop on Machine Learning, pp. 218-220.

Dechter, R. and Pearl, J. (1988). Network-based heuristics for constraint-satisfaction problems. Artificial Intelligence, 34(1), 1-38.

Etzioni, O. (1990). A structural theory of search control. Ph.D. diss., School of Computer Science, Carnegie Mellon University. In preparation.

Fahlman, S. E. (1979). Representing and using real-world knowledge. In Winston, P. H. and Brown, R. H. (Eds.), Artificial Intelligence: An MIT Perspective. Cambridge, Massachusetts: MIT Press.

Forgy, C. L. (1981). OPS5 User's Manual (Tech. Rep. CMU-CS-81-135). Computer Science Department, Carnegie Mellon University.

Forgy, C. L. (1982). Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19(1), 17-37.

Gupta, A., Tambe, M., Kalp, D., Forgy, C. L. and Newell, A. (1989). Parallel implementation of OPS5 on the Encore multiprocessor: Results and analysis. International Journal of Parallel Programming, 17(2).

Laird, J. E., Newell, A. and Rosenbloom, P. S. (1987). Soar: An architecture for general intelligence. Artificial Intelligence, 33(1), 1-64.

Laird, J. E., Rosenbloom, P. S. and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1(1), 11-46.

Levesque, H. J. and Brachman, R. J. (1985). A fundamental tradeoff in knowledge representation and reasoning. In Brachman, R. J. and Levesque, H. J. (Eds.), Readings in Knowledge Representation and Reasoning.
Morgan Kaufmann Publishers, Inc.

Mackworth, A. K. (1977). Consistency in networks of relations. Artificial Intelligence, 8(1), 99-118.

Mackworth, A. K. and Freuder, E. C. (1985). The complexity of some polynomial network consistency algorithms for constraint satisfaction problems. Artificial Intelligence, 25(1), 65-74.

Minton, S. (1988a). Learning Effective Search Control Knowledge: An Explanation-Based Approach. Ph.D. diss., Computer Science Department, Carnegie Mellon University.

Minton, S. (1988b). Quantitative results concerning the utility of explanation-based learning. Proceedings of the Seventh National Conference on Artificial Intelligence, pp. 564-569.

Miranker, D. P. (1987). TREAT: A New and Efficient Match Algorithm for AI Production Systems. Ph.D. diss., Computer Science Department, Columbia University.

Mitchell, T. M., Allen, J., Chalasani, P., Cheng, J., Etzioni, O., Ringuette, M. and Schlimmer, J. C. (1989). Theo: A framework for self-improving systems. In VanLehn, K. (Ed.), Architectures for Intelligence. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Newell, A. (1989). The quest for architectures for integrated intelligent systems. Talk at IJCAI-89 on receiving the Research Excellence Award.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, Massachusetts: Harvard University Press. In press.

Parson, D. E. and Blank, G. D. (1989). Constant-time pattern matching for real-time production systems. SPIE Vol. 1095, Applications of Artificial Intelligence VII, pp. 971-982.

Patel-Schneider, P. F. (1989). A four-valued semantics for terminological logics. Artificial Intelligence, 38(3), 319-351.

Polk, T. A., Newell, A. and Lewis, R. L. (1989). Toward a unified theory of immediate reasoning in Soar. Proceedings of the Annual Conference of the Cognitive Science Society, pp. 506-513.

Rosenbloom, P. S. and Laird, J. E. (1986). Mapping explanation-based generalization onto Soar. Proceedings of the Fifth National Conference on Artificial Intelligence, pp.
561-567.

Tambe, M. and Acharya, A. (1989). Parallel implementations of production systems. VIVEK: A Quarterly in Artificial Intelligence, 2(2), 3-22.

Tambe, M. and Newell, A. (1988). Some chunks are expensive. Proceedings of the Fifth International Conference on Machine Learning, pp. 451-458.

Tambe, M. and Rosenbloom, P. (1989). Eliminating expensive chunks by restricting expressiveness. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pp. 731-737.

Tambe, M. and Rosenbloom, P. (1990). Investigating alternative production system formulations. School of Computer Science, Carnegie Mellon University. In preparation.

Tambe, M., Newell, A. and Rosenbloom, P. S. (1990). The problem of expensive chunks and its solution by restricting expressiveness. Machine Learning, Vol. 5. (To appear).

Touretzky, D. S. (1986). The Mathematics of Inheritance Systems. Los Altos, California: Morgan Kaufmann Publishers, Inc.
The Design of a Marker Passing Architecture for Knowledge Processing

Wing Lee and Dan Moldovan
Department of Electrical Engineering - Systems
University of Southern California
Los Angeles, California 90089-1115
wlee@gringo.usc.edu, moldovan@gringo.usc.edu

Abstract

Knowledge processing is very demanding on computer architectures. Knowledge processing generates subcomputation paths at an exponential rate. It is memory intensive and has high communication requirements. Marker passing architectures are good candidates to solve knowledge processing problems. In this paper, we justify the design decisions made for the Semantic Network Array Processor (SNAP). Important aspects of SNAP are: the instruction set, markers, relations, propagation rules, interconnection network, and granularity. These features are compared to those in NETL and the Connection Machine.

1 Basic Operations in Knowledge Processing

The computations that are typical of knowledge processing require the generation of numerous computation paths that all could potentially be followed in parallel. The process of spawning a number of relatively independent subcomputations, each of which may spawn other subcomputations, is called bifurcation. Bifurcation processes appear to be important for a wide range of knowledge based systems. On a serial computer, the bifurcation of independent subprocesses leads to large computational demands. Even a parallel computer does not have the hardware resources to examine all of the parallel paths of a problem. (This research has been funded by the National Science Foundation, Grant No. MIP-89/02426.) The problem with using current parallel computers is that the problems bifurcate into multiple computation paths that share a considerable amount of context. Another basic operation in knowledge processing is inheritance. Inheritance is the mechanism which locates properties attached to concepts within a certain "distance".
Often, the inheritance is not obvious, or deals with conflicting properties. Recognition is also important in knowledge processing. It deals with the ability to recognize concepts or situations in the knowledge base. Although similar to ordinary pattern matching, this problem is far more complex. For example, the properties may not be available locally and may have to be extracted via inheritance, or the exact pattern may not exist and the best match must be determined. Classification is the process of placing a concept in the knowledge hierarchy. Once placed, it is very easy to retrieve information about the concept. However, classifying a concept is a non-trivial task. It involves comparing the properties of the new concept with properties of all the concepts in the knowledge base. Unification is the process of generalizing two patterns to form a new pattern that matches what both input patterns would have matched. For example, unification may be used to match an inference rule with a knowledge base to determine the applicability of that inference rule. In probabilistic reasoning, probabilities are attached to concepts in the semantic network. The probabilities are modified by interactions with other nodes. Thresholding is used to filter out hypotheses that have low probabilities.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Table 1: Important Problems and Operations in Knowledge Processing

Problem        | Operation
Bifurcation    | Control the bifurcation process
Inheritance    | Find all nodes connected to a node via some combination of relations; find the paths connecting two nodes; find the implicit properties of a concept
Recognition    | Locate entities in the knowledge base
Classification | Place a pattern at the most appropriate place in the knowledge base
Unification    | Bind concepts with compatible concepts in the knowledge base
Learning       | Change the values of nodes or links according to some learning algorithm
Learning is the ability of a system to adapt to a problem domain. Commonly, learning involves assigning weights to concepts or links, and being able to change the weights to match the characteristics of the problem domain.

Table 1 lists some of the important problems and operations in knowledge processing. Some other important aspects of knowledge representation and reasoning are described in [Brachman, 1988].

2 SNAP Design

2.1 Marker-Passing Architectures

Marker passing architectures provide efficient implementations of the operations identified previously [Hendler, 1988], [Moldovan, 1989]. However, the class of marker passing architectures has been relatively unexplored. This is primarily because knowledge processing operations and algorithms have not really been identified.

Our design approach was to build a machine that would achieve efficient performance for the operations we described earlier and for a set of natural language parsing algorithms. In the rest of this paper, we will be describing some of the features of SNAP. In order to better understand these features, we will be comparing them with the features of two other architectures for knowledge processing: NETL and the Connection Machine.

SNAP is a parallel machine consisting of a central controller and 16K processing nodes. A SNAP node is capable of storing a single fact, concept, rule pattern, etc. The nodes in the network connect to other nodes in the network by way of relations. Each relation type denotes a different relationship between concepts (nodes). The primary means of computation in SNAP is the processing of markers.

NETL [Fahlman, 1979] was one of the first architectures for knowledge processing. It consisted of a central controller and a collection of very simple processing nodes. NETL had 8 different node types and 8 link types for connecting nodes together. A physical wire served as the connection between two nodes.
The computation model in NETL was marker-passing, with the controller playing an active role in the movement of markers. Although NETL was never built, it served as the basis for several architectures, including the Connection Machine and SNAP.

The Connection Machine [Hillis, 1985] was originally developed as an implementation of Fahlman's NETL. The Connection Machine is a fine-grained array processor with programmable connections between nodes. It consists of 64K single-bit processors, with each processor having 4K bits of memory and a serial ALU. The processors operate in SIMD fashion, with messages being the method of communication.

2.2 Relations

The knowledge base in SNAP is built upon relations between nodes. SNAP can support 64 user-defined relations. The user can extend the number of relations by using nodes to act as relations. This construct is called a Relation-node. Relation-nodes are not as efficient as the primitive relations, but they effectively enable the user to have as many relations as needed. Relations in SNAP also have a weight associated with them. The weight can represent the strength of the link, the cost of traversing the link, etc. Associating weights with relations is essential for implementing reasoning mechanisms such as probabilistic reasoning.

In NETL, relations are pre-defined. Thus, the knowledge base has to be defined using the 8 link types (VC, EQ, CANCEL, CANVC, SPLIT, EXFOR, EXIN, SCOPE). This puts a severe constraint on the type of knowledge that can be represented. NETL has no mechanism for extending the number of relations beyond these 8. In addition, NETL relations cannot carry weights. In fact, NETL has no numeric capabilities at all.

The relations in the Connection Machine are more general than those in both NETL and SNAP. Like SNAP, the relations are all user-definable. Unlike SNAP, the Connection Machine has no limit on the number of relations.
However, the total amount of memory available to the node for storage is limited to 4K bits.

In SNAP, we felt that 64 relations was more than adequate for most applications, and that it was not worth the hardware resources to extend beyond 64. We provided the Relation-node construct to support those cases where more than 64 relations are needed.

2.3 Markers and Value Passing

SNAP is a marker-passing architecture. The nodes in SNAP communicate by way of messages. The effect of a message at a destination node is to manipulate a marker and possibly generate more messages. Each SNAP node is capable of simultaneously storing up to 24 markers. Each marker consists of a bit indicating whether the node possesses that marker, a value register and a pointer register. The value register can store data, the current strength of the marker, a probability, etc. A marker also contains a pointer value. This pointer value identifies the node that originated the marker. The marker pointer allows the same marker to be used for different hypotheses. The pointer "colors" the marker so that we can identify which hypothesis it refers to. The marker pointer also enables the easy creation of new relations between nodes. This can be used to solidify a hypothesis, or be the end result of a series of computations (classification of a concept is one example where the creation/deletion of links is the end result).

We created markers of this type to support probabilistic reasoning. Probabilistic reasoning requires the passing of not only markers but also values. The values contain the probabilities and costs associated with the network. They must be included in the marker messages that are sent between nodes. Otherwise, the system has no way to modify the probabilities in the network. Probabilistic reasoning has important applications in the areas of speech recognition and translation and natural language understanding.
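The idea of a marker whose value register carries a probability attenuated by relation weights can be sketched in software. This is a drastic simplification of the hardware scheme; the network, weights, and threshold below are invented purely for illustration:

```python
# Hypothetical weighted relation graph: node -> list of (relation, weight, target).
EDGES = {
    "hear(/b/)": [("suggests", 0.9, "word-bat"), ("suggests", 0.4, "word-pat")],
    "word-bat": [("isa", 1.0, "noun")],
}

def propagate(origin, value, threshold, marks=None):
    """Spread a marker whose value register holds a probability: each hop
    multiplies in the link weight, a node keeps the strongest value it has
    seen, and sub-threshold hypotheses are filtered out (thresholding)."""
    marks = {} if marks is None else marks
    if value < threshold or marks.get(origin, 0.0) >= value:
        return marks
    marks[origin] = value
    for _rel, weight, target in EDGES.get(origin, []):
        propagate(target, value * weight, threshold, marks)
    return marks
```

Running `propagate("hear(/b/)", 1.0, 0.5)` marks the `word-bat` hypothesis (and its `noun` ancestor) but filters out `word-pat`, whose attenuated value falls below the threshold.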
The markers in SNAP differ from those in NETL and the Connection Machine. NETL markers consist of only a single bit. No pointer or value is associated with the marker. Consequently, NETL cannot distinguish between two or more hypotheses using the same marker, nor can it support probabilistic reasoning. The Connection Machine is able to associate a pointer and value with a marker. However, this is a software construct and is not nearly as efficient in utilizing these features as the built-in hardware in SNAP.

2.4 Propagation Rules

SNAP nodes are capable of communicating with other nodes by way of messages. Each message type has a built-in "propagation rule" which determines the path messages take (i.e., on which relation links to place the messages). When a destination node receives a message, it sets a marker, performs a corresponding action, and, depending on the propagation rule, can "propagate" the message to other nodes. Thus, propagation rules permit the transfer and bifurcation of messages to occur without intervention from the central controller. This allows many different message types, with different propagation rules, to travel in the network simultaneously.

In the SNAP design we felt that it was important to implement in hardware some key propagation rules. The five propagation rules listed below give the programmer considerable freedom to direct how markers are to be propagated.

1. SEQ(R1, R2): the SEQUENCE propagation rule allows the marker to propagate through R1 once, then to R2 once.

2. SPREAD(R1, R2): the SPREAD propagation rule allows the marker to traverse through a chain of R1 links. For each cell in the R1 path, if there exists any R2, the marker switches to the R2 link and continues to propagate until the end of the R2 link.

3. COMB(R1, R2): the COMBine propagation rule allows the marker to propagate to all R1 and R2 links without limitation.
4. END-SPREAD(R1, R2): this propagation rule is the same as SPREAD except that it marks only the last cells in the paths.

5. END-COMB(R1, R2): this propagation rule is the same as COMB except that it marks only the last cells in the paths.

By comparison, the propagation rules in NETL are primitive. Markers can propagate on only one relation type, making a propagation rule like COMBine difficult to achieve. In addition, the NETL controller plays an active role in monitoring and controlling the propagation of markers. Consequently, only one type of marker can be propagating in the network.

In the Connection Machine, the propagation rules can be quite flexible, which allows the Connection Machine to easily implement the SNAP propagation rules. However, the propagation rules in the Connection Machine are software mechanisms. This creates several drawbacks. First, marker propagation in the Connection Machine occurs much more slowly than in SNAP. In SNAP, the marker propagation and processing is built into the hardware. Second, since the Connection Machine must execute instructions to process markers, marker propagation must occur in the foreground. Thus, the Connection Machine cannot perform any other function during this time. In the majority of cases, only a small portion of the network participates in marker propagation; the rest of the network is idle. In SNAP, however, marker propagation occurs in the background. Consequently, the SNAP nodes that are not busy processing markers are free to do other things. Finally, because marker processing in the Connection Machine is software based, the Connection Machine can propagate only one type of marker at a time. A different type of marker would require a different set of software instructions, which would not be able to execute at the same time as the first marker instructions. In order to solve most problems, several different markers are typically required.
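As a software illustration only (SNAP implements these rules in hardware), the SEQ and SPREAD rules can be sketched over a toy relation graph. The graph encoding and node names are our own:

```python
# Hypothetical relation graph: (node, relation) -> list of neighbor nodes.
REL = {
    ("a", "R1"): ["b"],
    ("b", "R1"): ["c"],
    ("b", "R2"): ["e"],
    ("c", "R2"): ["d"],
}

def seq(start, r1, r2):
    """SEQ(R1, R2): cross one R1 link, then one R2 link, marking as we go."""
    marked = set()
    for mid in REL.get((start, r1), []):
        marked.add(mid)
        marked.update(REL.get((mid, r2), []))
    return marked

def spread(start, r1, r2):
    """SPREAD(R1, R2): follow a chain of R1 links; at any cell on the R1
    path that has an R2 link, switch to R2 and follow the R2 chain to its
    end (once on R2 the marker stays on R2)."""
    marked, frontier = set(), [(start, r1)]
    while frontier:
        node, rel = frontier.pop()
        nexts = [(n, rel) for n in REL.get((node, rel), [])]
        if rel == r1:  # an R1 cell may also branch onto R2
            nexts += [(n, r2) for n in REL.get((node, r2), [])]
        for n, r in nexts:
            if n not in marked:
                marked.add(n)
                frontier.append((n, r))
    return marked
```

On this graph, SEQ from `a` marks only `b` (one R1 hop) and `e` (one R2 hop from `b`), while SPREAD marks the whole R1 chain plus the R2 branches reached from it.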
SNAP, therefore, can achieve results with less effort and time than the Connection Machine.

2.5 Instruction Set

We have designed for SNAP a set of 21 powerful instructions specific to knowledge processing. These instructions are executed by the processing nodes and are divided into 6 groups: Node Maintenance, Search, Logical, Marker, Marker-Auxiliary, and Data Retrieval. We felt that the instructions in these 6 groups represent the core functions required for knowledge processing. The Node Maintenance instructions (CREATE, DELETE, SET-COLOR) are used for loading and modifying the knowledge base. The Search instructions (SEARCH and SEARCH-COLOR) are used to select a node or a group of nodes in the array. The Logical functions (AND, OR, NOT) are used to manipulate the markers within a node. The Marker instructions (MARKER, MARKER-ADD, MARKER-SUB, MARKER-MULT, and MARKER-DIVIDE) introduce a marker into the network. Each of the Marker instructions has a propagation rule associated with it to provide decentralized control. In addition, an arithmetic function can be associated with a marker to enable manipulation of the numeric values in the marker value and the relation weight registers. This enables SNAP to support a wide range of numeric applications, including probabilistic reasoning and learning. The Marker-Auxiliary functions (CLEAR-MARKER, STOP-MARKER, CLEAR-STOP-MARKER, EQUATE, and CLEAR-EQUATE) are used to modify the operation of the Marker instructions. The STOP-MARKER enables a node to "eat" a marker and prevent it from propagating. This is an important mechanism for controlling the flow of markers and preventing the movement of markers into undesired areas. The EQUATE instruction enables a relation to be treated as if it were another relation type during marker propagation. The CLEAR instructions are used to reset the marker portions of nodes.
Finally, the Data Retrieval instructions (COLLECT, COLLECT-RELATION, COLLECT-MARKER) are used to obtain information from the nodes. For a more detailed look at the SNAP instruction set see [Moldovan, 1989].

In NETL, there is no real instruction set per se. The nodes in NETL are very simple. Control of markers is governed by the NETL controller. Reasoning on NETL is done by retrieving patterns from the knowledge base.

The nodes in the Connection Machine have a basic instruction set. They can be combined to form higher-level instructions like those in SNAP. For example, [Chung, 1989] created an instruction set similar to SNAP's when he programmed some knowledge processing examples on the Connection Machine. Our approach was to spend more hardware to create more complex primitive instructions. This allows us to perform basic knowledge processing operations in a minimal amount of time.

2.6 Granularity of a SNAP Chip

In SNAP, we have packaged 32 nodes into a single custom-designed chip. Each node can have an average of 10 relations. We placed these two limitations on SNAP to reduce the cost and to save space. The advances in VLSI technology in the last decade have enabled us to place more logic onto a chip. Consequently, we felt it would be much easier to build a machine with 512 chips than it would be to build one with 16K chips. With the amount of logic that each node takes, we have estimated that 32 nodes and 320 total relations would easily fit onto a chip. There is a tradeoff in placing more nodes onto a chip. Each SNAP chip has only 4 data ports for communication with other chips. With 32 nodes in a chip, the data ports can become a bottleneck in the system. [Kim, 1989] has done some preliminary analysis in this area.

In both NETL and the Connection Machine, each chip consisted of only one node.

2.7 Interconnection Network

The SNAP interconnection network is used to connect the SNAP chips together.
It enables message passing between nodes. The SNAP interconnection network is a modified bus hypercube. A 16K network is made up of 512 SNAP chips. Each chip has 4 data ports for communication with other chips and a router for determining the message path. Messages in SNAP are 50 bits long and are sent as five 10-bit packets. The maximum number of intermediate chips a SNAP message must pass through is 3. More details on the SNAP interconnection network can be found in [Moldovan, 1989]. The network has been software simulated and compared with other networks [Lee, 1989]. The results show that the network performs favorably when compared to the performance of other networks.

The interconnection network in NETL is vastly different. In NETL, all connections between nodes are point to point. A physical wire is placed between two nodes that share a relation. Thus, NETL messages can travel very fast. However, a network of this type is infeasible. In a dense network (more than a hundred nodes) it is almost impossible to place a physical wire between two nodes and be able to remove it later. A network where we cannot delete links can only support monotonic reasoning; in almost all cases, this is unacceptable.

The Connection Machine has special routing chips for sending messages. This frees the nodes from having to participate in message routing. In a 64K Connection Machine, there are 4K routers, with each router servicing 16 nodes. The routers are arranged in a 12-dimensional hypercube and can process 1 bit at a time. In SNAP, we chose not to create a special router chip, because we felt that incorporating the routing function into the node chip was a better alternative. Unlike the Connection Machine, marker propagation in SNAP can proceed in the background without intervention from the controller. Thus an integrated router fits right into that concept.
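Hop counts of this kind can be related to a standard property of binary hypercubes: neighboring chips differ in exactly one address bit, so the minimum hop count between two chips is the Hamming distance between their addresses. The sketch below shows that generic property with textbook dimension-order routing; it is not SNAP's modified bus hypercube or its actual router logic:

```python
def hypercube_hops(src, dst):
    """Minimum hops between two hypercube nodes = Hamming distance
    between their binary addresses (each hop flips one address bit)."""
    return bin(src ^ dst).count("1")

def route(src, dst):
    """One standard dimension-order route: flip differing bits from the
    lowest dimension upward. Illustrative only, not SNAP's scheme."""
    path, cur, bit = [src], src, 0
    while cur != dst:
        if (cur ^ dst) >> bit & 1:
            cur ^= 1 << bit
            path.append(cur)
        bit += 1
    return path
```

For example, chips 0b000 and 0b101 differ in two bits, so any route between them needs at least two hops.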
In the Connection Machine, however, actual instructions have to be executed to process a message. Incorporating the routing function in the node would further complicate the long process of marker propagation.

3 Simulation

We have built a simulator of SNAP to test some of the concepts we have discussed in this paper. We have run several examples on the SNAP simulator. Table 2 summarizes the simulation results for these examples. Example 1 is a non-obvious inheritance problem involving 30 concepts (nodes). Example 2 is an inheritance problem over an imaginary two-dimensional 10 x 10 network. Example 3 deals with recognition with multiple properties. Example 4 is a small classification problem [Lipkis, 1983]. A complete description of the examples can be found in [Moldovan, 1990]. Some of these examples have been implemented on the Connection Machine at USC-IX. Those times are listed along with the SNAP times in Table 2. Note that for SNAP we assumed the same clock speed as the Connection Machine (6.47443 MHz).

Table 2: Comparison between SNAP and the Connection Machine. [Columns: SNAP cycles and time, Connection Machine time, and the ratio Tcm/Tsnap; the numeric entries did not survive extraction.]

4 Conclusions

SNAP combines several features which collectively make SNAP a powerful knowledge processing engine. Some of these features are: a powerful instruction set implemented in hardware, a marker passing architecture, associative array processing, and a modified hypercube interconnection network. The instruction set has been carefully designed to provide hardware implementation of the most often used knowledge processing operations. Special attention has been given to marker propagation rules.

The results shown in Table 2 between SNAP and the Connection Machine are not surprising considering the differences between the two machines described in this paper. The fundamental reason for the superior performance of SNAP is that it implements in hardware features that require software instructions in the Connection Machine.
References

Brachman, R. [1988]. "The basics of knowledge representation and reasoning", AT&T Technical Journal, 67:1, 7-24.

Chung, S., Moldovan, D. and Tung, Y. [1989]. "Reasoning on the Connection Machine", Technical Report CENG-89-13, Department of Electrical Engineering-Systems, University of Southern California.

Fahlman, S. [1979]. "NETL: A System for Representing and Using Real-World Knowledge". The MIT Press, Cambridge, MA.

Hendler, J. [1988]. "Integrating Marker-Passing and Problem-Solving". Lawrence Erlbaum Associates, Inc.

Hillis, W. [1985]. "The Connection Machine". The MIT Press, Cambridge, MA.

Kim, J. and Moldovan, D. [1989]. "Parallel Classification for Knowledge Representation on SNAP". Proceedings of the 1990 International Conference on Parallel Processing. Department of Electrical Engineering-Systems, University of Southern California.

Lee, W. [1989]. "Bandwidth Analysis of Message Passing Networks". Technical Report CENG-89-24, Department of Electrical Engineering-Systems, University of Southern California.

Lipkis, T. and Schmolze, J. [1983]. "Classification in the KL-ONE knowledge representation system", Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Vol. 1, 330-332.

Moldovan, D., Lee, W., and Lin, C. [1989]. "SNAP: A Marker-Propagation Architecture for Knowledge Processing", Technical Report No. 89-10, Department of Electrical Engineering-Systems, University of Southern California.

Moldovan, D., Lee, W., Lin, C., and Chung, S. [1990]. "Parallel Knowledge Processing on SNAP". Proceedings of the 1990 International Conference on Parallel Processing.
A Principled Approach to Reasoning about the Specificity of Rules

John Yen
Department of Computer Science
Texas A&M University
College Station, TX 77843
Yen@CSSUN.TAMU.EDU

Abstract

Even though specificity has been one of the most useful conflict resolution strategies for selecting productions, most existing rule-based systems use heuristic approximations such as the number of clauses to measure a rule's specificity. This paper describes an approach for computing a principled specificity relation between rules whose conditions are constructed using predicates defined in a terminological knowledge base. Based on a formal definition of the pattern subsumption relation, we first show that a subsumption test between two conjunctive patterns can be viewed as a search problem. Then we describe an implemented pattern classification algorithm that improves the efficiency of the search process by deducing implicit conditions logically implied by a pattern and by reducing the search space using subsumption relationships between predicates. Our approach enhances the maintainability of rule-based systems and the reusability of definitional knowledge.

Introduction

Specificity is a classic conflict resolution heuristic used by many rule languages from OPS through ART for selecting productions [McDermott and Forgy, 1978]. It provides a convenient way for expert systems (such as R1) to describe general problem solving strategies as well as strategies for handling exceptional cases. In a similar spirit, common sense reasoning also relies on the specificity of a rule's antecedents to override conclusions drawn by more general rules when they contradict the more specific rule.

Even though the specificity of rules has long been recognized as important information for the selection of rules, few efforts have been made to develop algorithms for computing a principled measure of rules' specificity.
Instead, most existing rule systems use syntactic information such as the number of clauses as a heuristic approximation to the specificity of rules. This has both encouraged, and to some extent necessitated, bad programming practices in which clauses are placed in production rules solely to outsmart the conflict resolution algorithm. As a result, it is hard to explain rules and difficult to determine how to correctly add or revise them. Two other problems with rule-based systems have often been identified by critics. First, the meaning of the terminology used by rules is often ill-defined [Swartout and Neches, 1986]. This makes it difficult to determine when rules are, or should be, relevant to some shared abstraction, which, in turn, makes it difficult to find and change abstractions. Second, it is difficult to structure large rule sets [Fikes and Kehler, 1985]. This makes it difficult to decompose the set into smaller, more comprehensible and maintainable subsets.

To address these problems with rule-based systems, we have developed a production system, CLASP, where the semantics of predicates used in rules are defined using a term subsumption language (LOOM)¹ [Yen et al., 1989]. One of the major features of CLASP is a pattern classifier that organizes patterns into a lattice where more specific patterns are below more general ones, based on the definitions of predicates in the patterns. Using the pattern classifier, CLASP can compute a well-defined specificity relation between rules.

Related Work

The idea of using the taxonomic structure of a terminological knowledge base to infer specificity relations between rules was first introduced by CONSUL [Mark, 1981]. Because rules in CONSUL mapped one description to another, the condition of a CONSUL rule is just a concept.
Specificity of rules can thus be obtained directly from the concept subsumption lattice. To verify the consistency and completeness of expert systems, researchers have also developed algorithms for detecting subsumed rules based on a subset test of clauses [Suwa et al., 1982; Nguyen et al., 1985]. More recently, the problem of computing the subsumption relation between plan classes has also been explored [Wellman, 1988].

Defining Pattern Subsumption Relations

Conceptually, a pattern P2 is more specific than (i.e., is subsumed by) a pattern P1 if, for all states of the facts database, a match with P2 implies a match with P1. To define the subsumption of patterns more formally, we need to introduce the following terminology.

A pattern is denoted by Px where x denotes the set of variables in the pattern². An instantiation of the pattern is denoted as Px(ā) where ā is a vector of variable bindings for x. For instance, the expression P1(?x1 ?x2 ?x3)([John Angela Carl]) denotes an instantiation of P1 that binds pattern variables ?x1, ?x2, and ?x3 to John, Angela, and Carl respectively. Let T be a terminological knowledge base. Concepts and roles (i.e., relations) are unary predicates and binary predicates defined in T.

¹Term subsumption languages refer to knowledge representation formalisms that employ a formal language, with a formal semantics, for the definition of terms (more commonly referred to as concepts or classes), and that deduce whether one term subsumes (is more general than) another using a classifier [Patel-Schneider et al., 1990]. These formalisms generally descend from the ideas presented in KL-ONE [Brachman and Schmolze, 1985]. LOOM is a term subsumption-based knowledge representation system developed at USC/ISI [MacGregor and Bates, 1987].
An interpretation I of T is a pair (D, E) where D is a set of individuals described by terms in T and E is an extension function that maps concepts in T to subsets of D and roles in T to subsets of the Cartesian product D × D, denoted as D². P^E(x̄) denotes that x̄ satisfies the condition of the pattern P under the extension function E, i.e., for all x, y ∈ D:

- (C x)^E iff x ∈ E(C)
- (R x y)^E iff [x, y] ∈ E(R)
- (l₁ ∧ l₂)^E iff l₁^E ∧ l₂^E

where C and R denote concepts and relations defined in T, and l₁ and l₂ denote two literals.

Definition 1. Suppose P1y and P2x are two patterns whose predicates are defined in a terminological knowledge base T. The pattern P1y subsumes P2x, denoted as P1y ⪰ P2x, iff

    ∀I = (D, E) ∀x̄ ∈ Dⁿ (P2x^E(x̄) ⇒ ∃ȳ ∈ Dᵐ P1y^E(ȳ))    (1)

where x̄ and ȳ are vectors of elements in D, with dimensions n and m respectively.

It is easy to verify that the pattern subsumption relation is reflexive and transitive.

Several important points about our definition of the pattern subsumption relation are worth mentioning. First, the definition allows patterns with different numbers of variables to be compared with each other. This is important for using the subsumption of patterns as a useful measure of the specificity of rules, for the condition of a specific rule often introduces extra variables to test a situation that is more complicated than the condition of a general rule. Enforcing that two subsuming patterns have the same number of variables would limit the usefulness of the pattern subsumption taxonomy for controlling the firing of rules. Second, a pattern P1y subsumes a pattern P2x if and only if Equation 1 holds for all possible interpretations of T. For instance, suppose we define a House-owner to be a person who owns at least a house. The pattern (Own-house ?x ?y) does not subsume the pattern (House-owner ?x) because a match with the latter does not guarantee a match with the former, due to incompleteness of the knowledge base (e.g., the system may know John owns some houses without knowing any specific houses that are owned by him). Third, the definition allows the parent pattern to contain extra conditions that do not have counterparts in the child pattern. For example, the pattern (robot ?x) ∨ (animal ?x) subsumes the pattern (super-robot ?robot) under the substitution (?robot/?x) even though the condition about (animal ?x) does not have a more specific counterpart in the child pattern.

To determine whether a pattern P1y subsumes another pattern P2x, we need to find a substitution that replaces variables in P1y by arguments in P2x such that the latter terminologically implies the former under the substitution. Terminological implication, denoted as ⊨, is defined as follows:

    P2x ⊨ P1x′ iff ∀I = (D, E) [∀x̄ ∈ Dⁿ (P2x^E(x̄) ⇒ P1x′^E(x̄′))]    (2)

where the set of variables in x′ is a subset of the variables in x, and hence the variable binding of x̄′ is directly obtained from that of x̄. More formally, we have the following theorem.

Theorem 1. Suppose patterns P1y and P2x are boolean combinations of literals. The pattern P1y subsumes P2x iff there exists a subsumption substitution S that replaces variables of P1y by P2x's variables or constants such that P2x terminologically implies P1y/S based on the terminological knowledge base T, i.e.,

    P1y ⪰ P2x iff ∃S such that P2x ⊨ P1y/S.    (3)

The proof of the theorem is based on skolemizing the existentially quantified variable ȳ in Equation 1. Details of the proof can be found in [Yen, 1990]. The subsumption substitution S can also be viewed as a mapping because it maps a variable of pattern P1y to a variable or a constant in pattern P2x.

²When it is not important to refer to the variables of a pattern, we denote patterns simply by P.
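The search for a subsumption substitution S can be illustrated with a toy sketch: try to map each parent literal onto some child literal while keeping the variable bindings consistent. The patterns, the predicate-subsumption table, and the reduction of literal-level terminological implication to a table lookup are all our own simplifications, not CLASP's implementation:

```python
# Hypothetical predicate-subsumption table: (parent_pred, child_pred) pairs
# meaning the parent predicate subsumes (is implied by) the child predicate.
SUBSUMES = {("robot", "super-robot"), ("own", "own")}

def find_substitution(parent, child, subst=None):
    """Search for a subsumption substitution mapping parent-pattern variables
    to child-pattern arguments. Literals are tuples: (predicate, *args).
    Returns a substitution dict, or None if the parent does not subsume."""
    subst = dict(subst or {})
    if not parent:
        return subst  # every parent literal found a consistent subsumee
    (pred, *args), rest = parent[0], parent[1:]
    for c_pred, *c_args in child:
        if (pred, c_pred) in SUBSUMES and len(args) == len(c_args):
            trial = dict(subst)
            # setdefault binds an unbound variable; a clash fails the check
            if all(trial.setdefault(a, c) == c for a, c in zip(args, c_args)):
                result = find_substitution(rest, child, trial)
                if result is not None:
                    return result  # consistent mapping found; backtrack otherwise
    return None
```

For example, the parent pattern (robot ?x) subsumes the child pattern (super-robot ?r) under the substitution {?x/?r}, and a clash in bindings makes the search backtrack to another candidate subsumee.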
We will use the terms "subsumption substitution" and "subsumption mapping" interchangeably in our discussion. Intuitively, it is easy to see that the existence of a subsumption mapping is a sufficient condition that P2x is more specific than P1y because, for any instantiation of P2x's variables, we can construct an instantiation of P1y's variables from the subsumption mapping. Thus, matching P2x implies matching P1y if a subsumption mapping exists.

P1: (:and (father ?x ?y) (father ?x ?z))
P2: (father ?u ?v)

Figure 1: An Example of Two Indifferent Patterns

We further define the following relationships between patterns:

- Two patterns are indifferent, denoted by ~, if and only if they subsume each other, i.e., P1 ~ P2 ⟺ P1 ⪰ P2 ∧ P2 ⪰ P1. Indifferent patterns are merged in the specificity lattice. Conceptually, two patterns are indifferent if, for any state of the facts database, either both patterns match or neither of them matches the facts database.

- Two patterns are equivalent, denoted by ≡, if they are indifferent and the subsumption mapping is a one-to-one mapping between variables of the two patterns. Two indifferent patterns may not be equivalent. For instance, the patterns P1 and P2 in Figure 1 are indifferent because P1 subsumes P2 under the substitution (?u/?x, ?v/?y, ?v/?z) and P2 subsumes P1 under the substitution (?x/?u, ?y/?v) or (?x/?u, ?z/?v). But the two patterns are not equivalent because they have different instantiations for a given facts database.

- Two patterns are equal, denoted by =, if they are equivalent under a subsumption mapping that maps each variable to a variable with the same name.

The subsumption substitution differs from substitution in unification in that it is directional. It substitutes variables/constants of a child pattern for variables of a parent pattern, but not the other way.
This distinction is due to the fact that a subsumption test is meant to test implications, which is directional, while unification is meant to test equality, which is bidirectional.

Classifying Conjunctive Patterns

This section describes an algorithm for classifying patterns that are conjunctions of non-negated literals (which we will refer to as conjunctive patterns). The algorithm consists of two steps. First, each pattern is normalized by making explicit in the pattern any unstated conditions logically implied by the pattern and the terminological knowledge. Second, the algorithm searches for a subsumption substitution between pairs of normalized patterns.

A General Strategy

The general strategy of CLASP's pattern classification algorithm is to simplify the subsumption test between pairs of patterns by first normalizing them. This strategy is analogous to completing a concept definition before actually classifying the concept in KL-ONE's classifier [Schmolze and Lipkis, 1983]. To formally justify our approach, this section first defines normalized patterns, then describes a theorem about the subsumption test of normalized conjunctive patterns.

A pattern is normalized if it contains no implicit conditions other than those that can be deduced easily from the subsumption lattice of concepts and of roles, which has been precomputed by LOOM's classifier. More formally, we define a normalized pattern as follows:

Definition 2. A pattern P is said to be normalized iff

    ∀l, if P ⊨ l, then ∃l′ in P such that l′ ⊨ l    (4)

where l and l′ are literals with the same number of arguments. We say a pattern P̄ is a normalized form of P if and only if P̄ is normalized and P equals P̄ (i.e., they are equivalent without variable substitution).

The rationale behind normalizing patterns is to simplify the subsumption test.
Without the normalization process, the search for a subsumption substitution would have to consider the possibility that a condition in the parent pattern subsumes a conjunctive subpattern of the child pattern. For example, consider rules R2 and R3 in Figure 4. The condition (College-graduate ?y) in R2 subsumes the subpattern (Successful-Father ?z) ∧ (Child ?z ?w) of R3's condition under the substitution ?w/?y. Having deduced the conditions implied by these conjunctive subpatterns during the normalization process, the subsumption test only needs to consider pairs of conditions with the same arity (one from the parent pattern, one from the child pattern) for testing the subsumption possibility of the two patterns. Thus, normalizing patterns significantly reduces the complexity of the subsumption test. The following theorem formally states the impact of pattern normalization on the subsumption test.

Theorem 2 Suppose P1 and P2 are two normalized conjunctive patterns:

P1 = l^1_1 ∧ l^1_2 ∧ … ∧ l^1_n  (5)
P2 = l^2_1 ∧ l^2_2 ∧ … ∧ l^2_m  (6)

where the l^1_i and l^2_j are literals without negations. The pattern P1 subsumes P2 if and only if there exists a subsumption substitution S such that every literal l^1_i in P1 subsumes at least one literal in P2 with the same arity, i.e.,

P1 ⊒ P2  iff  ∃S [∀l^1_i in P1, ∃l^2_j in P2, such that l^2_j ⇒ l^1_i/S]  (7)

where l^1_i and l^2_j have the same number of arguments. To prove the theorem, we first introduce the following lemma.

Lemma 1 Suppose P1 is a conjunction of n literals, i.e., P1 = l^1_1 ∧ l^1_2 ∧ … ∧ l^1_n, where l^1_1, l^1_2, …, l^1_n are literals without negations. The pattern P1 subsumes a pattern P2 if and only if there exists a subsumption substitution S such that each literal l^1_i subsumes the pattern P2 under the substitution, i.e.,

P1 ⊒ P2  iff  ∃S such that (P2 ⇒ l^1_1/S) ∧ (P2 ⇒ l^1_2/S) ∧ … ∧ (P2 ⇒ l^1_n/S)  (8)

Proofs of Lemma 1 and Theorem 2 can be found in [Yen 1990].
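A minimal sketch of the test stated in Theorem 2: for already-normalized patterns, search for one substitution under which every parent literal has a same-arity subsumee in the child. The IMPLIES table is a hand-coded stand-in for the predicate lattice precomputed by LOOM's classifier; the rule fragments are simplified from Figure 5.

```python
from itertools import product

# Toy predicate lattice standing in for LOOM's classifier: (child, parent).
IMPLIES = {('Daughter', 'Child'), ('Son', 'Child'),
           ('Successful-Father', 'Person'), ('Father', 'Person')}

def pred_implies(p, q):
    return p == q or (p, q) in IMPLIES

def pattern_subsumes(p1, p2):
    """Theorem 2: P1 subsumes P2 iff some substitution S maps P1's variables
    so that every literal of P1 has a same-arity subsumee in P2."""
    pvars = sorted({t for lit in p1 for t in lit[1:] if t.startswith('?')})
    cterms = sorted({t for lit in p2 for t in lit[1:]})
    for image in product(cterms, repeat=len(pvars)):
        s = dict(zip(pvars, image))
        if all(any(len(l2) == len(l1) and pred_implies(l2[0], l1[0])
                   and tuple(s.get(t, t) for t in l1[1:]) == l2[1:]
                   for l2 in p2)
               for l1 in p1):
            return True
    return False

# Simplified, already-normalized fragments of R2 and R3 (cf. Figure 5).
R2 = [('Person', '?x'), ('Child', '?x', '?y')]
R3 = [('Successful-Father', '?z'), ('Daughter', '?z', '?w')]
print(pattern_subsumes(R2, R3))  # True: ?x -> ?z, ?y -> ?w
print(pattern_subsumes(R3, R2))  # False
```

Note that without normalization (e.g., with the raw (Child ?z ?w) ∧ (Female ?w) of Figure 4 instead of (Daughter ?z ?w)), this literal-by-literal test would be incomplete, which is exactly why normalization comes first.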
Comparing Equations 3 and 7, we can see immediately that the complexity of the subsumption test has been reduced significantly by first normalizing the patterns. Based on Theorem 2, computing whether P2 is more specific than P1 only requires searching for a subsumption mapping such that each condition (i.e., literal) in P1 subsumes at least one condition (i.e., literal) in P2 under the mapping. We will refer to a condition of P2 that is subsumed by a condition l^1_i in P1 as l^1_i's subsumee. The subsumption test between normalized conjunctive patterns, thus, is a simpler search problem. The following two sections describe the normalization of patterns and the subsumption test between normalized patterns implemented in CLASP.

Normalizing Patterns

The normalization step transforms each pattern into an equivalent normalized pattern. Five kinds of normalization steps have been implemented in CLASP: (1) domain and range deductions, (2) normalizing unary conditions, (3) normalizing binary conditions, (4) value restriction deductions, and (5) at-least-one deductions. Each normalization step will be described and illustrated with examples, based on Figures 3 and 4. These normalization steps are correct because each one transforms a pattern into an equivalent one, based on the semantics of LOOM's term-forming expressions in Figure 2.

1. Domain and Range Deduction: This step deduces unary conditions about variables that appear in a binary condition, using the domains and ranges of the condition's predicate (i.e., a relation). For instance, this step will infer an implicit condition of R3, (Vehicle ?c), from the range of the Has-car relation.

2. Normalizing Unary Conditions: Unary conditions that involve the same variable are replaced by one unary condition whose predicate is the conjunction of the unary predicates (i.e., concepts) in the original pattern.
This ensures that all patterns are transformed into a canonical form where each variable has at most one unary condition. The condition side of R2 thus is normalized to combine the two unary conditions about the variable ?y into one condition (College-graduate&Car-Owner ?y), where College-graduate&Car-Owner is the conjunct of College-graduate and Car-Owner.

(defrule R2
  :when (:and (College-graduate ?y)
              (Child ?x ?y)
              (Car-Owner ?y))
  ...)

(defrule R3
  :when (:and (Successful-Father ?z)
              (Father ?f)
              (Child ?z ?w)
              (Son ?f Fred)
              (Female ?w)
              (Has-Car ?w ?c))
  ...)

Figure 4: Two Rules Before Normalization

3. Normalizing Binary Conditions: Binary conditions with the same arguments are collected and replaced by a new composite binary condition that takes into account the unary conditions of its domain variable and its range variable. This ensures that all normalized patterns have at most two binary conditions for each variable pair (the argument positions of the variables can be switched). For instance, the conditions (Child ?z ?w) ∧ (Female ?w) in R3 are transformed to (Daughter ?z ?w) ∧ (Female ?w).

4. Value Restriction Deduction: Suppose a pattern contains conditions of the form (:and (C1 ?x) (R ?x ?y) ...) and the definition of C1 in the terminological space has a value restriction on R, say C2. Then the pattern is equivalent to a pattern that has an additional unary condition (C2 ?y). For example, the conditions (Successful-Father ?z) and (Daughter ?z ?w) in R3 deduce an implicit condition (College-graduate ?w), because a successful father has been defined as a father all of whose children, which include daughters, are college graduates, as shown in Figure 3.

5. At-least-one Deduction: A pattern containing two conditions in the form of (:and ... (C ?x) ... (R ?x α) ...),
where α is either a variable or a constant, is transformed to one that replaces C by the concept C′ defined below, which has an additional at-least-one number restriction on the relation R:

(defconcept C′ (:and C (:at-least 1 R)))

Following our example, the conditions (Female ?w) and (Has-car ?w ?c) in R3 now can deduce another implicit condition about ?w: (Car-owner ?w), for Car-Owner has been defined to be a person who has at least one car.

Expression e       | Interpretation
(:and C1 C2)       | λx. [C1](x) ∧ [C2](x)
(:and R1 R2)       | λx,y. [R1](x,y) ∧ [R2](x,y)
(:at-least 1 R)    | λx. ∃y [R](x,y)
(:all R C)         | λx. ∀y. [R](x,y) → [C](y)
(:domain C)        | λx,y. [C](x)
(:range C)         | λx,y. [C](y)

Figure 2: Semantics of Some Term-Forming Expressions

(defconcept Person (:primitive))
(defconcept Male (:and Person :primitive))
(defconcept Female (:and Person :primitive))
(defconcept College-graduate (:and Person :primitive))
(defrelation Child (:and :primitive (:domain Person) (:range Person)))
(defrelation Daughter (:and Child (:range Female)))
(defrelation Son (:and Child (:range Male)))
(defconcept Father (:and Male (:at-least 1 Child)))
(defconcept Successful-Father (:and Father (:all Child College-graduate)))
(defrelation Has-car (:and :primitive (:domain Person) (:range Vehicle)))
(defconcept Car-owner (:and Person (:at-least 1 Has-car)))

Figure 3: An Example of Terminological Knowledge

(defrule R2
  :when (:and (Person ?x)
              (College-graduate&Car-Owner ?y)
              (Child ?x ?y))
  ...)

(defrule R3
  :when (:and (Successful-Father ?z)
              (Female&College-graduate&Car-owner ?w)
              (Daughter ?z ?w)
              (Father ?f)
              (Vehicle ?c)
              (Son ?f Fred)
              (Has-Car ?w ?c))
  ...)

Figure 5: Two Rules After Normalization

Figure 5 shows the condition sides of R2 and R3 after they have been normalized. It is now easier to see that R3 is actually more specific than R2, which was not obvious prior to normalization.
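To make the normalization steps concrete, here is a toy normalizer for two of the five steps (range deduction and value-restriction deduction) applied to a fragment of R3. The RANGE and VALUE_RESTRICTION tables are hand-coded stand-ins for LOOM's terminological knowledge of Figure 3, not CLASP's actual machinery.

```python
# Hand-coded stand-in for the terminology of Figure 3.
RANGE = {'Has-Car': 'Vehicle', 'Child': 'Person', 'Daughter': 'Female'}
VALUE_RESTRICTION = {('Successful-Father', 'Daughter'): 'College-graduate'}

def normalize(pattern):
    derived = set(pattern)
    # Step 1 (range deduction): (R ?x ?y) adds (range(R) ?y).
    for lit in pattern:
        if len(lit) == 3 and lit[0] in RANGE:
            derived.add((RANGE[lit[0]], lit[2]))
    # Step 4 (value restriction): (C ?x) and (R ?x ?y) add (VR(C,R) ?y).
    for c in pattern:
        for r in pattern:
            if len(c) == 2 and len(r) == 3 and c[1] == r[1] \
               and (c[0], r[0]) in VALUE_RESTRICTION:
                derived.add((VALUE_RESTRICTION[(c[0], r[0])], r[2]))
    return sorted(derived)

R3 = [('Successful-Father', '?z'), ('Daughter', '?z', '?w'),
      ('Has-Car', '?w', '?c')]
print(normalize(R3))
# adds (Female ?w), (Vehicle ?c) and (College-graduate ?w)
```

A full implementation would iterate the five steps to a fixpoint and merge the unary conditions per variable (step 2); this sketch only shows how implicit conditions become explicit.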
Testing Subsumption of Normalized Conjunctive Patterns

Reducing the Search Space. Although an exhaustive search that considers all possible mappings cannot be avoided in the worst case, the search space of possible subsumption mappings can be significantly reduced in most cases by considering the subsumption relationships between predicates. Normally, the condition pattern of a rule consists of several different predicates, only a small percentage of which are subsumed by a predicate in another pattern. Thus, using the subsumption relationships between predicates, we can significantly reduce the search space for finding a subsumption mapping.

Our strategy is to identify potential subsumees for all literals in the parent pattern P1. A literal l2 is a potential subsumee of a literal l1 if there exists a subsumption substitution S such that l2 ⇒ l1/S. The set of potential subsumees of a unary literal determines a set of potential candidates (which we call potential images) that a variable can map to under a subsumption mapping. The set of potential subsumees of a binary literal generates mapping constraints on how pairs of variables should be mapped. Potential images are used to reduce the branching factor of the search space, and mapping constraints are used to prune the search tree.

This is illustrated using the example in Figure 5. Only two conditions in R3, (Son ?f Fred) and (Daughter ?z ?w), can potentially be subsumed by (Child ?x ?y) in R2. Since (Child ?x ?y) must have a subsumee under a subsumption mapping, we can infer that any subsumption mapping that proves R3 is more specific than R2 has to satisfy one of the following two mapping constraints: (1) if (Child ?x ?y) subsumes (Son ?f Fred), then the variable ?x should map to ?f and the variable ?y should map to Fred; (2) if (Child ?x ?y) subsumes (Daughter ?z ?w), then the variable ?x should map to ?z and the variable ?y should map to ?w.
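The computation of potential images can be sketched as follows; the IMPLIES table again stands in for LOOM's predicate lattice, and the patterns are the running R2/R3 example.

```python
# Compute each parent variable's potential images from the potential
# subsumees of the parent's unary literals.
IMPLIES = {('Daughter', 'Child'), ('Son', 'Child'),
           ('Successful-Father', 'Person'), ('Father', 'Person')}

def pred_implies(p, q):
    return p == q or (p, q) in IMPLIES

def potential_images(parent, child):
    """Map each variable occurring in a unary parent literal to the set of
    child terms appearing in that literal's potential subsumees."""
    images = {}
    for lit in parent:
        if len(lit) == 2 and lit[1].startswith('?'):
            images.setdefault(lit[1], set())
            for l2 in child:
                if len(l2) == 2 and pred_implies(l2[0], lit[0]):
                    images[lit[1]].add(l2[1])
    return images

R2 = [('Person', '?x'), ('Child', '?x', '?y')]
R3 = [('Successful-Father', '?z'), ('Father', '?f'),
      ('Daughter', '?z', '?w'), ('Son', '?f', 'Fred')]
print(potential_images(R2, R3))  # {'?x': {'?z', '?f'}}
```

An empty image set for some variable is exactly the early-failure case described below: the subsumption test can return false without any search.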
Similarly, the potential subsumees of a parent pattern's unary condition restrict the candidate images a variable can map to. Using the example in Figure 5 again, (Successful-Father ?z) and (Father ?f) are the only two unary conditions in R3 that can potentially be subsumed by (Person ?x) in R2. Hence, the potential images of ?x are ?z and ?f.

The process of reducing the search space can also detect early failure of the subsumption test. The subsumption test terminates and returns false whenever (1) it fails to find any potential images for a variable in P1; or (2) a binary condition in P1 fails to find any binary condition in P2 as a potential subsumee.

Searching for a Subsumption Substitution. A subsumption mapping between two normalized patterns is constrained by the potential images of each variable in the parent pattern and by the mapping constraints imposed by the binary conditions of the parent pattern P1. To search for a subsumption mapping that satisfies these constraints, which are generated by the algorithms discussed in the previous sections, the pattern classifier first sorts the parent variables in increasing order of the number of their potential images, then performs a dependency-directed backtracking search. The position of a variable in the sorted list corresponds to the level at which its images are assigned in the search tree. At each node, the current assignment of images to variables is checked to see whether it satisfies the mapping constraints. If it does not satisfy some mapping constraint, the algorithm backtracks to the closest node whose assignment causes a constraint violation.

Discussion

We have shown elsewhere that CLASP's pattern classification algorithm is sound [Yen 1990]. It is also complete for a simple term subsumption language whose expressiveness is equivalent to that of FL⁻ in [Brachman and Levesque 1984].
In general, an implementation of our pattern classification algorithm is sound if (1) the normalization step transforms an input pattern to an equivalent one, and (2) all identified potential subsumees are correct (which requires the classifier to be sound). An implementation of the general algorithm is complete if (1) the normalization step transforms an input pattern into its normalized equivalent form, and (2) the complete set of potential subsumees is identified for each literal of the parent pattern (which requires the classifier to be complete). A more detailed discussion of the issues regarding soundness and completeness of the pattern classification algorithm can be found in [Yen 1990].

Determining the subsumption of normalized conjunctive patterns is NP-complete, for it can be reduced from the problem of determining subgraph isomorphism for directed graphs, which is known to be NP-complete. However, worst cases rarely occur in practice. To analyze the behavior of the algorithm in reality, we have defined normal cases³ and have shown that the complexity of the algorithm for normal cases is polynomial [Yen 1990].

Brachman and Levesque have demonstrated that there is an important tradeoff between the expressiveness of a terminological language and the complexity of its reasoner [Brachman and Levesque 1984]. A similar tradeoff between the computational complexity of the normalization process and the expressiveness of the terminological language has also been investigated [Yen 1990].

Summary

We have presented a principled approach to computing the specificity of rules whose conditions are constructed from terms defined using a terminological language. Based on a formal definition of the pattern subsumption relation, we first show that the subsumption test between conjunctive patterns can be viewed as a search problem. Then we describe a pattern classification algorithm that improves the efficiency of the search process in two ways.
First, implicit conditions logically implied by a pattern are made explicit before the search step. Second, the algorithm attempts to reduce the search space using information about the subsumption relation between predicates.

Our approach offers several important benefits to the developers of rule-based systems. First, the pattern classifier makes it possible to provide, for the first time, a principled account of the notion of rule specificity as a guide to conflict resolution. This will greatly improve the predictability of rule-based systems, and thus alleviate the problems of maintaining them. Second, using the pattern classifier to compute the specificity of methods, CLASP is able to generalize methods in object-oriented programming for describing a complex situation to which a method applies. Third, separating definitional knowledge from rules enhances the reusability of knowledge and the explanation capability of the system. Finally, the pattern classifier is also the enabling technology for our future development of a rule base organizer, which automatically determines groupings of a large set of rules based on the semantics of rules and rule classes.

Acknowledgements

I would like to thank Robert Neches for his encouragement and support of this research. I am also grateful to Robert MacGregor, Bill Swartout, and David Benjamin for their fruitful ideas regarding the pattern classification algorithm. Finally, the research on CLASP has benefited from many discussions with Paul Rosenbloom and John Granacki. Part of the work described in this paper was supported by the Engineering Excellence Fund at Texas A&M University.

³Using normal cases to analyze the complexity of intractable algorithms has been suggested by Bernhard Nebel [Nebel 1989].

References

[Brachman and Levesque, 1984] Ronald J. Brachman and Hector J. Levesque. The tractability of subsumption in frame-based description languages.
In Proceedings of AAAI-84, pages 34-37, Austin, Texas, August 1984.

[Brachman and Schmolze, 1985] R. J. Brachman and J. G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, pages 171-216, August 1985.

[Fikes and Kehler, 1985] R. Fikes and T. Kehler. The role of frame-based representation in reasoning. Communications of the ACM, 28(9), September 1985.

[MacGregor and Bates, 1987] Robert MacGregor and Raymond Bates. The Loom knowledge representation language. Technical Report ISI/RS-87-188, USC/Information Sciences Institute, 1987.

[Mark, 1981] William Mark. Representation and inference in the Consul system. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pages 375-381. IJCAI, Morgan Kaufmann, 1981.

[McDermott and Forgy, 1978] J. McDermott and C. Forgy. Production system conflict resolution strategies. In D. A. Waterman and F. Hayes-Roth, editors, Pattern-Directed Inference Systems. Academic Press, New York, 1978.

[Nebel, 1989] Bernhard Nebel. Terminological reasoning is inherently intractable. Technical Report IWBS Report 82, IWBS, IBM Deutschland, W. Germany, October 1989.

[Nguyen et al., 1985] T. A. Nguyen, W. A. Perkins, and T. J. Laffey. Checking an expert system knowledge base for consistency and completeness. In Proceedings of IJCAI-85, pages 375-378, Los Angeles, CA, August 1985.

[Patel-Schneider et al., 1990] Peter F. Patel-Schneider, Bernd Owsnicki-Klewe, Alfred Kobsa, Nicola Guarino, Robert MacGregor, William S. Mark, Deborah McGuinness, Bernhard Nebel, Albrecht Schmiedel, and John Yen. Report on the workshop on term subsumption languages in knowledge representation. To appear in AI Magazine, 1990.

[Schmolze and Lipkis, 1983] James Schmolze and Thomas Lipkis. Classification in the KL-ONE knowledge representation system. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence. IJCAI, 1983.

[Suwa et al., 1982] Motoi Suwa, A.
Carlisle Scott, and Edward H. Shortliffe. An approach to verifying completeness and consistency in a rule-based expert system. AI Magazine, 3(4):16-21, Fall 1982.

[Swartout and Neches, 1986] William Swartout and Robert Neches. The shifting terminological space: An impediment to evolvability. In AAAI-86, Proceedings of the National Conference on Artificial Intelligence, Philadelphia, PA, August 1986. AAAI.

[Wellman, 1988] Michael P. Wellman. Formulation of Tradeoffs in Planning Under Uncertainty. PhD thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1988. Also distributed as Computer Science Laboratory technical report MIT/LCS/TR-427.

[Yen et al., 1989] John Yen, Robert Neches, and Robert MacGregor. Using terminological models to enhance the rule-based paradigm. In Proceedings of the Second International Symposium on Artificial Intelligence, Monterrey, Mexico, October 25-27, 1989.

[Yen, 1990] John Yen. Reasoning about specificity of patterns in term subsumption-based systems. Technical Report TAMU 90-003, Department of Computer Science, Texas A&M University, February 1990.
Weak Representations of Interval Algebras

Gérard Ligozat
LIMSI, Université Paris-Sud, B.P. 133
91403 Orsay Cedex, France

Abstract

Ladkin and Maddux [LaMa87] showed how to interpret the calculus of time intervals defined by Allen [All83] in terms of representations of a particular relation algebra, and proved that this algebra has a unique countable representation up to isomorphism. In this paper, we consider the algebra A_n of n-intervals, which coincides with Allen's algebra for n = 2, and prove that A_n has a unique countable representation up to isomorphism for all n ≥ 1. We get this result, which implies that the first-order theory of A_n is decidable, by introducing the notion of a weak representation of an interval algebra, and by giving a full classification of the connected weak representations of A_n. We also show how the topological properties of the set of atoms of A_n can be represented by an n-dimensional polytope.

1. Introduction

In [All83] James Allen introduced a calculus of time intervals conceived as ordered pairs of real numbers. He considered all possible relations between two intervals defined in this way and described the axioms governing the composition of two such relations. He showed that these axioms are summed up in a transitivity table with 144 entries. In [LiBe88] Bestougeff and Ligozat introduce a geometrical object to describe the topological structure of the set of relations of Allen. By using the symmetry properties of this structure, they improve on a result of [Zhu87] and show that the axioms of Allen can be described by a transitivity table with only 43 entries.

Ladkin and Maddux [Lad87, LaMa87] observed that the definitions given by Allen can be expressed as defining a particular relation algebra, in the sense of Tarski [JoTu52]. They showed that there is (up to isomorphism) a unique countable representation of this algebra.
They also showed how to reformulate their results in terms of a first-order theory, which is complete, countably categorical, and decidable.

In this paper we concentrate on the algebraic point of view. We show how the results of Allen, Ladkin and Maddux fit into a more general setting, where the objects considered are n-intervals (for n ≥ 1). For each positive integer n, there is a corresponding relation algebra A_n; in the special case where n = 2, A_2 is the Allen algebra. A geometrical object H_{n,n}, which is an n-dimensional polytope, describes the topological structure of the set of relations between n-intervals. We then examine the general problem of describing the representations of A_n, for an arbitrary n. Slightly more generally, we first examine what we call weak representations of A_n, which are in fact the objects used in Artificial Intelligence. We show that basically the same result which is true for n = 2 holds for any n. More precisely, extending the results cited above, we define canonical functors between the class of weak representations of A_n and those of A_1. Applying the classification to the special case of representations yields the uniqueness of the countable representation of A_n. Because this paper is mainly concerned with representations of interval algebras, we do not give here a complete characterization of the polytope H_{n,n}. We show in another paper how the topological constructions can be applied to the non-convex intervals introduced by Ladkin and Maddux [Lad86].

2. Algebras and representations

2.1. Binary relations

A binary relation R on a set U is by definition a subset R of U × U. If R is a binary relation, the transpose R^t of R is defined by R^t = {(x,y) ∈ U × U | (y,x) ∈ R}. Particular binary relations on U are the empty relation ∅, the total relation U × U, and the identity relation Δ = {(x,x) | x ∈ U}.
The composition of two binary relations R1, R2, noted R1 ∘ R2, is defined by

R1 ∘ R2 = {(x,y) ∈ U × U | ∃z ∈ U, (x,z) ∈ R1 and (z,y) ∈ R2}.

2.2. Relation algebras

An algebra A = (A, +, 0, ·, 1, ;, 1', ˘), where +, ·, and ; are binary operations on A, ˘ is a unary operation on A, and 0, 1, and 1' are elements of A, is called a relation algebra if the following conditions are satisfied:
- (A, +, 0, ·, 1) is a Boolean algebra;
- (x ; y) ; z = x ; (y ; z) for any x, y, z ∈ A;
- 1' ; x = x = x ; 1' for every x ∈ A;
- the formulas (x ; y) · z = 0, (x˘ ; z) · y = 0, and (z ; y˘) · x = 0 are equivalent for any x, y, z ∈ A [JoTu52].

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The prototypical example of a relation algebra is the set P(U × U) of binary relations on a set U, with its usual Boolean structure, where ; is composition, 1' the identity relation, and ˘ is transposition.

2.3. Weak representations

A representation of a relation algebra A is a map Φ of A into a direct product of algebras of the form P(U × U), such that:
(a) Φ is one-to-one;
(b) Φ defines a homomorphism;
(c) Φ(0) = ∅;
(d) Φ(1') = Δ;
(e) Φ(α˘) = Φ(α)^t.

Condition (b), in particular, means that for any α, β in A, we have:

(I) Φ(α ; β) = Φ(α) ∘ Φ(β).

More generally, a weak representation is defined by dropping condition (a) and replacing condition (I) by the weaker condition:

(II) Φ(α ; β) ⊇ Φ(α) ∘ Φ(β).

If A is a simple algebra, we shall say that a weak representation of A into P(U × U) is connected if Φ(1) = U × U.

3. Interval algebras

3.1. n-intervals and (p,q)-positions

Let (U, <) be any totally ordered set. An n-interval is by definition an ordered n-tuple (x1, …, xn) of points of U, that is, such that x1 < x2 < … < xn. A 1-interval is just a point in U. A 2-interval is an interval in the sense of Allen.
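The definitions of this section are directly computable on finite examples. The sketch below (an illustration, not part of the paper's formalism) builds composition and transposition of binary relations and checks the axioms used later for the atom "<" of the point algebra A_1 on a small finite order.

```python
def compose(r1, r2):
    """Composition: {(x,y) | exists z with (x,z) in r1 and (z,y) in r2}."""
    return {(x, y) for (x, z1) in r1 for (z2, y) in r2 if z1 == z2}

def transpose(r):
    return {(y, x) for (x, y) in r}

U = range(4)
R = {(x, y) for x in U for y in U if x < y}   # the image of "<"
DELTA = {(x, x) for x in U}                   # the image of 1'

assert compose(R, R) <= R                     # R is transitive
assert transpose(transpose(R)) == R           # transposition is an involution
# R, its transpose and the diagonal partition U x U:
assert R | transpose(R) | DELTA == {(x, y) for x in U for y in U}
```

Note that on this finite U the representation conditions (v)-(vii) of Section 4.1 fail (R is neither dense nor unbounded), which is exactly the gap between a weak representation and a representation.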
The consideration of n-intervals is motivated by several reasons:
- they are the natural entities for describing processes with a finite number of consecutive phases;
- they can be used to represent unions of convex intervals, as defined by Ladkin [Lad86];
- they appear in a natural way in the representation of temporal data in natural language, cf. [BeLi85, BeLi89].

We are primarily interested in the relative positions of two generalized intervals a and b. We define them abstractly as (p,q)-positions:

Definition Let p, q be two positive integers. A (p,q)-position π is a map π: [1, …, p+q] → N⁺ (N⁺ is the set of strictly positive integers) subject to the two conditions:
(i) the image of π is an initial segment of N⁺;
(ii) the restrictions of π to [1, …, p] and [p+1, …, p+q] are strictly increasing (hence injective) maps.

We denote by Π_{p,q} the set of (p,q)-positions. A convenient way of representing a given (p,q)-position π is by its associated sequence (π(1), …, π(p+q)).

Examples
(1) Let p = q = 1. Then a and b are two points in T. If a < b, we get the map associating 1 to 1 and 2 to 2, which is represented by (1,2); if a = b, we get (1,1); if a > b, we get (2,1).
(2) Let p = q = 2. Then we get the 13 elements considered by Allen. One is equality, represented by (1,2,1,2). Six others are: < = (1,2,3,4) (a strictly precedes b); m = (1,2,2,3) (a meets b); o = (1,3,2,4) (a overlaps b); d = (2,3,1,4) (a during b); e = (2,3,1,3) (a ends b); s = (1,2,1,3) (a starts b). Finally, we get six more relations by exchanging the roles of a and b.
(3) The element 1'_{p,p} = (1, …, p, 1, …, p) is called the unit position in Π_{p,p}.

Remark. It can be convenient in some cases to identify the initial segment [1, …, p+q] with the sequence of variables (x1, …, xp, y1, …, yq); hence π can be considered as mapping the set {x1, …, xp, y1, …, yq} into N⁺. More generally, for any finite sequence (p, …, s) of integers, we can define the notion of a (p, …, s)-position in a similar way.
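The definition of (p,q)-positions translates directly into a (naive) enumeration, which recovers Allen's 13 relations for p = q = 2. This is an illustrative sketch, not an efficient algorithm.

```python
from itertools import product

def positions(p, q):
    """All (p,q)-positions as sequences (pi(1),...,pi(p+q)): maps onto an
    initial segment of N+ that are strictly increasing on the first p and
    on the last q arguments."""
    n = p + q
    result = []
    for k in range(1, n + 1):                      # size of the image {1,...,k}
        for seq in product(range(1, k + 1), repeat=n):
            onto = set(seq) == set(range(1, k + 1))
            incr = all(seq[i] < seq[i + 1] for i in range(p - 1)) and \
                   all(seq[i] < seq[i + 1] for i in range(p, n - 1))
            if onto and incr:
                result.append(seq)
    return result

print(len(positions(1, 1)))   # 3  : <, =, > (Example 1)
print(len(positions(2, 2)))   # 13 : Allen's relations (Example 2)
```

For instance, equality (1,2,1,2) and m = (1,2,2,3) both appear in positions(2, 2).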
For example:

Definition A (p,r,q)-position σ is a map σ: [1, …, p+r+q] → N⁺ subject to the conditions:
(i) the image of σ is an initial segment of N⁺;
(ii) consider the decomposition of [1, …, p+r+q] into three subsegments: an initial one of length p, a middle one of length r, and a terminal one of length q; then the restriction of σ to each of the subsegments is strictly increasing.

We denote by Π_{p,r,q} the set of (p,r,q)-positions. Clearly, we have canonical projections of Π_{p,r,q} onto Π_{p,r}, Π_{r,q}, and Π_{p,q}.

Associated inequations
Let π be a (p,q)-position, where Im(π) = [1, …, k]. We can associate to it a set E_π(x,y) of inequations in the following way:
(i) for each n such that π⁻¹(n) contains two elements x_i and y_j, E_π(x,y) contains the equation x_i = y_j;
(ii) for each n, 1 ≤ n ≤ k, let u_n be an element of π⁻¹(n); E_π(x,y) contains the inequations u_1 < u_2 < … < u_k.

Clearly E_π(x,y) is essentially uniquely defined: because of the equations in (i), different choices in (ii) do not really matter.

Operations on (p,q)-positions
The set of (p,q)-positions is naturally provided with a number of operations. We now examine the principal ones.

Transposition
If π is an element of Π_{p,q}, the transpose π^t of π is an element of Π_{q,p}, defined by:
π^t(i) = π(p+i) for 1 ≤ i ≤ q;
The symmetry v = h o t is an involution on Qq, which commutes to transposition. Proposition The following properties obtain, for any ~1 E Hp,r, ~2 E JJr,q and 7~3 E nq,s: i) (Icl o ~2 ) o x3 = 7cl o (7c2 oz3); ii) 7tlol’r,r=~l and l’p,p 07~1=~1; iii) 1’ p,p E Ic* 0 7Clf et 1 r,r E 7Cl’ 0 7Cl ; iv) n E (~1 o 7~2 ) implies n;l E (n: o nxt ) and 7c2 E (n*’ 0 7c ); v) (7c*olQ)’ =.rc+ 07q’ ; vi) (7r1 oIc2)v=7rlv 07c2". 3.2. Constructing interval algebras We can now use the preceding results to construct a family of relation algebras. Intuitively, As will be the algebra defining the calculus of n-intervals, for n in a fixed subset S of the integers. Let S be a non empty subset of N. We define IIs as the disjoint sum of all IIp, , where p and q belong to S. Let the product ZI ; ~2 o P two elements 1t1 E Qq and QE IIp~,q~ of IIs be defined as ~1 o ~2 if q = p’, as the empty set otherwise; let l’s be the set of l’p,p , where p belongs to S; let finally transposition on IIs be defined componentwise. Then we have: Theorem The system IIs = (I&, ; , Is, f, is a connected polygroupoid in the sense of Comer [Com83]; it is a polygroup if and only if S has a unique element. Applying to IIs the standard construction which associates to a polygroupoid its complex algebra, and using the results of [Com83], we get simple relation algebras: Theorem For any subset S of N, the complex algebra As of IIs is a complete, simple, atomic relation algebra, with 0 $1. Moreover, As is integral if and only S has a unique element. If S =(n), we write A, instead of A{,). In particular, Al is the point algebra with 3 atoms. A2 is Allen’s algebra. A ( 1,s) is a simple algebra with 26 atoms implicitly considered by Vilain in [ViZ82]. 3.3. Associated polytopes There is a canonical way of associating a labelled polytope HP,, to the set of (p&relations [BeLi89]. A complete description of the construction is given in [Lig9Ob]. Here we just consider the cases p=q=1,2,3. 
The 1-dimensional case
Here there are three possible relations between two points: <, >, and equality δ. H_{1,1} is the graph in Fig. 1.

Figure 1: H_{1,1}

It can be interpreted:
- in "physical" terms: suppose U is R (the reals); any point x in U defines three regions; two are 1-dimensional, corresponding respectively to y < x and y > x; they meet in a 0-dimensional one, corresponding to δ.
- in terms of permutations: the relative positions of two points x and y are of two kinds; the first kind comprises the general positions < and >, corresponding to two permutations of the list (x,y); the two permutations are joined by a permutation of the adjacent elements x and y, corresponding to collapsing x and y.

The 2-dimensional case
Here H_{2,2} is as represented in Fig. 2. It is a 2-dimensional polygon, with 6 vertices (0-faces), 6 arcs (1-faces), and one 2-face. Here again, it has two interpretations:
- A physical interpretation: suppose (x1,x2) and (y1,y2) are two intervals in R, with (x1,x2) entirely on the left of (y1,y2); then the position is <; moving x to the right, we first get x2 = y1; this is position m (meets); going further, we have y1 < x2 < y2; this is position o (overlaps); then, depending on whether x is shorter, longer, or of the same length as y, we either get position s (x starts y), or position e^t (x is ended by y), or δ (x = y); in the first two cases, we then get d (x during y) or d^t (x contains y), respectively; then e or s^t, respectively. Going still further, we then get o^t (x is overlapped by y), then m^t, then finally >.
- An interpretation in terms of permutations: each general position, where all four points x1, x2, y1, y2 are distinct, corresponds to a permutation of the list (x1,x2,y1,y2); associate a vertex to each general position, and join two vertices if the two corresponding permutations are related by exchanging two adjacent points. In this manner, one gets the graph underlying H_{2,2}.
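The permutation reading of H_{2,2} can be verified directly: the general positions are the interleavings of (x1,x2) and (y1,y2), and joining interleavings that differ by one adjacent exchange produces 6 vertices and 6 arcs, as the text states. A small sketch:

```python
from itertools import permutations

points = ('x1', 'x2', 'y1', 'y2')
# General positions: permutations keeping x1 before x2 and y1 before y2.
general = [p for p in permutations(points)
           if p.index('x1') < p.index('x2') and p.index('y1') < p.index('y2')]

def adjacent(p, q):
    """True if q is p with one pair of neighbouring elements exchanged."""
    diffs = [i for i in range(4) if p[i] != q[i]]
    return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and p[diffs[0]] == q[diffs[1]] and p[diffs[1]] == q[diffs[0]])

edges = {frozenset((p, q)) for p in general for q in general if adjacent(p, q)}
print(len(general), len(edges))   # 6 vertices and 6 arcs, as in Fig. 2
```

The 6 vertices correspond to the general relations <, o, d, d^t, o^t, >, and the 6 arcs to the degenerate ones m, s, e^t, e, s^t, m^t obtained by collapsing the exchanged pair.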
Figure 2: H_{2,2}

The 3-dimensional case
The polyhedron associated to the relations between 3-intervals is represented in Fig. 3.

Figure 3: H_{3,3}

The general case
For any pair (p,q) of integers, H_{p,q} is a polytope of dimension d(p,q) = inf(p,q), which is a connected union of k-cubes, for k ≤ d(p,q). In particular, H_{n,n} is n-dimensional. It contains one n-cube, corresponding to equality, and (n−1)-cubes (corresponding to collapsing n−1 points of x with n−1 points of y), etc. It has two canonical symmetries: one corresponds to transposition; the other one to "reversing the time axis". For example, in the case of H_{2,2}, this last symmetry corresponds to the vertical symmetry in Fig. 2.

From general results (cf. [BeLi89], [Lig90b]), the total number of relations (i.e., the total number of faces of H_{n,n}) is h(n,n), where more generally h(p,q) is defined by:

h(p,q) = Σ_m (p+q−m)! / (m! (p−m)! (q−m)!).

Remarks. By construction, A_n has a canonical symmetry associated with "reversing the time axis". Moreover, because of their interpretation as sets of faces in H_{n,n}, the elements of A_n can be considered as elements of Euclidean n-space; so they inherit a topology and a dimension. A consequence of the physical interpretation is that all entries in the transitivity table defining the operation of composition have to be connected elements. More can be shown about them: they are in fact intervals in a suitable distributive lattice. For A_2, the transitivity table shown in Fig. 4, together with the action of symmetry and transposition, characterizes composition [LiBe88].

Figure 4: Transitivity table

4. Weak representations of interval algebras

In the remaining part of the paper, we consider the connected weak representations of the integral algebras A_n.

4.1. The 1-dimensional case
The general setup is already apparent in the one-dimensional case. Consider a connected weak representation Φ of A_1:

Φ: A_1 → P(U × U).
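The face-count formula h(p,q) above is easy to evaluate; as a sanity check, it recovers the 3 atoms of the point algebra A_1, Allen's 13 relations for A_2, and (summing over all pairs from S = {1,2}) the 26 atoms of A_{1,2} mentioned in Section 3.2.

```python
from math import factorial

def h(p, q):
    """h(p,q) = sum over m of (p+q-m)! / (m! (p-m)! (q-m)!),
    the total number of faces of H_{p,q}."""
    return sum(factorial(p + q - m)
               // (factorial(m) * factorial(p - m) * factorial(q - m))
               for m in range(min(p, q) + 1))

print(h(1, 1))   # 3  : atoms of A_1
print(h(2, 2))   # 13 : Allen's relations, atoms of A_2
print(h(1, 2))   # 5  : point-interval relations
print(h(1, 1) + h(1, 2) + h(2, 1) + h(2, 2))   # 26 : atoms of A_{1,2}
```

The integer division is exact because each summand is a product of binomial coefficients times a factorial.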
Let R be Φ(<), Δ = Φ(δ) the diagonal in U × U. Then:
i) R, Rt and Δ are mutually disjoint, and their union is U × U;
ii) R ∘ R ⊆ R.
By (i) and (ii), R is a strict total order. If Φ is in fact a representation, we also have:
v) R ∘ R ⊇ R;
vi) R ∘ Rt ⊇ Δ ∪ R ∪ Rt;
vii) Rt ∘ R ⊇ Δ ∪ R ∪ Rt.
By (v), R is dense; by (vi) (resp. (vii)) it is unbounded on the right (resp. left). Conversely, given a strict total order relation R on U, we get a weak representation; if R is dense and unbounded, it is in fact a representation. A consequence of this fact and the countable categoricity of dense, unbounded total orders is that there exists a unique countable representation of A1 up to isomorphism.

Example

Let U = R, and R = {(u,v) | u < v}. Then we get a representation, which deserves to be called Allen's 1-dimensional representation. The graph H1,1 is dually associated to the set of three regions in the plane R² representing R, Rt and Δ.

4.2. The 2-dimensional case

A weak representation of A2 is defined by a set U, together with six binary relations R, M, O, D, E, S on it, satisfying the following conditions:
(i) the thirteen relations Δ, together with R, M, O, D, E, S and their transposes, are a partition of U × U, i.e. they are mutually disjoint and cover U × U;
(ii) the composition of two relations is given by the transitivity table in Fig. 4 together with the identities (4.1) and (4.2).
In [BeLi89] this data is called a connected system of intervals in the sense of Allen.

Example

Let U = {i1, i2}, O = {(i1,i2)}, M = D = E = S = R = ∅. This is a connected weak representation of A2, as is easy to verify. It corresponds to Fig. 5. It is not a representation.

Figure 5

4.3. The general case

Consider the general case of An, with n ≥ 1. Let Φ be a connected weak representation of An into P(U × U):

Φ: An → P(U × U).

For each element π of Πn,n, which can be considered as an atom of An, Φ(π) is a binary relation Rπ on U.
We have:
(i) (Rπ), for π ∈ Πn,n, is a partition of U × U;
(ii) for any π, π′ ∈ Πn,n, Rπ ∘ Rπ′ ⊆ Rπ;π′.
Recall the interpretation of the elements of Πn,n in terms of maps from the set {x1,...,xn,y1,...,yn} into N+. We consider the following elements in An, for 1 ≤ i, j ≤ n:
aij is the sum of all π such that π(xi) = π(yj);
bij is the sum of all π such that π(xi) < π(yj).

Proposition
(iii) aij ; aj,k = ai,k.
(iv) aij ; bj,k = bi,k.
(v) bij ; aj,k = bi,k.
(vi) bij ; bj,k = bi,k.
(vii) 1 ≤ aij + bij + btij.
(viii) if i < j, then 1'n,n ≤ bij.

5. Classifying weak representations

5.1. From weak representations of A1 to weak representations of An

Let (B,<) be a weak representation of A1, i.e. a strict total order. Let U be the set of n-intervals of B, which is non-empty if B has at least n elements; we define a weak representation Gn((B,<)) in the following way. For each atom π of An:

Rπ = {(x,y) ∈ U × U | x, y satisfy π(x,y)}.

Then associating Rπ to π defines a connected weak representation of An. It is easily shown that Gn in fact defines a functor from the category (in a suitable universe) of strict total orders to the category of connected weak representations of An. We now show how to define a functor Fn in the opposite direction.

5.2. From weak representations of An to weak representations of A1

The construction

Let Φ = (U,(Rπ)) be a connected weak representation of An. Consider the disjoint sum Ũ = U1 ⊕ ... ⊕ Un of n copies U1,...,Un of U (indexed by i = 1,...,n).
a) Define on Ũ the relation: u ≡ v, where u ∈ Ui, v ∈ Uj, iff (u,v) ∈ Φ(aij). Then, because of (i,ii,iii) of the proposition in 4.3 and the fact that Φ is a weak representation, ≡ is an equivalence relation on Ũ. Let B be the quotient set Ũ/≡.
b) Define on Ũ the relation: u < v, where u ∈ Ui, v ∈ Uj, iff (u,v) ∈ Φ(bij). Using the same proposition as before, we get by (iv) and (v) that < defines a binary relation on B; by (vi) and (vii), this relation (still noted <) is transitive, irreflexive, and total.
Hence (B,<) = Fn(U,(Rπ)) is a strict total order. Moreover, the canonical injection of U into each Ui defines a map pi of U into B. By (viii), the sequence p1(u),...,pn(u) is an n-interval. Hence we have a canonical map from U into the set of n-intervals on B. In fact, this map defines a morphism of weak representations:

ηn : (U,(Rπ)) → (Gn ∘ Fn)(U,(Rπ)).

In the opposite direction, it is easily seen that, starting with a total order (B,<) with at least n elements, applying Gn, then Fn, we get a canonical isomorphism of total orders:

εn : (B,<) → (Fn ∘ Gn)(B,<).

Theorem The situation (Fn, Gn, ηn, εn) is an adjunction between categories. The functor Fn is left-adjoint to Gn.

In particular, the canonical map ηn is a closure operation. We can define:

Definition A connected weak representation of An is closed if and only if the canonical map ηn is an isomorphism. The closure of (U,(Rπ)) is (Gn ∘ Fn)(U,(Rπ)).

Intuitively, a closed weak representation is one which contains all the n-intervals it implicitly defines. For example, the weak representation of Fig. 5 is not closed, since it implicitly defines four boundaries, hence six intervals. If n = 1, every weak representation is closed. In the general case, a connected weak representation is canonically embedded into its closure by ηn. Using general results about adjunctions, we get from the preceding theorem:

Corollary The pair of functors (Fn, Gn) defines an equivalence of categories between the category of closed connected weak representations of An and the category of strict total orders with at least n elements.

Hence, we can give a full classification of the connected weak representations of An; in summary:
- the fact for weak representations of having isomorphic closures defines equivalence classes;
- each class contains (up to isomorphism) a closed representative;
- closed representatives are characterized by their underlying point sets, which are strict total orders.
In the special case where n = 2, this classification was obtained in [Lig86].

5.3. Representations of An

Representations are special cases of weak representations. Moreover:

Proposition If Φ is a representation of An, then it is a closed connected weak representation of An.

This is proved as follows: consider n elements u1,...,un in U such that pσ(1)(u1),...,pσ(n)(un) is an n-interval in B, for some map σ of {1,...,n} into itself. Using the fact that Φ is a representation, we can find w1 in U such that p1(w1) = pσ(1)(u1) and p2(w1) = pσ(2)(u2); hence we can replace u1 and u2 by w1 and get the same n-interval; after (n-1) steps, we get w = w(n-1) in U such that p1(w),...,pn(w) is the n-interval we started with.

By the preceding results, Fn and Gn define an equivalence of categories between the representations of An and those of A1, that is, dense, unbounded linear orders. Hence, by Cantor's theorem:

Theorem There is a unique countable representation of An, up to isomorphism.

Since Gn((Q,<)) is such a representation, any other one is isomorphic to it. This implies that the first order theory associated to An is countably categorical. Since it is finitely axiomatisable, because An is finite, it is in fact decidable:

Corollary The first order theory of An is decidable.

This result was also obtained independently by Ladkin and McKenzie.

6. Summary

We have generalized the calculus of time intervals defined by Allen to a calculus of n-intervals. We have shown how this generalization can be expressed in terms of relation algebras An, whose atoms have a natural topological structure representable by a polytope Hn,n. We introduced the notion of a weak representation of an interval algebra; weak representations are the objects of interest in Artificial Intelligence, and we gave a full classification of the connected weak representations of An. We deduce from these results the fact that An has a unique countable representation, and that its first order theory is decidable.
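As a small illustration of the conditions in section 4.1 (our own sketch, not from the paper): a finite strict total order satisfies the weak-representation conditions (i)-(ii) but fails the density condition (v), so it is a weak representation of A1 that is not a representation.

```python
# A weak representation of A1 is exactly a strict total order; it is a
# full representation only if the order is dense and unbounded.
U = [0, 1, 2]
R = {(u, v) for u in U for v in U if u < v}
Rt = {(v, u) for (u, v) in R}
D = {(u, u) for u in U}  # the diagonal Delta

# (i) R, Rt and Delta are mutually disjoint and cover U x U
assert R | Rt | D == {(u, v) for u in U for v in U}
assert not (R & Rt) and not (R & D) and not (Rt & D)

# (ii) R o R is contained in R (transitivity)
comp = {(u, w) for (u, v1) in R for (v2, w) in R if v1 == v2}
assert comp <= R

# (v) fails: R o R does not contain R on a finite order, i.e. the
# order is not dense, so this is not a representation
assert not (comp >= R)
print("weak representation, but not dense")
```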
References

[All83] J. F. Allen, Maintaining Knowledge about Temporal Intervals, Communications of the ACM 26, 11 (1983), 832-843.
[BeLi85] H. Bestougeff and G. Ligozat, Parametrized abstract objects for linguistic information processing, in: Proceedings of the European Chapter of the Association for Computational Linguistics, Geneva (1985), 107-115.
[BeLi89] H. Bestougeff and G. Ligozat, Outils logiques pour le traitement du temps: de la linguistique à l'intelligence artificielle, Masson, Paris, 1989.
[Com83] S. D. Comer, A New Foundation for the Theory of Relations, Notre Dame Journal of Formal Logic 24, 2 (1983), 181-187.
[Lad86] P. B. Ladkin, Time Representation: A Taxonomy of Interval Relations, Proceedings of AAAI-86, 360-366.
[Lad87] P. B. Ladkin, The Completeness of a Natural System for Reasoning with Time Intervals, Proceedings of IJCAI-87, 462-467.
[LaMa87] P. B. Ladkin and R. D. Maddux, The Algebra of Convex Time Intervals, Kestrel Institute Technical Report KES.U.87.2.
[Lig86] G. Ligozat, Points et intervalles combinatoires, T.A. Informations 27, 1 (1986), 3-15.
[Lig90a] G. Ligozat, Intervalles généralisés I, Comptes Rendus de l'Académie des Sciences de Paris, Série A, Tome 310, 1990.
[Lig90b] G. Ligozat, Intervalles généralisés II, Comptes Rendus de l'Académie des Sciences de Paris, Série A, Tome 310, 1990.
[LiBe89] G. Ligozat and H. Bestougeff, On Relations between Intervals, Information Processing Letters 32 (1989), 177-182.
[JoTa52] B. Jónsson and A. Tarski, Boolean Algebras with Operators II, American Journal of Mathematics 74 (1952), 127-162.
[Vil82] M. B. Vilain, A System for Reasoning About Time, Proceedings of AAAI-82 (1982), 197-201.
[Zhu87] M. Zhu, N. K. Loh, and P. Siy, Towards the minimum set of primitive relations in temporal logic, Information Processing Letters 26 (1987/88), 121-126.
A qualitative model for space

Amitabha Mukerjee and Gene Joe
Department of Computer Science
Texas A&M University
College Station TX 77843-3112

Abstract

Most geometric models are quantitative, making it difficult to abstract the underlying spatial information needed for tasks such as planning, learning or vision. Furthermore, the precision used in a typical quantitative system often exceeds the actual accuracy of the data. In this work we describe a systematic representation that builds spatial maps based on local qualitative relations between objects. It derives relations that are more "functionally relevant" - i.e. those that involve accidental alignments, or can be described based on such alignments. In one dimension, interval logic [Allen 83] provides a mechanism for representing these types of relations; in this work we propose a formalism that enables us to perform alignment-based reasoning in two and higher dimensions with objects at angles. The principal advantages of this representation are that a) it is free of subjective bias, and b) it is complete in the qualitative sense of distinguishing all overlap/tangency/no-contact geometries. In addition, the model is capable of handling uncertainty in the initial system (e.g. "the fuse box is somewhere behind the compressor") by constructing bounded inferences from disjunctive input data. Two kinds of uncertainty can be handled - those arising from deliberate imprecision in the interest of compactness ("down the road from"), or those caused by an inadequacy of data (sensors, spatial descriptions, or maps).

Keywords: Spatial reasoning, qualitative vision, path planning, natural language.

1. INTRODUCTION

In this paper, we present a systematic method for extracting meaningful symbolic descriptions from geometric data. For example, for the map in figure 1, consider the query "How do I get from the Post Office to the Civic Auditorium?".
While conventional geometric modeling systems can represent this map with great precision, the problem of reasoning requires a measure of abstraction which is difficult to obtain from these large databases of geometric coordinates. In this paper we propose an abstraction for spatial relations based on the hypothesis that relations involving tangency are "more important" than others in the categorization of spatial relations, and we show how the representation can be used to successfully generate many of the predicates that have been used in modeling space.

Very often, one assumes that the problem of spatial abstraction can be solved somehow, and the input to the model is a list of carefully chosen symbolic descriptors. The difficulty of this problem has long been known [McCarthy 77, epistemological problem 4]: "A robot must be able to express knowledge about space, and the locations, shapes and layouts of objects in space. Present programs treat only very special cases. Usually locations are discrete - block A may be on block B but the formalisms do not allow anything to be said about where on block B it is, and what shape space is left on block B... A formalism capable of representing the geometric information that people get from seeing and handling objects has not, to my knowledge, been approached."

Figure 1. A map of Bryan, TX.

1.1 What is functionally important?

This paper is concerned mostly with spatial representation itself, rather than with any particular application. A new and powerful representation is developed that can be used to describe and infer geometric relations under conditions of complete or partial information. The principal thrust is to describe positions relative to other objects, as opposed to descriptions in terms of global coordinates.

MUKERJEE AND JOE 721
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Such descriptions
arise naturally in human spatial reasoning, and many cognitive models are based on some particular orientation of the viewer or in relation to other objects (e.g. "behind the tree") [Dennett 75].

Figure 2. Given two objects, their position and orientation with respect to each other can be described either in terms of a series of numbers (coordinates), or in terms of some less quantitative measure (e.g. "cross the Safeway store and veer right at the fork"). The two descriptions involve a tradeoff between precision/detail and flexibility/abstraction: quantitative models (e.g. [Ambler & Popplestone 76], [Winston 75], [Requicha 80]) versus abstract models (e.g. [Kuipers & Byun 88]).

One of the more useful aspects of this representation is the ability to model uncertainties in the description of the model - situations where spatial knowledge is not precise enough to quantify through coordinates. Two kinds of uncertainty can be handled - those arising from deliberate imprecision in the interest of compactness ("down the road from the laundry"), or those caused by an inadequacy of data.

In the past, spatial relations have been developed with specific domains in mind. A large body of work has dealt with the orthogonal domains in the blocks world (e.g. [Winston 75]). Geographic data systems need to be able to answer spatial queries [Peuquet & Ci-Xiang 87]. Natural language generators and interpreters need to be able to reason about spatial prepositions [Retz-Schmidt 88]. In robot path planning a number of qualitative systems have been developed, such as sequences of view-frames [Kuipers & Byun 88], and polyhedral approximations [Ernest 86]. In all these instances, the carefully selected domain-specific vocabulary is likely to result in brittle behavior when exposed to broader problems. The formal representation developed here can describe one, two, and three-dimensional models, involving orthogonal or angular relations.
In robot Another motivation behind a formal theory of spatial relations is to provide an objective set of spatial primitives, so that one can remove the influence associated with selecting In order to define a systematic approach towards this the predicates used in a spatial inference system (e.g., in learning arches, “touches” is more important than “overlaps”). abstract yet comprehensive representation, we begin by At the same time, a formal representation, by providing a known domain over which it is complete, unburdens the identifying the qualitatively different aspects of the relative designer of many of the problems involved in ensuring that his/her vocabulary is powerful enough to describe all the positions of two objects in space, i.e. the basic vocabulary of possible descriptions that can arise. This is particularly appropriate binary spatial relations. for acquiring new concepts, where the completeness of the vocabulary and capability of hierarchical abstraction become important. contact). Our approach differs from these prior attempts in that we obtain the set of spatial primitives based on a categorization of Figure 4 shows some examples to illustrate the all possible configurations that are qualitatively distinct; this is importance of this kind of reasoning in human thinking. the sense in which it is complete. The concepts that we focus on involve tangency - Are the given objects are aligned at some face, line, or point? If not, then where is one object with respect to the other? At the same time, there are spatially relevant concepts such as “near” and “far” which we cannot model, since these are both in the same qualitative category (no- 2. ONE4XMENSIONAL RELATIONS : LOGIC Let us first consider objects along a single dimension. This case has been investigated in depth in the study of time. 
We consider here only one of the formalisms proposed, the interval logic model ([Allen 83] and [Malik & Binford 83]), in which relations are typically defined only between locally related events.

The description predicates are chosen by a simple, comprehensive process. To start with, we realize that objects in one dimension can be either points or intervals. Also, we assume that the points are ordered along some direction, which may be due to a physical fact (e.g. time), due to some abstract notion (e.g. positive numbers), or some object feature (e.g. direction of motion). There are three possible cases for relations between two objects A and B:

i) Both A and B are points. In this case there can be only three relations - A can be behind B (-), it can be the same point as B (=), or it is ahead of B (+).

ii) A is a point and B is an interval. In this case, there are five qualitatively distinct cases: A is behind B (-), A is at the same point as the back of B (b), A is inside B (i), A is at the front of B (f), and A is ahead of B (+).

iii) Both A and B are intervals. This is the most interesting case, and has been dealt with in some detail in the study of temporal events [Allen 83]. In general thirteen relations are possible. These are discussed below.

Figure 4a: These arrangements of blocks are distinguished by the accidental alignments between face, edge, and vertices. Usually, such alignments indicate a functional aspect: either the alignment was artificially created, or it is the result of some physical process in accordance with physical laws. For example, along the z-axis, all the objects shown have a face tangency relation, indicating the physical reality of support. In addition to alignments, the other issue that is immediately apparent is that of size - in all three images, the blocks are graded in size.
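The count of thirteen two-interval relations can be verified mechanically by comparing each endpoint of one interval against the other interval; here is a minimal Python sketch of ours (the sample grid and the interval B are arbitrary choices for illustration):

```python
from itertools import combinations

def region(p, itv):
    # qualitative position of a point w.r.t. interval itv = (back, front):
    # '-' behind, 'b' at the back, 'i' inside, 'f' at the front, '+' ahead
    back, front = itv
    if p < back: return '-'
    if p == back: return 'b'
    if p < front: return 'i'
    if p == front: return 'f'
    return '+'

def rel(c, b):
    # relation of interval c w.r.t. b as a (tail, head) code, e.g. '++' = after
    return region(c[0], b) + region(c[1], b)

B = (2, 4)
grid = [k / 2 for k in range(13)]          # 0, 0.5, ..., 6
codes = {rel(C, B) for C in combinations(grid, 2)}
print(len(codes))                 # 13 qualitatively distinct positions
print(rel((3, 5), B), rel((2, 3), B))  # i+ bi
```

The constraint that the head must be ahead of the tail is what cuts the 5x5 endpoint combinations down to thirteen codes.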
Figure 4b: A number of psychological tests bear witness to the fact that the human cognitive process emphasizes accidental alignments. A well known example is that of the hexagonal cube, where the accidental alignment precludes us from seeing the three-dimensional shape.

In the two-interval case, if we consider an endpoint of an interval C then this point can be before, inside or after B. In addition, there are two more cases of interest - coincidence with either the front or the back boundary of B, which constitute the tangency cases. Altogether there are five regions of interest for this endpoint of C: +, f, i, b, - (ahead, front, interior, back, and posterior respectively). Each endpoint of C may be at one of these five qualitative positions, subject to the constraint that the front endpoint (or head) must be ahead of the rear endpoint (tail), leading to a total of thirteen (5+3+3+1+1) positions for C with respect to B. The relation between two intervals C and B can then be expressed, for example, as C (++) B, which would mean that C is after B. These relations are shown along the left hand margin of figure 5.

Figure 5. One-dimensional interval relations and the transitive inference table. The diagrams on the left show the relations between the intervals C and B as C moves leftwards from "after" B to "before" B. The graph next to it shows the progression of relations during the movement: the three branches in the continuum represent the cases where C is longer than, equal to, or shorter than B. The table on the right shows the relationship of an endpoint of A with respect to the interval C if the relations of both A and C are known with respect to another interval B. The symbol ?
denotes that the relation A/C can be any of the five possible relations, > implies that A/C can be one of {i,f,+}, and < implies that the relation is in {-,b,i}.

Given the local relations between "neighboring" intervals (A/B) and (B/C), the transitive relation (A/C) is often disjunctive. The inference relations shown in figure 5 are more compact than Allen's: 5x13 instead of 13x13, since they exploit the independence between the two ends of an interval. For example, if A was an interval and we knew that A was overlapped-by B {A/B = i+} and that C starts B {C/B = bi}, then we can conclude based on the transitivity table that with respect to C, the rear end of A is > or {i,f,+} and the front end of A is +. Therefore the relation A/C is either i+ (overlapped-by), f+ (met-by), or ++ (after). This establishes a constraint on the possible positions of A with respect to C.

This decomposition also depicts a continuum that exists between relations, shown in the elongated vertical graph in the figure. C is initially ahead of B (++), and as it moves towards B, it intersects B, and can have relations along one of three branches, e.g., if C is smaller than B, then only the relations {if, ii, bi} are possible. This notion allows us to compare and represent the relative size of objects, which is an important qualitative distinction. Thus if we define a flush-translation operator φ for moving A until it is flush with B, then by observing the relation between φA and B one can determine whether A is longer, equal or shorter than B. Another benefit of the continuum concept is that it lets us define hierarchical relations, which can be used, e.g., as a measure for inductive bias in learning (see section 6).

3. MULTI-DIMENSIONAL SPACES : ORTHOGONAL DOMAINS

One-dimensional interval logic can be easily extended to multi-dimensional cases where each object is either oriented with the axes, or is enclosed in a box which is so oriented (if the axes are orthogonal, then the box becomes a cuboid).
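The box-based extension just described can be sketched in a few lines: an axis-aligned box relation is simply a vector of one-dimensional endpoint codes, one per axis. This is our own illustration; the coordinates and helper names are made up.

```python
def region(p, itv):
    # five-region test for a point against an interval (back, front)
    back, front = itv
    if p < back: return '-'
    if p == back: return 'b'
    if p < front: return 'i'
    if p == front: return 'f'
    return '+'

def rel1d(x, y):
    # 1-D interval relation as a (tail, head) code
    return region(x[0], y) + region(x[1], y)

def reln(box_a, box_b):
    # qualitative relation between two axis-aligned boxes: one code per axis
    return tuple(rel1d(a, b) for a, b in zip(box_a, box_b))

# a box is a tuple of per-axis intervals: (x-interval, y-interval)
post_office = ((0, 2), (5, 7))
texas_ave   = ((3, 4), (0, 10))
print(reln(post_office, texas_ave))  # ('--', 'ii'): west of the avenue, within its y-extent
```

Transitive inference then proceeds axis by axis, which is why the disjunctions in figure 6 stay per-coordinate.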
Here we represent the relations along each of N axes as one element in an N-dimensional relation. For objects that are not rectangular, one can associate a "front" direction, which can be used to determine the enclosing rectangle. In this logic, the "atomic element" is the triple (X,Y,reln), where X and Y are objects with finite extent in each dimension and reln is an n-vector, each element of which is in the set of one-dimensional relations outlined above. The "disjunctive element" is the triple (X,Y,complex-reln), where complex-reln is an n-vector where each element is a disjunction from the same relation-set. The interpretation for this syntax is fairly straightforward. Figure 6 shows an example of a transitive inference.

Orthogonality arises quite "naturally" in human thought (e.g. left, right, front, back, east, north). One problem is that each object often has a different "natural" orthogonal system, so that no one representation can model all of them. Another significant problem is that for non-aligned objects, the rectangular enclosures often overlap when the actual objects are disjoint.

Figure 6. Given the two-dimensional qualitative relations between the post office (A) and Texas Avenue (B), and between Texas Avenue and the Police Station (C), we can transitively obtain the disjunctive relation (A,C). Given only its position with respect to Texas Avenue, one cannot determine the y-relation of the Police Station w.r.t. the post office. Thus without knowing which side of 25th St the police station is, one would not be able to decide which direction to turn into Texas Avenue. However, adding the 25th St information results in an output relation (A,C) which contains sufficient information to make this decision.

Nonetheless, the orthogonal representation system is actually quite powerful. By adding operators into the system, many qualitatively interesting questions can be answered. For example, let us consider a reflection operator ρ which reflects
For example, let us consider a reflection operator p which reflects MUKERJEE AND JOE 723 the object about a +45 degree line through its bottom left y comer. Now, by comparing pA with A, one can answer a query of the form “Is it a square?“. Earlier we introduced a flush- translation operator; by repeating translations equal to an object’s own dimensions, we can define a “integer-multiple” n vector operator A; e.g. 3x 0 A would imply a rectangle twice as large as A in x and thrice as large in y, and with the same bottom left corner. This can be used, along with the flush- translate operator described earlier to compare the relative sizes of two objects: “A is three times as large as B in x”. We can also build a simple model for shape. For example, we can compare A with p n 0 h A to establish if “the aspect ratio for A is greater than 2 or less than 1”. It is clear that even concepts such as near and far can be actually represented in terms of this extended system. Finally, the axes of reference for these objects need not be perpendicular at all, indeed, they need not be straight even. Thus, lines of latitude and longitude, sectors on a polar plot, patches defined by parametric functions on a family of curves, can all be represented by the orthogonal model, since in each case, there are two clearly independent axes along which the interval relations can be used. 4. OBJECTS AT ARBITRARY ANGLES When objects are not oriented orthogonally, relationships become more complex. One of the primary problems is that the relationships are no longer meaningful with respect to some absolute coordinate frame but must be expressed in terms of one object or the other. The road veers to the left; it does not “make an angle of 25.77 degrees to the 59th parallel of longitude”. We begin by constructing a comprehensive mapping of the relations between two objects at an angle, in the qualitative contact sense. 
Since angular relations of B w.r.t. A are dependent on A's direction, which is generally not related to B's direction, the operators in this formalism are non-commutative and do not have well-defined inverses, i.e., given the position (A/B), the (B/A) position cannot be determined.

Let us consider the relations with respect to a single object A, which has a designated front. This defines four angular quadrants with respect to A, and the "front" for the other object may be oriented in any of these quadrants. The spatial relation between two objects at an angle has two attributes: a. the relative direction (internal angle), and b. the relative positioning. In this representation, we have tried to determine a naturally arising set of attributes for capturing the qualitative description for both of these attributes.

4.1. Representing direction

When moving from the orthogonal domain into the angular domain, a mechanism is needed for the representation of the angular information. In our approach, we base all angular relations on a predefined direction, called the "front" (figure 7).

Figure 7. Qualitative directions. Based on a special "front" direction, we can define four principal directions (front, left, right, back) as well as four quadrants. The cat, for example, is behind the mouse and facing in the mouse's first quadrant.

Most objects have a special direction or a front. A car, chair, cat, person, house, etc. all have a "front" direction. Where an object is symmetrical and has no "special" direction, one can assign a front to it. This "front" direction is crucial to our modelling of orientation, since it provides a reference direction for all other objects and their "fronts." For example, the church faces the library signifies that their fronts are at 180°; the cat pounced on the mouse from behind and to the right implies that the front of the cat was in the mouse's first quadrant (figure 7(b)).
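One possible computation of the quadrant of one front direction relative to another is sketched below. This is our own sketch, under the assumption that "left" is 90° counterclockwise from "front"; the zone names and tolerance are ours, not the paper's.

```python
import math

def quadrant(front_a, front_b):
    # dir(B/A): which qualitative zone of A's frame B's front points into.
    # Zones alternate between the principal directions and the quadrants;
    # quadrant I is from front to left (assuming left = +90 degrees CCW).
    theta = math.atan2(front_b[1], front_b[0]) - math.atan2(front_a[1], front_a[0])
    theta %= 2 * math.pi
    names = ['front', 'I', 'left', 'II', 'back', 'III', 'right', 'IV']
    k = theta / (math.pi / 2)
    if abs(k - round(k)) < 1e-9:            # aligned with a principal direction
        return names[2 * (round(k) % 4)]
    return names[2 * int(k) + 1]            # strictly inside a quadrant

print(quadrant((1, 0), (1, 1)))   # I: 45 degrees to the left of A's front
print(quadrant((1, 0), (-1, 0)))  # back: the two fronts are at 180 degrees
```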
In two dimensions, the assignment of a single "front" direction enables us to immediately identify several qualitatively different zones. First of all, "front" defines the orthogonal directions "left," "right," and "back." In addition, it defines four possible quadrants (e.g. quadrant I is from front to left), thus defining eight qualitative angular relations between any two directions. In three dimensions, one would need to define a second "special" direction, one that could perhaps be called "up." This defines twenty-six qualitative regions based on an octant decomposition.

In figure 7(b) above, the cat is pointing in the mouse's quadrant I. This information can be written as dir(Cat/Mouse) = I, where the dir() relation is a representation of the direction information. Note that dir() is invertible, i.e. given dir(A/B), dir(B/A) is uniquely defined.

4.2. Representing size/position

Another consideration of spatial relations is the relative sizes and positions of objects. One method for modeling this may be to construct an enclosing box around the objects based on the "front" direction, and then extend the boundaries of this box to create 8 regions and 8 boundaries as in figure 8(a). The extensions of the lines in the forward direction are called the "lines of travel," as shown in figure 8(b).

Figure 8. The extension of the sides parallel to the front direction forms the two lines of travel.

The representation shown in (a) is somewhat more powerful than that in (b), but is significantly more complex, and does not add any extra information regarding the actual intersection of objects. Furthermore, representation (b) collapses into the orthogonal representation when the two directions are orthogonal, or the
This representation is much more compact yet preserves information with respect to the front direction of the object. As in the linear case, we consider each object as defined by the two endlines instead of the interval; eventually this leads to a smaller transitivity table. 4.3. Positional Relations: The Collision Parallelogram The support lines of two objects, when intersecting at an angle, will form a parallelogram, called the collision parallelogram (CP), which defines the area that is common to the lines of travel of both B and A. As an endline of A slides along its line of travel from ‘behind” the CP to “after” the CP, it passes through the “inside” of the CP and also two important qualitative points-“back” of the CP and the “front” of the CP (figure 9(a)). Altogether there are three pieces of information needed to completely describe the relationship of two objects (A and B): 1)the quadrant information of dir(A/B), 2)where object A is located with respect to object 8: pos(A/B), and 3)where object B is located with respect to object A: pos(B/A). Figure 9. The /%&km Relehn. The endline in (a) is located in the l -- region. The relations for two such endlines define the position relation for an object. In (b). the relation between A and B is defined as dir(AIB) I IV. pos(A/B) - -, and pos(WA) I -. Note that pos(B/A) is computed based entirely on the collision parallelogram, and is independent of pas (A/B); therefore it is necessary to maintain both pieces of information. Note that once the quadrant information of A/B is known then the quadrant information of B/A can be derived. To derive the relationship between two objects, each endline is considered one at a time. Finding the relationship between two objects is similar to one-dimensional intervals. The face of an object is labeled depending on which region it is located in with respect to the parallelogram formed by the travel lines of another object. 
Another aspect of this representation is the interrelation between the direction relation dir() and the position relation pos(). In particular, the front directions of the objects are key to the position relations obtained: changing A's front direction can affect pos(A/B) as well as the dir() relation. However, note that pos(B/A) is not affected by changing A's direction, since the CP remains the same. In the next section we exploit the interrelations between the dir() and pos() predicates to reduce the size of the transitivity tables.

5. TRANSITIVE RELATIONS

If the relations between (A,B) and (B,C) are known, what type of information can be inferred about the relationship between objects A and C? Figure 10 shows an example of such a transitive operation. Let us now investigate the nature of the transitivity table for arbitrarily angled objects, which is seen to be significantly larger and more complex than the one-dimensional transitivity table of figure 5.

Figure 10. Transitivity. An example of transitive inference given the relations between (B,A) and (B,C): we infer that A must be -- with respect to C. The position of C w.r.t. A is uncertain, since C's position w.r.t. B ("--") does not constrain it in any way w.r.t. the lines of travel of A. In many human contexts also, such decisions are difficult to make in the absence of local information between A and C.

The transitive information relates known information about the spatial relations (A,B) and (C,B) to make inferences about (A,C). Again, by using only the endline information, we can achieve a 5/13 savings. Figure 11 shows a sample entry in the tables. Each quadrant group (e.g. dir(B/A) = I, dir(B/C) = I) contains 13x13 such tables. In this formalism there is a 90° uncertainty in each angular relation, so when we perform a transitive operation, the uncertainty in the output is 180°, or two quadrants.
This is seen in the chart of figure 11, where two quadrant rows are shown for each of A/C and C/A.

Figure 11. An entry from the transitive table. This represents the case where pos(B/A) = ++, dir(B/A) = I, pos(B/C) = --, dir(B/C) = I. The A/B and A/C positions are indicated by their endline positions in the top row of the table.

To use this table, five parameters have to be known: 1) pos(B/A), 2) dir(B/A), 3) pos(B/C), 4) dir(B/C), and 5) the relationship of one endline of A or C with respect to B. Note that the resulting relations inferred from the transitive table may be disjunctive. For example, if the input tuples are

    B/A = <I, ++, i+>
    B/C = <I, --, bf>

what is inferred from the table is

    A/C: quadrant I: <-, -b, -i, -f, +> (5 relations); quadrant IV: <i+, f+, ++> (3 relations)
    C/A: quadrant I: <-, b, -i> (3 relations); quadrant IV: <++> (1 relation)

which is actually a disjunction of (5*1) + (3*3) = 14 possible relations between A and C (since when dir(A/C) = I, dir(C/A) = IV). The number of disjunctions can be reduced if additional constraints are placed on A and C, e.g. via a fourth object D.

5.1. Size of the transitive table

Each relation between two objects has 676 (13x13x4) possible results. Therefore, for two objects, there are 676 x 676 = 456,976 entries in the table.

MUKERJEE AND JOE 725

However, the number of entries in the table can be reduced by realizing that the quadrant information and the position information are interrelated; for example, if the direction of any object is reversed, the position changes in a predictable manner. Similarly, certain properties are preserved when we consider configurations that are related by reflection. Another point to note is that pos(A/C) is independent of pos(C/A) and vice versa. In particular, pos(A/B) affects only pos(A/C), while pos(C/B) affects only pos(C/A). This permits the transitive table to partition C/A and A/C into two spaces. Formally, the set of quadrants in the transitive tables forms two groups under the operations of reflection and direction inversion defined below.
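The disjunction count in the example above can be reproduced mechanically. The following is an illustrative Python sketch, not from the paper; the dictionary layout and function names are ours, with the quadrant pairing I/IV taken from the example:

```python
def count_disjunctions(ac, ca, pairing):
    """Count the distinct (A/C, C/A) outcomes of a transitive inference.

    ac, ca map a dir() quadrant to the set of possible pos() labels;
    pairing records which dir(C/A) quadrant each dir(A/C) quadrant
    forces (since dir() is invertible)."""
    return sum(len(ac[q]) * len(ca[pairing[q]]) for q in ac)

# The worked example from the text: 5 A/C options in quadrant I pair
# with 1 C/A option in IV, and 3 options in IV pair with 3 in I.
ac = {"I": {"-", "-b", "-i", "-f", "+"}, "IV": {"i+", "f+", "++"}}
ca = {"IV": {"++"}, "I": {"-", "b", "-i"}}
print(count_disjunctions(ac, ca, {"I": "IV", "IV": "I"}))  # 14
```

This makes explicit why the output is (5*1) + (3*3) = 14 rather than a free product of all labels: the quadrant of A/C determines the quadrant of C/A.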
The arrangement of the groupings illustrates the structure of the relationship that exists between them. For example, the group dir(B/A) = I and dir(B/C) = I, when reflected, results in dir(B/A) = III and dir(B/C) = III. Thus, by determining the effects of the reflection operator, it becomes sufficient to maintain only one of these two quadrant groups.

Figure 12. Quadrant Interrelations. Each node represents the transitivity tables for the quadrants of B/A and B/C. The arcs between the nodes represent the operations that map one quadrant group into another. (R = reflection; A, B, C = direction change on A, B, and C.)

As a result of these interrelations it is sufficient to maintain the tables for only two quadrant groups, say (I,I) and (I,II). First let us consider the operation of reflection. For example, if we consider the table in figure 13, then under reflection (looking from behind the page), the configuration is the same as that in figure 11. Clearly the transitive tables are not independent; in this instance the position relations are essentially the same (since reflection does not affect the pos() relations), but the rows are interchanged (reflection does affect the quadrants).

Figure 13. Reflected version of the table in Figure 11. The positional information remains unchanged, which is reflected in the fact that rows 1 and 3 of this table are the same as rows 2 and 4 of Figure 11. The quadrants are switched due to reflection.

Another operation that reduces table size is directional inversion. When an object reverses direction, the physical location of the object does not change. In a transitive relationship involving objects A, B, and C, any one of the objects can perform a directional change. The directional change of two objects (e.g. A and B) will result in the same quadrant group as that obtained by changing the direction of the third object alone (i.e. C). Thus, directional changes form a cycle of operations in the group.
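The two table-reducing operators can be sketched as simple maps on quadrants and pos() strings. This is an illustrative Python rendering, not from the paper; the exact quadrant permutation realized by reflection is our assumption (a left/right mirror), while the direction-inversion behavior follows Figure 14:

```python
# Reflection mirrors left and right (assumed convention).
REFLECT = {"I": "IV", "II": "III", "III": "II", "IV": "I"}
# Reversing an object's direction rotates its frame by 180 degrees.
INVERT = {"I": "III", "II": "IV", "III": "I", "IV": "II"}

def reverse_pos(pos):
    """Reversing an object's direction reverses its pos() labels:
    '-' and '+' swap, 'b' (back) and 'f' (front) swap, 'i' stays,
    and the two endline labels trade places."""
    swap = {"-": "+", "+": "-", "b": "f", "f": "b", "i": "i"}
    return "".join(swap[c] for c in reversed(pos))

# Changing B's direction maps quadrant group (I, I) to (III, III)
# and reverses B's positional relations, as in Figure 14.
print(INVERT["I"], INVERT["I"])   # III III
print(reverse_pos("--"))          # ++
```

Both maps are involutions (applying them twice is the identity), which is what makes the quadrant groups form a group under these operations and lets all sixteen groups be generated from the two that are stored.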
Figure 14. Direction change. When the direction of B is reversed, four relations are affected: dir(B/A), dir(B/C), pos(B/A), and pos(B/C). The pos() relations are reversed, e.g. '--' becomes '++'. The changes in dir() result in going from one quadrant group to another (e.g. from (I,I) to (III,III)).

Let us consider a transitive inference involving A, B, and C. When the direction of B is changed to its opposite direction, the physical locations of objects A, B, and C remain exactly the same with respect to each other. The only things that change are the quadrant relations of B/A and B/C, and B's positional relations with respect to A and C. Since there has been a quadrant change for B/A and B/C, the result is a movement from one quadrant group to another; in this case, from quadrant group (I,I) to (III,III). This type of quadrant group interrelation allows us to store only two quadrant tables, a saving to 2 of the 16 groups, or one-eighth. This representation has also been extended to three dimensions, where a nominal frame can be attached to each object based on the "front" and "up" directions mentioned earlier. Rectangular enclosures do not represent sufficient feature information, so a generalized cylinder model is used to represent shape, with the cross-section modeled by a qualitative version of the medial-axis transform. This model has been used to construct qualitative geometric models for visualization and recognition tasks [King and Mukerjee 90].

6. CONCLUSION

In this paper we have presented a spatial representation scheme that is useful for extracting symbolic representations from geometric models. It can be used to represent relations in any dimension for objects that are either aligned to the reference frame or are at arbitrary angles. The essence of this approach is that it preserves the information of contact, no-contact, and tangency.
Given two objects, one can determine if they are flush along some face or line, and this contains all the information required to identify accidental alignments and relative positions of objects. Such relations, which describe the properties at the boundaries of an object, are critical in the analysis of many systems, such as VLSI, circuits, mechanisms, and structures; indeed, almost anywhere geometrical relations are important. The representation has been applied to generating directions in city maps, acquiring data from graph diagrams in texts, path planning for an indoor mobile robot, and learning concepts related to spatial knowledge.

6.1. Explanation-Based Learning

Another capability inherent in the system is building commonsense theories for explanation-based learning. One can define "naive" notions such as the need for support against gravity, and that solids cannot physically intersect. The following predicate formulations in this logic can be used to ensure that every relationship in the model passes the support and no-intersection conditions:

    gravity rule: ∀A. ¬on-floor(A) → ∃B. pos_x(B/A) ∈ INT ∧ pos_y(B/A) ∈ INT ∧ pos_z(B/A) = bf
    no-intersection rule: ∀A, B. ¬(pos_x(A/B) ∈ INT ∧ pos_y(A/B) ∈ INT ∧ pos_z(A/B) ∈ INT)

where INT = (-i, bi, ii, -f, bf, if, -+, b+, i+) implies a degree of overlap. In essence, the gravity rule states that if an object A is not lying on the floor, then there must be some other object B such that B overlaps A in the x and y directions and supports it ("bf") in z. Similarly, the no-intersection rule prohibits two objects from overlapping in all three axes. Note that the INT relation (-i, bi, ii, -f, bf, if, -+, b+, i+) is a hierarchical abstraction for the nine relations contained in it and can be thought of as a generalized overlap. Many similar conceptual clusterings are possible, for example: A smaller-than B = (bi, ii, fi); touch-contact = (-b, +f); no-contact = (++, --); flush-overlap = (-f, b+, bi, if); etc. Furthermore, there are built-in continuities that can be used to formulate powerful "inductive biases" for learning.
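The no-intersection condition can be checked mechanically over the per-axis relations. The following is an illustrative Python sketch, not from the paper; the INT set is the nine-relation generalized overlap listed above (one entry of which is garbled in the source and reconstructed here as '-+'):

```python
# Generalized overlap: the nine endline relations that imply a
# shared extent on one axis ('-+' is our reading of a garbled entry).
INT = {"-i", "bi", "ii", "-f", "bf", "if", "-+", "b+", "i+"}

def violates_no_intersection(rel_x, rel_y, rel_z):
    """Two solids may not overlap on all three axes at once:
    a model relationship passes the no-intersection rule unless
    its x, y, and z relations all fall inside INT."""
    return rel_x in INT and rel_y in INT and rel_z in INT

# Overlap in all three axes: forbidden for a pair of solids.
print(violates_no_intersection("ii", "bi", "-f"))  # True
# '++' in y means the objects are disjoint on that axis: allowed.
print(violates_no_intersection("ii", "++", "bi"))  # False
```

Because INT is just a set of labels, the same test expresses the hierarchical abstraction the text describes: any conceptual cluster (touch-contact, no-contact, flush-overlap) is a membership test against a different label set.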
These continuities are shown in the continuum graph of figure 5, which shows the progression of states as objects move relative to one another. One could use inductive bias to conclude, for example for the well-known arch structure, that if A overlaps B and A contained-in B are both valid rules, then all in-between relations are also valid rules: A(-i, bi, ii)B are all valid constructs, while A(bf, -f, -+, b+)B, etc., are invalid rules (since A is on the smaller-than-B branch of the continuum). This model has been used for spatial learning using a robot/teach-pendant setup for recognizing the structure of geometric assemblies (with an overhead camera to obtain the part geometry information). Three-dimensional structures are created using the teach pendant, and these are identified by the user as positive or negative examples. The system then obtains the underlying concept behind the examples shown [Mukerjee and Bratton 90].

The discussion on transitivity, and the very nature of binary spatial relations, raises the spectre of combinatorial explosion. A little consideration reveals several ameliorating aspects. One of the motivating factors behind this model is that all relations should be locally relevant. If Galveston is near Houston and Richardson is a suburb of Dallas, then the relation between Richardson and Galveston does not need to be represented directly. For simple path planning problems, it can be shown that for non-contact relations only nearest neighbors need be modeled, and for objects such as roads only the intersection relations need to be stored. Thus the total number of relations is sharply reduced, proportional to the number of neighbors and intersections (m). Occasionally, one may store additional information, for example global as well as local orientation, to constrain the angle further. As the number of objects (N) increases, the number of nearest neighbors per object remains constant.
While the number of intersections may increase, it is usually small, and is bounded by the resolution of the domain. This means that the storage requirements can be bounded by O(mN), and if m is small, we obtain O(N) storage. At this point it is appropriate to note that there are a number of spatial relations of interest that cannot be represented well using this mechanism. One such instance is "near" or "large", which require some degree of quantitative information that this model does not provide, although one can extend the logic with operators that would permit one to model such attributes also. However, it must be realized that for these two predicates at least, there is considerable ambiguity involved in the semantics, and to model them may introduce a degree of arbitrariness that would defeat one of the principal objectives of this work. More precise definitions can be obtained by representing such concepts as predicates in terms of the translation and rotation operators mentioned above. In conclusion, we have discussed a simple yet powerful mechanism for representing the spatial relations between objects. This technique offers expressive power and logical transitivity, and is capable of dealing with imprecision in spatial knowledge.

[Allen 83] Allen, James F., Maintaining knowledge about temporal intervals, Communications of the ACM, vol. 26(11), November 1983, pp. 832-843.
[Ambler and Popplestone 75] Ambler, A.P., and R.J. Popplestone, Inferring the positions of bodies from specified spatial relations, Artificial Intelligence, vol. 6, pp. 129-156, 1975.
[Bennett 75] Bennett, David C., Spatial and temporal uses of English prepositions: an essay in stratificational semantics, Longman Group, London, 1975.
[Davis 86] Davis, Ernest, Representing and acquiring geographic knowledge, Morgan Kaufmann, Los Altos, CA, 1986, 223 pages.
[King and Mukerjee 90] King, Scott, and Mukerjee, Amitabha, Inexact visualization, IEEE Conference on Visualization in Biomedical Computing (also Texas A&M TR 90-05), May 22-25, 1990, Atlanta.
[Kuipers and Byun 88] Kuipers, Benjamin J., and Yung-Tai Byun, A robust qualitative method for robot spatial learning, AAAI-88, pp. 774-779.
[Malik and Binford 83] Malik, Jitendra, and Thomas O. Binford, Reasoning in time and space, IJCAI-83, pp. 343-345.
[McCarthy 77] McCarthy, John, Epistemological problems of artificial intelligence, Proceedings IJCAI-77, Cambridge, MA, 1977, pp. 1038-1044.
[Mukerjee and Bratton 90] Mukerjee, Amitabha, and Samuel E. Bratton, Qualitative spatial reasoning and the learning of structural descriptions, Texas A&M University Technical Report 90-010.
[Peuquet and Ci-Xiang 87] Peuquet, Donna J., and Zhan Ci-Xiang, An algorithm to determine the directional relationship between arbitrarily-shaped polygons in the plane, Pattern Recognition, vol. 20(1), pp. 65-74, 1987.
[Requicha 80] Requicha, A.A.G., Representations for rigid solids: theory, methods, and systems, ACM Computing Surveys, December 1980.
[Retz-Schmidt 88] Retz-Schmidt, Gudula, Various views on spatial prepositions, AI Magazine, Summer 1988, pp. 95-105.
[Winston 75] Winston, Patrick Henry, Learning structural descriptions from examples, in The Psychology of Computer Vision, ed. Patrick Henry Winston, McGraw-Hill, 1975, pp. 157-209.

MUKERJEE AND JOE 727
Reasoning about Qualitative Temporal Information

Peter van Beek
Department of Computer Science(1)
University of Waterloo
Waterloo, Ontario, Canada N2L 3G1
pgvanbeek@dragon.waterloo.edu

Abstract

Interval and point algebras have been proposed for representing qualitative temporal information about the relationships between pairs of intervals and pairs of points, respectively. In this paper, we address two related reasoning tasks that arise in these algebras: given (possibly indefinite) knowledge of the relationships between some intervals or points, (1) find one or more scenarios that are consistent with the information provided, and (2) find all the feasible relations between every pair of intervals or points. Solutions to these problems have applications in natural language processing, planning, and a knowledge representation language. We define computationally efficient procedures for solving these tasks for the point algebra and for a corresponding subset of the interval algebra. Our algorithms are marked improvements over the previously known algorithms. We also show how the results for the point algebra aid in the design of a backtracking algorithm for the full interval algebra that is useful in practice.

Introduction

Much temporal information is qualitative information such as "The Cuban Missile Crisis took place during Kennedy's presidency," where only the ordering of the end points of the two events is specified. Allen [1] has proposed an interval algebra and Vilain & Kautz [20] have proposed a point algebra for representing such qualitative information. In this paper, we address two fundamental reasoning tasks that arise in these algebras: given (possibly indefinite) knowledge of the relationships between some intervals or points,

1. find one or more scenarios that are consistent with the information provided.
2. find all the feasible relations between every pair of intervals or points.(2)
(1) Author's current address: Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2H1.
(2) The terminology is from [7]. Other names for task 1 include consistent singleton labeling [18] and a satisfying assignment of values to the variables [12]. Other names for task 2 include deductive closure [21], minimal labeling [18] and, as it arises as a general constraint satisfaction problem, minimal network [15].

Specific applications of solutions to these tasks include natural language processing (Allen [2]), planning (Allen & Koomen [4]), and a knowledge representation language (Koubarakis et al. [10]). As well, the techniques developed here could be part of a specialist in a general temporal reasoning system that would have other specialists for other kinds of temporal information, such as quantitative information about the distances between intervals or points (Dechter et al. [7], Dean [5]), or combinations of qualitative and quantitative information (Allen & Kautz [3], Ladkin [11]).

The main results of the paper are as follows. For the point algebra and for a corresponding subset of the interval algebra, we give computationally efficient procedures for solving both tasks 1 and 2. Our algorithms are marked improvements over the previously known algorithms. In particular, we develop an O(n^2) time algorithm for finding one consistent scenario that is an O(n) improvement over the previously known algorithm [12], where n is the number of intervals or points, and we develop an algorithm for finding all the feasible relations that is of far more practical use than the previously known algorithm [18]. For the full interval algebra, Vilain & Kautz [20, 21] show that both of these tasks are NP-complete. This strongly suggests that no polynomial time algorithm exists.
We show how the results for the point algebra aid in the design of a backtracking algorithm for finding one consistent scenario that, while exponential in the worst case, is shown to be useful in practice. A similar backtracking approach is given for finding all the feasible relations. The results here are less encouraging in practice, and we conclude that a better approach in this case is to, if possible, accept approximate solutions to the problem (Allen [1], van Beek & Cohen [18, 19]).

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Background, Definitions, and Example

In this section we review Allen's interval algebra and Vilain & Kautz's point algebra. We end with an example from the interval algebra of the two reasoning problems we want to solve.

Definition. Interval algebra, IA (Allen [1]). There are thirteen basic relations (including inverses) that can hold between two intervals:

    relation       symbol  inverse  meaning
    x before y     b       bi       x ends before y starts
    x meets y      m       mi       x ends exactly where y starts
    x overlaps y   o       oi       x starts first; they overlap
    x during y     d       di       x lies strictly inside y
    x starts y     s       si       same start; x ends first
    x finishes y   f       fi       same finish; x starts later
    x equal y      eq      eq       identical end points

We want to be able to represent indefinite information, so we allow the relationship between two intervals to be a disjunction of the basic relations. We use sets to list the disjunctions. Somewhat more formally, let I be the set of all basic relations, {eq, b, bi, m, mi, o, oi, d, di, s, si, f, fi}. IA is the algebraic structure with underlying set 2^I, the power set of I, unary operator inverse, and binary operators intersection and composition (denoted "constraints" in [1]; see that reference for the definition).

Definition. Point algebra, PA (Vilain & Kautz [20]). There are three basic relations that can hold between two points: <, =, and >. As in the interval algebra, we want to be able to represent indefinite information, so we allow the relationship between two points to be a disjunction of the basic relations.
PA is the algebraic structure with underlying set {∅, <, ≤, =, >, ≥, ≠, ?}, unary operator inverse, and binary operators intersection and composition (denoted addition and multiplication in [20], where the operators are defined over bit vector representations of the underlying set; see that reference for the definitions). Note that ≤, for example, is an abbreviation of {<, =}, ∅ is the inconsistent constraint, and ? means there is no constraint between two points, {<, =, >}. Vilain & Kautz show that a subset of the interval algebra can be translated into their point algebra. We denote as SIA the subset of the underlying set of the interval algebra that can be translated into relations between the endpoints of the intervals using the underlying set of PA (see [19] for an enumeration of SIA). We will use a graphical notation where the vertices represent intervals or points and the directed edges are labeled with elements from the appropriate algebra representing the disjunction of possible basic relations between the two intervals or points. A consistent scenario is a labeling of the graph where every label is a singleton set (a set consisting of a single basic relation) and it is possible to map the vertices to a time line and have the single relations between vertices hold. The set of feasible relations between two vertices consists of only those elements (basic relations) in the label capable of being part of a consistent scenario. Finding the feasible relations involves removing exactly those elements from the labels that could not be part of a consistent scenario. Here is an example from the interval algebra of our two reasoning tasks. Suppose interval A either overlaps or starts interval B, but we are not sure which, and interval B meets interval C. We represent this with a graph in which the edge from A to B is labeled {o, s}, the edge from B to C is labeled {m}, and the edge from A to C is labeled I, the set of all basic relations, showing that we have no direct knowledge of the relationship between A and C.
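The thirteen basic relations above can be recovered from concrete endpoints by direct comparison. The following is an illustrative Python sketch, not from the paper, assuming proper intervals (start strictly before end):

```python
def ia_relation(x, y):
    """Basic interval-algebra relation between intervals x = (xs, xe)
    and y = (ys, ye); inverse relations arise by swapping arguments."""
    (xs, xe), (ys, ye) = x, y
    if xe < ys:
        return "b"               # x before y
    if xe == ys:
        return "m"               # x meets y
    if ye < xs:
        return "bi"
    if ye == xs:
        return "mi"
    if (xs, xe) == (ys, ye):
        return "eq"
    if xs == ys:
        return "s" if xe < ye else "si"
    if xe == ye:
        return "f" if xs > ys else "fi"
    if ys < xs and xe < ye:
        return "d"               # x during y
    if xs < ys and ye < xe:
        return "di"
    return "o" if xs < ys else "oi"

print(ia_relation((0, 2), (1, 3)))  # o : x overlaps y
print(ia_relation((1, 2), (0, 3)))  # d : x during y
```

Exactly one branch fires for any pair of proper intervals, reflecting the fact that the thirteen basic relations are mutually exclusive and exhaustive.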
There are two possible answers to problem 1, find a consistent scenario; their mappings to a time line are shown below. In the mapping on the left, A overlaps B; in the one on the right, A starts B; and in both mappings B meets C and A is before C. It remains to answer problem 2: find all the feasible relations between every pair of intervals. The one change is that the set of feasible relations between A and C is just {b}, the "before" relation. We see that this is true in the mappings above; no other relation between A and C can be part of a consistent scenario.

The Point Algebra and a Subset of the Interval Algebra

In this section we examine the computational problems of finding consistent scenarios and finding the feasible relations for the point algebra, PA, and the corresponding subset of the interval algebra, SIA.

Finding Consistent Scenarios

Ladkin & Maddux [12] give an algorithm for finding one consistent scenario that takes O(n^3) time for PA networks with n points. If no consistent scenario exists, the algorithm reports the inconsistency. Their algorithm relies on first applying the path consistency algorithm [13, 15] before finding a consistent scenario. Here we give an algorithm for finding one consistent scenario that takes O(n^2) time for PA networks with n points. Our starting point is an observation by Ladkin & Maddux [12, p. 34] that topological sort alone will not work, as the labels may be any one of the eight different PA elements, {∅, <, ≤, =, >, ≥, ≠, ?}, and thus may have less information about the relationship between two points than is required. For top sort we need all edges labeled with <, >, or ? (see [9]). The "problem" labels are then {=, ∅, ≤, ≥, ≠}. The intuition behind the algorithm is that we somehow remove or rule out each of these possibilities and, once we have, we can then apply top sort to give a consistent scenario.

VAN BEEK 729

Input: A PA network represented as a matrix C where element Cij is the label on edge (i, j).
Output: A consistent scenario (a linear ordering of the points).

Step 1. Identify all the strongly connected components (SCCs) of the graph using only the edges labeled with <, ≤, and =. Condense the graph by collapsing each strongly connected component into a single vertex. Let {S1, S2, ..., Sm} be the SCCs we have found (the Si partition the vertices in the graph in that each vertex is in one and only one of the Si). We construct the condensed graph and its matrix representation, Ĉ, as follows. Each Si is a vertex in the graph. The label on the edge between each pair of vertices is given by

    Ĉij ← ∩ { Cvw : v ∈ Si, w ∈ Sj },   i, j = 1, ..., m.

If the empty label, ∅, results on any edge, then the network is inconsistent.

Step 2. Replace any remaining ≤ labels in Ĉ with < and perform a topological sort using only the edges in Ĉ labeled with <.

Fig. 1. Consistent scenario algorithm for PA networks.

Much of the discussion to follow relies on the assumption that looking at paths (the transitivity information) is sufficient for deciding the label on an edge. The only exception to the truth of the assumption is that looking at paths will sometimes assign a label of ≤ instead of < (see Fig. 3), but this will not affect the discussion.

Step 1: The = relation. To remove the = relation from the network, we identify all pairs of points that are forced to be equal and condense them into one vertex. By forced to be equal, we mean that in every consistent scenario the vertices are equal, so no other relation will result in a consistent scenario. More formally, we want to partition the vertices into equivalence classes Si, 1 ≤ i ≤ m, such that vertices v and w are in the same equivalence class if and only if they are forced to be equal. But the vertices v and w are forced to be equal precisely when there is a cycle of the form v ≤ ... ≤ w ≤ ... ≤ v, where one or more of the ≤ can be =.
This is the same as saying that v and w are in the same equivalence class if and only if there is a path from v to w and a path from w to v using only the edges labeled with ≤ or =. This is a well-known problem in graph theory: determining the equivalence classes is the same as identifying the strongly connected components of the graph, and efficient algorithms are known (Tarjan [16]). An example is shown in Fig. 2, where only ≤ and = edges are shown and self-loops are omitted (each vertex is equal to itself). There are four strongly connected components: S1 = {1, 7, 8}, S2 = {2, 3}, S3 = {4, 5}, and S4 = {6}.

Fig. 2. Strongly connected components.

Condensing the strongly connected components is described in Step 1 of the algorithm shown in Fig. 1.

Step 1: The ∅ relation. To rule out the ∅ relation we must determine whether the network is inconsistent. The network is inconsistent if a vertex is forced to be <, >, or ≠ to itself; that is, when there is a cycle of the form v ≤ ... ≤ w ≤ ... ≤ v with v ≠ w, or of the form v < ... ≤ w ≤ v, where all but the one < can be ≤ or =. It turns out that we can identify these cases simply by also looking at edges labeled with < when identifying the strongly connected components. The inconsistencies are then detected when the strongly connected components are condensed (Step 1 of Fig. 1). For example, suppose the label on the edge (1, 7) in the graph shown in Fig. 2 was < instead of the ≤ shown. Condensing the strongly connected component S1 gives

    Ĉ11 ← C17 ∩ C18 ∩ C71 ∩ C78 ∩ C81 ∩ C87 = {<} ∩ {>,=} ∩ {>} ∩ {<,=} ∩ {<,=} ∩ {>,=} = ∅

where again we have omitted the self-loops Cii.

Step 2: The ≤, ≥ relations. To remove the ≤ relation from the network, we simply change all ≤ labels to <. This is valid because, as a result of Step 1, we know that a consistent scenario exists and that no remaining edge is forced to have = as its label in all consistent scenarios.
So, for any particular edge labeled with ≤ there exists a consistent scenario with < as the singleton label. But changing a ≤ to a < can only force other labels to become <; it cannot force labels to become =. (Using the terminology of the algorithm, no new strongly connected components are introduced by this step; hence no new labels are forced to be equal and no new inconsistencies are introduced.) So, after all the changes, a consistent scenario will still exist.

Step 2: The ≠ relation. We can now perform topological sort to find one consistent scenario. It can be shown that, because of the previous steps of the algorithm, the ≠ relations will now be handled correctly (and implicitly) by top sort. The output of top sort is an assignment of numbers to the vertices (a mapping of the vertices to a time line) that is consistent with the information provided. As an example, consider the algorithm in Fig. 1 applied to the network in Fig. 3. Depending on the particular implementation of top sort, one possible result of the algorithm is the following assignment of numbers to vertices: s ← 0, v ← 1, w ← 2, and t ← 3.

Theorem 1. The algorithm in Fig. 1 correctly solves the consistent scenario problem for PA and SIA networks in O(n^2) time, where n is the number of points or intervals.

Note that for SIA networks we must first translate the network into a PA network, solve, then translate back. For the time bound, finding the strongly connected components is O(n^2) [16], condensing the graph looks at each edge only once, and topological sort is O(n^2) [9]. It is easy to see that the algorithm is asymptotically optimal, as we must at least examine every edge in the network, of which there may be as many as O(n^2). If we do not, we cannot be sure that the label on some unexamined edge is not involved in a contradiction by, for example, being part of a loop that causes a vertex to be less than itself.
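The two steps of Fig. 1 can be sketched in Python. This is an illustrative rendering, not the paper's implementation: the label spellings '<', '<=', '=', '!=' are ours, and an iterative Kosaraju-style pass stands in for the Tarjan algorithm the text cites; distinct components receive distinct ranks, which handles the ≠ labels implicitly, as Step 2 of the text argues:

```python
from collections import defaultdict

def consistent_scenario(n, edges):
    """One consistent scenario for a PA network (sketch of Fig. 1).
    `edges` maps (i, j) to a label in {'<', '<=', '=', '!='};
    missing pairs are unconstrained ('?').  Returns {point: rank}
    with equal points sharing a rank, or None if inconsistent."""
    # Step 1: SCCs over the <, <=, = edges (iterative Kosaraju).
    fwd, rev = defaultdict(list), defaultdict(list)
    for (i, j), lab in edges.items():
        if lab in ("<", "<=", "="):
            fwd[i].append(j); rev[j].append(i)
        if lab == "=":                       # equality is symmetric
            fwd[j].append(i); rev[i].append(j)
    order, seen = [], set()
    for s in range(n):                       # finish-time ordering
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(fwd[s]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v); stack.pop()
            elif w not in seen:
                seen.add(w); stack.append((w, iter(fwd[w])))
    comp = {}
    for s in reversed(order):                # sweep the reverse graph
        if s in comp:
            continue
        comp[s] = s
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rev[v]:
                if w not in comp:
                    comp[w] = s; stack.append(w)
    # Condensation: a '<' or '!=' edge inside one component forces a
    # vertex below or unequal to itself, i.e. the empty label.
    for (i, j), lab in edges.items():
        if comp[i] == comp[j] and lab in ("<", "!="):
            return None
    # Step 2: treat remaining '<=' as '<' and topologically sort.
    succ, indeg = defaultdict(set), defaultdict(int)
    for (i, j), lab in edges.items():
        ci, cj = comp[i], comp[j]
        if lab in ("<", "<=") and ci != cj and cj not in succ[ci]:
            succ[ci].add(cj); indeg[cj] += 1
    ready = [c for c in set(comp.values()) if indeg[c] == 0]
    rank, t = {}, 0
    while ready:
        c = ready.pop()
        rank[c] = t; t += 1
        for d in succ[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                ready.append(d)
    return {v: rank[comp[v]] for v in range(n)}
```

Because every condensed component gets its own rank, points related by ≠ or left unconstrained land on distinct time points, while points forced equal share one.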
Determining the Feasible Relationships

Ghallab & Mounir Alaoui [8] give an incremental procedure, based on a structure called a maximal indexed spanning tree, that is shown to work well in practice. The path consistency algorithm (PC) [13, 15] can be used to find approximations to the sets of all feasible relations [1]. Much previous work consists of efforts at identifying classes of relations for which PC gives exact answers. Montanari [15] shows that PC is exact for a restricted class of binary constraint relations; however, the relations of interest here do not all fall into this class. Valdés-Pérez [17] shows that PC is exact for the basic relations of IA. In [18, 21], we show that PC is exact for a subset of PA and a corresponding subset of SIA, where the new point algebra differs from PA only in that ≠ is excluded from the underlying set. But we also give examples there showing that, contrary to earlier claims, the path consistency algorithm is exact for neither PA nor SIA networks, and we develop an O(n^4) strong four-consistency algorithm that is exact, where n is the number of intervals or points. Here we give an algorithm for finding all feasible relations that, while still O(n^4) in the worst case for PA networks with n points, is of far more practical use than our previous algorithm (that algorithm is still of importance as an approximation algorithm for instances of the problem from the full interval algebra; see [18, 21] for the details). Our strategy for developing an algorithm for PA networks is to first identify why path consistency is sufficient if we exclude ≠ from the language and is not sufficient if we include ≠. Fig. 3 gives the smallest counter-example showing that the path consistency algorithm is not exact for PA. The graph is path consistent, but it is easy to see that not every basic relation in the label between s and t is feasible. In particular, asserting s = t forces v and w to also be equal to s and t.
But this is inconsistent with v ≠ w. Hence, the = relation is not feasible, as it is not capable of being part of a consistent scenario. The label between s and t should be <.

Fig. 3. "Forbidden" Subgraph.

This is one counter-example of four vertices. But are there other counter-examples for n > 4? The following theorem answers this question and is the basis of an algorithm for finding all feasible relations for PA networks.

Theorem 2 (van Beek & Cohen [19]). The network in Fig. 3 is the smallest counter-example to the exactness of path consistency for PA networks and, up to isomorphism, is the only counter-example of four vertices. Also, any larger counter-example must have a subgraph of four vertices isomorphic to the example.

We shall solve the feasible relations problem by first applying the path consistency algorithm and then systematically searching for "forbidden" subgraphs and appropriately changing the labels (see Fig. 4; the path consistency algorithm is slightly simplified because of properties of the algebras). The algorithm makes use of adjacency lists. For example, adj≤(v) is the list of all vertices, w, for which there is an edge from v to w that is labeled with '≤'.

Fig. 4. Feasible Relations Algorithm for PA Networks.

Input: A PA network represented as a matrix C, where element Cij is the label on edge (i, j).
Output: The set of feasible relations for Cij, i, j = 1, ..., n.

procedure FEASIBLE
begin
    PATH_CONSISTENCY
    FIND_SUBGRAPHS
end

procedure PATH_CONSISTENCY
begin
    Q ← ∪ RELATED_PATHS(i, j), 1 ≤ i < j ≤ n
    while (Q is not empty)
    begin
        select and delete a path (i, k, j) from Q
        t ← Cij ∩ Cik · Ckj
        if (t ≠ Cij)
        begin
            Cij ← t
            Cji ← INVERSE(t)
            Q ← Q ∪ RELATED_PATHS(i, j)
        end
    end
end

procedure RELATED_PATHS(i, j)
    return {(i, j, k), (k, i, j) | 1 ≤ k ≤ n, k ≠ i, k ≠ j}

procedure FIND_SUBGRAPHS
begin
    for each v such that adj≠(v) ≠ ∅
        for each s ∈ adj≥(v)
            for each t ∈ adj≤(v)
                if (adj≤(s) ∩ adj≠(v) ∩ adj≥(t) ≠ ∅)
                begin
                    Cst ← {<}
                    Cts ← {>}
                end
end
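The forbidden-subgraph search can be rendered concretely. The sketch below is an illustrative Python version of procedure FIND_SUBGRAPHS only (it assumes path consistency has already been run, as in Fig. 4); the matrix encoding of labels as sets of basic relations is my own choice.

```python
# Illustrative rendering of FIND_SUBGRAPHS: look for the "forbidden"
# subgraph of Fig. 3 (s <= v <= t and s <= w <= t with v != w) and
# tighten the label on (s, t) from '<=' to '<', since '=' between s and t
# cannot be part of any consistent scenario.
def find_subgraphs(C):
    """C[i][j] is a set of basic PA relations drawn from {'<', '=', '>'}:
    the label '<=' is the set {'<', '='} and '!=' is {'<', '>'}."""
    n = len(C)
    LE, NE = {'<', '='}, {'<', '>'}
    adj_le = [{j for j in range(n) if j != i and C[i][j] == LE} for i in range(n)]
    adj_ne = [{j for j in range(n) if j != i and C[i][j] == NE} for i in range(n)]
    adj_ge = [{j for j in range(n) if j != i and C[j][i] == LE} for i in range(n)]
    for v in range(n):
        if not adj_ne[v]:
            continue                     # cost proportional to '!=' edges
        for s in adj_ge[v]:              # s <= v
            for t in adj_le[v]:          # v <= t
                if s == t:
                    continue
                # some w with s <= w, w != v, and w <= t completes Fig. 3
                if adj_le[s] & adj_ne[v] & adj_ge[t]:
                    C[s][t] = {'<'}
                    C[t][s] = {'>'}
    return C
```

By the case analysis in the text, no further path consistency pass is needed after this tightening.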
Changing the label on some edge (s, t) from '≤' to '<' may further constrain other edges. The question immediately arises of whether we need to again apply the path consistency algorithm following our search for "forbidden" subgraphs to propagate the newly changed labels. Fortunately, the answer is no. Given a new label on an edge (s, t), if we were to apply the path consistency algorithm, the set of possible triangles that would be examined is given by {(s, t, k), (k, s, t) | 1 ≤ k ≤ n, k ≠ s, k ≠ t} (see procedure RELATED_PATHS in Fig. 4). Thus there are two cases. For both, we can show that any changes that the path consistency algorithm would make will already have been made by procedure FIND_SUBGRAPHS.

Case 1: (s, t, k). Changing the label on the edge (s, t) from '≤' to '<' would cause the path consistency algorithm to change the label on the edge (s, k) only in two cases:

    s < t, t < k, and s ≤ k
    s < t, t = k, and s ≤ k

In both, the label on (s, k) will become '<'. For (s, t) to change we must have the situation depicted in Fig. 3, for some v and w. But v ≤ t and w ≤ t together with t < k (or t = k) imply that v ≤ k and w ≤ k (we can assume the relations were propagated because we applied the path consistency algorithm before the procedure for finding "forbidden" subgraphs). Hence, (s, k) also belongs to a "forbidden" subgraph and the label on that edge will have been found and updated.

Case 2: (k, s, t). Similar argument as Case 1.

Theorem 3. The algorithm in Fig. 4 correctly solves the feasible relations problem for PA and SIA networks.

Note that for SIA networks we must first translate the network into a PA network, solve, then translate back. For a time bound, the path consistency procedure is O(n³) [14] and the find subgraphs procedure is easily shown to take O(n⁴) time in the worst case, where n is the number of points. This is the same as the previously known algorithm [18].
However, this comparison is misleading, as the algorithm in [18] always takes O(n⁴) time, no matter what the input is. A desirable feature of procedure FIND_SUBGRAPHS is that its cost is proportional to the number of edges labeled ≠. The worst cases for the algorithm are contrived and presumably would rarely occur. As experimental evidence, the algorithm was implemented in a straightforward way and tested on random problems up to size 100. It was found that about 90% of the time was spent in the path consistency algorithm and only about 2% in FIND_SUBGRAPHS. Hence, the O(n³) path consistency procedure dominates the computation.

The Full Interval Algebra

In this section we examine the computational problems of finding consistent scenarios and finding the feasible relations between intervals for the full interval algebra, IA. Vilain & Kautz [20, 21] show that both of these problems are NP-complete for the interval algebra. Thus the worst cases of the algorithms that we devise will be exponential, and the best we can hope for is that the algorithms are still useful in practice. We discuss to what extent this is achieved below.

Fig. 5. Example IA network on four intervals; labels are subsets of I = {eq, b, bi, m, mi, o, oi, d, di, s, si, f, fi}.

Finding Consistent Scenarios

Allen [1] proposes using simple backtracking search to find one consistent scenario of an IA network or report inconsistency. Valdés-Perez [17] gives a dependency-directed backtracking algorithm. Both search through the alternative singleton labelings. As well, there has been much work on improving the performance of backtracking that could be applied to this problem (see [6] and references therein).

Here we show how the results for the point algebra can be used to design a backtracking algorithm for finding one consistent scenario that is shown to be useful in practice. The key idea is that the O(n²) decision procedure for SIA networks (Step 1 of Fig.
1) can be used to decide whether a partial solution found so far is consistent (acceptable) and so might be part of a solution to the whole problem. The benefits go beyond a fast test for acceptability. Whereas Allen and Valdés-Perez search through alternative singleton labelings, we can now reduce the cardinality of the domains we are searching through by decomposing the labels into the largest possible elements of SIA. For example, if the label on an edge is {b, bi, m, o, oi, si}, there are six possible ways to label the edge with a singleton label: {b}, {bi}, {m}, {o}, {oi}, {si}, but only two possible ways to label the edge if we decompose the label into the largest possible elements of SIA: {b, m, o} and {bi, oi, si}. It is easy to see that this is guaranteed to be better since, for any choice of a singleton label, we can choose a label of larger (or equal) cardinality that is a superset of the singleton label. If the singleton label is consistent, so is the larger label. And, of course, there will be times when the larger label is consistent and the singleton label is not.

Recall that what we want to find is a labeling of the edges of the graph such that every label contains a single basic relation and it is possible to map the vertices to a time line and have the single relations between vertices hold. Finding a consistent scenario is now done in two stages: the output of the backtracking algorithm will be a consistent SIA network, and the scenario algorithm for SIA networks (Fig. 1) is then used to find a consistent scenario of this network. As an example, consider the network shown in Fig. 5. The backtrack search will look at the edges in the order (1,2), (1,3), (2,3), (1,4), (2,4), and (3,4). A record of both methods of search is shown in Fig. 6.

Fig. 6. Backtrack Search.
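The two-stage search lends itself to a compact skeleton. In the sketch below, `decompose` and `acceptable` are stand-in parameters of my own: in the paper's scheme, decompose would split a label such as {b, bi, m, o, oi, si} into the large SIA sub-labels [{b, m, o}, {bi, oi, si}], and acceptable would run the O(n²) decision procedure for SIA networks (Step 1 of Fig. 1) on the partial labeling.

```python
# Skeleton of the backtracking search described above: branch on a
# decomposition of each label into a few large SIA sub-labels rather than
# on its singleton basic relations, and prune with a fast acceptability
# test on the partial solution. Returns the first consistent SIA network
# found (as a mapping edge -> chosen sub-label), or None.
def backtrack(edges, labels, decompose, acceptable, chosen=None, i=0):
    chosen = {} if chosen is None else chosen
    if i == len(edges):
        return dict(chosen)            # a consistent SIA network found
    e = edges[i]
    for sub in decompose(labels[e]):
        chosen[e] = sub
        if acceptable(chosen):         # fast test on the partial solution
            result = backtrack(edges, labels, decompose, acceptable,
                               chosen, i + 1)
            if result is not None:
                return result
    chosen.pop(e, None)                # undo and report failure upward
    return None
```

For the feasible-relations variant described later, the same skeleton would collect all solutions instead of returning the first.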
Moving to the right and downward in the figure means a partial solution is being extended; moving to the left and downward means the search is backtracking. Note that, for example, when searching through alternative singleton labelings, much search is done before it is discovered that no consistent scenario exists with edge (1,2) labeled with {eq}, but when decomposing the labels into the largest possible elements of SIA and searching through the decompositions, no backtracking is necessary.

The algorithm was implemented and tested on random instances from a distribution designed to approximate planning applications (as estimated from a block-stacking example in [4]). In planning, as formulated by Allen and Koomen [4], actions are associated with the intervals they hold over and the full interval algebra is used. Finding one consistent scenario corresponds to finding an ordering of the actions that will accomplish a goal. Hence, the results here are directly applicable. For a problem size of n = 20, the average time to find a solution was about seven seconds of CPU time (25 tests performed). For n = 40, it was 74 seconds (average over 21 tests). This seems surprisingly fast. However, it should be noted that four of the tests for n = 40 were not included, as they were stopped before completion when a limit on the number of consistency checks was exceeded.

Determining the Feasible Relationships

A similar backtracking algorithm as in the previous section can be designed for finding all the feasible relations. Again, instead of searching through the alternative singleton labelings of the edges, we decompose the labels into the largest possible elements of SIA and search through the decompositions. In the previous section, when finding a consistent scenario, we stopped the backtracking algorithm after one consistent SIA network was found. To determine the feasible relations we must find all such consistent SIA networks.
For each such consistent SIA network we find the feasible relations using the algorithm of Fig. 4. The feasible relations for the IA network are then just the union of all such solutions. Initial experience, however, suggests this method is practical only for small instances of the problem, or for instances where only a few of the relations between intervals fall outside of the special subset SIA. We conclude that in most cases a better approach is to, if possible, accept approximate solutions to the problem (Allen [1], van Beek & Cohen [18, 19]).

Acknowledgements. Many thanks to my supervisor Robin Cohen and to Fahiem Bacchus, Charlie Colbourn, Fei Song, Bruce Spencer, and Paul van Arragon for help, advice, and encouragement, and to Peter Ladkin for fruitful discussions over the internet.

References

[1] Allen, J. F. 1983. Maintaining Knowledge about Temporal Intervals. Comm. ACM 26, 832-843.
[2] Allen, J. F. 1984. Towards a General Theory of Action and Time. Artificial Intelligence 23, 123-154.
[3] Allen, J. F., and H. Kautz. 1985. A Model of Naive Temporal Reasoning. In Formal Theories of the Commonsense World, J. Hobbs and R. Moore (eds.), Ablex, 251-268.
[4] Allen, J. F., and J. A. Koomen. 1983. Planning Using a Temporal World Model. Proc. of the 8th IJCAI, 741-747.
[5] Dean, T., and D. V. McDermott. 1987. Temporal Data Base Management. Artificial Intelligence 32, 1-55.
[6] Dechter, R., and I. Meiri. 1989. Experimental Evaluation of Preprocessing Techniques in Constraint Satisfaction Problems. Proc. of the 11th IJCAI, 271-277.
[7] Dechter, R., I. Meiri, and J. Pearl. 1989. Temporal Constraint Networks. Proc. of the 1st Int. Conf. on Principles of Knowledge Representation and Reasoning, 83-93.
[8] Ghallab, M., and A. Mounir Alaoui. 1989. Managing Efficiently Temporal Relations Through Indexed Spanning Trees. Proc. of the 11th IJCAI, 1297-1303.
[9] Knuth, D. E. 1973. Sorting and Searching. Addison-Wesley, 258-265.
[10] Koubarakis, M., J. Mylopoulos, M. Stanley, and A. Borgida. 1989. Telos: Features and Formalization. Technical Report KRR-TR-89-4, Dept. of Computer Science, University of Toronto.
[11] Ladkin, P. B. 1989. Metric Constraint Satisfaction with Intervals. Technical Report TR-89-038, International Computer Science Institute, Berkeley, Calif.
[12] Ladkin, P. B., and R. Maddux. 1988. The Algebra of Constraint Satisfaction Problems and Temporal Reasoning. Technical Report, Kestrel Institute, Palo Alto, Calif.
[13] Mackworth, A. K. 1977. Consistency in Networks of Relations. Artificial Intelligence 8, 99-118.
[14] Mackworth, A. K., and E. C. Freuder. 1985. The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problems. Artificial Intelligence 25, 65-74.
[15] Montanari, U. 1974. Networks of Constraints: Fundamental Properties and Applications to Picture Processing. Inform. Sci. 7, 95-132.
[16] Tarjan, R. 1972. Depth-First Search and Linear Graph Algorithms. SIAM J. Comput. 1, 146-160.
[17] Valdés-Perez, R. E. 1987. The Satisfiability of Temporal Constraint Networks. Proc. of the 6th National Conf. on AI, 256-260.
[18] van Beek, P. 1989. Approximation Algorithms for Temporal Reasoning. Proc. of the 11th IJCAI, 1291-1296.
[19] van Beek, P., and R. Cohen. 1990. Exact and Approximate Reasoning about Temporal Relations. Computational Intelligence. To appear.
[20] Vilain, M., and H. Kautz. 1986. Constraint Propagation Algorithms for Temporal Reasoning. Proc. of the 5th National Conf. on AI, 377-382.
[21] Vilain, M., H. Kautz, and P. van Beek. 1989. Constraint Propagation Algorithms for Temporal Reasoning: A Revised Report. In Readings in Qualitative Reasoning about Physical Systems, D. S. Weld and J. de Kleer (eds.), Morgan-Kaufmann, 373-381.
Solving Geometric Constraint Systems

Glenn A. Kramer
Schlumberger Laboratory for Computer Science, P.O. Box 200015, Austin, Texas 78720-0015*
School of Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, England
gak@slcs.slb.com

*Author's current address.

Abstract

Finding the configurations of a set of rigid bodies that satisfy a set of geometric constraints is a problem traditionally solved by reformulating the geometry and constraints as algebraic equations which are solved symbolically or numerically. But many such problems can be solved by reasoning symbolically about the geometric bodies themselves using a new technique called degrees of freedom analysis. In this approach, a sequence of actions is devised to satisfy each constraint incrementally, thus monotonically decreasing the system's remaining degrees of freedom. This sequence of actions is used metaphorically to solve, in a maximally decoupled form, the equations resulting from an algebraic representation of the problem. Degrees of freedom analysis has significant computational advantages over conventional algebraic approaches. The utility of the technique is demonstrated with a program that assembles and kinematically simulates mechanical linkages.

Introduction

Solving geometric constraint systems is an important problem with applications in many domains, for example: describing mechanical assemblies, constraint-based sketching and design, geometric modeling for CAD, and kinematic analysis of robots and other mechanisms. An important class of such problems involves finding the configurations (positions and orientations) of a set of rigid bodies that satisfy a set of geometric constraints. This paper first examines traditional means of solving such problems. Degrees of freedom analysis is then introduced as a novel and more intuitive solution technique with substantially better computational properties.
The power of this technique is demonstrated with a system that kinematically simulates mechanical linkages.

Mechanical design presents interesting challenges due to the intimate role that complex 3D geometry plays in design analysis and synthesis [Dixon, 1986]. While algebraic methods are dominant in mechanism analysis, purely geometric methods are also used because they 'maintain touch with physical reality to a much greater degree than do the algebraic methods' and 'serve as useful guides in directing the course of equations' [Hartenberg and Denavit, 1964].

This paper describes how to use geometric reasoning to guide the solution of the sets of complicated nonlinear equations that arise in mechanism simulation. A program called TLA embodies this methodology. It simulates a mechanism by first reasoning at the geometric level about how to assemble it. TLA then uses this assembly plan as a metaphor to solve the equations in a stylized, highly decoupled manner. Efficient solution is important because these equations are solved repeatedly in tasks such as simulation and optimization. The approach described in this paper greatly reduces the computational complexity of solving such systems, and is a strategy which is unique to TLA.

Kinematic simulation

Kinematic analysis answers questions about the motion of mechanisms, without regard to the forces which produce that motion [Hartenberg and Denavit, 1964]. Kinematic assembly of a mechanism requires determining the configuration of each body to satisfy all assembly constraints. These are either joint constraints, which describe how bodies may move relative to each other, or driving input constraints, which further restrict a joint by specifying a value for an angle or displacement.
Kinematic simulation involves repeatedly finding the configurations of the parts of a mechanism for particular values of the driving input constraints; this is effectively the same as repeatedly assembling the mechanism for different values of the driving inputs. As the values of the driving inputs are varied, the mechanism will trace its characteristic path. The motion is a function only of geometric relationships between the various joints. Thus, engineers use kinematic diagrams, which are stick-figure 'schematics' of mechanisms. They contain only geometry relating the joints, not the actual shapes or boundaries of the parts. They help designers understand a mechanism's kinematic behavior. This research is concerned with simulating mechanisms at the level of kinematic diagrams.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Mechanical constraints

Constraints describing joint behavior can be modeled as relationships between sets of points on different bodies. A marker consists of a point in 3D space, along with two orthogonal axes, z and x, which emanate from the point. The position of a marker is the position of its point, while its orientation is determined by its axes. Since all bodies are rigid, constraints between markers constrain the bodies to which they are attached. The constraints between pairs of markers m1 and m2 are:

coincident(m1, m2): markers m1 and m2 are spatially coincident.

in-line(m1, m2): m1 lies on the line through m2 parallel to m2's z axis.

in-plane(m1, m2): m1 lies in the plane through m2 normal to m2's z axis.

parallel-z(m1, m2): the z axes of markers m1 and m2 are parallel.

perpendicular-z(m1, m2): the z axes of m1 and m2 are perpendicular.

co-oriented(m1, m2, α): the z axes of m1 and m2 are parallel; and the angle from m1's x axis to m2's x axis is α.
screw(m1, m2, S): the z axes of m1 and m2 are parallel; and the angle from m1's x axis to m2's x axis is linearly related to the distance between m1 and m2 by a pitch constant S.

Combinations of these constraints, relating markers on different rigid bodies, may be used to model all of the 'lower pair' joints described by Reuleaux [Reuleaux, 1876]. For example, a revolute joint, which allows one rotational degree of freedom between two bodies, is modeled with a coincident constraint and a parallel-z constraint. A translational, or prismatic, joint is modeled with an in-line constraint and a co-oriented constraint. Some types of higher pairs may also be modeled with the above constraints, for example, the 'universal' joint and 'slotted pin' joint. The constraints defined above are sufficient to describe all mechanical linkages as well as many static mechanical assemblies; there is no restriction to 'fixed axis' mechanisms as is common in the literature [Faltings, 1989; Joskowicz, 1987].

Figure 1 illustrates the modeling of a crank-slider mechanism. The crank-slider consists of three parts. The ground, G, is fixed in space, and serves as the global reference frame. Markers g1 and g2 are therefore also grounded, or fixed in space. In the figure, marker z axes are shown in black; if not shown, they point out of the page. Relevant marker x axes are shown in grey.

Figure 1: Crank-slider: (a) parts; (b) assembled.

The geometric constraints are:

in-line(r1, g1)
coincident(g2, c2)
coincident(r2, c1)
parallel-z(g2, c2)
parallel-z(r2, c1)
co-oriented(g2, c2, α)

The in-line constraint models a pin (r1) in a slot (g1's z axis). The coincident, parallel-z pairs model revolute joints. The revolute joint g2, c2 has a driving input α, which fully constrains crank C's position and orientation relative to ground. Rotation of the crank is accomplished by changing the value of α.
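The marker vocabulary above can be made concrete in a few lines. This is a minimal Python sketch of my own, not TLA's actual data structures: a marker is a 3D point plus z and x axes, and each constraint is checked as a vanishing geometric residual (the predicate names and tolerance are illustrative choices).

```python
# Minimal marker model: a point in 3D space with orthogonal z and x axes.
# The predicates below test a few of the constraints from the text by
# checking that the corresponding geometric residual is (near) zero.
from dataclasses import dataclass

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

@dataclass
class Marker:
    point: tuple  # (x, y, z) position
    z: tuple      # unit z axis
    x: tuple      # unit x axis, orthogonal to z

def coincident(m1, m2, tol=1e-9):
    d = sub(m1.point, m2.point)
    return dot(d, d) < tol            # points are spatially coincident

def parallel_z(m1, m2, tol=1e-9):
    c = cross(m1.z, m2.z)
    return dot(c, c) < tol            # z axes are parallel

def in_line(m1, m2, tol=1e-9):
    # m1 lies on the line through m2 parallel to m2's z axis
    c = cross(sub(m1.point, m2.point), m2.z)
    return dot(c, c) < tol
```

A revolute joint between two bodies would then be asserted as coincident plus parallel-z between a marker on each body, as in the crank-slider example.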
As the crank C rotates, marker r1 of the connecting rod R slides along the z axis of grounded marker g1.

Equational solution

Constraint systems like the crank-slider described above are usually solved by modeling the geometry and constraints with algebraic equations. A local coordinate frame is assigned to each body. Then the configuration variables of the different bodies, the six quantities that uniquely specify a local coordinate frame [Snyder, 1985], are related by equations that model the problem constraints. Solving these equations yields the desired configuration for each body.¹ A simple example, involving a single rigid body, illustrates the solution of a small set of such equations: the brick of Figure 2 must be configured to satisfy the three coincident constraints graphically depicted as the grey lines between the brick's markers b1, b2, b3 and the desired locations, denoted by markers g1, g2, g3 fixed in the global coordinate frame. Equations are developed to relate the configuration variables of the brick's coordinate frame to those of the global coordinate frame. The equations may then be solved either numerically or symbolically.

Figure 2: A brick with three coincident constraints.

¹Solving these types of equations for robotics applications is usually not too difficult because most robot manipulators are open-loop mechanisms. Mechanical linkages, however, involve closed loops. This leads to a much greater degree of equation coupling. Hence, solving these equations must be done simultaneously and is substantially more difficult.

Numerical solution

Numerical solutions represent constraints using error terms, which have zero value when the constraint is satisfied, and otherwise have some value proportional to the degree to which the constraint is violated. The objective function is the sum of all error terms.
Numerical techniques try to find a zero of the objective function by 'sliding' down the function's gradient. This process is necessarily iterative for nonlinear problems, which include any problem involving rotation. Figure 3 shows, in grey, some of the intermediate configurations reached using Newton-Raphson iteration (one of the most efficient methods [Press et al., 1986]) to move the brick from its initial configuration to one satisfying the constraints. Numerical techniques have many drawbacks. Each iteration of Newton-Raphson is slow, taking O(n³) time, where n is the number of constraints. Overconstrained situations, which are quite common, require pre- and post-analysis to remove redundant constraints before solving and to check them later for consistency.

Figure 3: Brick solution using Newton-Raphson.

Symbolic solution

Symbolic solutions use algebraic re-write rules or other techniques to isolate the configuration variables in the equations in a predominantly serial fashion. Once a solution is found, it may be re-used (executed) on any topologically equivalent problem. Execution is fast, approximately linear in the number of constraints. If numerical stability is properly addressed, the solution can be more accurate by virtue of being analytic; there is no convergence tolerance as found in numerical techniques. The principal disadvantage of symbolic techniques is the excessive, potentially exponential, time required to find a solution or determine one does not exist. Poorly-chosen configuration variable assignments can exacerbate the problem by coupling the equations in unnecessarily complicated ways, requiring more clever and complex inferences. Thus, the symbolic techniques are feasible and complete only for small problems.

Figure 4: Brick solution using geometric approach.

Many shortcomings of the above methods can be traced to problems inherent in the configuration variable representation and the complexity of the resulting equations. This suggests a different approach to the solution of geometric constraint problems: avoid equational reformulation entirely, reasoning instead directly about the geometric entities. A program called TLA has been developed to do this.

Geometric solution

TLA solves the brick problem using geometric knowledge to satisfy the constraints incrementally. The solution is shown in Figure 4. Assume that initially the brick is free to move anywhere; it just happens to be in the given initial configuration C0. To satisfy coincident(b1, g1), TLA translates the brick by the vector from b1 to g1, leaving the brick in configuration C1. To ensure coincident(b1, g1) remains satisfied, all further actions that move the brick must be rotations about g1, i.e., the brick has only its rotational degrees of freedom left. To satisfy coincident(b3, g3), TLA measures the vector v1 from
The sequence of actions performed above constitute a plan for moving the brick from an arbitrary position to one satisfying the constraints. For this part of the problem solution, TLA reasons only about geometry, actions and degrees of freedom. No equations are developed, and no model requir- ing configuration variables or other abstract state is needed. Constraints are satisfied by measuring the brick’s geometric properties (often using additional geometric constructions) and then moving it. This method is called degrees of freedom analysis. The brick-moving plan derived using this method is next used to solve for the brick’s configuration variables as represented in a computer; this may be done regard- less of how the local coordinate frame of the brick is described. All that is required is a set of operators for translating and rotating rigid bodies, and a set of functions that can measure, relative to a global coordi- nate system, points and vectors attached to any rigid body. These capabilities are provided by homogeneous coordinate transforms [Snyder, 19851, which most 3D graphics and robotics systems use. The plan, when executed, becomes a metaphor for solving the equational representation of the constraint system. By using the primitive actions of translation and rotation, which are implemented as matrix mul- tiplications, the plan effectively decouples the equa- tions into small independent sets that can be solved analytically. 2 As new constraints are satisfied, previ- ously satisfied constraints (which may correspond to complicated relations between configuration variables) become invariants for later steps in the solution. Ge- ometry, as used in the metaphorical plan, provides the vocabulary and operators that allow preserving these invariants. The use of the assembly plan as a metaphor to guide equation solution distinguishes TLA from other equations. 
programs that solve large sets of nonlinear Since the plan does not depend on metric properties of the problem, it can be executed on any topologi- tally equivalent problem. 3 The time required for plan generation is thus amortized over repeated executions. Degrees of freedom analysis TLA keeps track of the number and types of degrees of freedom each body (or link) has as it solves a prob- lem. It represents this information with predicates of the form link-has-n-TDOF(linL, arg1, arg2, . . . ) and link-has-n-RDOF(ZinE, arg1, arg2, . . . ), for n E (0, 1,2,3). TDOF stands for translational degrees of freedom, and RDOF for rotational degrees of free- dom. The arguments urgl, urg2, . . . specify any fixed points or axes on the links that restrict their freedom. Initially, every link in the system except the grounded body has 3 TDOF and 3 RDOF. As actions are taken to satisfy constraints, the links in the system lose some of their degrees of freedom. When all bodies have 0 TDOF and 0 RDOF, the problem is solved. At each step in solving for a body’s configuration, TLA must know what action to take given the body’s current constraints, and how that action further re- duces the body’s degrees of freedom. This informa- tion is stored in a plan fragment table. Conceptually, the plan fragment table is a three-dimensional dispatch table, indexed by TDOF, RDOF, and constraint type. Each entry in the table specifies how to move the rigid body to satisfy the new constraint using only available degrees of freedom, and what degrees of freedom the body will have after the action is performed. The plan fragment table contains information about how to sat- isfy constraints when one of the markers participating in the constraint has its appropriate attributes fixed, or globally known. Thus, a globally known position of one marker is required for solving a coincident con- straint, and a globally known z axis is needed to solve a perpendicular-z constraint. 
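The dispatch idea can be sketched as a dictionary keyed by (TDOF, RDOF, constraint type). This is a toy rendering of my own, with only the (3, 3, coincident) entry filled in, following the entry described next in the text (translate by the vector from M2's global position to M1's; afterwards only rotations remain); the real table has 112 entries and far richer state.

```python
# Toy plan-fragment dispatch table: each entry maps a (TDOF, RDOF,
# constraint type) index to an action that moves the body and returns the
# degrees of freedom remaining afterward.
def translate_by(body, delta):
    x, y, z = body['position']
    body['position'] = (x + delta[0], y + delta[1], z + delta[2])

def coincident_3_3(body, m1_pos, m2_pos):
    # Translate body by the vector from M2's global position (on the body)
    # to M1's known global position.
    translate_by(body, tuple(a - b for a, b in zip(m1_pos, m2_pos)))
    return (0, 3)   # new (TDOF, RDOF): only rotations about M1 remain

PLAN_FRAGMENTS = {
    (3, 3, 'coincident'): coincident_3_3,
    # ... remaining entries would cover other (TDOF, RDOF, type) indices
}

def satisfy(body, constraint, m1_pos, m2_pos):
    action = PLAN_FRAGMENTS[(body['tdof'], body['rdof'], constraint)]
    body['tdof'], body['rdof'] = action(body, m1_pos, m2_pos)
    return body
```

Each satisfied constraint thus both moves the body and shrinks the index used for the next lookup, which is what makes the search through the table terminate at (0, 0).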
In the brick example, the first constraint to be satisfied is arbitrarily chosen to be coincident(b1, g1). The global position of g1 is known. Initially the brick has 3 TDOF and 3 RDOF; thus the index into the plan fragment table is (3, 3, coincident). This entry contains the following information (modified for readability):

Initial state:
    link-has-3-TDOF(link)
    link-has-3-RDOF(link)

²Not all problems may be solved analytically; some require iterative solutions. In such cases TLA fails in the plan construction phase. It is possible, however, to use the information from the failure to reduce significantly the dimensionality of the iterative problem that must be solved. See [Kramer, in preparation] for details.

³Actually, this is not quite true. Mathematical degeneracies may cause the plan to fail. For example, the brick plan fails to remove the final rotational degree of freedom if the three markers are collinear. TLA can test for such degeneracies, and try to generate a new plan taking them into account, if possible.
The plan fragment specifies how to move the body to satisfy the constraint (the function name gmp stands for "global marker position"). In the specification of the new state, the predicate link-has-0-TDOF has an additional argument which specifies the point on the body which is constrained to be stationary. The textual explanation, with variable names replaced by their bindings, helps the user to understand the solution process.

The next constraint satisfied in the brick example is coincident(b3, g3). Since the brick now has 0 TDOF and 3 RDOF, the index into the plan fragment table is (0, 3, coincident). The plan fragment in that entry specifies how to rotate a body with 0 TDOF, 3 RDOF to satisfy a coincident constraint, and specifies that the new state of the body is 0 TDOF, 1 RDOF. The process continues until all constraints are satisfied.

For the constraints defined in this paper, there are 112 valid entries in the plan fragment table; some plan fragments are quite simple, like the one described above, while others involve more complex calculations and conditionals to handle potential mathematical degeneracies. The complete plan fragment table appears in [Kramer, in preparation].

Interacting bodies

Bodies rarely interact exclusively with fixed points, as in the brick example. Often, they interact with other partially constrained bodies. In Figure 5, body A is constrained to 0 TDOF, 1 RDOF by the constraints coincident(a1, g1) and parallel-z(a1, g1).

Figure 5: Solving for two interacting bodies (z axes point out of the page).

TLA infers that marker a2 must lie on a circle about a1. Body B is similarly constrained. To satisfy coincident(a2, b2), TLA intersects the circles to find the two globally acceptable locations for the markers. TLA distinguishes the locations with a branch variable q.
A user of TLA chooses which solution to use by specifying the value of q. TLA places a 'pseudo-marker' p at this location; this is a marker which is not part of the original problem specification, but is introduced during the problem solution.

With the intersection point defined, TLA satisfies the coincident constraint for bodies A and B independently. It does this by introducing the constraints coincident(p, a2) and coincident(p, b2). Since p's position is globally known, the plan fragment table may be used to find the appropriate actions to satisfy the two introduced constraints. When they are satisfied, coincident(a2, b2) is also satisfied.

In this manner, local information, in the form of loci of points on partially constrained bodies, may be combined through locus intersection to yield information about globally permissible locations of points. Pseudo-markers denote these intersections, and auxiliary constraints are introduced to relate the partially constrained markers to the pseudo-marker. Then the plan fragment table is used to find the appropriate actions to satisfy the constraints.

TLA uses a locus table to specify the loci to which partially constrained markers are confined. Loci are determined completely by the degrees of freedom that a body has. For example, all markers on a body with 0 TDOF, 1 RDOF are constrained to lie on circles around the body's fixed point. Markers on a body with 0 TDOF, 3 RDOF must lie on spheres, and markers on a body with 2 TDOF, 0 RDOF must lie in planes.

A locus intersection table allows TLA to know when enough information is known about sets of partially constrained markers to determine their configurations fully. This table has entries for all pairs of shapes in the locus table.
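For the planar two-body case above, locus intersection reduces to intersecting two coplanar circles, which meet in at most two points; the branch variable q selects between them. The sketch below is my own illustration (degenerate cases such as disjoint, tangent, or identical circles are deliberately ignored, and the function name is invented).

```python
from math import sqrt, hypot

def intersect_circles(c1, r1, c2, r2, q):
    """Return the circle-circle intersection point selected by q in {0, 1}."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = hypot(dx, dy)                      # distance between centers
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord
    h = sqrt(r1**2 - a**2)                 # half-length of the chord
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d   # chord midpoint
    sign = 1 if q == 0 else -1
    return (mx + sign * h * dy / d, my - sign * h * dx / d)
```

A pseudo-marker p would be placed at the returned point, after which the introduced constraints coincident(p, a2) and coincident(p, b2) are solved through the plan fragment table as the text describes.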
For example, a sphere intersected with a circle yields at most two discrete points (except in the degenerate case of the circle lying on the sphere); a plane intersected with a cylinder yields an ellipse. For the constraints described in this paper, all loci are analytically describable, as are all pairwise intersections of loci.

Plan generation

The plan fragment table and locus tables allow simple and efficient algorithms to solve geometric constraints. TLA's metaphorical plan construction differs from blocks world planners like HACKER [Sussman, 1973], which generate physically realizable plans to get from one world state to another. Kinematic diagrams do not represent the true physical boundaries of the mechanism's parts, so the geometric entities may pass through each other in intermediate plan states as they move toward their final configurations. TLA's only concern is the final plan state, where all objects satisfy their constraints. This lack of concern about intermediate states allows TLA to satisfy the constraints incrementally, without backtracking.

An assembly plan generated by TLA is compiled into an assembly procedure, which is a machine-executable version of the plan. The assembly procedure is optimized in various ways: removing nested function calls, removing duplicate calculations, etc. Mechanism simulation is accomplished by alternately changing the values of the driving inputs and then calling the assembly procedure; the simulation moves the mechanism through its characteristic motions.

The assembly procedure may be reused when the sizes and shapes of the parts change; however, if the mechanism topology (e.g., number of bodies, or number or types of joints connecting the bodies) changes, a new assembly plan and procedure must be derived.

Implementation

The current version of TLA is written in Common Lisp and CLOS, and runs on a Symbolics Lisp Machine. A rule-based system generates the assembly plans.
Each rule implements part of the plan fragment table or the locus tables, of which about 60% have been implemented to date. A database stores assertions during the assembly planning. The database grows monotonically; no retractions are made. A simple pattern matcher is used, rather than full unification, and the few search heuristics (for efficiency only) are hard-wired into the rule triggers. This allows a simple control structure:

- Make any applicable deduction (e.g., 'marker m lies on a circle').
- Perform any applicable action (i.e., a translation or rotation).
- Succeed when all bodies have zero degrees of freedom.
- Fail when there is no applicable deduction or action.

While a rule-based system allowed flexibility in deciding how TLA would be structured, a future implementation will use explicit tables and object-oriented programming to avoid the need for pattern matching, substantially reducing the computational complexity of constructing assembly plans.

Complexity analysis

A complete analysis of the computational complexity of TLA is given in [Kramer, in preparation]; only the results appear here. For the rule-based implementation, plan generation time is O(n^d), where n is the number of constraints, and d is a constant determined by the average number of arguments for each database predicate (d ≈ 3). In practice, TLA's plan generator tends to run in time nearly proportional to n^2. Thus, for generating a solution, TLA's planning algorithm has polynomial complexity, as opposed to the exponential complexity of symbolic algebraic techniques. For executing a solution, TLA's compiled plan runs in O(n) time, as opposed to the O(n^3) time of iterative numerical methods.

Speed comparisons

TLA has simulated dozens of complex planar and spatial mechanisms, the largest example being a sofa-bed, shown in Figure 6. This mechanism has 16 links, 22 joints, and two driving inputs, and is described by 115 algebraic constraints, 19 of which are redundant.
A plan is generated in 297 seconds, and the assembly procedure compiled from it (655 lines of Lisp code) executes in 0.29 seconds on a Symbolics 3675. This is almost two orders of magnitude faster than simulation speeds using some of the commercially available numerically-based simulators, after scaling for differences in processor speed (commercial programs run on machines other than the Symbolics).

Discussion

Algebra has long been the lingua franca of science and engineering, but it can provide only a partial appreciation of the actual domain under study. An understanding of geometry is essential to solving problems insightfully and efficiently in the mechanical world. TLA demonstrates this for the task of mechanical assembly and simulation. By using geometry to guide equation solving, TLA provides orders of magnitude speedup over 'general-purpose' mathematical techniques. This means that interactive tools for the simulation, optimization, and synthesis of complex mechanical devices become feasible [Kramer and Barrow, 1989].

Figure 6: Sofa-bed mechanism (extended).

Using degrees of freedom analysis to generate an assembly plan, and using the resulting plan as a metaphor to guide equation solution, both appear unique to TLA. Sketchpad [Sutherland, 1963] and ThingLab [Borning, 1979] represented geometric constraints equationally, relying on relaxation for nonlinear equations. Popplestone et al. explored, with limited success, solving assembly problems algebraically using some geometric guidance [Popplestone et al., 1980]. More recently Popplestone has focused on using group theory to represent geometric symmetries [Popplestone, 1987]. This work could profitably be incorporated into TLA. Faltings [Faltings, 1989] and Joskowicz [Joskowicz, 1987] are investigating deriving kinematic constraints directly from geometry.
Such a facility would free the user of TLA from having to model a mechanism in terms of abstract concepts like markers.

The ideas embodied in TLA may be extended in many ways, including: expanding the range of constraints TLA understands (e.g., gears, cams, etc.); analyzing dynamic behavior more efficiently by virtue of understanding the kinematics; using knowledge of geometry to aid in design synthesis. In all of these cases, geometric knowledge leads to a better understanding of the underlying mathematics. Degrees of freedom analysis allows unifying geometric reasoning with algebraic techniques for efficient and intuitive modeling of real-world mechanisms and assemblies.

Acknowledgments

Phil Agre helped implement the latest version of TLA, and contributed technically in many ways. I would also like to thank Harry Barrow, David Barstow, Geoff Goodhill, David Gossard, Walid Keirouz, Mark Shirley, Reid Smith, Bob Young, and David Ullman.

References

[Borning, 1979] Alan H. Borning. ThingLab: A Constraint-Oriented Simulation Laboratory. PhD thesis, Stanford University, Stanford, California, July 1979.

[Dixon, 1986] John R. Dixon. Artificial intelligence and design: A mechanical engineering view. In Proceedings of the National Conference on Artificial Intelligence, pages 872-877, Seattle, WA, 1986.

[Faltings, 1989] Boi Faltings. Reasoning about kinematic topology. In Proceedings of the International Joint Conference on Artificial Intelligence, Detroit, Michigan, August 1989.

[Hartenberg and Denavit, 1964] R. S. Hartenberg and J. Denavit. Kinematic Synthesis of Linkages. McGraw-Hill, New York, 1964.

[Joskowicz, 1987] Leo Joskowicz. Shape and function in mechanical devices. In Proceedings of the National Conference on Artificial Intelligence, Seattle, WA, August 1987.

[Kramer and Barrow, 1989] Glenn A. Kramer and Harry G. Barrow. New approaches to linkage synthesis. In International Joint Conference on Artificial Intelligence (video track), Detroit, Michigan, August 1989.

[Kramer, in preparation] Glenn A.
Kramer. Geometric Reasoning in the Kinematic Analysis of Mechanisms. PhD thesis, University of Sussex, Brighton, UK, (in preparation).

[Popplestone et al., 1980] R. J. Popplestone, A. P. Ambler, and I. M. Bellos. An interpreter for a language for describing assemblies. Artificial Intelligence, 14(1):79-107, August 1980.

[Popplestone, 1987] R. J. Popplestone. The Edinburgh Designer System as a framework for robotics or, the design of behavior. COINS Technical Report 87-47, University of Massachusetts, Amherst, MA, May 1987.

[Press et al., 1986] William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge, England, 1986.

[Reuleaux, 1876] M. M. Reuleaux. The Kinematics of Machinery. Macmillan & Co., New York, 1876. Translated by Alex B. W. Kennedy.

[Snyder, 1985] Wesley E. Snyder. Industrial Robots: Computer Interfacing and Control. Industrial Robot Series. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1985.

[Sussman, 1973] Gerald Jay Sussman. A Computational Model of Skill Acquisition. PhD thesis, MIT, Cambridge, Massachusetts, August 1973.

[Sutherland, 1963] Ivan E. Sutherland. Sketchpad: A Man-Machine Graphical Communication System. PhD thesis, MIT, Cambridge, Massachusetts, 1963.
Explaining Temporal Differences to Create Useful Concepts for Evaluating States

Richard C. Yee, Sharad Saxena, Paul E. Utgoff, Andrew G. Barto
Department of Computer and Information Science
University of Massachusetts, Amherst, MA 01003
yee@cs.umass.edu, saxena@cs.umass.edu, utgoff@cs.umass.edu, barto@cs.umass.edu

Abstract

We describe a technique for improving problem-solving performance by creating concepts that allow problem states to be evaluated through an efficient recognition process. A temporal-difference (TD) method is used to bootstrap a collection of useful concepts by backing up evaluations from recognized states to their predecessors. This procedure is combined with explanation-based generalization (EBG) and goal regression to use knowledge of the problem domain to help generalize the new concept definitions. This maintains the efficiency of using the concepts and accelerates the learning process in comparison to knowledge-free approaches. Also, because the learned definitions may describe negative conditions, it becomes possible to use EBG to explain why some instance is not an example of a concept. The learning technique has been elaborated for minimax game-playing and tested on a Tic-Tac-Toe system, T2. Given only concepts defining the end-game states and constrained to a two-ply search bound, experiments show that T2 learns concepts for achieving near-perfect play. T2's total searching time, including concept recognition, is within acceptable performance limits, while perfect play without the concepts requires searches taking well over 100 times longer than T2's.

1 Introduction

The use of concepts holds the potential for improving both the speed and accuracy of a problem-solving agent. Concepts define sets over the space in which input problem instances are represented. They represent classes of inputs that are significant with regard to achieving the goals of the agent. Concepts are only
Concepts are only This material is based upon work supported by the Na- tional Science Foundation under Grants IRI-861910’7 and ECS-8912623, by the Air Force Office of Scientific Research, Bolling AFB, under Grant AFOSR-89-0526 and by the Of- fice of Naval Research through the University Research Ini- tiative program, contract N00014-86-K-0764. 882 MACHINE LEARNING useful, however, if their definitions support a recogni- tion process that is faster than other means available to the agent for computing the same information. For example, if an agent can determine that property P is true of a problem instance as quickly through search as through concept recognition, then there is no ben- efit in forming and storing a concept for recognizing P. On the other hand, using concepts that are effi- cient sources of valuable information allows an agent to r.espond more quickly to problems or to spend extra time computing higher-quality responses. This paper focuses on a method for enabling an agent to identify and define concepts that improve its performance in a task. The method is implemented in a system called T2, which operates within the do- main of minimax game-playing. T2’s learning process is closely related to the one used by Samuel’s checkers- playing program, which recursively improved its abil- ity to evaluate board positions based on a bounded- depth search of its current position [Samuel, 19591. The general approach of learning through recursively caching state evaluations was also demonstrated in the pole-balancing system of Barto, Sutton and An- derson [1983]. Recent related work may be found in the methods of temporal diflerences (TD) discussed by Sutton [1988] and in the learning architecture of Sut- ton’s Dyna system [1990]. Also, Barto, Sutton and Watkins [1990a; I990b] discuss TD methods from the perspective of the general framework of dynamic pro- gramming as suggested by Werbos [1977]. 
One of the mechanisms Samuel used for caching board payoffs was a rote memory that contained complete descriptions of individual boards and their payoff values. In contrast, T2 takes advantage of its knowledge of the problem domain to generalize the descriptions of states before they are cached. The generalizations are achieved through explanation-based generalization (EBG) [Mitchell et al., 1986] followed by a slightly modified form of goal regression [Waldinger, 1976]. Consequently, T2's memory contains generalized specifications of states rather than descriptions of individual boards.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The efficiency of using concepts learned through explanation-based approaches has been studied by a number of researchers. Minton [1988] demonstrated that a system employing EBG to learn search control knowledge may become ineffective if no attempt is made to identify and eliminate low utility concepts whose expected benefits are outweighed by their average matching costs. Tambe and Newell [1988] showed that the Soar learning architecture [Laird et al., 1986] is susceptible to forming such low utility concepts, so-called "expensive chunks". In such cases, overall performance after learning may be worse than in the initial system state. Tambe and Rosenbloom [1989] propose restricting the expressiveness of the language used to define concepts as a way of addressing the expensive chunks problem. In a similar spirit, concepts in T2 possess restrictive definitions that are fast to evaluate. Concept recognition in T2 tends to be "perceptual" rather than analytic. The motivation behind the T2 learning mechanism has been specifically to maintain the efficiency of new concept definitions while recursively improving the significance of the concepts being formed.
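Samuel's rote-memory idea can be sketched in a few lines. This is my own minimal illustration using a negamax sign convention (payoffs taken from the side to move), not code from Samuel's program or from T2.

```python
# Rote caching of backed-up evaluations: complete board descriptions map
# to their backed-up payoffs, so later searches stop at recognized boards.
cache = {}

def evaluate(board, payoff, children, depth):
    """Depth-bounded negamax search that consults and fills the rote memory."""
    if board in cache:
        return cache[board]          # recognized: no further search needed
    kids = children(board)
    if depth == 0 or not kids:
        return payoff(board)         # leaf: use the given payoff function
    value = max(-evaluate(k, payoff, children, depth - 1) for k in kids)
    cache[board] = value             # back up the evaluation to this state
    return value
```

Storing complete boards this way is exactly what T2 avoids: it generalizes a board's description with EBG and goal regression before caching it, so one learning step covers a whole set of boards.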
2 Task Performance and Learning

Given a game state in which it must move, T2 chooses its response by performing a bounded-depth minimax search to try to identify the best child state. Ultimately, the value of any state depends upon a payoff function that maps states into real values in the interval [-1.0, +1.0], where +1.0 is identified with wins for T2, -1.0 with losses, and 0.0 with draws. In T2, the backed-up minimax payoff is discounted by a uniform cost assigned to moves. This is done so that states with non-zero payoffs can be distinguished on the basis of the minimum number of steps to a win. We chose an arbitrary positive discount factor of 0.9.

Since T2's search is never deeper than a predetermined number of levels, many states at interior nodes of the complete game tree will appear as leaves of T2's truncated search tree. To approximate the complete search therefore requires a payoff function that can evaluate states representing intermediate stages of play. However, T2 starts out with a payoff function only capable of accurately evaluating completed games: wins, losses and draws. All intermediate states evaluate to 0.0, the same value as a draw. Thus, the learning problem in T2 is as follows:

Given a payoff function sufficient for evaluating the leaves of a complete game tree, develop an enhanced payoff function that can also provide useful evaluations of states at intermediate stages of play.

The usefulness of the payoff function's evaluations is determined by how well the truncated search identifies perfect lines of play. The rationale behind this approach is that using the enhanced payoff function should be a much more tractable computation than determining payoffs via deeper search. Hence, a primary constraint on the learning process is that the application of the new payoff function be efficient.

3 Caching Boards in Concepts: An Overview of T2

T2 uses a collection of concepts to associate states with payoff values.
Each concept is uniquely associated with a discounted payoff value found in the minimax game tree, and each has a definition that, ultimately, covers all the states corresponding to its payoff. To ensure efficiency in concept recognition, definitions are not allowed to specify properties of a state requiring information derived from combinatoric searches. In particular, definitions may only refer to directly observable features of the given state representation, which we call the structural state description.

Identifying concepts with payoff values has limitations. A more general approach would view the mapping from input instances to concepts as an intermediate or supplemental step in the process of mapping from inputs to evaluations. For example, the mapping from inputs to concepts could be composed with a non-trivial mapping from concepts to payoff values. Such an extension of the current approach could be useful for problems with a large number of payoff values.

A general picture of the concept learning process in T2 consists of first detecting that an unknown state belongs to a particular payoff group and then caching that result in the definition of the proper concept. Rather than simply caching the entire state as an instance of the payoff group, generalizing the state before caching it provides significant advantages both in the efficiency of the new concept definition and in the speed of the learning process, because a set of states is learned in a single step. The new concept then provides a basis for further concept learning. Hence, the entire collection of concepts is built up in a bootstrapping fashion.

3.1 A Simple Test Domain: Tic-Tac-Toe

Our concept learning method was developed from considering the game of Tic-Tac-Toe. One advantage of this domain is that its simplicity allows attention to be focused on the problem of integrating EBG into the process of learning efficient definitions for concepts that enable perfect play.
Also, the method has been designed with particular regard to the class of minimax problems, and therefore it is potentially applicable to more domains amenable to minimax search.

Initially, T2 is given exactly three concepts for recognizing any legal Tic-Tac-Toe board b: null(b), win(X, b) and win(O, b); assuming T2 plays X, these represent the payoffs 0.0, +1.0 and -1.0, respectively. The null concept covers every board not covered by any other concept in the collection. Hence, it is always correct for the boards that do not possess a guaranteed winning combination for either player. The concepts for O need not be given explicitly, since they can be obtained by using the X concepts on inverted boards.

YEE ET AL. 883

Clearly, performing a complete minimax search grounded in T2's initial collection of concepts is sufficient for producing perfect play. However, given T2's performance constraints (a search depth bounded at two ply), this initial collection yields playing behavior only slightly better than random. All boards leading to forced wins or losses occurring beyond T2's search horizon are simply recognized as null-payoff boards.

To learn concept definitions that correct this situation, one must first be able to identify a misclassified board and its proper payoff value. Misclassifications are identified by temporal differences: the difference between the predicted value of a board and the value backed up either from search or from actual state transitions experienced during problem-solving. It is
To play a game using minimax search, the process of using concepts to determine payoff values need only occur for boards at the leaves of the search tree. For concept learning, however, concepts are also used to determine payoffs for boards at interior nodes of the tree, including at the root. As in Samuel’s system, for a given board b, the payoff determined from memory is a prediction of what b’s backed-up search payoff will be. A violated prediction indicates that the current collection of concepts is inadequate. Thus, concept learning in T2 is triggered in the following situation: If board b’s membership in concept C yields a pay- off that is different from the backed up minimax payoff, then C has a definition that is overly gen- eral. Its definition needs to be restricted to ex- clude (at least) the board b. In T2’s initial collection of concepts, the given def- initions for w&(X, b) and win(0, b) are correct, but the null definition, which covers every board, over- generalizes in covering boards with intermediate pay- off values. For example, a board b leading to a win in one step will be recognized only as a null board, but it will be found through minimax to have a payoff of f0.9. Therefore, it is necessary to prevent nubb(b) from recognizing b. Since null (b) covers all boards, it is “pre-empted” by creating a definition for a non-null concept that will cover b. The proper non-null concept to create or modify is the one representing b’s backed- up search payoff, +0.9. This concept will be created if it does not already exist in the current collection; otherwise, its definition will be modified to cover b, thereby excluding b from null (b). A useful perspective is that all of the non-null con- cepts define groups of exceptions to the null concept. Input: A board b at a node of a minimax search tree. Output: The collection of concepts that has been modified as appropriate to better predict b’s true payoff in the complete minimax game tree. Method: 1. 
Compute b’s payoff from a concept C, where bE c. 2. Compute b’s backed-up search payoff, p. 3. If Puyofl-of (C) # p then (a) Identify th e relevant children of b, {bi}, and their corresponding concepts (Di}. (b) Form a generalization of b based on C, (bi} and (D;}. (c) Use the generalization to restrict C. Table 1: An overview of the concept learning algorithm This view characterizes the learning process in T2: only overly general concept predictions are detected and subsequently corrected through the learning of ex- ceptions. Under-generalizations of non-null concepts are only detected when such failings lead to incorrect predictions of the null payoff. The learning of excep- tions to the null concept translates into the learning of positive examples for the non-null concepts. This learning process produces concepts possessing a log- ical structure similar to Vere’s mubtilevel counterjuc- tuals [1980]. One consequence of this counterfactual structure is that it becomes possible, in certain cases, to use EBG to explain “why not”, i.e., to explain why an instance is not an example of a concept. 3.2 Generalizing Boards Before Caching The algorithm for learning concepts is summarized in Table 1. This section gives a brief description of the steps involved in generalizing a board b for which a pay- off prediction from a concept C has been invalidated- steps (3.a-c) of the algorithm. Further details are pro- vided in the sections indicated below, and a summary of the algorithm for step (3.b) is given in Table 2. To generalize the structure of board b, more informa- tion is needed than simply the board and its backed-up payoff. Because the children of b determine its payoff, information about their structures is needed as well. It is necessary to determine which of b’s children partic- ipated in the prediction violation and which concepts gave rise to their payoffs. 
Determining which children are relevant for correcting the parent’s misclassification depends upon whether the backed-up payoff was better or worse than predicted. When the backed-up payoff is better than predicted for the parent, it is because at least one child had a better payoff than predicted for the children. If there is more than one such child, one is selected arbitrarily. When the backed-up search payoff is worse for the parent, it is because all of the children have worse payoffs than predicted for them. These two cases yield the relevant children, (bd}, in- dicated in step (3.a). For each such child, it may be 884 MACHINE LEARNING Figure 1: The formation of a clause for pre-win (X, b) necessary to know the structure on which its own pay- off is based. This structure can be extracted from the definition of the concept 13i used to assign the child its payoff. In step (3.b), the key to generalizing the parent b is to use EBG appropriately to generalize the set of rele- vant children (bd). G iven a child bi, we rely on EBG to identify the child’s relevant features, where relevance is determined with respect to some concept l?. Typi- cally, EBG is used only when an example is known to be a member of a concept, i.e., when bd E I?. However, because concept definitions in T2 may explicitly rep- resent exceptions, it also becomes possible to produce useful generalizations of an example with respect to its nonmembership in a concept: bi 4 I’. Section 5.1 describes the explanation process used in T2, and in particular, points out how the case for explaining non- membership arises. In either case, EBG extracts from a child board bi a general board structure that is suf- ficient for explaining its relationship to T. After obtaining the structures of the relevant chil- dren, we are in a position to back up this information to generalize the parent. A structure at the parent’s level is produced from a child’s structure by undoing the move that led to the child. 
This is accomplished by applying a slightly modified version of goal regression, which is a general technique for recovering the pre-image of a set by passing the set backwards through an operator. In T2, each particular move is an operator. The corresponding backwards operator that we use produces a set that contains the pre-image of the child's structure, possibly as a proper subset. This process is further described in Section 5.2.

Undoing the moves in each child's structure yields structures at the parent's level of description. The regressed structures are then conjoined into a single specification that describes a general structure that occurs in the parent b. Finally, this new specification is itself conjoined as an exception within the definition of the concept C that incorrectly predicted the payoff of b: step (3.c) of the algorithm.

An Example

Consider the formation of a clause for the concept representing the payoff of +0.9, indicating that X can achieve a win in one step. Call this concept pre-win. Suppose that a board b of the form in Figure 1 is evaluated. Board b does not contain a win, so its payoff is initially 0.0, given by null(b). However, one of its children, bi, satisfies win(X, bi), yielding the payoff +1.0. Therefore, the payoff of b should have been +0.9. This identifies a learning situation. For convenience, label the nine board squares from 0 to 8 in order from left-to-right, top-to-bottom. The portion of the definition of win(X, bi) that is the reason for bi ∈ win(X, bi) is found through EBG to be: ((X at 0) ∧ (X at 4) ∧ (X at 8)). Undoing the move that led to this child yields b's structural generalization: ((X at 0) ∧ (X at 4) ∧ (blank at 8)). This structure has been found to be the reason that b has the payoff +0.9. Hence, it is cached in the definition of the concept pre-win(X, b).
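The regression step of this example can be replayed concretely. In the sketch below, boards are 9-tuples over squares 0-8; the blank value ' ' and the don't-care marker '*' are my own notational choices, not the paper's exact syntax.

```python
def undo_move(structure, square):
    """Regress a child's explanation structure through the move at `square`:
    the mark placed there becomes a required blank in the parent."""
    s = list(structure)
    s[square] = ' '            # the parent must still have this square open
    return tuple(s)

# EBG on the winning child yields the relevant features X@0, X@4, X@8
# (the main diagonal); all other squares are "don't care" ('*').
child_structure = ('X', '*', '*', '*', 'X', '*', '*', '*', 'X')
parent_structure = undo_move(child_structure, 8)
# parent_structure requires X@0, X@4 and a blank at 8: the structure
# cached as a clause of pre-win(X, b), with payoff +0.9.
```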
4 Concept Definitions

To achieve an efficient recognition process, concepts are required to have definitions that eliminate or bound the amount of search that may be done to match an instance with a concept. By expressing definitions solely in terms of the immediately observable features used to represent board structures, concepts are not allowed to refer to functional properties requiring search through the state space. A further restriction is that definitions may not have variables at nested levels, because these lead to combinatorial searches for bindings on a given board's structure. In T2 the representation of boards consists of a list of the nine squares, in which each square takes on one of three values: X, O, or blank. The values of specific squares are the only terms used in concept definitions. The only bindings allowed occur at the level of entire boards, i.e., the eight board symmetries provided by rotations and reflections are used in matching boards with concepts. This gives a fixed number of possible bindings across all concept definitions. Generalized board structures are represented as regular boards in which the squares may also take on the value "don't care". Such specifications shall be called gen-boards. A gen-board is used as a definition for the set of boards matching the specified structure. Because one cannot determine from a gen-board which player is on-move, it is necessary to record this information explicitly in the concept definitions. A concept definition also has a disjunctive expression in which each clause specifies a subset of legal Tic-Tac-Toe boards. Each clause is a recursive expression, which is represented as a tree in which the nodes are gen-boards. The significance of the root gen-board of a clause is that its structure was sufficient for explaining why a particular board achieved the payoff value represented by the concept possessing the clause.
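The gen-board representation and symmetry matching just described can be sketched as follows (a minimal illustration under assumed conventions: 9-element lists, '*' marking a don't-care square, and function names of my own; not the T2 code):

```python
# Index permutations generating the 8 board symmetries of a 3x3 grid.
ROT = [6, 3, 0, 7, 4, 1, 8, 5, 2]   # 90-degree rotation
REF = [2, 1, 0, 5, 4, 3, 8, 7, 6]   # reflection about the vertical axis

def transforms(board):
    # Yield the board under all four rotations, each with its reflection.
    b = board
    for _ in range(4):
        yield b
        yield [b[i] for i in REF]
        b = [b[i] for i in ROT]

def matches(gen_board, board):
    # A board satisfies a gen-board if, under one of the 8 symmetries,
    # every specified (non-'*') square agrees.
    return any(all(g in ('*', s) for g, s in zip(gen_board, t))
               for t in transforms(board))

edge_row_win = ['X', 'X', 'X', '*', '*', '*', '*', '*', '*']
board = ['O', ' ', ' ', 'O', ' ', ' ', 'X', 'X', 'X']   # win on bottom row
assert matches(edge_row_win, board)    # found via a rotation
```

Binding only at the level of whole boards, as here, is what keeps the number of candidate matches fixed (eight) for every concept definition.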
The significance of any child gen-boards of the root is that they describe structures that occur in exceptions: boards containing the root structure yet not yielding the associated payoff. This gives rise to a recursive logical structure in clauses: there can be exceptions to exceptions, to any level of nesting. Hence, a clause is either a gen-board or a gen-board conjoined with a conjunction of negated clauses. For a given board to satisfy a clause, it must match the root gen-board while failing to match each child clause. Henceforth, one entire clause (tree) of a concept definition will be called a concept-clause when it is necessary to distinguish it from child clauses.

YEE ET AL. 885

[Figure 2: Deriving a clause for an exception]

Recall that non-null concepts can be considered as exceptions to the null concept. This indicates that all of the concept-clauses for the non-null concepts can be treated as children of the null gen-board. Thus, the entire set of concepts forms a single tree with the null gen-board at the root and all of the concept-clauses at the first level of children. The given concept win(X, b) (hence win(O, b)) is defined by three concept-clauses, each of which is a single gen-board. There is a gen-board for an edge row: ((X at 0) ∧ (X at 1) ∧ (X at 2)), and, similarly, one for a middle row and one for a diagonal row. Using the board symmetries, these are sufficient for recognizing any win.

5 Learning New Clauses

This section describes how the generalization of a board, b, is derived using EBG and a modified version of goal regression. Figure 2 illustrates the process in the case of a single relevant child. The total information required for creating a new clause is: the concept C that incorrectly predicted b's payoff, the children {bi} that produced the prediction violation for b, and each such child's corresponding concept, Di.
5.1 Explaining "Why"s and "Why Not"s

The membership of board b in the concept C predicts that the best payoff among all of b's children will be given by the concept C', where the payoff of C' is the payoff of C divided by the discount factor. Therefore, in order to generalize b as an exception to C, we would like to know why the relevant children were not in C'. In this case, EBG is sometimes able to identify features of a child bi that are relevant for non-membership in C'.

Input: A concept C where b ∈ C; b's relevant children {bi} (i = 1, ..., n); their corresponding concepts {Di}.
Output: A clause generalizing b.
Method:
1. Let C' be the concept predicted for the children, based on b ∈ C.
2. For each child bi:
   if γi ← Explain(bi ∉ C') then return Undo-Move-Clause(γi)
3. For each child bi:
   (a) γi ← Explain(bi ∈ Di)
   (b) γi ← Undo-Move-Clause(γi)
4. Return Conjoin-Clauses(γ1, γ2, ..., γn)

Table 2: The generalization algorithm: step (3.b) of Table 1

This type of explanation is attempted first because, when it succeeds, it appears to yield a more precise generalization of the parent b than does explaining bi ∈ Di. Explaining a child's membership in its own concept, Di, provides a reliable back-up strategy. Whichever approach is used, the resulting explanation is a clause specifying a set of boards that includes bi. T2's back-up strategy employs the standard EBG approach to explain why bi ∈ Di. The explanation is a clause that is a portion of Di's definition that matched bi and was sufficient for concluding concept membership. Specifically, it is a concept-clause of Di that is satisfied by bi. The concept-clause is a generalized structural specification that covers bi along with a group of boards sharing the relevant structures and, therefore, sharing the payoff represented by Di. Explaining non-membership in a concept is possible because the concept-clauses may recursively specify alternating levels of positive and negative conditions.
A non-trivial generalization of a non-member example can be obtained by explaining the example's membership in one of the child clauses of a concept-clause. Suppose we wish to explain bi ∉ C'. Clearly, one possible reason could be that bi does not match the root gen-board of any of C''s concept-clauses, but this is a trivial explanation, since the default assumption for all boards is that they are null. Such an explanation cannot be used to improve the system's overall knowledge. The potentially interesting case occurs when bi satisfies the root gen-board of a concept-clause for C' yet also satisfies at least one of the root's child clauses, which specify exceptions. In this case, there is prima facie evidence that bi belongs in the concept, yet it is an exception. The explanation of bi's non-membership in C' is a matching child clause of a concept-clause whose root also matches bi.

[Figure 3: The success of T2 against opponents of varying depths. Panels: T2 vs. standard minimax opponents of depths 3 through 6; abscissa: games played.]

5.2 Undoing Moves

In T2 each move mi is an operator yielding bi from b. We wish to regress each generalized child clause back through its respective move operator to recover a clause for a pre-image containing b. The conjunction of the regressed clauses specifies the components of b that make it an exception to the overly general concept C. Each move consists of a player's symbol and a board square. Undoing a move in a clause is accomplished by recursively traversing the clause's tree structure, and undoing the move in the gen-board at each node. Undoing a move in a gen-board is illustrated by the following example. Suppose that in gen-board G the move "X to square 3" is to be undone. If square 3 contains an X, replace it with a blank. If square 3 is a don't care, then we use the heuristic of returning G unchanged.
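The move-undoing heuristic can be sketched as follows (assuming a hypothetical encoding of gen-boards as 9-element lists with '*' marking don't-care squares; names are my own):

```python
def undo_move(gen_board, player, square):
    # Regress a gen-board through the move "player to square".
    g = list(gen_board)
    if g[square] == player:
        g[square] = ' '        # the mover's symbol becomes a blank
    elif g[square] == '*':
        pass                   # heuristic: leave a don't-care unchanged
    return g

diag_win = ['X', '*', '*', '*', 'X', '*', '*', '*', 'X']
# Undoing "X to square 8" recovers the pre-win structure of the example:
# (X at 0) and (X at 4) and (blank at 8), the rest don't-care.
assert undo_move(diag_win, 'X', 8) == \
       ['X', '*', '*', '*', 'X', '*', '*', '*', ' ']
assert undo_move(diag_win, 'X', 1) == diag_win   # don't-care untouched
```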
Strictly speaking, in such a case G should be returned with the added specification "blank at 3". Using the heuristic rule is an attempt to produce a useful generalization of b. In many cases, specifying the additional blank squares introduces unnecessary constraints that can be expected to increase the number of concept-clauses and to slow significantly the speed of learning. Therefore, we are willing to tolerate the possibility that G may be slightly overly general, since the learning mechanism can produce corrections if necessary. While we do not yet have a proof that perfect concepts will eventually be formed, experiments demonstrate that the concepts come to support near-perfect play. After undoing the moves in the clauses, they are conjoined into a single clause by conjoining their root gen-boards into a single gen-board. All child clauses become children of this new root (step (4) in Table 2).

6 Experiments

To evaluate our approach, T2 was played against opponents using standard minimax searches of different fixed depths. T2 always performed a two-ply search. The only concepts known to the opponents were: null(b), win(X, b), and win(O, b). In both T2 and the opponents, if more than one child of a board returned the best payoff value, then a child was randomly selected from among these children. The goal of these experiments was to determine: (a) whether T2 can improve its performance sufficiently to match or surpass any opponent, and (b) whether T2 always uses an acceptable amount of resources, especially time. Figure 3 shows the performance of T2 against four opponents. Their searches are bounded from depths three to six; a six-ply search is sufficient for perfect play in Tic-Tac-Toe.

[Figure 4: The maximum time used by the players to make a move (abscissa: search depth of the opponent). Time is shown on a logarithmic scale.]
The abscissa is the number of games played, and the ordinate is the cumulative difference between the number of games won by T2 and the number it lost. The performance of T2 may be judged by the average slope of the line, e.g., a positive slope indicates a dominant winning trend. The graphs indicate that the learned concepts enable T2 to win 35-40% of the time against all opponents that search to a depth of five or less. Against the depth-six opponent, the slope of the line showing T2's performance approaches zero, indicating that T2 is approaching the perfect play of the opponent. Figure 4 uses a logarithmic scale to show the maximum time (in seconds) used for making a move by T2 and each successive opponent. The time reported is for compiled functions written in Common-Lisp and run on a SUN 3/60. Each value is the maximum for any move occurring in ten additional games that were played after the 200 shown in Figure 3. Figure 4 shows that the time required by T2 to make a move is nearly constant regardless of the opponent. As one would expect, the opponents' times show exponential growth with increasing search depth. Figure 3 shows that T2 performs as well as or better than its opponents, while Figure 4 shows that T2 is achieving its results faster than any opponent using a search of depth four or more. In particular, T2 can approach the level of perfect play. Using the learned concepts for this level of play is well over 100 times faster than using standard search alone.

7 Conclusions

We have described a technique for combining a temporal-difference learning method with explanation-based generalization and a slightly modified form of goal regression to enable an agent to learn concepts that improve its problem-solving performance. The learned concepts are generalized memories of problem-solving experiences, and they can be used to evaluate quickly similar problem states encountered in the future.
The information for forming concepts can be derived from either local search employing a model (planning) or from direct environmental feedback. We have been interested in situations in which the agent is able to integrate concept learning with actual task performance. Consequently, neither the learning process nor the subsequent process of recalling information can be allowed to interfere seriously with meeting the time constraints of performance. The T2 system implements the technique in the domain of minimax game-playing and has been tested on Tic-Tac-Toe. The value of the approach can be understood by following Minton's analysis of the benefits versus the costs of using learned concepts. The TD process of backing up board evaluations ensures that application of the concepts will yield significant benefits, which are measured in terms of the depth of the searches necessary for computing the same payoff information. The generalizations and use of board symmetries help ensure that each definition covers a relatively large number of instances, resulting in wide applicability of the concepts. Also, there is a corresponding increase in the speed of learning. In considering the costs of the concepts, it is seen that the generalizations also help ensure that there will be a small number of gen-boards to match in the definitions. Restricting the expressiveness of the definitions strictly controls the binding problem during matching, as Tambe and Rosenbloom have also demonstrated. Finally, the logical structure of the concepts probably also contributes to the efficiency of recognition. Since concepts are recognized by applying a level-by-level series of general positive and negative tests, it may be expected that, for most concepts, most boards will either pass or fail early in the process. Moreover, in certain cases, this logical structure allows us to use EBG to explain why a given instance is not a member of some concept.
Exploiting domain knowledge within the context of problem solving using local search appears to be an effective method of learning. Our approach efficiently defines concepts whose significance lies in the fact that they distinguish only those regions of the input representation space that are relevant to the goals of the agent. This addresses fundamental computational issues in producing goal-directed behavior, and we expect that further research will produce more general formulations of these principles.

Acknowledgements

We are grateful to Steven Bradtke, Carla Brodley, Jamie Callan, Margie Connell, and Tom Fawcett for many valuable comments on a draft of this paper.

References

[Barto et al., 1983] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, 13:835-846, 1983.
[Barto et al., 1990a] A. G. Barto, R. S. Sutton, and C. J. C. H. Watkins. Learning and sequential decision making. In M. Gabriel and J. W. Moore, (eds.), Learning and Computational Neuroscience, MIT Press, Cambridge, MA, forthcoming.
[Barto et al., 1990b] A. G. Barto, R. S. Sutton, and C. J. C. H. Watkins. Sequential decision problems and neural networks. In D. S. Touretzky, (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann, San Mateo, CA, forthcoming.
[Laird et al., 1986] J. E. Laird, P. S. Rosenbloom, and A. Newell. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1:11-46, 1986.
[Minton, 1988] S. Minton. Quantitative results concerning the utility of explanation-based learning. In Proceedings of the Seventh National Conference on Artificial Intelligence, pages 564-569, Morgan Kaufmann, San Mateo, CA, 1988.
[Mitchell et al., 1986] T. Mitchell, R. Keller, and S. Kedar-Cabelli. Explanation-based generalization: A unifying view. Machine Learning, 1:47-80, 1986.
[Samuel, 1959] A. L. Samuel.
Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3:210-229, 1959.
[Sutton, 1988] R. S. Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3:9-44, 1988.
[Sutton, 1990] R. S. Sutton. Integrated architectures for learning, planning and reacting based on approximating dynamic programming. Submitted to the 1990 International Machine Learning Conference, 1990.
[Tambe and Newell, 1988] M. Tambe and A. Newell. Some chunks are expensive. In Proceedings of the Fifth Conference on Machine Learning, Morgan Kaufmann, San Mateo, CA, 1988.
[Tambe and Rosenbloom, 1989] M. Tambe and P. Rosenbloom. Eliminating expensive chunks by restricting expressiveness. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 731-737, Morgan Kaufmann, San Mateo, CA, 1989.
[Vere, 1980] S. A. Vere. Multilevel counterfactuals for generalizations of relational concepts and productions. Artificial Intelligence, 14:138-164, 1980.
[Waldinger, 1976] R. Waldinger. Achieving several goals simultaneously. In E. W. Elcock and D. Michie, (eds.), Machine Intelligence, Wiley and Sons, New York, 1976.
[Werbos, 1977] P. J. Werbos. Advanced forecasting methods for global crisis warning and models of intelligence. General Systems Yearbook, 22:25-38, 1977.
Effective Generalization of Relational Descriptions

Larry Watanabe and Larry Rendell
Beckman Institute and Department of Computer Science
University of Illinois at Urbana-Champaign
1304 W. Springfield Avenue, Urbana, Illinois 61801 U.S.A.
watanabe@cs.uiuc.edu

Abstract

The problem of computing maximally-specific generalizations (MSCG's) of relational descriptions can be modelled as tree search. We describe several transformations and pruning methods for reducing the complexity of the problem. Based on this analysis, we have implemented a search program (X-search) for finding the MSCG's. Experiments compare the separate and combined effects of pruning methods on search efficiency. With effective pruning methods, full-width search appears feasible for moderately sized relational descriptions.

Introduction

Since Hayes-Roth's (1977) SPROUTER, several systems have been designed that manipulate structural descriptions. Michalski's (1983) INDUCE learns concepts expressed using both attributes and predicates, preferring simpler structural descriptions. VanLehn's (1989) Sierra also learns concepts, efficiently updating the set of minimally general hypotheses. Whereas these systems input examples for supervised learning, Quinlan's (1989) FOIL inputs a set of descriptions for unsupervised learning. Instead of generalizing descriptions, Falkenhainer, Forbus, and Gentner's (1989) SME (structure mapping engine) was designed to find analogies by matching descriptions between a target and base domain. Although one purpose of this paper is to compare algorithms that manipulate structural descriptions, its main purpose is to explore a new algorithm, X-search. First, we detail the basic problem and some assumptions, then develop and explain the X-search algorithm. Next, we describe an empirical analysis of X-search. We review other programs and compare them to X-search. The last section summarizes our work.
Problem and Assumptions

The problem of matching two structural descriptions is often expressed in terms of first order predicate logic. Given such a representation, certain assumptions and constraints may simplify algorithms.

1. This research was supported in part by grant IRI 8822031 from the National Science Foundation.

rendell@cs.uiuc.edu

Representation and Matching

A relation is a k-ary predicate P^k. Since any k-ary predicate can be represented using a combination of unary and binary predicates, we assume a fixed set R of m unary relations P1, ..., Pm and n binary relations P1, ..., Pn over R (Haussler 1989). The atomic formulas are the literals Pi(x) (i ≤ m) and Pj(x1, x2) (j ≤ n), where each (subscripted) x is a variable. To simplify notation we drop the variables in literals and the superscripts of literals indicating unary versus binary predicates. An existential conjunctive expression is a formula F = ∃ x1, ..., xn: P1 & P2 & ... & Pn, where n ≥ 1. These descriptions can be viewed as graphs, in which the nodes are the variables or constants, and the edges are the binary relations. Unary relations may be drawn as reflexive edges. We can view the problem of structure matching from two perspectives. One is pictorial: a set of relations is a graph, and the problem is to superimpose two candidate graphs so that their nodes and edges agree. The other view is logical: a set of relations is an existential conjunctive expression, and the problem is to unify two such expressions by performing suitable substitutions of constants for variables in the two candidate expressions. The following section develops the logical view of matching.

The Mechanics of Matching

We briefly review the notions from logic needed to create or to match existential conjunctive expressions. Then we define the problem of matching.

Substitution and consistency. A substitution θ = {c1/x1, ..., cn/xn} is a set of correspondences between constants ci and variables xi.
Under the 1:1 mapping assumption (Hayes-Roth 1977), every constant in a consistent substitution corresponds to exactly one variable, and every variable corresponds to exactly one constant. A substitution instance of a literal P is the result of replacing each of the variables in P by the corresponding constants in θ. This is denoted by P∘θ. A description F matches a set of literals L if for some substitution θ, F∘θ ⊆ L. A description is consistent with a set of positive examples if it matches all of them.

WATANABE AND RENDELL 875
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The problem and context of matching. Matching appears in several guises. One problem is to find a maximally specific common generalization (MSCG) of two or more objects. Another problem is to unify two or more descriptions of object classes. In these and other cases, the matching problem may be reduced to pairwise matching. For example, if we want to find a generalization of positive examples, we can first form an existential conjunctive expression from one example, then iteratively match each of the remaining examples to the composite. This version of the matching problem is concept learning. In Mitchell's (1978) version space method of concept learning, the set of candidate concepts is represented by two boundary sets G and S. G is the set of all maximally general expressions consistent with the examples, and S is the set of all maximally specific expressions consistent with the examples. When a positive example p is encountered by the system, S is minimally generalized so that each expression in S is consistent with p. Similarly, when a negative example n is encountered, G must be minimally specialized to become consistent with n. Van Lehn (1989) gives an efficient algorithm for updating the G set. In theory, however, G can grow exponentially even for the propositional case (Haussler 1989).
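The consistency requirement under the 1:1 mapping assumption can be sketched as follows (a substitution is represented here as a Python dict from variables to constants; the representation and names are my own):

```python
def consistent(theta1, theta2):
    # Merge two substitutions, failing (None) if any variable would be
    # bound to two constants or any constant to two variables.
    merged = dict(theta1)
    for var, const in theta2.items():
        if merged.get(var, const) != const:
            return None                      # variable bound twice
        merged[var] = const
    if len(set(merged.values())) != len(merged):
        return None                          # constant bound twice
    return merged

assert consistent({'x1': 'a'}, {'x2': 'b'}) == {'x1': 'a', 'x2': 'b'}
assert consistent({'x1': 'a'}, {'x1': 'b'}) is None   # x1 -> a and b
assert consistent({'x1': 'a'}, {'x2': 'a'}) is None   # a -> x1 and x2
```

This is the kind of check a matcher must make every time it extends a partial match by one literal.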
In contrast, a propositional S never contains more than one hypothesis (Bundy, Silver, & Plummer 1985). But for structural domains, Haussler shows that S and G can grow exponentially. Yet Haussler (1989) notes that many of the techniques developed for learning in structural domains appear to work well in applications of practical interest. This is our motivation in trying to improve current techniques despite the intractability of the problem. In particular, one-sided methods using only the S set might be more efficient than two-sided methods in domains of practical interest.

Our Approach to the Problem

Generalizing specific expressions. In the one-sided version space method, the set S must be modified to make its members consistent with a new positive example. This leads to the following generalization problem:

Given: s, a member of S, and p, a positive example.
Find: a set of existential descriptions C such that: every concept in C matches p, every concept in C is a generalization of s, and no concept in C is a generalization of any other concept in C.

The set C is simply the set of MSCG's of s and p. As Van Lehn (1989) notes, S may have multiple members, so the above problem must be solved for each member of S and the results merged. In the one-sided approach, S is replaced by the merged C at each step. Hayes-Roth (1977) proposed an interference matching algorithm for computing C. In the version space framework, interference matching involves the following steps:

    for every literal l in s
        Add {l} to C
    Repeat
        Choose an unchosen literal l from s
        For each c in C
            if c + {l} matches p
                Add c + {l} to C
        Prune C according to heuristics
    until all literals from s have been chosen

Interference matching uses some sophisticated heuristics to make a good choice of literal for specialization and to prune C. However, pruning may prevent some elements of C from being found.

Decomposition into connected components.
Our method uses the fact that graph matching is computationally less complex if only disconnected subgraphs are matched. This is related to factoring of version spaces (Genesereth & Nilsson 1988). Similar methods are used by Falkenhainer, Forbus and Gentner (1989) for analogical mapping. A graph represents a conjunctive description F = ∃ x1, ..., xn: P1 & P2 & ... & Pn. The existential expression is connected iff for every xi, xj, either xi and xj occur in the same literal Pk, or there is an xk such that xi and xk are connected and xk and xj are connected. An existential conjunctive expression can be normalized by finding its connected components and creating a new existential conjunctive expression for each component. The result is a set of existential conjunctive expressions. The set of expressions matches another expression E iff every element of the set matches E. As shown in Figure 1, the S set of a version space can be normalized by replacing each of its elements by the element's connected components, and removing the non-maximally specific components.

[Figure 1. Normalized S set of version space]

Although some or all of the new elements of S may be less specific than the old elements of S, together they define the same boundary of the version space as previously.

The X-Search Algorithm

The X-search algorithm computes the set of MSCG's for a given description s and a positive example p. This computation can be viewed as search in a tree T, where nodes correspond to descriptions and a branch corresponds to a literal that is used to specialize the description. Every description is more general than s, and is constrained to match p. For example, interference matching searches the tree shown in Figure 2. From this perspective, a branch (or its literal) can be viewed as specializing a node (or its description) of the search tree.
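The normalization into connected components described above can be sketched as follows (literals are encoded here as tuples like ('P', 'x1', 'x2'); the encoding and function name are my own):

```python
def normalize(literals):
    # Group literals whose variable sets are transitively connected;
    # each group is one existential conjunctive expression.
    components = []                     # list of (variable set, literals)
    for lit in literals:
        vs = set(lit[1:])               # variables of this literal
        lits = [lit]
        rest = []
        for cvars, clits in components:
            if cvars & vs:              # shares a variable: merge
                vs |= cvars
                lits = clits + lits
            else:
                rest.append((cvars, clits))
        components = rest + [(vs, lits)]
    return [clits for _, clits in components]

desc = [('on', 'x1', 'x2'), ('red', 'x2'), ('blue', 'x3')]
parts = normalize(desc)
assert len(parts) == 2    # {on(x1,x2), red(x2)} and {blue(x3)}
```

Matching each small component separately, rather than the whole description at once, is what makes the factored search cheaper.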
Our algorithm uses a connectivity constraint: a literal can specialize a node only if it is connected to its description. The empty description, when at the root node, is defined to be connected to every literal in s. The search tree can be viewed in yet another perspective. Suppose n is a node in the search tree with description d. If a branch specializes n to a node n' with a literal l, then the subtree rooted at n' searches for the MSCG's that are supersets of d + {l}. The i-th sibling of n', ni, searches for the MSCG's that are supersets of d + {li}. Since any two MSCG's must differ by at least one literal, no node in the subtree rooted at ni needs to be specialized by l1, l2, ..., li-1. This avoids an inefficiency in the interference matching algorithm: an MSCG will be found at many leaf nodes of the search tree, once for every permutation of its literals. We refer to these constraints as literal constraints. The tree searched by a depth-first strategy with literal constraints is shown in Figure 3.

[Figure 2. Tree searched by interference matching]
[Figure 3. Tree searched with literal constraints]

Previous systems have used some form of connectivity and literal constraints. The next two constraints appear to be new. The observation that any two MSCG's must differ by at least one literal motivates another pruning method. This method, called root pruning, prunes the branches leading from the root. Specifically, when an MSCG is found, we can prune all of the branches from the root that are labelled by a literal in the MSCG. Because this pruning technique can only be applied once during the search, X-search waits until it finds a reasonably "large" MSCG before applying root pruning. The branches that are pruned by root pruning are marked by "X" in Figure 3. We will discuss the final pruning method, substitution pruning, after presenting the main procedures of the X-search algorithm. Figure 4 shows the X-search algorithm.
X-search is given s, a connected structural description, and p, a positive example, and returns a superset of the MSCG's of s and p. Each literal in s is matched against p to initialize its substitutions list, used later in CreateNode (Fig. 5). The search stack is initialized to contain the root node corresponding to the empty description. A node has a marked list to keep track of which literals have been tried at the node, and a literal-constraint list to keep track of which literals are not allowed to specialize it because of the literal constraint.

X-search(s, p)
    for each literal l ∈ s
        Init-Substitutions(l)
    stack := NULL; root := {}
    Push(root, stack)
    while stack ≠ NULL
        top := Pop(stack)
        while ∃ l ∈ s s.t. Expandable(top, l)
            Mark(top, l)
            next := CreateNode(top, l)
            if next ≠ FAIL
                Push(top, stack); top := next
        if top is a leaf then Push(top, Saved)
        if top is "large" apply root pruning
    remove subsumed elements from Saved
    return(Saved)

Figure 4. Basic MSCG algorithm X-search.

A literal l is added to the marked list of a node n by a call to Mark(n, l), and to the literal-constraint list by Add-Literal-Constraint(n, l). This information is used to determine which literals can be used to specialize a node. The root node can also have literals marked by root pruning. The function Expandable(d, l) returns true iff l can be used to specialize d. For the root node, this is true iff l is not on the marked or literal-constraint lists of d. For all other nodes, Expandable(d, l) returns true iff l is not on the marked or literal-constraint lists of d, and l is connected to d. The main procedure of X-search conducts a depth-first search for a superset of the MSCG's, which is stored on Saved. Afterwards, the non-maximally specific elements of Saved are removed. CreateNode forms a new node, if possible, from a node d, and a literal l.
CreateNode's main function is to find the matches from d and l to p, and to store these on the substitutions list of d and l. The matches are generated by multiplying the substitutions of d by the substitutions of l, and saving the consistent ones. If there are no consistent ones, CreateNode returns FAIL. If there are some consistent ones, CreateNode creates a literal constraint for that literal and the node d, and returns the specialization d + {l}. The literal constraints of the parent node d are inherited here by the new node d + {l}. CreateNode also calls Substitution-Prune (Fig. 6), which implements the final pruning method.

CreateNode(d, l)
    for θ ∈ Substitutions(d)
        if ¬Prune(d, θ, l) or d = root
            for sub' ∈ Substitutions(l)
                if Consistent(θ, sub')
                    Add-Substitution(d + {l}, θ + sub')
    if Substitutions(d + {l}) = NULL then return(FAIL)
    else
        for l' ∈ s
            if Literal-Constraint(d, l') = TRUE
                then Add-Literal-Constraint(d + {l}, l')
        return(d + {l})

Figure 5. Procedure CreateNode

Substitution pruning attempts to prune the generation of substitutions at a new node. This is important for efficiency because the number of substitutions at a node can grow exponentially in the depth of the tree. Substitution pruning has a second advantage: nodes without any substitutions are pruned. Thus, pruning substitutions can indirectly lead to pruning of nodes. Substitution pruning is based on an analysis of the necessity for backtracking. First, backtracking might be necessary when a node with description d matches several different parts of p. For each match, it may be possible to specialize d with one of the literals. However, it may not be possible to specialize d with any pair of these literals, because the matching literals may be in different parts of p. Even if d has only one match to p, backtracking may be needed because of the 1:1 mapping assumption. Suppose description d is matched to p with substitution θ.
Specializing d with a literal l may require assigning a constant c from a literal in p to a variable x from l. Specializing d with another literal l′ may require assigning the same constant c to a different variable x′ from l′. Because of the 1:1 mapping assumption, we cannot assign c to both x and x′, so d cannot be specialized with both l and l′. If the graph is acyclic, then two literals are incompatible only if they have the same predicate name, and their corresponding variables are bound to the same constants by θ. This defines an equivalence class of literals, any of which may map to the same subset of literals in p. If there are fewer literals in the equivalence class than corresponding literals in p, then the substitution can be pruned. Figure 6 shows the implementation of substitution pruning. The equivalence class is initialized to the first literal that matches p using the substitution θ (plus some other assignments of constants to variables). The next time Prune is called with d and θ, the substitution is pruned if the literal l is not in the equivalence class.

Empirical Results

Three experiments were run. The first experiment measured the cpu times, real times, and substitution counts for X-search as a function of the size of the examples. The second experiment compared the effects of X-search's pruning methods by running X-search without each of the pruning methods in turn. The third experiment compared X-search against a random beam search. The data were generated from the structural examples used in Hoff, Stepp, and Michalski (1983).

  Prune-Substitutions(d, θ, l)
    if d + {l} matches p using θ then
      if EquivClassLiteral(d, θ) is undefined then
        set EquivClassLiteral(d, θ) to l
        return FALSE
      else if l is not in the same equivalence class
              as EquivClassLiteral(d, θ) under θ
        return TRUE
      else if Competitors(d, θ, l) < Resources(s, θ, l) then
        return TRUE
      else return FALSE
    else return TRUE

Figure 5. Procedure CreateNode. Figure 6.
Procedure Prune-Substitutions.

Each data set consisted of a pair of positive examples. The examples of the i-th pair were created by merging i examples of i different classes from Hoff et al.'s data. Additional literals were added to the examples to connect their descriptions. The real times, cpu times, and substitution counts for X-search are shown in Table 1. The example size is the average number of literals in an example. The substitution counts for X-search in several configurations are given in Table 2. The configurations correspond to the single pruning constraint that was not used in the test run. These are: LC = literal constraints, RP = root pruning, SP = substitution pruning, and CC = connectivity constraints. Where no table entry is given, the job was killed after taking excessive time. For the third experiment, X-search and the beam search were compared along two dimensions. The first dimension measured how many nodes beam search expanded to find the full MSCG set. This was determined by rerunning beam search with incrementally increasing beam width until the full MSCG set was found. Only the number of nodes expanded during the last run was counted. The second test measured how many descriptions were found by beam search when it expanded the same number of nodes as needed by X-search. The test data consisted of about 33 pairs of examples. For the first test, beam search expanded about four times as many nodes as X-search. For the second test, beam search found 70% of the descriptions found by X-search.

Table 1. CPU time and substitution counts.

Table 2. Effect of removing pruning methods.

  Example size | LC    | RP    | SP     | CC
  45           | 13234 | 16087 | 349176 | -
  63           | 56842 | 69286 | -      | -
  86           | 67070 | 73639 | -      | -

Discussion

X-search was able to handle moderately sized descriptions, up to 86 binary predicates per example, in .4 seconds of CPU time and 226.2 seconds of real time, on a Sun4 workstation.
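The substitution counts above come from multiplying substitution lists inside CreateNode. Under the 1:1 mapping assumption, two substitutions are consistent only if no variable receives two constants and no constant is shared by two distinct variables. A minimal sketch of that check (names hypothetical, not the authors' code):

```python
def consistent(s1, s2):
    """Substitutions are variable -> constant dicts.  Under the 1:1
    mapping assumption, s1 and s2 are consistent iff no variable gets
    two constants and no constant serves two distinct variables."""
    for v, c in s2.items():
        if s1.get(v, c) != c:
            return False          # same variable, different constants
    used = {}
    for v, c in list(s1.items()) + list(s2.items()):
        if used.setdefault(c, v) != v:
            return False          # same constant, different variables
    return True

def multiply(subs_d, subs_l):
    """Extend each substitution of node d with each consistent
    substitution of literal l -- the core product in CreateNode."""
    return [{**s1, **s2} for s1 in subs_d for s2 in subs_l
            if consistent(s1, s2)]
```

The product's size, before pruning, is what grows exponentially with the depth of the search tree.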
These results indicate that full-width search may be feasible for moderate-sized examples. Previously, full-width search methods have been avoided because they are expensive even for small problems. An alternative to full-width search is a beam search, typically based on information-theoretic criteria or an evaluation function. An information-theoretic or evaluation-function based approach considers the number of positive and negative examples when evaluating partial descriptions. For example, one might prune a node if it corresponded to a description that covered many negative examples and few positive ones. This is an important source of information that methods such as version spaces ignore. Instead, methods such as version spaces are guided by their inductive bias. The advantage of the version space approach is that when there are few examples, and the examples are carefully chosen, a great deal of information about the concept can be extracted. Thus, computing MSCG's is important for problems such as analogy, or learning from a helpful teacher. In contrast, an information-theoretic approach will probably perform better when doing unsupervised learning in a complex and noisy domain with many examples. Although the pruning methods used by X-search seem to give significant increases in efficiency, the problem is still intractable in the worst case. However, efficient pruning methods can delay the point at which full-width search becomes infeasible. X-search must also be viewed from the perspective of its intended use. Typically, the learned descriptions are used as classification rules for an expert system. The expert system is limited by the same matching combinatorics as X-search, and cannot efficiently use extremely large and complex rules. The results show that removing even one of the methods leads to a substantial increase in the number of substitutions.
The most important constraint is connectivity, followed by substitution pruning, literal constraints, and root pruning. Although substitution pruning is an effective constraint, it requires that the descriptions be acyclic. This is a strong requirement, so we will be generalizing the method to handle cyclic descriptions in future research. The use of a random beam search is a weak basis for comparison. However, it was quite difficult to come up with a good evaluation function for beam search for this problem. When given only two positive examples, an information-theoretic evaluation function is of little value. Evaluation functions that score descriptions according to some desirability metric have the effect of concentrating the beam in the same region of the search space. Thus, the same MSCG was rederived by many elements of the beam. Several of our initial attempts at evaluation functions produced worse results than the random strategy, and others were insignificantly better. Although the results of this experiment were inconclusive, our difficulties have led us to conjecture that a good evaluation function for beam search must evaluate the beam as a whole, rather than any particular element of it. The empirical evaluation of X-search gives some indication of how well it performs. However, these indicators are based only on a few examples with similar structure, namely pseudo-chemical molecules. We will be conducting more extensive empirical evaluation and comparisons in future research.

Other Approaches

Earlier algorithms share some similarities with each other and with our program. Our approach also differs in some ways.

SPROUTER

Hayes-Roth's (1977) interference matching algorithm SPROUTER is one of the earliest and most widely imitated methods for learning structural descriptions.
Hayes-Roth makes the 1:1 mapping assumption in early versions of his system, although he notes the inadequacy of this assumption for learning many classes of interesting concepts. The interference matching algorithm matches literals in a member s of the specific set S against literals in p, checking the consistency of the bindings, and extending the generalized description if possible. Only the best w descriptions are kept at each step. Hayes-Roth's (1977) paper also discusses many issues in structural learning that we have not addressed here.

INDUCE

Michalski's (1983) INDUCE is one of the earliest structural learning algorithms. INDUCE is similar to interference matching in using a beam search through the space of possible abstractions, but differs in searching for a maximally general description that is consistent with the negative examples, rather than a maximally specific description that is consistent with the positive examples. INDUCE also uses a two-space search method: first a description is found in structure-only space, and then attribute-based learning methods are used to specialize the structural description. INDUCE uses an evaluation function that evaluates the completeness and consistency of the descriptions with respect to the positive and negative examples. The evaluation function gives INDUCE an information-theoretic flavor, in contrast to version spaces, which rely more heavily on the characteristics of a particular example. INDUCE's two-space search method, in which attributes are ignored when first finding a structural description, is similar to part of Falkenhainer, Forbus and Gentner's Structure Mapping Engine (1989), which ignores attributes when finding an initial set of analogical mappings. Michalski's representation is more expressive than ours, allowing attributes to be combined with operators such as <, ≤, =, ≥, and >, to permit expressions such as [distance(x,y) < 33].
INDUCE can also learn disjunctive descriptions, unlike version-space based methods.

SUBDUE

Holder's (1989) SUBDUE uses clustering methods to construct features from the examples. The examples are simplified by replacing parts of their initial descriptions by constructed features. These modified examples are given to INDUCE, which performs the actual learning. Holder's two-step approach to learning structural descriptions is more efficient than a single-step approach.

The Structure Mapping Engine

Falkenhainer, Forbus and Gentner's (1989) SME addresses the problem of structural matching in the context of analogical reasoning. Analogical reasoning has some rather different properties from learning relations, as predicates can be matched against other predicates. Falkenhainer et al.'s approach is interesting in its use of extensive knowledge to guide the matching process. The initial construction of matchings is similar to VanLehn's (1989) method in that it enumerates the possible correspondences of objects in the target and base domain. SME is efficient when knowledge about the base and target domains is available to guide the mapping process.

Sierra

VanLehn (1989) gives an efficient method for updating the G set when a negative example is encountered that matches G. VanLehn enumerates the substitutions, and uses an efficient bit-representation for the set of substitutions. He reduces the problem of updating G to a series of cover problems, and uses Well's algorithm (1971, sec. 6.4.3) for finding irredundant covers to solve the problem. The length of the bit representations is c!/(c−N)!, where N is the number of variables in s and c is the number of constants in n, the negative example. Although this number is very large, the efficiency of the bit operations is sufficient to produce some impressive results. In addition, VanLehn enforces the equivalent of type constraints on substitutions.
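The flavor of VanLehn's bit representation can be imitated with arbitrary-precision integers. The sketch below is our own illustration, not VanLehn's code: it enumerates the c!/(c−N)! one-to-one substitutions cited above and packs a set of them into an int, so that set operations on substitution sets become fast bitwise operations.

```python
from itertools import permutations

def enumerate_substitutions(variables, constants):
    """Enumerate all 1:1 assignments of constants to variables; there
    are c!/(c-N)! of them, matching the bit-vector length cited above."""
    return [dict(zip(variables, perm))
            for perm in permutations(constants, len(variables))]

def as_bitset(subs, universe):
    """Represent a set of substitutions as an integer bitset over the
    enumerated universe; intersection, union, and difference of
    substitution sets then reduce to &, |, and &~ on ints."""
    index = {tuple(sorted(s.items())): i for i, s in enumerate(universe)}
    bits = 0
    for s in subs:
        bits |= 1 << index[tuple(sorted(s.items()))]
    return bits
```

For two variables and three constants the universe has 3!/(3−2)! = 6 substitutions, and intersecting two substitution sets is a single bitwise AND.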
FOIL

Quinlan's (1989) FOIL uses a full first-order predicate representation to learn relations. An interesting feature of FOIL is its ability to learn relations in either supervised or unsupervised mode. FOIL uses an information-theoretic evaluation function to perform a heuristic search for a single description. Unlike most other structural learning systems, FOIL is capable of learning disjunctive, recursive descriptions, with embedded skolem functions.

Commonalities

Many programs, such as SME, X-search, Sierra, and SPROUTER, are incremental, conjunctive learners that rely on the conjunctive bias, their current hypothesis, and the current example to form a hypothesis. Other programs, such as INDUCE and FOIL, are non-incremental, disjunctive learners that use information-theoretic methods or an evaluation function to form a hypothesis. Haussler (1989) shows a number of interesting results about the difficulty of learning existential conjunctive descriptions. He proves that this problem is NP-complete, even when no binary relations are defined, attributes are Boolean valued, and each example contains exactly two objects. However, he notes that heuristic methods for learning existential concepts can be effective, if not always efficient. Haussler (1989) also presents a subset query method for PAC-learning existential conjunctive descriptions that have a fixed number of variables. The main technique he uses is matching positive examples with each other and with intermediate hypotheses to form MSCG's. This is the basis of many learning algorithms for existential conjunctive concepts (Dietterich 1983).

Conclusions

This paper describes a new search method, X-search, for computing the maximally specific common generalizations (MSCG's) of a description and a positive example.
Finding MSCG's is important for learning relational descriptions and plays a role in many structural learning programs. This is the main problem in updating the specific (S) set in a version space algorithm for structured descriptions. The main weakness of X-search is that under the 1:1 mapping assumption, it is restricted to descriptions corresponding to acyclic graphs. Our preliminary results indicate that X-search is a fast and effective method for computing the MSCG set.

Acknowledgements

We want to thank Ryszard Michalski for supporting the first author during part of this research, and Robert Stepp, Larry Holder, Diane Cook, and Brad Whitehall for helpful discussions about structural learning.

References

Bundy, Alan, A. Silver, and D. Plummer, An Analytical Comparison of Some Rule-Learning Programs, Artificial Intelligence, vol. 27, pp. 137-181, 1985.

Dietterich, Thomas G. and Ryszard S. Michalski, A Comparative Review of Selected Methods for Learning from Examples, in Machine Learning: An Artificial Intelligence Approach, ed. R. S. Michalski et al., pp. 41-81, Tioga, 1983.

Falkenhainer, Brian, Kenneth D. Forbus, and Dedre Gentner, The Structure-Mapping Engine: Algorithm and Examples, Artificial Intelligence, vol. 41, no. 1, pp. 1-63, 1989.

Genesereth, Michael R. and Nils J. Nilsson, Logical Foundations of Artificial Intelligence, Morgan Kaufman, 1988.

Haussler, David, Learning Conjunctive Concepts in Structural Domains, Machine Learning, vol. 4, no. 1, pp. 7-40, 1989.

Hayes-Roth, Frederick and John McDermott, Knowledge Acquisition from Structural Descriptions, Proc. Fifth International Joint Conference on Artificial Intelligence, pp. 356-362, Morgan Kaufman Publishers, Inc., Cambridge, Massachusetts, August, 1977.

Hoff, William A., Ryszard S. Michalski, and Robert E. Stepp, INDUCE 3: A Program for Learning Structural Descriptions from Examples, Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana-Champaign, Illinois.
Holder, Lawrence B., Empirical Substructure Discovery, Proceedings of the Sixth International Workshop on Machine Learning, pp. 133-136, Morgan Kaufman Publishers, Inc., Ithaca, New York, June, 1989.

Michalski, R. S., A Theory and Methodology of Inductive Learning, Artificial Intelligence, vol. 20, no. 2, pp. 111-161, 1983.

Mitchell, Thomas M., Version Spaces: An Approach to Concept Learning, Ph.D. Thesis, Stanford University, 1978.

Quinlan, J. R., Learning Relations: Comparison of a Symbolic and a Connectionist Approach, University of Sydney Technical Report no. TR-346, Basser Department of Computer Science, University of Sydney, Sydney, Australia, May, 1989.

VanLehn, Kurt, Efficient Specialization of Relational Concepts, Machine Learning, vol. 4, no. 1, pp. 99-106, 1989.
On Analytical and Similarity-Based Classification

Marc Vilain, Phyllis Koton, and Melissa P. Chase
The MITRE Corporation
Burlington Road, Bedford, MA 01730
Internet:

Abstract

This paper is concerned with knowledge representation issues in machine learning. In particular, it presents a representation language that supports a hybrid analytical and similarity-based classification scheme. Analytical classification is produced using a KL-ONE-like term-subsumption strategy, while similarity-based classification is driven by generalizations induced from a training set by an unsupervised learning procedure. This approach can be seen as providing an inductive bias to the learning procedure, thereby shortening the required training phase, and reducing the brittleness of the induced generalizations.

Introduction

Classification is a central concern of knowledge representation and machine learning. At the heart of many knowledge representation systems is a classification procedure which determines where an individual fits within the knowledge base. These classification procedures are realized in various ways, for example in rule-based frameworks, or with the term subsumption strategy of KL-ONE. They are all, however, analytic methods. A major problem for these analytic methods is that they require a large knowledge base to guide classification; typically this knowledge base is constructed by hand. The machine learning community's concern with classification addresses the above problem by automatically acquiring classification schemes from a collection of examples. Although various techniques have been developed, such as inductive learning and case-based reasoning, all can be thought of as statistical classification mechanisms.
A major problem with these methods is their "example complexity," the large number of training examples required if one wants to induce a classification that distinguishes unusual classes while ensuring that typical classes are recognized as such. Further, if the classification method is incremental, it tends to be sensitive to the order in which the examples are presented; to recover from a poor classification typically requires a large number of "normal" cases. The extent of this example complexity was measured by Aghassi (1990) in the context of the Heart Failure program, a model-based expert system that diagnoses patients with heart failure (Long et al. 1987). Aghassi estimates that in this moderately complex domain, a case-based classifier with no prior background knowledge would need to be trained with as many as 100,000 cases to provide reasonably accurate similarity classifications. We believe that the approaches to classification taken by the knowledge representation and machine learning communities complement each other and can be fruitfully combined: express the normal cases within an analytical classification framework and use the statistical classification procedure to identify the exceptional cases. To accomplish this requires a hybrid knowledge representation that combines analytical and contingent languages. In the following sections we describe a machine learning classification system and show how analytic knowledge can be incorporated within it; then we describe the hybrid knowledge representation, and discuss the relationship between this work and the machine learning community's concept of inductive bias.

A Hybrid Learning System

Background: Dynamic Memory

We have previously described a medical diagnosis program that used case-based reasoning to construct a knowledge base for classifying cases of heart failure (Koton 1988).
The knowledge base, in the form of a dynamic memory (Kolodner 1983), was constructed entirely by the program, using the cases that had been presented to it, and using assistance from the Heart Failure program. A dynamic memory records instances of cases, organizing them in a hierarchy. Cases are made up of attribute-value pairs (or features). The memory also applies a learning procedure to create generalizations, frames that record the similarities between a group of cases. Each generalization maintains a list of norms, the features that are common to most of its descendants in the hierarchy¹. The descendant of a generalization need not share all of the generalization's norms, in which case it is said to be differentiated from the generalization by its distinguishing attributes.

¹In many dynamic memory systems, "most" is implemented as ≥ 2/3.

VILAIN ET AL. 867
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

DISEASE-G2084
  CASES: 23
  NORMS
    cardiomegaly(x) = LV
    s2(x) = single
    characteristic-murmur(x) = AS
    pulse(x) = slow-rise
    apex-impulse(x) = sustained
    chest-pain(x) = anginal
    dyspnea(x) = on-exertion

Figure 1: A generalization frame.

As a result of seeing many instances of patients with similar sets of symptoms, our program created generalization categories that associated certain symptoms with other symptoms. For example, the symptom syncope on exertion identifies a disease category containing 23 of the 45 cases presented to the program. Patients who exhibited syncope on exertion also had other features in common, for example, chest pain, a heart murmur, and an enlarged heart (Figure 1). Diagnosing these patients using the Heart Failure program revealed that 16 of them had the disease aortic stenosis. What our program had done, in effect, was to create a category describing the common symptoms of aortic stenosis, without having any previous knowledge of this disease.
However, since the memory construction was guided statistically, many "typical" cases of aortic stenosis had to be presented so that unusual cases would not produce an inaccurate classification scheme. As suggested above, this requirement could have been reduced by first defining some "normal" classifications analytically.

Representing analytical and contingent knowledge

To allow for analytical knowledge, we have extended the dynamic memory representation with an analytic term-subsumption language in the style of KL-ONE. This language is used to define salient categories of a domain; for medical diagnosis, these consist of important combinations of symptoms indicating the likelihood of a disease. The statistical associations induced by the case-based learning procedure are then used to extend the analytic framework with contingent knowledge, further refinements of the framework, and exceptions². The representational challenge is in meaningfully combining the first-order analytical definitions with the contingent knowledge, which is neither analytical nor first-order. As an example of this hybrid representation, consider the representation fragment shown in Figure 2. This fragment contains two preconstructed categories, likely-cardiac-disease and likely-valve-disease.

²Much like a medical student at the beginning of the third year, whose book knowledge becomes extended by clinical experience.

LIKELY-CARDIAC-DISEASE
  DEFN λ(x) disease(x) ∧ syncope(x) = on-exertion ∧ chest-pain(x) = anginal
  NORMS
    age(x) = 60-or-greater
    sex(x) = male
    cardiac-catheterization(x) = single-or-multivessel-disease
    diagnosis(x) = cardiovascular-disease

LIKELY-VALVE-DISEASE
  DEFN λ(x) likely-cardiac-disease(x) ∧ characteristic-murmur(x) = systolic-ejection
  NORMS
    ekg(x) = LVH
    xray(x) = cardiomegaly
    cardiac-catheterization(x) = normal
    diagnosis(x) = aortic-stenosis

Figure 2: A representation fragment.

For the first,
textbook knowledge tells us that symptoms of syncope on exertion and chest pain are indicators of the likelihood of heart disease. This is captured in the representation by giving the category likely-cardiac-disease the definition³

  λx (syncope(x) = on-exertion) ∧ (chest-pain(x) = anginal).

In addition, the representation of this category indicates a number of norms, contingent properties which tend to co-occur or co-vary with syncope and chest pain (though not necessarily in the strict statistical sense), for example, age > 60, sex = male, and single- or multi-vessel disease on cardiac catheterization. In fact, the probability of syncope and chest pain being caused by cardiovascular disease is sufficiently high that the likely-cardiac-disease category includes that diagnosis as a norm. Norms are interpreted as defaults, so this knowledge structure does not imply that all patients with syncope and chest pain are males over 60 with cardiovascular disease, but that these symptoms are found primarily in older males and are caused by cardiovascular disease. The default nature of norms is evidenced by the category likely-valve-disease, which is analytically differentiated from the category of patients with syncope and chest pain by the presence of the symptom systolic ejection murmur. Here, the default diagnosis of cardiovascular disease is overridden by the default diagnosis of aortic stenosis. Also, we find a different set of symptoms that covary with syncope, chest pain, and murmur, for example, left ventricular hypertrophy on EKG and an enlarged heart on x-ray.

³Properly speaking, this definition actually denotes something like medical-case-whose-symptoms-are-likely-indicators-of-cardiac-disease. We are using shorter names for their obvious legibility benefits!

Incorporating specific cases

The norms in the preceding examples could have been defined by the user as part of the knowledge acquisition process.
More interestingly, they could equally have been induced by the dynamic memory learning procedure from a set of training cases, as with the generalization in Figure 1. The learning procedure can thus be seen as a mechanism for acquiring the contingent knowledge encoded in the representation. Additionally, by indexing training cases into the analytical hierarchy, the learning procedure extends the hierarchy with (contingent) refinements and exceptions. The learning procedure indexes training cases in the hierarchy by matching them to existing categories. For example, suppose the system is presented with a case x which is described by the features syncope=on-exertion, chest-pain=anginal, murmur=systolic-ejection, sex=male, and age=30-or-less. The values for attributes syncope, chest-pain, and murmur identify this case as an exemplar of the category likely-valve-disease. However, the value 30-or-less for attribute age distinguishes this case from the category likely-valve-disease because the majority of cases in that category have value 60-or-more for age (a norm inherited from likely-cardiac-disease). The new case is thus inserted into the hierarchy directly underneath the likely-valve-disease frame, differentiating it by providing a different value to the age attribute. When multiple cases are indexed to the same place in the hierarchy, the learning procedure creates a generalization capturing their common features as a set of norms. The cases are then indexed below the new generalization, differentiated from it by the features they fail to share. It is this process which inductively extends the analytical framework with contingent knowledge derived from the training cases.

The Analytical Language

Our analytical language is an extremely simple definitional frame language, which is based on the term subsumption strategy of KL-ONE.
The language provides three frame-forming operators which are used to form complex frame terms; definitions are performed by naming the resulting terms. Our first operator, AND, simply conjoins frame terms. We interpret the expression (AND φ1 φ2 … φn) as

  λx φ1(x) ∧ φ2(x) ∧ … ∧ φn(x)

The second operator, ATTR, restricts a frame to take a certain value for an attribute. We thus interpret the expression (ATTR a β) as

  λx β ∈ a(x)

We will adopt a restriction common in the machine learning classification literature, and treat attributes as functions. Therefore, ATTR expressions such as the one above can additionally be read as

  λx a(x) = β

Additionally, it is often necessary to form the disjunction of attribute selections, for which we use our third frame-forming operator, ATTR*. We thus interpret the expression (ATTR* a1 β1 … an βn) as

  λx (a1(x) = β1) ∨ … ∨ (an(x) = βn)

Analytical definitions are created by naming a complex term. In the heart disease domain, we might define likely cases of valve disease as follows:

  LIKELY-VALVE-DISEASE =
    (AND LIKELY-CARDIAC-DISEASE
         (ATTR CHARACTERISTIC-MURMUR SYSTOLIC-EJECTION))

As usual, such definitions as these are interpretable as universally quantified biconditionals, in this case

  ∀x LIKELY-VALVE-DISEASE(x) ↔
     LIKELY-CARDIAC-DISEASE(x) ∧
     CHARACTERISTIC-MURMUR(x) = SYSTOLIC-EJECTION

As with other KL-ONE derivatives, classification in our analytical language is performed by term subsumption. That is, frames are organized in a hierarchy, with frame φ1 placed below frame φ2 just in case the meaning of φ1 is entailed by that of φ2, i.e., just in case the sentence ∀x φ1(x) ⇒ φ2(x) is valid. We then say that φ1 is subsumed by φ2. In effect, φ1 is (non-trivially) subsumed by φ2 just in case φ1 satisfies the definition of φ2 and additionally possesses one or more attribute assignments not valid for φ2. Classifying a frame φ simply consists of finding those frames in the hierarchy that immediately subsume φ.
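For terms built only from AND, ATTR, and ATTR*, subsumption reduces to containment of attribute constraints. The following is a hypothetical sketch (our own, not the paper's system) that flattens each frame to a map from attributes to allowed value sets, with a singleton set modeling ATTR and a larger set modeling ATTR*:

```python
def subsumes(general, specific):
    """Frame terms are flattened to {attribute: set-of-allowed-values}.
    general subsumes specific iff every constraint of general is
    entailed by a (tighter) constraint of specific."""
    return all(attr in specific and specific[attr] <= values
               for attr, values in general.items())

def classify(frame, hierarchy):
    """Return the frames that immediately (non-transitively) subsume
    frame: the most specific of its subsumers in the hierarchy."""
    above = [name for name, term in hierarchy.items()
             if subsumes(term, frame) and term != frame]
    return [n for n in above
            if not any(subsumes(hierarchy[n], hierarchy[m]) and
                       not subsumes(hierarchy[m], hierarchy[n])
                       for m in above if m != n)]
```

On the heart-disease fragment, a case with syncope, anginal chest pain, and a systolic ejection murmur is classified under likely-valve-disease rather than its more general parent.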
Despite the extreme simplicity of this language, frame classification in general can be shown to be NP-complete, by a straightforward reduction from 3-SAT. However, for the knowledge bases of interest to hybrid classification, we will be concerned only with definitions in the normal form

  γ1 = (AND γ2 (ATTR a1 β1) … (ATTR ai βi)
             (ATTR* aj βj … an βn))

where γ1 and γ2 are categories, the a terms are attributes, and the β terms are values. This normal form is of interest as it characterizes the generalizations induced by the learning procedure. As we shall see below, for knowledge bases of such definitions, analytical classification is no longer NP-complete, but tractable.
The Contingent Language

The term hierarchy defined in the analytical language provides a framework within which to express the contingent associations derived by the case-based learning procedure. In essence, these associations describe the covariances of attribute-value assignments that hold for a particular generalization. These covarying feature assignments can be interpreted as mutually providing evidence for each other: if any single one of them holds, the others can be assumed to do so as well, at least in the context of the same generalization. Consider, for example, two generalizations γ1 and γ2, in which γ2 differs from γ1 by assigning to some attributes a1, a2, …, an the values β1, β2, …, βn. If we know that, for some entity e, γ1(e) ∧ a1(e) = β1, then if we additionally choose to believe γ2(e), we would assume a2(e) = β2, …, an(e) = βn by default, as these feature assignments covary with a1(e) = β1. The process of making assumptions in this way captures the essence of similarity-based classification. Knowing that e is similar in some way to instances of γ2 allows us to infer that e might possibly be an instance of γ2, and thereby possess other features common to the class. In earlier papers ((Koton & Chase 1989), (Koton, Chase, & Vilain 1990)), we applied Reiter's default logic to model similarity-based classification in case memory. This treatment can be extended to encompass analytical classification as well. We do so by interpreting the generalizations formed by the case classification procedure with the following two axiom schemata, one analytical and the other contingent. Let γ1 and γ2 be generalizations that, as above, differ in γ2's assigning values β1 … βn to attributes a1 … an. Then

  γ2 = (AND γ1 (ATTR* a1 β1 … an βn))    (link)

  γ2(x) : a1(x) = β1 / a1(x) = β1, …, γ2(x) : an(x) = βn / an(x) = βn    (norm)

The link schema creates an analytical definition for γ2, requiring it to differ from γ1 by at least one of the assignments to attributes a1 … an. The norm schema captures the covariance of the ai by introducing a normal default rule (Reiter 1980) for each assignment of some βi to the corresponding ai. It is easy to see how these axiom schemata enable similarity-based classification. For example, say that for some particular γ1, a1, β1, etc., an entity e is described by the theory θ(e) = γ1(e) ∧ (a1(e) = β1) ∧ …. Then γ2(e) is true from the biconditional interpretation of the link schema, and the remaining ai = βi become true by default, so long as they are consistent with θ(e). The contingent language of our representation scheme is just the language of normal default rules that have the

θ(e) = disease(e) ∧ syncope(e) = on-exertion ∧ chest-pain(e) = anginal ∧ characteristic-murmur(e) = systolic-ejection

θ1(e) = θ(e) ∧ diagnosis(e) = cardiovascular-disease

LIKELY-CARDIAC-DISEASE
  DEFN λ(x) disease(x) ∧ syncope(x) = on-exertion ∧ chest-pain(x) = anginal
  NORMS
    likely-cardiac-disease(x) : diagnosis(x) = cardiovascular-disease / diagnosis(x) = cardiovascular-disease
    …

θ2(e) = θ(e) ∧ …
θ2(e) = θ(e) ∧ diagnosis(e) = aortic-stenosis

LIKELY-VALVE-DISEASE
  DEFN  λ(x) likely-cardiac-disease(x) ∧ characteristic-murmur(x) = systolic-ejection
  NORMS likely-valve-disease(x) : diagnosis(x) = aortic-stenosis / diagnosis(x) = aortic-stenosis
  ...

Figure 3: A memory fragment showing cancellation.

form of the norm schema. However, there is more to the representation of contingent reasoning than the specification of those normal defaults that express the case memory covariances induced by the learning procedure. The hierarchies induced by the learning procedure typically require the cancellation of feature assignments on the basis of specificity (as with the assignments to diagnosis in Figure 3). With a default encoding of property inheritance, this leads to case memory theories having multiple extensions, only the most specific of which is legitimate (e.g., θ2 in Figure 3). We must therefore indicate how to interpret this non-monotonic aspect of property inheritance.

Understanding Cancellation

In default logic, the traditional approach towards handling cancellation in non-monotonic inheritance is through semi-normal defaults ((Etherington & Reiter 1983), (Reiter & Criscuolo 1983)). This solution is unappealing for several reasons. For one, it potentially requires encoding the topology of the entire inheritance network into each norm default. The global reasoning task this presupposes is a poor model of inheritance with cancellation, which is typically understood as a local reasoning process.

PERSON

γ1
  DEFN  λ(x) person(x) ∧ (party(x) = g.o.p. ∨ pacifism(x) = hawk)
  NORMS γ1(x) : pacifism(x) = hawk / pacifism(x) = hawk

γ2
  DEFN  λ(x) person(x) ∧ (faith(x) = quaker ∨ pacifism(x) = dove)
  NORMS γ2(x) : pacifism(x) = dove / pacifism(x) = dove

θ(nixon) = ... ∧ faith(nixon) = quaker ∧ party(nixon) = g.o.p.
θ1(nixon) = θ(nixon) ∧ pacifism(nixon) = dove
θ2(nixon) = θ(nixon) ∧ pacifism(nixon) = hawk

Figure 4: The infamous Nixon diamond.
Additionally, with a semi-normal encoding of cancellation, the default characterizations of memory structures induced by the learning procedure are not necessarily ordered in Etherington's sense (Koton, Chase, & Vilain 1990). This means that they can not be guaranteed an extension by existing default proof theories (Etherington 1988). As case memory theories actually always do have extensions, this makes a semi-normal encoding of cancellation even less appropriate. We have chosen the alternative of separating inheritance from cancellation, expressing property inheritance with normal defaults (as above), and relying on an external criterion to determine cancellation. In (Koton, Chase, & Vilain 1990), we describe one such criterion based on Poole's notion of theory preference (Poole 1985). We can easily extend this criterion (and in fact simplify it) to encompass our analytical terms. Say θ(e) is some theory of some entity e, and say θ1(e) and θ2(e) are extensions of θ(e) that assign different values β1 and β2 to an attribute a. Note that these values must have been respectively assigned by some default rules δ1 and δ2, each of the form γi(x) : a(x) = βi / a(x) = βi. We then say that θ1 <a θ2 just in case γ1 (the precondition of δ1) subsumes γ2 (the precondition of δ2). Subsumption in this case is simply the relationship of meaning-entailment defined over terms in the analytical language, which is used to define the preconditions of defaults. In Figure 3, for example, θ1 <diagnosis θ2, because the precondition of the default that assigned cardiovascular-disease to diagnosis in θ1 is subsumed by that of the default that assigned it aortic-stenosis in θ2. To enforce cancellation on some feature a, we simply select as our preferred extensions those maximal in <a. To extend this minimality criterion into a theory-preference criterion, say θ(e) is a theory of some entity e, with extensions θ1(e) ... θn(e). From among these θi, the preferred extensions are those which are maximal in <a for all attributes a.
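A minimal sketch of this preference criterion, assuming conjunctive preconditions encoded as sets of attribute-value pairs so that subsumption reduces to the subset relation; the encoding and names are illustrative, not the paper's implementation:

```python
# Sketch of the theory-preference criterion (illustrative). A precondition is
# a frozenset of (attribute, value) pairs; gamma1 subsumes gamma2 when
# gamma1's assignments are a subset of gamma2's (gamma2 is more specific).

def subsumes(gamma1, gamma2):
    return gamma1 <= gamma2

def preferred_extensions(extensions):
    """extensions: list of (precondition, assigned_value) pairs for a single
    attribute. Keep only those whose assigning default's precondition is not
    strictly subsumed by (i.e., less specific than) another's."""
    keep = []
    for pre_i, val_i in extensions:
        dominated = any(subsumes(pre_i, pre_j) and pre_i != pre_j
                        for pre_j, _ in extensions)
        if not dominated:
            keep.append((pre_i, val_i))
    return keep

# Figure 3 analogue: the valve-disease precondition extends the cardiac one,
# so its assignment to diagnosis is preferred.
cardiac = frozenset({("syncope", "on-exertion"), ("chest-pain", "anginal")})
valve = cardiac | {("characteristic-murmur", "systolic-ejection")}
exts = [(cardiac, "cardiovascular-disease"), (valve, "aortic-stenosis")]
print(preferred_extensions(exts))  # only the aortic-stenosis extension

# Nixon-diamond analogue: incomparable preconditions, both extensions survive.
g1 = frozenset({("party", "g.o.p.")})
g2 = frozenset({("faith", "quaker")})
print(len(preferred_extensions([(g1, "hawk"), (g2, "dove")])))  # 2
```

The subset test stands in for meaning-entailment over the analytical language; a full implementation would compute subsumption over the term hierarchy instead.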
In Figure 3, for example, θ2 is preferred over θ1, thus cancelling the assignment of cardiovascular-disease to diagnosis which holds in θ1. In contrast, the θ1 and θ2 of Figure 4 are both maximal, since the defaults that lead to their incompatibility on pacifism have incomparable preconditions (neither one subsumes the other). In essence, our preference criterion selects those extensions of a theory in which attribute values are only assumed using the most specific applicable default. What's especially appealing about this approach is that the determination of specificity is entirely cast in terms of the analytical language, which naturally supports specificity through subsumption.

Integrated Classification

The knowledge representation task begins by using the analytical language to define initial categories which will later be extended by the learning algorithm.4 The representation hierarchy is then built by classifying these definitions with respect to the subsumption relation; as noted above, this is an NP-complete process. This NP-completeness must be taken in perspective, however. For the purpose of identifying salient diagnostic categories, one can limit definitions to conjunctions of feature assignments created by only using AND and ATTR expressions. In this case, classification becomes tractable with an algorithm such as that of (Schmolze & Lipkis 1983). Next, the hierarchy is extended with the case memory learning procedure. This is accomplished by classifying training cases using both analytical and similarity criteria. Simultaneously, running statistics are maintained of the relative frequency of attribute-value assignments:

4 These are similar to the T Box definitions of (Brachman, Fikes, & Levesque 1983) which are completed by (user-declared) A Box axioms.
these are used to establish norms, and to reorganize the hierarchy should it prove to reflect a skewed presentation order. We will say little about the statistical aspects of the learning procedure, focusing on the classification process instead (see (Kolodner 1983) for details).

The classification algorithm

The hierarchy is implemented as a graph in which categories are connected by two kinds of attribute-value links. Necessary links correspond to conjoined attribute-value assignments, such as those in the analytical definitions. Contingent links correspond to disjuncts from ATTR* expressions, and are used to implement similarity-based classification. The key to the operation of the algorithm is that all of the contingent links connecting two nodes are interpreted as part of the same disjunction.

Turning to the details of the classification algorithm, to classify a case x, with initial description θ:

(1) Let known be a list of properties known to be true of x, and let defaults be a list of properties conjectured to be true of x. The known properties are simply those which hold in θ; the conjectured ones are added by the norm defaults.

(2) Starting with the root of the hierarchy, proceed down the hierarchy by following links which match propositions in known.

(3) (Analytical classification) If nodes γ1 and γ2 are linked by any necessary links, then to proceed from γ1 to γ2 all such links must be matched by properties in known.

(4) (Similarity classification) Separately, if nodes γ1 and γ2 are only linked by contingent links, then to proceed from γ1 to γ2 at least one such link must be matched by properties in known.

(5) Potentially, several paths could be followed from a given node γ, each leading to a different classification. If so, the algorithm follows each path independently.

(6) For each norm at each node, if the norm does not contradict some property in known, it is added to defaults, overriding any properties in defaults which it contradicts.
Interpreting all of the contingent links between two nodes as a single disjunction effectively restricts the analytical definitions of categories to be in the normal form

γ1 = (AND γ2 (ATTR a1 p1) ... (ATTR ai pi) (ATTR* aj pj ... an pn))

The ATTR expressions in the normal form correspond to feature conjunctions specified in the category definitions predefined by the user. The ATTR* expression encodes the feature assignments by which the learning procedure determined the category to be differentiated from its parent in the hierarchy.

(7) The algorithm terminates (along each independent path) when the leaves of the hierarchy are reached, or no links can be followed from a node. The algorithm returns λ1 ... λn, the nodes at the end of each path, along with the values of known and defaults.

It can be shown that the nodes returned by this algorithm define the maximal extensions of the initial theory θ(x). The theories are simply defined by the properties which hold at these nodes (the union of known and defaults in the algorithm). It is also easy to show that the algorithm computes these extensions in polynomial time. For details see (Koton, Chase, & Vilain 1990).

In order to actually add x to the hierarchy, x must additionally be indexed to each λi in λ1 ... λn. If λi is a generalization, x is inserted below λi and linked to it by a contingent link for each property of λi which x does not share. If λi is another case, a generalization x' is first created, and given as norms those properties shared by λi and x. These two cases are then indexed below x' with a contingent link for each property they don't share with x'. Indexing x below the λi effectively joins the separate paths taken by the algorithm. It is easy to show that indexing a case to the hierarchy with this strategy effectively provides the case with a normal form definition. It is also possible to show that the structure of the hierarchy encodes the subsumption relations between the definitions of its category nodes.5
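The traversal in steps (1)-(7) can be sketched as follows. The Node class, the single shared defaults dictionary, and the toy hierarchy are simplifying assumptions for illustration; the paper's algorithm keeps known and defaults per path:

```python
# Sketch of the classification algorithm over a hierarchy of nodes connected
# by necessary and contingent links (illustrative; names are our own).

class Node:
    def __init__(self, name, necessary=(), contingent=(), norms=()):
        self.name = name
        self.necessary = frozenset(necessary)    # all must match known
        self.contingent = frozenset(contingent)  # at least one must match
        self.norms = dict(norms)                 # attribute -> default value
        self.children = []

def reachable(child, known_pairs):
    if child.necessary:                          # step 3: analytical links
        return child.necessary <= known_pairs
    return bool(child.contingent & known_pairs)  # step 4: similarity links

def classify(root, known):
    """known: dict of attribute -> value. Returns terminal node names and
    accumulated default conjectures (steps 1-7, simplified)."""
    known_pairs = set(known.items())
    results, defaults = [], {}
    stack = [root]
    while stack:                         # step 5: follow paths independently
        node = stack.pop()
        for a, v in node.norms.items():  # step 6: add non-contradicted norms;
            if known.get(a, v) == v:     # deeper (more specific) ones override
                defaults[a] = v
        nxt = [c for c in node.children if reachable(c, known_pairs)]
        if nxt:
            stack.extend(nxt)
        else:                            # step 7: leaf or dead end
            results.append(node.name)
    return results, defaults

# A fragment of the medical hierarchy from Figure 3:
disease = Node("disease")
cardiac = Node("likely-cardiac-disease",
               necessary=[("syncope", "on-exertion"), ("chest-pain", "anginal")],
               norms=[("diagnosis", "cardiovascular-disease")])
valve = Node("likely-valve-disease",
             necessary=[("characteristic-murmur", "systolic-ejection")],
             norms=[("diagnosis", "aortic-stenosis")])
disease.children.append(cardiac)
cardiac.children.append(valve)

case = {"syncope": "on-exertion", "chest-pain": "anginal",
        "characteristic-murmur": "systolic-ejection"}
print(classify(disease, case))
# (['likely-valve-disease'], {'diagnosis': 'aortic-stenosis'})
```

Note how the valve-disease norm, applied deeper along the path, overrides the cardiac-disease norm in defaults, mirroring the cancellation behavior described earlier.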
This leads directly to the tractability of computing the subsumption of the normal form definitions induced by the learning procedure.

A classification example

To illustrate this classification process, say we had created a hierarchy by predefining the categories likely-cardiac-disease and likely-valve-disease. The definitions of these categories are reproduced in Figure 5 (below), along with some norms that might have been assigned at some point to these categories by the learning procedure (assume for now that disease-g0017 has not been created yet). Say we are now shown a case e of youthful aortic stenosis, a fictitious but illustratively useful disease, with e characterized by the following feature assignments.

syncope(e) = on-exertion
chest-pain(e) = anginal
characteristic-murmur(e) = systolic-ejection
age(e) = 30-or-under
diagnosis(e) = youthful-aortic-stenosis

5 In brief, this follows from the fact that the same attribute-value pair can never appear twice on a given path, thus imposing an ordering on the disjunctive components of definitions.

LIKELY-CARDIAC-DISEASE
  DEFN (= necessary links)   λ(x) disease(x) ∧ syncope(x) = on-exertion ∧ chest-pain(x) = anginal
  NORMS (= contingent links) age(x) = 60-or-greater
                             diagnosis(x) = cardiovascular-disease

LIKELY-VALVE-DISEASE
  DEFN (necessary links)     λ(x) likely-cardiac-disease(x) ∧ characteristic-murmur(x) = systolic-ejection
  NORMS (contingent links)   age(x) = 60-or-greater (inherited)
                             diagnosis(x) = aortic-stenosis

DISEASE-G0017 "Youthful valve disease"
  NORMS (contingent links)   age(x) = 30-or-under
                             diagnosis(x) = youthful-aortic-stenosis
                             ...

Figure 5: An induced category (DISEASE-G0017).

During classification, say e enters the hierarchy at the level of disease. It is then compared to the necessary links between disease and its descendant likely-cardiac-disease.
These necessary links are simply the two explicit feature assignments in the latter category's definition: syncope(x) = on-exertion and chest-pain(x) = anginal. Since e has feature assignments to syncope and chest-pain that match these necessary links, it traverses them and is thus analytically classified as an instance of likely-cardiac-disease. Similarly, e matches the necessary links between likely-cardiac-disease and likely-valve-disease, allowing it to be analytically classified as an instance of the latter category. The case is then entered into the hierarchy by creating node disease-g0017 and giving it as norms the feature assignments to age and diagnosis that differentiate e from likely-valve-disease.

These norms are in turn interpreted as contingent links between likely-valve-disease and disease-g0017; these can be exploited to perform similarity-based classification. For example, say we are now shown a new case f which shares e's feature assignments for syncope, chest-pain, characteristic-murmur, and age, but has no assignment to diagnosis. Like e, f will be classified analytically below likely-cardiac-disease and likely-valve-disease. From the latter, the contingent link for age can be followed down to disease-g0017, as f also assigns 30-or-under to the age feature. This classifies f by similarity as an instance of disease-g0017; f then inherits this category's consistent norms by default, including an assignment of youthful-aortic-stenosis to diagnosis.

Relation to Inductive Bias

The enterprise we have described above can be related to a concern of researchers in the machine learning community, inductive bias. The task of the statistical classification methods described earlier is to induce a classification upon being presented with a sequence of examples. That is, the learning program partitions the examples into a set of (not necessarily disjoint) classes.
When the examples are labeled with their class (often just a binary labeling), the task is called "learning from examples" or "supervised learning," and the learning program produces an intensional description of the classes. When the examples are not labeled, and the learning program must induce the classes as well as the intensional descriptions of the classes, the task is called "concept formation" or "unsupervised learning" (Gennari, Langley, & Fisher 1989). Both learning tasks may be viewed as a search through a space of hypotheses, in which each hypothesis represents a partition of the examples. To keep the learning task tractable, machine learning researchers early recognized the need to incorporate into their systems an inductive bias, namely, some mechanism for controlling the search of the hypothesis space (Russell & Grosof 1990). This bias controls the number of examples needed to induce a classification. There are two major types of bias (Utgoff 1986): (1) restricting the hypothesis space, and (2) ordering the hypotheses. The first bias is usually imposed by limiting the concept description language; the second is often achieved through some general preference, such as preferring a simpler to a more complex description. More recently, researchers have proposed a more general framework for inductive bias, namely, viewing it as prior knowledge that becomes part of the context within which the learning system operates (Russell & Grosof 1990). In particular, explanation-based learning, an analytic learning technique, can be viewed as using "background knowledge" as a type of bias; the generalizations produced through explanation-based learning are biased towards those that can be explained in terms of this background knowledge (Ellman 1989). Much of the research into inductive bias, particularly that which makes use of analytic learning techniques, has focused upon (supervised) learning from examples.
Our hybrid learning system, however, can be viewed as employing an analytical classification method as an inductive bias for an unsupervised concept formation task. In our system, the hypothesis space is implicitly represented; the current memory structure represents the current hypothesis regarding the classification scheme. The prior knowledge encoded in the memory structure is a declarative bias. When a new instance is presented to be incorporated into the memory structure (perhaps causing the memory to be restructured), the classification algorithm can be viewed as conducting a search of the hypothesis space, using the analytic knowledge as a bias focusing the search.

Conclusion

In this paper we have presented a mechanism for combining the machine learning and knowledge representation approaches to classification, and described a hybrid knowledge representation appropriate for handling analytic and contingent knowledge. Our goal has been to use analytic knowledge as an inductive bias to focus the statistical learning procedure. One obvious extension is to use prior contingent knowledge along with analytic knowledge to set up the initial memory. The analytic language was used to define the essential links of the hierarchical classification; we could also use the contingent language to define some of the well-known, textbook covarying features in advance. For example, a patient with general symptoms of heart disease who happens to be thirty or younger is almost certainly suffering from valve disease, despite the fact that the age norm for valve disease is greater than sixty. We expect that allowing for such non-analytic associations as these would further reduce the statistical learning method's sensitivity to exceptional cases and to poor presentation order. We should note that our work to date has focused primarily on theoretical considerations of representation and tractability.
We intend to further validate our results by carrying out experiments to compare the performance of the learning method with and without the use of prior analytic knowledge. Finally, we would like to implement our work in other diagnostic domains and explore the feasibility of this approach to other application areas.

References

Aghassi, D. 1990. Evaluating Case-based Reasoning for Heart Failure Diagnosis. SM thesis, Lab for Computer Science, Massachusetts Institute of Technology.

Brachman, R.J., Fikes, R.E., and Levesque, H.J. 1983. Krypton: A Functional Approach to Knowledge Representation. Computer, 16(10):67-73.

Ellman, T. 1989. Explanation-Based Learning: A Survey of Programs and Perspectives. Computing Surveys, 21:163-221.

Etherington, D.W. 1988. Reasoning with Incomplete Information. Research Notes in Artificial Intelligence. Morgan Kaufmann Publishers, Inc., San Mateo, CA.

Etherington, D.W., and Reiter, R. 1983. On Inheritance Hierarchies with Exceptions. In Proceedings of the Third National Conference on Artificial Intelligence, Washington, DC.

Gennari, J.H., Langley, P., and Fisher, D. 1989. Models of Incremental Concept Formation. Artificial Intelligence, 40:11-61.

Kolodner, J.L. 1983. Maintaining Organization in a Dynamic Long-Term Memory. Cognitive Science, 7:243-280.

Koton, P.A. 1988. Reasoning about Evidence in Causal Explanations. In Proceedings of the Seventh National Conference on Artificial Intelligence, St. Paul, MN.

Koton, P.A., and Chase, M.P. 1989. Knowledge Representation in a Case-Based Reasoning System: Defaults and Exceptions. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, Toronto, Ontario, Canada.

Koton, P.A., Chase, M.P., and Vilain, M.B. 1990. Knowledge Representation in a Case-Based Reasoning System: An Extended Version. In preparation.

Long, W.J., Naimi, S., Criscitiello, M.G., and Jayes, R. 1987.
The Development and Use of a Causal Model for Reasoning about Heart Failure. In Proceedings of the 11th Symposium on Computer Applications in Medical Care.

Poole, D. 1985. On the Comparison of Theories: Preferring the Most Specific Explanation. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA.

Reiter, R. 1980. A Logic for Default Reasoning. Artificial Intelligence, 13:81-132.

Reiter, R., and Criscuolo, G. 1983. Some Representational Issues in Default Reasoning. Computers and Mathematics with Applications, 9:15-27.

Russell, S.J., and Grosof, B.N. 1990. Declarative Bias: An Overview. In D. Paul Benjamin, ed. Change of Representation and Inductive Bias. Kluwer Academic Publishers, Boston, Dordrecht, London.

Schmolze, J.G., and Lipkis, T.A. 1983. Classification in the KL-ONE Knowledge Representation System. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, FRG.

Utgoff, P. 1986. Machine Learning of Inductive Bias. Kluwer Academic Publishers, Boston, Dordrecht, Lancaster.
Refinement of Approximate Domain Theories by Knowledge-Based Neural Networks*

Geoffrey G. Towell    Jude W. Shavlik    Michiel O. Noordewier
University of Wisconsin - Madison
1210 West Dayton Street
Madison, Wisconsin 53706

Abstract

Standard algorithms for explanation-based learning require complete and correct knowledge bases. The KBANN system relaxes this constraint through the use of empirical learning methods to refine approximately correct knowledge. This knowledge is used to determine the structure of an artificial neural network and the weights on its links, thereby making the knowledge accessible for modification by neural learning. KBANN is evaluated by empirical tests in the domain of molecular biology. Networks created by KBANN are shown to be superior, in terms of their ability to correctly classify unseen examples, to randomly initialized neural networks, decision trees, "nearest neighbor" matching, and standard techniques reported in the biological literature. In addition, KBANN's networks improve the initial knowledge in biologically interesting ways.

Introduction

Explanation-based learning (EBL) (Mitchell et al. 1986; DeJong & Mooney 1986) provides a way of incorporating pre-existing knowledge into a learning system. However, the basic algorithms suffer from the fact that the pre-existing knowledge cannot contain imperfections (Mitchell et al. 1986). Conversely, empirical learning is a method for learning solely from training examples (e.g., Quinlan 1986). Empirical learning systems have problems such as misclassification due to spurious correlations in the training data. Recent work (e.g., Flann & Dietterich 1989; Shavlik & Towell 1989) combines empirical and explanation-based learning to overcome the problems of each approach by using training examples to inductively refine pre-existing knowledge.
Beyond overcoming the problems of each approach, hybrid systems should, after training, be superior to EBL systems in terms of the range of examples over which they are correct. Moreover, given the same set of training examples, hybrid systems should be superior, in terms of classification accuracy, to empirical learning systems.

*This research was partially supported by a grant from the University of Wisconsin Graduate School.

This paper describes the KBANN (Knowledge-Based Artificial Neural Networks) hybrid learning system and demonstrates its superiority to empirical and explanation-based learning systems along these dimensions. Briefly, KBANN uses a knowledge base of hierarchically-structured rules which may be both incomplete and incorrect to form an artificial neural network (ANN). In so doing, KBANN makes it possible to apply neural learning techniques to the empirical, incremental improvement of knowledge bases. At present, KBANN is restricted to non-recursive, propositional (i.e., variable-free) domain theories. Under these restrictions, the ability of EBL to speed up a problem solver (Minton 1988) is not utilized. While this speedup is the primary strength of EBL, the secondary strengths of this form of learning are directly applicable. Specifically, the domain theory indicates the features which are believed to be important to an example's classification. The theory also specifies important derived features; through their deduction the complexity of an ANN's final decision is reduced. The following section presents the KBANN algorithm. In the subsequent section, KBANN is applied to a real-world problem in the domain of molecular biology. KBANN is shown to produce results better than those reported in the biological literature. Additionally, KBANN's results are shown to be superior to randomly started ANNs, ID3 (Quinlan 1986) - a symbolic empirical learning system, and "nearest neighbor" classification.
Moreover, ANNs created by KBANN are shown to have improved upon the original domain theory in biologically interesting ways. The paper concludes with a discussion of research related to KBANN and the areas which our research is currently pursuing.

The KBANN Algorithm

KBANN uses a knowledge base of domain-specific inference rules in the form of PROLOG-like clauses to define what is initially known about a topic. The knowledge base need be neither complete nor correct; it need only support approximately correct explanations. KBANN translates the knowledge base into an ANN in which units and links1 in the ANN correspond to parts of the knowledge base, as described in Table 1. The next section presents the approach KBANN uses to translate rules into neural networks. Subsequent sections present the KBANN algorithm and provide an example of its operation.

TOWELL ET AL. 861
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Table 1: Knowledge Base - ANN Correspondences

  Knowledge Base           | Neural Network
  Final Conclusions        | Output Units
  Supporting Facts         | Input Units
  Intermediate Conclusions | Hidden Units
  Dependencies             | Weighted Connections

Translation of rules

This section describes how KBANN translates rules containing AND, OR and NOT into an ANN. Rules are assumed to be conjunctive, nonrecursive and variable-free; disjuncts are encoded as multiple rules. (To simplify discussion in this section, only binary-valued features are assumed to exist. Handling of non-binary features is described on the next page.) The KBANN method sets weights on links and biases of units so that units have significant activation only when the corresponding deduction could be made using the knowledge base. For example, assume there exists a rule in the knowledge base with n mandatory antecedents (i.e., antecedents which must be true) and m prohibitory antecedents (i.e., antecedents which must not be true).
The system sets weights on links in the ANN corresponding to the mandatory and prohibitory dependencies of the rule to w and -w, respectively. The bias on the unit corresponding to the rule's consequent is set to n·w - φ. φ is a parameter chosen so that units have activation ≈ 0.9 when their antecedents are satisfied and activation ≈ 0.1 otherwise.2 This mapping procedure is sufficient only for a purely conjunctive knowledge base. Disjuncts cannot be handled because there is no way to set the bias of a unit that can be "deduced" in multiple ways such that no unintended combinations are allowed. For example, assume there exists a consequent r which can be proven by two rules, R1 and R2. Further assume that there are seven antecedents (labeled 0, ..., 6) of r and that antecedents [0 1 2] are mandatory for R1 while antecedents [3 4 5 6] are mandatory for R2. If the antecedents of R1 and R2 are all connected to r such that either [0 1 2] or [3 4 5 6] can activate r, then there is no way to set the bias of r such that unwanted combinations (e.g., [0 1 3 4]) cannot also activate r. KBANN handles disjuncts by creating units r1 and r2, which correspond to R1 and R2, using the approach for conjunctive rules described above.

1 Unit refers to a processing element in a neural network. Link refers to a connection between units.
2 Currently, we use w = 3.0 and φ = 2.3, values empirically found to work well on several domains.

These units will only be active when their corresponding rule is true. KBANN then connects r1 and r2 to r by links of weight w and sets the bias of r to w - φ. Hence, r will be active when either r1 or r2 is active.

Algorithm specification

Three additional steps are required to complete the ANN following the initial translation of the knowledge base. First, input units corresponding to features of the environment that do not appear as an antecedent of any rule must be added to the network.
These units are necessary because an approximately correct knowledge base may not have used some features that are necessary to accurately express a concept. Second, links must be added to the network to give existing rules access to items not mentioned in the knowledge base. These links initially have weight equal to zero. They are placed by grouping units according to their maximum path length from an input unit and adding links between all units in successive groups. Third, the network must be perturbed by adding random numbers within ε of zero to all link weights and biases to avoid symmetry breaking problems (Rumelhart et al. 1986).3 The KBANN algorithm is summarized in Table 2. Once the network is produced, it is refined by providing training examples which are processed using backpropagation (Rumelhart et al. 1986).

Table 2: Overview of the KBANN Algorithm
1. Translate rules to set initial network structure.
2. Add units not specified by translation.
3. Add links not specified by translation.
4. Perturb the network by adding near zero random numbers to all link weights and biases.

Example of the algorithm

As an example of the KBANN method, consider the artificial knowledge base in Figure 1a which defines membership in category A. Figure 1b represents the hierarchical structure of these rules: solid and dotted lines respectively represent necessary and prohibitory dependencies. Figure 1c represents the ANN that results from the translation into a neural network of this knowledge base. Units X and Y in Figure 1c do not correspond directly to consequents in the knowledge base. Rather, the units are introduced to handle the disjunction in the knowledge base as described above. The thick lines in Figure 1c represent the links in the ANN that correspond to dependencies in the explanation. Thus, with w = 3, the weight on thick solid lines is 3, while the weight on thick dotted lines is -3.
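The weight and bias assignments just described (w = 3.0, φ = 2.3) can be sketched as follows; the sigmoid unit function and the helper names are our assumptions for illustration, not KBANN's code:

```python
import math

# Sketch of KBANN's rule-to-unit translation (w = 3.0, phi = 2.3 as in the
# paper). A rule with n mandatory and m prohibitory antecedents becomes a
# unit with weights +w / -w and bias n*w - phi. A sigmoid unit function is
# assumed here for illustration.

W, PHI = 3.0, 2.3

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rule_unit(n_mandatory, n_prohibitory):
    """Return (weights, bias) for the unit of a conjunctive rule."""
    weights = [W] * n_mandatory + [-W] * n_prohibitory
    bias = n_mandatory * W - PHI
    return weights, bias

def activation(weights, bias, inputs):
    net = sum(w * x for w, x in zip(weights, inputs)) - bias
    return sigmoid(net)

# A rule with two mandatory antecedents and one prohibitory antecedent
# (bias = 2*3.0 - 2.3 = 3.7, like unit A in Figure 1c):
weights, bias = rule_unit(2, 1)
print(activation(weights, bias, [1, 1, 0]) > 0.5)  # True: rule satisfied
print(activation(weights, bias, [1, 1, 1]) > 0.5)  # False: prohibited

# Disjunctive consequent: r1 and r2 feed r with weight w and bias w - phi,
# so a single active disjunct turns r on.
print(activation([W, W], W - PHI, [1, 0]) > 0.5)   # True
```

With these parameter values the satisfied case lands near activation 0.9, as the paper states, while a fully unsatisfied unit sits well below 0.1.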
The lighter solid lines represent the links added to the network to allow refinement of the domain theory.

3 We currently use ε = 0.01.

A :- B, C.
B :- not F, G.
B :- not H.
C :- I, J.

Figure 1: Translation of a Knowledge Base into an ANN

Numbers beside the unit names in Figure 1c are biases of the units. So, with φ = 2.3, the bias of unit B is set to 0.7 so it is activated when either X or Y is active. The bias of unit A is set to 3.7 so it is active only when both C and B are active. Conversely, the bias of unit X is set to 0.7 so that it will be active when input is received from unit G and not from unit F. Lacking any mandatory antecedents, the bias of Y is set to -2.3. Hence, Y will be active except when H is active.

Handling non-binary features

Currently, the system can handle three types of features: nominal, linear and hierarchical. Discussions of the exact approach used to handle these feature types, and the added information they require, follow.

Nominal. Nominally valued features (i.e., features whose possible values can be listed and have no structure) are handled by assigning one input unit to each value of the feature. To do this, KBANN must be given a list of the possible values a feature can have. For example, if the feature color is stated to have three values: red, green and blue, then three input units: color-is-blue, color-is-red and color-is-green, will be created.

Linear. Linear features may take on an infinite number of values within some region of the number line. As a result, the method for handling nominal features cannot be used. Instead, KBANN requires a user-defined partitioning of the range over which the feature varies. The partitioning provides a nominal representation (e.g., small, medium, and large) which may be used by rules in a knowledge base. In addition, each partition is assigned an input unit in the ANN. For values within the partition of a particular unit, the unit has an activation of one.
Outside the partition, the unit should still be active according to how close the value is to the partition. This allows the network to learn subranges that are not among those initially specified. To implement this, units for linear features have the activation function

Φ(MAX(0, (ABS(Midpoint - Actual) - Range/2)) / Range)

where: Midpoint is the midpoint of the subrange, Range is the width of the subrange, Actual is the exact value of the feature, and Φ is a function based upon the standard normal distribution.[4]

Hierarchical. Hierarchical features are handled, with one exception, as if a set of rules defined the ISA hierarchy. The exception is that whenever a rule in the knowledge base refers to an element in a hierarchy, in addition to the high-weight link from that element, low-weight links are created from all ancestors and descendants of the element. So, looking at Figure 2, if a rule contains non-insulating as an antecedent, the unit corresponding to the consequent of this rule would be given low-weight links to material, paper and ceramic. In this way, the network is given the capability to specialize or generalize the initial rule according to the hierarchy.

Figure 2: A Hierarchy of Cup Materials. Material divides into Non-insulating (Ceramic, Paper) and Insulating (Styrofoam, Open-cell foam).

Experimenting with KBANN

This section reports a study of the utility of the KBANN algorithm. The real-world task of recognizing biological concepts in DNA sequences was investigated. In particular, the task was to recognize promoters in strings that represent nucleotides (one of A, G, T, or C). A promoter is a genetic region which initiates the first step in the expression of an adjacent gene (transcription). Table 3 contains the initial domain theory used in the promoter recognition task. The first rule says that a promoter involves two subcategories: a contact and a conformation region.
The second rule states that a contact involves two regions, while subsequent rules define alternative ways these regions can appear. This set of rules was easily derived by one of us (Noordewier, who is also a biologist) from the biological literature (Harley & Reynolds 1987; Hawley & McClure 1983). It should be noted that this domain theory fails to correctly classify any positive example in the training set.

[4] The standard normal distribution is a common statistical probability distribution.

TOWELL ET AL. 863

Table 3: A Domain Theory For Promoters
promoter :- contact, conformation.
contact :- minus-35, minus-10.
minus-35 :- @-37 "cttgac".
minus-35 :- @-36 "ttgxca".
minus-35 :- @-36 "ttgaca".
minus-35 :- @-36 "ttgac".
minus-10 :- @-14 "tataat".
minus-10 :- @-13 "taxaxt".
minus-10 :- @-13 "tataat".
minus-10 :- @-12 "taxxxt".
conformation :- @-45 "aaxxa".
conformation :- @-45 "axxxa", @-4 "t", @-28 "txxxtxaaxxtx".
conformation :- @-49 "axxxxt", @-1 "a", @-27 "txxxxaxxtxtg".
conformation :- @-47 "caaxttxac", @-22 "gxxxtxc", @-8 "gcgccxcc".

The input features are 57 sequential DNA nucleotides. A special notation is used to simplify specifying locations in the DNA sequence. The biological literature counts locations relative to the site where transcription begins. Fifty nucleotides before and six following this location constitute an example. When a rule's antecedents refer to input features, they first state the starting location, then list the sequence that must follow. In these specifications, "x" indicates that any nucleotide will suffice. Hence, the first rule for conformation says that there must be an "a" 45 nucleotides before the site where transcription begins. Another "a" must be at position -44, then any two nucleotides can appear, and finally there must be a "t" at location -41.

Figure 3: The Initial ANN for Promoter Recognition, from the DNA sequence inputs up to the promoter output unit.

This domain theory is translated by KBANN into a neural network with the topology shown in Figure 3.
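The positional notation of Table 3 can be expanded mechanically into per-position constraints. The sketch below reflects our reading of that notation (the parser name `expand` is ours, and skipping position 0 follows the biological numbering convention, which the paper does not spell out):

```python
# Hypothetical expansion of a rule antecedent such as  minus-10 :- @-14 "tataat":
# the pattern starts at the stated location; "x" matches any nucleotide.
def expand(start, pattern):
    """Return {position: required_base} for one positional antecedent."""
    out, pos = {}, start
    for base in pattern:
        if base != 'x':          # "x" places no constraint on that position
            out[pos] = base
        pos += 1
        if pos == 0:             # biological numbering: -1 is followed by +1
            pos = 1
    return out

print(expand(-14, "tataat"))
# {-14: 't', -13: 'a', -12: 't', -11: 'a', -10: 'a', -9: 't'}
```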
Recall that the algorithm adds additional low-weighted links (not shown) so that if additional sequence information is relevant, the algorithm can capture that information during training.

Fifty-three sample promoters and 53 nonpromoter sequences were used to refine the initial neural network. The 53 sample promoters were obtained from a compilation produced by Hawley and McClure (1983). An initial concern of ours was the construction of negative training examples (i.e., sequences that contained no promoters). Most studies randomly permute sequences in an effort to derive examples that do not meet the consensus criteria described below, but nonetheless retain the correct nucleotide frequencies (Lapedes et al. 1989). DNA, however, is known to be highly non-random. Negative training examples were thus derived by selecting contiguous substrings from a 1.5 kilobase sequence provided by Prof. T. Record of the Univ. of Wisconsin's Chemistry Dept. This sequence is a fragment from E. coli bacteriophage T7 isolated with the restriction enzyme HaeIII. By virtue of the fact that the fragment does not bind RNA polymerase, it is believed to not contain any promoter sites [Record, personal communication].

In order to get an estimate of how well the algorithm learned the concept of promoter, a standard experimental methodology called "leave-one-out" (or "cross-validation") was used. This technique operates by training using N - 1 examples, then testing using the example left out. The procedure is repeated N times (N = 106 here), so that each example is excluded once from the training set. The error rate is the number of errors on the single test cases, divided by N. This procedure was repeated 10 times for neural-based algorithms because they use random numbers during initialization and training.
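The leave-one-out protocol just described is straightforward to state as code. A minimal sketch, where `train_fn` and `classify_fn` are hypothetical stand-ins for any of the learners compared:

```python
# Leave-one-out ("cross-validation"): train on N-1 examples, test on the
# held-out one, repeat N times, and report the fraction of test errors.
def leave_one_out(examples, labels, train_fn, classify_fn):
    errors = 0
    for i in range(len(examples)):
        train_x = examples[:i] + examples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = train_fn(train_x, train_y)
        if classify_fn(model, examples[i]) != labels[i]:
            errors += 1
    return errors / len(examples)    # error rate

# Toy check with a majority-class "learner" (names are ours, not KBANN's):
train = lambda xs, ys: max(set(ys), key=ys.count)
classify = lambda model, x: model
print(leave_one_out([1, 2, 3, 4], ['a', 'a', 'a', 'b'], train, classify))
# the lone 'b' example is always misclassified: prints 0.25
```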
Using the same methodology, three other learning algorithms were applied: standard backpropagation, Quinlan's ID3 (1986), and "nearest neighbor." For standard backpropagation, the same number of hidden units (16) was used as in the ANNs created by KBANN.[5] All of the input units were connected to each hidden unit and every hidden unit was connected to the output unit. All weights were randomly initialized to a number near zero. ID3 is a non-connectionist empirical learning algorithm. It uses training data to construct a decision tree for determining the category of an example. At each step, a new node is added to the decision tree by partitioning the training examples based on their value along a single, statistically most-informative feature. "Nearest neighbor" compares the current instance to all known instances, locating exact matches or the k most similar. The classification of the instance is the classification of the majority of the k most similar neighbors. With distance defined as the number of mismatched nucleotides, k = 3 was found to work best on this task.

[5] Networks with 16 hidden units were locally superior to networks with a greater (21) or lesser (11) number of hidden units.

Table 4 contains the number of errors on the 106 training examples for the three learning algorithms.[6] In all cases, each algorithm correctly classified all members in the training sets. Hence, although each algorithm fully accounted for the training data, KBANN did a better job of generalization, in that its error rate on previously unseen examples was substantially lower. Finally, Table 4 contains the results of O'Neill's (1989b) ad hoc partial pattern matching approach for promoter recognition, which is the best method reported in the biological literature.
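The "nearest neighbor" baseline as described, with distance defined as the number of mismatched nucleotides and a majority vote over the k = 3 most similar training sequences, can be sketched as follows (the short sequences here are toy data, not the real promoter examples):

```python
# k-nearest-neighbor over nucleotide strings, as described in the text.
from collections import Counter

def hamming(a, b):
    """Distance = number of mismatched nucleotides at aligned positions."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def knn_classify(query, train, k=3):
    neighbors = sorted(train, key=lambda ex: hamming(query, ex[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [("acgt", "+"), ("acga", "+"), ("tggc", "-"),
         ("tggc", "-"), ("accc", "+")]
print(knn_classify("acgg", train))   # the three closest are "+": prints +
```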
Table 4: Error Rates in the Promoter Experiment
System                      Error Rate
KBANN
Standard Backpropagation
O'Neill
Nearest Neighbor
ID3

As KBANN and standard backpropagation were each run 10 times, the respective error rates can be statistically compared. The t-test indicates that on individual runs KBANN is superior to standard backpropagation with 99.95% certainty (t = 5.29, d.f. = 18).

Human inspection of the network after learning clearly showed which input positions were most important for recognizing promoters. Combinations of as few as six nucleotides were found to be sufficient to discriminate promoters from nonpromoters. By contrast, a recent study using more conventional techniques suggested using a minimum of twelve nucleotides (O'Neill 1989a). This "consensus sequence" was determined by noting which positions displayed the same base in greater than 50% of the class of promoters under study. Unfortunately, such a consensus fails to recognize any true promoters, due to excessive stringency if exact matches are required at each position. Furthermore, KBANN's neural network assigned particular importance to bases in certain positions. These highlighted positions correspond exactly to the most conserved bases in (Hawley & McClure 1983). Finally, the network learned that certain values for some base pairs indicate that a promoter is probably not present. For instance, a C in base pair -8 and an A in base pair -36 both strongly suggest that a promoter is not present. This ability may be useful in addressing the problem that promoters lose their biological activity when specific single nucleotides are mutated (Youderian et al. 1982). O'Neill notes that this is an unresolved problem for consensus methods, since the alteration of a single base does not degrade the quality of the match very much.
A neural network, on the other hand, is capable of severely penalizing individual bases, by attaching large negative weights to the input units representing those positions.

[6] Rather than simply taking the average of the error rates over 10 runs for the neural learning algorithms, the activation of the output for each test example in each of the 10 runs was averaged. This average output was then used to determine the classification of each example. This technique slightly reduced the error rates of both neural-based approaches.

This experiment demonstrates, using an important real-world problem, the promise of the KBANN approach. It produced a more accurate recognizer of promoters, demonstrating the value of incorporating pre-existing knowledge about the task being learned.

Related Work

This paper extends and realistically tests the ideas first presented in (Shavlik & Towell 1989). One problem, specific to neural networks, addressed by KBANN is topology determination. In relatively early work on ANNs, topological decisions were restricted to the size of a single layer of hidden units in fully-connected networks (e.g., Rumelhart et al. 1986). This decision is important, because an ANN with too few units will be unable to learn a concept, and an ANN with too many hidden units may generalize poorly (Kruschke 1988). More recently, full connectivity has been shown to hinder learning on some tasks (Rueckl et al. 1988). Moreover, different random settings of link weights can result in radically different learning rates and generalization (Shavlik et al. in press). Thus, determining the topology of an ANN requires decisions about: the pattern of connectivity, the number and distribution of hidden units, and the link weights.

In general, two approaches have been taken to this problem. The first approach, similar in spirit to KBANN, makes most or all topological decisions prior to training (Rueckl et al. 1988; Katz 1989).
The second approach modifies network structure as a part of the learning process. This approach includes recruitment learning (e.g., Honavar & Uhr 1988), in which hidden units are added to the network during learning, and methods for removing excess hidden units (e.g., Kruschke 1988).

A second problem specific to neural networks is the integration of existing information into the network. Complex, hand-designed networks (e.g., Rueckl et al. 1988) can be viewed as an attempt to give networks some implicit knowledge of a problem domain. However, little work other than KBANN has been done on how to explicitly give ANNs background information. The work that has been done is similar in approach to KBANN but does not focus on improving incorrect domain theories. For example, Katz's (1989) work stresses improving the execution speed of neural networks by adding links that effectively reduce the depth of the network.

ANNs have been essentially unused as a tool for improving approximately correct domain theories. However, much work has been done on the use of other empirical learning techniques to modify and correct domain theories. For instance, the IOE system (Flann & Dietterich 1989) uses conventional inductive learning to empirically analyze a collection of explanations, thereby refining the domain theory.

Current Research Issues

An extension to KBANN being pursued is automatic interpretation of networks after training. As pointed out in the molecular biology experiments, interpretation of ANNs after learning can be helpful in understanding why the ANN behaves as it does. Automatic translation is expected to take advantage of the meaningful starting configuration of the ANN to allow the post-learning ANNs to be understood.
Preliminary investigations suggest that analysis of the changes in link weights and biases, in combination with observation of activations over several inputs, can provide an accurate picture of how the network arrives at its conclusions.

Another extension currently receiving attention is the use of reasoning by explanation failure to constrain error propagation in the network. The method, based upon work by Hall (1988), makes directed changes to link weights when false negative answers are generated.

A further extension to KBANN is the addition of hidden units to the network beyond those specified by the knowledge translation. These added units would allow the network to learn relations not anticipated in the pre-existing knowledge. Currently we are considering adding hidden units as a fixed percentage of the existing hidden units at each layer in the ANN. Other methods for adding hidden units, such as recruitment learning (e.g., Honavar & Uhr 1988), are also being investigated.

Conclusions

The KBANN approach has been shown to make it possible to use ANNs to refine pre-existing knowledge. In addition, it was demonstrated that the KBANN method can automatically generate ANNs that are well-suited to the task they are intended to learn. KBANN does this by using a knowledge base of approximately correct, domain-specific rules to determine the structure and set the initial weights of an ANN.

An experiment in molecular biology demonstrated the effectiveness of the KBANN approach. Taking advantage of a knowledge-based initialization, networks created by KBANN were superior in terms of their generalization ability to randomly initialized networks, classification trees, "nearest neighbor" methods, and the best technique reported in the biological literature. Further, neural learning improved the accuracy of the provided domain theory.
Thus, the KBANN method provides a technique both for automatically generating ANNs with good initial topologies and for empirically improving domain theories.

References

Hall, R. 1988. Learning by failing to explain: Using partial explanations to learn in incomplete or intractable domains. Machine Learning, 3:45-77.
Harley, C. and Reynolds, R. 1987. Analysis of E. coli promoter sequences. Nucleic Acids Research, 15:2343-2361.
Hawley, D. and McClure, W. 1983. Compilation and analysis of Escherichia coli promoter DNA sequences. Nucleic Acids Research, 11:2237-2255.
Honavar, V. and Uhr, L. 1988. A network of neuron-like units that learns to perceive by generation as well as reweighting of links. In Proc. Connectionist Models Summer School, pages 472-484.
Katz, B. 1989. EBL and SBL: A neural network synthesis. In Proc. Eleventh Conference of the Cognitive Science Society, pages 683-689.
Kruschke, J. 1988. Creating local and distributed bottlenecks in hidden layers of back-propagation networks. In Proc. 1988 Connectionist Models Summer School, pages 357-370.
Lapedes, A.; Barnes, C.; Burkes, C.; Farber, R.; and Sirotkin, K. 1989. Application of neural networks and other machine learning algorithms to DNA sequence analysis. In Computers and DNA, SFI Studies in the Sciences of Complexity VII. Addison-Wesley, Reading, MA.
Minton, S. 1988. Quantitative results concerning the utility of explanation-based learning. Artificial Intelligence, 42:363-391.
Mitchell, T.; Keller, R.; and Kedar-Cabelli, S. 1986. Explanation-based generalization: A unifying view. Machine Learning, 1:47-80.
O'Neill, M. 1989a. Escherichia coli promoters: I. Consensus as it relates to spacing class, specificity, repeat substructure, and three dimensional organization. Journal of Biological Chemistry, 264:5522-5530.
O'Neill, M. 1989b. Escherichia coli promoters: II. A spacing class-dependent promoter search protocol. Journal of Biological Chemistry, 264:5531-5534.
Quinlan, J. 1986. Induction of decision trees. Machine Learning, 1:81-106.
Rueckl, J.; Cave, K.; and Kosslyn, S. 1988. Why are "what" and "where" processed by separate cortical visual systems? Journal of Cognitive Neuroscience, 1(2).
Rumelhart, D.; Hinton, G.; and Williams, R. 1986. Learning internal representations by error propagation. In Rumelhart, D. and McClelland, J., editors, Parallel Distributed Processing, Vol. 1, pages 318-362. MIT Press, Cambridge, MA.
Shavlik, J. and Towell, G. 1989. An approach to combining explanation-based and neural learning algorithms. Connection Science, 1:233-255.
Shavlik, J.; Mooney, R.; and Towell, G. in press. Symbolic and neural net learning algorithms: An empirical comparison. Machine Learning. Forthcoming.
DeJong, G. and Mooney, R. 1986. Explanation-based learning: An alternative view. Machine Learning, 1:145-176.
Youderian, P.; Bouvier, S.; and Susskind, M. 1982. Sequence determinants of promoter activity. Cell, 10:843-853.
Flann, N. and Dietterich, T. 1989. A study of explanation-based methods for inductive learning. Machine Learning, 4:187-226.
Adding Domain Knowledge to SBL through Feature Construction

Christopher John Matheus*
GTE Laboratories Incorporated
40 Sylvan Rd., Waltham MA 02254
cjml@bunny.gte.com

Abstract

This paper presents two methods for adding domain knowledge to similarity-based learning through feature construction, a form of representation change in which new features are constructed from relationships detected among existing features. In the first method, domain-knowledge constraints are used to eliminate less desirable new features before they are constructed. In the second method, domain-dependent transformations generalize new features in ways meaningful to the current problem. These two uses of domain knowledge are illustrated in CITRE, where they are shown to improve hypothesis accuracy and conciseness on a tic-tac-toe classification problem.

Introduction

One advantage of explanation-based learning (EBL) is its ability to learn from few examples by exploiting domain-specific constraints represented in a domain theory. Similarity-based learning (SBL), on the other hand, requires relatively large numbers of training instances, but is more readily applicable because a domain theory need not be available. Recent research in machine learning has begun to focus on methods of integrating EBL and SBL techniques (see the section entitled Combining Empirical and Explanation-based Learning in [Segre, 1989]). This paper proposes an integrated approach that incorporates domain knowledge into SBL systems through feature construction.

Feature construction is the process of defining new features based on useful relationships discovered among existing features. Constructed features are typically used to re-represent the training instances in hopes of making the target concept easier to learn [Matheus, 1989]. Because the space of new features is usually intractable to search, practical feature construction requires strong and appropriate constraints.
Domain knowledge can provide the appropriate constraints, similar to the way that domain theories constrain learning in EBL.

* This research was supported by a University of Illinois CS/AI Fellowship, the National Science Foundation, Grant No. IRI 8822031, and the Office of Naval Research, Grant No. N00014-88K-0124.

Unlike EBL, however, domain knowledge used in feature construction need not be a complete theory, but may comprise simple, disjoint pieces of problem-specific information. The use of this sort of knowledge-driven feature construction provides a way of adding simple domain knowledge to SBL systems. Since only the features representing the training instances are affected by this approach, the underlying inductive algorithm need not be altered.

Several machine learning systems perform feature construction; recent examples include DUCE [Muggleton, 1987], FRINGE [Pagallo, 1989], MIRO [Drastal and Raatz, 1989], PLS0 [Rendell, 1985], STABB [Utgoff, 1986], and STAGGER [Schlimmer, 1987] (see [Matheus, 1989] for a description of these and other feature construction systems). Only a few of these systems, however, explicitly use domain knowledge during feature construction (e.g., MIRO). This paper describes two methods for using domain knowledge in feature construction, and outlines their implementations in CITRE [Matheus and Rendell, 1989; Matheus, 1989]. Experimental results are presented that demonstrate the successful application of these methods on a tic-tac-toe classification problem.

CITRE

CITRE is a decision-tree-based learning system that performs feature construction by selecting relationships for new features from positive tree branches. Although similar in this respect to FRINGE [Pagallo, 1989], CITRE differs in its use of a variety of new-feature selection methods, its use of domain knowledge to filter out undesirable features, its potential for generalizing new features, and its evaluation of constructed features.
Figure 1 illustrates CITRE's learning algorithm. A learning problem is submitted to CITRE as a set of original features F = {p1, ..., pn} called primitives and a set of training instances I = {i1, ..., im} described in terms of the primitives. The instances and features are used to construct an initial decision tree based on the information-theoretic splitting criteria employed by ID3 [Quinlan, 1983]. The constructed tree is then pruned using pessimistic pruning [Quinlan, 1987], and passed as input to CITRE's feature-construction module. New features are constructed by selecting relationships from the decision tree as described below. The new features F* are added to the active feature set, and the entire process is repeated. This iterative learning algorithm terminates when either the current tree contains no positive branches consisting of more than one feature, or all potential new features from the current tree are already present in the active feature set. Although the version of CITRE described in this paper constructs only Boolean features, the system is capable of working with learning problems having nominal, ordinal, or continuous-valued primitives.

MATHEUS 803
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Figure 1: Illustration of CITRE's learning algorithm: decision tree construction and feature construction alternate over the feature set F = {p1, ..., pn} and instances I = {i1, ..., im}.

Feature Construction in CITRE

Feature construction can be viewed in terms of four aspects (see [Matheus, 1989]):
1. the detection of when new features are required
2. the selection of relationships used to define new features
3. the generalization of new features
4. the global evaluation of constructed features

Descriptions of CITRE's approach to detection, selection, and evaluation can be found in [Matheus and Rendell, 1989; Matheus, 1989].
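The iterative loop just described can be sketched at a high level. In this minimal Python illustration the induction, pruning, and construction steps are passed in as stubs (CITRE itself uses ID3-style splitting and pessimistic pruning); the function names are ours:

```python
# High-level sketch of CITRE's iterate-until-no-new-features loop.
def citre(primitives, instances, induce_tree, prune, construct_features):
    features = list(primitives)
    while True:
        tree = prune(induce_tree(features, instances))
        # Keep only features not already in the active set.
        new = [f for f in construct_features(tree) if f not in features]
        if not new:                 # termination: nothing novel to add
            return tree, features
        features.extend(new)

# Toy stubs standing in for tree induction, pruning, and selection:
induce = lambda feats, insts: tuple(feats)
prune = lambda tree: tree
construct = lambda tree: ["and(p11=x,p22=x)"]

tree, feats = citre(["p11", "p22"], [], induce, prune, construct)
print(feats)   # ['p11', 'p22', 'and(p11=x,p22=x)']
```

The loop adds the constructed feature on the first pass, then terminates on the second pass because the same candidate is already in the active set.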
This paper focuses on CITRE's methods of domain-knowledge filtering during selection and of new-feature generalization.

During selection, CITRE selects pairs of Boolean relations (i.e., feature-value pairs) from the nodes in positively labeled branches of a decision tree, and conjoins them to form new Boolean features, e.g., and(color = red, size = big). This selection is achieved by one of five methods: root, fringe, root-fringe, adjacent, or all. The root method selects the relations in the first two nodes of each positive branch, the fringe method selects the last two (i.e., the method used by FRINGE), the root-fringe method selects both root and fringe pairs, the adjacent method selects all adjacent pairs along each branch, and the all method selects every possible combination of feature pairs along each individual branch. The adjacent selection method was used for the experiments described in this paper (see [Matheus, 1989] for comparisons of the other methods).

The selected pairs of relations are passed through a domain-knowledge filter that eliminates pairs which do not satisfy the constraints imposed by the available domain knowledge. Domain knowledge is represented in CITRE as predicates that define relationships required of all selected pairs. For example, in the tic-tac-toe experiments described below, information about piece adjacency was encoded as a domain-knowledge constraint restricting operands to pairs of spatially adjacent features. A virtually unlimited number of domain-knowledge constraints may be defined in this way. On the other hand, domain-knowledge constraints are not required; in their absence, all selected relation pairs are used to define new features.

Domain-knowledge filtering reduces the space of new features and can result in a more tractable space containing a higher percentage of useful features.
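The adjacency constraint mentioned above can be written as a simple predicate over candidate relation pairs. A sketch under our own assumptions: the coordinate encoding of the board labels and the treatment of diagonal neighbors as adjacent are ours, and the piece-type test anticipates the second constraint used in the experiments reported later:

```python
# Domain-knowledge filter predicates for the tic-tac-toe experiments.
def coords(feat):
    """Decode a board label such as 'p23' into (row 2, col 3)."""
    return int(feat[1]), int(feat[2])

def piece_adjacent(f1, f2):
    (r1, c1), (r2, c2) = coords(f1), coords(f2)
    return max(abs(r1 - r2), abs(c1 - c2)) == 1   # includes diagonals

def piece_type(v1, v2):
    """Both relations must test the same player's piece."""
    return v1 == v2

def passes_filter(rel1, rel2):
    (f1, v1), (f2, v2) = rel1, rel2
    return piece_adjacent(f1, f2) and piece_type(v1, v2)

print(passes_filter(("p11", "x"), ("p12", "x")))   # True
print(passes_filter(("p11", "x"), ("p13", "x")))   # False: not adjacent
print(passes_filter(("p11", "x"), ("p12", "o")))   # False: different players
```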
Unlike a domain theory in EBL, CITRE's constraints need not define complete relationships between the goal concept and the primitives. As a result, simple common-sense types of information can be made available to the learning algorithm; e.g., in board games spatial proximity is often an important factor.

After selection, CITRE performs feature generalization using domain-dependent generalization operators. The available generalization operators are applied sequentially to each potential new feature, resulting in one or more generalized new features. In the tic-tac-toe experiments, four generalization operators were used to spatially translate candidate features up, down, left, and right on the game board. This process of feature generalization can help identify potentially useful features more quickly and with fewer training examples. As with filtering, all available domain knowledge is applied to every new feature.

Together, domain-knowledge filtering and generalization operators provide a simple, systematic method for incorporating specific domain knowledge into feature construction, and thereby into the SBL inductive processes. As the experiments in the next section demonstrate, the addition of domain knowledge in this way can result in significant improvements in hypothesis accuracy, hypothesis conciseness, and overall learning time.

Tic-tac-toe Classification

The use of domain knowledge in CITRE is demonstrated in this section on a tic-tac-toe classification problem. This learning problem was selected because of the disjunctive form of the target concept (which poses difficulties for many SBL algorithms), and because of the availability of simple domain knowledge relevant to board games. The target concept for this classification problem is "a win for x." The feature set comprises nine nominal primitives corresponding to the nine board locations (labeled p11, p12, p13, p21, p22, p23, p31, p32, and p33 in Figure 2). All features range over the values of x, o, and blank. Using these
All features range over the values of x, o, and blank. Using these 804 MACHINE LEARNING win for x win for 0 draw Figure 2: The tic-tat-toe board on the left shows the feature labels for each of the nine board squares. Ex- amples of win for x, win for o, and draw are shown to the right. Below each example is the number of in- stances of that type found in the set of all obtainable, final tic-tat-toe boards. primitives, the target concept can be expressed by the following disjunctive relationship: [(pll = 2) A (~12 = z)A(p13 = zc)]V[(p21 = z)A(p22 = z)A(p23 = z)]V[(p31 = %)A@32 = z)A(p33 = s)]V[(pll = ~)A@21 = z)A(p31 = z)]V[(pl2 = 4A(p22 = z)A(p32 = z)]V[(pl3 = z)A(p23 = z)A(p33 = e)]V[(pll = s)A(p22 = ~)A@33 = e)]V[(pl3 = c) A (~22 = z) A (~31 = z)]. Although the instance space defined by the nine fea- tures has cardinality 3’ = 19,.683, the rules for tic-tac- toe permit only 958 final tic-tat-toe boards, of which 626 are wins for x, 316 are wins for o, and 16 are draws (see Figure 2). This entire set of instances was used as the test set in all experiments. Ten training sets for each of five training sizes (100, 200, 300, 400, and 500 examples) were randomly drawn from this set. For each independent variable tested in the experi- ments below, ten test runs were made at each of the five training sizes while the following dependent vari- ables were recorded: 1) accuracy of the final hypothesis on classifying the test-set instances, 2) length of the fi- nal hypothesis measured as the number of primitives required to define all internal nodes, and 3) CPU time consumed during the entire learning process. These recorded values were averaged and statistically ana- lyzed using a t-test with 95% confidence intervals. CITRE without Domain Knowledge CITRE was run without domain-knowledge filtering or generalization in this first series of experiments. 
The results are shown in Figure 3 for the adjacent method and for the decision-tree algorithm without feature construction (referred to as the none method).

Figure 3: Results of CITRE's application to tic-tac-toe classification without the use of domain knowledge, plotting accuracy, hypothesis length, and CPU time against the number of training examples (100-500): * = adjacent, o = none.

Feature construction improves accuracy by greater than 15% at the higher training sizes. The final hypotheses, however, are less concise. Although part of this increase in length is due to the additional nodes required to improve hypothesis accuracy, the adjacent method produces excessively long final hypotheses at the higher training sizes (for example, at training size 400, final hypotheses are at least twice as long as necessary for the corresponding level of accuracy). CPU times are also much greater than those for the none method (over 700 times as much at training size 500). This increase is due to the additional time required to construct new features and to re-run the tree-induction procedure over several generations. In summary, the adjacent method alone improves accuracy significantly but results in less concise hypotheses and requires large amounts of CPU time.

Domain-Knowledge Filtering

The next series of experiments tested the use of domain-knowledge filtering by adding two knowledge constraints: piece adjacency and piece type. Piece adjacency encodes the importance of piece proximity in board games by constraining new features to relations between features physically adjacent on the game board (e.g., p11 and p12, but not p11 and p13). Piece-type knowledge encodes the importance of distinguishing between the players of the game (x versus o) by constraining new features to relations between features possessing the same value (e.g., p11 = x and p12 = x, but not p11 = x and p12 = o). New features were required to satisfy both domain-knowledge constraints. Although these pieces of domain knowledge are relevant to a larger class of board-game problems, the specific predicates that defined them in these experiments were tailored for tic-tac-toe.

Figure 4: Results with the addition of domain-knowledge filtering: o = adjacent + filtering, 0 = adjacent + filtering with deferred pruning, * = adjacent, o = none.

Figure 4 compares the domain-knowledge filtering results to the previous experiment. Feature construction with filtering improves hypothesis accuracy (significantly at size 500); however, the improvement is much less than was achieved with the adjacent method without filtering. Although the hypothesis lengths and CPU times are substantially reduced, these improvements are less meaningful in light of the lower accuracy. This poorer performance suggests that the constraints imposed by the domain knowledge were perhaps too restrictive, preventing the construction of useful features. However, although some potentially useful features would have been filtered out for failure to satisfy the piece-adjacency and piece-type constraints, the most useful features were still realizable (e.g., and(p11 = x, and(p22 = x, p33 = x))).
The reason more of these features were not constructed is that the pruning of trees during feature construction severely reduces the length of positive branches, and thereby the pool of potential new features (the effects of pruning are discussed more fully in [Matheus, 1989]). To overcome this problem, an approach was implemented in which pruning is deferred until after the final tree is constructed. With this approach, feature construction operates on all the information in the unpruned trees, while the benefits of pruning are maintained in the final hypotheses. The results of deferred pruning tests are graphed as the @ plots in Figure 4. For filtering, deferred pruning greatly improves accuracies and has insignificant effects on conciseness and learning time. These results compare favorably with the accuracy results of the pure adjacent method (i.e., the * plot in Figure 4).

Feature Generalization

In the next series of experiments, new features were generalized by spatially translating them up, down, left, and right on the game board. All translations were made in a single direction but could be extended up to two places in distance. As a result, the minimum number of new features generalizable from a single new feature was zero and the maximum was three. Figure 5 shows the new feature and(p11 = x, p21 = x) translated one place down, one place to the right, and two places to the right, resulting in three new features: and(p21 = x, p31 = x), and(p12 = x, p22 = x), and and(p13 = x, p23 = x). Although the predicates used to perform the generalizations are specific to tic-tac-toe, spatial translations of this sort are relevant to a wide range of learning problems.

Figure 5: An example of the spatial translations used in feature generalization. The constructed feature and(p11 = x, p21 = x) is translated down one square, right one square, and right two squares to produce three new features.

806 MACHINE LEARNING
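The translation scheme can be sketched as follows. The board encoding and helper names are hypothetical, and the exact set of translations CITRE permits may differ; the shifts shown here (one or two places in each of the four directions, discarding any translation that falls off the board) reproduce the Figure 5 example.

```python
def translate(feature, dr, dc):
    """Shift every (row, col, value) condition of a conjunctive feature
    by (dr, dc); return None if any condition leaves the 3x3 board."""
    moved = [(r + dr, c + dc, v) for (r, c, v) in feature]
    if all(1 <= r <= 3 and 1 <= c <= 3 for (r, c, _) in moved):
        return moved
    return None

def generalize(feature):
    """Translate one or two places up, down, left, or right, keeping
    only those translations that stay on the board."""
    shifts = [(d * k, 0) for d in (-1, 1) for k in (1, 2)] + \
             [(0, d * k) for d in (-1, 1) for k in (1, 2)]
    candidates = (translate(feature, dr, dc) for (dr, dc) in shifts)
    return [f for f in candidates if f is not None]

# and(p11 = x, p21 = x): up and left fall off the board, down two places
# falls off, so only down 1, right 1, and right 2 survive -- the three
# new features of Figure 5.
print(generalize([(1, 1, 'x'), (2, 1, 'x')]))
```

A feature anchored in a corner column like this one yields exactly the three survivors shown in Figure 5; a feature already spanning the full board yields none, which matches the zero-to-three range reported in the text.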
As shown in Figure 6, generalization improves accuracy by more than 18% relative to the control method (except at training size 100), and by as much as 10% over the adjacent method alone. Conciseness is not significantly affected by generalization, as the final hypotheses continue to be longer than necessary. CPU times, however, increase to more than 1000 times that required by the control method at size 500. The improved accuracy achieved with generalization would be attractive if conciseness and CPU times could be improved. This situation suggests a combined approach using generalization and domain-knowledge filtering.

Figure 6: Results with feature generalization: e = adjacent + generalization, * = adjacent, o = none.

Combining Filtering and Generalization

The final experiment combined domain-knowledge filtering and feature generalization. The results are graphed in Figure 7 along with plots for the adjacent method without filtering or generalization and the none method. The combined method significantly improved hypothesis accuracy relative to the control method. Although the accuracies are slightly less than those achieved by generalization alone (except at training size 100), they are slightly better than those achieved with filtering alone. As hoped for, the combined method achieves these improved accuracies while reducing hypothesis length and learning times relative to the individual methods of filtering and generalization.
Figure 7: Results for the combined use of generalization and domain-knowledge filtering: * = adjacent + generalization + filtering (with deferred pruning), * = adjacent, o = none.

Conclusions

An approach was presented for incorporating domain knowledge into SBL through feature construction. Its use was demonstrated in a version of CITRE that incorporates domain knowledge during feature construction in two ways: as constraints on the types of features that may be constructed, and as transformations for generalizing new features. In empirical tests on tic-tac-toe classification, domain knowledge used in this way improved hypothesis accuracy and conciseness, and reduced the computational costs of feature construction.

This approach offers two main advantages. First, because the domain knowledge is used in feature construction, the underlying inductive algorithm is not affected. As a result, this approach should be readily applicable to existing SBL systems through the addition of a feature construction component. Second, this approach works with domain knowledge ranging in quality from irrelevant information to complete domain theories. When the domain knowledge is inappropriate, few if any useful features will be constructed and performance will fall to the level achievable using the primitives alone. With complete domain knowledge, useful features can be constructed from few examples and optimal accuracy and conciseness can be achieved.

The specific results of CITRE's application to tic-tac-toe classification were presented to demonstrate the central idea of using domain knowledge during feature construction. There are many improvements that could be made both to the general approach and to the specific techniques used in CITRE.
In particular, the use of domain-knowledge filtering as described in this paper is rather rigid: if a new feature is inconsistent with the knowledge, it is not constructed. A more flexible approach would use knowledge as a guide for suggesting new features while retaining the ability to construct features unanticipated by the domain knowledge. For example, when the domain-knowledge filtering constraints were found to be too severe in the second experiment, the constraints might have been relaxed to permit the construction of additional features. This idea is evident in MIRO [Drastal and Raatz, 1989], where domain knowledge used to construct new features is retracted if the active features do not permit the construction of a consistent hypothesis.

Another issue that deserves further consideration is the generalization of new features. The method used in the tic-tac-toe experiments is very primitive: a feature is translated into features having a similar level of descriptive power (i.e., at the same level of generalization). A true generalization would, for example, take a feature that detects an x in the first and second squares and generalize it to a feature that detects an occurrence of two x's side-by-side anywhere on the board. Generalized features of this sort can lead to faster learning from fewer examples, but because they also increase the complexity of the search space, they require even stronger constraints.

Acknowledgments

I would like to thank Chuck Anderson, Gunnar Blix, Carl Kadie, Gregory Piatetsky-Shapiro, Larry Rendell, Hong Shinn, Bernard Silver, and Richard Sutton for their helpful comments and suggestions during preparation of this paper.

References

Drastal, George and Raatz, Stan 1989. Learning in an abstract space. Technical Report DCS-TR-248, Department of Computer Science, Rutgers University.

Matheus, Christopher J. and Rendell, Larry A. 1989. Constructive induction on decision trees.
In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI. 645-650.

Matheus, Christopher J. 1989. Feature Construction: An Analytic Framework and An Application to Decision Trees. Ph.D. Dissertation, University of Illinois at Urbana-Champaign.

Muggleton, Steve 1987. DUCE, an oracle based approach to constructive induction. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence. 287-292.

Pagallo, Giulia 1989. Learning DNF by decision trees. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI. Morgan Kaufmann Publishers, Inc.

Quinlan, J. Ross 1983. Learning efficient classification procedures and their application to chess end games. In Machine Learning: An Artificial Intelligence Approach. Tioga Press, Palo Alto. 463-482.

Quinlan, J. Ross 1987. Simplifying decision trees. International Journal of Man-Machine Studies 27:221-234.

Rendell, Larry A. 1985. Substantial constructive induction using layered information compression: Tractable feature formation in search. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. 650-658.

Schlimmer, Jeffrey C. 1987. Incremental adjustment of representations in learning. In Proceedings of the Fourth International Workshop on Machine Learning, Irvine, CA. Morgan Kaufmann Publishers, Inc. 79-90.

Segre, Alberto, editor 1989. Sixth International Workshop on Machine Learning. Morgan-Kaufmann, Ithaca, NY.

Utgoff, Paul E. 1986. Shift of bias for inductive concept learning. In Machine Learning: An Artificial Intelligence Approach, Vol II. Morgan Kaufmann Publishers, Inc., Los Altos, CA. 107-148.
A Parallel Asynchronous Distributed Production System

James G. Schmolze and Suraj Goel
Department of Computer Science, Tufts University, Medford, MA 02155, USA
E-mail: schmolze@cs.tufts.edu

Abstract

To speed up production systems, many researchers have turned to parallel implementations. We describe a system called PARS that executes production rules in parallel. PARS is novel because it (1) executes many rules simultaneously, (2) runs in a highly asynchronous fashion, and (3) runs on a distributed memory machine. Item (1) improves available concurrency over systems that only perform the MATCH step in parallel. Item (2) reduces bottlenecks over synchronous parallel production systems. Item (3) makes the techniques more available given the lower cost of distributed versus shared memory machines. The two main problems regarding correctness, namely serialization and the maintenance of consistent distributed databases, are addressed and solved. Estimates of the effectiveness of this approach are also given.

Introduction

Production systems have been effective vehicles for implementing expert systems. The terminology of production systems and the usual serial execution model are presented in Figure 1. Unfortunately, large production systems are often slow and many will require substantial speed improvements. To speed them up, researchers have studied parallel implementations, with much of that research focusing on OPS5 [Forgy, 1981]. Since most CPU time in OPS5 is spent in the MATCH step (over 90% according to [Forgy, 1979]), many efforts (e.g., [Gupta, 1983; Stolfo, 1984; Gupta, 1986; Miranker, 1987; Oflazer, 1987]) have tried to parallelize that one step while leaving the system to execute only one rule at a time. Overall, the maximum speedups realized by these approaches are about ten times the fastest sequential version of OPS5, no matter how many processors are used [Gupta, 1986].
To gain additional speedup, several researchers have investigated systems that execute many rules simultaneously. We call these systems multiple rule execution systems. When two or more rule instantiations execute simultaneously in one of these systems, we say they co-execute. Multiple rule execution systems have two important problems. (1) The order of rule execution is difficult to control. (2) The results produced may not be producible under any serial execution scheme, i.e., the final results may not be serializable.

*This work was supported in part by the National Science Foundation under grant number IRI-8800163.

Production Memory: A set of rules or productions.

Working Memory (WM): A set of facts.

Working Memory Element (WME): Each fact in the WM is a WME. Each is a ground literal consisting of a class name plus a sequence of arguments.

Condition Element (CE): A test found in a rule.

Left-Hand Side (LHS): A sequence of CEs associated with a rule that determine when the rule matches against a specific sequence of WMEs.

Action: An operation performed when the rule executes. Usually adds or removes a WME.

Right-Hand Side (RHS): A sequence of actions associated with a rule that determines what to do when the rule is executed.

Basic Loop: The following loop is executed until no matches are found in the first step.

MATCH: For each rule, determine all sequences of WMEs that match its LHS. Each match results in an instantiation, which consists of the rule plus the sequence of WMEs matching the unnegated condition elements (CEs) of its LHS. A single rule may have many instantiations.

SELECT: Choose exactly one instantiation according to some predefined conflict resolution strategy.

ACT: Perform the actions on the RHS of the rule for the selected instantiation.

Figure 1: Terminology and Serial Execution Model for Production Systems
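The serial basic loop of Figure 1 can be rendered as a minimal sketch. This is a propositional simplification (WMEs are bare class names and CEs are simple membership tests, with no variables or argument matching) and uses a trivial first-found conflict resolution strategy; the rule encoding is hypothetical, not OPS5's.

```python
# A minimal, hypothetical rendering of Figure 1's serial loop.  A rule is
# a dict of positive CEs, negative CEs, WMEs to add, and WMEs to remove.

def match(rules, wm):
    """MATCH: a rule is instantiated if all its positive CEs are in WM
    and none of its negative CEs are (a propositional simplification)."""
    return [r for r in rules
            if r['pos'] <= wm and not (r['neg'] & wm)]

def run(rules, wm):
    """Repeat MATCH, SELECT, ACT until the conflict set is empty."""
    while True:
        conflict_set = match(rules, wm)
        if not conflict_set:
            return wm
        r = conflict_set[0]                 # SELECT (trivial strategy)
        wm = (wm - r['rem']) | r['add']     # ACT

# Two toy rules; each includes its added WME as a negative CE so that
# it fires at most once and the loop terminates.
rules = [
    {'pos': {'Q'}, 'neg': {'A'}, 'add': {'A'}, 'rem': set()},
    {'pos': {'A'}, 'neg': {'S'}, 'add': {'S'}, 'rem': set()},
]
print(sorted(run(rules, {'Q'})))  # ['A', 'Q', 'S']
```

PARS, described below, distributes this loop across processors and inserts DISABLE and ENABLE steps around ACT; the serial version above is only the baseline being parallelized.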
SCHMOLZE AND GOEL 65

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

[Boulanger, 1988] and [Morgan, 1988] each take this approach and each offers a way to partially control the order of rule execution. However, neither guarantees serializable results. In contrast, [Ishida and Stolfo, 1985; Moldovan, 1986; Pasik, 1989; Schmolze, 1989; Ishida, 1990] do not address the issue of controlling the order of rule execution, but each guarantees serializability. In addition, each takes advantage of parallel matching algorithms, thereby gaining additional speedup.

All of the above multiple rule execution systems are designed to operate synchronously, i.e., all processors execute each step together. To obtain good speedup, the work must be effectively distributed for each of the three steps. This is likely to prove difficult since resources must be allocated to solve three different distribution problems. Unfortunately, no efforts along this line have been reported to date. Only the MATCH step has so far been studied in this fashion.

An alternative is to execute a multiple rule execution system in a highly asynchronous fashion. This eliminates the time wasted in synchronous systems where some processors are forced to wait for other processors to finish the current step. The distribution problem is still challenging in an asynchronous system, but it is simplified to one distribution problem. Some preliminary work on an asynchronous parallel production system is reported in [Miranker et al., 1989].

Herein, we address the problem of executing a multiple rule execution system that operates asynchronously on a distributed memory machine. The advantages of this approach follow.

• We obtain speedup over serial and parallel-match-only systems by executing all steps in parallel.

• We obtain speedup over synchronous parallel systems by a reduction of bottlenecks and a simplification of the distribution problem.
• We take advantage of distributed memory technology, which is less expensive and more accessible than shared-memory technology given today's market.¹

This work is related to distributed database (DDBMS) technology, the main difference being the source of the transactions. For DDBMSs, they come from users, and are highly unpredictable. For production systems (PSs), they come from the rules and WM, and are more predictable. Thus, special techniques that take advantage of off-line rule analysis can yield considerable improvement. Sellis et al. [Sellis et al., 1987; Raschid et al., 1989] take a database approach by delaying certain transactions using locks in order to guarantee serializability. However, they do not take strong advantage of off-line rule analysis, and their system can cause deadlock, whose detection and resolution may require considerable resources. Given that, it is appropriate to pursue other approaches such as ours.

¹Furthermore, Gupta and Tambe [Gupta and Tambe, 1989] have argued that distributed memory machines are good choices for production systems.

The remainder of this paper describes the design problems we faced, solutions to those problems, and estimates of the effectiveness of our solutions.

The Execution Model

We assume a distributed memory machine with a set of processors, P, and a fast interconnection network between them. Any processor can communicate with any other, either directly or through intermediaries. We are given a set of rules, R, a set of class names, C, that comprise all the classes used in R, and an initial working memory, WM0. Each WME belongs to exactly one class in C. Each processor uses the same execution model and each is capable of storing rules and WMEs.

For simplicity, we use a static distribution. Each rule in R is assigned to exactly one processor from P. For each rule r on each processor p, we store on p all classes of WMEs that could match a CE of r's LHS.
If a CE in r has a variable appearing as the class, then all classes in C are stored on p. By storing a class c on p, we mean that all WMEs whose class is c are stored on p. Thus, classes that appear on the LHS of several rules may be stored in several processors.

Each processor p executes a variant of the basic loop for serial systems, as explained below. Each executes the loop at its own pace, in an asynchronous fashion.

MATCH: In performing the MATCH step, p only examines the rules and WMEs stored locally. Since all WMEs that could affect the matching of each rule (i.e., all WMEs affecting its LHS) are stored on the same processor as the rule, no matches are missed. We use the TREAT match algorithm [Miranker, 1987].

SELECT: The SELECT step is also performed using only local data. As we will explain later, certain rules may be temporarily disabled, and instantiations of such rules are ignored here. Since we do not guarantee any particular overall order of rule execution, each processor arbitrarily selects an instantiation whose rule is not currently disabled. If there are none, p returns to the MATCH step. Otherwise, p goes to the DISABLE step. Assume for now that p selects an instantiation of rule r.

DISABLE: Before executing the selected instantiation, some synchronization must occur. As we will discuss later, we guarantee serializability by prohibiting the co-execution of certain pairs of rules. Thus, in this step, p requests certain other processors to disable certain other rules, namely, those rules that cannot co-execute with r. If some of those rules are currently executing, then their execution is completed, after which their processor disables them. p waits until it receives messages verifying that all appropriate rules are disabled.
While waiting, it is possible that p received new actions from elsewhere that invalidated the selected instantiation or that another processor disabled r (p responds immediately to such requests). Thus, p checks this, and if so, p does not execute the instantiation and jumps to ENABLE. Otherwise, p goes to ACT.

66 AUTOMATED REASONING

ACT: The ACT step also requires communication since some of the actions may add or remove WMEs that are stored on other processors. In such cases, the actions are forwarded as appropriate. After actions are sent, the processor waits for acknowledgements from all recipients before re-acknowledging and proceeding. The reasons for this will become clear when we later explain our solution to the inconsistency problem. Thus, at the completion of this step, all processors who store the affected WMEs have taken the actions into account. Actions that affect the local WM are taken immediately.

ENABLE: Rules on other processors that were previously disabled in the DISABLE step are re-enabled. p now returns to MATCH.

Processors accept messages from other processors at the start of the MATCH step and while waiting during the DISABLE step. Even though some synchronization is needed in the DISABLE and ACT steps, it comprises a reasonably small portion of the basic cycle time (MATCH consumes the most) and thus leaves the system highly asynchronous.

We define the period of the execution of an instantiation i as extending from the time that the commitment to execute i is made to the later of either (1) the time that i's local actions are taken, or (2) the time that all acknowledgements have been received regarding the delivery of i's actions. (A commitment to execute i is made in the DISABLE step, not the SELECT step.) We say that a set of instantiations co-execute iff their periods of execution overlap.

In addition, we have an algorithm that is guaranteed to determine when all processors have no instantiations.
When this occurs, all processors are terminated.

The Serialization Problem

The parallel execution of a set of instantiations is serializable if the result produced by the parallel system could be produced by executing the same instantiations in some serial order. The goal of serialization is to ensure that every execution sequence by our parallel system is serializable. There are three causes of non-serializability in our parallel model.

The first cause is disabling. We say that one instantiation disables another if executing the first causes the second to no longer match. This occurs iff the first adds (removes) a WME that the second matched negatively (positively). If we co-execute a pair of instantiations where each disables the other, the result is not serializable because, in a serial system, one instantiation would execute but not both. In general, when given a set of instantiations, not all should co-execute if there is a cycle of disabling relations among them.

P1: +(A <x>), -(C <y>) → +(C <x>), +(D <x>)
P2: +(B <x>), -(C <y>) → +(C <x>), +(E <x>)

For example, assume that we execute rules P1 and P2 from an initial WM of {(A 1),(B 2)}. Here, both rules match and, if co-executed, produce {(A 1),(B 2),(C 1),(D 1),(C 2),(E 2)}. However, no serial execution can produce this result. If P1 alone were executed from the original WM, the result would be {(A 1),(B 2),(C 1),(D 1)}, in which P2 no longer matches. Similarly, if P2 were executed alone from the original WM, the result would be {(A 1),(B 2),(C 2),(E 2)}, in which P1 no longer matches. We solve this problem by preventing the co-execution of any set of instantiations that have a cycle of disabling relations among them.

The second cause is clashing. We say that one instantiation clashes with another if, when executed, one would add a WME that the other would remove. If (a) two instantiations clash and (b) one rule can disable the other, then we might obtain non-serializable results.
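The two inter-rule relations just defined admit a compact sketch. This is a propositional simplification in which bare class names stand in for the patterns that the paper's off-line tests actually unify; the rule encoding is hypothetical.

```python
# Hypothetical propositional sketch of the inter-rule tests: r1 disables
# r2 if r1 adds a class that r2 tests negatively, or removes a class that
# r2 tests positively; r1 clashes with r2 if one adds a class the other
# removes.  (The paper's off-line tests unify patterns instead.)

def disables(r1, r2):
    return bool(r1['add'] & r2['neg'] or r1['rem'] & r2['pos'])

def clashes(r1, r2):
    return bool(r1['add'] & r2['rem'] or r1['rem'] & r2['add'])

# P1 and P2 above: each adds a C WME that the other tests negatively,
# so the disabling relation between them forms a cycle; neither rule
# removes anything, so they do not clash.
p1 = {'pos': {'A'}, 'neg': {'C'}, 'add': {'C', 'D'}, 'rem': set()}
p2 = {'pos': {'B'}, 'neg': {'C'}, 'add': {'C', 'E'}, 'rem': set()}
print(disables(p1, p2) and disables(p2, p1))  # True
print(clashes(p1, p2))  # False
```

Tests of this form run entirely off-line over the rule set, which is what lets PARS precompute the pairs of rules that must never co-execute.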
P1: +(A <x>), -(C <y>) → +(C <x>), +(D <x>)
P3: +(D <x>), -(C <y>) → -(D <x>), +(E <x>)

For example, rules P1 and P3 would match in {(A 1),(D 1)}, but executing P1 will disable P3. Thus, there are two possible serial orders. We can execute P1 and not P3, or we can execute P3 followed by P1. If the two are co-executed, then the result must be identical to that resulting from executing P3 followed by P1. Since P3 deletes (D 1) while P1 adds it, (D 1) must be in the final WM. Therefore, if these two rules are co-executed, the result must be {(A 1),(C 1),(D 1),(E 1)}. In our parallel model, we cannot make this guarantee because our control of the order of rule execution is not sufficiently fine. Instead, we solve the problem by prohibiting the co-execution of such pairs of instantiations.

The third cause arises from temporary inconsistencies in the distributed WM. We deal with this problem two sections hence.

Solution to the Serialization Problem

If the execution of two instantiations do not overlap in time, i.e., they do not co-execute, then serialization is guaranteed. Namely, an appropriate serial order simply has the first before the second. We therefore concern ourselves only with sets of instantiations that co-execute. For now, we assume a consistent WM.

We solve the serialization problem by: (1) determining pairs of rules whose instantiations should not co-execute (we say that any such pair of rules are to be

1. Collect all pairs of rules from our set of rules, R, that possibly clash and, for each pair, add an edge to an undirected graph called RSynch. RSynch has a node for each rule in R and will eventually have an edge between each distinct pair A and B iff A and B must be synchronized.

2. Construct RDO(R). Then, construct a modified subgraph of RDO(R), called M-RDO. We start with RDO(R) and, for each edge from A to B in RSynch, remove any edge from A to B or from B to A in RDO(R).

3.
Construct an acyclic sub-graph of M-RDO called A-RDO. The goal is to remove only edges from M-RDO in producing A-RDO, and to remove as few edges as possible since each edge removed will create a rule synchronization. We begin A-RDO with a node for each rule but no edges. We maintain a reachability matrix, M, for A-RDO such that M[rj,rk] is true iff there is a directed path from rj to rk. We examine each edge, e, from M-RDO in turn. Let e go from rj to rk. If adding e to A-RDO would not create a cycle, we add e to A-RDO and update M. Otherwise, we add an edge from rj to rk in RSynch and, if there is an edge from rk to rj in M-RDO, we remove it. Afterwards, A-RDO is an acyclic sub-graph of M-RDO and, for all edges e in M-RDO but not in A-RDO, the undirected version of e appears in RSynch.

4. We synchronize each pair of distinct rules that have an edge between them in RSynch.

Figure 2: Off-Line Algorithm to Identify Rules to Synchronize

synchronized²), and (2) enforcing these rule synchronizations. We work at the rule, not the instantiation, level because the latter requires too much communication for our hardware model.

We say that one rule possibly disables another if any instantiation of the first can disable any instantiation of the second. We define possibly clashes in a corresponding fashion. Off-line tests for these two inter-rule relations are given in [Schmolze, 1989].³

We define a directed graph, called RDO(R), that represents the rule disabling order. For each rule in R, there is a node in RDO(R). For each A and B in R where B possibly disables A, there is an edge from A to B in RDO(R) (note the reversal of A and B). This comprises all of RDO(R). Also, we define Rules(Z) as the set of rules that have an instantiation in Z.

Theorem 1:⁴ The co-execution of a set of instantiations, Z, in our parallel model, is serializable if

1. RDO(Rules(Z)) is acyclic except for self loops,

2.
rules that have self loops in RDO(Rules(Z)) have only one instantiation in Z,

3. no two distinct rules in Rules(Z) can possibly clash,

4. rules that possibly clash with themselves have only one instantiation in Z, and

5. the WM is consistent.

Since each rule appears on exactly one processor in our model, we cannot have two instantiations of the same rule co-executing. Thus, conditions 2 and 4 are met. We meet condition 3 by synchronizing every pair of rules that possibly clash. We meet condition 1 by identifying a near-minimum number of pairs of rules to synchronize such that, for any set Z of instantiations to co-execute, RDO(Rules(Z)) is acyclic. The rules requiring synchronization are identified using the off-line algorithm in Figure 2. The output is a set of pairs of rules that must be synchronized. Condition 5 is assumed for now and dealt with in the next section.

²We borrow this term from [Ishida and Stolfo, 1985], who first proposed this approach to solving the serialization problem. Our solution represents an improvement over their method along with an adjustment made for our parallel model, which differs from theirs. See [Schmolze, 1989] for a complete discussion.

³In essence, r1 possibly disables r2 if an add (remove) action of r1 unifies with a negative (positive) CE of r2. r1 possibly clashes with r2 if an add (remove) action of r1 unifies with a remove (add) action of r2.

⁴The proof can be found in [Schmolze, 1989].

Our method of enforcing rule synchronization was given in the explanation of our execution model. To reduce the work in enforcing these synchronizations, and to prevent deadlock, we take several additional steps. First, we remove from this set of pairs of rules to synchronize any pair where both are on the same processor. No synchronization is needed between such pairs as only one can execute at a time. Second, we recognize that in order for two rules to synchronize, only one of them needs to initiate the needed disabling.
Let A and B be such a pair and let A be the one to initiate the disabling. When B's processor wishes to execute B, it simply does so as long as B is not disabled. When A's processor wishes to execute A, it first disables B, then executes A, and finally re-enables B.

If we think of the pairs of rules to synchronize as an undirected graph, this simplification has the effect of directing each edge. In this case, the undirected edge between A and B becomes a directed edge from A to B, indicating that A can execute only if B is disabled. However, cycles in this directed graph can lead to deadlock. For example, let A require that B be disabled, B require that C be disabled, and C require that A be disabled. Let all three be on separate processors and let A, B and C each be selected for execution at the same time. Disabling messages go out from each processor and each waits for acknowledgements before responding to the disable request it receives, leading to deadlock.

• R1: +Q → +A
• R2: +Q → +B
• R3: +A, -B → +S
• R4: -A, +B → +T

Figure 3: Temporary Inconsistencies Can Cause Non-Serializable Effects

Fortunately, this is easily solved because we can always take an undirected graph that has no self loops and direct each edge such that there are no cycles. To do so, number the M nodes from 1 to M. Go through each edge and direct it from the lower numbered node to the higher numbered one. Since there are no self loops, this is an unambiguous choice.⁵

As a result, synchronization is simplified and deadlock free. Moreover, the results of our parallel system are serializable with respect to both disablings and clashes. In fact, we expect only very small delays from the needed synchronizations.⁶

Temporary Database Inconsistencies

Given that our WM is distributed, temporary WM inconsistencies can occur while actions are being sent, which can lead to non-serializable effects. For an example, we offer a case with four rules as shown in Figure 3.
Here, R1 and R2 match if there is a WME of class Q and they add WMEs of classes A and B, respectively. R3 matches if there is some WME of class A and no WMEs of class B, and it adds a WME of class S. R4 matches if there are no WMEs of class A and some WME of class B, and it adds a WME of class T. We assume there are no other rules.

Let us examine all serial orders that can arise from an initial WM of {Q}. Both R1 and R2 match, but R3 and R4 do not. Let R1 execute, which adds an A WME. Now, R3 matches as well as R2. Let R3 execute, which adds an S WME. Only R2 matches, so we execute it, which adds a B WME. R4 never matched and still does not due to the presence of the A WME. Thus, our first serial order is (1) R1, R3 followed by R2. There are three other serial orders, namely, (2) R2, R4 followed by R1, (3) R2 followed by R1, and finally, (4) R1 followed by R2. However, there is no serial order in which all four rules execute.

Unfortunately, with temporary WM inconsistencies in our parallel model, all four rules can execute. Assume that each of these rules is on its own processor. Let R1 and R2 co-execute and let each send out its respective action at the same time. Furthermore, let the time of message delivery be small between R1 and R3 and between R2 and R4. Conversely, let the time of delivery be large between R1 and R4 and between R2 and R3. Thus, R3 will receive the +A action from R1 before receiving the +B action from R2. Let us assume that after receiving the +A and before receiving the +B, R3's processor commits to execute R3 (remember, the processors run asynchronously).

⁵If there were a loop, it must involve at least two nodes, and thus must have an edge from a higher numbered node to a lower numbered node, which is impossible.

⁶We note that I/O should also be serialized. We accomplish this by assigning all rules that perform I/O to a single processor. Thus, only one I/O rule executes at a time.
Note that no rule synchronization is necessary here since there are no possible disabling cycles and no possible clashes. In a similar fashion, R4 receives the +B action before the +A action, and during the interim period, R4's processor commits to executing R4. As a result, we get all four rules executing, which is not serializable.

Solution to the Inconsistency Problem

We solve this problem by incorporating a double acknowledgement protocol when sending actions. The protocol for the sender of actions is as follows.

1. Send actions to all appropriate processors.
2. Wait for all recipients to acknowledge receipt of the actions.
3. Send re-acknowledgements to all recipients.

The protocol for the receiver of actions is as follows.

1. Send an acknowledgement to the original sender.
2. Disable all rules affected by the actions.
3. Continue with the basic loop.
4. Upon re-acknowledgement from the original sender, re-enable the rules disabled by the actions.

The basic idea here is that the original receipt of actions from another processor indicates that certain WMEs will be added or removed. However, to prevent inconsistency problems, the receiver prevents the use of this information until it is sure that all appropriate processors have received it. It is sure of this upon receipt of the re-acknowledgement. The rules disabled by the receiver are those rules having CEs that might test either positively or negatively for the WMEs affected by the action. The reason the receiver disables and re-enables rules is that, while waiting for re-acknowledgements, it can continue with its basic loop.7

For an example, we return to Figure 3 when R1 and R2 are about to send out their respective actions to both R3 and R4's processors. R3's processor receives the +A action, disables rule R3 since it tests for a WME of class A, and sends an acknowledgement to R1's processor.
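The receiver's disable/re-enable bookkeeping can be sketched with a per-rule counter; the counter is our device for allowing overlapping disablings, not something the protocol text prescribes.

```python
# Sketch of the receiver side of the double-acknowledgement protocol:
# each incoming action batch disables the affected rules once, and a
# rule becomes eligible again only when every disabling has been
# matched by a re-acknowledgement from its sender.
from collections import defaultdict

class Receiver:
    def __init__(self):
        self.disable_count = defaultdict(int)

    def on_actions(self, sender, affected_rules):
        for r in affected_rules:
            self.disable_count[r] += 1      # step 2: disable affected rules
        return ("ack", sender)              # step 1: ack addressed to sender

    def on_reack(self, affected_rules):
        for r in affected_rules:
            self.disable_count[r] -= 1      # step 4: re-enable on re-ack

    def enabled(self, rule):
        return self.disable_count[rule] == 0

# R3 is disabled by both the +A and +B actions, and stays disabled
# until *both* re-acknowledgements arrive.
rx = Receiver()
rx.on_actions("R1", ["R3"])
rx.on_actions("R2", ["R3"])
assert not rx.enabled("R3")
rx.on_reack(["R3"])
assert not rx.enabled("R3")
rx.on_reack(["R3"])
assert rx.enabled("R3")
```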
At about the same time, R4's processor receives the +B action, disables rule R4, and sends an acknowledgement to R2's processor. Similar activities occur when R3's processor receives the +B action and when R4's processor receives the +A action. Soon thereafter, R1's processor has received the two acknowledgements it wants, and so it sends out re-acknowledgements. At about the same time, R2 does the same. Note that each of the rules R3 and R4 has been disabled twice. Thus, neither is re-enabled until both re-acknowledgements are received. By the time this occurs, both R3 and R4's processors have received both actions so, in this case, neither R3 nor R4 executes.

As a result, we have a protocol that prevents non-serializable effects due to temporary inconsistencies in our distributed WM. The cost of this guarantee is that additional messages must be sent and, for certain periods of time, certain rules are disabled. However, these periods are short as they last only long enough for acknowledgement and re-acknowledgement messages to be sent. Overall, we estimate this to be a reasonably small fraction of the basic cycle time, leading to a small cost for this guarantee.

7 In an alternative design, the processor could simply wait for the re-acknowledgements instead of continuing on with its basic loop. It thus would not need to disable and re-enable rules. However, this alternative is wasteful of time, so we reject it.

SCHMOLZE AND GOEL 69

Table 1: Comparison of Number of Synchronizations per Rule

                            No. Non-I/O Rules   I&S    PARS
Mapper                      77                  13.2   6.9
ToruWaltz                   31                  15.2   6.9
Weighted Average                                13.8   6.9
Avg. Improvement Over I&S                       1.0    2.0

Assessment of Effectiveness

An implementation of PARS is currently under construction, so we are not able to report on actual speedups realized. Instead, we estimate speedup in terms of increased numbers of co-executing instantiations.
We applied our rule analysis method to two production systems that are commonly used as benchmarks for parallel production systems. These systems are (1) the Manhattan Mapper [Lerner and Cheng, 1983], which provides travel schedules for trips through Manhattan, and (2) ToruWaltz, which applies Dave Waltz's constraint propagation algorithm to identify objects from line drawings [Waltz, 1975]. First, we measured the number of pairs of rules to be synchronized. Second, we examined a number of conflict sets from serial execution traces and determined how many instantiations would co-execute in our system if it were running synchronously. We compare our system to that of Ishida and Stolfo's (I&S) [Ishida and Stolfo, 1985] as theirs is the most influential work in the area of multiple rule execution systems. Thus, we are comparing a synchronous version of our approach to another synchronous system.

Table 2: Concurrency Estimates Using Number of Co-Executions Available

Table 1 compares the number of rules to be synchronized for the method of I&S versus ours. As shown in the table, we improve upon their method by having each rule synchronize with, on the average, about half the number of rules that I&S require. Table 2 compares the number of instantiations that could co-execute in our sample traces. For 8 processors, our system co-executes over twice as many instantiations as does I&S. For 32 processors, our system co-executes almost four times as many. While this demonstrates improvement, it does not take into account the cost of our differing execution model, including message passing times. We are thus optimistic that our system will show notable improvements in speedup. However, final results must await actual run times.

Conclusions

We presented a parallel production system called PARS. PARS is novel in that it (1) executes many rules simultaneously, (2) runs mostly asynchronously, and (3) uses a distributed memory machine.
Its advantages are that it (1) improves the available concurrency over serial and match-only parallel approaches by having multiple rule executions, (2) reduces a number of bottlenecks found in synchronous multiple rule execution systems, (3) simplifies the distribution problem over these synchronous systems, and (4) makes the techniques more generally available by using lower cost distributed versus shared memory machines.

PARS guarantees that the final results it produces are serializable. However, like other similar systems, PARS does not guarantee any particular order of rule execution. To help with this, we have added non-OPS5 control constructs. Unfortunately, space does not allow their presentation here (but see [Schmolze, 1988]).8

Overall, the speedups estimated for PARS are better than those for other similar systems. However, these speedups are not spectacular. This arises from the benchmarks having been written for serial systems. In the future, we will write and use more appropriate benchmarks. We will experiment with variations of the strategies presented herein, including dynamic distribution schemes. We will incorporate rule decomposition strategies (e.g., [Pasik and Stolfo, 1987; Pasik, 1989]). Finally, we will continue to design new control strategies for multiple rule execution systems.

8 Space also does not allow us to present our distribution algorithm (but see [Goel, 1990]).

Acknowledgements

Many thanks go to the students who have contributed either ideas or programs to the development of PARS, including David Cheung, Michael Futeran, Eric Gothelf, Kiran Shah, David Yates, and the 1987 class of CSC 280. In addition, special thanks go to Dan Miranker et al. for making the OPS5C system available to us for use in the PARS implementation.

References

[Boulanger, 1988] Albert Boulanger. The Modification of a Rule-based Diagnostic System for Routinized Parallelism on the Butterfly Parallel Computer.
Technical Report 6713, BBN Laboratories Inc., Cambridge, MA, January 1988.

[Forgy, 1979] C.L. Forgy. On the Efficient Implementation of Production Systems. PhD thesis, Dept. of Computer Science, Carnegie Mellon Univ., Pittsburgh, PA, 1979.

[Forgy, 1981] C.L. Forgy. OPS5 User's Manual. Technical Report CMU-CS-81-135, Dept. of Computer Science, Carnegie Mellon Univ., 1981.

[Goel, 1990] Suraj Goel. Rule Partitioning for Parallel Production Systems. Technical Report 90-2, Dept. of Computer Science, Tufts Univ., Medford, MA, 1990. In preparation.

[Gupta and Tambe, 1989] A. Gupta and M. Tambe. Suitability of Message Passing Machines for Implementing Production Systems. In Proc. of AAAI-88, St. Paul, Minnesota, August 1988. American Assoc. for Artificial Intelligence.

[Gupta, 1983] A. Gupta. Implementing OPS5 Production Systems on DADO. Technical Report CMU-CS-84-115, Dept. of Computer Science, Carnegie Mellon Univ., December 1983.

[Gupta, 1986] A. Gupta. Parallelism in Production Systems. PhD thesis, Dept. of Computer Science, Carnegie Mellon Univ., March 1986. (Also appears as Technical Report CMU-CS-86-122.)

[Ishida and Stolfo, 1985] T. Ishida and S.J. Stolfo. Towards the Parallel Execution of Rules in Production System Programs. In Proc. of the International Conference on Parallel Processing, 1985.

[Ishida, 1990] Toru Ishida. Methods and Effectiveness of Parallel Rule Firing. In Proc. of the 6th IEEE Conference on Artificial Intelligence Applications, March 1990.

[Lerner and Cheng, 1983] M. Lerner and J. Cheng. The Manhattan Mapper Expert Production System. Technical report, Dept. of Computer Science, Columbia Univ., May 1983.

[Miranker et al., 1989] Daniel P. Miranker, Chin-Ming Kuo, and James C. Browne. Parallelizing Transformations for a Concurrent Rule Execution Language. Technical Report TR-89-30, Dept. of Computer Science, Univ. of Texas at Austin, Austin, Texas, October 1989.

[Miranker, 1987] D. P. Miranker.
TREAT: A New and Efficient Match Algorithm for AI Production Systems. PhD thesis, Dept. of Computer Science, Columbia Univ., 1987. (Also appears as Report No. TR-87-03, Dept. of Computer Science, Univ. of Texas at Austin, 1987.)

[Moldovan, 1986] Dan I. Moldovan. A Model for Parallel Processing of Production Systems. In Proc. of the IEEE International Conference on Systems, Man and Cybernetics, pages 568-573, Atlanta, GA, 1986. IEEE.

[Morgan, 1988] Keith Morgan. BLITZ: A Rule-Based System for Massively Parallel Architectures. In Proc. of the 1988 ACM Conference on Lisp and Functional Programming, Snowbird, Utah, July 1988.

[Oflazer, 1987] K. Oflazer. Partitioning in Parallel Processing of Production Systems. PhD thesis, Dept. of Computer Science, Carnegie Mellon Univ., 1987. (Also appears as Tech. Rep. CMU-CS-87-114, March 1987.)

[Pasik and Stolfo, 1987] Alexander J. Pasik and Salvatore J. Stolfo. Improving Production System Performance on Parallel Architectures by Creating Constrained Copies of Rules. Technical Report CUCS-313-87, Columbia Univ., New York, New York, 1987.

[Pasik, 1989] Alexander J. Pasik. A Methodology for Programming Production Systems and its Implications on Parallelism. PhD thesis, Columbia Univ., New York, 1989.

[Raschid et al., 1989] Louiqa Raschid, Timos Sellis, and Chih-Chen Lin. Exploiting Concurrency in a DBMS Implementation for Production Systems. Technical Report CS-TR-2179, Dept. of Computer Science, Univ. of Maryland, College Park, MD, January 1989.

[Schmolze, 1988] James G. Schmolze. An Asynchronous Parallel Production System with Distributed Facts and Rules. In Proc. of the AAAI-88 Workshop on Parallel Algorithms for Machine Intelligence and Pattern Recognition, St. Paul, Minn., August 1988.

[Schmolze, 1989] James G. Schmolze. Guaranteeing Serializable Results in Synchronous Parallel Production Systems. Technical Report 89-5, Dept. of Computer Science, Tufts Univ., Medford, MA, October 1989.
[Sellis et al., 1987] Timos Sellis, Chih-Chen Lin, and Louiqa Raschid. Implementing Large Production Systems in a DBMS Environment: Concepts and Algorithms. Technical Report CS-TR-1960, Dept. of Computer Science, Univ. of Maryland, College Park, MD, December 1987.

[Stolfo, 1984] S.J. Stolfo. Five Parallel Algorithms for Production System Execution on the DADO Machine. In Proc. of AAAI-84. American Assoc. for Artificial Intelligence, 1984.

[Waltz, 1975] D.L. Waltz. Understanding Line Drawings of Scenes with Shadows. In P. Winston, editor, The Psychology of Computer Vision, pages 19-91. McGraw Hill, New York, NY, 1975.
Two Case Studies in Cost-Sensitive Concept Acquisition*

Ming Tan and Jeffrey C. Schlimmer
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
tan@cs.cmu.edu, schlimmer@cs.cmu.edu

Abstract

This paper explores the problem of learning from examples when feature measurement costs are significant. It then extends two effective and familiar learning methods, ID3 and IBL, to address this problem. The extensions, CS-ID3 and CS-IBL, are described in detail and are tested in a natural robot domain and a synthetic domain. Empirical studies support the hypothesis that the extended methods are indeed sensitive to feature costs: they deal effectively with varying cost distributions and with irrelevant features.

Introduction

Consider a simple mobile robot whose primary goal is to pick up cups and discard them. At some point the robot must determine which of its sensing procedures allow it to distinguish between different types of cups and other objects. In the general situation, an agent must decide which of its available information resources are actually useful. If sensing procedures could be executed instantaneously, this problem would be greatly simplified (thus this assumption is frequently made); however, sensing procedures may incur significant expense, limiting the ability of the robot to routinely execute large numbers of sensing procedures.

As the robot determines the relationships between sensing procedures and object types, it is learning from examples (LFE), where objects in its environment map to examples, sensing procedures map to features of examples, and appropriate ways to pick up objects map to classes of examples. Like sensing procedures, measuring features of examples may also be expensive. From this point of view, researchers have typically studied a degenerate case of LFE where feature expense is assumed to be negligible, and thus all (or nearly all) features are evaluated for each example.
In general, individual features incur different measurement costs, and thus cost-sensitive learning methods must limit the overall expense of evaluating features during construction and use of class descriptions. (For convenience, we refer to feature measurement cost as feature cost.) This may involve evaluating many inexpensive features or a few expensive ones as the situation dictates. This paper presents two incremental, cost-sensitive LFE methods, CS-ID3 and CS-IBL, generalizations of ID3 (Quinlan 1986) and IBL (Aha & Kibler 1989), and it empirically evaluates the four methods in the robot's domain and a synthetic domain.

*This research is supported by the National Science Foundation under grant IRI-8740522 and by a grant from Digital Equipment Corporation.

854 MACHINE LEARNING

Cost-Sensitive Learning Methods

Two notable cost-sensitive learning methods are Nunez's (1988) extension to ID3 and Gennari's (1989) focus-of-attention extension to COBWEB. Nunez's approach focuses on minimizing feature evaluation for test examples (i.e., during classification) but assumes that all features are evaluated for training examples (i.e., during learning). As such, it represents more of an approach to cost-sensitive classification rather than cost-sensitive learning. Gennari's approach, while addressing the more complex task of concept formation, assumes it is sufficient to reduce the number of features evaluated, independent of individual feature costs. Both of the LFE methods we describe here relax this latter assumption, instead assuming that features may vary in cost, and they attempt to reduce the overall cost of evaluating features during both construction and use of class descriptions. After describing the motivating domain in some detail, this section focuses on two effective LFE methods, namely ID3 and IBL, and presents extensions designed to make them sensitive to feature costs during learning and classification.
The Robot Cup Collecting Domain

Consider again the robot whose task is to collect discarded cups. For this task we have experimented with a Heath Hero 2000, a mobile robot with a five degree of freedom arm (torso, arm, elbow, pitch, and roll) and a two finger hand. The Hero robot has a wrist sonar with a range of 127 and an accuracy of 0.5 inches. To locate objects, the robot uses a ceiling-mounted vision system and a heuristic path-planner.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Table 1. ID3's feature selection measure, where F is a feature, V_i is the ith value of F, and {C} is the set of classes.

I(F) = Σ_{C_j ∈ {C}} −p(C_j) log₂ p(C_j) − Σ_i p(F = V_i) [ Σ_{C_j ∈ {C}} −p(C_j | F = V_i) log₂ p(C_j | F = V_i) ]

We have currently defined four sensing procedures for the robot by composing arm movements with intermittent sonar sensings: two forward-facing, rotating sweeps (bottom-up and left-to-right) - V-sweep and H-sweep, and two translating scans (forward-facing and downward-facing) - H-scan and D-scan. The costs of the sensing procedures vary from 28 to 60 seconds. Each sensing procedure yields one or more sensory-based features of the environment.1 For instance, V-sweep yields the height-deg feature. We have also defined five specific instantiations of more general grasping procedures for the robot: a front-facing, wide-open grasp - front-wrap, two overhead, downward, wide-open grasps - top-grip and top-wrap, and two overhead, edge grasps - low-pinch and high-pinch.

There are seven types of objects in the robot's world, five to be grasped (32 oz. plastic cups standing or lying down, 16 oz. waxed paper cups, 8 oz. Styrofoam cups, and tennis ball cans) and two to be ignored (rectangular boxes standing or lying down). Appropriate actions for these objects are high-pinch, top-wrap, low-pinch, top-grip, front-wrap, ignore, and ignore, respectively.

To map the robot's task into LFE, we utilize additional knowledge about the task.
Specifically, sensing procedures yield different values at different places because objects' appearances vary with viewing angle and distance. To reduce this 'noise,' sensing procedures are instantiated at particular distances and orientations with respect to the object measured. Rather than instantiate sensing procedures at all distances and orientations, a domain-dependent procedure identifies a few, preferred distances and orientations (Tan 1990). For the sensing procedures and objects listed above, this yields three preferred, relative distances (6, 13, and 26 inches) and four relative orientations (0, 90, 180, and 270 degrees). Therefore, the possible features of an example are the sensor-based features resulting from applying sensing procedures at each distance-orientation pair. The class of an example is simply its appropriate grasping procedure.

Feature costs arise from the complexity of a sensing procedure and the amount of motion required to get to a preferred distance-orientation pair. Moving costs are roughly proportional to distance: for 12 inches, the robot requires approximately 30 seconds to aim, move, and position itself. However, as the robot confronts its environment, it may encounter obstacles that force additional navigation and increase moving costs.

This domain requires sensitivity to the costs of measuring features. Brute-force methods that apply all sensing procedures as they build class descriptions would take more than 30 minutes to gather data on each potential cup. Cost-sensitive methods illustrate that less than 2 minutes is typically needed. In general, without cost-sensitive learning methods, users are penalized for including additional, potentially useful features and are forced to determine feature relevance, a job perhaps better left to the learning method.

1 These features are noisy and are filtered by sonar-specific parameters (cf. Tan & Schlimmer 1989).
Learning Decision Trees

ID3 is a LFE method that constructs decision trees to represent concepts (Quinlan 1986). Given a training set of examples with their features and classes, it produces a discrimination tree whose nodes indicate useful feature tests and whose leaves assign classes to examples. ID3 uses a divide-and-conquer algorithm, selecting feature tests to divide the training set and recursively building subtrees to describe partitions. To select new features, ID3 applies an information theoretic measure to estimate the correlation between example features and classes. Specifically, ID3 selects the feature that maximizes the equation in Table 1. The process terminates when the training examples have the same class or when there are no more features to test. To classify new examples, ID3 repeatedly tests the feature at the current subtree root and follows the matching branch. When it reaches a leaf, it predicts that the new example's class is the most common.

Cost-Sensitive ID3

First, to make ID3 cost-sensitive, its feature selection measure should be a function of feature costs as well as I. Nunez (1988) uses this approach, making feature selection inversely proportional to cost. Our cost-sensitive ID3 (CS-ID3) follows Nunez's approach and uses the function I²/C, where C is the cost of evaluating a feature.

Second, unlike ID3 and Nunez's work, CS-ID3 cannot assume that all features are evaluated for all examples, so during learning its termination criteria must consider empty examples which have only a class label but no feature values. In this case, if there is a class ambiguity at a leaf, CS-ID3 continues discrimination by evaluating the next least expensive feature. If further discrimination is required, CS-ID3 evaluates the next least expensive feature, and so on. This greedy strategy is biased toward using many, inexpensive features and against using a few, expensive ones.

TAN AND SCHLIMMER 855
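Table 1's measure and the I²/C twist can be sketched together. This is our code, assuming the standard information-gain reading of Table 1; the feature values and costs below are illustrative, not robot measurements.

```python
# Sketch: Table 1's measure I(F) (class entropy minus expected
# post-split class entropy, i.e., information gain) and CS-ID3's
# cost-sensitive score I^2 / C.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information(feature_values, labels):
    """I(F): reduction in class uncertainty from splitting on F."""
    n = len(labels)
    split = {}
    for v, c in zip(feature_values, labels):
        split.setdefault(v, []).append(c)
    return entropy(labels) - sum(
        len(part) / n * entropy(part) for part in split.values())

def cs_id3_score(feature_values, labels, cost):
    """CS-ID3 selects the feature maximizing I^2 / C."""
    return information(feature_values, labels) ** 2 / cost

labels = ["ignore", "ignore", "grasp", "grasp"]
# A perfectly predictive feature at 60 s vs. a weaker one at 28 s
# (costs echo the paper's 28-60 s sensing range):
print(cs_id3_score([0, 0, 1, 1], labels, 60.0))
print(cs_id3_score([0, 0, 0, 1], labels, 28.0))
```

Squaring I rewards informative features; dividing by C penalizes expensive ones, so a cheap feature can win despite lower information.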
Third, to facilitate comparison with incremental systems, CS-ID3 incrementally processes its training examples in two aspects: (a) it does not evaluate newly suggested features for previous examples, and (b) it incrementally revises its decision tree based on features currently preferred by the feature selection measure. The first constraint is enforced by evaluating for a new example only the features referenced by the decision tree during the new example's classification. A resulting implication is that CS-ID3 may require significantly more examples to converge than ID3. Also, training examples may not have a value for each tested feature; these partially-valued examples are used to compute the feature selection measure but are otherwise ignored. For the second constraint, ID5 demonstrates a computationally efficient method to revise trees (Utgoff 1989), but given our interest here in environmental costs and for simplicity of implementation, CS-ID3 simply rebuilds trees from scratch.

Detailed Example

Returning to the robot's domain, consider how CS-ID3 builds and uses a decision tree to determine how to grasp the seven object types. Assume the robot is 35 inches away from and at a 270° orientation to the objects. The first object is a box lying down. Since CS-ID3 has no prediction, it simply saves this empty example. The second object is a standing box. CS-ID3 builds a degenerate, empty tree, correctly predicts that this object should also be ignored, and saves it in the training set. The third object is a tennis ball can. This time the prediction is incorrect, and to distinguish between objects to be ignored and those grasped with front-wrap, CS-ID3 applies the cheapest sensory procedure, V-sweep, at the most convenient, preferred distance and orientation, 26 inches and 270°. The resulting value of 17 for the feature height-deg is added to this example before it is saved. The fourth object is an 8 oz. cup.
The simple tree applies V-sweep and returns a value of 6 for height-deg; CS-ID3 incorrectly predicts the tennis can's grasping procedure. The values 17 and 6 are sufficient for discrimination, so no other features are evaluated before the fourth object is stored. When the fifth object (16 oz. cup) is processed, CS-ID3's tree applies V-sweep and gets a value of 12. Because the current tree splits height-deg at 11.5, CS-ID3 incorrectly predicts the tennis ball can class. CS-ID3 could discriminate between 12 and 6 with an additional binary split, but it prefers to find features that discriminate in a single split. It applies the next cheapest sensory procedure, V-sweep at the next closest distance of 13 inches, before storing the fifth object.

As more examples are processed, CS-ID3's feature selection measure becomes more useful, and after 21 examples it converges to the tree depicted in Figure 1. Note that CS-ID3 prefers to reuse features (with their zero subsequent cost). For comparison to CS-IBL (in the following section), after 35 examples, CS-ID3 has made 8 prediction errors, saved all 35 examples, and applied sensory procedures an average of 1.5 times per example.

Learning Instance-Based Descriptions

Like ID3, instance-based learning (IBL) is also an effective LFE method (Aha & Kibler 1989). In its simplest form, instead of constructing an explicit, abstract representation of the training examples, IBL simply saves examples and relies on abstract matching. Given a new example to classify, IBL finds the most similar example in memory and predicts its class for the new example. The similarity measure IBL uses is the negation of the Euclidean distance between two examples, where numeric features are normalized to the range [0,1] and non-numeric features have a difference of 0 if equal in value and 1 otherwise. At its simplest IBL saves all new examples, but it is more effective to save only those which were incorrectly predicted.
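This similarity measure can be sketched directly; the feature names and ranges below are illustrative.

```python
# Sketch of IBL's similarity: negated Euclidean distance, with numeric
# features normalized to [0, 1] by their ranges and symbolic features
# contributing 0 if equal in value and 1 otherwise.
import math

def similarity(x, y, ranges):
    """x, y: feature dicts; ranges maps each feature to a (lo, hi)
    pair for numeric features, or None for a symbolic feature."""
    total = 0.0
    for f in x:
        if ranges[f] is None:                 # symbolic: exact match or not
            d = 0.0 if x[f] == y[f] else 1.0
        else:                                 # numeric: normalized difference
            lo, hi = ranges[f]
            d = (x[f] - y[f]) / (hi - lo)
        total += d * d
    return -math.sqrt(total)

# Two objects differing only in a normalized height reading:
ranges = {"height-deg": (0, 20), "shape": None}
a = {"height-deg": 6, "shape": "round"}
b = {"height-deg": 16, "shape": "round"}
print(similarity(a, b, ranges))  # -0.5
```

Negating the distance means "most similar" and "maximum similarity" coincide, so the nearest stored example is simply the argmax.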
Other, more sophisticated extensions are possible, and they appear to improve performance in difficult learning situations (Aha & Kibler 1989).

Cost-Sensitive IBL

To make IBL cost-sensitive, the classification process it uses to find similar, stored examples must specify which features (and how many) should be evaluated for a new example. Following the spirit of IBL, our approach (CS-IBL) uses stored examples to serve as templates for feature evaluation. Instead of evaluating all features of all stored examples, CS-IBL repeatedly selects one stored, cost-effective example and evaluates one of its features for the new example until the closest example has been found.

Figure 1. CS-ID3's decision tree for distance = 35 inches, orientation = 270°. Within nodes, lines are: the appropriate motion command and its cost in seconds, the sensory procedure to be applied, and the feature to be evaluated. Leaves in the tree correspond to predicted grasping procedures.

First, because all stored examples are equally close to a new, empty example, CS-IBL selects the closest example that: (a) has features that are not yet evaluated for the new example, (b) has common feature values, and (c) uses inexpensive features. Specifically, CS-IBL selects the example that maximizes the ratio of expected match success to cost. The former is estimated by the product of independent feature-value probabilities, and the latter, by summing over feature costs times their decreasing likelihood of being evaluated. Eq. 1 implements this ratio, where P_ij is the frequency that feature j has the value it does for stored example i, {F'} is the set of features evaluated for the stored example but not for the new example (sorted in decreasing order of P_ij/C(F_j)) (cf. Cox 1988), and C(F_j) is the cost of evaluating feature j.
rI- 3E{F’} fij c jE{F’) c(Fj) X II&:(1 - f’ilc) 0) Second, given an example to guide feature selec- tion, CS-IBL chooses a feature which has a likely value and is inexpensive, or maximizes Pij /C( Fj ). CS-IBL repeats this and the above step until the upper bound on the distance from the selected stored example is less than the lower bound for one from any other class. This stopping criteria is conservative and reminiscent of Gennari’s (1989) criteria for focus-of-attention in COBWEB. A final modification ensures that CS-IBL does not end up storing a set of empty or insufficiently described examples. After a prediction error, CS-IBL evaluates one or more additional features for the new example if it appears identical to the stored one. New features to evaluate are first drawn from the closest stored example and then on a cost basis until the two examples are sufficiently different. Detailed Example Returning again to the robot’s domain, CS-IBL initially stores examples and evaluates features in a manner similar to CS-ID3, but later pro- cessing reveals three primary differences. First, unlike CS-ID3 which saves all objects, CS-IBL does not save those whose class was correctly predicted. Second, CS- IBL is lazier with respect to expanding the set of evalu- ated features than CS-IDS. Whereas CS-ID3 evaluates more features when a single binary split is insufficient for discrimination, CS-IBL waits until two examples from different classes cannot be discriminated at all. Third, if CS-IBL is mislead by the cost-driven heuris- tics to evaluate an irrelevant feature value, the mistake is subsequently propagated, and whenever a stored ex- ample with that feature is selected, the irrelevant fea- ture will be re-evaluated for new examples. Though not as bad as evaluating all irrelevant features (as IBL does), it is not as good as evaluating none (as CS-ID3 does). 
CS-IBL converges after processing 35 examples and has made 11 errors, saved 12 examples, and applied sensory procedures an average of 1.7 times per example. Compared to CS-ID3, this represents slower convergence, more errors, fewer saved examples, and comparable numbers of features evaluated. At an abstract level, one difference between the two methods is that CS-IBL avoids committing to a particular test ordering, and we suspect that this may make it easier to tolerate feature noise and dynamic changes in feature costs.

Empirical Evidence

One might expect that LFE methods which evaluate all example features are more expensive and incur fewer errors prior to convergence than cost-sensitive methods. The empirical results of this section bear this out. This leaves two open questions: how cost efficient are the methods, and how many errors do they incur prior to convergence given different feature cost distributions and numbers of irrelevant features? To provide
Table 2 summarizes two dependent measures prior to convergence given 24 relevant features plus 0 or 12 irrelevant features: (a) the average cost of fea- ture evaluation per example during classification and learning, and (b) the total errors incurred. Given an initial distance of 35 inches, data are an average over five object orders and the four preferred orientations. Irrelevant features were designed to simulate out-of- range sonar readings and had constant values; these features were assigned random costs consistent with the variance of other feature costs. Note that the cost- sensitive methods are highly cost-efficient compared to ID3’s and IBL’s strategy of evaluating all features for each example, representing an order of magnitude sav- ings. In terms of errors, ID3 and IBL outperform their cost-sensitive derivatives by approximately two to one. Further note that the cost-sensitive methods appear to scale well given irrelevant features; adding 50% more features results in at most a 9% increase in average cost and at most a 33% increase in errors. This latter observation is somewhat surprising and is investigated further in the next subsection. ‘Using A Synthetic Domain The robot domain reflects realistic properties of LFE, but it can be difficult to accurately assess those prop- erties. Synthetic domains, conversely, afford precise control over experimental conditions. The experi- ments in this subsection use a simple Boolean concept, (A A B) v (C A D), t o study the effects of differing costs and numbers of irrelevant features. Given four 858 MACHINE LEARNING q CS-ID3 q CS-IBL 0 ID3 0 IBL 0 500 1K 1.5K 2K N of Examples Figure 2. Total errors prior to convergence given four irrelevant features and moderately uneven feature costs. 
irrelevant features and moderately uneven costs (Condition 3 below), Figure 2 depicts total errors prior to convergence for each of the four methods ID3, CS-ID3, IBL, and CS-IBL.² ID3 yields the fewest errors; IBL comes in second (also the slowest to converge) with CS-ID3 and CS-IBL bringing up the rear.

For the same conditions, Figure 3 depicts the number of features measured by each of the methods as training progresses. The cost-sensitive methods measure considerably fewer features than ID3 and IBL. As both cost-sensitive methods search the space of feature sets, CS-ID3 settles to evaluating a near optimum number of features, but CS-IBL does not. We suspect that this is an artifact of CS-IBL's conservative stopping criteria.

To be efficient, cost-sensitive methods should be sensitive to differential feature costs. Specifically, they should use less expensive features when available. Standard deviation is one quantitative measure of feature cost skew; identical costs have a zero standard deviation, and uneven costs have a standard deviation much greater than one (when some features are much less expensive than others).

²Results are averaged over five executions; bars denote standard deviation.

[Figure 3. Features evaluated per example prior to convergence given four irrelevant features and moderately uneven feature costs, plotted for CS-ID3, CS-IBL, ID3/IBL, and the optimal level.]

Given this metric, we can vary the relative costs of features and observe the resulting cost-sensitive behavior of the four methods. Using the simple Boolean concept above, we divide 40 cost units among groups of 4 features to yield 4 conditions: (1) each feature costs 10, (2) 1 feature costs 1, and 3 features cost 13, (3) 2 features cost 1, and 2 features cost 19,³ and (4) 3 features cost 1, and 1 feature costs 37. When irrelevant features are included, they are assigned the same costs as relevant features.
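As an illustration of the skew metric, the four cost conditions just described can be checked directly. This is our sketch, not the paper's code, and the paper does not state whether its axis uses the population or the sample form of the standard deviation; the sketch uses the population form.

```python
# Sketch (ours): the standard-deviation measure of feature-cost skew,
# applied to the four conditions that divide 40 cost units among 4 features.
import statistics

conditions = {
    "condition 1 (even)":             [10, 10, 10, 10],
    "condition 2 (slightly uneven)":  [1, 13, 13, 13],
    "condition 3 (moderately uneven)": [1, 1, 19, 19],
    "condition 4 (highly uneven)":    [1, 1, 1, 37],
}

for name, costs in conditions.items():
    assert sum(costs) == 40   # every condition spends the same cost budget
    print(name, round(statistics.pstdev(costs), 2))
```

Condition 1 yields a standard deviation of 0 and condition 4 the largest spread, matching the ordering used in Figures 4 and 5.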
Given four irrelevant features, Figure 4 depicts the average feature costs per example, and Figure 5, total errors prior to convergence as a function of different relative feature costs. In terms of average cost, both cost-sensitive methods exhibit close to optimal asymptotic performance. Their costs are also considerably lower than those of ID3 and IBL. In terms of errors, the methods separate naturally into three groups, from best to worst: ID3, IBL, and the cost-sensitive methods. This latter, poor performance may arise because the cost-sensitive methods must search for an effective set of features to evaluate.

Cost-sensitive methods should also be sensitive to the number of features, as focus-of-attention methods are (cf. Gennari 1989). Using the simple Boolean concept, we added 0, 2, 4, and 8 irrelevant features that have random binary values. Given moderately uneven costs (Condition 3), Figure 6 depicts average feature costs per example, and Figure 7, total errors prior to convergence as a function of the number of irrelevant features. In terms of costs, the cost-sensitive methods again perform at a near optimal level. (CS-ID3 appears slightly below due to its early behavior.) In terms of errors, the methods appear to fall into three groups, from best to worst: ID3, CS-ID3, and the instance-based methods. Both of the instance-based methods incur a sharply increasing number of errors as irrelevant features increase, something that may be remedied by more sophisticated versions of IBL (cf. Aha & Kibler 1989).

³For simplicity of analysis, in this condition features A and B cost 1.

[Figure 4. Average feature cost per example prior to convergence given four irrelevant features, plotted for CS-ID3, CS-IBL, ID3/IBL, and the optimal level against the standard deviation of feature costs (0-17).]

[Figure 5. Total errors prior to convergence given four irrelevant features, plotted against the standard deviation of feature costs (0-17).]
Unlike the lower performance of CS-ID3 compared to ID3, CS-IBL and IBL appear equal.

Summary

This paper addresses the general problem of learning from examples when features have non-trivial costs. Though this work utilizes inductive methods, complementary research has investigated similar issues using analytic, or explanation-based, learning methods. For instance, Keller (1987) describes a system that trades off the operationality of a concept description for predictive accuracy: typically expensive, fine-grained tests are pruned for the sake of overall improvement. Like Keller's system, CS-ID3 and CS-IBL also attempt to make concept descriptions more operational by minimizing feature measurement costs, but they do not trade off cost for accuracy.

[Figure 6. Average feature costs per example prior to convergence given moderately uneven costs, plotted against 0, 2, 4, and 8 irrelevant features.]

[Figure 7. Total errors prior to convergence given moderately uneven costs, plotted against the number of irrelevant features.]

Despite encouraging empirical evidence supporting the hypothesis that CS-ID3 and CS-IBL are sensitive to costs, there are still several open questions: how relevant is CS-IBL's classification flexibility (as compared to CS-ID3), how can cost-sensitive methods reason about parallel feature evaluation, and can cost-sensitive methods tolerate noise? Notwithstanding these, modifying methods to deal with feature costs appears feasible, and we suspect necessary, in future machine learning research.

Acknowledgements

We would like to thank Tom Mitchell and Long-Ji Lin for the robotics hardware used in this research, and David Aha, Rich Caruana, Klaus Gross, and Tom Mitchell for their comments on a draft of this paper. Thanks also to the 'gripe' group for providing a consistent and reliable computing environment.

References

Aha, D. W., & Kibler, D. 1989. Noise-Tolerant Instance-Based Learning Algorithms.
In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 794-799. Detroit, MI: Morgan Kaufmann.

Cox, L. A. 1988. Designing Interactive Expert Classification Systems That Acquire Expensive Information 'Optimally.' In Proceedings of the European Knowledge Acquisition Workshop for Knowledge-Based Systems. Bonn, Germany.

Gennari, J. H. 1989. Focused Concept Formation. In Proceedings of the Sixth International Workshop on Machine Learning, 379-382. Cornell, NY: Morgan Kaufmann.

Keller, R. M. 1987. Defining Operationality for Explanation-Based Learning. In Proceedings of the Sixth National Conference on Artificial Intelligence, 482-487. Seattle, WA: Morgan Kaufmann.

Nunez, M. 1988. Economic Induction: A Case Study. In Proceedings of the Third European Working Session on Learning, 139-145. Glasgow, Scotland: Pitman.

Quinlan, J. R. 1986. Induction of Decision Trees. Machine Learning 1(1):81-106.

Tan, M. 1990. CSL: A Cost-Sensitive Learning System for Sensing and Grasping Objects. In Proceedings of the 1990 IEEE International Conference on Robotics and Automation. Cincinnati, OH.

Tan, M., and Schlimmer, J. C. 1989. Cost-Sensitive Concept Learning of Sensor Use in Approach and Recognition. In Proceedings of the Sixth International Workshop on Machine Learning, 392-395. Cornell, NY: Morgan Kaufmann.

Utgoff, P. E. 1989. Incremental Induction of Decision Trees. Machine Learning 4(2):161-186.
Incremental Non-Backtracking Focusing: A Polynomially Bounded Generalization Algorithm for Version Spaces

Benjamin D. Smith
University of Southern California
Department of Computer Science
Los Angeles, CA 90089-0782
benjamin@castor.usc.edu

Abstract

The candidate elimination algorithm for inductive learning with version spaces can require both exponential time and space. This article describes the Incremental Non-Backtracking Focusing (INBF) algorithm, which learns strictly tree-structured concepts in polynomial space and time. Specifically, it learns in time O(pnk) and space O(nk), where p is the number of positives, n the number of negatives, and k the number of features. INBF is an extension of an existing batch algorithm, Avoidance Focusing (AF). Although AF also learns in polynomial time, it assumes a convergent set of positive examples, and handles additional examples inefficiently; INBF has neither of these restrictions. Both the AF and INBF algorithms assume that the positive examples plus the near misses will be sufficient for convergence if the initial set of examples is convergent. This article formally proves that for tree-structured concepts this assumption does in fact hold.

Introduction

The candidate elimination (CE) algorithm [Mitchell, 1982] learns a concept from a set of positive and negative examples of that concept. The concept to be learned is called the target concept. A concept is usually described by a tuple of features, each of which can have one of several values. These values can be discrete, tree-structured, or lattice-structured. The features and their possible values comprise the generalization language for a particular domain. The CE algorithm learns the target concept by searching a hypothesis space consisting of all concepts described by the generalization language.
Paul S. Rosenbloom
Information Sciences Institute
University of Southern California
Marina del Rey, CA 90292
rosenbloom@&xa.isi.edu

*This project is partially supported by the National Aeronautics and Space Administration under cooperative agreement number NCC 2-538. Thanks go to Haym Hirsh, Yoram Reich, and Richard Young for their helpful comments on earlier drafts of this article.

The concepts are organized into a lattice defining a partial order of generality on the concepts. This space is searched bidirectionally for the target concept. The top-down search maintains a set, G, of the maximally general concepts in the space which are still candidates for the target concept. A negative example causes all concepts in G which cover it to be specialized just enough to exclude the example. A positive example prunes concepts in G which do not cover it. The bottom-up search maintains a set, S, of maximally specific candidates. A positive example generalizes concepts in S which do not include it just enough to include the example. A negative example prunes concepts from S which do not cover it. Initially, S contains the first positive example and G contains the maximally general concept in the space. The examples are then processed incrementally until convergence (S = G). At this point, there is only one concept remaining in S ∪ G which is consistent with the examples (assuming a conjunctive concept language). This is the target concept, C.

That the CE algorithm can consume space exponential in the number of negative examples has been shown in [Haussler, 1988]. This rapid growth is caused by fragmentation of the G set. When G is specialized by a common class of negative examples known as far misses (described later), each concept in G is specialized into several concepts (always the same number for a given far miss).
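The S-set update over tree-structured features can be sketched concretely. The feature tree, value names, and helper functions below are our own hypothetical illustration, not the paper's code; only the minimal-generalization step for a (singleton) S set is shown.

```python
# Sketch (ours) of minimal generalization over tree-structured features.
# Each feature value's parent is given by a map; "ANY" is the root.
PARENT = {"d": "b", "e": "c", "f": "c", "b": "ANY", "c": "ANY"}

def subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def covers(concept, example):
    """A conjunctive concept covers an example feature by feature."""
    return all(subsumes(c, e) for c, e in zip(concept, example))

def generalize(s, positive):
    """Generalize the single S concept just enough to include a positive:
    walk each feature up the tree to the least value subsuming both."""
    out = []
    for c, e in zip(s, positive):
        while not subsumes(c, e):
            c = PARENT[c]
        out.append(c)
    return tuple(out)

s = ("d", "e")                 # S initialized to the first positive example
s = generalize(s, ("d", "f"))  # a second positive example
print(s)                       # -> ('d', 'c')
```

Because each generalization is the least one that covers the new positive, S stays maximally specific, as the CE algorithm requires.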
Each of these specializations appears in the specialization of G, so a given far miss can be thought of as fragmenting each concept in G into several concepts. Fragmentation means G can grow exponentially in the number of far misses. Since the candidate elimination algorithm requires examination of every concept in G for each new example, time is also exponential. When learning concepts with all tree-structured features, S contains exactly one concept [Bundy et al., 1985] and thus does not impact the time and space bounds. So for such concepts, if G could be kept from fragmenting, then the time and space bounds could possibly be brought into polynomial ranges in the number of examples. This is the central principle behind the Incremental Non-Backtracking Focusing (INBF) algorithm.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The INBF algorithm is an extension of the avoidance focusing (AF) algorithm [Bundy et al., 1985]. In general, focusing algorithms for tree- or lattice-structured concepts [Young and Plotkin, 1977] bound the search space with two concepts, upper and lower, instead of with two sets, S and G. For a negative example, upper is specialized on one feature to exclude the value of the example on that feature. If this feature's value is not included by the target concept, then this is a correct specialization; otherwise the specialization leads to incorrect concept formation. When the mistake is discovered, the focusing algorithm backtracks and selects a different feature on which to specialize upper. Backtracking is focusing's equivalent of fragmentation.
The AF algorithm [Young and Plotkin, 1977; Bundy et al., 1985] is a batch algorithm that avoids backtracking by assuming that it has been given a convergent set of examples, processing all the positive examples first so that lower is at the target concept, and then processing just the near misses [Winston, 1975] to converge upper to the target concept. This assumes that the near misses will always be sufficient for convergence. Converging upper to the target concept confirms that the examples are consistent. A near miss is an example for which only one feature is not covered by the target concept. Since lower is at the target concept, all the near misses can be correctly identified, along with the feature that is not covered by the target concept. Since this feature is known for each near miss, choosing this feature always leads to a correct specialization, and so the AF algorithm never backtracks.

INBF uses a similar approach to avoid backtracking (fragmentation). However, there are some inherent drawbacks to the AF algorithm which INBF avoids. AF assumes that there are enough positive examples for lower to converge to the target concept. If there are insufficient positive examples, so that lower is below C, then correct identification of all the near misses cannot be guaranteed. Furthermore, if more examples are presented later, they may generalize lower, in which case the original negatives must be reprocessed since they may now be identifiable as near misses. But because AF is a batch algorithm, reprocessing the old negatives is very inefficient, especially with many new example sets. Lastly, many induction problems require an incremental algorithm, not a batch algorithm. Though AF could be rerun with the new examples, it would be terribly inefficient, even when modified to save as much work as possible from previous batches.
This article introduces the INBF algorithm, an incremental focusing algorithm whose space and time bounds are polynomial in the number of positive examples, negative examples, and features. This algorithm is limited to learning tree-structured concepts¹ in conjunctive languages. Being incremental, it handles additional examples efficiently, and it uses less space than its batch counterpart. Unlike the AF algorithm, the INBF algorithm (with extensions) can learn from an incomplete set of positive examples. In this case, the concept learned will be incomplete, but equivalent to the one learned by the CE algorithm from the same set of examples.

¹Discussion with Richard Young leads us to believe that INBF should be extensible to upper semi-lattices, as described in [Young and Plotkin, 1977].

The rest of this article first describes the fragmentation problem, then explains the INBF algorithm and how it avoids this problem. This is followed by time and space comparisons of the INBF, AF, and CE algorithms. Next, it is proven that the near misses plus the positive examples are indeed sufficient for convergence if the entire set of examples is convergent, as is required by both the AF and INBF algorithms. Though several authors have suggested that this is the case [Young and Plotkin, 1977; Bundy et al., 1985; Hirsh, 1990], no formal proof appears in the literature. Finally, an extension to INBF is suggested that handles non-convergent examples, and limitations of the INBF algorithm are discussed.

Fragmentation

When G is specialized by a negative example, the specialization must cover the target concept but not the negative example. Only specializations of G made on certain features, called guilty features, will satisfy these criteria. Specifically, guilty features are exactly those features, f, for which feature f of the negative example is not covered by feature f of the target concept.
Fragmentation occurs because it is not generally possible to distinguish the guilty features of a negative example from the non-guilty features. At best, only some of the non-guilty features can be identified. These are exonerated features. The non-exonerated features include all of the guilty features and some non-guilty features. This is the set of possibly guilty features. Since the CE algorithm cannot accurately select a single guilty feature from this set, it specializes G on all of the features in the set. Specializations made from non-guilty features will not cover the target concept and will eventually be pruned by positive examples. Only specializations made from the guilty features will remain. Each possibly guilty feature makes its own specialization of each concept in G. So if there are n possibly guilty features, then for every concept in G there are n concepts in the specialization of G. G has fragmented.

But why can't the guilty features be distinguished from the others? Guilty features are those features of a negative example which are not covered by the target concept. But the target concept is not known a priori, so the guilty features cannot always be distinguished from the others. Some features, though, can be exonerated as definitely not guilty. All concepts subsumed by S are subsumed by the target concept, so if feature f of the negative example is subsumed by feature f of a concept in S, then f cannot be guilty.

SMITH AND ROSENBLOOM 849

However, some concepts subsumed by the target concept are not subsumed by S, so some non-guilty features cannot be exonerated. Since not all of the non-guilty features can be exonerated, the guilty features cannot be distinguished from the non-guilty features. The more general S is, the more concepts it covers and the more non-guilty features it can exonerate. When S reaches the target concept, all of the non-guilty features can be exonerated.
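The exoneration test can be made concrete. In this hypothetical sketch (our names and feature tree, not the paper's code), a feature of a negative example is exonerated when the corresponding feature of S subsumes it; the remaining indices are the possibly guilty features, and a near miss is a negative with exactly one of them.

```python
# Sketch (ours): computing the possibly guilty features of a negative
# example relative to S, over a small hypothetical feature tree.
PARENT = {"d": "b", "e": "c", "f": "c", "b": "ANY", "c": "ANY"}

def subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def possibly_guilty(s, negative):
    """Feature indices of `negative` not exonerated by S."""
    return [i for i, (si, ni) in enumerate(zip(s, negative))
            if not subsumes(si, ni)]

s = ("b", "e")                           # current (singleton) S concept
print(possibly_guilty(s, ("d", "f")))    # feature 0 exonerated -> [1]
```

As the text notes, the more general S becomes, the more features this test exonerates, so a negative that starts out as a far miss can later be recognized as a near miss.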
The only kind of negative example guaranteed not to fragment G is a near miss. These examples have only one possibly guilty feature, so each concept in G maps to only one concept in G's specialization. By contrast, a negative example with more than one possibly guilty feature is called a far miss. The more specific S is, the fewer non-guilty features it can exonerate, and the more features it must therefore label as possibly guilty. Thus a negative example which has several possibly guilty features (a far miss) with respect to a specific S may have only one possibly guilty feature (be a near miss) with respect to a more general S. Since S increases monotonically in generality to the target concept, examples which are initially classifiable only as far misses may later be identifiable as near misses. The INBF algorithm relies on this phenomenon.

The Algorithm

The INBF algorithm is an incremental version of the batch AF algorithm. Like the AF algorithm, the INBF algorithm exploits the fact that upper is guaranteed to converge without backtracking to the target concept when specialized only by all of the near misses from a convergent set of examples. The key is guaranteeing that all of the near misses can be recognized, and therefore used. But a near miss could possibly be classified as a far miss if lower is more specific than C. The batch algorithm guarantees identification of all near misses by processing all of the positive examples first so that lower is at C (assuming a convergent set of examples). Then just the near miss negatives are processed, and the far misses are discarded.

An incremental algorithm, however, cannot expect all the positive examples first, so lower is more specific than C during most of the learning. Therefore INBF cannot simply discard far misses as AF does: near misses that should be processed might be discarded because they were temporarily misclassified as far misses.
Since these far misses may later be recognizable as near misses as lower converges to C, INBF saves the far misses on a list until they can be recognized. Every time lower is generalized, the list of saved far misses is scanned for far misses which are now recognizable as near misses. These near misses are used to specialize upper, and are then removed from the list. Figure 1 presents the INBF algorithm. lower is initialized to the first positive example, p1; upper is initialized to the maximally general concept, (ANY, ..., ANY); and the list of saved negative examples, wait-list, is initialized to empty.

INBF()
  lower = p1; upper = (ANY, ..., ANY); wait-list = {}
  LOOP until upper = lower OR collapse
    For a positive example, p
      Generalize lower to include p
      FOR each negative example, n, in wait-list DO
        PROCESS-NEGATIVE(n)
    For a negative example, n
      IF n is not covered by (less than) upper THEN
        Discard n
      ELSE
        PROCESS-NEGATIVE(n)

PROCESS-NEGATIVE(n)
  Compare n and lower to determine number of possibly guilty features.
  CASE (number of possibly guilty features) OF
    guilty = 0 : collapse
    guilty = 1 : Specialize upper on the single guilty feature of n.
                 Remove n from wait-list.
    guilty > 1 : Add n to wait-list if it's not already there.

Figure 1: Incremental Non-Backtracking Focusing

If the algorithm receives a positive example, lower is generalized to include it. Some far misses may be recognizable as near misses by the new lower, so wait-list is scanned for near misses. upper is specialized to exclude any newly discovered near misses, and the near misses are removed from the list of saved negatives. If after an example has been processed upper and lower are equal, then they have converged to the target concept, and the algorithm has succeeded. Otherwise, the next example is processed. Negative examples are first compared to upper. Those not less than upper would be classified as negative anyway, and are discarded. The CE algorithm performs a similar test.
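The Figure 1 pseudocode can be rendered as a small runnable sketch. It uses the integer vector notation developed later in the article (feature i of a concept is its distance above p1 in the feature tree), so generalization is a component-wise max and specializing on a near miss sets upper_i = n_i − 1. All names and the toy trace are our own illustration, not the authors' code.

```python
# Runnable sketch (ours) of the INBF loop of Figure 1, in vector notation.
BIG = 10**9   # stands in for ANY on every feature path

def process_negative(upper, lower, n, wait_list):
    """The PROCESS-NEGATIVE case analysis of Figure 1."""
    guilty = [i for i in range(len(n)) if n[i] > lower[i]]
    if not guilty:
        raise ValueError("version space collapsed")
    if len(guilty) == 1:                      # near miss
        i = guilty[0]
        upper = upper[:i] + (n[i] - 1,) + upper[i + 1:]
        if n in wait_list:
            wait_list.remove(n)
    elif n not in wait_list:                  # far miss: save for later
        wait_list.append(n)
    return upper

def inbf(examples):
    """`examples` is a list of (vector, is_positive); the first is positive."""
    lower = examples[0][0]
    upper = tuple(BIG for _ in lower)
    wait_list = []
    for vec, is_positive in examples:
        if is_positive:
            lower = tuple(max(l, v) for l, v in zip(lower, vec))
            for n in list(wait_list):         # rescan saved far misses
                upper = process_negative(upper, lower, n, wait_list)
        elif all(v <= u for v, u in zip(vec, upper)):  # else discard n
            upper = process_negative(upper, lower, vec, wait_list)
        if upper == lower:
            break                             # converged
    return lower, upper

# A far miss (2, 1) is saved, then recognized as a near miss once a
# positive generalizes lower; a second near miss completes convergence.
print(inbf([((0, 0), True), ((2, 1), False), ((1, 1), True), ((0, 2), False)]))
```

In the printed trace both bounds end at (1, 1): the saved far miss is reclassified after the second positive, exactly the wait-list behavior the text describes.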
Otherwise, the negative is compared to lower to determine the number of possibly guilty features. If the example is guilty in zero features, then the version space is inconsistent. If the example is a near miss, then it specializes upper. If the example is a far miss, then it is saved on wait-list until it can be recognized as a near miss.

Time/Space Summary

The INBF algorithm uses considerably less time and space than the CE algorithm in the worst case, and the same amount in the best case. The worst case bounds are derived as follows. Let p be the number of positive examples, n the number of negatives, and k the number of features. Each concept requires O(k) units of storage (for k features). INBF maintains only three variables: upper, lower, and wait-list. upper and lower are single concepts, and wait-list clearly has an upper bound of n concepts. Thus the total space requirement is O(k) + O(k) + O(nk) = O(nk). PROCESS-NEGATIVE can be done in O(k) time if its argument, n, a negative example, can be added to or deleted from wait-list in constant time, and if the guilt of a feature of n can be determined in constant time. The latter is possible if examples are represented in vector notation, explained in the next section. Additions can be done in constant time by adding to the front of the list. For deletions, INBF knows where n is in wait-list at the time INBF calls PROCESS-NEGATIVE. If we pass this pointer to PROCESS-NEGATIVE along with n, then n can be deleted from wait-list in constant time. Each positive example calls PROCESS-NEGATIVE once for each example in wait-list. Each negative example makes a single call. Total time is thus O(pnk) + O(nk) = O(pnk). Derivation of the other results appears in [Smith and Rosenbloom, 1990].

Worst Case Analysis
           INBF       AF            CE
Time       O(pnk)     O(pk + nk)    O(kn^(k+1))
Space      O(nk)      O(pk + nk)    O(kn^k)

The AF algorithm performs poorly on additional batches.
All the negatives seen so far must be reprocessed for each new batch in order to find near misses not recognized in previous batches. The following is based on b batches of one positive and one negative example each.

Worst Case Analysis: Multiple Batches
           INBF       AF         CE
Time       O(b²k)     O(b³k)     O(kb^(k+1))
Space      O(bk)      O(bk)      O(kb^k)

The best case for all the algorithms occurs when all the negative examples are near misses. For the INBF algorithm, the list of saved negatives is always empty. For the CE algorithm, G never fragments.

Best Case Analysis
           INBF          AF            CE
Time       O(pk + nk)    O(pk + nk)    O(pk + nk)
Space      O(k)          O(pk + nk)    O(k)

Near Miss Convergence Theorem

The Near Miss Convergence Theorem is the crucial result upon which both the INBF and AF algorithms rely: that the positive examples plus the near misses alone are sufficient for convergence if the entire set of examples (positives plus negatives) is convergent. The theorem and proof are given formally below. The proof relies on a vector notation which is explained first, followed immediately by the proof of the theorem.

P = {e | e is a positive example}
F = {e | e is a far miss negative example}
M = {e | e is a near miss negative example}
N = {e | e is a negative example} = F ∪ M

Theorem 1 (Near Miss Convergence Theorem) If a set of strictly tree-structured examples P ∪ F ∪ M of a concept is sufficient for convergence (G = S = C) of the Candidate Elimination Algorithm, then P ∪ M is also sufficient for convergence.

Vector Notation

Concepts and examples with tree-structured attributes can be represented as vectors. This notation is used throughout the proof to simplify the arguments. Each feature of a concept (or example) is a component of the vector and is assigned an integer indicating its distance from p1, the first positive example.
The distance between feature i of a concept and feature i of p1 is the number of nodes that feature i of p1 must be moved up in the feature tree to reach a value which subsumes feature i of the concept.

For example, assume we have a concept with two features, each feature having the structure shown in Figure 2. If p1 = (d, e), then the vector for (b, b) would be (1, 2).

        ANY
       /   \
      b     c
      |    / \
      d   e   f

Figure 2: Feature Tree

Feature i of vector p is denoted p_i. For any given tree-structured feature, i, the values that could be part of the target concept must lie between p1_i and ANY. This constrains the values to a single path from ANY to p1_i. Thus, vector notation indicates, for a particular value, where that value lies on the path. Values off the path are never part of the target concept, so each vector defines a unique concept. For example, if p1_i = e, then the only possible values for C_i are e, c, and ANY: 0 refers to e, 1 to c, and 2 to ANY.

If A_i ≥ B_i then A subsumes B on feature i. If A_i ≥ B_i for all i, then concept A subsumes concept B (A ≥ B). Vectors form a partial order of generality, with smaller vectors being more specific than larger vectors. Concept A can be generalized to include concept B by setting each feature, i, of A to max(A_i, B_i). Then A_i ≥ B_i for all i, so A subsumes B. Furthermore, A is the maximally specific concept which does so. A can be specialized to exclude B by causing A to not subsume B on some feature, i. This is done by setting A_i to B_i − 1. Since B_i − 1 is the largest integer which fails to subsume B_i, A is the maximally general concept which excludes B on feature i. Aside from notational simplicity, this method also makes specialization and generalization very fast operations. The usual tree searches can be avoided altogether once a concept's vector has been established with respect to p1.
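The distance definition can be checked mechanically. This sketch (our code, with the Figure 2 tree written as a parent map) encodes a concept as a vector relative to p1 and reproduces the paper's example, where (b, b) encodes to (1, 2) for p1 = (d, e).

```python
# Sketch (ours): vector encoding of tree-structured values relative to p1.
PARENT = {"d": "b", "e": "c", "f": "c", "b": "ANY", "c": "ANY"}

def ancestors(v):
    """v together with its chain of ancestors up to the root."""
    chain = [v]
    while v in PARENT:
        v = PARENT[v]
        chain.append(v)
    return chain

def encode(concept, p1):
    """For each feature, count how far p1's value must move up the tree
    before it subsumes the concept's value (defined even for values that
    lie off p1's path, like b relative to e)."""
    vec = []
    for c, p in zip(concept, p1):
        steps = 0
        while p not in ancestors(c):   # p subsumes c iff p is an ancestor
            p = PARENT[p]
            steps += 1
        vec.append(steps)
    return tuple(vec)

print(encode(("b", "b"), ("d", "e")))   # the paper's example -> (1, 2)
```

Once every concept and example has been encoded this way, subsumption, generalization (max), and specialization (B_i − 1) are constant-time integer operations per feature, which is what the O(k) bound for PROCESS-NEGATIVE relies on.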
Definitions, Lemmas, and Observations

Definition 1 (Least Upper Bound of G: ⊔G) The Least Upper Bound of G, written ⊔G, is the vector (⊔G_1, ⊔G_2, ..., ⊔G_k), where each ⊔G_i is the most specific value for feature i such that for all g ∈ G, ⊔G_i ≥ g_i.

Observation 1 For tree-structured features, ⊔G_i can also be defined as the least common parent of the set of values taken by concepts in G on feature i. That is, ⊔G_i is the least common parent of the values in the set {g_i | g ∈ G}, where i is a tree-structured feature.

Observation 2 Let V be a set of tree-structured values. Let the least common parent of these values be denoted ⊔V (by Observation 1, the least common parent of V is also the least upper bound of V, hence the notation). Adding or deleting a value, v, from V will have no effect on ⊔V as long as v is subsumed by some other value in V. Formally, ⊔V is the least common parent of the following sets of tree-structured values: V, V ∪ {v}, and V − {v}, where v is subsumed by a value in V.

Observation 3 If a set, G, contains only one element, x, then ⊔G = x.

Lemma 1 In an uncollapsed version space with strictly tree-structured attributes, G = S iff {⊔G} = S.

Lemma 2 If G' is the set of concepts that result from specializing a set of tree-structured concepts, G, with a consistent far miss negative example, then ⊔G' = ⊔G.

Proof of Theorem 1

We must show that P ∪ M yields G = S = C. Assume that all the positive examples have been seen, and thus that S = C. By Lemma 1 it is therefore sufficient to show ⊔G = C. But by Lemma 2 we know that F has no effect on ⊔G, and thus whether or not G = C is independent of F. Therefore since P ∪ M ∪ F is sufficient for ⊔G = C, so must be P ∪ M. □

Proof of Lemma 1

(1) If G = S then {⊔G} = S. Assume that G = S and that the concepts in the version space have only tree-structured features. For version spaces with such concepts, S has only one element [Bundy et al., 1985]. Call this element s. G = S by assumption, so G = {s}. By Observation 3, ⊔G = s and thus {⊔G} = {s} = S.

(2) If {⊔G} = S then G = S. Assume that {⊔G} = S, and that concepts in the version space have only tree-structured features. Then G has either one or multiple elements (in a consistent version space, G cannot have zero elements). Assume G has multiple elements. By Definition 1, for all g ∈ G, g ≤ ⊔G. But {⊔G} = S by assumption, and since the version space only contains concepts with tree-structured features, S has only one element, s. Thus for all g ∈ G, g ≤ s. But the version space collapses when every concept in G is strictly less than some concept in S, so there must be at least one element, g, of G that is equal to s. Since s subsumes every concept in G, so does g. But G contains only concepts which are not subsumed by other concepts in G, so G = {g}. This violates our assumption that G has multiple elements. Therefore G must have exactly one element. Call it x. Then by Observation 3, ⊔G = x; but {⊔G} = S by assumption, and thus S = {⊔G} = {x} = G. So G = S. □

Proof of Lemma 2

Let f be a consistent far miss, and G' be the set of concepts that results from specializing G with f. G' can be constructed from G and f by the construction algorithm (Figure 3) in two stages. Call the sets formed in these stages G1 and G2 respectively. The construction algorithm is the vector-notation equivalent of the action taken by the CE algorithm on a negative example. G2 is then returned as G'.

(1) FOR each possibly guilty feature, i, of f DO
      FOR each concept g in G DO
        g' = g
        g'_i = min(g_i, f_i − 1)
        Add g' to G'
(2) Remove from G' every concept subsumed by
    another concept in G'.

Figure 3: Construction Algorithm

For each feature, i, let V_i be the set of values for feature i held by concepts in G, i.e., V_i = {g_i | g ∈ G}. Similarly, V_i1 = {g1_i | g1 ∈ G1} and V_i2 = {g2_i | g2 ∈ G2}. We will show that for any feature, the set of values for the feature held by the concepts in each stage is formed by adding or deleting zero or more values which are subsumed by values in the previous stage. Thus by repeated application of Observation 2, ⊔V_i = ⊔V_i1 = ⊔V_i2. Then by Observation 1, ⊔G_i = ⊔V_i and ⊔G'_i = ⊔V_i2. Thus ⊔G_i = ⊔G'_i, and since this holds for all i, ⊔G = ⊔G'.

First we show that for every feature, i, V_i1 = V_i ∪ E_i, where E_i is a set of values subsumed by values in V_i. In stage one, G1 is initially empty. Every concept g in G then adds one concept to G1 for every possibly guilty feature of f. When a given concept g in G is specialized on a possibly guilty feature j, g adds one concept to G1 with a value for feature j of min(g_j, f_j − 1). When specialized on a possibly guilty feature k ≠ j, each g in G adds a concept to G1 with a value of g_j for feature j. So if there are at least two possibly guilty features, then for every feature i, the set of values, V_i1, taken by the concepts in G1 on feature i is {g_i | g ∈ G} ∪ {min(g_i, f_i − 1) | g ∈ G} = V_i ∪ E_i. If i is not a possibly guilty feature, E_i = {}. We are guaranteed at least two possibly guilty features since G is being specialized by a far miss, f, and far misses always have multiple possibly guilty features. Note that for every i, min(g_i, f_i − 1) ≤ g_i, i.e., min(g_i, f_i − 1) is subsumed by g_i. Thus for every i, every value in E_i is subsumed by some value in V_i, so by Observation 2, ⊔V_i = ⊔V_i1.

Next we show that V_i2 is V_i1 − R_i, where R_i is a set of values subsumed by values in V_i1, and i is any feature. In stage two, G2 is formed by removing from G1 all concepts which are subsumed by another concept in G1. Let the set of removed concepts be R. Thus V_i2 is V_i1 minus the set of values, R_i, taken by concepts in R on feature i. But every concept in R is subsumed by a concept in G1, so every value in R_i is subsumed by some value in V_i1. Therefore V_i2 = V_i1 − R_i, and R_i is subsumed by values in V_i1, so by Observation 2, ⊔V_i1 = ⊔V_i2. □
□

Incompletely Learned Concepts

There are many learning domains in which the availability of examples is limited, or for which prohibitively many examples are required for convergence. In such domains there are insufficient examples for convergence, so the set of examples is non-convergent and the target concept is incompletely learned. However, we would still like to classify new instances as being positive or negative examples of the incompletely learned target concept. In the CE algorithm, examples subsumed by S are always positive, those not subsumed by G are negative, and those between S and G cannot be classified.

In the INBF algorithm, examples below lower are positive, and those not below upper are negative. But there could be examples under upper which the CE algorithm would classify as negative. This is because not all the near misses have been recognized, and thus upper bounds the target concept more loosely than does G. Let G-final be the G set after the CE algorithm has processed the (incomplete) set of examples. INBF can generate G-final from upper by processing all the saved negatives the same way the CE algorithm does.

In the CE algorithm, G converges on G-final non-monotonically. Negative examples cause G to increase in size by fragmenting it, and positive examples reduce G by pruning concepts. Under many example orderings, G can initially fragment beyond the size of G-final, meaning that the size of G fluctuates as it approaches G-final. By contrast, the size of the G set in the INBF algorithm increases monotonically to the size of G-final. This is because only the saved negative examples are being processed to specialize upper. A negative example can only cause G to increase in size, so G must increase monotonically from one concept (upper) to several (G-final). Thus the maximum size attained by G for INBF is usually less than, and never greater than, the maximum size of G for the CE algorithm.
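The three-way classification described above can be sketched as follows; representing tree-structured values as integers, with y subsumed by x when y <= x featurewise, is my illustrative assumption rather than the paper's encoding:

```python
def below(x, y):
    # x is subsumed by y when every feature value of x lies at or below y's.
    return all(a <= b for a, b in zip(x, y))

def classify(x, lower, upper):
    # INBF-style three-way classification for an incompletely learned concept.
    if below(x, lower):
        return "positive"      # subsumed by the specific boundary
    if not below(x, upper):
        return "negative"      # outside the general boundary
    return "unknown"           # between the boundaries: cannot be classified yet
```

With lower = (2, 2) and upper = (5, 5), the instance (1, 1) is positive, (6, 1) is negative, and (3, 3) falls in between and stays unclassified.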
The degree to which the maximum sizes differ depends on the order of the examples. Since the space bound of both algorithms is proportional to the maximum size attained by G, INBF saves space over the CE algorithm for most example orderings. For the remaining orderings, both algorithms have equivalent space bounds. Since time is proportional to space, these savings apply to the running time as well.

Conclusions and Limitations

For strictly tree-structured concepts, INBF is provably equivalent to the CE algorithm, but is polynomial rather than exponential. Unlike the AF algorithm, INBF can efficiently process additional examples, and can deal with non-convergent sets of examples and incompletely learned concepts. For such sets, INBF has the same worst-case time and space bounds as the CE algorithm, but better best-case bounds.

The INBF algorithm is restricted to learning strictly tree-structured concepts (and possibly upper semi-lattices). It is, at present, unable to learn concepts with arbitrary lattice-structured features. Lattice-structured features allow pairs of values to have more than one least common parent. This means that there is more than one way to generalize a concept to include a positive example, and thus S can fragment. It also means that a near miss no longer specifies a unique specialization (generalization) of G (S), and thus near misses cannot prevent fragmentation. Learning lattice-structured concepts is an area of future research.

References

(Bundy et al., 1985) A. Bundy, B. Silver, and D. Plummer. An analytical comparison of some rule-learning programs. Artificial Intelligence, 27, 1985.
(Haussler, 1988) David Haussler. Quantifying inductive bias: Artificial intelligence learning algorithms and Valiant's learning framework. Artificial Intelligence, 36, 1988.
(Hirsh, 1990) Haym Hirsh, 1990. Private communication.
(Mitchell, 1982) Tom M. Mitchell. Generalization as search. Artificial Intelligence, 18, 1982.
(Smith and Rosenbloom, 1990) B.D. Smith and P.S. Rosenbloom. Polynomially bounded generalization algorithms for version spaces. In preparation, 1990.
(Winston, 1975) P. Winston. Learning structural descriptions from examples. In The Psychology of Computer Vision. McGraw-Hill, New York, 1975.
(Young and Plotkin, 1977) Richard M. Young and Gordon D. Plotkin. Analysis of an extended concept-learning task. In IJCAI, 1977.

SMITH AND ROSENBLOOM 853
Complementary Discrimination Learning: A Duality between Generalization and Discrimination

Wei-Min Shen
ACT/AI, MCC, 3500 West Balcones Center Drive
Austin, TX 78759
wshen@mcc.com

Abstract

Although generalization and discrimination are commonly used together in machine learning, little has been understood about how these two methods are intrinsically related. This paper describes the idea of complementary discrimination, which exploits semantically the syntactic duality between the two approaches: discriminating a concept is equivalent to generalizing the complement of the concept, and vice versa. This relation brings together naturally generalization and discrimination so that learning programs may utilize freely the advantages of both approaches, such as learning by analogy and learning from mistakes. We will give a detailed description of the complementary discrimination learning (CDL) algorithm and extend the previous results by considering the effect of noise and analyzing the complexity of the algorithm. CDL's performance on both perfect and noisy data and its ability to manage the tradeoff between simplicity and accuracy of concepts have provided some evidence that complementary discrimination is a useful and intrinsic relation between generalization and discrimination.

Introduction

Learning by generalization and learning by discrimination are two basic approaches commonly used in machine learning. In learning by generalization, the search for a target concept proceeds from specific to general, guided by the similarity between instances. Winston's work (1975) on arch learning is a typical example. In learning by discrimination, on the other hand, the search proceeds from general to specific, guided by the difference between instances. Feigenbaum and Simon's EPAM (1984) and Quinlan's ID3 (1983) serve as good representatives.
Although much work, such as AM (Lenat 1977), Version Spaces (Mitchell 1982), Counterfactuals (Vere 1980), and STAR (Michalski 1983), has used both methods, little of it has revealed the relation between these two seemingly very different approaches. This paper describes the idea of complementary discrimination (Shen 1989). The key observation is that discriminating a concept is equivalent to generalizing its complement, and vice versa. This is to say that the effect of discrimination and generalization can be achieved either way. For example, generalizing a concept using the similarity between instances can be accomplished by discriminating the concept's complement using the difference between instances, and vice versa. Exploiting this duality brings together naturally both discrimination and generalization so that a learning algorithm can make intelligent choices about which approach to take. For example, if a task requires learning from mistakes, then one might prefer using discrimination to achieve generalization. On the other hand, if there are existing theories for finding the relevant similarities between instances, then using generalization to accomplish discrimination might be better. The CDL algorithm to be described here has implemented only the part that uses discrimination to achieve generalization, yet it has already demonstrated some very encouraging results.

CDL learns concepts from training instances. It can incrementally learn any subset of the instance space (or concepts) from either perfect or noisy data, although some concepts may be preferred over others. Since CDL uses Predicate Calculus to represent concepts, it can deal with variables and relations, and can be easily integrated with problem solving systems and autonomous learning systems. Although CDL's learning is supervised, when combined with problem solving and experimentation its behavior is autonomous.
In some earlier papers (Shen 1989; Shen and Simon 1989), we have shown how CDL is used for learning from the environment and compared it with Version Spaces, STABB, and discovery systems like GLAUBER and DALTON. In this paper, we will give a detailed description of the algorithm and report its performance on two typical concept formation tasks to show how CDL learns from noisy data and data that has exceptional cases. We will also evaluate CDL's complexity and analyze the quality of its learning.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

The CDL Algorithm

CDL is an algorithm that learns concepts from training instances. It learns a concept by learning, in parallel, the concept itself, C, and its complement, C̄. Driven by the data, CDL will incrementally revise the complemental form, either C or C̄, whose prediction about a new instance is incorrect. It will first find the differences between the new instance and the previous instances that are correctly classified, then discriminate the complemental form by conjoining it with the difference. After that, CDL will update, or generalize, the complement of the complemental form to be the negation of the new complemental form.

This paper uses only propositional logic, but the algorithm still works with first-order logic. Here, the representation of instances is binary-valued feature vectors and the representation of concepts is propositional logical formulas. Multiple-valued features will be represented by multiple feature variables. For example, if a feature F has values a, b and c, we will use two variables x and y so that F=a will be xy = 00, F=b will be xy = 01 and F=c will be xy = 10. Although features are binary-valued, the representation allows "don't care" (*) as a value of features so that features with unknown values can be represented. Multiple classes (or concepts) will be represented by class variables.
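The bit encoding just described can be sketched in a few lines; the helper name `encode` is mine, not the paper's:

```python
def encode(value, domain):
    # Map a k-valued feature onto ceil(log2(k)) binary variables,
    # e.g. domain (a, b, c) -> a: 00, b: 01, c: 10, as in the text.
    n_bits = max(1, (len(domain) - 1).bit_length())
    i = domain.index(value)
    return tuple((i >> (n_bits - 1 - p)) & 1 for p in range(n_bits))
```

For example, encode('b', ('a', 'b', 'c')) gives (0, 1); by the same rule, ten digit classes need four class variables, which matches the LED experiment later in the paper.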
For example, to learn multiple classes W, X, Y and Z, we will use two class variables c1 and c2 so that W, X, Y and Z will be c1c2 = 00, c1c2 = 01, c1c2 = 10 and c1c2 = 11, respectively. These class variables will be given as inputs to the learning algorithm as if they were some extra features of instances. Thus, instead of telling CDL that an instance I belongs to a class C, we will state that (I C) is a positive instance. For example, to say that an instance (1 0 0 1) belongs to W, we will say that (1 0 0 1 c̄1 c̄2) is a positive instance; and to say that an instance (1 1 0 0) does not belong to Y, we will say that (1 1 0 0 c1 c̄2) is a negative instance. Using these class variables, CDL can learn multiple concepts even though syntactically it learns only a single concept. For example, W, X, Y and Z will be learned as C = (· · · W) ∨ (· · · X) ∨ (· · · Y) ∨ (· · · Z).

Concepts will be logical propositions, such as x ∧ y or x̄ ∨ ā. The complement of a concept will be the negation of the concept. For example, x ∧ y and x̄ ∨ ȳ are complements to each other. For convenience, concepts will be either in DNF or CNF; if a concept is in DNF then its complement will be in CNF, and vice versa. To classify a new instance, a concept will be matched to the instance. For example, if the instances are vectors of (a b c), then a concept a ∧ b will match (1 * 0) but not (* 0 *). As extremes, a concept T will match everything and a concept NIL will match nothing.

Table 1 shows the core of the algorithm. The procedure IN-CLASS?(i) serves as the supervisor; it returns T when the instance i is positive, or NIL when i is negative. The procedure PREDICT(i,C) returns T if i is matched by C and NIL otherwise (matched by C̄). When a new instance comes, CDL makes a prediction by calling PREDICT on the instance. If the prediction equals the value of IN-CLASS?(i), the concepts remain the same. Otherwise, CDL will call DIFFERENCES to find the differences between this troublesome instance and previous classified instances, then use REVISE to update the complemental form, either C or C̄, that made the incorrect prediction by conjoining it with the differences. The complement of the complemental form is then set to be the negation of the revised complemental form.

Let C be a concept and C̄ be C's complement.
Let POS be a set of positive examples.
Let NEG be a set of negative examples.

Procedure CDL()
  Let C be T and C̄ be NIL;
  While i ← GET-THE-NEXT-INSTANCE()
    If PREDICT(i,C) ≠ IN-CLASS?(i) then
      if PREDICT(i,C) = T (i.e. C covers too much) then
        D ← DIFFERENCES(i,POS)
        C ← REVISE(C,D,NEG)
        C̄ ← ¬C
      else (i.e. C̄ covers too much)
        D ← DIFFERENCES(i,NEG)
        C̄ ← REVISE(C̄,D,POS)
        C ← ¬C̄
    If IN-CLASS?(i) then insert(i,POS) else insert(i,NEG).

Procedure DIFFERENCES(i,INSTS)
  Let DIFF be { j\i | j ∈ INSTS }, where j\i is the set of features that are in j but not in i;
  Return the minimum subset of DIFF that covers M% of INSTS.

Procedure REVISE(X,D,OPPOSITE)
  If X = T then return D
  else Y ← X ∧ D;
    For each y in Y
      If y is logically subsumed by some z ∈ Y then delete y from Y;
      If y = y′ ∧ a ∧ ¬a (i.e. y contains contradictions) then
        Let P = "covers no instance in OPPOSITE";
        If y′ ∧ a satisfies P and y′ ∧ ¬a does not, then replace y by y′ ∧ a;
        If y′ ∧ ¬a satisfies P and y′ ∧ a does not, then replace y by y′ ∧ ¬a;
        else delete y from Y;
    Return Y.

Table 1: The core of the CDL learning algorithm.

SHEN 835

Description           Representation (a b c d)   Class
short blond brown     0 0 1 0                    −
tall dark brown       1 0 0 0                    −
tall blond blue       1 0 1 1                    +
tall dark blue        1 0 0 1                    −
short dark blue       0 0 0 1                    −
tall red blue         1 1 0 1                    +
tall blond brown      1 0 1 0                    −
short blond blue      0 0 1 1                    +

Table 2: A simple learning task.

To illustrate the procedure DIFFERENCES(i,INSTS), consider i = (a b̄ c̄ d) and INSTS = {(ā b̄ c d) (a b̄ c d) (a b̄ c d̄) (a b c̄ d̄)}. Since (ā b̄ c d) \ (a b̄ c̄ d) = (ā c), (a b̄ c d) \ (a b̄ c̄ d) = (c), (a b̄ c d̄) \ (a b̄ c̄ d) = (c d̄), and (a b c̄ d̄) \ (a b̄ c̄ d) = (b d̄), DIFF will be {(ā c) (c) (c d̄) (b d̄)}. Among the four elements in DIFF, the procedure will return {(c) (b d̄)} because this subset is the minimum that covers all the
instances in INSTS. For all the experiments in this paper, the parameter M is set to 90. In a later section, however, we will suggest a way to determine M automatically. The result returned by this procedure is always interpreted as a DNF logical form; so {(c) (b d̄)} means c ∨ b d̄.

The main task of procedure REVISE(X, D, OPPOSITE) is to conjoin the differences D with the concept X and simplify the results. In cases where the results contain contradictions, the procedure also decides how to resolve the contradictions. For example, consider D = ā ∨ b c̄, X = ā b ∨ c d, and OPPOSITE = {(a b c̄ d) (a b c̄ d̄) (a b̄ c d̄)}. Then Y will be X ∧ D = ā b ∨ ā b c̄ ∨ ā c d ∨ b c c̄ d. Among these terms, ā b c̄ will be deleted because it is subsumed by ā b; b c c̄ d contains a contradiction and will be replaced by b c d, because b c d does not cover any instance in OPPOSITE while b c̄ d does.

Let us now observe how CDL performs on the learning task from (Quinlan 1983). Each instance is described as a conjunction of three attribute-value pairs, using the attributes height, color of hair, and color of eyes. The instances and their representation in CDL are shown in Table 2. We use variable a to represent height (0 for short and 1 for tall), bc for color of hair (00 for dark, 01 for blond and 10 for red), and d for color of eyes (0 for brown and 1 for blue).

The performance of CDL on this task is summarized in Table 3. When the first instance arrives, CDL predicts that the instance is positive (C matches it), but the prediction is wrong. Since there is no previous positive instance, procedure DIFFERENCES returns NIL and procedure REVISE sets C to NIL and C̄ to T. On the second instance, CDL predicts that it is negative (C̄ matches it). Since the prediction is correct, the concepts remain the same. On the third instance, CDL's prediction is negative but wrong. This time the procedure DIFFERENCES finds the differences between (1 0 1 1) and the previous negative examples {(1 0 0 0) (0 0 1 0)} to be c̄ d̄ ∨ ā d̄.
The procedure REVISE sets C̄ to be equal to the difference c̄ d̄ ∨ ā d̄ (because the current C̄ is T) and sets C to be the new C̄'s complement: (c ∨ d)(a ∨ d).

Insts   Pred   Class   Diff          C̄                C
0010    T      NIL     NIL           T                NIL
1000    NIL    NIL
1011    NIL    T       c̄d̄ ∨ ād̄       c̄d̄ ∨ ād̄          (c ∨ d)(a ∨ d)
1001    T      NIL     (c)           c̄ ∨ ād̄           (a ∨ d)(c)
0001    NIL    NIL
1101    NIL    T       (b̄)           b̄c̄ ∨ āb̄d̄         (b ∨ c)(a ∨ b ∨ d)
1010    T      NIL     (d)           b̄c̄ ∨ d̄           (b ∨ c)(d)
0011    T      T

Table 3: A performance on the learning task.

Based on the new concepts, CDL predicts that the fourth instance is positive, but the prediction is wrong. This time, the instance (1 0 0 1) is compared with the previous positive examples {(1 0 1 1)} and the difference found is (c). This difference is conjoined with C to form the new concept C, (a ∨ d)(c), and C̄ is set to be the new C's complement: c̄ ∨ ā d̄. CDL's prediction on the fifth instance is correct, so the concepts are not changed. When the sixth instance comes along, CDL predicts it is negative because C̄ matches the instance. Upon noticing the prediction is wrong, CDL finds the differences between (1 1 0 1) and the previous negative examples {(0 0 0 1) (1 0 0 1) (1 0 0 0) (0 0 1 0)} to be (b̄). Thus, C̄ is revised to be b̄ c̄ ∨ ā b̄ d̄ and C becomes (b ∨ c)(a ∨ b ∨ d). Using these new concepts, CDL predicts that the seventh instance is positive (C matches it), but the prediction is wrong. Comparing (1 0 1 0) with the previous positive examples {(1 1 0 1) (1 0 1 1)}, CDL finds the differences to be (d). So the concept C is revised to be (b ∨ c)(d), and C̄ becomes b̄ c̄ ∨ d̄. At this point, the learned concept classifies all the instances correctly.

From this example we can see that CDL does both generalization and discrimination, in the way the concept C is developed. Since CDL achieves the same effects as any bi-directional learning algorithm, it is important to point out the advantages of using complementary discrimination.
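A minimal sketch of the DIFFERENCES procedure as used in this walkthrough, assuming M = 100 (the returned subset must cover every past instance); the brute-force subset search and the literal-set representation are my choices for clarity, not the paper's implementation:

```python
from itertools import combinations

NAMES = "abcd"

def literals(vec):
    # A binary instance as a set of literals, e.g. (1, 0, 1, 1) -> {'a', '~b', 'c', 'd'}.
    return frozenset(n if v else "~" + n for n, v in zip(NAMES, vec))

def differences(i, insts):
    # DIFF holds, for each past instance j, the literal set j \ i.
    diff = [literals(j) - literals(i) for j in insts]
    # Return a minimum subset of DIFF whose elements jointly cover every j,
    # where an element covers j when it is contained in j \ i.
    for k in range(1, len(diff) + 1):
        for subset in combinations(set(diff), k):
            if all(any(s <= d for s in subset) for d in diff):
                return set(subset)
    return set()
```

For the third instance above, differences((1, 0, 1, 1), [(1, 0, 0, 0), (0, 0, 1, 0)]) returns both elements {~c, ~d} and {~a, ~d}, i.e. the DNF c̄d̄ ∨ ād̄; for the sixth instance it returns the single element {~b}, i.e. (b̄).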
Unlike most bi-directional learning algorithms, CDL does not require additional domain knowledge specified a priori to bias hypothesis selection that may restrict the concepts that can be learned (e.g., the generalization hierarchies used by LEX). In fact, CDL can learn any subset, either conjunctive or disjunctive, of the instance space, although some may be preferred over others. CDL manages its search by "jumping" to a new and reasonable hypothesis in the concept space whenever the concept is revised. This can be seen in the way that differences are found: a revised concept is neither the most general nor the most specific concept that covers the instances, but a boundary between positive and negative instances that has the simplest syntactic structure. The preference for seeking the simplest differences is the backbone of CDL's hypothesis selection. Finally, although CDL currently implements only learning from mistakes, it can be easily extended to similarity-based learning as well. For example, after correctly predicting that the second instance is negative, CDL could, instead of doing nothing, find the similarity between the two negative instances (b̄ d̄ in this case) and revise C̄ by disjoining C̄ with the similarity. CDL can also be extended to select training instances actively; it has been used in (Shen 1989) to design experiments for autonomous learning from the environment.

Experiments

To illustrate CDL's performance on noisy data, we report CDL's performance on two typical concept learning tasks: the faulty LED display data and the Multiplexor. The former data is noisy; the latter, although noise-free, often causes problems for noise-tolerant algorithms.

Learning from Noisy Data

The LED domain, introduced by Breiman (Breiman et al. 1984), is concerned with displaying decimal digits using seven segments, like those on a calculator's display. To introduce noise, consider that each display segment has a 10% chance to be inverted. The task is to recognize the correct digit despite the faulty display. Breiman has shown that for a 10% noise rate, the upper bound for the performance of any system is 74%.

We run CDL incrementally on 400 randomly generated instances, uniformly distributed among the 10 digits. In this experiment, each instance has 7 binary-valued attributes representing the seven segments, plus 4 class variables to represent the 10 digits (10 classes). Since CDL is incremental, we evaluate the performance whenever the concepts are revised. The test data are 500 instances that are randomly generated with the same noise rate. CDL's performance curve is shown in Figure 1. The x-axis shows the number of training instances and the y-axis shows the percentage of correct predictions on the 500 test instances. One can see that the prediction rate generally increases with the number of instances processed. After 350 instances, CDL's performance oscillates around 74% (between 72.4% and 77%).¹ Continuing to run CDL on more instances shows that the performance rate will stay in that range. As a point of comparison, the performance of the non-incremental algorithm IWN is 73.3% (or 71.1%) on 400 instances (Tan and Eshelman 1988), and the performance of ID3 is 71.1% using 2000 training instances (Quinlan 1986).

Figure 1: CDL's performance on noisy LED data.

Learning from Data with Exceptional Cases

A Multiplexor is shown in Figure 2. The inputs a and b indicate which input among c, d, e, or f will be output at g. Since variables are binary-valued, there are 64 possible instances in this experiment. The task is to predict the output value given an input vector. It is difficult for noise-free learning algorithms because the relevance of each of the four data-bit attributes is a function of the values of the two address-bit attributes (Utgoff 1988).
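The 64-instance Multiplexor domain can be generated as below; the exact address convention (ab = 00 selects c, ..., ab = 11 selects f) is an assumption for illustration, since the text only says that a and b select among c, d, e and f:

```python
from itertools import product

def multiplexor(a, b, c, d, e, f):
    # Address bits a, b select which data bit (c, d, e or f) appears at output g.
    return (c, d, e, f)[2 * a + b]

# Every instance pairs the six input features with the output feature g.
instances = [(a, b, c, d, e, f, multiplexor(a, b, c, d, e, f))
             for a, b, c, d, e, f in product((0, 1), repeat=6)]
```

With six binary inputs, len(instances) is exactly the 64 cases mentioned in the text.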
It is also difficult for noise-tolerant learning algorithms because every training case is equally important (Wilson 1987). The 64 instances are presented incrementally to CDL as a circulated list. CDL's performance on this task is reported in Figure 2, along with the performances of the algorithms reported in (Utgoff 1988) and IWN. The algorithms with hats are versions in which decision trees are updated only when the existing trees would misclassify the training instance just presented. One can see that CDL's performance is better than the others in terms of the number of training events (instances) required to reach the 100% prediction rate. Note that a time comparison between CDL and the others may not be meaningful because CDL is tested on a different machine.

Algorithm   Events   Proportion
ID3         53       100
ÎD3         61       100
ID4         384      63 (not stable)
ÎD4         384      50 (not stable)
ID5         57       100
ÎD5         74       100
IWN         320      97.6 / 85.1
CDL         52       100

Figure 2: CDL's performance in the Multiplexor domain.

Analysis

Complexity of CDL

In this section, we analyze CDL's complexity based on two assumptions: (1) the training data is noise-free, so the parameter M is set to 100; (2) there are no contradictions when a concept is conjoined with the differences. The second assumption may seem too strong, but we need it at the moment to prove the following theorem.

Theorem 1 Let i be a new instance wrongly covered by a concept X, and D be the differences found by DIFFERENCES with M = 100. If X covers POS and excludes NEG, then the revised concept X ∧ D will cover POS and exclude NEG and i.

To see that the theorem is true, note that when M = 100, DIFFERENCES guarantees that D excludes i and covers POS.

¹We think the reason that CDL's performance is sometimes higher than 74% is that the testing data represent only a small subset of the sample space.
XA D will exclude NEG and i because X excludes NEG and D excludes i. Let I be the number of instances and A be the num- ber of attributes of each instances. The procedure DIF- FERENCES will take O(A.llog I) because DIFF will take O(A-I) to construct, and sorting DIFF and determining the minimum subset of it will take O(A - I log I). The procedure REVISE will take O(AYI+Y2) because deter- mining the logical subsumption of each y in Y will take Y2, and to resolve contradictions takes A. Y.OPPOSITE. To learn a concept that classifies the I instances cor- rectly, the body of CDL will loop no more than I times because of the theorem and the assumption of no con- tradictions. Thus, the total number of attributes ex- amined by CDL is: I ~O(A+logi+AYi+Y2) = i=l O(A?-logI+AYp+Y21) If we relax the assumption of no contradictions, then the theorem will not hold because contradictions cause some of the literals in X A D to be modified or deleted. In that case, there may exist instances that are wrongly classified even if all the instances are processed. CDL must go through these “left over” instances and process them again. In all the noise-free experiments we ran, these instances are rare and can be cleaned up quickly. For all the experiments in this paper, the parameter Minprocedure DIFFERENCES~~~~~~~ set to90. How- ever, to learn from natural data CDL must determine the value of M automatically. One way to achieve this is to initiate M=lOO and lower its value when evidence shows that the data is noisy. This can be done at the time of searching for the minimum subset of DIFF. If the data is indeed noisy, then the size of the minimum subset that covers INSTS will grow larger and the some of its elements may only cover a single instance. When this happens, CDL will lower M’s value so that the dif- ferences that cover single instances will be discarded. Note that CDL remembers all the previous instances. This could be a weak point if the number of training instances is large. 
One solution is to limit the memory to recent instances only. We have done some experi- ments on that matter; however, no formal analysis can be given at this point. The Quality of Learned Concepts Besides time complexity, another measure of concept learning algorithms is the quality of the learned con- cepts. We will consider two aspects here: the predictive power of the concept and the simplicity of the concept. The concepts learned by CDL have simple structures yet strong predictive power. For example, the concept learned by CDL in the Multiplexor domain is the fol- lowing: & V iibd V de V ub j V ue j V bdf V acd V he V cdej It can be used to predict the output feature even if the input features are not complete (Fisher 1989). For instance, knowing only that features a, e and j are true, one can conclude g even if the value of b, c and d are unknown. The concept can also be used to predict other features when certain features are known. For instance, knowing that g is true and a is false, one can conclude that c or d must be true because the concept has three literals &, abd and acd that contain -ii, and to make g true, one of zc, bd and cd must be true. This seems to be an advantage over ID3 because this predictive ability cannot be easily obtained if the concept is represented by a decision tree. The concepts learned by CDL also have a close re- lationship to classification rules. For example, using the variables cic2c3c4 to represent ten digits, CDL can learn the following concept from an error-free LED display:2 2The variables U, Ul, U,, M, Bl, B,, and B represent the display of Upper, Up-Left, Up-Right, Middle, Bottom- Left, Bottom-Right and Bottom segment respectively. 838 MACHINE LEARNING of the CYC project for their gen- (& &MB, B,. 
Bz,z2z3-c,) v (U& U,MBlR1&z&) v(UMBlBr~1~2~3c4) V(iTl UrMB, BE1&Z4C3) v(u&~&&4c3) v (u~&M&&&C4C3) -- v(~U,B~J&~~~~C~) V(~&1&C&) V(U.l~rMB~&~@&) v (U,MBrBrz1z4c3c2) -- v(UU,MB,z1c3c2c4) v (U~,~~~‘1c3c2c4) --- -- - v(UU~BIBz1c3c2c4) v (UU,MBrB~lc3c2c4) v(U&J~MB,BF~~~-~,C,) v (UU&MB~z2z3z4cl) v(u&i?&2c&c~) v (u&u,&c4&c1) Note that each conjunction (a single line) is like a rule: When the segments of display matches the de- scription, one can conclude the digit that is displaying. For example, seeing that UlU,.mBl B,. B is displayed, one can conclude it is a 0 (i.e. i?r&&&) even if the upper segment is not lit. Because the concepts learned by CDL are very similar to rules, the algorithm can be easily integrated with systems that solve problems and learn by experimentation, as we have shown in (Shen 1989, Shen and Simon 1989). In those cases, CDL will become an unsupervised (or internally supervised) learner because its feedback is from the performance of its problem solver. Having a simplicity criterion is especially important when the data is noisy because concepts could- overfit the data. Fortunately, experiments have shown that CDL manages the tradeoff between simplicity and pre- dictability well. For example, after processing 400 nbisy instances in the LED domain, the concept learned by CDL is a DNF of 36 disjuncts. Continuing to run on more new instances will not jeopardize its simplicity nor its predictive ability. We have also run CDL repeatedly on a database of 100 noisy LED instances; the num- ber of disjuncts oscillates between 20 and 30 and the learned concept does not overfit the data. Conclusion This paper examines semantically and formally the syn- tactic duality between generalization and discrimina- tion: generalizing a concept based on similarity be- tween instances can be accomplished by discriminating the complement of the concept based on the difference between instances, and vice versa. 
Experiments with the CDL algorithm on both perfect and noisy data have shown that exploiting this relation can bring together the advantages of both generalization and discrimination, and can result in powerful learning algorithms. Further studies will apply CDL to large-scale learning tasks, such as learning and discovering new concepts in the CYC knowledge base.

Acknowledgment

I thank Mark Derthick, Doug Lenat, Kenneth Murray, and two anonymous reviewers for their useful comments, and members of the CYC project for their generous support.

References

(Breiman et al., 1984) Breiman, L.; Friedman, J.H.; Olshen, R.A.; and Stone, C.J. 1984. Classification and Regression Trees. Wadsworth International Group.
(Feigenbaum and Simon, 1984) Feigenbaum, E.A. and Simon, H.A. 1984. EPAM-like models of recognition and learning. Cognitive Science, 8.
(Fisher, 1989) Fisher, D.H. 1989. Noise-tolerant conceptual clustering. In Proceedings of 11th IJCAI.
(Lenat, 1977) Lenat, D.B. 1977. The ubiquity of discovery. In Proceedings of 5th IJCAI.
(Michalski, 1983) Michalski, R.S. 1983. A theory and methodology of inductive learning. Artificial Intelligence, 20.
(Mitchell, 1982) Mitchell, T.M. 1982. Generalization as search. Artificial Intelligence, 18.
(Quinlan, 1983) Quinlan, R.J. 1983. Learning efficient classification procedures and their application to chess end games. In Machine Learning. Morgan Kaufmann.
(Quinlan, 1986) Quinlan, R.J. 1986. Simplifying decision trees. In Knowledge Acquisition for Knowledge-Based Systems Workshop.
(Shen and Simon, 1989) Shen, W.M. and Simon, H.A. 1989. Rule creation and rule learning through environmental exploration. In Proceedings of 11th IJCAI.
(Shen, 1989) Shen, W.M. 1989. Learning from the Environment Based on Actions and Percepts. PhD thesis, Carnegie Mellon University.
(Tan and Eshelman, 1988) Tan, M. and Eshelman, L.J. 1988. Using weighted networks to represent classification knowledge in noisy domains.
In The Proceedings of the 5th International Machine Learning Workshop.
(Utgoff, 1988) Utgoff, P.E. 1988. ID5: an incremental ID3. In The Proceedings of the 5th International Machine Learning Workshop.
(Vere, 1980) Vere, S.A. 1980. Multilevel counterfactuals for generalizations of relational concepts and productions. Artificial Intelligence, 14.
(Wilson, 1987) Wilson, S.W. 1987. Classifier systems and the animat problem. Machine Learning, 2:199–228, 1987.
(Winston, 1975) Winston, P.H. 1975. Learning structural descriptions from examples. In The Psychology of Computer Vision. McGraw-Hill.

SHEN 839
A Proven Domain-Independent Scientific Function-Finding Algorithm

Cullen Schaffer
Department of Computer Science, Rutgers University, New Brunswick, NJ 08903
201-932-4635, schaffer@paul.rutgers.edu

Abstract

Programs such as Bacon, Abacus, Coper, Kepler and others are designed to find functional relationships of scientific significance in numerical data without relying on the deep domain knowledge scientists normally bring to bear in analytic work. Whether these systems actually perform as intended is an open question, however. To date, they have been supported only by anecdotal evidence - reports that a desirable answer has been found in one or more hand-selected and often artificial cases.

In this paper, I describe a function-finding algorithm which differs radically from previous candidates in three respects. First, it concentrates on reliable identification of a few functional forms rather than on heuristic search of an infinite space of potential relations. Second, it introduces the use of distinction, significance and lack of fit - three general concepts of value in evaluating apparent functional relationships. Finally, and crucially, the algorithm has been tested prospectively on an extensive collection of real scientific data sets. Though I claim much less than previous investigators about the power of my approach, these claims may be considered - to a degree quite unfamiliar in function-finding research - as conclusively proven.

Evaluating Function-Finding Systems

Over the past ten years, programs like Bacon [Langley et al., 1987], Abacus [Falkenhainer, 1985; Greene, 1988], Coper [Kokar, 1986], Kepler [Wu and Wang, 1989] and others have been designed to attack a problem I call domain-independent scientific function-finding.
Each program accepts numerical data and, without relying on knowledge of the domain in which it was collected, attempts to find the underlying functional relationship which might be proposed by a scientist examining the same data.

Unfortunately, while a great deal of effort has been expended in designing function-finding systems, little has been done to test them. Researchers have nearly always relied on anecdotal evidence, reporting the successes of their programs on a few hand-selected cases, most of which have consisted of artificial data generated to conform exactly to a functional relationship. Also, although performance clearly depends on the environment in which a function-finding system is deployed, researchers have omitted specification of such an environment in their reporting.

What we would really like to know about a function-finding program is not its record of successes on artificial problems chosen by the programmer, but its likelihood of success on a new problem generated in a prespecified environment and involving real scientific data. To date, function-finding research has provided no information on which an estimate of this likelihood might be based.

In view of this, my recent research has concentrated on the problem of evaluating function-finding systems [Schaffer, 1989a; Schaffer, 1989b], and, in the process, I have amassed quite a large collection of real scientific data for use in testing. While the five reports cited above mention a total of only six real data sets, I have collected 352. Moreover, as I will soon describe, part of this data was collected in a systematic fashion from a specified environment, making it possible to conduct prospective trials of function-finding algorithms.

Contact with real data did more than provide an acid test for existing notions, however. It led me to a fundamentally novel conception of the problem of function finding.
While previous researchers have concentrated mainly on constructing one of an infinite number of possible functional forms or, equivalently, searching an infinite space of formulas, I believe it is both more accurate and more productive to view function-finding as a classification problem - one of deciding reliably between a fixed, finite set of potential relationships.

This viewpoint is developed in [Schaffer, 1990b] and more fully in [Schaffer, 1990a]. In both places, I analyze the well-known Bacon algorithm and show that, while it is surprisingly successful in the face of prospective testing, virtually all of this success is accounted for, not by the search heuristics on which published reports have concentrated, but by a mechanism for evaluating potential relationships of which the authors have said that they "hold no particular brief." Clearly, however, if evaluation and not search is the key to successful function-finding with real data, it ought to be possible to improve performance by developing more sophisticated evaluation criteria. The result of my attempt to do just this is a new algorithm which it is my main purpose in this paper to present. Before I do so, though, let me take a moment to describe the data which served as both inspiration and testbed for my ideas.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

Test Data

The 352 data sets of which I have spoken all consist of real, measured scientific data. Each set involves precisely two variables for which the reporting scientist has hypothesized a single functional relationship. The function finding I have investigated is thus of the simplest possible kind. Previous researchers have attempted to provide methods for more complex function-finding problems in addition to this simple one.
My bivariate data gives such strong evidence of the difficulty of even the most basic problem, however, that it casts serious doubt on most of these attempts.

The 352 data sets are organized into 217 cases, each case containing from one to four data sets reported by a scientist in a single publication in support of a common hypothesized relationship. In testing an algorithm, I use these cases as basic units. If an algorithm discovers the scientist's proposed relationship in two of four data sets making up a case, I credit it with half a "correct" answer. If it reports a different answer for one of the sets, I count that as a quarter of an "incorrect" answer. Clearly, it is possible that the scientist is wrong or that the algorithm's incorrect answer is, in fact, just as scientifically significant. Given the scientist's domain-knowledge advantage, however, it seems reasonable to consider his or her answer as the best available reference standard and to assume that other answers identify spurious patterns - false leads of no particular scientific significance. Note, finally, that an algorithm may report that it is unable to find a relationship for a given data set. In this case, I count it as neither correct nor incorrect.

The first 57 of my 217 cases were collected from a wide variety of sources: dissertations, journals, handbooks, undergraduate laboratory reports, textbooks and others. Based on my experience with these, I began to develop function-finding ideas and to design algorithms accordingly. To test these, I initiated a project of collecting data sets systematically from issues of the journal Physical Review published in the early years of this century. In this case, I made every attempt to collect all tabulated bivariate data sets for which scientists had hypothesized functional relationships and hence to get a representative sampling of function-finding problems in a real scientific environment.
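In code, the case-based credit scheme just described - each case counts once, with its unit weight split evenly across its data sets - might look like the following sketch. The function name and outcome labels are my own illustration, not Schaffer's.

```python
def score_case(results):
    """Score one case. `results` lists one outcome per data set in the case,
    each "correct", "incorrect", or "none" (no relationship reported).
    Each data set carries an equal share of the case's unit weight."""
    share = 1.0 / len(results)
    correct = share * sum(r == "correct" for r in results)
    incorrect = share * sum(r == "incorrect" for r in results)
    return correct, incorrect

# A four-set case with two matches, one wrong answer and one abstention
# credits half a "correct" answer and a quarter of an "incorrect" one.
print(score_case(["correct", "correct", "incorrect", "none"]))  # -> (0.5, 0.25)
```

Summing these per-case credits over all 100 test cases yields the totals reported in the Results section.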
These systematically collected data sets had a devastating effect both on my programs and on my basic conception of function finding. By the time I had collected 60 cases from the Physical Review, however, I had made the rather radical conceptual shift described above and designed a new algorithm to reflect it. I then set out to conduct a prospective trial of my ideas by collecting an additional 100 cases from the Physical Review for use in testing. In what follows, I will refer to the 117 cases collected in my development phase as preliminary and the remainder as test cases. Note that the tested algorithms and algorithm parameters were fixed in every detail before any of the test cases was collected.

The E* Algorithm

As I have indicated, the algorithm E* which I am about to present concentrates on identifying a fixed set of relationships reliably rather than on searching an infinite space of possibilities. Three main observations influenced my decision to proceed in this manner. First, experience with the preliminary Physical Review cases showed that scientists reporting in the journal proposed functional relationships of a few simple forms in as many as 70 percent of the cases I collected. Second, in testing a reimplementation of Bacon's core bivariate function-finding algorithm on these preliminary cases, I found that, although the algorithm is equipped to consider an infinite number of complex relationships, its actual successes were limited to a handful of simple ones. Finally, preliminary testing of this and other algorithms suggested strongly that function finding was as much a matter of avoiding incorrect hypotheses as of proposing correct ones. As data presented below will show, the Bacon algorithm gets nearly one wrong answer for every right one; in a sense, it leaves a major part of the work of function finding to the user, who must decide when to trust the program.
Practically speaking, follow-up of false leads is a waste of scientific resources and spurious answers thus constitute a cost of function finding which it is essential to control. This point has not been considered sufficiently, I think, by researchers who have conducted tests on selected and often artificial cases and who thus, for the most part, have only successes to report.

The E* algorithm, then, considers only eight possible answers: the linear relationship y = k1x + k2, six power proportionalities, y = kx^n for n in {-2, -1, -.5, .5, 1, 2}, and the null answer "No relationship identified." In deciding between these, E* employs a fairly complicated scheme developed on the basis of my experience with the preliminary cases. This scheme is quite definitely the result of trial and error, rather than an implementation of a preconceived theory of data analysis. Still, it may be useful to construe the approach as an application of three basic abstract notions.

By the first, significance, I mean the strength of a functional pattern measured in terms of how unlikely it is to have arisen by chance in purely random data. Note that, though I am borrowing both the concept and the term from statistics, I am not speaking of conducting strict tests of statistical significance; I only propose to make use of the statistical measures underlying such tests, inasmuch as these provide evidence regarding how much we ought to trust an apparent pattern.

For the second notion, I will use the new term distinction, since I do not believe the idea has heretofore received attention. In general, in conducting function-finding work, it is natural to consider measures of fit which indicate how well a functional relationship approximates the data in hand. The statistician's R2 is a simple example.
By distinction, I mean any indication provided by such a measure which suggests that a candidate function stands apart from other functional forms with which it might easily be confused. If we want to ensure the reliability of a function-finding system, it seems reasonable that we should attempt to keep the system from reporting relationships when confusion is likely and hence that distinction ought to be relevant to evaluation.

The third basic concept is what statisticians refer to as systematic lack of fit and it is illustrated by the data set plotted in the lefthand graph of Figure 1. This is real scientific data taken from one of my Physical Review cases and, apparently, is an example of a strong linear relationship. Actual measured data is always somewhat in error, however, so, even if the underlying relationship is in fact linear, the best-fitting formula of the form k1x + k2 will not predict y values perfectly. We must always expect a discrepancy or residual equal to y - (k1x + k2). We should not, however, find a relationship between the value of x and the value of the residual. If such a relationship were to exist, we could use it to add a correction to the original formula and do a better job of predicting y. Moreover, this correction would have to be non-linear, since we have already assumed that the linear coefficients are optimal. The implication of a functional relationship between x and the residuals would thus be a non-linear relationship between x and y.

Figure 1: Data to Illustrate Lack of Fit

If we plot the residuals against x for the data of the example, as in the righthand graph of Figure 1, however, we do find an extremely clear pattern. In this case, we say that the proposed linear relationship suffers from systematic lack of fit. The graph provides strong evidence that the relationship between x and y in the example is not linear and, in fact, the scientist's hypothesized relationship in this case is far more complex. In general, we might expect that systematic lack of fit is grounds for suspecting that a relationship is not the one proposed by a scientist and hence that it will be useful as an evaluation criterion.

Evaluating Power Proportionalities

Having introduced these general ideas, let me now describe how they form the basis of E*'s specific criteria for evaluating power proportionalities. A statistician might measure the fit of a relationship of the form y = kx^n by regressing y on x^n (without including an intercept) and checking the associated R2 value.[1] In E*, the basic measure of fit is a monotonic transformation of this statistic:[2]

MF = 1 / (1 - R2)

E* thus begins by measuring MF for each of the six power proportionalities noted above. The relationship with the greatest degree of fit - the highest MF value - is selected for further evaluation. As a measure of the distinction of this relationship, which I will call the candidate, E* uses the ratio of its MF value to the next highest value among the original six. This ratio, D, will be two if the best relationship leaves half as much unexplained variation in y as the next-best relationship, ten if it cuts this next-best unexplained variation by a factor of ten and so on. In general, the higher the value of D, the more the candidate is distinguished from other low-order power proportionalities and the more confident E* may be in reporting it.

Significance is applied by E* somewhat indirectly. Since the algorithm is considering the relation y = kx^n, a statistician would likely consider a test of the statistical significance of the coefficient k. That is, he or she might attempt to show that we are unlikely to have collected data which so strongly supports a non-zero value of k, if k is, in fact, zero.

E* reverses the application. It considers a more complicated relationship, y = k1x^n + k2, and uses standard regression techniques to calculate an optimal value for k2. Then, however, it attempts to show that it would not be unlikely to collect data that supports a non-zero value for k2 as strongly as the actual observed data even if k2 is, in fact, zero. If this is true, it suggests that any apparent benefit of adding the intercept is spurious and, hence, provides evidence in favor of the original relationship y = kx^n.

Evidence regarding the significance of k2 is provided by the statistician's t-statistic. This will be large in absolute value if it is unlikely that an apparent non-zero value of k2 is due to purely random fluctuations in the data - that is, if the non-zero value appears significant - and it will be near zero otherwise. To summarize, E* calculates the t-statistic for k2. It considers large absolute values as evidence of the significance of this intercept - hence, evidence against the candidate y = kx^n.

[1] For those without statistical training, [Schaffer, 1990a] provides the background necessary to understand the approach sketched here.

[2] The advantage of this measure is simply that it is easier to interpret when R2 is close to unity, as is very often the case with the scientific data I have examined. Consider, for example, two relationships with respective R2 values of .9891 and .9981. These raw numbers make it hard to see just how much better the second relationship is than the first. The corresponding MF values 91.74 and 526.3, however, allow us to see easily that the second relationship cuts the unexplained variation by roughly a factor of five.
Conversely, it considers near-zero values as evidence against the intercept and in favor of the candidate.[3]

Having calculated the indicators D and t, then, E* must combine information provided by these to decide whether to report the best-fitting of the six power proportionalities it considers. The basis of the combination rule is the graph of Figure 2. This figure contains one point for each data set in the preliminary cases. This point is plotted according to the values of D and t calculated for the candidate function. Note that I have taken logarithms of D and t, since the raw values span many orders of magnitude.

Figure 2: Using t and D to Evaluate Power Proportionalities

In the graph, a + symbol represents a data set in which the candidate function matches the scientist's reference function and a - symbol represents one in which these functions are different. In the first case, E* should report the candidate; in the second, it should not. The evaluation question thus boils down to identifying the largest possible region of the D-t plane in which we may be fairly sure that a new point is much more likely to represent a + than a - case.

[3] Statistical sophisticates might worry here about two points. First, the degree to which a given value of the t-statistic may be considered "large" depends on the number of observations on which it is based. E* uses the raw value of t without adjusting for this effect, which may be substantial for very small data sets. Second, normal use of the t-statistic depends on certain strong assumptions about the meaning of the phrase "purely random fluctuations" and these are likely to be violated severely in many of the Physical Review cases. Together, these points suggest that the value of t may be a misleading measure of significance in some cases. See [Schaffer, 1990a] for further discussion of this point.
Standard pattern recognition techniques are certainly applicable here, but the region in question seemed so clear to me when I first examined this graph that I simply drew the dotted line shown in the figure by eye and adopted it as the evaluation criterion for E*. The equation of the line is:

ln t = .6 ln D - 2

Hence, E* reports the candidate power proportionality if ln t < .6 ln D - 2.

Evaluating linear relationships

If this criterion rejects the best-fitting power proportionality, E* considers the linear relationship y = k1x + k2. In evaluating this new candidate, three evaluation criteria come into play.

First, as with power proportionalities, E* compares the fit of the candidate to other functional forms with which it might easily be confused. The candidate may be written as y = k1x^1 + k2; hence, E* checks functions of the form y = k1x^n + k2 for n near 1. Normally, the values used for n are .5 and 1.5. If any value of x is negative, however, the transformations x^.5 and x^1.5 are impossible and E* uses the values -1 and 2 for n instead. E* begins, then, by calculating the measure of fit MF for each of three fitted functions, the candidate and y = k1x^n + k2 for n in either {.5, 1.5} or {-1, 2}. Having done so, however, the algorithm does not look for the fit of the candidate to be sharply better than its rivals, as in the case of power proportionalities, but rather simply checks if it is the best of the three - a kind of local maximum. This is clearly a very different instantiation of the concept of distinction than the one presented above, although the abstract purpose in both cases is to provide evidence that the candidate may be distinguished from similar functional forms.

If the candidate is distinguished in the new, weak sense, E* proceeds to consider a second criterion, which applies the concept of significance in a rather straightforward fashion.
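The power-proportionality stage described above can be sketched in modern terms as follows. This is a minimal reconstruction, not Schaffer's code: the ordinary-least-squares formulas, function names, and data layout are my own assumptions; only the six exponents, the MF and D measures, the intercept t-statistic, and the ln t < .6 ln D - 2 rule come from the text.

```python
import math

EXPONENTS = (-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)

def mf_through_origin(x, y, n):
    """Fit y = k*x**n with no intercept and return MF = 1/(1 - R^2).

    For a no-intercept fit, R^2 = 1 - SSE/sum(y^2), so MF is just
    sum(y^2)/SSE.  (Assumes imperfect data, i.e. a nonzero SSE.)"""
    z = [xi ** n for xi in x]
    k = sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * zi for zi in z)
    sse = sum((yi - k * zi) ** 2 for zi, yi in zip(z, y))
    return sum(yi * yi for yi in y) / sse

def intercept_t(x, y, n):
    """Fit y = k1*x**n + k2 and return the t-statistic of the intercept k2."""
    z = [xi ** n for xi in x]
    m = len(x)
    zbar, ybar = sum(z) / m, sum(y) / m
    sxx = sum((zi - zbar) ** 2 for zi in z)
    k1 = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y)) / sxx
    k2 = ybar - k1 * zbar
    sse = sum((yi - k1 * zi - k2) ** 2 for zi, yi in zip(z, y))
    se_k2 = math.sqrt(sse / (m - 2) * (1.0 / m + zbar ** 2 / sxx))
    return k2 / se_k2

def best_power(x, y):
    """Pick the best-fitting power proportionality; report its exponent
    only if ln t < .6 ln D - 2, the criterion drawn by eye from Figure 2."""
    fits = sorted(((mf_through_origin(x, y, n), n) for n in EXPONENTS),
                  reverse=True)
    (mf1, n), (mf2, _) = fits[0], fits[1]
    d = mf1 / mf2                    # distinction: ratio of the top two MFs
    t = abs(intercept_t(x, y, n))    # significance of the spurious intercept
    return n if math.log(t) < 0.6 * math.log(d) - 2 else None

# Nearly exact y = 3x^2 data: a sharply distinguished candidate (huge D)
# with an insignificant intercept (small t) is reported.
x = list(range(1, 9))
y = [3 * xi * xi + (0.5 if xi % 2 else -0.5) for xi in x]
print(best_power(x, y))   # -> 2.0
```

On data with no underlying power law (for example, a roughly constant y), D stays near one while the intercept t-statistic is large, so the criterion fails, `None` is returned, and control passes to the linear-relationship stage described next.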
Having fit the linear formula y = k1x + k2 by regression, E* calculates the t-statistics associated with the two fitted coefficients and rejects the formula unless both are of absolute value greater than two.[4]

Finally, E* checks to make sure the linear relationship does not suffer from systematic lack of fit. Since it cannot rely on visual inspection of plots, the algorithm makes do with a rough numerical approach. It begins by calculating the residuals r of the best-fitting linear relationship and then continues by carrying out a second regression to determine optimal coefficients in the quadratic equation

r = k1x^2 + k2x + k3

If, in fact, there is no functional relationship between x and r, we would expect the significance of these coefficients to be low. On the other hand, if there is a functional relationship between x and r and if a second-order approximation to this relationship is at all accurate over the given range, we would expect the coefficients to appear significant. Thus, E* considers the t-values associated with the coefficients k1, k2 and k3 and concludes that it has detected systematic lack of fit if the absolute value of any of these is greater than five.[5] In this case, it rejects the candidate linear relationship between x and y and reports that no relationship was identified in the input data. Otherwise, the linear candidate has satisfied each of the three criteria I have described and E* will propose it.

[4] As noted before, the use of the t-statistics is normally conditioned on acceptance of strong assumptions about the type of random noise affecting measurements and, even in this case, the cutoff value should depend on the number of data points. I am relying here on faith - and empirical evidence - that even when abused as I have described, the t-statistic is useful in evaluation.

Results of a Prospective Test

Code for the algorithm I have just described is given in [Schaffer, 1990a].
It runs through the 100 test cases - a total of 192 data sets - in about 11 minutes of real time on a Sun-3 workstation. As noted above, the algorithm was designed in its entirety before any test cases were collected. The first row of Table 1 thus shows the results of a purely prospective test of E* on these cases.

Table 1: Results for E* and B(A) Compared

For purposes of comparison, the remaining rows of the table show the results of a prospective test of a reimplementation of the Bacon algorithm on the same cases. Like the original, this implementation employs a tolerance parameter A which critically affects performance. The table shows results for a range of A values specified before the test cases were collected: B(A) denotes the reimplemented Bacon algorithm with tolerance set at A percent. Details and code are provided in [Schaffer, 1990a]; for present purposes, I only want to note that the indicated performance of B(A) is slightly better than we could expect from the original Bacon program.

[5] A more conventional criterion would make use of the F statistic. See [Schaffer, 1990a] for discussion of this point.

The results tabulated above suggest a number of important points. First, they make it possible to characterize the performance of E* as suggested earlier in this paper. When analyzing bivariate data sets of the kind published in the Physical Review in the first quarter of this century, E* has approximately a 30 percent chance of giving the same answer as the reporting scientist. Moreover, the algorithm operates in this environment at a cost of roughly one incorrect answer for every three correct ones.[6] Note that, although the algorithm may be considered as proven only in the specified environment, we may reasonably expect that it would perform comparably in others.
The Physical Review of the early 1900s published work in a wide range of subdisciplines of physics and chemistry and is thus a good candidate to serve as a general representative of the quantitative physical science of that time.

Second, the Bacon algorithm operates in the Physical Review environment at a cost of roughly one incorrect answer for every correct one over a very wide range of A settings. E* thus cuts function-finding costs - or, equivalently, increases function-finding reliability - by about a factor of three.

Third, concentration on reliability has had very little effect on the range of application of E*. Though it considers just seven relationships instead of an infinite space of formulas, E* handles nearly as many cases correctly as the most noise-tolerant of the B(A) algorithms.

Finally, it is worth noting that the performance results reported here for both E* and B(A) are the first evidence ever presented to show that domain-independent function-finding systems can operate successfully on problems not specially selected by their authors. I personally find it rather striking that, while scientists bring a huge store of detailed domain knowledge to bear in analyzing data, it is possible without relying on such knowledge to duplicate their conclusions with some reliability in this environment in as many as a third of reported cases.

[6] See [Schaffer, 1990a] for caveats regarding these estimates and confidence intervals to suggest how much they may be affected by sample variability.

Comments

For an extensive and careful consideration of many points I have touched on briefly above, please refer to [Schaffer, 1990a]. In particular, [Schaffer, 1990a]
In fact, though Bacon is the oldest and sim- plest of AI function finders, my experience suggests that it is quite difficult to outperform. I expect that more recent systems-including the new IDS [Nord- hausen, 1989]- would do worse than Bacon in a sim- ilar test. Certainly, I am prepared to make the trial, if the author of any such system is willing to provide code and suggest appropriate parameter settings. References (Falkenhainer, 1985) Brian Carl Falkenhainer. Quan- titative empirical learning: An analysis and method- ology. Master’s thesis, University of Santa Clara, August 1985. (Greene, 1988) Gregory H. Greene. The Abacus.2 sys- tem for quantitative discovery: Using dependencies to discover non-linear terms. Technical Report ML1 88-17, George Mason University, Machine Learning and Inference Laboratory, June 1988. (Kokar, 1986) Mieczyslaw M. Kokar. Discovering functional formulas through changing representation base. In Proceedings of the Fifth National Confer- ence on Artificial Intelligence, 1986. (Langley et al., 1987) Pat Langley, Herbert A. Simon, Gary L. Bradshaw, and Jan M. iytkow. Scientific Discovery: Computational Explorations of the Cre- ative Processes. MIT Press, 1987. (Nordhausen, 1989) Bernd Enno Nordhausen. A Computational Framework for Empirical Discovery. PhD thesis, University of California, Irvine, 1989. (Schaffer, 1989a) Cullen Schaffer. Bacon, data analy- sis and artificial intelligence. In Proceedings of the Sixth International Workshop on Machine Learning, 1989. (Schaffer, 1989b) Cullen Schaffer. An environ- ment/classification scheme for evaluation of domain- independent function-finding programs. In Proceed- ings of the IJCAI Workshop on Knowledge Discov- ery in Databases, 1989. (Schaffer, 1990a) Cullen Schaffer. Domain-Independent Scientific Function Finding. PhD thesis, Rutgers University, May 1990. (Schaffer, 1990b) C u en Schaffer. Scientific function 11 finding is classification. To be submitted, 1990. 
(Wu and Wang, 1989) Yi-Hua Wu and Shu-Lin Wang. Discovering knowledge from observational data. In Proceedings of the IJCAI Workshop on Knowledge Discovery in Databases, 1989.
Knowledge Level and Inductive Uses of Chunking (EBL)

Paul S. Rosenbloom
Information Sciences Institute, University of Southern California
4676 Admiralty Way, Marina del Rey, CA 90292
rosenbloom@isi.edu

Jans Aasman
Traffic Research Centre, Rijksuniversiteit Groningen
Rijksstraatweg 76, 9752 AH Haren (Gn), The Netherlands
aas%hgrrug5.bitnet@cunyvm.cuny.edu

Abstract

When explanation-based learning (EBL) is used for knowledge level learning (KLL), training examples are essential, and EBL is not simply reducible to partial evaluation. A key enabling factor in this behavior is the use of domain theories in which not every element is believed a priori. When used with such domain theories EBL provides a basis for rote learning (deductive KLL) and induction from multiple examples (nondeductive KLL). This article lays the groundwork for using EBL in KLL, by describing how EBL can lead to increased belief, and describes new results from using Soar's chunking mechanism - a variation on EBL - as the basis for a task-independent rote learning capability and a version-space-based inductive capability. This latter provides a compelling demonstration of nondeductive KLL in Soar, and provides the basis for an integration of conventional EBL with induction. However, it also reveals how one of Soar's key assumptions - the non-penetrable memory assumption - makes this more complicated than it would otherwise be. This complexity may turn out to be appropriate, or it may point to where modifications of Soar are needed.

Introduction[1]

Recent analytical papers on explanation-based learning (EBL) (DeJong & Mooney, 1986; Mitchell, Keller, & Kedar-Cabelli, 1986) comment on how training examples are not logically necessary for EBL (Prieditis, 1988; van Harmelen & Bundy, 1988). Their point is that a training example may serve a useful search control function in EBL - guiding the learner to regions of performance which it would be useful to operationalize - but that the resulting operationalized rule is just a specialization, in the general case, of what a partial evaluation (PE) mechanism could achieve without the training example. This is an important point which reveals a previously submerged connection between learning and program transformation. However, it is flawed by its neglect of the use of EBL in knowledge level learning (KLL) (Flann & Dietterich, 1989; Rosenbloom, Laird, & Newell, 1987); that is, for the acquisition of knowledge not implied by what is already known (Dietterich, 1986). In such situations, the combination of the training example and goal concept - an instance - plays an essential role; one that is quite comparable to the role of instances in classic inductive concept learning systems.

The first task of this article is to lay necessary groundwork for the use of EBL in KLL. The key idea is to explain - actually, to rationalize - instances via a low-belief domain theory, and then to use EBL to acquire a high-belief rule from the rationalization. Unlike in PE, the instances play a crucial role here in determining what is rationalized, and thus what becomes believed.

[1] This research was sponsored by the National Aeronautics and Space Administration under cooperative agreement number NCC 2-538, and by the Defense Advanced Research Projects Agency (DOD) under contract number N00039-86C-0033 (via subcontract from the Knowledge Systems Laboratory, Stanford University). We would like to thank Yoram Reich, Haym Hirsh, Craig Miller, Peter Pirolli, and John Laird for their helpful comments.
We then build on this foundation in the context of Soar (Laird, Newell, & Rosenbloom, 1987; Rosenbloom et al., 1990) - whose learning mechanism, chunking, is a variation on EBL (Rosenbloom & Laird, 1986) in which new rules are acquired from a dependency analysis of the traces of rules that fire during subgoal-based problem solving - to take several steps towards the realization of the integrated-learning hypothesis (Rosenbloom, 1988). This hypothesis states that "Rote learning, empirical generalization, and explanation-based learning arise as variations in the knowledge-reconstruction process", where "knowledge-reconstruction" should be read as "rationalization". Traditional explanation-based learning differs from rote memorization and induction in that the former uses only high-belief rationalizations while the latter two necessitate aspects that are initially low in belief. Rote memorization differs from induction in that the latter utilizes additional knowledge to affect rationalization. The subsequent sections introduce a new task-independent, rote-memorization capability in Soar; extend this to induction from multiple examples, providing a compelling example of nondeductive knowledge level learning (NKLL) (Dietterich, 1986) in Soar, and introducing a complication caused by Soar's inability to directly examine its own rules; cover the third leg of the integrated-learning hypothesis, explanation-based learning, and its use in induction; and conclude.

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved. (Rosenbloom and Aasman, p. 821)

EBL and KLL

EBL can be applied over many types of domain theories - the only requirement being the provision by the domain theory of a way to generate a (possibly generalized) dependency structure for the instance that relates the training example to the goal concept. In a classical EBL domain theory, all of the elements - e.g., facts and rules - are ascribed a uniform high level of belief.
In such a domain theory, EBL performs symbol level learning by explicitly storing knowledge that is already implicit in this domain theory. However, it does not alter levels of belief - they are all already as high as they can get. For knowledge level learning, the domain theory needs to include low-belief elements. With such a theory, EBL can lead to knowledge level learning by increasing the level of belief in selected elements of the theory. Consider the example of a system with a domain theory that allows it to abductively generate rationalizations - that is, plausible explanations - for what it sees (or hears). One of the key ways in which a rationalization differs from a simple deductive proof - as used in EBG (Mitchell, Keller, & Kedar-Cabelli, 1986), for example - is that the facts and rules utilized in a rationalization need not be completely believed in order to be used. It is in fact essential to the rationalization process that the system be able to derive from its domain theory not just facts that are known to be true, but also ones which are not yet believed. For example, suppose the instance consists of telling the system that "Fred, a dolphin, is warm blooded" - "Fred is a dolphin" is the training example, and "Fred is warm blooded" is the goal concept. To rationalize this knowledge it might use that "dolphins are mammals" and "mammals are warm blooded", even if its a priori belief is that dolphins are fish rather than mammals. This explanation could have been created in the absence of the instance, but it would have little a priori believability. It is the existence of the instance that provides grounds for increasing the believability of the explanation. When this example is extended to the EBL/PE situation, it becomes clear that arbitrary rules created by partially evaluating this domain theory would have low believability, while comparable rules created by EBL for specific instances could have much higher believability.
The instances allow EBL to increase the scope of what is believed, thus enabling knowledge level learning. In the extreme it is possible to start with a theory consisting of a generator able to produce data structures representing all possible pieces of knowledge, all with zero belief. Such a theory has all possible knowledge implicit in it, but none of it initially believed. EBL, in conjunction with appropriate instances, can then be used selectively to learn anything, by increasing the level of belief in the appropriate, possibly implicit, knowledge structures. One way to view this process is as explicit belief propagation, where there are belief-propagation rules that are used to compute a level of belief for an explanation - and thus for the rules learned via EBL from the explanation - as a function of the believability of the individual domain theory elements and the instance. An alternative view of this process, and the one that guides the research reported here, is that the instance acts as a filter, letting through only those explanations which should be believed. Learning then only occurs for these believed explanations. To support this type of processing, plausible domain theories - that is, theories in which only plausible explanations can be generated for conclusions - are required. Such a theory can potentially explain things that aren't true - necessitating the use of instances as filters - but what is true is explained in a plausible manner. As long as a domain theory meets this condition, the a priori believability of the individual elements of the theory can be zero without affecting the believability of what is learned. At the extreme end, the domain theory could contain elements which are not representational in the domain, such as the letter "d" - or which are representational but do not have belief values, such as "dolphin" - but which can be combined syntactically to generate elements which do have belief values.
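The instance-as-filter view can be made concrete with a small sketch. This is our own toy illustration, not the paper's Soar implementation: a low-belief domain theory can chain to many conclusions (dolphins as mammals or as fish), and the observed instance selects which chain is rationalized and thus which compiled rule becomes believed.

```python
# Hypothetical low-belief domain theory: each (fact, conclusion) rule has
# a priori belief 0 - it is generatable, but not yet believed.
domain_theory = [
    ("dolphin", "mammal"),
    ("dolphin", "fish"),
    ("mammal", "warm-blooded"),
    ("fish", "cold-blooded"),
]

def explanations(example, goal):
    """Enumerate rule chains through the domain theory from example to goal."""
    chains = []
    def extend(chain, current):
        if current == goal:
            chains.append(chain)
            return
        for a, b in domain_theory:
            if a == current and (a, b) not in chain:
                extend(chain + [(a, b)], b)
    extend([], example)
    return chains

believed_rules = []  # rules learned via EBL receive high belief

def ebl_with_instance(example, goal):
    """The instance filters the explanations: only a chain that actually
    derives the observed goal is believed and compiled into a rule."""
    for chain in explanations(example, goal):
        believed_rules.append((example, goal))  # compiled rule, belief = 1
        return chain  # the rationalization that is now believed
    return None

# "Fred, a dolphin, is warm blooded" selects the mammal chain, not the fish one.
chain = ebl_with_instance("dolphin", "warm-blooded")
```

Partially evaluating the whole theory would also produce a dolphin-to-cold-blooded rule; the instance is what licenses believing only the mammal chain.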
Given EBL and plausible low-belief domain theories, the one missing component is what makes the newly learned rule believed. A variety of approaches are possible, including ones that augment the EBL mechanism to generate explicit belief values, and ones that partition learned rules - which automatically receive high belief - from domain theory rules. In Soar a variation on this latter approach is used. The domain theory corresponds to the problem space used in a subgoal (Rosenbloom & Laird, 1986) - in our case, this might be a generation problem space, where all possible structures are generatable, but none are believed. However, the learned rule is always created for the problem space in the parent goal; perhaps a fact problem space, in which all retrievable structures are believed. If restrictions are then placed on which problem spaces are utilized at any point in time, it is possible to know what level of belief - 0 or 1 - to assign to the retrieved knowledge. This point is related to the one recently made in (Flann & Dietterich, 1989). They focus on how EBL can perform KLL if an explanation that is generated for one concept is used to define a second concept that is a specialization of the first one. In our work, EBL performs KLL by transferring (possibly implicit) structures from an unbelieved domain theory to a believed domain theory that contains a subset of the structures in the original theory. Despite the differences, what both have in common - in fact, what every use of EBL for KLL must have in common - is that the learned knowledge is used differently than would be the domain theory from which it is learned. It can be used at a different belief level (as here), for a different goal concept (Flann & Dietterich, 1989), or even as evidence that a particular episode occurred (in which the rule was learned) (Rosenbloom, Newell, & Laird, 1990).
Rote Memorization

In previous work this general approach to knowledge level learning has been employed to perform several simple rote memorization tasks - recognition, recall, cued recall, and paired-associate recall - for hierarchical letter strings and for objects that are described by attributes with values (Rosenbloom, Laird, & Newell, 1987; Rosenbloom, Laird, & Newell, 1988; Rosenbloom, Newell, & Laird, 1990). This work also provided solutions for two additional problems that arise when rules are to be learned that recall new structure: (1) the data chunking problem - how to enable the retrieval of new information without its already being present - and (2) the selective retrieval problem - how to avoid retrieving everything ever learned. The data chunking problem is solved by reconstructing new knowledge from what is already known - that is, the domain theory - rather than directly rationalizing the input structures representing the new knowledge. Since this reconstruction process is not dependent on the input structures, tests of the input do not appear in the conditions of the learned rules. The selective retrieval problem is solved by selectively acquiring retrieval cues as conditions of learned rules. Recently, this earlier work has been extended with the development of a general, task-independent rote-memorization operator. This is a normal Soar operator that uses problem solving in a subgoal to implement solutions to the data chunking and selective retrieval problems. The operator takes two arbitrary graphs of Soar working memory elements as inputs - the first is the training example, and the second is the goal concept. The result of memorizing the pair is the acquisition of a rule that tests for the existence of the first graph (the cue graph) and if the test succeeds, retrieves the second graph (the recalled graph) into working memory.
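The cue-to-recall behavior of such a learned rule can be sketched with a toy model (our own simplification - frozensets of attribute-value pairs stand in for Soar's working-memory graphs, and a dict stands in for rule memory):

```python
def memorize(cue, recalled, rules):
    # The recalled graph is "reconstructed" element by element rather than
    # copied from the input, so the learned rule's conditions test only the
    # cue (data chunking problem) and the cue supplies the retrieval
    # context (selective retrieval problem).
    reconstruction = frozenset(recalled)
    rules[frozenset(cue)] = reconstruction

def retrieve(working_memory, rules):
    """Fire every learned rule whose cue graph is present in working memory."""
    retrieved = set()
    for cue, recalled in rules.items():
        if cue <= working_memory:
            retrieved |= recalled
    return retrieved

rules = {}
memorize({("op", "O1"), ("item", "A")}, {("assoc", "B")}, rules)

# Retrieval succeeds only when the full cue structure is in working memory.
hit  = retrieve({("op", "O1"), ("item", "A"), ("extra", "X")}, rules)
miss = retrieve({("op", "O1")}, rules)
```

The point of the sketch is the asymmetry: the recalled structure never appears in the rule's conditions, only the cue does.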
To do this, the memorization operator reconstructs the recalled graph by assembling primitive domain-theory elements into a copy of the graph (solving the data chunking problem), and then makes this copy dependent on the structure of the cue graph (solving the selective retrieval problem). Figure 1 shows such a chunk, schematized as a pair of graph structures. The cue and recalled graphs are attached to operator <O1> in the rule's conditions and actions, respectively.

Figure 1: Chunk learned from memorizing a pair of graph structures.

Multi-Example Induction

When the contents of the initial domain theory are restricted to only elements that are completely believed, it is not possible for the domain theory itself to perform inductive leaps. If a system containing such a domain theory is to perform inductive generalization, it must be done outside the domain theory - in the EBL mechanism, for example. The most common approach is to augment EBL's standard explanation processing with some form of inductive postprocessing of either the entire explanation, or just its operational components (Flann & Dietterich, 1989; Hirsh, 1989; Sarrett & Pazzani, 1989). If, on the other hand, the contents of the initial domain theory can include unbelieved elements, the option opens up of doing induction directly in the domain theory, and leaving explanation processing unaltered. This was proposed in (Rosenbloom, 1988) as part of the integrated-learning hypothesis, and is the approach taken here. The domain theory includes knowledge about how to reconstruct presented objects (a stripped-down version of the task-independent memorization operator), generalization hierarchies (the basis of the concept language), rules which perform inductive leaps, and rules learned from prior instances.
When a new instance is perceived, this domain theory is used to determine what is to be rationalized - it may be a generalization of the instance, rather than the instance itself - as well as how it should be rationalized. One decision that must be made at this point is the orientation of the rule to be learned - whether the concept's name should be in the actions and its definition in the conditions, or vice versa. In EBL (and chunking), the concept definition appears in the conditions because that is where EBL has its primary generalization effect, and also where the definition can act as a recognizer of instances of the concept. However, to go beyond simple concept recognition - to retrieval and modification of the concept definition as further instances are processed - requires a fuller declarative access to the definition. Such access is not available in Soar if the definition is stored in the conditions because Soar's rule memory is non-penetrable - rules can be executed (forward), but can not be directly examined. Non-penetrability arises because the rules are compiled procedures that can be executed but not examined. In psychological terms, they represent automatized behavior (for example, (Shiffrin & Schneider, 1977)). A consequence of non-penetrability is that, to access the concept definition explicitly, it must be stored in the rule's actions, where it can be retrieved by rule execution. Once retrieved, the concept definition can be used to influence what is learned for new instances, and to interpretively recognize instances of the concept (concept recognition rules, in which the definition is in the conditions, can be learned by chunking this interpretation process). As discussed later, the downside of this decision is that the generality of the concept definition is affected only by the decision of what to rationalize, and not by the rationalization process or the chunking/EBL mechanism.
The domain theory that has been implemented within Soar utilizes a variant of the Focussing algorithm (Young, Plotkin, & Linz, 1977; Bundy, Silver, & Plummer, 1985). This is a version-space algorithm that works for spaces describable by a conjunction of attributes, where each attribute is defined by a tree-structured generalization hierarchy. The key ideas underlying the implemented algorithm are that: (1) the version space is kept factored - bounds are maintained independently for each attribute (Subramanian & Feigenbaum, 1986); (2) only near-miss negatives (Winston, 1975) are processed (the zero option)3 to guarantee that the boundary sets will not fragment; and (3) once the first positive example is processed, the entire factored version space is explicitly represented, rather than just the boundary sets (at worst this requires space proportional to the number of attributes times the maximum depth of the generalization hierarchies). Suppose that the task is to learn that the definition of (GOOD=TRUE) is (MOBILITY=MOBILE, SHAPE=ANY, SIZE=ANY). The algorithm starts with the goal concept, (GOOD=TRUE), and a positive example, such as (MOBILITY=WHEELED, SHAPE=SQUARE, SIZE=LARGE).4 It then uses the generalization hierarchies in the domain theory to elaborate the example with all of the superclasses of its attributes' values, yielding for this case (MOBILITY=(ANY, MOBILE, WHEELED), SHAPE=(ANY, POLYGON, REGULAR, SQUARE), SIZE=(ANY, LARGE)). This elaborated example is then memorized as the initial version space, with the retrieval cue being the goal concept:

(GOOD=TRUE) --> (MOBILITY=(ANY, MOBILE, WHEELED), SHAPE=(ANY, POLYGON, REGULAR, SQUARE), SIZE=(ANY, LARGE))

The memorization operator succeeds here only because the initial domain theory implicitly contains within itself every possible version space that could be generated for (GOOD=TRUE), all initially unbelieved. The example then determines which of these version spaces is explicitly stored, and thus believed.
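The elaboration step that initializes the factored version space can be sketched as follows. The parent links encode tree-structured generalization hierarchies matching the paper's running example; the representation (a dict of value sets) is our own illustrative choice:

```python
PARENT = {  # child -> parent in each attribute's generalization tree
    "WHEELED": "MOBILE", "TRACKED": "MOBILE", "MOBILE": "ANY",
    "STATIONARY": "ANY",
    "SQUARE": "REGULAR", "RECTANGLE": "IRREGULAR",
    "REGULAR": "POLYGON", "IRREGULAR": "POLYGON", "POLYGON": "ANY",
    "CIRCLE": "CONIC", "ELLIPSE": "CONIC", "CONIC": "ANY",
    "LARGE": "ANY", "SMALL": "ANY",
}

def superclasses(value):
    """Return the value plus all of its ancestors, up to ANY."""
    chain = {value}
    while value in PARENT:
        value = PARENT[value]
        chain.add(value)
    return chain

def elaborate(example):
    """Elaborate each attribute's value with all of its superclasses."""
    return {attr: superclasses(v) for attr, v in example.items()}

# The first positive example for (GOOD=TRUE) becomes the initial version space.
vs = elaborate({"MOBILITY": "WHEELED", "SHAPE": "SQUARE", "SIZE": "LARGE"})
```

Each attribute is kept factored: its bound is just the set of classes still consistent with the examples seen so far.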
When a succeeding positive example of this same goal concept is encountered, processing again begins by elaborating it with its values' superclasses. The current version space for the concept is then retrieved and compared to the elaborated example. All value classes in the concept version space that are not also in the elaborated example are then rejected from the version space (a form of incremental version space merging (Hirsh, 1989)). Chunking over this rejection process yields rules which, in the future, reject the inappropriate segments of the old version space. This rejection process is itself reconstructive so that the updated version space can later be retrieved without the presence of the positive example. As an example, suppose that the next example for this goal concept is (MOBILITY=TRACKED, SHAPE=ELLIPSE, SIZE=LARGE), which when elaborated becomes (MOBILITY=(ANY, MOBILE, TRACKED), SHAPE=(ANY, CONIC, ELLIPSE), SIZE=(ANY, LARGE)). From this example, the following rejection rules are learned:

(GOOD=TRUE, MOBILITY=WHEELED) --> (MOBILITY=WHEELED)-
(GOOD=TRUE, SHAPE=POLYGON) --> (SHAPE=POLYGON)-
(GOOD=TRUE, SHAPE=REGULAR) --> (SHAPE=REGULAR)-
(GOOD=TRUE, SHAPE=SQUARE) --> (SHAPE=SQUARE)-

The next time the same goal concept is seen, all five learned rules fire, yielding the following updated version space: (MOBILITY=(ANY, MOBILE), SHAPE=ANY, SIZE=(ANY, LARGE)). Learning from negative examples introduces an additional issue: because all of the knowledge learned from positive examples is cued off of (GOOD=TRUE), it will not be retrieved automatically for a negative example, where (GOOD=FALSE) (the converse is also true).

3 With a suitable training order, far misses are not needed for convergence (Bundy, Silver, & Plummer, 1985).

4 This example, and all later ones, have been implemented in Soar (Version 5.0.2), and verified to run correctly.
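The positive-example update just described amounts to an attribute-wise intersection of the factored version space with the elaborated example. A minimal sketch, using the paper's running example (function and variable names are our own):

```python
def merge_positive(version_space, elaborated_example):
    """Reject every class in the version space that is absent from the
    elaborated positive example (attribute-wise intersection)."""
    return {attr: classes & elaborated_example[attr]
            for attr, classes in version_space.items()}

# Version space after the first positive example.
vs = {"MOBILITY": {"ANY", "MOBILE", "WHEELED"},
      "SHAPE":    {"ANY", "POLYGON", "REGULAR", "SQUARE"},
      "SIZE":     {"ANY", "LARGE"}}

# Second positive example, already elaborated with its superclasses.
ex = {"MOBILITY": {"ANY", "MOBILE", "TRACKED"},
      "SHAPE":    {"ANY", "CONIC", "ELLIPSE"},
      "SIZE":     {"ANY", "LARGE"}}

vs = merge_positive(vs, ex)
```

The rejected classes (WHEELED; POLYGON, REGULAR, SQUARE) are exactly the ones targeted by the four rejection rules above.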
The solution to this is to transiently cue with both TRUE and FALSE, thus retrieving all of the knowledge so far learned about the concept, and then to proceed to process the example while maintaining only the correct value (FALSE, in this case). Then, if the example is a near-miss - that is, if it mismatches in at most one attribute - those classes of the mismatched attribute that match are rejected from the version space. If the example is a far miss, it is ignored. As an illustration, suppose the third example is the negative example (MOBILITY=STATIONARY, SHAPE=RECTANGLE, SIZE=LARGE). This example is elaborated with superclass information to become (MOBILITY=(ANY, STATIONARY), SHAPE=(ANY, POLYGON, IRREGULAR, RECTANGLE), SIZE=(ANY, LARGE)), and then the existing information about the concept definition is retrieved. The mismatched attribute is MOBILITY, and the class that matches for that attribute is ANY. The rule learned for rejecting this class is:

(GOOD=FALSE, MOBILITY=ANY) --> (MOBILITY=ANY)-

The resulting concept version space is (MOBILITY=MOBILE, SHAPE=ANY, SIZE=(ANY, LARGE)). These examples demonstrate that Soar can be used not only as the basis for rote memorization (deductive KLL), but also for induction (nondeductive KLL). Inductive augmentations of the EBL mechanism are not required, because the induction occurs directly in the domain theory. As suggested by the integrated-learning hypothesis, rote learning and induction are distinguished by differences in the rationalization process. However, contrary to the intent of the hypothesis, the difference is in terms of what is rationalized rather than how it is rationalized (forced by the decision to store the concept definition in actions rather than conditions). In the induction case, rather than directly rationalizing the concept name in terms of the training example, it is rationalized in terms of the version space (or changes to it).
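The near-miss update for negative examples can be sketched as well. Here an attribute "mismatches" when the version space holds some class the elaborated negative lacks; a near miss has exactly one such attribute, and its matching classes are rejected. This operationalization is our own reading of the description above:

```python
def process_negative(version_space, elaborated_negative):
    """Near-miss processing: reject the matching classes of the single
    mismatched attribute; ignore far misses."""
    mismatched = [a for a, classes in version_space.items()
                  if not classes <= elaborated_negative[a]]
    if len(mismatched) != 1:
        return version_space              # far miss (or no miss): ignore
    attr = mismatched[0]
    rejected = version_space[attr] & elaborated_negative[attr]
    return {**version_space, attr: version_space[attr] - rejected}

vs = {"MOBILITY": {"ANY", "MOBILE"},
      "SHAPE":    {"ANY"},
      "SIZE":     {"ANY", "LARGE"}}

# Elaborated near-miss negative from the running example.
neg = {"MOBILITY": {"ANY", "STATIONARY"},
       "SHAPE":    {"ANY", "POLYGON", "IRREGULAR", "RECTANGLE"},
       "SIZE":     {"ANY", "LARGE"}}

vs = process_negative(vs, neg)
```

Only MOBILITY mismatches, its matching class ANY is rejected, and MOBILE survives as the lower bound.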
This choice of what to rationalize is essentially a choice of what to learn. Here, this choice was based on the instance, the generalization hierarchies, the previous version space, and knowledge about how to induce. Bringing other knowledge to bear should allow additional useful variations on this choice.

Explanation-Based Learning

Using the chunking/EBL mechanism to perform explanation-based learning - that is, the standard form of symbol level learning - supports the third, and final, leg of the integrated-learning hypothesis. However, this needs no explicit demonstration here, as it is the foundational result of EBL. Instead, what is of interest here is the extent to which, in practice, this use of EBL can be integrated with the induction process described in the previous section. In this section we examine three successively weaker versions of this question. The first version is whether the direct use of EBL in induction, as described in the previous section, provides the requisite form of symbol level learning - that is, is EBL itself performing significant acts of "justifiable" generalization during induction? The answer, upon inspection, is "no". EBL is storing the results of inductive processing, but it is not itself contributing to their level of generalization. This is forced by the decision to store the concept definition in rule actions. The second version is whether the standard use of EBL to perform symbol level learning - that is, with a classical believed domain theory - can help the inductive process described in the previous section (which is independently using EBL). The answer to this version of the question is once again "no". To see this, consider a domain theory with the following two believed rules.

(MOBILITY=WHEELED) --> (FAST=TRUE)
(FAST=TRUE, SHAPE=CONIC) --> (GOOD=TRUE)

If these rules are used to relate the training example to the goal concept, the following rule is learned:

(MOBILITY=WHEELED, SHAPE=CONIC) --> (GOOD=TRUE)

This is exactly what EBL should learn.
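Deriving that learned rule can be sketched as backward chaining from the goal concept until every leaf is operational, i.e. present in the (elaborated) example; the operational fringe then forms the learned rule's conditions. This is a toy reconstruction under our own representation, not Soar's rule machinery:

```python
RULES = [  # conditions -> conclusion, as attribute-value tuples
    ({("MOBILITY", "WHEELED")}, ("FAST", "TRUE")),
    ({("FAST", "TRUE"), ("SHAPE", "CONIC")}, ("GOOD", "TRUE")),
]

def operational_fringe(goal, operational):
    """Backward-chain through RULES until every leaf is operational."""
    for conditions, conclusion in RULES:
        if conclusion == goal:
            fringe = set()
            for c in conditions:
                fringe |= {c} if c in operational else operational_fringe(c, operational)
            return fringe
    return {goal}  # no rule concludes it: the goal itself must be operational

# Elaborated example pairs count as operational predicates (CIRCLE
# elaborates to CONIC, so (SHAPE, CONIC) is available here).
example = {("MOBILITY", "WHEELED"), ("SHAPE", "CONIC"), ("SIZE", "SMALL")}
fringe = operational_fringe(("GOOD", "TRUE"), example)
```

The fringe is the condition side of the learned rule; (FAST, TRUE) disappears because it is an intermediate, non-operational conclusion.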
However, it is difficult to use in the induction process described in the previous section because the generalized example is in the conditions of the rule - thus the rule retrieves the goal concept when a matched training example is present, rather than retrieving the generalized example when the goal concept is present. This failure is disturbing because this is the type of gain achieved by other hybrid approaches, such as (Flann & Dietterich, 1989; Hirsh, 1989; Sarrett & Pazzani, 1989). In these other approaches, this problem is solved by enabling the induction process to directly access the explanation, its operational fringe, or the resulting rule. In the present approach, the rule can be fired, but neither it nor any part of the explanation can be directly examined. The third version is whether some form of explanation-based learning can lead to generalizations that are useful in the induction process described in the previous section. The answer here is finally "yes". However, it requires augmenting the domain theory itself with the ability to interpret rules and to generate and process explanations. These rules are not Soar's native rules, but declarative structures of limited expressibility that are stored in the actions of Soar's rules. These rules are retrieved as needed to support a backward-chaining process that starts with the goal concept and ends when it grounds out in attribute-value pairs contained in the elaborated example (the operational predicates). The operational fringe of the explanation derived from this process is a generalization of the example.
Based on the approach in (Hirsh, 1989), this generalized example is used in induction by converting it into an explicit, factored version space - by assigning values to unmentioned attributes ({ANY} for positive examples, and the values in the concept version space for negative examples) and then elaborating it with superclasses - and then merging it with the current concept version space. As illustration, consider the positive example (MOBILITY=WHEELED, SHAPE=CIRCLE, SIZE=SMALL), which becomes upon elaboration (MOBILITY=(ANY, MOBILE, WHEELED), SHAPE=(ANY, CONIC, CIRCLE), SIZE=(ANY, SMALL)). If the domain theory consists of the two rules above, backward chaining yields an operational fringe of (MOBILITY=WHEELED, SHAPE=CONIC), which is more general than the original example because it ignores SIZE, and generalizes SHAPE from CIRCLE to CONIC. When this generalized example is extended to cover the unmentioned attributes, and elaborated, it becomes (MOBILITY=(ANY, MOBILE, WHEELED), SHAPE=(ANY, CONIC), SIZE=ANY). When this description is then merged with the concept version space, the result is (MOBILITY=MOBILE, SHAPE=ANY, SIZE=ANY). The rule learned from this processing is:

(GOOD=TRUE) --> (SIZE=LARGE)-

This same general approach can be used to incorporate other forms of knowledge into the induction process. So far, we have partial implementations of the use of irrelevance knowledge (Subramanian & Genesereth, 1987) and determinations (Davies & Russell, 1987; Mahadevan, 1989; Russell, 1988; Widmer, 1989) in the induction process. When taken together, the answers to the three versions of the question reveal that explanations can be effectively combined with induction in this approach, but that this is achieved only by building additional declarative rule interpretation and EBL mechanisms into the domain theory.
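The extend-elaborate-merge step for a generalized positive example can be sketched as follows; the hierarchy fragments are hypothetical and the function names are ours:

```python
SUPERS = {  # value -> the value plus all of its superclasses
    "WHEELED": {"WHEELED", "MOBILE", "ANY"},
    "CONIC":   {"CONIC", "ANY"},
}

def fringe_to_version_space(fringe, attributes):
    # Unmentioned attributes get {ANY} (positive example); mentioned
    # values are elaborated with their superclasses.
    return {a: SUPERS[fringe[a]] if a in fringe else {"ANY"}
            for a in attributes}

def merge(vs, other):
    """Attribute-wise intersection of two factored version spaces."""
    return {a: vs[a] & other[a] for a in vs}

# Operational fringe from backward chaining: SIZE is ignored entirely.
fringe = {"MOBILITY": "WHEELED", "SHAPE": "CONIC"}
gen_vs = fringe_to_version_space(fringe, ["MOBILITY", "SHAPE", "SIZE"])

# Concept version space after the earlier positive and negative examples.
current = {"MOBILITY": {"MOBILE"}, "SHAPE": {"ANY"}, "SIZE": {"ANY", "LARGE"}}
merged = merge(current, gen_vs)
```

Because the fringe omits SIZE, the merge rejects LARGE, which is exactly what the learned rejection rule records.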
The native mechanisms are not usable because there is no way to access the rules (or explanations) they create as declarative structures, as required by the induction process. The question this raises is whether this is evidence that the Soar architecture needs to be changed or is evidence that some of our preconceived notions about induction, and its interaction with chunking/EBL, need to be changed. While the former is a distinct possibility, the utility of architectures as theories of intelligence stems in large part from their ability to predict unexpected but important phenomena. If the architecture is changed whenever one of its consequences violates preconceived notions, this benefit is lost. Also potentially lost are the positive consequences of the way the changed component currently works. The component is usually the way it is for good reason, which in this case is the ability to model basic aspects of human memory. Thus it is useful, before jumping in and changing the architecture, to first consider the possibility that Soar is revealing something important here. When this is done, at least one intriguing speculation arises - that chunking and EBL, though quite similar in mechanism (both compile dependency structures), are really distinct capabilities. Chunking is an automatic architectural process (it learns for every result of every subgoal, and does not compete for cognitive resources with performance), of fixed capability (how it works is not affected by what the system knows), which compiles recognition-driven procedures (productions) from experience. It is an appropriate, and effective, generalized long-term caching mechanism; but it really is a low-level mechanism that is in some ways more analogous to neural-network learning algorithms than to EBL. However, an intelligent system also needs to be able to deliberately create and utilize declarative explanations of new phenomena. This is where EBL, as used here in concept learning, comes in. 
It is a deliberate cognitive process, of open capability, which processes and creates declarative structures that can be used in induction, and which can also yield behavior, but only indirectly, through interpretation.

Conclusions

By making a distinction between what is in the domain theory (either implicitly or explicitly) and what is believed, it is possible to distinguish the symbol level and knowledge level uses of EBL - symbol level uses make implicit knowledge explicit, while knowledge level uses make unbelieved knowledge believed. This idea has been explored here as the foundation for chunking/EBL-based rote memorization (deductive KLL) and induction (nondeductive KLL) capabilities. Utilizing unbelieved knowledge enables induction to be performed in the domain theory itself, rather than as a post hoc process. Chunking is used in this induction process to store the initial version space, and to record modifications to it that are required by new instances of the concept. These capabilities demonstrate why EBL is not simply reducible to partial evaluation - the training examples are essential. When combined with the standard use of EBL for symbol level learning, these capabilities provide the three legs of support required by the integrated-learning hypothesis. However, the support is weakened by the difference between rote learning and induction arising from differences in what is rationalized rather than how it is rationalized. Further weakening is engendered by the difficulty in using EBL to generalize instances for use by induction. This has been accomplished, but only by implementing an additional declarative EBL mechanism in the domain theory.
Both of these weaknesses occur because of the choice to store the inductive concept definition in the actions of learned rules (rather than in the conditions), which is itself forced by the non-penetrability of Soar's rules, and the resulting difficulty in determining the contents of rule conditions. As discussed in the previous section, this may actually turn out to be appropriate, or it may reveal an aspect of Soar that should be altered.

References

Anderson, J. R. 1986. Knowledge compilation: The general learning mechanism. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach, Volume II. Los Altos, CA: Morgan Kaufmann Publishers, Inc.
Bundy, A., Silver, B., & Plummer, D. 1985. An analytical comparison of some rule-learning programs. Artificial Intelligence, 27, 137-181.
Davies, T. R., & Russell, S. J. 1987. A logical approach to reasoning by analogy. Proceedings of IJCAI-87. Milan.
DeJong, G., & Mooney, R. J. 1986. Explanation-based learning: An alternative view. Machine Learning, 1, 145-176.
Dietterich, T. G. 1986. Learning at the knowledge level. Machine Learning, 1, 287-315.
Flann, N. S., & Dietterich, T. G. 1989. A study of explanation-based methods for inductive learning. Machine Learning, 4, 187-226.
Hirsh, H. 1989. Combining empirical and analytical learning with version spaces. Proceedings of the Sixth International Workshop on Machine Learning. Cornell.
Laird, J. E., Newell, A., & Rosenbloom, P. S. 1987. Soar: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.
Mahadevan, S. 1989. Using determinations in EBL: A solution to the incomplete theory problem. Proceedings of the Sixth International Workshop on Machine Learning. Cornell.
Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. 1986. Explanation-based generalization: A unifying view. Machine Learning, 1, 47-80.
Prieditis, A. E. 1988. Environment-guided program transformation. In G. F.
DeJong (Ed.), Proceedings of the AAAI Symposium on Explanation-Based Learning. Stanford, CA: AAAI.
Rosenbloom, P. S. 1988. Beyond generalization as search: Towards a unified framework for the acquisition of new knowledge. In G. F. DeJong (Ed.), Proceedings of the AAAI Symposium on Explanation-Based Learning. Stanford, CA: AAAI.
Rosenbloom, P. S., & Laird, J. E. 1986. Mapping explanation-based generalization onto Soar. Proceedings of AAAI-86. Philadelphia.
Rosenbloom, P. S., Laird, J. E., Newell, A., & McCarl, R. 1990. A preliminary analysis of the Soar architecture as a basis for general intelligence. Artificial Intelligence. In press.
Rosenbloom, P. S., Laird, J. E., & Newell, A. 1987. Knowledge level learning in Soar. Proceedings of AAAI-87. Seattle.
Rosenbloom, P. S., Laird, J. E., & Newell, A. 1988. The chunking of skill and knowledge. In B. A. G. Elsendoorn & H. Bouma (Eds.), Working Models of Human Perception. London: Academic Press.
Rosenbloom, P. S., Newell, A., & Laird, J. E. 1990. Towards the knowledge level in Soar: The role of the architecture in the use of knowledge. In K. VanLehn (Ed.), Architectures for Intelligence. Hillsdale, NJ: Erlbaum. In press.
Russell, S. J. 1988. Tree-structured bias. Proceedings of AAAI-88. St. Paul, MN.
Sarrett, W. E. & Pazzani, M. J. 1989. One-sided algorithms for integrating empirical and explanation-based learning. Proceedings of the Sixth International Workshop on Machine Learning. Cornell.
Shiffrin, R. M. & Schneider, W. 1977. Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127-190.
Subramanian, D., & Feigenbaum, J. 1986. Factorization in experiment generation. Proceedings of AAAI-86. Philadelphia.
Subramanian, D., & Genesereth, M. R. 1987. The relevance of irrelevance. Proceedings of IJCAI-87. Milan.
van Harmelen, F. & Bundy, A. 1988. Explanation-based generalization = partial evaluation.
Artificial Intelligence, 36, 401-412.
Widmer, G. 1989. A tight integration of deductive and inductive learning. Proceedings of the Sixth International Workshop on Machine Learning. Cornell.
Winston, P. H. 1975. Learning structural descriptions from examples. In Winston, P. H. (Ed.), The Psychology of Computer Vision. New York: McGraw Hill.
Young, R. M., Plotkin, G. D., & Linz, R. F. 1977. Analysis of an extended concept-learning task. Proceedings of IJCAI-77. Cambridge.

ROSENBLOOM AND AASMAN 827
Changing the Rules: A Comprehensive Approach to Theory Refinement*

Dirk Ourston and Raymond J. Mooney
Department of Computer Sciences
University of Texas
Austin, TX 78712
email: dirk@cs.utexas.edu, mooney@cs.utexas.edu

Abstract

This paper presents a comprehensive approach to automatic theory refinement. In contrast to other systems, the approach is capable of modifying a theory which contains multiple faults and faults which occur at intermediate points in the theory. The approach uses explanations to focus the corrections to the theory, with the corrections being supplied by an inductive component. In this way, an attempt is made to preserve the structure of the original theory as much as possible. Because the approach begins with an approximate theory, learning an accurate theory takes fewer examples than a purely inductive system. The approach has application in expert system development, where an initial, approximate theory must be refined. The approach also applies at any point in the expert system lifecycle when the expert system generates incorrect results. The approach has been applied to the domain of molecular biology and shows significantly better results than a purely inductive learner.

Introduction

This paper presents a comprehensive approach to automatic theory refinement. In expert system development, theory refinement occurs when an initial, approximately correct, knowledge base must be refined into a high performance system. The initial knowledge base may correspond to textbook knowledge or rough knowledge from an expert. The refinement process uses a set of training cases to improve the empirical adequacy of the knowledge base, i.e. its ability to reach correct conclusions within its problem space [Ginsberg et al., 1988]. Theory refinement is also required at any point in the lifetime of an expert system when errors are detected in its operation.

*This research was supported by the NASA Ames Research Center under grant NCC 2-629.
Equipment used was donated by the Texas Instruments Corporation.

Our approach to theory refinement uses a combination of explanation-based and empirical learning methods. Partial explanations of examples and characteristics of the detected errors are used to focus corrections on the failing portion of the theory. Empirical methods are used to learn new rules or modify the premises of existing rules.

The remainder of the paper is organized as follows. The next section presents some background and motivation for our approach. Then we show some simple examples of our system in action, followed by a discussion of the refinement algorithm. Next we present experimental results demonstrating the system's ability to refine a theory for recognizing biological concepts. In the final two sections we present areas for future research and conclusions.

Background

In discussing theories, we will restrict ourselves to propositional Horn clause logic¹, although much of what we say can be applied to other formalisms such as predicate calculus, or even schema representations. We also assume that the inference engine makes a "closed world assumption," i.e. any example provable by the current theory is positive, else it is negative.

Errors in a theory can be classified in terms of the type of examples that are provable. One form of incorrectness is over-generality, i.e. negative examples are provable. This can be caused by two types of errors: 1) an incorrect rule is present in the theory, or 2) an existing rule is missing a constraint from its premise. The other form of incorrectness is over-specificity, i.e. positive examples are not provable. This can also be caused by two types of errors: 1) a rule in the theory has an additional incorrect constraint in its premise, or 2) the theory is missing a rule which is necessary in the proof of certain examples. In general, an incorrect theory can have both overly-general and overly-specific aspects.
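Under the closed world assumption, both kinds of incorrectness can be detected mechanically: a theory is overly specific with respect to a positive example it cannot prove, and overly general with respect to a negative example it can prove. A minimal sketch (illustrative only, not the authors' code), representing a theory as (premise-set, consequent) pairs and deciding provability by forward chaining:

```python
def provable(theory, facts, goal):
    """Forward-chain from the example's facts until no new consequent fires."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, consequent in theory:
            if consequent not in derived and premises <= derived:
                derived.add(consequent)
                changed = True
    return goal in derived

def diagnose(theory, goal, positives, negatives):
    """Classify the theory's errors under the closed world assumption."""
    return {
        "overly_specific_on": [p for p in positives if not provable(theory, p, goal)],
        "overly_general_on": [n for n in negatives if provable(theory, n, goal)],
    }

# Tiny hypothetical theory: c is provable from {a, b} or from {h}.
theory = [({"a", "b"}, "c"), ({"h"}, "c")]
pos = [{"a", "b"}, {"d"}]   # second positive is unprovable -> overly specific
neg = [{"h"}, {"a"}]        # first negative is provable    -> overly general
```

An incorrect theory can of course exhibit both kinds of failure at once, which is exactly the case a comprehensive refinement system must handle.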
¹Our actual representation is somewhat more general than propositional logic since an atomic proposition can also be an attribute-value pair or a threshold on a numerical attribute.

OURSTON AND MOONEY 815

From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

A comprehensive theory refinement system must be able to handle multiple faults of all types.

Some previous theory refinement systems are only capable of generalizing an overly-specific theory [Wilkins, 1988; Danyluk, 1989; Pazzani, 1989; Ali, 1989] while others are only capable of specializing an overly-general theory [Flann and Dietterich, 1989; Mooney and Ourston, 1989]. For example, the IOU system previously developed by the authors [Mooney and Ourston, 1989] adds constraints to an overly-general theory by using an empirical method to find regularities in the unexplained aspects of the examples. Many systems do not revise the theory itself but instead revise the operational definition of the concept [Bergadano and Giordana, 1988; Hirsh, 1989; Ginsberg, 1988; Shavlik and Towell, 1989; Flann and Dietterich, 1989; Mooney and Ourston, 1989]. Still others rely on active experimentation rather than a provided training set to detect and correct errors [Rajamoney and DeJong, 1988]. Finally, most existing theory correction systems assume a single fault is responsible for each failure [Wilkins, 1988; Danyluk, 1989; Pazzani, 1989].

The system we are developing, called EITHER (Explanation-based and Inductive THeory Extension and Revision), is capable of handling any of the failures described above. The approach can correct multiple faults, and uses one or more failing examples (presented in "batch" format) to learn single or multiple corrections to the theory, as appropriate. The corrections can be made to intermediate points in the theory, rather than strictly involving operational predicates.
The method uses positive and negative examples and is able to learn disjunctive rules. EITHER uses the failures of the explanations created by the domain theory to focus a standard inductive system (currently ID3 [Quinlan, 1986]) to supply corrections to the theory. Because it starts with an initial theory, fewer examples are required to obtain an accurate theory compared with a purely inductive system. In addition, the purpose of our system is to extend the knowledge represented by the initial theory, preserving the structure of the original theory as much as possible. This allows the system to provide better explanations for its conclusions by making use of intermediate concepts in the initial theory.

Examples

Before presenting the details of the system operation, we present some simple examples of how EITHER works. The correct domain theory for the examples, that of drinking vessels, is shown below. This theory is derived from the original cup theory postulated by Winston [Winston et al., 1983].

(stable) ∧ (liftable) ∧ (open-vessel) → (drinking-vessel)
(has-bottom) ∧ (flat-bottom) → (stable)
(graspable) ∧ (lightweight) → (liftable)
(has-handle) → (graspable)
(width small) ∧ (Styrofoam) → (graspable)
(width small) ∧ (ceramic) → (graspable)
(has-concavity) ∧ (upward-pointing-concavity) → (open-vessel)

Examples 1 through 6 are a set of examples which are consistent with this correct version of the theory.

1. (+ (has-concavity) (has-bottom) (flat-bottom) (lightweight) (upward-pointing-concavity) (color yellow) (width small) (Styrofoam))
2. (+ (has-concavity) (has-bottom) (flat-bottom) (lightweight) (upward-pointing-concavity) (has-handle) (color black) (width medium) (Styrofoam))
3. (+ (has-concavity) (has-bottom) (flat-bottom) (lightweight) (upward-pointing-concavity) (has-handle) (color blue) (width large) (ceramic))
4. (- (has-bottom) (flat-bottom) (has-concavity) (lightweight) (upward-pointing-concavity) (width small) (color white) (shape cubical))
5.
(- (has-bottom) (has-concavity) (flat-bottom) (upward-pointing-concavity) (lightweight) (width medium) (Styrofoam) (color copper))
6. (- (has-bottom) (has-concavity) (flat-bottom) (upward-pointing-concavity) (lightweight) (width medium) (Styrofoam) (color blue))

The following subsections are examples of approximate theories which EITHER can be asked to correct. For simplicity, each illustrates a single type of error. Nevertheless, EITHER is designed to handle multiple errors of different types. Note that in the following an assumption is an assertion which, if assumed about a particular example, would allow the proof of the example to be completed.

Incorrect Theory: Additional Antecedent

In this case, the theory has been given an extraneous antecedent in the premise of the second graspable rule: (width small) ∧ (Styrofoam) ∧ (color blue) → (graspable). The effect of this is that example 1 fails, requiring the assumption (color blue) in order for it to be provable. EITHER tentatively removes the assumption (color blue) from the graspable rule and checks the negative examples. Since no negative example is provable, EITHER returns the corrected theory with the assumption removed.

816 MACHINE LEARNING

Incorrect Theory: Missing Rule

In this case, the rule which accounts for objects which have handles being graspable is omitted from the theory: [(has-handle) → (graspable)]. Examples 2 and 3 fail. Example 2 requires either the assumption (width small) or the assumptions (width small) ∧ (ceramic) for its proof to be completed. Example 3 requires either the assumption (width small) or the assumptions (width small) ∧ (Styrofoam) for its proof to be completed. EITHER selects the assumption (width small), since it represents the smallest change to the theory, removes it from the rule in which it participates, and checks the modified theory against the negative examples.

In this case, example 5 (a negative example) is provable.
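The candidate assumption sets for a failing example can be enumerated mechanically. A minimal sketch (not the authors' implementation, which uses a modified ABDUCE; attribute-value pairs such as (width small) are flattened here into single literals like width-small):

```python
# Drinking-vessel theory with the (has-handle) -> (graspable) rule omitted,
# as in the missing-rule scenario; rules are (premise-set, consequent) pairs.
THEORY = [
    ({"has-bottom", "flat-bottom"}, "stable"),
    ({"width-small", "styrofoam"}, "graspable"),
    ({"width-small", "ceramic"}, "graspable"),
    ({"graspable", "lightweight"}, "liftable"),
    ({"has-concavity", "upward-pointing-concavity"}, "open-vessel"),
    ({"stable", "liftable", "open-vessel"}, "drinking-vessel"),
]

def assumption_sets(theory, facts, goal):
    """Enumerate the sets of leaf-level assumptions that would complete a
    proof of `goal` (a simplified stand-in for most specific abduction)."""
    if goal in facts:
        return [frozenset()]
    rules = [prem for prem, con in theory if con == goal]
    if not rules:                       # unprovable observable: assume it
        return [frozenset([goal])]
    results = set()
    for premises in rules:
        combos = [frozenset()]
        for p in premises:
            combos = [c | a for c in combos
                      for a in assumption_sets(theory, facts, p)]
        results.update(combos)
    return sorted(results, key=len)     # smallest change to the theory first

# Example 2: medium-width Styrofoam cup with a handle (handle rule missing).
example2 = {"has-concavity", "has-bottom", "flat-bottom", "lightweight",
            "upward-pointing-concavity", "has-handle", "styrofoam"}
```

Run on example 2, this yields exactly the two assumption sets named in the text, {width-small} and {width-small, ceramic}, with the smaller one first.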
EITHER removes the antecedent (graspable) from the rule for liftable, and re-tests the negative examples. Those that are provable (4, 5 and 6) are used (along with 2 and 3) to discover a new rule for graspable. EITHER adds the rule (has-handle) → (graspable) to the theory.

Incorrect Theory: Missing Antecedent

The theory has been modified such that the second graspable rule is missing the antecedent (width small): [(width small)] ∧ (Styrofoam) → (graspable). Negative examples 5 and 6 become provable. EITHER returns the erroneous graspable rule as a candidate rule to retract (this is partially due to the fact that other rules used in the proofs of examples 5 and 6 are used in the proofs of all examples). EITHER removes the graspable rule from the theory and checks to see if all of the positive examples are still provable. Since example 1 is not provable, EITHER sends examples 1, 5 and 6 to the inductive learner for the purpose of learning antecedents to add to the graspable rule. EITHER adds the conjunct (width small) to the rule and returns the corrected theory.

Theory Refinement Algorithm

The issues to be addressed by a theory refinement algorithm are: determining that there is an error, identifying the incorrect part of the theory, and finding the required correction. This section discusses the approach to theory correction separately for overly-general and overly-specific aspects. The approach which EITHER uses in either case is one-sided: the algorithm for specializing theories is such that no positive examples are eliminated, and the algorithm for generalizing theories is such that no negative examples are admitted. As a result, the corrections discussed below can be sequentially added to obtain a total correction to an arbitrarily incorrect theory.

Generalizing the Theory

For a theory with overly-specific aspects, the ultimate form of the theory correction will be to add rules, to loosen the constraints of existing rules, or both.
Identifying an Error. The problem of identifying that a theory has overly-specific aspects is straightforward: a positive example fails to be proven.

Finding the Location of the Error. The possible proofs of a given goal in a theory can be represented as an and-or tree (or in the more general case an and-or graph), which we will call the theory tree. The original theory tree may be partitioned into a set of and-trees: one for each possible combination of or-branches in the original theory, each one representing a separate possible proof. These are traditionally called proof trees.

For each such proof tree, the leaves of the tree may or may not unify with facts corresponding to the particular example given to the system. In the event that they do not, the system will identify the assumptions required for the proof of the given example. Each such proof is called a partial proof, as it requires assumptions in order to be completed. As mentioned in the previous section, assumptions are facts which, if true for the example, would allow a proof to be completed. More importantly, from our point of view, assumptions are literals which, if removed from the premises of the rule in which they are used, would generalize the theory in such a way that the proof attempt would succeed. Constructing partial proofs is a form of abduction [Charniak and McDermott, 1985]. In order to restrict the assumptions to observables (assertions expressed using operational predicates) we use most specific abduction [Stickel, 1988]. The system which we use to generate partial proofs is a modified version of the ABDUCE system, described in [Ng and Mooney, 1989].

For a complex theory, there will be many such partial proofs and associated assumptions for each unprovable example. In order to minimize the changes to the initial theory, we have adopted the Occam's razor heuristic of finding the minimum number of assumptions required to cover all of the failing examples.
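This minimum-assumption cover can be approximated with a standard greedy covering heuristic. A sketch under the simplifying assumption that each candidate assumption set is scored by the failing examples whose partial proofs it completes (illustrative only; the example names and literals are taken from the missing-rule scenario):

```python
def greedy_assumption_cover(failing_examples, candidates):
    """Greedy set cover: `candidates` maps a frozenset of assumptions to the
    set of failing examples it completes.  Repeatedly pick the candidate
    covering the most still-uncovered examples."""
    uncovered = set(failing_examples)
    chosen = set()
    while uncovered:
        best = max(candidates, key=lambda a: len(candidates[a] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("failing examples cannot all be covered")
        chosen |= best
        uncovered -= candidates[best]
    return chosen

# Missing-rule scenario: examples 2 and 3 fail, with these partial proofs.
candidates = {
    frozenset({"width-small"}):              {"ex2", "ex3"},
    frozenset({"width-small", "ceramic"}):   {"ex2"},
    frozenset({"width-small", "styrofoam"}): {"ex3"},
}
```

Here the greedy pass selects the single assumption width-small, matching the minimal-change choice the system makes; an exact branch-and-bound search would return the same cover on this instance but can take exponential time in general.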
Stating the problem as a logical expression we have:

E1 ∧ E2 ∧ ... ∧ En

where each of the E's represents the statement that a failing positive example has one or more partial proofs, i.e.

Ei ≡ Pi1 ∨ Pi2 ∨ ... ∨ Pim

where the P's represent the statement that a given partial proof for the example is satisfied, i.e.

Pjk ≡ Ajk1 ∧ Ajk2 ∧ ... ∧ Ajkp

where Ajkl represents the lth assumption used in the kth partial proof of the jth example. We then find the minimal set of assumptions, Ajkl = True, which satisfy this expression.

The missing rule example, expressed in these terms, is:

E2: (width small) ∨ ((width small) ∧ (ceramic))
E3: (width small) ∨ ((width small) ∧ (Styrofoam))

and the minimum set of assumptions would consist of the assumption (width small).

In our research, we are comparing two methods for finding the minimum cover of assumptions: a version of the greedy covering algorithm [Johnson, 1974], and the branch and bound algorithm. The greedy algorithm is not guaranteed to find the minimal cover, but will come within a logarithmic factor of it and runs in polynomial time. The branch and bound algorithm is guaranteed to find the minimal cover which accounts for all of the examples, but the process may take exponential time.

Modifying the Theory. Once the minimum cover has been found, the next step is to determine how best to modify the theory so as to account for the failed positive examples. This generalization must also not entail any negative examples.

The heart of the theory modification algorithm is as follows. Assumptions are grouped by the rules in which they participate. The assumptions for each rule are tentatively removed from the antecedents of the rule. If no negative examples become provable, the assumptions are permanently removed. If negative examples become provable, one or more new rules are learned with the same consequent as the current rule.
The rules are learned inductively so as to discriminate the appropriate positive and negative examples. The positive examples are those which have a partial proof completed by the assumptions. The negative examples are those that are provable when the current rule consequent is removed from the antecedent of its parent rule². In this way, rules are learned which augment the theory strictly to account for the failure in the given rule chain.

For the missing rule example, EITHER removes the assumption (width small) from the graspable rule and tests to see if negative examples are provable. Since they are, EITHER removes (graspable) from the rule for (liftable) and sees which negative examples are proven (examples 4, 5 and 6). These are passed to the inductive learner along with the positive examples which required the original assumption (2 and 3) in order to learn a new rule with the consequent (graspable). The rule (has-handle) → (graspable) is added to the theory.

There are exceptions to the procedure described above. If all of the antecedents of a rule are removed, and no negative examples become provable, then remove the consequent of the rule from its parent rule instead and recurse. This accounts for the situation where a ∧ b ∧ c → d and a, b and c have all been removed with no inconsistency with respect to the examples. Since the result is the rule True → d, which will cause d to always be provable, it is appropriate to remove d from its parent rule instead. This is a less drastic change to the theory, since in the case where the theory is a graph, d may have participated in multiple rules, and we are only interested in those which were actually used in the proofs of the examples.

A second exception is when rules are being learned which are used in the proof of a second, higher level rule.
If a majority of the antecedents of a given rule are going to have new rules learned for them on average, then learn a new rule for the consequent of the given rule instead. As a justification for this heuristic, consider the following example:

a ∧ b → d
c ∧ e → a
f ∧ g → b

and assume that the theory is missing the rule h ∧ i → d. Then an example which is a perfect discriminator for the additional d rule will cause a and b to fail (i.e. h and i will be true but c and e and f and g will not be true). But the positive examples can have arbitrary feature values, as long as they are provable. Any combination of a and b may be provable for examples that are provable using h ∧ i → d. Given all possible examples of h ∧ i → d, a majority of the time we would be learning new rules for a and b, and hence we will learn a new rule for d instead. This form of rule learning is also done recursively, since the higher level rule may also participate in the proof of a yet higher level rule, etc.

Specializing the Theory

In the case of a theory with overly-general aspects, the options are to remove rules or add conjuncts to the premises of rules. An overly-general theory manifests itself by having negative examples that are provable. We would like to modify the theory in such a way that the negative examples are not provable, without losing any of the positive examples. In analogy with the previous section, we would like to make the following statement true:

¬E1 ∧ ¬E2 ∧ ... ∧ ¬En

i.e. none of the currently provable negative examples E1...En are provable, where

¬Ei ≡ ¬Pi1 ∧ ¬Pi2 ∧ ... ∧ ¬Pim

i.e. an example is not provable when none of its current proofs are satisfied, and

¬Pjk ≡ ¬Rjk1 ∨ ¬Rjk2 ∨ ... ∨ ¬Rjkl

where Rjkl is the lth rule used in the kth proof of the jth example, i.e. a proof is not complete if at least one of the rules used in the proof is negated. In analogy with most specific abduction, we consider only rules which occur at the leaves of the proof tree for the particular example.
Because of the closed world assumption, the negation of a rule is equivalent to removing it from the theory. Therefore each of the ¬Rjkl is equivalent to a rule retraction. As with assumptions, EITHER forms a minimum cover of rule retractions. In this case, the object is to remove all proofs of all of the provable negative examples. Note that in computing retractions, EITHER removes from consideration those rules which do not have any disjuncts in their proof path to the goal, since these rules are needed to prove any example.

²The rule which precedes the given rule in the rule chain used in the partial proof which includes the assumption.

EITHER removes each of the rules in the minimum cover. If all of the positive examples remain provable, then the rule is permanently removed. If any positive examples fail to be proven, then additional antecedents are added to the rule to prevent it from providing proofs for negative examples while still providing proofs for positive examples. An appropriate set of antecedents is found by giving the inductive learner the positive examples which fail to be proven with the rule removed and the negative examples which used the rule in a proof. The features used in the original rule are removed from the examples before they are sent to the inductive learner, and then added back in to the rule that is learned. In this way, we are guaranteed that the learned rule, which replaces the original rule in the theory, is a specialization of the original rule.

For the missing antecedent example, EITHER removes the rule (Styrofoam) → (graspable) from the theory since this is the only disjunctive rule required in the proofs of the negative examples. Since a positive example becomes unprovable when this is done, EITHER sends the failing positive example and the provable negative examples to the inductive learner after removing the feature (Styrofoam) from the examples.
The inductive learner learns the rule (width small) → (graspable) and EITHER adds the feature (Styrofoam) back in to form the rule (width small) ∧ (Styrofoam) → (graspable), which replaces the original rule in the theory.

Experimental Results

The EITHER algorithm was tested on a theory used for recognizing biological concepts in DNA sequences. The original theory is a modified version of the theory described in [?]. The goal of the theory is to recognize promoters in strings composed of nucleotides (one of A, G, T, or C). A promoter is a genetic region which initiates the first step in the expression of an adjacent gene (transcription) by RNA polymerase. We modified the original theory by removing the tests for conformation in order to improve its tractability. The reduced theory then corresponds to "Pribnow boxes". The input features are 57 sequential DNA nucleotides. The examples used in the tests consisted of 53 positive and 53 negative examples, assembled by a biologist from the biological literature. The initial theory classified four of the positive examples and all of the negative examples correctly. This indicates that the initial theory was entirely overly-specific.

Figure 1 shows the performance results obtained when EITHER was used to refine this theory. In each test, performance was measured against twenty-five test examples. The number of training examples was varied from one to 80, with the training and test examples drawn from the entire example population, with no overlap. The results were averaged over 50 samples. The figure shows that using the approximate theory provides a significant performance advantage, and that this advantage is maintained over the entire training interval. An analysis of the runs showed that EITHER was modifying both leaf level and intermediate concepts in obtaining these results.
A one-tailed Student t-test on paired differences showed that the superior performance of EITHER is statistically significant at the 1% level for every point plotted on the learning curves. After 80 training examples, the 95% confidence interval for the difference between EITHER and ID3 is 5.8% to 10.0% (i.e. with a probability of 0.95 EITHER's accuracy is between 5.8 and 10.0 percentage points higher than ID3's).

Figure 1: EITHER Results for the DNA Theory (learning curves for EITHER and ID3, accuracy versus number of training examples)

Future Research

Empirical tests on additional domains and theoretical analysis of computational complexity and learnability issues are obvious areas for future research. Other directions include extending the approach to deal with noisy data and predicate calculus, and allowing the initial theory to be used as a source of rules for constructive induction.

Conclusions

This report has outlined a technique for theory revision which combines elements of empirical and explanation-based learning. The approach attempts to preserve the structure of the theory as much as possible so that the intermediate concepts represented in the original theory are preserved. Since the technique uses an initial theory it shows definite performance advantages when compared to a purely inductive system. Unlike other theory refinement systems, the proposed approach is capable of handling multiple faults and handles both overly-general and overly-specific aspects of an incorrect theory.

Acknowledgments

Jude Shavlik provided us with the theory and data for the DNA tests, and also helped us to understand the background behind the theory. Michiel Noordewier actually prepared the original theory and data base.

References

[Ali, 1989] K. M. Ali. Augmenting domain theory for explanation-based generalization.
In Proceedings of the Sixth International Conference on Machine Learning, pages 40-42, Ithaca, NY, June 1989.
[Bergadano and Giordana, 1988] F. Bergadano and A. Giordana. A knowledge intensive approach to concept induction. In Proceedings of the Fifth International Conference on Machine Learning, pages 305-317, Ann Arbor, MI, June 1988.
[Charniak and McDermott, 1985] E. Charniak and D. McDermott. Introduction to AI. Addison-Wesley, Reading, MA, 1985.
[Danyluk, 1989] A. P. Danyluk. Finding new rules for incomplete theories: Explicit biases for induction with contextual information. In Proceedings of the Sixth International Conference on Machine Learning, pages 34-36, Ithaca, NY, June 1989.
[Flann and Dietterich, 1989] N. S. Flann and T. G. Dietterich. A study of explanation-based methods for inductive learning. Machine Learning, 4(2):187-226, 1989.
[Ginsberg et al., 1988] A. Ginsberg, S. M. Weiss, and P. Politakis. Automatic knowledge based refinement for classification systems. Artificial Intelligence, 35:197-226, 1988.
[Ginsberg, 1988] A. Ginsberg. Theory revision via prior operationalization. In Proceedings of the National Conference on Artificial Intelligence, pages 590-595, St. Paul, MN, August 1988.
[Hirsh, 1989] H. Hirsh. Incremental Version-Space Merging: A General Framework for Concept Learning. PhD thesis, Stanford University, Palo Alto, CA, June 1989.
[Johnson, 1974] D. S. Johnson. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9:256-278, 1974.
[Mooney and Ourston, 1989] R. J. Mooney and D. Ourston. Induction over the unexplained: Integrated learning of concepts with both explainable and conventional aspects. In Proceedings of the Sixth International Conference on Machine Learning, pages 5-7, Ithaca, NY, June 1989.
[Ng and Mooney, 1989] H. T. Ng and R. J. Mooney. Abductive explanations for text understanding: Some problems and solutions.
Technical Report AI89-116, Artificial Intelligence Laboratory, University of Texas, Austin, TX, August 1989.
[Pazzani, 1989] M. J. Pazzani. Detecting and correcting errors of omission after explanation-based learning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 713-718, Detroit, MI, August 1989.
[Quinlan, 1986] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[Rajamoney and DeJong, 1988] S. A. Rajamoney and G. F. DeJong. Active explanation reduction: An approach to the multiple explanations problem. In Proceedings of the Fifth International Conference on Machine Learning, pages 242-255, Ann Arbor, MI, June 1988.
[Shavlik and Towell, 1989] J. W. Shavlik and G. G. Towell. Combining explanation-based learning and artificial neural networks. In Proceedings of the Sixth International Conference on Machine Learning, pages 90-92, Ithaca, NY, June 1989.
[Stickel, 1988] M. E. Stickel. A Prolog-like inference system for computing minimum-cost abductive explanations in natural-language interpretation. Technical Note 451, SRI International, Menlo Park, CA, September 1988.
[Towell et al., 1990] G. G. Towell, J. W. Shavlik, and Michiel O. Noordewier. Refinement of approximate domain theories by knowledge-based artificial neural networks. In Proceedings of the National Conference on Artificial Intelligence, Boston, MA, July 1990.
[Wilkins, 1988] D. C. Wilkins. Knowledge base refinement using apprenticeship learning techniques. In Proceedings of the National Conference on Artificial Intelligence, pages 646-651, St. Paul, MN, August 1988.
[Winston et al., 1983] P. H. Winston, T. O. Binford, B. Katz, and M. Lowry. Learning physical descriptions from functional definitions, examples, and precedents. In Proceedings of the National Conference on Artificial Intelligence, pages 433-439, Washington, D.C., August 1983.
Inductive Learning in a Probabilistic Domain

Yoichiro Nakakuki, Yoshiyuki Koseki, Midori Tanaka
C&C Systems Research Laboratories, NEC Corp.
4-1-1 Miyazaki, Miyamae-ku, Kawasaki 213, Japan
nakakuki%asl.cl.nec.co.jp@uunet.UU.NET

Abstract

This paper describes an inductive learning method in a probabilistic domain. It acquires an appropriate probabilistic model from a small amount of observation data. In order to derive an appropriate probabilistic model, a presumption tree with least description length is constructed. Description length of a presumption tree is defined as the sum of its code length and log-likelihood. Using a constructed presumption tree, the probabilistic distribution of future events can be presumed appropriately from observations of occurrences in the past. This capability improves the efficiency of certain kinds of performance systems, such as diagnostic systems, that deal with probabilistic problems. The experimental results show that a model-based diagnostic system performs efficiently by making good use of the learning mechanism. In comparison with a simple probability estimation method, it is shown that the proposed approach requires fewer observations to acquire an appropriate probabilistic model.

1 Introduction

In recent years, there has been a growing amount of research on inductive learning. Most of this work has focused on deterministic domains, rather than uncertain ones. In a deterministic domain, we can induce deterministic classification rules, such as a decision tree, from given examples. On the other hand, in a probabilistic domain, we can only presume a probabilistic distribution (a probabilistic model) based on the observed occurrence of events. Although several approaches [Mingers, 1989; Quinlan, 1986; Yamanishi, 1989] deal with uncertainty, they are concerned with predicting a class, not the probabilistic distribution. The objective of this paper is to develop a mechanism to induce a probabilistic distribution in such a domain.
In general, a performance system which deals with probabilistic problems, such as malfunctions of a device, must incorporate experiential knowledge about the probabilistic distribution. For example, a model-based diagnostic system without any heuristics requires many tests to pinpoint a failing component. Therefore, some heuristics on the probability, such as 'most-probable-first' heuristics, are indispensable [de Kleer and Williams, 1987; de Kleer and Williams, 1989; Koseki, 1989].

However, it is not easy to induce an appropriate probabilistic model from observed data. Especially if the number of observed data is small, quite different models may become candidates for the selection. Therefore, an appropriate criterion for the model selection is indispensable.

In this paper we introduce a presumption tree to describe a probabilistic model. Using a presumption tree, we can presume the probability of each event. To obtain the most appropriate presumption tree for given observation data, the minimum description length (MDL) criterion [Rissanen, 1978; Rissanen, 1987] is employed. Description length is defined as the sum of the code length and log-likelihood for a model. Here, both values are calculated in bits.

To examine the effectiveness of the proposed approach, we incorporated the proposed learning mechanism into a model-based diagnostic system. The system accomplishes several tests to narrow down a list of suspected components. In this process, fault probabilities for the suspects are used to select an effective test. The proposed technique is used to presume the probabilities, based on the history of malfunctions for the objective device. The experimental results show that an appropriate model can be derived from a small amount of training data.

The next section describes the inductive learning problem in a probabilistic domain. In Section 3, we introduce the definition of a presumption tree.
A criterion for model selection is given in Section 4. A method to utilize the proposed learning mechanism in a performance system is discussed in Section 5. Experimental results are shown in Section 6.

NAKAKUKI ET AL. 809
From: AAAI-90 Proceedings. Copyright ©1990, AAAI (www.aaai.org). All rights reserved.

2 Learning in a probabilistic domain

In a probabilistic domain, it is assumed that each individual event occurs according to a certain probabilistic distribution. Moreover, it is also assumed that only a few event occurrences can be observed. Therefore, it is necessary to acquire an appropriate probabilistic model based on the observed data in order to estimate future events.

As an example of such a domain, we consider malfunctions of a device. As shown in Table 2-1, the device is composed of 16 components, where each component has two kinds of attributes, i.e., its component type (a through h) and its age (old or new). In this example, malfunctions were observed 32 times.

Table 2-1 Example (the type and age attributes and the number of observed malfunctions for each of the 16 components)

First, we pay attention to the component type. Let the fault probability for a component of type x be p(x). As shown in Table 2-2, it seems that p(b) is very high and p(g) is also higher than the others. However, it may be dangerous to estimate that "p(a) is higher than p(d)" or "p(c) is about twice as large as p(a)". The reason is that only a few data are given, and it is possible that the observed events happened by chance while there is actually little difference among these probabilities.

Table 2-2 Fault frequency for each type (the number of components and the number of observations for each of the types a through h)

On the other hand, regarding the age attribute, old components broke down 17 times, and new ones broke down 15 times.
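Tallying fault totals per attribute value, as in Tables 2-1 and 2-2, is a simple group-by computation. The sketch below uses illustrative records in the style of Table 2-1 (the counts and the `tally` helper are ours, not the paper's exact data or code):

```python
from collections import Counter

# Illustrative component records: each component has a type and an age
# attribute plus an observed malfunction count. The numbers are made up
# for illustration, not the paper's exact table.
components = [
    {"type": "b", "age": "old", "obs": 13},
    {"type": "b", "age": "new", "obs": 9},
    {"type": "g", "age": "old", "obs": 4},
    {"type": "g", "age": "new", "obs": 1},
    {"type": "a", "age": "old", "obs": 1},
    {"type": "c", "age": "new", "obs": 2},
    {"type": "d", "age": "old", "obs": 0},
    {"type": "h", "age": "new", "obs": 2},
]

def tally(records, attribute):
    """Total observed malfunctions per value of one attribute."""
    totals = Counter()
    for r in records:
        totals[r[attribute]] += r["obs"]
    return totals

by_type = tally(components, "type")  # type b dominates, g is second
by_age = tally(components, "age")    # old vs. new totals are much closer
```

Comparing `by_type` and `by_age` reproduces the kind of judgment made in the text: the type attribute separates the counts sharply, while the age attribute barely does.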
Therefore, the information about age is less helpful than the component type for presuming the fault probability of each component. However, if the component type is g, then the age factor may be important. Hence, in the process of presuming the probabilities of future events, it is important to choose helpful attributes and/or their combinations.

2.1 Presumption problem

This section presents the presumption problem. Consider a set of events X = {x1, x2, ..., xm} and attributes a1, a2, ..., an. Here, we assume that the events are exhaustive and mutually exclusive, and that the domain for each attribute aj (j = 1, 2, ..., n) is a finite set Dom(aj). As shown in Table 2-3, for each event xi, a value vij (∈ Dom(aj)) for each attribute aj is given. Also, ni, the number of observations of xi, is given.

Table 2-3 Table of events

The problem is to presume the probability p̂i of each event xi (i = 1, 2, ..., m) from the numbers of observations ni. However, this task is not easy. Consider two distinct events, xi and xj, such that ni ≠ nj. Here, it must be concluded that either p̂i = p̂j or p̂i ≠ p̂j. If the numbers of observations ni and nj are evidently different, it can be concluded that p̂i ≠ p̂j. However, if there is little difference between ni and nj, it may not be concluded that p̂i ≠ p̂j, because the difference may be due to expected random variation. Here, we consider the following two extreme decision strategies:

(a) Only if ni = nj, conclude that p̂i = p̂j; otherwise p̂i ≠ p̂j.
(b) Only if ni and nj are extremely different, conclude that p̂i ≠ p̂j; otherwise p̂i = p̂j.

Although strategy (a) leads to a more precise model, it is very sensitive; that is, it tends toward over-specialization. On the other hand, strategy (b) is insensitive and tends toward over-generalization. Consequently, plausible probabilities cannot be derived by using these extreme strategies.
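The two extreme strategies can be stated in a few lines; the sketch below (our illustration, with hypothetical counts) shows them in their limiting forms:

```python
# The two extreme estimation strategies on a toy event table.
# counts maps each event to its number of observations n_i (hypothetical).
counts = {"x1": 17, "x2": 1, "x3": 2}
n = sum(counts.values())
m = len(counts)

# Strategy (a): a distinct probability for every distinct count,
# p_i = n_i / n. Over-specializes when counts differ only by chance.
strategy_a = {x: n_i / n for x, n_i in counts.items()}

# Strategy (b): treat events as equal unless counts are extremely
# different; in the limit, one uniform probability p_i = 1 / m.
# Over-generalizes away real differences.
strategy_b = {x: 1 / m for x in counts}
```

Neither extreme is satisfactory: strategy (a) turns the one-observation gap between x2 and x3 into distinct probabilities, while strategy (b) erases even the large gap between x1 and the rest.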
Moreover, if the number of observations is small, quite different probabilistic models may become candidates for selection. Therefore, a criterion to select the most plausible probabilistic model is necessary. In the following sections, we introduce a method to resolve this problem.

810 MACHINE LEARNING

3 Presumption tree

In this section, a presumption tree is introduced to express a probabilistic model. Using a presumption tree, all the events are classified into several groups. Each event xi in a group is assumed to have the same probability p̂i of occurrence. Therefore, the probabilities for individual events can be derived from a presumption tree. The details are described below.

Fig. 3-1 Presumption tree (branching nodes labeled with attributes; leaves corresponding to groups G1, ..., G4)

As shown in Fig. 3-1, a presumption tree consists of several branching nodes and leaves. An attribute aj corresponds to each branching node, and a subset Ajk of Dom(aj) corresponds to each branch from the branching node. Each Ajk must satisfy the following conditions:

Ajk ⊆ Dom(aj)  (subset)
Ajk ∩ Ajl = ∅ (k ≠ l)  (disjoint)
∪k Ajk = Dom(aj)  (exhaustive)

A presumption tree is used to classify all the events into several groups. A group Gl of events corresponds to each leaf l. For example, for the presumption tree shown in Fig. 3-1, the group Gl for each leaf l (l = 1, 2, 3, 4) is as follows:

G1 = {xi | vij1 ∈ Aj1,1 ∧ vij2 ∈ Aj2,1}
G2 = {xi | vij1 ∈ Aj1,1 ∧ vij2 ∈ Aj2,2}
G3 = {xi | vij1 ∈ Aj1,2}
G4 = {xi | vij1 ∈ Aj1,3}

A presumption tree can be regarded as a description of a probabilistic model by assuming that all the events xi in a group Gl have the same probability p̂i:

p̂i = (1 / |Gl|) · (Ol / Σk Ok)   (xi ∈ Gl)

Here, Ol denotes the total number of observations for events in Gl. For example, consider events x1, x2, x3 as shown in Fig. 3-2.

Fig. 3-2 Example (event x1 has attribute value a1 = X and 17 observations; x2 has a1 = Y and 1 observation; x3 has a1 = Y and 2 observations)

Fig. 3-3 Example presumption tree (a single branching node on attribute a1, with branches {X} and {Y})

Figure 3-3 shows an example of a presumption tree for these events. It indicates that the events are classified into two groups, G1 and G2, such that

G1 = {xi | vi1 ∈ {X}} = {x1}
G2 = {xi | vi1 ∈ {Y}} = {x2, x3}.

The probability p̂i can then be estimated for each event xi:

p̂1 = (1/1) · (17 / (17 + 3)) = 17/20
p̂2 = p̂3 = (1/2) · (3 / (17 + 3)) = 3/40

In the following section, we introduce a criterion for selecting the presumption tree that describes the most appropriate model according to the observed data.

4 Model selection with the MDL criterion

As a criterion for the selection, we adopted the minimum description length (MDL) criterion [Rissanen, 1978; Rissanen, 1987; Rissanen, 1986]. Rissanen argued that the least-description-length model is expected to be better for presuming future events than any other model. Here, the description length for a model is defined as the sum of:

(1) Code length of the model.
(2) Code length of the data w.r.t. the model.

That is, the sum of the model complexity and the model's fitness for the observed data. The MDL principle has been used to induce classification rules, such as a decision tree [Quinlan and Rivest, 1989] or a decision list [Yamanishi, 1989]. In our approach, the MDL criterion is adopted to select the most appropriate presumption tree (i.e., the most plausible probabilistic distribution). We define the description length of a presumption tree as the sum of:

(1) Code length of the tree.
(2) Log-likelihood of the data given the tree.

The log-likelihood function is frequently used to measure the distance (fitness) between a model and observed data. Both the code length (1) and the log-likelihood (2) are measured in bits.

Since the calculation of the code length of a tree is very complicated, we restrict the shape of the tree. Although the selected model may not be the optimal one, it seems near optimal in most cases. The restriction is as follows. For a branching node with l branches, let the corresponding attribute be ai. Then Aij (j = 1, 2, ..., l − 1) must be a singleton set, and Ail is Dom(ai) − ∪j Aij. Under this restriction, the code length (model complexity) L1 of a presumption tree is as follows (see the appendix for the proof):

L1 = Σx∈P { log(n − dx) + log kx + log C(kx, lx − 1) } + Σx∈Q (1/2) log Ox + (|P| + |Q|)

Here, P is the set of all branching nodes and Q is the set of all leaves. For each branching node x, lx is the number of branches, dx is the depth of the node, and kx = |Dom(ai)| (ai being the attribute corresponding to node x).

On the other hand, the log-likelihood (model fitness) L2 is defined as follows, where ni is the number of observations of each event xi, and n = Σ ni:

L2 = −Σi=1..m ni (log p̂i − log(ni/n))

An example of the calculation is given below. Fig. 4-1 shows three example presumption trees for the device malfunction example in Section 2. Tree A is a trivial one; it estimates the probabilities of all the events to be equal, i.e., p̂i = 1/16 (i = 1, 2, ..., 16). Tree B classifies the events into 4 groups (i.e., type b; type g and old; type g and new; and the others), and tree C classifies the events into 16 groups.

Fig. 4-1 Example presumption trees

The description length for each tree is shown in Table 4-1. Model A has the shortest code length (L1), but its data description length (L2) is large. Model C is just the opposite. Model B has the least total description length (L1 + L2). Therefore, utilizing the MDL criterion, model B is the most appropriate one among the three. This result agrees with our intuition.

Table 4-1 Individual description lengths (bits)
Model  L1    L2    Total (L1 + L2)
A      3.5   56.2  59.7
B      16.2  6.8   23.0
C      49.6  0.0   49.6

5 Application to performance systems

An induced probabilistic model can be used to improve the efficiency of certain kinds of performance systems. In several performance systems, the expected computation costs can be estimated by using information about the probabilistic distribution of events.
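The estimate p̂i and the log-likelihood term L2 defined above can be sketched in a few lines. The following is a minimal illustration using the three-event example of Figs. 3-2 and 3-3 (function names are ours, not the paper's):

```python
import math

def presumption_probs(counts, groups):
    """p_i = (1/|G_l|) * (O_l / sum_k O_k) for each event x_i in group G_l."""
    n = sum(counts.values())
    probs = {}
    for group in groups:
        o = sum(counts[x] for x in group)  # O_l for this leaf
        for x in group:
            probs[x] = o / (len(group) * n)
    return probs

def log_likelihood_length(counts, probs):
    """L2 = -sum_i n_i (log2 p_i - log2(n_i / n)), in bits."""
    n = sum(counts.values())
    return -sum(n_i * (math.log2(probs[x]) - math.log2(n_i / n))
                for x, n_i in counts.items() if n_i > 0)

counts = {"x1": 17, "x2": 1, "x3": 2}
grouped = presumption_probs(counts, [["x1"], ["x2", "x3"]])      # Fig. 3-3 tree
saturated = presumption_probs(counts, [["x1"], ["x2"], ["x3"]])  # one leaf each
```

For the grouped tree this gives p̂1 = 17/20 and p̂2 = p̂3 = 3/40, as in the worked example. The fully specialized grouping reproduces the empirical frequencies ni/n, so its L2 is zero, just as for model C in Table 4-1; the grouped tree pays a small positive L2 in exchange for a shorter tree code.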
Therefore, if the most appropriate probabilistic model is derived, it is possible to select a computation strategy with the minimum expected cost.

For example, by using a probabilistic distribution, a model-based diagnostic system can estimate the expected information gain of each possible test, and can select the most appropriate one [de Kleer and Williams, 1987; de Kleer and Williams, 1989]. de Kleer and Williams introduced the minimum-entropy technique, where entropy is calculated from the fault probability of each suspected component and is used to evaluate the test to be carried out next. That is, the system calculates the expected entropy gain (information gain) for each possible test, and selects the most appropriate one.

However, if the presumed fault probability distribution is quite different from the real one, the calculation of the expected information gain is meaningless. Therefore, we must acquire a precise probabilistic distribution. For example, consider a communication system which consists of 100 modems (m1, m2, ..., m100), 100 terminals (t1, t2, ..., t100), and a bus (b1). Suppose malfunctions in the system were observed 10 times, and all of the faulty components were distinct modems, say mi1, mi2, ..., mi10. By a simple estimation, the fault probability for each of mi1, mi2, ..., mi10 is 1/10, and that for the other 191 components is 0. However, intuitively, it is natural to estimate that all the modems have higher fault probabilities than the other components. By using the proposed technique, such a model can be derived. The difference between the two estimations mentioned above is considered to affect the performance of the diagnosis. The details of the experimental results are shown in the next section.

Another application is to utilize deductively acquired knowledge.
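The minimum-entropy test selection described above can be sketched as follows. This is a schematic formulation of the idea from de Kleer and Williams, with hypothetical names and data, not the authors' implementation:

```python
import math

def entropy(ps):
    """Shannon entropy, in bits, of a probability vector."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_entropy_after(fault_probs, outcome_of):
    """fault_probs: suspect -> fault probability (assumed to sum to 1).
    outcome_of: suspect -> the outcome the test would show if that
    suspect were the faulty one. Returns the expected posterior entropy
    over the suspects after the test; a good test minimizes this."""
    groups = {}
    for c, p in fault_probs.items():
        groups.setdefault(outcome_of[c], []).append(p)
    expected = 0.0
    for ps in groups.values():
        mass = sum(ps)  # probability of seeing this outcome
        expected += mass * entropy([p / mass for p in ps])
    return expected

# Four equally likely suspects: a test splitting them 2/2 is expected to
# leave less uncertainty than one splitting them 1/3.
probs = {"c1": 0.25, "c2": 0.25, "c3": 0.25, "c4": 0.25}
even_split = {"c1": "ok", "c2": "ok", "c3": "bad", "c4": "bad"}
skewed_split = {"c1": "ok", "c2": "bad", "c3": "bad", "c4": "bad"}
```

Note that the ranking of tests depends entirely on `fault_probs`; this is why a badly presumed distribution makes the expected-gain calculation meaningless.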
Although a deductive learning mechanism such as EBL [Mitchell and Keller, 1986; DeJong and Mooney, 1986] or chunking [Rosenbloom and Newell, 1982] can be used to acquire knowledge, it does not always improve the system performance. The strategy used to apply the acquired knowledge greatly affects the performance [Minton, 1988; Greiner and Likuski, 1989]. Therefore, to acquire an appropriate strategy, it is indispensable to presume future events based on experience. Our approach is considered to be effective for such applications.

6 Experimental results

To examine the effectiveness of the proposed approach, the learning mechanism was incorporated into a model-based diagnostic system. The system performs the following procedures repeatedly until a faulty component is pinpointed.

1. Analyze the symptom and/or the test results by using the model-based knowledge, and then create a list of suspected components.
2. Select and perform the most appropriate test to narrow down the suspected components.

In step 2, the estimated effectiveness of each possible test (i.e., the expected information gain) is calculated by using the fault probability of each suspected component.

In the experiments, the fault probability for each component is estimated in two ways, i.e., by the proposed method and by a simple estimation method. For each derived probabilistic distribution, the average system performance is examined. The details are described below.

The objective is to diagnose a communication system as discussed in the previous section. Here, assume that the components are classified into three groups as shown in Table 6-1. We also assume that the fault probabilities for groups 1, 2, 3 are 0.33, 0.66, 0.01, respectively, and that each component in a group has the same probability (e.g., p(m1) = p(m2) = ··· = p(m50) = 0.66/50 = 0.0132).

Table 6-1 Model of communication system malfunction
Group  Components            Type      Age
1      m1, m2, ..., m50      modem     old
2      m51, m52, ..., m100   modem     new
3      t1, t2, ..., t100     terminal  new
       b1                    bus       new

At first, several faults are artificially generated as training examples, according to the probabilistic distribution shown above. From these examples, the most appropriate probabilistic model (presumption tree) is derived by the proposed mechanism. By using the derived model, the fault probability for each component is presumed. On the other hand, for comparison, we estimated each probability in a simple manner by assuming the probabilities to be proportional to the numbers of observations.

In order to compare these two estimated probabilistic distributions, an additional 100 faults are generated according to the probabilistic distribution. The average numbers of required tests were compared, and the results are shown in Fig. 6-1. The model derived by the proposed mechanism could classify the events into the three correct groups based on only 20 training examples. Therefore, the system performance could be improved with such a small amount of training data, while the simple estimation method requires a great amount of training data to attain an equivalent performance level.

Fig. 6-1 The effect of learning (average number of tests vs. number of training examples, for simple estimation and for estimation with a presumption tree)

7 Conclusion

An inductive learning method for probabilistic domains has been introduced. A presumption tree is adopted to describe a probabilistic model. It is shown that the most plausible model can be derived from a small amount of observation data by using the proposed technique. Also, it is shown that the derived model can be used to presume the probability distribution based on experience, and can control the execution of a performance system to improve its efficiency.

Although the proposed mechanism works well, it searches all possible presumption trees to derive the least description length tree.
Hence, as the number of attributes grows, much computation time would be required. Therefore, it is necessary to develop an algorithm with a more sophisticated search strategy, such as a branch-and-bound technique.

Acknowledgment

The authors would like to express their thanks to Tatsuo Ishiguro, Yoshihiro Nagai and Tomoyuki Fujita for their encouragement in this work. Further, they also thank Kenji Ikoma of the Institute for New Generation Computer Technology.

Appendix

The code length for a presumption tree is calculated in a manner similar to that reported by [Quinlan and Rivest, 1989]. Here, we assume that −log p bits are required to describe information with probability p. The code length is calculated as the sum of the individual code lengths for the following information:

(1) The category of each node.
(2) The corresponding attribute for each branching node.
(3) The corresponding value set for each branch.
(4) The estimated probability for each leaf.

To describe the information about (1), 1 bit per node is required, because the category of a node is either branching node or leaf. Hence, |P| + |Q| bits are required in total. The information about (2) requires Σx∈P log(n − dx) bits, because the attribute is one of n − dx attributes. Next, we calculate the code length for (3). The number of branches lx for a node x can be described in log kx bits. Each Aij (j = 1, 2, ..., lx − 1) is a singleton subset of Dom(ai), and we do not care about their order; therefore the description of these sets requires log C(kx, lx − 1) bits. Hence, the total code length for (3) is:

Σx∈P { log kx + log C(kx, lx − 1) }

Finally, the code length for (4) is Σx∈Q (1/2) log Ox ([Yamanishi, 1989]). Consequently, the total code length is:

L1 = Σx∈P { log(n − dx) + log kx + log C(kx, lx − 1) } + Σx∈Q (1/2) log Ox + (|P| + |Q|)

References

[de Kleer and Williams, 1987] J. de Kleer and B. C. Williams. Diagnosing multiple faults. Artificial Intelligence, 32:97-130, 1987.
[de Kleer and Williams, 1989] J. de Kleer and B. C. Williams. Diagnosis with behavioral modes. Proc. IJCAI-89, 2:1324-1330, 1989.
[DeJong and Mooney, 1986] G. F. DeJong and R. Mooney. Explanation-based learning: An alternative view. Machine Learning, 1(2):145-176, 1986.
[Greiner and Likuski, 1989] R. Greiner and J. Likuski. Incorporating redundant learned rules: A preliminary formal analysis of EBL. Proc. IJCAI-89, 1:744-749, 1989.
[Koseki, 1989] Y. Koseki. Experience learning in model-based diagnostic systems. Proc. IJCAI-89, 2:1356-1361, 1989.
[Mingers, 1989] J. Mingers. An empirical comparison of pruning methods for decision tree induction. Machine Learning, 4:227-243, 1989.
[Minton, 1988] S. Minton. Quantitative results concerning the utility of explanation-based learning. Proc. AAAI-88, 2:564-569, 1988.
[Mitchell and Keller, 1986] T. M. Mitchell and R. M. Keller. Explanation-based generalization: A unifying view. Machine Learning, 1(1):47-80, 1986.
[Quinlan and Rivest, 1989] J. R. Quinlan and R. L. Rivest. Inferring decision trees using the minimum description length principle. Information and Computation, 80(3):227-248, 1989.
[Quinlan, 1986] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[Rissanen, 1978] J. Rissanen. Modeling by shortest data description. Automatica, 14:465-471, 1978.
[Rissanen, 1986] J. Rissanen. Complexity of strings in the class of Markov sources. IEEE Trans. on Information Theory, 32(4):526-531, 1986.
[Rissanen, 1987] J. Rissanen. Stochastic complexity. Journal of the Royal Statistical Society, Series B, 49(3):223-239, 1987.
[Rosenbloom and Newell, 1982] P. S. Rosenbloom and A. Newell. Learning by chunking: Summary of a task and a model. Proc. AAAI-82, pages 255-257, 1982.
[Yamanishi, 1989] K. Yamanishi. Inductive inference and learning criterion of stochastic classification rules with hierarchical parameter structures. Proc. SITA-89, 1989.