| { |
| "paper_id": "J75-4010", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:39:54.073880Z" |
| }, |
| "title": "", |
| "authors": [], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe a semantic processor we are constructing which is i n t e n d e d to be of general applicability. It is designed around semantic operations which work on a s t r u c t u r e d data base of world knowledge to draw the appropriate i n f e r e n c e s and to identify the same entities i n d i f f e r e n t parts of t h e t e x t. The semantic operations capitalize on the high degree of redundancy e x h i b i t e d by all texts. Described are the operations for interpreting higher predicates, f o r d e t e c t i n g some intersententialqrelations, and in particular detail, for f i n d i n g t h e a n t e c e 6 e n t s of definite noun phrases. The processor is applied to the problem of drawing maps from d i r e c t i o n s. We describe a l a t t i c e-l i k e representation intermediate between the linguistic representation of directions and the visual representation of maps. OVERVIEW 1,2 We are trying to c o n s t r u c t a semantic processor of some 7 A This research was supported by the Research Foundation of the City University of New York under F a c u l t y G r a n t No. 11233. The author would like to express h i s indebtedness to H a r r y Elam f o r many insights i n t o the problems discussed here.", |
| "pdf_parse": { |
| "paper_id": "J75-4010", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We describe a semantic processor we are constructing which is i n t e n d e d to be of general applicability. It is designed around semantic operations which work on a s t r u c t u r e d data base of world knowledge to draw the appropriate i n f e r e n c e s and to identify the same entities i n d i f f e r e n t parts of t h e t e x t. The semantic operations capitalize on the high degree of redundancy e x h i b i t e d by all texts. Described are the operations for interpreting higher predicates, f o r d e t e c t i n g some intersententialqrelations, and in particular detail, for f i n d i n g t h e a n t e c e 6 e n t s of definite noun phrases. The processor is applied to the problem of drawing maps from d i r e c t i o n s. We describe a l a t t i c e-l i k e representation intermediate between the linguistic representation of directions and the visual representation of maps. OVERVIEW 1,2 We are trying to c o n s t r u c t a semantic processor of some 7 A This research was supported by the Research Foundation of the City University of New York under F a c u l t y G r a n t No. 11233. The author would like to express h i s indebtedness to H a r r y Elam f o r many insights i n t o the problems discussed here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "generality. We are using as our data base a set of f a c t s involvi n g s p a t i a l terms i n English. To test t h e processor and to s t u d y the interfacing of semantic and task components, we are building a system which takes as i n p u t directions in E n g l i s h of how to get from one place to another and outputs a map, a map such as one might sketch for an unfamiliar region, hearing the directions over the phone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A typical input might be the text \"Upon leaving thi,s building, turn right and follow Washington Street three blocks. Make a left, The l i b r a r y is an t h e r i g h t side of the s t r e e t before the next coxner.\" which refer to t h e same e p t i t y . The text, augmented and i n t e rrelated in t h i s way, is then passed over to the task component, which makes arbitrary decisions when the map requires information not given by the directions and produces the map.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The kwp problems of semantic analysis are to f i n d , o u t of a p o t e n t i a l l y enormous collection of inferences, the appropriate i n f e r e n c e s , and t o f i n d them q u i c k l y . Our s o l u t i o n t o t h e first is i n our semantic o p e r a t i o n s described below. Our approach t o the second problem is in the organization of the data base.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ORGANIZATION OF TEXT AND WORLD KNOWLEDGE", |
| "sec_num": null |
| }, |
| { |
| "text": "The d a t a i n the semantic coptponent is of two sorts:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ORGANIZATION OF TEXT AND WORLD KNOWLEDGE", |
| "sec_num": null |
| }, |
| { |
| "text": "1. The Text: the information which is explicitly in t h e t e x t , I n the course o f semantic processing t h i s is augmented by i n f o r m a t i o n which is o n l y implicit i n the text. The text consists of the set of entities X1,X2, ..., e x p l i c i t l y and i m p l i c i t l y referred to in the text, and s t r u c t u r e s of $he form p (X1,X2) representing the statements m#de or implied about t h e s e e n t i t i e s , e . g . walk (XI) = X1 walks, building (XZ) = X is a building, 2 door ( X 3 , X2) = X is a &or of X2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ORGANIZATION OF TEXT AND WORLD KNOWLEDGE", |
| "sec_num": null |
| }, |
| { |
| "text": "The World Knowledge or the Lexicon: the system's knowledge of words and the world. Words are the boundary between the Text and the LexPcon. A word is viewed a s a key indexing a l a r g e body of facts (Holzman, 1 9 7 1 ) . occurs i n t h e Text and the semantic operations determine a particular inference appropriate, its enabling conditions are checked. If they hold, t h e conclusions are instantiated by c r e a t i n g a copy of them in t h e Text with the lexical variables r e p l a c e d by Text entities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ".", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Clusters. One way td state the \"frames\" problem (Minsky 1974) is \"How should the data base be organized to guide, confine, and make e f f i c i e n t t h e searches which the semantic o p e r a t i o n s require?\" W e approach this by dividing the sets of inferences i n t o clusters according to topic and salience in the particular application. In the searches, the clusters are probed in order of their salience. In our application, the top-level cluster concerns the one-dimensional aspects of objects and actions. For example, the fact about a block that it is the distance between two intersections i s in the cluster. If \"around the block\" is encountered, less salient clusters will have to be accessed to f i n d i n f o r m a t i o ,~ about the two-dimensional nature of blocks, The mast important fact about an apartment building is that it is a building, to be represented by a square on the map. But if the d i r e c t i o n s take us inside the building, up the elevator, and along the hallway, the cluster of facts about the interiors of buildings must be accessed, A self-organizing list (Knath 1973) of the clusters is maintained--when a fact in a cluster i s used, it becqmes t h e toplevel cluster--on the ,assumption that t h e t e x t will continue to talk about the same thing.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 61, |
| "text": "(Minsky 1974)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Associated with each word are a number of", |
| "sec_num": null |
| }, |
| { |
| "text": "The ''<Truth S t a t u s \" of Inferences. In natural language, unlike mathematics, one is n o t always free to draw c e r t a i n inferehces. We t a g our i n f e r e n c e s always, normally, o r sometimes. These notions are d e f i n e d o p e r a t i o n a l l y . An a l w a y s i n f e r e n c e i s one we are always f r e e t o draw, such as that a street i s a p a t h through space. A n o r m a l l y i n f e r e n c e i s o n e w e c a n draw if it is I n any p a r t i c u l a r t y p e of t e x t there are scales o r t r a n s i t i v e relations which are important enough t o deserve a more economical r e p r e d e n t a t i o n than predicate n o t a t i o n . I n this particulak task, the i m p o r t a n t scales are a distance scale, a s u b s c a l e of t h b i s indicating t h e p a t h \"you\" $ill travel, and a scale representing angular orientation. This is the principal information used in constructing the map. For these scales w e t r a n s l a t e i n t o a directed graph o r l a t t i c e -l i k e representation (Hobbs 1 9 7 4 ) . and ' ' p l e a s a n t \" a11 a p p l y t o \"walk\", b u t t h e y narrow i n on d i f f e re n t aspects o f walking. T h a t i s , each demands t h a t a d i f f e r e n t inference be drawn from t h e s t a t e m e n t t h a t \"X walks\". \"Out\" and \"slow\" demand t h e i r arguments be motion from one place t o", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Associated with each word are a number of", |
| "sec_num": null |
| }, |
| { |
| "text": "another., f o r c i n g us t o infe'r f r o m \" X walks'' t h a t \"X goes from A \"Pleasant\", on the other hand, r e q u i r e s i t s argument t o be an awareness, so we must i n f e r from \"X walks\" t h a t \"X engages i n a However this r e q u i r e s that w e t a k e very seriously m y suggestion in Hobbs (1974) t h a t the lexicon for the entire language be built, insofar as possible, along the lines of a s p a t i a l metaphor. We have n o t yet had to f a c e these problems since our only scales are p h y s i c a l -our \" a t \" and \"on\" are the locative \" a t \" and \"on\". \"The walk was t i r i n g \" . Here we look back for a statement whose predicate is \"walk\" or from which a statement involving \"walkn can be i n f e r r e d . There a r e cases in which the required i n f e r e n c e is in f a c t a summary o f an entire paragraph--e.g.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Some of the things", |
| "sec_num": null |
| }, |
| { |
| "text": "\"These actions surprised. , . \"--although of course we cannot handle these cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Some of the things", |
| "sec_num": null |
| }, |
| { |
| "text": "for consistency. Suppose X1 is the definite entity which prompted the search and its properties are and X2 is the proposed antecedent with properties", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "We must cycle through the q ' s and the r ' s to ensure they are consistent properties. Of course, to prove t w o properties q(X) and r(X) inconsistent can be an indefinitely long process with no assurance of termination. One admittedly ad hoc way we get around this is by placing into a special c l u s t e r those f a c t s we f e e l are likely to lead quickly to a contradiction. The second tool we use f o r deriving inconsistencies may t u r n out to be q u i t e significant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "In the course of processing, the lattice described abave is constructed for several predicates. They c o n t a i n i n f o r m a t i o n which can be useful i n deriving a n inconsistency. Suppose we have a t e x t in which \"the block\" occurs explicitly several times.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "Toward the end of it, we encounter The search algorithm looks first for explicit mentions of \"blockl\" and finds them. Yet none of these entities is the one we want.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "Intuitively, the reason we know this is our almost visual feeling that we are already beyond those points. To do this, we appeal to the Principle of Knitting again and make the choice that will maximize the redundancy in the simplest begin with a g e n e r a l grammar and specialize it, by weeding out the rules for constructions that don't occur in the texts one is dealing with, and by adding a few rules f o r constructions and constraints peculiar to orre's application.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "We are trying to make a similar facility available for the most common kinds of semantic processing. Specializing the general semantic component would consist of several relatively easy steps. First the Lexicon would be organized into a cluster structure appropriate to the task. At worst, this would mean specifying the necessary knowledge in a fairly simple format.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "If a very large Lexicon were available, this could mean no more", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistencv. Each of the plausible antecedents is checked", |
| "sec_num": null |
| }, |
| { |
| "text": "t h e p r o p e r t i e s p1 (X) , p Z (X), ..., are known about the d e f i n i t e entity X, the definitions o f p1,p2, ..., are probed f o r the f a c t that the entity does not normally occur in the plural. Included under this heading are proper names beginning with \"the\", like", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "than designating for each fact the cluster it should appear i n . C e r t a i n inferences could be made obligatory while others which are irrelevant t o the t a s k c o u l d be l e f t out of the s p e c i a l Lexicon altogether. Second a Task Component would be built which would take, as ours does, the semantically processed Text, and use it t o perform t h e task. W e are demonstrating the usefulness of this approach in performing a task i n ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": {}, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "The o u t p u t would be t h e map I L i b r a r y I To bypass syntactic problems, we are u s i n g a s our input the o u t p u t of t h e Linguistic String Project's transformational prou c t u r e d data base of world knowledge to draw the appropriate N inferences and to identify phrases in different p a r t s of the t e x t" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "f a c t s or i n f e r e n c e s which can be drawn from the occurrknce of p(X1, ..., X , ) in the Text. The facts are expressed in terms of p ' s s e t of parameters Y l f ,Ykt and a s e t of other l e x i c a l variables z l , . . , , z m' stanaing for entities whose existence i s also implied. A fact consists of enabling c o n d i t i o n s and conclusions. When p ( X 1 , ... X,)" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "not explicitly c o n t r a d i c t e d e l s e w h e r e , s u c h as that b u i l d i n g s have windows. A sometimes inference may be drawn i f r e i n f o r c e d elsewhere, such as the f a c t used below t h a t a b u i l d i n g i s by a street. T h i s c l a s s i f i c a t i o n of i n f e r e n c e s c u t s across t h e cluster structure of the Lexicon. Lattices. A large number of statements i n any natural language t e x t , especially t h e texts this system analyzes, involve a transitive relation, or e q u i v a l e n t l y , say something about an underlying scale. For example, the word \"walk\" i n d i c a t e s a change of location along a p a t h through space, o r a distancescale; \" t u r n \" indicates a change a l o n g a scale of a n g u l a r orie,n-t a t i o n ." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "which can be said about t h e structure of a scale are mat some p o i n t i s on t h e scale, t h a t of t w o p o i n t s on the scale one is closer t o t h e positive end tHan t h e o t h e r , and t h a t a scale i s a part of another s c a l e . I f a p o i n t B i s closer to t h e positive end of the s c a l e than p o i n t A , mean& the scale from C to D is part of the scale from A to B, It is possible to represent incompleteness of information. For example, if it i s known that p o i n t s A and B both lie in a region R of a scale b u t their r e l a t i v e positions are n o t known and if it is known about C o n l y thati,tprecedes B t h i s i s represented by The lattice f o r the d i s t a n c e scale for t e x t (1) is as follows: The lattices are intermediate between the linguistic repres e n t a t i o n of the directions and t h e v i s u a l representation of the maps. They are used at several p o i n t s in the semantic and t a s k processes. They can be constructed f o r any transitive relation, and could be very u s e f u l , f o r example, in representing causal and enabling r e l a t i o n s in a system translating descriptions of algo-Semantic Analysis. We bedieve the key to t=he first problem of semantic a n a l y s i s , that of finding which inferences are appropriate, i s J o o s ' Semantic A x i o m N u m b e r One (Joos 1972), or what I w i l l call the Principle o f knitting. Restated, this is, \"The important facts in a text w i l l be repeated, explicitly or implicity.\" That is, we capitalize on the very high degree of redundancy that characterizes a11 texts. Consiifer, for example, the simple sentenced \"Walk out the door of this building.\" \"Walk\" implies motion from o n e pLace to another. \"Out\" implies motion from inside something to the o u t s i d e . \"Door\" i s something which permits motion from inside something to the outside or from the outside to the inside, or if closed, prevents this motion. 
\"Building\" is something whose, purpose is for p e o p l e to be in. Thus, all four c o n t e n t words of t h e s e n t e n c e repeatedly key the same facts. Those i n f e r e n c e s which should be drawn are those which are keyed by more than one element in t h e text. T h i s p r i n c i p l e i s used both formally and informally by the semantic operations. It is used formally in the interpretation. of higher predicates and in finding antecedents. It is used more informally for deciding among competing p l a u s i b l e a n t e c e d e n t s , resolving ambiguities, d e t e c t i n g intersentential relations, and knitting the text together in some minimal way. Here it isd p r i m a r i l y t h e f o r m a l uses that w i l l be d e s c r i b e d .X n t e r p r e t a t i o n . o f Higher P r e d i c a t e s . I n \"walk o u t \" , \"walk s l w o l y \" , and \"pleasant w a l k \" , t h e h i g h e r p r e d i c a t e s \"out\", \" s l o w \"" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "t o B \" . \"Out\" then adds i n f o r m a t i o n a b o u t t h e l o c a t i o n s s f A and B, w h i l e \"slow\" says something a b o u t t h e speed of t h i s motion." |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "b o d i l y a c t i v i t y he i s aware o f \" .Stored i n t h e Lexicon w i t h each h i g h e r predicate i s t h ei n f e r e n c e which must be drawn from i t s argument and t h e informa-11 t i o n it adds t o t h i s i n f e r e n c e . F o r example, g o ( z l , z 2 , z 3 ) \" must be inferred from t h e argument of \" o u t \" . When t h e s t a t e m e n t \"out(waDk(X1))\" i s e n c o u n t e r e d i n t h e Text, t h e higher predicate o p e r a t i o n makes e f f o r t s t o f i n d a proof of 1 1 g o ( z l ,~1 ,~3 ) I1 from \" w a l k ( X L ) \" . The search for t h i s i n f e r e n c e i s s i m i l a r t d t h e search p r o c e d u r e described below f o r f i n d i n g antecefienes. T h e f a c t s in t h e resulting c h a i n o f inference are i n s t a n t i a t e d t o g e t h e r w i t h t h e i n f o r m a t i o n added by the h i g h e r p r e d i c a t e , and t h e y are s u b s e q u e n t l y treated as though p a r t ofthe e x p l i c i t Text.I t i s u s u a l for them t o be u s e f u l in f u r t h e r p r o c e s s i n g , u n l e s s t h e m o d i f i e r i s simply g r a t u i t o u s i n f o r m a t i o n .Note t h a t t h i s o p e r a t i o n a l l o w s c o n s i d e r a b l e compression i n the number of senses that must be s t o r e d for each word* It ellows us, f o r example, to define \"slow\" a s something like \" F i n d t h e most salient associated motion. Find t h e most specific speed S c a l e for the object X of this motion. X ' s speed i s on t h e lower end of t h i s scale\". This definition is adequate for s u c h phrases as \"walk slowlyn (the most salient motion is the forward motion of t h e w a l k i n g ) , \"slow race\" [the forward motion of the competitors), \"slow horsew (its running at f u l l speed, usually in a race), and \"slow personw. 
This last case is highly dependent on context, and could mean t h e person's physical acts in general, h i s mental processes, o r t h e act h e is engaged in at the moment.This operation has a default f e a t u r e , If a proof of t h e required inference can't be found, it is assumed anyway. This allows a t e x t to be understood even if all the words aren't known. Suppose, for example, \"veer rightw is encountered, and the word \"veern isn't known, i . e . no inferences can be drawn f r o m it. Since \"rightn requires a change i n angular o r i e n t a t i o n a s its argument, i t i s assumed this is w h a t \"veer\" means. Only the information that the change i s small is lost.FIND ANTECEDENTS OFDEFINITE NOUN PHRASES ~n t i t i e s referred to in a text may be arranged in a h i e r a r c h y according to t h e i r degree of specification: So f a r our work h a s concerned p r i m a r i l y definite noun p h r a s e s , but it i s expected that many f e a t u r e s of t h e d e f i n i t e noun phrase algorithm w i l l carry over t o other cases, The d e f i n i t e noun phrase a l g o r i t h m consists of f o u r steps. First, \"uniquent2~s c o n d i t i o n s n are checked t o determine whether an antecedent i s r e q u i r e d . If so, t h e Text and Lexicon are searched for p l a u s i b l e a n t e c e a e n t s . Third, c o n s i s t e n c y checks are made on these. F i n a l l y i f more t h a n one p l a u s i b l e a n t e c e d e n t remains t h e Principle of K n i t t i n g i s a p p l i e d t o decide between them. Vniqueness C o n d i t i o n s , I n t h e phrase \"the end of the block\", w e know we must look back i n the t e x t f o r an e x p l i c i t l y o r i m p l icitly mentioned \"block\" ( t h e search case), b u t w e do fiat neqess a r i l y look for a previously meptioned \"end\" (the no-search case) . 
Given a d e f i n i t e noun phrase t h e a l g o r i t h m first tries t o determine whether it b e l o n g s t o t h e search or no-search case. This i s done by checking two broad c r i t e r i a . (These criteria were motivated by a large number of examples n o t o n l y from s e t s of directions but a l s o from t e c h n i c a l and news a r t i c l e s , ) These criteria are checked by s e a r c h i n g t h e Lexicon for c e r t a i n f e a t u r e s . However these searches are generally v e r y shallow, i n c o n t r a s t t o the p o t e n t i a l l y much deeper searches in the riext s t e p of the algorithm. S i n c s by far the majority of d e f i n i t e noun p h r a s e s are i n t h e no-search case, checking uniquen e s s c o n d i t i o n s can r e s u l t i n g r e a t s a v i n g s . A caveat is in order. W e state t h e c r i t e r i a at a very high level of abstraction, We feel i n f a c t t h a t t h e a l g o r i t h m can work at that level of abstraction if the ex icon is p r o p e r l y constructed. But how to construct a large exi icon properly is a problem we have not yet tackled in detail. In any event, we g i v e examples f o r each case, and t h e examples themselves form a reasonably exhaustive classification. 1. A d e f i n i t e entity is in the no-search case i f it can be located precisely w i t h respect to some framework. n his includes me following conditions. a. Objects which are located with r e s p e c t to some identif i e d p o i n t in space: \"the building on the corner\". b, Plurals and mass nouns which are restricted to some identified region sf space: \"the trees in the park\", \" t h e water in the swimming pool\". 
Here \"the\" indicates a l l s u c h in which at least some of the participants are identified and which can be recognized as occurring at a specific time: nthe ride you took through the park yesterday1'; e , P o i n t s or intervqls on more abstract scales: \"the e n d of the block\", \"the size o f t h e b u i l d i n g \" . The end is a specific p o i n t on the distance scale defined by the block. The size of the building is a specific point on the general s i z e scale f o r objects , i . e . the volume scale.f. Superlatives, ordinals, and r e l a t e d terms: \" t h e largest house on t h e block\", \" t h e second house on the block\", \" t h e only house on the block\". If the set of comparison is identified, the superlative or ordinal indicates the scale oE comparison and the place on that scale of t h e e n t i t y it describes. This is a subcase of (e) .A l l of these c o n d i t i o n s c a n be checked in one operation if the facts in the Lexicon are expressed in terms of suitably abstract operators r e l a t i n g entities t o scales. We simply ask if the definite entity is on or part of a scale o r a t a p o i n t on or -along an +interval of a scale, where the scale can be identified." |
| }, |
| "FIGREF6": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Also checking this c r i t e r i o n presupposes a very sophisticated s y n t a c t i c and semantic analysis. For example, [d) assumes that the times of events mentioned in tenseless constructions can be recovered.2. A definite entity is in the no-search case i f it i s the dominant entity of t h a t description. This d i v i d e s i n t o two sub-c r i t e r i a : a , Those e n t i t i e s which are u n i q u e or dominant by virtue of the properties which describe them: \" t h e sun1', \"the wind\".I f \"the Empire State Buildingff, and appositives, like \"the city of Bos tonr' . b. Those entities which are unique by virtue of t h e properties of an entity with which they are grammatically r e l a t e d : \"the door of the building\", \" t h e Hudson River valley\". \"The door of the buildingn is represented in t h e Text a s \"xl 1 door'(^^,^^ 1 building{X2))' i.e. \" t h e Xl such t h a t XI i s t h e door of X2 which is a building\". The uniqueness or dominance of XI is not a prope r t y of \"door\" b u t of \"building\". Stored w i t h \"building\" is the f a c t t h a t a building has in its front surface a main door which does not normally occur i n t h e p l u r a l . \"The door of t h e b u i l d i n g r ' is interpreted as this dominant dosr. If the tvliqueness conditions succeed, a p o i n t e r is s e t from t h e dominant l e x i c a l variable to t h e corresponding e n t i t y . If subsequently the same definite noun phrase occurs, the uniqueness check will discover t h i s p o i n t e r and c o r r e c t l y identify the antecedent. Thus, we can handle the example \"Walk up to the door of t h e building. Go through the door of the building.\" Here the uniqueness check gives us a s h o r t c u t around the n Walk out the door of this b u i l 8 i n g . T u r n right.Walk to the end of the block. \"What block? 
From \"block\" W e follow a back p o i n t e r t o the f a c t stored with \"streetn *that \"streets consist of blocks\", and from \"street1' the fact with \"buildingt' that \"Buildings are by streets\" Since a building is mentioned, we assume it is \"the block of the street the b u i l d i n g is on\". T h e facts in the c h a i n of inference leading to this are instantiated, An entity is introduced i n t o the t e x t f o r t h e \"street\" and the Text is augmented by the statements that \"the b u i l d i n g i s on the street\" and \" t h e block is part of the street\". This information turns out to be required for the map. Note that t h e Eact that a building is on a street is a sometimes f a c t and that we are free to d'raw it only because \"the blockn occurs*To conduct the search of the Lexicon, ideally we would like to send out a pulse from the word \"block\" which travels faster over more salient paths, and look for the first entity which theptXlse reaches. The saliency is simulated by the cluster structure descrihea above, The parallel process of the spreading signal is simulated by interleafing deeper pfobes from salient clusters with shallower probes from less salient clusters. For example, i f \"streets consist of blocks\" i s a c l u s t e r 1 f a c t , t h e n we might probe for a cluster 1 f a c t involving syreets and a cluster 2 Eact involving blocks at r o u g h l y the same time, After one p l a u s i b l e antecedent is found in this way, t h e search is continued for possible antecedents which are n e a r l y as plausible. If after a time no plausible antecedents are found, the search is discontinued. Searches f o r antecedents are conducted not only for entities but also for definite noun phrases that the nominalization transformations of t h e syntactic component have turned into statements --e . g ." |
| }, |
| "FIGREF7": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Turn right o n t o Adarnii Street. The library fs at the end of t h e b l o c k \" ." |
| }, |
| "FIGREF8": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "The lattice consistency check corresponds precisely to this feeling. If a definite entity X1 is a point or interval in a lattice or at a point or along an interval, we ask if the proposed antecedent X2 is or can be related to a portion of the l a t t i c e . If so, then s i n c e the lattice represents a transitive relation, we need only ask i f there is a path in the lattice from X2 to XI.If there is, they cannot be the same entity.Many cases whichpass for applications of the supposed recency principle--\"Pick the most recent plausible antecedentn-are in reality examples of this consistency check. The earlier plausible antecedent is rejected because of lattice considerations.As the text is processed, the whole structure of the discourse is built up. When a definite noun phrase is encountered, this discourse structure is known and it is this knowledge that is used to determine the antecedent rather than the linear ordering of the words on the page.Competition among Remaining Plausible Antecedents. Even after the consistency checks, several plausible antecedents may remain, forcing us to decide among them on less certain criteria." |
| }, |
| "FIGREF9": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "A probe is sent out from the definite entity and from each plausible antecedent. Each plausible antecedent is searched for properties it has in common with the definite entity. Common properties count most if they are already in the Text; within the Lexicon, common properties count more if they are within more salient clusters or result from shorter chains of inference. Default. Like the higher predicate algorithm, the definite noun phrase algorithm has a default feature. If the uniqueness conditions fail and the search turns up no antecedent, we simply introduce a new entity. In fact, in the directions texts there are a disproportionately large number of default cases, for \"the object\" may simply be the object you will see when you reach that point in following the directions. Other Anaphora. We have not yet implemented routines for handling other anaphora. However, we believe they are very similar to the definite noun phrase routine, with certain differences. For entities tagged with demonstrative articles, we do not check uniqueness conditions, and the search will be narrower, since the antecedent must be an entity or statement actually occurring in the text. For pronouns also, no uniqueness conditions are checked. The search will turn up more consistent plausible antecedents, and a correspondingly greater burden will be placed on the competition routine. INTERSENTENTIAL CONNECTIVES We detect unstated inter-sentence connectives by matching two successive sentences S1, S2 with a small number of common patterns. In the directions texts the patterns are usually few and simple. The most common are 1. S1 asserts a change whose final state is asserted or presupposed by S2. 2. S1 asserts or presupposes a state which is the initial state of a change asserted by S2.
(These are likely very common patterns in all narratives.) For example, in the text \"Walk out the door of this building. Turn right. Walk to the end of the block\", pattern (1) joins the first two sentences, where the state is \"You at X\". Pattern (2) joins the last two sentences, where again the state is \"You at X'\". Note moreover that the sentences are interlocked by a second application of the two patterns: The first sentence assumes an angular orientation which is the initial state of the change asserted in the second sentence. The final state of this change is assumed by the third sentence. In addition to providing the discourse with structure, this operation is one of the principal means by which implied entities in one sentence, like X above, are identified with those in another. When pattern (2) is applied, we delete the independent occurrence of the state in the Text, so that subsequently it exists only as one intermediate state in a larger event. Changes across time are handled in this way. TASK PERFORMANCE COMPONENT Arbitrary Decisions. The semantic operations are quite general and can be used for any application. The augmented and interrelated Text is then handed over to the task performance component, which of course is specific to the application. Our task component first makes arbitrary decisions required by the map but not given in the text. Both natural language directions and sketched maps allow information to be incomplete and imprecise, but in different ways. For example, in \"Turn right at the third street or the second stoplight\",
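The two change/state patterns can be sketched as a toy matcher over successive sentences. The dictionary shapes below (a sentence reduced to a `change` pair or a `state`) are an invented illustration of the idea, not the paper's actual representation.

```python
def connective(s1, s2):
    """Match successive sentences S1, S2 against the two patterns.
    A sentence is a dict: {'change': (initial, final)} and/or {'state': s}."""
    s1_final = s1.get('change', (None, None))[1]
    s2_init = s2.get('change', (None, None))[0]
    # Pattern 1: S1 asserts a change whose final state S2
    # asserts (as a state) or presupposes (as its change's initial state).
    if s1_final is not None and s1_final in (s2.get('state'), s2_init):
        return ('pattern1', s1_final)
    # Pattern 2: S1 asserts or presupposes a state which is the
    # initial state of the change asserted by S2.
    if s1.get('state') is not None and s1.get('state') == s2_init:
        return ('pattern2', s1.get('state'))
    return None
```

On the walking example, \"Walk out the door\" (a change ending in \"You at X\") followed by \"Turn right\" (a change presupposing \"You at X\") matches pattern 1 on the shared state, which is exactly how the implied entity X in one sentence gets identified with that in the next.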
we must decide whether to put the first stoplight at the first or second street. The lattice representing the path \"you\" take must be complete, in the sense that it is continuous, begins at the initial location, ends at the desired goal, and that the relative locations of all points on the path are known. The lattice is complete if and only if there is a directed path passing through every point in the lattice at least once. If it is not complete, it is completed by supplying the fewest possible new links. Geometrizing the Lattices. The second task operation is to convert the topological lattice representation into the geometric representation required by the maps. First we assign directions to all the points in the angular orientation lattice. In the simplest case we may have something like where \"ab\" means direction b results from a clockwise rotation of direction a. If no explicit directional information is present, we simply assume a, c, and e are the same direction, and b and d are the same, and then assume the two directions are at right angles. Then in the distance lattice, contiguous or overlapping paths which share the same orientation are assumed to be parts of the same path and are mapped into a straight line. Information about names is accessed and assigned to the streets and buildings, and the map is drawn. Specific Systems with a General Semantic Component. We are aiming not so much at the construction of a general natural language processing system, which still seems reasonably far off, but at an easier way of constructing specific systems. The case of syntax is instructive. It would be foolish for one who is building a natural language processing system to build his syntactic component from scratch. Large general grammars and parsers for them exist (e.g. Grishman et al. 1973, Sager & Grishman 1975). It is easier by several orders of magnitude to" |
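The completeness test and repair can be sketched as follows, assuming the path lattice is an acyclic graph over named points (our assumption). A topological order is computed, and any consecutive pair of points not already linked gets a new link; this chains every point onto one directed path, adding links only where gaps exist (a heuristic for \"fewest possible new links\", not a proven minimum).

```python
def complete_lattice(nodes, edges):
    """Return a point ordering and the new links needed so that one
    directed path passes through every point in the lattice."""
    indegree = {n: 0 for n in nodes}
    for a in edges:
        for b in edges[a]:
            indegree[b] += 1
    order, ready = [], sorted(n for n in nodes if indegree[n] == 0)
    while ready:
        n = ready.pop(0)            # deterministic choice among candidates
        order.append(n)
        for b in edges.get(n, ()):
            indegree[b] -= 1
            if indegree[b] == 0:
                ready.append(b)
        ready.sort()
    # link any consecutive points the lattice does not already connect
    added = [(a, b) for a, b in zip(order, order[1:])
             if b not in edges.get(a, ())]
    return order, added             # added == [] means already complete
```

A path start-to-corner-to-library needs no new links; if the library hangs off the start with no relation to the corner, one link is supplied to make the path continuous.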
| } |
| } |
| } |
| } |