| { |
| "paper_id": "P91-1040", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:03:32.342380Z" |
| }, |
| "title": "CONSTRAINT PROJECTION: AN EFFICIENT TREATMENT OF DISJUNCTIVE FEATURE DESCRIPTIONS", |
| "authors": [ |
| { |
| "first": "Mikio", |
| "middle": [], |
| "last": "Nakano", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "NTT Basic Research Laboratories", |
| "location": { |
| "addrLine": "3-9-11 Midori-cho, Musashino-shi", |
| "postCode": "180", |
| "settlement": "Tokyo", |
| "country": "JAPAN" |
| } |
| }, |
| "email": "nakano@atom.ntt.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Unification of disjunctive feature descriptions is important for efficient unification-based parsing. This paper presents constraint projection, a new method for unification of disjunctive feature structures represented by logical constraints. Constraint projection is a generalization of constraint unification, and is more efficient because constraint projection has a mechanism for abandoning information irrelevant to a goal specified by a list of variables.", |
| "pdf_parse": { |
| "paper_id": "P91-1040", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Unification of disjunctive feature descriptions is important for efficient unification-based parsing. This paper presents constraint projection, a new method for unification of disjunctive feature structures represented by logical constraints. Constraint projection is a generalization of constraint unification, and is more efficient because constraint projection has a mechanism for abandoning information irrelevant to a goal specified by a list of variables.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Unification is a central operation in recent computational linguistic research. Much work on syntactic theory and natural language parsing is based on unification because unification-based approaches have many advantages over other syntactic and computational theories. Unificationbased formalisms make it easy to write a grammar. In particular, they allow rules and lexicon to be written declaratively and do not need transformations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Some problems remain, however. One of the main problems is the computational inefficiency of the unification of disjunctive feature structures. Functional unification grammar (FUG) (Kay 1985) uses disjunctive feature structures for economical representation of lexical items. Using disjunctive feature structures reduces the number of lexical items. However, if disjunctive feature structures were expanded to disjunctive normal form (DNF) 1 as in definite clause grammar (Pereira and Warren 1980 ) and Kay's parser (Kay 1985) , unification would take exponential time in the number of disjuncts. Avoiding unnecessary expansion of disjunction is important for efficient disjunctive unification. Kasper (1987) and Eisele and DSrre (1988) have tackled this problem and proposed unification methods for disjunctive feature descriptions.", |
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 191, |
| "text": "(Kay 1985)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 472, |
| "end": 496, |
| "text": "(Pereira and Warren 1980", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 516, |
| "end": 526, |
| "text": "(Kay 1985)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 695, |
| "end": 708, |
| "text": "Kasper (1987)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 713, |
| "end": 736, |
| "text": "Eisele and DSrre (1988)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "~DNF has a form \u00a2bt Vq~ V\u00a23 V.-. Vq~n, where \u00a2i includes no disjunctions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "These works are based on graph unification rather than on term unification. Graph unification has the advantage that the number of arguments is free and arguments are selected by labels so that it is easy to write a grammar and lexicon. Graph unification, however, has two disadvantages: it takes excessive time to search for a specified feature and it requires much copying. We adopt term unification for these reasons.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although Eisele and DSrre (1988) have mentioned that their algorithm is applicable to term unification as well as graph unification, this method would lose term unification's advantage of not requiring so much copying. On the contrary, constraint unification (CU) (Hasida 1986 , Tuda et al. 1989 , a disjunctive unification method, makes full use of term unification advantages. In CU, disjunctive feature structures are represented by logical constraints, particularly by Horn clauses, and unification is regarded as a constraint satisfaction problem. Furthermore, solving a constraint satisfaction problem is identical to transforming a constraint into an equivalent and satisfiable constraint. CU unifies feature structures by transforming the constraints on them. The basic idea of CU is to transform constraints in a demand-driven way; that is, to transform only those constraints which may not be satisfiable. This is why CU is efficient and does not require excessive copying.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 32, |
| "text": "Eisele and DSrre (1988)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 259, |
| "end": 263, |
| "text": "(CU)", |
| "ref_id": null |
| }, |
| { |
| "start": 264, |
| "end": 276, |
| "text": "(Hasida 1986", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 277, |
| "end": 295, |
| "text": ", Tuda et al. 1989", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, CU has a serious disadvantage. It does not have a mechanism for abandoning irrelevant information, so the number of arguments in constraint-terms (atomic formulas) becomes so large that transt'ormation takes much time. Therefore, from the viewpoint of general natural language processing, although CU is suitable for processing logical constraints with small structures, it is not suitable for constraints with large structures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper presents constraint projection (CP), another method for disjunctive unification. The basic idea of CP is to abandon information irrelevant to goals. For example, in bottom-up parsing, if grammar consists of local constraints as in contemporary unification-based formalisms, it is possible to abandon information about daughter nodes after the application of rules, because the feature structure of a mother node is determined only by the feature structures of its daughter nodes and phrase structure rules. Since abandoning irrelevant information makes the resulting structure tighter, another application of phrase structure rules to it will be efficient. We use the term projection in the sense that CP returns a projection of the input constraint on the specified variables.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We explain how to express disjunctive feature structures by logical constraints in Section 2. Section 3 introduces CU and indicates its disadvantages. Section 4 explains the basic ideas and the algorithm of CP. Section 5 presents some results of implementation and shows that adopting CP makes parsing efficient.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This section explains the representation of disjunctive feature structures by Horn clauses. We use the DEC-10 Prolog notation for writing Horn clauses. First, we can express a feature structure without disjunctions by a logical term. For example, (1) is translated into (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expressing Disjunctive Feature Structures by Logical Constraints", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(1 (2) cat (v, agr (sing, 3rd), cat (_, agr (sing, 3rd), _) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FP\u00b0'\" ]", |
| "sec_num": null |
| }, |
| { |
| "text": "The arguments of the functor cat correspond to the pos (part of speech), agr (agreement), and snbj (subject) features. Disjunction and sharing are represented by the bodies of Horn clauses. An atomic formula in the body whose predicate has multiple definition clauses represents a disjunction. For example, a disjunctive feature structure (3) in FUG (Kay 1985) notation, is translated into (4).", |
| "cite_spans": [ |
| { |
| "start": 350, |
| "end": 360, |
| "text": "(Kay 1985)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FP\u00b0'\" ]", |
| "sec_num": null |
| }, |
| { |
| "text": "v { [numsing .] }...~ plural] agr [] [per j 1st t/ 12nd j'J (3) subj [ gr ! [num L agr per (4) p(cat (v, Agr, cat (_, Agr,_)))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\"pos", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 -not_3s (Agr). p(cat (n, agr (s ing, 3rd), _) ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\"pos", |
| "sec_num": null |
| }, |
| { |
| "text": ": -Ist_or_2nd (Per). not_3s (agr(plural, _)). Ist_or_2nd(Ist). Ist_or_2nd(2nd).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "Here, the predicate p corresponds to the specification of the feature structure. A term p(X) means that the variable I is a candidate of the disjunctive feature structure specified by the predicate p. The ANY value used in FUG or the value of an unspecified feature can be represented by an anonymous variable '_'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
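| { |
| "text": "For illustration (a hypothetical query pair, not from the original text): given the clauses in (4), the Prolog query ?- p(cat(v, agr(plural, _), _)). succeeds through the second clause of not_3s, while ?- p(cat(v, agr(sing, 3rd), _)). fails, because neither clause of not_3s admits third-person singular agreement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |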
| { |
| "text": "We consider atomic formulas to be constraints on the variables they include. The atomic formula lst_or_2nd(Per) in (4) constrains the variable Per to be either 1st or hd. In a similar way, not_3s (Agr) means that Agr is a term which has the form agr(l~um,Per), and that//am is sing and Per is subject to the constraint lst_or_2nd(Per) or that }lure is plural.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "We do not use or consider predicates without their definition clauses because they make no sense as constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "We call an atomic formula whose predicate has definition clauses a constraint-term, and we call a sequence of constraint-terms a constraint. A set of definition clauses like (4) is called a structure of a constraint.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "Phrase structure rules are also represented by logical constraints. For example, If rules are binary and if L, R, and M stand for the left daughter, the right daughter, and the mother, respectively, they stand in a ternary relation, which we represent as psr(L,R,M). Each definition clause ofpsr corresponds to a phrase structure rule. Clause (5) is an example.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "(5) psr(Subj, cat (v, Agr, Subj ), cat ( s, Agr, _) ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
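| { |
| "text": "As a hypothetical illustration of (5), not taken from the original text: the query ?- psr(cat(n, agr(sing, 3rd), _), R, M). unifies R with cat(v, Agr, cat(n, agr(sing, 3rd), _)) and M with cat(s, Agr, _), so the mother's agreement value is shared with that of the right daughter (the verb).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |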
| { |
| "text": "Definition clauses ofpsr may have their own bodies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "If a disjunctive feature structure is specified by a constraint-term p(X) and another is specified by q(Y), the unification of X and Y is equivalent to the problem of finding X which satisfies (6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "(6) [p(X),q(X)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus a unification of disjunctive feature structures is equivalent to a constraint satisfaction problem. An application of a phrase structure rule also can be considered to be a constraint satisfaction problem. For instance, if categories of left daughter and right daughter are stipulated by el(L) and c2(R), computing a mother category is equivalent to finding M which satisfies constraint (7).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "(7) [cl (L), c2 (R) ,psr (L,R, M)] A Prolog call like (8) realizes this constraint satisfaction. (8) :-el (L), c2(R) ,psr (L,R,M), assert (c3(M)) ,fail.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
| { |
| "text": "This method, however, is inefficient. Since Prolog chooses one definition clause when multiple definition clauses are available, it must repeat a procedure many times. This method is equivalent to expanding disjunctions to DNF before unification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |
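| { |
| "text": "A rough cost sketch (our estimate, not a figure from the paper): if c1, c2, and psr have j, k, and l definition clauses respectively, the call in (8) backtracks through up to j * k * l clause combinations, asserting one c3 fact for each success; this is exactly the DNF expansion that the following methods are designed to avoid.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "not_3s ( agr ( sing, Per) )", |
| "sec_num": null |
| }, |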
| { |
| "text": "This section explains constraint unification ~ (Hasida 1986 , Tuda et al. 1989 , a method of disjunctive unification, and indicates its disadvantage.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 59, |
| "text": "(Hasida 1986", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 60, |
| "end": 78, |
| "text": ", Tuda et al. 1989", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Unification and Its Problem", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As mentioned in Section 1, we can solve a constraint satisfaction problem by constraint transformation. What we seek is an efficient algorithm of transformation whose resulting structure is guaranteed satisfiability and includes a small number of disjuncts. CU is a constraint transformation system which avoids excessive expansion of disjunctions. The goal of CU is to transform an input constraint to a modular constraint. Modular constraints are defined as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(9) (Definition: modular) A constraint is modular, iff 1. every argument of every atomic formula is a variable, 2. no variable occurs in two distinct places, and 3. every predicate is modularly defined.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A predicate is modularly defined iff the bodies of its definition clauses are either modular or NIL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For example, (10) is a modular constraint, while (11), (12), and (13) are not modular, when all the predicates are modularly defined. The main ideas of CU are (a) it classifies constraint-terms in the input constraint into groups so that they do not share a variable and it transforms them into modular constraints separately, and (b) it does not transform modular constraints. Briefly, CU processes only constraints which have dependencies. This corresponds to avoiding unnecessary expansion of disjunctions. In CU, the order of processes is decided according to dependencies. This flexibility enables CU to reduce the amount of processing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(10) [p(X,Y) ,q(Z,\u2022)] (11) [p(X.X)] (12) [p(X,\u00a5) ,q(Y.Z)]", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We explain these ideas and the algorithm of CU briefly through an example. CU consists of two functions, namely, modularize(constraint) and integrate(constraint). We can execute CU by calling modularize. If the input constraint were not divided and (21) had multiple solutions, the processing of (22) would be repeated many times. This is one reason for the efficiency of CU. Constraint (23) is not transformed because it is already modular (idea (b)). Prolog would exploit the definition clauses of r and expend unnecessary computation time. This is another reason for CU's efficiency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To transform (21) and (22) into modular constraint-terms, (24) and (25) are called.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(24) integrate([p(X,Y),q(Y, Z)]) (25) integrate([p(A,B), r(A)])", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Since (24~ and (25) succeed and return e0(X,Y,Z)\" and el(A,B), respectively, (14) returns (26).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(26) [c0(X,Y,Z), el (A,B) ,r(C)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This modularization would fail if either (24) or (25) failed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Next, we explain integrate through the execution of (24). First, a new predicate c0 is made so that we can suppose (27).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(27) cO (X,Y, Z) 4=:#p(X,Y), q(Y,Z) Formula (27) means that (24) returns c0(X,Y,Z) if the constraint [p(X,Y) ,q(Y,Z)] is satisfiable; that is, e0(X,\u00a5,Z) can be modularly defined so that c0(X,Y,Z) and p(X,Y),q(Y,Z) constrain X, Y, and Z in the same way. Next, a target constraint-term is chosen. Although some heuristics may be applicable to this choice, we simply choose the first element p(X,Y) here. Then, the definition clauses of p are consulted. Note that this corresponds to the expansion of a disjunction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "First, (15) is exploited. The head of (15) is unified with p(X,Y) in (27) so that (27) becomes (28).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Unification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The term p(f(A),C) has been replaced by its body r(A),r(C) in the right-hand side of (28). Formula (28) means that cO(f (A) ,C,Z) is true if the variables satisfy the right-hand side of (28). Since the right-hand side of (28) is not modular, (29) is called and it must return a constraint like (30). Second, (16) is exploited. Then, (28) becomes (32), (33) is called and returns (34), and (35) is created.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(28) c0(~ CA) ,C,Z)C=~r(A) ,r(C) ,q(C,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "(32) c0(a,b,Z) \u00a2==~q(b,Z) (33) modularize ( [q(b,Z) ", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 51, |
| "text": "( [q(b,Z)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(28) c0(~ CA) ,C,Z)C=~r(A) ,r(C) ,q(C,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "] ) (34) [c3(Z)] (35) cO(a,b,Z):-c3(Z).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(28) c0(~ CA) ,C,Z)C=~r(A) ,r(C) ,q(C,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "As a result, (24) returns c0(X,Y,Z) because its definition clauses are made.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(28) c0(~ CA) ,C,Z)C=~r(A) ,r(C) ,q(C,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "All the Horn clauses made in this CU invoked by (14) are shown in (36). ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(28) c0(~ CA) ,C,Z)C=~r(A) ,r(C) ,q(C,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "aWe use cn (n = 0, 1, 2,.--) for the names of newlymade predicates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "c2(a,b).", |
| "sec_num": null |
| }, |
| { |
| "text": "When a new clause is created, if the predicate of a term in its body has only one definition clause, the term is unified with the head of the definition clause and is replaced by the body. This operation is called reduction. For example, the second clause of (36) is reduced to (37) because c3 has only one definition clause. (37) c0(a,b,a) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 326, |
| "end": 340, |
| "text": "(37) c0(a,b,a)", |
| "ref_id": "FIGREF20" |
| } |
| ], |
| "eq_spans": [], |
| "section": "c2(b,a). c3(a). cl(a,b).", |
| "sec_num": null |
| }, |
| { |
| "text": "CU has another operation called folding. It avoids repeating the same type of integrations so that it makes the transformation efficient. Folding also enables CU to handle some of the recursively-defined predicates such as member and append.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "c2(b,a). c3(a). cl(a,b).", |
| "sec_num": null |
| }, |
| { |
| "text": "We adopt the CYK algorithm (Aho and Ullman 1972) for simplicity, although any algorithms may be adopted. Suppose the constraint-term caZ_n_m(X) means X is the category of a phrase from the (n + 1)th word to the ruth word in an input sentence. Then, application of a phrase structure rule is reduced to creating Horn clauses like (38). The body of the created clause is the constraint returned by the modularization in the right-hand side. If the modularization fails, the clause is not created.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 48, |
| "text": "(Aho and Ullman 1972)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with Constraint Unification", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The main problem of a CU-based parser is that the number of constraint-term arguments increases as parsing proceeds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem of Constraint Unification", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For example, cat_0_2(M) is computed by (39). Next, suppose that (40) is exploited in the following application of rules. It returns a constraint like cl(L,R,M,R1,M1). Thus the number of the constraint-term arguments increases. This causes computation time explosion for two reasons: (a) the augmentation of arguments increases the computation time for making new terms and environments, dividing into groups, unification, and so on, and (b) resulting structures may include excessive disjunctions because of the ambiguity of features irrelevant to the mother categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem of Constraint Unification", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "This section describes constraint projection (CP), which is a generalization of CU and overcomes the disadvantage explained in the previous section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Constraint Projection", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Inefficiency of parsing based on CU is caused by keeping information about daughter nodes. Such information can be abandoned if it is assumed that we want only information about mother nodes. That is, transformation (43) is more useful in parsing than (44). (45) (Definition: Normal) A constraint is normal iff (a) it is modular, and (b) each definition clause is a normal definition clause; that is, its body does not include variables which do not appear in the head.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Projection", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For example, (46) is a normal definition clause while (47) is not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Projection", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(46) p(a,X) :-r(X). (47) q(X) :-s(X,\u00a5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Projection", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The operation (43) is generalized into a new operation constraint projection which is defined in (48).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Projection", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(48) Given a constraint C and a list of variables which we call goal, CP returns a normal constraint which is equivalent to C concerning the variables in the goal, and includes only variables in the goal. \u2022 project(P, X) returns a normal constraint (list of atomic formulas) on X.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Projection", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "1. If P = NIL then return NIL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Basic Ideas of Constraint Projection", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "If not(satisfiable(P)), then return \"fail\", Else return NIL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IfX=NIL,", |
| "sec_num": "2." |
| }, |
| { |
| "text": "3. II := divide(P). -R := normalize(T, V).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IfX=NIL,", |
| "sec_num": "2." |
| }, |
| { |
| "text": "If R = 'faT', then return \"fail\", Else add R to S.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "IfX=NIL,", |
| "sec_num": "2." |
| }, |
| { |
| "text": "\u2022 normalize(S, V) returns a normal constraintterm (atomic formula) on V. -Q := pro~ect(append(BO, S'0), X ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Return S.", |
| "sec_num": "9." |
| }, |
| { |
| "text": "If. Q = fall, then go to the next definitton clause Else add C0:-Q. to the database with reduction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If S does not include variables appearing in", |
| "sec_num": "1." |
| }, |
| { |
| "text": "7. If success-flag = NIL, then return \"fail\", else return C.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If S does not include variables appearing in", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 mgu returns the most general unifier (Lloyd", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If S does not include variables appearing in", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 divide(P) divides P into a number of constraints which share no variables and returns the list of the constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1984)", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 satisfiable(P) returns T if P is satisfiable, and NIL otherwise. project ([p(X,Y),q(Y,Z) ,p(A,S),r(A),r(e)], [X,e] ([pll,[l,qlT,Zll,[g] CP also divides input constraint C into several constraints according to dependencies, and transforms them separately. The divided constraints are classified into two groups: constraints which include variables in the goal, and the others.", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 90, |
| "text": "([p(X,Y),q(Y,Z)", |
| "ref_id": null |
| }, |
| { |
| "start": 111, |
| "end": 116, |
| "text": "[X,e]", |
| "ref_id": null |
| }, |
| { |
| "start": 117, |
| "end": 137, |
| "text": "([pll,[l,qlT,Zll,[g]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1984)", |
| "sec_num": null |
| }, |
| { |
| "text": "We call the former goal-relevant constraints and the latter goal-irrelevant constraints. Only goalrelevant constraints are transformed into normal constraints. As for goal-irrelevant constraints, only their satisfiability is examined, because they are no longer used and examining satisfiability is easier than transforming. This is a reason for the efficiency of CP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1984)", |
| "sec_num": null |
| }, |
| { |
| "text": "CP consists of two functions, project(constraint, goal(variable list)) and normalize(constraint, goal(variable list)), which respectively correspond to modularize and integrate in CU. We can execute CP by calling project. The algorithm of constraint projection is shown in Figure 14 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 273, |
| "end": 282, |
| "text": "Figure 14", |
| "ref_id": "FIGREF13" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Algorithm of Constraint Projection", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We explain the algorithm of CP through the execution of (49). The predicates are defined in the same way as (15) to (20). This execution is illustrated in Figure 2 . First, the input constraint is divided into (50), (51) and (52) according to dependency. ([rlll.,rlCl,qlC,g) (51) is goal-irrelevant, only its satisfiability is examined and confirmed. If some goal-irrelevant constraints were proved not satisfiable, the projection would fail. Constraint (52) is already normal, so it is not processed. Then (53) is called to transform (50).", |
| "cite_spans": [ |
| { |
| "start": 255, |
| "end": 274, |
| "text": "([rlll.,rlCl,qlC,g)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 155, |
| "end": 163, |
| "text": "Figure 2", |
| "ref_id": "FIGREF15" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Algorithm of Constraint Projection", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(53) normalize ( [p(X, Y), q(\u00a5, Z) ], [X])", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm of Constraint Projection", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The second argument (goal) is the list of variables that appear in both (50) and the goal of (49). Since this normalization must return a constraint like [c0(X)], (49) returns (54).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm of Constraint Projection", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(54) [c0(X) ,r(C)] This includes only variables in the goal. This constraint has a tighter structure than (26).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm of Constraint Projection", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Next, we explain the function normalize through the execution of (53). This execution is illustrated in Figure 3 . First, a new term c0(X) is made so that we can suppose (55). Its arguments are all the variables in the goal.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 104, |
| "end": 112, |
| "text": "Figure 3", |
| "ref_id": "FIGREF20" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Algorithm of Constraint Projection", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The normal definition of cO should be found. Since a target constraint must include a variable in the goal, p(X,Y) is chosen. The definition clauses of p are (15) and (16).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(55) c0 (x)c=~p(x,Y) ,q(Y,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "(15) pCfCA) ,C) :-rCA),r(C). (16) p(a,b). The clause (15) is exploited at first. Its head is unified with p(X,Y) in (55) so that (55) becomes (56). (If this unification failed, the next definition clause would be exploited.) (56) c0 (f CA)) \u00a2=:\u00a2,r (A) ,r (C), q(C, Z) Tlm right-hand side includes some variables which", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "(55) c0 (x)c=~p(x,Y) ,q(Y,Z)", |
| "sec_num": null |
| }, |
| { |
| "text": "He wanted to be a doctor. You were a doctor when you were young. I saw a man with a telescope on the hill. He wanted to be a doctor when he was a student. In the context of graph unification, Carter (1990) proposed a bottom-up parsing method which abandons information irrelevant to the mother structures. His method, however, fails to check the inconsistency of the abandoned information. Furthermore, it abandons irrelevant information after the application of the rule is completed, while CP abandons goal-irrelevant constraints dynamically in its processes. This is another reason why our method is better.", |
| "cite_spans": [ |
| { |
| "start": 192, |
| "end": 205, |
| "text": "Carter (1990)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input sentence", |
| "sec_num": null |
| }, |
| { |
| "text": "Another advantage of CP is that it does not need much copying. CP copies only the Horn clauses which are to be exploited. This is why CP is expected to be more efficient and need less memory space than other disjunctive unification methods. Hasida (1990) proposed another method called dependency propagation for overcoming the problem explained in Section 3.3. It uses transclausal variables for efficient detection of dependencies. Under the assumption that information about daughter categories can be abandoned, however, CP should be more efficient because of its simplicity.", |
| "cite_spans": [ |
| { |
| "start": 241, |
| "end": 254, |
| "text": "Hasida (1990)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input sentence", |
| "sec_num": null |
| }, |
| { |
| "text": "We have presented constraint projection, a new operation for efficient disjunctive unification. The important feature of CP is that it returns constraints only on the specified variables. CP can be considered not only as a disjunctive unification method but also as a logical inference system. Therefore, it is expected to play an important role in synthesizing linguistic analyses such as parsing and semantic analysis, and linguistic and non-linguistic inferences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "I would like to thank Kiyoshi Kogure and Akira Shimazu for their helpful comments. I had precious discussions with KSichi Hasida and Hiroshi Tuda concerning constraint unification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "do not appear in the left-hand side. Therefore, (57) is called.(57) project ([r(h) ,r(C),q (C,Z) ], [AJ) This returns r(A), and (58) is created.(58) c0(f(a)):-r(A).Second, (16) is exploited and (59) is created in the same way.(59) c0(a).Consequently, (53) returns c0(X) because some definition clauses of cO have been created.All the Horn clauses created in this CP are shown in (60).cO(a).Comparing (60) with (36), we see that CP not only is efficient but also needs less memory space than CU.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 82, |
| "text": "([r(h)", |
| "ref_id": null |
| }, |
| { |
| "start": 91, |
| "end": 96, |
| "text": "(C,Z)", |
| "ref_id": null |
| }, |
| { |
| "start": 100, |
| "end": 104, |
| "text": "[AJ)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| }, |
| { |
| "text": "We can construct a CYK parser by using CP as in (61).[.] ).(2<m<l, 0<n<m -2, n + l<k<m -1, where l is the sentence length.)For a simple example, let us consider parsing the sentence \"Japanese work.\" by the following projection.The rules and leyScon are defined as follows:(63) psr(n(Num,Per), v(Num,Per, Tense), s (Tense)). (64) cat_of_j apanes e (n (Num, third) ). : -first_or_second(Per). (68) first_or_second(first). (69) first_or_second(second).Since the constraint cannot be divided, (70) is called.(70) normalize ([cat_of_japanese(L) ,The new term c0(M) is made, and (63) is exploited. Then (71) is to be created if its righthand side succeeds. Thus CP can he applied to CYK parsing, but needless to say, CP can be applied to parsing algorithms other than CYK, such as active chart parsing.", |
| "cite_spans": [ |
| { |
| "start": 519, |
| "end": 539, |
| "text": "([cat_of_japanese(L)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with Constraint Projection", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Both CU and CP have been implemented in Sun Common Lisp 3.0 on a Sun 4 spare station 1. They are based on a small Prolog interpreter written in Lisp so that they use the same nondisjunctive unification mechanism. We also implemented three CYK parsers that adopt Prolog, CU, and CP as the disjunctive unification mechanism. Grammar and lexicon are based on ttPSG (Pollard and Sag 1987) . Each lexical item has about three disjuncts on average. Table I shows comparison of the computation time of the three parsers. It indicates CU is not as efficient as CP when the input sentences are long.", |
| "cite_spans": [ |
| { |
| "start": 362, |
| "end": 384, |
| "text": "(Pollard and Sag 1987)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 443, |
| "end": 450, |
| "text": "Table I", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "5" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Theory of Parsing, Translation, and Compiling", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "V" |
| ], |
| "last": "Aho", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "D" |
| ], |
| "last": "Ullman", |
| "suffix": "" |
| } |
| ], |
| "year": 1972, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aho, A. V. and Ullman, J. D. (1972) The Theory of Parsing, Translation, and Compiling, Vol- ume I: Parsing. Prentice-Hall.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Eff\u00c9cient Disjunctive Unification for Bottom-Up Parsing", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Carter", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings of the 13th International Conference on Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "70--75", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carter, D. (1990) Eff\u00c9cient Disjunctive Unifica- tion for Bottom-Up Parsing. In Proceedings of the 13th International Conference on Computa- tional Linguistics, Volume 3. pages 70-75.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Unification of", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Eisele", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dsrre", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eisele, A. and DSrre, J. (1988) Unification of", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Disjunctive Feature Descriptions", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Disjunctive Feature Descriptions. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Conditioned Unification for Natural Language Processing", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Hasida", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Proceedings of the llth International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "85--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hasida, K. (1986) Conditioned Unification for Natural Language Processing. In Proceedings of the llth International Conference on Computa- tional Linguistics, pages 85--87.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Sentence Processing as Constraint Transformation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Hasida", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings of the 9th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hasida, K. (1990) Sentence Processing as Con- straint Transformation. In Proceedings of the 9th", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "European Conference on Artificial Intelligence", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "339--344", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "European Conference on Artificial Intelligence, pages 339-344.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A Unification Method for", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "T" |
| ], |
| "last": "Kasper", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kasper, R. T. (1987) A Unification Method for", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Disjunctive Feature Descriptions", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "235--242", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Disjunctive Feature Descriptions. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 235-242.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Parsing in Functional Unification Grammar", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kay", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Natural Language Parsing: Psychological, Computational and Theoretical Perspectives", |
| "volume": "", |
| "issue": "", |
| "pages": "251--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kay, M. (1985) Parsing in Functional Unifi- cation Grammar. In Natural Language Pars- ing: Psychological, Computational and Theoreti- cal Perspectives, pages 251-278. Cambridge Uni- versity Press.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Foundations of Logic Programming", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "W" |
| ], |
| "last": "Lloyd", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lloyd, J. W. (1984) Foundations of Logic Pro- gramming. Springer-Verlag.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Definite Clause Grammar for Language Analysis--A Survay of the Formalism and a Comparison with Augmented Transition Networks", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "H D" |
| ], |
| "last": "Warren", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Artificial Intelligence", |
| "volume": "13", |
| "issue": "", |
| "pages": "231--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pereira, F. C. N. and Warren, D. H. D. (1980) Definite Clause Grammar for Language Analysis--A Survay of the Formalism and a Comparison with Augmented Transition Net- works. Artificial Intelligence, 13:231-278.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Information-Based Syntax and Semantics", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "J" |
| ], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [ |
| "A" |
| ], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Fundamentals. CSLI Lecture Notes Series", |
| "volume": "1", |
| "issue": "13", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pollard, C. J. and Sag, I. A. (1987) Information- Based Syntax and Semantics, Volume 1 Funda- mentals. CSLI Lecture Notes Series No.13. Stan- ford:CSLI.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "JPSG Parser on Constraint Logic Programming", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tuda", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Hasida", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sirai", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings of 4th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "95--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tuda, H., Hasida, K., and Sirai, H. (1989) JPSG Parser on Constraint Logic Programming. In Proceedings of 4th Conference of the European Chapter of the Association for Computational Linguistics, pages 95-102.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "Function modularize divides the input constraint into several constraints, and returns a list of their integrations. If one of the integrations fails, modularization also fails. The function integrate creates a new constraintterm equivalent to the input constraint, finds its modular definition clauses, and returns the new constraint-term. Functions rnodularize and integrate call each other. Let us consider the execution of (14The predicates are defined as follows. (15) pCfCA),C):-rCA),rCC). (16) p(a.b).", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": "r(b).The input constraint is divided into (21), (22), and (23), which are processed independently (idea (a)).", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "text": "modularize(Er(A) ,rCC), qCC, Z)'l) (30) It(A) ,c2(C,Z)]Then, (31) is created as a definition clause of cO.", |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "num": null, |
| "uris": null, |
| "text": "c0(fCA) ,C,Z) :-r(A) ,c2(C,Z). c0(a,b,Z) :-c3(Z).", |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "num": null, |
| "uris": null, |
| "text": "0<n<m -2, n + l<_k<m -1, where I is the sentence length.)", |
| "type_str": "figure" |
| }, |
| "FIGREF8": { |
| "num": null, |
| "uris": null, |
| "text": "modularize( [cat_0_2(M), cat_2_3(Rl), psr(M,RI,MI)])", |
| "type_str": "figure" |
| }, |
| "FIGREF9": { |
| "num": null, |
| "uris": null, |
| "text": "43) rclCL),c2CR),psrCL,a,H)'l ~ [c3(H)] (44) [cl (L), c2(R) ,psr(L,R,H)] :=~ [c3(L,R,R)] Constraint [c3(M)]in (43) must be satisfiable and equivalent to the left-hand side concerning H. Since [c3(M)] includes only information about H, it must be a normal constraint, which is defined in (45).", |
| "type_str": "figure" |
| }, |
| "FIGREF11": { |
| "num": null, |
| "uris": null, |
| "text": "Hin := the list of the members of H which include variables in X. 5. ]-[ex :---the list of the members of H other than the members of ~in. 6. For each member R of ]]cx, If not(satisfiable(R)) then return \"fail\" 7. S := NIL. 8. For each member T of Hi,=: -V := intersection(X, variables appearing in T).", |
| "type_str": "figure" |
| }, |
| "FIGREF12": { |
| "num": null, |
| "uris": null, |
| "text": "V, and S consists of a modular term, then Return S. 2. S := a member of S that includes a variable in V. 3. S' := the rest of S. 4. C := a term c.(v], v2 ..... vn). where v], .... vn are all the members of V and c. is a new functor. 5. success-flag := NIL. 6. For each definition clause H :-B. of the predicate of S: -0 := mgu(S, H). If 0 = fail, go to the next definition clause. -X := a list of variables in C8.", |
| "type_str": "figure" |
| }, |
| "FIGREF13": { |
| "num": null, |
| "uris": null, |
| "text": "(satisfiable is a slight modification of modularize of CU.) Algorithm of Constraint Projection", |
| "type_str": "figure" |
| }, |
| "FIGREF15": { |
| "num": null, |
| "uris": null, |
| "text": "A Sample Execution of project", |
| "type_str": "figure" |
| }, |
| "FIGREF17": { |
| "num": null, |
| "uris": null, |
| "text": "and (52) are goal-relevant because they include X and C, respectively. Since 4Since the current version of CP does not have an operation corresponding to folding, it cannot handle recursively-defined predicates. normalize( I'p (X, Y) ,q(Y ,Z)], [X])", |
| "type_str": "figure" |
| }, |
| "FIGREF20": { |
| "num": null, |
| "uris": null, |
| "text": ": A Sample Execution of normalize", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>[pCf(a) ,g(Z))]</td></tr><tr><td>Constraint (10) is satisfiable because the predi-</td></tr><tr><td>cates have definition clauses. Omitting the proof,</td></tr><tr><td>a modular constraint is necessarily satisfiable.</td></tr><tr><td>Transforming a constraint into a modular one is</td></tr><tr><td>equivalent to finding the set of instances which</td></tr><tr><td>satisfy the constraint. On the contrary, non-</td></tr><tr><td>modular constraint may not be satisfiable. When</td></tr><tr><td>~Constralnt unification is called conditioned unifi-</td></tr><tr><td>cation in earlier papers.</td></tr></table>", |
| "text": "constraint is not modular, it is said to have dependencies. For example, (12) has a dependency concerning \u00a5." |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td/><td>Prolog</td><td>CU</td><td>CP</td></tr><tr><td/><td>3.88</td><td>6.88</td><td>5.64</td></tr><tr><td/><td>29.84</td><td colspan=\"2\">19.54 12.49</td></tr><tr><td/><td colspan=\"3\">(out of memory) 245.34 17.32</td></tr><tr><td/><td>65.27</td><td colspan=\"2\">19.34 14.66</td></tr><tr><td/><td>Table h Computation Time</td><td/></tr><tr><td>6</td><td>Related Work</td><td/></tr></table>", |
| "text": "CPU time (see.)" |
| } |
| } |
| } |
| } |