{
"paper_id": "P90-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:39:06.727917Z"
},
"title": "STRUCTURAL DISAMBIGUATION WITH CONSTRAINT PROPAGATION",
"authors": [
{
"first": "Hiroshi",
"middle": [],
"last": "Maruyama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Research Laboratory",
"location": {
"addrLine": "5-19 Sanbancho, Chiyoda-ku",
"postCode": "102",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "maruyama@jpntscvm.bitnet"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new grammatical formalism called Constraint Dependency Grammar (CDG) in which every grammatical rule is given as a constraint on wordto-word modifications. CDG parsing is formalized as a constraint satisfaction problem over a finite domain so that efficient constraint-propagation algorithms can be employed to reduce structural ambiguity without generating individual parse trees. The weak generative capacity and the computational complexity of CDG parsing are also discussed.",
"pdf_parse": {
"paper_id": "P90-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new grammatical formalism called Constraint Dependency Grammar (CDG) in which every grammatical rule is given as a constraint on wordto-word modifications. CDG parsing is formalized as a constraint satisfaction problem over a finite domain so that efficient constraint-propagation algorithms can be employed to reduce structural ambiguity without generating individual parse trees. The weak generative capacity and the computational complexity of CDG parsing are also discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We are interested in an efficient treatment of structural ambiguity in natural language analysis. It is known that \"every-way\" ambiguous constructs, such as prepositional attachment in English, have a Catalan number of ambiguous parses (Church and Patil 1982) , which grows at a faster than exponential rate (Knuth 1975) . A parser should be provided with a disambiguation mechanism that does not involve generating such a combinatorial number of parse trees explicitly.",
"cite_spans": [
{
"start": 236,
"end": 259,
"text": "(Church and Patil 1982)",
"ref_id": "BIBREF0"
},
{
"start": 308,
"end": 320,
"text": "(Knuth 1975)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "We have developed a parsing method in which an intermediate parsing result is represented as a data structure called a constraint network. Every solution that satisfies all the constraints simultaneously corresponds to an individual parse tree. No explicit parse trees are generated until ultimately necessary. Parsing and successive disambiguation are performed by adding new constraints to the constraint network. Newly added constraints are efficiently propagated over the network by Constraint Propagation (Waltz 1975 , Montanari 1976 to remove inconsistent values.",
"cite_spans": [
{
"start": 510,
"end": 521,
"text": "(Waltz 1975",
"ref_id": "BIBREF13"
},
{
"start": 522,
"end": 538,
"text": ", Montanari 1976",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "In this paper, we present the basic ideas of a formal grammatical theory called Constraint Dependency Grammar (CDG for short) that makes this parsing technique possible. CDG has a reasonable time bound in its parsing, while its weak generative capacity is strictly greater than that of Context Free Grammar (CFG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "We give the definition of CDG in the next section. Then, in Section 3, we describe the parsing method based on constraint propagation, using a step-bystep example. Formal properties of CDG are discussed in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "CDG: DEFINITION Let a sentence s = wlw2 ... w,, be a finite string on a finite alphabet E. Let R --{rl,r2,...,rk} be a finite set of role-iris. Suppose that each word i in a sentence s has k-different roles rl(i), r2(i) .... , rk(i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "Roles are like variables, and each role can have a pair <a, d> as its value, where the label a is a member of a finite set L = {al,a2,...,at} and the modifiee d is either 1 < d < n or a special symbol nil. An analysis of the sentence s is obtained by assigning appropriate values to the n x k roles (we can regard this situation as one in which each word has a frame with k slots, as shown in Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 401,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "An assignment A of a sentence s is a function that assigns values to the roles. Given an assignment A, the label and the modifiee of a role x are determined. We define the following four functions to represent the various aspect of the role x, assuming that x is an rj-role of the word i: \u2022 R is a finite set of role-ids.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "\u2022 L is a finite set of labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "\u2022 C is a constraint that an assignment A should satisfy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "A constraint C is a logical formula in a form lIn this paper, when referring to a word, we purposely use the position (1,2,. ..,n) of the word rather than the word itself (Wl,W2, ,--,Wn), because the same word can occur in many different positions in a sentence. For readability, however, we sometimes use the notation word~os~tion.",
"cite_spans": [
{
"start": 118,
"end": 124,
"text": "(1,2,.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "\u2022 Predicate symbols: =, <, >, and E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "\u2022 Logical connectors: &, l, \"~, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "Specifically, we call a subformula Pi a unary constraint when P.i contains only one variable, and a binary constraint when Pi contains exactly two variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "The semantics of the functions have been defined above. The semantics of the predicates and the logical connectors are defined as usual, except that comparing an expression containing nil with another value by the inequality predicates always yields the truth value false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "These conditions guarantee that, given an assignment A, it is possible to compute whether the values of xl, x2 .... , xp satisfy C in a constant time, regardless of the sentence length n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": "31"
},
{
"text": "\u2022 The degree of a grammar G is the size k of the role-id set R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\u2022 The arity of a grammar G is the number of variables p in the constraint C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "Unless otherwise stated, we deal with only arity-2 cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\u2022 A nonnull string s over the alphabet ~ is generated iff there exits an assignment A that satisfies the constraint C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\u2022 L(G) is a language generated by the grammar G iff L(G) is the set of all sentences generated by a grammar G. The formula P1 of the constraint C1 is the conjunction of the following four subformulas (an informal description is attached to each constraint):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "GI-1) word(pos(x))=D ~ ( lab(x)=DgT, word(mod(x))=N, pos(x) < rood(x) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\"A determiner (D) modifies a noun (N) on the right with the label DET.\" Role Value governor( \"al\" ) governor(\"dog2\") governor( \"runs3\" )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "<DET,2> <SUBJ,3> <R00T,nil>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "Figure 2: Assignment Satisfying (GI-1) to (G1-4) ~SUB3 (G1-2) word(pos(x))=N ~ ( lab(x)=SUBJ, word(mod(x))=V, pos(x) < mod(x) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\"A noun modifies a verb (V) on the right with the label SUBJ.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "(G1-3) word(pos(x))=V ~ ( lab(x)=ROOT, mod(x)=nil )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\"A verb modifies nothing and its label should be ROOT.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "(G1-4) (mod(x)=mod(y), lab(x)=lab(y) ) ~ x=y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "\"No two words can modify the same word with the same label.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "Analyzing a sentence with G1 means assigning a label-modifiee pair to the only role \"governor\" of each word so that the assignment satisfies (GI-1) to (G1-4) simultaneously. For example, sentence (1) is analyzed as shown in Figure 2 provided that the words \"a,\" \"dog,\" and \"runs\" are given parts-ofspeech D, N, and V, respectively (the subscript attached to the words indicates the position of the word in the sentence).",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "(1) A1 dog2 runs3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "Thus, sentence (1) is generated by the grammar G1. On the other hand, sentences (2) and 3are not generated since there are no proper assignments for such sentences.",
"cite_spans": [
{
"start": 80,
"end": 83,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "(2) A runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "(3) Dog dog runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "We can graphically represent the parsing result of sentence (1) as shown in Figure 3 if we interpret the governor role of a word as a pointer to the syntactic governor of the word. Thus, the syntactic structure produced by a CDG is usually a dependency structure (Hays 1964 ) rather than a phrase structure. CDG parsing is done by assigning values to n \u00d7 k roles, whose values are selected from a finite set L x {1,2,...,n, nil}. Therefore, CDG parsing can be viewed as a constraint satisfaction problem over a finite domain. Many interesting artificial intelligence problems, including graph coloring and scene labeling, are classified in this group of problems, and much effort has been spent on the development of efficient techniques to solve these problems. Constraint propagation (Waltz 1975 , Montanari 1976 ), sometimes called filtering, is one such technique. One advantage of the filtering algorithm is that it allows new constraints to be added easily so that a better solution can be obtained when many candidates remain. Usually, CDG parsing is done in the following three steps:",
"cite_spans": [
{
"start": 263,
"end": 273,
"text": "(Hays 1964",
"ref_id": "BIBREF1"
},
{
"start": 786,
"end": 797,
"text": "(Waltz 1975",
"ref_id": "BIBREF13"
},
{
"start": 798,
"end": 814,
"text": ", Montanari 1976",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "1. Form an initial constraint network using a \"core\" grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "2. Remove local inconsistencies by filtering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "3. If any ambiguity remains, add new constraints and go to Step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "In this section, we will show, through a step-by-step example, that the filtering algorithms can be effectively used to narrow down the structural ambiguities of CDG parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition",
"sec_num": null
},
{
"text": "We use a PP-attachment example. Consider sentence (4) . Because of the three consecutive prepositional phrases (PPs), this sentence has many structural ambiguities. To simplify tile following discussion, we treat the grammatical symbols V, NP, and PP as terminal symbols (words), since the analysis of the internal structures of such phrases is irrelevant to the point being made. The correspondence between such simplified dependency structures and the equivalent phrase structures should be clear. Formally, the input sentence that we will parse with CDG is (5).",
"cite_spans": [
{
"start": 50,
"end": 53,
"text": "(4)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Example",
"sec_num": null
},
{
"text": "First, we consider a \"core\" grammar that contains purely syntactic rules only. We define a CDG G2a =< E2, R2, L2, C2 > as follows: where the formula P2 is the conjunction of the following unary and binary constraints :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "\u2022 E2 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "(G2a-1) word(pos(x))=PP ~ (word(mod(x)) 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "{PP,NP,V}, rood(x) < pos(x) ) \"A PP modifies a PP, an NP, or a V on the left.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "(G2a-2) word(pos(x))=PP, word(rood(x)) 6 {PP,NP} lab(x)=POSTMOD",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "\"If a PP modifies a PP or an NP, its label should be POSTMOD.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "(G2a-3) word(pos(x) )=PP, word(mod(x) )=V lab(x) =LOC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "\"If a PP modifies a V, its label should be L0\u00a2.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "2In linguistics, arrows are usually drawn in the opposite direction in a dependency diagram: from a governor (modifiee) to its dependent (modifier). In this paper, however, we draw an arrow from a modifier to its modifiee in order to emphasize that this information is contained in a modifier's role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "According to the grammar G2a , sentence (5) has 14 (= Catalan(4)) different syntactic structures. We do not generate these syntactic structures one by one, since the number of the structures may grow more rapidly than exponentially when the sentence becomes long. Instead, we build a packed data structure, called a constraint network, that contains all the syntactic structures implicitly. Explicit parse trees can be generated whenever necessary, but it may take a more than exponential computation time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "Formation of initial network Figure 5 shows the initial constraint network for sentence (5) . A node in a constraint network corresponds to a role. Since each word has only one role governor in the grammar G2, the constraint network has five nodes corresponding to the five words in the sentence. In the figure, the node labeled Vl represents the governor role of the word Vl, and so on. A node is associated with a set of possible values that the role can take as its value, called a domain. The domains of the initial constraint network are computed by examining unary constraints ((G2a-1) to (G2a-5) in our example). For example, the modifiee of the role of the word Vl must be ROOT and its label must be nil according to the unary constraint (G2a-5), and therefore the domain of the corresponding node is a singleton set {<R00T,nil>). In the figure, values are abbreviated by concatenating the initial letter of the label and the modifiee, such as Rnil for <R00T,nil>, 01 for <0BJ,I>, and so on.",
"cite_spans": [
{
"start": 88,
"end": 91,
"text": "(5)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "An arc in a constraint network represents a binary constraint imposed on two roles. Each arc is associated with a two-dimensional matrix called a constraint matlqx, whose xy-elements are either 1 or 0. The rows and the columns correspond to the possible values of each of the two roles. The value 0 indicates that this particular combination of role values violates the binary constraints. A constraint matrix is calculated by generating every possible pair of values and by checking its validity according to the binary constraints. For example, the case in which governor(PP3) = <LOC,I> and governor(PP4) --<POSTMOD,2> violates the binary constraint (G2a-6), so the L1-P2 element of the constraint matrix between PPs and PPa is set to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "The reader should not confuse the undirected arcs in a constraint network with the directed modification links in a dependency diagram. An arc in a constraint network represents the existence of a binary constraint between two nodes, and has nothing to do with the modifier-modifiee relationships. The possible modification relationships are represented as the modifiee part of the domain values in a constraint network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "A constraint network contains all the information needed to produce the parsing results. No grammatical knowledge is necessary to recover parse trees from a constraint network. A simple backtrack search can generate the 14 parse trees of sentence (5) from the constraint network shown in Figure 5 at any time. Therefore, we regard a constraint network as a packed representation of parsing results.",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 296,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "(5) V1 NP2 PP3 PP4 PP5",
"sec_num": null
},
{
"text": "A constraint network is said to be arc consistent if, for any constraint matrix, there are no rows and no columns that contain only zeros. A node value corresponding to such a row or a column cannot participate in any solution, so it can be abandoned without further checking. The filtering algorithm identifies such inconsistent values and removes them from the domains. Removing a value from one domain may make another value in another domain inconsistent, so the process is propagated over the network until the network becomes arc consistent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering",
"sec_num": null
},
{
"text": "Filtering does not generate solutions, but may significantly reduce the search space. In our example, the constraint network shown in Figure 5 is already arc consistent, so nothing can be done by filtering at this point.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Filtering",
"sec_num": null
},
{
"text": "To illustrate how we can add new constraints to narrow down the ambiguity, let us introduce additional constraints (G2b-1) and (G2b-2), assuming that appropriate syntactic and/or semantic features are attached to each word and that the function /e(i) is provided to access these features. Note that these constraints are not purely syntactic. Any kind of knowledge, syntactic, semantic, or even pragmatic, can be applied in CDG parsing as long as it is expressed as a unary or binary constraint on word-to-word modifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding New Constraints",
"sec_num": null
},
{
"text": "Each value or pair of values is tested against the newly added constraints. In the network in Figure 5 , the value P3 (i.e. <POSTMOD,3>) of the node PP4 (i.e.; \"on the table (PP4)\" modifies \"on the floor (PP3)\") violates the constraint (G2b-1), so we remove P3 from the domain of PP4. Accordingly, corresponding rows and columns in the four constraint matrices adjacent to the node PP4 are removed. The binary constraint (G2b-2) affects the elements of the constraint matrices. For the matrix between the nodes PP3 and I L1 P2 P3 P4 This sets the P2-P2 element of the matrix PP3-PP4",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 102,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Adding New Constraints",
"sec_num": null
},
{
"text": "to zero. Filtering on this network again results in the network shown in Figure 8 , which is unambiguous, since every node has a singleton domain. Recovering the dependency structure (the one in Figure 4 ) from this network is straightforward.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 8",
"ref_id": null
},
{
"start": 195,
"end": 203,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Adding New Constraints",
"sec_num": null
},
{
"text": "PP4, the element in row L1 (<LOC,I>) and column L1 (<LOC, 1>) is set to zero, since both are modifications to Vl with the label LOC. Similarly, the L1-L1 elements of the matrices PP3-PP5 and PP4-PP5 are set to zero. The modified network is shown in Figure 6 , where the updated elements are indicated by asterisks.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 257,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Note that the network in Figure 6 is not arc consistent. For example, the L1 row of the matrix PP3-PP4 consists of all zero elements. The filtering algorithm identifies such locally inconsistent values and eliminates them until there are no more inconsistent values left. The resultant network is shown in Figure 7 . This network implicitly represents the remaining four parses of sentence (5). Chart (Kaplan 1973) and shared, packed forest (Tomita 1987) are packed data structures for context-free parsing. In these data structures, a substring that is recognized as a certain phrase is represented as a single edge or node regardless of how many different readings are possible for this phrase. Since the production rules are context free, it is unnecessary to check the internal structure of an edge when combining it with another edge to form a higher edge. However, this property is true only when the grammar is purely context-free. If one introduces context sensitivity by attaching augmentations and controlling the applicability of the production rules, different readings of the same string with the same nonterminal symbol have to be represented by separate edges, and this may cause a combinatorial explosion. Seo and Simmons (1988) propose a data structure called a syntactic graph as a packed representation of context-free parsing. A syntactic graph is similar to a constraint network in the sense that it is dependencyoriented (nodes are words) and that an exclusion matrix is used to represent the co-occurrence conditions between modification links. A syntactic graph is, however, built after context-free parsing and is therefore used to represent only context-free parse trees. The formal descriptive power of syntactic graphs is not known. As will be discussed in Section 4, the formal descriptive power of CDG is strictly greater than that of CFG and hence, a constraint network can represent non-context-free parse trees as well. Sugimura et al. 
(1988) propose the use of a constraint logic program for analyzing modifier-modifiee relationships of Japanese. An arbitrary logical formula can be a constraint, and a constraint solver called CIL (Mukai 1985) is responsible for solving the constraints. The generative capacity and the computational complexity of this formalism are not clear.",
"cite_spans": [
{
"start": 401,
"end": 414,
"text": "(Kaplan 1973)",
"ref_id": "BIBREF3"
},
{
"start": 441,
"end": 454,
"text": "(Tomita 1987)",
"ref_id": "BIBREF12"
},
{
"start": 1222,
"end": 1244,
"text": "Seo and Simmons (1988)",
"ref_id": "BIBREF10"
},
{
"start": 1953,
"end": 1975,
"text": "Sugimura et al. (1988)",
"ref_id": "BIBREF11"
},
{
"start": 2166,
"end": 2178,
"text": "(Mukai 1985)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 6",
"ref_id": null
},
{
"start": 306,
"end": 314,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "The above-mentioned works seem to have concentrated on the efficient representation of the output of a parsing process, and lacked the formalization of a structural disambiguation process, that is, they did not specify what kind of knowledge can be used in what way for structural disambiguation. In CDG parsing, any knowledge is applicable to a constraint network as long as it can be expressed as a constraint between two modifications, and an efficient filtering algorithm effectively uses it to reduce structural ambiguities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "Weak Generative Capacity of CDG Consider the language Lww = {wwlw E (a+b)*}, the language of strings that are obtained by concatenating the same arbitrary string over an alphabet {a,b}. Lww is known to be non-context-free (Hopcroft and Ullman 1979) , and is frequently mentioned when discussing the non-context-freeness of the \"respectively\" construct (e.g. \"A, B, and C do D, E, and F, respectively\") of various natural languages (e.g., Savitch et al. 1987) . is no context-free grammar that generates Lww, the grammar Gww =< E,L,R,C > shown in Figure 9 generates it (Maruyama 1990 ). An assignment given to a sentence \"aabaab\" is shown in Figure 10 .",
"cite_spans": [
{
"start": 222,
"end": 248,
"text": "(Hopcroft and Ullman 1979)",
"ref_id": "BIBREF2"
},
{
"start": 438,
"end": 458,
"text": "Savitch et al. 1987)",
"ref_id": "BIBREF9"
},
{
"start": 568,
"end": 582,
"text": "(Maruyama 1990",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 546,
"end": 554,
"text": "Figure 9",
"ref_id": null
},
{
"start": 641,
"end": 650,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "FORMAL PROPERTIES",
"sec_num": "4"
},
{
"text": "On the other hand, any context-free language can be generated by a degree=2 CDG. This can be proved by constructing a constraint dependency grammar GCDG from an arbitrary context-free grammar GCFG in Greibach Normal Form, and by showing that the two grammars generate exactly the same language. Since GcFc is in Greibach Normal Form, it is easy to make one-to-one correspondence between a word in a sentence and a rule application in a phrase-structure tree. The details of the proof are given in Maruyama (1990) . This, combined with the fact that Gww generates Lww, means that the weak generative capacity of CDG with degree=2 is strictly greater than that of CFG.",
"cite_spans": [
{
"start": 497,
"end": 512,
"text": "Maruyama (1990)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FORMAL PROPERTIES",
"sec_num": "4"
},
{
"text": "Let us consider a constraint dependency grammar G =< E, R, L, C > with arity=2 and degree=k. Let n be the length of the input sentence. Consider the space complexity of the constraint network first. In CDG parsing, every word has k roles, so there are n \u00d7 k nodes in total. A role can have n x l possible values, where l is the size of L, so the maximum domain size is n x l. Binary constraints may be imposed on arbitrary pairs of roles, and therefore the number of constraint matrices is at most proportional to (nk) 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity of CDG parsing",
"sec_num": null
},
{
"text": "Since the size of a constraint matrix is (nl) 2, the total space complexity of the constraint network is O(12k~n4). Since k and l are grammatical constants, it is O(n 4) for the sentence length n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity of CDG parsing",
"sec_num": null
},
{
"text": "As the initial formation of a constraint network takes a computation time proportional to the size of the constraint network, the time complexity of the initial formation of a constraint network is O(n4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity of CDG parsing",
"sec_num": null
},
{
"text": "The complexity of adding new constraints to a constraint network never exceeds the complexity of the initial formation of a constraint network, so it is also bounded by O(n4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity of CDG parsing",
"sec_num": null
},
{
"text": "The most efficient filtering algorithm developed so far runs in O(ea 2,) time, where e is the number of arcs and a is the size of the domains in a constraint network (Mohr and Henderson 1986) . Since the number of arcs is at most O((nk)2), filtering can be performed in O((nk)2(nl)2), which is O(n 4) without grammatical constants.",
"cite_spans": [
{
"start": 166,
"end": 191,
"text": "(Mohr and Henderson 1986)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity of CDG parsing",
"sec_num": null
},
{
"text": "Thus, in CDG parsing with arity 2, both the initial formation of a constraint network and filtering are bounded in O(n 4) time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational complexity of CDG parsing",
"sec_num": null
},
{
"text": "We have proposed a formal grammar that allows efficient structural disambiguation. Grammar rules are constraints on word-to-word modifications, and parsing is done by adding the constraints to a data structure called a constraint network. The initial formation of a constraint network and the filtering have a polynomial time bound whereas the weak generative capacity of CDG is strictly greater than that of CFG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "5"
},
{
"text": "CDG is actually being used for an interactive Japanese parser of a Japanese-to-English machine translation system for a newspaper domain (Maruyama et. al. 1990) . A parser for such a wide domain should make use of any kind of information available to the system, including user-supplied information. The parser treats this information as another set of unary constraints and applies it to the constraint network. 38",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "(Maruyama et. al. 1990)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Coping with syntactic ambiguity, or how to put the block in the box on the table",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Patil",
"suffix": ""
}
],
"year": 1982,
"venue": "American Journal of Computational Linguistics",
"volume": "8",
"issue": "3-4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. and Patil, R. 1982, \"Coping with syntactic ambiguity, or how to put the block in the box on the table,\" American Journal of Computational Linguistics, Vol. 8, No. 3-4.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dependency theory: a formalism and some observations",
"authors": [
{
"first": "D",
"middle": [
"E"
],
"last": "Hays",
"suffix": ""
}
],
"year": 1964,
"venue": "Language",
"volume": "40",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hays, D.E. 1964, \"Dependency theory: a for- malism and some observations,\" Language, Vol. 40.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to Automata Theory, Languages, and Computation",
"authors": [
{
"first": "J",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hopcroft, J.E. and Ullman, J.D., 1979, Intro- duction to Automata Theory, Languages, and Computation, Addison-Wesley.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A general syntactic processor",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1973,
"venue": "Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaplan, R.M. 1973, \"A general syntactic pro- cessor,\" in: Rustin, R. (ed.) Natural Language Processing, Algorithmics Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Constraint Dependency Grammar",
"authors": [
{
"first": "H",
"middle": [],
"last": "Maruyama",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maruyama, H. 1990, \"Constraint Dependency Grammar,\" TRL Research Report RT0044, IBM Research, Tokyo Research Laboratory.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An interactive Japanese parser for machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Maruyama",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ogino",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maruyama, H., Watanabe, H., and Ogino, S, 1990, \"An interactive Japanese parser for ma- chine translation,\" COLING 'gO, to appear.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Arc and path consistency revisited",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mohr",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 1986,
"venue": "Artificial Intelligence",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohr, R. and Henderson, T. 1986, \"Arc and path consistency revisited,\" Artificial Intelli- gence, Vol. 28.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Networks of constraints: Fundamental properties and applications to picture processing",
"authors": [
{
"first": "U",
"middle": [],
"last": "Montanari",
"suffix": ""
}
],
"year": 1976,
"venue": "Information Science",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Montanari, U. 1976, \"Networks of constraints: Fundamental properties and applications to pic- ture processing,\" Information Science, Vol. 7.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unification over complex indeterminates in Prolog",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mukai",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukai, K. 1985, \"Unification over complex inde- terminates in Prolog,\" ICOT Technical Report TR-113.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Formal Complexity of Natural Language",
"authors": [
{
"first": "W",
"middle": [
"J"
],
"last": "Savitch",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Savitch, W.J. et al. (eds.) 1987, The Formal Complexity of Natural Language, Reidel.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syntactic graphs: a representation for the union of all ambiguous parse trees",
"authors": [
{
"first": "J",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Simmons",
"suffix": ""
}
],
"year": 1988,
"venue": "Computational Linguis. tics",
"volume": "15",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seo, J. and Simmons, R. 1988, \"Syntactic graphs: a representation for the union of all ambiguous parse trees,\" Computational Linguis. tics, Vol. 15, No. 7.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Constraint analysis on Japanese modification",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sugimura",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Miyoshi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mukai",
"suffix": ""
}
],
"year": 1988,
"venue": "Natural Language Understanding and Logic Programming",
"volume": "II",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sugimura, R., Miyoshi, H., and Mukai, K. 1988, \"Constraint analysis on Japanese modification,\" in: Dahl, V. and Saint-Dizier, P. (eds.) Natu- ral Language Understanding and Logic Program- ming, II, Elsevier.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An efficient augmentedcontext-free parsing algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1987,
"venue": "Computational Linguistics",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita, M. 1987, \"An efficient augmented- context-free parsing algorithm,\" Computational Linguistics, Vol. 13.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Understanding line drawings of scenes with shadows",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Waltz",
"suffix": ""
}
],
"year": 1975,
"venue": "The Psychology of Computer Vision",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waltz, D.L. 1975, \"Understanding line draw- ings of scenes with shadows,\" in: Winston, P.H. (ed.): The Psychology of Computer Vision, McGraw-Hill.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Words and their roles.\u2022 pos(x)~ f the position i \u2022 rid(x)~ r the role id rj \u2022 lab(x)d-~ f the label of x \u2022 mod(x)d-~ f the modifiee of x We also define word(i) as the terminal symbol occurring at the position i. 1 An individual grammar G =< ~, R, L, C > in the CDG theory determines a set of possible assignments of a given sentence, where \u2022 ~ is a finite set of terminal symbols."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Let us consider G1 =< E1,R1,L1,C1 > where \u2022 = \u2022 R1 = {governor} \u2022 nl = {DET,SUBJ,ROOT} \u2022 C1 = Vxy : role; P1."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 3: Dependency tree"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Possible dependency structure One of the possible syntactic structures is shown inFigure 42."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Initial constraint network (the values Rnil, L1, P2, ... should be read as <ROOT,nil>, <LOC,I>, <POSTMOD,2>, ..., and so on.) (G2a-4) word(pos(x))=NP =~ ( word(mod(x))=V, lab(x)=OBJ, mod(x) < pos(x) ) \"An NP modifies a V on the left with the label OBJ.\" (G2a-5) word(pos(x))=V ~ ( mod(x)=nil, lab(x)=KOOT ) \"A Y modifies nothing with the label ROOT.\" (G2a-6) mod(x) < pos(y) < pos(x) =~ mod(x) < mod(y) < pos(x) \"Modification links do not cross each other.\""
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "word(pos(x))=PP, on_table E ]e(pos(x)) ~(:floor e /e(mM(x)) ) \"A floor is not on a table.\" (G2b-2) lab(x)=LOC, lab(y)=LOC, mod(x)=mod(y), ward(mod(x) )--V ~ x=y \"No verb can take two locatives.\""
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Unambiguous parsing result Filtered network Since the sentence is still ambiguous, let us consider another constraint. (G2c-1) Iab(x)=POSTMOD, lab(y)=POSTMOD, mod(x)=mod(y), on e fe(po~(x)), on e fe(pos(y)) ~ x=y\"No object can be on two distinct objects.\""
},
"FIGREF8": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Several researchers have proposed variant data structures for representing a set of syntactic structures."
},
"FIGREF9": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "partner} C = conjunction of the following subformulas: \u2022 (word(pos(x))=a ~ word(mod(x))=a) & (word(pos(x))=b ~ word(mod(x))=b) \u2022 mod(x) = pos(y) ~ rood(y) = pos(x) \u2022 rood(x) \u00a2 pos(x) & rood(x) \u2022 nil \u2022 pos(x) < pos(y) < mod(y) pos(x) < mod(x) < mod(y) \u2022 rood(y) < pos(y) < pos(x) mod(y) < mod(x) < pos(x) Definition of Gww ~aa a b Assignment for a sentence of Lww"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Put the block on the floor on the table in the room. ._t the block on the floor on the table in the room",
"content": "<table><tr><td>PuV,</td><td>NI~</td><td>PP3</td><td>PP4</td><td>PPs</td></tr><tr><td/><td/><td/><td>~'rMO0</td><td/></tr></table>",
"type_str": "table"
}
}
}
}