{ "paper_id": "P96-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:02:36.504202Z" }, "title": "A New Statistical Parser Based on Bigram Lexical Dependencies", "authors": [ { "first": "Michael", "middle": [ "John" ], "last": "Collins", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania Philadelphia", "location": { "postCode": "19104", "region": "PA", "country": "U.S.A" } }, "email": "mcollins@gradient@edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a new statistical parser which is based on probabilities of dependencies between head-words in the parse tree. Standard bigram probability estimation techniques are extended to calculate probabilities of dependencies between pairs of words. Tests using Wall Street Journal data show that the method performs at least as well as SPATTER (Magerman 95; Jelinek et al. 94), which has the best published results for a statistical parser on this task. The simplicity of the approach means the model trains on 40,000 sentences in under 15 minutes. With a beam search strategy parsing speed can be improved to over 200 sentences a minute with negligible loss in accuracy.", "pdf_parse": { "paper_id": "P96-1025", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a new statistical parser which is based on probabilities of dependencies between head-words in the parse tree. Standard bigram probability estimation techniques are extended to calculate probabilities of dependencies between pairs of words. Tests using Wall Street Journal data show that the method performs at least as well as SPATTER (Magerman 95; Jelinek et al. 94), which has the best published results for a statistical parser on this task. The simplicity of the approach means the model trains on 40,000 sentences in under 15 minutes. 
With a beam search strategy parsing speed can be improved to over 200 sentences a minute with negligible loss in accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Lexical information has been shown to be crucial for many parsing decisions, such as prepositional-phrase attachment (for example (Hindle and Rooth 93)). However, early approaches to probabilistic parsing (Pereira and Schabes 92; Magerman and Marcus 91; Briscoe and Carroll 93) conditioned probabilities on non-terminal labels and part-of-speech tags alone. The SPATTER parser (Magerman 95; Jelinek et al. 94) does use lexical information, and recovers labeled constituents in Wall Street Journal text with above 84% accuracy - as far as we know the best published results on this task. This paper describes a new parser which is much simpler than SPATTER, yet performs at least as well when trained and tested on the same Wall Street Journal data. The method uses lexical information directly by modeling head-modifier 1 relations between pairs of words. In this way it is similar to *This research was supported by ARPA Grant N6600194-C6043.", "cite_spans": [ { "start": 130, "end": 151, "text": "(Hindle and Rooth 93)", "ref_id": null }, { "start": 378, "end": 391, "text": "(Magerman 95;", "ref_id": null }, { "start": 392, "end": 392, "text": "", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1By 'modifier' we mean the linguistic notion of either an argument or adjunct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Link grammars (Lafferty et al. 92), and dependency grammars in general.", "cite_spans": [ { "start": 14, "end": 34, "text": "(Lafferty et al. 
92)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aim of a parser is to take a tagged sentence as input (for example Figure 1(a)) and produce a phrase-structure tree as output (Figure 1(b)). A statistical approach to this problem consists of two components. First, the statistical model assigns a probability to every candidate parse tree for a sentence. Formally, given a sentence S and a tree T, the model estimates the conditional probability P(T|S).", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 142, "text": "(Figure 1(b)", "ref_id": null } ], "eq_spans": [], "section": "The Statistical Model", "sec_num": "2" }, { "text": "The most likely parse under the model is then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Model", "sec_num": "2" }, { "text": "T_best = argmax_T P(T|S) (1) Second, the parser is a method for finding T_best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Model", "sec_num": "2" }, { "text": "This section describes the statistical model, while section 3 describes the parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Model", "sec_num": "2" }, { "text": "The key to the statistical model is that any tree such as Figure 1(b) can be represented as a set of baseNPs 2 and a set of dependencies as in Figure 1(c). We call the set of baseNPs B, and the set of dependencies D; Figure 1", "cite_spans": [ { "start": 143, "end": 154, "text": "Figure 1(c)", "ref_id": null } ], "ref_spans": [ { "start": 218, "end": 226, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Statistical Model", "sec_num": "2" }, { "text": "S is the sentence with words tagged for part of speech. That is, S = <(w1,t1), (w2,t2)... 
(wn,tn)>.", "cite_spans": [ { "start": 90, "end": 97, "text": "(wn,tn)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "P(T|S) = P(B,D|S) = P(B|S) x P(D|S,B) (2)", "sec_num": null }, { "text": "For POS tagging we use a maximum-entropy tagger described in (Ratnaparkhi 96). The tagger performs at around 97% accuracy on Wall Street Journal text, and is trained on the first 40,000 sentences of the Penn Treebank (Marcus et al. 93).", "cite_spans": [ { "start": 61, "end": 77, "text": "(Ratnaparkhi 96)", "ref_id": null }, { "start": 218, "end": 236, "text": "(Marcus et al. 93)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "P(T|S) = P(B,D|S) = P(B|S) x P(D|S,B) (2)", "sec_num": null }, { "text": "Given S and B, the reduced sentence S̄ is defined as the subsequence of S which is formed by removing punctuation and reducing all baseNPs to their head-word alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P(T|S) = P(B,D|S) = P(B|S) x P(D|S,B) (2)", "sec_num": null }, { "text": "2A baseNP or 'minimal' NP is a non-recursive NP, i.e. none of its child constituents are NPs. The term was first used in (Ramshaw and Marcus 95).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P(T|S) = P(B,D|S) = P(B|S) x P(D|S,B) (2)", "sec_num": null }, { "text": "John/NNP Smith/NNP, the/DT president/NN of/IN IBM/NNP, announced/VBD his/PRP$ resignation/NN yesterday/NN . Figure 1: An overview of the representation used by the model. (a) The tagged sentence; (b) A candidate parse-tree (the correct one); (c) A dependency representation of (b). Square brackets enclose baseNPs (heads of baseNPs are marked in bold). Arrows show modifier → head dependencies. Section 2.1 describes how arrows are labeled with non-terminal triples from the parse-tree. 
Non-head words within baseNPs are excluded from the dependency structure; (d) B, the set of baseNPs, and D, the set of dependencies, are extracted from (c).", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 140, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "Thus the reduced sentence is an array of word/tag pairs, S̄ = <(w̄1,t̄1), (w̄2,t̄2) ... (w̄m,t̄m)>, where m ≤ n. For example, for Figure 1(a): Example 1 S̄ = <(Smith, NNP), (president, NN), (of, IN), (IBM, NNP), (announced, VBD), (resignation, NN), (yesterday, NN)> Sections 2.1 to 2.4 describe the dependency model. Section 2.5 then describes the baseNP model, which uses bigram tagging techniques similar to (Ramshaw and Marcus 95; Church 88).", "cite_spans": [ { "start": 164, "end": 179, "text": "(president, NN)", "ref_id": null }, { "start": 182, "end": 190, "text": "(of, IN)", "ref_id": null }, { "start": 193, "end": 203, "text": "(IBM, NNP)", "ref_id": null }, { "start": 206, "end": 259, "text": "(announced, VBD), (resignation, NN), (yesterday, NN", "ref_id": null }, { "start": 408, "end": 431, "text": "(Ramshaw and Marcus 95;", "ref_id": null }, { "start": 432, "end": 441, "text": "Church 88", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "The dependency model is limited to relationships between words in reduced sentences such as Example 1. The mapping from trees to dependency structures is central to the dependency model. It is defined in two steps: 1. For each constituent P → <C1...Cn> in the parse tree a simple set of rules 3 identifies which of the children Ci is the 'head-child' of P. For example, NN would be identified as the head-child of an NP, and VP would be identified as the head-child of an S. Head-words propagate up through the tree, each parent receiving its head-word from its head-child. 
For example, S gets its head-word, announced, from its head-child, the VP. 3The rules are essentially the same as in (Magerman 95; Jelinek et al. 94). These rules are also used to find the head-word of baseNPs, enabling the mapping from S and B to S̄. 2. Head-modifier relationships are now extracted from the tree in Figure 2. Figure 3 illustrates how each constituent contributes a set of dependency relationships. VBD is identified as the head-child of the VP. The head-words of the two NPs, resignation and yesterday, both modify the head-word of the VBD, announced. Dependencies are labeled by the modifier non-terminal, NP in both of these cases, the parent non-terminal, VP, and finally the head-child non-terminal, VBD. The triple of non-terminals at the start, middle and end of the arrow specifies the nature of the dependency relationship - one such triple represents a subject-verb dependency, another denotes prepositional-phrase modification of an NP, and so on 4. Each word in the reduced sentence, with the exception of the sentential head 'announced', modifies exactly one other word. We use the notation AF(j) = (hj, Rj) (3) to state that the jth word in the reduced sentence is a modifier to the hjth word, with relationship Rj 5. AF stands for 'arrow from'. Rj is the triple of labels at the start, middle and end of the arrow. For example, w1 = Smith in this sentence, 4The triple can also be viewed as representing a semantic predicate-argument relationship, with the three elements being the type of the argument, result and functor respectively. This is particularly apparent in Categorial Grammar formalisms (Wood 93), which make an explicit link between dependencies and functional application.", "cite_spans": [ { "start": 716, "end": 729, "text": "(Magerman 95;", "ref_id": null }, { "start": 730, "end": 748, "text": "Jelinek et al. 
94)", "ref_id": null }, { "start": 1773, "end": 1781, "text": "(hj, Rj)", "ref_id": null }, { "start": 2276, "end": 2285, "text": "(Wood 93)", "ref_id": null } ], "ref_spans": [ { "start": 946, "end": 954, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 957, "end": 965, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "The Mapping from Trees to Sets of Dependencies", "sec_num": "2.1" }, { "text": "5For the head-word of the entire sentence hj = 0, with Rj=